Learn about watsonx→ https://ibm.biz/BdyEjK
Neural networks are great for predictive modeling, handling everything from stock trends to language translation. But what happens when the answer is wrong? How do they “learn” to do better? Martin Keen explains that during a process called backward propagation, the generated output is compared to the expected output, and the error contributed by each neuron (or “node”) is measured. By adjusting each node’s weights and biases, the error is reduced and the network’s overall accuracy improves.
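To make the idea concrete, here is a minimal sketch of that loop for a tiny one-hidden-layer network trained on XOR. The network size, learning rate, data, and use of sigmoid activations with a squared-error loss are illustrative assumptions for this sketch, not details taken from the video.

```python
import numpy as np

# A minimal sketch of backward propagation (assumed setup, not the video's code):
# 2 inputs -> 4 hidden sigmoid nodes -> 1 sigmoid output, squared-error loss.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy dataset: XOR inputs and their expected outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights and biases for each layer.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))

lr = 1.0  # learning rate (assumed)

for step in range(10_000):
    # Forward pass: compute the generated output.
    h = sigmoid(X @ W1 + b1)      # hidden-node activations
    out = sigmoid(h @ W2 + b2)    # network output

    # Compare generated output to expected output.
    err = out - y

    # Backward pass: measure how much error each node contributed
    # (the chain rule carries the error signal back layer by layer).
    d_out = err * out * (1 - out)          # output-node delta
    d_h = (d_out @ W2.T) * h * (1 - h)     # hidden-node deltas

    # Adjust each node's weights and biases to reduce the error.
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # outputs should approach [0, 1, 1, 0] after training
```

In practice, frameworks such as PyTorch or TensorFlow compute these gradients automatically, but the core loop is the same one Martin describes: compare, assign blame per node, adjust weights and biases, repeat.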
Get started for free on IBM Cloud → https://ibm.biz/sign-up-now
Subscribe to see more videos like this in the future → http://ibm.biz/subscribe-now