This article is reposted from https://mattmazur.com/2015/03/17/a-step-by-step-backpropagation-example/

Background

Backpropagation is a common method for training a neural network. There is no shortage of papers online that attempt to explain how backpropagation works, but few that include an example with actual numbers. This post is my attempt to explain how it works with a concrete example that folks can compare their own calculations to in order to ensure they understand backpropagation correctly.

If this kind of thing interests you, you should sign up for my newsletter where I post about AI-related projects that I’m working on.

Backpropagation in Python

You can play around with a Python script that I wrote that implements the backpropagation algorithm in this GitHub repo.

Backpropagation Visualization

For an interactive visualization showing a neural network as it learns, check out my Neural Network visualization.

Additional Resources

If you find this tutorial useful and want to continue learning about neural networks and their applications, I highly recommend checking out Adrian Rosebrock’s excellent tutorial on Getting Started with Deep Learning and Python.

Overview

For this tutorial, we’re going to use a neural network with two inputs, two hidden neurons, and two output neurons. Additionally, the hidden and output neurons will include a bias.

Here’s the basic structure:

[Figure: a two-layer network with inputs $i_1, i_2$, hidden neurons $h_1, h_2$, output neurons $o_1, o_2$, and biases $b_1, b_2$]

In order to have some numbers to work with, here are the initial weights, the biases, and training inputs/outputs:

[Figure: initial values — inputs $i_1 = 0.05$, $i_2 = 0.10$; weights $w_1 = 0.15$, $w_2 = 0.20$, $w_3 = 0.25$, $w_4 = 0.30$, $w_5 = 0.40$, $w_6 = 0.45$, $w_7 = 0.50$, $w_8 = 0.55$; biases $b_1 = 0.35$, $b_2 = 0.60$; target outputs $0.01$ and $0.99$]

The goal of backpropagation is to optimize the weights so that the neural network can learn how to correctly map arbitrary inputs to outputs.

For the rest of this tutorial we’re going to work with a single training set: given inputs 0.05 and 0.10, we want the neural network to output 0.01 and 0.99.

The Forward Pass

To begin, let’s see what the neural network currently predicts given the weights and biases above and inputs of 0.05 and 0.10. To do this we’ll feed those inputs forward through the network.

We figure out the total net input to each hidden layer neuron, squash the total net input using an activation function (here we use the logistic function), then repeat the process with the output layer neurons.

Total net input is also referred to as just net input by some sources.

Here’s how we calculate the total net input for $h_1$:

$net_{h1} = w_1 \cdot i_1 + w_2 \cdot i_2 + b_1 \cdot 1$

$net_{h1} = 0.15 \cdot 0.05 + 0.2 \cdot 0.1 + 0.35 \cdot 1 = 0.3775$

We then squash it using the logistic function to get the output of $h_1$:

$out_{h1} = \frac{1}{1+e^{-net_{h1}}} = \frac{1}{1+e^{-0.3775}} = 0.593269992$

Carrying out the same process for $h_2$ we get:

$out_{h2} = 0.596884378$

We repeat this process for the output layer neurons, using the output from the hidden layer neurons as inputs.

Here’s the output for $o_1$:

$net_{o1} = w_5 \cdot out_{h1} + w_6 \cdot out_{h2} + b_2 \cdot 1$

$net_{o1} = 0.4 \cdot 0.593269992 + 0.45 \cdot 0.596884378 + 0.6 \cdot 1 = 1.105905967$

$out_{o1} = \frac{1}{1+e^{-net_{o1}}} = \frac{1}{1+e^{-1.105905967}} = 0.75136507$

And carrying out the same process for $o_2$ we get:

$out_{o2} = 0.772928465$
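As a sanity check, here’s a minimal Python sketch of the forward pass (separate from the script in the GitHub repo linked above; the variable names simply mirror the walkthrough). It reproduces the four activations computed by hand:

import math

def sigmoid(x):
    # The logistic activation function used throughout this example
    return 1 / (1 + math.exp(-x))

# Inputs, weights, and biases from the figure above
i1, i2 = 0.05, 0.10
w1, w2, w3, w4 = 0.15, 0.20, 0.25, 0.30
w5, w6, w7, w8 = 0.40, 0.45, 0.50, 0.55
b1, b2 = 0.35, 0.60

# Hidden layer
out_h1 = sigmoid(w1 * i1 + w2 * i2 + b1 * 1)  # 0.593269992
out_h2 = sigmoid(w3 * i1 + w4 * i2 + b1 * 1)  # 0.596884378

# Output layer, fed by the hidden-layer outputs
out_o1 = sigmoid(w5 * out_h1 + w6 * out_h2 + b2 * 1)  # 0.75136507
out_o2 = sigmoid(w7 * out_h1 + w8 * out_h2 + b2 * 1)  # 0.772928465

print(out_h1, out_h2, out_o1, out_o2)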

Calculating the Total Error

We can now calculate the error for each output neuron using the squared error function and sum them to get the total error:

$E_{total} = \sum \frac{1}{2}(target - output)^2$

Some sources refer to the target as the ideal and the output as the actual.
The $\frac{1}{2}$ is included so that the exponent is cancelled when we differentiate later on. The result is eventually multiplied by a learning rate anyway, so it doesn’t matter that we introduce a constant here [1].

For example, the target output for $o_1$ is 0.01 but the neural network output 0.75136507, therefore its error is:

$E_{o1} = \frac{1}{2}(target_{o1} - out_{o1})^2 = \frac{1}{2}(0.01 - 0.75136507)^2 = 0.274811083$

Repeating this process for $o_2$ (remembering that the target is 0.99) we get:

$E_{o2} = 0.023560026$

The total error for the neural network is the sum of these errors:

$E_{total} = E_{o1} + E_{o2} = 0.274811083 + 0.023560026 = 0.298371109$
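The error calculation is just as short. A sketch, reusing the outputs from the forward pass above:

# Outputs from the forward pass, and the targets
out_o1, out_o2 = 0.75136507, 0.772928465
target_o1, target_o2 = 0.01, 0.99

# Squared error for each output neuron, summed to get the total error
E_o1 = 0.5 * (target_o1 - out_o1) ** 2  # 0.274811083
E_o2 = 0.5 * (target_o2 - out_o2) ** 2  # 0.023560026
E_total = E_o1 + E_o2                   # 0.298371109
print(E_total)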

The Backwards Pass

Our goal with backpropagation is to update each of the weights in the network so that they cause the actual output to be closer to the target output, thereby minimizing the error for each output neuron and the network as a whole.

Output Layer

Consider $w_5$. We want to know how much a change in $w_5$ affects the total error, aka $\frac{\partial E_{total}}{\partial w_5}$.

$\frac{\partial E_{total}}{\partial w_5}$ is read as “the partial derivative of $E_{total}$ with respect to $w_5$”. You can also say “the gradient with respect to $w_5$”.

By applying the chain rule we know that:

$\frac{\partial E_{total}}{\partial w_5} = \frac{\partial E_{total}}{\partial out_{o1}} \cdot \frac{\partial out_{o1}}{\partial net_{o1}} \cdot \frac{\partial net_{o1}}{\partial w_5}$

Visually, here’s what we’re doing:

[Figure: the chain rule path from $w_5$ through $net_{o1}$ and $out_{o1}$ to $E_{total}$]

We need to figure out each piece in this equation.

First, how much does the total error change with respect to the output?

$E_{total} = \frac{1}{2}(target_{o1} - out_{o1})^2 + \frac{1}{2}(target_{o2} - out_{o2})^2$

$\frac{\partial E_{total}}{\partial out_{o1}} = 2 \cdot \frac{1}{2}(target_{o1} - out_{o1})^{2-1} \cdot (-1) + 0$

$\frac{\partial E_{total}}{\partial out_{o1}} = -(target_{o1} - out_{o1}) = -(0.01 - 0.75136507) = 0.74136507$

$-(target_{o1} - out_{o1})$ is sometimes expressed as $out_{o1} - target_{o1}$.
When we take the partial derivative of the total error with respect to $out_{o1}$, the quantity $\frac{1}{2}(target_{o2} - out_{o2})^2$ becomes zero because $out_{o1}$ does not affect it, which means we’re taking the derivative of a constant, which is zero.

Next, how much does the output of $o_1$ change with respect to its total net input?

The partial derivative of the logistic function is the output multiplied by 1 minus the output:

$out_{o1} = \frac{1}{1+e^{-net_{o1}}}$

$\frac{\partial out_{o1}}{\partial net_{o1}} = out_{o1}(1 - out_{o1}) = 0.75136507(1 - 0.75136507) = 0.186815602$

Finally, how much does the total net input of $o_1$ change with respect to $w_5$?

$net_{o1} = w_5 \cdot out_{h1} + w_6 \cdot out_{h2} + b_2 \cdot 1$

$\frac{\partial net_{o1}}{\partial w_5} = 1 \cdot out_{h1} \cdot w_5^{(1-1)} + 0 + 0 = out_{h1} = 0.593269992$

Putting it all together:

$\frac{\partial E_{total}}{\partial w_5} = \frac{\partial E_{total}}{\partial out_{o1}} \cdot \frac{\partial out_{o1}}{\partial net_{o1}} \cdot \frac{\partial net_{o1}}{\partial w_5}$

$\frac{\partial E_{total}}{\partial w_5} = 0.74136507 \cdot 0.186815602 \cdot 0.593269992 = 0.082167041$
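In code, the three factors line up one-to-one with the chain rule. A sketch using the values computed above:

# Values carried over from the forward pass
target_o1, out_o1, out_h1 = 0.01, 0.75136507, 0.593269992

dE_dout_o1   = -(target_o1 - out_o1)  # 0.74136507
dout_dnet_o1 = out_o1 * (1 - out_o1)  # 0.186815602
dnet_o1_dw5  = out_h1                 # 0.593269992

# Chain rule: multiply the three partial derivatives together
dE_dw5 = dE_dout_o1 * dout_dnet_o1 * dnet_o1_dw5  # 0.082167041
print(dE_dw5)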

You’ll often see this calculation combined in the form of the delta rule:

$\frac{\partial E_{total}}{\partial w_5} = -(target_{o1} - out_{o1}) \cdot out_{o1}(1 - out_{o1}) \cdot out_{h1}$

Alternatively, we have $\frac{\partial E_{total}}{\partial out_{o1}}$ and $\frac{\partial out_{o1}}{\partial net_{o1}}$ which can be written as $\frac{\partial E_{total}}{\partial net_{o1}}$, aka $\delta_{o1}$ (the Greek letter delta) aka the node delta. We can use this to rewrite the calculation above:

$\delta_{o1} = \frac{\partial E_{total}}{\partial out_{o1}} \cdot \frac{\partial out_{o1}}{\partial net_{o1}} = \frac{\partial E_{total}}{\partial net_{o1}}$

$\delta_{o1} = -(target_{o1} - out_{o1}) \cdot out_{o1}(1 - out_{o1})$

Therefore:

$\frac{\partial E_{total}}{\partial w_5} = \delta_{o1} \cdot out_{h1}$

Some sources extract the negative sign from $\delta$ so it would be written as:

$\frac{\partial E_{total}}{\partial w_5} = -\delta_{o1} \cdot out_{h1}$

/* The gradient of each weight equals the output of the previous-layer node it connects from (here, $out_{h1}$), multiplied by the back-propagated delta of the node it connects to. */
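In delta form the same gradient is a one-liner. A quick sketch confirming the two formulations agree:

# Node delta for o1: -(target - out) * out * (1 - out)
delta_o1 = -(0.01 - 0.75136507) * 0.75136507 * (1 - 0.75136507)  # 0.138498562

# Gradient for w5 is the node delta times the hidden output feeding that weight
dE_dw5 = delta_o1 * 0.593269992  # 0.082167041, same as the long form
print(dE_dw5)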

To decrease the error, we then subtract this value from the current weight (optionally multiplied by some learning rate, eta, which we’ll set to 0.5):

$w_5^+ = w_5 - \eta \cdot \frac{\partial E_{total}}{\partial w_5} = 0.4 - 0.5 \cdot 0.082167041 = 0.35891648$

Some sources use $\alpha$ (alpha) to represent the learning rate, others use $\eta$ (eta), and others even use $\epsilon$ (epsilon).

We can repeat this process to get the new weights $w_6$, $w_7$, and $w_8$:

$w_6^+ = 0.408666186$

$w_7^+ = 0.511301270$

$w_8^+ = 0.561370121$
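Here’s a sketch of all four output-layer updates; $\delta_{o2}$ isn’t spelled out in the walkthrough, but it’s computed exactly like $\delta_{o1}$, using $o_2$’s target and output:

eta = 0.5  # learning rate

# Node deltas for both output neurons
delta_o1 = -(0.01 - 0.75136507) * 0.75136507 * (1 - 0.75136507)     # 0.138498562
delta_o2 = -(0.99 - 0.772928465) * 0.772928465 * (1 - 0.772928465)  # -0.038098236
out_h1, out_h2 = 0.593269992, 0.596884378

# Each weight moves against its gradient: delta times the hidden output feeding it
w5 = 0.40 - eta * delta_o1 * out_h1  # 0.35891648
w6 = 0.45 - eta * delta_o1 * out_h2  # 0.408666186
w7 = 0.50 - eta * delta_o2 * out_h1  # 0.511301270
w8 = 0.55 - eta * delta_o2 * out_h2  # 0.561370121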

We perform the actual updates in the neural network after we have the new weights leading into the hidden layer neurons (i.e., we use the original weights, not the updated weights, when we continue the backpropagation algorithm below).

Hidden Layer

Next, we’ll continue the backwards pass by calculating new values for $w_1$, $w_2$, $w_3$, and $w_4$.

Big picture, here’s what we need to figure out:

$\frac{\partial E_{total}}{\partial w_1} = \frac{\partial E_{total}}{\partial out_{h1}} \cdot \frac{\partial out_{h1}}{\partial net_{h1}} \cdot \frac{\partial net_{h1}}{\partial w_1}$

Visually:

[Figure: the error from both $o_1$ and $o_2$ flowing back through $h_1$ to $w_1$]

We’re going to use a similar process as we did for the output layer, but slightly different to account for the fact that the output of each hidden layer neuron contributes to the output (and therefore error) of multiple output neurons. We know that $out_{h1}$ affects both $out_{o1}$ and $out_{o2}$, therefore $\frac{\partial E_{total}}{\partial out_{h1}}$ needs to take into consideration its effect on both output neurons:

$\frac{\partial E_{total}}{\partial out_{h1}} = \frac{\partial E_{o1}}{\partial out_{h1}} + \frac{\partial E_{o2}}{\partial out_{h1}}$

Starting with $\frac{\partial E_{o1}}{\partial out_{h1}}$:

$\frac{\partial E_{o1}}{\partial out_{h1}} = \frac{\partial E_{o1}}{\partial net_{o1}} \cdot \frac{\partial net_{o1}}{\partial out_{h1}}$

We can calculate $\frac{\partial E_{o1}}{\partial net_{o1}}$ using values we calculated earlier:

$\frac{\partial E_{o1}}{\partial net_{o1}} = \frac{\partial E_{o1}}{\partial out_{o1}} \cdot \frac{\partial out_{o1}}{\partial net_{o1}} = 0.74136507 \cdot 0.186815602 = 0.138498562$

And $\frac{\partial net_{o1}}{\partial out_{h1}}$ is equal to $w_5$:

$net_{o1} = w_5 \cdot out_{h1} + w_6 \cdot out_{h2} + b_2 \cdot 1$

$\frac{\partial net_{o1}}{\partial out_{h1}} = w_5 = 0.40$

Plugging them in:

$\frac{\partial E_{o1}}{\partial out_{h1}} = \frac{\partial E_{o1}}{\partial net_{o1}} \cdot \frac{\partial net_{o1}}{\partial out_{h1}} = 0.138498562 \cdot 0.40 = 0.055399425$

Following the same process for $\frac{\partial E_{o2}}{\partial out_{h1}}$, we get:

$\frac{\partial E_{o2}}{\partial out_{h1}} = -0.019049119$

Therefore:

$\frac{\partial E_{total}}{\partial out_{h1}} = \frac{\partial E_{o1}}{\partial out_{h1}} + \frac{\partial E_{o2}}{\partial out_{h1}} = 0.055399425 + (-0.019049119) = 0.036350306$
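As a sketch, the whole quantity is two multiply-adds ($\delta_{o2}$ comes from the output-layer section):

# Error flowing back into out_h1 from both output neurons
delta_o1 = 0.138498562   # dE_o1/dnet_o1, from earlier
delta_o2 = -0.038098236  # dE_o2/dnet_o2, computed the same way for o2
w5, w7 = 0.40, 0.50      # original weights from h1 to o1 and o2

dE_total_dout_h1 = delta_o1 * w5 + delta_o2 * w7  # 0.036350306
print(dE_total_dout_h1)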

Now that we have $\frac{\partial E_{total}}{\partial out_{h1}}$, we need to figure out $\frac{\partial out_{h1}}{\partial net_{h1}}$ and then $\frac{\partial net_{h1}}{\partial w}$ for each weight:

$out_{h1} = \frac{1}{1+e^{-net_{h1}}}$

$\frac{\partial out_{h1}}{\partial net_{h1}} = out_{h1}(1 - out_{h1}) = 0.593269992(1 - 0.593269992) = 0.241300709$

We calculate the partial derivative of the total net input to $h_1$ with respect to $w_1$ the same as we did for the output neuron:

$net_{h1} = w_1 \cdot i_1 + w_2 \cdot i_2 + b_1 \cdot 1$

$\frac{\partial net_{h1}}{\partial w_1} = i_1 = 0.05$

Putting it all together:

$\frac{\partial E_{total}}{\partial w_1} = \frac{\partial E_{total}}{\partial out_{h1}} \cdot \frac{\partial out_{h1}}{\partial net_{h1}} \cdot \frac{\partial net_{h1}}{\partial w_1}$

$\frac{\partial E_{total}}{\partial w_1} = 0.036350306 \cdot 0.241300709 \cdot 0.05 = 0.000438568$
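And the corresponding sketch, reusing the back-propagated error term from above:

out_h1, i1 = 0.593269992, 0.05

dE_total_dout_h1 = 0.036350306            # computed above
dout_h1_dnet_h1  = out_h1 * (1 - out_h1)  # 0.241300709
dnet_h1_dw1      = i1                     # 0.05

dE_dw1 = dE_total_dout_h1 * dout_h1_dnet_h1 * dnet_h1_dw1  # 0.000438568
print(dE_dw1)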

You might also see this written as:

$\frac{\partial E_{total}}{\partial w_1} = \left(\sum_{o} \frac{\partial E_{total}}{\partial net_{o}} \cdot \frac{\partial net_{o}}{\partial out_{h1}}\right) \cdot \frac{\partial out_{h1}}{\partial net_{h1}} \cdot \frac{\partial net_{h1}}{\partial w_1}$

$\frac{\partial E_{total}}{\partial w_1} = \left(\sum_{o} \delta_{o} \cdot w_{ho}\right) \cdot out_{h1}(1 - out_{h1}) \cdot i_1$

$\frac{\partial E_{total}}{\partial w_1} = \delta_{h1} \cdot i_1$

/* The gradient of each weight equals the output of the previous-layer node it connects from (here, $i_1$), multiplied by the back-propagated delta of the node it connects to. */

We can now update $w_1$:

$w_1^+ = w_1 - \eta \cdot \frac{\partial E_{total}}{\partial w_1} = 0.15 - 0.5 \cdot 0.000438568 = 0.149780716$

Repeating this for $w_2$, $w_3$, and $w_4$:

$w_2^+ = 0.19956143$

$w_3^+ = 0.24975114$

$w_4^+ = 0.29950229$
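A sketch of the four hidden-layer updates; the $h_2$ quantities (0.041370323 and 0.240613417) aren’t spelled out in the walkthrough, but they follow from exactly the same procedure using $w_6$, $w_8$, and $out_{h2}$:

eta = 0.5
i1, i2 = 0.05, 0.10

# Hidden-layer node deltas: back-propagated error times the logistic derivative
delta_h1 = 0.036350306 * 0.241300709  # for h1, as computed above
delta_h2 = 0.041370323 * 0.240613417  # for h2, same construction (values assumed)

w1 = 0.15 - eta * delta_h1 * i1  # 0.149780716
w2 = 0.20 - eta * delta_h1 * i2  # 0.19956143
w3 = 0.25 - eta * delta_h2 * i1  # 0.24975114
w4 = 0.30 - eta * delta_h2 * i2  # 0.29950229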

Finally, we’ve updated all of our weights! When we fed forward the 0.05 and 0.1 inputs originally, the error on the network was 0.298371109. After this first round of backpropagation, the total error is now down to 0.291027924. It might not seem like much, but after repeating this process 10,000 times, for example, the error plummets to 0.000035085. At this point, when we feed forward 0.05 and 0.1, the two output neurons generate 0.015912196 (vs 0.01 target) and 0.984065734 (vs 0.99 target).
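To reproduce that trajectory end to end, here’s a compact sketch that repeats the whole procedure in a loop. It follows the convention stated earlier: all gradients are computed from the original weights before any of them are applied, and the biases stay fixed throughout (as they do in this walkthrough):

import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

i1, i2 = 0.05, 0.10  # inputs
t1, t2 = 0.01, 0.99  # targets
w1, w2, w3, w4 = 0.15, 0.20, 0.25, 0.30
w5, w6, w7, w8 = 0.40, 0.45, 0.50, 0.55
b1, b2, eta = 0.35, 0.60, 0.5

for step in range(10000):
    # Forward pass
    out_h1 = sigmoid(w1 * i1 + w2 * i2 + b1)
    out_h2 = sigmoid(w3 * i1 + w4 * i2 + b1)
    out_o1 = sigmoid(w5 * out_h1 + w6 * out_h2 + b2)
    out_o2 = sigmoid(w7 * out_h1 + w8 * out_h2 + b2)

    # Output-layer node deltas
    d_o1 = -(t1 - out_o1) * out_o1 * (1 - out_o1)
    d_o2 = -(t2 - out_o2) * out_o2 * (1 - out_o2)

    # Hidden-layer node deltas (using the original output-layer weights)
    d_h1 = (d_o1 * w5 + d_o2 * w7) * out_h1 * (1 - out_h1)
    d_h2 = (d_o1 * w6 + d_o2 * w8) * out_h2 * (1 - out_h2)

    # Apply all updates at once; biases are left fixed as in the walkthrough
    w5, w6 = w5 - eta * d_o1 * out_h1, w6 - eta * d_o1 * out_h2
    w7, w8 = w7 - eta * d_o2 * out_h1, w8 - eta * d_o2 * out_h2
    w1, w2 = w1 - eta * d_h1 * i1, w2 - eta * d_h1 * i2
    w3, w4 = w3 - eta * d_h2 * i1, w4 - eta * d_h2 * i2

# Total error after training: roughly the 0.000035085 reported above
out_h1 = sigmoid(w1 * i1 + w2 * i2 + b1)
out_h2 = sigmoid(w3 * i1 + w4 * i2 + b1)
out_o1 = sigmoid(w5 * out_h1 + w6 * out_h2 + b2)
out_o2 = sigmoid(w7 * out_h1 + w8 * out_h2 + b2)
print(0.5 * (t1 - out_o1) ** 2 + 0.5 * (t2 - out_o2) ** 2)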

 

Summary:

1. The gradient of each weight equals the output of the previous-layer node it connects from, multiplied by the back-propagated delta of the later-layer node it connects to. This conclusion is important enough to say three times!

2. New weight = old weight $- \eta \cdot$ (gradient of the total error with respect to that weight), e.g.:

$w_5^+ = w_5 - \eta \cdot \frac{\partial E_{total}}{\partial w_5} = 0.4 - 0.5 \cdot 0.082167041 = 0.35891648$

3. Reference post: http://blog.csdn.net/zhongkejingwang/article/details/44514073
