Background
Backpropagation is a common method for training a neural network. There is no shortage of papers online that attempt to explain how backpropagation works, but few that include an example with actual numbers. This post is my attempt to explain how it works with a concrete example that folks can compare their own calculations to in order to ensure they understand backpropagation correctly.

If this kind of thing interests you, you should sign up for my newsletter where I post about AI-related projects that I’m working on.
Backpropagation in Python
You can play around with a Python script that I wrote that implements the backpropagation algorithm in this Github repo.

Overview
For this tutorial, we’re going to use a neural network with two inputs, two hidden neurons, and two output neurons. Additionally, the hidden and output neurons will include a bias.

Here’s the basic structure:
![neural_network (7)](https://matthewmazur.files.wordpress.com/2018/03/neural_network-7.png?w=525)
In order to have some numbers to work with, here are the initial weights, the biases, and training inputs/outputs:
![neural_network (9)](https://matthewmazur.files.wordpress.com/2018/03/neural_network-9.png?w=525)
The goal of backpropagation is to optimize the weights so that the neural network can learn how to correctly map arbitrary inputs to outputs.
For the rest of this tutorial we’re going to work with a single training set: given inputs 0.05 and 0.10, we want the neural network to output 0.01 and 0.99.
The Forward Pass
To begin, let’s see what the neural network currently predicts given the weights and biases above and inputs of 0.05 and 0.10. To do this we’ll feed those inputs forward through the network.

We figure out the total net input to each hidden layer neuron, squash the total net input using an activation function (here we use the logistic function), then repeat the process with the output layer neurons.
Total net input is also referred to as just net input by some sources.
Here’s how we calculate the total net input for $h_1$:

$net_{h1} = w_1 * i_1 + w_2 * i_2 + b_1 * 1$

We then squash it using the logistic function to get the output of $h_1$:

$out_{h1} = \frac{1}{1 + e^{-net_{h1}}}$
Carrying out the same process for $h_2$ gives us $out_{h2}$.
We repeat this process for the output layer neurons, using the output from the hidden layer neurons as inputs.
Here’s the output for $o_1$:

$net_{o1} = w_5 * out_{h1} + w_6 * out_{h2} + b_2 * 1$

$out_{o1} = \frac{1}{1 + e^{-net_{o1}}}$
And carrying out the same process for $o_2$ gives us $out_{o2}$.
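If it helps to see the forward pass as code, here is a minimal sketch in Python. The numbering assumes $w_1$–$w_4$ feed the hidden layer and $w_5$–$w_8$ feed the output layer as in the figure above, and the weight and bias values below are placeholders that you should replace with the initial values shown there.

```python
import math

def sigmoid(x):
    # Logistic activation: squashes the net input into the range (0, 1)
    return 1 / (1 + math.exp(-x))

# Training sample used throughout this tutorial
i1, i2 = 0.05, 0.10

# Placeholder weights/biases -- substitute the initial values from the figure above
w1, w2, w3, w4 = 0.15, 0.20, 0.25, 0.30
w5, w6, w7, w8 = 0.40, 0.45, 0.50, 0.55
b1, b2 = 0.35, 0.60

# Hidden layer: total net input, then squash with the logistic function
net_h1 = w1 * i1 + w2 * i2 + b1 * 1
net_h2 = w3 * i1 + w4 * i2 + b1 * 1
out_h1 = sigmoid(net_h1)
out_h2 = sigmoid(net_h2)

# Output layer: the hidden outputs become the inputs
net_o1 = w5 * out_h1 + w6 * out_h2 + b2 * 1
net_o2 = w7 * out_h1 + w8 * out_h2 + b2 * 1
out_o1 = sigmoid(net_o1)
out_o2 = sigmoid(net_o2)

print(out_o1, out_o2)
```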
Calculating the Total Error
We can now calculate the error for each output neuron using the squared error function and sum them to get the total error:

$E_{total} = \sum \frac{1}{2}(target - output)^{2}$
Some sources refer to the target as the ideal and the output as the actual.
The $\frac{1}{2}$ is included so that the exponent is cancelled when we differentiate later on. The result is eventually multiplied by a learning rate anyway so it doesn’t matter that we introduce a constant here [1].
For example, the target output for $o_1$ is 0.01, so its error is $E_{o1} = \frac{1}{2}(target_{o1} - out_{o1})^{2}$. Repeating this process for $o_2$ (remembering that its target is 0.99) gives us $E_{o2}$.
The total error for the neural network is the sum of these errors:

$E_{total} = E_{o1} + E_{o2}$
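As a quick sketch of the error calculation in code, continuing the variable names from the forward-pass snippet above (with `out_o1` and `out_o2` standing in for the network’s outputs):

```python
def squared_error(target, output):
    # The 1/2 is there so the exponent cancels when we differentiate
    return 0.5 * (target - output) ** 2

target_o1, target_o2 = 0.01, 0.99

E_o1 = squared_error(target_o1, out_o1)
E_o2 = squared_error(target_o2, out_o2)
E_total = E_o1 + E_o2   # total error is the sum of the per-neuron errors
```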
The Backwards Pass
Our goal with backpropagation is to update each of the weights in the network so that they cause the actual output to be closer to the target output, thereby minimizing the error for each output neuron and the network as a whole.

Output Layer
Consider $w_5$. We want to know how much a change in $w_5$ affects the total error, aka $\frac{\partial E_{total}}{\partial w_{5}}$. By applying the chain rule we know that:

$\frac{\partial E_{total}}{\partial w_{5}} = \frac{\partial E_{total}}{\partial out_{o1}} * \frac{\partial out_{o1}}{\partial net_{o1}} * \frac{\partial net_{o1}}{\partial w_{5}}$

Visually, here’s what we’re doing:
![output_1_backprop (4)](https://matthewmazur.files.wordpress.com/2018/03/output_1_backprop-4.png?w=525)
We need to figure out each piece in this equation.
First, how much does the total error change with respect to the output?
When we take the partial derivative of the total error with respect to $out_{o1}$, the quantity $\frac{1}{2}(target_{o2} - out_{o2})^{2}$ becomes zero because $out_{o1}$ does not affect it, which means we’re taking the derivative of a constant, which is zero. That leaves:

$\frac{\partial E_{total}}{\partial out_{o1}} = -(target_{o1} - out_{o1})$
Next, how much does the output of $o_1$ change with respect to its total net input? The partial derivative of the logistic function is the output multiplied by 1 minus the output:

$\frac{\partial out_{o1}}{\partial net_{o1}} = out_{o1}(1 - out_{o1})$
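As a quick aside (not part of the original derivation), a finite-difference check is an easy way to convince yourself of this identity, reusing the sigmoid function from the forward-pass sketch above:

```python
# Numerically verify d(out)/d(net) = out * (1 - out) at an arbitrary net input
x = 1.2
h = 1e-6
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)   # finite-difference slope
analytic = sigmoid(x) * (1 - sigmoid(x))                # output * (1 - output)
print(numeric, analytic)                                # should agree to several decimals
```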
Finally, how much does the total net input of $o_1$ change with respect to $w_5$? Since $net_{o1} = w_5 * out_{h1} + w_6 * out_{h2} + b_2 * 1$, the partial derivative with respect to $w_5$ is simply $out_{h1}$:

$\frac{\partial net_{o1}}{\partial w_{5}} = out_{h1}$
Putting it all together, you’ll often see this calculation combined in the form of the delta rule:
$\frac{\partial E_{total}}{\partial w_{5}} = -(target_{o1} - out_{o1}) * out_{o1}(1 - out_{o1}) * out_{h1}$
Alternatively, we have $\frac{\partial E_{total}}{\partial out_{o1}}$ and $\frac{\partial out_{o1}}{\partial net_{o1}}$, which can be written as $\frac{\partial E_{total}}{\partial net_{o1}}$, aka $\delta_{o1}$ (the Greek letter delta), aka the node delta. We can use this to rewrite the calculation above:
$\delta_{o1} = \frac{\partial E_{total}}{\partial out_{o1}} * \frac{\partial out_{o1}}{\partial net_{o1}} = \frac{\partial E_{total}}{\partial net_{o1}}$
$\delta_{o1} = -(target_{o1} - out_{o1}) * out_{o1}(1 - out_{o1})$
Therefore:
$\frac{\partial E_{total}}{\partial w_{5}} = \delta_{o1} out_{h1}$
Some sources extract the negative sign from $\delta_{o1}$ so it would be written as:
$\frac{\partial E_{total}}{\partial w_{5}} = -\delta_{o1} out_{h1}$
To decrease the error, we then subtract this value from the current weight (optionally multiplied by some learning rate, eta, which we’ll set to 0.5):

$w_5^{+} = w_5 - \eta * \frac{\partial E_{total}}{\partial w_{5}}$
Some sources use $\alpha$ (alpha) to represent the learning rate, others use $\eta$ (eta), and others even use $\epsilon$ (epsilon).
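Expressed as code, here is a sketch of the output-layer update, continuing the variable names from the snippets above:

```python
eta = 0.5  # learning rate

# Node delta for o1: how the total error changes with respect to net_o1
delta_o1 = -(target_o1 - out_o1) * out_o1 * (1 - out_o1)

# Gradient for w5 via the delta rule, then the weight update
dE_dw5 = delta_o1 * out_h1
w5_new = w5 - eta * dE_dw5
```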
We can repeat this process to get the new weights $w_6^{+}$, $w_7^{+}$, and $w_8^{+}$. We perform the actual updates in the neural network after we have the new weights leading into the hidden layer neurons (i.e., we use the original weights, not the updated weights, when we continue the backpropagation algorithm below).
Hidden Layer
Next, we’ll continue the backwards pass by calculating new values for $w_1$, $w_2$, $w_3$, and $w_4$. Big picture, here’s what we need to figure out:

$\frac{\partial E_{total}}{\partial w_{1}} = \frac{\partial E_{total}}{\partial out_{h1}} * \frac{\partial out_{h1}}{\partial net_{h1}} * \frac{\partial net_{h1}}{\partial w_{1}}$
Visually:
![nn-calculation](https://matthewmazur.files.wordpress.com/2015/03/nn-calculation.png?w=525)
We’re going to use a similar process as we did for the output layer, but slightly different to account for the fact that the output of each hidden layer neuron contributes to the output (and therefore error) of multiple output neurons. We know that $out_{h1}$ affects both $out_{o1}$ and $out_{o2}$, so $\frac{\partial E_{total}}{\partial out_{h1}}$ needs to take into consideration its effect on both output neurons:

$\frac{\partial E_{total}}{\partial out_{h1}} = \frac{\partial E_{o1}}{\partial out_{h1}} + \frac{\partial E_{o2}}{\partial out_{h1}}$
Starting with $\frac{\partial E_{o1}}{\partial out_{h1}}$:

$\frac{\partial E_{o1}}{\partial out_{h1}} = \frac{\partial E_{o1}}{\partial net_{o1}} * \frac{\partial net_{o1}}{\partial out_{h1}}$
We can calculate $\frac{\partial E_{o1}}{\partial net_{o1}}$ using values we calculated earlier:

$\frac{\partial E_{o1}}{\partial net_{o1}} = \frac{\partial E_{o1}}{\partial out_{o1}} * \frac{\partial out_{o1}}{\partial net_{o1}}$
And $\frac{\partial net_{o1}}{\partial out_{h1}}$ is equal to $w_5$, since $net_{o1} = w_5 * out_{h1} + w_6 * out_{h2} + b_2 * 1$:

$\frac{\partial net_{o1}}{\partial out_{h1}} = w_5$
Plugging them in gives us $\frac{\partial E_{o1}}{\partial out_{h1}}$.
Following the same process for $\frac{\partial E_{o2}}{\partial out_{h1}}$, and adding the two terms together, we therefore have $\frac{\partial E_{total}}{\partial out_{h1}}$.
Now that we have $\frac{\partial E_{total}}{\partial out_{h1}}$, we need to figure out $\frac{\partial out_{h1}}{\partial net_{h1}}$ and then $\frac{\partial net_{h1}}{\partial w_{1}}$ for each weight. As with the output neurons, the partial derivative of the logistic output with respect to its total net input is the output multiplied by 1 minus the output:

$\frac{\partial out_{h1}}{\partial net_{h1}} = out_{h1}(1 - out_{h1})$
We calculate the partial derivative of the total net input to $h_1$ with respect to $w_1$ the same way as we did for the output neuron; since $net_{h1} = w_1 * i_1 + w_2 * i_2 + b_1 * 1$, it’s simply the input:

$\frac{\partial net_{h1}}{\partial w_{1}} = i_1$
Putting it all together:

$\frac{\partial E_{total}}{\partial w_{1}} = \frac{\partial E_{total}}{\partial out_{h1}} * \frac{\partial out_{h1}}{\partial net_{h1}} * \frac{\partial net_{h1}}{\partial w_{1}}$
You might also see this written as:
$\frac{\partial E_{total}}{\partial w_{1}} = (\sum\limits_{o}{\frac{\partial E_{total}}{\partial out_{o}} * \frac{\partial out_{o}}{\partial net_{o}} * \frac{\partial net_{o}}{\partial out_{h1}}}) * \frac{\partial out_{h1}}{\partial net_{h1}} * \frac{\partial net_{h1}}{\partial w_{1}}$
$\frac{\partial E_{total}}{\partial w_{1}} = (\sum\limits_{o}{\delta_{o} * w_{ho}}) * out_{h1}(1 - out_{h1}) * i_{1}$
$\frac{\partial E_{total}}{\partial w_{1}} = \delta_{h1} i_{1}$
We can now update $w_1$:

$w_1^{+} = w_1 - \eta * \frac{\partial E_{total}}{\partial w_{1}}$

Repeating this for $w_2$, $w_3$, and $w_4$ gives us the rest of the updated weights leading into the hidden layer.
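Here is the hidden-layer calculation as a short sketch, continuing the variable names used above; under the weight numbering assumed earlier, $w_5$ and $w_7$ are the connections from $h_1$ to $o_1$ and $o_2$ respectively:

```python
# Output-layer node deltas (as computed in the previous section)
delta_o1 = -(target_o1 - out_o1) * out_o1 * (1 - out_o1)
delta_o2 = -(target_o2 - out_o2) * out_o2 * (1 - out_o2)

# h1 feeds both output neurons, so sum each output delta weighted by its connection
dE_dout_h1 = delta_o1 * w5 + delta_o2 * w7

# Node delta for h1, the gradient for w1, and the update
delta_h1 = dE_dout_h1 * out_h1 * (1 - out_h1)
dE_dw1 = delta_h1 * i1
w1_new = w1 - eta * dE_dw1
```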
Finally, we’ve updated all of our weights! When we fed forward the 0.05 and 0.1 inputs originally, the error on the network was 0.298371109. After this first round of backpropagation, the total error is now down to 0.291027924. It might not seem like much, but after repeating this process 10,000 times, for example, the error plummets to 0.000035085. At this point, when we feed forward 0.05 and 0.1, the two output neurons generate 0.015912196 (vs 0.01 target) and 0.984065734 (vs 0.99 target).
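If you’d like to watch the error fall yourself, here is a compact, self-contained sketch of one possible implementation of the whole loop. It mirrors the steps described above rather than the exact script in the Github repo, and the initial weights are placeholders, so substitute the values from the figure to reproduce the numbers quoted here.

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

# Inputs, targets, and learning rate from this tutorial
i1, i2 = 0.05, 0.10
t1, t2 = 0.01, 0.99
eta = 0.5

# Placeholder initial weights/biases -- substitute the values from the figure
w = [0.15, 0.20, 0.25, 0.30, 0.40, 0.45, 0.50, 0.55]  # w1..w8
b1, b2 = 0.35, 0.60

for step in range(10000):
    # Forward pass
    out_h1 = sigmoid(w[0] * i1 + w[1] * i2 + b1)
    out_h2 = sigmoid(w[2] * i1 + w[3] * i2 + b1)
    out_o1 = sigmoid(w[4] * out_h1 + w[5] * out_h2 + b2)
    out_o2 = sigmoid(w[6] * out_h1 + w[7] * out_h2 + b2)

    # Output-layer node deltas
    d_o1 = -(t1 - out_o1) * out_o1 * (1 - out_o1)
    d_o2 = -(t2 - out_o2) * out_o2 * (1 - out_o2)

    # Hidden-layer node deltas (each hidden output feeds both output neurons)
    d_h1 = (d_o1 * w[4] + d_o2 * w[6]) * out_h1 * (1 - out_h1)
    d_h2 = (d_o1 * w[5] + d_o2 * w[7]) * out_h2 * (1 - out_h2)

    # Gradients for w1..w8, then the updates (all computed from the original weights)
    grads = [d_h1 * i1, d_h1 * i2, d_h2 * i1, d_h2 * i2,
             d_o1 * out_h1, d_o1 * out_h2, d_o2 * out_h1, d_o2 * out_h2]
    w = [wi - eta * g for wi, g in zip(w, grads)]

E_total = 0.5 * (t1 - out_o1) ** 2 + 0.5 * (t2 - out_o2) ** 2
print(E_total, out_o1, out_o2)
```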
If you’ve made it this far and found any errors in any of the above or can think of any ways to make it clearer for future readers, don’t hesitate to drop me a note. Thanks!