Hi guys,
I'm having trouble understanding some of the theoretical notation used in the markdown for the backprop pen-and-paper task. My expectations about notation come from the "Deep Learning" book by Ian Goodfellow. In the notebook, I found this notation:
\( \delta_o = \frac{\partial E}{\partial o_1} \cdot \frac{\partial o_1}{\partial z_o} \)
As far as I understand from the context, this is the gradient with respect to \( z_o \), which is the input (pre-activation) of the output neuron. If it is a gradient, I would expect the variable itself to appear in the notation, i.e. \( \delta_{z_o} \).
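If I apply the chain rule, as I understand it this collapses to \( \delta_o = \frac{\partial E}{\partial o_1} \cdot \frac{\partial o_1}{\partial z_o} = \frac{\partial E}{\partial z_o} \), which is why I expected the subscript to name \( z_o \) rather than \( o \) (please correct me if I am misreading this).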
We then see the same style of notation used for the gradient with respect to the inputs of the hidden neurons:
\( \delta_i = \delta_o \cdot w_{oi} \cdot h_i \cdot (1 - h_i) \)
But then in the task we see notations like \( \delta_{h_1} \) and \( \delta_{h_2} \). Could you please clarify whether these are gradients with respect to the hidden neurons' outputs, or something else? It is unclear to me.
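To make my current reading concrete, here is a minimal NumPy sketch of how I interpret these deltas, assuming a tiny network with two sigmoid hidden units, a linear output, and squared error (all variable names here are my own, not from the notebook):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy forward pass: 1 input -> 2 sigmoid hidden units -> 1 linear output.
x = np.array([0.5])              # single input
W_h = np.array([[0.1], [-0.3]])  # hidden weights, shape (2, 1)
w_o = np.array([0.7, 0.2])       # output weights, one per hidden unit
t = 1.0                          # target

z_h = W_h @ x        # hidden pre-activations z_h1, z_h2
h = sigmoid(z_h)     # hidden outputs h_1, h_2
z_o = w_o @ h        # output pre-activation
o_1 = z_o            # linear output, so o_1 = z_o
E = 0.5 * (o_1 - t) ** 2  # squared error

# delta_o = dE/do_1 * do_1/dz_o, i.e. the gradient w.r.t. z_o
delta_o = (o_1 - t) * 1.0

# delta_i = delta_o * w_oi * h_i * (1 - h_i): as I read it, this is the
# gradient w.r.t. z_h[i], NOT w.r.t. h_i itself, which is exactly the
# point I am unsure about.
delta_h = delta_o * w_o * h * (1 - h)

print(delta_o, delta_h)
```

Under this reading, \( \delta_{h_1} \) and \( \delta_{h_2} \) would correspond to delta_h above, i.e. gradients with respect to the hidden pre-activations rather than the hidden outputs. Is that the intended meaning?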