A Road of Equations to Recurrent Neural Network
This is a cheat sheet for Recurrent Neural Network learners. It covers everything from the math essentials of neural networks to several different kinds of RNN. If you work through the whole content and understand it thoroughly, you should at least be able to design your own neural network architecture and derive the forward/backward propagation equations, which are fundamental to building your own working neural network.
 Math Essentials
 Types of probability spaces
 Expected value
 Expected value in continuous spaces
 Variance
 Covariance
 Correlation
 Complement rule
 Product rule
 Rule of total probability
 Independence
 Bayes rule
 Some Linear algebra knowledge
 Vector arithmetic
 Matrix arithmetic
 Vector projection
 Linear Regression
 Nonlinear Regression
 RNN (Recurrent Neural Network)
 Thanks to
Math Essentials
Notation | Meaning
---------|--------
\(a \in A\) | set membership: a is a member of set A
\(\lvert B \rvert\) | cardinality: number of items in set B
\(\lVert \mathbf{v} \rVert\) | norm: length of vector v
\(\sum\) | summation
\(\int\) | integral
\(\mathbb{R}\) | the set of real numbers
\(\mathbb{R}^n\) | real number space of dimension n (n = 2: plane or 2-space; n = 3: 3-(dimensional) space; n > 3: n-space or hyperspace)
x, y, z, u, v | vector (bold, lower case)
A, B, X | matrix (bold, upper case)
\(y = f(x)\) | function (map): assigns unique value in range of y to each value in domain of x
\(dy/dx\) | derivative of y with respect to single variable x
\(y = f(\mathbf{x})\) | function on multiple variables, i.e. a vector of variables; function in n-space
\(\partial y / \partial x_i\) | partial derivative of y with respect to element i of vector x
\(\Omega\) | the set of possible outcomes O
\(F\) | the set of possible events E
\(P\) | the probability distribution
Types of probability spaces
Define \(\lvert \Omega \rvert\) = number of possible outcomes
Discrete space: \(\lvert \Omega \rvert\) is finite – analysis involves summations (\(\sum\))
Continuous space: \(\lvert \Omega \rvert\) is infinite – analysis involves integrals (\(\int\))
Expected value
Given:
 A discrete random variable X, with possible values \(x = x_1, x_2, \ldots, x_n\)
 Probabilities \(p(x_i)\) that X takes on the various values of \(x_i\)
 A function \(f(X)\) defined on X
The expected value of f is the probability-weighted "average" value of \(f(x)\):
\(E[f(X)] = \sum_{i=1}^{n} f(x_i)\, p(x_i)\)
Common form
\(E[X] = \mu = \sum_{i=1}^{n} x_i\, p(x_i)\)
Expected value in continuous spaces
\(E[f(X)] = \int f(x)\, p(x)\, dx\)
Variance
\(\operatorname{Var}(X) = E\big[(X - \mu)^2\big]\), where \(\mu = E[X]\)
Common form
\(\sigma^2 = \sum_{i=1}^{n} (x_i - \mu)^2\, p(x_i)\)
Covariance
\(\operatorname{Cov}(X, Y) = E\big[(X - \mu_X)(Y - \mu_Y)\big]\)
Common form
\(\sigma_{XY} = \sum_{i=1}^{n} (x_i - \mu_X)(y_i - \mu_Y)\, p(x_i, y_i)\)
Correlation
Pearson's correlation coefficient is covariance normalized by the standard deviations of the two variables:
\(\operatorname{corr}(X, Y) = \dfrac{\sigma_{XY}}{\sigma_X \sigma_Y}\)
 Always lies in the range −1 to 1
 Only reflects linear dependence between variables (see the sketch below)
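As a quick illustration, here is a minimal NumPy sketch (the values and variable names are made up purely for illustration) that evaluates the expected value, variance, covariance, and Pearson correlation of a small discrete joint distribution, following the definitions above.

```python
import numpy as np

# Possible values of two discrete random variables X and Y,
# and a joint probability table p[i, j] = P(X = x[i], Y = y[j]).
x = np.array([1.0, 2.0, 3.0])
y = np.array([10.0, 20.0])
p = np.array([[0.10, 0.20],
              [0.30, 0.10],
              [0.05, 0.25]])   # entries sum to 1

px = p.sum(axis=1)             # marginal P(X = x_i)
py = p.sum(axis=0)             # marginal P(Y = y_j)

mu_x = np.sum(x * px)          # E[X] = sum_i x_i p(x_i)
mu_y = np.sum(y * py)

var_x = np.sum((x - mu_x) ** 2 * px)       # Var(X) = E[(X - mu)^2]
var_y = np.sum((y - mu_y) ** 2 * py)

# Cov(X, Y) = sum_{i,j} (x_i - mu_X)(y_j - mu_Y) p(x_i, y_j)
cov_xy = np.sum(np.outer(x - mu_x, y - mu_y) * p)

corr_xy = cov_xy / np.sqrt(var_x * var_y)  # Pearson correlation, always in [-1, 1]

print(mu_x, var_x, cov_xy, corr_xy)
```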
Complement rule
Given: event A, which can occur or not
\(P(\text{not } A) = 1 - P(A)\)
Product rule
Given: events A and B, which can co-occur (or not)
\(P(A, B) = P(A \mid B)\, P(B)\)
Rule of total probability
Given: events A and B, which can co-occur (or not)
\(P(A) = P(A \mid B)\, P(B) + P(A \mid \text{not } B)\, P(\text{not } B)\)
Independence
Given: events A and B, which can co-occur (or not)
\(P(A, B) = P(A)\, P(B)\)
or
\(P(A \mid B) = P(A)\)
Bayes rule
A way to find conditional probabilities for one variable when conditional probabilities for another variable are known.
\(P(B \mid A) = \dfrac{P(A \mid B)\, P(B)}{P(A)}\)
posterior probability ∝ likelihood × prior probability
where
\(P(A) = P(A \mid B)\, P(B) + P(A \mid \text{not } B)\, P(\text{not } B)\)
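A tiny worked example may help; the numbers below are invented purely for illustration. Suppose B is "a message is spam" and A is "the message contains a particular word":

```python
# Hypothetical probabilities, chosen only to illustrate Bayes rule.
p_b = 0.2              # prior: P(B), e.g. P(spam)
p_a_given_b = 0.7      # likelihood: P(A | B)
p_a_given_not_b = 0.1  # P(A | not B)

# Rule of total probability: P(A) = P(A|B)P(B) + P(A|not B)P(not B)
p_a = p_a_given_b * p_b + p_a_given_not_b * (1 - p_b)

# Bayes rule: posterior P(B | A) = P(A | B) P(B) / P(A)
p_b_given_a = p_a_given_b * p_b / p_a
print(p_b_given_a)     # ≈ 0.636
```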
Some Linear algebra knowledge
Vector arithmetic
Matrix arithmetic
 Multiplication is associative: \(\mathbf{A}(\mathbf{B}\mathbf{C}) = (\mathbf{A}\mathbf{B})\mathbf{C}\)
 Multiplication is not commutative: \(\mathbf{A}\mathbf{B} \ne \mathbf{B}\mathbf{A}\) (generally)
 Transposition rule: \((\mathbf{A}\mathbf{B})^{\mathsf{T}} = \mathbf{B}^{\mathsf{T}} \mathbf{A}^{\mathsf{T}}\)
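These properties are easy to sanity-check numerically; here is a small NumPy sketch with arbitrary random matrices (shapes chosen only so the products are defined).

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 4))
C = rng.standard_normal((4, 2))

# Associative: A(BC) == (AB)C
print(np.allclose(A @ (B @ C), (A @ B) @ C))   # True

# Not commutative (generally): compare ST and TS for square matrices
S = rng.standard_normal((3, 3))
T = rng.standard_normal((3, 3))
print(np.allclose(S @ T, T @ S))                # almost surely False

# Transposition rule: (AB)^T == B^T A^T
print(np.allclose((A @ B).T, B.T @ A.T))        # True
```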
Vector projection
Orthogonal projection of y onto x is the vector
\(\operatorname{proj}_{\mathbf{x}}(\mathbf{y}) = \dfrac{\mathbf{y} \cdot \mathbf{x}}{\lVert \mathbf{x} \rVert^{2}}\, \mathbf{x}\)
(using dot product alternate form: \(\operatorname{proj}_{\mathbf{x}}(\mathbf{y}) = \lVert \mathbf{y} \rVert \cos\theta \; \dfrac{\mathbf{x}}{\lVert \mathbf{x} \rVert}\))
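A short NumPy sketch of the projection formula above, with vectors chosen arbitrarily:

```python
import numpy as np

x = np.array([3.0, 0.0])
y = np.array([2.0, 2.0])

# proj_x(y) = (y . x / ||x||^2) * x
proj = (np.dot(y, x) / np.dot(x, x)) * x
print(proj)                 # [2. 0.]

# The residual y - proj is orthogonal to x
print(np.dot(y - proj, x))  # 0.0
```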
The following regression sections are based on the Stanford Machine Learning Tutorial:
Linear Regression
TODO
Nonlinear Regression
Sigmoid Function
\(\sigma(z) = \dfrac{1}{1 + e^{-z}}\), giving the hypothesis \(h_\theta(x) = \sigma(\theta^{\mathsf{T}} x)\)
Cost function
\(J(\theta) = -\sum_{i} \Big( y^{(i)} \log h_\theta(x^{(i)}) + (1 - y^{(i)}) \log\big(1 - h_\theta(x^{(i)})\big) \Big)\)
Gradient
\(\nabla_{\theta} J(\theta) = \sum_{i} x^{(i)} \big( h_\theta(x^{(i)}) - y^{(i)} \big)\)
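A minimal NumPy sketch of the sigmoid hypothesis, the (unregularized) cost, and its gradient. The shapes and names (`X` of shape (m, n), labels `y` in {0, 1}, parameters `theta`) are assumptions made for illustration, not code from the tutorial.

```python
import numpy as np

def sigmoid(z):
    # sigma(z) = 1 / (1 + e^{-z})
    return 1.0 / (1.0 + np.exp(-z))

def logistic_cost_grad(theta, X, y):
    """Cross-entropy cost and gradient for logistic regression.

    X: (m, n) design matrix, y: (m,) labels in {0, 1}, theta: (n,) parameters.
    """
    h = sigmoid(X @ theta)                                   # h_theta(x) for every example
    cost = -np.sum(y * np.log(h) + (1 - y) * np.log(1 - h))
    grad = X.T @ (h - y)                                     # sum_i x_i (h_theta(x_i) - y_i)
    return cost, grad

# Tiny illustrative example
X = np.array([[1.0, 0.5], [1.0, -1.0], [1.0, 2.0]])  # first column acts as a bias term
y = np.array([1.0, 0.0, 1.0])
theta = np.zeros(2)
print(logistic_cost_grad(theta, X, y))
```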
Softmax Regression
Hypothesis: \(P(y^{(i)} = k \mid x^{(i)}; \theta) = \dfrac{\exp(\theta^{(k)\mathsf{T}} x^{(i)})}{\sum_{j=1}^{K} \exp(\theta^{(j)\mathsf{T}} x^{(i)})}\)
Cost function (cross-entropy)
\(J(\theta) = -\sum_{i} \sum_{k=1}^{K} 1\{ y^{(i)} = k \} \log \dfrac{\exp(\theta^{(k)\mathsf{T}} x^{(i)})}{\sum_{j=1}^{K} \exp(\theta^{(j)\mathsf{T}} x^{(i)})}\)
Gradient
\(\nabla_{\theta^{(k)}} J(\theta) = -\sum_{i} x^{(i)} \big( 1\{ y^{(i)} = k \} - P(y^{(i)} = k \mid x^{(i)}; \theta) \big)\)
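Similarly, here is a sketch of the softmax probabilities, the cross-entropy cost, and the gradient with respect to the parameter matrix. The shapes and names (`Theta` with one row per class, integer labels `y`) are assumptions chosen to mirror the equations above.

```python
import numpy as np

def softmax_cost_grad(Theta, X, y, K):
    """Cross-entropy cost and gradient for softmax regression.

    Theta: (K, n) parameters, one row per class; X: (m, n); y: (m,) labels in {0..K-1}.
    """
    scores = X @ Theta.T                                  # (m, K) class scores
    scores -= scores.max(axis=1, keepdims=True)           # subtract max for numerical stability
    exp_scores = np.exp(scores)
    probs = exp_scores / exp_scores.sum(axis=1, keepdims=True)  # P(y = k | x)

    onehot = np.eye(K)[y]                                 # indicator 1{y_i = k}
    cost = -np.sum(onehot * np.log(probs))
    grad = -(onehot - probs).T @ X                        # (K, n), matching the gradient above
    return cost, grad

X = np.array([[1.0, 0.5], [1.0, -1.0], [1.0, 2.0]])
y = np.array([0, 1, 2])
Theta = np.zeros((3, 2))
print(softmax_cost_grad(Theta, X, y, K=3))
```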
tanh
TODO
ReLU
TODO
TODO: more
RNN (Recurrent Neural Network)
LSTM
Overview of LSTM cell
Details of LSTM cell
Intuition

New memory generation: This stage is analogous to the new memory generation stage we saw in GRUs. We essentially use the input word \(x_t\) and the past hidden state \(h_{t-1}\) to generate a new memory \(\tilde{c_t}\) which includes aspects of the new word \(x_t\).

Input Gate: We see that the new memory generation stage doesn't check if the new word is even important before generating the new memory – this is exactly the input gate's function. The input gate uses the input word \(x_t\) and the past hidden state \(h_{t-1}\) to determine whether or not the input is worth preserving and thus is used to gate the new memory. It thus produces \(i_t\) as an indicator of this information.

Forget Gate: This gate is similar to the input gate except that it does not make a determination of usefulness of the input word – instead it makes an assessment on whether the past memory cell is useful for the computation of the current memory cell. Thus, the forget gate looks at the input word \(x_t\) and the past hidden state \(h_{t-1}\) and produces \(f_t\).

Final memory generation: This stage first takes the advice of the forget gate \(f_t\) and accordingly forgets the past memory \(c_{t-1}\). Similarly, it takes the advice of the input gate \(i_t\) and accordingly gates the new memory \(\tilde{c_t}\). It then sums these two results to produce the final memory \(c_t\).

Output/Exposure Gate: This is a gate that does not explicitly exist in GRUs. Its purpose is to separate the final memory from the hidden state. The final memory \(c_t\) contains a lot of information that is not necessarily required to be saved in the hidden state. Hidden states are used in every single gate of an LSTM and thus, this gate makes the assessment regarding what parts of the memory \(c_t\) need to be exposed/present in the hidden state \(h_t\). The signal it produces to indicate this is \(o_t\), and this is used to gate the pointwise tanh of the memory.
Equations
\(i_t = \sigma\big(W^{(i)} x_t + U^{(i)} h_{t-1}\big)\)  (Input gate)
\(f_t = \sigma\big(W^{(f)} x_t + U^{(f)} h_{t-1}\big)\)  (Forget gate)
\(o_t = \sigma\big(W^{(o)} x_t + U^{(o)} h_{t-1}\big)\)  (Output/Exposure gate)
\(\tilde{c_t} = \tanh\big(W^{(c)} x_t + U^{(c)} h_{t-1}\big)\)  (New memory cell)
\(c_t = f_t \circ c_{t-1} + i_t \circ \tilde{c_t}\)  (Final memory cell)
\(h_t = o_t \circ \tanh(c_t)\)  (Hidden state)
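To make the equations concrete, here is a minimal NumPy sketch of a single LSTM forward step that follows the equations above one-for-one. The weight shapes, and the omission of bias terms, are assumptions made to keep the correspondence with the notation; real implementations usually add biases and fuse the matrix multiplies.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U):
    """One LSTM time step, mirroring the equations above.

    W['i'], W['f'], W['o'], W['c']: (hidden, input) input-to-hidden weights.
    U['i'], U['f'], U['o'], U['c']: (hidden, hidden) hidden-to-hidden weights.
    Bias terms are omitted to match the equations as written.
    """
    i_t = sigmoid(W['i'] @ x_t + U['i'] @ h_prev)       # Input gate
    f_t = sigmoid(W['f'] @ x_t + U['f'] @ h_prev)       # Forget gate
    o_t = sigmoid(W['o'] @ x_t + U['o'] @ h_prev)       # Output/Exposure gate
    c_tilde = np.tanh(W['c'] @ x_t + U['c'] @ h_prev)   # New memory cell
    c_t = f_t * c_prev + i_t * c_tilde                  # Final memory cell
    h_t = o_t * np.tanh(c_t)                            # Hidden state
    return h_t, c_t

# Tiny usage example with random weights
rng = np.random.default_rng(0)
n_in, n_hid = 4, 3
W = {k: rng.standard_normal((n_hid, n_in)) * 0.1 for k in 'ifoc'}
U = {k: rng.standard_normal((n_hid, n_hid)) * 0.1 for k in 'ifoc'}
h, c = np.zeros(n_hid), np.zeros(n_hid)
h, c = lstm_step(rng.standard_normal(n_in), h, c, W, U)
print(h, c)
```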
How does LSTM backward propagation work?
Here’s a slide explaining this: LSTM Forward and Backward Pass
GRU
Overview of GRU cell
Details of GRU cell
Intuition

New memory generation: A new memory \(\tilde{h_t}\) is the consolidation of a new input word \(x_t\) with the past hidden state \(h_{t−1}\). Anthropomorphically, this stage is the one who knows the recipe of combining a newly observed word with the past hidden state \(h_{t−1}\) to summarize this new word in light of the contextual past as the vector \(\tilde{h_t}\).

Reset Gate: The reset signal \(r_t\) is responsible for determining how important \(h_{t−1}\) is to the summarization \(\tilde{h_t}\). The reset gate has the ability to completely diminish past hidden state if it finds that \(h_{t−1}\) is irrelevant to the computation of the new memory.

Update Gate: The update signal \(z_t\) is responsible for determining how much of \(h_{t−1}\) should be carried forward to the next state. For instance, if \(z_t ≈ 1\), then \(h_{t−1}\) is almost entirely copied out to \(h_t\). Conversely, if \(z_t ≈ 0\), then mostly the new memory \(\tilde{h_t}\) is forwarded to the next hidden state.

Hidden state: The hidden state \(h_t\) is finally generated using the past hidden input \(h_{t−1}\) and the new memory generated \(\tilde{h_t}\), with the advice of the update gate.
Equations
\(r_t = \sigma\big(W^{(r)} x_t + U^{(r)} h_{t-1}\big)\)  (Reset gate)
\(z_t = \sigma\big(W^{(z)} x_t + U^{(z)} h_{t-1}\big)\)  (Update gate)
\(\tilde{h_t} = \tanh\big(r_t \circ U h_{t-1} + W x_t\big)\)  (New memory)
\(h_t = (1 - z_t) \circ \tilde{h_t} + z_t \circ h_{t-1}\)  (Hidden state)
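And a matching NumPy sketch of a single GRU step. Again, bias terms are dropped and the weight shapes are assumptions chosen to mirror the equations above.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x_t, h_prev, W, U):
    """One GRU time step, mirroring the equations above.

    W['r'], W['z'], W['h']: (hidden, input) input-to-hidden weights.
    U['r'], U['z'], U['h']: (hidden, hidden) hidden-to-hidden weights.
    """
    r_t = sigmoid(W['r'] @ x_t + U['r'] @ h_prev)               # Reset gate
    z_t = sigmoid(W['z'] @ x_t + U['z'] @ h_prev)               # Update gate
    h_tilde = np.tanh(r_t * (U['h'] @ h_prev) + W['h'] @ x_t)   # New memory
    h_t = (1.0 - z_t) * h_tilde + z_t * h_prev                  # Hidden state
    return h_t

# Tiny usage example with random weights
rng = np.random.default_rng(0)
n_in, n_hid = 4, 3
W = {k: rng.standard_normal((n_hid, n_in)) * 0.1 for k in 'rzh'}
U = {k: rng.standard_normal((n_hid, n_hid)) * 0.1 for k in 'rzh'}
h = np.zeros(n_hid)
h = gru_step(rng.standard_normal(n_in), h, W, U)
print(h)
```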
TODO: more RNN architectures and math
Thanks to

Jeff Howbert for his lecture notes summarizing Machine Learning Math Essentials: University of Washington CSS490 Winter 2012 lecture slides

Computer Science Department, Stanford University for their Deep Learning Tutorial: Stanford Machine Learning Tutorial

Nal Kalchbrenner, Ivo Danihelka, Alex Graves for their paper: Grid Long Short-Term Memory

Rafal Jozefowicz, Wojciech Zaremba and Ilya Sutskever for their paper: An Empirical Exploration of Recurrent Network Architectures

Christopher Olah for his clear explanation of LSTM: Understanding LSTM Networks

Andrej Karpathy for his post: The Unreasonable Effectiveness of Recurrent Neural Networks, and his excellent char-rnn code

Arun Mallya for his explanation of how LSTM backward propagation works: LSTM Forward and Backward Pass