
CS231n Assignment 3: LSTM

  • October 6, 2019
  • Notes


0. Introduction

It has been a while since I last updated the CS231n assignment walkthroughs. I have been reviewing for exams recently, so I am using my spare time to finish the LSTM part!

Someone asked me: what is the positioning of your WeChat official account?

My answer: it is an outlet for my own learning and blog posts, and a place for all of us to exchange and share ideas.

A small update for you: a while back, three advertisers contacted me within a single week, and I turned them all down. Why?

Because I want this account to deliver more substantial, genuinely useful content, not just chase traffic. So if you find it helpful, please share it, bookmark it, or even leave a tip; that is what keeps the original writing going!

1. LSTM

If you read recent papers, you will find that many people use a variant of the vanilla RNN called the Long Short-Term Memory (LSTM) RNN. Vanilla RNNs are hard to train on long sequences because repeated matrix multiplication causes vanishing and exploding gradients. LSTMs address this problem by replacing the vanilla RNN's simple update rule with the gating mechanism described below.

In other words, the biggest problem with vanilla RNNs is the vanishing gradient, and this section approaches that problem from the LSTM perspective.
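For reference, the gating mechanism that the code below implements can be summarized as follows (a compact restatement of the standard LSTM step; the combined activation of shape (N, 4H) is split into four H-wide blocks a_i, a_f, a_o, a_g):

    \begin{aligned}
    a   &= x_t W_x + h_{t-1} W_h + b \\
    i   &= \sigma(a_i), \quad f = \sigma(a_f), \quad o = \sigma(a_o), \quad g = \tanh(a_g) \\
    c_t &= f \odot c_{t-1} + i \odot g \\
    h_t &= o \odot \tanh(c_t)
    \end{aligned}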

1.1 How the LSTM Works

Source: https://blog.csdn.net/FortiLZ/article/details/80958149

Compared with a traditional RNN, an LSTM keeps the original hidden state and adds a memory cell that is updated over time. At each time step the cell has the same shape as the hidden state, and the two depend on each other for their updates. Concretely, the learned parameters Wx and Wh grow from shapes (D, H) and (H, H) in the RNN to (D, 4H) and (H, 4H), i.e. one H-wide block for each of the i, f, o, g gates, so h(t−1)·Wh + x(t)·Wx has shape (N, 4H). The i, f, and g components update the cell state, and the new cell state c(t) together with o produces the new hidden state h(t).
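As a quick, standalone sanity check of these shapes (hypothetical sizes, not part of the assignment code):

    import numpy as np

    # Hypothetical sizes: batch N, input dimension D, hidden dimension H.
    N, D, H = 2, 3, 4
    x_t    = np.random.randn(N, D)      # input at one time step
    h_prev = np.random.randn(N, H)      # previous hidden state
    Wx     = np.random.randn(D, 4 * H)  # input-to-hidden weights
    Wh     = np.random.randn(H, 4 * H)  # hidden-to-hidden weights
    b      = np.random.randn(4 * H)     # biases

    a = x_t.dot(Wx) + h_prev.dot(Wh) + b
    print(a.shape)  # (2, 16), i.e. (N, 4H): four H-wide blocks for the i, f, o, g gates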

1.2 Implementing the LSTM

Open LSTM_Captioning.ipynb and complete the forward and backward passes in rnn_layers.py.

Forward pass for a single time step

The implementation follows directly from the update equations shown above.

Inputs and outputs from the lstm_step_forward docstring:

Inputs:
- x: Input data, of shape (N, D)
- prev_h: Previous hidden state, of shape (N, H)
- prev_c: Previous cell state, of shape (N, H)
- Wx: Input-to-hidden weights, of shape (D, 4H)
- Wh: Hidden-to-hidden weights, of shape (H, 4H)
- b: Biases, of shape (4H,)

Returns:
- next_h: Next hidden state, of shape (N, H)
- next_c: Next cell state, of shape (N, H)
- cache: Tuple of values needed for the backward pass

Implementation:

Plug the inputs into the update equations above, following the docstring's input/output spec.

    H = prev_h.shape[1]
    ifog = x.dot(Wx) + prev_h.dot(Wh) + b
    i = sigmoid(ifog[:, :H])
    f = sigmoid(ifog[:, H:2*H])
    o = sigmoid(ifog[:, 2*H:3*H])
    g = np.tanh(ifog[:, 3*H:])
    next_c = f * prev_c + i * g
    next_h = o * np.tanh(next_c)
    cache = (x, prev_h, prev_c, Wx, Wh, ifog, f, i, g, o, next_c, next_h)
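Note that sigmoid here is a helper already provided in the assignment's rnn_layers.py (a numerically stable logistic function). If you want to run the snippet on its own, a minimal stand-in would be:

    import numpy as np

    def sigmoid(x):
        # Minimal logistic sigmoid; the assignment's version is numerically stable.
        return 1.0 / (1.0 + np.exp(-x))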

Backward pass for a single time step

Inputs and outputs from the lstm_step_backward docstring:

Inputs:
- dnext_h: Gradients of next hidden state, of shape (N, H)
- dnext_c: Gradients of next cell state, of shape (N, H)
- cache: Values from the forward pass

Returns:
- dx: Gradient of input data, of shape (N, D)
- dprev_h: Gradient of previous hidden state, of shape (N, H)
- dprev_c: Gradient of previous cell state, of shape (N, H)
- dWx: Gradient of input-to-hidden weights, of shape (D, 4H)
- dWh: Gradient of hidden-to-hidden weights, of shape (H, 4H)
- db: Gradient of biases, of shape (4H,)

Implementation:

Hint for hand-deriving the tricky part of the backward pass:

Note that when computing the gradient of next_c, there is also gradient flowing in through next_h (since next_h = o * tanh(next_c)), so you must add the upstream dnext_c on top of that term. Refer back to the update equations if this is unclear.
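Written out, the cell-state gradient has two contributions, which is exactly what the dnext_c line in the code below computes:

    \frac{\partial L}{\partial c_t}
      = \underbrace{\frac{\partial L}{\partial h_t} \odot o \odot \left(1 - \tanh^2(c_t)\right)}_{\text{through } h_t = o \,\odot\, \tanh(c_t)}
      \;+\; \underbrace{\left.\frac{\partial L}{\partial c_t}\right|_{\text{upstream}}}_{\text{the incoming dnext\_c}}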

The rest of the backward pass follows the comments in the code.

    x, prev_h, prev_c, Wx, Wh, ifog, f, i, g, o, next_c, next_h = cache

    N, H = dnext_h.shape
    difog = np.zeros((N, 4*H))
    # next_h = o * np.tanh(next_c)
    do = dnext_h * np.tanh(next_c)
    dnext_c = dnext_h * o * (1 - np.tanh(next_c) ** 2) + dnext_c

    # next_c = f * prev_c + i * g
    df = dnext_c * prev_c
    dprev_c = dnext_c * f
    di = dnext_c * g
    dg = dnext_c * i

    # g = tanh(ifog[:, 3H:])
    difog[:, 3*H:] = dg * (1 - np.tanh(ifog[:, 3*H:]) ** 2)
    # o = sigmoid(ifog[:, 2H:3H])
    difog[:, 2*H:3*H] = do * (sigmoid(ifog[:, 2*H:3*H]) * (1 - sigmoid(ifog[:, 2*H:3*H])))
    # f = sigmoid(ifog[:, H:2H])
    difog[:, H:2*H] = df * (sigmoid(ifog[:, H:2*H]) * (1 - sigmoid(ifog[:, H:2*H])))
    # i = sigmoid(ifog[:, :H])
    difog[:, :H] = di * (sigmoid(ifog[:, :H]) * (1 - sigmoid(ifog[:, :H])))

    # ifog = x.dot(Wx) + prev_h.dot(Wh) + b
    # difog (N, 4H), Wx (D, 4H) -> dx (N, D)
    dx = difog.dot(Wx.T)
    # x (N, D), difog (N, 4H) -> dWx (D, 4H)
    dWx = x.T.dot(difog)
    # difog (N, 4H), Wh (H, 4H) -> dprev_h (N, H)
    dprev_h = difog.dot(Wh.T)
    # prev_h (N, H), difog (N, 4H) -> dWh (H, 4H)
    dWh = prev_h.T.dot(difog)
    db = np.sum(difog, axis=0)
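A quick way to sanity-check the single-step backward pass is a finite-difference comparison (the notebook does this more thoroughly with eval_numerical_gradient_array). A rough sketch, assuming the completed rnn_layers.py is importable as cs231n.rnn_layers as in the assignment layout:

    import numpy as np
    from cs231n.rnn_layers import lstm_step_forward, lstm_step_backward  # assumed assignment layout

    np.random.seed(0)
    N, D, H = 3, 4, 5
    x = np.random.randn(N, D)
    prev_h = np.random.randn(N, H)
    prev_c = np.random.randn(N, H)
    Wx = np.random.randn(D, 4 * H)
    Wh = np.random.randn(H, 4 * H)
    b = np.random.randn(4 * H)

    next_h, next_c, cache = lstm_step_forward(x, prev_h, prev_c, Wx, Wh, b)
    dnext_h = np.random.randn(N, H)
    dnext_c = np.random.randn(N, H)
    dx, dprev_h, dprev_c, dWx, dWh, db = lstm_step_backward(dnext_h, dnext_c, cache)

    # Finite-difference check on one entry of x: perturb x[0, 0] and compare the change in
    # <next_h, dnext_h> + <next_c, dnext_c> against the analytic dx[0, 0].
    eps = 1e-6
    x_p = x.copy(); x_p[0, 0] += eps
    h_plus, c_plus, _ = lstm_step_forward(x_p, prev_h, prev_c, Wx, Wh, b)
    x_m = x.copy(); x_m[0, 0] -= eps
    h_minus, c_minus, _ = lstm_step_forward(x_m, prev_h, prev_c, Wx, Wh, b)
    num = (np.sum((h_plus - h_minus) * dnext_h) + np.sum((c_plus - c_minus) * dnext_c)) / (2 * eps)
    print(num, dx[0, 0])  # the two numbers should agree to several decimal places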

Forward pass over the full sequence

This function runs the LSTM forward over an entire sequence of data. We assume an input sequence of T vectors, each of dimension D. The LSTM uses a hidden size of H, and we work on a minibatch containing N sequences. After running the LSTM forward, we return the hidden states for all time steps.

Note that the initial cell state is not an input to this function; it is simply set to zero. Also note that the cell state is not returned: it is an internal variable of the LSTM and is not accessed from outside.

Inputs and outputs from the lstm_forward docstring:

Inputs:
- x: Input data of shape (N, T, D)
- h0: Initial hidden state of shape (N, H)
- Wx: Weights for input-to-hidden connections, of shape (D, 4H)
- Wh: Weights for hidden-to-hidden connections, of shape (H, 4H)
- b: Biases of shape (4H,)

Returns:
- h: Hidden states for all timesteps of all sequences, of shape (N, T, H)
- cache: Values needed for the backward pass

Implementation:

Loop over the T time steps and call lstm_step_forward at each step.

    N, T, D = x.shape
    N, H = h0.shape

    h = np.zeros((N, T, H))
    prev_h = h0
    cache = {}
    # Initial cell state is all zeros
    next_c = np.zeros((N, H))
    for i in range(T):
        # One LSTM step; carry the hidden and cell state forward in time
        prev_h, next_c, cache_i = lstm_step_forward(x[:, i, :], prev_h, next_c, Wx, Wh, b)
        h[:, i, :] = prev_h
        cache[i] = cache_i
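For example, a quick shape check (again assuming the completed rnn_layers.py is importable as cs231n.rnn_layers):

    import numpy as np
    from cs231n.rnn_layers import lstm_forward  # assumed assignment layout

    N, T, D, H = 2, 5, 4, 6
    x  = np.random.randn(N, T, D)
    h0 = np.random.randn(N, H)
    Wx = np.random.randn(D, 4 * H)
    Wh = np.random.randn(H, 4 * H)
    b  = np.zeros(4 * H)

    h, cache = lstm_forward(x, h0, Wx, Wh, b)
    print(h.shape)  # (2, 5, 6), i.e. (N, T, H); the cell state stays internal and is not returned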

Backward pass over the full sequence

Inputs and outputs from the lstm_backward docstring:

Inputs:
- dh: Upstream gradients of hidden states, of shape (N, T, H)
- cache: Values from the forward pass

Returns:
- dx: Gradient of input data of shape (N, T, D)
- dh0: Gradient of initial hidden state of shape (N, H)
- dWx: Gradient of input-to-hidden weight matrix of shape (D, 4H)
- dWh: Gradient of hidden-to-hidden weight matrix of shape (H, 4H)
- db: Gradient of biases, of shape (4H,)

Walk backward in time from the last step to the first, calling lstm_step_backward at each step. Remember that the weight gradients are accumulated over the whole sequence: compute the per-step gradients inside the loop and sum them. Also note that the hidden-state gradient entering each step has two sources, as written out below.
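Concretely, at time step t the gradients fed into lstm_step_backward are (matching the dnext_h and dnext_c lines in the code below):

    \frac{\partial L}{\partial h_t} = \underbrace{\texttt{dh[:, t, :]}}_{\text{loss at step } t} + \underbrace{\texttt{dprev\_h}}_{\text{flowing back from step } t+1},
    \qquad
    \frac{\partial L}{\partial c_t} = \texttt{dprev\_c} \;\; (\text{the cell state only flows through time})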

    N, T, H = dh.shape
    # Recover the input dimension D from the first cached time step
    x = cache[0][0]
    N, D = x.shape
    dx = np.zeros((N, T, D))
    dh0 = np.zeros((N, H))
    dWx = np.zeros((D, 4*H))
    dWh = np.zeros((H, 4*H))
    db = np.zeros(4*H)
    dprev_h = np.zeros((N, H))
    dprev_c = np.zeros((N, H))
    for i in reversed(range(T)):
        # Gradient on h_t: loss gradient at step t plus the gradient from step t+1
        dnext_h = dh[:, i, :] + dprev_h
        # Gradient on c_t: comes only from step t+1
        dnext_c = dprev_c
        dx[:, i, :], dprev_h, dprev_c, dWx_tmp, dWh_tmp, db_tmp = lstm_step_backward(dnext_h, dnext_c, cache[i])
        # Weight and bias gradients accumulate over all time steps
        dWx += dWx_tmp
        dWh += dWh_tmp
        db += db_tmp
    dh0 = dprev_h

Finally, run the rest of LSTM_Captioning.ipynb to check the results of the implementation.