Building a Convolutional Neural Network with NumPy

This article introduces the basic principles of convolutional neural networks and shows how to build a CNN using only numpy, including implementations of the convolution (CONV) layer, the pooling (POOL) layer, and their forward and backward passes.

Notation
  • Superscript \([l]\) denotes a quantity of the \(l\)-th layer.
    - Example: \(a^{[L]}\) is the activation output of layer \(L\); \(W^{[L]}\) and \(b^{[L]}\) are the parameters of layer \(L\).
  • Superscript \((i)\) denotes the \(i\)-th example.
    - Example: \(x^{(i)}\) is the \(i\)-th training example.
  • Subscript \(i\) denotes the \(i\)-th entry of a vector.
    - Example: \(a^{[l]}_i\) is the output of the \(i\)-th neuron in layer \(l\).

  • \(n_H\), \(n_W\) and \(n_C\) denote the height, width and number of channels (depth) of a given layer; for layer \(l\) they are written \(n_H^{[l]}\), \(n_W^{[l]}\), \(n_C^{[l]}\).
  • \(n_{H_{prev}}\), \(n_{W_{prev}}\) and \(n_{C_{prev}}\) denote the height, width and depth of the previous layer; relative to layer \(l\) they are \(n_H^{[l-1]}\), \(n_W^{[l-1]}\), \(n_C^{[l-1]}\).

The model to be implemented in this article is as follows:

(Figure: the convolutional neural network model to be implemented)

The convolution layer is one of the hardest concepts in a CNN to grasp: it transforms an input volume into an output volume of a different size.

The two basic building blocks of the convolution layer are introduced below: zero-padding and the convolution operation.

Zero-Padding

Zero-padding adds zeros around the border of an image.

Figure 1 : Zero-padding
An image (3 channels, RGB) padded with 2 zeros.

The main benefits of padding are:

  • It keeps the height and width of a volume unchanged across a CONV layer. Without padding, the height and width shrink as the network gets deeper. A special padding scheme called "SAME" pads with exactly enough zeros so that the height and width are the same before and after the convolution (see the sketch right after this list).
  • It keeps more of the information at the border of an image. Without padding, the pixels at the border have only a small influence on the next layer.
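
A minimal sketch (not from the original; it assumes stride 1 and an odd filter size f, and uses the output-size formula given later in this article) of how the "SAME" padding amount is chosen:

f = 3                      # filter size, assumed odd
stride = 1                 # "SAME" preserves H/W exactly only when stride is 1
pad = (f - 1) // 2         # choose pad so that (n_H_prev - f + 2*pad)/stride + 1 == n_H_prev
n_H_prev = 32
n_H = (n_H_prev - f + 2 * pad) // stride + 1
print(n_H)                 # 32: the height is unchanged by the convolution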

In numpy, to pad an array a of shape (5,5,5,5,5) with 1 zero on each side of axis 1 and 3 zeros on each side of axis 3, the call is:

a = np.pad(a, ((0,0), (1,1), (0,0), (3,3), (0,0)), 'constant', constant_values = (..,..))
import numpy as np
import h5py
import matplotlib.pyplot as plt

%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

%load_ext autoreload
%autoreload 2

np.random.seed(1)
# GRADED FUNCTION: zero_pad

def zero_pad(X, pad):
    """
    Pad with zeros all images of the dataset X. The padding is applied to the height and width of an image,
    as illustrated in Figure 1.

    Argument:
    X -- python numpy array of shape (m, n_H, n_W, n_C) representing a batch of m images
    pad -- integer, amount of padding around each image on vertical and horizontal dimensions

    Returns:
    X_pad -- padded image of shape (m, n_H + 2*pad, n_W + 2*pad, n_C)
    """

    ### START CODE HERE ### (≈ 1 line)
    X_pad = np.pad(X, ((0, 0), (pad, pad), (pad, pad), (0, 0)), 'constant', constant_values=0)
    ### END CODE HERE ###

    return X_pad
np.random.seed(1)
x = np.random.randn(4, 3, 3, 2)
x_pad = zero_pad(x, 2)
print ("x.shape =", x.shape)
print ("x_pad.shape =", x_pad.shape)
print ("x[1, 1] =", x[1, 1])
print ("x_pad[1, 1] =", x_pad[1, 1])

fig, axarr = plt.subplots(1, 2)
axarr[0].set_title('x')
axarr[0].imshow(x[0,:,:,0])
axarr[1].set_title('x_pad')
axarr[1].imshow(x_pad[0,:,:,0])
x.shape = (4, 3, 3, 2)
x_pad.shape = (4, 7, 7, 2)
x[1, 1] = [[ 0.90085595 -0.68372786]
 [-0.12289023 -0.93576943]
 [-0.26788808  0.53035547]]
x_pad[1, 1] = [[ 0.  0.]
 [ 0.  0.]
 [ 0.  0.]
 [ 0.  0.]
 [ 0.  0.]
 [ 0.  0.]
 [ 0.  0.]]






The Convolution Operation

In a single step of convolution, one filter is applied at a single position of the input. A convolution unit:

  • takes an input volume
  • applies the filter at every position of the input
  • outputs another volume (usually of a different size)
Figure 2 : The convolution operation
A 2x2 filter applied with a stride of 1 (stride = the amount by which the window shifts at each step)

In computer vision, each entry of the matrix on the left corresponds to one pixel. Convolving it with a 3x3 filter means taking a 3x3 patch of the matrix, multiplying it element-wise with the filter, and summing the result.

# GRADED FUNCTION: conv_single_step

def conv_single_step(a_slice_prev, W, b):
    """
    Apply one filter defined by parameters W on a single slice (a_slice_prev) of the output activation
    of the previous layer.

    Arguments:
    a_slice_prev -- slice of input data of shape (f, f, n_C_prev)
    W -- Weight parameters contained in a window - matrix of shape (f, f, n_C_prev)
    b -- Bias parameters contained in a window - matrix of shape (1, 1, 1)

    Returns:
    Z -- a scalar value, result of convolving the sliding window (W, b) on a slice x of the input data
    """

    ### START CODE HERE ### (≈ 2 lines of code)
    # Element-wise product between a_slice and W. Add bias.
    s = np.multiply(a_slice_prev, W) + b
    # Sum over all entries of the volume s
    Z = np.sum(s)
    ### END CODE HERE ###

    return Z
np.random.seed(1)
a_slice_prev = np.random.randn(4, 4, 3)
W = np.random.randn(4, 4, 3)
b = np.random.randn(1, 1, 1)

Z = conv_single_step(a_slice_prev, W, b)
print("Z =", Z)
Z = -23.1602122025

Forward Pass

In the forward pass, each application of a filter over the input produces a 2D matrix; applying several different filters and stacking their outputs forms a 3D output volume.

Figure 3 : vertical and horizontal start/end mark where the slice begins and ends in the vertical and horizontal directions (shown for a 2x2 filter)
Only a single channel is shown in this figure.

The dimensions before and after a convolution are related by:

\[ n_H = \lfloor \frac{n_{H_{prev}} - f + 2 \times pad}{stride} \rfloor +1 \] \[ n_W = \lfloor \frac{n_{W_{prev}} - f + 2 \times pad}{stride} \rfloor +1 \] \[ n_C = \text{number of filters used}\]
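
As a quick check of these formulas, here is a small sketch (not part of the original notebook) that plugs in the same values used in the test cell further below:

import math

n_H_prev, n_W_prev = 4, 4      # input height/width used in the test below
f, pad, stride = 2, 2, 1       # filter size, padding and stride used in the test below
n_C = 8                        # number of filters used in the test below

n_H = math.floor((n_H_prev - f + 2 * pad) / stride) + 1
n_W = math.floor((n_W_prev - f + 2 * pad) / stride) + 1
print(n_H, n_W, n_C)           # 7 7 8, matching the last three dimensions of Z below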

# GRADED FUNCTION: conv_forward

def conv_forward(A_prev, W, b, hparameters):
    """
    Implements the forward propagation for a convolution function

    Arguments:
    A_prev -- output activations of the previous layer, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
    W -- Weights, numpy array of shape (f, f, n_C_prev, n_C)
    b -- Biases, numpy array of shape (1, 1, 1, n_C)
    hparameters -- python dictionary containing "stride" and "pad"

    Returns:
    Z -- conv output, numpy array of shape (m, n_H, n_W, n_C)
    cache -- cache of values needed for the conv_backward() function
    """

    ### START CODE HERE ###
    # Retrieve dimensions from A_prev's shape (≈1 line)
    (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape

    # Retrieve dimensions from W's shape (≈1 line)
    (f, f, n_C_prev, n_C) = W.shape

    # Retrieve information from "hparameters" (≈2 lines)
    stride = hparameters['stride']
    pad = hparameters['pad']

    # Compute the dimensions of the CONV output volume using the formula given above. Hint: use int() to floor. (≈2 lines)
    n_H = int((n_H_prev - f + 2 * pad) / stride) + 1
    n_W = int((n_W_prev - f + 2 * pad) / stride) + 1

    # Initialize the output volume Z with zeros. (≈1 line)
    Z = np.zeros((m, n_H, n_W, n_C))

    # Create A_prev_pad by padding A_prev
    A_prev_pad = zero_pad(A_prev, pad)

    for i in range(m):                      # loop over the batch of training examples
        a_prev_pad = A_prev_pad[i]          # Select ith training example's padded activation
        for h in range(n_H):                # loop over vertical axis of the output volume
            for w in range(n_W):            # loop over horizontal axis of the output volume
                for c in range(n_C):        # loop over channels (= #filters) of the output volume
                    # Find the corners of the current "slice" (≈4 lines)
                    vert_start = h * stride
                    vert_end = vert_start + f
                    horiz_start = w * stride
                    horiz_end = horiz_start + f
                    # Use the corners to define the (3D) slice of a_prev_pad (See Hint above the cell). (≈1 line)
                    a_slice_prev = a_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :]
                    # Convolve the (3D) slice with the correct filter W and bias b, to get back one output neuron. (≈1 line)
                    Z[i, h, w, c] = conv_single_step(a_slice_prev, W[..., c], b[..., c])

    ### END CODE HERE ###

    # Making sure your output shape is correct
    assert(Z.shape == (m, n_H, n_W, n_C))

    # Save information in "cache" for the backprop
    cache = (A_prev, W, b, hparameters)

    return Z, cache
np.random.seed(1)
A_prev = np.random.randn(10, 4, 4, 3)
W = np.random.randn(2, 2, 3, 8)
b = np.random.randn(1, 1, 1, 8)
hparameters = {"pad" : 2,
"stride": 1}

Z, cache_conv = conv_forward(A_prev, W, b, hparameters)
print("Z's mean =", np.mean(Z))
print("cache_conv[0][1][2][3] =", cache_conv[0][1][2][3])
Z's mean = 0.155859324889
cache_conv[0][1][2][3] = [-0.20075807  0.18656139  0.41005165]

Finally, a CONV layer can also apply an activation function:

# Convolve the window to get back one output neuron
Z[i, h, w, c] = ...
# Apply activation
A[i, h, w, c] = activation(Z[i, h, w, c])
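
For example, a ReLU activation can be applied to the whole output volume at once (a vectorized sketch of the per-element pseudocode above, using the Z returned by conv_forward):

A = np.maximum(0, Z)    # ReLU: element-wise max(0, z) over the conv output volume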

Pooling Layer

The pooling (POOL) layer reduces the height and width of the input, which cuts down computation and also helps make feature detection more invariant to where features appear in the input. There are two kinds of pooling:

  • Max pooling: slides an (\(f, f\)) window over the input and outputs the maximum value within the window.

  • Average pooling: slides an (\(f, f\)) window over the input and outputs the average value within the window.

Pooling layers have no parameters to train, but they do have hyperparameters (such as the window size f) to tune.

Pooling Forward Pass

Implement both MAX-POOL and AVG-POOL.

The dimensions before and after pooling are related by: \[ n_H = \lfloor \frac{n_{H_{prev}} - f}{stride} \rfloor +1 \] \[ n_W = \lfloor \frac{n_{W_{prev}} - f}{stride} \rfloor +1 \] \[ n_C = n_{C_{prev}}\] Note that pooling is usually not preceded by zero-padding, and pooling does not change the depth of the volume.
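
Again as a quick check, a small sketch (not part of the original notebook) using the values of the test cell below (a 4x4x3 input, f = 4, stride = 1):

n_H_prev, n_W_prev, n_C_prev = 4, 4, 3
f, stride = 4, 1
n_H = (n_H_prev - f) // stride + 1
n_W = (n_W_prev - f) // stride + 1
print(n_H, n_W, n_C_prev)      # 1 1 3: the whole image collapses to one value per channel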

# GRADED FUNCTION: pool_forward

def pool_forward(A_prev, hparameters, mode = "max"):
    """
    Implements the forward pass of the pooling layer

    Arguments:
    A_prev -- Input data, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
    hparameters -- python dictionary containing "f" and "stride"
    mode -- the pooling mode you would like to use, defined as a string ("max" or "average")

    Returns:
    A -- output of the pool layer, a numpy array of shape (m, n_H, n_W, n_C)
    cache -- cache used in the backward pass of the pooling layer, contains the input and hparameters
    """

    # Retrieve dimensions from the input shape
    (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape

    # Retrieve hyperparameters from "hparameters"
    f = hparameters["f"]
    stride = hparameters["stride"]

    # Define the dimensions of the output
    n_H = int(1 + (n_H_prev - f) / stride)
    n_W = int(1 + (n_W_prev - f) / stride)
    n_C = n_C_prev

    # Initialize output matrix A
    A = np.zeros((m, n_H, n_W, n_C))

    ### START CODE HERE ###
    for i in range(m):                      # loop over the training examples
        for h in range(n_H):                # loop on the vertical axis of the output volume
            for w in range(n_W):            # loop on the horizontal axis of the output volume
                for c in range(n_C):        # loop over the channels of the output volume

                    # Find the corners of the current "slice" (≈4 lines)
                    vert_start = h * stride
                    vert_end = vert_start + f
                    horiz_start = w * stride
                    horiz_end = horiz_start + f

                    # Use the corners to define the current slice on the ith training example of A_prev, channel c. (≈1 line)
                    a_prev_slice = A_prev[i, vert_start:vert_end, horiz_start:horiz_end, c]

                    # Compute the pooling operation on the slice. Use an if statement to differentiate the modes. Use np.max/np.mean.
                    if mode == "max":
                        A[i, h, w, c] = np.max(a_prev_slice)
                    elif mode == "average":
                        A[i, h, w, c] = np.mean(a_prev_slice)

    ### END CODE HERE ###

    # Store the input and hparameters in "cache" for pool_backward()
    cache = (A_prev, hparameters)

    # Making sure your output shape is correct
    assert(A.shape == (m, n_H, n_W, n_C))

    return A, cache
np.random.seed(1)
A_prev = np.random.randn(2, 4, 4, 3)
hparameters = {"stride" : 1, "f": 4}

A, cache = pool_forward(A_prev, hparameters)
print("mode = max")
print("A =", A)
print()
A, cache = pool_forward(A_prev, hparameters, mode = "average")
print("mode = average")
print("A =", A)
mode = max
A = [[[[ 1.74481176  1.6924546   2.10025514]]]


 [[[ 1.19891788  1.51981682  2.18557541]]]]

mode = average
A = [[[[-0.09498456  0.11180064 -0.14263511]]]


 [[[-0.09525108  0.28325018  0.33035185]]]]

Backpropagation Through the Convolution Layer

In current deep learning frameworks you usually only implement the forward pass; the framework handles the backward pass automatically. It is still worth understanding how backpropagation through these layers works.

Backward Convolution

Computing dA

For a given filter \(W_c\) and a given training example, \(dA\) is computed as:

\[ dA += \sum _{h=0} ^{n_H} \sum_{w=0} ^{n_W} W_c \times dZ_{hw} \tag{1}\]

where \(W_c\) is a filter and \(dZ_{hw}\) is the gradient of the cost with respect to the output Z of the conv layer at row h, column w. Note that in the formula above the same filter \(W_c\) is multiplied by a different \(dZ\) at each position: during the forward pass each filter was multiplied element-wise with a different slice and summed, so during the backward pass the gradients contributed by all slices are added up.

Translated into code, the formula becomes:

da_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += W[:,:,:,c] * dZ[i, h, w, c]

Computing dW

Compute \(dW_c\) (\(dW_c\) is the gradient of the cost with respect to one filter):

\[ dW_c += \sum _{h=0} ^{n_H} \sum_{w=0} ^ {n_W} a_{slice} \times dZ_{hw} \tag{2}\]

where \(a_{slice}\) is the slice that was used to generate the activation \(Z_{ij}\); the formula therefore gives the gradient of the cost with respect to \(W\) for that slice. Since the same filter \(W\) is used for every slice, the gradients from all slices are summed to obtain \(dW\).

The corresponding code is:

dW[:,:,:,c] += a_slice * dZ[i, h, w, c]

Computing db

For a given filter \(W_c\), the gradient \(db\) of the cost is:

\[ db = \sum_h \sum_w dZ_{hw} \tag{3}\]

The corresponding code is:

db[:,:,:,c] += dZ[i, h, w, c]

def conv_backward(dZ, cache):
    """
    Implement the backward propagation for a convolution function

    Arguments:
    dZ -- gradient of the cost with respect to the output of the conv layer (Z), numpy array of shape (m, n_H, n_W, n_C)
    cache -- cache of values needed for the conv_backward(), output of conv_forward()

    Returns:
    dA_prev -- gradient of the cost with respect to the input of the conv layer (A_prev),
               numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
    dW -- gradient of the cost with respect to the weights of the conv layer (W)
          numpy array of shape (f, f, n_C_prev, n_C)
    db -- gradient of the cost with respect to the biases of the conv layer (b)
          numpy array of shape (1, 1, 1, n_C)
    """

    ### START CODE HERE ###
    # Retrieve information from "cache"
    (A_prev, W, b, hparameters) = cache

    # Retrieve dimensions from A_prev's shape
    (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape

    # Retrieve dimensions from W's shape
    (f, f, n_C_prev, n_C) = W.shape

    # Retrieve information from "hparameters"
    stride = hparameters["stride"]
    pad = hparameters["pad"]

    # Retrieve dimensions from dZ's shape
    (m, n_H, n_W, n_C) = dZ.shape

    # Initialize dA_prev, dW, db with the correct shapes
    dA_prev = np.zeros((m, n_H_prev, n_W_prev, n_C_prev))
    dW = np.zeros((f, f, n_C_prev, n_C))
    db = np.zeros((1, 1, 1, n_C))

    # Pad A_prev and dA_prev
    A_prev_pad = zero_pad(A_prev, pad)
    dA_prev_pad = zero_pad(dA_prev, pad)

    for i in range(m):                          # loop over the training examples

        # select ith training example from A_prev_pad and dA_prev_pad
        a_prev_pad = A_prev_pad[i, :, :, :]
        da_prev_pad = dA_prev_pad[i, :, :, :]

        for h in range(n_H):                    # loop over vertical axis of the output volume
            for w in range(n_W):                # loop over horizontal axis of the output volume
                for c in range(n_C):            # loop over the channels of the output volume

                    # Find the corners of the current "slice"
                    vert_start = h * stride
                    vert_end = vert_start + f
                    horiz_start = w * stride
                    horiz_end = horiz_start + f

                    # Use the corners to define the slice from a_prev_pad
                    a_slice = a_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :]

                    # Update gradients for the window and the filter's parameters using the code formulas given above
                    da_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += W[:,:,:,c] * dZ[i, h, w, c]
                    dW[:,:,:,c] += a_slice * dZ[i, h, w, c]
                    db[:,:,:,c] += dZ[i, h, w, c]

        # Set the ith training example's dA_prev to the unpadded da_prev_pad (Hint: use X[pad:-pad, pad:-pad, :])
        dA_prev[i, :, :, :] = da_prev_pad[pad:-pad, pad:-pad, :]
    ### END CODE HERE ###

    # Making sure your output shape is correct
    assert(dA_prev.shape == (m, n_H_prev, n_W_prev, n_C_prev))

    return dA_prev, dW, db
np.random.seed(1)
dA, dW, db = conv_backward(Z, cache_conv)
print("dA_mean =", np.mean(dA))
print("dW_mean =", np.mean(dW))
print("db_mean =", np.mean(db))
dA_mean = 9.60899067587
dW_mean = 10.5817412755
db_mean = 76.3710691956
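
As a sanity check on conv_backward (a minimal sketch, not part of the original assignment), one weight entry can be compared against a numerical gradient. Taking the cost to be np.sum(Z), the upstream gradient dZ is simply an array of ones:

np.random.seed(1)
A_prev = np.random.randn(2, 4, 4, 3)
W = np.random.randn(2, 2, 3, 4)
b = np.random.randn(1, 1, 1, 4)
hparameters = {"pad": 1, "stride": 1}

Z, cache = conv_forward(A_prev, W, b, hparameters)
_, dW, _ = conv_backward(np.ones_like(Z), cache)      # dZ = d(sum(Z))/dZ = ones

eps = 1e-7
idx = (0, 1, 2, 3)                                    # an arbitrary weight entry to check
W_plus, W_minus = W.copy(), W.copy()
W_plus[idx] += eps
W_minus[idx] -= eps
cost_plus = np.sum(conv_forward(A_prev, W_plus, b, hparameters)[0])
cost_minus = np.sum(conv_forward(A_prev, W_minus, b, hparameters)[0])
print(dW[idx], (cost_plus - cost_minus) / (2 * eps))  # the two values should agree closely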

Pooling Layer Backward Pass

Max Pooling Backward Pass

First build the function create_mask_from_window(), which implements:

\[ X = \begin{bmatrix} 1 && 3 \\ 4 && 2 \end{bmatrix} \quad \rightarrow \quad M =\begin{bmatrix} 0 && 0 \\ 1 && 0 \end{bmatrix}\tag{4}\]

This function marks the position in the input window that influences the output (the location of the maximum).

def create_mask_from_window(x):
    """
    Creates a mask from an input matrix x, to identify the max entry of x.

    Arguments:
    x -- Array of shape (f, f)

    Returns:
    mask -- Array of the same shape as window, contains a True at the position corresponding to the max entry of x.
    """

    ### START CODE HERE ### (≈1 line)
    mask = (x == np.max(x))
    ### END CODE HERE ###

    return mask
np.random.seed(1)
x = np.random.randn(2,3)
mask = create_mask_from_window(x)
print('x = ', x)
print("mask = ", mask)
x =  [[ 1.62434536 -0.61175641 -0.52817175]
 [-1.07296862  0.86540763 -2.3015387 ]]
mask =  [[ True False False]
 [False False False]]

Average Pooling Backward Pass

To describe how each entry of the input window of an average pooling layer influences the output, build a function that implements:

\[ dZ = 1 \quad \rightarrow \quad dZ =\begin{bmatrix} 1/4 && 1/4 \\ 1/4 && 1/4 \end{bmatrix}\tag{5}\]

def distribute_value(dz, shape):
    """
    Distributes the input value in the matrix of dimension shape

    Arguments:
    dz -- input scalar
    shape -- the shape (n_H, n_W) of the output matrix for which we want to distribute the value of dz

    Returns:
    a -- Array of size (n_H, n_W) for which we distributed the value of dz
    """

    ### START CODE HERE ###
    # Retrieve dimensions from shape (≈1 line)
    (n_H, n_W) = shape

    # Compute the value to distribute on the matrix (≈1 line)
    average = dz / (n_H * n_W)

    # Create a matrix where every entry is the "average" value (≈1 line)
    a = np.ones((n_H, n_W)) * average
    ### END CODE HERE ###

    return a
a = distribute_value(2, (2,2))
print('distributed value =', a)
distributed value = [[ 0.5  0.5]
 [ 0.5  0.5]]

Pooling Backward Pass: pool_backward

def pool_backward(dA, cache, mode = "max"):
    """
    Implements the backward pass of the pooling layer

    Arguments:
    dA -- gradient of cost with respect to the output of the pooling layer, same shape as A
    cache -- cache output from the forward pass of the pooling layer, contains the layer's input and hparameters
    mode -- the pooling mode you would like to use, defined as a string ("max" or "average")

    Returns:
    dA_prev -- gradient of cost with respect to the input of the pooling layer, same shape as A_prev
    """

    ### START CODE HERE ###

    # Retrieve information from cache (≈1 line)
    (A_prev, hparameters) = cache

    # Retrieve hyperparameters from "hparameters" (≈2 lines)
    stride = hparameters["stride"]
    f = hparameters["f"]

    # Retrieve dimensions from A_prev's shape and dA's shape (≈2 lines)
    m, n_H_prev, n_W_prev, n_C_prev = np.shape(A_prev)
    m, n_H, n_W, n_C = np.shape(dA)

    # Initialize dA_prev with zeros (≈1 line)
    dA_prev = np.zeros(np.shape(A_prev))

    for i in range(m):                          # loop over the training examples

        # select training example from A_prev (≈1 line)
        a_prev = A_prev[i, :, :, :]

        for h in range(n_H):                    # loop on the vertical axis
            for w in range(n_W):                # loop on the horizontal axis
                for c in range(n_C):            # loop over the channels (depth)

                    # Find the corners of the current "slice" (≈4 lines)
                    vert_start = h * stride
                    vert_end = vert_start + f
                    horiz_start = w * stride
                    horiz_end = horiz_start + f

                    # Compute the backward propagation in both modes.
                    if mode == "max":

                        # Use the corners and "c" to define the current slice from a_prev (≈1 line)
                        a_prev_slice = a_prev[vert_start:vert_end, horiz_start:horiz_end, c]
                        # Create the mask from a_prev_slice (≈1 line)
                        mask = create_mask_from_window(a_prev_slice)
                        # Set dA_prev to be dA_prev + (the mask multiplied by the correct entry of dA) (≈1 line)
                        dA_prev[i, vert_start:vert_end, horiz_start:horiz_end, c] += mask * dA[i, h, w, c]

                    elif mode == "average":

                        # Get the scalar value da from dA (≈1 line)
                        da = dA[i, h, w, c]
                        # Define the shape of the filter as fxf (≈1 line)
                        shape = (f, f)
                        # Distribute da over the window to get the contribution to dA_prev (≈1 line)
                        dA_prev[i, vert_start:vert_end, horiz_start:horiz_end, c] += distribute_value(da, shape)

    ### END CODE ###

    # Making sure your output shape is correct
    assert(dA_prev.shape == A_prev.shape)

    return dA_prev
np.random.seed(1)
A_prev = np.random.randn(5, 5, 3, 2)
hparameters = {"stride" : 1, "f": 2}
A, cache = pool_forward(A_prev, hparameters)
dA = np.random.randn(5, 4, 2, 2)

dA_prev = pool_backward(dA, cache, mode = "max")
print("mode = max")
print('mean of dA = ', np.mean(dA))
print('dA_prev[1,1] = ', dA_prev[1,1])
print()
dA_prev = pool_backward(dA, cache, mode = "average")
print("mode = average")
print('mean of dA = ', np.mean(dA))
print('dA_prev[1,1] = ', dA_prev[1,1])
mode = max
mean of dA =  0.145713902729
dA_prev[1,1] =  [[ 0.          0.        ]
 [ 5.05844394 -1.68282702]
 [ 0.          0.        ]]

mode = average
mean of dA =  0.145713902729
dA_prev[1,1] =  [[-0.32345834  0.45074345]
 [ 2.52832571 -0.24863478]
 [ 1.26416285 -0.12431739]]
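
To tie the pieces together, here is a minimal sketch (not part of the original notebook; the shapes are illustrative) that chains CONV -> ReLU -> MAX-POOL using only the functions built above:

np.random.seed(1)
A_prev = np.random.randn(2, 8, 8, 3)               # a batch of two 8x8 RGB "images"
W = np.random.randn(3, 3, 3, 8)                    # eight 3x3 filters
b = np.random.randn(1, 1, 1, 8)

Z, conv_cache = conv_forward(A_prev, W, b, {"pad": 1, "stride": 1})
A = np.maximum(0, Z)                               # ReLU activation
P, pool_cache = pool_forward(A, {"f": 2, "stride": 2}, mode="max")
print(Z.shape, A.shape, P.shape)                   # (2, 8, 8, 8) (2, 8, 8, 8) (2, 4, 4, 8)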

Conclusion

  1. This article introduced the basic principles of CNNs and implemented each building block of the convolution and pooling layers.
  2. The backward-pass section helps in further understanding how a CNN works; in mainstream deep learning frameworks, however, you never implement it yourself: you only compute forward to the cost function, and backpropagation and parameter updates are performed automatically.
