
The Backpropagation Algorithm and Its Implementation

Making sense of the backpropagation algorithm

    • Background
    • Defining the fully connected network
    • Forward computation
    • Chain-rule differentiation
    • The backpropagation algorithm
    • Code 1 (rough; Code 2 improves it): fitting the sin(x) curve
    • Code 2: add batch training and swap the activation function

—Background

Last year I read the first few chapters of Neural Networks and Deep Learning and picked up only a superficial understanding of backpropagation. I wanted to implement it myself back then, but I had too much going on and put it aside. Now that I have time, and because I want to sort this algorithm out properly, I've sat down to derive the formulas and write the code.

------ Here I only use fully connected layers as the example; this is mainly my own recent understanding ------

—Defining the fully connected network

[Figure: the fully connected network used in this post (2-3-3-2)]

The figure above shows the network. A few words about the notation:

$w_{ij}^{l}$: the weight between the $i$-th neuron in layer $l$ and the $j$-th neuron in layer $l+1$.

$b_{i}^{l}$: the bias of the $i$-th neuron in layer $l$.

$z_{i}^{l}$: the input value of the $i$-th neuron in layer $l$; it is the weighted sum of the previous layer's outputs plus the bias.

$a_{i}^{l}$: the output value of the $i$-th neuron in layer $l$; it is the input value passed through the activation function.

Every neuron here uses the sigmoid activation $s(x)=\frac{1}{1+e^{-x}}$. While we are at it, the derivative of $s(x)$ with respect to $x$ is $s^{\prime}(x)=\frac{e^{-x}}{\left(1+e^{-x}\right)^{2}}=s(x)(1-s(x))$.
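
As a quick sanity check, here is a minimal NumPy sketch (my own illustration, not part of the code later in this post) of $s(x)$ and $s^{\prime}(x)=s(x)(1-s(x))$:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_derivative(x):
    sx = sigmoid(x)
    return sx * (1.0 - sx)   # s'(x) = s(x) * (1 - s(x))

x = np.array([-2.0, 0.0, 2.0])
print(sigmoid(x))             # approx [0.1192, 0.5, 0.8808]
print(sigmoid_derivative(x))  # approx [0.105, 0.25, 0.105]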

This network outputs two values, which could represent the probability of each class in a binary classification network. The input is also two values; think of each sample as having two features.

Next, let's go through an example of the network's forward computation.

—Forward computation

Feed in one sample $X$ with two features, written as:

$X=\{x_{1},x_{2}\}$

The two features enter the input layer, that is, they are the output values of the first layer, so we can compute in turn the input value and the activation (the output passed to the next layer) of every neuron in the second layer:

$z_{1}^{2}=w_{11}^{1}\cdot x_{1}+w_{21}^{1}\cdot x_{2}+b_{1}^{2}$

$z_{2}^{2}=w_{12}^{1}\cdot x_{1}+w_{22}^{1}\cdot x_{2}+b_{2}^{2}$

$z_{3}^{2}=w_{13}^{1}\cdot x_{1}+w_{23}^{1}\cdot x_{2}+b_{3}^{2}$

$a_{1}^{2}=s(z_{1}^{2})$

$a_{2}^{2}=s(z_{2}^{2})$

$a_{3}^{2}=s(z_{3}^{2})$

Next, compute the input value and the output to the next layer for every neuron in the third layer:

$z_{1}^{3}=w_{11}^{2}\cdot a_{1}^{2}+w_{21}^{2}\cdot a_{2}^{2}+w_{31}^{2}\cdot a_{3}^{2}+b_{1}^{3}$

$z_{2}^{3}=w_{12}^{2}\cdot a_{1}^{2}+w_{22}^{2}\cdot a_{2}^{2}+w_{32}^{2}\cdot a_{3}^{2}+b_{2}^{3}$

$z_{3}^{3}=w_{13}^{2}\cdot a_{1}^{2}+w_{23}^{2}\cdot a_{2}^{2}+w_{33}^{2}\cdot a_{3}^{2}+b_{3}^{3}$

$a_{1}^{3}=s(z_{1}^{3})$

$a_{2}^{3}=s(z_{2}^{3})$

$a_{3}^{3}=s(z_{3}^{3})$

With the third layer's activations, we can compute the fourth layer, i.e. the output layer:

$z_{1}^{4}=w_{11}^{3}\cdot a_{1}^{3}+w_{21}^{3}\cdot a_{2}^{3}+w_{31}^{3}\cdot a_{3}^{3}+b_{1}^{4}$

$z_{2}^{4}=w_{12}^{3}\cdot a_{1}^{3}+w_{22}^{3}\cdot a_{2}^{3}+w_{32}^{3}\cdot a_{3}^{3}+b_{2}^{4}$

$a_{1}^{4}=s(z_{1}^{4})$

$a_{2}^{4}=s(z_{2}^{4})$

Having obtained the network outputs $a_{1}^{4},a_{2}^{4}$, we compare them with the true values. Let the label of sample $X$ be $Y=\{y_{1},y_{2}\}$. We then measure the gap $loss$ between the network's outputs and the true values, here with the squared error:

$loss=\frac{(y_{1}-a_{1}^{4})^{2}+(y_{2}-a_{2}^{4})^{2}}{2}$
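
The forward pass above is easy to write down directly with NumPy. The following is a minimal sketch (my own illustration with made-up weights; the layer sizes 2-3-3-2 follow the figure):

import numpy as np

def s(x):
    # sigmoid activation
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
x = np.array([[0.5], [-1.2]])            # sample X = {x1, x2}
y = np.array([[1.0], [0.0]])             # label  Y = {y1, y2}

W1 = rng.standard_normal((3, 2)); b2 = rng.standard_normal((3, 1))
W2 = rng.standard_normal((3, 3)); b3 = rng.standard_normal((3, 1))
W3 = rng.standard_normal((2, 3)); b4 = rng.standard_normal((2, 1))

z2 = W1 @ x  + b2; a2 = s(z2)            # second layer
z3 = W2 @ a2 + b3; a3 = s(z3)            # third layer
z4 = W3 @ a3 + b4; a4 = s(z4)            # fourth (output) layer

loss = 0.5 * np.sum((y - a4) ** 2)       # loss = ((y1 - a1^4)^2 + (y2 - a2^4)^2) / 2
print(a4.ravel(), loss)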

With the loss value in hand, we want to optimize the network parameters so that the loss keeps decreasing. Here we use the most common method, gradient descent, to minimize the loss, since the direction opposite to the gradient is the direction in which the function value drops fastest. The next step is therefore to compute the gradient of the loss with respect to every $w$ and $b$, and then update these parameters with some learning rate $lr$, as follows:

$w = w-lr\cdot\frac{\partial loss}{\partial w} \qquad (1)$

$b = b-lr\cdot\frac{\partial loss}{\partial b} \qquad (2)$

Sooner or later the loss will drop low enough to satisfy us.
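
Formulas (1) and (2) amount to a few lines of code. A tiny sketch of one update step (the parameter and gradient arrays here are placeholders; in the real network they come from the forward and backward passes described below):

import numpy as np

lr = 0.01
weights  = [np.ones((3, 2)), np.ones((2, 3))]          # placeholder weight matrices
biases   = [np.zeros((3, 1)), np.zeros((2, 1))]        # placeholder bias vectors
dloss_dW = [0.1 * np.ones_like(w) for w in weights]    # placeholder gradients
dloss_db = [0.1 * np.ones_like(b) for b in biases]

for l in range(len(weights)):
    weights[l] = weights[l] - lr * dloss_dW[l]   # formula (1)
    biases[l]  = biases[l]  - lr * dloss_db[l]   # formula (2)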

Computing the gradient of every $w$ and $b$ is, like the forward computation, a piece of manual labor. Next we use the chain rule to work out $\frac{\partial loss}{\partial w}$ and $\frac{\partial loss}{\partial b}$ one by one.

—Chain-rule differentiation

Starting from the last layer, compute $\frac{\partial loss}{\partial w_{11}^{3}}$, $\frac{\partial loss}{\partial w_{12}^{3}}$, $\frac{\partial loss}{\partial w_{21}^{3}}$, $\frac{\partial loss}{\partial w_{22}^{3}}$, $\frac{\partial loss}{\partial w_{31}^{3}}$, $\frac{\partial loss}{\partial w_{32}^{3}}$ as well as $\frac{\partial loss}{\partial b_{1}^{4}}$ and $\frac{\partial loss}{\partial b_{2}^{4}}$.

Referring to the forward-computation formulas above, read from back to front until $b_{1}^{4}$ appears:

$loss=\frac{(y_{1}-a_{1}^{4})^{2}+(y_{2}-a_{2}^{4})^{2}}{2}$

$a_{1}^{4}=s(z_{1}^{4})$

$z_{1}^{4}=w_{11}^{3}\cdot a_{1}^{3}+w_{21}^{3}\cdot a_{2}^{3}+w_{31}^{3}\cdot a_{3}^{3}+b_{1}^{4}$

Then, following the chain rule, the partial derivative of $loss$ with respect to $b_{1}^{4}$ is:

$\begin{aligned} \frac{\partial loss}{\partial b_{1}^{4}}&=\frac{\partial loss}{\partial a_{1}^{4}}\cdot\frac{\partial a_{1}^{4}}{\partial z_{1}^{4}}\cdot\frac{\partial z_{1}^{4}}{\partial b_{1}^{4}} \\ &=-\frac{1}{2}\cdot 2\cdot(y_{1}-a_{1}^{4})\cdot s(z_{1}^{4})\cdot(1-s(z_{1}^{4})) \\ &=-(y_{1}-a_{1}^{4})\cdot s(z_{1}^{4})\cdot(1-s(z_{1}^{4})) \end{aligned}$

In the same way we get:

$\begin{aligned} \frac{\partial loss}{\partial b_{2}^{4}}&=\frac{\partial loss}{\partial a_{2}^{4}}\cdot\frac{\partial a_{2}^{4}}{\partial z_{2}^{4}}\cdot\frac{\partial z_{2}^{4}}{\partial b_{2}^{4}} \\ &=-\frac{1}{2}\cdot 2\cdot(y_{2}-a_{2}^{4})\cdot s(z_{2}^{4})\cdot(1-s(z_{2}^{4})) \\ &=-(y_{2}-a_{2}^{4})\cdot s(z_{2}^{4})\cdot(1-s(z_{2}^{4})) \end{aligned}$

$\frac{\partial loss}{\partial w_{11}^{3}}=\frac{\partial loss}{\partial a_{1}^{4}}\cdot\frac{\partial a_{1}^{4}}{\partial z_{1}^{4}}\cdot\frac{\partial z_{1}^{4}}{\partial w_{11}^{3}}$

$\frac{\partial loss}{\partial w_{12}^{3}}=\frac{\partial loss}{\partial a_{2}^{4}}\cdot\frac{\partial a_{2}^{4}}{\partial z_{2}^{4}}\cdot\frac{\partial z_{2}^{4}}{\partial w_{12}^{3}}$

$\ldots$ Continuing like this, we can work out the partial derivatives of every parameter in this layer.
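
Here is a small self-contained sketch of these output-layer derivatives (the third-layer activations are stand-in random values, just to make it runnable):

import numpy as np

def s(x):
    return 1.0 / (1.0 + np.exp(-x))

def s_prime(x):
    return s(x) * (1.0 - s(x))

rng = np.random.default_rng(0)
a3 = s(rng.standard_normal((3, 1)))      # stand-in third-layer activations a_1^3 .. a_3^3
W3 = rng.standard_normal((2, 3))         # w_ij^3, row j / column i
b4 = rng.standard_normal((2, 1))
y  = np.array([[1.0], [0.0]])

z4 = W3 @ a3 + b4
a4 = s(z4)

dloss_db4 = -(y - a4) * s_prime(z4)          # dloss/db_j^4 = -(y_j - a_j^4) * s'(z_j^4)
dloss_dw11_3 = dloss_db4[0, 0] * a3[0, 0]    # dloss/dw_11^3 = dloss/db_1^4 * a_1^3
print(dloss_db4.ravel(), dloss_dw11_3)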

Once the last layer is done, move on to the second-to-last layer: $\frac{\partial loss}{\partial w_{11}^{2}}$, $\frac{\partial loss}{\partial w_{12}^{2}}$, $\frac{\partial loss}{\partial w_{13}^{2}}$, $\frac{\partial loss}{\partial w_{21}^{2}}$, $\frac{\partial loss}{\partial w_{22}^{2}}$, $\frac{\partial loss}{\partial w_{23}^{2}}$, $\frac{\partial loss}{\partial w_{31}^{2}}$, $\frac{\partial loss}{\partial w_{32}^{2}}$, $\frac{\partial loss}{\partial w_{33}^{2}}$ as well as $\frac{\partial loss}{\partial b_{1}^{3}}$, $\frac{\partial loss}{\partial b_{2}^{3}}$, $\frac{\partial loss}{\partial b_{3}^{3}}$.

This layer sits a bit deeper. To get $\frac{\partial loss}{\partial b_{1}^{3}}$, read from back to front:

$loss=\frac{(y_{1}-a_{1}^{4})^{2}+(y_{2}-a_{2}^{4})^{2}}{2}$

$a_{1}^{4}=s(z_{1}^{4})$

$a_{2}^{4}=s(z_{2}^{4})$

$z_{1}^{4}=w_{11}^{3}\cdot a_{1}^{3}+w_{21}^{3}\cdot a_{2}^{3}+w_{31}^{3}\cdot a_{3}^{3}+b_{1}^{4}$

$z_{2}^{4}=w_{12}^{3}\cdot a_{1}^{3}+w_{22}^{3}\cdot a_{2}^{3}+w_{32}^{3}\cdot a_{3}^{3}+b_{2}^{4}$

$a_{1}^{3}=s(z_{1}^{3})$

$z_{1}^{3}=w_{11}^{2}\cdot a_{1}^{2}+w_{21}^{2}\cdot a_{2}^{2}+w_{31}^{2}\cdot a_{3}^{2}+b_{1}^{3}$

until $b_{1}^{3}$ appears, then take the partial derivative:

$\begin{aligned} \frac{\partial loss}{\partial b_{1}^{3}} &= \frac{\partial loss}{\partial a_{1}^{4}} \cdot \frac{\partial a_{1}^{4}}{\partial z_{1}^{4}} \cdot \frac{\partial z_{1}^{4}}{\partial a_{1}^{3}} \cdot \frac{\partial a_{1}^{3}}{\partial z_{1}^{3}} \cdot \frac{\partial z_{1}^{3}}{\partial b_{1}^{3}}+ \frac{\partial loss}{\partial a_{2}^{4}} \cdot \frac{\partial a_{2}^{4}}{\partial z_{2}^{4}} \cdot \frac{\partial z_{2}^{4}}{\partial a_{1}^{3}} \cdot \frac{\partial a_{1}^{3}}{\partial z_{1}^{3}} \cdot \frac{\partial z_{1}^{3}}{\partial b_{1}^{3}} \end{aligned}$

Good. Next, look at $\frac{\partial loss}{\partial w_{11}^{2}}$:

$loss=\frac{(y_{1}-a_{1}^{4})^{2}+(y_{2}-a_{2}^{4})^{2}}{2}$

$a_{1}^{4}=s(z_{1}^{4})$

$a_{2}^{4}=s(z_{2}^{4})$

$z_{1}^{4}=w_{11}^{3}\cdot a_{1}^{3}+w_{21}^{3}\cdot a_{2}^{3}+w_{31}^{3}\cdot a_{3}^{3}+b_{1}^{4}$

$z_{2}^{4}=w_{12}^{3}\cdot a_{1}^{3}+w_{22}^{3}\cdot a_{2}^{3}+w_{32}^{3}\cdot a_{3}^{3}+b_{2}^{4}$

$a_{1}^{3}=s(z_{1}^{3})$

$z_{1}^{3}=w_{11}^{2}\cdot a_{1}^{2}+w_{21}^{2}\cdot a_{2}^{2}+w_{31}^{2}\cdot a_{3}^{2}+b_{1}^{3}$

We have reached $w_{11}^{2}$, so differentiate:

$\begin{aligned} \frac{\partial loss}{\partial w_{11}^{2}} &= \frac{\partial loss}{\partial a_{1}^{4}} \cdot \frac{\partial a_{1}^{4}}{\partial z_{1}^{4}} \cdot \frac{\partial z_{1}^{4}}{\partial a_{1}^{3}} \cdot \frac{\partial a_{1}^{3}}{\partial z_{1}^{3}} \cdot \frac{\partial z_{1}^{3}}{\partial w_{11}^{2}}+ \frac{\partial loss}{\partial a_{2}^{4}} \cdot \frac{\partial a_{2}^{4}}{\partial z_{2}^{4}} \cdot \frac{\partial z_{2}^{4}}{\partial a_{1}^{3}} \cdot \frac{\partial a_{1}^{3}}{\partial z_{1}^{3}} \cdot \frac{\partial z_{1}^{3}}{\partial w_{11}^{2}} \end{aligned}$

The other parameters are computed with exactly the same method, so I won't repeat it here!

Once the gradients of all layers' parameters have been computed, update the parameters once with the gradient-descent formulas (1) and (2). Then keep repeating this cycle of forward computation, backward differentiation and parameter update until the loss drops as low as possible.

At this point you have probably spotted the problem: the deeper we differentiate, the more we notice that the leading factors of many partial derivatives are identical and have already been computed, so computing all partial derivatives this way involves a large amount of unnecessary repeated work. How do we optimize that? This is where the backpropagation algorithm comes in, to speed up the gradient computation.

—The backpropagation algorithm

My understanding of backpropagation is that it introduces $\delta$: when computing gradients from back to front, every quantity is saved as soon as it has been computed, and when computing the gradients further forward, the previously saved values are used directly to continue the computation. This avoids repeated work!

First, one piece of notation: when two matrices have the same number of rows and columns, the operator $\odot$ denotes element-wise multiplication:

$\left[\begin{array}{l}{1} \\ {2}\end{array}\right] \odot\left[\begin{array}{l}{3} \\ {4}\end{array}\right]=\left[\begin{array}{l}{1 * 3} \\ {2 * 4}\end{array}\right]=\left[\begin{array}{l}{3} \\ {8}\end{array}\right]$
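
In NumPy this is just element-wise multiplication, which is also what the code later in this post relies on:

import numpy as np

u = np.array([[1], [2]])
v = np.array([[3], [4]])
print(u * v)               # [[3] [8]]
print(np.multiply(u, v))   # same result; np.multiply is what the class below uses on np.mat objects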

OK, now for the idea behind the algorithm.

We first define $\delta_{i}^{l}$ as the error of the $i$-th neuron in layer $l$ with respect to the output:

$\delta_{i}^{l}=\frac{\partial loss}{\partial z_{i}^{l}}$

This is easy to make sense of: a small change in $z_{i}^{l}$ propagates through the network's additions and multiplications and changes the result, and therefore the loss. This sensitivity of the loss to the neuron's input is what we define as its error.

The literature on backpropagation presents four important formulas. I'll write them out directly first and come back to them repeatedly as we build up the understanding. The formulas are:

  • The error of the output layer is computed as:

    $\delta_{j}^{L}=\frac{\partial loss}{\partial a_{j}^{L}} s^{\prime}\left(z_{j}^{L}\right)$

    This is just the derivative of $loss$ with respect to $z_{j}^{L}$: first differentiate with respect to the activation, then differentiate the activation with respect to $z_{j}^{L}$; here this refers specifically to the output layer. Since the network is all matrix operations, we can rewrite the formula in matrix form:

    $\delta^{L}=\nabla_{a} loss \odot s^{\prime}\left(z^{L}\right)$

    $\nabla_{a} loss$ denotes the partial derivative of the loss with respect to the activations. Looking back at the loss function $loss=\frac{(y_{1}-a_{1}^{4})^{2}+(y_{2}-a_{2}^{4})^{2}}{2}$, its partial derivative with respect to $a$ works out to $-(y_{i}-a_{i}^{4})$, so replacing $\nabla_{a} loss$ we can write:

    $\delta^{L}=\left(a^{L}-y\right) \odot s^{\prime}\left(z^{L}\right)$

  • From the output layer's error, the error of every neuron in the earlier layers is computed backwards with:

    $\delta^{l}=\left(\left(w^{l}\right)^{T} \delta^{l+1}\right) \odot s^{\prime}\left(z^{l}\right)$

    The first part carries the later layer's error back through the weights; the element-wise product with the derivative of the current neuron's activation with respect to its input carries the error through the activation function onto the current neuron. The figure below shows how $\delta_{1}^{3}$ is computed backwards from the last layer's error:

    [Figure: computing $\delta_{1}^{3}$ backwards from the output-layer error]
  • With the two formulas above, the error at every neuron can be computed, and from these errors the partial derivative of the loss with respect to a bias follows directly:

    $\frac{\partial loss}{\partial b_{j}^{l}}=\delta_{j}^{l}$

  • The partial derivative of the loss with respect to a weight:

    $\frac{\partial loss}{\partial w_{ij}^{l}}=a_{i}^{l} \delta_{j}^{l+1}$

    More concretely, the red arrow in the figure below shows how the partial derivative of the loss with respect to $w_{32}^{3}$ is computed (a compact NumPy sketch of all four formulas follows right after this list):

    [Figure: computing $\frac{\partial loss}{\partial w_{32}^{3}}$ along the red arrow]
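
To make the four formulas concrete, here is a minimal NumPy sketch (my own illustration, separate from the code later in this post) that applies them to a single sample of a 2-3-3-2 network like the one in the figures:

import numpy as np

def s(x):
    return 1.0 / (1.0 + np.exp(-x))

def s_prime(x):
    return s(x) * (1.0 - s(x))

rng = np.random.default_rng(0)
sizes = [2, 3, 3, 2]
# W[l] has shape (n_{l+1}, n_l); its entry (j, i) is w_ij^l in the notation above
W = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
b = [rng.standard_normal((m, 1)) for m in sizes[1:]]

x = rng.standard_normal((2, 1))
y = rng.standard_normal((2, 1))

# forward pass, storing every z and a
a, zs = [x], []
for Wl, bl in zip(W, b):
    zs.append(Wl @ a[-1] + bl)
    a.append(s(zs[-1]))

# formula 1: output-layer error  delta^L = (a^L - y) * s'(z^L)
deltas = [None] * len(zs)
deltas[-1] = (a[-1] - y) * s_prime(zs[-1])

# formula 2: propagate the error backwards  delta^l = ((w^l)^T delta^{l+1}) * s'(z^l)
for l in range(len(zs) - 2, -1, -1):
    deltas[l] = (W[l + 1].T @ deltas[l + 1]) * s_prime(zs[l])

# formula 3: dloss/db_j^l = delta_j^l
grad_b = deltas
# formula 4: dloss/dw_ij^l = a_i^l * delta_j^{l+1}
grad_W = [d @ al.T for d, al in zip(deltas, a[:-1])]

print([g.shape for g in grad_W], [g.shape for g in grad_b])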

Next, let's see why these hold. It is actually simple: all four formulas are results of the chain rule discussed earlier.

With that question in mind, I'll go through it slowly and verify the formulas above with a few examples of partial derivatives of weights and biases.

When we used the chain rule earlier to compute all the parameters' partial derivatives, there was a lot of repeated computation; backpropagation is there to solve exactly this problem.

First, the output-layer error $\delta$ is defined, which makes the later expressions convenient. Look at this formula carefully:

$\delta^{L}=\left(a^{L}-y\right) \odot s^{\prime}\left(z^{L}\right)$

This is just the partial derivative of the loss with respect to $z$ in the last layer. With it in hand, let us first use the chain rule to compute $\delta_{i}^{l}$ for the previous layer, $l=L-1$:

$\begin{aligned}\delta_{i}^{l}=\frac{\partial loss}{\partial z_{i}^{l}}=\frac{\partial loss}{\partial a_{1}^{L}}\cdot \frac{\partial a_{1}^{L}}{\partial z_{1}^{L}}\cdot \frac{\partial z_{1}^{L}}{\partial a_{i}^{L-1}}\cdot \frac{\partial a_{i}^{L-1}}{\partial z_{i}^{L-1}}+\frac{\partial loss}{\partial a_{2}^{L}}\cdot \frac{\partial a_{2}^{L}}{\partial z_{2}^{L}}\cdot \frac{\partial z_{2}^{L}}{\partial a_{i}^{L-1}}\cdot \frac{\partial a_{i}^{L-1}}{\partial z_{i}^{L-1}}+\ldots+\frac{\partial loss}{\partial a_{n}^{L}}\cdot \frac{\partial a_{n}^{L}}{\partial z_{n}^{L}}\cdot \frac{\partial z_{n}^{L}}{\partial a_{i}^{L-1}}\cdot \frac{\partial a_{i}^{L-1}}{\partial z_{i}^{L-1}} \end{aligned}$

where $n$ is the number of neurons in the last layer. Now simplify. First,

$\frac{\partial loss}{\partial a_{1}^{L}}\cdot \frac{\partial a_{1}^{L}}{\partial z_{1}^{L}}=\delta_{1}^{L},\;\ldots,\;\frac{\partial loss}{\partial a_{n}^{L}}\cdot \frac{\partial a_{n}^{L}}{\partial z_{n}^{L}}=\delta_{n}^{L}$

and $\frac{\partial z_{1}^{L}}{\partial a_{i}^{L-1}}=w_{i1}^{L-1},\;\ldots,\;\frac{\partial z_{n}^{L}}{\partial a_{i}^{L-1}}=w_{in}^{L-1}$, which can be read off directly from the network's forward formulas.

So the expression above simplifies to:

$\begin{aligned} \delta_{i}^{l}=\frac{\partial loss}{\partial z_{i}^{l}} &= \frac{\partial loss}{\partial a_{1}^{L}}\cdot \frac{\partial a_{1}^{L}}{\partial z_{1}^{L}}\cdot \frac{\partial z_{1}^{L}}{\partial a_{i}^{L-1}}\cdot \frac{\partial a_{i}^{L-1}}{\partial z_{i}^{L-1}}+\ldots+\frac{\partial loss}{\partial a_{n}^{L}}\cdot \frac{\partial a_{n}^{L}}{\partial z_{n}^{L}}\cdot \frac{\partial z_{n}^{L}}{\partial a_{i}^{L-1}}\cdot \frac{\partial a_{i}^{L-1}}{\partial z_{i}^{L-1}} \\&= (\delta_{1}^{L}\cdot w_{i1}^{L-1}+\ldots+\delta_{n}^{L}\cdot w_{in}^{L-1}) \cdot \frac{\partial a_{i}^{L-1}}{\partial z_{i}^{L-1}} \\&= (\delta_{1}^{L}\cdot w_{i1}^{L-1}+\ldots+\delta_{n}^{L}\cdot w_{in}^{L-1}) \cdot s^{\prime}(z_{i}^{L-1}) \qquad (l=L-1) \end{aligned}$

And that proves the second formula for this layer!

Next, a concrete example makes it even clearer: compute $\delta_{1}^{3}$.

Trace backwards from the end until $\delta_{1}^{3}$ is reached:

$loss=\frac{(y_{1}-a_{1}^{4})^{2}+(y_{2}-a_{2}^{4})^{2}}{2}$

$a_{1}^{4}=s(z_{1}^{4})$

$a_{2}^{4}=s(z_{2}^{4})$

$z_{1}^{4}=w_{11}^{3}\cdot a_{1}^{3}+w_{21}^{3}\cdot a_{2}^{3}+w_{31}^{3}\cdot a_{3}^{3}+b_{1}^{4}$

$z_{2}^{4}=w_{12}^{3}\cdot a_{1}^{3}+w_{22}^{3}\cdot a_{2}^{3}+w_{32}^{3}\cdot a_{3}^{3}+b_{2}^{4}$

$a_{1}^{3}=s(z_{1}^{3})$

$\begin{aligned} \delta_{1}^{3}=\frac{\partial loss}{\partial z_{1}^{3}}&=\frac{\partial loss}{\partial a_{1}^{4}} \cdot \frac{\partial a_{1}^{4}}{\partial z_{1}^{4}} \cdot \frac{\partial z_{1}^{4}}{\partial a_{1}^{3}} \cdot \frac{\partial a_{1}^{3}}{\partial z_{1}^{3}} + \frac{\partial loss}{\partial a_{2}^{4}} \cdot \frac{\partial a_{2}^{4}}{\partial z_{2}^{4}} \cdot \frac{\partial z_{2}^{4}}{\partial a_{1}^{3}} \cdot \frac{\partial a_{1}^{3}}{\partial z_{1}^{3}} \\&=\delta_{1}^{4} \cdot w_{11}^{3} \cdot s^{\prime}(z_{1}^{3})+\delta_{2}^{4} \cdot w_{12}^{3} \cdot s^{\prime}(z_{1}^{3}) \\&=(\delta_{1}^{4} \cdot w_{11}^{3}+\delta_{2}^{4} \cdot w_{12}^{3}) \cdot s^{\prime}(z_{1}^{3}) \end{aligned}$

which is the same computation as shown in the figure.

Next, prove the formula for the weights (the fourth one above).

From the second formula we have every neuron's error $\delta=\frac{\partial loss}{\partial z}$. Now compute $\frac{\partial loss}{\partial w_{ij}^{l}}$ for $l=L-1$:

$\frac{\partial loss}{\partial w_{ij}^{L-1}}=\frac{\partial loss}{\partial a_{j}^{L}} \cdot \frac{\partial a_{j}^{L}}{\partial z_{j}^{L}} \cdot \frac{\partial z_{j}^{L}}{\partial w_{ij}^{L-1}}$

Since $\frac{\partial loss}{\partial a_{j}^{L}} \cdot \frac{\partial a_{j}^{L}}{\partial z_{j}^{L}} = \delta_{j}^{L}$, and the forward-computation formulas show that $\frac{\partial z_{j}^{L}}{\partial w_{ij}^{L-1}}=a_{i}^{L-1}$, the expression equals $\delta_{j}^{L} \cdot a_{i}^{L-1}$.

This is consistent with the weight formula stated above.

Here is an example to make it clearer: compute $\frac{\partial loss}{\partial w_{11}^{3}}$.

From the forward-computation formulas:

$loss=\frac{(y_{1}-a_{1}^{4})^{2}+(y_{2}-a_{2}^{4})^{2}}{2}$

$a_{1}^{4}=s(z_{1}^{4})$

$z_{1}^{4}=w_{11}^{3}\cdot a_{1}^{3}+w_{21}^{3}\cdot a_{2}^{3}+w_{31}^{3}\cdot a_{3}^{3}+b_{1}^{4}$

Having located $w_{11}^{3}$, differentiate as follows:

$\frac{\partial loss}{\partial w_{11}^{3}} = \frac{\partial loss}{\partial a_{1}^{4}} \cdot \frac{\partial a_{1}^{4}}{\partial z_{1}^{4}} \cdot \frac{\partial z_{1}^{4}}{\partial w_{11}^{3}} = \delta_{1}^{4} \cdot a_{1}^{3}$

OK. The derivative with respect to the bias can be derived in the same way, so I won't repeat it here.

This only shows that the four formulas hold for the second-to-last layer, but pushing further toward the front, the same derivation goes through in exactly the same way, so I'll spare myself that manual labor!

Next, the implementation.

Code 1 (rough; Code 2 improves it): fitting the sin(x) curve

# -*- coding: utf-8 -*-
'''
    20191119
    Build a simple fully connected neural network; implement the forward pass and gradient computation via backpropagation.
'''
import numpy as np
import math
import matplotlib.pyplot as plt
from tqdm import tqdm
class SimpleNerworks:
    '''
    When instantiating this class, you should already have decided the number of layers and the number of neurons in each layer.
    Pass one integer per layer, e.g. SimpleNerworks(2, 3, 4, 1) means: layer 1 has 2 neurons, layer 2 has 3, layer 3 has 4 and layer 4 has 1.
    '''
    def __init__(self,*kwargs):
        super(SimpleNerworks, self).__init__()
        assert len(kwargs)>=2
        # initialize the weights and biases
        self.weights = [np.mat(np.random.randn(y,x)) for x,y in zip(kwargs[:-1],kwargs[1:])]
        self.bias = [np.mat(np.random.randn(y,1)) for y in kwargs[1:]]
        self.a = [np.mat(np.zeros((y,1))) for y in kwargs] # output of every neuron; the input layer is just x, the other layers use a = sigmoid(z)
        self.z = [np.mat(np.zeros_like(b)) for b in self.bias] # weighted input of every neuron (the input layer has none), z = wx+b
        self.delta = [np.mat(np.zeros_like(b)) for b in self.bias]

    def forward(self,put:list):
        # forward pass; save the intermediate a and z along the way so the backward pass can compute the gradients
        self.a[0] = np.mat(put)
        for i,w_b in enumerate(zip(self.weights,self.bias)):
            w,b=w_b
            self.z[i]=w.dot(self.a[i]) + b
            self.a[i+1]=self.Sigmoid(self.z[i])
            put = self.a[i+1]
        return self.a[-1]

    def backpropagation(self,y_pre,y_real):
        '''
        Backpropagation.
        Compute every neuron's delta, then use the backpropagation formulas to obtain all gradients.
        :return: the gradients of every w and b
        '''

        # first compute the last layer's delta
        self.delta[-1] = np.multiply(np.mat(y_pre - y_real),self.Sigmoid_derivative(self.z[-1]))

        # then compute all the remaining deltas
        i = len(self.delta) -1
        while i>=1:
            self.delta[i-1] = np.multiply(np.dot(self.weights[i].T,self.delta[i]) , self.Sigmoid_derivative(self.z[i-1]))
            i -= 1

        # use the deltas to compute the gradients of all parameters
        delta_bias = self.delta
        delta_weights = [ D.dot(A.T) for D,A in zip(self.delta,self.a[:-1])]
        return delta_weights,delta_bias

    def updata_w_b(self,delta_weights,delta_bias,lr):
        '''
        :param delta_weights: gradients of w
        :param delta_bias: gradients of b
        :param lr: learning rate
        Updates self.weights and self.bias
        '''
        for i in range(len(delta_weights)):
            self.bias[i] = self.bias[i] - lr * delta_bias[i]
            self.weights[i] = self.weights[i] - lr * delta_weights[i]

    def Sigmoid(self,x):
        # sigmoid function
        s = 1 / (1 + np.exp(-x))
        return s

    def Sigmoid_derivative(self,x):
        # derivative of the sigmoid with respect to x
        return np.multiply(self.Sigmoid(x),1 - self.Sigmoid(x))

    def loss_function(self,y_pre,y_real):
        '''
        :param y_pre: the network's output
        :param y_real: the ground-truth value
        :return: 1/2 * (y_pre - y_real)^2
        The factor of 1/2 makes the derivative of the loss with respect to y_pre exactly y_pre - y_real, which is neat.
        '''
        return 0.5*pow(y_pre-y_real,2).sum()

def DataSinX():
    '''
    Generate (x, sin(x)) pairs and shuffle them.
    :return:
    '''
    data = [(i,math.sin(i)) for i in np.arange(-5 * math.pi, 5 * math.pi, 0.1)]
    np.random.shuffle(data)
    return data

def DataIter(Data):
    '''
    :param Data: list of (x, sin(x)) samples
    :return: yields one sample at a time
    '''
    for i in range(len(Data)):
        yield Data[i]

def ShowData(axs,x,y,c,marker,legend):

    axs.scatter(x, y, marker=marker, color=c)
    axs.legend(legend)



if __name__ == '__main__':
    Data = DataSinX()
    DataIter(Data)
    fig, axs = plt.subplots()
    net = SimpleNerworks(1,2,3,2,1)
    for i in tqdm(range(1,100)):
        for x,Y in DataIter(Data):
            y = net.forward([x])
            delta_weights,delta_bias =net.backpropagation(y,Y)
            net.updata_w_b(delta_weights,delta_bias,0.01)
            ShowData(axs=axs, x=x, y=Y, marker='*', c=(0.8, 0., 0.), legend='sinx')
            ShowData(axs=axs, x=x, y=float(y), marker='.', c=(0., 0.5, 0.), legend='P')  # green dots: the network's prediction for x
        print("----loss:{}---".format(net.loss_function(y,Y)))
    fig.show()



    # fig,axs = plt.subplots()
    # for x,y in DataIter(Data):
    #     ShowData(axs=axs,x=x,y=y,marker='*',c=(0.8,0.,0.),legend='sinx')
    #     ShowData(axs=axs, x=x, y=y-1, marker='.', c=(0., 0.5, 0.), legend='learn')
    # fig.show()

           

Results:

0%|                                                    | 0/99 [00:00<?, ?it/s]----loss:0.08218073585832743---
  1%|▍                                           | 1/99 [00:03<05:20,  3.27s/it]----loss:0.0798943153067189---
  2%|▉                                           | 2/99 [00:06<05:21,  3.31s/it]----loss:0.07789974959818005---
  3%|█▎                                          | 3/99 [00:10<05:32,  3.46s/it]----loss:0.07614294783023841---
  4%|█▊                                          | 4/99 [00:14<05:42,  3.60s/it]----loss:0.07458248315071356---
  5%|██▏                                         | 5/99 [00:18<05:54,  3.77s/it]----loss:0.07318609280327401---
  6%|██▋                                         | 6/99 [00:23<06:08,  3.96s/it]----loss:0.0719282753431308---
  7%|███                                         | 7/99 [00:27<06:19,  4.12s/it]----loss:0.07078860492973296---
  8%|███▌                                        | 8/99 [00:32<06:36,  4.36s/it]----loss:0.06975052549045974---
  9%|████                                        | 9/99 [00:38<07:13,  4.81s/it]----loss:0.06880047292276117---
 10%|████▎                                      | 10/99 [00:43<07:22,  4.97s/it]----loss:0.06792722589782152---
 11%|████▊                                      | 11/99 [00:49<07:37,  5.20s/it]----loss:0.06712141877667446---
 12%|█████▏                                     | 12/99 [00:55<07:48,  5.39s/it]----loss:0.06637517133179552---
 13%|█████▋                                     | 13/99 [01:01<08:17,  5.79s/it]----loss:0.06568180386305453---
 14%|██████                                     | 14/99 [01:08<08:35,  6.07s/it]----loss:0.06503561558313474---
 15%|██████▌                                    | 15/99 [01:15<08:44,  6.24s/it]----loss:0.06443171045970558---
 16%|██████▉                                    | 16/99 [01:22<08:58,  6.48s/it]----loss:0.06386585906003966---
 17%|███████▍                                   | 17/99 [01:30<09:25,  6.89s/it]----loss:0.06333438799704556---
 18%|███████▊                                   | 18/99 [01:37<09:27,  7.01s/it]----loss:0.06283409074359572---
 19%|████████▎                                  | 19/99 [01:44<09:32,  7.15s/it]----loss:0.062362155140612836---
 20%|████████▋                                  | 20/99 [01:53<10:06,  7.67s/it]----loss:0.06191610405800253---
 21%|█████████                                  | 21/99 [02:04<11:04,  8.53s/it]----loss:0.061493746501050044---
 22%|█████████▌                                 | 22/99 [02:17<12:40,  9.88s/it]----loss:0.06109313707402205---
 23%|█████████▉                                 | 23/99 [02:31<13:57, 11.02s/it]----loss:0.06071254217698073---
 24%|██████████▍                                | 24/99 [02:41<13:40, 10.94s/it]----loss:0.06035041166309115---
 25%|██████████▊                                | 25/99 [02:51<13:07, 10.64s/it]----loss:0.06000535495171597---
 26%|███████████▎                               | 26/99 [03:07<14:38, 12.04s/it]----loss:0.05967612079871905---
 27%|███████████▋                               | 27/99 [03:24<16:16, 13.56s/it]----loss:0.05936158008511355---
 28%|████████████▏                              | 28/99 [03:42<17:46, 15.02s/it]----loss:0.05906071110982107---
 29%|████████████▌                              | 29/99 [03:55<16:49, 14.43s/it]----loss:0.058772586970220066---
 30%|█████████████                              | 30/99 [04:08<16:00, 13.93s/it]----loss:0.05849636469156972---
 31%|█████████████▍                             | 31/99 [04:21<15:30, 13.69s/it]----loss:0.05823127582796268---
 32%|█████████████▉                             | 32/99 [04:35<15:29, 13.87s/it]----loss:0.05797661830671369---
 33%|██████████████▎                            | 33/99 [04:50<15:27, 14.06s/it]----loss:0.05773174932770428---
 34%|██████████████▊                            | 34/99 [05:08<16:44, 15.45s/it]----loss:0.05749607916123627---
 35%|███████████████▏                           | 35/99 [05:31<18:53, 17.71s/it]----loss:0.057269065713969454---
 36%|███████████████▋                           | 36/99 [05:55<20:15, 19.30s/it]----loss:0.05705020975376913---
 37%|████████████████                           | 37/99 [06:17<20:58, 20.30s/it]----loss:0.05683905070171422---
 38%|████████████████▌                          | 38/99 [06:37<20:33, 20.22s/it]----loss:0.05663516291386805---
 39%|████████████████▉                          | 39/99 [06:56<19:43, 19.73s/it]----loss:0.05643815238728767---
 40%|█████████████████▎                         | 40/99 [07:17<19:51, 20.19s/it]----loss:0.05624765383460416---
 41%|█████████████████▊                         | 41/99 [07:41<20:37, 21.34s/it]----loss:0.056063328079724827---
 42%|██████████████████▏                        | 42/99 [08:07<21:41, 22.83s/it]----loss:0.055884859734082706---
 43%|██████████████████▋                        | 43/99 [08:35<22:42, 24.34s/it]----loss:0.055711955118630745---
 44%|███████████████████                        | 44/99 [09:00<22:22, 24.40s/it]----loss:0.055544340401642425---
 45%|███████████████████▌                       | 45/99 [09:23<21:32, 23.93s/it]----loss:0.0553817599264888---
 46%|███████████████████▉                       | 46/99 [09:47<21:15, 24.07s/it]----loss:0.05522397470704746---
 47%|████████████████████▍                      | 47/99 [10:12<21:07, 24.37s/it]----loss:0.055070761071362974---
 48%|████████████████████▊                      | 48/99 [10:38<21:00, 24.72s/it]----loss:0.05492190943670524---
 49%|█████████████████████▎                     | 49/99 [11:05<21:11, 25.44s/it]----loss:0.05477722320133363---
 51%|█████████████████████▋                     | 50/99 [11:31<20:52, 25.55s/it]----loss:0.054636517740130175---
 52%|██████████████████████▏                    | 51/99 [11:58<20:59, 26.25s/it]----loss:0.05449961949285859---
 53%|██████████████████████▌                    | 52/99 [12:24<20:25, 26.07s/it]----loss:0.05436636513518095---
 54%|███████████████████████                    | 53/99 [12:51<20:14, 26.41s/it]----loss:0.05423660082375039---
 55%|███████████████████████▍                   | 54/99 [13:20<20:15, 27.02s/it]----loss:0.054110181507729435---
 56%|███████████████████████▉                   | 55/99 [13:49<20:24, 27.83s/it]----loss:0.05398697029997453---
 57%|████████████████████████▎                  | 56/99 [14:24<21:26, 29.92s/it]----loss:0.05386683790190694---
 58%|████████████████████████▊                  | 57/99 [14:53<20:43, 29.62s/it]----loss:0.05374966207676673---
 59%|█████████████████████████▏                 | 58/99 [15:24<20:29, 29.99s/it]----loss:0.053635327166539785---
 60%|█████████████████████████▋                 | 59/99 [15:56<20:24, 30.60s/it]----loss:0.05352372364836692---
 61%|██████████████████████████                 | 60/99 [16:30<20:36, 31.69s/it]----loss:0.05341474772669764---
 62%|██████████████████████████▍                | 61/99 [17:07<20:59, 33.16s/it]----loss:0.05330830095785497---
 63%|██████████████████████████▉                | 62/99 [17:40<20:22, 33.04s/it]----loss:0.053204289904025925---
 64%|███████████████████████████▎               | 63/99 [18:14<19:59, 33.31s/it]----loss:0.05310262581400647---
 65%|███████████████████████████▊               | 64/99 [18:47<19:31, 33.46s/it]----loss:0.053003224328303296---
 66%|████████████████████████████▏              | 65/99 [19:22<19:10, 33.83s/it]----loss:0.05290600520643899---
 67%|████████████████████████████▋              | 66/99 [20:01<19:29, 35.44s/it]----loss:0.05281089207452103---
 68%|█████████████████████████████              | 67/99 [20:39<19:13, 36.04s/it]----loss:0.05271781219133017---
 69%|█████████████████████████████▌             | 68/99 [21:14<18:31, 35.87s/it]----loss:0.052626696231351106---
 70%|█████████████████████████████▉             | 69/99 [21:53<18:24, 36.83s/it]----loss:0.05253747808332267---
 71%|██████████████████████████████▍            | 70/99 [22:31<17:57, 37.16s/it]----loss:0.05245009466301887---
 72%|██████████████████████████████▊            | 71/99 [23:11<17:43, 37.98s/it]----loss:0.05236448573909454---
 73%|███████████████████████████████▎           | 72/99 [23:50<17:12, 38.25s/it]----loss:0.05228059377093726---
 74%|███████████████████████████████▋           | 73/99 [24:34<17:22, 40.11s/it]----loss:0.05219836375756433---
 75%|████████████████████████████████▏          | 74/99 [25:17<17:01, 40.87s/it]----loss:0.052117743096692135---
 76%|████████████████████████████████▌          | 75/99 [25:59<16:26, 41.11s/it]----loss:0.05203868145318096---
 77%|█████████████████████████████████          | 76/99 [26:43<16:06, 42.02s/it]----loss:0.05196113063613246---
 78%|█████████████████████████████████▍         | 77/99 [27:27<15:41, 42.78s/it]----loss:0.051885044483977925---
 79%|█████████████████████████████████▉         | 78/99 [28:14<15:22, 43.95s/it]----loss:0.05181037875695312---
 80%|██████████████████████████████████▎        | 79/99 [29:00<14:50, 44.51s/it]----loss:0.0517370910364093---
 81%|██████████████████████████████████▋        | 80/99 [29:46<14:16, 45.05s/it]----loss:0.05166514063045369---
 82%|███████████████████████████████████▏       | 81/99 [30:35<13:52, 46.24s/it]----loss:0.05159448848545817---
 83%|███████████████████████████████████▌       | 82/99 [31:31<13:54, 49.08s/it]----loss:0.05152509710301109---
 84%|████████████████████████████████████       | 83/99 [32:24<13:24, 50.29s/it]----loss:0.05145693046192346---
 85%|████████████████████████████████████▍      | 84/99 [33:14<12:32, 50.16s/it]----loss:0.051389953944931024---
 86%|████████████████████████████████████▉      | 85/99 [34:11<12:12, 52.33s/it]----loss:0.051324134269764315---
 87%|█████████████████████████████████████▎     | 86/99 [35:07<11:32, 53.24s/it]----loss:0.051259439424283154---
 88%|█████████████████████████████████████▊     | 87/99 [35:55<10:22, 51.84s/it]----loss:0.05119583860539704---
 89%|██████████████████████████████████████▏    | 88/99 [36:45<09:23, 51.25s/it]----loss:0.051133302161514356---
 90%|██████████████████████████████████████▋    | 89/99 [37:39<08:39, 51.92s/it]----loss:0.05107180153828329---
 91%|███████████████████████████████████████    | 90/99 [38:32<07:51, 52.41s/it]----loss:0.051011309227404475---
 92%|███████████████████████████████████████▌   | 91/99 [39:29<07:10, 53.79s/it]----loss:0.05095179871831452---
 93%|███████████████████████████████████████▉   | 92/99 [40:24<06:19, 54.20s/it]----loss:0.05089324445255136---
 94%|████████████████████████████████████████▍  | 93/99 [41:26<05:39, 56.58s/it]----loss:0.05083562178062897---
 95%|████████████████████████████████████████▊  | 94/99 [42:23<04:43, 56.65s/it]----loss:0.05077890692126074---
 96%|█████████████████████████████████████████▎ | 95/99 [43:19<03:44, 56.24s/it]----loss:0.05072307692278237---
 97%|█████████████████████████████████████████▋ | 96/99 [44:14<02:47, 55.97s/it]----loss:0.05066810962663606---
 98%|██████████████████████████████████████████▏| 97/99 [45:10<01:52, 56.11s/it]----loss:0.0506139836327878---
 99%|██████████████████████████████████████████▌| 98/99 [46:07<00:56, 56.23s/it]----loss:0.05056067826695867---
100%|███████████████████████████████████████████| 99/99 [47:10<00:00, 28.59s/it]

Process finished with exit code 0

           

The loss above does decrease, but the decrease gets smaller and smaller toward the end!

Sorry, the legend of this figure was not set up properly! The red stars are the x versus sin(x) curve, and the green dots are x versus the predicted value y; in the figure the two already overlap.

[Figure: red stars are (x, sin(x)); green dots are (x, predicted y)]

Code 2: add batch training and swap the activation function

This example fits the sin curve. If ReLU is used as the activation function, the result is very poor; sigmoid or a leaky/parametric ReLU (PReLU) is recommended instead. The reason is that sin(x) is negative half of the time, and ReLU outputs 0 whenever a neuron's pre-activation is negative, so during backpropagation some neurons get a zero gradient and their parameters stop updating.
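
A small illustration of that point (the slope 0.5 matches the leakyRelu used in the code below):

import numpy as np

z = np.array([-2.0, -0.5, 0.5, 2.0])

relu       = np.maximum(z, 0)             # [0, 0, 0.5, 2]  negative inputs are clipped to 0
relu_grad  = (z > 0).astype(float)        # [0, 0, 1, 1]    zero gradient wherever z < 0
leaky      = np.where(z < 0, 0.5 * z, z)  # keeps a scaled value for z < 0
leaky_grad = np.where(z < 0, 0.5, 1.0)    # keeps a nonzero gradient for z < 0

print(relu, relu_grad, leaky, leaky_grad)

With that in mind, here is Code 2: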

# -*- coding: utf-8 -*-
'''
    20191119
    Build a simple fully connected neural network; implement the forward pass and gradient computation via backpropagation.
'''
import numpy as np
import math
import matplotlib.pyplot as plt
# from tqdm import tqdm
import tqdm
class SimpleNerworks:
    '''
    When instantiating this class, you should already have decided the number of layers and the number of neurons in each layer.
    '''
    def __init__(self,*kwargs):
        super(SimpleNerworks, self).__init__()
        assert len(kwargs)>=2
        # initialize the weights and biases
        self.weights = [np.mat(np.random.randn(y,x),dtype=float) for x,y in zip(kwargs[:-1],kwargs[1:])]
        self.bias = [np.mat(np.random.randn(y,1),dtype=float) for y in kwargs[1:]]
        # print(self.weights)
        # print(self.bias)
        self.a = [np.mat(np.zeros((y,1)),dtype=float) for y in kwargs] # output of every neuron; the input layer is just x, the other layers use a = sigmoid(z)
        self.z = [np.mat(np.zeros_like(b),dtype=float) for b in self.bias] # weighted input of every neuron (the input layer has none), z = wx+b
        self.delta = [np.mat(np.zeros_like(b),dtype=float) for b in self.bias]

        # accumulated gradients
        self.delta_bias = [np.mat(np.zeros_like(b)) for b in self.bias]
        self.delta_weights = [np.mat(np.zeros_like(w)) for w in self.weights]

    def forward(self,put_batch:list,Mode_Train=True):
        #Mode_Train表示模型狀态,如果為True,則計算前向運算時,儲存中間值a,z,并用反向傳播計算出所有參數梯度。否則不儲存。
        #put為一個list對象,也就是一個batch,資料存放形式像這樣[(x_1,y_1),(x_2,y_2),...],
        out = []
        loss = []
        for x, y in put_batch:
            # x and y must be turned into matrices to make the computation convenient
            x = np.mat(x)
            y = np.mat(y)
            # every forward pass can be followed by a backward pass that computes the corresponding gradients
            self.a[0] = x
            for i, (w, b) in enumerate(zip(self.weights,self.bias)):
                self.z[i] = w.dot(self.a[i]) + b
                self.a[i + 1] = self.s(self.z[i])
            if Mode_Train:
                # backward pass: compute the gradients of all parameters
                delta_weights, delta_bias = self.backpropagation(self.a[-1], y)
                # accumulate the gradients over the batch
                self.delta_weights = [w + nw for w, nw in zip(self.delta_weights, delta_weights)]
                self.delta_bias = [b + nb for b, nb in zip(self.delta_bias, delta_bias)]
            out.append(self.a[-1])
            loss.append(self.loss_function(self.a[-1],y))
        return out,np.sum(loss)

    def Zero_gradient(self):
        '''Reset the accumulated gradients to zero.'''
        self.delta_weights = [w*0 for w in self.delta_weights]
        self.delta_bias = [b * 0 for b in self.delta_bias]
    def backpropagation(self,y_pre,y_real):
        '''
        Backpropagation.
        Compute every neuron's delta, then use the backpropagation formulas to obtain all gradients.
        :return: the gradients of every w and b
        '''

        # first compute the last layer's delta
        self.delta[-1] = np.multiply(self.loss_derivative(y_pre , y_real),self.s_derivative(self.z[-1]))

        # then compute all the remaining deltas
        i = len(self.delta) -1
        while i>=1:
            self.delta[i-1] = np.multiply(np.dot(self.weights[i].T,self.delta[i]) , self.s_derivative(self.z[i-1]))
            i -= 1

        # use the deltas to compute the gradients of all parameters
        delta_bias = self.delta
        delta_weights = [ D.dot(A.T) for D,A in zip(self.delta,self.a[:-1])]
        return delta_weights,delta_bias

    def updata_w_b(self,batch_size,lr):
        self.bias = [b - lr * (delta_b/batch_size) for b,delta_b in zip(self.bias,self.delta_bias)]
        self.weights = [w - lr*(delta_w/batch_size) for w, delta_w in zip(self.weights,self.delta_weights)]


    def s(self,x):
        # x =  self.Rule(x)
        x = self.Sigmoid(x)
        # x = self.leakyRelu(x)
        return x

    def s_derivative(self,x):
        # x = self.Rule_derivative(x)
        x = self.Sigmoid_derivative(x)
        # x = self.leakyRelu_derivative(x)
        return x

    def Rule(self,x):
        return np.maximum(x, 0)

    def Rule_derivative(self,x):
        return (x > 0) +0.

    def leakyRelu(self,x):
        a = np.where(x < 0, 0.5 * x, x)
        return a

    def leakyRelu_derivative(self,x):
        a = np.where(x < 0, 0.5, 1.)
        return a

    def Sigmoid(self,x):
        # sigmoid function
        s = 1 / (1 + np.exp(-x))
        return s

    def Sigmoid_derivative(self,x):
        # derivative of the sigmoid with respect to x
        return np.multiply(self.Sigmoid(x),1 - self.Sigmoid(x))

    def loss_function(self,y_pre,y_real):
        '''
        :param y_pre: the network's output
        :param y_real: the ground-truth value
        :return: 1/2 * (y_pre - y_real)^2
        The factor of 1/2 makes the derivative of the loss with respect to y_pre exactly y_pre - y_real, which is neat.
        '''
        return 0.5*pow(y_pre-y_real,2).sum()

    def loss_derivative(self,y_pre,y_real):
        return y_pre-y_real

    def BCL_cross_entropy(self,y_pre,y_real):
        '''
        Binary cross-entropy loss.
        :param y_pre: predicted probabilities squashed into (0, 1) by the sigmoid
        :param y_real: 0 or 1
        :return:
        '''
        # clip the probabilities to avoid log(0)
        y_pre = np.clip(y_pre,1e-12,1. - 1e-12)
        loss = -np.sum(y_real * np.log(y_pre) + (1 - y_real) * np.log(1 - y_pre))
        return loss

    def MCL_cross_entropy(self,y_pre,y_real):
        '''
        Multi-class cross-entropy. When it is used as the loss function, the last layer must use the softmax activation.
        :param y_pre: one-hot style vector whose entries all lie between 0 and 1
        :param y_real: one-hot vector
        :return:
        '''
        # clip the probabilities to avoid log(0)
        y_pre = np.clip(y_pre, 1e-12, 1. - 1e-12)
        loss = -np.sum(y_real * np.log(y_pre))
        return loss

    def Softmax(self,layer_z):
        exp_z = np.exp(layer_z)
        return exp_z / np.sum(exp_z)

    def SaveModel(self):
        np.savez('parameter.npz',self.weights,self.bias)
        # np.save('parameter_weights.npz',self.weights)
        # np.save('parameter_bias.npz',self.bias)
    def LoadModel(self):
        data = np.load('parameter.npz',allow_pickle=True)
        self.weights = data['arr_0']
        self.bias = data['arr_1']

def DataSinX():
    '''
    Generate (x, sin(x)) pairs and shuffle them.
    :return:
    '''
    data = [(i,math.sin(i)) for i in np.arange(-5 * math.pi, 5 * math.pi, 0.1)]
    np.random.shuffle(data)
    print("Data 資料的大小為 :",len(data))
    return data

def DataIter(Data,batch_size):
    for i in range(0,len(Data),batch_size):
        yield Data[i:min(batch_size+i,len(Data))]

def ShowData(axs,x,y,c,marker,legend):
    axs.scatter(x, y, marker=marker, color=c)

def lr_adjust(lr,iter,maxiter):
    new_lr = lr * pow(1 - iter / maxiter,0.25)
    return new_lr



if __name__ == '__main__':

    batch_size = 5
    lr = 1.5
    epochs = 5000

    Data = DataSinX()



    # net = SimpleNerworks(1,2,4,2,1)
    # net.LoadModel()

    # for i in range(epochs):
    #     for batch in DataIter(Data,batch_size=batch_size):
    #         net.forward(batch,Mode_Train=True)
    #         net.updata_w_b(batch_size,lr)
    #         net.Zero_gradient()
    #     # lr = lr_adjust(lr,i,epochs)
    #     _,loss = net.forward(Data,Mode_Train=False)
    #     print('epoch : {} ,learning rate : {} ,loss : {} '.format(i,lr,loss))
    # net.SaveModel()
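
For reference, here is a minimal way to actually run the commented-out training loop above (this simply uncomments it; it assumes the lines sit inside the if __name__ == '__main__': block, with the network shape taken from the comment):

    net = SimpleNerworks(1, 2, 4, 2, 1)
    for i in range(epochs):
        for batch in DataIter(Data, batch_size=batch_size):
            net.forward(batch, Mode_Train=True)   # accumulate gradients over one batch
            net.updata_w_b(batch_size, lr)        # one averaged gradient-descent step
            net.Zero_gradient()                   # reset the accumulated gradients
        _, loss = net.forward(Data, Mode_Train=False)
        print('epoch : {} , learning rate : {} , loss : {} '.format(i, lr, loss))
    net.SaveModel()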
           

I will keep updating this post, because I want to tackle binary and multi-class classification tasks with a fully connected network as well. So far only the softmax activation and the cross-entropy loss functions for classification have been implemented; the part on differentiating the cross-entropy and propagating the parameter updates backwards will be written up later!