
Machine Learning: Neural Network


I: Preface


1. Wikipedia's definition of a Neural Network:

     In machine learning, artificial neural networks (ANNs) are a family of statistical learning algorithms inspired by biological neural networks (the central nervous systems of animals, in particular the brain) and are used to estimate or approximate functions that can depend on a large number of inputs and are generally unknown. Artificial neural networks are generally presented as systems of interconnected "neurons" which can compute values from inputs, and are capable of machine learning as well as pattern recognition thanks to their adaptive nature.

2. Why introduce Neural Networks?

     We have already studied regression and classification models, but their practical application is limited by the scale of the data (the curse of dimensionality). Neural Networks have a great advantage when handling a large number of input features. For example, if we take the pixels of an image in computer vision as input features, we end up with an enormous input feature set; if we still used plain regression or classification models, the time needed to learn the parameters would be unacceptable.


II: Neural Network—Representation

1. The Neural Network model

[figure: neural network model]

In a neural network, the first layer is called the input layer, the last layer is called the output layer, and all the layers in between are called hidden layers.

Let us now look at a very simple neural network:

[figure: a simple neural network]

This simple neural network is equivalent to a logistic classifier. The reason is that the transfer function of the neuron in the output layer is the sigmoid function. Of course, we are free to choose a suitable activation function according to the actual problem.

Now let us describe the neural network in more detail:

[figure: neural network notation]

In the figure above, a(i,j) denotes the i-th activation in layer j; it is produced by the activation function, namely g(.) here. In general, all layers share the same activation function (except the output layer, whose function is chosen according to the required output, e.g. discrete, continuous, or multi-classification); you may of course make them differ, at the cost of a harder implementation. The weight matrix Theta controls the mapping from layer j to layer j+1.

2. Forward Propagation Algorithm

[figure: forward propagation computation]

Forward propagation computes h(x) by moving from the input layer → hidden layer → output layer. We do not model and tune parameters on the raw data directly; instead we use the results produced by the intermediate layers, which are themselves learned from the raw data. In other words, this gives great flexibility: the transformation between layers can be any linear combination, polynomial combination, and so on.
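A minimal NumPy sketch of forward propagation for a hypothetical 2-3-1 network; the weight values below are arbitrary illustrative assumptions, not taken from the slides:

```python
import numpy as np

def sigmoid(z):
    """Logistic activation function g(z) = 1 / (1 + e^-z)."""
    return 1.0 / (1.0 + np.exp(-z))

def forward_propagate(x, thetas):
    """Compute h(x) layer by layer: input -> hidden -> output.

    `thetas` is a list of weight matrices; Theta for layer j maps
    layer j (with a bias unit prepended) to layer j+1.
    """
    a = x
    for theta in thetas:
        a = np.concatenate(([1.0], a))   # add the bias unit a0 = 1
        a = sigmoid(theta @ a)           # activations of the next layer
    return a

# Hypothetical 2-3-1 network with arbitrary example weights.
theta1 = np.array([[ 0.1, 0.3, -0.2],
                   [ 0.0, 0.5,  0.4],
                   [-0.3, 0.2,  0.1]])      # (2 inputs + bias) -> 3 hidden units
theta2 = np.array([[0.2, -0.5, 0.3, 0.1]])  # (3 hidden + bias) -> 1 output
h = forward_propagate(np.array([1.0, 2.0]), [theta1, theta2])
```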

Next, let us see how a neural network can implement logical expressions:

[figures: networks implementing AND, OR, NOT, and XNOR]

By analyzing AND, OR, and NOT and combining them, a small neural network is assembled that implements the XNOR function.
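A Python sketch of the XNOR construction. The weight values below are a common choice and an assumption on my part (large magnitudes saturate the sigmoid so each unit behaves like a logic gate); they are not given in the text:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical weights; large magnitudes push the sigmoid to ~0 or ~1.
AND_W = np.array([-30.0,  20.0,  20.0])  # fires only when x1 AND x2
NOR_W = np.array([ 10.0, -20.0, -20.0])  # fires only when (NOT x1) AND (NOT x2)
OR_W  = np.array([-10.0,  20.0,  20.0])  # fires when a1 OR a2

def xnor(x1, x2):
    """XNOR = (x1 AND x2) OR ((NOT x1) AND (NOT x2))."""
    x = np.array([1.0, x1, x2])          # prepend the bias unit
    a1 = sigmoid(AND_W @ x)              # hidden unit 1
    a2 = sigmoid(NOR_W @ x)              # hidden unit 2
    h = sigmoid(OR_W @ np.array([1.0, a1, a2]))  # output unit
    return round(h)
```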

3. Multi-classification with neural networks

[figures: multi-class network with multiple output units]

For multi-classification, we use the softmax activation function to represent the final output.
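A minimal NumPy sketch of the softmax function, with the standard max-subtraction trick for numerical stability (the scores below are arbitrary illustrative values):

```python
import numpy as np

def softmax(z):
    """Softmax output: exp(z_k) / sum_j exp(z_j)."""
    e = np.exp(z - np.max(z))   # subtracting the max avoids overflow
    return e / e.sum()

# The class with the largest score receives the largest probability.
scores = np.array([2.0, 1.0, 0.1])
probs = softmax(scores)
```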

III: Neural Network—Learning

下面我們将以分類問題來闡述Neural Network的學習過程,類比邏輯回歸得到Neural Network的Cost Function,然後用梯度下降算法求出參數。

[figures: neural network cost function]
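The Neural Network cost generalizes the logistic regression cost by summing over the K output units. A minimal Python sketch for a single training example, with hypothetical values and regularization omitted:

```python
import numpy as np

def nn_cost(h, y):
    """Cross-entropy cost for one example, summed over the K output units:
    -sum_k [ y_k*log(h_k) + (1 - y_k)*log(1 - h_k) ].
    """
    return -np.sum(y * np.log(h) + (1 - y) * np.log(1 - h))

h = np.array([0.9, 0.1])   # hypothetical network outputs
y = np.array([1.0, 0.0])   # one-hot target
cost = nn_cost(h, y)
```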

1. Error Back Propagation Algorithm:

Recall that gradient descent consists of two steps:

(1) compute the partial derivatives of the Cost Function with respect to the parameters theta;

(2) update and adjust the parameters theta according to those partial derivatives.

The Error Back Propagation Algorithm provides an efficient way to compute these partial derivatives.
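As a sketch of the two steps, here is plain-Python gradient descent on a hypothetical quadratic cost J(theta) = theta^2, whose derivative is 2*theta:

```python
def gradient_descent_step(theta, grad, alpha):
    """One update: theta := theta - alpha * dJ/dtheta."""
    return theta - alpha * grad

# Minimize J(theta) = theta^2; its partial derivative is 2*theta.
theta = 5.0
for _ in range(100):
    theta = gradient_descent_step(theta, 2.0 * theta, alpha=0.1)
```

Each iteration shrinks theta by a factor of 0.8, so it converges toward the minimizer theta = 0.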

[figure: example neural network]

For example, in the neural network shown above, we carry out a general derivation:

[figures: back propagation derivation, equations (1)-(6)]

Through the derivation above we have obtained an efficient way to compute the partial derivatives: use forward propagation to compute the values a(i), then use back propagation to compute the values of delta, and substitute them into equation (6).

Now let us put the whole Neural Network learning process together with a concrete example:

[figures: worked learning example]

The back propagation algorithm above directly uses the results of the earlier derivation.

To summarize the back propagation algorithm:

[figure: back propagation algorithm summary]
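The procedure can be sketched in NumPy for a hypothetical 2-3-1 network and a single training example. The shapes and weights here are illustrative assumptions, and regularization is omitted; the gradients follow the pattern dJ/dTheta(l) = delta(l+1) * a(l)^T:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_single(x, y, theta1, theta2):
    """Gradients of the unregularized cost for one training example."""
    # forward propagation: compute the activations a
    a1 = np.concatenate(([1.0], x))          # input layer + bias
    a2 = np.concatenate(([1.0], sigmoid(theta1 @ a1)))  # hidden layer + bias
    a3 = sigmoid(theta2 @ a2)                # output layer, h(x)
    # back propagation: compute the errors delta
    delta3 = a3 - y                          # output-layer error
    # hidden-layer error: propagate back through theta2, drop the bias row,
    # and multiply by the sigmoid derivative g'(z2) = a2*(1-a2)
    delta2 = (theta2.T @ delta3)[1:] * a2[1:] * (1.0 - a2[1:])
    grad1 = np.outer(delta2, a1)             # dJ/dTheta1
    grad2 = np.outer(delta3, a2)             # dJ/dTheta2
    return grad1, grad2

rng = np.random.default_rng(0)
theta1 = rng.standard_normal((3, 3))         # (2 inputs + bias) -> 3 hidden
theta2 = rng.standard_normal((1, 4))         # (3 hidden + bias) -> 1 output
g1, g2 = backprop_single(np.array([0.5, -1.0]), np.array([1.0]), theta1, theta2)
```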

2. Implementation tips for Neural Networks in MATLAB:

[figures: unrolling and reshaping the weight matrices in MATLAB]

In short: first unroll the weight matrices into a single vector, then use an off-the-shelf gradient descent routine from the library to find the optimal parameters, and finally reshape the result back into matrices. This is necessary because the off-the-shelf routine requires its parameter, initTheta, to be in vector form.
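The unroll/reshape trick translates directly to NumPy; the network shape (2-3-1) and the weight values below are hypothetical:

```python
import numpy as np

# Two hypothetical weight matrices for a 2-3-1 network.
theta1 = np.arange(9, dtype=float).reshape(3, 3)   # 3 x 3
theta2 = np.arange(4, dtype=float).reshape(1, 4)   # 1 x 4

# Unroll both matrices into one long parameter vector for the optimizer.
init_theta = np.concatenate([theta1.ravel(), theta2.ravel()])

# Inside the cost function, reshape the vector back into matrices.
t1 = init_theta[:9].reshape(3, 3)
t2 = init_theta[9:].reshape(1, 4)
```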

3. Gradient Checking

This is a numerical method for approximating partial derivatives.

[figures: gradient checking formulas]

It can be used to verify that the implemented gradient descent algorithm is correct: if the two results are very close, the implementation is working correctly; if they differ greatly, the gradient descent algorithm is not running correctly.
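A minimal sketch of gradient checking with the two-sided difference; for brevity the cost function below is a toy example with a known gradient rather than a full neural network cost:

```python
import numpy as np

def numerical_gradient(J, theta, eps=1e-4):
    """Approximate dJ/dtheta_i by (J(theta + eps*e_i) - J(theta - eps*e_i)) / (2*eps)."""
    grad = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta)
        e[i] = eps
        grad[i] = (J(theta + e) - J(theta - e)) / (2 * eps)
    return grad

# Check against a cost with a known gradient: J = sum(theta^2), dJ/dtheta = 2*theta.
theta = np.array([1.0, -2.0, 3.0])
num = numerical_gradient(lambda t: np.sum(t ** 2), theta)
analytic = 2 * theta
```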

4. How to initialize the weight matrices Theta

[figures: random initialization of Theta]
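A sketch of the usual random initialization, drawing each weight uniformly from [-epsilon, epsilon]; the value epsilon = 0.12 is a common choice on my part, not mandated by the text:

```python
import numpy as np

def rand_initialize_weights(l_in, l_out, epsilon=0.12):
    """Initialize Theta uniformly in [-epsilon, epsilon].

    Symmetry breaking: initializing everything to zero would make all
    hidden units compute the same function, so small random values are
    used instead. The shape is l_out x (l_in + 1) for the bias column.
    """
    return np.random.rand(l_out, l_in + 1) * 2 * epsilon - epsilon

theta1 = rand_initialize_weights(2, 3)   # (2 inputs + bias) -> 3 hidden units
```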

IV: Summary of the whole process

[figures: overall training procedure]

This summary was written in haste, so omissions and errors are inevitable; corrections and advice are welcome.

----------------------------------------------------------------------------------------------------------------------

The slides in this article come from Professor Andrew Ng's machine learning course (Stanford University).
