1. Introduction
GoogLeNet first appeared in the ILSVRC 2014 competition (as Inception V1) and took first place by a comfortable margin (top-5 error rate 6.67%, versus 7.32% for VGGNet). GoogLeNet is 22 layers deep, deeper than AlexNet's 8 layers and VGGNet's 19 layers. Yet GoogLeNet has only about 5 million parameters, 1/12 of AlexNet (while clearly beating AlexNet's accuracy), and VGGNet in turn has about 3 times as many parameters as AlexNet. When memory or compute resources are limited, GoogLeNet is therefore the better choice.
Why reduce the number of parameters
- More parameters make the model larger, which requires more training data, and high-quality data is currently very expensive;
- More parameters also consume more computational resources.
Key innovations
- The final fully connected layers are removed and replaced with a global average pooling layer (which reduces the feature map to 1×1). → The fully connected layers account for roughly 90% of the parameters in AlexNet and VGGNet and tend to cause overfitting; removing them makes training faster and reduces overfitting (an idea borrowed from Network In Network; see the sketch after this list).
- The Inception Module is used to make parameter usage more efficient.
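To make the first point concrete, here is a minimal sketch of replacing a fully connected head with global average pooling (TensorFlow 1.x with tf.contrib.slim, matching the code in Section 3; the 8×8×2048 feature map and the 1000-way output are only illustrative):

```python
import tensorflow as tf
slim = tf.contrib.slim  # requires TensorFlow 1.x

# hypothetical final feature map of the convolutional part, e.g. 8x8x2048
net = tf.placeholder(tf.float32, [None, 8, 8, 2048])

# global average pooling: average over the spatial dimensions, one value per channel
pooled = tf.reduce_mean(net, axis=[1, 2])  # shape [batch, 2048]

# only one small linear layer remains to produce the class scores
logits = slim.fully_connected(pooled, 1000, activation_fn=None, scope='Logits')
```

The pooling itself has no trainable parameters, so almost all of the head's parameters disappear compared with the multi-layer fully connected heads of AlexNet and VGGNet.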
Figure 1. Structure of the Inception Module
The basic structure of the Inception Module
There are 4 branches, and 1×1 convolutions are used for low-cost cross-channel feature transformations (a minimal code sketch of this structure is given below the list):
- Branch 1: a 1×1 convolution of the input, which organizes information across channels, increases the network's expressive power, and can raise or lower the number of output channels;
- Branch 2: a 1×1 convolution followed by a 3×3 convolution, i.e. two feature transformations in a row;
- Branch 3: a 1×1 convolution followed by a 5×5 convolution;
- Branch 4: a 3×3 max pooling followed by a 1×1 convolution. The 1×1 convolution is very cost-effective: for a small amount of computation it adds another feature transformation and another non-linearity.
The Inception Module lets the network grow efficiently in both depth and width, improving accuracy without overfitting.
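To make Figure 1 concrete, here is a minimal sketch of such a 4-branch module written with tf.contrib.slim (TensorFlow 1.x); the channel counts are illustrative only, not the ones GoogLeNet actually uses:

```python
import tensorflow as tf
slim = tf.contrib.slim  # requires TensorFlow 1.x

def inception_module_sketch(net, scope='InceptionSketch'):
    with tf.variable_scope(scope):
        # every branch keeps the spatial size so the outputs can be concatenated
        with slim.arg_scope([slim.conv2d, slim.max_pool2d], stride=1, padding='SAME'):
            branch_0 = slim.conv2d(net, 64, [1, 1], scope='Branch0_1x1')       # branch 1: 1x1
            branch_1 = slim.conv2d(net, 96, [1, 1], scope='Branch1_1x1')       # branch 2: 1x1 -> 3x3
            branch_1 = slim.conv2d(branch_1, 128, [3, 3], scope='Branch1_3x3')
            branch_2 = slim.conv2d(net, 16, [1, 1], scope='Branch2_1x1')       # branch 3: 1x1 -> 5x5
            branch_2 = slim.conv2d(branch_2, 32, [5, 5], scope='Branch2_5x5')
            branch_3 = slim.max_pool2d(net, [3, 3], scope='Branch3_pool')      # branch 4: pool -> 1x1
            branch_3 = slim.conv2d(branch_3, 32, [1, 1], scope='Branch3_1x1')
            # concatenate the four branches along the channel dimension
            return tf.concat([branch_0, branch_1, branch_2, branch_3], 3)
```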
The Hebbian principle
Neuron connections in the human brain are sparse, so researchers argue that a sensible wiring pattern for large neural networks should also be sparse (for very large, very deep networks this reduces overfitting and computation). The sparse structure proposed in the paper is based on the Hebbian principle.
Hebbian principle: persistent, repeated neural activity leads to a lasting increase in the stability of neural connections; when two neurons A and B are close together and A repeatedly and persistently takes part in firing B, metabolic changes make A more effective at exciting B. As shown in Figure 2, highly correlated nodes of the previous layer are clustered, and each resulting cluster is connected to a unit of the next layer.
Figure 2. Building a sparse structure
A "good" sparse structure should follow the Hebbian principle: we should wire together clusters of neurons whose responses are highly correlated. In image data, nearby regions are highly correlated, so neighbouring pixels are connected by the convolution operation. When there are several convolution kernels, the outputs at the same spatial position but in different channels are also highly correlated, and a 1×1 convolution connects exactly these highly correlated features that share a spatial position but live in different channels.
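Put differently, a 1×1 convolution applies the same channel-mixing matrix at every spatial position. A tiny numpy sketch (shapes are illustrative) makes this explicit:

```python
import numpy as np

x = np.random.rand(1, 35, 35, 192).astype(np.float32)   # NHWC feature map
w = np.random.rand(192, 64).astype(np.float32)           # a 1x1 kernel is just a 192 -> 64 matrix

# every pixel's 192 channel values are multiplied by the same matrix,
# merging the correlated features that live in different channels
y = np.tensordot(x, w, axes=([3], [0]))                   # shape (1, 35, 35, 64)
```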
Network architecture
Figure 3. GoogLeNet network architecture
As shown in Figure 3, GoogLeNet (Inception V1) is 22 layers deep. Besides the final output, some intermediate layers also classify well, so InceptionNet additionally uses auxiliary classifiers: the output of an intermediate layer is fed into a classifier whose result is added to the final classification with a small weight (0.3). This amounts to a form of model ensembling, injects extra back-propagation gradient signal into the network, and provides additional regularization, all of which helps the training of the whole InceptionNet.
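A minimal sketch of how such an auxiliary classifier is usually combined with the main loss (assuming integer class labels `labels`, input `images`, and the `logits` / `end_points['AuxLogits']` pair returned by the `inception_v3` function in Section 3; the 0.3 weight is the one mentioned above):

```python
import tensorflow as tf

# images and labels are placeholders / input-pipeline tensors defined elsewhere
logits, end_points = inception_v3(images, num_classes=1000, is_training=True)

main_loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=labels, logits=logits))
aux_loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=labels, logits=end_points['AuxLogits']))

# the auxiliary classifier only contributes with a small weight
total_loss = main_loss + 0.3 * aux_loss
```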
2. The Inception family
- Inception V1: from the September 2014 paper "Going Deeper with Convolutions"; top-5 error rate 6.67%.
It was trained with asynchronous SGD, lowering the learning rate by 4% every 8 epochs. Inception V1 also used data augmentation such as Multi-Scale and Multi-Crop, and trained 7 models on differently sampled data for ensembling.
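As a simple illustration of that schedule (my own helper, not code from the paper):

```python
def inception_v1_learning_rate(base_lr, epoch):
    # lower the learning rate by 4% every 8 epochs
    return base_lr * (0.96 ** (epoch // 8))

# e.g. with base_lr = 0.01: epochs 0-7 -> 0.0100, epochs 8-15 -> 0.0096, epochs 16-23 -> 0.009216, ...
```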
- Inception V2: from the February 2015 paper "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift"; top-5 error rate 4.8%.
- Following VGGNet, it replaces the large 5×5 convolution with two 3×3 convolutions (to cut the parameter count and reduce overfitting), and it introduces the now famous Batch Normalization method.
- BN is a very effective regularization method that can speed up the training of large convolutional networks many times over while also noticeably improving the classification accuracy after convergence. When BN is applied to a layer, it standardizes the activations within each mini-batch so that the output roughly follows a N(0,1) normal distribution, reducing Internal Covariate Shift (the shifting distribution of internal activations); a small numpy sketch is given at the end of this item.
- The BN paper points out that when training a traditional deep network the input distribution of every layer keeps changing, which makes training difficult and forces a very small learning rate. Applying BN to every layer solves this effectively: the learning rate can be raised many times, only 1/14 of the iterations are needed to reach the previous accuracy, and training time drops sharply. (BN also acts as a regularizer to some extent, so Dropout can be reduced or removed and the network simplified.)
- Using BN alone does not give the full benefit; a few matching adjustments are needed:
- Increase the learning rate and speed up the learning-rate decay to suit the BN-normalized data;
- Remove Dropout and weaken L2 regularization (BN already provides regularization);
- Remove LRN;
- Shuffle the training samples more thoroughly;
- Reduce the photometric distortions used in data augmentation (BN trains faster, so each sample is seen fewer times, and more realistic samples help more).
With these measures, Inception V2 reached Inception V1's accuracy 14 times faster during training, and its accuracy ceiling is also higher.
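To make the BN transform described above concrete, here is a small numpy sketch of what BN does to one mini-batch at training time (the moving averages used at inference time and other framework details are omitted):

```python
import numpy as np

def batch_norm_train(x, gamma, beta, eps=0.001):
    # x: mini-batch activations of shape [batch, features]
    mu = x.mean(axis=0)                    # per-feature mini-batch mean
    var = x.var(axis=0)                    # per-feature mini-batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # standardize to roughly N(0, 1)
    return gamma * x_hat + beta            # learned scale and shift restore expressiveness
```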
- Inception V3: from the December 2015 paper "Rethinking the Inception Architecture for Computer Vision"; top-5 error rate 3.5%.
There are two main changes:
- First, it introduces the idea of Factorization into small convolutions: a larger two-dimensional convolution is split into two smaller one-dimensional convolutions, e.g. a 7×7 convolution becomes a 1×7 convolution and a 7×1 convolution, and a 3×3 convolution becomes a 1×3 and a 3×1 convolution, as shown in Figure 4.
Figure 4. Splitting a 3×3 convolution into a 1×3 convolution and a 3×1 convolution
On the one hand this saves a large number of parameters, speeds up computation and reduces overfitting (splitting a 7×7 convolution into 1×7 and 7×1 saves even more parameters than splitting it into three 3×3 convolutions; see the parameter count sketch below), while adding an extra layer of non-linearity that increases the model's expressive power. This asymmetric split also works better than a symmetric split into several identical smaller kernels: it can handle more and richer spatial features and increases feature diversity.
- Second, the structure of the Inception Module is optimized: there are now three different module structures, for the 35×35, 17×17 and 8×8 feature maps, as shown in Figure 5.
Figure 5. The three kinds of Inception Module in Inception V3
These Inception Modules only appear in the later part of the network; the earlier part still consists of ordinary convolutional layers. In addition, Inception V3 uses branches not only inside the Inception Module but also inside those branches (in the 8×8 modules), so it could be called Network In Network In Network.
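To see how much the factorization from the first point saves, here is a quick parameter count (ignoring biases and assuming the number of input and output channels stays the same; 192 channels is only an example):

```python
def conv_params(kh, kw, c_in, c_out):
    return kh * kw * c_in * c_out  # weights of a kh x kw convolution, biases ignored

c = 192  # illustrative channel count
print(conv_params(7, 7, c, c))                            # one 7x7 conv:        49 * c^2
print(conv_params(1, 7, c, c) + conv_params(7, 1, c, c))  # 1x7 followed by 7x1: 14 * c^2
print(3 * conv_params(3, 3, c, c))                        # three 3x3 convs:     27 * c^2
```

So the asymmetric 1×7 + 7×1 split uses roughly 3.5 times fewer weights than a single 7×7 convolution, and still fewer than three stacked 3×3 convolutions, consistent with the claim above.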
The Inception V3 architecture is summarized in the following table:
| Type | Kernel size / stride (or remark) | Input size |
| --- | --- | --- |
| Convolution | 3×3 / 2 | 299×299×3 |
| Convolution | 3×3 / 1 | 149×149×32 |
| Convolution | 3×3 / 1 | 147×147×32 |
| Pooling | 3×3 / 2 | 147×147×64 |
| Convolution | 3×3 / 1 | 73×73×64 |
| Convolution | 3×3 / 2 | 71×71×80 |
| Convolution | 3×3 / 1 | 35×35×192 |
| Inception module group | 3 Inception Modules | 35×35×288 |
| Inception module group | 5 Inception Modules | 17×17×768 |
| Inception module group | 3 Inception Modules | 8×8×1280 |
| Pooling | 8×8 | 8×8×2048 |
| Linear | logits | 1×1×2048 |
| Softmax | classification output | 1×1×1000 |
- Inception V4: from the February 2016 paper "Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning"; top-5 error rate 3.08%.
Compared with Inception V3, V4 mainly adds the residual connections of Microsoft's ResNet.
3. TensorFlow implementation
The code below implements Inception V3 with the architecture shown in the table above. Inception V3 is fairly complex, so tf.contrib.slim is used to help build the network. The utilities in contrib.slim greatly reduce the amount of code needed to design an Inception Net; the 42-layer-deep Inception V3 can be built with relatively little code.
Implementation code:
import tensorflow as tf
from datetime import datetime
import time
import math
slim = tf.contrib.slim
# trunc_normal: truncated normal distribution initializer
trunc_normal = lambda stddev: tf.truncated_normal_initializer(0.0, stddev)
num_batches = 100
'''
inception_v3_arg_scope: generates the default parameters for functions used throughout the network, such as the activation function, weight initializer and normalizer of the convolutions.
The L2 weight_decay defaults to 0.00004, the standard deviation stddev defaults to 0.1, and batch_norm_var_collection defaults to 'moving_vars'.
'''
def inception_v3_arg_scope(weight_decay=0.00004,
stddev=0.1,
batch_norm_var_collection='moving_vars'):
'''
Parameter dictionary for Batch Normalization
'''
batch_norm_params = {
'decay': 0.9997, # decay coefficient for the moving statistics
'epsilon': 0.001,
'updates_collections': tf.GraphKeys.UPDATE_OPS,
'variables_collections': {
'beta': None,
'gamma': None,
'moving_mean': [batch_norm_var_collection],
'moving_variance': [batch_norm_var_collection],
}
}
'''
slim.arg_scope automatically assigns default values to the parameters of the listed functions
'''
with slim.arg_scope([slim.conv2d, slim.fully_connected],
weights_regularizer=slim.l2_regularizer(weight_decay)):
with slim.arg_scope(
[slim.conv2d],
weights_initializer=tf.truncated_normal_initializer(stddev=stddev),
activation_fn=tf.nn.relu,
normalizer_fn=slim.batch_norm,
normalizer_params=batch_norm_params
) as sc:
return sc
'''
inception_v3_base: builds the convolutional part of the Inception V3 network.
inputs is the tensor of input images; scope is the scope that carries the default function parameters.
The plain convolution/pooling stem below produces a 35*35*192 output, which then feeds the Inception module groups.
'''
def inception_v3_base(inputs, scope=None):
end_points = {} # store important intermediate nodes for later use
with tf.variable_scope(scope, 'InceptionV3', [inputs]):
with slim.arg_scope([slim.conv2d, slim.max_pool2d, slim.avg_pool2d],
stride=1, padding='VALID'): # set default stride and padding for these ops
net = slim.conv2d(inputs, 32, [3, 3], stride=2, scope='Conv2d_1a_3x3')
net = slim.conv2d(net, 32, [3, 3], scope='Conv2d_2a_3x3')
net = slim.conv2d(net, 64, [3, 3], padding='SAME', scope='Conv2d_2b_3x3')
net = slim.max_pool2d(net, [3, 3], stride=2, scope='MaxPool_3a_3x3')
net = slim.conv2d(net, 80, [1, 1], scope='Conv2d_3b_1x1')
net = slim.conv2d(net, 192, [3, 3], scope='Conv2d_4a_3x3')
net = slim.max_pool2d(net, [3, 3], stride=2, scope='MaxPool_5a_3x3')
'''
The 1st Inception module group, containing three modules
'''
with slim.arg_scope([slim.conv2d, slim.max_pool2d, slim.avg_pool2d],
stride=1, padding='SAME'):
# 1st Inception Module
with tf.variable_scope('Mixed_5b'):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, 64, [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, 48, [1, 1], scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, 64, [5, 5], scope='Conv2d_0b_5x5')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(net, 64, [1, 1], scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, 96, [3, 3], scope='Conv2d_0b_3x3')
branch_2 = slim.conv2d(branch_2, 96, [3, 3], scope='Conv2d_0c_3x3')
with tf.variable_scope('Branch_3'):
branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
branch_3 = slim.conv2d(branch_3, 32, [1, 1], scope='Conv2d_0b_1x1')
net = tf.concat([branch_0, branch_1, branch_2, branch_3], 3) # output channels: 64+64+96+32=256; with 'SAME' padding the output is 35*35*256
# 2nd Inception Module
with tf.variable_scope('Mixed_5c'):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, 64, [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, 48, [1, 1], scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, 64, [5, 5], scope='Conv2d_0b_5x5')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(net, 64, [1, 1], scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, 96, [3, 3], scope='Conv2d_0b_3x3')
branch_2 = slim.conv2d(branch_2, 96, [3, 3], scope='Conv2d_0c_3x3')
with tf.variable_scope('Branch_3'):
branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
branch_3 = slim.conv2d(branch_3, 64, [1, 1], scope='Conv2d_0b_1x1')
net = tf.concat([branch_0, branch_1, branch_2, branch_3], 3) # output channels: 64+64+96+64=288; with 'SAME' padding the output is 35*35*288
# 3rd Inception Module
with tf.variable_scope('Mixed_5d'):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, 64, [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, 48, [1, 1], scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, 64, [5, 5], scope='Conv2d_0b_5x5')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(net, 64, [1, 1], scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, 96, [3, 3], scope='Conv2d_0b_3x3')
branch_2 = slim.conv2d(branch_2, 96, [3, 3], scope='Conv2d_0c_3x3')
with tf.variable_scope('Branch_3'):
branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
branch_3 = slim.conv2d(branch_3, 64, [1, 1], scope='Conv2d_0b_1x1')
net = tf.concat([branch_0, branch_1, branch_2, branch_3], 3) # output channels: 64+64+96+64=288; with 'SAME' padding the output is 35*35*288
'''
The 2nd Inception module group, containing five modules
'''
# 1st Inception Module
with tf.variable_scope('Mixed_6a'):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, 384, [3, 3],
stride=2, padding='VALID', scope='Conv2d_1a_1x1') # the feature map shrinks to 17x17
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, 64, [1, 1], scope='Conv2d_1a_1x1')
branch_1 = slim.conv2d(branch_1, 96, [3, 3], scope='Conv2d_1b_3x3')
branch_1 = slim.conv2d(branch_1, 96, [3, 3],
stride=2, padding='VALID', scope='Conv2d_1c_3x3')
with tf.variable_scope('Branch_2'):
branch_2 = slim.max_pool2d(net, [3, 3], stride=2, padding='VALID', scope='MaxPool_1a_3x3')
net = tf.concat([branch_0, branch_1, branch_2], 3) # output channels: 384+96+288=768 (the pooling branch keeps its 288 input channels); output is 17*17*768
# the next 4 modules all use 'Factorization into small convolutions'
# 2nd Inception Module
with tf.variable_scope('Mixed_6b'):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, 192, [1, 1], scope='Conv2d_2a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, 128, [1, 1], scope='Conv2d_2a_1x1')
branch_1 = slim.conv2d(branch_1, 128, [1, 7], scope='Conv2d_2b_1x7')
branch_1 = slim.conv2d(branch_1, 192, [7, 1], scope='Conv2d_2c_7x1')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(net, 128, [1, 1], scope='Conv2d_2a_1x1')
branch_2 = slim.conv2d(branch_2, 128, [7, 1], scope='Conv2d_2b_7x1')
branch_2 = slim.conv2d(branch_2, 128, [1, 7], scope='Conv2d_2c_1x7')
branch_2 = slim.conv2d(branch_2, 128, [7, 1], scope='Conv2d_2d_7x1')
branch_2 = slim.conv2d(branch_2, 192, [1, 7], scope='Conv2d_2e_1x7') # 192 (not 128) output channels so the four branches sum to 768
with tf.variable_scope('Branch_3'):
branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_2a_3x3')
branch_3 = slim.conv2d(branch_3, 192, [1, 1], scope='Conv2d_2b_1x1')
net = tf.concat([branch_0, branch_1, branch_2, branch_3], 3) # output channels: 192+192+192+192=768; output is 17*17*768
# 3rd Inception Module
with tf.variable_scope('Mixed_6c'):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, 192, [1, 1], scope='Conv2d_2a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, 160, [1, 1], scope='Conv2d_2a_1x1')
branch_1 = slim.conv2d(branch_1, 160, [1, 7], scope='Conv2d_2b_1x7')
branch_1 = slim.conv2d(branch_1, 192, [7, 1], scope='Conv2d_2c_7x1')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(net, 160, [1, 1], scope='Conv2d_2a_1x1')
branch_2 = slim.conv2d(branch_2, 160, [7, 1], scope='Conv2d_2b_7x1')
branch_2 = slim.conv2d(branch_2, 160, [1, 7], scope='Conv2d_2c_1x7')
branch_2 = slim.conv2d(branch_2, 160, [7, 1], scope='Conv2d_2d_7x1')
branch_2 = slim.conv2d(branch_2, 192, [1, 7], scope='Conv2d_2e_1x7') # 192 (not 128) output channels so the four branches sum to 768
with tf.variable_scope('Branch_3'):
branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_2a_3x3')
branch_3 = slim.conv2d(branch_3, 192, [1, 1], scope='Conv2d_2b_1x1')
net = tf.concat([branch_0, branch_1, branch_2, branch_3], 3) # output channels: 192+192+192+192=768; output is 17*17*768
# 4th Inception Module
with tf.variable_scope('Mixed_6d'):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, 192, [1, 1], scope='Conv2d_2a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, 160, [1, 1], scope='Conv2d_2a_1x1')
branch_1 = slim.conv2d(branch_1, 160, [1, 7], scope='Conv2d_2b_1x7')
branch_1 = slim.conv2d(branch_1, 192, [7, 1], scope='Conv2d_2c_7x1')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(net, 160, [1, 1], scope='Conv2d_2a_1x1')
branch_2 = slim.conv2d(branch_2, 160, [7, 1], scope='Conv2d_2b_7x1')
branch_2 = slim.conv2d(branch_2, 160, [1, 7], scope='Conv2d_2c_1x7')
branch_2 = slim.conv2d(branch_2, 160, [7, 1], scope='Conv2d_2d_7x1')
branch_2 = slim.conv2d(branch_2, 192, [1, 7], scope='Conv2d_2e_1x7') # 192 (not 128) output channels so the four branches sum to 768
with tf.variable_scope('Branch_3'):
branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_2a_3x3')
branch_3 = slim.conv2d(branch_3, 192, [1, 1], scope='Conv2d_2b_1x1')
net = tf.concat([branch_0, branch_1, branch_2, branch_3], 3) # output channels: 192+192+192+192=768; output is 17*17*768
# 5th Inception Module
with tf.variable_scope('Mixed_6e'):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, 192, [1, 1], scope='Conv2d_2a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, 160, [1, 1], scope='Conv2d_2a_1x1')
branch_1 = slim.conv2d(branch_1, 160, [1, 7], scope='Conv2d_2b_1x7')
branch_1 = slim.conv2d(branch_1, 192, [7, 1], scope='Conv2d_2c_7x1')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(net, 160, [1, 1], scope='Conv2d_2a_1x1')
branch_2 = slim.conv2d(branch_2, 160, [7, 1], scope='Conv2d_2b_7x1')
branch_2 = slim.conv2d(branch_2, 160, [1, 7], scope='Conv2d_2c_1x7')
branch_2 = slim.conv2d(branch_2, 160, [7, 1], scope='Conv2d_2d_7x1')
branch_2 = slim.conv2d(branch_2, 192, [1, 7], scope='Conv2d_2e_1x7') # 192 (not 128) output channels so the four branches sum to 768
with tf.variable_scope('Branch_3'):
branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_2a_3x3')
branch_3 = slim.conv2d(branch_3, 192, [1, 1], scope='Conv2d_2b_1x1')
net = tf.concat([branch_0, branch_1, branch_2, branch_3], 3) # output channels: 192+192+192+192=768; output is 17*17*768
end_points['Mixed_6e'] = net
'''
The 3rd Inception module group, containing three Inception Modules
'''
# 1st Inception Module
with tf.variable_scope('Mixed_7a'):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, 192, [1, 1], scope='Conv2d_3a_1x1')
branch_0 = slim.conv2d(branch_0, 320, [3, 3],
stride=2, padding='VALID', scope='Conv2d_3b_3x3')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, 192, [1, 1], scope='Conv2d_3a_1x1')
branch_1 = slim.conv2d(branch_1, 192, [1, 7], scope='Conv2d_3b_1x7')
branch_1 = slim.conv2d(branch_1, 192, [7, 1], scope='Conv2d_3c_7x1')
branch_1 = slim.conv2d(branch_1, 192, [3, 3], stride=2,
padding='VALID', scope='Conv2d_3d_3x3')
with tf.variable_scope('Branch_2'):
branch_2 = slim.max_pool2d(net, [3, 3], stride=2,
padding='VALID', scope='MaxPool_3a_3x3')
net = tf.concat([branch_0, branch_1, branch_2], 3) # output channels: 320+192+768=1280 (the pooling branch keeps its 768 input channels); output is 8*8*1280
# 2nd Inception Module
with tf.variable_scope('Mixed_7b'):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, 320, [1, 1], scope='Conv2d_3a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, 384, [1, 1], scope='Conv2d_3a_1x1')
branch_1 = tf.concat([
slim.conv2d(branch_1, 384, [1, 3], scope='Conv2d_3b_1x3'),
slim.conv2d(branch_1, 384, [3, 1], scope='Conv2d_3c_3x1')
], 3)
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(net, 448, [1, 1], scope='Conv2d_3a_1x1')
branch_2 = slim.conv2d(branch_2, 384, [3, 3], scope='Conv2d_3b_3x3')
branch_2 = tf.concat([
slim.conv2d(branch_2, 384, [1, 3], scope='Conv2d_3c_1x3'),
slim.conv2d(branch_2, 384, [3, 1], scope='Conv2d_3d_3x1')
], 3)
with tf.variable_scope('Branch_3'):
branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_3a_3x3')
branch_3 = slim.conv2d(branch_3, 192, [1, 1], scope='Conv2d_3b_1x1')
net = tf.concat([branch_0, branch_1, branch_2, branch_3], 3) # output channels: 320+768+768+192=2048; output is 8*8*2048
# 3rd Inception Module
with tf.variable_scope('Mixed_7c'):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, 320, [1, 1], scope='Conv2d_3a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, 384, [1, 1], scope='Conv2d_3a_1x1')
branch_1 = tf.concat([
slim.conv2d(branch_1, 384, [1, 3], scope='Conv2d_3b_1x3'),
slim.conv2d(branch_1, 384, [3, 1], scope='Conv2d_3c_3x1')
], 3)
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(net, 448, [1, 1], scope='Conv2d_3a_1x1')
branch_2 = slim.conv2d(branch_2, 384, [3, 3], scope='Conv2d_3b_3x3')
branch_2 = tf.concat([
slim.conv2d(branch_2, 384, [1, 3], scope='Conv2d_3c_1x3'),
slim.conv2d(branch_2, 384, [3, 1], scope='Conv2d_3d_3x1')
], 3)
with tf.variable_scope('Branch_3'):
branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_3a_3x3')
branch_3 = slim.conv2d(branch_3, 192, [1, 1], scope='Conv2d_3b_1x1')
net = tf.concat([branch_0, branch_1, branch_2, branch_3], 3) # output channels: 320+768+768+192=2048; output is 8*8*2048
return net, end_points
'''
inception_v3: global average pooling, Softmax and the Auxiliary Logits.
num_classes: number of output classes
is_training: whether this is the training phase; Batch Normalization and Dropout are only enabled during training
dropout_keep_prob: fraction of activations that Dropout keeps, default 0.8
prediction_fn: the function used to produce the final class predictions
spatial_squeeze: whether to squeeze the output (i.e. remove dimensions of size 1, e.g. 5x3x1 becomes 5x3)
reuse: whether the network and its Variables should be reused
scope: the scope that carries the default parameters
'''
def inception_v3(inputs,
num_classes=1000,
is_training=True,
dropout_keep_prob=0.8,
prediction_fn=slim.softmax,
spatial_squeeze=True,
reuse=None,
scope='InceptionV3'):
with tf.variable_scope(scope, 'InceptionV3', [inputs, num_classes],
reuse=reuse) as scope:
with slim.arg_scope([slim.batch_norm, slim.dropout],
is_training=is_training):
net, end_points = inception_v3_base(inputs, scope=scope)
'''
The Auxiliary Logits act as an auxiliary classifier and help the final classification result
'''
with slim.arg_scope([slim.conv2d, slim.max_pool2d, slim.avg_pool2d],
stride=1, padding='SAME'):
aux_logits = end_points['Mixed_6e']
with tf.variable_scope('AuxLogits'):
aux_logits = slim.avg_pool2d(aux_logits,
[5, 5], stride=3, padding='VALID',
scope='AvgPool_1a_5x5')
aux_logits = slim.conv2d(aux_logits,
128, [1, 1], scope='Conv2d_1b_1x1')
aux_logits = slim.conv2d(aux_logits,
768, [5, 5], weights_initializer=trunc_normal(0.01),
padding='VALID', scope='Conv2d_1c_5x5')
aux_logits = slim.conv2d(aux_logits,
num_classes, [1, 1], activation_fn=None,
normalizer_fn=None, weights_initializer=trunc_normal(0.001),
scope='Conv2d_1d_1x1')
if spatial_squeeze:
aux_logits = tf.squeeze(aux_logits, [1, 2],
name='SpatialSqueeze')
end_points['AuxLogits'] = aux_logits
'''
Logits: the normal classification prediction path
'''
with tf.variable_scope('Logits'):
net = slim.avg_pool2d(net, [8, 8],
padding='VALID', scope='AvgPool_1a_8x8')
net = slim.dropout(net, keep_prob=dropout_keep_prob,
scope='Dropout_1b')
end_points['PreLogits'] = net
logits = slim.conv2d(net, num_classes, [1, 1],
activation_fn=None, normalizer_fn=None, scope='Conv2d_1c_1x1')
if spatial_squeeze:
logits = tf.squeeze(logits, [1, 2], name='SpatialSqueeze')
end_points['Logits'] = logits
end_points['Predictions'] = prediction_fn(logits, scope='Predictions')
return logits, end_points
'''
time_tensorflow_run: measures the computation time per batch
'''
def time_tensorflow_run(session, target, info_string):
num_steps_burn_in = 10
total_duration = 0.0
total_duration_squared = 0.0
for i in range(num_batches + num_steps_burn_in):
start_time = time.time()
_ = session.run(target)
duration = time.time() - start_time
if i >= num_steps_burn_in:
if not i % 10:
print('%s: step %d, duration = %.3f' %(datetime.now(), i - num_steps_burn_in, duration))
total_duration += duration
total_duration_squared += duration * duration
mn = total_duration / num_batches
vr = total_duration_squared / num_batches - mn * mn
sd = math.sqrt(vr)
print('%s: %s across %d steps, %.3f +/- %.3f sec / batch'%(datetime.now(), info_string, num_batches, mn, sd))
def main():
batch_size = 32
height, width = 299, 299
inputs = tf.random_uniform((batch_size, height, width, 3))
with slim.arg_scope(inception_v3_arg_scope()):
logits, end_points = inception_v3(inputs, is_training=False)
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
# timing of the forward pass
time_tensorflow_run(sess, logits, 'Forward')
if __name__ == '__main__':
main()
Screenshot of the run results (forward pass timing only):
4. Summary
Inception V3 is a very deep, complex and carefully crafted convolutional network: it incorporates a great deal of accumulated experience and many tricks for designing large CNNs, and its overall structure and branching are intricate. Several of its design ideas are worth borrowing:
- Factorization into small convolutions is very effective: it lowers the parameter count, reduces overfitting, and increases the network's non-linear expressive power.
- From input to output, a convolutional network should gradually shrink the spatial size of the feature maps while increasing the number of channels, i.e. simplify the spatial structure and turn spatial information into higher-level abstract features.
- The Inception Module's approach of extracting high-level features at different levels of abstraction with multiple branches is effective and enriches the network's expressive power.
References
- 《TensorFlow 實戰》, 黃文堅, 唐源