
Keras CNN attention: CNN + ConvLSTM2D in U-Net for image segmentation and classification


Keras: CNN + ConvLSTM2D

I first came across this idea in a 2018 MICCAI paper, CFCM: Segmentation via Coarse to Fine Context Memory, on medical image segmentation.

The post has only about 50 views but has already drawn a few emails, and in the meantime I have tested both ConvLSTM2D and bidirectional ConvLSTM2D, so on the last working day before the holiday I am polishing up these notes. Follow if you like; I will keep writing up new things as I learn them. Happy Spring Festival! (2019-1-30)

Original post:

I searched through many versions online and none did what I wanted,

so this is modified from an ordinary U-Net with residual connections.

I dug the holes, fell in, and climbed back out myself, so here are the code and the network diagram directly; questions are welcome any time.

The network is trained mainly for image segmentation, and the LSTM is added so it can learn long-range dependencies. U-Net needs no introduction for anyone working in computer vision, but adding an RNN inside a CNN to extract image features is genuinely uncommon. LSTM (long short-term memory) is a descendant of the RNN family, and GRU (gated recurrent unit) is a later, simplified LSTM.

Put plainly: while extracting image features, we also extract context across the image sequence, just as in sequence modelling (preceding context only for a unidirectional LSTM, surrounding context for a bidirectional one). I have tested the bidirectional version as well, prompted by a reader from outside the mainland who emailed asking for it; here it is built with the Bidirectional wrapper. For the mechanics of the LSTM itself, see a blog post on LSTM principles and implementation.

A CNN captures spatial structure, an LSTM captures temporal structure, and a ConvLSTM exploits both at once.

At its core a ConvLSTM is still an LSTM: the output of one step feeds into the next. The difference is that with convolution added, it captures not only the temporal relations but also, like a convolutional layer, spatial features, yielding spatio-temporal features; the state-to-state transitions likewise become convolutions.
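For reference, this is the ConvLSTM cell as formulated by Shi et al. (2015); `*` is convolution and `∘` is the elementwise (Hadamard) product. Note that some implementations omit the peephole terms (the `W_{c·} ∘ C` products):

```latex
i_t = \sigma(W_{xi} * X_t + W_{hi} * H_{t-1} + W_{ci} \circ C_{t-1} + b_i) \\
f_t = \sigma(W_{xf} * X_t + W_{hf} * H_{t-1} + W_{cf} \circ C_{t-1} + b_f) \\
C_t = f_t \circ C_{t-1} + i_t \circ \tanh(W_{xc} * X_t + W_{hc} * H_{t-1} + b_c) \\
o_t = \sigma(W_{xo} * X_t + W_{ho} * H_{t-1} + W_{co} \circ C_t + b_o) \\
H_t = o_t \circ \tanh(C_t)
```

Compared with a plain LSTM, the only change is that every matrix product has become a convolution, so the hidden state `H_t` and cell state `C_t` are feature maps rather than vectors.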

Keras's ConvLSTM2D layer is likewise an LSTM network, but its input and recurrent transformations are implemented by convolution. Its input and output shapes are as follows:

Input shape:

5D tensor (samples, time, rows, cols, channels) with data_format='channels_last', as used in the code below (or (samples, time, channels, rows, cols) with 'channels_first')

Output shape (two options):

5D tensor (samples, time, output_row, output_col, filters) with return_sequences=True: one result per time step

4D tensor (samples, output_row, output_col, filters) with return_sequences=False: only the last time step

Here, time is the number of frames in each input image sequence, and this is where the TimeDistributed wrapper comes in.
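As a quick sanity check, the shape rule above can be captured in a small helper (pure Python, channels_last convention; the function name is mine, not a Keras API):

```python
def convlstm2d_output_shape(input_shape, filters, return_sequences=True):
    """Output shape of a channels_last ConvLSTM2D with padding='same', strides=1."""
    samples, time, rows, cols, _channels = input_shape
    if return_sequences:
        return (samples, time, rows, cols, filters)  # one feature map per time step
    return (samples, rows, cols, filters)            # last time step only

# Shapes matching the network below: sequences of 160x240 grayscale slices.
print(convlstm2d_output_shape((None, 10, 160, 240, 1), 64, True))
print(convlstm2d_output_shape((None, 10, 160, 240, 1), 64, False))
```

With padding='same' and unit strides, only the channel count changes; with padding='valid' or larger strides, output_row/output_col would shrink according to the usual convolution arithmetic.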

If you can follow this description, that is enough; there is no need to go too deep into the theory as long as you know what the layer is doing.

On to practice. Two wrappers mentioned above still need covering: TimeDistributed and Bidirectional (both built into Keras). 1. The TimeDistributed wrapper applies a layer to every temporal slice of the input (i.e. each step along the time axis gets its own pass through the same convolution to extract features).

keras.layers.TimeDistributed(layer)
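What TimeDistributed does can be reproduced in NumPy: fold the time axis into the batch axis, apply the layer once with shared weights, then unfold. A toy sketch (the per-frame "layer" here is just a channel sum standing in for Conv2D; the helper names are mine):

```python
import numpy as np

def time_distributed(layer_fn, x):
    """Apply layer_fn once to every time slice of x, shape (batch, time, ...)."""
    b, t = x.shape[:2]
    flat = x.reshape((b * t,) + x.shape[2:])    # fold time into the batch axis
    out = layer_fn(flat)                        # one shared pass over all frames
    return out.reshape((b, t) + out.shape[1:])  # unfold back to (batch, time, ...)

def channel_sum(imgs):                          # toy stand-in for a Conv2D layer
    return imgs.sum(axis=-1, keepdims=True)

x = np.random.rand(2, 5, 8, 12, 3)              # (batch, time, rows, cols, channels)
y = time_distributed(channel_sum, x)
print(y.shape)                                  # (2, 5, 8, 12, 1)
```

This is why the wrapped layer's parameters are shared across all frames: every time slice goes through the same weights in a single call.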

2. The Bidirectional wrapper extends a unidirectional LSTM: it adds a second set of learned parameters for a reverse pass, so feature extraction can also draw on later (future) steps of the sequence. To use it, simply wrap it around the LSTM layer you want.

keras.layers.Bidirectional(layer, merge_mode='concat', weights=None)
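Bidirectional's behaviour can likewise be sketched in NumPy: run the wrapped recurrence over the sequence forwards and backwards, then merge the two outputs (with merge_mode='concat', the channel count doubles). A running mean stands in for the actual (Conv)LSTM here, and the helper names are mine:

```python
import numpy as np

def running_mean(seq):
    """Toy recurrence over axis 0 (time): running mean, stand-in for an LSTM pass."""
    return np.cumsum(seq, axis=0) / np.arange(1, seq.shape[0] + 1).reshape(-1, 1)

def bidirectional(rnn_fn, seq, merge_mode='concat'):
    fwd = rnn_fn(seq)                                # past -> future pass
    bwd = rnn_fn(seq[::-1])[::-1]                    # future -> past, realigned
    if merge_mode == 'concat':
        return np.concatenate([fwd, bwd], axis=-1)   # feature channels double
    return fwd + bwd                                 # merge_mode='sum'

seq = np.random.rand(10, 4)                          # (time, features)
out = bidirectional(running_mean, seq)
print(out.shape)                                     # (10, 8)
```

The realignment step (`[::-1]` after the backward pass) matters: it puts the backward output back into forward time order, so that at step t the merged features combine a summary of steps 0..t with a summary of steps t..T.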

Test results:

Preprocessing varies with the segmentation domain; for data augmentation I used translation, rotation, noise, and intensity (field-strength) augmentation.
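A minimal NumPy sketch of paired image/mask augmentation of that kind (shifts, flips, additive noise only; the rotations and field-strength/intensity augmentation mentioned above are omitted, and this is an illustration rather than the author's actual pipeline):

```python
import numpy as np

def augment(img, mask, rng):
    """Randomly shift, flip, and add noise to an (H, W) image/mask pair."""
    # Random translation: roll image and mask together so labels stay aligned.
    dy, dx = rng.integers(-10, 11, size=2)
    img = np.roll(img, (dy, dx), axis=(0, 1))
    mask = np.roll(mask, (dy, dx), axis=(0, 1))
    # Random horizontal flip, applied to both.
    if rng.random() < 0.5:
        img, mask = img[:, ::-1], mask[:, ::-1]
    # Additive Gaussian noise on the image only, never on the labels.
    img = img + rng.normal(0.0, 0.02, img.shape)
    return img, mask

rng = np.random.default_rng(0)
img, mask = np.zeros((160, 240)), np.zeros((160, 240))
aug_img, aug_mask = augment(img, mask, rng)
print(aug_img.shape, aug_mask.shape)
```

The key invariant for segmentation augmentation is that every geometric transform is applied identically to image and mask, while photometric transforms (noise, intensity) touch only the image.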

I personally suggest starting with a sequence length of around 10; beyond that, you also need to weigh compute against efficiency.

(1) As the input sequence length increases, the best Dice score improves for both the unidirectional and the bidirectional LSTM.

(2) The bidirectional LSTM converges faster and more stably than the unidirectional one.
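The Dice score used to compare these runs can be computed from binary masks as follows (a standard definition, not the author's exact metric code):

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

a = np.zeros((4, 4)); a[:2] = 1      # top half of the grid
b = np.zeros((4, 4)); b[:, :2] = 1   # left half of the grid
print(round(dice(a, b), 3))          # overlap is the top-left quadrant -> 0.5
```

Dice ranges from 0 (no overlap) to 1 (identical masks); the small eps keeps the score defined when both masks are empty.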

#-*- coding:utf-8 -*-

"""@Author :Alex@contact: [email protected]@File name:segmentation/U_net_convlstm2d@Software : PyCharm@Desc: CNN+ConvLSTM@[email protected]@ ___ __ _ __ @@ / _ | / /__ | |/_/ @@ / __ |/ / -_)> < @@ /_/ |_/_/__/_/|_| @@ 常敦瑞 @@[email protected]"""

from keras.models import *
from keras.layers import *
from keras.optimizers import *
from keras.utils.vis_utils import plot_model
from keras.layers.convolutional_recurrent import ConvLSTM2D


def get_unet(pretrained_weights=None, input_size=(None, 160, 240, 1)):
    # (time, rows, cols, channels); time=None allows variable-length sequences
    inputs = Input(input_size)
    # Encoder: TimeDistributed applies the same Conv2D weights to every frame
    conv1 = TimeDistributed(Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal'))(inputs)
    conv1 = TimeDistributed(Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal'))(conv1)
    pool1 = TimeDistributed(MaxPooling2D(pool_size=(2, 2)))(conv1)
    conv2 = TimeDistributed(Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal'))(pool1)
    conv2 = TimeDistributed(Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal'))(conv2)
    pool2 = TimeDistributed(MaxPooling2D(pool_size=(2, 2)))(conv2)
    conv3 = TimeDistributed(Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal'))(pool2)
    conv3 = TimeDistributed(Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal'))(conv3)
    # pool3 = TimeDistributed(MaxPooling2D(pool_size=(2, 2)))(conv3)
    # conv4 = TimeDistributed(Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal'))(pool3)
    # conv4 = TimeDistributed(Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal'))(conv4)
    drop4 = TimeDistributed(Dropout(0.5))(conv3)
    pool4 = TimeDistributed(MaxPooling2D(pool_size=(2, 2)))(drop4)
    conv5 = TimeDistributed(Conv2D(1024, 3, activation='relu', padding='same', kernel_initializer='he_normal'))(pool4)
    conv5 = TimeDistributed(Conv2D(1024, 3, activation='relu', padding='same', kernel_initializer='he_normal'))(conv5)
    drop5 = TimeDistributed(Dropout(0.5))(conv5)
    # Decoder: ConvLSTM2D layers with return_sequences=True keep the time axis
    up6 = ConvLSTM2D(512, 2, activation='relu', padding='same', kernel_initializer='he_normal', return_sequences=True)(
        TimeDistributed(UpSampling2D(size=(2, 2)))(drop5))
    # merge6 = concatenate([drop4, up6], axis=4)  # unused: the conv6/up7 level below is commented out
    # conv6 = ConvLSTM2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal', return_sequences=True)(merge6)
    # conv6 = ConvLSTM2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal', return_sequences=True)(conv6)
    # up7 = ConvLSTM2D(256, 2, activation='relu', padding='same', kernel_initializer='he_normal', return_sequences=True)(
    #     TimeDistributed(UpSampling2D(size=(2, 2)))(conv6))
    merge7 = concatenate([conv3, up6], axis=4)  # skip connection; axis=4 is the channel axis of the 5D tensor
    conv7 = ConvLSTM2D(256, 3, padding='same', return_sequences=True)(merge7)
    conv7 = ConvLSTM2D(256, 3, padding='same', return_sequences=True)(conv7)
    up8 = ConvLSTM2D(128, 2, padding='same', return_sequences=True)(
        TimeDistributed(UpSampling2D(size=(2, 2)))(conv7))
    merge8 = concatenate([conv2, up8], axis=4)
    conv8 = ConvLSTM2D(128, 3, padding='same', return_sequences=True)(merge8)
    conv8 = ConvLSTM2D(128, 3, padding='same', return_sequences=True)(conv8)
    up9 = ConvLSTM2D(64, 2, padding='same', return_sequences=True)(
        TimeDistributed(UpSampling2D(size=(2, 2)))(conv8))
    merge9 = concatenate([conv1, up9], axis=4)
    conv9 = ConvLSTM2D(64, 3, padding='same', return_sequences=True)(merge9)
    conv9 = ConvLSTM2D(64, 3, padding='same', return_sequences=True)(conv9)
    conv9 = TimeDistributed(Conv2D(2, 3, activation='relu', padding='same'))(conv9)
    # conv9 = ConvLSTM2D(2, 3, padding='same', return_sequences=True)(conv9)
    # conv10 = ConvLSTM2D(3, 1, activation='softmax', return_sequences=True)(conv9)
    conv10 = TimeDistributed(Conv2D(2, 1, activation='softmax', padding='same'))(conv9)  # per-pixel 2-class softmax
    model = Model(inputs=inputs, outputs=conv10)  # 'input='/'output=' are deprecated keyword names
    model.compile(optimizer=Adam(lr=1e-4), loss='categorical_crossentropy', metrics=['accuracy'])
    plot_model(model, to_file='MRI_brain_seg_UNet3D.png', show_shapes=True)
    model.summary()
    if pretrained_weights:
        model.load_weights(pretrained_weights)
    return model

Network diagram written by plot_model:


Copyright notice: this is an original article by the author; attribution to the author and source is required when reposting.