
EfficientNet Transfer Learning (3): Network Construction (Model.py)

Overview

The model used in this experiment comes from the EfficientNet family, a recent state-of-the-art line of image-classification models. Trained on ImageNet, it achieves top-tier accuracy and also transfers well to other classification datasets; the comparison results are shown in the figure below. The core idea of the paper is compound scaling: by scaling the network's width, depth, and resolution jointly, the network can reach a higher accuracy ceiling at a much lower computational cost. For a translation of the paper, see the linked post EfficientNet論文翻譯. This experiment uses EfficientNet-B7.

[Figure: ImageNet accuracy comparison between the EfficientNet family and other models]

Network Structure

The figure below shows the structure of the baseline network EfficientNet-B0, as defined in the open-source code released by Google. EfficientNet-B0 consists of three main parts:

  1. stem: Conv2d(3x3) + BN + activation (swish_f32).
  2. blocks: 16 MBConv modules in total (Block_1 through Block_16); each module is (Conv1x1 + BN + swish_f32, channel expansion) + depthwise conv + SE + (Conv1x1 + BN, channel projection) + Add. A minimal sketch of one such block follows the figure below.
  3. head: Conv2d(1x1) + BN + swish_f32 + global_pooling + dropout + dense (with the final number of output classes).
[Figure: EfficientNet-B0 structure (stem, 16 MBConv blocks, head)]
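
For reference, here is a minimal TF1-style sketch of one MBConv block with its squeeze-and-excitation (SE) module. It only illustrates the structure listed above; the expansion ratio, SE ratio, and layer helpers are assumptions chosen for readability, not the exact implementation from Google's repository.

import tensorflow as tf

def mbconv_block(inputs, out_channels, expand_ratio=6, se_ratio=0.25, training=False):
    """Simplified MBConv: expand -> depthwise conv -> SE -> project -> add."""
    in_channels = inputs.get_shape().as_list()[-1]
    expanded = in_channels * expand_ratio

    # (Conv1x1 + BN + swish) -- channel expansion
    x = tf.layers.conv2d(inputs, expanded, 1, use_bias=False)
    x = tf.layers.batch_normalization(x, training=training)
    x = tf.nn.swish(x)

    # depthwise 3x3 convolution + BN + swish
    x = tf.keras.layers.DepthwiseConv2D(3, padding='same', use_bias=False)(x)
    x = tf.layers.batch_normalization(x, training=training)
    x = tf.nn.swish(x)

    # SE: squeeze (global average pool), then excite (rescale each channel)
    se = tf.reduce_mean(x, axis=[1, 2], keepdims=True)
    se = tf.layers.conv2d(se, max(1, int(in_channels * se_ratio)), 1, activation=tf.nn.swish)
    se = tf.layers.conv2d(se, expanded, 1, activation=tf.nn.sigmoid)
    x = x * se

    # (Conv1x1 + BN) -- channel projection, deliberately without activation
    x = tf.layers.conv2d(x, out_channels, 1, use_bias=False)
    x = tf.layers.batch_normalization(x, training=training)

    # residual Add only when input and output shapes match
    if out_channels == in_channels:
        x = x + inputs
    return x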

The figure below gives a rough schematic of the EfficientNet-B0 network to convey its overall structure. All other EfficientNet variants are obtained from it by compound-coefficient scaling: width_coefficient and depth_coefficient multiply the width (channels) and the number of blocks (Block), respectively.

[Figure: simplified schematic of the EfficientNet-B0 network]
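
To make the scaling concrete, here is a small sketch of the two helper rules the open-source code uses: channel counts are scaled by width_coefficient and rounded to a multiple of 8, and block repeats are scaled by depth_coefficient and rounded up. The (width, depth) values for B7 are taken from the paper; treat the exact numbers here as illustrative.

import math

def round_filters(filters, width_coefficient, divisor=8):
    """Scale a channel count by width_coefficient, rounding to a multiple of divisor."""
    filters *= width_coefficient
    new_filters = max(divisor, int(filters + divisor / 2) // divisor * divisor)
    if new_filters < 0.9 * filters:  # never round down by more than 10%
        new_filters += divisor
    return int(new_filters)

def round_repeats(repeats, depth_coefficient):
    """Scale a block-repeat count by depth_coefficient, rounding up."""
    return int(math.ceil(depth_coefficient * repeats))

# B0 uses (width, depth) = (1.0, 1.0); the paper lists (2.0, 3.1) for B7
print(round_filters(32, 2.0))  # stem: 32 -> 64 channels
print(round_repeats(4, 3.1))   # a stage of 4 blocks -> 13 blocks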

Code Structure

  1. Create the model class; the initializer holds the relevant training parameters
    class Model:
        def __init__(self):
            self.base_architecture = cfg.efficientnet.base_architecture[0]
            self.pre_trained_weight = cfg.efficientnet.pre_trained_weight[0]
            self.num_classes = cfg.efficientnet.num_class
    
            self.input_shape = cfg.train.input_size
            self.batch_size = cfg.train.batch_size
            self.step_per_epoch = cfg.train.step_per_epoch
    
            self.warmup_epoch = cfg.train.warmup_epochs
            self.first_stage_epoch = cfg.train.first_stage_epochs
            self.second_stage_epoch = cfg.train.second_stage_epochs
    
            self.learn_rate_init = cfg.train.learn_rate_init
            self.learn_rate_end = cfg.train.learn_rate_end
    
            self.loss_function = cfg.train.loss_function[0]
            self.model_name = 'efficientnet-b7'
    
            self.batch_norm_decay = cfg.efficientnet.batch_norm_decay
            self.override_params = {}
               
  2. The network's input module, i.e., its data interface
    with tf.name_scope('Input_Placeholder'):
                self.inputs = tf.placeholder(tf.float32,
                                             shape=(None, self.input_shape[0], self.input_shape[1], self.input_shape[2]),
                                             name='input')
                self.label_c = tf.placeholder(tf.int64, shape=(self.batch_size,), name='label')
                self.trainable = tf.placeholder(dtype=tf.bool, name='training')
               
  3. Build the network structure
    with tf.name_scope('Build_Model'):
                self.logits = self.model(self.trainable,
                                         self.pre_trained_weight,
                                         self.base_architecture,
                                         num_classes=self.num_classes)
    
                self.one_hot = tf.one_hot(self.label_c, self.num_classes)
    
                # all variables to be saved by the model
                self.net_variables = tf.global_variables()
               
  4. The network's loss function
    with tf.name_scope('Loss_Function'):
                if self.loss_function == 'softmax':
                    print('using softmax cross entropy loss function')
                    loss_net = tf.losses.softmax_cross_entropy(self.one_hot, self.logits)
                else:
                    print('using sigmoid cross entropy loss function')
                    loss_net = tf.losses.sigmoid_cross_entropy(self.one_hot, self.logits, label_smoothing=0.1)
    
                # add an L2 regularization term
                # l2 = tf.add_n([tf.nn.l2_loss(var) for var in tf.trainable_variables()])
                # self.loss = loss_net + l2*0.0005
    
                # Add weight decay to the loss for non-batch-normalization variables.
                # self.loss = loss_net + 0.0005 * tf.add_n([tf.nn.l2_loss(v) for v in tf.trainable_variables()
                #                                           if 'batch_normalization' not in v.name])
    
                # apply weight decay only to the head weights (skipping BN and bias
                # variables); collect them first and add a single L2 term, instead of
                # overwriting self.loss on every loop iteration
                head_weights = [v for v in tf.trainable_variables()
                                if 'batch_normalization' not in v.name
                                and 'head' in v.name and 'bias' not in v.name]
                self.loss = loss_net + 0.00005 * tf.add_n([tf.nn.l2_loss(v) for v in head_weights])
               
  5. Build the network's evaluation metrics; these should be chosen according to the specific task
    with tf.name_scope('Compute_Accuracy'):
                # compute the accuracy
                correct_prediction = tf.equal(tf.argmax(self.logits, 1), self.label_c)
                self.accuracy = tf.reduce_mean(tf.cast(correct_prediction, 'float'))
    
                # compute the confusion matrix at each step to track sensitivity and specificity during training
                confusion_matrix = tf.confusion_matrix(self.label_c, tf.argmax(self.logits, 1), num_classes=2)
    
                TN = confusion_matrix[0][0]
                FP = confusion_matrix[0][1]
                FN = confusion_matrix[1][0]
                TP = confusion_matrix[1][1]
    
                # acc = (TP + TN) / (TP + TN + FP + FN)
                self.sensitive = TP / (TP + FN)
                self.specify = TN / (TN + FP)
               
  6. Set the network's learning rate
    with tf.name_scope('Learning_Rate'):
                self.global_step = tf.Variable(0, dtype=tf.int64, trainable=False, name='global_step')
                warmup_steps = tf.constant(self.warmup_epoch * self.step_per_epoch,
                                           dtype=tf.int64,
                                           name='warmup_steps')
    
                # total number of training steps
                train_steps = tf.constant((self.first_stage_epoch + self.second_stage_epoch) * self.step_per_epoch,
                                          dtype=tf.int64,
                                          name='train_steps')
    
                # cosine learning-rate schedule with warmup
                cosine_item = (1 + tf.cos((self.global_step - warmup_steps) / (train_steps - warmup_steps) * np.pi))
                warm_learn_rate = self.learn_rate_end + 0.5 * (self.learn_rate_init - self.learn_rate_end) * cosine_item
    
                self.learn_rate = tf.cond(pred=self.global_step < warmup_steps,
                                          true_fn=lambda: self.global_step / warmup_steps * self.learn_rate_init,
                                          false_fn=lambda: warm_learn_rate)
    
                # self.learn_rate = tf.train.exponential_decay(cfg.Train.Learn_Rate_Init,
                #                                              self.global_step,
                #                                              decay_steps=400,
                #                                              decay_rate=0.9)
    
                # boundaries = [240, 1600]
                # values = [0.001, 0.0001, 0.00001]
                # self.learn_rate = tf.train.piecewise_constant(self.global_step, boundaries, values)
    
                global_step_update = tf.assign_add(self.global_step, 1)
    
               
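To sanity-check the schedule outside the TensorFlow graph, the same warmup-plus-cosine formula can be reproduced in plain Python; the step counts below come from the defaults in config.py (step_per_epoch = 1352 // 32 = 42).

import math

def learning_rate(step, warmup_steps=10 * 42, train_steps=(60 + 100) * 42,
                  lr_init=0.0001, lr_end=1e-6):
    """Plain-Python replica of the warmup + cosine schedule above."""
    if step < warmup_steps:
        return step / warmup_steps * lr_init           # linear warmup
    cosine = 1 + math.cos((step - warmup_steps) / (train_steps - warmup_steps) * math.pi)
    return lr_end + 0.5 * (lr_init - lr_end) * cosine  # cosine decay

print(learning_rate(0))     # 0.0
print(learning_rate(420))   # 0.0001 (end of warmup)
print(learning_rate(6720))  # ~1e-6 (end of training)
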
  7. Choose the parameters to optimize; in transfer learning, different modules of the network are typically trained in separate stages
    with tf.name_scope("First_Train_Stage"):
                # collect the network parameters to optimize in the first stage
                self.first_stage_trainable_var_list = []
                for var in tf.trainable_variables():
                    var_name = var.op.name
                    var_name_mess = str(var_name).split('/')
    
                # select the parameters to optimize by name
                    if var_name_mess[1] in ['head']:
                        self.first_stage_trainable_var_list.append(var)
    
                optimizer = tf.train.AdamOptimizer(self.learn_rate)
                optimizer_variables = optimizer.minimize(self.loss, var_list=self.first_stage_trainable_var_list)
                with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
                    with tf.control_dependencies([optimizer_variables, global_step_update]):
                        # with tf.control_dependencies([moving_ave]):
                        self.train_op_with_frozen_variables = tf.no_op()
                        # self.train_op_with_frozen_variables = tf.group(moving_ave)
    
            with tf.name_scope("Second_Train_Stage"):
                second_stage_trainable_var_list = tf.trainable_variables()
                second_stage_optimizer = tf.train.AdamOptimizer(self.learn_rate)
                second_stage_variables = second_stage_optimizer.minimize(self.loss, var_list=second_stage_trainable_var_list)
    
                with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
                    with tf.control_dependencies([second_stage_variables, global_step_update]):
                        # with tf.control_dependencies([moving_ave]):
                        self.train_op_with_all_variables = tf.no_op()
                        # self.train_op_with_all_variables = tf.group(moving_ave)
    
               
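In the training script (not part of this post), the two ops are typically alternated as sketched below; sess, train_batches, batch_images, and batch_labels are hypothetical names standing in for the reader's own session and data pipeline.

# First stage: backbone frozen, only the 'head' variables are updated
for epoch in range(cfg.train.first_stage_epochs):
    for batch_images, batch_labels in train_batches:  # hypothetical iterator
        sess.run(model.train_op_with_frozen_variables,
                 feed_dict={model.inputs: batch_images,
                            model.label_c: batch_labels,
                            model.trainable: True})

# Second stage: fine-tune every trainable variable
for epoch in range(cfg.train.second_stage_epochs):
    for batch_images, batch_labels in train_batches:
        sess.run(model.train_op_with_all_variables,
                 feed_dict={model.inputs: batch_images,
                            model.label_c: batch_labels,
                            model.trainable: True})
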
  8. Saving and loading the network
    with tf.name_scope('Model_Loader_Save'):
                # restore all trained parameters except the final layer
                variables_to_restore = []
                for v in self.net_variables:
                    if v.name.split('/')[1] not in ['head']:
                        variables_to_restore.append(v)
    
                self.loader = tf.train.Saver(variables_to_restore)
                self.saver = tf.train.Saver(tf.global_variables(), max_to_keep=200)
               
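A typical use of the two savers, assuming sess and a Model instance named model already exist: the loader restores the pretrained backbone before training, and the saver checkpoints the full model afterwards.

sess.run(tf.global_variables_initializer())
# restore the pretrained backbone (every variable outside the 'head' scope)
model.loader.restore(sess, model.pre_trained_weight)
# ... training ...
model.saver.save(sess, cfg.train.save_model, global_step=model.global_step)
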
  9. Record the training summaries for TensorBoard
    with tf.name_scope('Collect_Summary'):
                tf.summary.scalar('loss', self.loss)
                tf.summary.scalar('accuracy', self.accuracy)
                tf.summary.scalar('learning_rate', self.learn_rate)
                tf.summary.scalar('sensitive', self.sensitive)
                tf.summary.scalar('specify', self.specify)
    
                self.merged = tf.summary.merge_all()
               
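The merged summary op is then evaluated and written out during training, roughly as follows (feed stands in for the feed_dict of the current batch):

writer = tf.summary.FileWriter(cfg.train.log, sess.graph)
summary, step = sess.run([model.merged, model.global_step], feed_dict=feed)
writer.add_summary(summary, global_step=step)
writer.flush()
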

Full code (Model.py)

import tensorflow as tf
import numpy as np
import model_builder_factory
from config import cfg


class Model:
    def __init__(self):
        self.base_architecture = cfg.efficientnet.base_architecture[0]
        self.pre_trained_weight = cfg.efficientnet.pre_trained_weight[0]
        self.num_classes = cfg.efficientnet.num_class

        self.input_shape = cfg.train.input_size
        self.batch_size = cfg.train.batch_size
        self.step_per_epoch = cfg.train.step_per_epoch

        self.warmup_epoch = cfg.train.warmup_epochs
        self.first_stage_epoch = cfg.train.first_stage_epochs
        self.second_stage_epoch = cfg.train.second_stage_epochs

        self.learn_rate_init = cfg.train.learn_rate_init
        self.learn_rate_end = cfg.train.learn_rate_end

        self.loss_function = cfg.train.loss_function[0]
        self.model_name = 'efficientnet-b7'

        self.batch_norm_decay = cfg.efficientnet.batch_norm_decay
        self.override_params = {}

        with tf.name_scope('Input_Placeholder'):
            self.inputs = tf.placeholder(tf.float32,
                                         shape=(None, self.input_shape[0], self.input_shape[1], self.input_shape[2]),
                                         name='input')
            self.label_c = tf.placeholder(tf.int64, shape=(self.batch_size,), name='label')
            self.trainable = tf.placeholder(dtype=tf.bool, name='training')

        with tf.name_scope('Build_Model'):
            self.logits = self.model(self.trainable,
                                     self.pre_trained_weight,
                                     self.base_architecture,
                                     num_classes=self.num_classes)

            self.one_hot = tf.one_hot(self.label_c, self.num_classes)

            # all variables to be saved by the model
            self.net_variables = tf.global_variables()

        with tf.name_scope('Loss_Function'):
            if self.loss_function == 'softmax':
                print('using softmax cross entropy loss function')
                loss_net = tf.losses.softmax_cross_entropy(self.one_hot, self.logits)
            else:
                print('using sigmoid cross entropy loss function')
                loss_net = tf.losses.sigmoid_cross_entropy(self.one_hot, self.logits, label_smoothing=0.1)

            # add an L2 regularization term
            # l2 = tf.add_n([tf.nn.l2_loss(var) for var in tf.trainable_variables()])
            # self.loss = loss_net + l2*0.0005

            # Add weight decay to the loss for non-batch-normalization variables.
            # self.loss = loss_net + 0.0005 * tf.add_n([tf.nn.l2_loss(v) for v in tf.trainable_variables()
            #                                           if 'batch_normalization' not in v.name])

            # apply weight decay only to the head weights (skipping BN and bias
            # variables); collect them first and add a single L2 term, instead of
            # overwriting self.loss on every loop iteration
            head_weights = [v for v in tf.trainable_variables()
                            if 'batch_normalization' not in v.name
                            and 'head' in v.name and 'bias' not in v.name]
            self.loss = loss_net + 0.00005 * tf.add_n([tf.nn.l2_loss(v) for v in head_weights])

        with tf.name_scope('Compute_Accuracy'):
            # compute the accuracy
            correct_prediction = tf.equal(tf.argmax(self.logits, 1), self.label_c)
            self.accuracy = tf.reduce_mean(tf.cast(correct_prediction, 'float'))

            # compute the confusion matrix at each step to track sensitivity and specificity during training
            confusion_matrix = tf.confusion_matrix(self.label_c, tf.argmax(self.logits, 1), num_classes=2)

            TN = confusion_matrix[0][0]
            FP = confusion_matrix[0][1]
            FN = confusion_matrix[1][0]
            TP = confusion_matrix[1][1]

            # acc = (TP + TN) / (TP + TN + FP + FN)
            self.sensitive = TP / (TP + FN)
            self.specify = TN / (TN + FP)

        with tf.name_scope('Learning_Rate'):
            self.global_step = tf.Variable(0, dtype=tf.int64, trainable=False, name='global_step')
            warmup_steps = tf.constant(self.warmup_epoch * self.step_per_epoch,
                                       dtype=tf.int64,
                                       name='warmup_steps')

            # total number of training steps
            train_steps = tf.constant((self.first_stage_epoch + self.second_stage_epoch) * self.step_per_epoch,
                                      dtype=tf.int64,
                                      name='train_steps')

            # cosine learning-rate schedule with warmup
            cosine_item = (1 + tf.cos((self.global_step - warmup_steps) / (train_steps - warmup_steps) * np.pi))
            warm_learn_rate = self.learn_rate_end + 0.5 * (self.learn_rate_init - self.learn_rate_end) * cosine_item

            self.learn_rate = tf.cond(pred=self.global_step < warmup_steps,
                                      true_fn=lambda: self.global_step / warmup_steps * self.learn_rate_init,
                                      false_fn=lambda: warm_learn_rate)

            # self.learn_rate = tf.train.exponential_decay(cfg.Train.Learn_Rate_Init,
            #                                              self.global_step,
            #                                              decay_steps=400,
            #                                              decay_rate=0.9)

            # boundaries = [240, 1600]
            # values = [0.001, 0.0001, 0.00001]
            # self.learn_rate = tf.train.piecewise_constant(self.global_step, boundaries, values)

            global_step_update = tf.assign_add(self.global_step, 1)

        # with tf.name_scope("Moving_Weight_Decay"):
        #    moving_ave = tf.train.ExponentialMovingAverage(cfg.ResNet.Moving_Ave_Decay).apply(tf.trainable_variables())

        with tf.name_scope("First_Train_Stage"):
            # collect the network parameters to optimize in the first stage
            self.first_stage_trainable_var_list = []
            for var in tf.trainable_variables():
                var_name = var.op.name
                var_name_mess = str(var_name).split('/')

            # select the parameters to optimize by name
                if var_name_mess[1] in ['head']:
                    self.first_stage_trainable_var_list.append(var)

            optimizer = tf.train.AdamOptimizer(self.learn_rate)
            optimizer_variables = optimizer.minimize(self.loss, var_list=self.first_stage_trainable_var_list)
            with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
                with tf.control_dependencies([optimizer_variables, global_step_update]):
                    # with tf.control_dependencies([moving_ave]):
                    self.train_op_with_frozen_variables = tf.no_op()
                    # self.train_op_with_frozen_variables = tf.group(moving_ave)

        with tf.name_scope("Second_Train_Stage"):
            second_stage_trainable_var_list = tf.trainable_variables()
            second_stage_optimizer = tf.train.AdamOptimizer(self.learn_rate)
            second_stage_variables = second_stage_optimizer.minimize(self.loss, var_list=second_stage_trainable_var_list)

            with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
                with tf.control_dependencies([second_stage_variables, global_step_update]):
                    # with tf.control_dependencies([moving_ave]):
                    self.train_op_with_all_variables = tf.no_op()
                    # self.train_op_with_all_variables = tf.group(moving_ave)

        with tf.name_scope('Model_Loader_Save'):
            # restore all trained parameters except the final layer
            variables_to_restore = []
            for v in self.net_variables:
                if v.name.split('/')[1] not in ['head']:
                    variables_to_restore.append(v)

            self.loader = tf.train.Saver(variables_to_restore)
            self.saver = tf.train.Saver(tf.global_variables(), max_to_keep=200)

        with tf.name_scope('Collect_Summary'):
            tf.summary.scalar('loss', self.loss)
            tf.summary.scalar('accuracy', self.accuracy)
            tf.summary.scalar('learning_rate', self.learn_rate)
            tf.summary.scalar('sensitive', self.sensitive)
            tf.summary.scalar('specify', self.specify)

            self.merged = tf.summary.merge_all()

    def model(self, is_training, pre_trained_model, base_architecture, num_classes):
        """
        Load the network structure.
        :param is_training: flag indicating training mode
        :param pre_trained_model: path to the pretrained weights (unused here; restored separately by the loader)
        :param base_architecture: name of the network
        :param num_classes: number of classes
        :return: the network logits (the endpoints dict returned by build_model is discarded)
        """

        # if FLAGS.batch_norm_momentum is not None:
        #     override_params['batch_norm_momentum'] = FLAGS.batch_norm_momentum
        # if FLAGS.batch_norm_epsilon is not None:
        #     override_params['batch_norm_epsilon'] = FLAGS.batch_norm_epsilon
        # if FLAGS.dropout_rate is not None:
        #     override_params['dropout_rate'] = FLAGS.dropout_rate
        # if FLAGS.survival_prob is not None:
        #     override_params['survival_prob'] = FLAGS.survival_prob
        # if FLAGS.data_format:
        #     override_params['data_format'] = FLAGS.data_format
        # if FLAGS.num_label_classes:
        self.override_params['num_classes'] = num_classes
        # if FLAGS.depth_coefficient:
        #     override_params['depth_coefficient'] = FLAGS.depth_coefficient
        # if FLAGS.width_coefficient:
        #     override_params['width_coefficient'] = FLAGS.width_coefficient

        model_builder = model_builder_factory.get_model_builder(self.base_architecture)

        logits, _ = model_builder.build_model(self.inputs,
                                              self.base_architecture,
                                              is_training,
                                              override_params=self.override_params)

        return logits

           

Full code (config.py)

This file holds all of the network's training parameters; during training, you only need to edit the corresponding values in this file, which is very convenient.
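
Since cfg is an EasyDict, every option is read with plain attribute access, which is exactly how Model.py consumes it; for example:

from config import cfg

print(cfg.train.batch_size)        # 32
print(cfg.efficientnet.num_class)  # 2
# index [0] picks the active option out of each list
print(cfg.train.loss_function[0])  # 'sigmoid'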

from easydict import EasyDict as edict

cfg = edict()

# Consumers can get config by: from config import cfg

# network options
cfg.efficientnet = edict()

cfg.efficientnet.num_class = 2
cfg.efficientnet.moving_ave_decay = 0.9995
cfg.efficientnet.pre_trained_weight = ['./efficientnet-b0/model.ckpt',
                                       './efficientnet-b7/model.ckpt']

cfg.efficientnet.base_architecture = ['efficientnet-b0',
                                      'efficientnet-b7']

cfg.efficientnet.batch_norm_decay = 0.99

# train options
cfg.train = edict()

cfg.train.root_path = '../B7Data/1025_color_new/'
cfg.train.train_set = "../B7Data/1025_color_new/train_1025_color.txt"
cfg.train.valid_set = "../B7Data/1025_color_new/valid_1025_color.txt"

cfg.train.log = './checkpoint/log/log_1111_test5/'
cfg.train.save_model = './checkpoint/model/model_1111_test5/model'

cfg.train.train_num = 1352
cfg.train.valid_num = 256
cfg.train.batch_size = 32
cfg.train.step_per_epoch = 1352//32

cfg.train.input_size = [224, 224, 3]
cfg.train.learn_rate_init = 0.0001
cfg.train.learn_rate_end = 1e-6
cfg.train.warmup_epochs = 10
cfg.train.first_stage_epochs = 60
cfg.train.second_stage_epochs = 100

cfg.train.loss_function = ['sigmoid', 'softmax']


# test options
cfg.test = edict()
cfg.test.mode = ['txt', 'image']
cfg.test.image_path = '../B7Data/1025_color_new/'
cfg.test.weight_file = "./checkpoint/model/model_1027_1/model-1"


           
