
TensorFlow 2.0 TensorBoard: Visualizing the Training Process

Table of Contents

  • Using TensorBoard: the workflow
  • Detailed steps
    • Step 1
    • Step 2
    • Step 3
    • Step 4
  • Viewing Graph and Profile information
  • Example
    • 1. Define the model and training procedure
    • 2. Create a folder for the TensorBoard log files
    • 3. Instantiate the writer (and enable tracing)
    • 4. Write values to the writer

Using TensorBoard: the workflow

  • 1. Create a folder to store the TensorBoard log files;
  • 2. Instantiate a summary writer;
  • 3. Write values (usually scalars) to that writer;
  • 4. Open the TensorBoard web interface.

Detailed steps

Step 1

Create a folder under your code directory (e.g. ./tensorboard).
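This can also be done from within the script; a minimal sketch, assuming the ./tensorboard path used above:

import os

os.makedirs('./tensorboard', exist_ok=True)  # create the log folder; no error if it already exists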

Step 2

Instantiate a writer:

summary_writer = tf.summary.create_file_writer('./tensorboard')

Step 3

Write values (usually scalars) to the writer:

# start training
for batch_index in range(num_batches):
    # ... (training code; the loss of the current batch is stored in the variable loss)
    with summary_writer.as_default():                      # select the writer to use
        tf.summary.scalar("loss", loss, step=batch_index)  # other custom variables can be added the same way

Each time tf.summary.scalar() is called, the writer appends one record to the log file.
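Besides scalars, tf.summary also supports other data types; a short sketch inside the same loop (accuracy and some_weight_tensor are hypothetical values computed elsewhere in your training code):

with summary_writer.as_default():
    tf.summary.scalar("loss", loss, step=batch_index)                      # loss curve
    tf.summary.scalar("accuracy", accuracy, step=batch_index)              # any other scalar metric
    tf.summary.histogram("weights", some_weight_tensor, step=batch_index)  # distributions of weights or activations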

Step 4

To visualize the training process, open a terminal in the code directory and run:

tensorboard --logdir=E:\Pycharm\code\Jupyter\tensorflow2.0\My_net\Tensorboard\tensorboard

where E:\Pycharm\code\Jupyter\tensorflow2.0\My_net\Tensorboard\tensorboard is the path of the folder that holds the TensorBoard log files.

Then open the URL printed by the command-line program (usually http://127.0.0.1:6006/) in a browser to reach the TensorBoard web interface.

Viewing Graph and Profile information

tf.summary.trace_on(graph=True, profiler=True)  # enable tracing to record the graph structure and profiling information
# ... run the training ...
with summary_writer.as_default():
    tf.summary.trace_export(name="model_trace", step=0, profiler_outdir=log_dir)    # export the trace to the log folder log_dir

Afterwards, select "Profile" in TensorBoard to inspect how much time each operation took on a timeline. If you built a computation graph with tf.function, you can also click "Graphs" to view its structure.
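Note that tracing must be switched on before the first call of the traced function, and exported afterwards; a minimal self-contained sketch (the function square is purely illustrative):

import tensorflow as tf

log_dir = './tensorboard'
summary_writer = tf.summary.create_file_writer(log_dir)

@tf.function
def square(x):
    return x * x

tf.summary.trace_on(graph=True, profiler=True)  # must be called before the first traced call
square(tf.constant(2.0))                        # running the function once builds and records the graph
with summary_writer.as_default():
    tf.summary.trace_export(name="model_trace", step=0, profiler_outdir=log_dir)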

Example

Here we use training on MNIST as an example:

1. Define the model and training procedure

import numpy as np
import tensorflow as tf
import tensorflow.keras as keras
import tensorflow.keras.layers as layers

mnist = keras.datasets.mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Add a channels dimension
x_train = x_train[..., tf.newaxis].astype(np.float32)
x_test = x_test[..., tf.newaxis].astype(np.float32)

train_ds = tf.data.Dataset.from_tensor_slices((x_train, y_train)).shuffle(10000).batch(32)
test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(x_test.shape[0])

class MyModel(keras.Model):
    # Set layers.
    def __init__(self):
        super(MyModel, self).__init__()
        # Convolution Layer with 32 filters and a kernel size of 5.
        self.conv1 = layers.Conv2D(32, kernel_size=5, activation=tf.nn.relu)
        # Max Pooling (down-sampling) with kernel size of 2 and strides of 2.
        self.maxpool1 = layers.MaxPool2D(2, strides=2)

        # Convolution Layer with 64 filters and a kernel size of 3.
        self.conv2 = layers.Conv2D(64, kernel_size=3, activation=tf.nn.relu)
        # Max Pooling (down-sampling) with kernel size of 2 and strides of 2.
        self.maxpool2 = layers.MaxPool2D(2, strides=2)

        # Flatten the data to a 1-D vector for the fully connected layer.
        self.flatten = layers.Flatten()

        # Fully connected layer.
        self.fc1 = layers.Dense(1024)
        # Apply Dropout (if is_training is False, dropout is not applied).
        self.dropout = layers.Dropout(rate=0.5)

        # Output layer, class prediction.
        self.out = layers.Dense(10)

    # Set forward pass.
    def call(self, x, is_training=False):
        x = tf.reshape(x, [-1, 28, 28, 1])
        x = self.conv1(x)
        x = self.maxpool1(x)
        x = self.conv2(x)
        x = self.maxpool2(x)
        x = self.flatten(x)
        x = self.fc1(x)
        x = self.dropout(x, training=is_training)
        x = self.out(x)
        if not is_training:
            # tf cross entropy expect logits without softmax, so only
            # apply softmax when not training.
            x = tf.nn.softmax(x)
        return x

model = MyModel()

# the model outputs raw logits during training, so compute the cross entropy from logits
loss_object = keras.losses.SparseCategoricalCrossentropy(from_logits=True)
optimizer = keras.optimizers.Adam()

@tf.function
def train_step(images, labels):
    with tf.GradientTape() as tape:
        predictions = model(images, is_training=True)  # enable dropout and skip the softmax
        loss = loss_object(labels, predictions)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    return loss
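As a quick sanity check (using the definitions above; not part of the training loop below), a single batch can be pushed through train_step:

images, labels = next(iter(train_ds))
print(train_step(images, labels).numpy())  # with random weights the loss should start near -ln(1/10) ≈ 2.3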
           

2. Create a folder for the TensorBoard log files
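A minimal sketch; a per-run subfolder keeps runs from different launches separate in TensorBoard (the timestamped name is a convention, not something required by TensorBoard):

import os
import datetime

log_dir = os.path.join('tensorboard', datetime.datetime.now().strftime('%Y%m%d-%H%M%S'))
os.makedirs(log_dir, exist_ok=True)  # one subfolder per training run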

3. Instantiate the writer (and enable tracing)

summary_writer = tf.summary.create_file_writer(log_dir)     # instantiate the writer
tf.summary.trace_on(profiler=True)  # enable tracing (optional)

4. Write values to the writer

EPOCHS = 5
global_step = 0  # a monotonically increasing step, so successive batches do not overwrite each other

for epoch in range(EPOCHS):
    for images, labels in train_ds.take(10):
        loss = train_step(images, labels)
        with summary_writer.as_default():                      # select the writer
            tf.summary.scalar("loss", loss, step=global_step)  # write the current loss value
        global_step += 1

with summary_writer.as_default():
    tf.summary.trace_export(name="model_trace", step=0, profiler_outdir=log_dir)    # export the trace to the log files (optional)

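Once training has finished, launch TensorBoard against the log folder as in Step 4 above; pointing --logdir at the parent folder shows every run if per-run subfolders were used:

tensorboard --logdir tensorboard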