
How to Train Neural Networks for Image Classification, Part 1

Image classification is a hot topic in data science, with the past few years seeing huge improvements in many areas. It has applications in many fields, including SEO. But how is it done?


A deep neural network is a network of artificial neurons organised into layers (via software). Each layer is connected to the next, and each connection has a weight that helps determine how much the artificial neuron fires. This firing helps determine how strong the connections between layers are; in general, neurons that fire together have stronger connections, just like biological neurons.

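To make this concrete, here is a minimal sketch in plain NumPy (the numbers are illustrative) of what a single artificial neuron computes: a weighted sum of its inputs plus a bias, passed through an activation function:

import numpy as np

def relu(z):
    # ReLU activation: the neuron only 'fires' when the weighted sum is positive
    return np.maximum(0, z)

x = np.array([0.5, -0.2, 0.1])   # inputs coming from the previous layer
w = np.array([0.8, 0.4, -0.6])   # one weight per incoming connection
b = 0.1                          # bias term

output = relu(np.dot(w, x) + b)  # how strongly this neuron fires
print(output)                    # ≈ 0.36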

How strong these connections are is determined by how the network is trained on the data you put into it. These networks are trained via a process called backpropagation, which feeds data into the network and then measures the error in the network’s output. This error is measured using a loss function.


Backpropagation works by using gradient descent: it measures the rate of change of the loss function with respect to the weight of each connection, and a gradient descent step then adjusts each weight so that the error is reduced as close to zero as possible. The network should eventually converge on a solution where the overall error is minimised.


The rate at which the network learns is called the learning rate, and this is another hyperparameter that can be tuned when training neural networks. If the learning rate is too small, the network can take too long to converge on a solution; conversely, if the learning rate is too large, the network will ‘bounce around’ and never really converge on an optimum solution.

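As an illustration (this is a toy example, not Keras code), here is what one gradient descent update looks like for a single weight; the learning rate scales how big a step we take against the gradient of the loss:

# Toy problem: minimise loss(w) = (w - 3)**2, whose gradient is 2 * (w - 3)
w = 0.0
learning_rate = 0.1

for step in range(50):
    gradient = 2 * (w - 3)            # rate of change of the loss with respect to w
    w = w - learning_rate * gradient  # step downhill, scaled by the learning rate

print(w)  # converges towards 3.0, the weight value that minimises the loss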

There are different types of layers in neural networks, and each one transforms data differently. The most basic type is a dense layer, in which every neuron is connected to every neuron in the previous layer. Other types include convolutional layers, which are primarily used for image processing tasks, and recurrent layers, which are used to process time-series data. There are others, but these are the most common types.

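For reference, this is roughly how those three layer types are constructed in Keras (the layer sizes here are arbitrary, chosen just for illustration):

from tensorflow import keras

dense = keras.layers.Dense(64, activation="relu")                  # fully connected layer
conv = keras.layers.Conv2D(32, kernel_size=3, activation="relu")   # for image data
recurrent = keras.layers.LSTM(64)                                  # for sequence/time-series data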

In this article, I will focus on how to implement a simple image classifier using a series of dense layers in Python, using Keras as part of TensorFlow. As mentioned above, convolutional neural networks usually work better for image classification tasks, and I will talk about these in part 2 of this series. As my primary area of interest is Search Engine Optimisation, I will tie all of this together in part 3, looking at how neural networks are used in search.


Neural networks are fascinating, and if you have an interest in this topic, I would encourage you to check out this excellent playlist on YouTube on a channel called Deep Lizard. They have excellent and accessible content on neural networks and deep learning.


If you are more interested in the implementation using Python with Keras, I would encourage you to look at Hands-On Machine Learning with Scikit-Learn, Keras and TensorFlow by Aurélien Géron. It is an excellent book, written by a former Googler and reviewed by the author of Keras. I highly recommend it.

如果您對将Python與Keras結合使用的實作更感興趣,我鼓勵您看看Aurelion Geron的Scikit-Learn,Keras和Tensorflow的動手機器學習 。 這是一本非常棒的書,由前Google員工撰寫,并由Keras的作者撰寫。 我強烈推薦它。

Getting started with Keras and TensorFlow

Keras is a high-level deep learning API in Python that allows you to easily create and train deep learning models. It was launched in 2015 and has gained traction within the data science community due to its ease of use, as its syntax is designed to be very similar to Scikit-Learn’s.


TensorFlow is a library created by the Google Brain team for machine learning tasks. It has often competed with PyTorch (created by Facebook) for market share, but lagged behind partly because its documentation wasn’t as accessible. To remedy this, Google released TensorFlow 2, which contained many improvements, particularly around cross-compatibility with models, GPU support and graphing utilities.


With the release of TensorFlow 2, Keras is now merged into TensorFlow and the standalone version is no longer maintained. Installing Keras and TensorFlow is straightforward. Just go to your Python environment (I recommend using a virtual environment/package manager like pipenv) and use whichever Python package manager you like:


python -m pip install tensorflow
           

Be sure to use a 64-bit version of Python for this to work. Getting GPU support on Windows is a bit of a faff, but this YouTube video will show you how to get it done. However, you probably won’t need it for this tutorial. You could look at Google Colab, as that has GPUs available with no setup.

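If you do want to check whether TensorFlow can actually see a GPU, a quick way (in TensorFlow 2) is:

import tensorflow as tf
print(tf.config.list_physical_devices("GPU"))  # an empty list means TensorFlow is running on CPU only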

You can check the installation in a Jupyter notebook with the following:


import tensorflow as tf
from tensorflow import keras
print(tf.__version__)
print(keras.__version__)
           

If this has all worked, then at the time of writing you should see something like:


2.3.0
2.4.0
           

When you’ve got Keras and TensorFlow working, you should be good to go on building an image classifier with a neural network.


Importing the data set

For most simple image classification tasks, it is popular to use the MNIST dataset, which consists of 60,000 images of handwritten digits. However, for this task we are going to use the Fashion MNIST dataset, which consists of 60,000 28 x 28 grayscale images of Zalando fashion articles, classified across 10 different classes. The reason for this is that image classifiers tend to find it more challenging.


Keras has utility functions to help import this dataset, so it is fairly straightforward to use (similar to Sklearn). Working in a Jupyter notebook, begin by making sure we have all the imports we need:


import tensorflow as tf
from tensorflow import keras
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
           

We will be working with NumPy arrays and plotting them with Matplotlib, so you will need to ensure these are accessible within your environment. Once this is done, you can import the dataset:

我們将使用NumPy數組,并使用Matplotlib進行繪制,是以您需要確定可以在您的環境中通路它們。 完成此操作後,您可以導入資料集:

fashion_mnist = keras.datasets.fashion_mnist
(X_train_full, y_train_full), (X_test, y_test) = fashion_mnist.load_data()
           

The dataset is split between a training and a test set automatically (60,000 images in training, 10,000 images in test). The X data are the images and the y data are the labels. To make this more useful to work with, it is also a good idea to create a validation dataset so we can ensure the model isn’t overfitting:


X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
           

The y data are just a series of numbers indexing each class label, so we need to define the class names manually:


class_names = ["T-shirt/top", "Trouser", "Pullover", "Dress", "Coat",
               "Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot"]
           

To get an idea of what the dataset actually represents we can use a simple loop and Matplotlib:


plt.figure(figsize=(10,10))
for i in range(25):
    plt.subplot(5,5,i+1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(X_train_full[i], cmap=plt.cm.binary)
    plt.xlabel(class_names[y_train_full[i]])
plt.show()
           

Running this will show you something like the following:

由此,您将看到類似以下内容:

First 25 entries from the Fashion MNIST dataset

As you can see, while there are only 10 classes (similar to the MNIST dataset), the images within each class vary considerably, which is why it is a more challenging dataset to work with.


Normalizing the dataset

The first step to working with neural networks is to normalize the dataset; otherwise, it could take a lot longer for the network to converge on a solution.


The usual way of normalizing a dataset is to scale the features, for example by subtracting the mean from each feature and dividing by the standard deviation, or by rescaling so that all the features end up on the same scale, somewhere between 0 and 1.


As we are working with 28 x 28 NumPy arrays representing each image, and each pixel in the array has an intensity somewhere between 0 and 255, a simple way of getting all of these images onto a 0–1 scale is to divide each array by 255:


X_valid, X_train = X_valid / 255., X_train / 255.
X_test = X_test / 255.
           

From there, we are good to go to build a dense layer neural network and train it on our data set.


Building the neural network image classifier

In order to build the model, we have to specify its structure using Keras’ syntax. As mentioned above, it is very similar to Scikit-Learn, so it should be recognisable if you are familiar with that package. The code for building the model is as follows:


model = keras.models.Sequential([
    keras.layers.Flatten(input_shape=[28, 28]),
    keras.layers.Dense(300, activation="relu"),
    keras.layers.Dense(100, activation="relu"),
    keras.layers.Dense(100, activation="relu"),
    keras.layers.Dense(100, activation="relu"),
    keras.layers.Dense(10, activation="softmax")
])
           

To explain this code:


  • The first line creates a Sequential model. This is the simplest kind of Keras model: basically a single sequence of connected layers in a network


  • The first layer in the model is a Flatten layer, which is there for pre-processing of the data and isn’t trainable itself. It takes the 28 x 28 NumPy array for each image and flattens it into a 1 x 784 array that the network can work with


  • Next, we add a Dense hidden layer with 300 neurons. It will use the ReLU activation function. Each Dense layer manages its own weight matrix, containing all the connection weights between the neurons and their inputs


  • Next, we add another 3 Dense layers with 100 neurons each. There are diminishing returns to adding new layers and this is something we need to test as we build and optimise the network


  • Finally, we add a Dense layer with 10 neurons as there are 10 classes to predict and as they are all exclusive, we use the softmax activation function


To get a full understanding of the model’s structure we can use:


model.summary()
           

And this will give us an output of the full structure of the network:

這将為我們提供網絡完整結構的輸出:

Model: "sequential_2"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
flatten_2 (Flatten)          (None, 784)               0
_________________________________________________________________
dense_10 (Dense)             (None, 300)               235500
_________________________________________________________________
dense_11 (Dense)             (None, 100)               30100
_________________________________________________________________
dense_12 (Dense)             (None, 100)               10100
_________________________________________________________________
dense_13 (Dense)             (None, 100)               10100
_________________________________________________________________
dense_14 (Dense)             (None, 10)                1010
=================================================================
Total params: 286,810
Trainable params: 286,810
Non-trainable params: 0
_________________________________________________________________
           

As can be seen, this network has a total of 286,810 trainable parameters (consisting of weights between neurons and bias terms). This gives the network a lot of flexibility, but it also means that it will be very easy for it to overfit, so we need to be careful.


Before we can train the network we need to compile it, and this is done with the following code:


model.compile(loss = “sparse_categorical_crossentropy”,
optimizer = “sgd”,
metrics = [“accuracy”])
           

In this call we are specifying three things:


i) The loss function to use. In this case we are using sparse categorical cross entropy — this is because our labels are exclusive (sparse) class indices that we are trying to predict against


ii) The optimizer we are going to use to optimise the model against the loss function is stochastic gradient descent, which should help the model converge on an optimum solution, i.e. Keras will use the backpropagation method described above (see the sketch after this list for how to set its learning rate explicitly)

ⅱ)我們将用它來優化對損失函數模型中的優化是随機梯度下降 并確定模型收斂于最佳解,即Keras将使用上述反向傳播方法

iii) Finally, we specify a metric that we are going to use in addition to loss to give us an idea of how well our model is working. In this case, we are using accuracy, which shows how well our model is doing as the percentage of predictions that match the actual class for the model we are training

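Passing strings like "sgd" uses Keras’ defaults. If you want explicit control over hyperparameters such as the learning rate discussed earlier, the same compile step can be written with objects instead of strings. A minimal sketch, assuming otherwise-default SGD settings:

model.compile(loss=keras.losses.SparseCategoricalCrossentropy(),
              optimizer=keras.optimizers.SGD(learning_rate=0.01),  # 0.01 is the Keras default
              metrics=["accuracy"])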

Training the network

Training the network is easy once it has been compiled. All you need to do is call the model’s fit method (like Sklearn) as follows:


history = model.fit(X_train,
                    y_train,
                    epochs = 10,
                    validation_data = (X_valid, y_valid))
           

We initially pass in the data that we want to train the network on; in this case, X_train holds the images and y_train is an array containing the labels. We also specify the number of epochs we want to train the model with (an epoch being one complete pass of the training data through the network, so epochs sets how many times the training data is passed through for training purposes).


Keras also lets us specify an optional validation_data argument where we pass in a validation data set. If we do this, then at the end of each epoch Keras will test the performance of the network on the validation data set. This is a good way of ensuring the model isn’t overfitting; however, it doesn’t feed into the training itself.


As training proceeds, you will see something like this:


Epoch 1/10 1719/1719 [==============================] — 5s 3ms/step — loss: 0.7698 — accuracy: 0.7385 — val_loss: 0.5738 — val_accuracy: 0.7962 
Epoch 2/10 1719/1719 [==============================] — 5s 3ms/step — loss: 0.4830 — accuracy: 0.8283 — val_loss: 0.4570 — val_accuracy: 0.8404 
Epoch 3/10 1719/1719 [==============================] — 5s 3ms/step — loss: 0.4261 — accuracy: 0.8480 — val_loss: 0.4121 — val_accuracy: 0.8522 
Epoch 4/10 1719/1719 [==============================] — 5s 3ms/step — loss: 0.3932 — accuracy: 0.8582 — val_loss: 0.3951 — val_accuracy: 0.8566 
Epoch 5/10 1719/1719 [==============================] — 5s 3ms/step — loss: 0.3708 — accuracy: 0.8660 — val_loss: 0.3597 — val_accuracy: 0.8682 
Epoch 6/10 1719/1719 [==============================] — 5s 3ms/step — loss: 0.3518 — accuracy: 0.8728 — val_loss: 0.3397 — val_accuracy: 0.8756 
Epoch 7/10 1719/1719 [==============================] — 5s 3ms/step — loss: 0.3369 — accuracy: 0.8779 — val_loss: 0.3506 — val_accuracy: 0.8738 
Epoch 8/10 1719/1719 [==============================] — 5s 3ms/step — loss: 0.3243 — accuracy: 0.8814 — val_loss: 0.3343 — val_accuracy: 0.8774 
Epoch 9/10 1719/1719 [==============================] — 4s 3ms/step — loss: 0.3128 — accuracy: 0.8861 — val_loss: 0.3415 — val_accuracy: 0.8794 
Epoch 10/10 1719/1719 [==============================] — 4s 2ms/step — loss: 0.3019 — accuracy: 0.8888 — val_loss: 0.3265 — val_accuracy: 0.8822
           

This will continue for as long as the training is happening, with accuracy and loss metrics reported for both the training and validation data sets. The accuracy value is a simple percentage measure of how many items the network got right. The loss value is the cross entropy loss.

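To give an intuition for what that loss number means: cross entropy is just the negative log of the probability the network assigned to the correct class. A toy calculation with made-up numbers:

import numpy as np

# Hypothetical softmax output for one image: a probability for each of the 10 classes
proba = np.array([0.02, 0.01, 0.85, 0.02, 0.03, 0.01, 0.03, 0.01, 0.01, 0.01])
true_class = 2  # index of the correct label

loss = -np.log(proba[true_class])
print(loss)  # ≈ 0.16, since a confident, correct prediction gives a low loss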

Once the model is trained, it is possible to use the history attribute of the object returned by fit to get a dictionary of the loss and any other metrics recorded at every stage of the training. We can put these in a Pandas DataFrame and plot them as follows:


pd.DataFrame(history.history).plot(figsize = (16, 10))
plt.grid(True)
plt.gca().set_ylim(0, 1)
plt.show()
           
Loss and accuracy for our model

As can be seen above, as the loss decreases, the accuracy increases. Two other things stand out from this plot:


  • We could probably train this model for longer, as it doesn’t look like the loss has reached a minimum (see the sketch after this list for one way to manage longer training runs)


  • The accuracy on the training data set is higher than on the validation set (which is normal) but not wildly different from it. This means there is no overfitting

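If you do train for more epochs, a common safeguard (not used in this article, just a suggestion) is Keras’ EarlyStopping callback, which halts training once the validation loss stops improving and can roll back to the best weights it saw:

early_stop = keras.callbacks.EarlyStopping(monitor="val_loss",
                                           patience=5,  # stop after 5 epochs with no improvement
                                           restore_best_weights=True)

history = model.fit(X_train, y_train,
                    epochs=100,  # an upper bound; training stops early if progress stalls
                    validation_data=(X_valid, y_valid),
                    callbacks=[early_stop])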

Evaluating the performance of the neural network

Evaluating the performance of the network is straightforward and follows data science best practice principles. We call the model’s evaluate method on the test data set to see how it performs. Remember that the test data set hasn’t been used in training, so the network hasn’t seen this data before. We should perform this step only once, so that we get an accurate idea of the model’s performance.


model.evaluate(X_test, y_test)
           

This will run the model on the test data set and the output should look something like this:

這将在測試資料集上運作模型,并且輸出應如下所示:

313/313 [==============================] — 0s 2ms/step — loss: 0.3802 — accuracy: 0.8858
           

You’ll get an output of the loss and whatever other metrics were specified when the model was compiled. Here we can see that this model is correct 88% of the time, which isn’t bad for a simple network on such a difficult data set.

您将獲得損失以及模型編譯時指定的任何其他名額的輸出。 在這裡,我們可以看到該模型在88%的時間内都是正确的,對于在如此困難的資料集上的簡單網絡而言,這并不壞。

Next steps

In the next part of this series, I will talk about how to implement the above using a convolutional neural network and show how and why these perform better for image classification tasks.


You can get the code I’ve used for this work from my Github here. Please bear in mind that it is a work in progress while I am writing these articles.


For part 3 of this series, I will link all of this back to my favourite passion outside data science, and that is SEO: how neural networks are used in search and what we can learn from them. Thanks for reading.


Translated from: https://medium.com/@sandy_lee/how-to-train-neural-networks-for-image-classification-part-1-21327fe1cc1
