CS231n assignment1 -- knn

    I'm working through CS231n and recording how I solve the assignments.

    I'm still new to deep learning myself and hope to learn from exchanging ideas with everyone. My solutions live at https://github.com/donghaiyu233/cs231n; forks are welcome.

1. kNN (k-Nearest Neighbor): how it works

The idea: training consists of nothing more than memorizing all the training data. To predict, find the k training images closest to the test image, let their k labels vote, and output the label with the most votes.

How do we compute the distance between two images?

[Figure from the course notes: computing a pixel-wise L1 distance between two small images]

The notes illustrate the L1 distance:

d_1(I_1, I_2) = \sum_p |I_1^p - I_2^p|

The other common choice is the L2 distance, i.e. the Euclidean distance between the two pixel vectors:

d_2(I_1, I_2) = \sqrt{\sum_p (I_1^p - I_2^p)^2}

For the difference between the norms, see "0範數、1範數、2範數有什麼差別" (what distinguishes the 0-, 1-, and 2-norms).
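
To make the two metrics concrete, here is a minimal numpy sketch (the pixel values are made up) computing both distances between two flattened images:

import numpy as np

# two toy "images" flattened into 1-D vectors (made-up values)
I1 = np.array([56., 32., 10., 18.])
I2 = np.array([10., 20., 24., 17.])

l1 = np.sum(np.abs(I1 - I2))          # L1 (Manhattan) distance -> 73.0
l2 = np.sqrt(np.sum((I1 - I2) ** 2))  # L2 (Euclidean) distance -> ~49.57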

2. The assignment

1) k_nearest_neighbor.py
import numpy as np
#from past.builtins import xrange


class KNearestNeighbor(object):
    """ a kNN classifier with L2 distance """

    def __init__(self):
        pass

    def train(self, X, y):
        """
        Train the classifier. For k-nearest neighbors this is just 
        memorizing the training data.

        Inputs:
        - X: A numpy array of shape (num_train, D) containing the training data
        consisting of num_train samples each of dimension D.
        - y: A numpy array of shape (num_train,) containing the training labels, where
            y[i] is the label for X[i].
        """
        self.X_train = X
        self.y_train = y

    def predict(self, X, k=1, num_loops=0):
        """
        Predict labels for test data using this classifier.

        Inputs:
        - X: A numpy array of shape (num_test, D) containing test data consisting
            of num_test samples each of dimension D.
        - k: The number of nearest neighbors that vote for the predicted labels.
        - num_loops: Determines which implementation to use to compute distances
        between training points and testing points.

        Returns:
        - y: A numpy array of shape (num_test,) containing predicted labels for the
        test data, where y[i] is the predicted label for the test point X[i].  
        """
        if num_loops == 0:
            dists = self.compute_distances_no_loops(X)
        elif num_loops == 1:
            dists = self.compute_distances_one_loop(X)
        elif num_loops == 2:
            dists = self.compute_distances_two_loops(X)
        else:
            raise ValueError('Invalid value %d for num_loops' % num_loops)

        return self.predict_labels(dists, k=k)

    def compute_distances_two_loops(self, X):
        """
        Compute the distance between each test point in X and each training point
        in self.X_train using a nested loop over both the training data and the 
        test data.

        Inputs:
        - X: A numpy array of shape (num_test, D) containing test data.

        Returns:
        - dists: A numpy array of shape (num_test, num_train) where dists[i, j]
        is the Euclidean distance between the ith test point and the jth training
        point.
        """
        num_test = X.shape[0]
        num_train = self.X_train.shape[0]
        dists = np.zeros((num_test, num_train))
        for i in range(num_test):
            for j in range(num_train):
                #####################################################################
                # TODO:                                                             #
                # Compute the l2 distance between the ith test point and the jth    #
                # training point, and store the result in dists[i, j]. You should   #
                # not use a loop over dimension.                                    #
                #####################################################################
                # Equivalent alternative:
                #   diff = X[i, :] - self.X_train[j, :]
                #   dists[i, j] = np.sqrt(diff.dot(diff))
                dists[i, j] = np.sqrt(np.sum(np.square(self.X_train[j, :] - X[i, :])))
                #####################################################################
                #                       END OF YOUR CODE                            #
                #####################################################################

        return dists

    def compute_distances_one_loop(self, X):
        """
        Compute the distance between each test point in X and each training point
        in self.X_train using a single loop over the test data.

        Input / Output: Same as compute_distances_two_loops
        """
        num_test = X.shape[0]
        num_train = self.X_train.shape[0]
        dists = np.zeros((num_test, num_train))
        for i in range(num_test):
            #######################################################################
            # TODO:                                                               #
            # Compute the l2 distance between the ith test point and all training #
            # points, and store the result in dists[i, :].                        #
            #######################################################################
            # Broadcasting expands X[i, :] of shape (D,) against self.X_train of
            # shape (num_train, D); summing the squared differences over axis=1
            # yields a (num_train,) vector that is assigned to row dists[i].
            dists[i] = np.sqrt(np.sum(np.square(X[i,:]-self.X_train),axis=1))
            #######################################################################
            #                         END OF YOUR CODE                            #
            #######################################################################
        return dists

    def compute_distances_no_loops(self, X):
        """
        Compute the distance between each test point in X and each training point
        in self.X_train using no explicit loops.

        Input / Output: Same as compute_distances_two_loops
        """
        num_test = X.shape[0]
        num_train = self.X_train.shape[0]
        dists = np.zeros((num_test, num_train))
        #########################################################################
        # TODO:                                                                 #
        # Compute the l2 distance between all test points and all training      #
        # points without using any explicit loops, and store the result in      #
        # dists.                                                                #
        #                                                                       #
        # You should implement this function using only basic array operations; #
        # in particular you should not use functions from scipy.                #
        #                                                                       #
        # HINT: Try to formulate the l2 distance using matrix multiplication    #
        #       and two broadcast sums.                                         #
        # X: (num_test, D), X_train: (num_train, D).                            #
        # np.dot(a, b) is a matrix product; np.multiply(a, b) is element-wise   #
        # (MATLAB's a * b vs. a .* b).                                          #
        #########################################################################
        # Expand ||p - c||^2 = ||p||^2 + ||c||^2 - 2 p.c for all pairs at once.
        dists = -2 * np.dot(X, self.X_train.T)             # (num_test, num_train)
        sq1 = np.sum(np.square(X), axis=1, keepdims=True)  # (num_test, 1) column
        # keepdims=True keeps sq1 two-dimensional so it broadcasts across
        # columns; without it the sum collapses to a flat (num_test,) vector.
        sq2 = np.sum(np.square(self.X_train), axis=1)      # (num_train,)
        dists = dists + sq1  # column broadcasts across all num_train columns
        dists = dists + sq2  # 1-D vector broadcasts as a row across all rows
        dists = np.sqrt(dists)
        #########################################################################
        #                         END OF YOUR CODE                              #
        #########################################################################
        return dists

    def predict_labels(self, dists, k=1):
        """
        Given a matrix of distances between test points and training points,
        predict a label for each test point.

        Inputs:
        - dists: A numpy array of shape (num_test, num_train) where dists[i, j]
        gives the distance between the ith test point and the jth training point.

        Returns:
        - y: A numpy array of shape (num_test,) containing predicted labels for the
        test data, where y[i] is the predicted label for the test point X[i].  
        """
        num_test = dists.shape[0]
        y_pred = np.zeros(num_test)
        for i in range(num_test):
            # A list of length k storing the labels of the k nearest neighbors to
            # the ith test point.
            closest_y = []
            #########################################################################
            # TODO:                                                                 #
            # Use the distance matrix to find the k nearest neighbors of the ith    #
            # testing point, and use self.y_train to find the labels of these       #
            # neighbors. Store these labels in closest_y.                           #
            # Hint: Look up the function numpy.argsort.                             #
            #########################################################################
            # np.argsort returns the indices that would sort the array in
            # ascending order; [:k] keeps the indices of the k smallest
            # distances, and fancy indexing into y_train pulls out the
            # corresponding labels.
            closest_y = self.y_train[np.argsort(dists[i, :])[:k]]
            #########################################################################
            # TODO:                                                                 #
            # Now that you have found the labels of the k nearest neighbors, you    #
            # need to find the most common label in the list closest_y of labels.   #
            # Store this label in y_pred[i]. Break ties by choosing the smaller     #
            # label.                                                                #
            #########################################################################
            # np.bincount counts occurrences of each non-negative integer:
            #   np.bincount(np.array([1, 2, 3, 4])) -> array([0, 1, 1, 1, 1])
            # np.argmax returns the index of the largest element; on ties it
            # returns the first (i.e. smallest) index, which breaks ties by
            # choosing the smaller label, as required:
            #   np.argmax(np.array([[2, 4, 6, 1], [1, 5, 2, 9]]), axis=0)
            #   -> array([0, 1, 0, 1])
            y_pred[i] = np.argmax(np.bincount(closest_y))
            #########################################################################
            #                           END OF YOUR CODE                            #
            #########################################################################

        return y_pred
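
For reference, a minimal usage sketch of the class above; it assumes the CIFAR-10 loading cells of the notebook have already produced flattened numpy arrays X_train, y_train, X_test, y_test:

classifier = KNearestNeighbor()
classifier.train(X_train, y_train)             # just memorizes the data
y_test_pred = classifier.predict(X_test, k=5)  # num_loops=0: vectorized distances
accuracy = np.mean(y_test_pred == y_test)
print('accuracy: %f' % accuracy)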

           

    The one that needs care is compute_distances_no_loops(self, X); the trick mirrors how pairwise Euclidean distances between matrices are computed in MATLAB. Let the test set be P of shape (m, d) and the training set C of shape (n, d), where m is the number of test samples, n the number of training samples, and d the dimension. Expanding the squared distance gives

dists_{ij}^2 = \sum_k P_{ik}^2 + \sum_k C_{jk}^2 - 2 (P C^T)_{ij}

where the two squares are element-wise and P C^T is a matrix product (whether an operation is element-wise or a matrix product can be read off from the shapes and the rules of matrix multiplication). The implementation again relies on broadcasting: keep the row sums of P^2 as an (m, 1) column and those of C^2 as a (1, n) row, and their sum broadcasts to (m, n).
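
A quick way to convince yourself the expansion is correct is to compare it against a naive computation on small random matrices (a standalone sketch, independent of the class above):

import numpy as np

P = np.random.randn(3, 5)  # 3 "test" points, d = 5
C = np.random.randn(4, 5)  # 4 "train" points, d = 5

# expanded form: ||p||^2 + ||c||^2 - 2 p.c, broadcast to a (3, 4) matrix
fast = np.sqrt(np.sum(P**2, axis=1, keepdims=True)
               + np.sum(C**2, axis=1)
               - 2 * P.dot(C.T))

# naive form via a (3, 4, 5) difference tensor
slow = np.sqrt(((P[:, None, :] - C[None, :, :]) ** 2).sum(axis=2))
print(np.allclose(fast, slow))  # True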

    For the final prediction, np.argsort sorts each row of dists to find the k nearest samples, np.bincount counts how often each label occurs among them, and np.argmax returns the label with the highest count, which becomes the prediction.
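
A toy run of that pipeline on a single row of dists (made-up numbers, k = 3):

import numpy as np

dist_row = np.array([0.9, 0.1, 0.5, 0.3, 0.7])  # distances to 5 training points
labels   = np.array([2,   0,   1,   0,   2])    # their labels

k = 3
closest_y = labels[np.argsort(dist_row)[:k]]  # 3 nearest -> labels [0, 0, 1]
counts = np.bincount(closest_y)               # [2, 1]: label 0 appears twice
print(np.argmax(counts))                      # 0, the majority label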

2) Testing and validation

Cross-validation, including the use of vstack and hstack (a quick demo of both follows).
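
Both stack a list of arrays back into one array, along different axes (a toy sketch):

import numpy as np

a = np.ones((2, 3))
b = np.zeros((2, 3))
print(np.vstack([a, b]).shape)  # (4, 3): 2-D fold matrices stacked by rows

la = np.array([1, 2])
lb = np.array([3, 4])
print(np.hstack([la, lb]))      # [1 2 3 4]: 1-D label vectors concatenated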

num_folds = 5
k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]

X_train_folds = []
y_train_folds = []
################################################################################
# TODO:                                                                        #
# Split up the training data into folds. After splitting, X_train_folds and    #
# y_train_folds should each be lists of length num_folds, where                #
# y_train_folds[i] is the label vector for the points in X_train_folds[i].     #
# Hint: Look up the numpy array_split function.                                #
################################################################################
X_train_folds = np.array_split(X_train, num_folds)
# split the rows of X_train into num_folds equal parts
y_train_folds = np.array_split(y_train, num_folds)
################################################################################
#                                 END OF YOUR CODE                             #
################################################################################

# A dictionary holding the accuracies for different values of k that we find
# when running cross-validation. After running cross-validation,
# k_to_accuracies[k] should be a list of length num_folds giving the different
# accuracy values that we found when using that value of k.
k_to_accuracies = {}


################################################################################
# TODO:                                                                        #
# Perform k-fold cross validation to find the best value of k. For each        #
# possible value of k, run the k-nearest-neighbor algorithm num_folds times,   #
# where in each case you use all but one of the folds as training data and the #
# last fold as a validation set. Store the accuracies for all fold and all     #
# values of k in the k_to_accuracies dictionary.                               #
################################################################################
for k in k_choices:
    accuracies = []
    for i in range(num_folds):
        # Fold i is the validation set; the remaining folds are stacked
        # back together to form the training set.
        X_test_cv = X_train_folds[i]
        X_train_cv = np.vstack(X_train_folds[:i] + X_train_folds[i+1:])

        y_test_cv = y_train_folds[i]
        y_train_cv = np.hstack(y_train_folds[:i] + y_train_folds[i+1:])

        classifier.train(X_train_cv, y_train_cv)
        dists_cv = classifier.compute_distances_no_loops(X_test_cv)

        y_test_pred = classifier.predict_labels(dists_cv, k)
        num_correct = np.sum(y_test_pred == y_test_cv)
        # accuracy on this fold; dividing by the fold size avoids relying on
        # a num_training variable defined elsewhere in the notebook
        accuracies.append(float(num_correct) / len(y_test_cv))
    k_to_accuracies[k] = accuracies
################################################################################
#                                 END OF YOUR CODE                             #
################################################################################

# Print out the computed accuracies
for k in sorted(k_to_accuracies):
    for accuracy in k_to_accuracies[k]:
        print('k = %d, accuracy = %f' % (k, accuracy))

# plot the raw accuracies observed for each k
for k in k_choices:
    accuracies = k_to_accuracies[k]
    plt.scatter([k] * len(accuracies), accuracies)


# plot the trend line with error bars that correspond to standard deviation
accuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())])
accuracies_std = np.array([np.std(v) for k,v in sorted(k_to_accuracies.items())])
plt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std)
plt.title('Cross-validation on k')
plt.xlabel('k')
plt.ylabel('Cross-validation accuracy')
plt.show()
           

Output:

k = 1, accuracy = 0.263000
k = 1, accuracy = 0.257000
k = 1, accuracy = 0.264000
k = 1, accuracy = 0.278000
k = 1, accuracy = 0.266000
k = 3, accuracy = 0.239000
k = 3, accuracy = 0.249000
k = 3, accuracy = 0.240000
k = 3, accuracy = 0.266000
k = 3, accuracy = 0.254000
k = 5, accuracy = 0.248000
k = 5, accuracy = 0.266000
k = 5, accuracy = 0.280000
k = 5, accuracy = 0.292000
k = 5, accuracy = 0.280000
k = 8, accuracy = 0.262000
k = 8, accuracy = 0.282000
k = 8, accuracy = 0.273000
k = 8, accuracy = 0.290000
k = 8, accuracy = 0.273000
k = 10, accuracy = 0.265000
k = 10, accuracy = 0.296000
k = 10, accuracy = 0.276000
k = 10, accuracy = 0.284000
k = 10, accuracy = 0.280000
k = 12, accuracy = 0.260000
k = 12, accuracy = 0.295000
k = 12, accuracy = 0.279000
k = 12, accuracy = 0.283000
k = 12, accuracy = 0.280000
k = 15, accuracy = 0.252000
k = 15, accuracy = 0.289000
k = 15, accuracy = 0.278000
k = 15, accuracy = 0.282000
k = 15, accuracy = 0.274000
k = 20, accuracy = 0.270000
k = 20, accuracy = 0.279000
k = 20, accuracy = 0.279000
k = 20, accuracy = 0.282000
k = 20, accuracy = 0.285000
k = 50, accuracy = 0.271000
k = 50, accuracy = 0.288000
k = 50, accuracy = 0.278000
k = 50, accuracy = 0.269000
k = 50, accuracy = 0.266000
k = 100, accuracy = 0.256000
k = 100, accuracy = 0.270000
k = 100, accuracy = 0.263000
k = 100, accuracy = 0.256000
k = 100, accuracy = 0.263000
           
[Figure: scatter of per-fold accuracies for each k, with the mean and standard-deviation error bars from plt.errorbar]

    Based on the cross-validation results, best_k = 10.
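
The same conclusion can be read off programmatically instead of from the plot; a small sketch using the k_to_accuracies dictionary computed above:

best_k = max(k_to_accuracies, key=lambda k: np.mean(k_to_accuracies[k]))
print(best_k)  # 10 for the run above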
