
NTU Hsuan-Tien Lin's "Machine Learning Foundations": Homework 1 in Python

Companion posts in this series cover Homework 2, Homework 3, and Homework 4 in Python.

Full code:

https://github.com/xjwhhh/LearningML/tree/master/MLFoundation

Follows and stars are welcome.

I referred to quite a few other posts while studying and writing this up, and my own understanding is limited, so if you find a mistake please point it out so we can learn and improve together.

## Question 15


Download the training data. Each line is one training example: the first four fields are features and the last is the label. Implement the PLA algorithm for classification, with w initialized to 0 and sign(0) = -1. After how many updates does the algorithm halt?

1. A constant feature must be added manually: X0 = 1

2. A point is classified correctly when y * (w . x) > 0 (the PLA condition)

3. The algorithm halts once every example is classified correctly
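The check in point 2 can be sketched as follows (a minimal illustration; the function name is mine, not from the assignment):

```python
import numpy as np

def is_misclassified(x, w, y):
    # correct classification requires y * (w . x) > 0;
    # since sign(0) = -1, w . x == 0 also counts as a mistake
    return y * np.dot(w, x) <= 0

x = np.array([1.0, 0.5, -0.2])    # X0 = 1 prepended manually
w = np.zeros(3)                   # w starts at zero
print(is_misclassified(x, w, 1))  # True: w . x = 0 is treated as a mistake
```

This is exactly why the code below uses `<= 0` rather than `< 0` in its update condition.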

The code:

import numpy


class NaiveCyclePLA(object):
    def __init__(self, dimension, count):
        self.__dimension = dimension
        self.__count = count

    # read the training file into feature and label matrices
    def train_matrix(self, path):
        x_train = numpy.zeros((self.__count, self.__dimension))
        y_train = numpy.zeros((self.__count, 1))
        with open(path) as training_set:
            for i, line in enumerate(training_set):
                # add the constant feature X0 = 1 manually
                x = [1]
                for item in line.split(' '):
                    if len(item.split('\t')) == 1:
                        x.append(float(item))
                    else:
                        # the last feature and the label are tab-separated
                        x.append(float(item.split('\t')[0]))
                        y_train[i, 0] = int(item.split('\t')[1].strip())
                x_train[i, :] = x
        return x_train, y_train

    def iteration_count(self, path):
        count = 0
        x_train, y_train = self.train_matrix(path)
        w = numpy.zeros((self.__dimension, 1))
        # cycle until every example is classified correctly
        while True:
            updated = False
            for i in range(self.__count):
                if numpy.dot(x_train[i, :], w)[0] * y_train[i, 0] <= 0:
                    w += y_train[i, 0] * x_train[i, :].reshape(self.__dimension, 1)
                    count += 1
                    updated = True
            if not updated:
                break
        return count


if __name__ == '__main__':
    perceptron = NaiveCyclePLA(5, 400)
    print(perceptron.iteration_count("hw1_15_train.dat"))
           

The algorithm halted after 45 updates.

## Question 16


Because the examples are visited in a different order, the number of PLA updates before convergence differs from run to run. This question asks us to shuffle the training examples, run PLA 2000 times, and report the average number of updates.

We only need to shuffle the training examples on top of the Question 15 code, using random.shuffle(random_list).
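random.shuffle reorders a list in place and returns None, so the shuffled rows stay in the same list object, e.g.:

```python
import random

random.seed(0)  # fixed seed just to make this demo repeatable
rows = [[1, 0.97, 0.10, 1], [1, 0.67, 0.43, -1], [1, 0.20, 0.58, 1]]
random.shuffle(rows)  # in place; do not write rows = random.shuffle(rows)
print(rows)           # same three rows, possibly in a new order
```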

The code:

import numpy
import random


class RandomPLA(object):
    def __init__(self, dimension, count):
        self.__dimension = dimension
        self.__count = count

    # read the file into a list of rows, then shuffle the rows
    def random_matrix(self, path):
        random_list = []
        with open(path) as training_set:
            for line in training_set:
                x = [1]  # constant feature X0 = 1
                for item in line.split(' '):
                    if len(item.split('\t')) == 1:
                        x.append(float(item))
                    else:
                        x.append(float(item.split('\t')[0]))
                        x.append(int(item.split('\t')[1].strip()))
                random_list.append(x)
        random.shuffle(random_list)
        return random_list

    def train_matrix(self, path):
        x_train = numpy.zeros((self.__count, self.__dimension))
        y_train = numpy.zeros((self.__count, 1))
        random_list = self.random_matrix(path)
        for i in range(self.__count):
            for j in range(self.__dimension):
                x_train[i, j] = random_list[i][j]
            y_train[i, 0] = random_list[i][self.__dimension]
        return x_train, y_train

    def iteration_count(self, path):
        count = 0
        x_train, y_train = self.train_matrix(path)
        w = numpy.zeros((self.__dimension, 1))
        while True:
            updated = False
            for i in range(self.__count):
                if numpy.dot(x_train[i, :], w)[0] * y_train[i, 0] <= 0:
                    w += y_train[i, 0] * x_train[i, :].reshape(self.__dimension, 1)
                    count += 1
                    updated = True
            if not updated:
                break
        return count


if __name__ == '__main__':
    total = 0
    for i in range(2000):
        perceptron = RandomPLA(5, 400)
        total += perceptron.iteration_count('hw1_15_train.dat')
    print(total / 2000.0)
           

About 40 updates on average.

## Question 17


The only difference from Question 16 is an extra learning-rate factor in the update of w, so only the iteration_count(self, path) method needs to change:

def iteration_count(self, path):
    count = 0
    x_train, y_train = self.train_matrix(path)
    w = numpy.zeros((self.__dimension, 1))
    while True:
        updated = False
        for i in range(self.__count):
            if numpy.dot(x_train[i, :], w)[0] * y_train[i, 0] <= 0:
                # the extra learning-rate factor
                w += 0.5 * y_train[i, 0] * x_train[i, :].reshape(self.__dimension, 1)
                count += 1
                updated = True
        if not updated:
            break
    return count
           

About 40 updates on average.
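That the result barely changes is expected: starting from w = 0, a constant learning rate only scales w, never the sign of w . x on any example, so the same mistakes occur in the same order and the update count is identical. A quick check on a toy dataset (the data and function here are illustrative, not from the assignment):

```python
import numpy as np

def pla_updates(x, y, eta):
    """Cyclic PLA; returns the number of updates until no mistakes remain."""
    w = np.zeros(x.shape[1])
    count = 0
    while True:
        mistake = False
        for i in range(len(y)):
            if y[i] * np.dot(x[i], w) <= 0:  # sign(0) = -1 counts as a mistake
                w += eta * y[i] * x[i]
                count += 1
                mistake = True
        if not mistake:
            return count

# tiny linearly separable toy set, with X0 = 1 prepended manually
x = np.array([[1.0, 2.0, 1.0],
              [1.0, -1.0, -2.0],
              [1.0, 3.0, 0.5],
              [1.0, -2.0, -1.0]])
y = np.array([1.0, -1.0, 1.0, -1.0])
print(pla_updates(x, y, 1.0), pla_updates(x, y, 0.5))  # the two counts are equal
```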

## Question 18


First download the training and test sets from the two URLs given in the problem, then run the pocket algorithm with 50 updates per run, repeat for 2000 runs, and compute the average error rate on the test set.

1. Unlike plain PLA, the pocket algorithm keeps updating w until the update budget is used up; after every update it checks on the training set whether the new line is the best one so far, and if so puts it in the pocket, so a single best line comes out at the end

2. The pocket algorithm's stopping condition is reaching the preset number of updates
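One implementation detail worth noting before the code: numpy's += updates w in place, so the pocketed weights must be a snapshot rather than a reference, which is why copy.deepcopy appears below. A minimal illustration:

```python
import copy
import numpy as np

w = np.zeros(3)
best_w = w                 # alias: best_w changes whenever w does
w += np.array([1.0, 2.0, 3.0])
print(best_w)              # [1. 2. 3.] -- the "pocket" was silently overwritten

w = np.zeros(3)
best_w = copy.deepcopy(w)  # independent snapshot (w.copy() would also work)
w += np.array([1.0, 2.0, 3.0])
print(best_w)              # [0. 0. 0.] -- the snapshot is preserved
```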

The code:

import numpy
import random
import copy


class Pocket(object):
    def __init__(self, dimension, train_count, test_count):
        self.__dimension = dimension
        self.__train_count = train_count
        self.__test_count = test_count

    def random_matrix(self, path):
        random_list = []
        with open(path) as training_set:
            for line in training_set:
                x = [1]  # constant feature X0 = 1
                for item in line.split(' '):
                    if len(item.split('\t')) == 1:
                        x.append(float(item))
                    else:
                        x.append(float(item.split('\t')[0]))
                        x.append(int(item.split('\t')[1].strip()))
                random_list.append(x)
        random.shuffle(random_list)
        return random_list

    def train_matrix(self, path):
        x_train = numpy.zeros((self.__train_count, self.__dimension))
        y_train = numpy.zeros((self.__train_count, 1))
        random_list = self.random_matrix(path)
        for i in range(self.__train_count):
            for j in range(self.__dimension):
                x_train[i, j] = random_list[i][j]
            y_train[i, 0] = random_list[i][self.__dimension]
        return x_train, y_train

    def iteration(self, path):
        count = 0
        x_train, y_train = self.train_matrix(path)
        w = numpy.zeros((self.__dimension, 1))
        best_count = self.__train_count
        best_w = numpy.zeros((self.__dimension, 1))

        # pocket algorithm: update the line (at most 50 times); after each
        # update, check against the whole training set whether it is the
        # best line seen so far
        for i in range(self.__train_count):
            if numpy.dot(x_train[i, :], w)[0] * y_train[i, 0] <= 0:
                w += 0.5 * y_train[i, 0] * x_train[i, :].reshape(self.__dimension, 1)
                # one more update spent
                count += 1
                num = 0
                # count training mistakes of the updated line
                for j in range(self.__train_count):
                    if numpy.dot(x_train[j, :], w)[0] * y_train[j, 0] <= 0:
                        num += 1
                if num < best_count:
                    best_count = num
                    best_w = copy.deepcopy(w)  # snapshot, since w mutates in place
                if count == 50:
                    break
        return best_w

    def test_matrix(self, test_path):
        x_test = numpy.zeros((self.__test_count, self.__dimension))
        y_test = numpy.zeros((self.__test_count, 1))
        with open(test_path) as test_set:
            for i, line in enumerate(test_set):
                x = [1]
                for item in line.split(' '):
                    if len(item.split('\t')) == 1:
                        x.append(float(item))
                    else:
                        x.append(float(item.split('\t')[0]))
                        y_test[i, 0] = int(item.split('\t')[1].strip())
                x_test[i, :] = x
        return x_test, y_test

    # error rate of the pocketed line on the test set
    def test_error(self, train_path, test_path):
        w = self.iteration(train_path)
        x_test, y_test = self.test_matrix(test_path)
        count = 0.0
        for i in range(self.__test_count):
            if numpy.dot(x_test[i, :], w)[0] * y_test[i, 0] <= 0:
                count += 1
        return count / self.__test_count


if __name__ == '__main__':
    average_error_rate = 0
    for i in range(2000):
        my_pocket = Pocket(5, 500, 500)
        average_error_rate += my_pocket.test_error('hw1_18_train.dat', 'hw1_18_test.dat')
    print(average_error_rate / 2000.0)
           

My run gave an average error rate of 0.13181799999999988.

## Question 19


Instead of the greedy pocket strategy, take w after the 50th update as the final line; run 2000 times and compute the average error rate on the test set.

Just remove the best-line check after each update in iteration(self, path), i.e. always keep the latest w:

def iteration(self, path):
    count = 0
    x_train, y_train = self.train_matrix(path)
    w = numpy.zeros((self.__dimension, 1))
    for i in range(self.__train_count):
        if numpy.dot(x_train[i, :], w)[0] * y_train[i, 0] <= 0:
            w += 0.5 * y_train[i, 0] * x_train[i, :].reshape(self.__dimension, 1)
            count += 1
        if count == 50:
            break
    return w
           

My run gave an average error rate of 0.3678069999999999.

## Question 20


If the update budget per pocket run is raised from 50 to 100, what is the average error rate on the test set?

Only the constant that checks the update budget in iteration(self, path) needs to change to 100:

def iteration(self, path):
    count = 0
    x_train, y_train = self.train_matrix(path)
    w = numpy.zeros((self.__dimension, 1))
    best_count = self.__train_count
    best_w = numpy.zeros((self.__dimension, 1))
    # pocket algorithm: update the line (at most 100 times); after each
    # update, check whether it is the best line seen so far
    for i in range(self.__train_count):
        if numpy.dot(x_train[i, :], w)[0] * y_train[i, 0] <= 0:
            w += 0.5 * y_train[i, 0] * x_train[i, :].reshape(self.__dimension, 1)
            count += 1
            num = 0
            for j in range(self.__train_count):
                if numpy.dot(x_train[j, :], w)[0] * y_train[j, 0] <= 0:
                    num += 1
            if num < best_count:
                best_count = num
                best_w = copy.deepcopy(w)
            if count == 100:
                break
    return best_w
           

My run gave an average error rate of 0.11375200000000021.
