
DeepWalk Model Explained in Detail

1 Data Description

Dataset: the wiki dataset (2,405 web pages and 17,981 links between them)

Input sample format: node1 node2 <edge_weight>

Output: an embedding vector for each node


The node sequences produced by random walks are fed into a word2vec model; after training, each node is represented by the embedding that word2vec learns for it.

2 Code Walkthrough

Steps: ① build a directed graph ② sample random walks with DeepWalk ③ feed the walks into word2vec for training ④ take the trained word2vec model and evaluate the embeddings

⑤ split the data into x_train and x_test; fit a logistic regression classifier on the embeddings of x_train and their labels, then use it to predict the labels of x_test (the labels are the classes 0-16)

⑥ plot the result

① Build a directed graph

import networkx as nx

# Read the wiki edge list into a directed graph; each line is "node1 node2" plus an optional weight.
G = nx.read_edgelist('../data/wiki/Wiki_edgelist.txt',
                     create_using=nx.DiGraph(), nodetype=None, data=[('weight', int)])

② Sample walks with DeepWalk: starting from a node, the walk repeatedly steps to a randomly chosen neighbour until it reaches the given walk length; if the current node has no outgoing neighbours, the walk breaks out early.

def deepwalk_walk(self, walk_length, start_node):
    # One truncated random walk of at most walk_length nodes, starting from start_node.
    walk = [start_node]
    while len(walk) < walk_length:
        cur = walk[-1]
        cur_nbrs = list(self.G.neighbors(cur))
        if len(cur_nbrs) > 0:
            walk.append(random.choice(cur_nbrs))  # step to a uniformly random neighbour
        else:
            break  # dead end: the current node has no outgoing neighbours
    return walk
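
On top of this single-walk routine, DeepWalk repeats the walk several times from every node to build the training corpus. A minimal sketch of that outer loop (the name simulate_walks and the parameters num_walks/walk_length are assumptions, not necessarily the original repository's API):

import random

def simulate_walks(G, num_walks, walk_length):
    # Collect num_walks truncated random walks starting from every node of G.
    nodes = list(G.nodes())
    walks = []
    for _ in range(num_walks):
        random.shuffle(nodes)                      # visit start nodes in a random order on each pass
        for node in nodes:
            walk = [node]
            while len(walk) < walk_length:
                cur_nbrs = list(G.neighbors(walk[-1]))
                if not cur_nbrs:                   # dead end: stop this walk early
                    break
                walk.append(random.choice(cur_nbrs))
            walks.append(walk)
    return walks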
           

③ Feed the walk sequences into word2vec for training to obtain each node's embedding representation; a minimal training sketch follows.
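
A minimal sketch of this step using gensim (the hyper-parameter values here are assumptions; gensim < 4.0 uses size instead of vector_size):

from gensim.models import Word2Vec

# walks: the list of node sequences sampled above; word2vec treats each walk as a
# "sentence" and each node id as a "word".
sentences = [[str(node) for node in walk] for walk in walks]

w2v_model = Word2Vec(sentences,
                     vector_size=128,   # embedding dimension (2405 nodes -> 2405 x 128 matrix)
                     window=5,          # context window over the walk
                     sg=1,              # skip-gram, as in the DeepWalk paper
                     min_count=0,       # keep every node, even rarely visited ones
                     workers=4)

# dictionary: node id -> 128-dimensional embedding vector
embeddings = {node: w2v_model.wv[str(node)] for node in G.nodes()}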

④ The trained word2vec model yields a 2405 × 128 embedding matrix (one 128-dimensional vector per node), which is then evaluated with the classifier below.

import numpy
from sklearn.metrics import f1_score, accuracy_score
from sklearn.preprocessing import MultiLabelBinarizer

class Classifier(object):

    def __init__(self, embeddings, clf):
        self.embeddings = embeddings
        self.clf = TopKRanker(clf)
        self.binarizer = MultiLabelBinarizer(sparse_output=True)   # multi-hot encoding of the label sets


    def train(self, X, Y, Y_all):
        self.binarizer.fit(Y_all)
        X_train = [self.embeddings[x] for x in X]
        Y = self.binarizer.transform(Y)
        self.clf.fit(X_train, Y)

    def evaluate(self, X, Y):
        top_k_list = [len(l) for l in Y]
        Y_ = self.predict(X, top_k_list)
        Y = self.binarizer.transform(Y)
        averages = ["micro", "macro", "samples", "weighted"]
        results = {}
        for average in averages:
            results[average] = f1_score(Y, Y_, average=average)
        results['acc'] = accuracy_score(Y,Y_)
        print('-------------------')
        print(results)
        return results

    def predict(self, X, top_k_list):
        X_ = numpy.asarray([self.embeddings[x] for x in X])
        Y = self.clf.predict(X_, top_k_list=top_k_list)
        return Y

    def split_train_evaluate(self, X, Y, train_precent, seed=0):
        state = numpy.random.get_state()  # save the RNG state so it can be restored afterwards

        training_size = int(train_precent * len(X))
        numpy.random.seed(seed)
        shuffle_indices = numpy.random.permutation(numpy.arange(len(X)))  # shuffle the sample order
        X_train = [X[shuffle_indices[i]] for i in range(training_size)]
        Y_train = [Y[shuffle_indices[i]] for i in range(training_size)]
        X_test = [X[shuffle_indices[i]] for i in range(training_size, len(X))]
        Y_test = [Y[shuffle_indices[i]] for i in range(training_size, len(X))]
        # everything above just shuffles the data and splits it into train / test
        self.train(X_train, Y_train, Y)  # converts labels to multi-hot and X_train rows to embeddings
        numpy.random.set_state(state)
        return self.evaluate(X_test, Y_test)
           

⑤ First call the split_train_evaluate function, which

(1) shuffles the data, (2) trains the logistic regression (one-vs-rest) model on the embedded x_train and the multi-hot encoded Y,

(3) passes the test data x_test through the logistic regression model and reports the F1 scores for evaluation.

Concretely, the test data is fed into the logistic regression model, the top-k most probable classes are selected and set to 1,

and then the true labels are transformed into the same multi-hot form so the F1 scores can be computed.

from sklearn.multiclass import OneVsRestClassifier

class TopKRanker(OneVsRestClassifier):
    def predict(self, X, top_k_list):
        probs = numpy.asarray(super(TopKRanker, self).predict_proba(X))  # shape N x 17: one probability per label
        all_labels = []
        for i, k in enumerate(top_k_list):
            probs_ = probs[i, :]
            labels = self.classes_[probs_.argsort()[-k:]].tolist()
            probs_[:] = 0
            probs_[labels] = 1
            all_labels.append(probs_)
        return numpy.asarray(all_labels)

def evaluate(self, X, Y):
    top_k_list = [len(l) for l in Y]
    Y_ = self.predict(X, top_k_list)
    Y = self.binarizer.transform(Y)  # convert the true labels to multi-hot form
    averages = ["micro", "macro", "samples", "weighted"]
    results = {}
    for average in averages:
        results[average] = f1_score(Y, Y_, average=average)
    results['acc'] = accuracy_score(Y, Y_)
    print('-------------------')
    print(results)
    return results
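
Putting it together, the evaluation can be driven roughly like this (read_node_label and the label-file path are assumptions based on the dataset layout described above):

from sklearn.linear_model import LogisticRegression

# X: list of node ids, Y: list of label lists (each entry holds labels drawn from the classes 0-16)
X, Y = read_node_label('../data/wiki/wiki_labels.txt')

clf = Classifier(embeddings=embeddings, clf=LogisticRegression())
clf.split_train_evaluate(X, Y, train_precent=0.8)   # 80% train / 20% test, prints the F1 scores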
           

⑥ Plot the result

Reduce the embeddings to two dimensions with t-SNE, pair each reduced vector with its class from wiki_labels, and draw a 2-D scatter plot, as sketched below.
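
A minimal sketch of this plotting step (the function name plot_embeddings and the colouring scheme are assumptions):

import matplotlib.pyplot as plt
import numpy as np
from sklearn.manifold import TSNE

def plot_embeddings(embeddings, X, Y):
    # X: node ids, Y: label lists; colour each point by its first label
    emb_matrix = np.array([embeddings[x] for x in X])
    emb_2d = TSNE(n_components=2).fit_transform(emb_matrix)   # 128-d -> 2-d

    color_idx = {}
    for i, labels in enumerate(Y):
        color_idx.setdefault(labels[0], []).append(i)

    for label, idx in sorted(color_idx.items()):
        plt.scatter(emb_2d[idx, 0], emb_2d[idx, 1], label=label, s=10)
    plt.legend()
    plt.show()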


Reference:

https://zhuanlan.zhihu.com/p/56380812
