Foreword:
This post summarizes my implementations of several recommender-system algorithms. As a sophomore who has only just started studying recommender systems, I have surely made many mistakes in this write-up; corrections are welcome. Along the way I consulted many technical blogs as well as Dr. Xiang Liang's book 《推薦系統實踐》; wherever I draw on other people's summaries I try to cite them.
Please credit the source when reposting: http://blog.csdn.net/czzffff
Over the summer, our recommender-systems group reproduced several algorithms from the papers below on the MovieLens (1M) dataset, including:
Non-personalized models: Movie Average (MovieAvg) and Top Popular (TopPop);
Neighborhood models: Correlation Neighborhood (CorNgbr) and Non-normalized Cosine Neighborhood (NNCosNgbr);
Latent factor models: Asymmetric-SVD (AsySVD), SVD++, and PureSVD.
Reproduced paper: [RecSys '10, 2010, Cremonesi] Performance of Recommender Algorithms on Top-N Recommendation Tasks
Related papers: [KDD '08, 2008, Koren] Factorization Meets the Neighborhood: a Multifaceted Collaborative Filtering Model
[CARS 2011, Cremonesi] Top-N Recommendations on Unpopular Items with Contextual Knowledge
1. Dataset
(a) Name: MovieLens (1M)
(b) Description: 1,000,209 ratings of about 3,900 movies by 6,040 users. Released in 2000, it consists of three files: ratings.dat, users.dat, and movies.dat.
ratings.dat: user id, movie id, rating (1-5), timestamp; format: UserID::MovieID::Rating::Timestamp;
users.dat: user id, gender, age, occupation, zip code; format: UserID::Gender::Age::Occupation::Zip-code;
movies.dat: movie id, title, genres; format: MovieID::Title::Genres.
(c) Source: http://grouplens.org/datasets/movielens/
2. Data preprocessing
Merge the three files into a single file with one record per line, in the following format:
userId,movieId,rating,timestamp,age,gender,occupation,zipcode,movietitle,year,genres
Following the paper, 1.4% of the records are sampled at random as the probe set, the remainder form the training set, and all probe-set records with a rating of 5 are extracted as the test set. A sketch of this split follows.
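As a concrete illustration, the split can be scripted in a few lines. This is a minimal sketch assuming the merged whole.csv described above; the output file names and the fixed random seed are placeholders, not from the original post.

import random

random.seed(42)                        # fixed seed so the split is reproducible
with open('whole.csv') as f:
    records = f.read().splitlines()

probe, training = [], []
for rec in records:
    (probe if random.random() < 0.014 else training).append(rec)   # ~1.4% to probe

# the test set keeps every probe record whose rating (third field) is 5
test = [rec for rec in probe if rec.split(',')[2] == '5']

for name, rows in (('TrainingSet1.csv', training), ('TestSet1.csv', test)):
    with open(name, 'w') as out:
        out.write('\n'.join(rows) + '\n')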
3. Evaluation metrics
(a) Metrics: recall and precision
(b) Procedure:
1) Metrics: recall, precision
2) Steps:
a) Step 1: take one record (user ID, movie ID) from the test set;
b) Step 2: randomly select 1,000 movie IDs that user u has never rated, and add the test record's movie ID, giving 1,001 movie IDs in total;
c) Step 3: score these 1,001 movies with the scoring rule under evaluation, and sort them by predicted score in descending order to obtain a top-N recommendation list;
d) Step 4: recommend the first N movies of the list (N an integer from 0 to 20); if the test record's movie ID from step 1 appears among the first N, count it as a hit;
e) Step 5: take the next test-set record and repeat the steps above, accumulating the hit count for each N;
f) Step 6: recall is the number of hits divided by the number of test-set records;
g) Step 7: precision is recall divided by N.
(c) Formulas: with $T$ the test set and $\#\mathrm{hits}(N)$ the number of test records whose movie appears in the top $N$,
$$\mathrm{recall}(N) = \frac{\#\mathrm{hits}(N)}{|T|}, \qquad \mathrm{precision}(N) = \frac{\#\mathrm{hits}(N)}{N \cdot |T|} = \frac{\mathrm{recall}(N)}{N}.$$
A short sketch of the hit counting follows.
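This minimal sketch assumes you already have, for each test record, the 1-based rank of the held-out movie among its 1,001 candidates; the function name and argument layout are placeholders.

def recall_precision(ranks, num_test, max_n=20):
    # ranks: 1-based rank of the held-out movie for each test record
    hits = [0] * (max_n + 1)
    for rank in ranks:
        if rank <= max_n:
            for n in range(rank, max_n + 1):
                hits[n] += 1            # a hit at rank r counts for every N >= r
    recall = [h * 1.0 / num_test for h in hits]
    precision = [0.0] + [recall[n] / n for n in range(1, max_n + 1)]
    return recall, precision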
4. Algorithm descriptions (source code is included for some of the algorithms)
(A) Non-personalized models
(1) Algorithm: Movie Average (MovieAvg)
Steps:
a) Step 1: compute the mean rating of every movie that has been rated;
b) Step 2: use each movie's mean rating as its score and sort the candidate list in descending order.
(2) Algorithm: Top Popular (TopPop)
Steps:
a) Step 1: count how many ratings each movie has received;
b) Step 2: use the rating count as the score (the more ratings, the higher the rank) and sort the candidate list in descending order. A small sketch of both scorers follows.
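Both models reduce to a single pass over the training data. Here is a minimal sketch assuming the {user: {movie: rating}} layout used by the code later in this post; the function name is a placeholder.

from collections import defaultdict

def movie_avg_and_top_pop(train):
    total = defaultdict(float)
    count = defaultdict(int)
    for ratings in train.values():
        for movie, r in ratings.items():
            total[movie] += r
            count[movie] += 1
    movie_avg = {m: total[m] / count[m] for m in total}   # MovieAvg score
    top_pop = dict(count)                                 # TopPop score
    return movie_avg, top_pop

# ranking a candidate list is the same for both models:
# sorted(candidates, key=lambda m: scores.get(m, 0), reverse=True)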
(B) Neighborhood models
(1) Algorithm: Correlation Neighborhood (CorNgbr)
Steps:
a) Step 1: compute every user's baseline estimate $b_{ui}$ for every movie with the baseline-estimates formula;
b) Step 2: compute the Pearson correlation $s_{ij}$ between pairs of movies that share raters, and derive the shrunk similarity $d_{ij}$;
c) Step 3: fix $K$ and take the $K$ most similar movies for use in the item-based collaborative-filtering formula;
d) Step 4: combine the baselines and similarities from the steps above in the CorNgbr formula to obtain the prediction.
Formulas:
$$b_{ui} = \mu + b_u + b_i, \qquad d_{ij} = \frac{n_{ij}}{n_{ij} + \lambda}\, s_{ij},$$
$$\hat r_{ui} = b_{ui} + \frac{\sum_{j \in S^K(i;u)} d_{ij}\,(r_{uj} - b_{uj})}{\sum_{j \in S^K(i;u)} d_{ij}},$$
where $\mu$ is the global mean rating, $n_{ij}$ is the number of users who rated both $i$ and $j$, $\lambda$ is a shrinkage constant ($\lambda = 100$ in the code below), and $S^K(i;u)$ is the set of the $K$ movies most similar to $i$ that user $u$ has rated.
Notes: (1) computing the baseline $b_{ui}$
I have seen three ways to compute the baseline across two technical blog posts. 《SVD因式分解實作協同過濾-及源碼實作》, by Dustinsea, explains baselines in detail and gives two of them:
Method 1: estimate $b_u$ and $b_i$ directly from rating averages, e.g. bu = sum(Ru)/len(Ru), where Ru is the set of ratings given by user u, sum(Ru) their total, and len(Ru) the size of that set; likewise bi = sum(Ri)/len(Ri), where Ri is the set of ratings received by item i.
Method 2: treat the known ratings $r_{ui}$ as observations ($\mu$ can be computed directly) and solve for the per-user $b_u$ and per-item $b_i$; this is essentially AX = B with X standing for the $b_u$ and $b_i$, solvable by least squares (for instance with the optimization routines in Numerical Recipes: The Art of Scientific Computing). The simplest variant, though, is to estimate $b_u$ and $b_i$ directly from the observed ratings with the shrunk averages
$$b_i = \frac{\sum_{u \in R(i)} (r_{ui} - \mu)}{\lambda_2 + |R(i)|}, \qquad b_u = \frac{\sum_{i \in R(u)} (r_{ui} - \mu - b_i)}{\lambda_3 + |R(u)|},$$
where $R(i)$ is the set of users who rated $i$ and $R(u)$ the set of items rated by $u$ (the code below uses $\lambda_2 = 25$ and $\lambda_3 = 10$).
Method 3: solve for the baselines with stochastic gradient descent; see the post 《基於baseline和stochastic gradient descent的個性化推薦系統》:
Starting from the regularized objective
$$\min_{b_u, b_i} \sum_{(u,i)} \left(r_{ui} - \mu - b_u - b_i\right)^2 + \beta\left(\sum_u b_u^2 + \sum_i b_i^2\right),$$
take partial derivatives with respect to $b_u$ and $b_i$ to get the gradient updates (with $e_{ui} = r_{ui} - \mu - b_u - b_i$ and learning rate $\alpha$)
$$b_u \leftarrow b_u + \alpha\,(e_{ui} - \beta\, b_u), \qquad b_i \leftarrow b_i + \alpha\,(e_{ui} - \beta\, b_i);$$
stochastic gradient descent then drives the objective down over a fixed number of iterations.
I write the resulting bu and bi to files so the CorNgbr top-N code can reuse them. The implementation below is adapted from the code in 《基於baseline和stochastic gradient descent的個性化推薦系統》:
__author__ = '[email protected]'
'''
Created on 2014/7
@Author:ZackChan
@E-mail:[email protected]
@Homepage: http://blog.csdn.net/czzffff
'''
from math import sqrt

def load_data():
    """Load the training and test sets into {user: {item: rating}} dicts."""
    train = {}
    test = {}
    filename_train = 'G:\文獻\movielens\movieLens\movielens處理後資料\whole.csv'
    filename_test = 'G:\文獻\movielens\movieLens\Set\TestSet1.csv'
    for line in open(filename_train):
        (userId, itemId, rating, o1, o2, o3, o4, o5, o6, o7, o8) = line.strip().split(',')
        train.setdefault(userId, {})
        train[userId][itemId] = float(rating)
    for line in open(filename_test):
        (userId, itemId, rating, o1, o2, o3, o4, o5, o6, o7, o8) = line.strip().split(',')
        test.setdefault(userId, {})
        test[userId][itemId] = float(rating)
    return train, test

def calMean(train):
    """Global mean rating over the training set."""
    sta = 0
    num = 0
    for u in train.keys():
        for i in train[u].keys():
            sta += train[u][i]
            num += 1
    return sta * 1.0 / num

def initialBias(train, userNum, movieNum):
    """Initialize bu and bi with the shrunk averages of method 2
    (shrinkage constant 25 for items, 10 for users)."""
    mean = calMean(train)
    print("mean=" + str(mean))
    bu = {}
    bi = {}
    biNum = {}
    buNum = {}
    for x in range(movieNum + 1):            # make sure every movie id has an entry
        bi.setdefault(str(x), 0)
    u = 1
    while u < (userNum + 1):
        su = str(u)
        for i in train[su].keys():
            biNum.setdefault(i, 0)
            bi[i] += (train[su][i] - mean)
            biNum[i] += 1
        u += 1
    i = 1
    while i < (movieNum + 1):
        si = str(i)
        biNum.setdefault(si, 0)
        if biNum[si] >= 1:
            bi[si] = bi[si] * 1.0 / (biNum[si] + 25)
        else:
            bi[si] = 0.0
        i += 1
    u = 1
    while u < (userNum + 1):
        su = str(u)
        for i in train[su].keys():
            bu.setdefault(su, 0)
            buNum.setdefault(su, 0)
            bu[su] += (train[su][i] - mean - bi[i])
            buNum[su] += 1
        u += 1
    u = 1
    while u < (userNum + 1):
        su = str(u)
        buNum.setdefault(su, 0)
        if buNum[su] >= 1:
            bu[su] = bu[su] * 1.0 / (buNum[su] + 10)
        else:
            bu[su] = 0.0
        u += 1
    return bu, bi, mean

def sgd(train, test, userNum, movieNum):
    """Refine bu and bi by stochastic gradient descent, then dump them to files."""
    bu, bi, mean = initialBias(train, userNum, movieNum)
    file_bu = open('newbu1.csv', 'w')
    file_bi = open('newbi1.csv', 'w')
    alpha1 = 0.002                           # learning rate
    beta1 = 0.1                              # regularization
    slowRate = 0.99                          # learning-rate decay
    step = 0
    preRmse = 1000000000.0
    nowRmse = 0.0
    while step < 200:
        rmse = 0.0
        n = 0
        for u in train.keys():
            for i in train[u].keys():
                pui = 1.0 * (mean + bu[u] + bi[i])
                eui = train[u][i] - pui
                rmse += pow(eui, 2)
                n += 1
                bu[u] += alpha1 * (eui - beta1 * bu[u])   # gradient step on bu
                bi[i] += alpha1 * (eui - beta1 * bi[i])   # gradient step on bi
        nowRmse = sqrt(rmse * 1.0 / n)
        print("step: %d Rmse: %s" % (step + 1, nowRmse))
        if nowRmse < preRmse:
            preRmse = nowRmse
        alpha1 *= slowRate
        step += 1
    # write bu and bi to files for later use by the CorNgbr code
    for u in train.keys():
        file_bu.write(str(u) + ',' + str(bu[u]) + '\n')
    for j in range(1, movieNum + 1):
        file_bi.write(str(j) + ',' + str(bi[str(j)]) + '\n')
    file_bu.close()
    file_bi.close()
    return bu, bi, mean

def calRmse(test, bu, bi, mean):
    """RMSE of the baseline predictor on the test set."""
    rmse = 0.0
    n = 0
    for u in test.keys():
        for i in test[u].keys():
            pui = 1.0 * (mean + bu[u] + bi[i])
            eui = pui - test[u][i]
            rmse += pow(eui, 2)
            n += 1
    return sqrt(rmse * 1.0 / n)

if __name__ == "__main__":
    train, test = load_data()
    bu, bi, mean = sgd(train, test, 6040, 3952)
    print('the Rmse of the test set is: %s' % calRmse(test, bu, bi, mean))
CorNgbr implementation: it is best to compute the item similarities itemSim first and write them to a file, because the similarity computation takes about 6 GB of RAM; once the similarities are read back from the file, the code below runs in about 3 GB. Parts of the code are adapted from the post 《基於neighborhood models(item-based) 的個性化推薦系統》:
__author__ = '[email protected]'
'''
Created on 2014/7
@Author:ZackChan
@E-mail:[email protected]
@Homepage: http://blog.csdn.net/czzffff
'''
from math import fabs, sqrt
import random

def load_data():
    """Training set as {user: {item: rating}}; test set as (user, item) pairs,
    so users with several test records keep all of them."""
    train = {}
    test = []
    filename_train = 'D:\python341\MyCorNgbr\TrainingSet1.csv'
    filename_test = 'D:\python341\MyCorNgbr\TestSet1.csv'
    for line in open(filename_train):
        (userId, itemId, rating, o1, o2, o3, o4, o5, o6, o7, o8) = line.strip().split(',')
        train.setdefault(userId, {})
        train[userId][itemId] = float(rating)
    for line in open(filename_test):
        (userId, itemId, rating, o1, o2, o3, o4, o5, o6, o7, o8) = line.strip().split(',')
        test.append((userId, itemId))
    numtest = len(test)
    print("testnumber")
    print(numtest)
    return train, test, numtest

def load_unrated():
    """For each user, a shuffled list of movie ids the user has not rated."""
    unrated = {}
    filename_unrated = 'D:\python341\MyCorNgbr\without_rated.csv'
    for line in open(filename_unrated):
        list1 = line.strip().split(',')
        list2 = list1[1:]
        random.shuffle(list2)
        unrated.setdefault(list1[0], list2)
    return unrated

def load_bui():
    """Read back the bu/bi baselines produced by the SGD script above."""
    bu = {}
    bi = {}
    mean = 3.5813100089534955   # training-set mean printed by the baseline script
    filename_bu = 'D:\python341\MyCorNgbr\_newbu1.csv'
    filename_bi = 'D:\python341\MyCorNgbr\_newbi1.csv'
    for linebu in open(filename_bu):
        (userId, valbu) = linebu.strip().split(',')
        bu[userId] = float(valbu)
    for linebi in open(filename_bi):
        (movieId, valbi) = linebi.strip().split(',')
        bi[movieId] = float(valbi)
    return bu, bi, mean

def initial(train):
    """Compute the shrunk Pearson similarity for every pair of co-rated movies
    and write it to sij.csv (this is the ~6 GB step)."""
    filename_sij = 'D:\python341\MyCorNgbr\sij.csv'
    file_sij = open(filename_sij, 'w')
    average = {}    # per-movie mean rating
    Sij = {}        # Sij[i][j] = list of users who rated both i and j
    N = {}          # per-movie rating count
    for u in train.keys():
        for i in train[u].keys():
            average.setdefault(i, 0)
            average[i] += train[u][i]
            N.setdefault(i, 0)
            N[i] += 1
            Sij.setdefault(i, {})
            for j in train[u].keys():
                if i == j:
                    continue
                Sij[i].setdefault(j, [])
                Sij[i][j].append(u)
    for i in average.keys():
        average[i] = average[i] / N[i]
    pearson = {}
    itemSim = {}
    for i in Sij.keys():
        pearson.setdefault(i, {})
        itemSim.setdefault(i, {})
        for j in Sij[i].keys():
            pearson[i][j] = 1
            part1 = 0
            part2 = 0
            part3 = 0
            for u in Sij[i][j]:
                part1 += (train[u][i] - average[i]) * (train[u][j] - average[j])
                part2 += pow(train[u][i] - average[i], 2)
                part3 += pow(train[u][j] - average[j], 2)
            if part1 != 0:
                pearson[i][j] = part1 / sqrt(part2 * part3)
            # shrink by the number of common raters (lambda = 100)
            itemSim[i][j] = fabs(pearson[i][j] * len(Sij[i][j]) / (len(Sij[i][j]) + 100))
            file_sij.write(str(i) + ',' + str(j) + ',' + str(itemSim[i][j]) + '\n')
    file_sij.close()
    return itemSim, average

def load_itemSim():
    """Read the precomputed similarities back (the ~3 GB path)."""
    itemSim = {}
    filename_itemSim = 'D:\python341\MyCorNgbr\sij.csv'
    for line in open(filename_itemSim):
        (item1, item2, sim) = line.strip().split(',')
        itemSim.setdefault(item1, {})
        itemSim[item1][item2] = float(sim)
    return itemSim

def CorNgbrModels(train, test, itemSim, mean, average, bu, bi, unrated, numtest):
    """Leave-one-out top-N evaluation of the CorNgbr predictor."""
    arr = [0] * 30                       # arr[N] = hits within the top N
    for (u, target) in test:
        candidates = unrated[u][:1000]   # 1,000 unrated movies ...
        candidates.append(target)        # ... plus the held-out test movie
        scores = {}
        for i in candidates:
            scores[i] = mean + bu[u] + bi[i]        # baseline part b_ui
            stat = 0
            stat2 = 0
            for j in train[u].keys():
                if i in itemSim and j in itemSim[i]:
                    stat += (train[u][j] - mean - bu[u] - bi[j]) * itemSim[i][j]
                    stat2 += itemSim[i][j]
            if stat > 0:                 # also guards against stat2 == 0
                scores[i] += stat * 1.0 / stat2
        sorted_pui = sorted(scores.items(), key=lambda x: x[1], reverse=True)  # sort by score
        listnum = 1
        for k, v in sorted_pui:
            if k == target:
                break
            listnum = listnum + 1
        while listnum <= 20:             # a hit at rank r counts for every N >= r
            arr[listnum] += 1
            listnum += 1
    for temp in arr[:21]:
        print(temp)
    temp = 0
    while temp <= 20:
        arr[temp] = 1.0 * arr[temp] / numtest   # recall(N)
        print(arr[temp])
        temp += 1
    file_result = open('result.csv', 'w')
    file_result.write(',')
    for i in range(1, 21):
        file_result.write(str(i) + ',')
    file_result.write('\n')
    for j in arr[:21]:
        file_result.write(str(j) + ',')
    file_result.write('\n')
    file_result.close()
    return

if __name__ == '__main__':
    train, test, numtest = load_data()
    unrated = load_unrated()
    bu, bi, mean = load_bui()
    itemSim, average = initial(train)
    CorNgbrModels(train, test, itemSim, mean, average, bu, bi, unrated, numtest)
The recall of the code above still falls about 0.1 short of the paper's, so it will need further tuning; treat it as a reference only.
(2) Algorithm: Non-normalized Cosine Neighborhood (NNCosNgbr)
Steps:
a) Step 1: compute every user's baseline estimate $b_{ui}$ for every movie with the baseline-estimates formula;
b) Step 2: compute the cosine similarity $s_{ij}$ between pairs of movies that share raters, and derive the shrunk similarity $d_{ij}$;
c) Step 3: fix $K$ and take the $K$ most similar movies for use in the item-based collaborative-filtering formula;
d) Step 4: combine the baselines and similarities from the steps above in the NNCosNgbr formula to obtain the prediction.
Formulas:
$$\hat r_{ui} = b_{ui} + \sum_{j \in S^K(i;u)} d_{ij}\,(r_{uj} - b_{uj}),$$
with $d_{ij}$ the shrunk cosine similarity; unlike CorNgbr, the weighted sum is not divided by $\sum_j d_{ij}$, which is what "non-normalized" means here. A sketch follows below.
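The original post includes no code for this model, so here is a minimal sketch under the same data layout as the CorNgbr code above (train as a {user: {movie: rating}} dict, baselines bu/bi, global mean). The function names, the shrinkage constant lam, and the neighborhood size k are illustrative assumptions, and the pairwise loop is as memory-hungry as the CorNgbr version.

from math import sqrt

def cosine_shrunk(train, lam=100):
    # shrunk cosine similarity d_ij, keyed by the (i, j) movie pair (illustrative)
    dot, norm, common = {}, {}, {}
    for ratings in train.values():
        for i, ri in ratings.items():
            norm[i] = norm.get(i, 0.0) + ri * ri
            for j, rj in ratings.items():
                if i == j:
                    continue
                dot[(i, j)] = dot.get((i, j), 0.0) + ri * rj
                common[(i, j)] = common.get((i, j), 0) + 1
    return {(i, j): (d / sqrt(norm[i] * norm[j])) * common[(i, j)] / (common[(i, j)] + lam)
            for (i, j), d in dot.items()}

def predict_nncosngbr(u, i, train, sim, bu, bi, mean, k=200):
    # b_ui plus a *non-normalized* weighted sum over the k most similar rated items
    neighbors = sorted(((sim.get((i, j), 0.0), j) for j in train[u]), reverse=True)[:k]
    return (mean + bu[u] + bi[i]
            + sum(d * (train[u][j] - mean - bu[u] - bi[j]) for d, j in neighbors if d > 0))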
(C) Latent factor models
(1) Algorithm: Asymmetric-SVD (AsySVD)
Steps:
a) Step 1: write down the loss function from the prediction formula;
b) Step 2: initialize the p and q matrices;
c) Step 3: iterate with stochastic gradient descent to obtain the final p and q;
d) Step 4: compute the users' predicted movie ratings from p and q.
Formulas:
$$\hat r_{ui} = \mu + b_u + b_i + \mathbf{q}_i^{\mathsf T}\mathbf{p}_u,$$
$$C = \sum_{(u,i)} \left(r_{ui} - \hat r_{ui}\right)^2 + \beta\left(\lVert\mathbf{p}_u\rVert^2 + \lVert\mathbf{q}_i\rVert^2 + b_u^2 + b_i^2\right).$$
Notes:
See Section 2.5 (latent factor models) in Chapter 2 of Xiang Liang's 《推薦系統實踐》, or 《SVD因式分解實作協同過濾-及源碼實作》; both explain latent factor models in detail. Note that, as its function names (InitBiasLFM, TopNBiasSVD) suggest, the code below implements the biased latent factor model rather than Koren's full asymmetric variant.
__author__ = '[email protected]'
'''
Created on 2014/7
@Author:ZackChan
@E-mail:[email protected]
@Homepage: http://blog.csdn.net/czzffff
'''
import random
import math
from math import sqrt

def load_data():
    """Training and test sets as lists of (user, movie, rating) tuples."""
    filename_train = 'G:\文獻\movielens\movieLens\Set\TrainingSet1.csv'
    filename_test = 'G:\文獻\movielens\movieLens\Set\TestSet1.csv'
    trainlist = []
    testlist = []
    numtest = 0
    numtrain = 0
    sumtrain = 0
    for line in open(filename_train):
        (userId, movieId, rating, o1, o2, o3, o4, o5, o6, o7, o8) = line.strip().split(',')
        trainlist.append((userId, movieId, float(rating)))
        sumtrain += float(rating)
        numtrain += 1
    mean = sumtrain * 1.0 / numtrain          # global mean rating
    print("mean = " + str(mean))
    for line in open(filename_test):
        (userId, movieId, rating, o1, o2, o3, o4, o5, o6, o7, o8) = line.strip().split(',')
        testlist.append((userId, movieId, float(rating)))
        numtest += 1
    print("testnumber:" + str(numtest))
    return trainlist, testlist, numtest, mean

def load_unrated():
    """For each user, a shuffled list of movie ids the user has not rated."""
    unrated = {}
    filename_unrated = 'G:\文獻\movielens\movieLens\Set\without_rated.csv'
    for line in open(filename_unrated):
        list1 = line.strip().split(',')
        list2 = list1[1:]
        random.shuffle(list2)
        unrated.setdefault(list1[0], list2)
    return unrated

def InitBiasLFM(train, F):
    """Zero biases; random p_u and q_i scaled by 1/sqrt(F)."""
    p = dict()
    q = dict()
    bu = dict()
    bi = dict()
    for u, i, rui in train:
        bu[u] = 0
        bi[i] = 0
        if u not in p:
            p[u] = [random.random() / math.sqrt(F) for x in range(0, F)]
        if i not in q:
            q[i] = [random.random() / math.sqrt(F) for x in range(0, F)]
    return p, q, bu, bi

def Predict(u, i, p, q, bu, bi, mean):
    """r_hat = mean + bu + bi + p_u . q_i, falling back to the mean for unseen ids."""
    if u in bu and i in bi:
        ret = mean + bu[u] + bi[i]
    else:
        ret = mean
    if u in p and i in q:
        ret += sum(p[u][f] * q[i][f] for f in range(0, len(p[u])))
    return ret

def LearningBiasLFM(train, F, n, alpha, beta, mean):
    """Stochastic gradient descent on the biased latent factor model."""
    p, q, bu, bi = InitBiasLFM(train, F)
    for step in range(0, n):
        rmse = 0.0          # reset per iteration so the printed RMSE is per-step
        num = 0
        for u, i, rui in train:
            pui = Predict(u, i, p, q, bu, bi, mean)
            eui = rui - pui
            rmse += pow(eui, 2)
            num += 1
            bu[u] += alpha * (eui - beta * bu[u])
            bi[i] += alpha * (eui - beta * bi[i])
            for f in range(0, F):
                p[u][f] += alpha * (q[i][f] * eui - beta * p[u][f])
                q[i][f] += alpha * (p[u][f] * eui - beta * q[i][f])
        alpha *= 0.9        # decay the learning rate
        rmse = sqrt(rmse * 1.0 / num)
        print(str(step) + ':rmse = ' + str(rmse))
    return p, q, bu, bi

def TestRMSE(testlist, p, q, bu, bi, mean):
    num = 0
    rmse = 0
    for u, i, rui in testlist:
        pui = Predict(u, i, p, q, bu, bi, mean)
        rmse += pow((pui - rui), 2)
        num += 1
    return sqrt(rmse * 1.0 / num)

def TopNBiasSVD(testlist, p, q, bu, bi, mean, unrated, numtest):
    """Leave-one-out top-N evaluation (same protocol as CorNgbr above)."""
    arr = [0] * 30                       # arr[N] = hits within the top N
    for u, target, rui in testlist:
        candidates = unrated[u][:1000]   # 1,000 unrated movies ...
        candidates.append(target)        # ... plus the held-out test movie
        scores = {}
        for i in candidates:             # note: a fresh name, so the target is not clobbered
            scores[i] = Predict(u, i, p, q, bu, bi, mean)
        sorted_pui = sorted(scores.items(), key=lambda x: x[1], reverse=True)  # sort by score
        listnum = 1
        for k, v in sorted_pui:
            if k == target:
                break
            listnum += 1
        while listnum <= 20:             # a hit at rank r counts for every N >= r
            arr[listnum] += 1
            listnum += 1
    for temp in arr[:21]:
        print(temp)
    temp = 0
    while temp <= 20:
        arr[temp] = 1.0 * arr[temp] / numtest   # recall(N)
        print(arr[temp])
        temp += 1
    file_result = open('result.csv', 'a')
    file_result.write('\n,')
    for i in range(1, 21):
        file_result.write(str(i) + ',')
    file_result.write('\n')
    for j in arr[:21]:
        file_result.write(str(j) + ',')
    file_result.write('\n')
    file_result.close()
    return

if __name__ == '__main__':
    F = 50          # number of latent factors
    n = 10          # SGD iterations
    alpha = 0.02    # learning rate
    beta = 0.01     # regularization
    filename_rmse = 'rmse.txt'
    file_rmse = open(filename_rmse, 'a')
    unrated = load_unrated()
    trainlist, testlist, numtest, mean = load_data()
    p, q, bu, bi = LearningBiasLFM(trainlist, F, n, alpha, beta, mean)
    rmse = TestRMSE(testlist, p, q, bu, bi, mean)
    print('testSet:' + str(rmse))
    file_rmse.write('\n')
    file_rmse.write('F=' + str(F) + ',step=' + str(n) + ',alpha=' + str(alpha) + ',beta=' + str(beta) + ': ' + str(rmse))
    file_rmse.close()
    TopNBiasSVD(testlist, p, q, bu, bi, mean, unrated, numtest)
The recall of the code above still falls about 0.05 short of the paper's, so it will need further tuning; treat it as a reference only.
(2) Algorithm: PureSVD
Steps:
a) Step 1: factorize the rating matrix with the svdlibc package to obtain the orthonormal matrix Q;
b) Step 2: compute the users' predicted movie ratings with the formula below.
Formulas:
$$R \approx U \cdot \Sigma \cdot Q^{\mathsf T}, \qquad \hat r_{ui} = \mathbf{r}_u \cdot Q\,\mathbf{q}_i^{\mathsf T},$$
where $R$ is the user-movie rating matrix with unknown entries treated as zeros, $\mathbf{r}_u$ is user $u$'s row of $R$, and $\mathbf{q}_i$ is the $i$-th row of $Q$. A sketch using scipy in place of svdlibc follows.
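The original used svdlibc for the factorization; purely as an illustration, scipy's sparse SVD can play the same role. A minimal sketch, assuming integer-like user and movie ids and the {user: {movie: rating}} layout used earlier (the function name and parameters are placeholders):

from scipy.sparse import csr_matrix
from scipy.sparse.linalg import svds

def puresvd_scores(train, num_users, num_movies, factors=50):
    # build the sparse rating matrix R; unknown entries stay zero, as PureSVD requires
    rows, cols, vals = [], [], []
    for u, ratings in train.items():
        for i, r in ratings.items():
            rows.append(int(u) - 1)
            cols.append(int(i) - 1)
            vals.append(r)
    R = csr_matrix((vals, (rows, cols)), shape=(num_users, num_movies))
    _, _, qt = svds(R, k=factors)       # R ~= U * Sigma * Q^T
    Q = qt.T                            # rows of Q are the movie factors q_i
    # r_hat_ui = r_u . Q . q_i^T for every user/movie pair at once
    return R.dot(Q).dot(Q.T)

# scores = puresvd_scores(train, 6040, 3952)
# scores[u - 1, i - 1] ranks the candidates for user u; movies the user has
# already rated should be removed before building the top-N list.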