Machine Learning in Action: Regression

The previous posts focused on classification; starting with this one, we turn to regression. This post introduces several commonly used numeric regression algorithms.

1. Linear Regression

Linear regression fits a straight line (a hyperplane, in general) to the data.

Squared-error loss function:

$$\sum_{i=1}^{m}\left(y_i - x_i^{T}w\right)^2$$

Regression coefficients (the closed-form minimizer of the loss above):

$$\hat{w} = \left(X^{T}X\right)^{-1}X^{T}y$$
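As a quick sanity check, the closed-form solution can be compared against NumPy's built-in least-squares solver. A minimal sketch with made-up data (the variable names are illustrative):

import numpy as np

X = np.random.rand(20, 3)                        # hypothetical data: 20 samples, 3 features
y = np.random.rand(20)
w_normal = np.linalg.inv(X.T @ X) @ X.T @ y      # the closed-form solution above
w_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)  # NumPy's least-squares solver
print(np.allclose(w_normal, w_lstsq))            # True (up to numerical precision)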

Core implementation:

from numpy import *

def standRegres(xArr, yArr):
    xMat = mat(xArr); yMat = mat(yArr).T
    xTx = xMat.T * xMat
    if linalg.det(xTx) == 0.0:          # X^T X must be invertible
        print("This matrix is singular, cannot do inverse")
        return
    ws = xTx.I * (xMat.T * yMat)        # w = (X^T X)^-1 X^T y
    return ws
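A minimal usage sketch on synthetic data (the true line 3.0 + 1.7x is made up for illustration; the book loads its data from a text file instead):

import numpy as np

X = np.hstack([np.ones((100, 1)), np.random.rand(100, 1)])  # bias column plus one feature
y = 3.0 + 1.7 * X[:, 1] + 0.1 * np.random.randn(100)        # true line plus noise
ws = standRegres(X.tolist(), y.tolist())
print(ws.T)                                                  # should be close to [[3.0, 1.7]]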
           

2. Locally Weighted Linear Regression

Because plain linear regression tends to underfit, locally weighted linear regression (LWLR) is introduced: each training sample receives a weight that depends on its distance from the point being predicted, so nearby samples influence the fit more than distant ones.

To express these weights a kernel is used; the most common choice is the Gaussian kernel:

$$w(i,i) = \exp\left(\frac{\left\lVert x^{(i)} - x\right\rVert^{2}}{-2k^{2}}\right)$$

(Figure: the relationship between the kernel width k and the weights w; smaller k concentrates the weight on points close to the query.)
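The curve in that figure can be reproduced numerically. A small sketch showing how the Gaussian weight falls off with distance (the k values 0.5 and 0.1 are arbitrary):

import numpy as np

d = np.linspace(0.0, 1.0, 5)              # distances from the query point
for k in (0.5, 0.1):
    print(k, np.exp(-d**2 / (2 * k**2)))  # weights decay faster for smaller k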

Regression coefficients (the weighted least-squares solution):

$$\hat{w} = \left(X^{T}WX\right)^{-1}X^{T}Wy$$

Core implementation:

def lwlr(testPoint, xArr, yArr, k=1.0):
    xMat = mat(xArr); yMat = mat(yArr).T
    m = shape(xMat)[0]
    weights = mat(eye((m)))                 # diagonal weight matrix, one entry per sample
    for j in range(m):                      # next 2 lines create weights matrix
        diffMat = testPoint - xMat[j,:]
        weights[j,j] = exp(diffMat*diffMat.T/(-2.0*k**2))   # Gaussian kernel weight
    xTx = xMat.T * (weights * xMat)
    if linalg.det(xTx) == 0.0:
        print("This matrix is singular, cannot do inverse")
        return
    ws = xTx.I * (xMat.T * (weights * yMat))    # w = (X^T W X)^-1 X^T W y
    return testPoint * ws

def lwlrTest(testArr, xArr, yArr, k=1.0):  # loops over all the data points and applies lwlr to each one
    m = shape(testArr)[0]
    yHat = zeros(m)
    for i in range(m):
        yHat[i] = lwlr(testArr[i], xArr, yArr, k)
    return yHat

def lwlrTestPlot(xArr, yArr, k=1.0):  # same thing as lwlrTest except it sorts X first
    yHat = zeros(shape(yArr))         # easier for plotting
    xCopy = mat(xArr)
    xCopy.sort(0)
    for i in range(shape(xArr)[0]):
        yHat[i] = lwlr(xCopy[i], xArr, yArr, k)
    return yHat, xCopy
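A hedged usage sketch on synthetic data (the sine-shaped target is made up for illustration): fitting every training point with decreasing k shows the training error shrinking, which is exactly the overfitting risk that comes with a very narrow kernel.

import numpy as np

X = np.hstack([np.ones((200, 1)), np.random.rand(200, 1)])  # bias column plus one feature
y = np.sin(3.0 * X[:, 1]) + 0.1 * np.random.randn(200)      # nonlinear target plus noise
for k in (1.0, 0.1, 0.01):
    yHat = lwlrTest(X.tolist(), X.tolist(), y.tolist(), k)  # local fit at each training point
    print(k, ((y - yHat)**2).sum())   # training error drops as k shrinks; tiny k may be unstable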
           

3. Ridge Regression and Stepwise Linear Regression

What if there are more features than samples? Then $X^{T}X$ is not full rank and cannot be inverted, so ordinary least squares breaks down. Shrinkage methods were introduced to handle this case, and ridge regression is one of them. Simply put, ridge regression adds $\lambda I$ to $X^{T}X$ so that the resulting matrix is full rank and invertible. Ridge regression also deliberately introduces some bias into the estimate in exchange for lower variance, which can yield a better estimate overall. The penalty controlled by $\lambda$ constrains the size of the coefficients $w$; by penalizing large coefficients it shrinks unimportant parameters toward zero, a technique known in statistics as shrinkage.

Regression coefficients:

$$\hat{w} = \left(X^{T}X + \lambda I\right)^{-1}X^{T}y$$
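The key point can be checked in a couple of lines (the numbers are made up): when the columns of X are linearly dependent, $X^{T}X$ is singular, and adding $\lambda I$ restores invertibility.

import numpy as np

X = np.array([[1., 2.], [2., 4.], [3., 6.]])     # second column = 2 * first, so rank 1
print(np.linalg.det(X.T @ X))                     # 0: singular, cannot be inverted
print(np.linalg.det(X.T @ X + 0.1 * np.eye(2)))  # nonzero: invertible after adding lambda*I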

Core implementation. (The data must be standardized first. Think about when standardization is needed: whenever a loss function or penalty forms a weighted sum over features measured on different scales.)

def rssError(yArr, yHatArr):  # yArr and yHatArr both need to be arrays
    return ((yArr - yHatArr)**2).sum()

def ridgeRegres(xMat, yMat, lam=0.2):
    xTx = xMat.T * xMat
    denom = xTx + eye(shape(xMat)[1]) * lam     # X^T X + lambda*I
    if linalg.det(denom) == 0.0:
        print("This matrix is singular, cannot do inverse")
        return
    ws = denom.I * (xMat.T * yMat)
    return ws

def ridgeTest(xArr, yArr):
    xMat = mat(xArr); yMat = mat(yArr).T
    yMean = mean(yMat, 0)
    yMat = yMat - yMean       # to eliminate X0 take mean off of Y
    # standardize X's
    xMeans = mean(xMat, 0)    # calc mean then subtract it off
    xVar = var(xMat, 0)       # calc variance of Xi then divide by it
    xMat = (xMat - xMeans) / xVar
    numTestPts = 30
    wMat = zeros((numTestPts, shape(xMat)[1]))
    for i in range(numTestPts):
        ws = ridgeRegres(xMat, yMat, exp(i - 10))   # sweep lambda on a log scale
        wMat[i,:] = ws.T
    return wMat

def regularize(xMat):  # standardize by columns
    inMat = xMat.copy()
    inMeans = mean(inMat, 0)   # calc mean then subtract it off
    inVar = var(inMat, 0)      # calc variance of Xi then divide by it
    inMat = (inMat - inMeans) / inVar
    return inMat
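A hedged usage sketch (random stand-in data; the book runs this on a real dataset): each row of the returned matrix holds the weights for one value of lambda = exp(i - 10), so plotting the rows shows the coefficients shrinking toward zero as lambda grows.

import numpy as np

X = np.random.rand(50, 8).tolist()   # stand-in data: 50 samples, 8 features
y = np.random.rand(50).tolist()
wMat = ridgeTest(X, y)               # 30 rows, one per lambda value
print(wMat.shape)                    # (30, 8)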
           

Forward stagewise regression is a greedy algorithm: at every iteration it changes a single coefficient by a small step eps, keeping whichever change reduces the error the most.

Pseudocode:

Standardize the data to zero mean and unit variance
Repeat numIt times:
    Set lowestError to +infinity
    For every feature:
        For increasing and decreasing the coefficient:
            Change one coefficient by eps to get a new W
            Compute the squared error under the new W
            If the error is lower than lowestError: set Wbest to the new W
    Update W to Wbest

Core implementation:
def stageWise(xArr, yArr, eps=0.01, numIt=100):
    xMat = mat(xArr); yMat = mat(yArr).T
    yMean = mean(yMat, 0)
    yMat = yMat - yMean     # can also regularize ys but will get smaller coef
    xMat = regularize(xMat)
    m, n = shape(xMat)
    returnMat = zeros((numIt, n))   # records ws after every iteration
    ws = zeros((n, 1)); wsTest = ws.copy(); wsMax = ws.copy()
    for i in range(numIt):
        print(ws.T)
        lowestError = inf
        for j in range(n):
            for sign in [-1, 1]:
                wsTest = ws.copy()
                wsTest[j] += eps * sign          # nudge one coefficient up or down
                yTest = xMat * wsTest
                rssE = rssError(yMat.A, yTest.A)
                if rssE < lowestError:           # keep the best single-step change
                    lowestError = rssE
                    wsMax = wsTest
        ws = wsMax.copy()
        returnMat[i,:] = ws.T
    return returnMat
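A hedged usage sketch (random stand-in data; eps and numIt keep the defaults restored above). The returned matrix records the coefficient vector after every iteration, which makes the greedy path easy to inspect:

import numpy as np

X = np.random.rand(100, 4).tolist()            # stand-in data: 100 samples, 4 features
y = np.random.rand(100).tolist()
wTrace = stageWise(X, y, eps=0.01, numIt=50)   # prints ws each iteration; one +/- eps step per pass
print(wTrace[-1])                               # the final coefficient vector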
           

4. Trading Off Bias and Variance

Cross-validating ridge regression over its range of lambda values trades bias against variance and reveals which features are important and which are not.

Implementation:

def crossValidation(xArr, yArr, numVal=10):
    m = len(yArr)
    indexList = list(range(m))
    errorMat = zeros((numVal, 30))   # create error mat: 30 columns, numVal rows
    for i in range(numVal):
        trainX = []; trainY = []
        testX = []; testY = []
        random.shuffle(indexList)
        for j in range(m):   # create training set based on first 90% of values in indexList
            if j < m * 0.9:
                trainX.append(xArr[indexList[j]])
                trainY.append(yArr[indexList[j]])
            else:
                testX.append(xArr[indexList[j]])
                testY.append(yArr[indexList[j]])
        wMat = ridgeTest(trainX, trainY)   # get 30 weight vectors from ridge
        for k in range(30):   # loop over all of the ridge estimates
            matTestX = mat(testX); matTrainX = mat(trainX)
            meanTrain = mean(matTrainX, 0)
            varTrain = var(matTrainX, 0)
            matTestX = (matTestX - meanTrain) / varTrain   # standardize test with training params
            yEst = matTestX * mat(wMat[k,:]).T + mean(trainY)   # test ridge results and store
            errorMat[i,k] = rssError(yEst.T.A, array(testY))
    meanErrors = mean(errorMat, 0)   # avg performance of the different ridge weight vectors
    minMean = float(min(meanErrors))
    bestWeights = wMat[nonzero(meanErrors == minMean)]
    # can unregularize to get model
    # when we regularized we wrote Xreg = (x - meanX) / var(x)
    # we can now write in terms of x, not Xreg:  x*w/var(x) - meanX/var(x) + meanY
    xMat = mat(xArr); yMat = mat(yArr).T
    meanX = mean(xMat, 0); varX = var(xMat, 0)
    unReg = bestWeights / varX
    print("the best model from Ridge Regression is:\n", unReg)
    print("with constant term: ", -1 * sum(multiply(meanX, unReg)) + mean(yMat))
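A hedged usage sketch (random stand-in data; the book applies this to a real price-prediction dataset):

import numpy as np

X = np.random.rand(200, 5).tolist()   # stand-in data: 200 samples, 5 features
y = np.random.rand(200).tolist()
crossValidation(X, y, numVal=10)      # prints the best ridge model mapped back to the raw scale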
           
