
Python notes: what is the essential difference between sklearn's r2_score and explained_variance_score?

python version 3.8.6

numpy version 1.19.2

sklearn version 0.23.2

Q: I know that r2_score represents the proportion of the total variance explained by the model. But how does explained_variance_score differ from it?

A: From the perspective of the formulas:

When the mean of the residuals is 0, the two are identical. Which one to use depends on whether you assume the residuals have zero mean.

Answered by CT Zhu:

1. First, an example where the residual mean is not 0:
import numpy as np
from sklearn import metrics

y_true = [3, -0.5, 2, 7]
y_pred = [2.5, 0.0, 2, 8]
print(metrics.explained_variance_score(y_true, y_pred))
print(metrics.r2_score(y_true, y_pred))

# Output:
0.9571734475374732
0.9486081370449679

# Note: here the mean of the residuals is not 0
print((np.array(y_true) - np.array(y_pred)).mean())
# Output:
-0.25
           
  • explained_variance_score and r^2 are in fact:

explained_variance_score = 1 - Var(Y_true - Y_pred) / Var(Y_true)

r2 = 1 - (ΣSquaredResiduals / N) / Var(Y_true) = 1 - ΣSquaredResiduals / (N * Var(Y_true))

The key point: Var(Y_true - Y_pred) = ΣSquaredResiduals/N - MeanError^2, i.e. the residual variance equals the mean squared residual minus the squared mean of the residuals. That squared-mean-error term is exactly what separates the two scores.
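This identity can be checked numerically; a minimal sketch reusing the example's arrays (np.var of the residuals is the mean squared residual minus the squared mean error):

```python
import numpy as np

y_true = np.array([3, -0.5, 2, 7])
y_pred = np.array([2.5, 0.0, 2, 8])
residuals = y_true - y_pred

# Population variance of the residuals...
lhs = np.var(residuals)
# ...equals mean squared residual minus squared mean error
rhs = (residuals ** 2).mean() - residuals.mean() ** 2
print(lhs, rhs)  # 0.3125 0.3125
```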
# The example above, implemented with numpy:
explained_variance_score = 1 - np.var(np.array(y_true) - np.array(y_pred)) / np.var(y_true)
r2 = 1 - ((np.array(y_true) - np.array(y_pred))**2).sum() / (4 * np.array(y_true).var())  # 4 = N, the sample count

print(explained_variance_score)
print(r2)

# Output:
0.9571734475374732
0.9486081370449679
           
1) Another way to read the r2 denominator 4 * np.array(y_true).var(): from R2 = 1 - Sum_of_Squares_for_Error / Sum_of_Squares_for_Total, the denominator should be the total sum of squares SST, i.e. 4 * np.array(y_true).var() = ((y - y.mean())**2).sum(), where y stands for np.array(y_true).

2) explained_variance_score = 1 - np.cov( np.array(y_pred)-np.array(y_true) )/np.cov(y_true)
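Both identities can be verified directly; a sketch reusing the example's arrays (for 1-D arrays of equal length, np.cov's (N-1) denominators cancel in the ratio):

```python
import numpy as np
from sklearn import metrics

y_true = np.array([3, -0.5, 2, 7])
y_pred = np.array([2.5, 0.0, 2, 8])

# 1) SST: N * population variance equals the sum of squared deviations from the mean
sst = ((y_true - y_true.mean()) ** 2).sum()
print(sst, len(y_true) * y_true.var())  # 29.1875 29.1875

# 2) np.cov uses ddof=1, but the (N-1) factors cancel between numerator and denominator
evs_cov = 1 - np.cov(y_pred - y_true) / np.cov(y_true)
print(float(evs_cov), metrics.explained_variance_score(y_true, y_pred))
```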

2. Now an example where the residual mean is 0:
y_true = [3, -0.5, 2, 7]
y_pred = [2.5, 0.0, 2, 7]

print((np.array(y_true) - np.array(y_pred)).mean())
# Output:
0.0

print(metrics.explained_variance_score(y_true, y_pred))
print(metrics.r2_score(y_true, y_pred))
# Output:
0.9828693790149893
0.9828693790149893
           
Note: for one-dimensional data, the only difference between covariance (np.cov) and variance (np.var) is the degrees of freedom; in other words, the former is the sample variance and the latter the population variance. For example:
a = [1, 2, 3, 45]
print(np.cov(a))
print(np.var(a)*len(a)/(len(a)-1))   # i.e. cov = sum of squared deviations/(n - 1), var = sum of squared deviations/n
 # Output:
 462.91666666666663
 462.9166666666667
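The same relationship can be expressed through np.var's ddof parameter, which makes the degrees-of-freedom difference explicit (a small sketch):

```python
import numpy as np

a = [1, 2, 3, 45]
# np.cov on 1-D data returns the sample variance (ddof=1),
# which is exactly np.var with ddof=1
print(float(np.cov(a)))
print(np.var(a, ddof=1))
```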
           

From the perspective of the difference in meaning, answered by Yahya:

  • First, R2, also known as the coefficient of determination:

– From the formula: Variance(true_y) * R2 = Variance(pred_y); clearly, the closer R2 is to 1, the better the fit.

– The meaning of R2, viewed from the least-squares angle (i.e., squared errors): it expresses what proportion of the variance of the actual y values is explained by the predicted y values.
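That reading can be made concrete with a sketch reusing the first example's data: R2 is one minus the residual sum of squares over the total sum of squares.

```python
import numpy as np
from sklearn import metrics

y_true = np.array([3, -0.5, 2, 7])
y_pred = np.array([2.5, 0.0, 2, 8])

sse = ((y_true - y_pred) ** 2).sum()          # unexplained (residual) sum of squares
sst = ((y_true - y_true.mean()) ** 2).sum()   # total sum of squares around the mean
r2 = 1 - sse / sst                            # fraction of variance explained
print(r2, metrics.r2_score(y_true, y_pred))   # 0.9486081370449679 (both)
```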