
User Churn Prediction

For details on the dataset, see: User Churn Analysis

Data Preprocessing

Convert data types and handle missing values

import pandas as pd

# Convert data types:
# TotalCharges (total charges) and MonthlyCharges (monthly charges) should both
# be numeric; convert TotalCharges to numeric (unparseable entries become NaN)
data['TotalCharges'] = pd.to_numeric(data['TotalCharges'], errors='coerce')

# Handle missing values:
# fill missing TotalCharges with the corresponding MonthlyCharges value
data['TotalCharges'] = data['TotalCharges'].fillna(data['MonthlyCharges'])
           
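As a quick sanity check, the same two steps can be reproduced on a toy frame to confirm that the coercion plus `fillna` leaves no missing values. The column names match the dataset; the values here are made up:

```python
import pandas as pd

# Toy frame mimicking the two charge columns; ' ' stands in for the
# blank strings found in the raw TotalCharges column.
demo = pd.DataFrame({'MonthlyCharges': [29.85, 56.95, 42.30],
                     'TotalCharges': ['29.85', ' ', '1840.75']})

# Coerce to numeric: the blank string becomes NaN ...
demo['TotalCharges'] = pd.to_numeric(demo['TotalCharges'], errors='coerce')
# ... and the NaN is filled from MonthlyCharges.
demo['TotalCharges'] = demo['TotalCharges'].fillna(demo['MonthlyCharges'])

# No missing values remain.
```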

Feature Engineering

Standardize the data and drop columns that are not needed

# Save the customer IDs for the final output, then drop the ID column
data_id = data['customerID']
data = data.iloc[:, 1:]

# Inspect the unique values of each non-numeric feature
def uni(label):
    print(label, '-----', data[label].unique())

data_object = data.select_dtypes(['object'])
for col in data_object.columns:
    uni(col)
           

From the earlier analysis, 'No phone service' and 'No internet service' can be treated as equivalent to 'No'.

# Replace 'No phone service' with 'No'
data['MultipleLines'] = data['MultipleLines'].replace('No phone service', 'No')
data['MultipleLines'].value_counts()

# Replace 'No internet service' with 'No'
loc = ['OnlineSecurity', 'OnlineBackup', 'DeviceProtection', 'TechSupport', 'StreamingTV', 'StreamingMovies']
for i in loc:
    data[i] = data[i].replace('No internet service', 'No')
           

Binarize the features that take only two values, replacing them with 1 and 0:

# Male = 1, Female = 0
data['gender'] = data['gender'].replace({'Male': 1, 'Female': 0})

# Yes = 1, No = 0
loc = ['Partner', 'Dependents', 'PhoneService', 'MultipleLines', 'OnlineSecurity', 'OnlineBackup', 'DeviceProtection', 'TechSupport', 'StreamingTV', 'StreamingMovies', 'PaperlessBilling', 'Churn']
for i in loc:
    data[i] = data[i].replace({'Yes': 1, 'No': 0})
           

Standardize the numeric features

from sklearn.preprocessing import StandardScaler

data[['tenure']] = StandardScaler().fit_transform(data[['tenure']])
data[['MonthlyCharges']] = StandardScaler().fit_transform(data[['MonthlyCharges']])
data[['TotalCharges']] = StandardScaler().fit_transform(data[['TotalCharges']])
           
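The three calls above fit a separate scaler per column; `StandardScaler` also accepts several columns at once, which gives the same result because it standardizes each feature independently. A minimal illustration on made-up numbers:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Two toy columns with different scales.
X = np.array([[1.0, 10.0],
              [2.0, 20.0],
              [3.0, 30.0]])

# One scaler over both columns: each column is centered and scaled
# using its own mean and standard deviation.
X_scaled = StandardScaler().fit_transform(X)

# After scaling, each column has mean 0 and unit variance.
```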

One-hot encode the InternetService, Contract, and PaymentMethod columns

# Encode
InternetService = pd.get_dummies(data['InternetService'])
Contract = pd.get_dummies(data['Contract'])
PaymentMethod = pd.get_dummies(data['PaymentMethod'])
# Concatenate the dummy columns
train = pd.concat([data, InternetService, Contract, PaymentMethod], axis=1)
# Drop the original columns
train = train.drop(['InternetService', 'Contract', 'PaymentMethod'], axis=1)
           
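The same encode, concatenate, drop sequence can also be done in one step with the `columns` argument of `pd.get_dummies`, which expands the listed columns in place and prefixes the dummy names. Shown on a toy frame with invented values:

```python
import pandas as pd

demo = pd.DataFrame({'Contract': ['Month-to-month', 'One year', 'Two year'],
                     'tenure': [1, 12, 24]})

# get_dummies with columns= encodes the column and drops the
# original in a single call; dummies are prefixed 'Contract_'.
encoded = pd.get_dummies(demo, columns=['Contract'])
```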

Use the correlation coefficients to drop features with little influence on the prediction target

# The dataset is small, so corr() can compute the standard correlation
# coefficient (the "Pearson coefficient") between every pair of attributes
corr_matrix = train.corr()
abs(corr_matrix['Churn']).sort_values(ascending=False)
           

Decide case by case which low-correlation columns to keep or drop.


In this case, columns whose absolute correlation with Churn is below 0.1 are dropped.

# Drop the low-correlation columns
loc = ['gender', 'PhoneService', 'MultipleLines', 'StreamingMovies', 'StreamingTV', 'DeviceProtection', 'OnlineBackup', 'Mailed check']
train = train.drop(loc, axis=1)
           

Separate the target label from the features

train_x = train.drop(['Churn'],axis=1)
train_y = train['Churn']
           
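Note that from here on the models are fit and scored on the same data, so the scores reported below are training-set scores. If a held-out evaluation is wanted, `train_test_split` provides one; the 80/20 split and `random_state` below are arbitrary illustrative choices, shown on toy stand-ins for `train_x`/`train_y`:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy stand-ins for train_x / train_y.
X = np.arange(20).reshape(10, 2)
y = np.array([0, 1] * 5)

# stratify=y keeps the churn ratio the same in both splits.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)
```

Fitting on `X_tr` and scoring on `X_te` would then give an honest estimate of generalization.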

Model Prediction

Import the model packages:

from sklearn.linear_model import LogisticRegression   # logistic regression
from sklearn.ensemble import RandomForestClassifier   # random forest
from xgboost import XGBClassifier                     # XGBoost
# Scoring metrics
from sklearn.metrics import recall_score
from sklearn.metrics import precision_score
from sklearn.metrics import f1_score

           

Train the models and select the best one

# Define a function that takes training data and a model,
# fits the model, and prints its scores (computed on the same data)
def model_fit(x, y, model):
    model.fit(x, y)
    predict = model.predict(x)

    recall = recall_score(y, predict)
    precision = precision_score(y, predict)
    f1 = f1_score(y, predict)

    print(model)
    print('recall   :{:.3f}'.format(recall))
    print('precision:{:.3f}'.format(precision))
    print('f-1      :{:.3f}'.format(f1))
           

Train the models:

LR = LogisticRegression()
RF = RandomForestClassifier()
# binary:logistic is the appropriate objective for binary classification
XG = XGBClassifier(eval_metric=['logloss', 'auc', 'error'], objective='binary:logistic')

model_fit(train_x, train_y, LR)
model_fit(train_x, train_y, RF)
model_fit(train_x, train_y, XG)
           

Model Evaluation

LogisticRegression
recall   :0.547
precision:0.658
f-1      :0.597
RandomForestClassifier
recall   :0.994
precision:0.995
f-1      :0.994
XGBClassifier
recall   :0.819
precision:0.935
f-1      :0.873
           

Comparing precision, recall, and F1 score, the best model is selected.

For this test, RandomForestClassifier is chosen.

Use grid search to find the best hyperparameters

from sklearn.model_selection import GridSearchCV

param_grid = [
    {'n_estimators':[3,5,10,15,25,30],'max_features':[2,4,6,8]},
    {'bootstrap':[False],'n_estimators':[3,10],'max_features':[2,3,4]}
]



grid_search = GridSearchCV(RF,
                           param_grid, cv=5,
                           scoring='f1',  # a classification metric; F1 balances precision and recall on the imbalanced Churn label
                           return_train_score=True)

grid_search.fit(train_x, train_y)
# Print the best parameters
grid_search.best_params_
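Beyond `best_params_`, a fitted search also exposes `best_score_` and a `cv_results_` dict that can be turned into a DataFrame to compare every parameter combination. A minimal sketch on synthetic data; the tiny grid here is only for illustration:

```python
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic binary-classification data standing in for the real features.
X, y = make_classification(n_samples=200, random_state=0)

search = GridSearchCV(RandomForestClassifier(random_state=0),
                      {'n_estimators': [5, 10]}, cv=3, scoring='f1')
search.fit(X, y)

# cv_results_ holds one row per parameter combination tried.
results = pd.DataFrame(search.cv_results_)[
    ['params', 'mean_test_score', 'rank_test_score']]
```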
           

Prediction Results

final_model = grid_search.best_estimator_
final_predictions = final_model.predict_proba(train_x[:20])

output = pd.DataFrame({'customerID':data_id[:20],'ratio':final_predictions[:,1]})
output = output.sort_values('ratio',ascending=False)
output
           

Recommendation: the higher the predicted value, the more likely the user is to churn. The company can set a churn-probability threshold based on its own situation, target retention measures at the users who reach it, and prioritize those with the highest values.
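The thresholding suggested above can be sketched directly on the output frame: pick a cutoff and keep the customers whose predicted churn probability is at or above it. The 0.5 cutoff and the toy IDs/ratios below are illustrative only:

```python
import pandas as pd

# Toy version of the output frame built earlier.
output = pd.DataFrame({'customerID': ['A', 'B', 'C', 'D'],
                       'ratio': [0.92, 0.35, 0.61, 0.08]})

THRESHOLD = 0.5  # tune to the company's retention budget

# Customers at or above the cutoff, highest risk first.
at_risk = (output[output['ratio'] >= THRESHOLD]
           .sort_values('ratio', ascending=False))
```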

These results are for reference only.

Because the data is limited, the same data was used for both training and evaluation, so the results suffer from considerable overfitting; the overall method and approach, however, remain the same.