Python Third-Party Modules / Machine Learning / Scikit-Learn / Model Selection and Evaluation

I. inspection

1. Overview:

This module includes tools for model inspection, i.e. examining a fitted model through partial dependence and permutation importance.

2. Usage

(1) Inspection:

Compute the partial dependence (PD) of features:[<predictions>,<values>=]sklearn.inspection.partial_dependence(<estimator>,<X>,<features>[,response_method='auto',percentiles=(0.05,0.95),grid_resolution=100,method='auto',kind='legacy'])
Compute the permutation importance (PI) for feature evaluation:[<result>=]sklearn.inspection.permutation_importance(<estimator>,<X>,<y>[,scoring=None,n_repeats=5,n_jobs=None,random_state=None,sample_weight=None])
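
A minimal usage sketch (not part of the original listing; the synthetic dataset and gradient-boosting estimator are arbitrary illustrative choices):

from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import partial_dependence, permutation_importance

X, y = make_regression(n_samples=300, n_features=4, random_state=0)
est = GradientBoostingRegressor(random_state=0).fit(X, y)

# Partial dependence of the model's prediction on feature 0
pd_result = partial_dependence(est, X, features=[0], grid_resolution=20)

# Permutation importance: mean drop in score when each feature is shuffled
pi_result = permutation_importance(est, X, y, n_repeats=5, random_state=0)
print(pi_result.importances_mean)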
           

(2) Plotting:

"部分依赖图"(Partial Dependence Plot;PDP):class sklearn.inspection.PartialDependenceDisplay(<pd_results>,<features>,<feature_names>,<target_idx>,<pdp_lim>,<deciles>[,kind='average',subsample=1000,random_state=None])

######################################################################################################################

Plot partial dependence and individual conditional expectation plots (PD and ICE plots):sklearn.inspection.plot_partial_dependence(<estimator>,<X>,<features>[,feature_names=None,target=None,response_method='auto',n_cols=3,grid_resolution=100,percentiles=(0.05,0.95),method='auto',n_jobs=None,verbose=0,line_kw=None,contour_kw=None,ax=None,kind='average',subsample=1000,random_state=None])
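
A sketch of this plotting helper, assuming a scikit-learn release in which sklearn.inspection.plot_partial_dependence is still available (newer releases replace it with PartialDependenceDisplay.from_estimator); dataset and estimator are illustrative:

import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import plot_partial_dependence

X, y = make_regression(n_samples=300, n_features=4, random_state=0)
est = GradientBoostingRegressor(random_state=0).fit(X, y)

# PD curves for features 0 and 1; kind="both" overlays ICE curves as well
display = plot_partial_dependence(est, X, features=[0, 1], kind="both")
plt.show()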
           

II. metrics

1. Overview:

This module contains various score functions, performance metrics, pairwise metrics and distance computations for quantitatively evaluating model performance.
           

2. Model Selection Interface:

Determine a scorer from user options:[<scoring>=]sklearn.metrics.check_scoring(<estimator>[,scoring=None,allow_none=False])
Get a scorer from its string name:[<scorer>=]sklearn.metrics.get_scorer(<scoring>)
Make a scorer from a performance metric or loss function:[<scorer>=]sklearn.metrics.make_scorer(<score_func>[,greater_is_better=True,needs_proba=False,needs_threshold=False,**kwargs])
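
A small sketch of the scorer interface (the metric, estimator and dataset are illustrative choices):

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import fbeta_score, get_scorer, make_scorer
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=200, random_state=0)
clf = LogisticRegression(max_iter=1000)

# Turn a metric function into a scorer; extra kwargs are forwarded to it
f2_scorer = make_scorer(fbeta_score, beta=2)

# Look up a built-in scorer by its string name
acc_scorer = get_scorer("accuracy")

print(cross_val_score(clf, X, y, scoring=f2_scorer, cv=5))
print(cross_val_score(clf, X, y, scoring=acc_scorer, cv=5))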
           

3. Classification metrics:

求"准确率分类得分"(Accuracy classification score):[<score>=]sklearn.metrics.accuracy_score(<y_true>,<y_pred>[,normalize=True,sample_weight=None])
使用"梯形法则"(trapezoidal rule)求"曲线下面积"(Area Under the Curve;AUC):[<auc>=]sklearn.metrics.auc(<x>,<y>)
通过"预测得分"(prediction scores)求"平均精度"(average precision):[<average_precision>=]sklearn.metrics.average_precision_score(<y_true>,<y_score>[,average='macro',pos_label=1,sample_weight=None])
求"均衡准确率"(balanced accuracy):[<balanced_accuracy>=]sklearn.metrics.balanced_accuracy_score(<y_true>,<y_pred>[,sample_weight=None,adjusted=False])
求"布赖尔分数"(Brier score):[<score>=]sklearn.metrics.brier_score_loss(<y_true>,<y_prob>[,sample_weight=None,pos_label=None])
求主要分类指标:[<report>=]sklearn.metrics.classification_report(<y_true>,<y_pred>[,labels=None,target_names=None,sample_weight=None,digits=2,output_dict=False,zero_division='warn'])
求"科恩的κ统计量"(Cohen's kappa statistic):[<kappa>=]sklearn.metrics.cohen_kappa_score(<y1>,<y2>[,labels=None,weights=None,sample_weight=None])
求"混淆矩阵"(confusion matrix):[<C>=]sklearn.metrics.confusion_matrix(<y_true>,<y_pred>[,labels=None,sample_weight=None,normalize=None])
求"累计贴现收益"(Discounted Cumulative Gain;DCG):[<discounted_cumulative_gain>=]sklearn.metrics.dcg_score(<y_true>,<y_score>[,k=None,log_base=2,sample_weight=None,ignore_ties=False])
求"检测错误权衡曲线"(Detection Error Tradeoff curve;DET curve):[<fpr>,<fnr>,<thresholds>=]sklearn.metrics.det_curve(<y_true>,<y_score>[,pos_label=None,sample_weight=None])
  #即不同"概率阈值"(probability thresholds)下的"假阳性率-假阴性率对"(False positive rate-False negative rate pairs)构成的曲线
求"F1分数"(F1 score):[<f1_score>=]sklearn.metrics.f1_score(<y_true>,<y_pred>[,labels=None,pos_label=1,average='binary',sample_weight=None,zero_division='warn'])
求"F-β分数"(F-beta score):[<fbeta_score>=]sklearn.metrics.fbeta_score(<y_true>,<y_pred>,<beta>[,labels=None,pos_label=1,average='binary',sample_weight=None,zero_division='warn'])
求"平均汉明损失"(average Hamming loss):[<loss>=]sklearn.metrics.hamming_loss(<y_true>,<y_pred>[,sample_weight=None])
求"平均合页损失"(average hinge loss):[<loss>=]sklearn.metrics.hinge_loss(<y_true>,<pred_decision>[,labels=None,sample_weight=None])
求"杰卡德相似性系数得分"(Jaccard similarity coefficient score):[<score>=]sklearn.metrics.jaccard_score(<y_true>,<y_pred>[,labels=None,pos_label=1,average='binary',sample_weight=None,zero_division='warn'])
求"对数损失"(logistic loss/Log loss)/"交叉熵损失"(cross-entropy loss):[<loss>=]sklearn.metrics.log_loss(<y_true>,<y_pred>[,eps=1e-15,normalize=True,sample_weight=None,labels=None])
求"马修斯相关系数"(Matthews correlation coefficient;MCC):[<mcc>=]sklearn.metrics.matthews_corrcoef(<y_true>,<y_pred>[,sample_weight=None])
为每个类/样本求1个混淆矩阵:[<multi_confusion>=]sklearn.metrics.multilabel_confusion_matrix(<y_true>,<y_pred>,sample_weight=None,labels=None,samplewise=False])
求"经过归一化的累计贴现收益"(Normalized Discounted Cumulative Gain;NDCG):[<normalized_discounted_cumulative_gain>=]sklearn.metrics.ndcg_score(<y_true>,<y_score>[,k=None,sample_weight=None,ignore_ties=False])
求"查准率/精度-查全率曲线"(precision-recall curve):[<precision>,<recall>,<thresholds>=]sklearn.metrics.precision_recall_curve(<y_true>,<probas_pred>[,pos_label=None,sample_weight=None])
  #即各概率阈值下的"查准率-查全率对"(precision-recall pairs)构成的曲线
求每个类的查准率+查全率+F-β分数:[<precision>,<recall><fbeta_score>,<support>=]sklearn.metrics.precision_recall_fscore_support(<y_true>,<y_pred>[,beta=1.0,labels=None,pos_label=1,average=None,warn_for=('precision','recall','f-score'),sample_weight=None,zero_division='warn'])
求查准率:[<precision>=]sklearn.metrics.precision_score(<y_true>,<y_pred>[,labels=None,pos_label=1,average='binary',sample_weight=None,zero_division='warn'])
求查全率:[<recall>=]sklearn.metrics.recall_score(<y_true>,<y_pred>[,labels=None,pos_label=1,average='binary',sample_weight=None,zero_division='warn'])
通过预测得分求AUC:[<auc>=]sklearn.metrics.roc_auc_score(<y_true>,<y_score>[,average='macro',sample_weight=None,max_fpr=None,multi_class='raise',labels=None])
求"接收者操作特性曲线"(Receiver operating characteristic curve;ROC curve):[<fpr>,<tpr>,<thresholds>=]sklearn.metrics.roc_curve(<y_true>,<y_score>[,pos_label=None,sample_weight=None,drop_intermediate=True])
求"Top-k Accuracy classification score":[<score>=]sklearn.metrics.top_k_accuracy_score(<y_true>,<y_score>[,k=2,normalize=True,sample_weight=None,labels=None])
求"0-1分类损失"(Zero-one classification loss):[<loss>=]sklearn.metrics.zero_one_loss(<y_true>,<y_pred>[,normalize=True,sample_weight=None])
           

4. Regression metrics:

求"可解释方差回归得分函数"(Explained variance regression score function):[<score>=]sklearn.metrics.explained_variance_score(<y_true>,<y_pred>[,sample_weight=None,multioutput='uniform_average'])
求"最大残差"(maximum residual error):[<max_error>=]sklearn.metrics.max_error(<y_true>,<y_pred>)
求"平均绝对误差回归损失"(Mean absolute error regression loss):[<loss>=]sklearn.metrics.mean_absolute_error(<y_true>,<y_pred>[,sample_weight=None,multioutput='uniform_average'])
求"平均平方误差回归损失"(Mean squared error regression loss):[<loss>=]sklearn.metrics.mean_squared_error(<y_true>,<y_pred>[,sample_weight=None,multioutput='uniform_average',squared=True])
求"平均平方对数误差回归损失"(Mean squared logarithmic error regression loss):[<loss>=]sklearn.metrics.mean_squared_log_error(<y_true>,<y_pred>[,sample_weight=None,multioutput='uniform_average'])
求"中位绝对误差回归损失"(Median absolute error regression loss):[<loss>=]sklearn.metrics.median_absolute_error(<y_true>,<y_pred>[,multioutput='uniform_average',sample_weight=None])
求"平均绝对百分比误差回归损失"(Mean absolute percentage error regression loss):[<loss>=]
求"决定系数回归得分函数"(coefficient of determination regression score function;R^2 regression score function):[<z>=]sklearn.metrics.r2_score(<y_true>,<y_pred>[,sample_weight=None,multioutput='uniform_average'])
求"平均泊松偏差回归损失"(Mean Poisson deviance regression loss):[<loss>=]sklearn.metrics.mean_poisson_deviance(<y_true>,<y_pred>[,sample_weight=None])
求"平均伽马偏差回归损失"(Mean Gamma deviance regression loss):[<loss>=]sklearn.metrics.mean_gamma_deviance(<y_true>,<y_pred>[,sample_weight=None])
求"平均威迪偏差回归损失"(Mean Tweedie deviance regression loss):[<loss>=]sklearn.metrics.mean_tweedie_deviance(<y_true>,<y_pred>[,sample_weight=None,power=0])
           

5. Multilabel ranking metrics:

求"范围误差"(Coverage error):[<coverage_error>=]sklearn.metrics.coverage_error(<y_true>,<y_score>[,sample_weight=None])
求"标签排序的平均精度"(Label ranking average precision):[<score>=]sklearn.metrics.label_ranking_average_precision_score(<y_true>,<y_score>[,sample_weight=None])
求"排序损失"(Ranking loss):sklearn.metrics.label_ranking_loss(<y_true>,<y_score>[,sample_weight=None])
           

6. Clustering metrics

(1) Clustering metrics:

求"经过调整的互信息"(Adjusted Mutual Information):[<ami>=]sklearn.metrics.adjusted_mutual_info_score(<labels_true>,<labels_pred>[,average_method='arithmetic'])
求"经过调整的兰德指数"(Adjusted rand index):[<ARI>=]sklearn.metrics.adjusted_rand_score(<labels_true>,<labels_pred>)
求"卡林斯基-哈拉巴斯得分"(Calinski-Harabasz score;CH score):[<score>=]sklearn.metrics.calinski_harabasz_score(<X>,<labels>)
求"戴维斯-博尔丁得分"(Davies-Bouldin score;DB score):[<score>=]sklearn.metrics.davies_bouldin_score(<X>,<labels>)
求"完整性"(Completeness):[<completeness>=]sklearn.metrics.completeness_score(<labels_true>,<labels_pred>)
求"福尔克斯-马洛斯指数"(Fowlkes-Mallows index;FMI):[<score>=]sklearn.metrics.fowlkes_mallows_score(<labels_true>,<labels_pred>[,sparse=False])
求"同质性"/"齐次性"(homogeneity)+完整性+"V-度量"(V-Measure scores):[<homogeneity>,<completeness>,<v_measure>=]sklearn.metrics.homogeneity_completeness_v_measure(<labels_true>,<labels_pred>[,beta=1.0])
求同质性:[<homogeneity>=]sklearn.metrics.homogeneity_score(<labels_true>,<labels_pred>)
求"互信息"(Mutual Information):[<mi>=]sklearn.metrics.mutual_info_score(<labels_true>,<labels_pred>[,contingency=None])
求"经过归一化的互信息"(Normalized Mutual Information):[<nmi>=]sklearn.metrics.normalized_mutual_info_score(<labels_true>,<labels_pred>[,average_method='arithmetic'])
求"兰德指数"/"约当指数"(Rand index):[<RI>=]sklearn.metrics.rand_score(<labels_true>,<labels_pred>)
求"平均轮廓系数"(mean Silhouette Coefficient):[<silhouette>=]sklearn.metrics.silhouette_score(<X>,<labels>[,metric='euclidean',sample_size=None,random_state=None,**kwds])
求"轮廓系数"(Silhouette Coefficient):[<silhouette>=]sklearn.metrics.silhouette_samples(<X>,<labels>[,metric='euclidean',**kwds])
求V-度量:[<v_measure>=]sklearn.metrics.v_measure_score(<labels_true>,<labels_pred>[,beta=1.0])

######################################################################################################################

The sklearn.metrics.cluster submodule contains metrics for quantitatively evaluating the performance of clustering algorithms:
Compute the contingency matrix:[<contingency>=]sklearn.metrics.cluster.contingency_matrix(<labels_true>,<labels_pred>[,eps=None,sparse=False,dtype=<class 'numpy.int64'>])
Compute the pair confusion matrix:[<C>=]sklearn.metrics.cluster.pair_confusion_matrix(<labels_true>,<labels_pred>)
           

(2) Biclustering metrics:

7. Pairwise metrics

(1) (Pairwise) distances:

查看所有"成对距离"(pair-wise distances)的有效指标:[<PAIRWISE_DISTANCE_FUNCTIONS>=]sklearn.metrics.pairwise.distance_metrics()

######################################################################################################################

Compute the distance matrix under the specified metric:[<D>=]sklearn.metrics.pairwise_distances(<X>[,Y=None,metric='euclidean',n_jobs=None,force_all_finite=True,**kwds])
Find, for each row of X, the index of the closest row of Y under the specified metric:[<argmin>=]sklearn.metrics.pairwise_distances_argmin(<X>,<Y>[,axis=1,metric='euclidean',metric_kwargs=None])
Find, for each row of X, the index of the closest row of Y and the corresponding distance:[<argmin>,<distances>=]sklearn.metrics.pairwise_distances_argmin_min(<X>,<Y>[,axis=1,metric='euclidean',metric_kwargs=None])
Compute the distance matrix chunk by chunk:[<D_chunk>=]sklearn.metrics.pairwise_distances_chunked(<X>[,Y=None,reduce_func=None,metric='euclidean',n_jobs=None,working_memory=None,**kwds])
  #use this function to reduce memory consumption

求"余弦相似性"(cosine similarity):[<kernel_matrix>=]sklearn.metrics.pairwise.cosine_similarity(<X>[,Y=None,dense_output=True])
求"余弦距离"(cosine distance):[<distance_matrix>=]sklearn.metrics.pairwise.cosine_distances(<X>[,Y=None])
求"欧几里得距离"(euclidean distance)/"L2距离"(L2 distances):[<distances>=]sklearn.metrics.pairwise.euclidean_distances(<X>[,Y=None,Y_norm_squared=None,squared=False,X_norm_squared=None])
求"半正矢距离"(Haversine distance):[<distance>=]sklearn.metrics.pairwise.haversine_distances(<X>[,Y=None])
求"L1距离"(L1 distances)/"曼哈顿距离"(manhattan distance):[<D>=]sklearn.metrics.pairwise.manhattan_distances(<X>[,Y=None,sum_over_features=True])
在存在缺失值的情况下求欧几里得距离:[<distances>=]sklearn.metrics.pairwise.nan_euclidean_distances(<X>[,Y=None,squared=False,missing_values=nan,copy=True])

######################################################################################################################

Compute the paired distances under the specified metric:[<distances>=]sklearn.metrics.pairwise.paired_distances(<X>,<Y>[,metric='euclidean',**kwds])

Compute the paired Euclidean (L2) distances:[<distances>=]sklearn.metrics.pairwise.paired_euclidean_distances(<X>,<Y>)
Compute the paired Manhattan (L1) distances:[<distances>=]sklearn.metrics.pairwise.paired_manhattan_distances(<X>,<Y>)
Compute the paired cosine distances:[<distances>=]sklearn.metrics.pairwise.paired_cosine_distances(<X>,<Y>)
           

(2) (Pairwise) kernels:

查看所有"成对核"(pair-wise kernels)的有效指标:[<PAIRWISE_KERNEL_FUNCTIONS>=]sklearn.metrics.pairwise.kernel_metrics()

######################################################################################################################

求"加性卡方核"(additive chi-squared kernel):[<kernel_matrix>=]sklearn.metrics.pairwise.additive_chi2_kernel(<X>[,Y=None])
求"指数卡方核"(exponential chi-squared kernel):[<kernel_matrix>=]sklearn.metrics.pairwise.chi2_kernel(<X>[,Y=None,gamma=1.0])
求"拉普拉斯核"(laplacian kernel):[<kernel_matrix>=]sklearn.metrics.pairwise.laplacian_kernel(<X>[,Y=None,gamma=None])
求"线性核"(linear kernel):[<Gram_matrix>=]sklearn.metrics.pairwise.linear_kernel(<X>[,Y=None,dense_output=True])
求"多项式核"(polynomial kernel):[<Gram_matrix>=]sklearn.metrics.pairwise.polynomial_kernel(<X>[,Y=None,degree=3,gamma=None,coef0=1])
求"径向基函数核"(Radial Basis Function kernel;RBF kernel)/"高斯核"(gaussian kernel):[<kernel_matrix>=]sklearn.metrics.pairwise.rbf_kernel(<X>[,Y=None,gamma=None])
求"Sigmoid核"(sigmoid kernel):[<Gram_matrix>=]sklearn.metrics.pairwise.sigmoid_kernel(<X>[,Y=None,gamma=None,coef0=1])

######################################################################################################################

Compute the pairwise kernel under the specified metric:[<kernel_matrix>=]sklearn.metrics.pairwise.pairwise_kernels(<X>[,Y=None,metric='linear',filter_params=False,n_jobs=None,**kwds])
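
A small sketch of the kernel helpers above on a toy array:

import numpy as np
from sklearn.metrics.pairwise import linear_kernel, pairwise_kernels, rbf_kernel

X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])

print(linear_kernel(X))                                     # plain dot products
print(rbf_kernel(X, gamma=0.5))                             # Gaussian/RBF kernel
print(pairwise_kernels(X, metric="polynomial", degree=2))   # generic entry point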
           

8. Plotting

(1) Function API:

Plot a confusion matrix:[<display>=]sklearn.metrics.plot_confusion_matrix(<estimator>,<X>,<y_true>[,labels=None,sample_weight=None,normalize=None,display_labels=None,include_values=True,xticks_rotation='horizontal',values_format=None,cmap='viridis',ax=None,colorbar=True])
Plot a Detection Error Tradeoff (DET) curve:[<display>=]sklearn.metrics.plot_det_curve(<estimator>,<X>,<y>[,sample_weight=None,response_method='auto',name=None,ax=None,pos_label=None,**kwargs])
Plot a precision-recall curve:[<display>=]sklearn.metrics.plot_precision_recall_curve(<estimator>,<X>,<y>[,sample_weight=None,response_method='auto',name=None,ax=None,pos_label=None,**kwargs])
Plot a Receiver Operating Characteristic (ROC) curve:[<display>=]sklearn.metrics.plot_roc_curve(<estimator>,<X>,<y>[,sample_weight=None,drop_intermediate=True,response_method='auto',name=None,ax=None,pos_label=None,**kwargs])
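
A sketch of the function-style plotting API, assuming a scikit-learn release in which these plot_* helpers are still available (later releases removed them in favor of the Display classes' from_estimator methods); dataset and classifier are illustrative:

import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import plot_confusion_matrix, plot_roc_curve
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Each plot_* helper evaluates the fitted estimator on (X, y) and draws the plot
plot_confusion_matrix(clf, X_test, y_test)
plot_roc_curve(clf, X_test, y_test)
plt.show()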
           

(2) Class API:

Confusion matrix visualization:class sklearn.metrics.ConfusionMatrixDisplay(<confusion_matrix>[,display_labels=None])
Detection Error Tradeoff (DET) curve visualization:class sklearn.metrics.DetCurveDisplay(<fpr>,<fnr>[,estimator_name=None,pos_label=None])
Precision-recall curve visualization:class sklearn.metrics.PrecisionRecallDisplay(<precision>,<recall>[,average_precision=None,estimator_name=None,pos_label=None])
Receiver Operating Characteristic (ROC) curve visualization:class sklearn.metrics.RocCurveDisplay(<fpr>,<tpr>[,roc_auc=None,estimator_name=None,pos_label=None])
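
A minimal sketch of the class-style API: build a Display object from precomputed results, then call .plot() (the toy labels and scores are illustrative):

import matplotlib.pyplot as plt
from sklearn.metrics import (ConfusionMatrixDisplay, RocCurveDisplay, auc,
                             confusion_matrix, roc_curve)

y_true = [0, 0, 1, 1, 1, 0, 1, 0]
y_score = [0.1, 0.4, 0.35, 0.8, 0.7, 0.2, 0.9, 0.6]
y_pred = [1 if s >= 0.5 else 0 for s in y_score]

# Build each display from precomputed results, then draw it
cm = confusion_matrix(y_true, y_pred)
ConfusionMatrixDisplay(confusion_matrix=cm).plot()

fpr, tpr, _ = roc_curve(y_true, y_score)
RocCurveDisplay(fpr=fpr, tpr=tpr, roc_auc=auc(fpr, tpr)).plot()
plt.show()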
           

III. model_selection

1. Overview:

This module includes tools for splitting data, tuning hyper-parameters and validating models.

2. Splitters

Used to split datasets
           

(1) Classes:

具有"无重叠组"(non-overlapping groups)的"K-折迭代器变体"(K-fold iterator variant):class sklearn.model_selection.GroupKFold([n_splits=5])
"洗牌分组交叉验证迭代器"/"随机分组交叉验证迭代器"(Shuffle-Group(s)-Out cross-validation iterator):class sklearn.model_selection.GroupShuffleSplit([n_splits=5,test_size=None,train_size=None,random_state=None])
"K-折交叉验证器"(K-Folds cross-validator):class sklearn.model_selection.KFold([n_splits=5,shuffle=False,random_state=None])
"留一交叉验证器"(Leave One Group Out cross-validator):class sklearn.model_selection.LeaveOneGroupOut()
"留P交叉验证器"(Leave P Group(s) Out cross-validator):class sklearn.model_selection.LeavePGroupsOut(<n_groups>)
"预定义的拆分交叉验证器"(Predefined split cross-validator):class sklearn.model_selection.PredefinedSplit(<test_fold>)
"重复的K-折交叉验证器"(Repeated K-Fold cross validator):class sklearn.model_selection.RepeatedKFold([n_splits=5,n_repeats=10,random_state=None])
"重复的分层K-折交叉验证器"(Repeated Stratified K-Fold cross validator):class sklearn.model_selection.RepeatedStratifiedKFold([n_splits=5,n_repeats=10,random_state=None])
"随机排序交叉验证器"(Random permutation cross-validator):class sklearn.model_selection.ShuffleSplit([n_splits=10,test_size=None,train_size=None,random_state=None])
"分层K-折交叉验证器"(Stratified K-Folds cross-validator):class sklearn.model_selection.StratifiedKFold([n_splits=5,shuffle=False,random_state=None])
"分层洗牌拆分交叉验证器"/"分层随机拆分交叉验证器"(Stratified ShuffleSplit cross-validator):class sklearn.model_selection.StratifiedShuffleSplit([n_splits=10,test_size=None,train_size=None,random_state=None])
"时间序列交叉验证器"(Time Series cross-validator):class sklearn.model_selection.TimeSeriesSplit([n_splits=5,max_train_size=None,test_size=None,gap=0])
           

(2) Functions:

Input checker utility for building a cross-validator:[<checked_cv>=]sklearn.model_selection.check_cv([cv=5,y=None,classifier=False])
Split arrays or matrices into random train and test subsets:[<splitting>=]sklearn.model_selection.train_test_split([*arrays,test_size=None,train_size=None,random_state=None,shuffle=True,stratify=None])
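
A minimal sketch of train_test_split on the iris data (the stratify argument keeps class proportions in both subsets):

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)

# Hold out 25% of the samples, stratifying on the class labels
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)
print(X_train.shape, X_test.shape)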
           

3. Hyper-parameter optimizers:

"穷竭搜索"/"暴力搜索"(Exhaustive search):class sklearn.model_selection.GridSearchCV(<estimator>,<param_grid>[,scoring=None,n_jobs=None,refit=True,cv=None,verbose=0,pre_dispatch='2*n_jobs',error_score=nan,return_train_score=False])
"连续减半搜索"(successive halving search):class sklearn.model_selection.HalvingGridSearchCV(<estimator>,<param_grid>[,factor=3,resource='n_samples',max_resources='auto',min_resources='exhaust',aggressive_elimination=False,cv=5,scoring=None,refit=True,error_score=nan,return_train_score=True,random_state=None,n_jobs=None,verbose=0])
每个参数均具有离散数量的值的参数网格:class sklearn.model_selection.ParameterGrid(<param_grid>)
从指定分布对参数进行采样的生成器:class sklearn.model_selection.ParameterSampler(<param_distributions>,<n_iter>[,random_state=None])
"随机搜索"(Randomized search):class sklearn.model_selection.RandomizedSearchCV(<estimator>,<param_distributions>[,n_iter=10,scoring=None,n_jobs=None,refit=True,cv=None,verbose=0,pre_dispatch='2*n_jobs',random_state=None,error_score=nan,return_train_score=False])
"连续减半随机搜索"(successive halving random search):class sklearn.model_selection.HalvingRandomSearchCV(<estimator>,<param_distributions>[,n_candidates='exhaust',factor=3,resource='n_samples',max_resources='auto',min_resources='smallest',aggressive_elimination=False,cv=5,scoring=None,refit=True,error_score=nan,return_train_score=True,random_state=None,n_jobs=None,verbose=0])
           

4. Model validation:

Evaluate one or more metrics by cross-validation (also records fit/score times):[<scores>=]sklearn.model_selection.cross_validate(<estimator>,<X>[,y=None,groups=None,scoring=None,cv=None,n_jobs=None,verbose=0,fit_params=None,pre_dispatch='2*n_jobs',return_train_score=False,return_estimator=False,error_score=nan])
Generate cross-validated predictions for each input data point:[<predictions>=]sklearn.model_selection.cross_val_predict(<estimator>,<X>[,y=None,groups=None,cv=None,n_jobs=None,verbose=0,fit_params=None,pre_dispatch='2*n_jobs',method='predict'])
Evaluate a single score by cross-validation:[<scores>=]sklearn.model_selection.cross_val_score(<estimator>,<X>[,y=None,groups=None,scoring=None,cv=None,n_jobs=None,verbose=0,fit_params=None,pre_dispatch='2*n_jobs',error_score=nan])
Compute the learning curve:[<train_sizes_abs>,<train_scores>,<test_scores>,<fit_times>,<score_times>=]sklearn.model_selection.learning_curve(<estimator>,<X>,<y>[,groups=None,train_sizes=array([0.1,0.33,0.55,0.78,1.0]),cv=None,scoring=None,exploit_incremental_learning=False,n_jobs=None,pre_dispatch='all',verbose=0,shuffle=False,random_state=None,error_score=nan,return_times=False,fit_params=None])
Evaluate the significance of a cross-validated score with a permutation test:[<score>,<permutation_scores>,<pvalue>=]sklearn.model_selection.permutation_test_score(<estimator>,<X>,<y>[,groups=None,cv=None,n_permutations=100,n_jobs=None,random_state=0,verbose=0,scoring=None,fit_params=None])
Compute the validation curve:[<train_scores>,<test_scores>=]sklearn.model_selection.validation_curve(<estimator>,<X>,<y>,<param_name>,<param_range>[,groups=None,cv=None,scoring=None,n_jobs=None,pre_dispatch='all',verbose=0,error_score=nan,fit_params=None])