
sklearn: A Detailed Guide to the Overview and Usage of sklearn.GridSearchCV

Overview of sklearn.GridSearchCV

1. Parameter description

 """Exhaustive search over specified parameter values for an estimator.

   Important members are fit, predict.  """

   GridSearchCV implements a "fit" and a "score" method.

   It also implements "predict", "predict_proba", "decision_function", "transform" and "inverse_transform" if they are implemented in the estimator used.

   The parameters of the estimator used to apply these methods are

    optimized

   by cross-validated grid-search over a parameter grid.

   Read more in the :ref:`User Guide <grid_search>`. 窮舉搜尋指定參數值的估計量。


 Parameters

   ----------

   estimator : estimator object.

   This is assumed to implement the scikit-learn estimator interface.

   Either estimator needs to provide a ``score`` function, or ``scoring`` must be passed.

   param_grid : dict or list of dictionaries

   Dictionary with parameters names (string) as keys and lists of parameter settings to try as values, or a list of such dictionaries, in which case the grids spanned by each dictionary in the list are explored. This enables searching over any sequence of parameter settings.

   scoring : string, callable, list/tuple, dict or None, default: None

   A single string (see :ref:`scoring_parameter`) or a callable (see :ref:`scoring`) to evaluate the predictions on the test set.

For evaluating multiple metrics, either give a list of (unique) strings or a dict with names as keys and callables as values.

NOTE that when using custom scorers, each scorer should return a single    value. Metric functions returning a list/array of values can be wrapped into multiple scorers that return one value each.  See :ref:`multimetric_grid_search` for an example. If None, the estimator's default scorer (if available) is used.

fit_params : dict, optional

   Parameters to pass to the fit method.

   .. deprecated:: 0.19
      ``fit_params`` as a constructor argument was deprecated in version 0.19 and will be removed in version 0.21. Pass fit parameters to the ``fit`` method instead.
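A minimal, illustrative sketch of how ``param_grid`` and multi-metric ``scoring`` might look in practice (the estimator and the parameter values below are assumptions, not part of the docstring):

    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    # A single grid: every combination of these values is tried.
    param_grid = {'kernel': ['linear', 'rbf'], 'C': [1, 10, 100]}

    # Or a list of grids, each spanned independently -- useful when some
    # parameters only make sense together (e.g. gamma only for 'rbf').
    param_grid_list = [
        {'kernel': ['linear'], 'C': [1, 10]},
        {'kernel': ['rbf'], 'C': [1, 10], 'gamma': [0.01, 0.1]},
    ]

    # Multi-metric scoring: each scorer must return a single value.
    # With more than one metric, ``refit`` must name the scorer used
    # to pick the final model.
    search = GridSearchCV(SVC(), param_grid_list,
                          scoring=['accuracy', 'f1_macro'],
                          refit='accuracy')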


   n_jobs : int, default=1

   Number of jobs to run in parallel.

   pre_dispatch : int, or string, optional

   Controls the number of jobs that get dispatched during parallel execution. Reducing this number can be useful to avoid an explosion of memory consumption when more jobs get dispatched than CPUs can process. This parameter can be:

   - None, in which case all the jobs are immediately created and spawned. Use this for lightweight and fast-running jobs, to avoid delays due to on-demand spawning of the jobs

   - An int, giving the exact number of total jobs that are spawned

   - A string, giving an expression as a function of n_jobs, as in '2*n_jobs'

   iid : boolean, default=True

   If True, the data is assumed to be identically distributed across the folds, and the loss minimized is the total loss per sample, and not the mean loss across the folds.

   cv : int, cross-validation generator or an iterable, optional

   Determines the cross-validation splitting strategy.

   Possible inputs for cv are:

   - None, to use the default 3-fold cross validation,

   - integer, to specify the number of folds in a `(Stratified)KFold`,

   - An object to be used as a cross-validation generator.

   - An iterable yielding train, test splits.

   For integer/None inputs, if the estimator is a classifier and ``y`` is either binary or multiclass, :class:`StratifiedKFold` is used. In all other cases, :class:`KFold` is used.

   Refer :ref:`User Guide <cross_validation>` for the various cross-validation strategies that can be used here.
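An illustrative sketch of the different ways ``cv`` can be supplied (the splitter and the estimator below are assumptions):

    from sklearn.model_selection import GridSearchCV, StratifiedKFold
    from sklearn.linear_model import LogisticRegression

    param_grid = {'C': [0.1, 1, 10]}  # illustrative grid

    # An integer: the number of folds of a (Stratified)KFold.
    GridSearchCV(LogisticRegression(), param_grid, cv=5)

    # A cross-validation generator object, for full control over the splits.
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    GridSearchCV(LogisticRegression(), param_grid, cv=skf)

    # None: fall back to the default described above.
    GridSearchCV(LogisticRegression(), param_grid, cv=None)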


refit : boolean, or string, default=True

   Refit an estimator using the best found parameters on the whole dataset.

   For multiple metric evaluation, this needs to be a string denoting the scorer is used to find the best parameters for refitting the estimator at the end.

   The refitted estimator is made available at the ``best_estimator_`` attribute and permits using ``predict`` directly on this ``GridSearchCV`` instance.

   Also for multiple metric evaluation, the attributes ``best_index_``, ``best_score_`` and ``best_params_`` will only be available if ``refit`` is set and all of them will be determined w.r.t this specific scorer.

   See ``scoring`` parameter to know more about multiple metric evaluation.

   verbose : integer

   Controls the verbosity: the higher, the more messages.

   error_score : 'raise' (default) or numeric

   Value to assign to the score if an error occurs in estimator fitting.

   If set to 'raise', the error is raised. If a numeric value is given,  FitFailedWarning is raised. This parameter does not affect the refit step, which will always raise the error.
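A brief sketch of ``refit``, ``error_score`` and ``verbose`` together with multi-metric evaluation (the classifier and the grid are illustrative assumptions):

    from sklearn.model_selection import GridSearchCV
    from sklearn.ensemble import RandomForestClassifier

    search = GridSearchCV(
        RandomForestClassifier(random_state=0),
        param_grid={'n_estimators': [50, 100]},         # illustrative grid
        scoring={'acc': 'accuracy', 'f1': 'f1_macro'},
        refit='acc',        # best_estimator_/best_params_ refer to 'acc'
        error_score=0.0,    # a failed fit scores 0 and warns instead of raising
        verbose=1,
    )
    # After fit(), search.best_estimator_ is refit on the whole dataset
    # with the parameters that maximized the 'acc' scorer.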


 return_train_score : boolean, optional

   If ``False``, the ``cv_results_`` attribute will not include training scores.

   Current default is ``'warn'``, which behaves as ``True`` in addition to raising a warning when a training score is looked up.

   That default will be changed to ``False`` in 0.21.

   Computing training scores is used to get insights on how different parameter settings impact the overfitting/underfitting trade-off.

   However computing the scores on the training set can be computationally expensive and is not strictly required to select the parameters that yield the best generalization performance.
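An illustrative sketch (dataset, estimator and grid are assumptions) of using ``return_train_score=True`` to compare training and validation scores and spot overfitting:

    from sklearn.model_selection import GridSearchCV
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.datasets import load_iris

    X, y = load_iris(return_X_y=True)
    search = GridSearchCV(DecisionTreeClassifier(random_state=0),
                          param_grid={'max_depth': [2, 5, None]},
                          return_train_score=True, cv=5)
    search.fit(X, y)

    # A large gap between mean_train_score and mean_test_score for a
    # parameter setting suggests that this setting overfits.
    for params, tr, te in zip(search.cv_results_['params'],
                              search.cv_results_['mean_train_score'],
                              search.cv_results_['mean_test_score']):
        print(params, 'train=%.3f' % tr, 'test=%.3f' % te)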


   Attributes

   ----------

   cv_results_ : dict of numpy (masked) ndarrays

   A dict with keys as column headers and values as columns, that can be imported into a pandas ``DataFrame``.

   For instance the below given table

    +------------+-----------+------------+-----------------+---+---------+
    |param_kernel|param_gamma|param_degree|split0_test_score|...|rank_t...|
    +============+===========+============+=================+===+=========+
    |  'poly'    |     --    |      2     |       0.8       |...|    2    |
    +------------+-----------+------------+-----------------+---+---------+
    |  'poly'    |     --    |      3     |       0.7       |...|    4    |
    +------------+-----------+------------+-----------------+---+---------+
    |  'rbf'     |    0.1    |     --     |       0.8       |...|    3    |
    +------------+-----------+------------+-----------------+---+---------+
    |  'rbf'     |    0.2    |     --     |       0.9       |...|    1    |
    +------------+-----------+------------+-----------------+---+---------+

   will be represented by a ``cv_results_`` dict of::

   {

   'param_kernel': masked_array(data = ['poly', 'poly', 'rbf', 'rbf'],

   mask = [False False False False]...)

   'param_gamma': masked_array(data = [-- -- 0.1 0.2],

   mask = [ True  True False False]...),

   'param_degree': masked_array(data = [2.0 3.0 -- --],

   mask = [False False  True  True]...),

   'split0_test_score'  : [0.8, 0.7, 0.8, 0.9],

   'split1_test_score'  : [0.82, 0.5, 0.7, 0.78],

   'mean_test_score'    : [0.81, 0.60, 0.75, 0.82],

   'std_test_score'     : [0.02, 0.01, 0.03, 0.03],

   'rank_test_score'    : [2, 4, 3, 1],

   'split0_train_score' : [0.8, 0.9, 0.7],

   'split1_train_score' : [0.82, 0.5, 0.7],

   'mean_train_score'   : [0.81, 0.7, 0.7],

   'std_train_score'    : [0.03, 0.03, 0.04],

   'mean_fit_time'      : [0.73, 0.63, 0.43, 0.49],

   'std_fit_time'       : [0.01, 0.02, 0.01, 0.01],

   'mean_score_time'    : [0.007, 0.06, 0.04, 0.04],

   'std_score_time'     : [0.001, 0.002, 0.003, 0.005],

   'params'             : [{'kernel': 'poly', 'degree': 2}, ...],

   }


   NOTE

   The key ``'params'`` is used to store a list of parameter settings dicts for all the parameter candidates.

   The ``mean_fit_time``, ``std_fit_time``, ``mean_score_time`` and ``std_score_time`` are all in seconds.

   For multi-metric evaluation, the scores for all the scorers are available in the ``cv_results_`` dict at the keys ending with that scorer's name (``'_<scorer_name>'``) instead of ``'_score'`` shown above. ('split0_test_precision', 'mean_train_precision' etc.)
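As noted above, ``cv_results_`` can be loaded directly into a pandas ``DataFrame``; a small sketch (assuming ``search`` is an already-fitted GridSearchCV instance):

    import pandas as pd

    results = pd.DataFrame(search.cv_results_)

    # Rank the candidates and inspect the most useful columns.
    print(results.sort_values('rank_test_score')[
        ['params', 'mean_test_score', 'std_test_score', 'rank_test_score']])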


   best_estimator_ : estimator or dict

   Estimator that was chosen by the search, i.e. estimator

   which gave highest score (or smallest loss if specified)

   on the left out data. Not available if ``refit=False``.

   See ``refit`` parameter for more information on allowed values.

   best_score_ : float

   Mean cross-validated score of the best_estimator

   For multi-metric evaluation, this is present only if ``refit`` is specified.

   best_params_ : dict

   Parameter setting that gave the best results on the hold out data. For multi-metric evaluation, this is present only if ``refit`` is specified.

   best_index_ : int

   The index (of the ``cv_results_`` arrays) which corresponds to the best candidate parameter setting.

   The dict at ``search.cv_results_['params'][search.best_index_]`` gives the parameter setting for the best model, that gives the highest mean score (``search.best_score_``). For multi-metric evaluation, this is present only if ``refit`` is specified.
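A short illustrative sketch of how ``best_index_``, ``best_params_`` and ``best_score_`` relate (again assuming ``search`` is an already-fitted GridSearchCV instance):

    best = search.best_index_

    # The winning candidate's parameters and its mean cross-validated score.
    print(search.cv_results_['params'][best])           # == search.best_params_
    print(search.cv_results_['mean_test_score'][best])  # == search.best_score_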


   scorer_ : function or a dict

   Scorer function used on the held out data to choose the best parameters for the model.

   For multi-metric evaluation, this attribute holds the validated ``scoring`` dict which maps the scorer key to the scorer callable.

   n_splits_ : int

   The number of cross-validation splits (folds/iterations).

   Notes

   ------

   The parameters selected are those that maximize the score of the left  out data, unless an explicit score is passed in which case it is used instead.

   If `n_jobs` was set to a value higher than one, the data is copied for each point in the grid (and not `n_jobs` times). This is done for efficiency reasons if individual jobs take very little time, but may raise errors if the dataset is large and not enough memory is available. A workaround in this case is to set `pre_dispatch`. Then, the memory is copied only `pre_dispatch` many times. A reasonable value for `pre_dispatch` is `2 * n_jobs`.
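An illustrative sketch of limiting memory pressure with ``pre_dispatch`` when running the search in parallel (the estimator and grid are assumptions):

    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    search = GridSearchCV(
        SVC(),
        param_grid={'C': [0.1, 1, 10, 100], 'gamma': [0.01, 0.1, 1]},
        n_jobs=4,                 # run four fits in parallel
        pre_dispatch='2*n_jobs',  # keep at most 8 jobs (and data copies) queued
    )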

   See Also

   ---------

   :class:`ParameterGrid`:

   generates all the combinations of a hyperparameter grid.

   :func:`sklearn.model_selection.train_test_split`:

   utility function to split the data into a development set usable for fitting a GridSearchCV instance and an evaluation set for its final evaluation.

   :func:`sklearn.metrics.make_scorer`:

   Make a scorer from a performance metric or loss function.


2. Implementation code

class GridSearchCV, found at: sklearn.model_selection._search

class GridSearchCV(BaseSearchCV):
    """Exhaustive search over specified parameter values for an estimator."""

    def __init__(self, estimator, param_grid, scoring=None, fit_params=None,
                 n_jobs=1, iid=True, refit=True, cv=None, verbose=0,
                 pre_dispatch='2*n_jobs', error_score='raise',
                 return_train_score="warn"):
        super(GridSearchCV, self).__init__(
            estimator=estimator, scoring=scoring, fit_params=fit_params,
            n_jobs=n_jobs, iid=iid, refit=refit, cv=cv, verbose=verbose,
            pre_dispatch=pre_dispatch, error_score=error_score,
            return_train_score=return_train_score)
        self.param_grid = param_grid
        _check_param_grid(param_grid)

    def _get_param_iterator(self):
        """Return ParameterGrid instance for the given param_grid"""
        return ParameterGrid(self.param_grid)

How to use sklearn.GridSearchCV

   Examples

   --------

   >>> from sklearn import svm, datasets

   >>> from sklearn.model_selection import GridSearchCV

   >>> iris = datasets.load_iris()

   >>> parameters = {'kernel':('linear', 'rbf'), 'C':[1, 10]}

   >>> svc = svm.SVC()

   >>> clf = GridSearchCV(svc, parameters)

   >>> clf.fit(iris.data, iris.target)

   ...                             # doctest: +NORMALIZE_WHITESPACE +ELLIPSIS

   GridSearchCV(cv=None, error_score=...,

   estimator=SVC(C=1.0, cache_size=..., class_weight=..., coef0=...,

   decision_function_shape='ovr', degree=..., gamma=...,

   kernel='rbf', max_iter=-1, probability=False,

   random_state=None, shrinking=True, tol=...,

   verbose=False),

   fit_params=None, iid=..., n_jobs=1,

   param_grid=..., pre_dispatch=..., refit=..., return_train_score=...,

   scoring=..., verbose=...)

   >>> sorted(clf.cv_results_.keys())

   ['mean_fit_time', 'mean_score_time', 'mean_test_score',...

   'mean_train_score', 'param_C', 'param_kernel', 'params',...

   'rank_test_score', 'split0_test_score',...

   'split0_train_score', 'split1_test_score', 'split1_train_score',...

   'split2_test_score', 'split2_train_score',...

   'std_fit_time', 'std_score_time', 'std_test_score', 'std_train_score'...]
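A hedged follow-up to the doctest above, showing the attributes one would typically inspect after ``fit`` (output omitted; the sample slice is illustrative):

   >>> clf.best_params_            # the winning parameter setting
   >>> clf.best_score_             # its mean cross-validated score
   >>> clf.predict(iris.data[:3])  # uses the refitted best_estimator_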
