modeva.models.ModelTuneGridSearch

class modeva.models.ModelTuneGridSearch(dataset, model)

Bases: object

A class for performing hyperparameter tuning using grid search.

run(param_grid: Dict, dataset: str = 'train', metric: str | Tuple = None, n_jobs: int = None, cv=None, error_score=nan)

Executes a grid search for model tuning.

This method performs hyperparameter optimization using grid search on the specified model and dataset. It evaluates the model’s performance based on the provided metrics and returns the results in a structured format.

Parameters:
  • param_grid (dict) – A dictionary where the keys are parameter names (str) and the values are lists of settings to try. Alternatively, it can be a list of such dictionaries to explore multiple grids. Both forms are sketched after this parameter list.

  • dataset ({"main", "train", "test"}, default="train") – Specifies the dataset to be used for model fitting, with options for main, training, or testing datasets.

  • metric (str or tuple, default=None) – The performance metric(s) to evaluate the model. If None, the method defaults to calculating MSE, MAE, and R2 for regression, and ACC, AUC, F1, LogLoss, and Brier for classification.

  • n_jobs (int, default=None) – The number of jobs to run in parallel. If None, it defaults to 1 unless in a joblib.parallel_backend context. -1 indicates using all processors.

  • cv (int, cross-validation generator or an iterable, default=None) – Defines the cross-validation strategy. It can be an integer specifying the number of folds, a CV splitter, or an iterable yielding (train, test) splits as arrays of indices.

  • error_score ('raise' or numeric, default=np.nan) – Value to assign to the score if an error occurs in estimator fitting. If set to ‘raise’, the error is raised. If a numeric value is given, FitFailedWarning is raised. This parameter does not affect the refit step, which will always raise the error.
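
For illustration, a minimal sketch of both accepted forms of param_grid follows; the hyperparameter names (n_estimators, max_depth, learning_rate) are placeholders and must match the parameters of the model being tuned.

    # Single grid: every combination of the listed settings is evaluated.
    param_grid = {
        "n_estimators": [100, 300, 500],   # placeholder parameter name
        "max_depth": [3, 5, 7],            # placeholder parameter name
    }

    # Multiple grids: each dictionary is expanded and evaluated independently.
    param_grid = [
        {"n_estimators": [100, 300], "max_depth": [3, 5]},
        {"n_estimators": [500], "learning_rate": [0.01, 0.1]},
    ]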

Returns:

A container object with the following components:

  • key: “model_tune_grid_search”

  • data: Name of the dataset used

  • model: Name of the model used

  • inputs: Input parameters

  • value: Dictionary containing the optimization history

  • table: Tabular format of the optimization history

  • options: Dictionary of visualization configurations. Run results.plot() to show all plots, or results.plot(name=xxx) to display a specific plot; the following names are available (see the snippet after this list):

    • “parallel”: Parallel plot of the hyperparameter settings and final performance.

    • “(<parameter>, <metric>)”: Bar plot showing the performance metric against parameter values.
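
As a sketch, assuming results holds the ValidationResult returned by run(), the optimization history and plots can be inspected as follows; the ("max_depth", "MSE") pair is illustrative and shows only an assumed form of a (<parameter>, <metric>) name.

    results.value                              # optimization history as a dictionary
    results.table                              # optimization history in tabular form
    results.plot()                             # show all available plots
    results.plot(name="parallel")              # hyperparameter settings vs. performance
    results.plot(name=("max_depth", "MSE"))    # one (<parameter>, <metric>) plot; name form assumed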

Return type:

ValidationResult

Examples

Grid Search

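The sketch below is a hedged end-to-end example. Only ModelTuneGridSearch and its run() signature come from this page; the DataSet container, the MoLGBMRegressor wrapper, and the data-loading helpers are assumptions about the wider modeva API and may differ in your installation.

    from modeva import DataSet                     # assumed data-container class
    from modeva.models import MoLGBMRegressor      # assumed LightGBM regressor wrapper
    from modeva.models import ModelTuneGridSearch

    # Prepare a dataset and a model to tune (assumed helper methods).
    ds = DataSet()
    ds.load(name="BikeSharing")                    # assumed built-in dataset loader
    ds.set_random_split()                          # assumed train/test split helper

    model = MoLGBMRegressor()

    # Run the grid search on the training split with 5-fold cross-validation.
    ts = ModelTuneGridSearch(dataset=ds, model=model)
    results = ts.run(
        param_grid={"n_estimators": [100, 300], "max_depth": [3, 5]},
        dataset="train",
        metric=("MSE", "MAE"),
        cv=5,
        n_jobs=-1,
    )

    print(results.table)             # optimization history as a table
    results.plot(name="parallel")    # parallel plot of settings and performance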