

Note

Go to the end to download the full example code.

Grid Search

Installation

# To install the required package, use the following command:
# !pip install modeva

Authentication

# To authenticate, run the following commands (replace the token with your own to get full access):
# from modeva.utils.authenticate import authenticate
# authenticate(auth_code='eaaa4301-b140-484c-8e93-f9f633c8bacb')

Import required modules

from modeva import DataSet
from modeva import TestSuite
from modeva.models import MoLGBMClassifier
from modeva.models import ModelTuneGridSearch

Load dataset

ds = DataSet()
ds.load(name="SimuCredit")   # load the built-in SimuCredit dataset
ds.set_random_split()        # randomly split into train and test partitions
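The random split populates the train and test partitions consumed by the tuner and the test suite below; a quick sanity check (a sketch relying only on the ds.train_x / ds.train_y attributes used later in this example):

# Confirm the split produced training data; these attributes are used to
# refit the tuned model further down.
print(ds.train_x.shape, ds.train_y.shape)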

Run grid search

param_grid = {"n_estimators": [50, 100, 200],
              "learning_rate": [(i + 1) * 0.01 for i in range(5)]}  # 0.01, 0.02, ..., 0.05
model = MoLGBMClassifier(max_depth=2, verbose=-1)
hpo = ModelTuneGridSearch(dataset=ds, model=model)
result = hpo.run(param_grid=param_grid,                       # 3 × 5 = 15 candidates
                 metric=("AUC", "ACC", "LogLoss", "Brier"),   # metrics to rank by
                 cv=5)                                        # 5-fold cross-validation
result.table
    n_estimators  learning_rate     AUC     ACC  LogLoss   Brier  AUC_rank  ACC_rank  LogLoss_rank  Brier_rank    Time
14           200           0.05  0.8350  0.7566   0.4983  0.1650         1         1             1           1  0.8698
11           200           0.04  0.8339  0.7552   0.5010  0.1658         2         2             2           2  0.4239
8            200           0.03  0.8315  0.7540   0.5061  0.1674         3         3             3           3  0.3155
13           100           0.05  0.8296  0.7539   0.5099  0.1686         4         4             4           4  0.5588
10           100           0.04  0.8277  0.7500   0.5155  0.1704         5         5             5           5  0.2796
5            200           0.02  0.8276  0.7492   0.5155  0.1704         6         6             6           6  0.3239
7            100           0.03  0.8237  0.7454   0.5244  0.1735         7         7             7           7  0.1723
12            50           0.05  0.8210  0.7418   0.5309  0.1758         8         8             8           8  0.5945
4            100           0.02  0.8170  0.7406   0.5404  0.1794         9        11            10          10  0.0905
2            200           0.01  0.8169  0.7408   0.5406  0.1795        10         9            11          11  0.2870
9             50           0.04  0.8168  0.7407   0.5401  0.1793        11        10             9           9  0.0905
6             50           0.03  0.8101  0.7378   0.5525  0.1843        12        12            12          12  0.0832
3             50           0.02  0.8035  0.7345   0.5713  0.1920        13        13            13          13  0.0531
1            100           0.01  0.8032  0.7344   0.5715  0.1921        14        14            14          14  0.1106
0             50           0.01  0.7925  0.7177   0.6069  0.2080        15        15            15          15  0.1466
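The grid spans 3 × 5 = 15 candidate settings; with cv=5, each is scored by 5-fold cross-validation, and the table ranks all 15 candidates under every requested metric. For readers coming from scikit-learn, roughly the same single-metric search can be written with GridSearchCV; a sketch, using a plain LGBMClassifier from lightgbm so the snippet is self-contained:

# Rough scikit-learn analogue of the search above (a sketch; the Modeva
# tuner additionally ranks several metrics at once).
from lightgbm import LGBMClassifier
from sklearn.model_selection import GridSearchCV

search = GridSearchCV(
    estimator=LGBMClassifier(max_depth=2, verbose=-1),
    param_grid={"n_estimators": [50, 100, 200],
                "learning_rate": [0.01, 0.02, 0.03, 0.04, 0.05]},
    scoring="roc_auc",   # single metric, unlike the four-metric table above
    cv=5,
)
search.fit(ds.train_x, ds.train_y)
print(search.best_params_)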


result.plot("parallel", figsize=(8, 6))


result.plot(("n_estimators", "AUC"))


result.plot(("learning_rate", "AUC"))


Retrain the model with the best hyperparameters

model_tuned = MoLGBMClassifier(**result.value["params"][0],   # selected hyperparameters
                               name="LGBM-Tuned",
                               verbose=-1)
model_tuned.fit(ds.train_x, ds.train_y)   # refit on the full training split
model_tuned
MoLGBMClassifier(boosting_type='gbdt', class_weight=None, colsample_bytree=1.0,
                 importance_type='split', learning_rate=0.01, max_depth=-1,
                 min_child_samples=20, min_child_weight=0.001,
                 min_split_gain=0.0, n_estimators=50, n_jobs=None,
                 num_leaves=31, objective=None, random_state=None,
                 reg_alpha=0.0, reg_lambda=0.0, subsample=1.0,
                 subsample_for_bin=200000, subsample_freq=0, verbose=-1)
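If you prefer to tie the retrain explicitly to the top-ranked row of result.table rather than relying on the ordering of result.value["params"], a minimal sketch (assuming result.table is a pandas DataFrame with the rank columns shown above):

# Select the configuration ranked first by cross-validated AUC directly
# from the results table (an explicit alternative to result.value["params"][0]).
best_row = result.table.sort_values("AUC_rank").iloc[0]
best_params = {"n_estimators": int(best_row["n_estimators"]),
               "learning_rate": float(best_row["learning_rate"])}
model_best = MoLGBMClassifier(**best_params, max_depth=2,
                              name="LGBM-Tuned", verbose=-1)
model_best.fit(ds.train_x, ds.train_y)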


Diagnose the tuned model

ts = TestSuite(ds, model_tuned)
result = ts.diagnose_accuracy_table()
result.table
          AUC     ACC      F1  LogLoss   Brier
train  0.8436  0.7593  0.8010   0.5771  0.1939
test   0.8363  0.7598  0.7993   0.5813  0.1959
GAP   -0.0073  0.0004 -0.0017   0.0042  0.0019
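Here GAP is the test value minus the train value for each metric; the small gaps suggest little overfitting. The same numbers can be reproduced by hand; a sketch assuming a scikit-learn-style predict_proba and ds.test_x / ds.test_y attributes mirroring the train ones used above (the test attribute names are an assumption):

# Hand-computed version of the accuracy table above (ds.test_x / ds.test_y
# are assumed names mirroring ds.train_x / ds.train_y).
from sklearn.metrics import (accuracy_score, brier_score_loss, f1_score,
                             log_loss, roc_auc_score)

def scores(model, x, y):
    proba = model.predict_proba(x)[:, 1]   # scikit-learn-style API assumed
    return {"AUC": roc_auc_score(y, proba),
            "ACC": accuracy_score(y, proba >= 0.5),
            "F1": f1_score(y, proba >= 0.5),
            "LogLoss": log_loss(y, proba),
            "Brier": brier_score_loss(y, proba)}

train_scores = scores(model_tuned, ds.train_x, ds.train_y)
test_scores = scores(model_tuned, ds.test_x, ds.test_y)
gap = {k: round(test_scores[k] - train_scores[k], 4) for k in train_scores}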


Total running time of the script: (0 minutes 28.139 seconds)

Download Jupyter notebook: plot_0_grid.ipynb

Download Python source code: plot_0_grid.py

Download zipped: plot_0_grid.zip

Gallery generated by Sphinx-Gallery


© Copyright 2024-2025, Modeva Team.
