MoReLUDNN Classification#

Installation#

# To install the required package, use the following command:
# !pip install modeva

Authentication#

# To authenticate, run the following (for full access, replace the token with your own):
# from modeva.utils.authenticate import authenticate
# authenticate(auth_code='eaaa4301-b140-484c-8e93-f9f633c8bacb')

Import required modules#

from modeva import DataSet
from modeva import TestSuite
from modeva.models import MoReLUDNNClassifier

Load and prepare dataset#

ds = DataSet()
ds.load(name="TaiwanCredit")
ds.set_random_split()
ds.set_target("FlagDefault")

ds.scale_numerical(method="minmax")
ds.preprocess()
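
Before training, a quick sanity check on the prepared split can be useful. This is a minimal sketch, assuming the dataset exposes array-like train_x/train_y (as consumed by fit below) and, by symmetry, test_x/test_y:

print("train:", ds.train_x.shape, ds.train_y.shape)   # assumed NumPy-like arrays
print("test: ", ds.test_x.shape, ds.test_y.shape)     # test_x/test_y assumed by symmetry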

Train model#

model = MoReLUDNNClassifier(max_epochs=100, verbose=True)
model.fit(ds.train_x, ds.train_y)
#### MoReLUDNN Training ####
Epoch 0: Train loss 0.5701, Validation loss 0.5288
Epoch 1: Train loss 0.5169, Validation loss 0.5138
Epoch 2: Train loss 0.4999, Validation loss 0.4939
Epoch 3: Train loss 0.4784, Validation loss 0.4739
Epoch 4: Train loss 0.4648, Validation loss 0.4654
Epoch 5: Train loss 0.4582, Validation loss 0.4637
Epoch 6: Train loss 0.4554, Validation loss 0.4597
Epoch 7: Train loss 0.4541, Validation loss 0.4594
Epoch 8: Train loss 0.4531, Validation loss 0.4568
Epoch 9: Train loss 0.4512, Validation loss 0.4553
Epoch 10: Train loss 0.4497, Validation loss 0.4541
Epoch 11: Train loss 0.4485, Validation loss 0.4530
Epoch 12: Train loss 0.4475, Validation loss 0.4525
Epoch 13: Train loss 0.4470, Validation loss 0.4522
Epoch 14: Train loss 0.4457, Validation loss 0.4521
Epoch 15: Train loss 0.4458, Validation loss 0.4501
Epoch 16: Train loss 0.4464, Validation loss 0.4506
Epoch 17: Train loss 0.4439, Validation loss 0.4491
Epoch 18: Train loss 0.4434, Validation loss 0.4484
Epoch 19: Train loss 0.4428, Validation loss 0.4486
Epoch 20: Train loss 0.4423, Validation loss 0.4477
Epoch 21: Train loss 0.4417, Validation loss 0.4487
Epoch 22: Train loss 0.4425, Validation loss 0.4490
Epoch 23: Train loss 0.4415, Validation loss 0.4466
Epoch 24: Train loss 0.4405, Validation loss 0.4460
Epoch 25: Train loss 0.4400, Validation loss 0.4472
Epoch 26: Train loss 0.4399, Validation loss 0.4463
Epoch 27: Train loss 0.4393, Validation loss 0.4451
Epoch 28: Train loss 0.4389, Validation loss 0.4454
Epoch 29: Train loss 0.4388, Validation loss 0.4440
Epoch 30: Train loss 0.4394, Validation loss 0.4451
Epoch 31: Train loss 0.4374, Validation loss 0.4438
Epoch 32: Train loss 0.4377, Validation loss 0.4448
Epoch 33: Train loss 0.4378, Validation loss 0.4481
Epoch 34: Train loss 0.4382, Validation loss 0.4444
Epoch 35: Train loss 0.4362, Validation loss 0.4428
Epoch 36: Train loss 0.4362, Validation loss 0.4429
Epoch 37: Train loss 0.4366, Validation loss 0.4462
Epoch 38: Train loss 0.4367, Validation loss 0.4433
Epoch 39: Train loss 0.4352, Validation loss 0.4426
Epoch 40: Train loss 0.4349, Validation loss 0.4414
Epoch 41: Train loss 0.4348, Validation loss 0.4415
Epoch 42: Train loss 0.4349, Validation loss 0.4412
Epoch 43: Train loss 0.4340, Validation loss 0.4416
Epoch 44: Train loss 0.4340, Validation loss 0.4421
Epoch 45: Train loss 0.4339, Validation loss 0.4409
Epoch 46: Train loss 0.4334, Validation loss 0.4408
Epoch 47: Train loss 0.4333, Validation loss 0.4407
Epoch 48: Train loss 0.4326, Validation loss 0.4403
Epoch 49: Train loss 0.4326, Validation loss 0.4406
Epoch 50: Train loss 0.4330, Validation loss 0.4415
Epoch 51: Train loss 0.4333, Validation loss 0.4415
Epoch 52: Train loss 0.4336, Validation loss 0.4399
Epoch 53: Train loss 0.4323, Validation loss 0.4406
Epoch 54: Train loss 0.4313, Validation loss 0.4405
Epoch 55: Train loss 0.4316, Validation loss 0.4407
Epoch 56: Train loss 0.4317, Validation loss 0.4465
Epoch 57: Train loss 0.4322, Validation loss 0.4394
Epoch 58: Train loss 0.4312, Validation loss 0.4402
Epoch 59: Train loss 0.4311, Validation loss 0.4398
Epoch 60: Train loss 0.4304, Validation loss 0.4391
Epoch 61: Train loss 0.4301, Validation loss 0.4406
Epoch 62: Train loss 0.4311, Validation loss 0.4396
Epoch 63: Train loss 0.4299, Validation loss 0.4391
Epoch 64: Train loss 0.4301, Validation loss 0.4401
Epoch 65: Train loss 0.4300, Validation loss 0.4408
Epoch 66: Train loss 0.4296, Validation loss 0.4392
Epoch 67: Train loss 0.4305, Validation loss 0.4399
Epoch 68: Train loss 0.4290, Validation loss 0.4391
Epoch 69: Train loss 0.4302, Validation loss 0.4416
Epoch 70: Train loss 0.4295, Validation loss 0.4400
Epoch 71: Train loss 0.4289, Validation loss 0.4426
Epoch 72: Train loss 0.4294, Validation loss 0.4390
Epoch 73: Train loss 0.4288, Validation loss 0.4398
Epoch 74: Train loss 0.4288, Validation loss 0.4411
Epoch 75: Train loss 0.4285, Validation loss 0.4395
Epoch 76: Train loss 0.4293, Validation loss 0.4408
Epoch 77: Train loss 0.4290, Validation loss 0.4393
Epoch 78: Train loss 0.4281, Validation loss 0.4410
Epoch 79: Train loss 0.4287, Validation loss 0.4387
Epoch 80: Train loss 0.4281, Validation loss 0.4397
Epoch 81: Train loss 0.4274, Validation loss 0.4417
Epoch 82: Train loss 0.4272, Validation loss 0.4391
Epoch 83: Train loss 0.4280, Validation loss 0.4383
Epoch 84: Train loss 0.4277, Validation loss 0.4404
Epoch 85: Train loss 0.4277, Validation loss 0.4390
Epoch 86: Train loss 0.4277, Validation loss 0.4390
Epoch 87: Train loss 0.4274, Validation loss 0.4393
Epoch 88: Train loss 0.4271, Validation loss 0.4387
Epoch 89: Train loss 0.4270, Validation loss 0.4392
Epoch 90: Train loss 0.4271, Validation loss 0.4393
Epoch 91: Train loss 0.4270, Validation loss 0.4382
Epoch 92: Train loss 0.4269, Validation loss 0.4460
Epoch 93: Train loss 0.4280, Validation loss 0.4386
Epoch 94: Train loss 0.4275, Validation loss 0.4386
Epoch 95: Train loss 0.4274, Validation loss 0.4402
Epoch 96: Train loss 0.4274, Validation loss 0.4387
Epoch 97: Train loss 0.4266, Validation loss 0.4430
Epoch 98: Train loss 0.4272, Validation loss 0.4384
Epoch 99: Train loss 0.4269, Validation loss 0.4390
Training is terminated as max_epoch is reached.
MoReLUDNNClassifier(device='cpu', max_epochs=100, name='MoReLUDNNClassifier',
                    verbose=True)
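
With the model fitted, scoring new data should follow the scikit-learn convention. Since predict and predict_proba are not shown in this example, the sketch below treats them (and ds.test_x/ds.test_y) as assumed API:

import numpy as np
from sklearn.metrics import roc_auc_score

# Assumed scikit-learn-style API and assumed ds.test_x / ds.test_y attributes.
proba = model.predict_proba(ds.test_x)[:, 1]   # P(FlagDefault = 1)
print("Test AUC:", roc_auc_score(np.ravel(ds.test_y), proba))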


Basic accuracy analysis#

ts = TestSuite(ds, model)
results = ts.diagnose_accuracy_table()
results.table

Global feature importance#
results = ts.interpret_fi()
results.plot()
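
The plot gives a visual ranking. The other TestSuite calls in this example expose their raw numbers through a table attribute, so the same likely holds here (an assumption, not confirmed output):

print(results.table)   # assumed: importance values as a table, as with the other results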


LLM summary table#

Here "LLM" stands for the local linear models that the ReLU network induces on its activation regions, not for large language models.

results = ts.interpret_llm_summary(dataset="train")
results.table
      Count  Response Mean  Response Std  Local AUC  Global AUC
0     546.0         0.0879        0.2834     0.6277      0.6025
1     396.0         0.0884        0.2842     0.6538      0.6080
2     378.0         0.1164        0.3211     0.6285      0.5730
3     356.0         0.0478        0.2135     0.4074      0.6908
4     280.0         0.1071        0.3098     0.6444      0.6074
...     ...            ...           ...        ...         ...
6649    1.0         1.0000           NaN        NaN      0.6601
6650    1.0         0.0000           NaN        NaN      0.7156
6651    1.0         1.0000           NaN        NaN      0.7275
6652    1.0         0.0000           NaN        NaN      0.5981
6653    1.0         0.0000           NaN        NaN      0.7366

6654 rows × 5 columns
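
Each row above corresponds to one local linear model. The underlying fact is that a ReLU network is piecewise linear: once the on/off pattern of the hidden units is fixed, the network is exactly a linear model on that region. A self-contained NumPy sketch (independent of Modeva, with made-up weights) illustrates the reduction for one hidden layer:

import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 3)), rng.normal(size=8)   # hidden layer
w2, b2 = rng.normal(size=8), rng.normal()              # output layer

def net(x):
    return w2 @ np.maximum(W1 @ x + b1, 0.0) + b2

x = rng.normal(size=3)
d = (W1 @ x + b1 > 0).astype(float)    # activation pattern at x

# On the region sharing this pattern, the net collapses to a linear model:
beta = (w2 * d) @ W1                   # local coefficients
alpha = (w2 * d) @ b1 + b2             # local intercept
assert np.isclose(net(x), beta @ x + alpha)

interpret_llm_summary enumerates these regions over the training data; Count is the number of training samples falling in each region.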



LLM parallel coordinate plot#

results = ts.interpret_llm_pc(dataset="train")
results.plot()


LLM profile plot against a feature#

results = ts.interpret_llm_profile(feature="PAY_1", dataset="train")
results.plot()
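
The profile traces the model's response as one feature varies. A rough, Modeva-independent approximation of the same idea, again assuming a scikit-learn-style predict_proba and using a hypothetical column index for PAY_1:

import numpy as np
import matplotlib.pyplot as plt

PAY_1_COL = 0                                  # hypothetical: look up PAY_1's actual column
base = np.asarray(ds.train_x).mean(axis=0)     # reference point (feature means)
grid = np.linspace(0.0, 1.0, 50)               # features are min-max scaled above
X = np.tile(base, (grid.size, 1))
X[:, PAY_1_COL] = grid
plt.plot(grid, model.predict_proba(X)[:, 1])   # assumed API, as in the earlier sketch
plt.xlabel("PAY_1 (scaled)")
plt.ylabel("P(FlagDefault = 1)")
plt.show()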


Local feature importance analysis#

results = ts.interpret_local_linear_fi(dataset="train", sample_index=15, centered=True)
results.plot()
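
With centered=True, the attributions are presumably reported relative to a reference point, i.e. beta_j * (x_j - xbar_j) rather than beta_j * x_j. The centering itself, shown with made-up numbers:

import numpy as np

beta = np.array([0.8, -0.3, 0.1])     # local linear coefficients (made up)
x = np.array([0.9, 0.2, 0.5])         # the explained sample
x_bar = np.array([0.5, 0.5, 0.5])     # reference point, e.g. feature means

raw = beta * x                        # uncentered attributions
centered = beta * (x - x_bar)         # what centered=True is assumed to report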


Extract the last hidden layer outputs#

model.predict_last_hidden_layer(ds.train_x)
array([[0.        , 0.        , 0.70624983, ..., 0.0886037 , 0.59815633,
        0.33798283],
       [0.        , 0.        , 0.4662439 , ..., 0.        , 0.        ,
        0.07869662],
       [0.        , 0.        , 0.66410094, ..., 0.        , 0.09141124,
        0.29381192],
       ...,
       [0.        , 0.        , 0.77098477, ..., 0.2617427 , 0.77109665,
        0.3951375 ],
       [0.        , 0.        , 0.5495846 , ..., 0.        , 0.        ,
        0.1672709 ],
       [0.        , 0.        , 0.6064909 , ..., 0.03263528, 0.10469937,
        0.25827247]], dtype=float32)
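
The hidden-layer representation can also feed downstream models. A minimal sketch using scikit-learn as a linear probe on the learned features (assumes scikit-learn is installed and train_y flattens to 1-D labels):

import numpy as np
from sklearn.linear_model import LogisticRegression

H_train = model.predict_last_hidden_layer(ds.train_x)
probe = LogisticRegression(max_iter=1000).fit(H_train, np.ravel(ds.train_y))
print("Probe train accuracy:", probe.score(H_train, np.ravel(ds.train_y)))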

Total running time of the script: (2 minutes 5.108 seconds)
