modeva.TestSuite.interpret_fi

TestSuite.interpret_fi(dataset: str = 'test')

Calculate and visualize global feature importance for the model.

This method computes the importance of each feature to the model’s predictions and creates a horizontal bar plot of the results.

Parameters:

dataset ({"main", "train", "test"}, default="test") – The dataset used to calculate the explanation results.

Returns:

A container object with the following components:

  • key: "interpret_fi"

  • data: Name of the dataset used

  • model: Name of the model used

  • inputs: Input parameters

  • value: Dictionary containing:

    • "Name": List of feature names

    • "Importance": List of corresponding feature importance values

  • table: DataFrame containing feature names and importance values

  • options: Dictionary of visualization configuration for a horizontal bar plot, where the x-axis is the importance value and the y-axis is the feature name. Run results.plot() to display the plot.

Return type:

ValidationResult

Examples

Tree Ensemble Models (Classification)

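A minimal sketch of the call pattern, assuming modeva's LightGBM wrapper is named MoLGBMClassifier and a built-in "TaiwanCredit" dataset is available (both names, and the DataSet train/test accessors, are assumptions; check modeva.models for the classes shipped in your installation):

    from modeva import DataSet, TestSuite
    from modeva.models import MoLGBMClassifier  # assumed wrapper name

    ds = DataSet()
    ds.load(name="TaiwanCredit")  # assumed built-in classification dataset
    ds.set_random_split()         # create the train/test partitions

    model = MoLGBMClassifier(name="LGBM", max_depth=2)
    model.fit(ds.train_x, ds.train_y)

    ts = TestSuite(ds, model)
    results = ts.interpret_fi(dataset="test")
    results.plot()  # horizontal bar plot of feature importances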

Tree Ensemble Models (Regression)

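The same pattern for regression, assuming a MoLGBMRegressor wrapper and a built-in "CaliforniaHousing" dataset (both names are assumptions):

    from modeva import DataSet, TestSuite
    from modeva.models import MoLGBMRegressor  # assumed wrapper name

    ds = DataSet()
    ds.load(name="CaliforniaHousing")  # assumed built-in regression dataset
    ds.set_random_split()

    model = MoLGBMRegressor(name="LGBM", max_depth=2)
    model.fit(ds.train_x, ds.train_y)

    results = TestSuite(ds, model).interpret_fi(dataset="test")
    results.plot()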

GAMINet Classification

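A sketch assuming the GAMINet wrapper is named MoGAMINetClassifier; only the model class changes, the rest of the flow is identical:

    from modeva import DataSet, TestSuite
    from modeva.models import MoGAMINetClassifier  # assumed wrapper name

    ds = DataSet()
    ds.load(name="TaiwanCredit")  # assumed built-in classification dataset
    ds.set_random_split()

    model = MoGAMINetClassifier(name="GAMINet")
    model.fit(ds.train_x, ds.train_y)

    results = TestSuite(ds, model).interpret_fi()  # dataset defaults to "test"
    results.plot()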

GAMINet Regression

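The regression counterpart, assuming a MoGAMINetRegressor wrapper:

    from modeva import DataSet, TestSuite
    from modeva.models import MoGAMINetRegressor  # assumed wrapper name

    ds = DataSet()
    ds.load(name="CaliforniaHousing")  # assumed built-in regression dataset
    ds.set_random_split()

    model = MoGAMINetRegressor(name="GAMINet")
    model.fit(ds.train_x, ds.train_y)

    results = TestSuite(ds, model).interpret_fi()
    results.plot()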

Logistic Regression (Classification)

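A sketch assuming a MoLogisticRegression wrapper (name is an assumption):

    from modeva import DataSet, TestSuite
    from modeva.models import MoLogisticRegression  # assumed wrapper name

    ds = DataSet()
    ds.load(name="TaiwanCredit")  # assumed built-in classification dataset
    ds.set_random_split()

    model = MoLogisticRegression(name="LogReg")
    model.fit(ds.train_x, ds.train_y)

    results = TestSuite(ds, model).interpret_fi()
    results.plot()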

Linear Regression (Regression)

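A sketch assuming a MoLinearRegression wrapper (name is an assumption):

    from modeva import DataSet, TestSuite
    from modeva.models import MoLinearRegression  # assumed wrapper name

    ds = DataSet()
    ds.load(name="CaliforniaHousing")  # assumed built-in regression dataset
    ds.set_random_split()

    model = MoLinearRegression(name="LinReg")
    model.fit(ds.train_x, ds.train_y)

    results = TestSuite(ds, model).interpret_fi()
    results.plot()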

Linear Tree Classification

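A sketch using a hypothetical MoLinearTreeClassifier; modeva's linear-tree wrapper may be named differently, so treat this class name as a placeholder:

    from modeva import DataSet, TestSuite
    from modeva.models import MoLinearTreeClassifier  # hypothetical name

    ds = DataSet()
    ds.load(name="TaiwanCredit")  # assumed built-in classification dataset
    ds.set_random_split()

    model = MoLinearTreeClassifier(name="LinearTree")
    model.fit(ds.train_x, ds.train_y)

    results = TestSuite(ds, model).interpret_fi()
    results.plot()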

Linear Tree Regression

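The regression counterpart, again with a hypothetical MoLinearTreeRegressor class name:

    from modeva import DataSet, TestSuite
    from modeva.models import MoLinearTreeRegressor  # hypothetical name

    ds = DataSet()
    ds.load(name="CaliforniaHousing")  # assumed built-in regression dataset
    ds.set_random_split()

    model = MoLinearTreeRegressor(name="LinearTree")
    model.fit(ds.train_x, ds.train_y)

    results = TestSuite(ds, model).interpret_fi()
    results.plot()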

Mixture of Experts (MoE) Classification

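A sketch using a hypothetical MoMoEClassifier; the exact name of modeva's mixture-of-experts wrapper is an assumption:

    from modeva import DataSet, TestSuite
    from modeva.models import MoMoEClassifier  # hypothetical name

    ds = DataSet()
    ds.load(name="TaiwanCredit")  # assumed built-in classification dataset
    ds.set_random_split()

    model = MoMoEClassifier(name="MoE")
    model.fit(ds.train_x, ds.train_y)

    results = TestSuite(ds, model).interpret_fi()
    results.plot()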

Mixture of Experts (MoE) Regression

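The regression counterpart, with a hypothetical MoMoERegressor class name:

    from modeva import DataSet, TestSuite
    from modeva.models import MoMoERegressor  # hypothetical name

    ds = DataSet()
    ds.load(name="CaliforniaHousing")  # assumed built-in regression dataset
    ds.set_random_split()

    model = MoMoERegressor(name="MoE")
    model.fit(ds.train_x, ds.train_y)

    results = TestSuite(ds, model).interpret_fi()
    results.plot()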