modeva.TestSuite.explain_lime

TestSuite.explain_lime(dataset: str = 'test', sample_index: int = 0, centered: bool = True, random_state: int = 0)

Generate a LIME (Local Interpretable Model-agnostic Explanations) explanation for a specific sample.

This method provides local feature importance and contribution analysis for a single prediction using the LIME algorithm. It supports both regression and classification tasks.

Parameters:
  • dataset ({"main", "train", "test"}, default="test") – The dataset used for calculating the explanation results.

  • sample_index (int, default=0) – The index of the sample in the selected dataset to be explained.

  • centered (bool, default=True) – Whether to center the feature values by subtracting the mean of each feature.

  • random_state (int, default=0) – Random seed for LIME’s perturbation sampling process. Use the same value for reproducible results.
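
A minimal usage sketch, assuming ts is a TestSuite that has already been constructed from a dataset and a fitted model (the variable name itself is hypothetical, not part of this API):

>>> results = ts.explain_lime(
...     dataset="test",      # one of {"main", "train", "test"}
...     sample_index=0,      # explain the first sample in the test set
...     centered=True,       # subtract each feature's mean
...     random_state=0,      # fix LIME's perturbation seed for reproducibility
... )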

Returns:

A result object containing:

  • key: "explain_lime"

  • data: Name of the dataset used

  • model: Name of the model used

  • inputs: Input parameters used for the analysis

  • value: Dictionary containing:

    • "Name": List of feature names;

    • "Value": List of feature values;

    • "Effect": List of feature contributions measured by LIME;

    • "Coefficient": List of feature coefficients measured by LIME.

  • table: DataFrame representation of the explanation

  • options: Dictionary of visualization configuration for a horizontal bar plot, where the x-axis is the LIME value and the y-axis is the feature name. Run results.plot() to display this plot (see the sketch after the return type below).

Return type:

ValidationResult
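
A sketch of reading the result; it assumes the fields listed above are exposed as attributes of the ValidationResult, which is an inference from the documented structure rather than a confirmed signature:

>>> results = ts.explain_lime(dataset="test", sample_index=0)
>>> results.key                 # "explain_lime"
>>> results.value["Name"]       # feature names
>>> results.value["Effect"]     # per-feature LIME contributions
>>> results.table               # DataFrame view of the explanation
>>> results.plot()              # horizontal bar plot of LIME values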

Notes

The explanation includes both feature coefficients (importance) and feature contributions (coefficient × feature value). Results are visualized using a stem-and-bar plot showing both the feature values and their contributions.
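
A quick sketch that recomputes each contribution from the returned dictionary, following the coefficient × value relationship stated above (the exact pairing, and whether the listed values are centered when centered=True, is an assumption):

>>> for name, value, coef, effect in zip(
...     results.value["Name"],
...     results.value["Value"],
...     results.value["Coefficient"],
...     results.value["Effect"],
... ):
...     # Per the note above, effect should equal coef * value.
...     print(f"{name}: {coef:+.4f} * {value:.4f} = {effect:+.4f}")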

Examples

Local Explainability