
SHAP interaction values

What inspired this wish list request? SHAP values are invaluable for explaining ML models.


What is the improvement you would like to see? Add SHAP interaction values. Scott Lundberg, the inventor of SHAP values, describes them in the paper "Consistent Individualized Feature Attribution for Tree Ensembles": "SHAP interaction values can be interpreted as the difference between (A) the SHAP values for feature i when feature j is present and (B) the SHAP values for feature i when feature j is absent."
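
For reference, this is already available outside JMP: the open-source shap Python package computes interaction values for tree ensembles. Below is a minimal sketch; the dataset, model settings, and variable names are illustrative assumptions, not a prescription.

```python
# Minimal sketch of SHAP interaction values using the open-source shap package.
# Assumes Python with shap, xgboost, and scikit-learn installed; the dataset
# and model settings are illustrative only.
import shap
import xgboost
from sklearn.datasets import fetch_california_housing

X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = xgboost.XGBRegressor(n_estimators=100, max_depth=4).fit(X, y)

explainer = shap.TreeExplainer(model)

# Ordinary SHAP values: one attribution per (row, feature).
shap_values = explainer.shap_values(X)  # shape (n_rows, n_features)

# SHAP interaction values: one (n_features x n_features) matrix per row.
# The off-diagonal [i, j] entry attributes the pairwise interaction of
# features i and j; the diagonal holds the remaining main effects.
shap_interaction = explainer.shap_interaction_values(X)  # (n_rows, n_features, n_features)

# Summing row i of a row's interaction matrix recovers that row's SHAP value
# for feature i, which is the decomposition the paper describes.
```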


Why is this idea important? In my experience, prior to modeling, most collaborators ask whether interactions between factors can be investigated. Explainable ML is the key to model acceptance, and ML models are typically deployed where nonlinearity is suspected to dominate (otherwise regression would be sufficient).