What inspired this wish list request? SHAP values are invaluable for explaining ML models.
What is the improvement you would like to see? Scott Lundberg, the inventor of SHAP values, describes SHAP interaction values in his paper Consistent Individualized Feature Attribution for Tree Ensembles. There, the interaction values "can be interpreted as the difference between (A) the SHAP values for feature i when feature j is present and (B) the SHAP values for feature i when feature j is absent." It would be great to have these interaction values available here as well; a sketch of how the reference Python implementation exposes them is given below.
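For reference, here is a minimal sketch of how these quantities are exposed by the Python shap package's TreeExplainer (the model, data, and variable names are illustrative only, not part of this request):

```python
# Sketch: computing SHAP interaction values with the Python shap package,
# which implements the tree-ensemble method from Lundberg et al.
# The toy model and data below are purely illustrative.
import numpy as np
import shap
import xgboost as xgb
from sklearn.datasets import make_regression

# Toy data and a small tree ensemble.
X, y = make_regression(n_samples=200, n_features=5, random_state=0)
model = xgb.XGBRegressor(n_estimators=50, max_depth=3).fit(X, y)

explainer = shap.TreeExplainer(model)

# shap_values: (n_samples, n_features) per-feature attributions.
shap_values = explainer.shap_values(X)

# shap_interaction_values: (n_samples, n_features, n_features);
# entry [n, i, j] (i != j) is the interaction attribution between
# features i and j for sample n.
interactions = explainer.shap_interaction_values(X)

# Consistency property: summing feature i's row of the interaction
# matrix recovers feature i's SHAP value (up to numerical precision).
print(np.abs(interactions.sum(axis=2) - shap_values).max())
```

A feature's SHAP value thus decomposes into a main effect (the diagonal entry) plus its pairwise interaction terms, which is exactly what collaborators usually want to inspect.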
Why is this idea important? In my experience, a majority of collaborators ask, before modeling even starts, whether interactions between factors can be investigated. Explainable ML is key to model acceptance, and ML models are typically deployed precisely where nonlinearity and interactions are suspected to dominate (otherwise ordinary regression would be sufficient).