SHAP (SHapley Additive exPlanations) is a game-theoretic approach to explaining the output of any machine learning model. It connects optimal credit allocation with local explanations using classic Shapley values from cooperative game theory and their extensions. Because exact Shapley values are expensive to compute, they are approximated: Kernel SHAP uses a weighting kernel over feature coalitions, and Deep SHAP builds on DeepLIFT to approximate them for deep networks.
Source: A Unified Approach to Interpreting Model Predictions
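To make the underlying idea concrete, here is a minimal sketch that computes exact Shapley values for a toy model by averaging each feature's marginal contribution over all coalitions of the other features. The `model`, `baseline`, and instance `x` below are hypothetical illustrations, not part of the SHAP library; "missing" features are simulated by substituting baseline values, which is the same masking idea Kernel SHAP approximates at scale.

```python
from itertools import combinations
from math import factorial

# Hypothetical toy model: a simple linear function of three features.
def model(x):
    return 2.0 * x[0] + 1.0 * x[1] - 3.0 * x[2]

baseline = [0.0, 0.0, 0.0]  # reference input used to represent "missing" features
x = [1.0, 2.0, 3.0]         # instance to explain
M = len(x)

def value(subset):
    """Model output with features in `subset` taken from x, others from baseline."""
    z = [x[i] if i in subset else baseline[i] for i in range(M)]
    return model(z)

def shapley(i):
    """Exact Shapley value: weighted average marginal contribution of feature i."""
    others = [j for j in range(M) if j != i]
    total = 0.0
    for size in range(M):
        for S in combinations(others, size):
            # Classic Shapley weight |S|! (M - |S| - 1)! / M!
            weight = factorial(len(S)) * factorial(M - len(S) - 1) / factorial(M)
            total += weight * (value(set(S) | {i}) - value(set(S)))
    return total

phis = [shapley(i) for i in range(M)]
print(phis)
# Additivity: the attributions sum to f(x) - f(baseline)
print(sum(phis), model(x) - model(baseline))
```

For a linear model each attribution reduces to the coefficient times the feature's deviation from the baseline, so the result here is easy to verify by hand; real SHAP implementations approximate this sum rather than enumerating all 2^M coalitions.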
Tasks in which SHAP is most frequently used, by paper count:

Task | Papers | Share
---|---|---
Explainable Artificial Intelligence (XAI) | 53 | 10.29%
Feature Importance | 52 | 10.10%
Explainable artificial intelligence | 47 | 9.13%
BIG-bench Machine Learning | 42 | 8.16%
Decision Making | 41 | 7.96%
Management | 16 | 3.11%
Interpretable Machine Learning | 15 | 2.91%
Fairness | 14 | 2.72%
Classification | 11 | 2.14%