Explainable Artificial Intelligence (XAI)
221 papers with code • 0 benchmarks • 2 datasets
Benchmarks
These leaderboards are used to track progress in Explainable Artificial Intelligence (XAI).
Libraries
Use these libraries to find Explainable Artificial Intelligence (XAI) models and implementations.
Most implemented papers
BayLIME: Bayesian Local Interpretable Model-Agnostic Explanations
Given the pressing need for assuring algorithmic transparency, Explainable AI (XAI) has emerged as one of the key areas of AI research.
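BayLIME builds on the LIME family of local, model-agnostic explanations. As background, the core LIME idea can be sketched in a few lines: sample perturbations around the instance to explain, weight them by proximity, and fit a weighted linear surrogate whose coefficients serve as local attributions. The black-box function and kernel width below are illustrative assumptions, not the paper's Bayesian formulation.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical black-box model: any classifier's probability output
# would do; this nonlinear score is only a stand-in for illustration.
def black_box(X):
    return 1.0 / (1.0 + np.exp(-(2.0 * X[:, 0] - X[:, 1] ** 2)))

rng = np.random.default_rng(0)
x0 = np.array([0.5, 0.5])  # instance to explain

# LIME-style local surrogate: perturb around x0, weight by proximity,
# fit a weighted linear model.
Z = x0 + rng.normal(scale=0.3, size=(500, 2))
y = black_box(Z)
weights = np.exp(-np.sum((Z - x0) ** 2, axis=1) / 0.25)  # RBF proximity kernel

surrogate = Ridge(alpha=1.0)
surrogate.fit(Z, y, sample_weight=weights)
print(surrogate.coef_)  # local feature attributions around x0
```

BayLIME's contribution, per the abstract, is to make this surrogate fitting Bayesian, so prior knowledge can be incorporated and explanations become more consistent across repeated runs.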
LioNets: A Neural-Specific Local Interpretation Technique Exploiting Penultimate Layer Information
Artificial Intelligence (AI) has had a tremendous impact on the rapid growth of technology in almost every domain.
Revealing drivers and risks for power grid frequency stability with explainable AI
Stable operation of the electrical power system requires the power grid frequency to stay within strict operational limits.
Quantitative Evaluation of Explainable Graph Neural Networks for Molecular Property Prediction
Advances in machine learning have led to graph neural network-based methods for drug discovery, yielding promising results in molecular design, chemical synthesis planning, and molecular property prediction.
Improving Deep Neural Network Classification Confidence using Heatmap-based eXplainable AI
This paper quantifies the quality of heatmap-based eXplainable AI (XAI) methods with respect to image classification problems.
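One standard way to quantify heatmap quality is a deletion-style faithfulness check: remove pixels in decreasing order of attributed relevance and watch how fast the model's confidence drops. The toy linear "model" below is an assumption for illustration, not the paper's evaluation protocol.

```python
import numpy as np

# Toy "model": confidence is a fixed weight map dotted with the image
# (an illustrative assumption; any classifier's class score works here).
rng = np.random.default_rng(1)
W = rng.random((8, 8))

def confidence(img):
    return float((W * img).sum())

img = np.ones((8, 8))
heatmap = W.copy()  # for this toy model, W itself is a perfect attribution

# Deletion curve: zero out pixels from most to least relevant and
# record the remaining confidence after each removal.
order = np.argsort(heatmap.ravel())[::-1]
masked = img.ravel().copy()
scores = []
for idx in order:
    masked[idx] = 0.0
    scores.append(confidence(masked.reshape(8, 8)))
```

A faithful heatmap produces a steep early drop in this curve; comparing its area under the curve against a random deletion order gives a single quality score per method.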
GAM(e) changer or not? An evaluation of interpretable machine learning models based on additive model constraints
The number of information systems (IS) studies dealing with explainable artificial intelligence (XAI) is currently exploding as the field demands more transparency about the internal decision logic of machine learning (ML) models.
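The additive model constraint these GAM-style models rely on means the prediction decomposes into one shape function per feature, each of which can be plotted and inspected directly. A minimal sketch, assuming binned-mean smoothers and backfitting (not the specific models evaluated in the paper):

```python
import numpy as np

# Minimal additive model (GAM-style) fit by backfitting, with binned
# means as the per-feature smoothers. Purely illustrative.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(1000, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + rng.normal(scale=0.1, size=1000)

bins = np.linspace(-1, 1, 21)
idx = [np.clip(np.digitize(X[:, j], bins) - 1, 0, 19) for j in range(2)]
f = [np.zeros(20), np.zeros(20)]  # one shape function per feature
offset = y.mean()

for _ in range(10):  # backfitting iterations
    for j in range(2):
        resid = y - offset - f[1 - j][idx[1 - j]]
        for b in range(20):
            m = idx[j] == b
            if m.any():
                f[j][b] = resid[m].mean()
        f[j] -= f[j].mean()  # identifiability: center each shape function

pred = offset + f[0][idx[0]] + f[1][idx[1]]
```

Because the model is a sum of univariate functions, plotting `f[0]` and `f[1]` against their bin centers exposes the full decision logic, which is exactly the transparency property the IS literature discussed in the abstract is after.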
OmniXAI: A Library for Explainable AI
We introduce OmniXAI (short for Omni eXplainable AI), an open-source Python library of eXplainable AI (XAI), which offers omni-way explainable AI capabilities and various interpretable machine learning techniques to address the pain points of understanding and interpreting the decisions made by machine learning (ML) in practice.
From Attribution Maps to Human-Understandable Explanations through Concept Relevance Propagation
In this work we introduce the Concept Relevance Propagation (CRP) approach, which combines the local and global perspectives and thus allows answering both the "where" and "what" questions for individual predictions.
OpenXAI: Towards a Transparent Evaluation of Model Explanations
OpenXAI comprises the following key components: (i) a flexible synthetic data generator and a collection of diverse real-world datasets, pre-trained models, and state-of-the-art feature attribution methods, and (ii) open-source implementations of eleven quantitative metrics for evaluating the faithfulness, stability (robustness), and fairness of explanation methods, in turn providing comparisons of several explanation methods across a wide variety of metrics, models, and datasets.
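To make the stability (robustness) notion concrete: perturb the input slightly and measure how much the explanation changes relative to its magnitude. The sketch below uses a gradient-times-input attribution on a toy linear score; OpenXAI's own metric definitions may differ in detail.

```python
import numpy as np

# Illustrative stability check, not OpenXAI's implementation.
W = np.array([2.0, -1.0, 0.5])  # toy linear model: score = W @ x

def explanation(x):
    return W * x  # gradient x input attribution (gradient of W @ x is W)

rng = np.random.default_rng(0)
x = np.array([1.0, 0.5, -2.0])
e0 = explanation(x)

# Worst-case relative change of the attribution over small perturbations.
instability = max(
    np.linalg.norm(explanation(x + rng.normal(scale=0.01, size=3)) - e0)
    / np.linalg.norm(e0)
    for _ in range(100)
)
print(instability)  # small value -> locally stable explanation
```

Faithfulness metrics are built analogously from confidence changes under feature removal, and fairness metrics compare such scores across demographic subgroups.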
"Even if ..." -- Diverse Semifactual Explanations of Reject
In this work, we propose to explain rejects with semifactual explanations, an instance of example-based explanation methods, which themselves have not yet been widely considered in the XAI community.
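A semifactual answers "even if the input changed this much, the decision would stay the same." For a reject option, that means finding a point far from the original input that is still rejected. The greedy random search, classifier, and confidence threshold below are illustrative assumptions, not the paper's method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy classifier with a reject option for low-confidence inputs.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = LogisticRegression().fit(X, y)

def is_rejected(x, tau=0.6):
    return clf.predict_proba(x.reshape(1, -1))[0].max() < tau

# Start from a point on the decision boundary, which is rejected.
w, b = clf.coef_[0], clf.intercept_[0]
x0 = -b * w / (w @ w)

# Greedy semifactual search: move as far from x0 as possible while the
# reject decision stays the same ("even if the input changed this much,
# it would still be rejected").
best, best_dist = x0, 0.0
for _ in range(200):
    d = rng.normal(size=2)
    d /= np.linalg.norm(d)
    for step in np.linspace(0.05, 2.0, 40):
        cand = x0 + step * d
        if is_rejected(cand) and step > best_dist:
            best, best_dist = cand, step
print(best, best_dist)  # a distant input that is still rejected
```

The paper's "diverse" aspect would then ask for several such semifactuals in different directions, rather than the single farthest one found here.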