

Explainable AI (XAI) investigates methods for analyzing or complementing AI models so that the internal logic and the outputs of algorithms become transparent, interpretable, and humanly understandable and meaningful. Some dispute the need for XAI, and it is true that explainability is unnecessary in certain applications, namely those where the only goal is maximum performance, for example a cat-vs-dog image classifier. This is not the case, however, when applications concern people's lives and well-being. A system proposing a specific dose of a drug for a patient, or a system suggesting prison time for a defendant, must be open to questioning and justification. Explainability of such systems provides transparency and allows human experts to be advised by them rather than relying on them blindly.
Using Objects and a Graph Convolutional Network for the Bottom-up Recognition and Explanation of Events in Video
A data set of 1.8 billion measurements from a mechanical wrist with three axes that can hold tools, for example for spray painting in combination with a pump. The data set spans six months at 1-second intervals.
Deep Neural Network trained for the classification of dermoscopic images into nine categories.
State-of-the-art solver for logic programming under the answer set semantics.
LioNets: Local Interpretations Of Neural Networks through Penultimate Layer Decoding
LORE (LOcal Rule-based Explanations) is a model-agnostic explainer for tabular data.
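As a rough illustration of how a local rule-based explanation of this kind can be produced, the sketch below perturbs an instance to build a local neighborhood, labels it with a black-box model, fits a shallow decision tree as a surrogate, and reads off the rule the tree applies to the instance. It is a simplification: the actual LORE method generates the neighborhood with a genetic algorithm rather than Gaussian noise, and the black box and dataset used here (a random forest on Iris) are placeholders, not part of LORE itself.

```python
# Simplified, illustrative sketch of a LORE-style local rule-based explanation.
# NOTE: the real LORE builds the neighborhood with a genetic algorithm; Gaussian
# perturbation, the random forest and the Iris data are placeholder assumptions.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
black_box = RandomForestClassifier(random_state=0).fit(X, y)
x = X[25]                                    # instance to explain
rng = np.random.default_rng(0)

# 1. Build a local neighborhood around x and label it with the black box.
neighborhood = x + rng.normal(scale=0.3, size=(1000, X.shape[1]))
labels = black_box.predict(neighborhood)

# 2. Fit an interpretable surrogate (a shallow decision tree) on that neighborhood.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(neighborhood, labels)

# 3. Read off the decision rule the surrogate applies to x by walking its path.
tree = surrogate.tree_
node, conditions = 0, []
while tree.children_left[node] != -1:        # stop at a leaf
    feat, thr = tree.feature[node], tree.threshold[node]
    if x[feat] <= thr:
        conditions.append(f"feature[{feat}] <= {thr:.2f}")
        node = tree.children_left[node]
    else:
        conditions.append(f"feature[{feat}] > {thr:.2f}")
        node = tree.children_right[node]

rule_class = surrogate.predict(x.reshape(1, -1))[0]
print("IF " + " AND ".join(conditions) + f" THEN class = {rule_class}")
```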
LioNets on Time Series: the LioNets technique applied to the Turbofan Engine Degradation Simulation dataset (time-series data).
A Neural-Specific Local Interpretation Technique Exploiting Penultimate Layer Information
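To convey the core idea behind this penultimate-layer approach, the sketch below generates neighbors directly in the penultimate-layer space of a network, decodes them back to the input space, queries the model on them, and fits a transparent linear model whose coefficients serve as local feature-importance scores. The functions encode, decode and predict_from_latent are toy stand-ins introduced only for this sketch; the actual LioNets implementation uses the trained network's penultimate layer together with a separately trained decoder and a more careful neighborhood-generation scheme.

```python
# Minimal sketch of the LioNets idea (local interpretation via penultimate-layer
# decoding). The encoder, decoder and output head below are toy stand-ins, NOT
# the actual LioNets components, which rely on a trained decoder network.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_features, n_latent = 20, 5

W_enc = rng.normal(size=(n_features, n_latent))   # stand-in encoder weights
W_dec = np.linalg.pinv(W_enc)                     # stand-in decoder weights
w_out = rng.normal(size=n_latent)                 # stand-in output head

def encode(x):                # input -> penultimate-layer representation
    return np.maximum(x @ W_enc, 0.0)

def decode(z):                # penultimate-layer representation -> input space
    return z @ W_dec

def predict_from_latent(z):   # penultimate-layer representation -> model output
    return 1.0 / (1.0 + np.exp(-(z @ w_out)))

x = rng.normal(size=n_features)                   # instance to explain
z = encode(x)

# 1. Generate neighbors directly in the penultimate-layer space.
neighbors_z = z * rng.uniform(0.0, 2.0, size=(500, n_latent))

# 2. Decode the neighbors back to the input space and query the model on them.
neighbors_x = decode(neighbors_z)
outputs = predict_from_latent(neighbors_z)

# 3. Fit a transparent local model on the decoded neighbors; its coefficients
#    act as local feature-importance scores for the original input features.
local_model = Ridge(alpha=1.0).fit(neighbors_x, outputs)
importances = local_model.coef_

top = np.argsort(np.abs(importances))[::-1][:5]
print("Top local feature importances:", list(zip(top, importances[top].round(3))))
```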
Interactive dialogue between a user and a machine learning model via explanations.