
Explainable AI

Explainable AI (XAI) investigates methods for analyzing or complementing AI models in order to make the internal logic and outputs of algorithms transparent and interpretable, so that these processes become humanly understandable and meaningful. Some might dispute the need for XAI, and indeed explainability is not required in every application: where raw performance is the only goal, as in a cat-vs-dog image classifier, an opaque model may be perfectly acceptable. The situation changes when such systems affect people's lives and well-being. A system proposing a specific drug dosage for a patient, or one suggesting prison time for a defendant, must be open to questioning and able to justify its recommendations. Explainability provides this transparency and allows human experts to treat such systems as advisors rather than rely on them blindly.
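As a concrete illustration of what such an explanation can look like, a model-agnostic technique such as permutation feature importance can surface which inputs a trained model relies on most. The sketch below shows this with scikit-learn; the dataset and model are arbitrary choices made purely for the example, not a prescription for any particular domain.

```python
# Minimal sketch: permutation feature importance as a simple XAI technique.
# The dataset and model below are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train an opaque model on a tabular medical dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Estimate each feature's importance by measuring how much shuffling it
# degrades accuracy on held-out data.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the most influential features so a domain expert can sanity-check them.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f}")
```

An output like this ranks the inputs driving the model's decisions, giving a human expert something concrete to question before acting on a prediction.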