A Simple Guide to Verifiable AI

This page is an accessible entry point to what is meant by “Verifiable AI” and to the resources on Verifiable AI available on the AI4EU AI on-demand platform. This guide is part of the broader AI4EU scientific vision on “Human-centered AI”, available here.


What is “Verifiable AI”?

A question frequently asked is “Why are there no self-driving cars on our streets?”

This is mainly because we are not yet able to reliably verify that the various AI components needed to autonomously drive a car operate as safely as an average human driver. A car is an example of a safety-critical system, in which an error in a component, be it software, computing hardware, a physical sensor, a physical actuator or an energy source, can lead to erroneous system behavior that can in turn injure or kill a person.

This is the case for all transportation systems (airplanes, trains, boats, etc.), as well as medical systems, energy production and distribution systems, heavy-industry control systems, and defense and security systems. In these industries, which currently use very little AI, dependability is ensured by applying a completely different, far more cautious and costly engineering method and lifecycle than the one used for developing non-critical AI applications such as e-commerce voice ordering assistants and product recommender systems.

State-of-the-art dependability engineering generally starts by explicitly specifying, as systematically as possible and with the rigor of mathematical formalisms that allow automated verification, all the expected behaviors and properties of the overall system in all the situations it may encounter in operation. These behaviors are then decomposed into sub-behaviors, and at each refinement step a systematic, often fully automated verification of the expected behaviors and properties is carried out. These verification steps are also thoroughly documented so that they can be audited by a third-party dependability certification agency. Critical systems cannot be deployed until they are certified dependable.
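
To make the idea of a machine-checkable specification concrete, here is a minimal sketch in Python using the Z3 SMT solver’s bindings (the z3-solver package). The braking model, operating envelope and numbers are purely illustrative assumptions, not taken from any real vehicle or standard; the point is only that the property is checked automatically for every speed in the envelope, not on a finite set of test cases.

    from z3 import Real, Solver, And, unsat

    v = Real('v')   # vehicle speed (m/s)
    a = Real('a')   # braking deceleration (m/s^2)

    s = Solver()
    s.add(And(v >= 0, v <= 50))   # operating envelope: 0 to 50 m/s
    s.add(a == 8)                 # toy controller: always brake at 8 m/s^2
    # Safety property: the stopping distance v^2 / (2a) stays below the
    # 160 m detection range. We assert its NEGATION; if that is
    # unsatisfiable, the property holds for every speed in the envelope.
    s.add(v * v / (2 * a) >= 160)

    if s.check() == unsat:
        print("safety property verified over the whole envelope")
    else:
        print("counterexample:", s.model())

Contrast this with testing: a test suite samples a few speeds, whereas the solver rules out a violation at every real-valued speed in the envelope, which is the kind of exhaustive guarantee certification agencies ask for.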

Verifiable AI primarily investigates how to extend, and in some cases rethink, dependability engineering to be able to certify AI-powered critical systems.

Consider again the task of driving a car. It is challenging for an AI because it involves integrating many sub-tasks, such as multi-sensor perception, navigation, avoidance of obstacles that move simultaneously in directions and at speeds that must be correctly anticipated, and the combination of reflex and deliberative action choice, all under real-time constraints (e.g., taking two more seconds to choose between braking and swerving can make both actions ineffective at avoiding a fatal accident).

How can we formally specify what the output of a perception component for a self-driving car should be for each possible set of pixels that the car might receive as input during its lifetime? The current approach is to specify such tasks informally, through a large set of example images, each with its classification label, and to use machine learning algorithms to learn an image classifier from those examples. The challenge is to obtain a set of images that is representative of the ones the driving AI will encounter in its lifetime. How can such an image dataset be labeled with the correct classifications without massive human labor? How can we guarantee that a small detail in an image will not result in a dangerously wrong classification? Classifiers built with the current approach have been shown to misclassify street signs when even a very small part of a sign has been spray-painted or has lost its paint.
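
This fragility to small image changes can be demonstrated in a few lines. Below is a minimal sketch of the Fast Gradient Sign Method (FGSM) using PyTorch; the model, image and label in the usage comments are hypothetical placeholders, since any differentiable classifier is vulnerable to this kind of perturbation.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, image, true_label, epsilon=0.03):
        """Shift every pixel by at most `epsilon` in the direction that
        increases the classification loss the most."""
        image = image.clone().requires_grad_(True)
        loss = F.cross_entropy(model(image), true_label)
        loss.backward()
        perturbed = image + epsilon * image.grad.sign()
        return perturbed.clamp(0, 1).detach()

    # Hypothetical usage: `model` is any torch.nn.Module classifier,
    # `image` a (1, 3, H, W) tensor in [0, 1], `label` its class index tensor.
    # adversarial = fgsm_attack(model, image, label)
    # model(image).argmax(1) and model(adversarial).argmax(1) often differ,
    # even though the two images look identical to a human.

A per-pixel shift of a few percent is typically invisible to a human, yet on an undefended classifier it routinely flips the predicted class, which is one reason certification cannot rely on a fixed set of test images alone.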

How should the concept of “a pedestrian” be defined from pixels? How should the action of “crossing” be defined? Or rather, how can they be defined in a way that is sufficiently precise and universal to support verifying correct interpretation up to some probability threshold prescribed by a dependability certification standard? How can a system recognize that, despite their widely different contours, a walking pedestrian and a pedestrian in a wheelchair are both pedestrians, and also recognize as pedestrians a running child, an electric unicyclist, or a skater in a low squatting position?
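
One way to make “up to some probability threshold” operational is statistical: lower-bound the component’s accuracy on a held-out test set and compare that bound with the prescribed threshold. The sketch below uses Hoeffding’s inequality; the counts, confidence level and threshold are illustrative assumptions, not values from any actual certification standard.

    import math

    def accuracy_lower_bound(num_correct, num_trials, confidence=0.999):
        """One-sided Hoeffding bound: the true accuracy is at least the
        returned value with probability `confidence`, assuming the test
        images are drawn independently from the operating distribution."""
        empirical = num_correct / num_trials
        slack = math.sqrt(math.log(1 / (1 - confidence)) / (2 * num_trials))
        return empirical - slack

    # E.g., 99,940 correct pedestrian detections out of 100,000 test images:
    bound = accuracy_lower_bound(99940, 100000)
    print(f"accuracy >= {bound:.5f} with 99.9% confidence")
    print("meets threshold" if bound >= 0.99 else "fails threshold")

Note how much the guarantee hinges on the independence assumption in the docstring: it is exactly the representative-image-set problem raised above.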

Can a driver’s perception be dependably specified as the task of detecting objects in an image, or is there more to it? Should the perception component not learn to interpret videos rather than static images? How can it recognize, for example, that a pedestrian currently walking briskly on the sidewalk, eyes fully focused on a cell phone and headphones on, might very well start crossing the street automatically, before noticing that the light is green for cars, and will not hear a horn anyway? Note the jump in the complexity of visual perception and human behavior modeling required for this kind of anticipation, which is crucial to prevent accidents. How can we verify that a self-driving car will behave as safely as an average driver, who performs such detailed analysis and anticipation routinely without thinking about it? These are examples of the difficult open research questions that Verifiable AI currently investigates.
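
As a hint of what even the simplest layer of such anticipation involves, here is a minimal sketch that extrapolates a pedestrian’s recent positions under a constant-velocity assumption and asks whether they would reach the car’s lane within a short horizon. All names, coordinates and numbers are illustrative assumptions; real systems rely on far richer behavior models learned from video.

    def may_enter_lane(track, dt, lane_edge_y, horizon):
        """track: recent (x, y) pedestrian positions in metres, sampled every
        `dt` seconds, with y = 0 at the near edge of the car's lane.
        Returns True if a constant-velocity extrapolation reaches the lane
        edge within `horizon` seconds."""
        (x0, y0), (x1, y1) = track[-2], track[-1]
        vy = (y1 - y0) / dt        # lateral speed toward the lane
        if vy <= 0:                # moving away from, or parallel to, the lane
            return False
        return (lane_edge_y - y1) / vy <= horizon

    # A pedestrian 2 m from the lane edge, drifting toward it at 1.5 m/s,
    # would reach it in about 1.3 s:
    print(may_enter_lane([(0.0, -2.3), (0.0, -2.0)], dt=0.2,
                         lane_edge_y=0.0, horizon=3.0))   # True

The distracted-pedestrian scenario above shows why this is nowhere near enough: the safest prediction depends on cues such as gaze, headphones and gait that no kinematic extrapolation captures, and verifying a model of those cues is an open problem.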

Contact Info

Please help us complete and maintain this document by notifying the document maintainers, Jacques Robin and Florian Zimmermann, of corrections or additions.

Note: if you want to add a software resource, dataset or researcher to this document, you first need to make sure that it is available on the AI4EU platform, e.g., by publishing the software.

Citing

This document is published under the Creative Commons Attribution 4.0 International license (CC BY 4.0). It should be cited as:

Jacques Robin and Florian Zimmermann (editors), “A simple guide to Verifiable AI”. Published on the AI4EU platform: http://ai4eu.eu. June 24, 2020.