Tutorial on the DIGIHALL platform, based on Papyrus for Robotics, for designing vehicle software architectures and for validating behavior-tree based scenarios and autonomous driving policies in CARLA/ROS 2
The Verifiable AI objectives are organized around four open research questions that constitute four dimensions of the grand challenge posed by the emerging use of AI in safety-critical applications. These four dimensions also provide a natural way to organize the background material on Verifiable AI:
- Dependability with AI: how to design and verify dependable and secure systems that include unverifiable AI components?
- Dependability of AI: how to verify the dependability and security of AI components themselves (i.e., domain-independent inference engines, as well as knowledge bases either machine learned from data or manually encoded by human domain experts)?
- AI for dependability: which AI techniques can themselves be leveraged to automate the design and verification of systems that include unverified AI components?
- Meta-AI dependability: which AI techniques can themselves be leveraged to automate the design and verification of AI components (meta-AI)?
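The behavior-tree formalism mentioned in the tutorial title can be illustrated with a minimal, self-contained sketch. This is not the Papyrus for Robotics tooling or the CARLA/ROS 2 integration; it is a generic implementation of the core tick semantics, and the scenario step names are purely illustrative:

```python
from enum import Enum

class Status(Enum):
    SUCCESS = "success"
    FAILURE = "failure"
    RUNNING = "running"

class Behaviour:
    """Base class: a node that is ticked and reports a status."""
    def __init__(self, name):
        self.name = name
    def tick(self) -> Status:
        raise NotImplementedError

class Action(Behaviour):
    """Leaf node wrapping a callable that returns a Status."""
    def __init__(self, name, fn):
        super().__init__(name)
        self.fn = fn
    def tick(self) -> Status:
        return self.fn()

class Sequence(Behaviour):
    """Composite node: ticks children in order and stops at the first
    child that does not succeed, propagating its status upward."""
    def __init__(self, name, children):
        super().__init__(name)
        self.children = children
    def tick(self) -> Status:
        for child in self.children:
            status = child.tick()
            if status != Status.SUCCESS:
                return status
        return Status.SUCCESS

# Hypothetical driving-scenario steps (names are illustrative only).
approach = Action("approach_intersection", lambda: Status.SUCCESS)
yield_check = Action("yield_to_pedestrian", lambda: Status.SUCCESS)
cross = Action("cross_intersection", lambda: Status.RUNNING)

scenario = Sequence("intersection_scenario", [approach, yield_check, cross])
print(scenario.tick())  # Status.RUNNING until cross_intersection succeeds
```

A scenario checker repeatedly ticks the root node; the tree reports RUNNING while a step is in progress, and a FAILURE in any child of the sequence aborts the remaining steps, which is what makes the formalism convenient for validating safety-relevant scenarios.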