Trustworthy AI Cluster: Main innovations and future challenges | Birds of a Feather Workshops
Future-Ready On-Demand Solutions with AI, Data, and Robotics
Trustworthy AI Cluster partners attended The Future-Ready ADRA Event, where they shared current and promising Trustworthy AI results from the sibling projects and proposed new challenges to be addressed to improve the acceptance of AI solutions in the industrial domain.

Trustworthy AI Cluster @ Future-Ready ADRA event
On the 18th of February, from 10:30 to 12:00, the Trustworthy AI Cluster partners took part in Future-Ready: On-Demand Solutions with AI, Data, and Robotics, an event organised by the AI, Data and Robotics Association (ADRA). The following cluster representatives:
- Michel Barreteau - ULTIMATE
- Stylianos Trevlakis - TALON
- Jaume Abella - SAFEXPLAIN
- Nikos Katzouris - EVENFLOW
discussed and shared current and promising Trustworthy AI results from the sibling projects and proposed new challenges to be addressed to improve the acceptance of AI solutions in the industrial domain.
Below are the three remarkable or promising results/insights and the three future research/innovation challenges (or gaps) that were presented.
Three remarkable or promising results/insights
1) Considering hybrid AI to build or increase trustworthiness:
Compared with pure ML/DL applications, hybrid AI (especially neuro-symbolic approaches) makes it possible to take symbolic knowledge into account (e.g. temporal properties, rules expressed by humans) and to complete their evaluation and V&V with robust methods (such as formal ones).
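To make this first result concrete, here is a minimal, hypothetical Python sketch of the hybrid pattern described above: a learned component's prediction is only trusted when human-written symbolic rules hold. The stub model, the rule, and the thresholds are illustrative assumptions, not artefacts of any cluster project.

```python
def ml_model(temperature: float) -> str:
    """Stub for a learned classifier; a real system would call an ML/DL model."""
    return "safe" if temperature < 80.0 else "overheat"

# Symbolic knowledge: rules expressed by humans, checkable independently
# of the learned model (e.g. a simple threshold/temporal property).
RULES = [
    ("temperature must stay below 100 C", lambda t: t < 100.0),
]

def hybrid_decision(temperature: float) -> str:
    """Veto the learned prediction whenever a symbolic rule is violated."""
    for description, holds in RULES:
        if not holds(temperature):
            return f"fallback (rule violated: {description})"
    return ml_model(temperature)

if __name__ == "__main__":
    for t in (25.0, 85.0, 120.0):
        print(t, "->", hybrid_decision(t))
```

The point of the pattern is that the symbolic part can be verified with robust (e.g. formal) methods even when the learned part cannot.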
2) Building trustworthy AI activities along the whole AI life cycle (at system / algorithm / SW / HW levels):
Both legacy and innovative AI-based systems need an end-to-end approach to design, increase, and maintain trustworthiness consistently across the different engineering levels.
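One possible reading of "consistency across levels", sketched below under stated assumptions: a simple matrix of trustworthiness activities per engineering level and life-cycle phase, with a helper that flags uncovered cells. The levels, phases, and activities are invented for illustration.

```python
LEVELS = ["system", "algorithm", "software", "hardware"]
PHASES = ["design", "implementation", "V&V", "operation"]

# (level, phase) -> planned trustworthiness activities (illustrative only)
activities = {
    ("system", "design"): ["hazard analysis"],
    ("algorithm", "V&V"): ["robustness testing"],
    ("software", "implementation"): ["coding-standard checks"],
}

def coverage_gaps(activities):
    """Return the (level, phase) cells with no trustworthiness activity yet."""
    return [(lvl, ph) for lvl in LEVELS for ph in PHASES
            if not activities.get((lvl, ph))]

if __name__ == "__main__":
    for gap in coverage_gaps(activities):
        print("no activity planned for:", gap)
```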
3) End-to-end Trustworthy AI (Value Sensitive Design based) methodology:
Trustworthy AI (even when derived from ethical principles) is mainly driven by quantified technical criteria. The proposed experimental approach consists in treating the ethical dimension at the same level as the other technical trustworthy AI criteria, through questionnaires filled in by different stakeholders (e.g. customer, AI developer, AI V&V engineer). This may influence the way they think about, design, implement, and V&V the AI solution.
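As a rough illustration of how such questionnaires could be exploited, the hypothetical sketch below averages stakeholder answers so that the ethical criterion is scored alongside the technical ones. The stakeholders, criteria, and 1-to-5 scale are assumptions for illustration only, not the methodology's actual questionnaire.

```python
from statistics import mean

# Stakeholder answers on a 1-5 scale (illustrative assumptions).
questionnaires = {
    "customer":     {"robustness": 4, "explainability": 3, "ethics": 5},
    "AI developer": {"robustness": 5, "explainability": 4, "ethics": 3},
    "AI V&V":       {"robustness": 3, "explainability": 4, "ethics": 4},
}

def trustworthiness_profile(questionnaires):
    """Average each criterion over all stakeholders, ethics included."""
    criteria = {c for answers in questionnaires.values() for c in answers}
    return {c: mean(a[c] for a in questionnaires.values() if c in a)
            for c in sorted(criteria)}

if __name__ == "__main__":
    for criterion, score in trustworthiness_profile(questionnaires).items():
        print(f"{criterion}: {score:.1f}")
```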
Three future research/innovation challenges (or gaps)
1) Scalability and transferability of trustworthy frameworks:
Create a layered approach to trustworthiness (data, control, and human layers). Explain how the framework can scale to future, more demanding applications. Also consider and propose ways to transfer/adapt trustworthiness between different verticals (e.g., from robotics to manufacturing).
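One way this layered approach could be structured, sketched below under stated assumptions: each vertical supplies its own data/control/human layer checks behind a common interface, and it is that interface, not the checks themselves, that transfers between verticals. All checks shown are placeholders.

```python
from typing import Callable

def assess(layers: dict[str, Callable[[], bool]]) -> dict[str, bool]:
    """Run each layer's trustworthiness check and report the results."""
    return {name: check() for name, check in layers.items()}

# A vertical (here, robotics) instantiates the three layers with
# domain-specific checks; the placeholders below are assumptions.
robotics_layers = {
    "data":    lambda: True,   # e.g. sensor data validated against ranges
    "control": lambda: True,   # e.g. controller outputs within safe limits
    "human":   lambda: False,  # e.g. operator override channel available?
}

if __name__ == "__main__":
    print(assess(robotics_layers))
```

Transferring to another vertical (e.g. manufacturing) would then mean supplying a new dictionary of checks while keeping the assessment logic unchanged.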
2) AI-based risk management environment:
Build a dedicated environment for managing AI risks (potentially based on the NIST AI RMF) in a specific critical domain, relying on an AI risk repository (potentially inspired by the MIT one). This environment should be flexible enough to instantiate the AI risks and management rules applicable to a given critical domain. It should fulfil the requirements of the relevant AI regulation document(s), depending on the target risk level, to prevent or mitigate risks according to the chosen policy. It could be coupled with classic tool-supported risk management methodologies (e.g. STPA).
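A minimal sketch, assuming a very simplified risk model, of how such a repository could be instantiated per critical domain. The risks, domains, levels, and mitigation rules below are invented for illustration and are not taken from the NIST AI RMF or the MIT repository.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    domains: list[str]   # critical domains where the risk applies
    level: str           # e.g. "high", "limited"
    mitigation: str      # management rule chosen by the policy

# Illustrative repository entries (assumptions, not real catalogue items).
REPOSITORY = [
    Risk("data drift", ["railway", "health"], "high", "continuous monitoring"),
    Risk("adversarial input", ["railway"], "high", "input filtering + fallback"),
    Risk("opaque decision", ["health"], "limited", "post-hoc explanation"),
]

def instantiate(domain: str, level: str = "high"):
    """Select the risks and management rules applicable to one domain."""
    return [r for r in REPOSITORY if domain in r.domains and r.level == level]

if __name__ == "__main__":
    for risk in instantiate("railway"):
        print(f"{risk.name}: {risk.mitigation}")
```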
3) Collecting and organizing good practices and RETEX (return on experience, incl. assumptions) in terms of trustworthiness, to make the AI life cycle more mature (an idea from the audience).