
Fri, 11/20/2020 - 12:23

LESSONS LEARNT ON TRUSTWORTHY AI MADE IN EUROPE: CHALLENGES AND ANSWERS

On November 13th 2020, the AI4EU Observatory on Society and AI organised its first workshop, “Trustworthy AI made in Europe: from Principles to Practices”.

ELSEC Workshop

The event focused on current ethical, legal, and technical challenges arising in the design and deployment of AI systems, and on the impact these have on society. Discussions and reflections in each panel session highlighted how these challenges should be addressed in line with the European human-centric approach to AI. The workshop included speakers from different backgrounds (philosophers, engineers and computer scientists, lawyers, economists) and sectors (academia, industry, European and international institutions). This diversity of disciplines and knowledge allowed us to cover a broad spectrum of views on, and contributions to, the concept of trustworthy AI.

The morning session focused on the challenges surrounding Trustworthy AI. Some of them are cultural and invite us to reconsider the way in which we depict AI in the media. For example, Luc Steels pointed to two different AI narratives: the one promoted by fiction and the one disseminated by scientific and technological disciplines. A connected risk here is the spread of misconceptions about AI capabilities, and hence the need for certifying the performance of real AI systems.

Another set of concerns was more philosophical in nature and regarded the epistemic uncertainty in predicting the behaviour of autonomous and intelligent systems. According to Viola Schiaffonati, it is time to move towards new forms of experimentation that accommodate the study of the complex and dynamic interactions between an AI system and its environment.

Advancing the scientific understanding of AI systems and their impact on humans in specific domains, such as criminal justice, is the main goal of the HUMAINT project. Emilia Gomez, who leads this research, highlighted design choices and technical metrics that characterise different levels of intervention in the development of an AI system. These may include, for example, the conceptualization of fairness and its translation into metrics that can be monitored and compared. She stressed the need for standards and methodologies, but also the importance of human oversight to make algorithms understandable for domain experts and users.
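One common way such a translation into a monitorable metric works in practice is the so-called demographic parity gap. The following minimal Python sketch illustrates the general idea only; the function name and the data are hypothetical and are not taken from the HUMAINT project:

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-decision rates between two groups.

    y_pred : array of 0/1 model decisions
    group  : array of 0/1 protected-attribute membership
    A gap close to 0 means the model grants positive outcomes at similar
    rates to both groups -- one of several competing notions of fairness.
    """
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Hypothetical decisions for six individuals, three per group
print(demographic_parity_gap([1, 0, 1, 1, 0, 0], [0, 0, 0, 1, 1, 1]))  # ~0.33
```

A metric of this kind can be computed and tracked over time, which is what makes it suitable for the monitoring and comparison Gomez described.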

The delicate balance between the technical and social aspects of technology, including AI, was one of the main focal points of Manuela Battaglini’s presentation. Starting from the disruptive big data innovations that allowed tech companies to process huge quantities of data (think of Hadoop), she highlighted the growing gap between physical technologies (the internet, big data or AI/ML, among others) and social technologies (such as public administrations, governments, education or laws). She then described a framework based on GDPR principles that can help us address the related ethical and legal issues, such as opaque decisions and discrimination.

In the transition towards Trustworthy AI, companies are key actors. As Sonja Zillner suggested, businesses can take advantage of AI solutions, especially for optimizing and monitoring industrial processes. The industrial sector is familiar with risk management systems that take into account ethical requirements such as safety, reliability, security, and privacy. However, when an AI system interacts directly with humans, priorities must shift from machine performance and efficiency to user protection. For this reason, Zillner introduced Siemens’ approach to safe AI: first, a list of ethical principles and tools to validate them; then, an engineering environment ensuring that the data collected and used in AI or Machine Learning systems are of high quality. The last layer consists of safety argumentation and a regulatory framework that align the architectural design with existing norms and regulations.

The morning session concluded with the presentation of Z-Inspection®, an assessment process developed by Roberto V. Zicari and a multidisciplinary team of experts, and applied in the healthcare sector, for instance in a case study on predicting cardiovascular risks. The assessment methodology helps identify the ethical, technical and legal implications of an AI system, highlighting at the same time its risks and benefits. Zicari stressed the challenge of mapping the philosophical concepts behind ethical requirements, such as fairness, onto Machine Learning metrics.

In the afternoon, more practical examples of tools and initiatives were presented, showing how trustworthiness can be improved in practice. As Andreas Theodorou reminded the audience, the key to creating trustworthy AI is not only verifying an AI system but also ensuring the ethical accountability and responsibility that developers have towards society. The panel included speakers from industry, who presented different technical approaches to Trustworthy AI.

Arnaud Gotlieb described a method for testing autonomous systems, so-called metamorphic testing. He suggested how this technique, introduced more than twenty years ago, can support the testing of systems whose output is hard to predict (e.g. self-adaptive programs) and, in so doing, support the implementation of Trustworthy AI requirements such as transparency and non-discrimination (a minimal sketch of the idea follows this paragraph). The importance of applying principles in business was presented by Shalini Kurapati, who highlighted how human oversight is necessary to support more transparent model assessment and to tackle tensions between companies’ competitiveness and societal concerns.
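To make the idea concrete, here is a minimal Python sketch of a metamorphic test. It is a generic textbook-style illustration under simple assumptions, not Gotlieb’s actual method: instead of comparing an output against a known correct value (which complex systems often lack), we check that related inputs produce consistently related outputs.

```python
import math
import random

def system_under_test(x):
    # Stand-in for a component whose exact output is hard to predict;
    # math.sin is used here purely for illustration.
    return math.sin(x)

def check_metamorphic_relation(trials=1000):
    """Metamorphic relation for sine: sin(x) == sin(pi - x).

    No oracle giving the 'correct' value of sin(x) is needed; we only
    verify that the relation between outputs holds for random inputs.
    """
    for _ in range(trials):
        x = random.uniform(-10.0, 10.0)
        assert math.isclose(system_under_test(x),
                            system_under_test(math.pi - x),
                            abs_tol=1e-9), f"Relation violated at x={x}"

check_metamorphic_relation()
print("All metamorphic checks passed.")
```

For an autonomous system, the mathematical identity would be replaced by a domain-specific property, for example that adding an obstacle far away from a planned route should not change that route.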

Once again, the human perspective was at the centre of interest for the speakers from academia. Juan Carlos Nieves stressed how humans are fundamental in interpreting and contextualizing norms and, to make better sense of this, provided an example of an AI application to support and enhance autonomous living among older adults. Dario Garcia-Gasulla presented a design scheme that would empower end-users by communicating more clearly a system’s level of transparency and data privacy. This would help users make more conscious, informed decisions and support experts investigating the characteristics of an AI system.

Finally, Karine Perset presented an initiative that is broader in scope, the OECD AI Policy Observatory. She introduced some of the activities carried out in this context, including a framework for navigating the policy implications of different types of AI systems and interactive tools for exploring different types of AI-related content (e.g. news and publications).

The last part of the workshop was devoted to a public discussion which stressed, on the one hand, the abundance of tools and methodologies for the assessment of Trustworthy AI and, on the other, the need to make such assessments themselves accountable and transparent. Ultimately, the workshop represents a starting point for promoting an interdisciplinary and intersectoral dialogue around AI and for fostering a culture of Trustworthy AI. There is a clear need for AI stakeholders to commit to society and domain experts by promoting responsible practices and providing tools that make AI understandable. The workshop also showed the importance of defining standards to measure the performance and interpret the outcomes of AI systems, especially when these have a direct impact on humans.

Invited talks

Ulises Cortes (Barcelona Supercomputing Center)

Luc Steels (ICREA & University of Venice) - Roads towards trustworthy AI [video presentation]

Viola Schiaffonati (Politecnico di Milano) - AI and scientific method: from epistemology to ethics [video presentation] [slides]

Emilia Gomez (AI WATCH & HUMAINT) - HUMAINT: understanding the impact of AI on human behaviour [video presentation]

Manuela Battaglini (Transparent Internet) - Transparency, automated decision-making processes and personal profiling [video presentation] [slides]

Sonja Zillner (Siemens) - Trustworthy AI for Industrial Application [video presentation] [slides]

Roberto V. Zicari (Goethe University Frankfurt) - Z-Inspection®: A Process to Assess Trustworthy AI [video presentation] [slides]

Andreas Theodorou (Umeå University)

Karine Perset (OECD) - Implementing the OECD AI Principles [video presentation]

Arnaud Gotlieb (Simula) - Metamorphic Testing: A Validation Technique for Trustworthy AI [video presentation]

Shalini Kurapati (Clearbox AI) - Machine Learning model assessment for trustworthy and human-centric AI adoption in enterprises [video presentation]

Juan Carlos Nieves (Umeå University) - Toward Human-Centric Trustworthy Systems [video presentation]

Dario Garcia-Gasulla (Barcelona Supercomputing Center) - Signs for Ethical AI: A Route Towards Transparency [video presentation] [slides]

Source: Francesca Foffano