
Research

Ethics Assessment Tools

Explore the tools that give developers and researchers mechanisms to ensure trustworthy and responsible use of AI

In 2018, the European Commission opened a process to select a group of experts in Artificial Intelligence (AI) from civil society, academia and industry. As a result, the High-Level Expert Group on Artificial Intelligence (AI HLEG) was created in June 2018, with 52 members from different countries of the European Union (EU). The main objective of this independent group is to support the creation of the European Strategy for Artificial Intelligence, with a vision of “ethical, secure and cutting-edge AI”. To this end, the group published two documents in its first year of activity: (i) the Ethics Guidelines for Trustworthy AI (the “Guidelines”), along with an assessment list of questions, and (ii) the Policy and Investment Recommendations.

Trustworthy AI is defined by three complementary components: lawful AI, ethical AI and robust AI. The Guidelines take a human-centric approach to AI and identify four ethical principles and seven requirements that organisations should follow in order to achieve Trustworthy AI. The document is complemented by a set of questions for each of the seven requirements, which aim to operationalize them (the “Assessment List”). The seven requirements are listed below, followed by a sketch of how they might be encoded in practice:


  1. Human Agency and Oversight: fundamental rights, human agency and human oversight.
  2. Technical Robustness and Safety: resilience to attack and security, fallback plan and general safety, accuracy, reliability and reproducibility.
  3. Privacy and Data Governance: respect for privacy, quality and integrity of data, access to data.
  4. Transparency: traceability, explainability, communication.
  5. Diversity, Non-discrimination and Fairness: avoidance of unfair bias, accessibility and universal design.
  6. Societal and Environmental Well-being: sustainability and environmental friendliness, social impact, society and democracy.
  7. Accountability: auditability, minimization and reporting of negative impact, trade-offs and redress.
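
As a rough illustration, the sketch below encodes the seven requirements and their sub-topics as a simple checklist structure that a self-assessment could be built on. This is a hypothetical Python example, not part of the official ALTAI tool; the actual Assessment List questions are far more detailed.

```python
# Hypothetical sketch: the seven Trustworthy AI requirements and their
# sub-topics encoded as a checklist structure. Illustrative only; the
# official ALTAI questionnaire is far more detailed.
REQUIREMENTS = {
    "Human Agency and Oversight": [
        "fundamental rights", "human agency", "human oversight",
    ],
    "Technical Robustness and Safety": [
        "resilience to attack and security", "fallback plan and general safety",
        "accuracy", "reliability and reproducibility",
    ],
    "Privacy and Data Governance": [
        "respect for privacy", "quality and integrity of data", "access to data",
    ],
    "Transparency": [
        "traceability", "explainability", "communication",
    ],
    "Diversity, Non-discrimination and Fairness": [
        "avoidance of unfair bias", "accessibility and universal design",
    ],
    "Societal and Environmental Well-being": [
        "sustainability and environmental friendliness", "social impact",
        "society and democracy",
    ],
    "Accountability": [
        "auditability", "minimization and reporting of negative impact",
        "trade-offs and redress",
    ],
}
```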


Starting in June 2019, a piloting process made three main pathways available for collecting feedback on the Assessment List: 1) an online survey (“quantitative analysis”); 2) a series of in-depth interviews with European organisations (“deep dives”); and 3) feedback submitted through the AI Alliance. Based on the feedback collected through these three pathways, the Assessment List was revised, resulting in the current document, the Assessment List for Trustworthy Artificial Intelligence (“ALTAI”).


The ALTAI Tool

The ALTAI tool aims to provide a basis for self-evaluation of Trustworthy AI. It helps organisations understand what Trustworthy AI is and, in particular, what risks an AI system might generate.
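
To make the idea of self-evaluation concrete, here is a minimal Python sketch of how per-requirement answers could be tallied and weak areas flagged as risks. The scoring scheme and threshold are assumptions made for illustration; the actual ALTAI tool uses its own structured questions and recommendations rather than this scoring.

```python
def flag_risk_areas(answers: dict[str, list[bool]], threshold: float = 0.75) -> list[str]:
    """Return requirements whose share of 'yes' answers falls below the threshold.

    `answers` maps each requirement name to the yes/no answers given to its
    sub-questions. Hypothetical scoring; not the official ALTAI method.
    """
    flagged = []
    for requirement, responses in answers.items():
        score = sum(responses) / len(responses) if responses else 0.0
        if score < threshold:
            flagged.append(requirement)
    return flagged

# Example: two requirements assessed, one falls below the threshold.
print(flag_risk_areas({
    "Transparency": [True, True, False],          # 0.67 -> flagged
    "Accountability": [True, True, True, False],  # 0.75 -> not flagged
}))
```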


Access the tool