

9 RECOMMENDATIONS ON ETHICS FROM THE SHERPA PROJECT

To understand what European projects on AI ethics can teach us.

The Centre for Computing and Social Responsibility (CCSR) has written an article reflecting on the SHERPA project, an EU-funded project that analyses how AI and big data analytics affect ethics and human rights. Drawing on this experience, Dr Nitika Bhalla and Prof Bernd Stahl offer a set of nine recommendations. More details are available in the complete version (downloadable below) and on the official website.

Complete version: SHERPA_article.pdf

Official website: https://www.project-sherpa.eu/

Recommendations: https://www.project-sherpa.eu/recommendations/

Smart information systems (SIS) have the potential to affect most aspects of modern society. Ideally, big data analytics and artificial intelligence (AI) would create an environment in which humans feel complemented and empowered; however, this is not always the case, and these technologies therefore attract significant public debate.

A key European-funded project that brings the ethical and human rights issues of these technologies to light is the SHERPA project (‘Shaping the ethical dimensions of smart information systems (SIS) – a European perspective’). The aim of the project is to analyse and understand the novel ways in which SIS can impact current ethical and human rights issues, and to produce a set of recommendations that will help to enhance human flourishing and public confidence. In its final phase, the project will advocate these recommendations to the EU and other policy and decision makers, so that they can be taken forward to improve current ethical, human rights and legal frameworks in the interest of the public good.

SHERPA initially investigated 10 case studies of areas that employ SIS and a further 5 policy scenarios. These empirical studies were important for understanding the main issues that concern users of SIS. In parallel, a human rights analysis was carried out to understand current frameworks, alongside a technical analysis of the cybersecurity vulnerabilities of SIS. One distinctive aspect of the project is its continual working relationship with a broad range of stakeholders, who have helped to evaluate our findings via interviews, an online survey, the Delphi study, focus groups and the SHERPA stakeholder board. In light of this, the project has developed a set of guidelines (one for developers and one for users of SIS), undertaken an analysis of regulatory options and contributed to standardisation activities relating to AI and big data.

Based on the understanding of ethical and human rights issues gained from all of these findings, SHERPA then developed the set of recommendations described below. (For a more detailed explanation, please see our website.)

SHERPA Recommendations:

1) Conceptual clarity

Recommendation: Use appropriate and clear definitions of AI and digital technology. 


2) AI Impact Assessment

Recommendation: Develop baseline model for AI impact assessments


3) Ethics by Design

Recommendation: Promote Ethics by Design for researchers in EC-funded projects


4) Education on AI and Ethics

Recommendation: Create training and education pathways that include ethics and human rights in AI


5) Standardization

Recommendation: Include research findings on AI ethics in standardization


6) Security

Recommendation: Undertake security analysis for machine learning systems


7) Regulatory Framework

Recommendation: Develop a regulatory framework and enforcement mechanisms for AI


8) European Agency for AI

Recommendation: Establish an independent European Union Agency for AI


9) AI Ethics Officer

Recommendation: Establish role of AI (Ethics) Officer in organisations


Source: Dr Nitika Bhalla and Prof Bernd Stahl