A project that will develop a framework to tackle the multiple manifestations of bias and unfairness in AI by providing a controlled experimentation environment for AI developers.
AI-based decision support systems are increasingly deployed in industry, in the public and private sectors, and in policymaking. As society faces a dramatic increase in inequality and intersectional discrimination, we must ensure that AI systems do not amplify this phenomenon but instead help mitigate it, starting with the developers who build them. For domain experts and stakeholders to trust these systems, they must be able to trust the decisions the systems make.
AEQUITAS' controlled testing environment will help assess bias in AI systems by identifying potential causes of bias in data, algorithms, and the interpretation of results, providing, where possible, effective methods and engineering guidelines to repair, remove, and mitigate bias. It will also provide fairness-by-design guidelines, methodologies, and software engineering techniques for designing new bias-free AI systems.
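To make the idea of assessing bias in data and decisions concrete, here is a minimal sketch of the kind of group-fairness check such a testing environment might automate. The metrics (demographic parity gap and disparate impact ratio) are standard fairness measures, but the function names and the data are illustrative assumptions, not part of the AEQUITAS framework itself.

```python
# Illustrative sketch only: dataset-level group-fairness checks.
# The data below is synthetic; function names are hypothetical.

def selection_rate(outcomes):
    """Fraction of favorable (positive) decisions in a group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(group_a, group_b):
    """Absolute difference in selection rates between two groups.
    A value near 0 suggests parity; larger values flag potential bias."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.
    The common '80% rule' flags ratios below 0.8 for review."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical binary decisions (1 = favorable) for two demographic groups.
group_a = [1, 1, 1, 0, 1, 0, 1, 1]   # selection rate 0.75
group_b = [1, 0, 0, 0, 1, 0, 0, 1]   # selection rate 0.375

print(demographic_parity_gap(group_a, group_b))   # → 0.375
print(disparate_impact_ratio(group_a, group_b))   # → 0.5, below 0.8: flagged
```

A controlled experimentation environment would run checks like these across many protected attributes and decision thresholds, turning informal concerns about unfairness into reproducible, quantitative tests.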