MAMMOth

Multi-Attribute, Multimodal Bias Mitigation in AI Systems

The MAMMOth project tackles AI bias by focusing on multi-discrimination mitigation for tabular, network, and multimodal data.


Artificial Intelligence (AI) is increasingly employed by businesses, governments, and other organizations to make decisions with far-reaching impacts on individuals and society. This offers significant opportunities for automation across sectors and in daily life, but it also brings risks of discrimination against minority and marginalized population groups on the basis of so-called protected attributes, such as gender, race, and age. Despite the large body of research to date, existing methods work only in limited settings, under very constrained assumptions, and do not reflect the complexity and requirements of real-world applications.

To this end, the MAMMOth project focuses on multi-discrimination mitigation for tabular, network, and multimodal data. MAMMOth aims to address the associated scientific challenges by developing an innovative fairness-aware, data-driven AI foundation that provides the tools and techniques needed to discover and mitigate (multi-)discrimination, and that ensures the accountability of AI systems with respect to multiple protected attributes, covering traditional tabular data as well as more complex network and visual data.
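What discrimination discovery with respect to multiple protected attributes means in practice can be illustrated with existing third-party fairness tooling. The sketch below is illustrative only and is not MAMMOth code: it assumes the fairlearn library, toy data, and hypothetical column names, and shows how auditing a classifier across intersectional subgroups (gender × age group) differs from checking one protected attribute at a time.

```python
# Illustrative sketch: fairlearn, the toy data, and the column
# names are assumptions for demonstration, not MAMMOth outputs.
import pandas as pd
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate

# Toy predictions with two protected attributes (gender, age group).
df = pd.DataFrame({
    "gender":    ["F", "F", "M", "M", "F", "M", "F", "M"],
    "age_group": ["<40", ">=40", "<40", ">=40", "<40", "<40", ">=40", ">=40"],
    "y_true":    [1, 0, 1, 1, 0, 1, 1, 0],
    "y_pred":    [1, 0, 1, 1, 0, 0, 0, 1],
})

# Grouping by several sensitive features at once evaluates every
# intersectional subgroup (e.g. F/>=40), not each attribute in isolation.
mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=df["y_true"],
    y_pred=df["y_pred"],
    sensitive_features=df[["gender", "age_group"]],
)

print(mf.by_group)                             # metrics per intersectional group
print(mf.difference(method="between_groups"))  # largest gap across groups
```

The joint grouping matters because a model can appear fair along gender and along age separately while still disadvantaging a specific intersection, such as older women; this is the kind of multi-discrimination the project targets.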

The project actively engages with numerous communities of vulnerable and/or underrepresented groups in AI research right from the start, adopting a co-creation approach to ensure that actual user needs and pain points are at the centre of the research agenda and guide the project’s activities. MAMMOth will make available both standalone open-source methods and an integrated open-source “bias toolkit” that combines the new methods with third-party fairness libraries and components.

The MAMMOth tools are designed for the following three sectors of interest:

1. Algorithm-based decision-making in finance: The goal is to identify attributes contributing to AI bias in credit scoring and debt repayment, and to develop and test an algorithmic decision-making system that reduces bias in financial services.

2. Decision-making in face verification systems: The goal is to address inequalities in minorities’ access to online services that rely on remote face verification, e.g. in the context of digital identity authentication/Know Your Customer (KYC) procedures; a sketch of the relevant disparity measure follows this list.

3. Bias in academic collaborations and citations: The goal is to investigate how intersectional biases in search engines like Google Scholar affect the visibility of scholars, and to measure the impact of these biases on the academic network.
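For the face verification sector, a common way to quantify unequal access is the false non-match rate (FNMR): the rate at which genuine users are wrongly rejected, which directly gates access to KYC-style services. The sketch below is a hedged illustration with toy data; the group labels, similarity scores, and threshold are assumptions, not project results.

```python
import pandas as pd

# Toy genuine-pair comparisons: similarity scores for users who ARE
# who they claim to be. Groups and threshold are illustrative only.
genuine = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B", "A"],
    "score": [0.91, 0.88, 0.62, 0.55, 0.71, 0.49, 0.83, 0.95],
})
THRESHOLD = 0.7  # scores below this are rejected as non-matches

# False non-match rate per group: share of genuine attempts rejected.
fnmr = (genuine["score"] < THRESHOLD).groupby(genuine["group"]).mean()
print(fnmr)
```

A disparity here (e.g. an FNMR twice as high for one group) means members of that group are locked out of the service more often, which is the inequality this use case aims to detect and reduce.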
