BIAS

The BIAS project aims to empower the Artificial Intelligence (AI) and Human Resources Management (HRM) communities by addressing and mitigating algorithmic biases.


BIAS Consortium

BIAS is a four-year project investigating and mitigating biases in artificial intelligence (AI) systems used in the labour market. It aims to develop reliable tools for identifying and mitigating biases in AI and natural language processing (NLP) systems, to engage stakeholders through surveys, interviews, and co-creation workshops, and to raise awareness of the importance of tackling bias in AI. The project also emphasizes capacity building, equipping the AI and HRM communities with the tools and knowledge needed to prevent bias in AI. In addition, ethnographic fieldwork gathers insights from employers, employees, and AI developers across several European countries, helping the project understand current experiences of, and future scenarios for, AI use in employment settings.

One of the key outcomes of the project is the creation of the Debiaser, a proof-of-concept technology that identifies and mitigates bias and unfairness in decision making. The Debiaser will be made available to the AI community, enabling companies to reduce biases in their human resources practices.
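The project materials do not detail how the Debiaser works internally. As a purely illustrative sketch of what "identifying bias in decision making" can mean in practice, the Python snippet below computes demographic parity difference, a standard fairness metric, on hypothetical screening decisions; the function name and data are assumptions for the example and are not part of the Debiaser itself.

```python
# Illustrative sketch only: this is not the Debiaser's actual method, just a
# common fairness check such a tool might perform on hiring decisions.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Difference in positive-outcome rates between two groups (coded 0/1)."""
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_0 = y_pred[group == 0].mean()  # selection rate for group 0
    rate_1 = y_pred[group == 1].mean()  # selection rate for group 1
    return rate_1 - rate_0

# Hypothetical screening decisions (1 = invited to interview) and a binary
# protected attribute for each candidate.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
protected = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]

gap = demographic_parity_difference(decisions, protected)
print(f"Demographic parity difference: {gap:+.2f}")
# A value far from 0 signals that one group is selected noticeably more often;
# a debiasing step would then aim to reduce that gap.
```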

Overall, the BIAS project follows an interdisciplinary approach, combining research and impact methodologies to address biases in the labour market. By reducing algorithmic bias, promoting awareness, and empowering stakeholders, the project aims to foster more equitable and fair practices in AI and contribute to advancements in the field of worker studies.

