Understanding and mitigating bias in AI automated systems
The AI community has been focusing on developing fixes for harmful bias and discrimination through so-called 'debiasing algorithms', which either try to correct data for known or expected biases, or constrain the outcomes of a given predictive model to produce 'fair' outcomes. We argue that creating more AI solutions to fix harmful biases in data is not the only path we should be pursuing. A fundamental question we face as researchers and practitioners is not how to fix harmful bias in AI with new algorithms, but rather whether we should be designing and deploying such potentially biased systems in the first place.
A 'Humane Conversation' featuring Sennay Ghebreab, AI scientist at the University of Amsterdam, and Hinda Haned, professor by special appointment at the University of Amsterdam. The conversation is hosted by the Research Priority Area (RPA) on Human(e) AI at the University of Amsterdam.