[TMP-029] Contesting Black-Box Decisions
This project aims to establish foundations for integrating contestability into decision-making systems based on socio-ethical policies.
The right to contest decisions impacting individuals or society is a fundamental democratic principle. In the EU, the General Data Protection Regulation mandates mechanisms for contesting algorithmic decisions. Contesting a decision involves not just explaining it but also assessing its alignment with externally defined policies. Despite its importance, little work has been done to develop effective contestability mechanisms.
This microproject aims to establish foundations for integrating contestability into decision-making systems based on socio-ethical policies, such as the Guidelines for Trustworthy AI. It will contribute to the broader research on ethical and legal aspects of algorithmic decisions discussed in WP5 of the HumanE-AI-Net project. The project has three objectives: 1) extend a formal language for socio-ethical values, expressed as norms and requirements; 2) design a feedback architecture that monitors AI predictions and assesses them against policies; and 3) develop a logic for evaluating black-box predictions against formal socio-technical requirements.
The result will be an agent architecture with four components: a predictor (e.g., neural network), a decision-maker, a utility component influencing decisions, and a governor component to accept or reject recommendations. The architecture will support compliance checking and allow extensions, such as retraining feedback from the governor to the predictor.
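The four-component architecture can be illustrated with a minimal sketch. All names, the toy scoring rule, and the example policy below are illustrative assumptions, not the project's actual design; they only show how a governor component can accept or reject a decision-maker's recommendation against an external policy.

```python
# Minimal sketch of the four-component agent architecture:
# predictor -> decision-maker (with utility component) -> governor.
# The model, utility, and policy here are toy placeholders (assumptions).
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Recommendation:
    action: str
    confidence: float


def predictor(features: Dict[str, float]) -> float:
    """Stand-in for a black-box model (e.g. a neural network)."""
    return 0.8 if features.get("income", 0) > 30000 else 0.3


def decision_maker(score: float, utility: Callable[[str], float]) -> Recommendation:
    """Combine the prediction with a utility component to pick an action."""
    if score * utility("approve") > (1 - score) * utility("reject"):
        return Recommendation("approve", score)
    return Recommendation("reject", score)


def governor(rec: Recommendation) -> bool:
    """Check the recommendation against a socio-ethical policy.
    Toy policy: low-confidence automated rejections are not allowed
    and must be escalated, keeping the decision contestable."""
    if rec.action == "reject" and rec.confidence < 0.5:
        return False  # policy violation: escalate rather than auto-reject
    return True


def decide(features: Dict[str, float]) -> str:
    score = predictor(features)
    rec = decision_maker(score, lambda action: 1.0)  # uniform toy utility
    return rec.action if governor(rec) else "escalate"
```

With this sketch, `decide({"income": 50000})` yields `"approve"`, while a low-income applicant triggers a low-confidence rejection that the governor blocks and escalates. A retraining extension, as mentioned above, would feed such governor rejections back to the predictor.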
We have developed a framework for facilitating appeals against the opaque operations of AI models, drawing on foundational work in contestable AI and adhering to regulatory mandates such as the General Data Protection Regulation (GDPR), which grants individuals the right to contest solely automated decisions. The aim is to extend the discourse on socio-ethical values in AI by conceptualizing a feedback architecture that monitors AI decisions and evaluates them against formal socio-technical requirements. Our results include a proposal for an appeal process and an argumentation model that supports reasoning with justifications and explanations, thereby enhancing the contestability of AI systems. Our work not only advances the theoretical foundations of contestable AI but also proposes practical steps towards implementing systems that respect individuals' rights to challenge and understand AI decisions. The project has produced a draft paper intended for submission to the AAMAS blue-sky track.
Partners:
- Umeå University (UmU), Andreas Theodorou, andreas.theodorou@umu.se
- University of Bergen (UiB), Marija Slavkovik, marija.slavkovik@uib.no
- Open University of Cyprus (OUC), Loizos Michael, loizos@ouc.ac.cy