[TMP-121] Developing a trustworthy AI model for situational awareness using mixed reality in police interventions
AI systems that assist Citizen Security and Safety Units
We focus on the ethical and societal aspects of the ELS theme of HumanE-AI-Net, aiming to design AI systems that assist Citizen Security and Safety Units. These units, which field the largest number of officers, handle diverse situations such as helping disoriented individuals, managing traffic, and responding to gang fights or shootings. Unlike specialized units, they lack tailored training and tools for handling specific scenarios. Enhancing situational awareness, that is, maintaining a clear understanding of the available information and tactical conditions, is therefore essential.
Our goal is to develop AI tools that improve officers' efficiency, safety, and protection of rights while fostering public trust. Transparency and trustworthiness are critical when deploying AI for public safety. We used mixed reality (e.g., HoloLens) to test these solutions with police officers, assessing the implementation of Trustworthy AI during a common intervention: vehicle stops.
Vehicle stops, ranging from low-risk traffic checks to high-risk suspect pursuits, exemplify scenarios where AI can enhance safety, for example by using drones to track armed suspects. This scenario helped us evaluate AI's impact on officers and the public while addressing societal and legal challenges, including safeguarding privacy, avoiding discrimination, ensuring public safety, and navigating legal issues such as data protection and UAV regulations.
The project was a multidisciplinary collaboration bridging police academies, police authorities, and computing science in Sweden and Catalonia (Spain). Its main result was a Trustworthy AI Model and its integration with mixed reality interfaces, which together aim to empower officers with enhanced situational awareness and decision-making support. By leveraging context information, user insights, and ethical considerations, the collaboration seeks to ensure that AI-empowered police interventions are not only effective but also ethical, legal, and socially responsible.
The Trustworthy AI Model is the result of two user studies, one with the Mossos d’Esquadra, the police authority in Barcelona, and one with the Police Education Unit at Umeå University, Sweden. The studies included 39 senior police officers in total, 20 from Barcelona and 19 from Umeå. The formal publication of the Trustworthy AI Model is in progress.
Tangible Outcomes
- David Martín-Moncunill, Eduardo García Laredo, Juan Carlos Nieves: “POTDAI: A Tool to Evaluate the Perceived Operational Trust Degree in Artificial Intelligence Systems”. IEEE Access 12: 133097-133109 (2024). https://ieeexplore.ieee.org/document/10663721
- [under review] Andreas Brännström, Eduardo Garcia Laredo, Bernat Vivolas Jorda, Lola Valles, Jonas Hansson, Emili Martinez Cañaveras, Anders Schogster, David Martin-Moncunill, Juan Carlos Nieves: “Trustworthy AI and Mixed Reality in Police Interventions: Challenges and Opportunities”.
- Two seminars on Trustworthy AI were given:
  - one with the Mossos d’Esquadra, the police authority in Barcelona
  - one with the Police Education Unit at Umeå University, Sweden
- Video demonstrating the project (SVT news feature, in Swedish): https://www.svt.se/nyheter/lokalt/vasterbotten/ai-for-poliser-utvecklas-i-umea
Partners
- Umeå University – Computing Science Department, Juan Carlos Nieves, jcnieves@cs.umu.se
- Umeå University – Police Education Unit, Jonas Hansson, jonas.hansson@umu.se
- Comet Global Innovation – COMET, Rocio Salguero Martinez, r.salguero@comet.technology
- Institut de Seguretat Pública de Catalunya – ISPC, Lola Valles Port, lvalles@gencat.cat