[TMP-021] Assistive AI: A Verifiable and Accountable Social Community
The workshop designs an Assistive AI roadmap aligned with ongoing regulation, and initiates micro-projects to address AI's societal impacts.
The workshop aims to: (a) design a roadmap for Assistive AI that aligns with ongoing AI regulations and (b) initiate micro-projects based on the workshop’s outcomes.
The 2003 NSF call (NSF 03-611), though canceled, recognized the critical role of IT in shaping the nation’s future and noted the difficulty of predicting its social and economic impacts. These challenges are even greater for AI, whose rapid evolution creates uncertainty and the potential for societal disruption. Beyond regulation, there is a pressing need to assist people and to minimize turbulence.
Regulated AI
Efforts to regulate AI, such as the EU's push to align chatbots with fundamental rights, face challenges. Open-source models such as Alpaca make it easy to break those rules and to run misinformation campaigns through peer-to-peer networks, artificial identities, and echo chambers. Regulation is necessary, but delays in putting it into effect may destabilize society.
Assistive AI
"Assistive AI" (AAI) could mitigate AI’s harmful effects if supported by efficient verification methods. Our early approach preserves contributors' anonymity while ensuring accountability, provided community and legal rules are followed. AAI could complement regulations by addressing societal needs more directly.
Regulatory frameworks for the use of AI are emerging. However, they trail behind the fast-evolving malicious AI technologies that can quickly cause lasting societal damage. In response, we introduce a pioneering Assistive AI framework designed to enhance human decision-making capabilities. The framework aims to establish a trust network across various fields, especially within legal contexts, serving as a proactive complement to ongoing regulatory efforts.

Central to the framework are the principles of privacy, accountability, and credibility. In our methodology, the reliability of information and of information sources rests on the ability to uphold accountability, enhance security, and protect privacy. This approach supports, filters, and potentially guides communication, empowering individuals and communities to make well-informed decisions based on cutting-edge advancements in AI. The framework uses the concept of Boards as proxies that collectively ensure AI-assisted decisions are reliable, accountable, and aligned with societal values and legal standards.

Through a detailed exploration of the framework, including its main components, operations, and sample use cases, we show how AI can assist in the complex process of decision-making while maintaining human oversight. The proposed framework not only extends the regulatory landscape but also highlights the synergy between AI technology and human judgement, underscoring the potential of AI to serve as a vital instrument in discerning reality from fiction and thus in enhancing decision-making. Furthermore, we provide domain-specific use cases to highlight the framework's applicability.
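As a rough illustration of the Board concept, the sketch below models a Board as a proxy that endorses a claim only when a qualified majority of its members agrees. The `Board` class, the two-thirds threshold, and the member roles are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of a Board acting as a proxy: members independently assess
# a claim, and the Board endorses it only on a qualified majority.
from dataclasses import dataclass, field


@dataclass
class Board:
    members: set[str]
    threshold: float = 2 / 3  # qualified majority; assumed, not specified
    votes: dict[str, bool] = field(default_factory=dict)

    def cast_vote(self, member: str, endorse: bool) -> None:
        if member not in self.members:
            raise ValueError(f"{member} is not a Board member")
        self.votes[member] = endorse

    def decision(self) -> str:
        """Endorse, reject, or stay pending until all members have voted."""
        if len(self.votes) < len(self.members):
            return "pending"
        approvals = sum(self.votes.values())
        if approvals / len(self.members) >= self.threshold:
            return "endorsed"
        return "rejected"


# Usage: three members with hypothetical roles vote on a claim.
board = Board(members={"legal-expert", "domain-expert", "ai-auditor"})
board.cast_vote("legal-expert", True)
board.cast_vote("domain-expert", True)
board.cast_vote("ai-auditor", False)
print(board.decision())  # "endorsed": 2 of 3 meets the assumed threshold
```

In practice, the threshold, membership rules, and vote visibility would be set by the community and by the applicable legal standards rather than hard-coded as here.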
Tangible Outcomes
- [arxiv] “Assistive AI for augmenting human decision-making” Natabara Máté Gyöngyössy, Bernát Török, Csilla Farkas, Laura Lucaj, Attila Menyhárd, Krisztina Menyhárd-Balázs, András Simonyi, Patrick van der Smagt, Zsolt Ződi, András Lőrincz https://arxiv.org/abs/2410.14353
Partners:
- ELTE, András Lőrincz
- Siemens, Sonja Zillner, sonja.zillner@siemens.com
- Volkswagen, Patrick van der Smagt, smagt@argmax.ai