
[TMP-004] A Human-AI Collaboration Study Using a Collaborative Game
This project studies the effects of AI initiative levels on human-AI collaboration using the Geometry Friends game.

This project uses the game environment Geometry Friends to evaluate the interaction between humans and artificial agents. To investigate the effects of AI initiative levels on human perception, we designed three agent behaviours within a collaborative game:
- a leader agent that acts according to its own plan, expecting the human player to follow it
- a follower agent that adapts its actions to the human player's plan
- a shifting agent that changes its initiative depending on whether the human player follows its plan (sketched in the code below).
For all conditions, the agent plays as the circle character of the game, while the human plays as the rectangle.
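To make the shifting behaviour concrete, here is a minimal sketch of one way such an initiative switch could work. This is an illustration only, not the study's actual implementation (see the linked repository for that): the names (ShiftingAgent, alignment_threshold, window) are hypothetical, and the alignment signal is assumed to be a simple measure of whether the human's recent actions matched the agent's plan.

```python
from enum import Enum


class Mode(Enum):
    LEADER = "leader"      # act on the agent's own plan
    FOLLOWER = "follower"  # adapt to the human player's plan


class ShiftingAgent:
    """Hypothetical initiative-switching policy: lead while the human
    follows the agent's plan, otherwise yield the initiative."""

    def __init__(self, alignment_threshold: float = 0.5, window: int = 30):
        self.mode = Mode.LEADER
        self.alignment_threshold = alignment_threshold  # fraction of aligned steps needed to keep leading
        self.window = window                            # how many recent steps to consider
        self.recent_alignment: list[int] = []           # 1 if the human's action matched the plan, else 0

    def observe(self, human_followed_plan: bool) -> None:
        """Record whether the human's latest action matched the agent's plan."""
        self.recent_alignment.append(1 if human_followed_plan else 0)
        if len(self.recent_alignment) > self.window:
            self.recent_alignment.pop(0)

    def act(self, own_plan_action: str, follow_human_action: str) -> str:
        """Choose the next action according to the current initiative mode."""
        if self.recent_alignment:
            alignment = sum(self.recent_alignment) / len(self.recent_alignment)
            self.mode = Mode.LEADER if alignment >= self.alignment_threshold else Mode.FOLLOWER
        return own_plan_action if self.mode is Mode.LEADER else follow_human_action
```

In the actual study, the decision to shift would presumably draw on richer game-state signals; this sketch only captures the described contingency of leading while the human follows the agent's plan.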
Given the agent's level of initiative, including its willingness to shift initiative, we aimed to answer four research questions:
- RQ1: How does an agent’s initiative influence the perception of AI partners (agent focus, e.g., perceived agent warmth and competence, social identification)?
- RQ2: How does an agent’s initiative impact the perceived quality of collaboration (interaction focus, e.g., satisfaction with team and agent performance, objective performance)?
- RQ3: How does an agent’s initiative impact humans’ self-perception (user focus, e.g., satisfaction with self-performance, perception of the played role)?
- RQ4: How does an agent’s initiative affect the overall team perception (team focus, e.g., agent preference)?
To address these questions, we conducted a mixed-methods study using the collaborative game Geometry Friends, involving 60 participants across three countries. We present our rationale for designing AI agents with varying levels of initiative, together with an empirical evaluation of interactions with these agents, assessing perceived AI partner warmth and competence, social identification with the team, satisfaction with performance (self, AI partner, and team), and personal preference for an AI partner under the different levels of initiative.
Our main results are listed below:
- RQ1: AI partner perceptions influence trust levels and are mediated by feelings of control
- We found that the perception of AI partners affects trust and is shaped by how much control the agents give the human. An agent that insists on its own plan and takes full control, like the leader agent, appears unfriendly when it has not first established trust. In contrast, an agent that follows the human’s plan, like the follower agent, gains more trust because it cedes control to the human. The shifting agent may switch between leader and follower modes without the human fully understanding why, which can make participants feel they have lost control of the game and lead to negative perceptions.
- RQ2: Implicit communication requires time
- The follower agent's responsiveness to the human’s presence and plan likely contributed to its higher perceived performance, even though its objective performance was lower. However, since there was no explicit human-AI communication mechanism, humans had to learn how to communicate with the agent implicitly, which made the agent seem slower.
- RQ3: Self-perception relates to the sense of responsibility in the task
- Participants felt that they played better than the leader agent. This suggests that, when interacting with an agent in a leadership role, humans may feel less responsible for task performance, reducing their sense of accountability and their trust towards the agent.
- RQ4: Preferences are influenced by context and personal characteristics
- We found that competitive participants who prioritized achievement and fast-paced gaming preferred the leader agent, whereas those valuing collaboration and fun preferred the follower agent. Possibly, if the task had emphasized collaboration and team decision-making, the leader agent would have been chosen less often. Regarding gender, female participants’ preferences might lean towards patient, supportive agents, whereas some male participants preferred proactive, goal-oriented agents.
This project was carried out by Instituto Superior Técnico (IST), Université Paris-Saclay, and Örebro University.
Tangible Outcomes
- Code: full code to run the study https://github.com/1000obo/geometry-friends-study
- Publication: Inês Lobo, Janin Koch, Jennifer Renoux, Inês Batina, and Rui Prada. 2024. When Should I Lead or Follow: Understanding Initiative Levels in Human-AI Collaborative Gameplay. In Proceedings of the 2024 ACM Designing Interactive Systems Conference (DIS '24). Association for Computing Machinery, New York, NY, USA, 2037–2056. https://doi.org/10.1145/3643834.3661583