Abstract
Despite the performance gains that AI can bring to human-AI teams, increasingly autonomous AI also presents new challenges, such as a decline in humans' ability to respond to AI failures. This challenge is particularly dangerous in human-AI teams, where the AI holds a unique role in the team's success. It is therefore imperative that researchers design AI team-mates whose adaptation logic accounts for their human team-mates' needs. This study explores adaptive autonomy as a solution to these challenges. We conducted twelve contextual inquiries with professionals in two teaming contexts to understand how human team-mates' perceptions can be used to determine optimal autonomy levels for AI team-mates. The results of this study will enable the human factors community to develop AI team-mates that enhance team performance while avoiding the potentially devastating impacts of AI failures.
Practitioner summary
As AI becomes more autonomous, humans' ability to detect and respond to its failures decreases, because they are pushed further out of the AI's decision-making loop. This contextual inquiry study shows how human factors are affected by, and should influence, the design of adaptive AI team-mates in different teaming contexts.
Disclosure statement
No potential conflict of interest was reported by the author(s).