Authors: Alizadeh, Fatemeh; Stevens, Gunnar; Vereschak, Oleksandra; Bailly, Gilles; Caramiaux, Baptiste; Pins, Dominik
Date issued: 2022
Date available: 2022-06-22
URI: https://dl.eusset.eu/handle/20.500.12015/4407
Abstract: AI (artificial intelligence) systems are increasingly being used in all aspects of our lives, from mundane routines to sensitive decision-making and even creative tasks. Therefore, an appropriate level of trust is required so that users know when to rely on the system and when to override it. While research has looked extensively at fostering trust in human-AI interactions, the lack of standardized procedures for studying human-AI trust makes it difficult to interpret results and compare across studies. As a result, the fundamental understanding of trust between humans and AI remains fragmented. This workshop invites researchers to revisit existing approaches and work toward a standardized framework for studying AI trust, addressing three open questions: (1) What does trust mean between humans and AI in different contexts? (2) How can we create and convey a calibrated level of trust in interactions with AI? (3) How can we develop a standardized framework to address new challenges?
Language: en
Title: Building Appropriate Trust in Human-AI Interactions
Type: Text/Conference Paper
DOI: 10.48340/ecscw2022_ws04
ISSN: 2510-2591