Browsing by Author "Pins, Dominik"
- Conference Paper: "Appropriation and Practices of Working with Voice Assistants in the Kitchen" (Proceedings of the 17th European Conference on Computer-Supported Cooperative Work - Doctoral Colloquium, 2019). Pins, Dominik.
  For our research, we focus on the kitchen as an important space at home that is not only used for cooking but also plays a strong social role in the household (Johannes-Hornschuh, 2010). Many housekeeping tasks that can be supported by VAs take place in the kitchen, such as managing a shopping list or the (family) calendar, or researching nutrition and food. These interactions are interesting to study in terms of their social and collaborative components. The kitchen offers many relevant tasks which are often rather complex and might require mixed-media approaches for successful support that may well exceed the capabilities of VA technology in its current form (Moore, 2017). A further aim of our work is to better understand where there are areas for innovation and what we can learn from current interaction practices that work around these limitations.
- Conference Paper: "Building Appropriate Trust in Human-AI Interactions" (Proceedings of the 20th European Conference on Computer-Supported Cooperative Work, 2022). Alizadeh, Fatemeh; Stevens, Gunnar; Vereschak, Oleksandra; Bailly, Gilles; Caramiaux, Baptiste; Pins, Dominik.
  AI (artificial intelligence) systems are increasingly being used in all aspects of our lives, from mundane routines to sensitive decision-making and even creative tasks. Therefore, an appropriate level of trust is required so that users know when to rely on the system and when to override it. While research has looked extensively at fostering trust in human-AI interactions, the lack of standardized procedures for human-AI trust makes it difficult to interpret results and compare across studies. As a result, the fundamental understanding of trust between humans and AI remains fragmented. This workshop invites researchers to revisit existing approaches and work toward a standardized framework for studying AI trust to answer the open questions: (1) What does trust mean between humans and AI in different contexts? (2) How can we create and convey the calibrated level of trust in interactions with AI? And (3) How can we develop a standardized framework to address new challenges?