Suspicious Minds: the Problem of Trust and Conversational Agents

dc.contributor.authorIvarsson, Jonas
dc.contributor.authorLindwall, Oskar
dc.date2023-09-01
dc.date.accessioned2023-09-21T04:50:40Z
dc.date.available2023-09-21T04:50:40Z
dc.date.issued2023
dc.description.abstractIn recent years, the field of natural language processing has seen substantial developments, resulting in powerful voice-based interactive services. The quality of the voice and interactivity is sometimes so good that the artificial can no longer be differentiated from real persons. Thus, discerning whether an interactional partner is a human or an artificial agent is no longer merely a theoretical question but a practical problem society faces. Consequently, the ‘Turing test’ has moved from the laboratory into the wild. The passage from the theoretical to the practical domain also accentuates understanding as a topic of continued inquiry. When interactions are successful but the artificial agent has not been identified as such, can it also be said that the interlocutors have understood each other? In what ways does understanding figure in real-world human–computer interactions? Based on empirical observations, this study shows that we need two parallel conceptions of understanding to address these questions. Drawing on ethnomethodology and conversation analysis, we illustrate how parties in a conversation regularly deploy two forms of analysis (categorial and sequential) to understand their interactional partners. The interplay between these forms of analysis shapes the developing sense of interactional exchanges and is crucial for established relations. Furthermore, outside of experimental settings, any problems in identifying and categorizing an interactional partner raise concerns regarding trust and suspicion. When suspicion is roused, shared understanding is disrupted. Therefore, this study concludes that the proliferation of conversational systems, fueled by artificial intelligence, may have unintended consequences, including impacts on human–human interactions.
dc.identifier.doi10.1007/s10606-023-09465-8
dc.identifier.issn1573-7551
dc.identifier.urihttp://dx.doi.org/10.1007/s10606-023-09465-8
dc.identifier.urihttps://dl.eusset.eu/handle/20.500.12015/5074
dc.publisherSpringer
dc.relation.ispartofComputer Supported Cooperative Work (CSCW): Vol. 32, No. 3
dc.relation.ispartofseriesComputer Supported Cooperative Work (CSCW)
dc.subjectConversation
dc.subjectHuman–computer interaction
dc.subjectNatural language processing
dc.subjectTrust
dc.subjectUnderstanding
dc.titleSuspicious Minds: the Problem of Trust and Conversational Agents
dc.typeText/Journal Article
gi.citation.startPage545
gi.citation.endPage571
gi.citations.count12
gi.citations.elementDiego Gosmar (2024): Conversational hyperconvergence: an onlife evolution model for conversational AI agency, In: AI and Ethics 2(5), doi:10.1007/s43681-024-00463-0
gi.citations.elementAdam Palmquist, Izabella Jedel, Chris Hart, Victor Manuel Perez Colado, Aedan Soellaart (2024): “Not in Kansas Anymore” Exploring Avatar-Player Dynamics Through a Wizard of Oz Approach in Virtual Reality, In: Lecture Notes in Computer Science, doi:10.1007/978-3-031-61041-7_17
gi.citations.elementClemens Eisenmann, Jakub Mlynář, Jason Turowetz, Anne W. Rawls (2023): “Machine Down”: making sense of human–computer interaction—Garfinkel’s research on ELIZA and LYRIC from 1967 to 1969 and its contemporary relevance, In: AI & SOCIETY 6(39), doi:10.1007/s00146-023-01793-z
gi.citations.elementJin Mao, Baiyun Chen, Juhong Christie Liu (2023): Generative Artificial Intelligence in Education and Its Implications for Assessment, In: TechTrends 1(68), doi:10.1007/s11528-023-00911-4
gi.citations.elementSoo Jung Hong (2025): What drives AI-based risk information-seeking intent? Insufficiency of risk information versus (Un)certainty of AI chatbots, In: Computers in Human Behavior, doi:10.1016/j.chb.2024.108460
gi.citations.elementTom Ziemke (2024): Ironies of social robotics, In: Science Robotics 91(9), doi:10.1126/scirobotics.adq6387
gi.citations.elementMehrbod Manavi, Felix Carros, David Unbehaun, Clemens Eisenmann, Lena Müller, Rainer Wieching, Volker Wulf (2025): From idle to interaction – assessing social dynamics and unanticipated conversations between social robots and residents with mild cognitive impairment in a nursing home, In: i-com 1(24), doi:10.1515/icom-2024-0046
gi.citations.elementJakub Mlynář, Lynn de Rijk, Andreas Liesenfeld, Wyke Stommel, Saul Albert (2024): AI in situated action: a scoping review of ethnomethodological and conversation analytic studies, In: AI & SOCIETY 3(40), doi:10.1007/s00146-024-01919-x
gi.citations.elementMarc Relieu (2024): How Lenny the bot convinces you that he is a person: Storytelling, affiliations, and alignments in multi-unit turns, In: Discourse & Communication 6(18), doi:10.1177/17504813241271437
gi.citations.elementYimin Xiao, Yuewen Chen, Naomi Yamashita, Yuexi Chen, Zhicheng Liu, Ge Gao (2024): (Dis)placed Contributions: Uncovering Hidden Hurdles to Collaborative Writing Involving Non-Native Speakers, Native Speakers, and AI-Powered Editing Tools, In: Proceedings of the ACM on Human-Computer Interaction CSCW2(8), doi:10.1145/3686942
gi.citations.elementHoang Phuoc Ho, Vani Ramesh, Ivo Zaloudek, Delaram Javdani Rikhtehgar, Shenghui Wang (2025): Enhancing Visitor Engagement in Interactive Art Exhibitions with Visual-Enhanced Conversational Agents, In: Proceedings of the 30th International Conference on Intelligent User Interfaces, doi:10.1145/3708359.3712145
gi.citations.elementAdam Palmquist, Izabella Jedel, Ole Goethe (2024): Design Implications and Processes for an Attainable Game Experience, In: Human–Computer Interaction Series, doi:10.1007/978-3-031-30595-5_3
