Conference Paper
Explanation Preferences in XAI Fact-Checkers
Document type
Text/Conference Paper
Date
2022
Publisher
European Society for Socially Embedded Technologies (EUSSET)
Abstract
As misinformation spreads rampantly, fact-checking has become an enormous task that calls for automation. While there has been much progress in identifying misinformation using artificial intelligence (AI), these systems tend to be opaque, doing little of what fact-checking must do to convince users of its verdicts. One proposed remedy is explainable AI (XAI), which reveals the decision-making processes of the AI. As research on XAI fact-checkers accumulates, investigating user attitudes towards the use of AI in fact-checking and towards different styles of explanations will contribute to an understanding of explanation preferences in XAI fact-checkers. We present the preliminary results of a perception study with 22 participants, finding a clear preference for explanations that mimic organic fact-checking practices and for explanations that use text or contain more details. These early findings may guide the design of XAI to enhance the performance of the human-AI system.