Conference Paper

Explanation Preferences in XAI Fact-Checkers

Document type

Text/Conference Paper

Additional Information

Date

2022

Publisher

European Society for Socially Embedded Technologies (EUSSET)

Abstract

As misinformation grows rampant, fact-checking has become an overwhelming task that calls for automation. While artificial intelligence (AI) has made considerable advances in identifying misinformation, such systems tend to be opaque, doing little of what fact-checking does to convince users of its verdicts. One proposed remedy is the use of explainable AI (XAI) to reveal the decision-making processes of the AI. As research on XAI fact-checkers accumulates, investigating user attitudes towards the use of AI in fact-checking and towards different styles of explanations will contribute to an understanding of explanation preferences in XAI fact-checkers. We present the preliminary results of a perception study with 22 participants, finding a clear preference for explanations that mimic organic fact-checking practices and for explanations that use text or contain more details. These early findings may guide the design of XAI to enhance the performance of the human-AI system.

Description

Lim, Gionnieve; Perrault, Simon T. (2022): Explanation Preferences in XAI Fact-Checkers. Proceedings of the 20th European Conference on Computer-Supported Cooperative Work. DOI: 10.48340/ecscw2022_p02. European Society for Socially Embedded Technologies (EUSSET). PISSN: 2510-2591. Poster. Coimbra, Portugal. 27 June - 1 July 2022
