Lim, Gionnieve; Perrault, Simon T.
Date: 2022-06-22
URI: https://dl.eusset.eu/handle/20.500.12015/4387

Abstract: As misinformation grows rampant, fact-checking has become an inordinate task that calls for automation. While there has been much advancement in the identification of misinformation using artificial intelligence (AI), these systems tend to be opaque, fulfilling little of what fact-checking does to convince users of its evaluations. One proposed remedy is the use of explainable AI (XAI) to reveal the decision-making processes of the AI. As research on XAI fact-checkers accumulates, investigating user attitudes towards the use of AI in fact-checking and towards different styles of explanations will contribute to an understanding of explanation preferences in XAI fact-checkers. We present the preliminary results of a perception study with 22 participants, finding a clear preference for explanations that mimic organic fact-checking practices and for explanations that use text or contain more detail. These early findings may guide the design of XAI to enhance the performance of the human-AI system.

Language: en
Title: Explanation Preferences in XAI Fact-Checkers
Type: Text/Conference Paper
DOI: 10.48340/ecscw2022_p02
ISSN: 2510-2591