The Impact of AI Explainability on Cognitive Dissonance and Trust in Human-AI Recruitment Teams
Abstract
This study examined how varying levels of AI explainability affect trust and cognitive dissonance in human-AI teams performing collaborative recruitment tasks. The results indicate a complex interplay: no significant differences emerged between the high-explainability and no-explainability conditions, yet low explainability produced a statistically significant increase in cognitive dissonance. This pattern suggests that partial explanations may exacerbate rather than mitigate uncertainty, possibly through confirmation bias, whereby users selectively interpret incomplete information to fit pre-existing beliefs. In such cases, an explanation provides enough detail to provoke skepticism but not enough justification to resolve it, leaving users torn between their own intuition and the AI's recommendation. These results highlight the need to prioritize explanation quality and completeness over sheer volume of detail when designing XAI systems for human-AI collaboration.
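The comparison described above is, in essence, an omnibus test of dissonance scores across three between-subjects explainability conditions followed by pairwise contrasts. The sketch below illustrates that general shape of analysis in Python on simulated data; the group sizes, score distributions, and the ANOVA-plus-Bonferroni procedure are assumptions for illustration only, not the study's actual materials or analysis code.

```python
# Illustrative sketch only: simulated data, not the study's dataset or analysis pipeline.
# Assumes three between-subjects conditions (no, low, high explainability) and a
# cognitive-dissonance score per participant (e.g., a Likert-scale composite).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical dissonance scores (higher = more dissonance), 30 participants per condition.
no_expl   = rng.normal(loc=3.0, scale=0.8, size=30)
low_expl  = rng.normal(loc=3.6, scale=0.8, size=30)   # assumed elevated, mirroring the reported pattern
high_expl = rng.normal(loc=3.1, scale=0.8, size=30)

# Omnibus one-way ANOVA across the three explainability conditions.
f_stat, p_omnibus = stats.f_oneway(no_expl, low_expl, high_expl)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_omnibus:.4f}")

# Pairwise follow-up comparisons with a simple Bonferroni correction (3 comparisons).
pairs = {
    "no vs low":   (no_expl, low_expl),
    "no vs high":  (no_expl, high_expl),
    "low vs high": (low_expl, high_expl),
}
for label, (a, b) in pairs.items():
    t, p = stats.ttest_ind(a, b)
    print(f"{label}: t = {t:.2f}, p (Bonferroni-adjusted) = {min(p * 3, 1.0):.4f}")
```

With simulated means like those above, the low-explainability group would typically differ from the other two while the no- and high-explainability groups would not differ from each other, which is the pattern of results the abstract reports.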
Copyright (c) 2025 Tetiana Sydorenko

This work is licensed under a Creative Commons Attribution 4.0 International License.