The Influence of Explainable Artificial Intelligence on Human Decision-Making: A Review of Empirical Evidence
Abstract
The rapid adoption of artificial intelligence (AI) technologies has transformed human decision-making across many domains. However, the lack of transparency in many AI models has raised concerns about their trustworthiness. Explainable Artificial Intelligence (XAI) has emerged as a vital approach to addressing these concerns by clarifying how AI systems reach their decisions. This paper reviews empirical research examining XAI’s influence on human decision-making.

Problem Statement: The opacity of traditional AI models impedes human understanding, reducing trust and acceptance, especially in high-stakes domains that affect human welfare. This underscores the need to evaluate how XAI can improve human decision-making and foster greater confidence in AI systems.

Objective: This paper systematically reviews empirical studies that assess XAI’s impact on human decision-making across different sectors, examining study methodologies, key findings, and implications for research and practice.

Methodology: A literature search was conducted in major academic databases, including PubMed, IEEE Xplore, and Google Scholar, using keywords such as “explainable artificial intelligence,” “XAI,” “human decision-making,” and “empirical study.” Studies published between 2010 and 2024 were included, restricted to empirical research examining XAI’s effects on decision-making in experimental or real-world settings.

Results: The review identified 50 empirical studies meeting the inclusion criteria, spanning healthcare, finance, criminal justice, and autonomous systems. Methodologies ranged from controlled experiments to field studies and user assessments. The findings indicate that XAI techniques positively affect human decision-making, enhancing trust, accuracy, and efficiency. XAI also supports effective human-AI collaboration, resulting in better-informed decisions.

Conclusion: Explainable AI plays a key role in improving decision-making by providing greater transparency, interpretability, and trust in AI. The empirical evidence confirms that XAI techniques improve decision outcomes across domains, though further research is needed to understand XAI’s long-term impact on societal trust in AI.