As AI technologies, particularly deep learning models, have advanced, their inherent “black box” nature has raised significant concerns regarding accountability, fairness, and trust, especially in critical domains such as healthcare, finance, and criminal justice. We present a detailed exploration of explainable AI (XAI), emphasizing its essential role in improving the interpretability and transparency of complex AI systems across various application domains. Health-related applications made notable use of XAI, with an emphasis on diagnostics and medical imaging. Other prominent domains of XAI use encompass environmental and agricultural management, industrial optimization, cybersecurity, finance, transportation, and social media. Furthermore, nascent applications in law, education, and social care underscore the growing influence of XAI. The analysis indicates a prevalent application of local explanation techniques, especially SHAP and LIME, with a preference for SHAP owing to its stability and mathematical guarantees. Each technique is analysed for its strengths and limitations in providing clear, actionable insights into model decision-making, thereby helping stakeholders understand AI behaviour. Ultimately, this review underscores the critical challenges XAI faces in fostering user trust, enhancing decision-making processes, and ensuring that AI technologies are utilized responsibly and ethically across applications, paving the way for a more transparent and accountable AI landscape. We believe our systematic review contributes to the body of literature on XAI by serving as a guide for future studies in the area.
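To make the two local explanation techniques named above concrete, the following is a minimal, illustrative sketch (not drawn from the reviewed studies) that applies the open-source `shap` and `lime` packages to a scikit-learn classifier; the dataset, model, and parameter choices are assumptions made purely for demonstration.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

import shap  # additive feature attributions with game-theoretic guarantees
from lime.lime_tabular import LimeTabularExplainer  # local surrogate models

# Illustrative data and model (assumptions, not from the reviewed papers)
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# SHAP: attribute one prediction to individual features
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(X_test[:1])

# LIME: fit a simple local surrogate around the same instance
lime_explainer = LimeTabularExplainer(
    X_train,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification")
lime_exp = lime_explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5)

print("SHAP attribution array shape:", np.shape(shap_values))
print("LIME top features:", lime_exp.as_list())
```

Both calls produce a per-instance (local) explanation; SHAP's attributions sum to the difference between the prediction and a baseline, which is the kind of mathematical guarantee the review cites as a reason for its popularity.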