Vol. 8, Special Issue 5 (2019)

Explainability and trust in AI: Bridging the gap between users and complex models

Author(s):
Dr. Yogesh Bhomia, Mitramani Singh and Arvind Kumar
Abstract:
As artificial intelligence (AI) systems continue to permeate daily life, understanding and fostering trust in these complex models has become paramount. This review examines the critical intersection of explainability and trust in AI, aiming to bridge the gap between users and intricate machine learning models. The expanding range of AI applications, from predictive analytics to autonomous decision-making systems, necessitates a nuanced examination of the factors that shape user comprehension and trust.
The paper begins by elucidating the significance of explainability, delineating how transparent, interpretable models serve as the foundation for user trust. It investigates the challenges posed by increasingly intricate AI architectures, emphasizing the pitfalls of "black box" models that obscure decision-making processes from users. Concepts such as interpretability, transparency, and intelligibility are central to dissecting the technical intricacies that define the explainability landscape.
Furthermore, the review explores various methodologies employed to enhance model interpretability, encompassing techniques such as feature importance analysis, attention mechanisms, and model-agnostic interpretability tools. As trust is a multifaceted construct, the paper scrutinizes psychological and sociological aspects that influence user perceptions of AI systems. The integration of human-centric design principles and ethical considerations emerges as a crucial theme in establishing and maintaining user trust.
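As a concrete illustration of the model-agnostic interpretability tools mentioned above, the following minimal sketch computes permutation feature importance for an opaque classifier. The dataset, model choice, and library (scikit-learn on a standard tabular dataset) are illustrative assumptions for this sketch, not methods drawn from the paper itself.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Fit an opaque ("black box") model on a standard tabular dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and measure the drop in score;
# larger drops indicate features the model relies on more heavily.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f}")

Because the technique only requires repeated predictions on perturbed inputs, it applies to any fitted model regardless of its internal structure, which is what makes it model-agnostic.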
Highlighting real-world applications and case studies, the review elucidates how explainability contributes to the acceptance and adoption of AI technologies in diverse domains, including healthcare, finance, and autonomous systems. The synergy between technological advancements and user-centric design principles is emphasized, showcasing how the two facets can collectively enhance the overall explainability and trustworthiness of AI models.
Pages: 10-13
How to cite this article:
Dr. Yogesh Bhomia, Mitramani Singh and Arvind Kumar. Explainability and trust in AI: Bridging the gap between users and complex models. The Pharma Innovation Journal. 2019; 8(5S): 10-13. DOI: 10.22271/tpi.2019.v8.i5Sa.25262
