The Role of Automatic Versus User-Invoked Explanations in Enhancing User Trust and Adoption of Large Language Models: A Cognitive Effort Perspective
Sirbu, Ana-Maria (2024-12-20)
The publication is subject to copyright regulations. The work may be read and printed for personal use. Commercial use is prohibited.
Closed access
The permanent address of the publication is:
https://urn.fi/URN:NBN:fi-fe202501133381
Abstract
Despite the remarkable capabilities of large language models (LLMs), their black-box nature often raises concerns about trustworthiness, particularly when users rely on them for data analysis. While providing insights into an LLM’s internal reasoning process through explanations could be a promising approach to addressing this issue, little is known about how LLM explanations affect users. Explainable AI (XAI) techniques have been mostly technology-driven and, owing to the high complexity of LLMs, cannot readily be applied to them. This thesis therefore addresses the gap by investigating the impact of explanation provision strategies in LLM-based data assistants. Drawing on cognitive load theory and trust theory, a between-subjects online experiment (N = 96) was conducted to examine how different explanation provision strategies (automatic vs. user-invoked) influence users’ cognitive effort, trust, and adoption of LLM-based data assistants. The results reveal no difference in cognitive effort between the two explanation provision strategies, indicating that both automatic and user-invoked explanations are effective low-effort approaches to explanation design. Furthermore, cognitive effort negatively influences users’ trust in and adoption of LLM-based data assistants, while trust positively influences adoption. This thesis contributes to the nascent literature on LLM explainability by offering novel insights into the impact of explanation provision strategies in interactions with LLM-based data assistants.