As Large Language Models (LLMs), such as ChatGPT and Microsoft Copilot, become increasingly integrated into daily life, understanding how users form trust in these AI systems can help promote adoption and sustained use of the technology. While previous research has examined trust in technology from a functional perspective, LLMs challenge this view by combining system-like capabilities with human-like interaction features. This raises the question of how different types of trust, system-like and human-like, shape user evaluations of an LLM. This thesis investigates how human-like trusting beliefs (such as integrity, competence, and familiarity) and system-like trusting beliefs (including reliability, functionality, helpfulness, and privacy/security) influence four key user outcomes: perceived usefulness, perceived enjoyment, trusting intention, and continuance intention. It also examines whether the context in which an LLM is used, personal versus work-related, moderates these relationships. The study uses a dual-trust framework and is supported by theories that emphasize how trust in technology is formed by both system characteristics and the environment in which the technology is used. A survey design was employed with a sample of 150 participants, who were randomly assigned to reflect on either personal or professional use of an LLM. Participants reported their experiences with the LLM in the context in which they most often used it and completed validated measures of trusting beliefs and user outcomes. The results show that system-like trust was the most consistent and significant predictor across all outcomes. Human-like trust showed a more selective influence, contributing significantly to continuance intention. Importantly, context played a significant moderating role, particularly strengthening the effect of trust on continuance intention. This study contributes to theory by validating the dual-trust framework in the context of LLMs and by showing that trust formation can be context-sensitive. Practically, the findings suggest that LLM designers and organizations should tailor their systems and messaging to match user expectations across different settings, with a particular focus on system-like trust. Future research should explore how trust evolves over time and how other factors, such as culture and experience, shape trust in AI assistants.

Serge Rijsdijk
hdl.handle.net/2105/76708
Media & Business
Erasmus School of History, Culture and Communication

Pheebe Niewold. (2025, October 10). System versus Human: The Role of Trusting Beliefs in AI Adoption Across Personal and Work Contexts. Media & Business. Retrieved from http://hdl.handle.net/2105/76708