The widespread adoption of generative artificial intelligence (GenAI) tools such as ChatGPT is transforming professional and personal workflows. While these tools promise increased productivity and innovation, they also introduce risks related to job displacement, overreliance, and emotional discomfort. This thesis investigates a critical yet underexplored issue: the distinction between use and correct use. Specifically, it explores AI literacy by examining its constituent variables and analyzing how AI literacy relates to the correct use of generative AI tools. The central research question guiding this study is: To what extent does AI literacy affect intentions to use, and perceptions of, GenAI?

This thesis argues that technical skills alone are insufficient for meaningful engagement with GenAI. Instead, effective use depends on metacognition (awareness of one's own cognitive processes) and anticipation (the ability to foresee limitations in model outputs). Emotional variables such as trust and xenophobia are examined as potential mediators of workplace acceptance, and the perception that using AI "feels like cheating" is also briefly explored, as GenAI challenges traditional motivational theories.

A quasi-experimental design was employed. Participants completed self-report measures and a performance-based prompting task, using ChatGPT to plan a trip. Prompts were evaluated with a custom rubric grounded in theory and best practices from prompt engineering. Results revealed a misalignment between perceived and actual AI literacy: 25% of participants scored only 1 out of 10 on the prompting task despite reporting high levels of AI literacy. Metacognition and anticipation were found to be naturally embedded within existing AI literacy constructs, contributing to the conceptual refinement of AI literacy.
However, self-reported AI literacy did not significantly predict GenAI acceptance, and mediation analyses showed that neither trust nor xenophobia significantly mediated this relationship. These findings highlight the need to refine how AI literacy is conceptualized and measured, and they caution against overreliance on self-reports when designing training programs and workplace policies.

Joao Fernando Ferreira Goncalves
hdl.handle.net/2105/76486
Media & Business
Erasmus School of History, Culture and Communication

Christodoulides, N. (2025, October 10). AI literacy: A prerequisite for effective use of generative AI [Master's thesis, Erasmus School of History, Culture and Communication]. Media & Business. Retrieved from http://hdl.handle.net/2105/76486