Haruna Abdulkadir
Generative Artificial Intelligence is increasingly used in academic research, but concerns about accuracy, cognitive strain, and ethical risks continue to grow. This study conducted a meta-analysis of 47 empirical studies published between 2023 and 2025 to assess how tools such as ChatGPT, Consensus, and NotebookLM influence research practices. Using Cognitive Load Theory and Sociotechnical Systems Theory as guiding lenses, the study reviewed evidence on research tasks, accuracy measures, hallucination rates, bias patterns, and behavioural responses. A random-effects model with Hedges' g was applied to synthesise findings across diverse contexts. Results show that generative AI improves efficiency in summarisation and topic development, yet it also produces inconsistent accuracy, fabricated citations, and inherited biases, contributing to researcher overreliance and added verification demands. The study highlights the need for stronger institutional policies, transparency requirements, researcher training, and discipline-specific standards to ensure responsible and ethical use of generative AI in academic environments.
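The meta-analytic approach mentioned above can be illustrated with a minimal sketch. The code below computes Hedges' g (a standardised mean difference with a small-sample bias correction) and pools effects with the DerSimonian-Laird random-effects estimator, one common way to fit such a model. This is an illustration of the general technique, not the study's actual analysis; the input values are hypothetical.

```python
import math

def hedges_g(m1, m2, sd1, sd2, n1, n2):
    """Standardised mean difference with Hedges' small-sample correction.

    Returns (g, var_g): the corrected effect size and its sampling variance.
    """
    df = n1 + n2 - 2
    s_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df)
    d = (m1 - m2) / s_pooled            # Cohen's d
    j = 1 - 3 / (4 * df - 1)            # small-sample correction factor
    g = j * d
    var_g = (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))
    return g, var_g

def random_effects_pool(effects):
    """DerSimonian-Laird random-effects pooling of (g, var) pairs.

    Returns (pooled_g, tau2), where tau2 is the between-study variance.
    """
    k = len(effects)
    w = [1 / v for _, v in effects]                      # fixed-effect weights
    g_fixed = sum(wi * g for wi, (g, _) in zip(w, effects)) / sum(w)
    q = sum(wi * (g - g_fixed)**2 for wi, (g, _) in zip(w, effects))
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                   # method-of-moments estimate
    w_star = [1 / (v + tau2) for _, v in effects]        # random-effects weights
    pooled = sum(wi * g for wi, (g, _) in zip(w_star, effects)) / sum(w_star)
    return pooled, tau2

# Hypothetical example: three studies comparing AI-assisted vs. manual summarisation.
studies = [hedges_g(m1, m2, sd1, sd2, n1, n2) for m1, m2, sd1, sd2, n1, n2 in [
    (7.2, 6.1, 1.8, 1.9, 30, 30),
    (5.5, 5.0, 2.1, 2.0, 45, 42),
    (8.0, 6.5, 2.4, 2.2, 25, 28),
]]
pooled_g, tau2 = random_effects_pool(studies)
print(f"pooled g = {pooled_g:.3f}, tau^2 = {tau2:.3f}")
```

Because the random-effects weights shrink toward equality as the between-study variance grows, this estimator down-weights very large studies less aggressively than a fixed-effect model would, which is why it suits the heterogeneous contexts the abstract describes.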