The methods of conducting examinations are evolving as institutions increasingly adopt online systems, making Multiple-Choice Questions (MCQs) important due to their efficiency and scalability. However, constructing high-quality MCQs remains a manual, time-consuming process. Existing automated systems, which mainly rely on BERT-based summarization and lexical distractor generation (e.g., WordNet), suffer from limited contextual understanding and poor scalability. To address these challenges, this research proposes a solution using Large Language Models (LLMs), specifically Gemini AI, for automated MCQ generation. The methodology involves LLM-based text summarization to extract key concepts, followed by direct MCQ and distractor generation with enhanced contextual relevance, diversity, and minimal manual intervention. Additionally, real-time feedback and adaptive difficulty adjustment are integrated to support personalized learning experiences. Comparative analysis with recent models such as T5, GPT-3.5, and BERT shows that Gemini AI outperforms them in contextual quality, distractor coherence, and generation efficiency, achieving a 20% improvement in human-rated question quality, highlighting the potential of LLMs to transform automated assessment design.
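The abstract describes a two-stage pipeline: summarize the source text to key concepts, then generate an MCQ with distractors from that summary. A minimal sketch of this flow is below; `call_llm` and `generate_mcq` are hypothetical names, and the LLM call is stubbed rather than wired to any real API (the paper uses Gemini AI, whose client details are not given here).

```python
# Hypothetical sketch of the two-stage pipeline described in the abstract:
# (1) summarize source text to key concepts, (2) generate an MCQ with
# plausible distractors from the summary. `call_llm` is a placeholder for
# any LLM endpoint (the paper uses Gemini AI); it is stubbed for illustration.

def call_llm(prompt: str) -> str:
    # Placeholder: a real system would send the prompt to an LLM API here.
    return f"[LLM response to: {prompt[:40]}...]"

def generate_mcq(source_text: str) -> dict:
    # Stage 1: extract key concepts via summarization.
    summary = call_llm(
        "Summarize the key concepts of the following text:\n" + source_text
    )
    # Stage 2: generate a question plus distractors from the summary.
    mcq = call_llm(
        "Write one multiple-choice question with four options (one correct, "
        "three plausible distractors) based on this summary:\n" + summary
    )
    return {"summary": summary, "question": mcq}
```

Adaptive difficulty, as mentioned in the abstract, could be layered on by conditioning the stage-2 prompt on the learner's past performance.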
@article{b.2025,
author = {Sai Jyothi B. and Naga Likhitha N. and Veda Sri K. and Maheswari M. and Anusha K.},
title = {{Context-Aware MCQ Generation with Large Language Models: A Novel Framework}},
journal = {Journal of Information Technology and Digital World},
volume = {7},
number = {2},
pages = {90--105},
year = {2025},
publisher = {Inventive Research Organization},
doi = {10.36548/jitdw.2025.2.001},
url = {https://doi.org/10.36548/jitdw.2025.2.001}
}

