Volume - 7 | Issue - 2 | June 2025
Published: 06 May, 2025
The methods of conducting examinations are evolving, with institutions increasingly adopting online systems; Multiple-Choice Questions (MCQs) have become central to these systems because of their efficiency and scalability. However, constructing high-quality MCQs remains a manual, time-consuming process. Existing automated systems, which mainly rely on BERT-based summarization and lexical distractor generation tools such as WordNet, suffer from limited contextual understanding and scalability. To address these challenges, this research proposes an innovative solution using Large Language Models (LLMs), specifically Gemini AI, for automated MCQ generation. The methodology involves LLM-based text summarization to extract key concepts, followed by direct MCQ and distractor generation with enhanced contextual relevance, diversity, and minimal manual intervention. Additionally, real-time feedback and adaptive difficulty adjustment are integrated to support personalized learning experiences. Comparative analysis with recent models such as T5, GPT-3.5, and BERT shows that Gemini AI outperforms them in contextual quality, distractor coherence, and generation efficiency, achieving a 20% improvement in human-rated question quality and highlighting the potential of LLMs to revolutionize automated assessment design.
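The two-stage pipeline described above (LLM-based summarization to extract key concepts, then MCQ and distractor generation from that summary) can be sketched as follows. This is a minimal illustration only: the prompt wording, function names, and the `llm_call` wrapper are assumptions, not taken from the paper, and any prompt-to-completion function (e.g. a Gemini API client) could be plugged in.

```python
# Hypothetical sketch of a two-stage MCQ-generation pipeline:
# stage 1 summarizes the source text into key concepts,
# stage 2 generates one MCQ with distractors from the summary.

SUMMARY_PROMPT = (
    "Summarize the following passage into its key concepts:\n\n{text}"
)

MCQ_PROMPT = (
    "From the summary below, write one multiple-choice question with "
    "four options: one correct answer and three plausible distractors.\n\n"
    "Summary:\n{summary}"
)


def build_summary_prompt(text: str) -> str:
    """Stage 1: prompt asking the LLM to extract key concepts."""
    return SUMMARY_PROMPT.format(text=text)


def build_mcq_prompt(summary: str) -> str:
    """Stage 2: prompt asking the LLM for an MCQ plus distractors."""
    return MCQ_PROMPT.format(summary=summary)


def generate_mcq(text: str, llm_call) -> str:
    """Run both stages; `llm_call` is any prompt -> completion function
    (for instance, a thin wrapper around a Gemini API client)."""
    summary = llm_call(build_summary_prompt(text))
    return llm_call(build_mcq_prompt(summary))
```

Keeping the model behind a plain `llm_call` callable makes the pipeline testable with a stub and lets the same code drive the comparative evaluation across models (Gemini, T5, GPT-3.5) mentioned in the abstract.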
Keywords: MCQ Generation, Large Language Models, Automated Question Creation, Online Assessments, Text Summarization, Distractor Generation, Adaptive Learning

