MUSE_E: Multimodal AI Framework for Automated Answer Script Evaluation: Integrating Handwriting Recognition and Semantic Scoring

Authors

  • KAMALA V, MITRA K, PRADEEPA S, SUBA H, SUREKA R

DOI:

https://doi.org/10.7492/7h8es254

Abstract

The advancement of technologies such as Artificial Intelligence (AI) and Vision-Language Models (VLMs) has significantly transformed academic assessment through automated grading systems. This paper presents an AI-powered automated answer-script evaluation system that integrates secure web-based exam management with multimodal AI-driven grading, using Flask as the backend and React as the frontend, with JWT-based role authentication for teachers and students. In this application, teachers create examinations with a pre-defined answer key, total marks, and submission timelines, while students upload handwritten answer scripts as PDF files or images. The evaluation engine uses the Groq LLaMA Vision Model to perform Optical Character Recognition (OCR) and semantic grading, extracting the text content from the encoded handwritten scripts and comparing it with the original answer key. The application evaluates the presence of keywords, concepts, diagrams, and semantic alignment to generate scores based on the answer key, along with detailed AI-driven feedback, and stores the marks in a database for retrieval and transparency. By integrating automated OCR, semantic scoring, and secure workflow management, the proposed framework improves grading efficiency, maintains consistency, scalability, and reliability, and notably reduces manual effort while ensuring fairness in digital academic assessment.
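The evaluation flow described above (sending an encoded handwritten script to a vision model and scoring the extracted text against an answer key) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the model identifier, prompt wording, and the `keyword_score` helper are assumptions, and the abstract does not specify the exact scoring formula.

```python
import base64

def build_ocr_request(image_bytes: bytes, answer_key: str) -> dict:
    """Build a chat-completion payload asking a vision-language model to
    transcribe a handwritten script and compare it with the answer key.
    The model id below is a placeholder, not the one used in the paper."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": "llama-vision",  # placeholder (assumption)
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Transcribe this handwritten answer and grade it "
                         f"against this key:\n{answer_key}"},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    }

def keyword_score(extracted_text: str, keywords: list[str],
                  total_marks: float) -> float:
    """Hypothetical keyword-presence score: the fraction of key concepts
    found in the transcribed answer, scaled to the question's marks."""
    if not keywords:
        return 0.0
    text = extracted_text.lower()
    hits = sum(1 for kw in keywords if kw.lower() in text)
    return round(total_marks * hits / len(keywords), 2)
```

In a deployment like the one described, the payload would be sent to the Groq chat-completions endpoint, and the keyword check would be combined with the model's own semantic judgment and feedback before the result is persisted to the database.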

Published

2026

Issue

Section

Articles

How to Cite

MUSE_E: Multimodal AI Framework for Automated Answer Script Evaluation: Integrating Handwriting Recognition and Semantic Scoring. (2026). MSW Management Journal, 36(1), 3016-3020. https://doi.org/10.7492/7h8es254