Automated essay scoring (AES) has long promised fairness and efficiency in educational assessment, yet most existing systems rely heavily on surface-level linguistic features such as word count, grammar, or syntax patterns and struggle to capture the semantic depth, coherence, and contextual relevance of student writing. To address these limitations, this study proposes a Hierarchical Attention-Based BiLSTM-CNN Hybrid Model (HA-BCHM) that integrates local linguistic cues with global semantic understanding to more closely emulate human grading behavior. The model automatically scores English essays by evaluating grammar, coherence, vocabulary richness, and structural organization, and provides real-time feedback to help students improve their writing. It leverages convolutional neural networks for local feature extraction and a BiLSTM with attention mechanisms for modeling the broader context and hierarchical structure of text. The model is trained and evaluated on the open-access ASAP 2.0 dataset, comprising approximately 24,000 student-written essays across multiple prompts. Experimental results demonstrate strong agreement with human graders, with a Pearson correlation of 0.89 and a mean squared error of 2.15. Beyond reliable assessment, the method shows observable pedagogical impact: an average 19.5% increase in student writing scores after revision and a feedback accuracy of 86.5% across key writing elements.
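The agreement metrics reported above (Pearson correlation and mean squared error between model scores and human grades) can be sketched as follows; the score values in the example are hypothetical and not drawn from the ASAP 2.0 data:

```python
from math import sqrt

def pearson(x, y):
    # Pearson correlation between two equal-length score lists
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def mse(x, y):
    # Mean squared error between predicted and human scores
    return sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)

# Hypothetical human and model scores for five essays
human = [6.0, 8.0, 5.0, 9.0, 7.0]
model = [6.5, 7.5, 5.5, 8.5, 7.0]
print(round(pearson(human, model), 3))  # correlation close to 1 for these aligned scores
print(round(mse(human, model), 3))
```

A high Pearson correlation indicates that the model ranks essays similarly to human graders, while a low MSE indicates that the absolute scores are close on the rubric scale; the paper reports 0.89 and 2.15 respectively on ASAP 2.0.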