Abstract:
Automated essay scoring (AES) can effectively alleviate the burden on teachers when evaluating student essays and provide students with objective and timely feedback; it is a crucial application of natural language processing in education. Cross-prompt AES aims to develop a transferable scoring model that performs well on essays from a target prompt. However, existing cross-prompt AES models primarily operate in scenarios where target-prompt data is available: they align feature distributions between the source and target prompts to learn invariant feature representations that transfer to the target prompt. Unfortunately, such methods cannot be applied when target-prompt data is unavailable. In this paper, we propose a cross-prompt AES method based on Category Adversarial Joint Learning (CAJL). First, we jointly model AES as a classification task and a regression task so that the two tasks improve each other's performance. Second, unlike existing methods that rely on prompt-agnostic features to enhance model generalization, our approach introduces a category adversarial strategy: by aligning category-level features across different prompts, we learn prompt-invariant feature representations and further enhance model generalization. We evaluate the proposed method on the Automated Student Assessment Prize (ASAP) and ASAP++ datasets, predicting both overall essay scores and trait scores. Experimental results demonstrate that our method outperforms six classical methods in terms of the quadratic weighted kappa metric.