
To Aggregate or Not? Linguistic Features in Automatic Essay Scoring and Feedback Systems

Creative Commons BY-NC-ND 4.0 license
Abstract

This study investigates the relative efficacy of using linguistic micro-features, aggregations of such features, and a combination of micro-features and aggregated features in developing automatic essay scoring (AES) models. Although the use of aggregated features is widespread in AES systems (e.g., e-rater; Intellimetric), little published evidence demonstrates the superiority of this approach over the use of linguistic micro-features or a combination of micro-features and aggregated features. The results of this study indicate that AES models composed of micro-features, or of a combination of micro-features and aggregated features, outperform AES models composed of aggregated features alone. The results also indicate that AES models based on micro-features, or on a combination of micro-features and aggregated features, offer a greater variety of features with which to provide formative feedback to writers. These results have implications for the development of AES systems and for providing automatic feedback to writers within these systems.
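The comparison described in the abstract can be illustrated with a minimal sketch: training scoring models on micro-features, on aggregated features, and on both, then comparing cross-validated fit. The regression and cross-validation setup, the feature names, and the synthetic data below are assumptions for illustration only; the abstract does not specify the paper's actual modeling procedure.

```python
# Hypothetical sketch of comparing AES feature sets (not the paper's actual pipeline).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_essays = 500

# Assumed micro-features: individual lexical/syntactic indices per essay.
micro = rng.normal(size=(n_essays, 20))

# Assumed aggregated features: component scores that pool related micro-features
# (here, simple averages over blocks of five micro-features).
aggregated = np.stack(
    [micro[:, i:i + 5].mean(axis=1) for i in range(0, 20, 5)], axis=1
)

# Synthetic holistic essay scores, only so the example runs end to end.
scores = micro @ rng.normal(size=20) + rng.normal(scale=0.5, size=n_essays)

feature_sets = {
    "micro only": micro,
    "aggregated only": aggregated,
    "micro + aggregated": np.hstack([micro, aggregated]),
}

# Compare each feature set by mean cross-validated R^2.
for name, X in feature_sets.items():
    r2 = cross_val_score(LinearRegression(), X, scores, cv=10, scoring="r2")
    print(f"{name}: mean cross-validated R^2 = {r2.mean():.3f}")
```

In a setup like this, the aggregated-only model can only recover whatever variance survives the pooling step, which is one way to see why models with access to the micro-features might score essays more accurately and expose more specific features for formative feedback.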
