Building an Automated Scoring System for a Single English Sentences 


Vol. 14,  No. 3, pp. 223-230, Jun.  2007
10.3745/KIPSTB.2007.14.3.223


  Abstract

The purpose of developing an automated scoring system for English composition is to score tests of English sentence writing and to provide feedback on them without human effort. This paper presents an automated system for scoring English composition whose input is a single sentence rather than an essay. Taking a single sentence as input has advantages in comparing the input with the answers given by human teachers and in providing detailed feedback to test takers. The system has been developed and tested with real test data collected from English tests given to third-grade students in junior high school. Scoring a single sentence requires two steps. The first is analyzing the input sentence to detect possible errors, such as spelling errors, syntactic errors, and so on. The second is comparing the input sentence with the given answer to identify the differences as errors. The results produced by the system were then compared with those provided by human raters.
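The two-step process described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the tiny lexicon, the token-level diff, and all function names are hypothetical stand-ins for the system's actual spelling/syntax analysis and answer-comparison components.

```python
import difflib

# Toy lexicon standing in for a real spelling checker (hypothetical).
LEXICON = {"she", "plays", "the", "piano", "every", "day"}

def detect_errors(sentence):
    """Step 1: flag tokens absent from the lexicon as possible spelling errors."""
    return [tok for tok in sentence.lower().split() if tok not in LEXICON]

def compare_with_answer(sentence, answer):
    """Step 2: diff the input sentence against a teacher-given answer;
    '-' marks answer tokens missing from the input, '+' marks extra/changed tokens."""
    diff = difflib.ndiff(answer.lower().split(), sentence.lower().split())
    return [d for d in diff if d.startswith(("-", "+"))]

student = "She plais the piano evry day"
answer = "She plays the piano every day"

print(detect_errors(student))               # → ['plais', 'evry']
print(compare_with_answer(student, answer))
```

A real system would replace the lexicon lookup with a proper spelling and syntax analyzer, and the token diff with a comparison against the full set of acceptable answers, so that each difference can be reported to the test taker as specific feedback.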



  Cite this article

[IEEE Style]

J. E. Kim, K. J. Lee, and K. A. Jin, "Building an Automated Scoring System for a Single English Sentences," The KIPS Transactions: Part B, vol. 14, no. 3, pp. 223-230, 2007. DOI: 10.3745/KIPSTB.2007.14.3.223.

[ACM Style]

Jee Eun Kim, Kong Joo Lee, and Kyung Ae Jin. 2007. Building an Automated Scoring System for a Single English Sentences. The KIPS Transactions: Part B, 14, 3 (2007), 223-230. DOI: 10.3745/KIPSTB.2007.14.3.223.