Journal Article Details
Working Papers in Applied Linguistics and TESOL
Automated Essay Scoring: A Literature Review
Ian Blood [1]
[1] Teachers College, Columbia University;
Keywords: English language; Writing; Ability testing; Grading and marking; Computer programs; Communicative competence; Testing; Education; Study of language; Teaching language; Applied linguistics
DOI: 10.7916/D8ZG74V2
Source: DOAJ
【 Abstract 】

In recent decades, large-scale English language proficiency testing and testing research have seen an increased interest in constructed-response essay-writing items (Aschbacher, 1991; Powers, Burstein, Chodorow, Fowles, & Kukich, 2001; Weigle, 2002). The TOEFL iBT, for example, includes two constructed-response writing tasks, one of which is an integrative task requiring the test-taker to write in response to information delivered both aurally and in written form (Educational Testing Service, n.d.). Similarly, the IELTS academic test requires test-takers to write in response to a question that relates to a chart or graph that the test-taker must read and interpret (International English Language Testing System, n.d.). Theoretical justification for the use of such integrative, constructed-response tasks (i.e., tasks that require the test-taker to draw upon information received through several modalities in support of a communicative function) dates back to at least the early 1960s. Carroll (1961, 1972) argued that tests which measure linguistic knowledge alone fail to predict the knowledge and abilities that score users are most likely to be interested in, i.e., the actual use of language knowledge for communicative purposes in specific contexts.

【 License 】

Unknown   
