Handbook of Automated Essay Evaluation
Author: Mark D. Shermis
Publisher: Routledge
Total Pages: 515
Release: 2013-07-18
ISBN-10: 1136334793
ISBN-13: 9781136334795
Rating: 4/5 (95 Downloads)
Book excerpt: This comprehensive, interdisciplinary handbook reviews the latest methods and technologies used in automated essay evaluation (AEE). Highlights include the latest in the evaluation of performance-based writing assessments and recent advances in the teaching of writing, language testing, cognitive psychology, and computational linguistics. This greatly expanded follow-up to Automated Essay Scoring reflects the numerous advances that have taken place in the field since 2003, including automated essay scoring and diagnostic feedback. Each chapter features a common structure, including an introduction and a conclusion, and ideas for diagnostic and evaluative feedback are woven throughout the book.

Highlights of the book's coverage include:
- The latest research on automated essay evaluation.
- Descriptions of the major scoring engines, including E-rater®, the Intelligent Essay Assessor, the Intellimetric™ engine, c-rater™, and LightSIDE.
- Applications of the technology, including a large-scale system used in West Virginia.
- A systematic framework for evaluating research and technological results.
- Descriptions of AEE methods that can be replicated for languages other than English, as seen in an example from China.
- Chapters from key researchers in the field.

The book opens with an introduction to AEE and a review of "best practices" in the teaching of writing, along with tips on the use of automated analysis in the classroom. Next, the book details the capabilities and applications of the scoring engines listed in the highlights above. Here readers will find an actual application of AEE in West Virginia; psychometric issues related to AEE such as validity, reliability, and scaling; and the use of automated scoring to detect reader drift, grammatical errors, and discourse coherence quality, as well as the impact of human rating on AEE.
A review of the cognitive foundations underlying AEE methods is also provided. The book concludes with a comparison of the various AEE systems and speculation about the future of the field in light of current educational policy. Ideal for educators, professionals, curriculum specialists, and administrators responsible for developing writing programs or distance-learning curricula; those who teach using AEE technologies; policy makers; and researchers in education, writing, psychometrics, cognitive psychology, and computational linguistics, this book also serves as a reference for graduate courses on automated essay evaluation taught in education, computer science, language, linguistics, and cognitive psychology.