Fowler, Alison M.L.
Providing effective feedback on whole-phrase input in computer-assisted language learning.
In: Proceedings of the 12th International Computer Assisted Assessment Conference 2008.
An important advantage of online assessment is that answer data can be easily stored and later analysed with a view to establishing the efficacy of the assessment methodology. A five-year study of the effectiveness of online grammar exercises has been carried out at the University of Kent. The exercises featured require input in the form of whole sentences, since this is a more authentic test of language skills than single-word input or multiple choice. Error feedback is generic (indicating where errors have occurred) rather than specific (indicating the exact nature of the errors), because the error-diagnosis system has been designed to be completely language independent. The study aimed to gauge whether this type of feedback is effective in enabling students to:

· identify the types of mistakes in their input;
· rectify the mistakes;
· learn from the mistakes and apply that learning to subsequent problems.

There was initial concern that the generic feedback might not provide enough detail to enable users to understand and correct their errors; however, extensive use by the University's Spanish department has shown that this type of mark-up is very effective. Chapelle (1998) stresses that it is important for learners to be given the opportunity to correct their linguistic errors. Users of this system, having failed to answer a question correctly on their first attempt, are permitted a second attempt. It is abundantly clear from the logged data that where users make mistakes on their first attempt (and they generally do, since the material is designed to be testing), there is almost always a significant improvement on the second attempt. This alone shows that the feedback helps within a question, but it is not enough to prove the pedagogical efficacy of this means of exercise presentation, so more detailed analysis was performed. Over several years of trials more than 100,000 answers have been logged, and every answer has been analysed.
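The keywords list sequence comparison as the underlying technique, though the abstract does not give the algorithm. The sketch below shows one way generic, language-independent mark-up of this kind could work, using Python's difflib to flag where an attempt diverges from a model answer without describing the nature of each error; the function name and the "^"/bracket notation are illustrative assumptions, not the paper's actual system.

```python
# Illustrative sketch: generic error mark-up via sequence comparison.
# difflib.SequenceMatcher is language independent -- it compares character
# sequences, so it works identically for Spanish, French, etc.
import difflib

def mark_errors(expected: str, attempt: str) -> str:
    """Return the attempt with error regions flagged but not explained:
    [..] marks wrong or surplus text, ^ marks a point where text is missing."""
    sm = difflib.SequenceMatcher(a=expected, b=attempt, autojunk=False)
    out = []
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag == "equal":
            out.append(attempt[j1:j2])      # correct portion, shown as-is
        elif tag == "delete":
            out.append("^")                 # something missing here
        else:                               # 'replace' or 'insert'
            out.append("[" + attempt[j1:j2] + "]")
    return "".join(out)
```

A fully correct attempt passes through unchanged, while any divergence is located but not diagnosed, which matches the generic (rather than specific) feedback the study evaluates.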
It can be shown that, for well-designed exercises, students improve in three ways as they progress through an exercise:

· more questions are answered correctly on the first attempt;
· overall question scores (i.e. the average of the 1st and 2nd attempts at each question) improve;
· thinking time for formulating answers decreases.

The degree of increase in accuracy and decrease in thinking time is exercise-dependent, but the overall picture shows clearly that the generic, language-independent feedback is indeed effective. Moreover, it is easy to identify poorly designed exercises, since they do not exhibit the characteristics listed above.
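The three measures above can be computed directly from logged answer data. The sketch below assumes a simple per-question record layout (the field names are hypothetical; the paper does not describe its log format):

```python
# Sketch of the per-exercise analysis described above, under an assumed
# log format: one dict per question with 'first_correct' (bool),
# 'score1'/'score2' (0 or 1 for each attempt) and 'think_secs' (float).
def summarise(log):
    """Return (first-attempt accuracy, mean two-attempt score, mean thinking time)."""
    n = len(log)
    first_attempt_rate = sum(q["first_correct"] for q in log) / n
    # overall question score = average of the 1st and 2nd attempts
    mean_score = sum((q["score1"] + q["score2"]) / 2 for q in log) / n
    mean_think = sum(q["think_secs"] for q in log) / n
    return first_attempt_rate, mean_score, mean_think
```

Comparing these summaries between the early and late questions of an exercise would reveal the improvement trend the study reports, and a flat or falling trend would flag a poorly designed exercise.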
Item type: Conference or workshop item
Award: Winner, best paper, 12th International Computer Assisted Assessment Conference, Loughborough, 2008
Keywords: error detection, feedback, sequence comparison, CALL, language learning, SLA
Subjects: Q Science > QA Mathematics (inc Computing science) > QA 76 Software, computer programming
Divisions: Faculties > Science Technology and Medical Studies > School of Computing > Computing Education Group
URI: http://kar.kent.ac.uk/id/eprint/23993