Technology in Speaking Assessment
After watching the recorded presentations, join these authors for a live panel discussion on December 4, 2020, from 9:30 to 10:00 am (CST). Moderator: Agata Guskaroska
Senior Research Manager
Making it Happen: Assessing Speaking through Video-Conferencing Technology
Practical considerations such as 'administrative conditions' are especially important when new test formats are operationalised, for example, a speaking test delivered via video-conferencing technology. The literature on research-informed practical implementations of remote speaking tests is limited. This study aims to contribute to this research niche by reporting on the final phase of a research project on a high-stakes video-conferencing speaking test. The previous three phases (Nakatsuhara et al., 2016; Nakatsuhara et al., 2017; Berry et al., 2018) established that the in-room and remote delivery modes are essentially equivalent in terms of score comparability and elicited language functions, but identified some practical issues as potentially affecting the validity of test score interpretation and use.
Jing Xu, Edmund Jones, Victoria Laxton and Evelina Galaczi
Principal Research Manager
Assessing L2 English speaking using automated scoring technology: Examining automarker reliability
Automated scoring is appealing for large-scale L2 speaking assessment in that it increases the speed of score reporting and reduces the logistical complexity of test administration. Despite its increasing popularity, validation work on automated speaking assessment is in its infancy. The lack of transparency in how learner speech is scored, and of evidence for the reliability of automated scoring, has not only raised concerns among language assessment professionals but also provoked scepticism about automated speaking assessment among language teachers, learners and test users (Fan, 2014; Xi, Schmidgall, & Wang, 2016).