Assessing Multi-modal Competence

After watching the recorded presentations, join these authors for a live panel discussion on December 5, 2020, from 8:30 to 9:00 am (CST). Moderator: Shireen Baghestani


YunDeok Choi

Lecturer
Sungkyunkwan University, South Korea

What Interpretations Can We Make from Scores on Graphic-Prompt Writing (GPW) Tasks?: An Argument-Based Approach to Test Validation

This argument-based validation study examines the validity of score interpretations on computer-based graphic-prompt writing (GPW) tasks, centering on the explanation inference. The GPW tasks, designed for English placement testing, measure examinees' ability to incorporate visual graphic information into their writing. Over 100 ESL students studying at a public university in the United States completed GPW tasks and two online questionnaires on graph familiarity (Xi, 2005) and test mode preference (Lee, 2004), and submitted their standardized English writing test scores. A Pearson product-moment correlation, corrected for attenuation, revealed a moderately strong positive relationship between scores on the GPW tasks and the standardized writing tests (rT1T2 = .51). Multiple linear regression and follow-up correlation analyses showed that GPW task scores were largely attributable to examinees' academic writing ability and carried relatively weak, but significant, positive relations to the three graph familiarity factors. The findings suggest that the GPW tasks and the standardized English writing tests assessed different dimensions of the same underlying construct (academic writing ability), and that the graph familiarity factors served as sources of construct-irrelevant variance. Theoretical and practical implications of the findings, as well as methodological limitations, are discussed.
Video Recording


Jinrong Li

Associate Professor
Georgia Southern University

Assessing Multimodal Writing in L2 Contexts: A Research Synthesis

Writing assessment is an integral part of writing instructors' work (Crusan, 2010; Matsuda, Cox, Jordan, & Ortmeier-Hooper, 2011). It is also complex because it involves assessment of both learning outcomes and learning processes. The development of technology over the past two decades has added another layer of complexity by dramatically changing the pedagogical practices of L2 writing. In particular, the number of studies incorporating multimodal composing practices into L2 writing contexts is growing rapidly. These developments call for further research on what to assess and how to assess it in relation to new forms of writing tasks. Given the wide range of technological tools and diverse L2 writing contexts, there is also an urgent need to explore how we can draw on different disciplines, theoretical perspectives, and data sources to achieve a better understanding of the assessment of multimodal composition. In this presentation, therefore, I report a research synthesis of empirical studies on the use and/or assessment of multimodal composition in L2 contexts published in the past decade. Empirical studies were identified through keyword searches in academic databases and Google Scholar. The review and analysis aimed to identify characteristics of multimodal composition tasks and their contexts of use, examine theoretical perspectives on the assessment of multimodal composition, and explore common and emerging assessment criteria and practices. Pedagogical implications and future research directions are discussed.
Video Recording