Construct Research in Language Assessment
After watching the recorded presentations, join these authors for a live panel discussion on December 4, 2020 at 9:00 am – 9:30 am (CST). Moderator: Sebnem Kurt
|The effect of item format on the use of test-wiseness strategies in an L2 listening test
Considered part of strategic competence, test-taking strategies comprise test-management strategies and test-wiseness strategies (Cohen, 2014). Understanding the extent to which second language (L2) learners use test-wiseness strategies during language tests is essential for validation research (Cohen, 2007; Wu & Stone, 2016), as the use of these strategies is believed to introduce construct-irrelevant variance into test results. The most common criticism of the multiple-choice item format, for instance, is that it is susceptible to test-wiseness strategies such as guessing and elimination of choices. To explore whether a modified item format would be less prone to such strategies, an eye-tracking study was carried out to investigate the test-wiseness strategies used by L2 learners when answering 4-option multiple-choice items and 4-option multiple true-false items in an L2 listening test. In particular, the study explored (a) the extent to which L2 learners performed differently on the two item types, (b) the extent to which L2 learners used test-wiseness strategies for the two item types, and (c) the extent to which the use of test-wiseness strategies introduced construct-irrelevant variance for the two item types. To address these goals, three types of data (namely, test score data, eye-tracking data, and verbal report data) were gathered from 40 ESL learners at a large public university in the Pacific region. The data were analyzed both quantitatively and qualitatively using scanpath analysis. The findings revealed that (a) multiple true-false items were more difficult than multiple-choice items, (b) test-wiseness strategies were used less frequently for answering multiple true-false items than for answering multiple-choice items, and (c) the use of test-wiseness strategies had a statistically significant effect on the observed scores for both item types.
The study has implications for test item design and highlights the importance of gathering validity evidence based on response processes.
|Exploring the Construct of Interactional Competence in Different Types of Assessments of Oral Communication
Research on interaction in speaking assessment has suggested that both verbal and nonverbal interaction are integral parts of the construct of interactional competence (Young, 2011; Galaczi & Taylor, 2018). However, little has been done to investigate which features significantly contribute to interaction effectiveness. This study therefore examined the elicitation of interaction features in individual and paired discussion tasks to explore the construct of interactional competence. Two raters evaluated 68 test-taker performances. Exploratory factor analysis revealed four factors: body language, topic management, interactional management, and interactive listening. Logistic regressions showed that while the individual task elicited more topic management features, the paired discussion task elicited more interactional management features. Simple regressions showed that body language and topic management features predicted interactional competence scores in the individual task, whereas body language, topic management, interactional management, and interactive listening features were all predictors of scores in the paired discussion task. The findings suggest that both nonverbal and verbal interaction features are important to the construct of interactional competence, and that the paired format provides test takers with more opportunities to demonstrate their interactional ability. The study also underscores the importance of rater training in evaluating interactional competence.