Abstracts

Abstracts (Day 1)

The development of a web-based Spanish listening exam

Cristina Pardo Ballester
Iowa State University

It is well known by test developers that test specifications of the construct to be measured (Bachman, 1990; Bachman and Palmer, 1996; Davidson and Lynch, 2002) need to be clearly defined in the process of developing a test. This study focuses on a trial version of an online Spanish Listening Exam (SLE), a listening measure focused on grammatical items and tasks based on the main topics covered in the first two years of a Spanish curriculum. The SLE tasks are relevant to the language instruction domain and hence provide sufficient information on which to base diagnostic intervention or the classification of instructional materials. To ensure the content validity of the tasks, four instructors rated the text content of the SLE tasks using three methods: 1) the linguistic features presented in Spanish textbooks; 2) the ACTFL guidelines for listening; and 3) a wide range of discourse features of the spoken texts. Descriptive statistics and correlation analyses of the raters' judgments of task text content were conducted as evidence of the consistency of the SLE's construct validity. Moreover, the online SLE was administered to 147 Spanish learners, and participants' perceptions of the SLE's similarity to instructional tasks were analyzed as evidence of the test's usefulness. The SLE results provided diagnostic information for Spanish learners and teachers about the relevance of listening tasks.
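
As an illustration of the rater-consistency analysis described above, the sketch below computes pairwise Pearson correlations among four instructors' content ratings. The rating values and rater labels are invented for illustration; they are not the study's data.

    # Pairwise Pearson correlations among four raters' judgments of the
    # text content of listening tasks. All values are hypothetical.
    from itertools import combinations
    from scipy.stats import pearsonr

    ratings = {  # each list: one instructor's ratings for ten tasks
        "rater1": [3, 4, 2, 5, 3, 4, 2, 5, 4, 3],
        "rater2": [3, 4, 3, 5, 3, 4, 2, 4, 4, 3],
        "rater3": [2, 4, 2, 5, 3, 5, 2, 5, 4, 2],
        "rater4": [3, 3, 2, 4, 3, 4, 3, 5, 4, 3],
    }

    for a, b in combinations(ratings, 2):
        r, p = pearsonr(ratings[a], ratings[b])
        print(f"{a} vs {b}: r = {r:.2f} (p = {p:.3f})")

Consistently high correlations across rater pairs would support the claim that the instructors judged the task content consistently.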

Minimal pairs in spoken corpora
Implications for pronunciation assessment and teaching

John Levis and Viviana Cortes
Iowa State University

Minimal pairs, such as ship/sheep and think/sink, in which two words are distinguished by a single phoneme, are among the most familiar linguistic features in basic linguistics courses, theoretical phonology, and teaching applications.  In teaching, they are a mainstay for pronunciation diagnostic assessment, spoken language production practice, and pedagogical materials for listening comprehension.

A common criticism of minimal pairs is that they are rarely equally likely in the same context (e.g., Brown, 1995), and that even when both are members of the same lexical category (e.g., ship and sheep are both nouns), the context will make one word far more likely in a listener's interpretation. Jenkins' research (2000), however, suggests that such top-down processing effects are more likely for native than nonnative listeners. Despite continuing questions about how errors in minimal pairs affect listeners' ability to interpret messages correctly, no studies have examined how frequency of occurrence might affect interpretation.

This paper examines the relative frequency of minimal pairs in pronunciation teaching and testing materials. Using a variety of spoken American English corpora in different registers, we examined the occurrence of commonly used consonant and vowel minimal pairs. The results indicate that one member of a pair (usually the harder-to-pronounce sound) tends to be extremely common while the easier-to-pronounce member is extremely uncommon in spoken usage. Focusing on minimal pairs with /θ/-/s/ and /i/-/ɪ/, we apply our results to diagnostic practices as well as materials for spoken language production and perception.
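
A minimal sketch of the kind of corpus count underlying such results, assuming a plain-text spoken corpus; the file name and pair list are placeholders:

    # Count how often each member of a minimal pair occurs in a spoken
    # corpus and report the frequency imbalance between the two members.
    import re
    from collections import Counter

    PAIRS = [("think", "sink"), ("ship", "sheep"), ("live", "leave")]

    with open("spoken_corpus.txt", encoding="utf-8") as f:
        counts = Counter(re.findall(r"[a-z']+", f.read().lower()))

    for w1, w2 in PAIRS:
        n1, n2 = counts[w1], counts[w2]
        ratio = n1 / n2 if n2 else float("inf")
        print(f"{w1}: {n1:6d}  {w2}: {n2:6d}  ratio {ratio:.1f}")

A large ratio for a pair like think/sink would reflect the asymmetry reported above: the harder-to-pronounce member is far more frequent in actual spoken usage than its partner.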

References

Brown, A. (1995). Minimal pairs: Minimal importance? ELT Journal, 49(2), 169-175.

Jenkins, J. (2000). The phonology of English as an international language. Oxford: Oxford University Press.


Premises, conditions, and a framework for cognitive diagnostic assessment

Eunice Eunhee Jang, Ph.D.
Ontario Institute for Studies in Education
of the University of Toronto

Different assessment purposes have strong implications for how learners' language competency is interpreted. For example, if the purpose of assessment is to discriminate among learners by locating them on a continuous ability scale, a unidimensional representation of knowledge structure in the domain should suffice. If the purpose of the assessment is to evaluate and monitor learners on particular aspects of skills, a much finer-grained representation of knowledge structure and skill space is necessary. Cognitive diagnostic assessment (CDA) resonates with the latter purpose, as its main interest is in informing learners of their cognitive strengths and weaknesses in assessed skills. Recently, collective efforts among researchers have led to remarkable advances in CDA statistical technologies. The premise of CDA is promising, but its realization is yet to come. In this talk, I bring attention to presuppositions made about CDA, address conditions for valid CDA applications, and suggest an alternative framework that integrates CDA into computer-assisted language learning environments. The framework is characterized by the 'use' of diagnostic feedback, so that CDA can realize its full utility and find a welcoming educator clientele. Empirical examples from applications of CDA to second language and K-12 literacy assessments are presented.
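
As a deliberately simplified illustration of the skill diagnosis CDA performs, the sketch below estimates mastery from a Q-matrix as the proportion correct on items requiring each skill. Real CDA relies on statistical models (e.g., the Fusion Model or DINA), and the items, skills, and threshold here are hypothetical.

    # Non-parametric caricature of cognitive diagnosis: a Q-matrix maps
    # items to required skills; mastery of a skill is estimated as the
    # proportion correct on the items that require it.
    Q = {  # item -> skills required (hypothetical)
        "item1": {"vocabulary"},
        "item2": {"grammar"},
        "item3": {"vocabulary", "inference"},
        "item4": {"inference"},
        "item5": {"grammar", "inference"},
    }
    responses = {"item1": 1, "item2": 0, "item3": 1, "item4": 1, "item5": 0}

    skills = {s for required in Q.values() for s in required}
    for skill in sorted(skills):
        items = [i for i, required in Q.items() if skill in required]
        mastery = sum(responses[i] for i in items) / len(items)
        status = "strength" if mastery >= 0.7 else "weakness"
        print(f"{skill:10s} {mastery:.2f} -> {status}")

The output is a per-skill strength/weakness profile rather than a single score on an ability scale, which is precisely the shift in reporting that distinguishes CDA from discriminative assessment.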


Using diagnostic information to adapt traditional textbook-based instruction

Joan Jamieson, Northern Arizona University
Maja Grgurovic, Iowa State University
Tony Becker, Northern Arizona University

Although diagnostic assessment has traditionally been defined as a highly specialized procedure for addressing persistent learning problems, materials developers have recently associated the term with corrective procedures in formative assessment. This latter sense was used by the developers of the NorthStar textbook series, in which on-line assessments and remediation were used to individualize instruction for English language learners. In this presentation, first the process for selecting diagnostic information will be explained. Then, the results of a pilot study conducted in summer 2007 will be presented. Before beginning a new unit, students took a brief on-line Readiness Check to assess whether they knew the vocabulary and grammar that the unit assumed; that is, material which was included in the unit but not directly taught. Students were automatically assigned individualized on-line homework based on poor performance on the Readiness Check. After students completed the unit, they took an Achievement Test; individualized on-line instruction was automatically assigned as homework to each student based on low scores on the different parts of the Achievement Test, including extra practice in the following areas: reading, vocabulary, grammar, editing, and writing. Data were collected on test performance and attitudes through scores, a questionnaire, and interviews. Analyses were conducted to examine student performance as well as the degree to which the teacher and the students found the Readiness Check, the Achievement Test, and their associated on-line remediation exercises helpful. This presentation describes a real-world example of a small-scale, modest step forward for diagnostic language assessment.
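
A minimal sketch of the assignment logic described above, mapping low part scores to the five remediation areas; the cutoff and exercise names are invented for illustration, and the actual NorthStar scoring rules may differ.

    # Assign individualized on-line homework from Achievement Test
    # part scores: any part below the cutoff triggers extra practice.
    CUTOFF = 0.7  # hypothetical mastery threshold

    REMEDIATION = {
        "reading":    "extra reading practice",
        "vocabulary": "extra vocabulary practice",
        "grammar":    "extra grammar practice",
        "editing":    "extra editing practice",
        "writing":    "extra writing practice",
    }

    def assign_homework(part_scores):
        """Return remediation exercises for parts scored below the cutoff."""
        return [REMEDIATION[part] for part, score in part_scores.items()
                if score < CUTOFF]

    print(assign_homework({"reading": 0.9, "vocabulary": 0.5,
                           "grammar": 0.65, "editing": 0.8, "writing": 0.75}))

The same rule applied to Readiness Check scores before a unit yields the pre-unit individualized homework the abstract describes.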


Extending learner models for Intelligent Computer-Assisted Language Learning beyond grammar

Luiz A. Amaral, University of Victoria
Detmar Meurers, Ohio State University

Learner models for Intelligent Computer-Assisted Language Learning (ICALL) have focused on modeling a student’s state of knowledge in terms of the acquisition of linguistic structures (cf. Heift, 2005; Michaud and McCoy, 2004; Murphy and McTear, 1997).  Correspondingly, the diagnosis and feedback components of the ICALL systems that make use of the learner model generally take for granted that linguistic errors are caused solely by a lack of grammatical knowledge.

In this paper, we argue for a broader perspective on learner models for ICALL, one which incorporates factors outside of linguistic competence per se (in line with Bull et al., 1995). This makes it possible to model the learner's ability to use language in context for specific goals and the learner's abilities relative to particular tasks. Updating the model requires the specification of explicit activity models, which are, however, well-motivated: to guarantee valid interpretations of students' performance, it is necessary to take into account information about the task environment in which it occurs.

The learner model architecture we present allows the system to react to a student's errors based not only on her linguistic knowledge, but also on her ability to perform language tasks of different types and levels of difficulty, the strategies needed to perform the tasks, and certain aspects of negative transfer. The proposed learner model is being implemented as part of the TAGARELA system, an intelligent electronic workbook accompanying the individualized instruction of Portuguese.
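
The sketch below illustrates, under our own naming assumptions (not TAGARELA's actual implementation), how such a learner model might record non-grammatical factors and weigh them when interpreting an error:

    # Hypothetical learner model that goes beyond grammatical knowledge:
    # it also tracks task performance, strategy use, and L1 transfer.
    from dataclasses import dataclass, field

    @dataclass
    class LearnerModel:
        grammar_knowledge: dict = field(default_factory=dict)  # structure -> mastery
        task_performance: dict = field(default_factory=dict)   # (task_type, level) -> success rate
        strategy_use: dict = field(default_factory=dict)       # strategy -> observed count
        transfer_errors: list = field(default_factory=list)    # suspected L1-transfer errors

        def interpret_error(self, error, task_type, level):
            """Weigh a grammar explanation against task difficulty and transfer."""
            if self.task_performance.get((task_type, level), 1.0) < 0.5:
                return "task difficulty"  # learner struggles with this task type
            if error in self.transfer_errors:
                return "negative transfer"
            return "gap in grammatical knowledge"

The design point is that the same surface error receives different feedback depending on the task environment and the learner's history, rather than being attributed by default to missing grammatical knowledge.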

References

Bull, Susan, Paul Brna and Helen Pain (1995). Extending the Scope of the Student Model. User Modeling and User-Adapted Interaction 5, 45-65.

Heift, Trude (2005). Corrective Feedback and Learner Uptake in CALL. ReCALL Journal 17(1), 32-46.

Michaud, Lisa N. and Kathleen F. McCoy (2004). Empirical Derivation of a Sequence of User Stereotypes for Language Learning. User Modeling and User-Adapted Interaction 14, 317-350.

Murphy, Maureen and Michael McTear (1997). Learner Modeling for Intelligent CALL. In Proceedings of the 6th International Conference on User Modeling, Sardinia, Italy, pp. 301-312.


Soundwaves and spectrographs:  Feedback for L2 learners

Fran Gulinello
Nassau Community College

It has become increasingly common for computer-based language learning programs to incorporate sound waves so that learners can visually compare their speech to a native speaker's. This visual feedback, accompanying the usual oral and auditory feedback, seems like a logical extension of the technology. Since perception of the target language is influenced by the native language, learners are often unable to identify their own errors or perceive differences; thus, visual feedback might be useful in learning a second language phonology. Questions arise, however, as to whether accurate comparisons can truly be made with this technology. Can learners extract from sound waves the information needed for accurate feedback, and what do learners assume about sounds that cannot legitimately be distinguished on a sound wave? This paper proceeds in two parts. First is a discussion of the kind of information that can and cannot be obtained from a sound wave and how learners use this information. The second part presents a confusion matrix based on spectrographic analysis that provides learners with a realistic and concrete model of their interlanguage phonological systems as well as that of native speakers. This feedback is then discussed both as a method of tracking change in pronunciation over time and in terms of its potential implications for self-feedback.
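
To make the contrast concrete, here is a brief sketch, assuming a mono WAV recording, of the two displays at issue: a waveform, which shows only amplitude over time, and a spectrogram, which exposes the frequency information (such as vowel formants) that a waveform cannot. The file name is a placeholder.

    # Plot a learner utterance as a waveform and as a spectrogram.
    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.io import wavfile
    from scipy.signal import spectrogram

    rate, samples = wavfile.read("learner_utterance.wav")  # assumes mono audio

    fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)

    # Waveform: amplitude over time only; many segmental contrasts
    # (e.g., vowel quality) are not legitimately distinguishable here.
    ax1.plot(np.arange(len(samples)) / rate, samples)
    ax1.set_ylabel("Amplitude")

    # Spectrogram: energy by frequency over time; formant structure
    # relevant to vowel contrasts becomes visible.
    f, t, Sxx = spectrogram(samples, fs=rate)
    ax2.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-12))
    ax2.set_ylabel("Frequency (Hz)")
    ax2.set_xlabel("Time (s)")
    plt.show()

Two recordings with nearly identical waveforms can differ markedly in the spectrogram, which is the core of the argument that waveform displays alone can mislead learners about their own pronunciation.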