Presentation Abstracts


Lea Johannsen

English Language Learner Resource Coordinator
Iowa State University

Applying CALL in the Ivy College of Business

During my time in the ALT program here at ISU, we often spoke of the far-reaching applications of CALL and ESL instruction. Graduates from the program pursued professional opportunities across the globe. For many of us, the prospect of teaching English abroad was a familiar career consideration. What I didn't expect was that I would end up applying my skills within eyeshot of Ross Hall, in the Gerdin Business Building. In this presentation, I will discuss the projects I undertake in my position and how they relate to the skills I acquired in the ALT program.
As the English Language Learner (ELL) Resource Coordinator in the Ivy College of Business (CoB) Communications Center, my job is to support our ELL students by producing resources and developing programming. Every semester and every day presents a new project to focus on, from scripting, filming, and producing videos, to developing and administering a course, to working one-on-one with students. My workflow is flexible, allowing me to drop and pick up projects based on student and team needs.
I also collaborate extensively with other units within the Ivy CoB, such as Advising, Career Services, and Marketing. Working with people who have incredibly different career experiences and backgrounds allows me to continue learning outside of my own field. Incorporating their diverse perspectives and expertise, along with my training in the ALT program, allows me to enrich what I create for students. This position blends CALL tools and pedagogy with the unique considerations of both business school and writing center contexts.


Hong Ma

Assistant Professor
Zhejiang University, China

Assigning Students' Writing Samples to CEFR Levels Automatically: A Machine-Learning Approach

This project proposes a method of automatically assigning students' writing samples to levels of the CEFR (Common European Framework of Reference for Languages). We believe that the proposed method, which relies on big data and machine-learning algorithms, will facilitate future endeavors in alignment and writing evaluation.
The data include 1,500 writing samples selected from the EF-Cambridge Open Language Database (EFCAMDAT), a publicly available corpus containing over 83 million words from 1 million assignments written by 174,000 learners worldwide, across a wide range of levels (CEFR stages A1-C2). The 1,500 writing samples are equally distributed across all six CEFR levels. Quality indexes of the writing samples were obtained through the automatic writing analysis tool Coh-Metrix.
This project uses a machine-learning technique to model the predictive relationship between the quality indexes (the independent variables) and the CEFR levels (the dependent variable) of students' writing samples, since machine-learning methods, which have recently emerged in linguistic research, have generally demonstrated higher accuracy in classification tasks than traditional regression models (McNamara, Crossley, Roscoe, Allen & Dai, 2015). In similar endeavors, the accuracy of different machine-learning classifiers has been reported for n-gram recognition tasks (Jarvis, 2011), and discriminant function analysis (one such classifier) has been used to predict scores of students' argumentative essays (McNamara et al., 2015). In the current research, we adopted a more advanced machine-learning classifier, multiple support vector machine recursive feature elimination (MSVM-RFE), which has demonstrated considerably high accuracy in more complicated classification tasks, such as the classification and selection of better gene subsets in cancer studies (Duan, Rajapakse, Wang, & Azuaje, 2005). The adoption of this classifier will not only result in an algorithm that assigns writing samples to CEFR levels automatically but also rank the features that discriminate between different levels of writing quality. These top features yield pedagogical implications important to writing instruction.
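To give a concrete picture of the approach, here is a minimal sketch of SVM-based recursive feature elimination for six-way CEFR classification, using scikit-learn's single-SVM RFE as a stand-in for MSVM-RFE. The simulated data and all settings are illustrative assumptions, not the project's implementation.

```python
# Minimal sketch (not the authors' code): SVM with recursive feature
# elimination for CEFR-level classification. Data are simulated stand-ins
# for Coh-Metrix indexes.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(1500, 40))    # 1,500 essays x 40 Coh-Metrix-style indexes (simulated)
y = rng.integers(0, 6, size=1500)  # six CEFR levels, A1-C2, coded 0-5

svm = LinearSVC(dual=False, max_iter=5000)
model = make_pipeline(
    StandardScaler(),
    RFE(estimator=svm, n_features_to_select=10, step=1),  # iteratively drop the weakest feature
)

acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
model.fit(X, y)
kept = np.where(model.named_steps["rfe"].ranking_ == 1)[0]  # rank 1 = retained by RFE
print(f"cross-validated accuracy: {acc:.2f}; retained feature indexes: {kept}")
```

The retained-feature ranking is what yields the pedagogically interpretable "top features" mentioned above; on real Coh-Metrix data those indexes would map to named writing-quality measures.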


Aysel Saricaoglu

Assistant Professor
Social Sciences University of Ankara

L2 Learners' Knowledge of Syntactic Complexity: Insights from a Complexity Judgment Test

Syntactic complexity is commonly measured through production-based tasks (e.g., essay writing). However, it is not always possible to gather information about learners' linguistic knowledge from production data (Gass, 2001). Responding to the call for exploring alternative understandings of syntactic complexity in L2 writing research (Ortega, 2015), this study explores whether a judgment test can elicit data about learners' knowledge of syntactic complexity. It specifically investigates L2 learners' (n = 43) performance on a complexity judgment test (CJT), developed on the basis of the developmental stages for complexity features hypothesized by Biber, Gray, and Poonpon (2011), as well as the learners' complexity judgment criteria. Data were collected through the CJT and stimulated recalls and were analyzed both quantitatively and qualitatively. Results revealed that at the production level, learners were able to produce more clause-level complexity features, but at the input level, they were able to judge the complexity of phrases more accurately than the complexity of clauses, confirming that information from a judgment test can reflect linguistic knowledge that cannot be observed in produced language. Results also revealed that factors beyond complexity itself (e.g., grammar, vocabulary, length, L1) were involved in learners' complexity judgments, as indicated by evidence from the stimulated recall data.


Moonyoung Park

Assistant Professor
Chinese University of Hong Kong

Investigating strategic online reading processes of pre-service English teachers in Korea

With the increased use of the Internet, online reading has become a major source of input for English as a foreign/second language (EFL/ESL) teachers, as it provides them with authentic and motivating language input for language teaching and learning, as well as a fundamental skill for lifelong learning. Online texts are typically nonlinear, interactive, and inclusive of multiple media forms, and they are characterized by the richness and depth of the information they provide through linked nodes of information. Each of these characteristics affords new opportunities while also presenting a range of challenges that require new thought processes for meaning-making and constant decision-making about reading order and the sources of information to use. Thus, it is critical to make EFL/ESL teachers consciously aware of online reading strategies.
The purpose of this project is to examine the complexity of online reading strategies used by eleven pre-service EFL teachers in the Republic of Korea. Individual participants read on the Internet with the goal of developing a technology-enhanced lesson plan incorporating their selected online resources. Internet reading strategies and teacher cognition were analyzed using participants' verbal reports, eye-tracking data, and triangulated complementary data (e.g., computer screen recordings and online reading strategy survey data). Results demonstrate the role these strategies play in constructing meaning from and making decisions about Internet texts, as well as the interactive patterns of strategy use identified in the Internet reading and lesson planning tasks. Findings from the project may offer insights into the types, patterns, and complexities of reading strategies used in pre-service EFL teachers' Internet reading and lesson design. The project's findings and interpretation may also contribute to a foundational understanding of the link between reading strategies and the new literacies of online reading comprehension involved in online reading tasks.


YunDeok Choi

Lecturer
Sungkyunkwan University, South Korea

What Interpretations Can We Make from Scores on Graphic-Prompt Writing (GPW) Tasks?: An Argument-Based Approach to Test Validation

This argument-based validation research examines the validity of score interpretations on computer-based graphic-prompt writing (GPW) tasks, centering on the explanation inference. The GPW tasks, designed for English placement testing, measure examinees' ability to incorporate visual graphic information into their writing. Over 100 ESL students studying at a public university in the United States completed GPW tasks and two online questionnaires on graph familiarity (Xi, 2005) and test mode preference (Lee, 2004), and submitted their standardized English writing test scores. A Pearson product-moment correlation, corrected for attenuation, revealed that scores on the GPW tasks and the standardized writing tests had a moderately strong positive relationship (r = .51). Multiple linear regression and follow-up correlation analyses showed that GPW task scores were attributable to examinees' academic writing ability and carried relatively weak, but significant, positive relations to the triad of graph familiarity factors. The findings suggest that the GPW tasks and the standardized English writing tests assessed different dimensions of the same underlying construct (academic writing ability) and that the triad of graph familiarity factors served as sources of construct-irrelevant variance. Theoretical and practical implications of the findings, as well as methodological limitations, are discussed.
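For context, Spearman's correction for attenuation estimates the correlation between two measures' true scores by dividing the observed correlation by the square root of the product of their reliabilities. A minimal sketch with hypothetical values, not the study's actual reliabilities:

```python
# Correction for attenuation: r_true = r_xy / sqrt(r_xx * r_yy).
# Reliability values below are hypothetical, for illustration only.
import math

def disattenuated_r(r_observed: float, rel_x: float, rel_y: float) -> float:
    """Spearman's correction for attenuation on an observed correlation."""
    return r_observed / math.sqrt(rel_x * rel_y)

# e.g., an observed r of .40 with test reliabilities of .80 and .75
print(round(disattenuated_r(0.40, 0.80, 0.75), 2))  # -> 0.52
```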


Hyejin Yang

Full-time researcher
Chung-Ang University, South Korea

AI chatbots as L2 conversation partners

With the rapid advance of Artificial Intelligence (AI), AI chatbots have emerged over recent decades for different purposes. Existing commercial conversation chatbots such as Google Assistant (Google), Siri (Apple), or Alexa (Amazon) have been widely used for simple internet searches or for responding to individual users' inquiries about personal schedules, weather, news, and so forth. In the field of language education, there has been increasing interest in utilizing chatbots as language learning partners.
This presentation will begin by introducing current AI chatbots that can serve as conversation partners for English learners. In addition, as part of a research team on AI chatbots at a university in Korea, I will present my recent work on developing an AI chatbot and conducting several empirical studies aimed at finding better ways to develop chatbots and to integrate them into EFL classrooms.


Elena Cotos

Associate Professor; Director of the Center for Communication Excellence
Iowa State University

Setting the stage, developing a trajectory, and expanding career boundaries

Continued education and teaching have always been a true calling for me. Looking back, I had a lot to think about when considering a doctoral degree, including what to research and how to teach, but the ultimate question was: What would it mean for my career? When I started the ALT Program in 2005, I knew I was getting on an exciting path towards my calling. The experience I gained over the years and the opportunities this doctoral degree opened for me, however, exceeded my expectations. In this presentation, I will give a brief overview of my career path: how I set the stage with my dissertation, how that 'stage' enabled me to develop a long-term research agenda, how my teaching-oriented scholarship served as a credible factor when pursuing broader initiatives, and how one of those initiatives has become one of the most impactful and rewarding outcomes of my work. With that, I hope to demonstrate that our doctoral program provides knowledge and skills that translate across disciplinary boundaries and across research and non-research contexts. Depending on one's goals and interests, a PhD from the ALT Program at ISU can take students much farther than they initially think possible!


Yongkook Won

Visiting Researcher
Center for Educational Research, Seoul National University

Topic Modeling Analysis of Research Trends of English Language Teaching in Korea

The goal of this study is to understand the research trends of English language teaching (ELT) in Korea over the last 20 years, from 2000 to 2019. To this end, 11 major Korean academic journals related to ELT were selected, and the abstracts of 7,035 articles published in those journals were collected and analyzed. The number of articles published in the journals continued to increase from the first half of the 2000s to the first half of the 2010s but decreased somewhat in the late 2010s. Text data in the abstracts were preprocessed using the NLTK tokenizer (Bird, Loper, & Klein, 2009) and the spaCy POS tagger (Honnibal & Montani, 2017), and only the nouns in the data were used for further analysis. Building on previous studies of ELT research trends (Kim & Kim, 2015), 25 topics were extracted from the abstracts by applying latent Dirichlet allocation (LDA) topic modeling with the R package topicmodels (Grün & Hornik, 2011). Teacher, tertiary education, listening, language testing, and curriculum appeared as frequently studied topics in the field of ELT. Time-series regression analysis shows that rising topics include task-based learning, tertiary education, vocabulary, affective factors, and peer feedback, while falling topics include speaking, culture, and computer-assisted language learning (CALL) (at α = .001).
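The preprocessing and topic-modeling pipeline can be sketched as follows. The study ran LDA in R with topicmodels; this Python analogue with spaCy and scikit-learn is an illustration only, and the toy abstracts and three-topic setting are placeholders for the study's 7,035 abstracts and 25 topics.

```python
# Illustrative pipeline: POS-tag abstracts, keep only nouns, then fit LDA.
# (The study used R's topicmodels; scikit-learn's LDA is an analogous substitute.)
import spacy
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

nlp = spacy.load("en_core_web_sm")

def nouns_only(text: str) -> str:
    """Keep only the nouns of an abstract, lemmatized and lowercased."""
    return " ".join(t.lemma_.lower() for t in nlp(text) if t.pos_ == "NOUN")

abstracts = [  # toy stand-ins for the 7,035 journal abstracts
    "This study investigates peer feedback in university EFL writing courses.",
    "The paper examines vocabulary learning through task-based instruction.",
    "We report a validation study of a computer-based listening test.",
]
docs = [nouns_only(a) for a in abstracts]

vectorizer = CountVectorizer()
dtm = vectorizer.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(dtm)  # study: 25 topics

# Print the top nouns that characterize each topic
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:4]]
    print(f"topic {k}: {top}")
```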


Hyunwoo Kim

Lecturer
Department of English Language Education, Seoul National University



Yongkook Won

Visiting Researcher
Center for Educational Research, Seoul National University

Effects of Complex Nominal Modifiers on Rater Judgments of Grammatical Competence of Korean L2 Writers of English: An Exploratory Study

The effects of grammatical complexity on judgments of overall L2 writing competence have been extensively studied. However, few studies have explored the extent to which salient linguistic features exhibited by Korean L2 writers of English affect rater judgments of those writers' grammatical competence. Motivated by Biber et al.'s (2011) hypothesized developmental stages of grammatical complexity, this study examines the extent to which the accurate use of complex nominal modifiers is associated with ratings awarded on a single rating criterion, grammar. Eighty argumentative essays written by Korean L2 writers of English at varying proficiency levels (A2 - B2 of the CEFR) were selected from the International Corpus Network of Asian Learners of English (ICNALE). After ensuring inter-coder reliability, two coders counted each instance of a complex nominal modifier in the essays and judged whether the grammatical features were error-free. A cumulative ordinal logistic regression model with proportional odds was fitted to explore the effects of those grammar features on ratings awarded on the grammar criterion alone. Subsequently, fully standardized coefficients of significant grammar features were computed to estimate their scale-free relative strength in rater judgments of grammatical competence. One significant implication of this study is that rater training sessions could be designed to draw raters' attention to these findings.
Keywords: Nominal modifiers, grammatical complexity, ordinal logistic regression
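To make the modeling step concrete, here is a hedged sketch of a proportional-odds (cumulative) ordinal logistic regression in Python with statsmodels. The predictor names and simulated data are hypothetical stand-ins for the study's coded nominal-modifier counts, not its actual variables.

```python
# Hedged sketch: proportional-odds ordinal logistic regression relating
# (simulated) nominal-modifier counts to an ordinal grammar rating.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "attrib_adj": rng.poisson(5, 80),    # attributive adjectives per essay (hypothetical)
    "noun_premod": rng.poisson(3, 80),   # nouns as premodifiers (hypothetical)
    "prep_postmod": rng.poisson(4, 80),  # prepositional postmodifiers (hypothetical)
})
# Simulate a 5-band grammar rating loosely driven by two of the predictors
latent = 0.3 * df["attrib_adj"] + 0.4 * df["prep_postmod"] + rng.normal(0, 2, 80)
df["grammar"] = pd.Categorical(pd.cut(latent, 5, labels=False), ordered=True)

model = OrderedModel(df["grammar"],
                     df[["attrib_adj", "noun_premod", "prep_postmod"]],
                     distr="logit")  # logit link = proportional-odds model
res = model.fit(method="bfgs", disp=False)
print(res.summary())
```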


Ruslan Suvorov

Assistant Professor
University of Western Ontario

The effect of item format on the use of test-wiseness strategies in an L2 listening test

Considered as part of strategic competence, test-taking strategies comprise test-management strategies and test-wiseness strategies (Cohen, 2014). Understanding the extent to which second language (L2) learners use test-wiseness strategies during language tests is essential for validation research (Cohen, 2007; Wu & Stone, 2016), as the use of these strategies is believed to introduce construct-irrelevant variance to test results. The most common criticism of the multiple-choice item format, for instance, is that it is susceptible to the use of test-wiseness strategies such as guessing and elimination of choices. To explore whether a modified item format would be less prone to the use of such strategies, an eye-tracking study was carried out to investigate test-wiseness strategies used by L2 learners when answering 4-option multiple-choice items and 4-option multiple true-false items in an L2 listening test. In particular, the study explored (a) the extent to which L2 learners performed differently on the two item types, (b) the extent to which L2 learners used test-wiseness strategies for the two item types, and (c) the extent to which the use of test-wiseness strategies introduced construct-irrelevant variance for the two item types. To address these goals, three types of data—namely, test score data, eye-tracking data, and verbal report data—were gathered from 40 ESL learners at a large public university in the Pacific region. The data were analyzed both quantitatively and qualitatively using scanpath analysis. The findings revealed that (a) multiple true-false items were more difficult than multiple-choice items, (b) test-wiseness strategies were used less frequently for answering multiple true-false items than for answering multiple-choice items, and (c) the use of test-wiseness strategies had a statistically significant effect on the observed scores for both item types. The study has implications for test item design and highlights the importance of gathering validity evidence based on response processes.


Adolfo Carrillo Cabello

Technology Enhanced Language Learning Specialist
University of Minnesota

Teaching languages at a distance as guided practice

For decades, CALL has shaped approaches to language teaching and provided solutions for language learning to occur outside of the classroom. While there have been significant advances in pedagogies for teaching languages online (Means et al., 2014; Son, 2018), with the rapid switch to emergency remote teaching many language programs quickly realized that more needed to be done to prepare teachers for effective distance language teaching (Hodges et al., 2020). While the need for better teacher development for effective online teaching is not new (Ernest et al., 2011), the COVID-19 pandemic uncovered greater gaps in teacher development that called for more coherent and systematic approaches that draw upon research findings (Paesani, 2020). This presentation explores the nuances of rapidly pivoting to teaching languages at a distance and proposes guidelines for professional development (PD) interventions that draw upon collective expertise. The presentation describes the process for planning, implementing, and evaluating systematic PD that affords language instructors the ability to pivot to distance learning by creating flexible learning spaces in which a mix of synchronous and virtual instructional practices coexist, as well as suggestions for adapting the language curriculum to account for independent and collaborative learning experiences that are effective regardless of the instructional format.


Jooyoung Lee

Senior Test Development Manager
Pearson

My Work and Lessons Learned

In my presentation, I will talk about what I am currently doing on the Test Development team at Pearson and what lessons I have learned in the professional world. I am primarily working on the speaking section of TELPAS (Texas English Language Proficiency Assessment System), designed to assess the progress of K-12 English learners. I will briefly go over the tasks involved in this project, such as item review, validation of operational/field test items (in terms of automated scoring), handling of appeals, and managing transcriptions/ratings, which comprise an essential part of scoring engine training. Hopefully, I can share some TELPAS speaking items with you as well. I will also mention a few other projects that I'm involved in, including launching a new business English test, conducting research on the Versant English Test, and developing a new item type.
In addition, I will spend some time discussing what it is like to work with teams and people outside the field of applied linguistics or TESOL (e.g., engineering, R&D, project managers, marketing) to successfully deliver various language tests. I would also like to share my honest thoughts on the lessons I have learned while working at Pearson, how our doctoral program prepared me for the professional world, how I could have better prepared, and some of the advantages and challenges of working for a for-profit company.


Erik Voss

Lecturer
Teachers College, Columbia University

Flipped Academic English Language Learning at an American University

Flipped Learning is increasing in popularity as a methodology for language teaching. Originating in the fields of science, technology, engineering, and mathematics (STEM), a flipped learning approach is being adopted by English language teachers (Kostka & Marshall, 2017) at all levels of instruction, including in university settings (Voss & Kostka, 2019). This methodology is characterized by shifting instruction to a time outside of a traditional classroom environment and presenting content to students through instructor-prepared materials, often as instructional videos. Outside of class, students engage in activities such as watching the videos and taking notes, which require them to use skills that are lower on Bloom's taxonomy, such as knowledge and comprehension (Brinks Lockwood, 2014). The concepts introduced through direct instruction before class are then applied during class time in activities that strengthen the knowledge and skills through practice and feedback from the instructor and peers. These in-class activities require students to use skills that are higher on Bloom's taxonomy, such as application, analysis, evaluation, and creation (Brinks Lockwood, 2018). As a result of 'flipping' instruction and homework, instructors have more time to help students as they engage in activities that are more difficult and promote deeper learning (Bergmann & Sams, 2014), which is not possible when students work alone outside of class. In this presentation, I will provide an overview of Flipped Learning in an academic English language pathway program at a US university. I will also highlight the technology used to implement the methodology and discuss how teaching English using Flipped Learning can occur in-person and in remote learning environments.


Sonca Vo

Lecturer
University of Foreign Language Studies - The University of Danang

Exploring the Construct of Interactional Competence in Different Types of Assessments of Oral Communication

Research on interaction in speaking assessment has suggested that both verbal and nonverbal interaction are integral parts of the construct of interactional competence (Young, 2011; Galaczi & Taylor, 2018). However, little has been done to investigate which features contribute significantly to interaction effectiveness. Therefore, this study examined the elicitation of interaction features in individual and paired discussion tasks to explore the interactional competence construct. Two raters evaluated 68 test-taker performances. Exploratory factor analysis revealed four factors: body language, topic management, interactional management, and interactive listening. Logistic regressions showed that while the individual task elicited more topic management features, the paired discussion task elicited more interactional management features. Simple regressions showed that body language and topic management features predicted interactional competence scores in the individual task, whereas body language, topic management, interactional management, and interactive listening features were predictors of scores in the paired discussion task. The findings suggest that both nonverbal and verbal interaction features are important in the interactional competence construct. The paired format provides test takers with more opportunities to demonstrate their interactional ability. The study also suggests the importance of rater training in evaluating interactional competence.
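To illustrate the two analytic steps, the sketch below runs an exploratory factor analysis on simulated feature ratings and then regresses simulated interactional competence scores on the resulting factor scores. All data, the feature count, and the varimax rotation are assumptions, not the study's.

```python
# Hedged sketch: EFA on rated interaction features, then a simple regression
# of interactional competence (IC) scores on the factor scores. All data
# here are simulated placeholders.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(68, 12))  # 68 performances x 12 rated interaction features (simulated)

fa = FactorAnalysis(n_components=4, rotation="varimax")  # four factors, as in the study
scores = fa.fit_transform(X)   # per-performance factor scores
print(fa.components_.round(2))  # loadings: which features define each factor

ic = rng.normal(size=68)       # simulated IC scores
reg = LinearRegression().fit(scores, ic)
print(reg.coef_.round(2))      # which factors predict IC scores
```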


Hong Ma

Assistant Professor
Zhejiang University, China



Zhi Li

Assistant Professor
University of Saskatchewan, Canada

Exploring English language learners' engagement in new online EFL courses during the Covid-19 pandemic

The Covid-19 pandemic has forced school courses around the world to move online. Educators have never been more eager to know how online classes can be delivered while maintaining high quality. One of the key indicators of effective teaching is a high level of student engagement, which can be conceptualized as a multi-dimensional construct (e.g., emotional, performance, skill, and participation engagement). To contribute to this ongoing discussion about effective online teaching, this study reports on an analysis of the relationship between university students' engagement and pedagogical activities in online English classes at Zhejiang University, China, during the pandemic. An online survey was used to collect 286 students' responses to a modified 4-factor Online Student Engagement (OSE) Scale, along with their evaluation of the engagement levels of 12 pedagogical activities used in these classes and their technology use experiences. The results indicate that students' engagement dimensions were associated with slightly different combinations of pedagogical activities. While activities like in-class videos, online discussions, video lectures, and group chat were significant predictors of two or more of the four dimensions of engagement, some activities were more conducive to a higher level of a particular engagement dimension. For example, students' typed responses and online exercises were unique contributors to the emotional dimension of engagement. In addition, the frequency of technology use was significantly associated with the participation dimension of engagement. The findings shed light on effective online teaching pedagogies and possible pitfalls by establishing the connection between students' engagement and teachers' instructional strategies.
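As a rough illustration of this kind of analysis, the sketch below regresses one simulated engagement dimension on students' ratings of a few pedagogical activities. The column names, data, and single-dimension OLS model are assumptions for illustration, not the study's instrument or analysis.

```python
# Hedged sketch: predicting one engagement dimension from activity ratings.
# Column names and data are hypothetical, not the OSE survey's actual items.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 286  # survey sample size reported above
activities = pd.DataFrame({
    "in_class_videos": rng.integers(1, 6, n),     # 1-5 Likert-style ratings
    "online_discussions": rng.integers(1, 6, n),
    "video_lectures": rng.integers(1, 6, n),
    "group_chat": rng.integers(1, 6, n),
})
# Simulated emotional-engagement scores, loosely driven by two activities
emotional = (0.4 * activities["online_discussions"]
             + 0.3 * activities["in_class_videos"]
             + rng.normal(0, 1, n))

X = sm.add_constant(activities)        # add intercept
print(sm.OLS(emotional, X).fit().summary())  # which activities predict engagement
```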


Hye-won Lee

Senior Research Manager
Cambridge Assessment English

Making it Happen: Assessing Speaking through Video-Conferencing Technology

Practical considerations such as 'administrative conditions' are especially important when new test formats are operationalised, for example, a speaking test delivered via video-conferencing technology. The literature on research-informed practical implementations of remote speaking tests is limited. This study aims to contribute to this research niche by reporting on the last phase of a research project on a high-stakes video-conferencing speaking test. In the previous three phases (Nakatsuhara et al., 2016; Nakatsuhara et al., 2017; Berry et al., 2018), it was established that the in-room and remote delivery modes are essentially equivalent in terms of score comparability and elicited language functions, but some practical issues were identified as potentially affecting the validity of test score interpretation and use.
The final phase was designed to extend the evidence gathered about examiner and test-taker perceptions regarding specific aspects of the test delivery and platform, such as the examiner script, sound quality, display of test prompts, and examiner/test-taker guidelines. Adopting a convergent mixed-methods design (Creswell & Plano Clark, 2007), questionnaire and focus group data were gathered from 373 test-takers and 10 examiners. In the presentation, I will discuss key findings and their implications for the practical implementation of the test. I will end with an emphasis on the importance of including research-informed administrative considerations as part of a validity argument.


Jing Xu, Edmund Jones, Victoria Laxton and Evelina Galaczi

Principal Research Manager
Cambridge Assessment English

Assessing L2 English speaking using automated scoring technology: Examining automarker reliability

Automated scoring is appealing for large-scale L2 speaking assessment in that it increases the speed of score reporting and reduces the logistical complexity of test administration. Despite its increasing popularity, validation work on automated speaking assessment is in its infancy. The lack of transparency about how learner speech is scored, and of evidence for the reliability of automated scoring, has not only raised language assessment professionals' concerns but also provoked scepticism about automated speaking assessment among language teachers, learners and test users (Fan, 2014; Xi, Schmidgall, & Wang, 2016).
This paper contributes to this niche in language assessment by providing evidence for the performance of the Custom Automated Speech Engine (CASE), an automarker designed for the Cambridge Assessment English Linguaskill Speaking test, and by problematising traditional approaches to establishing automarker reliability. We argue that correlation is inappropriate for measuring the agreement between automarker and human scores and that quadratic-weighted Kappa (Cohen, 1968) may behave strangely and is hard to interpret. Instead, we chose to use 'limits of agreement', the standard approach in medical science for comparing two concurrent methods of clinical measurement (Bland & Altman, 1986, 1999). Additionally, we examined automarker consistency and severity, as compared to trained examiners, using multifaceted Rasch analysis.
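For readers unfamiliar with the limits-of-agreement approach, the sketch below computes Bland-Altman limits for two sets of scores. The score vectors are simulated, not CASE or examiner data.

```python
# Minimal sketch of Bland-Altman "limits of agreement" between automarker
# and human scores (hypothetical score vectors, not the study's data).
import numpy as np

rng = np.random.default_rng(4)
human = rng.normal(4.0, 1.0, 200)         # examiner scores (simulated)
auto = human + rng.normal(0.1, 0.4, 200)  # automarker scores with slight bias (simulated)

diff = auto - human
bias = diff.mean()
half_width = 1.96 * diff.std(ddof=1)      # 95% limits of agreement around the bias
print(f"bias = {bias:.2f}, "
      f"limits of agreement = [{bias - half_width:.2f}, {bias + half_width:.2f}]")
# If these limits are narrower than the smallest score difference that matters
# operationally, the two scoring methods agree well enough to be interchangeable.
```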


Edna F. Lima

Associate Professor of Instruction
Ohio University

Adapting to new learning environments: Effective and engaging online pronunciation instruction

Considering recent world events and the shift in educational priorities and environments, there is now, more than ever, a need for an effective, systematic way to teach pronunciation online. Better yet, there is a need for a self-study pronunciation course that allows students in diverse learning contexts to learn effectively on their own and to develop critical pronunciation skills such as awareness, self-monitoring, and transfer to a variety of real-world situations. Against this backdrop, I introduce the Supra Tutor 2.0, an eight-module, fully online English pronunciation course focusing on suprasegmentals (word stress, rhythm, and intonation), and describe its pedagogical and technological rationales. For instance, the course includes a range of activity types that reflect the communicative framework for teaching pronunciation (Celce-Murcia et al., 2010, pp. 44-45). In this framework, pronunciation instruction starts with awareness raising and ends with practice that is less structured, more extemporaneous, and requires learners to focus on both meaning and form. Other principles guiding my development of the tutor include individualized instruction (Levis, 2007), an anxiety-free learning environment (Luo, 2016), flexibility (Engwall, Wik, Beskow, & Granström, 2004), a variety of speaker models and engaging materials and tasks (Lima & Levis, 2017), and autonomous practice (McCrocklin, 2016). In addition to explaining the rationale behind the development of the Supra Tutor, I will provide a brief tour of one of its modules.


Anne O'Bryan

Adjunct faculty and faculty development team member
Colorado State University Global Campus

Reflections on a career in online education

My ALT degree has led to opportunities both within and outside of our field. After graduating from the ALT program in 2010, I began developing and teaching online courses for Colorado State University's new online-only Global Campus. Now, 10 years later, I have worked with a number of public, private, for-profit, and non-profit schools and organizations to design and teach a variety of online courses in the areas of teaching English as a second language, applied linguistics, instructional technology, research methods, organizational leadership, and online course development. I have served in a number of roles, including faculty, subject matter expert, course developer, mentor to new online faculty, and facilitator of professional development courses for faculty and instructional designers who are interested in learning more about online teaching and learning. In this presentation, I will talk about my journey, share what I've learned about the online course development process, online teaching, and faculty development at various institutions, and reflect on how the ALT degree prepared me for the various roles I have held over the years.


Victor D. O. Santos

Director of Assessment and Research, Avant Assessment & Founder and CEO, Linguacious®

ALT PhD Skills Outside of Academia

In this presentation, Victor Santos will discuss the extent to which the skills he acquired in his Linguistics, Computational Linguistics, Applied Linguistics, and Language Assessment coursework have been useful in his work as Director of Assessment and Research at a major language-testing company and as founder and CEO of a startup company that develops language-learning materials for children. Victor will also discuss skills that he did not learn in academia but had to acquire through means beyond coursework, and how they have been useful in his work outside of academia. This presentation should be of special interest to those interested in developing a language-learning/assessment career path outside of academia after completing their ALT PhD program.


Jordan Smith

Assistant Professor
University of North Texas

How Opportunities in Grad School Shaped My Current Research Agenda

As an assistant professor at the University of North Texas, I have worked on several individual and collaborative research projects that have allowed me to draw on the knowledge and skills I gained while I was a graduate student at Iowa State. One project I recently completed grew out of an independent study I took with Jo Mackiewicz and a course paper I wrote during Bethany Gray’s discourse analysis class. Another project I am currently working on stems from my dissertation research. A third project began during a research assistantship I had with Bethany Gray and Elena Cotos during my fourth year. And a fourth project launched after I started my job at UNT. In this presentation, I will describe each of these four projects to offer an overview of my current research and to highlight how the opportunities I had at ISU have continued to play an important role in my research agenda. I will also share general insights I have learned from my work on these projects and during my time as a new assistant professor. Finally, I will offer recommendations that I hope will be useful for current ALT students.


Shannon McCrocklin

Assistant Professor
Southern Illinois University

Dictation Programs for Second Language Pronunciation Learning: Perceptions of the transcript, strategy use, and improvement

Despite growing evidence that ASR-dictation practice benefits L2 pronunciation learners (Liakin, Cardoso, & Liakina, 2014; McCrocklin, 2019; Mroz, 2018; Wallace, 2016), there is little research into the ways students engage in ASR-dictation practice. This study examines learners' perceptions of the ASR-generated transcript as feedback and their strategy use during practice. Participants (N=15) dictated 60 sentences to Google Voice Typing in Drive while being audio recorded. Following a mis-transcription, participants thought aloud, discussing their interpretation of the transcript and the strategies and resources they utilized, and then tried the sentence again with Google. Data analysis included qualitative analysis of think-aloud comments and quantitative analysis of both the strategies used and the improvement in dictation accuracy across subsequent attempts. Results showed that participants used the transcript to identify individual words with errors but also hypothesized about the segmentals and articulatory features causing errors. The most frequent strategy for improving production was covert rehearsal of target words, followed by listening to dictionary recordings of targets. However, potentially novel pronunciation learning strategies were also documented. Participants were able to improve the accuracy of the transcript in subsequent attempts, earning a perfect transcription by the third attempt in the majority of cases (91%).
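As a simple illustration of how dictation accuracy might be quantified across attempts, the sketch below scores an ASR transcript by word-level edit distance against the target sentence. This is an assumed, generic measure for illustration; the study's own accuracy coding may differ.

```python
# Hedged sketch: dictation accuracy as word-level Levenshtein distance
# between the target sentence and the ASR transcript.
def word_edit_distance(ref: list[str], hyp: list[str]) -> int:
    """Levenshtein distance over words (substitutions, insertions, deletions)."""
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[len(ref)][len(hyp)]

target = "the weather is really nice today".split()
attempt = "the weather is really nice to day".split()  # typical ASR mis-transcription
errors = word_edit_distance(target, attempt)
print(f"{errors} word errors, accuracy = {1 - errors / len(target):.0%}")  # 2 errors, 67%
```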


Jinrong Li

Associate Professor
Georgia Southern University

Assessing Multimodal Writing in L2 Contexts: A Research Synthesis

Writing assessment is an integral part of writing instructors' work (Crusan, 2010; Matsuda, Cox, Jordan, & Ortmeier-Hooper, 2011). It is also complex because it involves assessment of both learning outcomes and learning processes. The development of technology over the past two decades has added another layer of complexity by dramatically changing the pedagogical practices of L2 writing. In particular, the number of studies incorporating multimodal composing practices into L2 writing contexts is growing rapidly. These developments call for further research on what to assess and how to assess it in relation to new forms of writing tasks. Given the wide range of technological tools and diverse L2 writing contexts, there is also an urgent need to explore how we can draw on different disciplines, theoretical perspectives, and data sources to achieve a better understanding of the assessment of multimodal composition. In this presentation, therefore, I report a research synthesis of empirical studies of the use and/or assessment of multimodal composition in L2 contexts published in the past decade. Empirical studies were identified using keyword searches via academic databases and Google Scholar. The review and analysis aimed to identify characteristics of multimodal composition tasks and their contexts of use, examine theoretical perspectives on the assessment of multimodal composition, and explore common and emerging assessment criteria and practices. Pedagogical implications and future research directions are discussed.


Sarah Huffman

Assistant Director of the Center for Communication Excellence
Iowa State University

Conversion of a graduate-level tutor training from face-to-face to online at a graduate communication center

Even before the abrupt shift to online teaching forced by the COVID-19 pandemic, online curricula in graduate-level education had enjoyed tremendous growth over the past decades (Fain, 2018). One reason for this growth pertains to online programs' adaptability and heightened accessibility for broader audiences. Such platforms provide administrators and instructors the opportunity to embed support mechanisms that allow for deeper understanding through simplified designs and bolstered instructional aids. Most graduate-level and professional schools require their students to master writing effectively within their respective discourse communities, and many students must produce a thesis, dissertation, or similar capstone-type paper at the end of their program; however, most programs do not offer explicit support with graduate-level writing within the discipline or profession (Gonzalez & Moore, 2018). In this mini-presentation drawn from a workshop provided at the Consortium on Graduate Communication's 2020 Summer Institute, the presenter will share her experiences creating an online training program for graduate writing consultants who provide constructive feedback and valuable peer collaboration to graduate-level writers. Challenges, successes, and overall impressions from the design and implementation of this online training will be shared, along with recommendations for best practices in developing training for graduate writing consultants.


Rania Mohammed

Assistant Professor
King Abdulaziz University

Juggling the Worlds of Research and Teaching

It has been a difficult task trying to balance the two worlds of research and teaching. My current research is an extension of my dissertation. My dissertation research focused on finding prosodic variations within frequently occurring lexical bundles in an academic corpus consisting of lectures given by native speakers. Being interested in that area of research, I wanted to compare prosodic differences within lexical bundles between native and non-native lecturers, as well as across different registers, i.e., between academic lectures and non-academic talks. Although this research is still in its early stages, I believe it may have an impact on pronunciation teaching for international teaching assistants, helping them deliver lectures effectively. On the other hand, the students I currently teach are very different in terms of language proficiency from the type of language learners my research targets. Most of my teaching hours are spent with students at the beginner to intermediate level, trying to help them succeed through a series of four-level intensive English courses over the course of one academic year. I have therefore felt a discrepancy between my research and teaching worlds, and I have to constantly juggle the two. Despite this discrepancy, a new area of research, inspired by my teaching, has bloomed. Currently, I am interested in compiling a learner corpus focused on finding patterns in the spoken and written errors made by learners. This research will have a direct impact on the group of learners I am teaching and could help restructure the current curriculum to better address their needs.


Monica (Richards) Ghosh

Communications Specialist
Iowa State University Institute for Transportation

Self-Editing in L2 English Research Writing: Important AND Possible

Many L2 English academics face a marked disadvantage in publishing their research in top journals, in obtaining grant funding, etc. Often, this is not because their research quality is any less than that of their L1 English peers but instead because nonstandard English phrasing and grammar hinder their clear and convincing academic communication (Di Bitetti & Ferreras, 2017; Huttner-Koros, 2015; Meneghini & Packer, 2007; Ramírez-Castañeda, 2020). Exceptions exist, of course, but rarely is graduate school in an L1 English context—even one that includes focused writing training (e.g., Iowa State's "English 101D: Academic English II for Graduate Students")—adequate for enabling the average L2 English STEM PhD graduate to be competitive in research writing with L1 English colleagues. As a result, successful L2 English academics frequently spend hundreds of dollars annually on professional editing services.
Can writing faculty better train L2 English graduate students to (1) self-identify and (2) self-correct their nonstandard English collocations and grammar? Years of teaching and tutoring L2 research writers have convinced me we can. Most L2 English collocation errors not correctable by an automatic grammar checker (e.g., Grammarly) are correctable, for general collocations, via the academic section of the Corpus of Contemporary American English (COCA) (Davies, 2008-) and, for field-specific collocations, via either exact phrase search or wildcard search in Google Scholar. In addition, a VERY limited set of if/then rules can correct the most serious L2 research English grammar errors (defined by their likelihood of misleading readers about intended meaning—Glasman-Deal, 2009), including the use of the English simple past tense vs. simple present and present perfect, as well as the use of "the" to communicate that writer and reader share knowledge of the topic under discussion. This presentation introduces how I teach self-editing skills and recommends systematic pedagogical research to help even the self-editing playing field for L2 English academics.


Stephanie Link

Assistant Professor
Oklahoma State University

Inspiring the next generation of genre-based automated writing evaluation research

Automated writing evaluation (AWE) has evolved significantly in recent decades to meet the ever-changing needs of writers across educational settings. What started as sentence-level automated feedback has now expanded to discourse- and/or genre-based feedback. Although AWE research continues to spark interest for general academic writing use (e.g., Ranalli & Yamashita, 2020; Link, Mehrzad, & Rahimi, 2020), there is a need for new directions in specialized ESP/EAP research and practice (Hyland & Wong, 2019) that can facilitate useful learning transfer (Lobato, 2006). Early genre-based AWE tools opened new possibilities for ESP/EAP, for example, AntMover (Anthony, 2003) and IADE (Cotos, 2009), but with the advancement of NLP and AI approaches, more sophisticated tools and learning systems have emerged, such as the Research Writing Tutor (Cotos, 2014; Cotos & Pendar, 2016) and AcaWriter (Abel, Kitto, Knight, & Buckingham Shum, 2018). Nevertheless, there is a need for upward momentum that can spark new trends in tool creation and continue to bridge theory, research, and practice by carefully considering the needs of learners and the ways in which software can most effectively contribute to learning (Anthony, 2019). This presentation will introduce a genre-based AWE tool called Wrangler for “rounding up” research writing resources. This web-based technology leverages the power of natural language processing to develop an intelligent tutoring system that enhances writing for publication. Development started with careful consideration of user experience design, including flow diagrams, wireframes, and 133 informal potential-user interviews. A web analytics tool, Hotjar, was integrated into the interface design to track use and inform alpha-to-beta platform development. Wrangler has been integrated into writing-for-publication courses and workshops at Oklahoma State University; however, the team intends to expand Wrangler’s potential, transcend technological hurdles, and inspire a new generation of genre-based AWE research.

PresenterAbstract

Lea Johannsen

English Language Learner Resource Coordinator
Iowa State University

Applying CALL in the Ivy College of Business

During my time in the ALT program here at ISU, we often spoke of the far-reaching applications of CALL and ESL instruction. Graduates from the program pursued professional opportunities across the globe. For many of us, the prospect of teaching English abroad was a familiar career consideration. What I didn't expect was that I would end up applying my skills within eyeshot of Ross Hall, in the Gerdin Business Building. In this presentation, I will discuss the projects I undertake in my position and how they relate to the skills I acquired in the ALT program.
As the English Language Learner (ELL) Resource Coordinator in the Ivy College of Business (CoB) Communications Center, my job is to support our ELL students through the producing resources and the developing programming. Every semester and every day present a new project to focus on, from scripting, filming, and producing videos, to developing and administering a course, to working one-on-one with students. My workflow is flexible, allowing me the ability to drop and pick up projects based on student and team needs.
I also collaborate extensively with other units within the Ivy CoB, such as Advising, Career Services, and Marketing. Working with people who have incredibly different career experiences and backgrounds allows me to continue learning outside of my own field. And incorporating their diverse perspectives and expertise along with my training in the ALT program allows me to enrich what I create for students. This position blends CALL tools and pedagogy with the unique considerations of both business school and writing center contexts.
 
Video Recording


Erik Voss

Lecturer
Teachers College, Columbia University

Flipped Academic English Language Learning at an American University

Flipped Learning is increasing in popularity as a methodology for language teaching. Originating in the fields of science, technology, engineering, and mathematics (STEM), a flipped learning approach is being adopted by English language teachers (Kostka & Marshall, 2017) at all levels of instruction including in university settings (Voss & Kostka, 2019). This methodology is characterized by shifting instruction to a time outside of a traditional classroom environment and presenting content to students through instructor-prepared materials, often as instructional videos. Students engage in activities such as watching the videos and taking notes outside of class that requires them to use skills that are lower on Bloom's taxonomy, such as knowledge and comprehension (Brinks Lockwood, 2014). The concepts introduced through direct instruction before class are then applied during class time as activities that strengthen the knowledge and skills through practice and feedback from the instructor and peers. These activities during class time require students to use skills that are higher on Bloom's taxonomy, such as application, analysis, evaluation, and creation (Brinks Lockwood, 2018). As a result of 'flipping' instruction and homework, instructors have more time to help students as they engage in activities that are more difficult and promote deeper learning (Bergmann & Sams, 2014), which is not possible when students work alone outside of class. In this presentation I will provide an overview of Flipped Learning in an academic English language pathway program at a US university. I will also highlight technology used to implement the methodology and discuss how teaching English using Flipped Learning can occur in-person and in remote learning environments.
 
Video Recording


Monica (Richards) Ghosh

Communications Specialist
Iowa State University Institute for Transportation

Self-Editing in L2 English Research Writing: Important AND Possible

Many L2 English academics face a marked disadvantage in publishing their research in top journals, in obtaining grant funding, etc. Often, this is not because their research quality is any less than that of their L1 English peers but instead because nonstandard English phrasing and grammar hinder their clear and convincing academic communication (Di Bitetti & Ferreras, 2017; Huttner-Koros, 2015; Meneghini & Packer, 2007; Ramírez-Castañeda, 2020). Exceptions exist, of course, but rarely is graduate school in an L1 English context—even that including focused writing training (e.g., Iowa State's "English 101D: Academic English II for Graduate Students")—adequate for enabling the average L2 English STEM PhD graduate to be competitive in research writing with L1 English colleagues. As a result, successful L2 English academics frequently spend hundreds of dollars annually on professional editing services.
Can writing faculty better train L2 English graduate students to (1) self-identify and (2) self-correct their nonstandard English collocations/grammar? Years of teaching and tutoring L2 research writers have convinced me we can. Most L2 English collocation errors not correctable by automatic grammar checker (e.g., Grammarly) are correctable, for general collocations, via the academic section of the Corpus of Contemporary American English (COCA) (Davies, 2008-) and, for field-specific collocations, via either exact phrase search or regex wildcards in Google Scholar. In addition, a VERY limited set of if/then rules can correct the most serious of L2 research English grammar errors (defined by likelihood of misleading readers about intended meaning—Glasman-Deal, 2009), including the use of the English simple past tense vs. simple present and present perfect as well as the use of "the" to communicate that writer and reader share knowledge of the topic under discussion. This presentation introduces how I teach self-editing skills and recommends systematic pedagogical research to help even the self-editing playing field for L2 English academics.
 
Video Recording

PresenterAbstract

Hyejin Yang

Full time researcher
Chuang-Ang University, South Korea

AI chatbots as L2 conversation partners

The rapid advance in Artificial Intelligence (AI), AI chatbot have been around for recent decades for different purposes. Existing commercial conversation chatbots such as Google Assistant (Google), Siri (Apple), or Alexa (Amazon) have been widely used for simple internet searches or for responding to individual users' inquiries about personal schedule, weather or news, and so forth. In the field of language education, there has been increasing interests to utilize chatbots as language learning partners.
In this presentation, it will begin with introducing current AI chatbots that can be served as conversation partners for English learners. In addition, I, as a part of a research team on AI chatbots at a university in Korea, will also present my recent work on developing an AI chatbot and conducting several empirical research that aimed to find better ways to develop and to integrate chatbots into EFL classrooms.
 
Video Recording


Stephanie Link

Assistant Professor
Oklahoma State University

Inspiring the next generation of genre-based automated writing evaluation research

Automated writing evaluation (AWE) has evolved significantly in recent decades to meet the ever-changing needs of writers across educational settings. From what started as sentence-level automated feedback has now expanded to discourse- and/or genre-based feedback. Although AWE research continues to spark interest for general academic writing use (e.g., Ranalli & Yamashita, 2020; Link, Mehrzad, & Rahimi, 2020), there is a need for new directions in specialized ESP/EAP research and practice (Hyland & Wong, 2019) that can facilitate useful learning transfer (Loboto, 2006). Early genre-based AWE tools opened new possibilities for ESP/EAP, for example AntMover (Anthony, 2003) and IADE (Cotos, 2009), but with the advancement of NLP and AI approaches, more sophisticated tools and learning systems have emerged, such as the Research Writing Tutor (Cotos, 2014; Cotos & Pendar, 2016) and AcaWriter (Abel, Kitto, Knight, & Buckingham Shum, 2018). Nevertheless, there is a need for upward momentum that can spark new trends in tool creation and continue to bridge theory, research, and practice by carefully considering the needs of learners and ways in which software can most effectively contribute to learning (Anthony, 2019). This presentation will introduce a genre-based AWE tool called Wrangler for “rounding up” research writing resources. This web-based technology leverages the power of natural language processing to develop an intelligent tutoring system to enhance writing for publication. Development started with careful consideration of user experience design, including flow diagrams, wireframes, and 133 informal potential user interviews. A web analytics tool, Hotjar, was integrated into the interface design to track use and inform alpha-to-beta platform development. Wrangler has been integrated into writing for publication courses and workshops at Oklahoma State University; however, the team intends to expand Wrangler’s potential, transcend technological hurdles, and inspire a new generation of genre-based AWE research.
 
Video Recording

PresenterAbstract

Adolfo Carrillo Cabello

Technology Enhanced Language Learning Specialist
University of Minnesota

Teaching languages at a distance as guided practice

For decades, CALL has shaped approaches to language teaching and provided solutions for language learning to occur outside of the classroom. While there have been significant advances in pedagogies for teaching languages online (Means et al., 2014; Son, 2018), with the rapid switch to emergency remote teaching many language programs quickly realize that more needs to be done to prepare teachers for effective distance language teaching (Hodges et al, 2020). While the need for better teacher development for effective online teaching is not new (Ernest, et al., 2011), the COVID-19 pandemic uncovered greater gaps in teacher development that called for more coherent and systematic approaches that draw upon research findings (Paesani, 2020). This presentation explores nuances in rapid pivoting to teaching languages at a distance, and proposes guidelines for professional development (PD) interventions that draw upon collective expertise. The presentation describes the process for planning, implementing, and evaluating systematic PD that afford language instructors the ability to pivot to distance learning by creating flexible learning spaces in which a mix of synchronous and virtual instructional practices coexist, as well as suggestions for adapting the language curriculum to account for independent and collaborative learning experiences that are effective regardless of the instructional format.
 
Video Recording


Sarah Huffman

Assistant Director of the Center for Communication Excellence
Iowa State University

Conversion of a graduate-level tutor training from face-to-face to online at a graduate communication center

Even before the abrupt shift to online teaching forced by the COVID-19 pandemic, online curricula in graduate-level education had enjoyed tremendous growth over the past decades (Fain, 2018). One reason for this growth is online programs’ adaptability and heightened accessibility for broader audiences. Such platforms provide administrators and instructors the opportunity to embed support mechanisms that allow for deeper understanding through simplified designs and bolstered instructional aids. Most graduate-level and professional schools require their students to write effectively within their respective discourse communities, and many students must produce a thesis, dissertation, or similar capstone paper at the end of their program; however, most programs do not offer explicit support with graduate-level writing in the discipline or profession (Gonzalez & Moore, 2018). In this mini-presentation, drawn from a workshop provided at the Consortium on Graduate Communication’s 2020 Summer Institute, the presenter will share her experiences in creating an online training program for graduate writing consultants who provide constructive feedback and valuable peer collaboration to graduate-level writers. Challenges, successes, and overall impressions from the design and implementation of this online training will be shared, along with recommendations for best practices in developing training for graduate writing consultants.
 
Video Recording


Moonyoung Park

Assistant Professor
Chinese University of Hong Kong

Investigating strategic online reading processes of pre-service English teachers in Korea

With the increased use of the Internet, online reading has become a major source of input for English as a foreign/second language (EFL/ESL) teachers, as it provides them with authentic and motivating language input for language teaching and learning, as well as a fundamental skill for lifelong learning. Online texts are typically nonlinear, interactive, and inclusive of multiple media forms, and they are characterized by the richness and depth of the information they provide through linked nodes of information. Each of these characteristics affords new opportunities while also presenting a range of challenges that require new thought processes for meaning-making and constant decision-making about reading order and the sources of information to use. Thus, it is critical to make EFL/ESL teachers consciously aware of online reading strategies.
The purpose of this project is to examine the complexity of the online reading strategies used by eleven pre-service EFL teachers in the Republic of Korea. Individual participants read on the Internet with the goal of developing a technology-enhanced lesson plan incorporating their selected online resources. Internet reading strategies and teacher cognition were analyzed using participants' verbal reports, eye-tracking data, and triangulated complementary data (e.g., computer screen recordings and online reading strategies survey data). Results demonstrate the role that these strategies play in constructing meaning from Internet texts and in decision making, as well as the interactive patterns of strategy use identified in the Internet reading and lesson planning tasks. Findings from the project may offer insights into the types, patterns, and complexities of the reading strategies used in pre-service EFL teachers' Internet reading and lesson design. The project's findings and interpretation may also contribute to a foundational understanding of the link between reading strategies and the new literacies of online reading comprehension involved in online reading tasks.
 
Video Recording


Hong Ma

Assistant Professor
Zhejiang University, China

Link to Hong Ma's second presentation



Zhi Li

Assistant Professor
University of Saskatchewan, Canada

Exploring English language learners' engagement in new online EFL courses during the Covid-19 pandemic

The Covid-19 pandemic has forced courses around the world to move online. Educators have never been more eager to know how online classes can be delivered while maintaining high quality. One of the key indicators of effective teaching is a high level of student engagement, which can be conceptualized as a multi-dimensional construct (e.g., emotional, performance, skill, and participation engagement). To contribute to this ongoing discussion about effective online teaching, this study reports on an analysis of the relationship between university students' engagement and pedagogical activities in online English classes at Zhejiang University, China, during the pandemic. An online survey was used to collect 286 students' responses to a modified 4-factor Online Student Engagement (OSE) Scale, along with their evaluation of the engagement levels of 12 pedagogical activities used in these classes and their technology use experiences. The analysis results indicate that students' engagement dimensions were associated with slightly different combinations of pedagogical activities. While activities like in-class videos, online discussions, video lectures, and group chat were significant predictors of two or more of the four dimensions of engagement, some activities were more conducive to a higher level of a particular engagement dimension. For example, students' typed responses and online exercises were unique contributors to the emotional dimension of engagement. In addition, the frequency of technology use was significantly associated with the participation dimension of engagement. The findings will shed light on effective online teaching pedagogies and possible pitfalls by establishing the connection between students' engagement and teachers' instructional strategies.
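For readers who want to see the shape of such an analysis, below is a minimal sketch under stated assumptions: one ordinary least squares regression per engagement dimension, with students' activity ratings as predictors. The column names and data are hypothetical stand-ins, not the study's actual variables or OSE scale items.

```python
# A minimal sketch of this kind of analysis under stated assumptions: one OLS
# regression per engagement dimension, with activity ratings as predictors.
# Column names and data are hypothetical stand-ins, not the study's variables.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "emotional_engagement": [3.2, 4.1, 2.8, 3.9, 4.4, 3.0],  # OSE subscale mean
    "typed_responses":      [3, 4, 2, 4, 5, 3],              # activity rating, 1-5
    "online_exercises":     [2, 5, 3, 4, 5, 2],
    "video_lectures":       [4, 3, 2, 5, 4, 3],
})

# Significant coefficients would point to activities associated with this dimension.
model = smf.ols(
    "emotional_engagement ~ typed_responses + online_exercises + video_lectures",
    data=df,
).fit()
print(model.summary())
```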
 
Video Recording


YunDeok Choi

Lecturer
Sungkyunkwan University, South Korea

What Interpretations Can We Make from Scores on Graphic-Prompt Writing (GPW) Tasks? An Argument-Based Approach to Test Validation

This argument-based validation research examines the validity of score interpretations on computer-based graphic-prompt writing (GPW) tasks, centering on the explanation inference. The GPW tasks, designed for English placement testing, measure examinees' ability to incorporate visual graphic information into their writing. Over 100 ESL students studying at a public university in the United States completed the GPW tasks and two online questionnaires on graph familiarity (Xi, 2005) and test mode preference (Lee, 2004), and submitted their standardized English writing test scores. A Pearson product-moment correlation, corrected for attenuation, revealed that scores on the GPW tasks and the standardized writing tests had a moderately strong positive relationship (r = .51). Multiple linear regression and follow-up correlation analyses showed that GPW task scores were largely attributable to examinees' academic writing ability and carried relatively weak, but significant, positive relations to the three graph familiarity factors. The findings suggest that the GPW tasks and the standardized English writing tests assessed different dimensions of the same underlying construct (academic writing ability), and that the graph familiarity factors served as sources of construct-irrelevant variance. Theoretical and practical implications of the findings, as well as methodological limitations, are discussed.
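For readers unfamiliar with the correction for attenuation, the sketch below shows the computation: the observed correlation is divided by the square root of the product of the two measures' reliabilities. The reliability values used here are hypothetical; the abstract reports only the corrected coefficient.

```python
# How "corrected for attenuation" works: the observed correlation is divided
# by the square root of the product of the two measures' reliabilities
# (Spearman's correction). The reliabilities below are hypothetical; the
# abstract reports only the corrected coefficient of .51.
from math import sqrt

def disattenuated_r(r_observed: float, rel_x: float, rel_y: float) -> float:
    """Estimate the true-score correlation between two measures."""
    return r_observed / sqrt(rel_x * rel_y)

# With an assumed observed r of .42 and reliabilities of .80 and .85,
# the corrected estimate is about .51.
print(round(disattenuated_r(0.42, 0.80, 0.85), 2))  # 0.51
```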
 
Video Recording


Jinrong Li

Associate Professor
Georgia Southern University

Assessing Multimodal Writing in L2 Contexts: A Research Synthesis

Writing assessment is an integral part of writing instructors’ work (Crusan, 2010; Matsuda, Cox, Jordan, & Ortmeier-Hooper, 2011). It is also complex because it involves assessment of both learning outcomes and learning processes. The development of technology in the past two decades has added another layer of complexity by dramatically changing the pedagogical practices of L2 writing. In particular, the number of studies incorporating multimodal composing practices into L2 writing contexts is growing rapidly. These developments call for further research on what to assess and how to assess it in relation to new forms of writing tasks. Given the wide range of technological tools and diverse L2 writing contexts, there is also an urgent need to explore how we can draw on different disciplines, theoretical perspectives, and data sources to achieve a better understanding of the assessment of multimodal composition. In this presentation, therefore, I report a research synthesis of empirical studies of the use and/or assessment of multimodal composition in L2 contexts published in the past decade. Empirical studies were identified using keyword searches via academic databases and Google Scholar. The review and analysis aimed to identify the characteristics of multimodal composition tasks and their contexts of use, examine theoretical perspectives on the assessment of multimodal composition, and explore common and emerging assessment criteria and practices. Pedagogical implications and future research directions are discussed.
 
Video Recording


Aysel Saricaoglu

Assistant Professor
Social Sciences University of Ankara

L2 Learners' Knowledge of Syntactic Complexity: Insights from a Complexity Judgment Test

Syntactic complexity is commonly measured through production-based tasks (e.g., essay writing). However, it is not always possible to gather information about learners' linguistic knowledge from production data (Gass, 2001). Responding to the call for exploring alternative understandings of syntactic complexity in L2 writing research (Ortega, 2015), this study explores whether a judgment test can elicit data about learners' knowledge of syntactic complexity. It specifically investigates L2 learners' (n = 43) performance on a complexity judgment test (CJT), developed on the basis of the developmental stages for complexity features hypothesized by Biber, Gray, and Poonpon (2011), as well as the learners' complexity judgment criteria. Data were collected through the CJT and stimulated recalls and were analyzed both quantitatively and qualitatively. Results revealed that at the production level, learners were able to produce more clause-level complexity features, but at the input level, they were able to judge the complexity of phrases more accurately than the complexity of clauses, confirming that information from a judgment test can reflect learners' linguistic knowledge that cannot be observed in produced language. Results also revealed that factors beyond complexity (e.g., grammar, vocabulary, length, L1) were involved in learners' complexity judgments, as indicated by evidence from the stimulated recall data.
 
Video Recording


Hyunwoo Kim

Lecturer
Department of English Language Education, Seoul National University



Yongkook Won

Visiting Researcher
Center for Educational Research, Seoul National University

Link to Yongkook Won's second presentation

Effects of complex nominal modifiers on rater judgments of grammatical competence of Korean L2 writers of English: An exploratory study (work in progress)
 

There will be no video recording for this presentation.


Hong Ma and Jinglei Wang

Assistant Professor
Zhejiang University, China

Link to Hong Ma's second presentation

Assigning Students' Writing Samples to CEFR Levels Automatically: A Machine-Learning Approach

This project proposes a method for assigning students' writing samples to CEFR (Common European Framework of Reference for Languages) levels automatically. We believe that the proposed method, which relies on big data and machine-learning algorithms, will facilitate future endeavors in alignment and writing evaluation.
The data include 1500 writing samples selected from the EF-Cambridge Open Language Database (EFCAMDAT), a publicly available corpus containing over 83 million words from 1 million assignments written by 174,000 learners worldwide, across a wide range of levels (CEFR stages A1-C2). The 1500 writing samples are equally distributed across all six CEFR levels. The quality indexes of the students' writing samples are obtained through the automatic writing analysis tool Coh-Metrix.
This project uses a machine-learning technique to model the predictive relationship between the quality indexes (the independent variables) and the CEFR levels (the dependent variable) of students' writing samples, since machine-learning methods, which have recently emerged in linguistic research, have generally demonstrated higher accuracy in classification tasks than traditional regression models (McNamara, Crossley, Roscoe, Allen & Dai, 2015). In similar endeavors, the accuracy of different machine-learning classifiers has been reported for n-gram recognition tasks (Jarvis, 2011), and discriminant function analysis (one such classifier) was used to predict scores on students' argumentative essays (McNamara et al., 2015). In the current research, we adopted a more advanced machine-learning classifier, multiple support vector machine recursive feature elimination (MSVM-RFE), which has demonstrated considerably high accuracy in more complicated classification tasks, such as the classification and selection of gene subsets in cancer studies (Duan, Rajapakse, Wang, & Azuaje, 2005). The adoption of this classifier will not only result in an algorithm that assigns students' writing samples to CEFR levels automatically, but will also rank the features that discriminate between different levels of writing quality. These top-ranked features carry pedagogical implications important to writing instruction.
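As a rough illustration of this classification setup, here is a minimal sketch in which scikit-learn's generic RFE wrapped around a linear SVM stands in for the MSVM-RFE algorithm of Duan et al. (2005); the feature matrix is a random placeholder rather than Coh-Metrix indexes computed on EFCAMDAT samples.

```python
# A rough sketch of the classification setup under stated assumptions:
# scikit-learn's generic RFE with a linear SVM stands in for the MSVM-RFE
# algorithm of Duan et al. (2005). The feature matrix is a random placeholder,
# not Coh-Metrix indexes computed on EFCAMDAT writing samples.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(1500, 50))    # 1500 essays x 50 writing-quality indexes
y = rng.integers(0, 6, size=1500)  # six CEFR levels, A1-C2

# Recursively drop the least informative indexes until 10 remain; the
# resulting ranking is what yields the pedagogically interesting features.
selector = RFE(LinearSVC(dual=False, max_iter=5000), n_features_to_select=10)
selector.fit(X, y)

print("Retained feature indices:", np.flatnonzero(selector.support_))
print("Full ranking (1 = retained):", selector.ranking_)
```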
 
Video Recording


Yongkook Won

Visiting Researcher
Center for Educational Research, Seoul National University

Link to Yongkook Won's second presentation

Topic Modeling Analysis of Research Trends of English Language Teaching in Korea

The goal of this study is to understand the research trends of English language teaching (ELT) in Korea over the last 20 years, from 2000 to 2019. To this end, 11 major Korean academic journals related to ELT were selected, and the abstracts of 7,035 articles published in those journals were collected and analyzed. The number of articles published in the journals continued to increase from the first half of the 2000s to the first half of the 2010s, but decreased somewhat in the late 2010s. Text data in the abstracts were preprocessed using the NLTK tokenizer (Bird, Loper, & Klein, 2009) and the spaCy POS tagger (Honnibal & Montani, 2017), and only the nouns in the data were used for further analysis. Based on previous studies of ELT research trends (Kim & Kim, 2015), 25 topics were extracted from the abstracts by applying latent Dirichlet allocation (LDA) topic modeling with the R package topicmodels (Grün & Hornik, 2011). Teacher, tertiary education, listening, language testing, and curriculum appeared as topics that were frequently studied in the field of ELT. Time series regression analysis shows that rising topics include task-based learning, tertiary education, vocabulary, affective factors, and peer feedback, while falling topics include speaking, culture, and computer-assisted language learning (CALL) (at α = .001).
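A minimal sketch of this pipeline, under stated assumptions, might look as follows: spaCy POS tagging to keep only the nouns, then LDA topic modeling. The study fit its 25-topic model with the R package topicmodels; scikit-learn's implementation is substituted here, and the two abstracts are placeholders for the 7,035 real ones.

```python
# A minimal sketch of the described pipeline under stated assumptions: keep
# only nouns via spaCy POS tagging, then fit an LDA topic model. The study
# used the R package topicmodels; scikit-learn's implementation is substituted
# here, and the two abstracts are placeholders for the real corpus.
# Requires: python -m spacy download en_core_web_sm
import spacy
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

nlp = spacy.load("en_core_web_sm")
abstracts = [
    "This study examines peer feedback in university writing classes.",
    "The experiment tests vocabulary learning through mobile applications.",
]

def nouns_only(text: str) -> str:
    # Noun-only preprocessing, as described in the abstract.
    return " ".join(tok.lemma_.lower() for tok in nlp(text) if tok.pos_ == "NOUN")

docs = [nouns_only(a) for a in abstracts]
X = CountVectorizer().fit_transform(docs)

# The study extracted 25 topics; 2 are used here because the toy corpus is tiny.
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
print(lda.components_.shape)  # (n_topics, n_vocabulary_terms)
```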
 
Video Recording