Advancing Technologies — Expanding Research
Technology for Second Language Learning Conference
October 19-21, 2023
Generative AI: Experiments in Writing and Learning
Abstract: Emerging generative AI tools like ChatGPT present intriguing opportunities to enhance writing instruction by helping learners manage cognitive load, exercise fluency, and receive real-time formative feedback. Yet these technologies will also challenge educators to reshape traditional learning design and assessment practices. Adapting will involve fundamental shifts in our mindsets and how we conceptualize student learning. First, students will need to develop capacities for self-regulated learning as they increasingly act as the “human in the loop” for their own AI-assisted writing and learning processes. Second, educators will need to focus on promoting discipline-specific forms of evaluative judgment to help students self-assess and improve as they engage in AI-assisted work. Finally, both students and educators will need to develop AI literacies that allow them to productively and ethically interact with these technologies to support social learning and professional development. This presentation will report on an experimental fall 2023 course on “Artificial Intelligence and Writing” that seeks to put these principles into action. It will share learning designs for the course’s weekly creative challenges, which provide students with engaging opportunities to experiment with and reflect on specific AI tools and techniques. Preliminary findings from these creative challenges will be used to reflect on the future of AI-assisted writing and learning.
Abram Anders is an Associate Professor of English and Interim Associate Director of the Student Innovation Center at Iowa State University. His research expertise includes academic innovation, business and professional communication, inclusive design, and communication and writing pedagogy. As an academic innovator, he has led large-scale curricular design projects that have had statistically significant impacts on student learning capacities linked with student retention and academic success. His current projects explore artificial intelligence and writing and the development of student innovation capacities. His work has appeared in journals such as Computers & Education, International Journal of Business Communication, Business and Professional Communication Quarterly, and the International Review of Research in Open and Distributed Learning. The Association for Business Communication (ABC) has recognized Dr. Anders as an outstanding researcher and teacher by honoring him with the 2022 Outstanding Article in the International Journal of Business Communication Award, the 2016 Rising Star Award, and the 2014 Pearson Award for Innovation in Teaching with Technology. He also recently received the 2023 Dean’s Arts and Humanities Innovation Award from the College of Liberal Arts and Sciences at Iowa State University.
Large language models in hybrid natural-language processing applications for language learning and assessment
Abstract: In this presentation, I will explore the affordances of large language models (LLMs), such as GPT, for building hybrid natural-language processing (NLP) applications in the fields of computer-assisted language learning and assessment. In these hybrid applications, the LLMs are combined with rule-based components that drive the LLMs in generating and understanding natural language. This permits the developer to retain control over the pedagogical or assessment procedures while simultaneously tapping into the power of the LLMs for flexible language generation and understanding. Specifically, I will discuss the affordances of LLMs in the context of three past and ongoing projects that focus on (1) the assessment of L2 interactional competence, (2) the learning of L1 argumentation discourse, and (3) the learning of text-production strategies for integrated writing tasks.
Dr. Evgeny Chukharev is an Associate Professor in the Applied Linguistics and Technology program at Iowa State University. He is the director of the Language Processing, Acquisition and Change Laboratory (PACE Lab), funded by grants from the National Science Foundation, Educational Testing Service, and the College of Liberal Arts and Sciences. His research program combines applied and quantitative linguistics with experimental work in cognitive and educational psychology of language acquisition and use.
Generative AI and the end of corpus-assisted data-driven learning? Not so fast!
Abstract: This talk explores the potential advantages of corpora over generative artificial intelligence (GenAI) in understanding language patterns and usage, while also acknowledging the potential of GenAI to address some of the main shortcomings of corpus-based data-driven learning (DDL). One of the main advantages of corpora is that we know exactly the domain of texts from which the corpus data is derived, something that cannot be traced in the current large language models underlying applications like ChatGPT. We know the texts that make up large general corpora such as BNC2014 and BAWE, and can even extract full texts from these corpora if needed. Corpora also allow for more nuanced analysis of language patterns, including the statistics behind multi-word units and collocations, which can be difficult for GenAI to handle. However, it is important to note that GenAI has its own strengths in advancing our understanding of language-in-use. For example, GenAI’s ability to generate results from almost any register, domain or even language can greatly widen the scope of DDL beyond its current focus on tertiary academic English language. Additionally, the scale and speed at which current large language models like ChatGPT can be queried are unprecedented, exceeding even the best available corpus tools. I argue that both corpora and GenAI have valuable roles to play in advancing our understanding of language-in-use. By combining these approaches, language learners can gain a more comprehensive understanding of how language works in different contexts than is currently possible using only a single approach.
Dr. Peter Crosthwaite is a Senior Lecturer in the School of Languages and Cultures at the University of Queensland (since 2017); he was formerly Assistant Professor at the Centre for Applied English Studies (CAES), University of Hong Kong (2014–2017). His areas of research and supervisory expertise include corpus linguistics and the use of corpora for language learning (known as ‘data-driven learning’), as well as computer-assisted language learning and English for General and Specific Academic Purposes. He has published over 50 articles to date in leading Q1-ranked journals including Language Learning, Computer Assisted Language Learning, ReCALL, System, and the Journal of Second Language Writing. He is currently serving as Associate Editor for the Q1 Journal of English for Academic Purposes and sits on the editorial boards of the Q1 journals IRAL and System, as well as Applied Corpus Linguistics, a new journal covering the direct applications of corpora to teaching and learning.
Expanding Pedagogy: New Ways of Teaching, Learning and Assessment with AI
Abstract: As we come to understand the affordances and limitations of generative AI, it is time to flip the narrative away from “How will AI impact education?” to “What are new and effective ways of teaching and learning enabled by AI?” In this presentation I will explore how AI can support innovative pedagogy. Roles for generative AI include: Possibility Engine (AI generates alternative ways of expressing an idea), Socratic Opponent (to develop an argument), Collaboration Coach (to assist group learning), Exploratorium (to investigate and interpret data), Personal Tutor and Dynamic Assessor. I propose that future research into generative AI for education should be based on a new science of learning with AI — including understanding the cognitive and social processes of AI-assisted learning, exploring future roles for AI in education, developing generative AI that explains its reasoning, and promoting ethical education systems.
Dr. Mike Sharples is Emeritus Professor of Educational Technology at The Open University, UK. His expertise involves human-centred design and evaluation of new technologies and environments for learning. He is an Associate Editor of the International Journal of Artificial Intelligence in Education. He founded the Innovating Pedagogy report series and is author of over 300 papers in the areas of educational technology, learning sciences, science education, human-centred design of personal technologies, artificial intelligence and cognitive science. His recent books are Practical Pedagogy: 40 New Ways to Teach and Learn, and Story Machines: How Computers Have Become Creative Writers, both published by Routledge.
Critical Project-Based Learning and Social Justice: Implications for Digital Citizenship
Abstract: Computer-Assisted Language Learning (CALL) tends to be portrayed as a value-neutral field of practitioner research concerned with access to or use of digital technologies, particularly to enhance language proficiency, motivation or flexible learning. Such a view of digital education is often highly deterministic and may ignore the material realities and role of people in shaping the technologies we use. This struggle is again being played out in the hype that has greeted the emergence of ChatGPT and related AI technologies. Although confronted with challenges in testing-based educational systems, Project-Based Language Learning (PBLL) has become a more prominent pedagogical approach in recent years, and research suggests that, when used alongside CALL, it may “engage language learners with real-world issues and meaningful target language use through the construction of products that have an authentic purpose and that are shared with an audience that extends beyond the instructional setting” (NFLL, 2022). The “real-world” dimension of PBLL and digital education can, however, be easily assimilated into neoliberal notions of education as skills training or mere preparation for the world of work. This presentation argues for a more critical approach to CALL and PBLL, viewed through the lens of the ‘social justice and sustainability turn’ in language education, and considers how such an approach may help language learners and teachers understand social and economic inequalities and develop more critical notions of digital citizenship.
Dr. Michael Thomas is Professor of Education and Social Justice and Chair of the Centre for Educational Research (CERES) at Liverpool John Moores University in England. He holds PhDs from Newcastle University and Lancaster University, and has worked at universities in Germany, Japan, England and Wales over a twenty-five-year period. He is author or editor of over thirty books and peer-reviewed special issues on computer-assisted language learning, digital natives, project-based pedagogy, online education and pedagogical theory. He is founding editor of four book series, including Digital Education and Learning, Advances in Digital Language Learning and Teaching, and Global Policy and Critical Futures in Education. He is currently PI on two British Council-funded projects exploring gender inequalities in teacher training and ICTs in Botswana, Ghana, Nigeria and South Africa in line with UN SDGs 4 and 5.
Researching Generative AI in Writing
Abstract: The development and diffusion of generative AI is ushering in the greatest disruption to writing practices in modern history. This presentation delves into the significance of generative AI within the framework of literacy theory and research. It provides an overview of five recent and ongoing investigations conducted by the UC Irvine Digital Learning Lab on generative AI, comprising experimental analyses on the quality of its scoring and feedback, classroom research on its impact in writing courses, and a case study on a second language learner’s authoring techniques. The presentation concludes with discussion of a research agenda for this burgeoning field, as well as an introduction to an innovative online tool in development at UCI that promises to support both classroom pedagogy and research.
Dr. Mark Warschauer is a Professor of Education and Director of the Digital Learning Lab at the University of California, Irvine, with affiliated appointments in the Department of Informatics, the Department of Language Science, and the Department of Psychological Science. Professor Warschauer is one of the most widely cited researchers in the world on digital learning, with seminal contributions to topics such as computer-assisted language learning, the nature of digital literacy, and the uses of AI in language and literacy development. He has received more than $20 million in federal grants for his research, has published 12 books and some 300 papers, and has served as founding editor of two prominent open access journals, Language Learning & Technology and AERA Open. He is a member of the National Academy of Education.
Adaptive language learning in the new age of generative AI
Abstract: Generative AI (genAI) technologies such as ChatGPT and GPT-4 have become the new catchwords in education. The potential of genAI has been explored in many domains of education, including adaptive language learning. Although genAI can support the development of some functionalities of adaptive learning, such as creating extended language inputs with associated learning exercises and assessments and providing feedback on language outputs, I argue that designing a robust, learner-first adaptive learning experience requires a principled, systematic approach, in which genAI is only a facilitating tool, not a solution.
Learner-first adaptive learning solutions, in which a learner’s needs and wants are prioritized at every step of their interaction with assessments and learning content, are rare. This is because developing such solutions requires interdisciplinary talent in assessment, learning, cognitive and noncognitive science, artificial intelligence (AI), and more, which, in reality, is a luxury for most development teams. How do we ensure a learner-first assessment and learning experience? In designing the various types of assessments used in adaptive learning, we want assessments that are efficient yet precise, that are unobtrusive, that provide actionable information, and that support a positive assessment-taking experience. A learning experience optimized for an individual learner must meet that learner’s unique needs and be tailored to their level, dynamic knowledge and skill profiles, cognitive and learning styles, and constantly changing affective states to facilitate the fastest and most effective learning.
In this talk, I will discuss the science and technologies behind an adaptive learning system, especially how the emergence of genAI could potentially empower the efficient development of materials and assessments. I will decompose the architecture of an adaptive learning system, focusing on the chain of inferences supporting its overall efficacy, including user property representation, user property estimation, content representation, user interaction representation, user interaction impact, and system impact. I will provide an overview of the different types of assessment used in adaptive learning and an analysis of each one’s assessment approach, priorities, and design considerations, with a view to optimizing its use in adaptive learning. I will then propose a framework for evaluating different aspects of an adaptive learning system. I will conclude with thoughts on high-priority research and development needed to provide truly learner-first systems that fully empower our learners, and on the expanding role of genAI in the future development of adaptive learning solutions.
Dr. Xiaoming Xi is Director of Examinations, Assessment and Research at the Hong Kong Examinations and Assessment Authority (HKEAA), leading the Assessment Technology and Research Division, the International and Professional Examinations Division, and the Education Assessment Services Division. Her research spans broad areas of theory and practice, including validity and fairness issues in the broader context of test use, test validation methods, approaches to defining test constructs, validity frameworks for automated scoring, automated scoring of speech, the role of technology in language assessment and learning, and test design, rater and scoring issues. She has published widely on testing, assessment, learning, and educational AI technologies. She serves on the editorial boards of a few leading international journals, has been awarded multiple patents in AI technology, and won several prestigious book and article awards in educational testing.