Colloquium Organizer: Sara Cushing Weigle
Applications of Corpus Linguistics for Investigating Target Domain Language in High Stakes Assessments
Geoff LaFlair, University of Kentucky & Shelley Staples, Purdue University
Corpus linguistics has been used at various stages of validity arguments for high-stakes assessment. This presentation will focus on the use of corpus linguistics to explore the correspondence between large-scale language assessments (one writing test and one speaking test) and their target domains (i.e., writing and speaking in academic contexts). By comparing the lexico-grammatical features produced by test takers on high-stakes assessments with the language produced in their respective target domains, corpus linguistic studies can provide evidence for inferences from assessment performances to performance in the target domain.
Language Assessment and the Inseparability of Lexis and Grammar
Ute Römer, Georgia State University
This presentation aims to connect recent corpus research on phraseology with current language testing practice. It will showcase studies on phraseological patterns in English based on corpora such as the BNC (the British National Corpus) and MICASE (the Michigan Corpus of Academic Spoken English), and provide evidence for the strong interconnectedness of lexical items and grammatical structures in natural language. It will then review rubrics of popular speaking and writing tests and discuss to what extent these rubrics capture the centrality of phraseology and how well they reflect the patterned nature of language.
Corpus-based Discoveries in the Modeling and Measurement of Lexical Diversity
Scott Jarvis, Ohio University
This paper describes the stages of defining, modeling, and measuring lexical diversity and the processes of validating such measures. Traditionally structured corpora and standard corpus tools are useful at each stage, but additional resources are also valuable, such as human ratings and innovative tools motivated by the construct definition. One of the most important implications of the project for language testing is that a multi-dimensional phenomenon such as lexical diversity cannot be measured satisfactorily without a fully developed, theoretically sophisticated construct definition that informs each stage of the approach and directly motivates all aspects of the ensuing measures.
Collecting Written and Spoken Corpus Data to Inform Automated Tutoring and Assessment Systems
Fiona Barker, Cambridge English Language Assessment
Nick Saville, Cambridge English Language Assessment
Learner and native speaker corpora have been used for more than two decades to inform every stage of producing and validating tests. As the needs and expectations of stakeholders evolve and digital technologies develop, our use of corpora is also changing. This presentation focuses on collaborative research that explores production data from learners and native speakers to develop automated approaches to learning and assessing language. Starting from an overview of new and recently annotated datasets, we then focus on how computational analyses are informing digital learning and assessment opportunities, before looking at future applications of corpora for language assessment.
Factors Affecting L2 Writing Syntactic Complexity and Implications for Assessment
Xiaofei Lu, The Pennsylvania State University
The relationship of syntactic complexity to L2 proficiency and L2 writing quality has long interested the SLA, L2 writing, and language assessment communities. This relationship is known to be affected by various learner-, context-, and task-related factors. In this presentation, I first review previous research on the effects of such factors on L2 writing syntactic complexity. I then introduce the L2 Syntactic Complexity Analyzer, a tool designed to automate syntactic complexity analysis of large corpora of writing samples. Finally, I discuss findings from recent corpus-based studies of L2 writing syntactic complexity facilitated by this tool and their implications for assessment.
Jesse Egbert, Brigham Young University
Xiaoming Xi, Educational Testing Service