Assessing Lingua Franca Competence

Organizer: Luke Harding, Lancaster University
Discussant: Tim McNamara, University of Melbourne


Despite their increasing prominence within applied linguistics research, lingua franca approaches to communication have yet to permeate the field of language assessment at a deep level. The reasons behind this are complex, and include practical and conceptual challenges in assessment design, as well as ideologies around language and language use that appear to support an “institutional conservativism” (Harding & McNamara, 2018). However, recent conceptualisations of lingua franca communication provide opportunities for an enriched approach to assessment which takes into account the fluid and dynamic nature of interaction across a range of settings, leading to more authentic assessment methods that would arguably have greater power in predicting communicative success across a range of domains. Given the range of lingua franca contexts in which language assessments are regularly employed—higher education, aviation, healthcare, call centres—such assessments are long overdue.

This colloquium presents four perspectives on the theme of assessing lingua franca competence. In the first talk, Suresh Canagarajah will outline how lingua franca assessment tasks must respond to shifts in conceptualising language performance, arguing that a radical change in assessment practice is required. In the second talk, Luke Harding will build on these ideas, outlining the steps taken in designing lingua franca assessment tasks for academic purposes as part of a project which seeks to translate theory into test design. The third paper will broaden the discussion into French as a lingua franca, with Sara Kennedy, Pavel Trofimovich, Josée Blanchet, and Juliane Bertrand discussing self-, peer- and teacher-assessment of lingua franca competence in a French-medium university context. Finally, Jennifer Jenkins and Constant Leung will critique current assessment approaches in international, standardized tests of English, arguing that “rich messy data” must be embraced to develop a clear picture of (multi)lingua franca competence. Tim McNamara will provide a discussion of the papers before opening the floor for comments and questions.

Assessing lingua franca interactions as performative
Suresh Canagarajah, Pennsylvania State University

In the recent past, some schools of English as a lingua franca research have studied its grammatical properties to facilitate effective teaching and assessment. As we begin to focus on lingua franca communication as an activity rather than a grammar, we are compelled to reconsider how we teach and test students for such proficiency. I focus on three shifts that require attention: 1. From methodological individualism to distributed practice: Since the successful negotiation of lingua franca interactions involves the co-construction of meanings, we have to assess how effective interlocutors are at negotiating meanings with shared responsibility. 2. From grammar to spatial repertoires: Spatial repertoire (Pennycook and Otsuji, 2015; Canagarajah, 2017) refers to the full range of semiotic resources that facilitate communication in an activity environment. As lingua franca interactions feature semiotic repertoires beyond verbal resources, we have to assess how interlocutors draw on all of them as relevant to their communicative activity. 3. From cognitive representation to performative practice: We have to assess proficiency not in relation to the predefined meanings interlocutors bring with them in their minds, but in relation to how they generate new meanings through embodied practice. These shifts involve a reconsideration of traditional forms of testing, which have focused on individuals in controlled settings, assessing their internalized knowledge as measured against the examiner’s criteria. Instead, we have to test groups in situated social interactions according to criteria indigenous to that communicative activity. Such an orientation will require a radical reconceptualization of the format and rubrics of testing. I offer a humble starting point by illustrating how we might test the dispositions that facilitate such proficiency, including the following: willingness to negotiate meanings; treating diversity as the norm; and adopting a functional orientation to communication.

Assessing lingua franca competence in academic settings: From theory to design
Luke Harding, Lancaster University

One key site of English as a Lingua Franca (ELF) communication is higher education. In predominantly English-speaking contexts such as the United Kingdom, Australia, and the United States, English is the common language of academia among a diverse international student and staff body. However, even in countries where it is not an official language (e.g., across continental Europe), English is now frequently used as a medium of instruction within internationally-focused programmes of study. Within both types of academic context, ELF "competence"—defined broadly as the ability to adapt, to make oneself intelligible, and to convey meaning effectively to a wide range of unfamiliar speakers—is of prime importance. However, the English language assessments currently required for admission to higher degree programmes do not adequately capture these important ELF competencies. As a result, the development and validation of ELF assessment tasks for academic purposes has become an urgent priority (McNamara, 2014). Against this background, this paper reports on an ongoing study intended to address this research gap by first designing a set of English as a Lingua Franca assessment tasks and an associated rating scale, and then validating these in two typical ELF contexts: Lancaster University (UK) and the University of Innsbruck (Austria). The transition from a theoretical framework to task and scale design will be described, and an initial set of materials will be presented. Results from piloting these assessment materials with an initial set of speakers and expert raters—including scores, user feedback, and analysis of the discourse produced across tasks—will be reported and discussed. The paper will conclude with a call for more effective collaboration between researchers working in the areas of ELF and language assessment.

Assessing French as a lingua franca: Teachers' and learners' perspectives
Sara Kennedy, Pavel Trofimovich, Concordia University, Montreal
Josée Blanchet, Juliane Bertrand, Université du Québec à Montréal

Assessment of lingua franca use has received increasing attention from researchers, but that attention has centred almost entirely on the use of English as a lingua franca. In this presentation, we draw on quantitative and qualitative data to examine how the use of French as a lingua franca is assessed by teachers and adult learners in a unique context of language use. The learners received French language instruction at a French-medium university in a majority-French city where English is also commonly used in everyday life by both L1 and L2 English speakers. This contrasts with other parts of the province, where French is the language of everyday life. Data were elicited by asking teachers or learners to assess various constructs in spoken French as a lingua franca, using scales with endpoint descriptors. Over one semester, the learners were asked to periodically assess their own and their classmates' speech after completing communicative classroom tasks. The teachers individually viewed videos of learner pairs completing interactive tasks, assessed the learners using the scales, and discussed their assessment decisions with a researcher. Results show that teachers clearly differentiated between certain speech constructs, and were sometimes flexible in the degree to which they relied on native speaker norms. Learners became more aware of which aspects of their speech could benefit from additional attention in their autonomous learning. We discuss the implications of these findings for teacher education with regard to assessment literacy and fairness, and for learners’ autonomy and critical awareness of language.

Predicting future performance: Rich messy data rather than simplistic standardised test scores
Jennifer Jenkins, University of Southampton
Constant Leung, King’s College, London

For many years, the majority of non-native English students wishing to gain entry to a university operating in English have been obliged to attain a particular score in one of the ‘international’ standardised English language tests. These tests, without exception, claim to tap into prospective students’ ability to use native-speaker-like English for academic purposes—at best a minority use of English on their prospective university campus, and sometimes entirely absent. More relevant, we argue, would be for exam boards to focus on the kind of English used in the local (and, by definition, multilingual) university context as experienced by the individual student and teacher. Regardless of policy dictated from above, in practice this is English as a (multi)lingua franca, and it involves translanguaging into and out of languages other than English as well. The ability to communicate effectively in international universities, we believe, cannot be captured by the standardised national-English-oriented tests currently in force, or by any standardised test for that matter. As Hugh Burkhardt, Professor of Education, recently argued in a letter to a British newspaper, “‘rich dirty data’ is a better predictor of future success than ‘simple clean data’” (Burkhardt, 2018). It is ‘rich dirty data’, such as examples of situated language use, that we believe should form the basis for deciding whose language skills are appropriate for study in a specific university setting, including the skills of native English students. In our talk, we will consider these issues in more detail.