Reciprocity in Design

The “giving effect to” referred to above is for many the most critical feature of an applied linguistic design, its implementation. How effectively a plan for language development, measurement or management can be put into practice through language courses, tests and strategies is an indication of the force or effect of their design - the extent to which the design may potentially contribute to the solution of the language problem. It should therefore come as no surprise that applied linguists working in the subdisciplinary sphere of language assessment invest an inordinate amount of intellectual energy in the validation of language assessments (Weideman 2011b). If the results of the application of a measurement instrument are not evidence of its effectiveness, its continued use and eventually its design itself will be suspect.

Since the groundbreaking work of Messick (1980, 1981, 1988, 1989), Linn (1989), and Wainer and Braun (1988) on the validation of tests, the traditionally identified characteristic of test validity (Borsboom et al. 2004) has given way to the currently orthodox concept of validation (Weideman 2012). Validation is now conventionally conceived of as a process of gauging effectiveness or adequacy, rather than as a once-off assignation to a language test of a single quality, validity. Good examples of test validation as a process can be found in Van der Walt and Steyn’s (2007) validation of an undergraduate test of academic literacy, and in Van Dyk’s (2010) comprehensive study of another, similar test.

The concept of validating a language test brings us, however, to the threshold of discovering general design principles that are not typical, in the sense of relating to the identity of the three different kinds and two levels of artefacts discussed above, but rather to common principles that stretch across the varying nature and purposes of these. Validation, it may be argued, is not limited to the determination of the effect or the adequacy of only one kind of applied linguistic artefact, a language test. Purpura et al. (2015) in fact present a case for validation frameworks to be applied also to second language acquisition studies. We find a similar concern in the case of language courses. Once a course design has been implemented, there is a need to determine, through a process of evaluation, how effective the language instruction has been. In the history of applied linguistic designs we indeed find a concern not only with language test validation, but also with language program and course evaluation (for a thematic survey, cf. Rea-Dickins and Germaine 1998). The general opinion is that systematic program and course evaluation became more prominent in the last 20 years of the previous century after Beretta and Davies’s (1985) pioneering evaluation of the Bangalore project (cf. also Beretta 1986, 1990). This period produced a substantial number of project and program evaluations (e.g. Kroes 1991a, b; cf. too Alderson and Beretta 1992; Alderson 1992; and, for South Africa, Macdonald and Burroughs 1991; Kotze and McKay 1997). Of course, such evaluations were highly regarded in the case of donor-funded applied linguistic interventions, especially when decisions about further funding of designed solutions had to be taken. At the institutional level there are also sufficient examples of external panels of experts evaluating language interventions that have been introduced at scale within institutions. These evaluations are intended not only to gauge the effect of language courses, but sometimes also the organisational functioning of the subinstitutional units that design and offer them. In the usual case, therefore, both the validity of the course designs and the overall effect of the interventions will be systematically considered. In the more recent literature on this we find several examples of academic and scholarly discussions of such larger-scale, overview assessments (e.g. Weideman 2003).

The general principle of validity therefore applies across typically different applied linguistic artefacts. It is an insight that we gain when we consider the commonality rather than the typicality of design conditions for applied linguistic artefacts. Though we are thus first alerted to the principle of validity from the side of language assessment, upon further consideration we may find that it applies reciprocally to language plans as well as to language courses. The normative appeal of this design principle to applied linguists is that the plans they make either for organizing language institutionally, or for designing courses, or for developing tests, should be effective plans. In order to determine that objective validity or adequacy, all of these artefacts may be analyzed in a subjective process of validation. The difference between objective validity - the effect of the design upon implementation - and subjective validation - the currently orthodox view in language testing - is not always well understood (Weideman 2012). Yet they are two sides (a subjective measurement of an objective effect of the design) of the same coin. The process of validating an instrument may of course lead subsequently to improvements in its design, which reinforces the point that validity is a design principle of the technically conceived artefact.

Yet another general principle, reciprocally applicable to other types of artefacts but derived from one kind of applied linguistic artefact, namely course design, is that of differentiation. If language course designers can learn from language test developers about the technical effect or validity of their plans, one might thus consider, too, what test designers can learn from course developers. A good example of differentiation can be found in the current interest not only in one (generic) kind of academic literacy development, but also in tuition that develops the ability to handle academic discourse in specific disciplines or fields (Carstens 2009), sometimes referred to, in order to stress the differentiated nature of the abilities, as “academic literacies” (Street 2000) rather than “academic literacy”. At the same time, we may note that even in conventional institutional settings, such as the teaching of general courses of language ability as school subjects, there will be varying emphases on both a general language ability and a set of differentiated abilities to use language in various lingual spheres (Weideman 2009a: chapter 4). Language curricula for teaching first and additional languages at secondary school level (cf. Department of Basic Education 2011a, b) may, for example, have an emphasis on specific as well as general language ability. Thus the English First Additional Language and Home Language syllabi of this education authority provide for instruction in the ability to use language in a range of lingual spheres: aesthetic discourse (literature study), academic discourse, business language, social interaction, political and ethical discourse, and so on.

When we recognize differentiation as a design principle not only for language courses, but also for language testing, the question is whether the appropriate level of differentiation is adequately reflected in the assessments that applied linguists develop. Should we not, in other words, design specific tests of academic literacy at university level that test the ability to use language in a specific field? Or is the stage at which low levels of academic literacy should be identified too early in a student’s academic career for there to be much differentiation in respect of field of study? In that case, it may be argued that a generic test of academic literacy may be sufficient. At the secondary school level in South Africa, certainly, we do not yet see adequate differentiation either in the teaching of languages in the upper secondary school, or in the assessment of language ability in the final exit examinations that follow such language instruction (Du Plessis et al. 2013; Weideman et al. 2015). The assessments and the language instruction that precedes them are misaligned with the requirements of the curriculum to nurture and develop a differentiated ability to use language across a variety of discourse types, and the texts typically associated with these discourse types. Here too language instruction and assessment have much to learn from the move to design language courses for specific purposes, and thus to apply to their designs the principle of differentiation.

What has been referred to above could be termed externally motivated differentiation, i.e. differentiation inspired by the fact that in real life, our general language ability is complemented by a set of language abilities that enables us to handle language across different kinds of discourse (Patterson and Weideman 2013a). There is another kind of differentiation, however, that concerns the internal structuring or organization of an assessment into different kinds of subtests. Such a differentiated design utilizes the insight that one specific ability, for example academic literacy, may be so rich that no single measure (in the shape of one subtest) will be able to do justice to the measurement of that ability. Hence the design is organized internally into an instrument with a differentiated set of subtests. The Test of Academic Literacy Levels (TALL) (ICELDA 2014), for example, is organized as a differentiated measurement of a number of components of the ability to handle academic language, such as making distinctions, seeing relations between different parts of a text, understanding graphic and visual information, handling grammatical and textual relations, inferencing and extrapolating, and so forth. TALL assesses the ability to handle these components of the construct (academic literacy) through a differentiated set of subtests (Patterson and Weideman 2013b):

  • Scrambled text (five sentences of a single, coherent but now scrambled paragraph which has to be restored to its original format)
  • Vocabulary knowledge (usually from the Academic Word List; cf. Coxhead 2000)
  • Interpreting graphic and visual information
  • Register and text type (matching five sentences, taken from different kinds of genres or discourse, with their counterparts from the same text)
  • Text comprehension (insight into an extended text of more than 500 words)
  • Grammar and text relations (a modification of cloze procedure; cf. Van Dyk and Weideman 2004)

TALL is a highly reliable test, and one of the reasons for this high level of reliability may be that its design is a much more differentiated one than that of its main rivals.
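
The reliability claim above can be made concrete with a standard internal-consistency estimate. The sketch below is a minimal, hypothetical Python illustration (not taken from the source) of how Cronbach’s alpha might be computed from item-level scores for a test of this kind; the simulated data and the function are assumptions for illustration only, not TALL’s actual analysis or results.

import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Estimate internal-consistency reliability (Cronbach's alpha).

    item_scores: 2-D array of shape (test_takers, items), e.g. 0/1 item scores.
    """
    k = item_scores.shape[1]                         # number of items
    item_var = item_scores.var(axis=0, ddof=1)       # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_var.sum() / total_var)

# Hypothetical data: 200 test takers, 60 dichotomously scored items
rng = np.random.default_rng(0)
ability = rng.normal(size=(200, 1))      # person ability
difficulty = rng.normal(size=(1, 60))    # item difficulty
noise = rng.normal(size=(200, 60))       # measurement error
scores = (ability - difficulty + noise > 0).astype(int)
print(f"alpha = {cronbach_alpha(scores):.2f}")

A differentiated design, in which items are grouped into subtests that each target a distinct component of the construct, can be analysed in the same way both per subtest and for the test as a whole.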

A third example of general design conditions for applied linguistic artefacts that can be extrapolated from their current applicability to only one artefact can be found in the design requirements of transparency and accountability. Conventionally, for a language policy, plan or strategy to be effective, it has to be conceived in the open, with all affected parties contributing to its formulation. Without such transparency, buy-in and adherence to its eventual application become less probable. So, within an institution where language is part of, or a proxy for, a contested space, as in some nominally multilingual South African universities, an effective language policy requires wide participation by all involved if it is not to fail once adopted. In these institutions, the retention of Afrikaans as a language of higher education is contested, since it is viewed by some as a measure that limits accessibility to higher education. Though there may be a legitimate case against the language exclusivity of institutions if it is a proxy for ethnic or racial exclusion, the merits of a differentiation of higher education to accommodate a diversity of languages are left unconsidered. All former Afrikaans-medium institutions of higher education in South Africa have therefore either switched to become monolingual institutions, or have adopted multilingual institutional language policies that strive to be inclusive (by adding English as language of instruction, while nominally retaining Afrikaans) rather than exclusive as regards language. As always, in the negotiations about what these policies must look like, transparency and accessibility go together. Such language plans and policies must therefore be arrived at with the greatest possible openness (transparency) and must have as their result greater accessibility to the scarce goods (education) that the institutions involved provide, before they become defensible in public as what may be described as accountable solutions.

Accountability, transparency and accessibility are, however, not design principles only for language policies and plans. They obviously apply equally to language course and test design. In some peripheral methods of language teaching, for example, the idea of learners having a say in determining their own curriculum is taken almost to an extreme, as is the case in Community Language Learning/Counseling-Learning. In the case of another artefact, test design, as Rambiritch (2012) has shown, we have to conform to the regulative design principles of transparency, accessibility and accountability. That means that test designers have to devise means of disseminating as much information as possible about their tests, and what their purpose is, to prospective test takers who might be affected by their results. Though web-based information is not the only format for the dissemination of such information - there are also brochures and pamphlets, word of mouth, interviews on radio and in newspapers, fact sheets - an example of how such information can be presented in that format can be found on the ICELDA (2014) website. Here, not only are those who need to write the tests informed of what is being tested, but a sample test is also provided for their information. At the same time, when tests have to be used to exclude people from opportunity, as some high stakes language tests do, this must be done under strict ethical conditions. If possible, tests must rather be employed inclusively. Furthermore, since tests are never 100% reliable, their administrators must ideally provide second chance tests to that range of students who could potentially have been misclassified as a result of inconsistent measurement (Van der Slik and Weideman 2005, 2008, 2009). Finally, tests need to be examined for bias, so that they are fair to everyone on every count (Weideman et al. 2015; Van der Slik and Weideman 2010). Rambiritch (2012) argues that without such measures and analyses, tests would lack transparency, accessibility, accountability, and fairness. These principles therefore apply to all applied linguistic designs.
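
The point about misclassification through inconsistent measurement can be illustrated with the standard error of measurement, SEM = SD × √(1 − reliability). The sketch below is a minimal, hypothetical Python illustration of how candidates whose scores fall close to a cut score might be flagged for a second chance test; the cut score, reliability value, band width and scores are assumptions for illustration, not the procedure of the studies cited.

import numpy as np

def flag_borderline(total_scores: np.ndarray, cut_score: float,
                    reliability: float, band_sems: float = 1.0) -> np.ndarray:
    """Flag test takers whose scores lie within +/- band_sems SEMs of the cut score.

    SEM = SD * sqrt(1 - reliability). Scores inside this band are the ones most
    likely to have been misclassified by a single administration, and so are
    candidates for a second chance test.
    """
    sem = total_scores.std(ddof=1) * np.sqrt(1 - reliability)
    lower, upper = cut_score - band_sems * sem, cut_score + band_sems * sem
    return (total_scores >= lower) & (total_scores <= upper)

# Hypothetical example: scores out of 100, cut score of 50, reliability 0.90
scores = np.array([38, 45, 48, 50, 52, 55, 63, 71])
print(flag_borderline(scores, cut_score=50, reliability=0.90))

On these assumed figures, only the scores nearest the cut score are flagged; the wider the band, or the lower the reliability, the more candidates would qualify for a second opportunity.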

What these three examples suggest is that we may perhaps gainfully explore a general set of applied linguistic design principles that apply differentially, according to the typical purpose of the various artefacts, to all applied linguistic designs. This is discussed in the next section. As we have noted in the discussion so far, the observance of these principles is likely to lead to incremental improvements to the design and administration of these applied linguistic artefacts, such as participatory policy formulation, the provision of second chance tests, the elimination of bias, an openness about test purpose, and so on. Their further exploration therefore has the potential to be equally informative.

 