Evidence and Record-Keeping

A key function of all types of assessment in LOA is to elicit evidence about learning, which can be collected systematically and used to inform further actions, such as checking student understanding and monitoring progress against objectives. The first LOA principle proposed by Carless (2007), for instance, advocates that assessment tasks should capture evidence of skills and knowledge across a period of study, not simply after a short burst of study immediately before an assessment (pp. 59-60). The elicitation dimension of Turner and Purpura’s (2016) LOA model covers a broad range of language elicitations that can take place in a classroom, ranging from planned assessments, such as achievement tests, diagnostic tests, and group work with peer feedback, to unplanned, spontaneous assessments, such as impromptu questioning and feedback during classroom talk. Along the same lines, the LOA cycle in Jones and Saville (2016) identifies several opportunities for evidence collection, from teacher observations and informal records, to a more structured record of achievement (linked to learning objectives, the language and activities from lessons, and the learner’s strengths and areas for development), to external proficiency exams. Ideally, both the teacher and the learners should be responsible for collecting different types of evidence and recording it to compile a rich learning profile. However, such systematic collection and record-keeping of information may prove challenging without sufficient technological support, such as a learning management system or other digital platforms and tools. In this section, we document LOA examples in which teachers collected evidence in a classroom environment with varying degrees of technological support.

Collecting Evidence Using a Range of Assessment, Feedback, and Self-Evaluation Tasks

Mitchell (2016) designed a corpus-based intervention to help EAP learners develop more effective vocabulary learning strategies, engage more substantially in vocabulary learning, and master the Academic Word List (AWL) for use in their EAP class and, more importantly, in their future university courses. The core of the intervention was training the students in corpus consultation methods using the online corpus tool FLAX (Witten et al., 2013). The research question focused on whether this training would “encourage deeper learning of words and engagement with vocabulary” (Mitchell, 2016, p. 19). In addition, Mitchell (2016) embedded this training in “principles of LOA, by promoting students’ self-assessment of learning strategies, creating tasks to promote new learning, and developing classroom tests to monitor progress” (p. 19). The project involved two 10-week cycles and a total of 30 students.

Relevant to our current discussion, one of the key features of the intervention was the systematic collection of evidence of students’ understanding and use of vocabulary learning strategies, and of the development of their vocabulary throughout the course of study. At the beginning of the 10-week intervention cycle, Mitchell presented the students with a “Vocabulary Learning Strategy (VLS) Questionnaire,” which aimed to initiate reflection on their current strategies and to set vocabulary learning goals. For the first five weeks, she then created scaffolded vocabulary tasks to train students in corpus consultation methods with FLAX. After the initial training, students were asked to write a brief blog post on the usefulness of corpus tools in vocabulary learning and to reflect on their use of vocabulary learning strategies. Each week, teams of students were responsible for creating an AWL Class Wordlist, which required information derived from corpora. Progress was monitored via a range of weekly formative assessments: a spelling and pronunciation test, a word families and sentence creation test, and a Kahoot! quiz reviewing the previous week’s words. “Scores were recorded, feedback given, and progress tracked for these tests” (Mitchell, 2016, p. 20). In the middle of the cycle, students received a self-evaluation survey to capture their perceived vocabulary development so far, to evaluate the usefulness of FLAX, and to set vocabulary learning goals for the second half of the program. At the end of the cycle, the students completed a reflective writing task and a second VLS Questionnaire to “foster reflection on changes to students’ strategy use” (ibid.). The program also included three summative-type assessments administered in the last four weeks.

In sum, this AR project illustrates systematic evidence collection and record-keeping for LOA purposes: the teacher-researcher gathered qualitative data from student reflective tasks (e.g., blog posts, surveys, questionnaires, and reflective paragraphs), student interviews, and her own observations, as well as quantitative data from formative paper-based and Kahoot! quizzes and from scores on summative assessments. Moreover, evidence was collected throughout the course, from the very beginning to the end. This systematic collection of evidence yielded a rich picture of the learners’ profiles: the majority made positive changes to their vocabulary learning habits, noticed progress in their vocabulary skills, and made more frequent use of academic words in their writing. However, these improvements were not necessarily reflected in their overall writing test scores, a finding attributed to the brevity of the study cycle (10 weeks), which may not have allowed enough time for noticeable score gains.

 