AN IMPACT ANALYSIS OF QUESTIONS WITHIN AN EXTERNAL EXAMINATION ON ENGLISH LANGUAGE: Reflecting on validity and reliability

Introduction

The last two decades have seen significant conceptual, methodological, and practical innovations in language learning and teaching in Bangladesh following the introduction of the Communicative Language Teaching approach in the national school curriculum. The first-ever Bangladesh Education Policy (2010) underscored the need to develop a workforce that can access and compete in local as well as global job markets, and it emphasised the teaching and learning of Science, Mathematics, and English in order to achieve the vision of a ‘Digital’ Bangladesh by 2021, when the nation will be celebrating 50 years of independence (Erling, Seargeant, Solly, Chowdhury, & Rahman, 2012). Reflecting the spirit of the National Education Policy 2010, the English language curriculum stressed the need to develop communicative competence in English among secondary school learners. However, the gradual deterioration of Bangladeshi learners’ level of English has sparked a vigorous debate in academia regarding the quality of state-sponsored English language policy, programmes, and practices, including the assessment system (Hamid, 2010; Hamid, Sussex, & Khan, 2009). In fact, the state of English language instruction in the Bangladeshi mainstream school system is severely plagued by the external high-stakes examination system, which excludes the two important oral-aural skills (speaking and listening) and assesses only reading and writing. A test format from which two important skills have been arbitrarily removed raises questions about its capacity to assess students’ communicative abilities, particularly in terms of validity and reliability. While one major concern remains the total exclusion of speaking and listening from the current SSC test format, there are also questions about the way the reading and writing skills are tested (Haider, 2008; Khan, 2010).

The debates on the quality of the existing language testing policy and format have been fuelled by the findings of research studies that have revealed serious ‘backwash’ effects of current testing practices on teaching-learning activities and pointed to a strong presence of ‘teaching to the test’ in Bangladeshi English language classrooms (Hamid, 2011; Islam, 2015; Karim, 2004; Rahman, 2015; Selim & Mahboob, 2001). What has added to the already dismal state of English language education in the country is the inadequacy of assessment knowledge, which may ‘cripple the quality of education’ (Popham, 2009, p. 43). It is argued that while teachers with assessment training and skills use tests to improve teaching and learning, teachers without such training use tests merely to assign grades (Lopez & Bernal, 2009). Thus, the development, validation, and delivery of English language test items have become a major concern for the country’s English language teaching policymakers and practitioners.

In Bangladesh, English is taught as a compulsory subject in primary and secondary schools (Grades 1-12). After completing Grade 10, students sit for an external examination, the Secondary School Certificate (SSC), administered by eight Boards of Intermediate and Secondary Education (BISE). Along with other subjects, SSC candidates sit for two English papers (Paper I and Paper II), which cover different aspects of English reading and writing skills. As prescribed by the English curriculum of the National Curriculum and Textbook Board (NCTB, 2012), after ten years of schooling, learners are expected to use English in real-life situations by acquiring the necessary knowledge and skills, learning about cultures and values, pursuing higher education, and finding better jobs nationally and globally. However, there is a scarcity of research in Bangladesh exploring how far the test format and tools used in the SSC English examination, and the marking procedure adopted by examiners, are in line with the intentions of the NCTB curriculum.

Test items are considered the building blocks of a test, and the validity of a test can be greatly affected by the way its items are developed and validated (Haladyna & Rodriguez, 2013). According to Thorndike (1967), the more effort that goes into building better test items, the better the test is likely to be (as cited in Haladyna & Rodriguez, 2013, p. 3).

This chapter reports the findings of an impact study on the quality of the 2017 SSC English examination as conducted by three of the eight examination boards, examining the quality of the question papers (test items) and the reliability of the marking procedure. In particular, the following research questions are put forward:

a. What is the quality of SSC English test questions in terms of their ability to discriminate between more- and less-able candidates?

b. How well do the test questions specifically relate to the intentions of assessing reading and writing skills in line with the SSC curriculum?

c. How reliable are the final marks awarded by the examiners and head examiners within three examination boards?
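As a point of reference for interpreting questions (a) and (c), two indices from classical test theory are sketched below. These are illustrative assumptions on my part, common measures of item discrimination and inter-marker reliability, and not necessarily the exact statistics employed in this study.

% Illustrative only; the study's actual measures may differ.
% Item discrimination index (upper-lower method):
% p_U and p_L are the proportions of correct responses in the
% upper- and lower-scoring groups (often the top and bottom 27%
% of candidates).
\[
D = p_U - p_L, \qquad -1 \le D \le 1
\]
% Values of D near zero (or negative) flag items that fail to
% separate more-able from less-able candidates.

% Inter-rater reliability between examiners' marks x_i and head
% examiners' marks y_i over n scripts, as a Pearson correlation:
\[
r = \frac{\sum_{i=1}^{n}\left(x_i - \bar{x}\right)\left(y_i - \bar{y}\right)}
{\sqrt{\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^{2}}\,
 \sqrt{\sum_{i=1}^{n}\left(y_i - \bar{y}\right)^{2}}}
\]
% Values of r close to 1 indicate that examiners and head
% examiners rank and mark scripts consistently.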

 