Overview of the Most Influential Student Experience and Engagement Surveys

In this section we explore the theoretical foundations, content, measurement, data collection, analysis, and uses of the major student engagement and experience surveys. Our attention is focused on several system-wide and international “flagship” projects that have attracted attention due to their widespread use: the National Survey of Student Engagement (NSSE) in the US and Canada (which has been adapted into a number of NSSE-based national surveys), the Student Experience in the Research University (SERU) survey in the US and internationally (SERU-AAU and SERU-I, respectively), the National Student Survey (NSS) in the UK, and the Dutch National Student Survey (NSE) in the Netherlands.[1] The former two surveys were initiated by universities themselves over a decade ago for the purposes of inter-institutional comparison and institutional improvement. The latter two were introduced more recently by governmental agencies to increase the transparency of the higher education system and to inform student choice of institutions and study programs. This reflects two different approaches to student survey development (bottom-up vs. top-down) and affects both the surveys' methodology and the uses of their data.

These four surveys have adjacent intellectual roots but rest on different meanings of student experience. The underlying idea of NSSE and SERU is that student learning outcomes are affected by the characteristics of higher education institutions and their academic programs (Astin 1985; Pascarella and Terenzini 1991). Both emphasize students' active engagement in educational practice as well as in extracurricular and civic activities, which they consider as important for student learning outcomes as the quality of institutional efforts to support student learning and development (Kuh 2001, 2003; McCormick et al. 2013). The SERU survey specifically targets research universities (Kerr 2001): its content seeks to reflect the specific institutional characteristics of research universities by focusing on student engagement in three inter-related areas: teaching and learning, research, and civic service. In contrast, the NSS and NSE instruments are primarily concerned with the assessment of the student course experience and seek to capture the various facets of the student learning process (Biggs 1987b; Prosser and Trigwell 1999; Ramsden 1979; Richardson 1983) by adapting instruments such as Ramsden's Course Experience Questionnaire (Ramsden 1991). They are focused on quality assessment and measure student satisfaction with other aspects of teaching and learning organisation, support, and environment,[2] and do not include measures of student engagement as students' own contribution to their learning and development.

Data collection and analysis procedures are similar across these four surveys, though target populations and response rates vary. All surveys are centrally administered (either by universities or by independent companies) every year, and data is collected primarily through online platforms (NSSE and NSS additionally use paper-based questionnaires). All surveys are census-based.[3] SERU and NSE include all undergraduate students in their target populations, NSSE first- and last-year students, and NSS final-year students only. NSS stands out with an average response rate of more than 70 %, while SERU, NSSE, and NSE demonstrate average response rates of 25–35 %. Data is centrally managed and analyzed in the case of NSSE, NSS, and NSE: participating institutions have access only to aggregated results of students' responses. SERU takes a decentralized approach to data analysis and provides for benchmarking, as all members of the consortium share their databases reciprocally.

The uses of the data from these four surveys are shaped by their origins and scope: NSSE and SERU data is used by universities mostly for internal quality enhancement, whereas NSS and NSE data is targeted in particular at external agencies and stakeholders. Since institutions participate in NSSE and SERU voluntarily, the data is used for institutional self-improvement and quality assurance efforts through benchmarking. NSSE examples include voluntary accreditation, efforts to increase retention rates, the reorganization of student services, diversity initiatives, etc. (see NSSE 2009, 2012 for more examples). SERU is more focused on informing academic department program reviews, though it is also used campus-wide for voluntary accreditation, assessment of campus climate, analysis of admission policies, etc. (see SERU 2014a). The major difference between NSSE and SERU in terms of data use is that the former provides more information on various types of institutions (four-year colleges, teaching universities), while the latter is focused on the research university environment and makes it possible to address narrow questions about specific student sub-groups, which is valuable for large research universities.[4] NSS and NSE data is used to inform prospective students' decision-making in higher education: the results are publicly available and are utilized in web-based platforms for comparing universities and academic programs. Universities also use this data to support internal discussions on teaching and learning and to improve the quality of student services, as well as for marketing purposes. The major characteristics of these four surveys are summarized in Table 2.

Table 2 A comparison of major student engagement and experience survey designs

| | NSSE | SERU | NSS | NSE |
|---|---|---|---|---|
| Purpose | To assess student engagement in and exposure to proven educational practices that correspond to desirable learning outcomes (NSSE 2012) | To understand student experience in research-intensive universities and to promote a culture of institutional self-improvement (SERU 2014b) | To measure student satisfaction with their courses and to help prospective students make study choices (NSS 2014) | To assess students' experience and satisfaction with the higher education course they pursue (NSS 2014) |
| Participation for universities | Voluntary | Voluntary | Obligatory for publicly funded universities in the UK | Obligatory for accredited Dutch higher education institutions |
| Theoretical foundations | Student engagement (Kuh 2001, 2003; Pascarella and Terenzini 2005) | Input-environment-output model (Astin 1985); research university (Kerr 2001) | Approaches to learning (Biggs 1987b; Prosser and Trigwell 1999; Ramsden 1979, 1991; Richardson 1983) | Multiple instruments on student engagement, satisfaction and learning outcomes |
| Survey content: topics | Participation in educationally purposeful activities, institutional requirements of coursework, perceptions of the college environment, educational and personal growth, etc. | Academic, research and civic engagement, time allocation, learning outcomes assessment, campus climate, plans and aspirations, satisfaction with academic program, global experiences, learning and technology, etc. | Satisfaction with teaching quality, assessment and feedback, academic support, organization and management, learning resources, personal development, overall experience, etc. | Content and organization of teaching, acquired skills, preparation for career, academic guidance, quality of assessment, contact hours, internships, quality of learning environment, etc. |
| Survey content validity and reliability studies | McCormick and McClenney (2012), Pascarella et al. (2008), Pike (2013) | Chatman (2009, 2011) | Callender et al. (2014), Richardson et al. (2007) | Brenders (2013) |
| Data collection: sample | Census-based/random sample survey of first-year and senior students | Census-based survey of undergraduate students | Census-based survey of last-year students | Census-based survey of undergraduate students |
| Data collection: method and frequency | Online and paper-based; once a year | Online; once a year | Online and paper-based; once a year | Online; once a year |
| Data collection: response rates | 25–30 % | 25–30 % | 71 % | 34 % |
| Data analysis | Centralized approach; engagement indicators (benchmarks) and item-by-item comparisons | Decentralized approach; factor scores and item-by-item comparisons | Centralized approach; item-by-item comparisons | Centralized approach; item-by-item comparisons |
| Data use | Mostly internal: for benchmarking, voluntary accreditation, decision-making support | Mostly internal: for program review, voluntary accreditation, decision-making support | Mostly external: to inform prospective students' choice of academic program, to create league tables, for marketing purposes | Mostly external: to inform prospective students' choice of academic program, for marketing purposes |

  • [1] Of course there are many more large-scale student surveys worldwide (run by universities, ranking agencies and pollsters), but these projects are fairly representative of cutting-edge student engagement surveys in terms of their methodology, scope and data use
  • [2] For a recent review of NSS methodology, see Callender et al. (2014)
  • [3] A few institutions administer NSSE to a random sample of their students
  • [4] For example, SERU data can be used to examine the low level of research engagement among female junior transfer students majoring in STEM, as there is usually enough data to compare such minority groups across institutions