Recommendations to Policy Makers
Student survey data has been used to generate evidence on what works and what does not in how higher education institutions conduct teaching and enable learning and development. This evidence serves several purposes and users: it informs the policy and practice of institutions themselves, it informs the policy of governments, and it informs higher education stakeholders, first and foremost students and their families. Given the vast implications of using survey data as evidence and as information in decision processes, student survey data and the methods used to collect it ought to be scrutinized for reliability and validity. Several quality standards can serve as guidance in designing student surveys and in evaluating the quality of survey data (cf. Alderman et al. 2012; Harvey 2003; Porter 2004; Richardson 2005):
1. Surveys have an explicitly stated purpose that leads to quality enhancement. They are tailored to that specific purpose (Alderman et al. 2012).
2. Student feedback is sought “at the level at which one is endeavouring to monitor quality”, as soon as possible after the relevant educational activity (Richardson 2005, p. 409), and ideally repeatedly to monitor trends.
3. Survey instruments that aim at inter-institutional comparisons serve best as screening tools when two conditions are met: (i) the compared institutions are alike in their mission, purpose and resources, and (ii) the unit of analysis sits low in the institutional hierarchy (surveys at the program level offer the most desirable points of comparison).
4. Students and other stakeholders are involved in the entire process of survey design, implementation, analysis and reporting to aid relevance, clarity, and legitimacy of surveys.
5. Survey design is critically appraised as to its underlying ideological and policy frames: How are different values negotiated, balanced, and reflected in the survey instruments? What value signals is the institution sending through the questionnaires? Such critical, reflexive processes can be more fruitful if different epistemic communities are involved, especially students, who are directly affected by the policy interventions and who have first-hand experience of practice (cf. Klemenčič and Brennan 2013).
6. If the institution administers several surveys, possibilities for integrating them are explored. The different surveys are checked for possible conflicts in timing of administration, duplication of questions, etc.
7. Several methods have been shown to increase response rates: multiple contacts; incentives included with the survey instrument (not conditional on completion); a statement of high survey salience to students; and a request for help in the cover letter (Porter 2004).
8. Participants in the survey are made aware of how the data will be used, i.e. the feedback loop. This may raise survey salience, i.e. the importance or relevance that students attribute to the survey topic, which has been shown to raise response rates (Porter 2004).
At best, student surveys are used as screening instruments to discover major deficiencies in the educational environment and provision, and major discrepancies between actual and expected student behaviour. Such diagnostic results in turn guide institutional managers to explore the causes and consequences of various practices and processes. This is done through qualitative methods, which can generate contextualized data on student experience and behaviour—indeed richer, deeper and more authentic data, albeit on a smaller scale—by focusing on the 'particular'. With the advancement of new technology and the universal use of digital media by students (Gardner and Davis 2013), research is already underway seeking to adapt qualitative empirical methods to digital use in order to canvass data on student experience on a large scale (such as digital ethnography and digital phenomenology by Klemenčič 2013); more exploratory and innovative research in this area is called for.
The rise of big data on students will make institutional research more complex and challenging. Institutional researchers will need to learn how to leverage data resources effectively to support decision-making. From basic student records, which have become automated, attention is shifting to 'issue intelligence' and 'contextual intelligence' to aid policy and strategic planning, including forecasting and scenario building (Klemenčič and Brennan 2013). Along with questions of what constitutes sound evidence for policy-making, more attention is being devoted to institutional capacities for institutional research and data analytics to support decision-making.
Acknowledgment Support to Igor Chirikov from the Basic Research Program of the National Research University Higher School of Economics is gratefully acknowledged. The authors would like to thank Paul Ashwin for most helpful feedback on an earlier draft.