Question construction: Introduction

The way a question is constructed influences respondent comprehension, information retrieval, judgement and reporting of subjective well-being. Seemingly small differences between questions may impact on the comparability of data collected through two or more different measures as well as their internal consistency and test-retest reliability over time. Questions that are easily understood, low in ambiguity, and not too burdensome for respondents should reduce error variability and enhance the validity of responses.

Box 2.1. Factors thought to influence the likelihood of error, response biases and heuristics

Factors associated with the underlying construct of interest

Task difficulty
• How easy or difficult is it for respondents to think about the construct or recall it from memory?

Translatability
• How easy or difficult is it to translate the construct into different languages?

Risk of social norms
• How likely is it that there are social norms associated with the construct, i.e. normatively “good” and “bad” answers?

Risk of influence by momentary mood
• How likely is it that respondents’ momentary mood can influence how they remember/assess the construct of interest?

Risk of respondent discomfort
• How likely is it that respondents will find questions irritating or intrusive?

Respondent interest/engagement
• How relevant or interesting do respondents find the construct being measured?

Survey design factors

Question wording
• Is the wording complex or ambiguous? Can it be easily translated across languages and cultures? Is the tone of the question sufficiently neutral, or does it suggest particular answers should be favoured?

Response formats
• Is the wording complex, ambiguous or difficult to translate? Can the response options be easily remembered? Can respondents reliably distinguish between response categories? Are there enough response categories to enable views to be expressed fully?

Question order
• Do preceding questions influence how an item is interpreted and/or prime the use of certain information when responding?

Survey source/introductory text
• Does the information provided to respondents suggest that a certain type of response is required (demand characteristics) or promote socially desirable responding?

Survey mode
• Does the survey mode influence respondent motivation, response burden (e.g. memory burdens) and/or the likelihood of socially desirable responding?

Wider survey context
• Does the day of the week or the time of year affect responses? Could day-to-day events (such as major news stories) or the weather influence responses?

Respondent factors

Motivation
• Are respondents equally motivated?

Fatigue
• Are respondents equally alert and engaged?

Susceptibility to social pressure, norms or demand characteristics
• Do respondents vary in their susceptibility to social pressure and/or their likelihood of responding in a socially desirable manner?

Language differences
• Do language differences between respondents influence how respondents interpret questions and response formats?

Cultural differences
• Do cultural differences affect the type of response biases or heuristics that might be seen when respondents are satisficing?1

Knowledge
• Do some respondents lack the knowledge or experience needed to answer the question (but attempt to do so anyway)?

Cognitive ability
• Do respondents vary in their ability to understand the question and/or in their memory capacity?

1. Satisficing is when a respondent answers a question using the most easily available information rather than trying to recall the concept that the question is intended to address. A satisficing respondent may make use of a simple heuristic to answer the question or draw on information that is readily available in their mind rather than trying to provide a balanced response.

Question construction requires consideration of the precise wording of a question, and its translation into other languages where necessary, as well as the reference period that respondents are asked to consider when forming their answers (e.g. “in the last week” versus “in the last month”). Because this topic is so large, detailed discussion of response formats is handled in its own separate section (Section 2, which follows), although the response formats that can be used will of course be influenced by how the initial question is framed.

Questions themselves are the most direct means through which a surveyor communicates their intent to respondents, but they are not the only source of information to which respondents may attend. Thus, comparability perhaps starts, but certainly does not end, with question construction; other methodological factors influencing comparability are addressed in the sections that follow.

The section below describes issues of question construction in relation to evaluative measures, regarding assessments of life overall; affective measures, capturing recent experiences of feelings and emotions; and psychological well-being or eudaimonic measures.

Table 2.2. Guide to the issues covered in this chapter

1. Question construction
Survey design issues under consideration:
• Question wording.
• Length of the reference period.
Key sources of error considered: Communication/translation failures. Memory failures. Response biases and heuristics.
Interactions between survey design issues: Response formats (partly determined by question wording). Survey mode.

2. Response formats
Survey design issues under consideration:
• Number of response options to offer.
• Labelling of response categories.
• Unipolar versus bipolar measures.
• Order of presentation of response categories.
Key sources of error considered: Communication/translation failures. Memory failures. Response biases and heuristics.
Interactions between survey design issues: Question wording (partly determines response format). Survey mode.

3. Question context, placement and order effects
Survey design issues under consideration:
• Question context and order effects.
• Question order within a module of subjective well-being questions.
• Survey source and introductory text.
Key sources of error considered: Contextual cueing. Response biases and heuristics relating to demand characteristics, social desirability, consistency motif and priming effects.
Interactions between survey design issues: Survey mode. Survey type (e.g. general household versus specific purpose).

4. Mode effects and survey context
Survey design issues under consideration:
• Survey mode.
• When to conduct the survey.
Key sources of error considered: Response biases and heuristics, particularly relating to respondent motivation/burden and social desirability. Contextual cueing as a result of the wider survey context:
• Day-to-day events.
• Day of week.
• Seasonal effects.
• Weather effects.
Interactions between survey design issues: Question construction and response formats. Survey type (e.g. general household versus specific purpose).

5. Response styles and the cultural context
Survey design issues under consideration:
• Risk of response styles.
• Risk of cultural differences in response styles.
Key sources of error considered: Consistent response biases and heuristics, associated with individual respondents. Cultural differences in characteristic response biases, heuristics and styles.
Interactions between survey design issues: Cross-cutting section with relevance throughout.

Some examples of these types of measures are included in Annex A. Many of the general principles discussed here will apply to other forms of self-report measures, but the emphasis here is on evidence relating specifically to subjective well-being, where this is available.

 