Step 3: Design, Conduct, and Analyze Mental Models Interviews

The next step in the Mental Modeling approach is designing and conducting one-on-one in-depth interviews following a semistructured interview protocol. The research sample of individuals representing the stakeholder population(s) of interest (or cohort) is the core of the Mental Modeling research approach. This sample usually comprises 20-30 individuals, each representing a focal stakeholder. Stratified sampling is used in order to reveal the breadth of perceptions held. Research interviewees are selected from a larger pool of individuals to allow for random sampling and to provide a level of confidentiality. Subcohorts or a matrix cohort design may also be used to ensure representation of gender or other demographic factors.
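The sampling step above can be sketched in code. The following is a minimal illustration, not part of the Mental Modeling methodology itself: it assumes a hypothetical recruitment pool of candidate records and stratifies a random draw of 25 interviewees by one demographic factor (here, gender); the function name and record fields are invented for the example.

```python
import random
from collections import defaultdict

def draw_stratified_sample(pool, stratum_key, sample_size, seed=None):
    """Randomly draw `sample_size` candidates from a larger pool,
    allocating slots across strata in proportion to their share of
    the pool so each demographic group is represented."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for person in pool:
        strata[person[stratum_key]].append(person)
    sample = []
    for stratum, members in sorted(strata.items()):
        # Proportional allocation, with at least one slot per stratum.
        quota = max(1, round(sample_size * len(members) / len(pool)))
        sample.extend(rng.sample(members, min(quota, len(members))))
    # Rounding can leave a shortfall; top up from candidates not yet chosen.
    remaining = [p for p in pool if p not in sample]
    while len(sample) < sample_size and remaining:
        pick = rng.choice(remaining)
        remaining.remove(pick)
        sample.append(pick)
    return sample[:sample_size]

# Hypothetical recruitment pool of 100 candidates.
pool = [{"id": i, "gender": "F" if i % 2 else "M"} for i in range(100)]
sample = draw_stratified_sample(pool, "gender", 25, seed=42)
```

Drawing randomly from a pool several times larger than the final sample mirrors the confidentiality rationale in the text: no one outside the research team can infer who was interviewed.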

Mental Models interviews follow a semistructured interview protocol designed to explore key topics identified in the expert model. Interviewers trained in the Mental Modeling approach are oriented to the project and trained on the interview protocol. Once the sample is developed, the interviews are conducted, typically over the phone, but sometimes in person if appropriate or required.[1] Interviews are recorded with the interviewee's permission, and transcripts are produced and used as the primary data in structured analyses.

Questions, particularly early in the interview, are typically structured to elicit people's mental models using a "what comes to mind when you think about" approach, asking the interviewee to think freely about a general topic rather than respond to a more narrowly focused question. Interviewers also use general prompts such as, "Can you tell me more about that?" or "Why do you say that?" to probe interviewee responses, encouraging them to speak at length. This approach is specifically designed to allow topics of interest to the interviewee to emerge more readily, in the language and terminology that they would normally use. As the interview progresses, more specific and directed questions are used to ensure coverage of all relevant variables in the expert model.

The interview data are then coded and analyzed against the expert model in order to describe stakeholders' beliefs about the topic, including: their values, interests, and priorities; what they know; what they don't know or misunderstand; what they want to know; and who and what communications processes they trust. Depending on the needs and complexity of the project, formal or informal coding approaches can be applied. For less complex projects, where one simply needs to summarize the prevalence of perceptions and beliefs, a basic one-pass direct coding process may be used, linking interviewee responses to specific concepts.
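A one-pass direct coding process of this kind can be sketched as follows. This is an illustrative simplification under stated assumptions: the codebook of expert-model concepts and their indicator phrases is hypothetical, and real projects use trained human coders (or qualitative-analysis software) rather than keyword matching.

```python
from collections import Counter

# Hypothetical codebook: expert-model concepts keyed to indicator phrases.
CODEBOOK = {
    "exposure": ["contact", "exposed", "breathe"],
    "health_effects": ["sick", "illness", "cancer"],
    "trust": ["trust", "believe", "credible"],
}

def code_response(response, codebook):
    """Single-pass direct coding: return every expert-model concept
    whose indicator phrases appear in the interviewee's response."""
    text = response.lower()
    return {concept for concept, cues in codebook.items()
            if any(cue in text for cue in cues)}

def prevalence(transcripts, codebook):
    """Count how many interviewees mentioned each concept at least once."""
    counts = Counter()
    for transcript in transcripts:
        concepts = set()
        for response in transcript:
            concepts |= code_response(response, codebook)
        counts.update(concepts)
    return counts

# Two invented transcript fragments, one list of responses per interviewee.
transcripts = [
    ["I worry about getting sick from contact with it."],
    ["I just don't trust what the company tells us."],
]
tally = prevalence(transcripts, CODEBOOK)
```

Counting each concept at most once per interviewee, as above, yields the prevalence of a belief across the sample rather than how often any one person repeated it.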

For more complex challenges, where stakeholders’ perceptions are likely to cover a broad spectrum of beliefs that are often more nuanced, or for projects that require application of more rigorous academic research standards for coding and analysis, a multiple-pass approach may be more appropriate. In a multiple-pass coding approach interviewee responses are first “tagged” to link responses to general topics (often expert model nodes or basic themes). This facilitates a more thorough exploration of the interview data than a linear, “by-question” coding process. In the second coding pass, responses are coded against more specific emerging themes. The prevalence of these themes is then enumerated and reported.
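The two coding passes described above can be sketched in the same style. Again this is a minimal sketch with an invented tag set and theme list: pass one tags whole responses to general topics (standing in for expert-model nodes), and pass two codes the tagged responses against more specific emerging themes before enumerating them.

```python
from collections import Counter, defaultdict

# Pass 1: general topics standing in for expert-model nodes (hypothetical).
TOPIC_TAGS = {
    "risk": ["danger", "risk", "harm"],
    "information": ["heard", "news", "told"],
}

# Pass 2: more specific emerging themes nested under each topic (hypothetical).
THEMES = {
    "risk": {"harm_to_children": ["kids", "children"],
             "long_term_harm": ["years", "long term"]},
    "information": {"distrust_of_media": ["don't believe the news"]},
}

def tag_responses(responses, topic_tags):
    """First pass: attach each response to every general topic it touches,
    so later passes can explore the data by topic rather than by question."""
    tagged = defaultdict(list)
    for response in responses:
        text = response.lower()
        for topic, cues in topic_tags.items():
            if any(cue in text for cue in cues):
                tagged[topic].append(response)
    return tagged

def code_themes(tagged, themes):
    """Second pass: within each topic, code responses against the
    emerging themes and enumerate how often each theme occurs."""
    counts = Counter()
    for topic, responses in tagged.items():
        text_themes = themes.get(topic, {})
        for response in responses:
            text = response.lower()
            for theme, cues in text_themes.items():
                if any(cue in text for cue in cues):
                    counts[theme] += 1
    return counts

responses = ["The risk to our kids worries me most.",
             "I heard about it but I don't believe the news."]
theme_counts = code_themes(tag_responses(responses, TOPIC_TAGS), THEMES)
```

Separating the two passes is the point of the design: the first pass organizes the raw material by topic without committing to an interpretation, which leaves room for themes to emerge from the data in the second pass.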

The comparison of structured qualitative analysis of the interview results against the expert model enables identification of key areas of alignment and critical gaps between the expert knowledge and the thinking of stakeholders, identifying: what stakeholders know, what they don’t know or misunderstand, what they want to know, and who and what communications sources and methods they trust. This analysis provides the requisite insight to develop precisely targeted strategies, policies, interventions, and/or communications materials with clear, measurable behavioral outcomes.

  • [1] In-person interviews can add considerable time and cost and may increase the potential for "please-the-interviewer" bias compared to phone interviews, which may be perceived as more equitable by participants.
 