Analysing qualitative data

Introduction

This chapter focuses on how to analyse non-numeric data such as interviews, focus groups, surveys, talk, text and visual data. These data include written, printed or web texts, transcripts of spoken words, field notes from observations, research memos and images (Fairclough 2003: 3). Just as a range of data sources falls under the umbrella of 'qualitative data', so there is a range of strategies available for analysing them. There is, however, no single correct way to carry out qualitative data analysis, and you will need to spend time choosing an approach that suits you. Qualitative data analysis is time-consuming, interpretative and iterative; it involves close attention to detail and good organisational skills. The chapter explains how computer software, such as NVivo, can be used as a tool to help you analyse your qualitative data.

By the end of this chapter, you should have a better understanding of:

  • Preparing your data for qualitative analysis
  • Different methods or ways of analysing qualitative data
  • Computer software packages to aid your qualitative data analysis
  • Specific approaches to qualitative data analysis, through case study and example
  • How to assess the quality of your qualitative data analysis.

Preparing data for analysis

You might be lucky enough to have data in a form which is ready to analyse, for example, documents in electronic form that you can manipulate. Most qualitative data, however, comprise written notes (perhaps scribbled during field observations or interviews), audio files and visual data. The process of getting information, comprising spoken words, ideas, thoughts, speeches, etc., into a written or printed form is generally referred to as transcribing.

Most audio recordings of interviews will need to be transcribed. How much of the raw data you transcribe will depend on the purpose of your interview. Some transcriptions capture only a summary of the main points of an interview, for example, while others need to be full and verbatim. If your aim is to gather factual information from your recorded interview, you may be selective in what you choose to transcribe; a few extracts from your recorded interview, used as quotes, may be sufficient. If, however, you are interested in the structure of your respondents' arguments in order to understand the implied meanings of their words, then you may need an extensive transcript. The most commonly used approach is verbatim transcription (Figure 10.1), where you note who spoke and exactly what they said. It is important that you remove any identifying features from the transcripts (such as names, addresses and other personal data). See Chapter 8 for ethics and law regarding the handling of personal data.
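The removal of identifying features can be done by hand, but if your transcripts are in electronic form a simple script can make the substitutions consistently. The sketch below is a minimal, hypothetical illustration: the names, placeholders and transcript text are all invented, and a real project would need a carefully compiled list of identifying details.

```python
# Hypothetical sketch: pseudonymise a transcript by replacing known
# identifying strings with labelled placeholders before analysis.
# All names and text here are invented for illustration.
replacements = {
    "Sarah": "[Respondent 1]",
    "Dr. Okafor": "[GP]",
    "Maple Street": "[street name]",
}

def pseudonymise(text, replacements):
    """Replace each identifying string with its placeholder."""
    for name, placeholder in replacements.items():
        text = text.replace(name, placeholder)
    return text

raw = "Sarah said she visited Dr. Okafor at the Maple Street clinic."
print(pseudonymise(raw, replacements))
# [Respondent 1] said she visited [GP] at the [street name] clinic.
```

Keeping the replacement table in one place also gives you a record of the pseudonyms used, which should itself be stored securely and separately from the transcripts.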

In your transcription, you might tidy up the language so that all the conversational fillers such as 'erm' are removed, but equally you might leave them in. If you are doing conversational or discourse analysis, you are probably going to want to have more detail in your transcript, including markers of stress, intonation and changes in pitch or volume. Transcription is time-consuming, so it is essential you leave enough time to transcribe your data. The more detailed your recording, the longer it will take to transcribe. Whichever method of transcription you use, it is important to ensure the accuracy and integrity of the information is maintained and you are able to check your transcription against the original data to ensure there is no loss of critical information during transcription.

Although it seems like a long and tedious task, transcribing your own data gives you a head start with your data analysis because you will become very familiar with it. It is good practice to duplicate your original data or recordings used in qualitative analysis and to catalogue or index each piece of information with a unique serial number for reference purposes.
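If your data are stored electronically, the cataloguing described above can be kept as a simple index. The following sketch is purely illustrative: the file names, data types and serial-number format are invented, and you should adopt whatever numbering convention your project requires.

```python
# Hypothetical sketch: assign each piece of raw data a unique serial
# number and keep a simple catalogue for reference purposes.
# File names and fields are invented for illustration.
raw_items = [
    ("interview", "audio_respondent_a.mp3"),
    ("interview", "audio_respondent_b.mp3"),
    ("field notes", "site_visit_notes.txt"),
]

catalogue = []
for n, (kind, filename) in enumerate(raw_items, start=1):
    serial = f"QD-{n:03d}"  # e.g. QD-001, QD-002, ...
    catalogue.append({"serial": serial, "type": kind, "file": filename})

for entry in catalogue:
    # e.g. QD-001 interview audio_respondent_a.mp3
    print(entry["serial"], entry["type"], entry["file"])
```

An index like this makes it straightforward to cite the source of any extract later in your write-up.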

Phases in qualitative data analysis

Figure 10.1 Extract from a verbatim transcript

In an article written for 'rookie' qualitative researchers, Baptiste (2001) outlines a framework which aims to help researchers new to the field see the common features of qualitative data analysis. The paper was written because he found that his students were bewildered by the multitude of approaches. He outlines four phases:

  1. Defining the analysis
  2. Classifying the data
  3. Making connections between and among categories of data
  4. Conveying the message or the write-up.

The key stages and phases in qualitative data analysis are summarised in Figure 10.2. The stages may not necessarily follow a linear chronological order, as it is often necessary to revisit or review actions in previous phases to improve the reliability of your qualitative research.

Figure 10.2 Stages in qualitative data analysis

Defining the analysis

This first phase is as much a part of your research design as it is a part of data analysis, further highlighting the point that design, data collection and data analysis are inextricably linked — especially in qualitative research.

The approach adopted for your qualitative data analysis is often dependent on what you consider to be valued as knowledge: your epistemology. The epistemological questions that you ask yourself relate to how you would try to acquire knowledge, what you believe counts as knowledge and how you would know (Baptiste 2001). You should also consider what you deem to be real: your ontology. When we think about ontological positions, we need to be clear about what we see as real and how our understanding of reality shapes how we do our research (Baptiste 2001). A further consideration is around the values and ethics associated with your approach to your research: your axiology. You need to consider to what extent your values will impact on your analysis, the role that your research participants will play in your research and what you will then do with the output of your research. The main factors to consider while choosing your qualitative data analysis approach are shown in Figure 10.3.

It may seem that this section is encouraging you to ask some very hard and deeply philosophical questions about your research: that is the aim. Qualitative analysis, by nature, is interpretive, and your epistemological, ontological and axiological positions will inevitably influence your methodology and subsequently your approach to analysis. It is therefore important for you to be able to articulate your position and to be able to evaluate the impact that position will have on your research — particularly its strengths and limitations.

Classifying the data

In the second phase, you begin the process of classifying your data. Becoming familiar with the data is one of your first priorities. Before you formally begin your analysis, you should allocate some time to look through and read the data collected. This will include listening to the recordings of any interviews and reading through the transcripts, looking at the notes you wrote for yourself during the data collection period, and watching any video footage that you have. As you go through this cycle of reading and rereading, you will already be starting to notice patterns and similarities in your data, and you are beginning to develop a template for conducting the full analysis. Analysing qualitative data is very time-consuming; you may need to go back over transcripts when new ideas occur to you, so make sure you leave enough time to do that. Analysis isn't about reporting what your participants say; it's about what they mean. You need to try to move beyond just describing your data to actually understanding your participants' views and thoughts on the issue or subject of investigation.

Coding your data

Having classified your data, the next step is to further develop the template through coding. The process of coding involves picking out the bits of your data set that you think are interesting and useful for the research that you are carrying out. The criteria for assigning codes to your data may be based on a topic, story, event, signifier, idea, theme, concept or theory relating to or suggested by your data.

Figure 10.3 Factors to consider in choosing a qualitative data analysis technique

You can code your data deductively, i.e. based on your pre-existing ideas, hunches or theories about what you think the data may be showing or what you consider important within the data, or inductively, based on emerging issues or patterns within the data (Neale 2016). You may find that the notes you made while collecting your data could be developed into codes or labels. The same codes might come up frequently; you can then read through the data set and highlight every time that the code is represented. You can choose to use the actual words that are used in your data set as your labels, which is called 'in-vivo coding', or you can choose to name them yourself. If you analyse your data more deductively, the template you are using is pre-defined rather than emergent, which means you will go into the data set to look for specific concepts matching your pre-defined codes before you start your analysis. Whichever method you choose will, invariably, be influenced by your experience and the literature you have read. You may even decide to combine both inductive and deductive coding. Figure 10.4 shows possible criteria for assigning labels or codes to your data.

Once you are happy with your coding scheme, there are different ways that you can mark your codes. You may choose to make notes of key words in the margin of your transcript. Alternatively, you might prefer to get out your highlighter pens and to mark up the codes in different colours. You will find that some pieces of data will relate to more than one code, so you need to be sure that the method you have adopted can cater for these overlaps. The important point is that you find a method that works for you. Figure 10.5 shows a transcript which is in the process of being annotated.

As you work through the data identifying codes and highlighting them, you may begin to suffer from 'code overload'. At this point, you need to stand back and see whether you are using different codes to describe the same thing. If you are, then you should see whether one code can be used instead. This pruning of codes will be ongoing throughout the analysis. Having coded all of your data, you can now start to look for ways to connect those codes together into themes, concepts or categories. As you do this, you should be checking whether these themes are distinct from each other and whether you have sufficient data to support each of them. You should be willing to redefine the categories as you go deeper into your analysis.
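Pruning codes amounts to folding synonymous labels into one. A minimal, hypothetical sketch of that bookkeeping, with invented code names and segment identifiers, might look like this:

```python
# Hypothetical sketch of pruning: where two code labels describe the
# same thing, fold them into one. Code names are invented.
merge_map = {"wait times": "access", "delays": "access"}  # synonyms -> kept code

coded = {
    "access": ["INT1-07"],
    "wait times": ["INT2-05"],
    "delays": ["INT3-01", "INT3-04"],
}

pruned = {}
for code, segment_ids in coded.items():
    kept = merge_map.get(code, code)  # rename if the code is a synonym
    pruned.setdefault(kept, []).extend(segment_ids)

print(pruned)  # {'access': ['INT1-07', 'INT2-05', 'INT3-01', 'INT3-04']}
```

Keeping an explicit merge map also leaves an audit trail of how your coding scheme evolved, which helps when you reflect on the dependability of your analysis.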

In order to do this grouping, you may decide to cut up a printed version of your transcripts and cluster your codes together into different categories. The beauty of this approach is that you can keep moving the pieces around until you find the most suitable home for them. You could use the cut-and-paste function in your word-processing package to copy fragments of your data into a document named after the theme. When you do this, keep the contextual detail of your extracts (e.g. which transcript or which set of field notes the data came from). It is important that you can reference where your materials come from. If you are systematic here, it will save you a lot of time later down the line; if not, time will be wasted searching for individual quotes. You might, for example, choose to display your categories diagrammatically through concept maps or spider diagrams, as shown in Figure 10.6. As with coding, you need to find a method which allows you to see the connections between your data more clearly. Following successful coding and data classification, analysis becomes more creative and more interesting; the next stage is making connections.

Figure 10.4 Assigning codes and labels to your qualitative data
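The same grouping can be kept electronically as a mapping from themes to codes, with each extract retaining its source reference. The sketch below is a hypothetical illustration only; the theme names, codes and quotes are invented.

```python
# Hypothetical sketch: cluster codes into broader themes while keeping
# the contextual detail (which transcript each extract came from), so
# every quote can be traced back to its source. Names are invented.
themes = {
    "barriers to care": ["access", "cost"],
    "patient voice": ["communication", "trust"],
}

extracts = [
    {"code": "access", "source": "INT1", "quote": "waiting times were too long"},
    {"code": "trust", "source": "INT2", "quote": "I never felt listened to"},
]

def extracts_for_theme(theme):
    """Gather every extract whose code belongs to the theme."""
    codes = set(themes[theme])
    return [e for e in extracts if e["code"] in codes]

for e in extracts_for_theme("barriers to care"):
    print(f'{e["source"]}: "{e["quote"]}"')
# INT1: "waiting times were too long"
```

Because each extract carries its source, any quote used in the write-up can be referenced back to the original transcript, exactly as the manual cut-and-paste approach requires.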

Making connections

In this phase, the aim is to help you and your readers to understand more deeply and more broadly the area investigated. As you interpret your findings, you will be looking for connections between the categories that you have identified. What are the relationships between those categories? Are they of equal importance? Are there sociological theories that could help to interpret what is happening in the data? What does the literature say about the area you are researching, and how do your own findings compare? The key issue is being able to show how the findings from your study relate to findings from other studies, thus deepening our understanding of the area you are researching.

Figure 10.5 Example of a coded extract

Using software for qualitative data analysis

The method described in the previous sections is a process that you can carry out without any particular data analysis software. It involves very basic techniques: coloured pens, scissors, glue, pencils, a word-processing package and multiple copies of transcripts. This is a manual approach to data analysis. As with quantitative data analysis, there are software packages which can be used to help you manage the analysis. The tools these packages offer can help you organise, manage, search and code your data set. You need to remember, though, that they can only help your analysis; they will not do it for you. Deep engagement with the data will still involve you doing the thinking, and no computer package can do that for you.

Figure 10.6 Example of a mind map (for you to view, not to read)

One of the greatest benefits of qualitative data analysis packages is their data management functions. This means that they come into their own if you are working with large and complicated data sets. If you have never used qualitative data analysis software before and you are working with a small data set, you should ask yourself whether it is worth taking time out of your dissertation to learn a package and what the advantages would be in doing so. If you have used qualitative data analysis packages before, if your institution has a licence for them, or if you hope to do more qualitative research in the future, then there may be good reasons to give computer-aided analysis a try.

Most computer packages designed to analyse qualitative data are able to support data management, coding and searching. Many are now able to deal with different types of data, including, for example, audio, visual and textual data. Nine commonly used computer-aided qualitative data analysis packages are:

  • ATLAS.ti
  • QDA Miner
  • TAMS Analyzer
  • Dedoose
  • NVivo
  • MAXQDA
  • HyperRESEARCH
  • AQUAD
  • Transana

While you may have access to a number of the packages listed above, most universities hold a licence for NVivo, which is a popular qualitative data analysis tool. The following section therefore offers a general guide to NVivo, but make sure you check with your own institution before planning your analysis. You are very likely to receive guidance, classes or worksheets on data analysis from your own institution, so the information provided here is only a simple introduction to the programme; you may need to consult relevant texts and institutional guides to learn more about the software.

Using NVivo for analysing your data

NVivo allows you to organise, sort and manage non-numerical data. It can enable you to analyse and handle data in an effective way that you may not be able to manage manually. NVivo is particularly useful for organising and analysing a large volume of data, making it easier to locate common patterns or structures within your data. The patterns or common topics identified within your data are stored as Nodes in NVivo. The software can handle a variety of non-numerical data including video recordings, Word documents and field notes. The main purpose of the application is to identify relationships between your data and to record any discernible patterns. You can also use the programme to summarise your data using charts, models and other visual formats.

Apart from supporting a systematic analysis of your data, NVivo can also be used to manage your literature review and the other documents and resources used in your dissertation. The programme enables you to import texts, documents, pictures and other materials, including PDF documents, journal articles, PowerPoint presentations and audio recordings, for your analysis. It uses a systematic coding process to generate themes, common topics and relationships within your documents. This may be useful if you want to identify common themes in the literature in order to formulate your research questions. You can also use the programme to test or verify hypotheses, and it allows you to ask specific questions of your data using its 'query' tools. The programme can also help you draw conclusions from your data based on your literature review and data analysis.

Other approaches to qualitative data analysis

So far, we have described a very general approach to qualitative data analysis, which could loosely be described as thematic analysis. We have outlined a process of coding, categorisation, theme development and comparison. This can be done manually or with the help of software packages.

There are, however, many other approaches to analysing qualitative data that we think you need to know about. Figure 10.7 shows the most common techniques for analysing qualitative data, and Denzin and Lincoln's The Sage Handbook of Qualitative Research (2011) offers a comprehensive overview. Here we focus on only some of them; this overview is not comprehensive but aims to give a feel for some of the approaches that could be adopted.

Discourse analysis

Discourse analysis does not refer to one single approach; rather, it is a general term for approaches to the analysis of both written and spoken text. Different approaches to discourse analysis have grown out of different disciplines: linguistics, cognitive psychology and post-structuralism (Potter 2004: 201). Miller and Brewer (2003: 75-76) offer five broad categories of discourse analysis. The first is linguistic in focus and looks at discourse styles in social settings (such as school interactions, courtrooms, doctors' surgeries). The second looks at how language is used in natural situations: the ethnography of communication and the competencies needed for that communication. The third category, conversation analysis, investigates how conversation is organised. The fourth focuses on the choice of words used in the textual and verbal accounts of social representatives. The fifth, critical discourse analysis, sees language as being bound up with power and ideology; analysis focuses on how language benefits certain groups over others.

Figure 10.7 Qualitative analysis techniques

These approaches all have different emphases, yet they all share the understanding that language is a social act that is embedded within a social context which both influences and is influenced by language (Gee 2005).

Grounded theory

Grounded theory (Glaser and Strauss 1967) aims to develop theory out of the data which have been collected. This is different from many approaches to research, where a theoretical framework is chosen at the beginning of a project and the data then analysed in relation to that framework. The processes of data collection, data analysis and theory generation in grounded theory are closely connected. The research question is set and a sample selected. The data are collected and coded, and concepts are generated. The coding shows where more data need to be collected. This process continues until there is saturation (that is, no more codes are emerging). Relationships between the categories are identified and a theory postulated. This substantive theory is then tested in different settings, which may lead to the development of a formal theory. The grounded theory approach is not linear as it has been described here but is iterative and based on constant comparison, with different phases occurring simultaneously and being repeated.

Narrative analysis

Narrative analysis describes a suite of approaches that focuses on the analysis of the stories which people use to make sense of what Ezzy (2002) describes as disconnected episodes that together form a coherent construction of the past. The stories that narrative analysis analyses are the products of people who are living in a particular social, historical and cultural context; the stories they tell are a reflection of how they see themselves and others within their worlds (Lawler 2002). Narratives can be, among other things, used to give information, to structure our ideas about ourselves and to pass on experiences (Gibbs 2007: 60). Key examples of narratives are biographical and life-history accounts. There is little consensus on what narrative analysis involves, and Riessman has identified four different models (described in Bryman 2004: 412):

  1. Thematic (focus on what is said)
  2. Structural (focus on how it is said)
  3. Interactional (focus on the dialogue between teller and listener)
  4. Performative (focus on how the narrative is enacted).

Visual analysis

Visual analysis is used to analyse both images generated for the research study and those already in existence. Images can be 'researcher found' (generated by others) or 'researcher generated' (created by the researcher); both are integral to the visual research process (Prosser 2006: 3). These images can be used either as aides-memoires or as data in their own right (Bryman 2004: 312). Most analytical approaches used for non-image-based research can also be used for research involving images (Banks 2007: 38). A researcher analysing images needs to be sensitive to the context in which they were generated, the potential for multiple meanings, and the impact of their own role in the production of the images (Bryman 2004; Banks 2007). There are approaches to analysis which may be particularly appropriate to visual images, for example:

  • Semiotics — the study of signs and symbols to uncover their deeper meaning and how that meaning is understood (Chandler 1994)
  • Qualitative content analysis — finding underlying themes in the images being analysed and situating those findings within the context in which the images were produced
  • Ethnomethodological approaches — identifying the everyday practices by which people organise their lives (Banks 2007: 49).

These brief overviews of different approaches to data analysis have been designed to give you a taste of ways you can work with your data. If you choose to work with one of them, take a look at the recommended texts and also try and read a research paper which has adopted the same approach to analysis. This will show the ways in which the data analysis can be reported.

Checks on quality

In both quantitative and qualitative approaches, it is important to ensure that any analysis of data produces information which can withstand questions about rigour and transparency. The quality control measures employed to scrutinise and check qualitative data differ from those used to assure quantitative analysis, and there are checklists and guidelines available specifically for writing up qualitative research. Examples include those provided by the Critical Appraisal Skills Programme (CASP 2013) and the Consolidated Criteria for Reporting Qualitative Research (COREQ; Tong et al. 2007).

In your research project, you want to be sure that your analysis is trustworthy. Miles and Huberman (1994: 277–280) offer five areas that you could use to assess the quality of your work.

  1. Objectivity/confirmability. Is the study relatively objective? Have the researcher's biases been acknowledged?
  2. Reliability/dependability/auditability. Was the approach to the study consistent and stable over time?
  3. Internal validity/credibility/authenticity. Do the findings make sense? Are they credible? Do they paint a true picture of what we were studying?
  4. External validity/transferability/fittingness. How do these findings fit into the bigger picture? Can they be generalised to other settings?
  5. Utilisation/application/action orientation. What impact does the study have on the researchers and the researched? (Figure 10.8)

For each of the areas above, Miles and Huberman offer a series of questions that could form the basis of reflection on your study and, therefore, a check on its quality.

Key messages

  • There is no single approach and no 'right way' to do qualitative data analysis.
  • Many approaches share four general phases: defining the analysis, classifying the data, making connections and conveying the message.
  • Analysis involves interpretation: establishing links to theory, literature and experiences.
  • Software packages such as NVivo can aid your qualitative analysis.
  • Any approach to data analysis needs to undergo checks on quality.

Figure 10.8 Capturing a Telling Instance

Key questions

  • Have you researched different approaches to qualitative data analysis? Do you understand what differentiates them?
  • Are you able to justify why your chosen approach to data analysis is appropriate in terms of the data you have collected and also your view of research?
  • Have you adopted a systematic approach to data classification? Have you been through the classification cycle more than once?
  • Have you sufficiently interpreted and explained patterns, categories and themes in your data? How do your findings fit with other work that is out there?
  • Have you reflected honestly on your approach to data analysis? Where are the weaknesses and the strengths in what you have produced?

Further reading

Creswell, J. W. and Poth, C. N. (2018). Qualitative Inquiry and Research Design: Choosing among Five Approaches (4th Edition). London: Sage.

Denzin, N. K. and Lincoln, Y. S. (2018). The Sage Handbook of Qualitative Research (5th Edition). London: Sage.

Jackson, K. and Bazeley, P. (2019). Qualitative Data Analysis with NVivo (3rd Edition). London: Sage.

Ritchie, J., Lewis, J., Nicholls, C. M. and Ormston, R. (2014). Qualitative Research Practice: A Guide for Social Science Students & Researchers (2nd Edition). London: Sage.

Silverman, D. (2014). Interpreting Qualitative Data (5th Edition). London: Sage.
