As I mentioned earlier, the goal of sampling is to select a subset of the population that closely resembles the population. Poorly handled sampling can produce sample bias, which threatens both the internal and external validity of research. Sample bias can distort the results and, in turn, undermine our ability to generalize those results to the population from which the sample was drawn. Sample bias can occur through purposeful actions by those involved in the research, or it can be completely unintentional and simply a matter of chance.

One potential motivation for intentional sample bias is a desire to see a program succeed. People whose jobs depend on a program’s existence, or who are simply passionate about its success, might, consciously or not, select only the participants they believe are most likely to succeed. This is known as creaming, since they are selecting only the “cream of the crop” to be in the program. This is why there have been instances of correctional interventions and school programs being populated with lower-risk, higher-achieving individuals. The goal of such selection bias is to include the people thought most likely to succeed. Even when people with a stake in a program’s success lack control over who is admitted, they may try to introduce sample bias into the program evaluation to skew the results.

Years ago, I was tasked with evaluating a parenting program in a medium-security juvenile facility. While I was conducting observations, the instructor, whose employment was contingent on the continued funding of the program, repeatedly requested that I interview three of her former students. The research plan I had agreed to with the facility administration did not involve any interviews, but the instructor kept pushing, as she felt interviews were crucial to the program evaluation.
I later learned that she had developed a very close relationship with these three students, had used her own money to buy them clothing and other items, and that they had remained in the community for some time without recidivating. The instructor offered to contact and arrange meetings only with those three individuals, not with any of the other 42 program participants. So, while this practitioner had been unable to cream the participant list, she was attempting to steer the researcher toward only the most successful participants, who would undoubtedly have nothing but good things to say about her.
Sample bias can also arise from how a poll or survey is made available to potential participants. With the proliferation of websites and media outlets, people can now choose to get their news (or even conspiracy theories peddled as news) only from sources that share their political leanings. When a site conducts a poll by posting a survey link on its webpage, those most likely to see and respond to it are people who share the political views espoused by that show, publication, or network. A poll shared in that manner is unlikely to reach people with diverse opinions. When a conservative radio show or website asks its fans to provide approval ratings of a liberal politician, the results are likely to be very negative; the same goes for conservative politicians being judged by fans of a liberal talk show. This type of sample bias can affect criminal justice opinion polls and surveys. In 2018, National Public Radio (NPR) conducted a randomized telephone survey that asked respondents which television news network they preferred and their opinion on the question, “Are immigrants an important part of our American identity?” Seventy-eight percent of respondents who preferred CNN answered affirmatively, compared to just 52 percent of those who preferred Fox News. So, if a poll on a CNN or Fox News website invites visitors to click to vote, a good deal of sample bias is likely built into the design (Rose, 2018). But if studies like this are biased, why should we trust the results of the NPR survey itself? We can feel comfortable with it because NPR conducted a telephone survey of a random sample of households rather than posting a link where only certain people would see it.
Over the next few pages, I will discuss sampling techniques and how they might impact our chances of selecting a representative sample. After that, I will revisit the sample bias issue and review ways to detect bias.