Challenges to Measuring Volunteering

To be sure, measuring volunteer work is an extremely difficult task that faces multiple challenges. In the first place, even the definition of volunteering is unsettled, in part because the term carries different meanings, and different connotations, in different cultures and settings, some of them unflattering or problematic (Cnaan, Handy, & Wadsworth, 1996; Handy et al., 2000). In some settings, volunteering serves as a euphemism for what is really required extra work performed without pay. For some purposes, volunteering is conceived as a set of activities done only for or through organizations; in other uses, it also includes activities done directly for individuals. But this immediately raises the question of which individuals are valid beneficiaries of an activity that meets the definition of volunteering: one's children? Other family members? Only persons outside one's family? If so, how broad a definition of "family" should be used: just the next-of-kin, or cousins and second cousins twice removed? Furthermore, although volunteering is typically thought to be activity undertaken without pay, is no compensation whatsoever permitted? What about reimbursement for expenses or the acquisition of occupational skills? Clearly, no definition of the concept can rest on the use of the term alone, at least none that hopes to have it understood the same way by all respondents or used in cross-national comparisons.

Another formidable challenge is the logistics of data collection. Since volunteering does not involve significant monetary transactions, it is seldom tracked in any administrative records. Even organizations that systematically engage volunteers often find it difficult to record accurately the exact amount and type of work performed by volunteers. This leaves population surveys as the most feasible methodology for capturing the magnitude of volunteer work. However, this methodology is also fraught with multiple problems. Unlike paid employment, which is a well-defined and regularly performed activity, volunteer work is performed irregularly, often at particular times of the year, and by fewer people than those who are employed. Therefore, adequately capturing its magnitude requires a methodology that surveys a relatively large number of individuals and covers multiple "reference periods." However, such surveys are expensive, so most volunteering surveys involve relatively small samples (a few thousand respondents at most), attempt to cover a long reference period (typically 1 year), and assume that respondents have the same concept in mind when they are asked about an activity referred to as "volunteering." As a result, these surveys often lead to distorted, unreliable, and non-comparable results. Chief among these problems are the following:

  • Ambiguity about what activities are captured by a survey: As already mentioned, the concept of volunteering is ambiguous and its understanding varies not only among different schools of thought but also among members of the general public. As a result, even common use of the term “volunteering” within the survey questionnaire can produce inconsistent results;
  • Nonresponse bias: Survey participation is akin to volunteering in that both require that an individual dedicate some of his or her time to a task that entails no compensation; consequently, people who refuse to participate in a survey are also more likely to be those who do not volunteer. Since participation in a typical volunteering survey is itself voluntary, this "nonresponse bias" may grossly exaggerate the share of volunteers in a population (Abraham, Helms, & Presser, 2008);
  • Recall bias: Respondents rely on their memory to answer survey questions, and the longer the reference period, the more difficult it is to recall the required information accurately. Respondents tend to forget activities performed sporadically or long ago, and to exaggerate or highlight ones that are especially salient even when they are outside the reference period. This, again, may distort results (Hassan, 2005); and
  • Social desirability bias: Survey respondents tend to overreport socially desirable or socially expected behaviors, such as religious worship attendance, helping others, or volunteering. As a result, surveys often lead to systematic and substantial overestimations of the incidence of such behaviors (Fisher, 1993).
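The distorting power of the nonresponse mechanism described above can be illustrated with a simple back-of-the-envelope calculation. The figures below are purely hypothetical assumptions, chosen only to show the direction and rough magnitude of the effect when volunteers respond to surveys more readily than non-volunteers:

```python
# Illustrative sketch (hypothetical numbers): how differential nonresponse
# can inflate an estimated volunteering rate.
true_rate = 0.25           # assumed true share of volunteers in the population
resp_volunteers = 0.60     # assumed response rate among volunteers
resp_nonvolunteers = 0.40  # assumed response rate among non-volunteers

# Shares of the population that both volunteer and respond, or neither
responding_volunteers = true_rate * resp_volunteers
responding_nonvolunteers = (1 - true_rate) * resp_nonvolunteers

# The survey only sees respondents, so the estimated rate is conditional on response
observed_rate = responding_volunteers / (responding_volunteers + responding_nonvolunteers)

print(f"true rate: {true_rate:.0%}, observed rate: {observed_rate:.1%}")
# -> true rate: 25%, observed rate: 33.3%
```

Under these assumed response rates, a true volunteering rate of 25 % would be reported as roughly 33 %, an overstatement of a third, without any respondent misreporting at all.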

The extent to which these problems can distort the results is evident when we consider the wide variation in volunteering estimates produced by existing general opinion surveys, such as the survey of ten European countries conducted in the 1990s by the UK Volunteer Centre (Smith, 1996, pp. 180-189), successive waves of the World Values Survey (World Values Survey, 2009), the recent Gallup Worldview Survey (English, 2011), and country-specific general social surveys. For example, the World Values Survey, which, at least up through 2001, generated data on 96 countries and asked respondents whether they had volunteered, focused only on organization-based volunteering, used a long, 1-year reference period, and failed to collect data on the amount of time respondents devote to volunteering.[1] The recent cross-national survey touching on volunteering carried out by the Gallup organization covered 153 countries and gathered information on both organization-based and direct volunteering (English, 2011). However, the information on volunteering in this survey is limited to the number of people involved (the volunteering rate), with no indication of how much time these volunteers devoted. In addition, the survey relied on relatively small samples (typically 1,000-2,000 people per country) and used quite general questions that could be interpreted differently by different respondents.[2] As a consequence, this survey has produced results of dubious accuracy. For example, it reports US volunteering rates of 39-43 % for organization-based volunteering and 65-73 % for direct volunteering. In contrast, the Current Population Survey carried out by the United States Bureau of Labor Statistics (BLS) on a much larger sample of about 60,000 respondents found the organizational volunteering rate to be about 26 % (United States Bureau of Labor Statistics, 2010).
Similar discrepancies exist in other countries’ results (e.g., Canada, Australia, and South Africa). Furthermore, the cross-national reliability of the Gallup data also raises questions. The rate of organizational volunteering reported for Russia, for example, at 26 %, is significantly higher than that reported for Sweden (13 %), Denmark (20 %), and France (22 %), which is inconsistent with every other known survey of volunteering in these countries (Salamon, Sokolowski et al., 2004).
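It is worth noting that discrepancies of this size cannot be explained by ordinary sampling error. A standard binomial margin-of-error calculation (assuming, for illustration, a sample of 1,500 respondents and an observed rate of 40 %) shows that even the small Gallup-style samples yield fairly tight confidence intervals:

```python
import math

# Margin-of-error sketch for a survey-based proportion.
# The sample size and observed rate below are illustrative assumptions.
n = 1500   # assumed sample size per country
p = 0.40   # illustrative observed volunteering rate
z = 1.96   # z-score for a 95% confidence level

# Normal approximation to the binomial: z * sqrt(p(1-p)/n)
moe = z * math.sqrt(p * (1 - p) / n)

print(f"95% margin of error: +/- {moe:.1%}")
# -> 95% margin of error: +/- 2.5%
```

With sampling error on the order of 2-3 percentage points, a gap of 13 or more points between the Gallup and BLS estimates must stem from differences in question wording, coverage, and the biases discussed above, not from chance.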

The European Quality of Life Survey (EQLS), which covers all 27 EU member states (McCloughan, 2011), collected information about the number of hours spent on unpaid work, but its primary focus was on perceptions of well-being rather than on measuring volunteer work. Consequently, it asked only a generic question about "volunteering and charitable activities," which bundled volunteering with a number of other potential "charitable activities" (e.g., making charitable contributions and taking part in charity balls or other events). Furthermore, it is not clear whether these activities were performed through organizations or directly for individuals, a distinction of crucial importance for interpreting volunteer work in different settings. Finally, this data source covers only the European region, and no comparable data using a similarly vague concept of "volunteering and charitable activities" are available for countries outside it.

Methodological and conceptual difficulties are not the only obstacles to measuring volunteer effort, however. Another obstacle arises from concerns among segments of the volunteer practitioner community that measuring volunteer effort dehumanizes and unnecessarily commodifies it, thereby robbing volunteering of its essential character as a fulfilling human activity undertaken out of a sense of altruism and social solidarity (see, for example, http://coyotecommunications.com/coyoteblog/2011/09/22/un-volunteers-ifrc-ilo-others-make-huge-misst/). In this view, measuring the amount and value of volunteering is the first step on a slippery slope toward government efforts to cite substantial volunteer input as a rationale for cuts in government expenditures on social programs.

While some politicians may leap to this conclusion, there is little evidence to support the idea that volunteers replace paid workers. Indeed, the evidence seems to point to the contrary: high levels of volunteering are correlated with high levels of paid employment in nonprofit organizations (Salamon & Sokolowski, 2001). Moreover, French sociologist Alain Touraine (1981) found that participants in popular social movements gain a sense of validation and efficacy from seeing evidence of their power and potential. From this perspective, information on the scope and value of volunteering can validate and motivate volunteers, in addition to enhancing understanding of how best to promote and support volunteer effort.

  • [1] Following 2001, the questions about membership and volunteer work in voluntary organizations were replaced with one about active and inactive membership in voluntary organizations.
  • [2] For example, the question asking whether respondents helped a stranger or someone they didn't know who needed help could be interpreted by respondents as entailing anything from providing hours of assistance to incidental acts, such as giving someone directions on the street. Likewise, questions about whether respondents volunteered time to an organization may entail compulsory community service required as a condition of graduation or mere attendance at events (such as religious services).