The data we use
This is a book about public opinion, and the analyses in it are therefore based mostly on a wide variety of survey data gathered from academic and media sources. Before we describe the data we use, a few words should be said about the methodological aspects involved, particularly public opinion polling and the kind of data it produces.

The most common way of measuring public opinion, because it is the most objective, is to poll representative samples of the mass public. The usefulness of such opinion polls as an instrument to measure 'what the people really think' may be questioned, however, on methodological grounds. The shortcomings of polls are well known and widely acknowledged by the researchers involved, and therefore need no further discussion here. It is sufficient to acknowledge (once more) that their results lend themselves to different or even contrary interpretations: one may sometimes argue whether the glass is half full or half empty. In comparative research this is even more problematic. Nevertheless, it is also useful to repeat that, in spite of these obvious shortcomings, polls remain the most objective and verifiable way to 'take the public's pulse', since there is no real substitute for the method. There are good polls and bad polls, however, and caution in interpreting their results is always called for.

Moreover, doing research on public opinion and its impact on the political process does not imply the normative view that 'the public' is always right, nor that public opinion - or what passes for it - should always be heeded by responsible leaders. It is built, however, on the premise - and so is this book - that we cannot understand how and why governments arrive at certain decisions without taking public opinion into account. This is particularly true of the democracies in the Atlantic area.
That this is also desirable is part of another longstanding normative tradition.
As a caveat, we should also remind ourselves that the research reported here faces a further obstacle beyond the traditional problems of comparing survey outcomes over time and across different question wordings, one which springs from the cross-national nature of our comparisons. In our analysis, we rely on polls that have been conducted in many different countries. This cross-nationally comparative slant adds further complexity and ambiguity to any interpretation of the results at the aggregate level. Indeed, comparing aggregate public opinion across countries introduces a major source of potential difficulty: identical or even different questions are asked in different political and cultural contexts.
There are two aspects to this problem. On the one hand, there is the problem of comparing identical questions asked in different languages and the equivalence among them.15 On the other, there is the even more complex problem of comparing differently worded questions asked in different languages on the same issues. In the latter case, the problem of comparing questions translated into different languages is added to that of comparing differently worded questions.
Finally, as far as the impact of public opinion on policymaking is concerned, the distinction between salient and non-salient attitudes is vital. The willingness to act upon one's convictions and participate in the political process is proportional to the degree of saliency, which is therefore as relevant as the content of opinions. Yet saliency is an aspect that many, if not most, opinion polls conveniently overlook; it is at best partially approximated by taking the proportion of 'don't know/no answer' responses as a substitute.
Results of public opinion polls that intend to measure support for the use of force should therefore be treated with some caution. They usually yield not so much reliable (but politically relevant) indicators of absolute levels of support as measures of relative support, which do allow us to make comparisons across time, situations or conditions.
The data that we rely on in this book come from many different sources, including the PEW Global Surveys, Eurobarometer and, mainly, the Transatlantic Trends Survey, a major series of annual comparative surveys undertaken since 2002 in the United States and a (growing) number of European countries under the auspices of the German Marshall Fund of the United States (GMFUS) in cooperation with the Compagnia di San Paolo of Torino.16 The series started with the Worldviews 2002 study. Since 1974, the Chicago Council on Foreign Relations has polled Americans every four years; in June 2002, for the first time, the survey in this series was combined with a GMFUS-sponsored survey, with many directly comparable questions, in six European countries (France, Germany, Italy, the Netherlands, Poland and the United Kingdom). After this initial poll, the series was continued in the form of an annual survey. Portugal was added in 2003; Slovakia, Spain and Turkey followed in 2004, Bulgaria and Romania in 2006 and Sweden in 2011.17 Since 2006 the same questionnaire has also been used in parallel surveys of European elites.18
In this connection, we want to stress that when we speak of the 'Europeans' in this chapter, we have in mind the populations of the countries actually surveyed, the number of which, as indicated, has increased over the years from six to twelve. Sometimes our sample of 'Europeans' will consist of respondents from an even smaller number of countries for which we have more extensive data over longer periods of time.