- Research Design
- Preliminary Considerations
- Discrete Choice Experiments
- Sampling Strategy
- Recruitment of Participants, Pilot Study and Final Sample
- The Research Instrument
- Screening Section
- Choice Tournament
- Counting Analysis for ACBC
- HB Analysis: Calculation of Utilities and Importances
- HB with Covariates

# Research Design

## Preliminary Considerations

The research attempts to explain decisions made by consumers in certain circumstances. A positivist, explanatory research design is adopted, particularly for ascertaining attribute importance in different contexts. These attributes were identified in a previous qualitative stage (through fieldwork and an investigation of the literature). This research design has already been used to elicit meal attributes, although with a different focus: Ding et al. (2005) started with a qualitative stage to understand the key attributes of Chinese dinner specials, interviewing ten undergraduate students to determine which attributes were key to them and which they perceived as important to their peers.

## Discrete Choice Experiments

It seems more natural for consumers to make choices than to rate or rank alternatives (as in traditional conjoint analysis). Furthermore, as stated above, the use of technology has made that task more achievable. These advantages probably explain why DCA has become the natural choice for decision-making researchers using conjoint analysis methods. In DCA, the research design takes the form of DCEs, in which the decision maker responds to experimentally designed profiles of possible alternatives, each alternative carrying a different set of attributes (Verma and Thompson, 1996).

Sawtooth Software has developed Adaptive Choice-Based Conjoint (ACBC). ACBC is consistent with the theory that complex choices made by consumers entail the formation of a consideration set and then choosing a product within that consideration set (Orme, 2010).

## Sampling Strategy

The study used a non-probability technique: respondent-driven sampling (RDS). Unlike traditional probabilistic sampling, which starts from a sampling frame, RDS creates the sampling frame after sampling is complete (Wejnert and Heckathorn, 2008). The term respondent-driven sampling has come to denote a variation of the chain-referral sampling methods (Salganik and Heckathorn, 2004). Chain referral can reach large segments of respondents: even in populations as large as that of the United States, every person is indirectly connected to every other person through about six waves (Killworth and Bernard, 1978).

Although the sampling strategy is non-probabilistic, it can be argued that, owing to the effect described by the six-degrees-of-separation theory (Milgram, 1964) and magnified in the digital age (Kleinfeld, 2002), almost every member of the sampling frame can be reached through referral sampling as if at random. Formulas for establishing optimum sample size require random samples; thus, to estimate a sample size for this research, the sample is assumed to have been obtained at random, given the considerations about the characteristics of the population mentioned above. For all these reasons, the estimation formula that Louviere et al. (2000) provided for calculating optimum sample size for random samples is considered appropriate for this research:

N ≥ z²q / (Y × p × a²)

In Louviere et al.’s formula, N represents the minimum number of participants; Y is the total number of choice scenarios or replications; p is the choice share of a restaurant concept and q = 1 − p; z is the z-value for the chosen confidence level under the normal distribution; and a is the allowable margin of error. In the choice tournaments, respondents will be shown three restaurant concepts with ten attributes, each showing a particular level; in this case, there are (3 × 10 =) 30 choice scenarios, and z = 1.96 for a confidence level of 95%. With three choices, the probability of choosing any one of them is 1/3 ≈ 0.33, which is p; then q = 1 − 0.33 = 0.67. If the margin of error is set at 5%, then:

N ≥ (1.96² × 0.67) / (30 × 0.33 × 0.05²) ≈ 104

Therefore, according to this formula, the minimum sample size is 104 respondents for the ACBC sample. However, it should be noted that this formula was derived for a single, simple selection, whereas here several screens show several combinations of the 30 choice scenarios, which complicates both an accurate calculation of the sample size and the application of the formula. For large populations like the one in this research, Orme (2014) noted that sample sizes for conjoint studies range from about 150 to 1,200 respondents. For ACBC, however, it has been found that smaller samples suffice: ACBC yields similar group-level errors with 38% fewer participants than traditional DCEs, such as those in CBC (Chapman et al., 2009). The smaller sample sizes also appear to compensate for the additional time required to complete ACBC surveys (Cunningham et al., 2010). As a minimum of 150 respondents is indicated as a rule of thumb and Louviere et al.’s formula points to a minimum of 104, the research attempted to recruit as many respondents as possible, but no fewer than 150.
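The arithmetic above can be sketched in a few lines of Python; the function name is illustrative, not part of Louviere et al.'s presentation:

```python
import math

def louviere_min_sample(y, p, z=1.96, a=0.05):
    """Minimum sample size under Louviere et al.'s (2000) rule for random
    samples: N >= z^2 * q / (Y * p * a^2), with q = 1 - p.

    y: number of choice scenarios (replications) per respondent
    p: anticipated choice share of one alternative
    z: z-value for the desired confidence level
    a: allowable margin of error
    """
    q = 1 - p
    return math.ceil(z ** 2 * q / (y * p * a ** 2))

# 30 choice scenarios, p = 0.33, 95% confidence, 5% margin of error
print(louviere_min_sample(y=30, p=0.33))  # -> 104
```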

On the other hand, the first part of the research dealt with establishing which attributes are more important for certain segments, that is, which attributes are more relevant for a particular occasion, a particular age group, and so on. In this case, the population may be considered highly varied with respect to a particular issue; for example, how many would consider a particular attribute and how many would not? The most conservative assumption is the maximum 50/50 split, which requires a sample size of 384; for a less varied population with an 80/20 split, the minimum number is 246. This is the sample size the research aimed for.
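The 384 and 246 figures follow from the standard sample-size formula for estimating a proportion, n = z²p(1 − p)/a²; a minimal sketch (the function name is ours):

```python
def proportion_sample_size(p, z=1.96, a=0.05):
    """Sample size needed to estimate a population proportion p within
    margin of error a at the confidence level implied by z."""
    return round(z ** 2 * p * (1 - p) / a ** 2)

print(proportion_sample_size(0.5))  # most varied, 50/50 split -> 384
print(proportion_sample_size(0.8))  # less varied, 80/20 split -> 246
```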

## Recruitment of Participants, Pilot Study and Final Sample

The first respondents were recruited through the professional network LinkedIn. The criteria for qualifying respondents were as follows:

a. The respondent should eat out at restaurants in the United Kingdom. In total, 6 out of 376 answered never (1.6%).

b. The respondent should be involved in making the decision. In total, 7 out of 376 answered never (1.9%).

c. To be 19 years of age or above. In total, 6 out of 376 did not qualify (1.6%).

The final number (363) is well above the minimum requirement of 246 respondents and close to the upper requirement of 384 participants mentioned above. Although the ACBC module can work with up to 100 attributes, respondents cannot choose efficiently from such a long list; typical studies in practice cover about 5-12 attributes, as recommended by Sawtooth Software. Prior to the choice tasks, a preliminary attribute-reduction exercise was conducted. Respondents were presented with a total of 14 attributes, of which 5 were fixed. From the other 9 attributes, respondents had to choose 5; these 5 will be called ‘optional’ attributes. With 5 fixed attributes and 5 ‘optional’ attributes, 10 is the final number of attributes that any respondent faced during the choice tasks (the choice tournament). The approach to pricing is summed pricing, in which price is treated as a continuous variable. This means that pricing is affected by the choice of other attributes: the summed pricing approach leads to restaurant concepts that show realistic prices, with high-end features carrying higher prices and low-end features carrying lower prices. Under summed pricing, thousands of potentially unique prices can be shown to respondents. The BYO section details how this works.
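Summed pricing can be illustrated with a small sketch; the attributes and per-level price increments below are hypothetical, not those used in the study:

```python
# Illustrative sketch of summed pricing: each selected level carries a price
# increment, and a concept's shown price is the sum of its levels' increments.
level_prices = {
    "service":      {"basic": 2.0, "table service": 5.0},
    "menu options": {"limited": 3.0, "great variety": 7.0},
    "portion size": {"small": 4.0, "large": 6.0},
}

def concept_price(concept):
    """Total price of a restaurant concept under summed pricing."""
    return sum(level_prices[attr][level] for attr, level in concept.items())

# A high-end concept accumulates the higher increments and shows a higher price
high_end = {"service": "table service", "menu options": "great variety",
            "portion size": "large"}
print(concept_price(high_end))  # -> 18.0
```

In this way each distinct combination of levels can map to its own price, which is why thousands of unique prices can appear across respondents.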

## The Research Instrument

There are four main parts in the ACBC questionnaire, namely, the general information part (SSI Web module of Sawtooth Software), Build Your Own (BYO), the Screening section and the Choice Tournament. The general part contains demographic questions; in it, the respondent chooses an occasion for eating out and also selects the ‘optional attributes’ discussed above.

Figure 4.1 shows the BYO task, noting that there are nine attributes and that the higher the level, the greater the cost incurred. Adding the individual costs per feature produces a total meal cost for one person.

## Screening Section

In the screening section, four restaurant concepts are shown; for example, a restaurant that provides a particular level of service, with a certain level of variety, and so on, where each concept also carries a price tag. Some options, like a greater level of service, are more expensive. The attributes are shown in Figure 4.2 in sequence order and preference order. Some options, like ambiance, have no preference order, meaning that whether respondents want a quiet or a busy environment, no level is recorded as affecting the price paid.

**Figure 4.1 Features and cost per feature.**

**Figure 4.2 Total list of attributes with sequence and preference orders.**

## Choice Tournament

In the choice tournament section, each screen showed the respondent three restaurant concepts that survived the previous section. Eight choice tasks are shown, and respondents can choose only one of the concepts on each screen. An example can be seen in Figure 4.3.

The greyed-out rows show attributes that are tied across restaurant concepts, so that respondents focus only on the remaining differences. Each choice results in a winning concept that then competes in subsequent rounds until the preferred concept is identified. This allows an understanding of which attributes are the most important, as respondents trade them off against price.

**Figure 4.3 Example of choice task.**
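The knockout logic of the tournament can be sketched as follows; the concept names, utilities and deterministic choice rule are illustrative only, standing in for an actual respondent's picks:

```python
def choice_tournament(concepts, choose, group_size=3):
    """Run a simple knockout tournament: show `group_size` concepts per
    screen, keep each screen's winner, and repeat until one concept remains.
    `choose` stands in for the respondent picking one concept per screen."""
    round_ = list(concepts)
    while len(round_) > 1:
        winners = []
        for i in range(0, len(round_), group_size):
            screen = round_[i:i + group_size]
            winners.append(choose(screen))
        round_ = winners
    return round_[0]

# Toy example: nine surviving concepts scored by a hypothetical overall
# utility; the "respondent" always picks the highest-utility concept shown.
concepts = [("A", 1.2), ("B", 0.4), ("C", 2.1), ("D", 0.9),
            ("E", 1.7), ("F", 0.2), ("G", 1.1), ("H", 0.8), ("I", 1.5)]
winner = choice_tournament(concepts,
                           choose=lambda screen: max(screen, key=lambda c: c[1]))
print(winner[0])  # -> C
```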

## Counting Analysis for ACBC

Counts can be a good starting point for the analysis. However, if respondents disagree about which levels are preferred, then summaries of importance from aggregate counts can artificially bias estimates of attribute importance. A more accurate analysis of attribute importances can therefore be obtained using the utility values generated by hierarchical Bayes (HB) analysis.
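A counting analysis can be sketched as the share of times each level appears in a chosen concept out of the times it was shown; the data structures below are illustrative, not Sawtooth Software's internal format:

```python
from collections import defaultdict

def level_counts(tasks):
    """Counting analysis: for each attribute level, the proportion of times
    it was part of the chosen concept out of the times it was shown.
    Each task is (list_of_concepts, index_of_chosen_concept); a concept is
    a dict mapping attribute -> level."""
    shown = defaultdict(int)
    chosen = defaultdict(int)
    for concepts, winner in tasks:
        for i, concept in enumerate(concepts):
            for attr, level in concept.items():
                shown[(attr, level)] += 1
                if i == winner:
                    chosen[(attr, level)] += 1
    return {key: chosen[key] / shown[key] for key in shown}

# Toy data: one attribute, two levels, three choice tasks
tasks = [
    ([{"portion": "large"}, {"portion": "small"}], 0),
    ([{"portion": "large"}, {"portion": "small"}], 0),
    ([{"portion": "small"}, {"portion": "large"}], 1),
]
counts = level_counts(tasks)
print(counts[("portion", "large")])  # -> 1.0
print(counts[("portion", "small")])  # -> 0.0
```

When respondents disagree (some always picking "large", others "small"), these aggregate proportions drift towards the middle, which is the bias noted above.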

## HB Analysis: Calculation of Utilities and Importances

A utility is a number that represents the attractiveness of a feature, for example, ‘great variety of dishes including vegetarian options and specials’, which is one of the three levels of the attribute menu options. A basic problem has been how to create individual-level utilities for each respondent (Howell, 2009). Individual utilities offer more valuable information than the average over a sample. For example, if there are two options concerning portion sizes and half of the respondents go for larger portions while the other half go for smaller portions, the averaged result would conclude that consumers are ambivalent about portion sizes, which could be the worst possible conclusion. With individual-level utilities, it is possible to distinguish the market segment that goes for larger portions and target it separately.

In the ACBC exercise, the respondents face a screening section of ten choice tasks. If respondents select the attributes with the largest numbers of levels, the maximum number of combinations is 5 × 4 × 2 × 4 × 4 × 5 × 4 × 3 × 5 = 192,000. It may seem an impossible task to estimate preferences over that colossal number of combinations from the relatively small amount of information collected. The BYO and screening sections reduce that number, and the choice tasks then present three restaurant concepts over the ten attributes, that is, 30 combinations at a time. Even so, estimating preferences accurately is a challenging task, which can be done using HB analysis in the ACBC Sawtooth Software platform. HB is a useful tool for modeling consumer research phenomena (Rossi et al., 2005). How well the solution (the utility part-worths of every respondent) fits the data is estimated by a value called the root likelihood (RLH).
The best possible value is 1.00 and the worst possible value is the inverse of the number of choices available in the average task; here there are three choices, so that value is 1/3 ≈ 0.33. In some spreadsheets, the system multiplies RLH by 1,000, so the worst possible value is 333 and a perfect fit is 1,000. The RLH value obtained was 0.67 (670), which can be interpreted as just over twice the chance level.
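RLH can be understood as the geometric mean of the probabilities the fitted model assigned to the choices respondents actually made; a minimal sketch of that interpretation:

```python
import math

def root_likelihood(choice_probs):
    """RLH: geometric mean of the probabilities a fitted model assigned
    to the choices respondents actually made."""
    log_sum = sum(math.log(p) for p in choice_probs)
    return math.exp(log_sum / len(choice_probs))

# A model assigning probability 0.67 to every observed choice has RLH 0.67;
# pure chance with three alternatives per task gives RLH of about 1/3.
print(round(root_likelihood([0.67] * 8), 2))   # -> 0.67
print(round(root_likelihood([1 / 3] * 8), 2))  # -> 0.33
```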

In conjoint studies, attribute importance can be derived from utility scores. The software determines importance scores by taking the range of utility scores within each attribute as a percentage of the summed ranges across all attributes. The analysis examined the difference in utility part-worths between levels of an attribute and the relative importance of each attribute for certain occasions.
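The importance calculation can be sketched directly from the part-worth ranges; the attribute names and utility values below are hypothetical:

```python
def attribute_importances(utilities):
    """Importance of each attribute: its part-worth range as a percentage
    of the summed ranges across all attributes."""
    ranges = {attr: max(levels) - min(levels)
              for attr, levels in utilities.items()}
    total = sum(ranges.values())
    return {attr: 100 * r / total for attr, r in ranges.items()}

# Hypothetical part-worths for one respondent (one value per level)
utilities = {
    "menu options": [-30.0, 5.0, 25.0],  # range 55
    "portion size": [-10.0, 10.0],       # range 20
    "service":      [-15.0, 10.0],       # range 25
}
imp = attribute_importances(utilities)
print(round(imp["menu options"], 1))  # -> 55.0
```

An attribute whose levels span a wider utility range thus receives a proportionally larger importance score.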

## HB with Covariates

Orme and Howell (2009) explained that when segmentation studies are conducted, distances between the utility means of segments are diminished, because HB shrinks the individual part-worth estimates towards the population mean. It is preferable to ascertain whether there are significant differences between segments where possible, which can be done by using HB with covariates rather than generic HB. This research needs to find out whether average importances differ according to the occasion of eating out and whether differences between levels of attributes are significant. Orme and Howell (2009) compared average importances for three segments under a generic HB run and a covariates HB run, and found almost 50% more spread in the latter for a particular attribute. This enhanced spread was not obtained by chance: it reflects a truer representation of the segment means, owing to a more accurate representation of population means in the HB upper-level model, and therefore a more meaningful, robust and accurate analysis between segments. For that reason, HB with covariates will be conducted to examine possible differences between occasions for eating out.