Early-stage pricing

A new product concept must still prove itself at the early stage of the development process even though it has not been introduced to the market. A business case process is typically employed to justify whether or not to continue with development. The contents of the business case, of course, will vary from business to business, each having its own criteria regarding what has to be proven about the new concept. Components that might be included are a competitive assessment, a potential market assessment, and legal issues (e.g., patent protection and infringement), to mention a few. Although the requirements will vary, a financial view will almost always be included because the purpose of the new concept is to increase long-run profitability.

A financial case itself involves a cost assessment, a revenue assessment, and a contribution assessment. Costs include the cost of production, marketing, and sales. Revenue is total quantity sold times price. Contribution is the difference: revenue less costs. Revenue, however, requires a price point, but since the product is not yet available in the market, a market-driven price point is not available. Market research is required to develop a price. The uncertainty surrounding the estimate, however, would be great because of the nature of the problem: the product is, by definition, unknown to the market. One way to handle the issue is to develop a range of acceptable prices. The van Westendorp Price Sensitivity Meter can be used for this purpose. I will briefly describe the approach in the next section. A detailed account is provided in Paczkowski [2018]. See Westendorp [1976] for the original presentation and analysis.

van Westendorp price sensitivity meter

The van Westendorp method is used to develop a price range suitable for early business case development. A range estimate is based on a series of four questions, all dealing with customer perceptions of the quality of the product and how much they are willing to pay conditioned on that quality. Since the method is founded on perceptions, it is ideal for the early business case stage of new product development because customers would not have an actual product to examine and use. A product description would be used so that customers would be able to form some judgment or opinion of the product. The description could be supplemented with a prototype or mock-up of the product so that the customer could at least have a visual, or perhaps a tactile, experience. The mock-up may not actually work, but it could still help customers form a perception of the new product. The four questions are:

  • 1. At what price is the product so expensive you would not buy it?
  • 2. At what price is the product so inexpensive you would feel the quality is questionable?
  • 3. At what price is the product becoming expensive so you would have to think about buying it?
  • 4. At what price is the product a bargain - a great value for the money?

The first question is a “too expensive” question; the second is a “too inexpensive” or “cheap” question; the third is an “expensive” question; and the fourth is an “inexpensive” or “bargain” question. The “cheap” question identifies the point where the quality is so suspect that a customer would not buy the product at all. The “too expensive” question identifies the point where the product is outside a customer’s budget. The “bargain” and “expensive” questions reflect points where the product is a value that cannot be passed up (“a bargain”) or is within a customer’s budget but requires some thought (“expensive”).

FIGURE 3.16 The four intersections are illustrated here. Source: Paczkowski [2018]. Permission granted by Routledge.

The cumulative distribution of responses is calculated for each question. By convention, the “bargain” and “cheap” distributions are reversed. The four distributions are then plotted on the same graph space with the two expensive curves sloping upward and the “bargain” and “cheap” curves sloping downward. The two expensive curves should top-out at 1.0 at the upper right of the graph while the other two should bottom-out at 0.0 at the lower right. An example is shown in Figure 3.16.
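As a concrete illustration, here is a minimal Python sketch of this calculation. The DataFrame layout and the column names (too_expensive, expensive, bargain, cheap) are my own assumptions for illustration, not a prescribed format.

```python
import pandas as pd

def vw_curves(df, price_grid):
    """Cumulative van Westendorp curves evaluated on a grid of candidate prices.

    df is assumed to have one row per respondent and four columns holding the
    price each respondent gave for the four questions: 'too_expensive',
    'expensive', 'bargain', 'cheap' (illustrative column names).
    """
    curves = pd.DataFrame(index=price_grid)
    # "Too expensive" and "expensive": share of respondents whose stated price
    # is at or below each candidate price, so these curves slope upward.
    for q in ["too_expensive", "expensive"]:
        curves[q] = [(df[q] <= p).mean() for p in price_grid]
    # "Bargain" and "cheap" are reversed by convention: share of respondents
    # whose stated price is at or above each candidate price (downward sloping).
    for q in ["bargain", "cheap"]:
        curves[q] = [(df[q] >= p).mean() for p in price_grid]
    return curves
```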

It is evident from Figure 3.16 that there are four intersections. One possible set of definitions is:

Optimal Price Point (OPP) The price at which the number of customers who rated the product too expensive equals the number rating it cheap. It is the equilibrium price between not buying the product and doubting its quality. This is where Too Expensive = Cheap.

Indifferent Price Point (IPP) The price at which the number of customers who rated the product as expensive equals the number rating it a bargain. This is where Expensive = Bargain.

Point of Marginal Cheapness (PMC) The lower bound of a range of acceptable prices where customers are on the edge or “margin” of questioning the product’s quality. This is where Expensive = Cheap.

Point of Marginal Expensiveness (PME) The upper bound of a range of acceptable prices where customers are on the edge or “margin” of viewing the product as outside their means. This is where Too Expensive = Bargain.

The interval between the PMC and the PME is usually interpreted as the optimal price range.
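The intersections themselves can be approximated by linear interpolation on the grid of candidate prices. This is a minimal sketch, continuing the assumed layout from the previous snippet:

```python
import numpy as np

def crossing(price, curve_a, curve_b):
    """Approximate the price at which two van Westendorp curves intersect.

    price, curve_a, and curve_b are equal-length arrays (e.g., the index and
    two columns of the curves DataFrame sketched above). The crossing is found
    by linear interpolation between the two grid points that bracket the sign
    change of the difference between the curves.
    """
    price = np.asarray(price, dtype=float)
    diff = np.asarray(curve_a, dtype=float) - np.asarray(curve_b, dtype=float)
    sign_change = np.where(np.diff(np.sign(diff)) != 0)[0]
    if len(sign_change) == 0:
        return np.nan                      # no crossing on this grid
    i = sign_change[0]
    frac = diff[i] / (diff[i] - diff[i + 1])
    return price[i] + frac * (price[i + 1] - price[i])

# Using the definitions in the text:
#   OPP = Too Expensive x Cheap     IPP = Expensive x Bargain
#   PMC = Expensive x Cheap         PME = Too Expensive x Bargain
```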

TABLE 3.6 These are the price points presented to musicians for the page turner.

$100   $150   $250   $300   $400   $500   $600   $800

Let me return to the music page-turning new product. The developer's marketing team believes the product's price should be in the $100 - $800 range. A more definitive range is needed for the business case to be developed by the business case team. Musicians in several metropolitan areas were surveyed and asked four pricing questions:

  • 1. At what price do you think the cost becomes too expensive to be a good value? (The too expensive question)
  • 2. At what price do you think the cost is starting to get expensive, but not out of the question? (The expensive question)
  • 3. At what price do you think the cost is a bargain? (The inexpensive or bargain question)
  • 4. At what price do you think the cost is too low for the quality to be good? (The too inexpensive or cheap question)

Eight price points were presented as shown in Table 3.6. Each musician respondent selected one of the eight price points for each of the four questions. Some failed to make a selection, so their responses were coded as missing and dropped from the analysis. Cumulative distributions were derived for each question across all remaining respondents. The results are summarized in Table 3.7. The business case development team was advised to develop a business case based on a range of $229 - $265. A follow-up study should be done once the business and marketing plans are approved to determine the exact price.
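For the page turner example, the same machinery would be applied to the musicians' responses. The following is a hypothetical usage sketch only: the responses DataFrame and the helper functions are the assumed ones from the earlier snippets, not the actual study code.

```python
import numpy as np

# Hypothetical usage only: 'responses' is assumed to be a DataFrame with one
# row per musician and the four question columns used in vw_curves() above,
# limited to the eight price points of Table 3.6, with non-responses dropped.
grid = np.arange(100, 801, 1)              # $1 steps across the $100 - $800 range

# curves = vw_curves(responses.dropna(), grid)
# pmc = crossing(grid, curves["expensive"],     curves["cheap"])
# pme = crossing(grid, curves["too_expensive"], curves["bargain"])
# print(f"Business-case price range: ${pmc:,.2f} - ${pme:,.2f}")
```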

Summary

I covered early-stage product design in this chapter. Conjoint analysis is useful at this stage because it helps to identify the product attributes that are important to

TABLE 3.7 This is a summary of the van Westendorp price analysis for the music page turner new product.

Classification                          Price       Cum. Respondents (%)
Optimal Price Point (OPP)               $239.22     24.3%
Indifference Price Point (IPP)          $252.60     34.6%
Point of Marginal Cheapness (PMC)       $228.83     26.7%
Point of Marginal Expensiveness (PME)   $265.34     30.5%
Pricing Options Range                   $228.83 - $265.34 ($36.51)

the customers who will buy and use the product. The drawback to conjoint is that it is materialistically oriented - only attributes and their levels count for defining a product. The emotions a customer may attach to some features are ignored. The Kansei approach to product design tries to rectify this.

I also discussed developing a price range, not for the purpose of developing a final pricing strategy, but to have a reasonable number for a business case. I discuss better pricing approaches in later chapters.

The next chapter takes the development of the new product to the next level - testing with customers to see if it will work and sell. A discrete choice approach, which is akin to conjoint analysis, will be the major focus of that chapter.

Appendix 3.A

Brief overview of the chi-square statistic

Assume you have a discrete variable measured on $L$ levels. The Null Hypothesis is that the $L$ proportions are equal. If the Null Hypothesis is true, then the expected frequency of cases at each level, $E_\ell$, is

$$E_\ell = n \times p$$

where $n$ is the sample size and $p$ is the proportion in each cell. The factor $p = 1/L$ is the proportion in each level, which is a constant if the Null Hypothesis is true. The test compares the observed frequency, $O_\ell$, at each level to the expected frequency, $E_\ell$. The two frequencies should be equal except for random error, which should be small. The $\chi^2$ test statistic is

$$\chi^2 = \sum_{\ell=1}^{L} \frac{(O_\ell - E_\ell)^2}{E_\ell}.$$

This follows an asymptotic chi-square distribution with $L - 1$ degrees-of-freedom. The p-value for this test statistic indicates the significance. As a rule-of-thumb (ROT) for most statistical tests, a p-value $< \alpha$, where $\alpha = 0.05$ is the typical level of significance, indicates significance. This p-value is only for the upper tail of the chi-square distribution since only large values of the chi-square contradict the Null Hypothesis. This test is called the Pearson Chi-square Test. The chi-square density is illustrated in Figure 3.17. The hypothesis tests are shown in Figure 3.18.
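As an illustration, this one-way test is readily available in standard statistical software. Here is a minimal Python sketch using scipy.stats.chisquare; the counts are hypothetical (they are not the data behind Figure 3.18):

```python
from scipy import stats

# Hypothetical counts: suppose 200 customers each picked one of four prototypes.
observed = [38, 62, 55, 45]
n, L = sum(observed), len(observed)
expected = [n / L] * L                     # equal proportions under the Null

chi2, p_value = stats.chisquare(observed, f_exp=expected)
print(f"Pearson chi-square = {chi2:.4f} on {L - 1} df, p-value = {p_value:.4f}")
# Reject the Null Hypothesis of equal proportions when p_value < 0.05.
```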

The $\chi^2$ test can be expanded to test independence of two factors. For example, a sample of customers could be shown four prototypes for a new product and then asked which one they would most likely purchase. The prototype designers want to know if there is any association between the prototype selected and income level. Suppose income is simply divided into three categories: < $40K, $41K - $60K, and $60K+. The first factor, income, has three levels and the second factor, the prototype selected, has four levels. This results in a two-way table called a contingency table which consists of counts or frequencies of each combination of the two factors. If the first factor, which defines the rows, has $I$ levels and the second, which defines the columns, has $J$ levels, then the contingency table is of size $I \times J$. For this example, the table is $3 \times 4$. A contingency table is shown in Table 3.8.


FIGURE 3.17 The density curve for the chi-square distribution varies as the number of degrees-of-freedom. Notice the curve starts as a negative exponential but then quickly morphs into a right-skewed distribution and then begins to look like a normal distribution.

TABLE 3.8 This example contingency table shows the frequency counts of 650 customers who were asked to select one of four prototype products and their income level.

Income          Prototype Selected
                A      B      C      D
< $40K          30     50     51     20
$41K - $60K     30     40     45     35
$60K+           90     60     104    95


FIGURE 3.18 Chi-square tests for differences in proportions for the four music prototypes are shown here for the Null Hypothesis that the proportions are all equal. For four levels, the expected proportion per level is 0.25. The Likelihood-ratio Chi-square is 11.0173 and the Pearson Chi-square is 11.5385. The p-values for both are below 0.05 so the Null Hypothesis is rejected by both tests.

Typically, the frequencies are converted to proportions by dividing each value in the table by the sample size, $n$. If $N_{ij}$ is the frequency in cell $ij$ of a contingency table, then $n = \sum_{i}\sum_{j} N_{ij}$ and $p_{ij} = N_{ij}/n$. The Null Hypothesis is that the row (or column) distributions are the same. This amounts to saying that there is only one distribution, so you have homogeneity in the distributions. Therefore, any test of this equality is a test for homogeneity. Under independence, based on elementary probability theory, you should have an expected value for each cell of the table equal to the product of the respective row and column proportions. Let $p_{i.}$ be the marginal proportion for row $i$, then $p_{i.} = \sum_{j} p_{ij}$; similarly for $p_{.j}$. Then:

$$E_{ij} = n \times p_{i.} \times p_{.j}$$

where $p_{i.} = \sum_{j} p_{ij}$ is the marginal proportion for the $i^{th}$ row of Table 3.8, with the dot notation indicating the summation; similarly for $p_{.j}$. The $\chi^2$ test statistic is:

$$\chi^2 = \sum_{i=1}^{I}\sum_{j=1}^{J} \frac{(N_{ij} - E_{ij})^2}{E_{ij}}.$$

This follows an asymptotic chi-square distribution but with $(I-1) \times (J-1)$ degrees-of-freedom. The p-value again tells you the significance. This is still a Pearson Chi-square Test.

A second test is based on the log of the ratio of the observed to expected frequencies weighted by the observed frequencies:

$$G^2 = 2 \sum_{\ell=1}^{L} O_\ell \ln\left(\frac{O_\ell}{E_\ell}\right)$$

where the $O_\ell$ and $E_\ell$ are defined as above. This also follows an asymptotic chi-square distribution with $L - 1$ degrees-of-freedom. This is called the Likelihood-ratio Chi-square Test.

A contingency table is analyzed using

$$G^2 = 2 \sum_{i=1}^{I}\sum_{j=1}^{J} N_{ij} \ln\left(\frac{N_{ij}}{E_{ij}}\right).$$

This is also a Likelihood-ratio Chi-square Test.

Figure 3.19 shows the chi-square test results for the data in Table 3.8.
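For reference, both tests can be reproduced from the Table 3.8 counts with scipy. This is a sketch under that assumption, not the software used to produce Figure 3.19:

```python
import numpy as np
from scipy import stats

# Frequency counts from Table 3.8 (rows: income levels; columns: prototypes A-D).
counts = np.array([[30, 50,  51, 20],     # < $40K
                   [30, 40,  45, 35],     # $41K - $60K
                   [90, 60, 104, 95]])    # $60K+

# Pearson Chi-square Test of independence.
pearson, p_p, dof, expected = stats.chi2_contingency(counts, correction=False)

# Likelihood-ratio (G-square) test on the same table.
g2, p_g, _, _ = stats.chi2_contingency(counts, correction=False,
                                        lambda_="log-likelihood")

print(f"Pearson chi-square          = {pearson:.3f} (df = {dof}, p = {p_p:.4f})")
print(f"Likelihood-ratio chi-square = {g2:.3f} (p = {p_g:.4f})")
# These should reproduce the 24.571 and 25.279 reported in Figure 3.19.
```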

Let me discuss the Pearson Chi-square statistic from a matrix perspective. I change the notation because matrix notation is used in Appendix B for correspondence analysis, so this section sets the stage for that discussion.

Consider an $I \times J$ matrix $\mathbf{N}$ of the non-negative frequencies. This could be a crosstab of two measures. If $\mathbf{1}_I$ is an $I$-element column vector of 1s, the size of the vector corresponding to the number of rows of $\mathbf{N}$, and $\mathbf{1}_J$ is a $J$-element column vector of 1s corresponding to the columns of $\mathbf{N}$, then the sample size is

$$n = \mathbf{1}_I^T \mathbf{N} \mathbf{1}_J.$$

The frequency matrix is converted to a matrix of proportions, $\mathbf{P}$, by dividing each element of $\mathbf{N}$ by the sample size, $n$, so that:

$$\mathbf{P} = \frac{1}{n} \mathbf{N}.$$

The matrix P is called the correspondence matrix. The data for the example crosstab table are shown in the Basic Data portion of Figure 3.20.

Define the row masses as the values on the row margin of the matrix, one mass per row. The collection of row masses is the row marginal distribution. These masses are given by

$$\mathbf{r} = \mathbf{P} \mathbf{1}_J$$

where $\mathbf{r}$ is $I \times 1$. Similarly, you can define the column masses, or column marginal distribution, as

$$\mathbf{c} = \mathbf{P}^T \mathbf{1}_I$$

which is $J \times 1$. The matrix product, $\mathbf{r}\mathbf{c}^T$, is the matrix of expected proportions under independence.

The masses are collected into two diagonal matrices, one for rows, $\mathbf{D}_r$, which is $I \times I$, and the other for columns, $\mathbf{D}_c$, which is $J \times J$. The masses for the example crosstab table are shown in the Masses portion of Figure 3.22 and the corresponding diagonal matrices are in the Diagonal Matrices of Masses portion of the figure.

These pieces are combined into one $I \times J$ matrix of standardized residuals:

$$\mathbf{S} = \mathbf{D}_r^{-1/2} \left( \mathbf{P} - \mathbf{r}\mathbf{c}^T \right) \mathbf{D}_c^{-1/2}.$$

Residuals are the difference between the matrix of observed proportions and the expected proportions under independence. This matrix is shown in the Residuals portion of Figure 3.22.


FIGURE 3.19 These are the results of the two chi-square tests for the contingency table in Table 3.8. The table is reproduced as part of the output. The table shows the frequency counts and the proportions which are the frequencies divided by the sample size, n = 650. The Likelihood-ratio Chi-square is 25.279 and the Pearson Chi-square is 24.571. The respective p-values are both less than 0.05 so the Null Hypothesis is rejected by both tests.


FIGURE 3.20 This report illustrates the calculation of the Pearson Chi-square. The Basic Data Frequency Table at the top corresponds to the Contingency Table in Table 3.8. Note that the Pearson Chi-square value (24.571) matches the value reported in Figure 3.19.

If $\mathbf{S}$ is squared element-by-element and multiplied by the sample size, $n$, the result is the matrix of Pearson Chi-square values for each cell of the crosstab. That is,

$$\chi^2_{ij} = n \times s_{ij}^2.$$

The observation that these are the chi-square values is based on the discussion above. The sum of these chi-square values is then the Pearson Chi-square for the crosstab table.
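Here is a minimal Python sketch of this matrix calculation using the Table 3.8 frequencies; it is an illustration of the algebra above, not code from the text:

```python
import numpy as np

# Table 3.8 frequencies again, now treated as the matrix N.
N = np.array([[30, 50,  51, 20],
              [30, 40,  45, 35],
              [90, 60, 104, 95]], dtype=float)

n = N.sum()                                  # sample size: 1_I' N 1_J
P = N / n                                    # correspondence matrix
r = P.sum(axis=1)                            # row masses (row marginal distribution)
c = P.sum(axis=0)                            # column masses (column marginal distribution)
Dr_inv_sqrt = np.diag(1.0 / np.sqrt(r))      # D_r^(-1/2)
Dc_inv_sqrt = np.diag(1.0 / np.sqrt(c))      # D_c^(-1/2)

# Standardized residuals: S = D_r^(-1/2) (P - r c') D_c^(-1/2)
S = Dr_inv_sqrt @ (P - np.outer(r, c)) @ Dc_inv_sqrt

cell_chi2 = n * S**2                         # per-cell Pearson chi-square values
print(f"Pearson chi-square = {cell_chi2.sum():.3f}")   # should reproduce 24.571
```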

 