Competitive assessment: the role of CEA

Ideas for new products and services could come from a two-fold analysis: what is important to customers for satisfying their needs and problems, and what competitors, both current and prospective, are doing to satisfy those customers. This is a Competitive Environment Analysis (CEA), a survey-based approach to researching opportunities that could be fueled by an analysis of key words and phrases as described above. In particular, the word and phrase analysis could be used to generate a candidate list of attributes for a new product that might meet customer needs. For example, a text analysis for the robot vacuum could result in a list of attributes that include:16

  • 1. Operating Time (hours) on battery power.
  • 2. Charge Time (hours) of batteries.
  • 3. Operating Pattern or area cleaned.
  • 4. Automatic Scheduled Cleaning by time and day.
  • 5. Infrared Sensor allowing the vacuum to navigate around a room.
  • 6. Dirt Sensor for detecting dirt on the floor.
  • 7. Type of Flooring vacuumed: Carpeting, Hardwood, Tile, and Linoleum.
  • 8. Spot Mode for addressing a particularly dirty area.
  • 9. Height Adjustment under furniture.
  • 10. High-efficiency Particulate Air Filter (HEPA) for trapping small particles of dust or dirt and certain allergens.
  • 11. Multi-Room Navigation to maneuver through different rooms.
  • 12. Battery Indicator for when the battery is low versus fully charged.
  • 13. Full Bin Indicator indicating when the dust bin needs to be emptied.
  • 14. Cliff Sensor that keeps the vacuum cleaner from falling down stairs.
  • 15. Charging Base to recharge batteries.
  • 16. Return to Charging Base for recharging.
  • 17. Bumper that prevents it from scuffing walls and furniture.
  • 18. Boundary Markers that keep the vacuum cleaner from going where it should not.
  • 19. Virtual Walls or small devices that emit an invisible infrared signal to prevent the robot cleaner from going where it should not go.
  • 20. Remote Control to program or control the cleaner.
  • 21. Price point that is competitive and indicates good value-for-the-money (VFM).

Suppose the competitive assessment group has identified three competitor products. How are these competitors meeting each of these attributes? Understanding this competitive environment would help product designers not only identify which attributes to emphasize and develop, but also identify opportunities: places in the product space where there are unmet needs that are important to customers.

One way to assess the environment is to conduct a survey of customers who have purchased existing products and ask them to assess how well the product they bought performed on the list of attributes. It is also necessary to understand how important each attribute is to them. Some attributes may be very important, so customers would pay special attention to a product's performance on them; other attributes may not be important at all. Therefore, two sets of questions have to be asked:

  • 1. How important is each attribute?
  • 2. How does the brand purchased or used perform on each attribute?

The data set compiled from the responses is a three-way table: attributes by products by reviewers (i.e., customers). The data could be viewed as a cube with one side the attribute importance measures, another the product performance, and the third the reviewers. One possible form of analysis is to cluster the data and examine the clusters. Another, simpler approach is to collapse one of the dimensions and portray the other two in a two-dimensional map. The obvious dimension to collapse is the customer reviewers since the focus is on the attributes relative to the products; the customers are only the source of the data. The question is how to collapse the customer dimension. First consider the attributes. A question in the survey would ask customers to rate the importance of each attribute to them. Suppose a five-point Likert Scale was used ranging from "Not at all Important" to "Extremely Important." Although the interpretation of Likert Scales is controversial17, most analysts interpret the scale as representing an underlying continuous measure and, therefore, simply average the quantities. In this case, you would average the ratings over all customers and all products. The mean attribute importance and mean attribute performance can then be compared. The two-dimensional map would be a scatter plot with, say, the importances on the horizontal axis and the performances on the vertical axis. The points would be the attributes; for the robot vacuums, there would be 21 points. Each axis could be (arbitrarily) divided into two equal parts, producing four quadrants.
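
As a rough illustration, the following Python sketch builds this quadrant map. It assumes a hypothetical pandas DataFrame named ratings with one row per customer-product-attribute combination and columns "attribute", "product", "importance", and "performance" (the last two rated on the five-point scale); these names are assumptions, not part of any survey platform's output.

```python
import pandas as pd
import matplotlib.pyplot as plt

def quadrant_map(ratings: pd.DataFrame) -> None:
    """Scatter plot of mean importance vs. mean performance by attribute,
    divided into four quadrants at the scale midpoint."""
    # Collapse over customers and products: one mean importance and one
    # mean performance rating per attribute.
    means = ratings.groupby("attribute")[["importance", "performance"]].mean()

    fig, ax = plt.subplots()
    ax.scatter(means["importance"], means["performance"])
    for name, row in means.iterrows():
        ax.annotate(str(name), (row["importance"], row["performance"]), fontsize=8)

    # Arbitrary quadrant cut at the midpoint (3.0) of a five-point scale.
    ax.axvline(3.0, linestyle="--", color="gray")
    ax.axhline(3.0, linestyle="--", color="gray")
    ax.set_xlabel("Mean Importance")
    ax.set_ylabel("Mean Performance")
    plt.show()
```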

Another map consists of a bar chart of the attribute importances shown in descending order from the most to the least important attribute. The products’ mean performance rating on each attribute would be overlaid as a symbol (e.g., *, +, etc.). This would concisely show how the products are performing on those attributes that are important. A variation is to calculate the overall mean performance and a 95% confidence interval around the overall mean. The overall mean and the two bounds would also be overlaid on the map. All competitive symbols lying within the bounds would indicate products that are statistically performing the same on an attribute.
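
A sketch of this overlay map, using the same assumed ratings DataFrame as above; the 95% confidence interval here uses a simple normal approximation, which is one reasonable choice among several.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

def importance_bar_map(ratings: pd.DataFrame) -> None:
    """Bar chart of mean attribute importances (descending) with each
    product's mean performance overlaid as a point, plus the overall mean
    performance and its 95% confidence interval as reference lines."""
    imp = ratings.groupby("attribute")["importance"].mean().sort_values(ascending=False)
    perf = ratings.groupby(["attribute", "product"])["performance"].mean().unstack("product")

    ax = imp.plot(kind="bar", color="lightgray", figsize=(10, 4))
    for product in perf.columns:
        # Overlay each product's mean performance at the bar positions.
        ax.plot(range(len(imp)), perf.loc[imp.index, product].values, "o", label=str(product))

    # Overall mean performance and a normal-approximation 95% CI around it.
    overall = ratings["performance"].mean()
    half = 1.96 * ratings["performance"].std() / np.sqrt(len(ratings))
    for y in (overall - half, overall, overall + half):
        ax.axhline(y, linestyle=":", color="black", linewidth=0.8)

    ax.set_ylabel("Mean rating")
    ax.legend(title="Product")
    plt.tight_layout()
    plt.show()
```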

A better approach regarding performance is to find the standard deviation of the performance ratings over all customers and products. First, calculate the mean performance of each attribute for each product. This collapses the Customer × Attribute × Product matrix across the customer dimension, leaving an Attribute × Product matrix. The entries in the cells of this table are the mean ratings by all customers who rated the attributes for the products; sample sizes will necessarily vary because not all customers rate all the products. Further collapse this two-way matrix by finding the standard deviation of the mean performance ratings across all products for each attribute. This yields a one-dimensional table of performance standard deviations by attribute.
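
The two collapsing steps translate directly into two groupby operations. Again, this is a sketch against the assumed ratings DataFrame introduced above.

```python
import pandas as pd

def attribute_dos(ratings: pd.DataFrame) -> pd.Series:
    """Standard deviation, across products, of each attribute's mean
    performance rating (interpreted below as the DOS)."""
    # Step 1: collapse the Customer x Attribute x Product cube over customers,
    # giving one mean performance rating per attribute-product cell.
    cell_means = ratings.groupby(["attribute", "product"])["performance"].mean()
    # Step 2: collapse over products, giving one standard deviation per attribute.
    return cell_means.groupby("attribute").std()
```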

The standard deviation shows the variation among the products for each attribute. This variation can be interpreted as a degree of similarity (DOS) of the products: a high DOS shows much variation among the competitors in how they provide an attribute, while a small DOS shows little variation in performance among the competitors; they are similar. A DOS of 0.0 means all competitors are the same on an attribute.

The DOS for each attribute can be plotted against the mean importance rating of each attribute to show the competitive strength among the products relative to what is important to customers. This map can be divided into four quadrants, hence it is also sometimes simply referred to as a quadrant map. It shows the market structure for the attributes.
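
Continuing the sketch, the quadrant map could be drawn as follows, reusing the ratings DataFrame and attribute_dos function assumed above; cutting the axes at the medians is one arbitrary choice for forming the quadrants.

```python
import matplotlib.pyplot as plt

dos = attribute_dos(ratings)
imp = ratings.groupby("attribute")["importance"].mean()

fig, ax = plt.subplots()
ax.scatter(imp, dos)
# Arbitrary quadrant cuts at the medians of each measure.
ax.axvline(imp.median(), linestyle="--", color="gray")
ax.axhline(dos.median(), linestyle="--", color="gray")
ax.set_xlabel("Mean Importance")
ax.set_ylabel("DOS (std. dev. of product means)")
plt.show()
```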

An example report is shown in Figure 2.16 and Figure 2.17. Figure 2.16 shows the mean importance ratings of the 21 robot vacuum attributes and the DOS across the competitors for each attribute. Each respondent was asked which brand of robot they have and was then asked to rate that robot's performance, or their satisfaction, on each of the 21 attributes. For each attribute, the mean performance rating was calculated, and then the standard deviation of those means was calculated. These are shown in table format in Figure 2.16 and in map format in Figure 2.17. From the map, it is clear that the competitors all perform about equally on most attributes, all of which are important to customers; see the lower right quadrant of Figure 2.17. However, there is one attribute, Battery Indicator, for which performance is an issue, yet this attribute is important to customers. This suggests an opportunity: build a product with a better battery indicator or some type of low-battery warning system. The design team, of course, needs more insight into this problem area, but at least they now have a start. More survey research could be done to refine their understanding of the issue.

FIGURE 2.16 This is a CEA summary table showing the 21 robot vacuum cleaner attributes: their mean importance and performance "spread" as measured by the standard deviation of the performance ratings.

FIGURE 2.17 This is a CEA quadrant map showing the 21 robot vacuum cleaner attributes: their mean importance and performance "spread." There were three competitor products and the producer of the new vacuum for a total of four products.

Another level of analysis could be done, data permitting. For each attribute, the difference between your product's mean performance and each competitor's mean performance can be determined. The difference should, of course, be statistically tested. The recommended test is Dunnett's multiple comparison test. With multiple comparison tests in general, there is an issue with performing more than one statistical test on the same data. The standard level of significance, which is the probability of falsely rejecting the Null Hypothesis, used in, say, a t-test is α = 0.05.18 This is sufficient when one test is performed to determine a difference. When more than one test is performed, however, it can be shown that the probability of falsely rejecting the Null Hypothesis, H0, is greater than 0.05. In fact, Pr(Falsely Rejecting H0) = 1 − (1 − α)^k, where k is the number of tests. Table 2.3 shows what happens to this probability for different values of k.

TABLE 2.3 This table shows the effect on the probability of falsely rejecting the Null Hypothesis for different numbers of tests. Calculations are based on α = 0.05.

  Number of Tests (k)    Pr(Falsely Rejecting H0)
  1                      0.050
  2                      0.098
  3                      0.143
  4                      0.185
  5                      0.226
  10                     0.401
  100                    0.994
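
As a quick check, the values in Table 2.3 follow directly from the formula:

```python
# Reproduce Table 2.3: probability of at least one false rejection of H0
# across k independent tests, each run at alpha = 0.05.
alpha = 0.05
for k in (1, 2, 3, 4, 5, 10, 100):
    print(f"k = {k:3d}: {1 - (1 - alpha) ** k:.3f}")
```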

Several procedures have been developed to control this inflation of the probability. Tukey's Honestly Significant Difference (HSD) test is the most commonly used. It places no restriction on which items are compared; that is, comparisons are not limited to a base item. Dunnett's Test, in contrast, was developed for comparisons against a base. As noted by Paczkowski [2016], this test is "a modification of the t-test, [and] is appropriate when you have a control or standard for comparison." See Paczkowski [2016] for a discussion of multiple comparison tests in general and some comments on Dunnett's Test.
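
A minimal sketch of Dunnett's Test using scipy.stats.dunnett (available in SciPy 1.11 and later). The rating vectors here are synthetic stand-ins; in practice each would hold customers' performance ratings of one attribute, with your own brand serving as the control.

```python
import numpy as np
from scipy.stats import dunnett  # requires SciPy >= 1.11

# Synthetic placeholder ratings on a 1-5 scale (not real survey data).
rng = np.random.default_rng(0)
own_brand = rng.integers(1, 6, size=50)                     # control group
competitors = [rng.integers(1, 6, size=50) for _ in range(3)]

# Compare each competitor against your brand, adjusting for multiplicity.
res = dunnett(*competitors, control=own_brand)
print(res.statistic)  # one statistic per competitor vs. your brand
print(res.pvalue)     # adjusted p-values for the three comparisons
```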

For each competitive comparison on each attribute, there would then be three possibilities: your brand is statistically higher than the competitor, statistically the same, or statistically lower. A heatmap could then be created that shows the attributes on the vertical axis, the competitors on the horizontal axis, and the cells color coded to indicate the sign of the statistical difference. This would tell you how well you are performing on each attribute relative to your competition. It is, of course, a variation of the bar chart described above.
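
One possible rendering of such a heatmap, assuming a hypothetical DataFrame named signs that holds -1, 0, or +1 for each attribute-competitor cell, derived from the Dunnett comparisons above:

```python
import pandas as pd
import matplotlib.pyplot as plt

def sign_heatmap(signs: pd.DataFrame) -> None:
    """Color-coded map of the sign of each statistical difference:
    -1 = statistically lower, 0 = same, +1 = statistically higher."""
    fig, ax = plt.subplots()
    im = ax.imshow(signs.values, cmap="RdYlGn", vmin=-1, vmax=1, aspect="auto")
    ax.set_xticks(range(signs.shape[1]), signs.columns)
    ax.set_yticks(range(signs.shape[0]), signs.index)
    fig.colorbar(im, ticks=[-1, 0, 1], label="Your brand vs. competitor")
    plt.show()
```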

Contextual design

Contextual design is another approach to designing a new product, one that relies on data collected by directly observing users' interactions with existing systems, tools, materials, and so forth. The belief is that the best way to learn about the problems, needs, and requirements of customers is to observe them in their daily activities and record what they do and how they interact with their environment. Directly observing customers is a qualitative approach in the domain of ethnographic research.19 This approach has been used in a number of industries such as web design and applications, consumer products, manufacturing, automotive, and medical devices.

This approach is not about hearing the voice of the customer but about watching the customer in action. It is based on five principles:

  • 1. Designs must support and extend users’ work practices.
  • 2. People are experts at what they do, but are unable to articulate their own practices and habits.
  • 3. Good product design requires the partnership and participation of the end-users.
  • 4. Good design considers the users as a whole entity.

It should be clear that these design principles apply throughout this chapter. See Holtzblatt and Beyer [2013] for more information and discussion of these design principles.

 