A formal test requires three components. First, it must be possible to find two or more nearly equivalent groups of customers to compare. Second, the marketer must be able to run a different campaign in each group. Third, it must be possible to compare and contrast the results. The observations and instrumentation described so far give us the ability not only to design such experiments but also to run many of them, optimizing the marketing program based on a large number of independent results.
Let me take the example of Cuppa Haven described earlier in this chapter. I introduced two campaigns. The first was a postal campaign to the top 25 percent of the communities visiting a mall. The second was a poster placed at the movie theater. Both offered the same app for download, and Cuppa Haven was able to compare the results of the two campaigns and conclude that the poster at the movie theater was the more effective campaign. In this case, the two groups of customers were the top 25 percent of the communities likely to visit the mall, and a set of moviegoers at the mall. Cuppa Haven could target a campaign to each group. In both cases, the customers were able to download the app from a source provided to them. Once the downloads occurred, Cuppa Haven was able to collect and compare the results.
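The comparison Cuppa Haven performed can be sketched as a standard two-proportion z-test on download rates. The counts below are purely hypothetical, chosen only to illustrate the arithmetic; the test itself is the textbook formula, not anything specific to Cuppa Haven.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for comparing two campaign conversion rates.
    conv_* are download counts, n_* are customers reached."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both campaigns perform equally.
    p = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical results: 120 downloads from 4,000 mailers vs.
# 210 downloads from 5,000 moviegoers who saw the poster.
z = two_proportion_z(120, 4000, 210, 5000)
print(round(z, 2))  # → 3.01, well above the 1.96 threshold at 95% confidence
```

A z-statistic above 1.96 means the difference is unlikely to be chance, which is the evidence a marketer needs before declaring the poster the winner.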
Central to such experimentation is a market test orchestration engine that can execute these tests, collect the results, and display them to the marketer. Sophisticated market test orchestration programs run hundreds of experiments simultaneously, especially when the object of the test is an electronic product. With software-configurable products, targeted campaigns, and dynamic pricing, we have all the ingredients for market tests at a large scale.
The credit-scoring industry has adopted champion-challenger testing as a way to optimize credit collection strategies. Champion-challenger is a term used to describe the way in which the existing collections strategy, known as the champion, is routinely tested against an alternative approach, known as the challenger. To ensure accuracy, the challenger should be tested in a live environment, but controlled to avoid financial loss. Thus the challenger is designed using recent customer information and implemented on a small, statistically robust sample, with the results closely monitored.
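The key operational step in champion-challenger testing is routing only a small random slice of live accounts to the challenger while the rest stay on the champion. A minimal sketch, assuming an illustrative 5 percent challenger share and a fixed seed for reproducibility; in practice the sample is sized for statistical power while capping financial exposure.

```python
import random

def split_champion_challenger(accounts, challenger_share=0.05, seed=42):
    """Route a small random sample of accounts to the challenger
    strategy; everyone else remains on the champion."""
    rng = random.Random(seed)
    champion, challenger = [], []
    for account in accounts:
        if rng.random() < challenger_share:
            challenger.append(account)
        else:
            champion.append(account)
    return champion, challenger

champion, challenger = split_champion_challenger(range(10_000))
print(len(challenger))  # roughly 5% of 10,000 accounts
```

Once the split is in place, the two groups are worked under their respective strategies and the monitored results decide whether the challenger is promoted to become the new champion.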