Track: Did you succeed?

Effective post-launch product tracking requires a tracking system. One possibility is displayed in Figure 7.1. There are two main branches. One is based on transactions data: orders, revenues, returns, margins, and so forth. This is what most managers typically think about when tracking a product’s performance. The second branch is based on the plethora of text data businesses collect and house from customer comments, online reviews, and the like: the text data I discussed in Chapter 2 as a source of clues for new product ideas. The same data can be used to discover sentiments and opinions regarding the newly launched product. This holds, incidentally, for both new and existing products, but my focus is on new ones.

Tracking a product, whether new or existing, is not a simple task. Creating or studying a report showing the number of units sold, perhaps the number returned,


FIGURE 7.1 This illustrates the key components and their relationships in a post-launch product tracking system. The objective of this system is to collect and organize market transactions data to determine if the new product is meeting objectives.

and the net revenue earned, which is what most managers believe is “tracking,” only skims the surface of what is needed and possible. A tracking analysis must go much deeper to determine root causes for weaknesses in sales. Root causes could be identified by:

  • marketing region;
  • sales rep; and
  • customer classification (e.g., loyal and satisfied customers).

Not to be overlooked are the different components of the marketing mix. The price points and price structure may be inappropriate for the target markets despite the pre-launch research described in the previous chapters; excessive use of discounts offered by the sales force may be stimulating sales but hurting net revenue; marketing messages, although adequately tested, may still not resonate with customers; website online purchasing tools (e.g., shopping carts) may be poorly implemented; and the list goes on. It is important to identify which of these is the culprit hurting sales, or even causing sales to exceed expectations. If sales are not up to expectations, as determined by the choice models and simulations I described in Chapter 4 or the sales forecasts I described in Chapter 6, then corrective actions are needed, and needed quickly, before the new product gains a negative appraisal in the market from which it might be unable to recover.
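As a rough sketch of what such a root-cause breakdown might look like in practice, the snippet below slices the same hypothetical sales data by region, sales rep, and customer segment. The data and column names are illustrative assumptions, not drawn from any system described in this chapter.

```python
import pandas as pd

# Hypothetical post-launch transactions; all column names are illustrative.
orders = pd.DataFrame({
    "region":      ["East", "East", "West", "West", "West"],
    "sales_rep":   ["Ann", "Bob", "Cara", "Cara", "Dan"],
    "segment":     ["loyal", "new", "loyal", "new", "new"],
    "units":       [120, 80, 40, 30, 25],
    "net_revenue": [2400.0, 1500.0, 700.0, 450.0, 400.0],
})

# Slice the same revenue total three ways to locate where a weakness originates.
by_region  = orders.groupby("region")["net_revenue"].sum()
by_rep     = orders.groupby("sales_rep")["net_revenue"].sum()
by_segment = orders.groupby("segment")["net_revenue"].sum()
```

Comparing the three views side by side is often enough to show whether a shortfall is concentrated in one region, one rep, or one customer class, which is the starting point for a deeper root-cause analysis.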

There are thus three parts to new product tracking post-launch, although these could, as I mentioned above, be applied to all existing products that have been in the market for some time. These are:

  1. analysis of what has happened to the product and what is currently happening to it;
  2. prediction of what will happen if some aspect of the product or its marketing is changed; and
  3. assessment of customer sentiments and opinions about the product.

The first is the domain of Business Intelligence, the second of Business Analytics, and the third of Sentiment Analysis and Opinion Research. I will describe each branch of Figure 7.1 and its subdivisions in the following sections.

This chapter is divided into four sections corresponding to the two main branches of Figure 7.1. In the first section, I discuss the types of analysis possible with transactions-type data. This includes data visualization, predictive modeling, and forecast error analysis. The forecast error analysis differs from the one I described in Chapter 6 because this error analysis is based on actual market data as opposed to the testing data set. The second section focuses on sentiment analysis and opinion mining, a newer area of active research. The third and fourth sections are a software review and a summary, respectively.
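To make the distinction concrete, a post-launch forecast error analysis compares the Chapter 6-style forecasts against realized market sales rather than a held-out testing set. The sketch below, using entirely hypothetical numbers, computes the mean absolute percentage error (MAPE), one common summary of forecast accuracy.

```python
import numpy as np

# Hypothetical monthly unit sales: the pre-launch forecast vs. market actuals.
forecast = np.array([100.0, 110.0, 120.0, 130.0])
actual   = np.array([ 90.0, 105.0, 125.0, 115.0])

errors = actual - forecast                     # positive means under-forecast
mape = np.mean(np.abs(errors) / actual) * 100  # mean absolute percentage error
```

A persistent sign pattern in `errors` (always over- or under-forecasting) is itself diagnostic, pointing to a bias in the model rather than random noise.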

Transactions analysis

In this section, I will cover some material on analyzing transactions data, although the word “transactions” is inaccurate. The data in the top branch of Figure 7.1 may consist of any type of data such as field representatives’ time logs, financial data, personnel data, and so forth. The types of analysis (Business Intelligence or Business Analytics) are the same. For this section, “transactions” refers to numerical data of any sort for any part of the business. The only fact that matters for this branch of Figure 7.1 is that text data for sentiment analysis and opinion mining are not included; they are handled in the second branch.

Transactions data are drawn from a number of different data tables or databases such as those listed in Figure 7.2. These tables could comprise a data store, data warehouse, data lake, or data mart. Elements of these tables are pulled into a Data Consolidator that appropriately transforms the data and loads them into a data mart.

A data store is the storage location closest to the source. It is temporary storage before the data are cleansed and loaded into the data warehouse which is a more encompassing and inclusive storage location for data. A data warehouse has a wide variety of types of data: financial, personnel, transaction, and so forth, all, of course, organized by topical and functional areas. A data lake is a variation of a data warehouse in that, like a warehouse, it stores a variety of data but these data are in their


FIGURE 7.2 This flow chart illustrates the processing of data from some source elements, a data store for example, to a consolidator that creates a data mart for a functional area. End-user analysts can access the data mart using a query system to address questions which are either guided by management’s issues or are unguided ad hoc questions. A report of some sort usually results as an end product.

“native format” and can be structured and unstructured. They may come from social media, blogs, emails, sensors, and so forth. The costs of maintaining these data in a lake are lower than for a warehouse because the storage arrangements are less restrictive. The data lake, however, has other costs beyond those associated with maintenance. The primary cost is the level of preprocessing that has to be applied to the data from a lake that will be used in an analytical process. Since the data in a lake are unprocessed, by definition, and direct from their source, they first have to be processed, cleaned, checked, and wrangled (i.e., merged with other data) before they can be used in the analytical process. See Lemahieu et al. [2018] on processing costs and Kazil and Jarmul [2016] for insight into the concept and complexities of data wrangling.

A data mart is an extract from one of these three storage media that puts the data closer to the end-user. It is more functionally oriented meaning the data are for a functional area. For example, there could be a finance data mart, a marketing data mart, and a personnel data mart. Analysts in the marketing department use their marketing data mart for their work and do not use (or even have access to) the finance data mart. There are several reasons for having a data mart as listed by Lemahieu et al. [2018]:

  • they provide focused content for the end-users;
  • they provide more focused and efficient end-user queries because there are fewer layers of data to navigate;
  • they are closer to the end-users, which minimizes network congestion; and
  • they allow pre-defined reporting tools that are functionally oriented.
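As an illustration of the kind of focused, pre-defined query a data mart supports, the sketch below builds a tiny in-memory stand-in for a marketing data mart and runs a typical end-user summary query. The schema and figures are hypothetical.

```python
import sqlite3

# An in-memory stand-in for a marketing data mart; the schema is illustrative.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, units INTEGER, net_revenue REAL)")
con.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("East", 120, 2400.0), ("West", 40, 700.0), ("East", 80, 1500.0)],
)

# A typical pre-defined, functionally oriented end-user query:
# total net revenue by marketing region.
rows = con.execute(
    "SELECT region, SUM(net_revenue) FROM sales GROUP BY region ORDER BY region"
).fetchall()
```

Because the mart holds only marketing data, the end-user navigates few layers and the query stays simple, which is exactly the efficiency argument in the list above.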

As noted by Lemahieu et al. [2018], because of the existence and use of data marts, which can be quite large and complex in their own right, the data warehouse they draw from is sometimes called an enterprise data warehouse to distinguish it from a data mart.

The data store is probably the furthest from the end-user but the closest to the source, while the data mart is the opposite. All four have query capabilities allowing any user access to data, although the data mart’s query capabilities are more in line with the abilities of the end-users it serves, since they are generally less sophisticated in this regard than the IT professionals who manage the other databases. See Lemahieu et al. [2018] for a discussion of data storage. Also see the succinct online article by Baesens summarizing these storage concepts.1

The process of extracting data from multiple sources, appropriately transforming them, and loading the transformed data into a more convenient data table is referred to as the Extract-Transform-Load process (ETL). This is a standard way of viewing the manipulation of large amounts of data with the goal of making them more accessible and consolidated for the end-user. Once the data have been consolidated, they are ready for the analytical applications. See Lemahieu et al. [2018] for some discussion of the ETL process.2
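A minimal sketch of the three ETL steps, under the assumption of two hypothetical source tables and an in-memory SQLite database standing in for the data mart:

```python
import sqlite3
import pandas as pd

# Extract: pull from two hypothetical source tables (names are illustrative).
orders = pd.DataFrame(
    {"order_id": [1, 2], "cust_id": [10, 11], "revenue": [250.0, 400.0]}
)
customers = pd.DataFrame({"cust_id": [10, 11], "region": ["East", "West"]})

# Transform: merge the sources and derive a reporting field.
mart = orders.merge(customers, on="cust_id")
mart["revenue_k"] = mart["revenue"] / 1000.0

# Load: write the consolidated table into the (here, in-memory) data mart.
con = sqlite3.connect(":memory:")
mart.to_sql("marketing_mart", con, index=False)
```

In a production Data Consolidator each step would of course be far more involved (cleansing rules, incremental loads, scheduling), but the extract-transform-load skeleton is the same.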

Business intelligence vs. business analytics

There are numerous key decisions that have to be made at all stages of the new product development process, as I have emphasized throughout the preceding chapters. These decisions affect not only the definition of the product but also its launch and future marketing. In addition, key decisions are made throughout the process regarding the continued effort on the product development and launch itself. This is the business case process, which results in a simple Go/No-Go decision. At each critical decision point, information is needed. With little or no information, decision makers have to guess or “approximate” the impact of their decision. This approximation, however, comes at a high cost because they could be wrong, and probably will be wrong, so big mistakes will happen. These mistakes are costly, perhaps in terms of lost opportunities, lost revenue, lost sales, lost market share, or a lost business.

As information is gained about customers, markets, and competitors, decision makers guess less and the costs of their decisions fall. But not to zero! Even with a large amount of information, decisions could still be wrong, so a cost of guessing or approximating is still incurred. This is especially true in a world with Big Data, but that is another story. Nonetheless, the costs of approximating fall the more information you have and use.

Some people believe information is discrete or, better yet, binary: you either have it or you do not. But it is not discrete. It is continuous, running from Poor to Rich Information, as I noted in Chapter 1. Poor Information is raw data, or perhaps some simple summary statistics such as means and proportions. This type tells you something, but not much. Rich Information, on the other hand, provides insight. It tells you something you did not know. It is insightful, useful, and actionable. Poor Information is little or none of this.

Figure 7.3 is a modification of Figure 1.6 to show the types of analyses available along the Information Continuum. All too often, analyses are at the shallow end, restricted to means, proportions, and pie and bar charts, which result in Poor Information. Analyses at the deep end, such as predictive modeling, which result in Rich Information, are what is needed.


FIGURE 7.3 Here are examples of Poor and Rich Information, or, more precisely, of their sources, which are some form of data analysis. The sources on the left are Shallow Analytics while those on the right are Deep Analytics.
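The shallow-versus-deep contrast can be made concrete with a toy example, using entirely hypothetical sales figures: a mean (a shallow summary) hides the very trend that a simple fitted model (a step toward deep analytics) reveals.

```python
import numpy as np

# Hypothetical monthly unit sales in the six months after launch.
month = np.arange(1, 7)
units = np.array([100, 96, 91, 88, 82, 79], dtype=float)

# Shallow analytics: a single summary statistic (Poor Information).
mean_units = units.mean()  # says nothing about direction of movement

# Deep analytics: a fitted trend (a step toward Rich Information).
slope, intercept = np.polyfit(month, units, 1)  # slope < 0 reveals decline
```

The mean here looks perfectly healthy; only the fitted slope exposes that sales are sliding month over month, which is the kind of actionable insight Rich Information provides.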

Rich Information has two variations:

  1. what did happen or what is currently happening, on the one hand; and
  2. what will happen under different conditions, on the other.

The way you get both variations is somewhat the same; they both rely on statistical and data visualization methods. Yet, they are fundamentally different. The former, as I noted above, is the domain of Business Intelligence while the latter is the domain of Business Analytics. These are the two splits in the Transaction branch of Figure 7.1.

A further distinction is possible. More formally, Business Intelligence relies on simple statistics and data visualization to say something about what did happen or what is currently happening with the business and its markets. Dashboards are popular management tools, not analysts’ tools, for summarizing this Poor Information. Due to their layouts and infographics orientation, they convey a sense of authority and empowerment to management. Business decisions, by their nature, are forward looking so, in my opinion, dashboards with Business Intelligence summaries and graphics are of limited use.

Business Analytics, in the hands of skilled analysts, relies on scientific data visualization and complex predictive modeling to extract Rich Information from data. Scientific data visualization differs from the infographics type of visualization in that it is more penetrating. Infographics are meant to have a “wow” impact as they often have more glitz than substance. They tend to have a lot of chart-junk. Scientific data visualization is meant to provide Rich Information with clear, insightful presentation of data unencumbered by glitz and chart-junk. It also supports and complements predictive modeling, which is forward looking. It is this forward-looking aspect that is needed for business decisions. See Tufte [1983] for a discussion of chart-junk.

Business Intelligence and Business Analytics, although they have separate focuses, nonetheless complement and support each other. Business Intelligence, perhaps through a dashboard, may indicate a problem, say about pricing. Business Analytics would then be employed by the analysts, perhaps Data Scientists, to delve into the nature of the pricing problem and the implications for key business metrics if prices, both structure and levels, were to be changed. The same holds if a dashboard indicates a competitive change or an economic change (e.g., real GDP slows or the stock market declines precipitously): Business Analytics would enable an analyst to assess the implications. The important point is that Business Intelligence and Business Analytics work together to reduce the cost of approximation: you know more and have richer information.
