PATTERN DISCOVERY

These tools take data with high variety and look for qualitative or quantitative patterns. Discovery tools include general or specialized search across unstructured data. They incorporate machine-learning capabilities for pattern recognition and can carry ontologies of different domains to make their task intelligent. With the explosion of big data, these tools have advanced significantly in the qualitative analysis of unstructured data, such as social media posts and blogs. The results of the analysis can include quantification of the data, which is transferred to the analytics engine for further analysis, processing, or use. The analysis might also result in qualitative reporting, such as semantic mapping and geospatial visualization.

EXPERIMENT DESIGN AND ADAPTIVE INTELLIGENCE

In chapter 1, I described my personal experience with watching a movie and the repeated commercials associated with the movie. In the example, I was hoping to see food processor commercials, since I was shopping for one online while watching the movie. Let me offer some details about how adaptive intelligence can provide context- and user-specific variations in marketing campaign execution, leading to more focused advertising in such situations. These real-time decision engines must be based not on static rules but on real-time analytics, and they must be adaptive, introducing changes as they execute. In this case, the demand-side platform (DSP) should apply an adaptive bidding algorithm that changes its recommendations based on the user profile, the context of the video content viewing, and other contextual activities, such as my web browsing and search for food processors.

Advertisements saturate after a given number of exposures, beyond which any additional viewing is ineffective. A DMP could count the number of times an ad was displayed and decrement the likeness score for that ad each time it was shown, thereby favoring a different advertisement after a certain number of views. A number of sophisticated marketing experiments can be run to control the saturation effect effectively.
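The saturation logic described above can be sketched as follows. This is a minimal illustration, not a DMP implementation: the class and method names, the base score of 1.0, and the decay of 0.2 per view are all assumptions chosen for clarity.

```python
from collections import defaultdict

class SaturationScorer:
    """Sketch of saturation control: each impression of an ad shown to a
    user decrements that ad's likeness score, so a different ad eventually
    wins the next placement opportunity."""

    def __init__(self, base_score=1.0, decay_per_view=0.2):
        self.base_score = base_score
        self.decay_per_view = decay_per_view
        self.views = defaultdict(int)  # (user_id, ad_id) -> impression count

    def record_impression(self, user_id, ad_id):
        self.views[(user_id, ad_id)] += 1

    def score(self, user_id, ad_id):
        # Score floors at zero once the ad is fully saturated for this user.
        return max(0.0, self.base_score
                   - self.decay_per_view * self.views[(user_id, ad_id)])

    def pick_ad(self, user_id, ad_ids):
        # Favor the least-saturated advertisement for this user.
        return max(ad_ids, key=lambda ad: self.score(user_id, ad))
```

After a few impressions of one ad, `pick_ad` switches to an alternative, which is exactly the behavior the saturation experiments are designed to tune.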

It seems there are two sets of input variables that constantly impact the success of advertising. The first includes search context, saturation, and response to an advertisement; it is fast moving and must be tracked in real time. The second includes viewing habits, shopping behaviors, and other micro-segmentation-related variables; it is either static or changes gradually, accumulating over a long time period.

How would real-time adaptive analytics and decision engines acquire the background and accommodate changes to the models, while at the same time rapidly executing the engine and providing a context-dependent response? There are four components of a real-time adaptive analytics and decision engine:

  • Sensor—Identifies an incoming event with a known entity. If we do not identify this entity, we can create a new one. However, if this is a known entity that we have been tracking, we use the identifiers in the event to connect it to the previous history for this entity. The entity can be an individual, a device (e.g., a smartphone), or a known web browser identified via a previously placed cookie or pixel (see note 29 for web-tracking technologies). Under opt-in, if we placed a coupon on a smartphone and the user of the phone opted in by accepting the coupon, we may have a fair amount of history about the individual.
  • Analytics engine—Maintains a detailed customer profile based on past identified history about the entity.
  • Predictive modeler—Uses predictive analytics to create a cause-effect model, including the impact of frequency (e.g., saturation in advertisement placement), offer acceptance, and micro-segmentation.
  • Scorer—Uses the models to score an entity for a prospective offer.
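A minimal sketch of how the four components might fit together. The class names follow the text; everything else is an assumption for illustration: the event fields (`cookie_id`, `type`, `query`), the profile contents, and the trivial context-boost model.

```python
class Sensor:
    """Resolves an incoming event to a known entity, creating one if needed,
    and appends the event to that entity's history."""
    def __init__(self):
        self.known = {}
    def identify(self, event):
        entity_id = event.get("cookie_id") or event.get("device_id")
        if entity_id not in self.known:
            self.known[entity_id] = {"history": []}
        self.known[entity_id]["history"].append(event)
        return entity_id

class AnalyticsEngine:
    """Maintains a detailed profile per entity from past identified history."""
    def __init__(self):
        self.profiles = {}
    def update_profile(self, entity_id, history):
        searches = [e["query"] for e in history if e.get("type") == "search"]
        self.profiles[entity_id] = {"recent_searches": searches}

class PredictiveModeler:
    """Produces a cause-effect model; here, a toy rule that boosts offers
    matching the entity's recent searches."""
    def build_model(self):
        def model(profile, offer):
            context = " ".join(profile.get("recent_searches", []))
            return 0.5 + (1.0 if offer["category"] in context else 0.0)
        return model

class Scorer:
    """Scores an entity for a prospective offer using the current model."""
    def __init__(self, model):
        self.model = model
    def score(self, profile, offer):
        return self.model(profile, offer)
```

The key architectural point survives even in this toy version: the sensor and scorer touch only a small profile and model, while the analytics engine and predictive modeler can rebuild profiles and models offline.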

While the sensor and scorer components may operate in real time, the analytics engine and predictive modeler do not need to; they work with historical information to change the models. Returning to our example of online advertising, a cookie placed on the desktop identifies me as the movie watcher and can count the number of times an ad has been shown to me. The scorer decrements the score of an advertisement based on past viewership of that advertisement. The analytics engine maintains my profile and identifies me as someone searching for a food processor. The predictive modeler provides a model that increases the score of an advertisement based on past web searches. The scorer picks up my web-search context and places a food processor advertisement in the next placement opportunity. The sensor and scorer work in milliseconds, while the analytics engine and the modeler work in seconds or minutes.

Without a proper architecture, the integration of these components could be challenging. If we place all of these components in the same software, the divergent requirements for volume and velocity may choke the software. The real-time components require rapid capabilities to identify an entity and use a number of models to score the opportunity. The task is extremely memory- and central-processing-unit (CPU) intensive and should be as lean as possible. In contrast, the analytics engine and predictive modeler may carry as much information as possible to conduct accurate modeling, including three to six months of past history and the ability to selectively decay or lower the data priority as time passes or as subsequent events confirm purchases against previously known events. I may be interested in purchasing a food processor this week and would be interested in a couple of well-placed advertisements, but the need will diminish over time as I either purchase one or lose interest.
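The decay idea in the last sentence can be made concrete with a simple weighting function. This is an illustrative sketch only: the exponential form and the seven-day half-life are assumptions, not prescriptions from the text.

```python
import math

def interest_weight(days_since_signal, purchased=False, half_life_days=7.0):
    """Weight attached to a past interest signal (e.g., a food processor
    search). Interest decays exponentially with age, and collapses to zero
    once a subsequent event confirms the purchase."""
    if purchased:
        return 0.0  # a confirmed purchase ends the need for further ads
    return math.exp(-math.log(2) * days_since_signal / half_life_days)
```

The analytics engine could apply such a weight when rebuilding profiles, so that a week-old search counts half as much as a fresh one, and a confirmed purchase removes the signal entirely.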

As we engage with consumers, we have a number of methods to sense their actions, and a number of stages of engagement. A typical online engagement process may track the following stages:

  • 1. Anonymous customer—We do not know anything about the customer and do not have permission to collect information.
  • 2. Named customer—We have identified the customer and correlated the identity to identifying information such as a device, IP address, name, Twitter handle, or phone number. At this stage, specific personal information cannot be used for individual offers because of the lack of opt-in.
  • 3. Engaged customer—The customer has responded to an information request or advertisement, and is beginning to shop based on offers.
  • 4. Opted-in customer—The customer has given us permission to send offers or track information. At this stage, specific offers can be individualized and sent out.
  • 5. Buyer—The customer has purchased merchandise or a service.
  • 6. Advocate—The customer has started to “Like” the product or is posting favorably for a campaign.

A real-time adaptive analytics and decision engine can help us track a customer through these stages and engage in a conversation to advance a customer from one stage to the next.
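The six stages above can be sketched as a simple state machine. The stage names follow the list; the event names that trigger each transition are assumptions invented for the example.

```python
# Engagement stages in order, as described in the text.
STAGES = ["anonymous", "named", "engaged", "opted_in", "buyer", "advocate"]

# (current stage, observed event) -> next stage; events are hypothetical.
TRANSITIONS = {
    ("anonymous", "identified"): "named",
    ("named", "responded_to_ad"): "engaged",
    ("engaged", "accepted_coupon"): "opted_in",  # opt-in permits individual offers
    ("opted_in", "purchased"): "buyer",
    ("buyer", "liked_product"): "advocate",
}

def advance(stage, event):
    """Return the next stage if the event triggers a transition;
    otherwise stay in the current stage."""
    return TRANSITIONS.get((stage, event), stage)
```

Because each transition requires the preceding stage, the model enforces the ordering in the text: for example, a purchase event observed for an anonymous customer does not skip the customer straight to buyer.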

As long as the predictive modeling tool and the corresponding real-time component support the current version of the Predictive Model Markup Language (PMML), the predictive modelers can conduct discovery and promote the discovered model to the real-time component. Thus, as in the sports show analogy, the statistician can work in the back room and add new predictive models. The promotion process can be supported by a rigorous experiment design.

Practitioners have long used the champion-challenger model in their manual promotion process. In a typical champion-challenger setup, the set of models currently used in the production environment is labeled the “champion.” These are the current approved, accepted models for customer experience modeling. At the same time, the statisticians run an experiment design using a set of newly discovered models labeled the “challengers.” The experiment is typically run on a sample small enough not to make a dent in the production environment, but large enough to be statistically significant. Let us say we randomly choose 200 households out of 10 million and use the challenger model for these experimental households. If the performance of the challenger is significantly better than that of the champion, the challenger replaces the champion for the entire population, and the process is repeated. Predictive modeling tools have used PMML to automate the champion-challenger promotion process, whereby the tasks of comparison, analysis, and promotion of the challenger can be performed automatically. This gives us better governance of the predictive models and how they are introduced to or removed from the production environment.
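The promotion decision can be sketched as a significance test on the two samples. The two-proportion z-test used here is an assumed choice (the text requires only that the sample be statistically significant), and the conversion counts in the usage example are invented.

```python
import math

def promote_challenger(champ_conv, champ_n, chal_conv, chal_n, z_crit=1.645):
    """One-sided two-proportion z-test: return True if the challenger's
    conversion rate is significantly higher than the champion's at roughly
    the 5% level (z_crit = 1.645)."""
    p1 = champ_conv / champ_n           # champion conversion rate
    p2 = chal_conv / chal_n             # challenger conversion rate
    p = (champ_conv + chal_conv) / (champ_n + chal_n)  # pooled rate
    se = math.sqrt(p * (1 - p) * (1 / champ_n + 1 / chal_n))
    z = (p2 - p1) / se
    return z > z_crit
```

With 200 experimental households, only a fairly large lift clears the significance bar; a marginal improvement keeps the champion in place, which is the governance behavior the automated promotion process is meant to enforce.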

 