PERSONAL PRIVACY PREFERENCE MANAGEMENT

The first part of the solution is a data obfuscation process. Most of the time, marketers are interested in customer characteristics that can be provided without personally identifiable information (PII)—that is, uniquely identifying information about an individual that can be used to identify, locate, and contact that individual. All PII can be removed while still providing a marketer with useful information about a group of individuals. Then, under “opt-in,” the PII can be released selectively on a need-to-know basis.

As I worked on the data obfuscation process, I found that it is significantly more complex than expected. While the PII itself is destroyed, I cannot leave behind related information that, if joined with the obfuscated data, might lead back to the individual. For example, if I destroyed the address and phone number but left location information, someone could use the location information to establish the consumer’s residential address. There are also grades of PII: Zip+4 or a county designation may be an acceptable locator unless we are dealing with the home addresses of billionaires. Small samples are a problem as well. In the case of Netflix’s anonymized data set, someone was able to determine the identity of an individual based on that person’s media content viewing preferences and patterns.9 Non-PII information can uniquely identify an individual if only one individual meets the profile. IBM has been investing in data-masking products and processes that allow us to systematically identify PII in a data set, tag it, select masking algorithms, test the masking process, and establish the effectiveness of the masking solution.10, 11
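The unique-profile risk described above is commonly measured as k-anonymity: group the records by their non-PII attributes and find the smallest group. A minimal sketch, assuming a list of dictionaries as the anonymized data set (the field names and records here are hypothetical, for illustration only):

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Return the smallest group size (k) when records are grouped by the
    given quasi-identifier fields. k == 1 means at least one individual
    is uniquely identifiable from non-PII attributes alone."""
    groups = Counter(tuple(r[f] for f in quasi_identifiers) for r in records)
    return min(groups.values())

# Illustrative records: PII already removed, but zip and viewing genre remain.
records = [
    {"zip": "10001", "genre": "drama"},
    {"zip": "10001", "genre": "drama"},
    {"zip": "94027", "genre": "documentary"},  # lone profile: re-identifiable
]

print(k_anonymity(records, ["zip", "genre"]))  # 1 -> a unique profile exists
```

A masking process could reject any release where k falls below a chosen threshold, forcing further generalization (for example, zip to county) before the data leaves the gate.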

While one can reasonably expect data masking to obfuscate customer identity, the masked data should remain usable for analytics. The algorithm should remove or randomize PII, but not destroy the statistical patterns required by a data scientist. For example, if I take a set of real addresses and replace them with XXX, anyone looking for statistical patterns along geographic boundaries would be unable to use the obfuscated data. Patented algorithms examine data masking across a large database and are able to mask the data consistently across a group of data items. These algorithms systematically work on a group of fields to destroy identifiability, while preserving the data characteristics needed for the intended task.
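The address example can be made concrete: rather than replacing the whole value with XXX, replace the street address with a random token while generalizing the zip code to a coarser prefix, so geographic analysis still works. This is a simplified sketch, not IBM's patented algorithms; the function name and token format are my own assumptions:

```python
import random

def mask_address(street_address, zip_code):
    """Destroy the street address (replace it with a random, unlinkable
    token) while generalizing the zip to its 3-digit prefix, preserving
    coarse geographic patterns for analytics."""
    token = f"ADDR-{random.randrange(10**8):08d}"  # no trace of the input
    return token, zip_code[:3] + "XX"

token, region = mask_address("123 Main St", "10001")
print(region)  # '100XX' -- geography survives; the street address does not
```

The design choice is deliberate: a data scientist can still aggregate by region, but no field in the output can be joined back to the original address.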

A privacy infrastructure provides the capability to store information about “opt-in” and use it for granting access. Anyone with proper access can obtain the PII information, as granted by the user, while others see only obfuscated data. This solution provides us with enormous capability to use statistical data for a group of individuals, while selectively offering “one-to-one” marketing wherever the consumer is willing to accept the offers.
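The gating logic described here can be sketched as a small access layer: opt-in grants are stored per customer, and the same record is served either in full or masked depending on who asks. The class and field names are hypothetical, chosen only to illustrate the pattern:

```python
class PrivacyGate:
    """Sketch of an opt-in store: consumers grant PII access to specific
    partners; every other requester receives the masked view."""

    PII_FIELDS = {"name", "phone"}

    def __init__(self):
        self.opt_ins = {}  # customer_id -> set of partners granted access

    def grant(self, customer_id, partner):
        self.opt_ins.setdefault(customer_id, set()).add(partner)

    def view(self, partner, record):
        if partner in self.opt_ins.get(record["id"], set()):
            return record  # opt-in granted: full PII released
        return {k: v for k, v in record.items() if k not in self.PII_FIELDS}

gate = PrivacyGate()
gate.grant("c1", "acme")
rec = {"id": "c1", "name": "Ann", "phone": "555-0100", "segment": "A"}
print(gate.view("acme", rec))   # full record, as granted by the consumer
print(gate.view("other", rec))  # masked: only id and segment remain
```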

An audit can test whether the obfuscation process, algorithms, and privacy access are working properly in a multipartner environment in which third parties may also have access to this data. If properly managed, the data privacy framework provides gated access to marketers based on permission granted by the consumer, and can significantly boost consumer confidence and the ability to finance data monetization.
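One part of such an audit can be automated: scan the partner-facing data set for values that still look like PII after obfuscation. A minimal sketch using regular expressions (the patterns shown cover only email addresses and a simple phone format; a real audit would use a much broader catalog):

```python
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-.]\d{4}\b"),
}

def audit(records):
    """Scan a partner-facing data set for values that still look like PII.
    Returns a list of (record_index, field, pattern_name) findings."""
    findings = []
    for i, rec in enumerate(records):
        for field, value in rec.items():
            for name, pattern in PII_PATTERNS.items():
                if isinstance(value, str) and pattern.search(value):
                    findings.append((i, field, name))
    return findings

clean = [{"zip3": "100", "segment": "A"}]
leaky = [{"zip3": "100", "contact": "ann@example.com"}]
print(audit(clean))  # [] -- passes
print(audit(leaky))  # [(0, 'contact', 'email')] -- flagged for review
```

Run against every data extract handed to a third party, this kind of check turns the audit from a one-time review into a repeatable control.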

Data privacy concerns are changing over time and across generations, leading to significant differences in personal privacy preferences. A trustworthy marketing program will build its trust with a customer gradually and with a full understanding of that customer’s preferences.
