Can personalization as we know it survive 2018? The most praised tactic in digital marketing has contributed to a divisive, fractured conversation about the nature of privacy.
Consider that some of the most controversial tech stories have involved varying degrees of personalization: Russian propaganda bots tricked social media algorithms to influence what users saw. Uber was accused of abusing location services to sidestep law enforcement. Facebook generously offered to collect your nude photos—to prevent "revenge porn," of course. Such stories undermined trust in technologies that collect personal data.
Personalization without choices and transparency doesn't feel like personalization to the user. However, the marketing community can re-establish trust by subjecting those services and tactics to tough questions. I have created three tests that can help us keep personalization honest.
What We Need to Test
Personalization starts with good intentions. We examine or operationalize the habits, preferences, and actions of users to provide them with more useful information and services. It's a fragile exchange. Consumers trade their data for "value"—however they define that return. But what people share seems to matter less than how and why they share.
Researchers at Columbia Business School surveyed 8,000 people about privacy and found three areas of overwhelming consensus:
- First, 75% of respondents said they were willing to share an assortment of personal data with brands they trust.
- Second, 86% of respondents wanted greater control over the data companies collect.
- And third, 85% said they wanted to know more about the data companies collect.
Those principles are easy to grasp but harder to practice. The gray area is dangerously wide, which is why tests—framed as questions—can help us determine when data collection and personalization risk abusing trust. Let's dig into them.
1. Can the consumer control the degree of personalization to minimize negative effects or amplify benefits?
YouTube provides a great case for contemplating control. You probably know that YouTube's algorithms personalize your video recommendations based on viewing habits. From YouTube's perspective, this service increases the probability that you will see more relevant videos in your feed, watch more videos you enjoy, and, consequently, generate more ad revenue for YouTube.
To viewers, YouTube personalization might be useful. Or it might feel like YouTube is manipulating their time, influencing their thoughts, and feeding an addictive habit. Neither perspective is objectively true or constant over time.
Either way, viewers have choices. People who feel that YouTube goes too far can deactivate autoplay and select each video mindfully. Going further, they can block targeted advertising by adjusting their Google Ads settings. As a last resort, they can use YouTube without signing in to an account; that way, Google can't gather enough data to personalize the feed.
"Control" means consumers choose how a service operationalizes their data. YouTube doesn't make the options I suggested obvious, but they are present.
2. If consumers can't vary the level of data collection or personalization, must they resort to a third-party service to stop it?
Sometimes personalization is the service. If the Google Maps app can't base directions on your location and means of transportation, what's the point? Part of shopping on Amazon is accepting that it will suggest purchases based on your shopping history. It would be nice if the company offered some controls; but, in their absence, you could always choose not to use the services.
Ad retargeting is a different animal. Once you've visited a site that uses retargeting technology, it's too late to choose. If Acme Shoes stalks me across the Internet with banner ads of the boots I didn't want to buy, I'm stuck. I can't ask Acme Shoes or its marketing agency to knock it off.
Instead, I'd have to block cookies, switch browsers, run an adblocker, or use an anonymous browser. I can't vary the level of personalization, and I can't shut it down without circumventing the offending brand. Forced "personalization" is not personalization.
3. Does the company explain—in a transparent, understandable way—how it uses data?
Companies in the data and personalization business require us to sign impenetrable privacy agreements. They're hard to read, let alone understand. As researchers at Carnegie Mellon calculated, it would take a person 76 workdays, on average, to read the privacy policies on every website she visits in a given year.
What if, right before the unreadable agreement, companies complete these three bullets in plain words?
- We collect the following information: _____
- We use it to: _____
- We sell the data to ____ or share the data with ____ (or neither)
Even if transparency would scare away some potential users, hiding the truth doesn't fix the problem. Word gets out eventually, and when it does, people tend to keep using the service anyway. Why not be up front?
Again, the Columbia Business School research found that 75% of people would share data with brands they trust. Obscuring the terms of data collection and personalization is untrustworthy behavior.
Personalization With Empathy
"Personalization" goes unnamed in public debates because it's marketing lingo. Within marketing, though, many of us practice personalization and know how powerful it is. In its highest form, personalization is a profitable exercise in empathy.
Knowing that, let's take responsibility for control, choice, and transparency. Let's personalize personalization.