What does Einstein's theory of relativity have to do with customer experience? I don't have a fancy formula, but let's try this: Assessments of customer experiences = Memory of the experience relative to a frame of reference.
That formula is based on two key observations from the world of behavioral economics.
First, the memory of an experience trumps the actual experience. That may seem like splitting hairs, but what we remember, however accurate or inaccurate, is more important than what "actually" happened. The memory of the experience becomes reality.
Second, everything is relative. We evaluate the world through a lens of comparisons to meaningful reference points. An experience is better or worse than expected, more or less than before, different from or the same as promised, best or worst in class. Whatever the experience may be, our recall, reflection, and evaluation of an experience always is relative to some frame of reference. In the world of evaluating customer experiences, absolutes don't exist.
Those two points have significant consequences that companies should consider in the design of customer experiences and the measurement and analysis of experiences—and, by extension, customer loyalty.
Let's leave the memory issue for another day and in this article focus on relativity.
Measurement Relative to What?
People are lousy at visualizing breakthrough products, imagining new concepts, and thinking de novo. It's not because we aren't smart; it's because we sense everything and process information in a relative context, not in a vacuum. We need context to form opinions, make choices, and even to try to think of something new. That context, for better or for worse, defines the parameters of our thinking and sets our expectations.
That a handful of visionaries can break through the boundaries and imagine and build things never before envisioned only goes to prove the point, not contradict it.
Here are three things to remember about context and customer experience.
1. When people say they don't have a frame of reference, they are saying that they lack a basis for comparison
Does "fast" have any meaning to someone who has never experienced movement or measured speed? When we consider something disappointing or satisfying or great, we are—however aware or subconscious the contrast may be—making comparisons of some sort. That comparison can be a simple binary better than/worse than, or it can be a more sophisticated form of measurement that requires some type of tool or scale for standardized, comparative assessments.
2. Comparisons have to be valid and meaningful
The phrase "comparing apples and oranges" is nothing more than shorthand for dismissing a comparison as lacking in validity and relevance. Keep that in mind when someone suggests that you measure your supermarket's performance relative to the Ritz Carlton, compare your insurance company with Disney, or rate your manufacturing firm against Starbucks.
Although you can learn much from studying those and other firms, such comparisons lack meaning to your customers and relevance to your business, rendering the measurement all but meaningless.
3. Comparing performance against some abstract ideal also is useless
At least your customers are likely to have enjoyed a "coffee-based experience" at Starbucks or "the magic" of Disney; so while the comparison may be silly, at least they know they are holding an apple in one hand and an orange in the other. Asking customers how you stack up against some abstract ideal is an exercise in philosophical futility (and should only be on your surveys immediately after the question about the meaning of life).
The most meaningful comparative frame of reference for a wireless provider is competitors in the wireless space; for a car company, it's other auto manufacturers.
Direct measurements of customer experiences and loyalty against a comparative competitive set, however, can be difficult to obtain. In many instances, your customers may not use your competitors, and they will lack the knowledge of and experience with those firms needed to make meaningful comparisons. Moreover, if surveys of your customers are sponsor-identified, a halo effect may color the comparisons.
Though consumers can comparatively assess well-known competitive firms on such issues as brand image, reputation, and messaging, such comparative ratings on experience and loyalty measures often are more problematic.
Comparisons on What Ruler?
Comparisons require a common yardstick. Meaningful comparisons hinge on meaningful and reliable scales of comparison. For distance, temperature, speed, and other measurements we use standardized scales with universally recognized units of measurement that are mathematically equal, so an inch is equal in length to each and every inch anywhere, any time. Survey measurements of perceptions, preferences, and attitudes, however, lack the luxury of universally recognized standard units of count.
Faced with traditional Likert-type scales (http://en.wikipedia.org/wiki/Likert_scale), in which the units of measure have no inherent meaning, I have a distinct preference (or bias, if you prefer) for evaluating customer experiences and loyalty using a fully labeled expectations scale.
Expectations ground survey respondents in a meaningful comparative and realistic context. Customers can evaluate their banking experiences against what they expect from a bank or assess their online purchase experiences relative to their expectations when shopping online.
Asking consumers to rate a firm based on their expectations anchors people to a frame of reference they understand. Although doing so does not necessarily lead to direct comparisons to named competitors (unless they are specifically rated as well), ratings on expectations provide an inherent competitive context. Assessing a wireless carrier's performance against customer expectations of "what a world-class wireless provider can and should do," for example, is an effective way to help set the competitive context for performance assessment.
Most survey scales use end-point anchors that have ambiguous definitions and are almost never fully labeled. Customers are thus forced to use abstract scales that lack inherent meaning, which significantly reduces the validity of results. The longer the scale and the fewer the labels or anchors, the more respondents are forced to interpolate (i.e., guess) between points.
Fully labeled points on an expectations (or any) scale, by contrast—in which responses can be represented as descriptors people read and hear, so customers can rate a company against standardized criteria—provide for more meaningful and valid units of measurement and comparison.
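To make the idea concrete, here is a minimal sketch of what a fully labeled five-point expectations scale might look like in code. The wording of the five descriptors is hypothetical and purely illustrative; the point is that every scale position carries a shared verbal definition, so the numeric score is recoverable from the words the respondent actually chose.

```python
# Hypothetical fully labeled five-point expectations scale.
# The exact wording is an illustrative assumption, not a standard instrument.
EXPECTATIONS_SCALE = {
    1: "Fell far short of my expectations",
    2: "Fell short of my expectations",
    3: "Met my expectations",
    4: "Exceeded my expectations",
    5: "Far exceeded my expectations",
}


def score(label: str) -> int:
    """Map a respondent's chosen descriptor back to its numeric point."""
    inverse = {text: point for point, text in EXPECTATIONS_SCALE.items()}
    return inverse[label]


print(score("Exceeded my expectations"))  # 4
```

Because respondents pick a described state rather than an abstract number, two people choosing "Met my expectations" are, at least nominally, choosing the same thing.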
As a practical matter—because of both language limitations and constraints on space (online and, gulp, mobile) and memory (phone)—fully labeled scales usually have only five points. Many people prefer longer scales—7, 9, 10, or 11 are the most popular options—to get more variance, since many models and statistical analyses are fueled by variance in the data.
Yes, longer scales drive variance, but for the wrong reasons: inconsistent and ambiguous use of scale points, not more granular units of assessment. Unlabeled scales create greater inconsistency (or scale-usage heterogeneity, if you prefer) between survey respondents and within the same respondent across multiple questions. The increased variance from longer, unlabeled scales is driven more by noise and respondent inconsistency than by the collection of more stable, reliable data.
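The noise-versus-signal point can be illustrated with a toy simulation. The model and all its parameters (a latent opinion, a per-respondent offset standing in for scale-usage heterogeneity, and per-rating noise) are assumptions chosen for illustration, not empirical estimates. Each simulated respondent rates the same experience twice: the fully labeled scale maps the latent opinion the same way every time, while the unlabeled scale adds personal bias and fresh noise on each rating.

```python
import random

random.seed(0)

N = 500
true_opinion = [random.random() for _ in range(N)]  # latent opinion, 0..1


def labeled_5(t):
    # Fully labeled 5-point scale: shared verbal anchors give every
    # respondent (and every repeat rating) the same deterministic mapping.
    return min(4, int(t * 5)) + 1


def unlabeled_11(t, bias):
    # Unlabeled 11-point scale: each respondent carries a personal offset
    # (scale-usage heterogeneity) plus fresh noise on every rating.
    raw = t * 10 + bias + random.gauss(0, 1.2)
    return max(0, min(10, round(raw)))


biases = [random.gauss(0, 1.5) for _ in range(N)]

# Each respondent rates the identical experience twice.
lab1 = [labeled_5(t) for t in true_opinion]
lab2 = [labeled_5(t) for t in true_opinion]
unl1 = [unlabeled_11(t, b) for t, b in zip(true_opinion, biases)]
unl2 = [unlabeled_11(t, b) for t, b in zip(true_opinion, biases)]

lab_disagree = sum(a != b for a, b in zip(lab1, lab2)) / N
unl_disagree = sum(a != b for a, b in zip(unl1, unl2)) / N

print(f"labeled 5-pt test-retest disagreement:    {lab_disagree:.0%}")
print(f"unlabeled 11-pt test-retest disagreement: {unl_disagree:.0%}")
```

Under these assumed parameters, the labeled scale reproduces itself exactly on retest, while the unlabeled scale frequently disagrees with itself: its extra spread is inconsistency, not finer measurement.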
The only absolute vis-à-vis human perceptions, it seems, is that there are no absolutes. I'm not calling for a survey-burning to purge the field of legacy work consisting of unlabeled scales or lacking in meaningful comparative context. Whether because of legacy data, trend lines, or organizational and political considerations, such breaks often aren't practical, and not all types of questions are easily asked in terms of expectations.
But people are, by nature, hard-wired to see the world in a comparative context. Practitioners in the customer experience and loyalty arena, as well as marketers and researchers in general, need to be cognizant of that fact and must explicitly take this human predisposition into account.
We need to design experiences and research programs (and consider migrating existing projects) to better reflect a world of relativity; this is the comparative reality that helps shape the behaviors that all of us are trying to understand, predict, and influence. There is always another shade of grey for comparison.