Every Web marketer knows that success in the online channel is a game of inches: it takes a rigorous process of campaign testing, optimizing, aggregating and retesting just to get the tiniest incremental improvements in user response rates and conversions.

But while the industry focuses on squeezing every dollar of value out of acquisition strategies, billions of dollars are being left on the table by campaigns that drive users to sites they can't use.

Just how severe is the problem? Consider this: last year, companies that advertised on the Web spent $6 billion to send users to sites that failed 44% of the time. In a study by the Nielsen Norman Group evaluating user behavior on 20 e-commerce sites, users were unable to complete 218 of 496 very basic tasks—such as locating a store or buying a gift—due to poor usability. Nielsen calculated that the average site could increase sales by as much as 79% through improved usability.

In a game of inches, that's a country mile.

The good news is that improving the usability on most sites can be relatively simple and inexpensive. Most poor user experiences are the result of a site's failure to conform to basic usability best practices, such as those governing page load times and consistency of navigation. Usable, well-accepted data on what users need to succeed are widely available and should be an integral part of every Web marketing strategy.

The best method for pinpointing the precise causes of breakdown in the customer experience is actual user testing. Usability testing involves active listening and engagement of qualified users as they attempt to complete typical tasks on the site. It improves on survey and clickstream methods by providing raw, qualitative feedback from real users, allowing the moderator to dig deeper into usability issues the moment they occur.

The first step in undertaking usability testing is to articulate the site's purpose and role in the overall marketing plan. Then, identify user groups (new users/existing users, internal users/external users, etc.) and their goals.

Following this, define the essential user tasks necessary to accomplish the goals. In many cases, other data collection methods, such as clickstream and surveys, help identify which user paths are most in need of improvement (e.g., users abandon shopping carts at certain points in the purchase process, or they consistently leave the site from a certain page).
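
For example, a simple pass over clickstream data can surface the step in a purchase funnel where abandonment is steepest. The sketch below (Python, with hypothetical page names and counts) is one way to frame that analysis; the step with the largest drop-off becomes a natural candidate for usability testing.

    # Hypothetical clickstream counts for a purchase funnel (illustration only).
    funnel = [
        ("product page", 10000),
        ("shopping cart", 3200),
        ("shipping details", 1900),
        ("payment", 1100),
        ("confirmation", 950),
    ]

    # Drop-off rate between each pair of consecutive steps.
    for (step, visits), (next_step, next_visits) in zip(funnel, funnel[1:]):
        drop = 1 - next_visits / visits
        print(f"{step} -> {next_step}: {drop:.0%} of users abandon")

In this made-up example, the move from product page to shopping cart loses the most users, so cart-related tasks would be prioritized in the test plan.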

A heuristic evaluation, conducted by a usability expert, such as an information architect, will also reveal aspects of a Web site that need further testing.

Testing is usually conducted on the existing site, but if there are well-known areas of concern, the information architect may recommend prototyping alternative scenarios. This way, variations in the navigation or instructions can be tested alongside the existing user experience. This “mirroring” technique helps to immediately identify alternatives that work better, limiting the need for further testing down the road.

Once the user groups have been identified and a testing plan established, the information architect works with an approved recruitment screener to identify and schedule qualified test participants.

To obtain worthwhile qualitative results, test five to eight participants per user group. The basic principle for recruitment is simply that the selected participants be representative of site users in terms of demographic characteristics, Web experience, and site usage.
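
The five-to-eight guideline is consistent with Nielsen and Landauer's problem-discovery model, in which n participants are expected to reveal roughly 1 - (1 - L)^n of a site's usability problems, where L is the share of problems a single participant uncovers (about 31% in their published data). A minimal sketch, assuming that 31% figure applies to the site under test:

    # Nielsen/Landauer problem-discovery model. The 0.31 figure is their
    # published average; any given site's value may differ.
    FOUND_PER_PARTICIPANT = 0.31

    for n in (1, 3, 5, 8, 15):
        share_found = 1 - (1 - FOUND_PER_PARTICIPANT) ** n
        print(f"{n:2d} participants -> ~{share_found:.0%} of problems observed")

Under those assumptions, about five participants surface the large majority of problems in a group, and beyond eight, each additional session adds relatively little new information.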

While any quiet, comfortable room will suffice for usability testing, having the right facilities makes a difference. A separate observation room allows monitoring of the usability lab through a two-way mirror so that observers can watch the moderator and tester at work without disturbing the session. The sessions are videotaped so that tester reactions and observations can be accurately captured for future analysis.

The actual testing takes the form of a series of 50- to 70-minute sessions conducted by a qualified usability moderator. The sessions are one-on-one, with a single tester working through the assigned usability tasks.

A “listening lab” methodology is crucial to success: the users are asked to narrate their thinking—every aspect of their visual and mental experiences—as they complete each task. The moderator probes usability issues further through ongoing questioning, but s/he must take care not to lead the user or offer rationales for the current design.

With the listening lab methodology, the experience of each user is rigorously and independently examined and cataloged—unlike focus groups, in which users tend to subconsciously influence each other.

Usability testing often reveals surprising gaps between the site developer's good intentions and the actual user experience. The site may be providing helpful instructions that the user is simply overlooking due to placement, or the page may be too helpful, containing information that's extraneous and therefore a nuisance to the user. Such issues, in which existing elements create a negative experience, might never be uncovered except through usability testing.

Better still, the report produced by usability testing, which combines raw feedback with recommended site enhancements, typically points to tactical modifications that are easy to implement: removing content, rewording header text, rearranging page elements, and so on. The information architect should oversee all upgrades to the user experience to ensure that they are consistent with user feedback.

Finally, ongoing clickstream analysis will prove the value of the upgrades and provide a benchmark for future enhancements.
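
As a rough illustration of that benchmark, conversion rates drawn from the clickstream before and after the usability fixes can be compared directly. The figures below are invented for the sketch; real numbers would come from the site's own analytics.

    # Hypothetical clickstream totals before and after the usability upgrades.
    before_visits, before_orders = 48000, 960    # 2.0% conversion
    after_visits, after_orders = 51000, 1330     # ~2.6% conversion

    before_rate = before_orders / before_visits
    after_rate = after_orders / after_visits
    lift = (after_rate - before_rate) / before_rate

    print(f"Conversion before: {before_rate:.2%}")
    print(f"Conversion after:  {after_rate:.2%}")
    print(f"Relative lift:     {lift:.0%}")

The same before/after snapshot then serves as the baseline against which the next round of enhancements is measured.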

When considering whether to take on formal usability testing, it's worth remembering how easy it is for users to switch to sites that offer better user experiences, and how difficult it is to win users back once they've experienced usability failures.

For poorly performing sites, the cost of gaining market share—through promotions, discounts and advertising blitzes—is enormous, whereas usability testing runs a fraction of the cost of a single campaign.

The methods of driving qualified traffic to sites are becoming increasingly sophisticated as we gain a better understanding of how users respond to online advertising. The time has come to close the loop on Web marketing strategy and turn our attention to how users respond to sites.

ABOUT THE AUTHOR

Eric Anderson is a partner at digital agency White Horse and the author of Social Media Marketing: Game Theory and the Emergence of Collaboration.