Whether you have been testing for years or you are just getting started, building a successful website optimization program depends on careful planning, implementation, and measurement.

This is the second in a three-part series of articles that look at the steps involved in creating a successful optimization program. Part 1 discussed how to plan your testing and implementation program. In this article, we'll outline four key steps you'll need to take to implement your optimization program:

  1. Clearly define success and failure.
  2. Ensure good test design.
  3. Clarify your testing timeline.
  4. Test different audience segments.

Clearly define success and failure

A common disappointment among companies deploying testing and optimization technology stems from tests that fail to produce the gains expected. Seemingly without rhyme or reason, even the most dramatic design changes yield "no significant difference" in simple measures such as click-through rate, and even smaller effects in more involved downstream metrics such as conversion rate.

Though that is the reality of testing, much of the disappointment stems from a lack of attention to defining "success" and "failure" before the design changes are implemented.

Success in testing can be measured many different ways:

  • For some, "success" is a dramatic increase in a revenue-based metric, knowing that most senior stakeholders will respond to incremental revenue.
  • For others, "success" is a small increase in key visitor engagement metrics, knowing that a series of small gains eventually adds up.
  • For still others, "success" is a reduction in the number of problems present throughout the site, knowing that reducing barriers improves usability.
  • For some, especially those with an increasingly dated site, "success" is simply being able to deploy a new look without a negative impact on key performance indicators.

A lack of success in testing is often viewed as a failure on someone's part, but that is rarely the case. In reality, testing powers a continual learning process about your visitors and customers. If a particular image fails to increase conversion rates, you have learned that your audience does not respond to that particular image. If subsequent testing reveals that variations of the same image yield similar results, then you learn something about your audience's reaction to the image's content. In that context, there is no such thing as "failure" in testing—only a failure to achieve the specific defined objective.

Keep in mind that not every test can yield incremental millions in revenue for your business. Some tests will fail to produce the change desired; others will yield results, but not across all key performance indicators; and still others will simply fail to produce statistically significant differences.
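
To make "statistically significant" concrete, here is a minimal Python sketch of the pooled two-proportion z-test that many A/B testing tools apply to metrics such as click-through and conversion rate. The visitor counts and conversion numbers below are hypothetical, chosen only to illustrate how a lift that looks meaningful can still fall short of significance.

    import math

    def two_proportion_z(conv_a, n_a, conv_b, n_b):
        """Pooled two-proportion z-test comparing conversion rates of two variations."""
        p_a = conv_a / n_a
        p_b = conv_b / n_b
        p_pool = (conv_a + conv_b) / (n_a + n_b)  # combined conversion rate
        se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error of the difference
        return (p_b - p_a) / se

    # Hypothetical traffic: 10,000 visitors per variation, 2.0% vs. 2.2% conversion
    # (a 10% relative lift that many teams would consider meaningful).
    z = two_proportion_z(conv_a=200, n_a=10_000, conv_b=220, n_b=10_000)
    print(f"z = {z:.2f}")  # roughly 0.99; below 1.96, so not significant at the 95% level

In this hypothetical scenario, a 10% relative lift in conversion rate with 10,000 visitors per variation produces a z-score of roughly 0.99, well below the 1.96 threshold for 95% confidence; that is exactly the kind of "no significant difference" result described above.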

ABOUT THE AUTHOR

Kim Ann King is the CMO of Web and mobile optimization firm SiteSpect. A B2B software marketer for nearly three decades, she is the author of The Complete Guide to B2B Marketing: New Tactics, Tools, and Techniques to Compete in the Digital Economy (May 2015).

Twitter: @kimannking

LinkedIn: Kim Ann King