Whether you have been testing for years or you are just getting started, building a successful website optimization program depends on careful planning, implementation, and measurement.
This is the second in a three-part series of articles that look at the steps involved in creating a successful optimization program. Part 1 discussed how to plan your testing and implementation program. In this article, we'll outline four key steps you'll need to take to implement your optimization program:
- Clearly define success and failure.
- Ensure good test design.
- Clarify your testing timeline.
- Test different audience segments.
Clearly define success and failure
A common disappointment among companies deploying testing and optimization technology stems from tests that fail to produce the expected gains. Seemingly without rhyme or reason, even the most dramatic design changes yield "no significant difference" in simple measures such as click-through rate, and even smaller effects in more involved downstream metrics such as conversion rate.
Though that is the reality of testing, much of the disappointment stems from a lack of attention to defining "success" and "failure" before the designs or changes are implemented.
Success in testing can be measured many different ways:
- For some, "success" is a dramatic increase in a revenue-based metric, knowing that most senior stakeholders will respond to incremental revenue.
- For others, "success" is a small increase in key visitor engagement metrics, knowing that a series of small gains eventually adds up.
- For still others, "success" is a reduction in the number of problems present throughout the site, knowing that reducing barriers improves usability.
- For some, especially those with an increasingly dated site, "success" is simply being able to deploy a new look without a negative impact on key performance indicators.
A lack of success in testing is often viewed as a failure on someone's part, but that is rarely the case. In reality, testing powers a continual learning process about your visitors and customers. If a particular image fails to increase conversion rates, you have learned that your audience does not respond to that particular image. If subsequent testing reveals that variations of the same image yield similar results, then you learn something about your audience's reaction to the image's content. In that context, there is no such thing as "failure" in testing—only a failure to achieve the specific defined objective.
Keep in mind that not every test can yield incremental millions in revenue for your business. Some tests will fail to produce the desired change; others will yield results, but not across the key performance indicators; and still others will simply fail to produce statistically significant differences.
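To make that last point concrete, here is a minimal sketch of how you might check whether an observed lift in conversion rate is statistically significant, using a standard two-proportion z-test. The function name and all sample counts are hypothetical, and this is only one of several valid approaches to significance testing:

```python
# Sketch: is the difference between two variants' conversion rates
# statistically significant? Uses a two-tailed, two-proportion z-test.
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z, p_value) for H0: the two conversion rates are equal."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))        # two-tailed p-value
    return z, p_value

# Hypothetical data: variant B converts 150 of 2,400 visitors vs. 120 of 2,400.
z, p = two_proportion_z_test(conv_a=120, n_a=2400, conv_b=150, n_b=2400)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these hypothetical numbers, a 25% relative lift (5.0% to 6.25%) still falls short of significance at the usual 5% level, which is exactly the kind of result that feels like "failure" but is really a sample-size and expectation-setting issue.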