Agility By Example (Cont.)

The c-c algorithm works by calculating the chance that a given candidate will be displayed as the frequency of its past success divided by the sum of the success frequencies of all candidates: a frequency of frequencies that normalizes performance regardless of how many exposures a given candidate has received. It determines the frequency of display for any given candidate message a as follows:

frequency of display_a = (ƒ_a × w_a)^5 / ∑ i=1..n (ƒ_i × w_i)^5

where:

a = a given candidate within the set of all candidates
ƒ_x = the ratio of success scores to exposures for candidate x
w_x = a weighting score for candidate x
n = the count of all candidates

Once the frequency of display is calculated for all candidates, these frequencies are aggregated to create a cumulative frequency set that can be used to differentially select future candidates based on their past performance, using a random decimal value between 0 and 1.
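As an illustrative sketch (the function names and seed numbers here are my own, not from the original implementation), the formula and the cumulative-frequency selection can be expressed in a few lines of Python:

```python
import random

def display_frequencies(successes, exposures, weights, power=5):
    # f_i is each candidate's ratio of successes to exposures; the
    # weighted ratio is raised to the 5th power and normalized so the
    # resulting display frequencies sum to 1.
    scores = [((s / e) * w) ** power
              for s, e, w in zip(successes, exposures, weights)]
    total = sum(scores)
    return [score / total for score in scores]

def select_candidate(frequencies, r=None):
    # Map a random decimal between 0 and 1 onto the cumulative
    # frequency set; better-performing candidates own larger intervals.
    r = random.random() if r is None else r
    cumulative = 0.0
    for index, frequency in enumerate(frequencies):
        cumulative += frequency
        if r < cumulative:
            return index
    return len(frequencies) - 1  # guard against floating-point round-off

# Three hypothetical candidates with success ratios 0.10, 0.15, and 0.20:
freqs = display_frequencies([10, 15, 20], [100, 100, 100], [1, 1, 1])
```

Note how even these modest performance differences translate into strongly skewed display frequencies: the fifth power hands the 0.20 candidate the large majority of exposures.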

Two components of the above formula deserve special note: the weighting parameter w and the raising of the weighted frequency scores to the power of 5.

A Weighting Parameter

The use of the weighting parameter allows the c-c algorithm to value candidates differentially, so that if success with execution a is valued by the marketer twice as much as success with execution b, then execution b needs to perform twice as well as execution a to have the same display frequency. This enables the formula to optimize for total value (real or perceived) and not only for plain response success.

The weight for each candidate defaults to 1 but can be set to any positive rational value to alter the behavior of the algorithm.
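A quick numeric sketch (the values are hypothetical) shows the doubling relationship described above:

```python
# Candidate a converts at 0.10 but its successes are valued twice as
# much by the marketer (w = 2); candidate b converts twice as well, at
# 0.20, with the default weight of 1.
f_a, w_a = 0.10, 2.0
f_b, w_b = 0.20, 1.0

score_a = (f_a * w_a) ** 5
score_b = (f_b * w_b) ** 5

# The weighted ratios are identical (0.20 each), so the two candidates
# receive the same display frequency of 0.5.
freq_a = score_a / (score_a + score_b)
freq_b = score_b / (score_a + score_b)
```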

The Power of 5

The use of the power of 5 in the c-c algorithm developed as a means of increasing the aggressiveness of this algorithm while seeking an optimal gain state. This particular power value was set after an exhaustive study of the effects of various settings using a series of Monte Carlo simulations created as scripts within the SPSS (Statistical Package for the Social Sciences) statistical environment.

Monte Carlo simulation was first developed as part of the Manhattan Project's effort to build the first atom bomb and remains one of the most commonly used simulation methodologies. It allows the simulator to explore probability within complex models while accounting for the effects of stochastic behavior. The simulation process works by creating extensive trial runs using random numbers and a set of predefined rules, and then inferring statistical insights from an analysis of the collective results.

For each simulation run of the c-c algorithm, a set of 2-10 hypothetical messaging candidates was created with predefined frequencies of success for each candidate. These success frequencies were used to seed a simulation run of 10,000 exposures, which the algorithm continually optimized.

All candidates began with an equal chance of display. As the simulation progressed, the candidates' simulated successes and failures altered their display frequencies through the c-c formula, which in turn altered the overall gains for that simulation run.

The resulting performance (number of successes per exposure) for the runs was then compared against the average performance created by a random selection of candidates using the same success frequencies over 10,000 exposures. This process was repeated a few thousand times using a variety of candidate sets and power settings to create a robust data set for analysis.
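The simulation loop can be sketched as follows. The candidate set, success rates, and seeding below are illustrative stand-ins, not the original SPSS scripts:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def simulate_run(true_rates, exposures=10_000, power=5):
    # Every candidate starts with an equal chance of display (seeded
    # with one success over one exposure); each simulated exposure then
    # updates the display frequencies through the c-c formula.
    n = len(true_rates)
    shown = [1] * n
    hits = [1] * n
    total_hits = 0
    for _ in range(exposures):
        scores = [(h / s) ** power for h, s in zip(hits, shown)]
        total = sum(scores)
        # Map a random decimal onto the cumulative frequency set.
        r, cumulative, pick = random.random(), 0.0, n - 1
        for i, score in enumerate(scores):
            cumulative += score / total
            if r < cumulative:
                pick = i
                break
        shown[pick] += 1
        if random.random() < true_rates[pick]:  # simulated success
            hits[pick] += 1
            total_hits += 1
    return total_hits / exposures

# Five candidates with predefined success frequencies, compared against
# the expected performance of picking candidates uniformly at random:
rates = [0.05, 0.10, 0.15, 0.20, 0.25]
optimized = simulate_run(rates)
baseline = sum(rates) / len(rates)  # 0.15
```

A run of this sketch should see the optimized success rate pull well above the random-selection baseline as the algorithm concentrates exposures on the strongest candidates.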

The end result of the simulation was that the formula's seeking efficiency stabilized starting at a power of 5. A higher power can be used in the formula, but the simulation work showed no significant increase in efficiency, so it was deemed unnecessary. At a power of 5, even minor performance differences appear to generate enough inter-candidate separation to ensure optimal aggressiveness.
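To see why the fifth power generates that separation, compare the display split for two candidates whose success ratios differ only slightly (the numbers are hypothetical):

```python
def split(ratios, power):
    # Normalized display frequencies for a given power setting.
    scores = [r ** power for r in ratios]
    total = sum(scores)
    return [score / total for score in scores]

ratios = [0.10, 0.12]        # a minor performance difference
flat = split(ratios, 1)      # roughly a 45/55 split
sharp = split(ratios, 5)     # roughly a 29/71 split -- far more aggressive
```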

The simulation work also provided some other important insights concerning the use of the algorithm. The most important of these was that the gain model worked best with at least three candidates, and that 5-7 candidates produced the most gainful results.

The Nitty-Gritty

Using the c-c algorithm, one can easily locate messaging champions by identifying those candidates with the highest frequencies of display. Non-champions can be removed at the marketer's discretion, while new candidates can be placed into the model at any time using seed values to initialize their starting display frequencies.
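In terms of the data model, champion identification and candidate replacement amount to simple operations over per-candidate aggregates (the names and counts below are hypothetical):

```python
# Aggregate counts per candidate: name -> [successes, exposures].
model = {"champion-offer": [220, 1000], "challenger-offer": [90, 1000]}

# The champion is simply the candidate with the best success ratio
# (and hence the highest display frequency).
champion = max(model, key=lambda name: model[name][0] / model[name][1])

# Retire a non-champion at the marketer's discretion...
del model["challenger-offer"]

# ...and seed a new candidate with prior counts so it enters the
# rotation with a plausible starting display frequency.
model["new-offer"] = [20, 100]
```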

The technical infrastructure to support this algorithm on a Web site is relatively simple to build and should consist of code structures to do the following:

• Select candidates for display based on their calculated display frequencies.

• Track successful results of a messaging candidate's exposure to a consumer.

• Manage the data model behind the algorithm, which can include code for maintaining persistence and altering the behavior of the model.
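A minimal in-memory sketch of those three structures might look like this (the class and method names are my own, and a production system would add persistence):

```python
import random

class CalloutOptimizer:
    def __init__(self, power=5):
        self.power = power
        self.stats = {}  # candidate name -> {"successes", "exposures", "weight"}

    def add_candidate(self, name, seed_successes=1, seed_exposures=1, weight=1.0):
        # Seed counts give a new candidate its starting display frequency.
        self.stats[name] = {"successes": seed_successes,
                            "exposures": seed_exposures,
                            "weight": weight}

    def select(self):
        # Selection: map a random decimal onto the cumulative
        # display-frequency set computed by the c-c formula.
        scores = {name: ((s["successes"] / s["exposures"]) * s["weight"]) ** self.power
                  for name, s in self.stats.items()}
        total = sum(scores.values())
        r, cumulative, chosen = random.random(), 0.0, None
        for name, score in scores.items():
            cumulative += score / total
            if r < cumulative:
                chosen = name
                break
        if chosen is None:  # floating-point round-off guard
            chosen = name
        # Pessimistic tracking: log the exposure now and assume failure
        # until a success event says otherwise.
        self.stats[chosen]["exposures"] += 1
        return chosen

    def record_success(self, name):
        # Called when the predefined success event (click-through,
        # purchase, etc.) fires for an exposed candidate.
        self.stats[name]["successes"] += 1
```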

The technology used to put this algorithm into use is for the most part up to the marketer. I prefer to work with Java J2EE technologies, but a Microsoft .NET, ColdFusion, or PHP application server could equally bring this work to life on a brand Web site.

A user flow for a c-c-algorithm-enabled Web site goes as follows.

The user will first request a dynamically created page, which will have a messaging call-to-action block or callout in its HTML layout. The application server, when creating this page view, will request a message candidate to be placed within the callout slot from a selection component within the application. This selection component works by generating a random decimal value between 0 and 1, which it will map against the cumulative display frequency model for all potential messaging candidates for that callout.

The frequency model is such that historically successful candidates will have a higher probability of selection than lower-performing candidates, per the c-c algorithm. At the same time that the selection component passes its chosen candidate back to the page composer, it logs an exposure for this candidate in the data model, which will be used for future performance optimization.

Once a given candidate is exposed to (i.e., viewed by) a user, the application will start a tracking cycle, waiting for the user to trigger a predefined success event. The nature of this tracking will vary depending on how the success event is defined within the application. Here are two examples:

• If a click-through event is the success event, then success can be tracked by creating a redirect page that will record a success for a given candidate when a click-through occurs from the callout. After logging the event, the page will redirect the user to the desired page destination. This is how many ad-serving services work.

• If the purchase of a product is the success, then the exposure of a candidate to a consumer will set into motion a tracking hook within the session, which will record a success if and when the user purchases a designated product.
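The click-through variant can be sketched as a small redirect handler; the URL scheme and names here are hypothetical:

```python
from urllib.parse import parse_qs, urlparse

success_log = {}  # candidate name -> click-through count

def handle_redirect(request_url):
    # The callout links to the redirect page rather than the final
    # destination, e.g. /go?candidate=spring-promo&dest=/products/widget.
    params = parse_qs(urlparse(request_url).query)
    candidate = params["candidate"][0]
    destination = params["dest"][0]
    # Log the success for this candidate...
    success_log[candidate] = success_log.get(candidate, 0) + 1
    # ...then send the user onward with an HTTP redirect.
    return 302, {"Location": destination}

status, headers = handle_redirect("/go?candidate=spring-promo&dest=/products/widget")
```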

The model is naturally pessimistic and will assume a failure of the messaging candidate until told otherwise. This allows the model to work within an asynchronous environment where exposures and successes are temporally separated and failures are often measured as an absence of successful behavior. Such pessimism works well online, since it does not require cumbersome tracking tools that wait until sessions have expired to log an unsuccessful event.

As the tracking component gathers new behavioral tracking data (exposures and successes), it will update the data model behind the c-c algorithm so that the selection component can optimize future messaging performance. This updating can occur with each tracked event or be cached for periodic updates. With each update, the model will use the array of values for exposures and successes to create the new frequency-of-display scores for the candidates by applying the c-c algorithm each time.

For efficiency's sake, managing these data as a series of aggregate scores for each candidate is recommended, rather than logging each transaction and re-aggregating the data with each update event. The only compelling reason to maintain transactional data in its raw form is to enable momentum behaviors within the model, where the past n transactions are stored so that greater weight can be placed on more recent (or more distant) transactions.
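If momentum behavior is wanted, a bounded window of recent outcomes is one way to sketch it (the window size and names are my choices, not prescribed by the algorithm):

```python
from collections import deque

class MomentumTracker:
    # Keeps only the most recent n outcomes for one candidate, so the
    # success ratio fed into the c-c formula reflects recent behavior
    # more than distant history.
    def __init__(self, window=500):
        self.recent = deque(maxlen=window)  # True = success, False = failure

    def record(self, success):
        self.recent.append(success)

    def success_ratio(self):
        if not self.recent:
            return 0.0
        return sum(self.recent) / len(self.recent)
```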

In a segmented marketing context, separate models can be set up for each consumer segment viewing a given page view, so that a custom gain model can be created for each segment; doing so should provide improved gains if the different populations are expected to react differently to the various messaging candidates.
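Segmentation then becomes a thin layer of per-segment models, assuming the visitor's segment is known at request time (the segment and offer names are hypothetical):

```python
# One aggregate model per consumer segment: the same c-c computation
# runs independently over each segment's counts, yielding a custom
# gain model per segment.
segment_models = {
    "new-visitor": {"offer-a": [5, 100], "offer-b": [9, 100]},
    "returning":   {"offer-a": [30, 100], "offer-b": [12, 100]},
}

def success_ratio(segment, candidate):
    successes, exposures = segment_models[segment][candidate]
    return successes / exposures
```

Note how each segment can favor a different candidate: in these made-up counts, offer-b leads for new visitors while offer-a leads for returning ones.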

Wrapping Up

Stepping back from the granular details of Web site messaging optimization, it is important to remember what we are hoping to achieve with it. It is also important not to forget its limitations.

One trouble with automation lies in its out-of-sight, out-of-mind quality. It is easy to forget about it once it is in place. The key to messaging optimization is to not become complacent. It is important to revisit not only the candidates but also the entire optimization strategy from time to time to make sure that it is still being gainful. Let automation be a path to do more with less, but do not let it become an excuse to lose diligence.

I would also recommend that the marketer be open to various optimization algorithms before settling on one. The c-c algorithm is a strong one, but it does not encompass a solution for all messaging problems. (For instance, the c-c algorithm optimizes behavior to just one callout but does not approach the issue of how various callouts on the same page can be optimized together to provide optimal overall gain for a total page view. I have been thinking that genetic algorithms might be better suited for this type of higher-level optimization.)

With good planning and diligence, the issue of what to do gainfully with short-term visitors can be resolved. Although the details of this process can seem at times to be overwhelming, the magic of this approach lies in its ease once put in place, allowing the marketer to have the freedom to begin pondering what to do next with his or her visitors.


Matthew Syrett is a marketing consultant/analyst—a hybrid marketer, film producer, technologist, and statistician. He was vice-president of product development at the LinkShare Corporation and vice-president at Grey Interactive. Reach him via syrett (at) gmail (dot) com.
