A key tenet for data-driven marketing professionals is fact-based decision-making. However, something strange is occurring in scientific studies, where tested and proven results are becoming difficult to replicate. The challenge for marketing professionals is to realize that what is true today may not be true tomorrow.

Jonah Lehrer, in the December 13, 2010, issue of the New Yorker, authored a stunning article on the increasing ineffectiveness of therapeutic drugs. Medications tested eight to ten years ago on large populations seem to work only half as well in similar trials today. In other words, many medication tests in randomized, double-blind clinical trials (considered the gold standard in science) are not repeatable—not even close! Lehrer points out the conundrum: “The test of replicability … is the foundation of modern research. Replicability is how the (scientific) community enforces itself.”

Lehrer notes this phenomenon isn’t limited to pharmaceuticals. In fact, in tests of vitamins and cardiac stents, what was once “known” and proven now largely fails validation when tested again. “It’s as if our facts were losing their truth,” Lehrer says. “Claims that have been enshrined in textbooks are suddenly unprovable.”

Observation, testing, experimentation, and replication are the foundations of the scientific process. Lehrer explains the issue in a nutshell: “If replication is what separates the rigor of science from the squishiness of pseudo-science, where do we put all these rigorously validated findings that can no longer be proved?”

Some may argue that changing populations, or even advances in psychology, account for the shifting test results. Or perhaps there was a miscalculation, a statistical fluke, or some other anomaly in the earlier trials. These are entirely plausible explanations, but it is unlikely that hundreds, if not thousands, of studies would all face the same challenges today. And even allowing for some statistical variation, declines of 30-50% are common in repeated experiments, especially those conducted over a period of years. There must be something behind this drastic decline.
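One of the candidate explanations above—a statistical fluke amplified by journals that favor striking results—can be sketched in a few lines. The simulation below is purely illustrative (the effect size, noise level, and publication threshold are assumptions, not figures from Lehrer's article): when only unusually large initial results get published, independent replications of the same true effect will, on average, look much smaller.

```python
# A minimal sketch of regression to the mean plus selective publication,
# one hypothesized mechanism behind an apparent "decline effect".
# All numbers here are illustrative assumptions.
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.5   # assumed true effect size of the treatment
NOISE_SD = 0.5      # assumed study-to-study measurement noise
THRESHOLD = 1.0     # only initial results this large get "published"

published_initial = []
replications = []

for _ in range(100_000):
    initial = random.gauss(TRUE_EFFECT, NOISE_SD)
    if initial >= THRESHOLD:  # journals favor confirming, striking data
        published_initial.append(initial)
        # an independent replication of the same underlying effect
        replications.append(random.gauss(TRUE_EFFECT, NOISE_SD))

print(f"mean published initial effect: {statistics.mean(published_initial):.2f}")
print(f"mean replication effect:       {statistics.mean(replications):.2f}")
```

Under these assumptions the published initial effects average well above the true effect, while the replications cluster around it—so the replications appear to "decline" by roughly half, even though nothing about the underlying phenomenon changed. This is a sketch of one mechanism, not a claim that it fully explains the results Lehrer describes.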

The “decline effect” is keeping scientists across the globe quiet about whether their experiments can be replicated. One scientist was told by his mentor that his “real mistake was trying to replicate his work” and that he would only set himself up for “disappointment.”

Adding insult to injury, it appears that scientific journals don’t want to publish findings regarding the decline effect. Biologist Leigh Simmons from the University of Western Australia tried to submit results to various publications detailing his difficulty in duplicating his experiments, but “the journals only wanted confirming data,” he says. And that’s ultimately a shame, because there is much to learn not just from success but also from failure.

This, then, may explain why much scientific “evidence” is often contradictory. One year we learn that coffee is good for health; two years later, it’s bad. We find that hormone replacement therapy is recommended for menopausal women one year, only to learn the next that it never should have been recommended in the first place. “The situation is even worse when the subject is fashionable,” Lehrer says.

All told, there are bias, conflicts of interest, faulty design, selective publishing, progress in psychology, and certainly other factors at work. However, the decline effect is a much more significant finding, as it “reminds us how difficult it is to prove anything.” Because what is true today may not be true tomorrow, the best response to “proven” scientific facts is a healthy dose of skepticism and a question-everything mindset.

Questions:

  • If yesterday’s rigorous and validated findings can no longer be proved, what does this mean for those practicing empirical, fact-based decisioning?

  • Statistically speaking, the decline effect shouldn’t be happening, but it does. What other reasons explain why facts can lose their “truthiness” over time?


ABOUT THE AUTHOR
Paul Barsch directs services marketing programs for Teradata, the world's largest data warehousing and analytics company. Previously, Paul was marketing director for HP Enterprise Services' $1.3 billion healthcare industry practice and a senior marketing manager at global consultancy BearingPoint. Paul is a senior contributor to MarketingProfs, a frequent columnist for MarketingProfs DailyFix, and has published over fifteen articles in marketing, management, technology, and healthcare publications. Paul earned his Bachelor of Science in Business Administration from California Polytechnic State University, San Luis Obispo. He and his family reside in San Diego, CA.