Successful customer and market research can provide the data you need to support specific business objectives, such as understanding requirements for a new market or a new product, fine-tuning your customer journey mapping, improving competitive market positioning, refining a customer-facing process, analyzing your win/loss results, and measuring customer satisfaction.
With the proliferation of survey software, it can be tempting to jump into creating survey questions, aided in part by tips published on the software company's website. But successful research is so much more than wordsmithing the questions and choosing appropriate rating scales: It's a science that takes time and money. Your methodology and corresponding survey instruments/questionnaires must support the purpose of your research, along with any business, timing, and budget constraints.
Bad decisions made as a result of a flawed research process (purpose, questions, implementation, and results interpretation) can be extremely costly, and potentially fatal. Usually, only the very largest companies have research experts on their marketing team. If you don't have one, consider engaging an external expert to mitigate risk on this very strategic project.
Six Best Practices to Guide Your Research
Based on our several decades of successfully implementing research initiatives and crafting research instruments, from questionnaires to discussion guides, here are six proven best practices to help you design, field, and interpret your study.
1. Be clear about your purpose
Determine specifically what you want to learn, and understand what data you need to support the decision.
Here are two examples:
- To decide whether to pursue this market, we need to know_____________.
- As a result of this research, we will be able to make the following business decision(s) __________.
Once you understand the problem and opportunity, start the process by answering these four questions:
- What do we know now?
- What are the best opportunities as we understand them now?
- How can we frame our options based on what we know now?
- What do we need to know that we don't know now to make this decision?
2. Determine your target population
One of the critical decisions in the design phase is determining your target population—that is, the people who will participate in your study during the fielding phase. It is important to have a sufficiently large sample size if you want the results to be statistically meaningful. If the sample size is too small, it is very possible to draw erroneous conclusions.
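To make the sample-size point concrete, here is a minimal Python sketch of the standard formula for estimating a population proportion. The 95% confidence level and ±5% margin of error are illustrative defaults, not figures from this article:

```python
import math

def sample_size(z=1.96, p=0.5, e=0.05):
    """Minimum sample size for estimating a population proportion.

    z: z-score for the confidence level (1.96 ~ 95% confidence)
    p: expected proportion (0.5 is the most conservative choice)
    e: acceptable margin of error (0.05 = plus or minus 5 points)
    """
    return math.ceil(z ** 2 * p * (1 - p) / e ** 2)

# About 385 completed responses are needed for +/-5% at 95% confidence;
# tightening the margin of error raises the requirement sharply.
n = sample_size()
```

Note that the required sample grows with the square of the precision you want: halving the margin of error roughly quadruples the number of respondents needed.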
A sample should be chosen at random from the population so that it is representative of the population. Decisions based on the characteristics of the sample can then be generalized to the entire population. Random sampling enables you to draw statistical inferences based on information collected from a small group representative of the population under investigation. With a random sampling technique, each subject in the population has an equal chance of being included in the sample.
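A minimal sketch of simple random sampling using Python's standard library, assuming the population is already available as a list of contact records (the record names below are hypothetical):

```python
import random

# Hypothetical population: 10,000 contact records (any list of items works).
population = [f"respondent_{i}" for i in range(10_000)]

# Simple random sampling without replacement: every member has an equal
# chance of selection, so sample statistics generalize to the population.
random.seed(42)  # fixed seed only so this sketch is reproducible
sample = random.sample(population, k=400)
```

In practice the population list would come from a CRM export or a purchased panel, but the selection step itself is this simple.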
Make sure your research participants are qualified to answer the questions; otherwise, they may feel compelled to make up an answer. It is acceptable to include a "Don't Know" category; however, when you receive a lot of "don't knows," it may mean that they really do not know and are not qualified to answer—or that "Don't Know" was the easiest answer to select.
3. Design your instrument
Here are some quick tips for how to design your survey instrument so it delivers meaningful results that drive action:
- Carefully plan incentives or requirements for the study participants.
- Provide a brief overview of the objectives of the study in the research purpose document.
- Keep the number of questions to 15-30. Enthusiasm and interest wane if the instrument, whether it is fielded online, by phone, or in person, is too long, and an overlong instrument undermines the validity of the responses.
- Keep questions brief, direct, unambiguous, and focused on a single topic. Questions should be written in neutral language; avoid biased or judgmental wording (such as "should," "ought to," "bad," "wonderful").
- Group your questions into subsets, with headings to orient the respondent. This is important regardless of methodology. All of the questions should fit together in a logical, orderly, thematically holistic manner. Questions from "left field" are distracting.
- Make early questions less controversial, and design them to pique interest.
- Leave demographic questions for the very end. They require the least thought, leaving respondents enough energy for the more complex questions that come earlier. Plus, questions perceived as "personal" may make participants defensive and lead them to believe their anonymity is being violated, altering their subsequent responses.
- For self-administered instruments, such as online surveys, maintain consistency among scale questions so that all worst-to-best scales run in one direction (left to right or right to left). Research suggests that a scale with the "best" response at the left yields a higher mean response than the same question arranged worst-to-best (called the primacy effect, or the tendency of people to favor the left side of the scale). So a best-to-worst arrangement will yield a higher mean, while worst-to-best may give a lower mean with a wider standard deviation.
- Use open-ended short-answer questions to provide richer, more personalized responses. Remember, though, that these are harder to analyze, since group summaries can be made only after time-consuming content analysis. Limit the number of short-answer questions unless you are prepared to spend considerable time in assessing group consensus.
- End with a brief description of how the data will be used. And a sincere "thank you," along with reassurance that someone will be reading and analyzing the data, is helpful to encourage survey completion in the future.
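The content analysis mentioned in the open-ended-question tip above can start with simple keyword counting. This is a hedged first-pass sketch, not a substitute for proper qualitative coding; the responses and stop-word list are invented for illustration:

```python
import re
from collections import Counter

# Invented open-ended responses to "What would improve the product?"
answers = [
    "Better onboarding and clearer pricing",
    "Pricing is confusing; onboarding took too long",
    "More integrations, clearer pricing tiers",
]

# First pass at content analysis: normalize case, keep only words,
# drop a small stop-word list, then count recurring terms as candidate themes.
stop_words = {"and", "is", "the", "too", "more", "a", "to"}
tokens = [
    word
    for answer in answers
    for word in re.findall(r"[a-z]+", answer.lower())
    if word not in stop_words
]
theme_counts = Counter(tokens)  # "pricing" surfaces as the most frequent term
```

A frequency table like this only suggests where to look; a human coder still needs to read the responses and group them into genuine themes.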
4. Test, then implement
Always test your instrument before deployment. What seems obvious to the instrument's author may be completely unclear to the typical respondent. Or, worse, a difficult question may be misunderstood or skipped, and a hard-to-understand self-administered instrument is almost certainly destined to be discarded.
5. Analyze and report the results
When analyzing research results, you want to be sure to address validity.
The validity of the questions can be assessed by examining the number of respondents who chose each response option; no single option should have more than 85% of the responses, and none less than 5%.
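That 85%/5% rule of thumb translates naturally into a quick programmatic check. This is a minimal sketch; a real analysis would also account for answer options that received zero responses, which never appear in the counts below:

```python
from collections import Counter

def option_shares(responses):
    """Share of total responses captured by each answer option."""
    counts = Counter(responses)
    total = len(responses)
    return {option: count / total for option, count in counts.items()}

def question_is_valid(responses, ceiling=0.85, floor=0.05):
    """Apply the rule of thumb: a question fails if any chosen option
    captured more than `ceiling` or less than `floor` of the responses."""
    return all(
        floor <= share <= ceiling
        for share in option_shares(responses).values()
    )

# Invented response sets for a single question
lopsided = ["Agree"] * 90 + ["Disagree"] * 10  # 90% on one option: fails
balanced = ["Agree"] * 40 + ["Neutral"] * 35 + ["Disagree"] * 25  # passes
```

A question that fails this check is not discriminating among respondents, so it adds little analytical value however carefully it was worded.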
Keep the business purpose of your research in mind as you analyze the data. Examine the responses to individual questions within the context of the relevant business issue. You may need to look at the responses across a group of questions to discern the implications of the responses to that business issue.
The inclusion of key demographics provides valuable opportunities for insightful subgroup analysis.
6. Take action
Last, but far from least, take action based on the results. Then close the loop by reporting on what changed as a result of the business decision(s) the study was designed to support.
How an External Expert Can Help
From experience, an expert knows what questions to ask you to flesh out the purpose of your research—to make it specific enough to adequately guide the question development. Also, there may be many stakeholders in your organization, and each of them may have something they want to learn from the research. Yet, it is impossible for one study to do it all. The expert can function as a neutral third party to ensure the focus needed for success.
If you don't have contact information for the target population, an expert can suggest cost-effective sources and qualification criteria, and also help you decide the level of granularity needed based on your exact needs.
There are many moving parts to successfully fielding a study. Often, you won't have a second chance to return to the well; accordingly, if you don't have the expertise internally, hire an external expert. An external expert can help ensure the correct data is consistently and properly collected. This is especially important in interviews and focus groups, where it is easy to go off-script or inject unintentional biases that corrupt the findings.
As with any initiative, there's the potential for plans to go awry. External experts can more quickly recognize when something is off track so you can re-adjust in time. Because of their experience with data, external experts can often see relevant patterns more quickly.
A Note About Third-Party Resources
Part art and part science, research results (and the business decisions they inform) are only as good as the preparation that went into the design and fielding. Unless you truly need research/survey expertise on a daily basis, outsourcing is an effective way for small and midsize enterprises (SMEs) to access the same economies of scale, efficiency, and expertise that large companies enjoy.
To ensure your outsourcing efforts pay off, take a deliberate approach by choosing a partner with experience in preparing for and crafting instruments, deploying research, coding and analyzing data, and connecting research results to the business decisions your study is designed to support.
A well-designed network of partners helps you expand your research and marketing expertise in a cost-effective manner.