Why You Can’t Trust the Predictions of Your Market Research — and What to Do About It

Let’s say you’ve developed a new product, and you want to know how much it will sell. What’s the best way to figure it out? Well, ask consumers, of course! Right?

Well, sort of.

Now assume you do a survey of 385 of your target consumers. You show them a well-written product concept (a description of your product in a standardized format, used for market research) and ask how likely they would be to try it. Thirty percent say they will definitely give it a try. Great! Based on the sample size, you can claim 95% confidence (+/-5%) that 30% of consumers will try your product when you go to market. Sounds like your product will be a big success!
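
If you’re curious where that +/-5% comes from, here is a minimal sketch of the standard margin-of-error arithmetic for a simple random sample. Nothing here is specific to any particular research vendor; the 385 and 30% are just the numbers from the example above.

```python
import math

n = 385   # respondents surveyed
z = 1.96  # z-score for 95% confidence
p = 0.5   # worst-case proportion, which is what the usual "+/-5%" quote assumes

# Margin of error for a proportion: z * sqrt(p * (1 - p) / n)
moe = z * math.sqrt(p * (1 - p) / n)
print(f"Worst-case margin of error: +/-{moe:.1%}")  # about +/-5.0%

# Around the observed 30% "definitely would try," the interval is a bit tighter:
p_hat = 0.30
moe_30 = z * math.sqrt(p_hat * (1 - p_hat) / n)
print(f"Interval around 30%: {p_hat - moe_30:.1%} to {p_hat + moe_30:.1%}")
```

Notice that this math only tells you how precisely the survey measures what people say. It says nothing about whether what they say matches what they will do, and that gap is the subject of this article.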

So you gear up production to meet 30% market demand.  You set your sales and marketing budget knowing that 30% of consumers will give you a try.  Everything is set to go!  And then…only 10% of consumers try your product.  What happened??

This is the classic, best-documented example of the difference between statistical confidence and predictive validity. In other words, even when you can be confident that responses represent how the total population would answer a question, you cannot be confident that those answers predict how people will actually behave.

The problem in this example was not with the survey or the product concept or the statistics. The problem was that consumers consistently and predictably overstate their intent to purchase a new product. So while the survey accurately predicted that 30% of consumers in the population would say they will try the product, only about one-third of those really will, which is how you end up with 10% actual trial.

How big a problem is this in marketing?  Unfortunately, big.  Here are a few more examples:

  • Stated vs. Derived Drivers - Many times in a survey, you’ll ask consumers why they buy something or do something. They say it’s because of X, Y and Z. These are the stated drivers of behavior. But then you look at the data and see that other factors (let’s call them A, B and C) are actually the strongest predictors of the behavior. These are the derived drivers of behavior (there’s a rough sketch of how they’re typically computed after this list). Why are stated and derived drivers so frequently different? Because people often don’t know what drives their own behavior.
     
  • Pricing - A common question to ask in research is the maximum price someone is willing to pay for a product or service (or, in the Van Westendorp methodology, the price at which a product is so expensive the consumer would not consider buying it; see the second sketch after this list). Unfortunately, consumers sometimes deliberately understate the maximum they will pay, and in other cases unintentionally but grossly overestimate their willingness to pay. So again, people’s responses about their own behavior, including how much they’ll pay, do not accurately predict how they will actually behave in the market.
     
  • Choice Modeling - Discrete choice modeling, conjoint analysis and maximum-difference testing are all the rage right now. If you’re not familiar with these, they're used when you have too many variables to just ask survey respondents to do ratings or rankings. They force respondents to make choices between options, and then use sophisticated statistical techniques to project results to the entire audience.  Because these techniques are designed to exaggerate small differences, they are great for rank ordering (e.g. ranking claims) and optimizing (e.g. identifying the optimal feature set). But many people also use them to predict outcomes (e.g. new product volume).  The obvious problem here is that if the methodology exaggerates differences, absolute predictions will be overstated.  One study showed a 50-400% overstatement of product volume by choice methodologies.
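
About those derived drivers in the first bullet: they are typically computed with some form of regression or correlation analysis that relates a behavior to attribute ratings. The sketch below is only illustrative, with invented data and column names, and uses a plain linear regression where a real study might use a fancier driver model.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical survey data: attribute ratings (1-10) plus a behavioral measure.
# Column names and values are made up for illustration.
df = pd.DataFrame({
    "taste":       [7, 5, 8, 6, 9, 4, 7, 8, 5, 6],
    "price_value": [4, 6, 5, 7, 3, 8, 5, 4, 7, 6],
    "packaging":   [6, 6, 7, 5, 8, 5, 6, 7, 6, 5],
    "purchases":   [9, 5, 10, 6, 12, 3, 8, 10, 4, 6],  # behavior we want to explain
})

X = df[["taste", "price_value", "packaging"]]
y = df["purchases"]

model = LinearRegression().fit(X, y)

# The coefficients are the "derived" importance of each attribute,
# often very different from the reasons respondents state directly.
for name, coef in zip(X.columns, model.coef_):
    print(f"{name}: {coef:.2f}")
```

Real driver analyses often use more sophisticated techniques (relative weights, Shapley values, and so on), but the idea is the same: importance is inferred from the data rather than taken from respondents’ own explanations.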

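As for the Van Westendorp question in the pricing bullet: the full methodology asks four price questions (too cheap, a bargain, getting expensive, too expensive) and reads price points off the intersections of the cumulative response curves. Below is a deliberately simplified sketch with invented responses that uses just two of the curves to find the commonly cited “optimal price point” where “too cheap” and “too expensive” cross. It shows the mechanics only; it does nothing to fix the overstatement problem described above.

```python
import numpy as np

# Hypothetical Van Westendorp answers (in dollars) from 8 respondents.
too_cheap     = np.array([2, 4, 3, 6, 5, 3, 4, 5])    # "so cheap you'd doubt the quality"
too_expensive = np.array([6, 9, 7, 12, 8, 7, 10, 9])  # "so expensive you would not buy it"

prices = np.arange(1, 15, 0.5)

# Share of respondents who would call each candidate price too cheap (falls as price rises)
pct_too_cheap = np.array([(too_cheap >= p).mean() for p in prices])
# Share who would call each candidate price too expensive (rises as price rises)
pct_too_expensive = np.array([(too_expensive <= p).mean() for p in prices])

# The "optimal price point" is where the two curves cross, i.e. where the gap is smallest.
gap = np.abs(pct_too_cheap - pct_too_expensive)
opp = prices[gap.argmin()]
print(f"Optimal price point (where the curves cross): ${opp:.2f}")
```

Even with all four curves and a proper crossing calculation, the output is only as predictive as the stated willingness to pay that feeds it.
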
What can you do about this?  Here are a few tips:

  • Ask for validation data - It’s easy for researchers to cite the statistical confidence of a sample size and say it’s all that matters. If you have an important decision to make, ask the supplier for data validating that past predictions were consistent with what actually happened in-market. At The Seidewitz Group, we’re big fans of volumetric forecasting because it is one of the few market research methodologies with strong validation data.
     
  • Beware of arbitrary adjustments and rules of thumb - We’ve seen researchers try to address overstatement by making adjustments based on subjective judgment or rules of thumb. For example, we saw a research firm use an elaborate argument to adjust 20% purchase intent down to 1%, while someone at the company who commissioned the research used an elaborate argument to adjust the purchase intent down only slightly, to 18%. Who was right? Probably neither. The lesson is that adjustments to data should be based on sound data about how consumers misreport their own behavior, not someone’s best guess.
     
  • Use standardized methodologies that allow you to compare to norms - Many research firms pride themselves on proprietary, fancy new research techniques. Our advice is to avoid these in favor of techniques that have been used repeatedly and shown to be predictive of in-market behavior. A big advantage: you can compare your results to results for similar products to see how strongly your product performed. Examples in which standardized methodologies with strong historical data are available include concept testing, product use testing, Net Promoter/satisfaction research, brand awareness studies, and ad testing. Stick with standardized methodologies when you can and you will have a much stronger understanding of your data.

Perhaps the most important tip is to constantly be on the lookout.  There are many examples of research that is statistically reliable but not predictively valid.  Too few research firms understand this distinction.

So it’s up to you to be vigilant. Before you hire a research supplier, ask them what data they have to validate their methodology and what norms they have that have been shown to predict in-market success. If they can’t answer these questions (or worse yet, don’t understand what you’re asking), think twice about hiring them.