Does Marketing Research Live in an Echo Chamber? - Part 2
(If you haven't already, be sure to read part 1 to this blog post.)
I'm assuming that if you're reading this you're probably a marketing researcher. If that's the case, you probably have access to a bunch of survey questionnaires. Look on your hard drive, or reach into your file drawer if you're old school, and take out three or four. Look them over, and back up far enough that you're seeing their general structure rather than the details. What do you see? I'll bet they all share a similar approach:
- First, you almost certainly see a bunch of screening questions designed to eliminate all but a pre-defined target group from taking the main survey. This makes sense because you pay for sample, pay incentives, and pay for resources to administer, manage, and analyze the survey and you don't want to waste resources.
- Next your surveys probably have a section or two devoted to category-level questions, so you can profile results based on pre-established dimensions of opinions about and usage within whatever product/service category the survey focuses on. Again, this makes sense. It establishes context and in many cases may allow you to compare your findings to previous work.
- Then your surveys probably show or reveal something in order to get respondents' reactions to it. Purchase interest. Likes and dislikes. Batteries of questions dissecting emotional reactions. Maybe some open-ended questions to lay bare the underpinnings of those emotions. Maybe you'll go around a couple of times on the reveal/react carousel. Or dissect the combinatorial possibilities with some conjoint or discrete choice questions. All sensible choices. This is the meat of the survey and you want to be able to mine it for key findings and "insights," which are, after all, what the client is paying for.
- Finally, you'll almost surely find a bunch of classification questions - again, quite a sensible thing to include for establishing context and tying your work back to the client's tracking data and historical survey archive.
Everything in your surveys is sensible, serious, and workmanlike.
But back up and consider your surveys from the perspective of someone who, somehow, had never seen anything like them before. There's really quite a lot about your respondents and what they do and think that your surveys make no mention of. Your surveys contain such a limited set of choices!
There is, for instance, no information at all about what their friends and family members think, or what they would say about either your product or new idea. Nothing about what your respondents search for on Google or read online. Nothing about where they go every day as they travel around the places they live, or what they see or do there. Nothing about their inner lives: what they love, long for, dream about, pledge allegiance to, dislike, despise, or denigrate. Nothing about their pasts, really, or about their futures. And there's nothing about them that they are not or cannot become conscious of and respond to in answer to your question. Not to mention the problem that all of your questions come in prefab form, so the issues they address are constrained to the issues the client already recognizes. And let's not forget that there's no information from anyone who fell outside of your carefully limited screening quotas. How can you be sure that there are no discoveries to be made among them?
You'll object that you omitted all of that not because you're uninterested in experimenting with it, or because you'd reject the possibility of finding value out of hand, but because you have a limited budget and only so much time to spend. The client won't pay for experimentation and, besides, there's a decision to be made. The client has a lot riding on the research and has given a good deal of thought to what they need to know to make that decision. You need and want to give them good value for their money.
I'd like to suggest that this is a powerful example of what drives bubbles and echo chambers. Marketing research has evolved to serve structured corporate decision making and that process has evolved to demand the inputs that marketing research provides.
Corporate decision making and marketing research have coevolved the MR survey with a closed set of constructs and almost ritualistic format. Unfortunately, the evolution of surveys has been driven by the imperatives of the Red Queen - running faster and faster to stay in the same place, becoming an ever-closer fit with the needs of corporate decision making, budgets, and timelines. Each step has not been determined by the possibilities of utilizing new data sources, analytic approaches, or even accurately predicting future behavior, but by the ever-closer interlocking of the decision process and the information machine that feeds it.
Like Republican pundits talking to Republican audiences and eventually creating a closed worldview that, among other things, mis-called the election, we've created a mutually evolved worldview that has satisfied us and our clients for many years, but, as the CEOs keep telling us, often seems to fail to predict the real world and doesn't seem to be getting any better.
Will the appearance of data nerds bearing data on things like clickstreams, search histories, location, social influence, sentiment analysis, and facial expressions - complex real-world behavior and data that isn't based on the ability to frame a conscious answer to a prefabricated question - be the beginning of the opening-up of that closed co-evolved world? Will MR break off the echoing conversation before our clients do? Will it take the equivalent of a public defeat on a national election stage to turn things around, or will MR put genuine technical expertise, hard work, and ingenuity to work before we have to cobble together a last-minute concession speech?