December 19th, 2012
By Walt Dickie, Senior Vice President
(If you haven’t already, be sure to read part 1 to this blog post.)
I’m assuming that if you’re reading this you’re probably a marketing researcher. If that’s the case, you probably have access to a bunch of survey questionnaires. Look on your hard drive – or reach into your file drawer if you’re old school – and take out three or four. Look them over, and back up far enough that you’re seeing their general structure rather than the details. What do you see? I’ll bet they all share a similar approach:
- First, you almost certainly see a bunch of screening questions designed to eliminate all but a pre-defined target group from taking the main survey. This makes sense because you pay for sample, pay incentives, and pay for the resources to administer, manage, and analyze the survey – and you don’t want to waste any of it.
- Next, your surveys probably have a section or two devoted to category-level questions, so you can profile results based on pre-established dimensions of opinions about and usage within whatever product/service category the survey focuses on. Again, this makes sense. It establishes context and in many cases may allow you to compare your findings to previous work.
- Then your surveys probably show or reveal something in order to get respondents’ reactions to it. Purchase interest. Likes and dislikes. Batteries of questions dissecting emotional reactions. Maybe some open-ended questions to lay bare the underpinnings of those emotions. Maybe you’ll go around a couple of times on the reveal/react carousel. Or dissect the combinatorial possibilities with some conjoint or discrete choice questions. All sensible choices. This is the meat of the survey and you want to be able to mine it for key findings and “insights,” which are, after all, what the client is paying for.
- Finally, you’ll almost surely find a bunch of classification questions – again, quite a sensible thing to include for establishing context and tying your work back to the client’s tracking data and historical survey archive.
Everything in your surveys is sensible, serious, and workmanlike.
But back up and consider your surveys from the perspective of someone who, somehow, had never seen anything like them before. There’s really quite a lot about your respondents – what they do and what they think – that your surveys make no mention of. Your surveys contain such a limited set of choices!
There is, for instance, no information at all about what their friends and family members think, or what they would say about either your product or new idea. Nothing about what your respondents search for on Google or read online. Nothing about where they go everyday as they travel around the places they live, or what they see or what they do there. Nothing about their inner lives: what they love, long for, dream about, pledge allegiance to, dislike, despise, or denigrate. Nothing about their pasts, really, or about their futures. And there’s nothing about them that they are not or cannot become conscious of and respond to in answer to your question. Not to mention the problem that all of your questions come in prefab form, so the issues they address are constrained to the issues the client already recognizes. And let’s not forget that there’s no information from anyone who fell outside of your carefully limited screening quotas. How can you be sure that there are no discoveries to be made among them?
You’ll object that you didn’t omit any of that because you’re uninterested in experimenting with it, or because you’d reject the possibility of finding value out of hand – you simply have a limited budget and only so much time to spend. The client won’t pay for experimentation and, besides, there’s a decision to be made. The client has a lot riding on the research and they’ve given a good deal of thought to the question of what they need to know to make that decision. You need and want to give them good value for their money.
I’d like to suggest that this is a powerful example of what drives bubbles and echo chambers. Marketing research has evolved to serve structured corporate decision making and that process has evolved to demand the inputs that marketing research provides.
Corporate decision making and marketing research have coevolved the MR survey with a closed set of constructs and almost ritualistic format. Unfortunately, the evolution of surveys has been driven by the imperatives of the Red Queen – running faster and faster to stay in the same place, becoming an ever-closer fit with the needs of corporate decision making, budgets, and timelines. Each step has not been determined by the possibilities of utilizing new data sources, analytic approaches, or even accurately predicting future behavior, but by the ever-closer interlocking of the decision process and the information machine that feeds it.
Like Republican pundits talking to Republican audiences and eventually creating a closed worldview that, among other things, mis-called the election, we’ve created a mutually evolved worldview that has satisfied us and our clients for many years, but, as the CEOs keep telling us, often seems to fail to predict the real world and doesn’t seem to be getting any better.
Will the appearance of data nerds bearing data on things like clickstreams, search histories, location, social influence, sentiment analysis, and facial expressions – complex real-world behavior and data that isn’t based on the ability to frame a conscious answer to a prefabricated question – be the beginning of the opening-up of that closed co-evolved world? Will MR break off the echoing conversation before our clients do? Will it take the equivalent of a public defeat on a national election stage to turn things around, or will MR put the genuine technical expertise, hard work, and ingenuity to work before we have to cobble together a last-minute concession speech?
December 17th, 2012
By Walt Dickie, Executive Vice President
I’ve been working in the MR industry since 1978, and the one unchanging theme of that long period has been the constant complaint by the senior corporate executives – who fund the industry by their demand for research-based decision making – that MR just isn’t very good at identifying opportunities or pointing to the new products, services, and ventures with the best chances of successfully capitalizing on them.
I remember being surprised when I first encountered a CEO denouncing the entire MR enterprise at a major conference – which probably happened sometime within my first or second year. Was my new, desperately needed first job after grad school – and maybe the whole industry – going to go down in flames before I even got started? It hasn’t, but the drumbeat of C-suite dissatisfaction has never lessened.
This all came to me as I was reading one of the many articles from the “nerdiest election” in which Romney had not prepared a concession speech because – by all the accounts I’ve seen – he didn’t think he’d need one. It sounds like Romney really got blindsided. Not only did he not have a concession speech held in reserve, but he also planned to celebrate the election with a fireworks display over Boston Harbor and his campaign even had a “Romney Wins” website up and ready. (Someone posted screen grabs, of course.)
As Slate’s John Dickerson said, “He got the numbers wrong … in the end Romney and Ryan had to watch CNN to find out how their campaign was doing.”
The blog posts and news stories lay blame on what David Frum labeled the “conservative entertainment complex.” And it looks like Romney’s “marketing research” group followed the media’s lead in asking the questions it wanted asked, hearing the answers it wanted to hear, and reinforcing an internal viewpoint that, in the end, failed as MR and left the CEO “gobsmacked” and angry.
I read all of this with the shock of recognition. My mind snapped back to 1985 and the introduction of New Coke. All the research that had been done! And, the magnitude of the disaster that followed! The astonishment of everyone inside Coca-Cola that their carefully constructed edifice had been built on sand. I had seen all of it and had enjoyed having the inside scoop thanks to two colleagues who had come from Coke’s MR department and understood the backstory, which had included a huge research effort.
The post-election stories seem to be revealing a distinctive, internal worldview shared by the campaign and its media supporters. Republican pollster Whit Ayres described the research as being driven by “rosy assumptions on a likely electorate…at…substantial variance with recent history.”
An “echo chamber” bubble developed when the campaign research staff and its clients elaborated a shared narrative about polling and sampling methods and how to interpret results. Outside that bubble, the academic social scientists and media stat nerds – with Nate Silver as their symbolic leader – were using different methods, asking different questions, and interpreting their findings differently.
They were right in the end, and the people in the bubble were wrong.
A case can be made that MR and our clients have created a similar bubble, in which we talk to each other in an echo chamber. And, a case can be made that the academic social scientists and techie stat nerds are “threatening” traditional MR with everything – including Big Data, social media and web based behavioral analytics, location data, remote facial analysis, and eye tracking – living in an entirely different world.
The GRIT survey, as well as reports coming out of consultancies like Cambiar (site registration required), says clearly that corporate research departments are eagerly welcoming all the new “non-MR” vendors knocking on their doors. What’s worrisome is what would happen if the CEOs started sending more work to the guys outside the MR world, and those guys started having some successes calling the game. That’s what’s behind the voices expecting the leading “MR” vendors at the end of the decade to be companies like Google, Facebook, and Twitter.
Why didn’t all of these new approaches arise out of marketing research? At the very least, why weren’t traditional marketing research companies their earliest and most eager adopters? Why didn’t clients hear about all of these developments from their MR vendors? Is it because the conversation was “closed” and things like that simply had no place?
I’m not arguing that MR is operating in bad faith – only that we and our client audiences may have constructed a shared view of how research is commissioned and conducted that has become closed, limited, and overly rigid.
Marketing researchers are almost universally serious, sober people who see themselves as technical experts in a field that demands hard work, clear thinking, and ingenuity to provide vital information to decision makers while often suffering little respect, diminished budgets, and constricting timelines. But make no mistake about it, marketing research has been working with the same basic approaches for at least a couple of generations. I made a stab at some of the qualitative issues in a previous post, so in part 2 of this post, I will examine a few things about quantitative research.
December 10th, 2012
By Walt Dickie, Executive Vice President
I’ve been living in a sort of fugue state since the presidential election. I keep drifting into extended musing over the ways that the campaigns were acting out lessons for marketing researchers. I’m hoping that this will come to an end fairly soon; in fact, I think I can feel the effect beginning to wear off.
In the early stages of the rush, all of the lessons seemed to highlight weaknesses in either qualitative or quantitative MR. But now I’m getting more positive messages.
If you know where to look and how to spot them, you can see the influence of Operations in the Election Day stories. There seems to have been an Operations disaster, but I think I also see one of the rare cases, in my experience, where someone listened to Ops early on, took their advice, and nailed the project.
In the case of the ill-fated “Republican Party’s newest, unprecedented, and most technologically advanced plan to win the 2012 presidential election,” Project ORCA, the Ops guys apparently lost to the PR guys. I’m absolutely sure the campaign’s Operations guys insisted upon holding serious training sessions for the volunteers who would use the system, because I know there had to have been Ops geeks around and I know how Ops geeks think. But, according to a volunteer who blogged his experience (and disappointment), volunteers “were invited to take part in nightly conference calls. The calls were more of the slick marketing speech type than helpful training sessions.”
In the words of Zac Moffatt, the campaign’s digital director, the system “kind of buckled under the strain (of) the amount of information incoming.” The lack of training left the people out in the field unprepared and unable to communicate. Even with “800 Romney workers…staffing phones…the surge in traffic was so great that the system didn’t work for 90 minutes,” leaving the field workers scrambling and headquarters without field input.
I have to say that I felt awful reading this. I know that somewhere an Operations Director was crying, and that his or her advice had gone unheeded. I’m dead sure of it.
But the most interesting story about Ops guys and an election didn’t even happen in 2012 – it happened in the Democratic camp in 2008.
Amazingly enough, in 2008 the Democrats had built a system called Houdini that, like ORCA in 2012, was designed to “make the names of those who had already voted disappear from the Get Out The Vote lists” maintained by volunteers in the field, who could then stop wasting time on people who had already voted and concentrate on the people who hadn’t.
Like ORCA, Houdini failed. Spectacularly. “On Election Day, the call volume was even more than anticipated and took out the entire phone system for the Obama campaign. It didn’t just affect the reporting of vote totals but affected anything that involved a central campaign phone line.”
But here’s where the Ops guys come in. After Houdini’s failure the developers did a post mortem and scaled back the functionality of their 2012 system, Narwhal.
“It was basically determined that it wasn’t worth the risk or the amount of work for every precinct in the country. The creators of Houdini came in from Google and decided that it wasn’t possible to build a system that would scale that big.”
This is amazing – somebody decided to build a system with reduced functionality in order to get improved reliability. Do you suppose the political team or the marketing guys wanted less information on Election Day? Hell, no! I’m guessing they fought like cornered Tasmanian Devils not only to expand the scope but also to add new features. I wasn’t there, of course, but I’ve been in enough discussions between MR analysts, developers, and Operations people to be pretty sure who fought for what. The guys from Google were Ops savvy.
Operations people are under-appreciated. Their job is to execute somebody else’s brainchild, and we rely on them to make it a success. Like the hedgehog in the fable, they know one big thing: failure is not an option. Projects have to get done. On time. Within the budget. The data has to be right. There will not be a second chance. They’ll move mountains to get things done. Sometimes they go home at night wondering why no one sought or followed their advice and sailed right into the big rock they saw so clearly ahead.
But apparently as the developers proceeded from 2008’s Houdini to 2012’s Narwhal the Ops guys won one. I hope they got the credit they deserved because too often they don’t.
December 6th, 2012
By Walt Dickie, Executive Vice President
The torrent of shopping data from Black Friday and Cyber Monday is coming in. Although their names suggest a first-person shooter game involving Robocop in some futuristic battle, these two days are the now-traditional kickoff to the U.S. Christmas shopping orgy, and a clarion call to the armies of the commentariat and blogosphere.
And, once again, in what appears to be as much a part of the new American Christmas tradition as the shopping experience itself, the big headlines are all about the massive growth of online shopping.
If you’ve somehow missed all the frenzied scribbling, you can turn to IBM which published the key data that almost everyone seems to have relied on in the IBM 2012 Holiday Benchmark Reports. There, in just a few pages of data, you’ll find both Friday and Monday’s online sales data broken down by retail category and compared to last year’s results.
The topline news story is, of course, the huge increase in online shopping: up 17.4% compared to last year’s Thanksgiving Day, up 20.7% compared to Black Friday 2011, and a whopping 30.3% up on Monday compared to a year ago.
Close behind is the news about mobile: IBM estimates that 24% of retail site traffic came from mobile devices on Black Friday this year, up from 14.3% last year – a monster increase of 67.8%. On Cyber Monday – traditionally understood as the day people went back to work and shopped their brains out on their employers’ broadband connections – mobile users were responsible for 18.4% of retail site traffic, which was up from 10.8% a year ago – an incredible 71.4% increase.
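The year-over-year figures above are simple relative changes; as a quick sketch (using the Black Friday mobile-traffic shares quoted from IBM above):

```python
def pct_change(new, old):
    """Relative change from old to new, expressed as a percentage."""
    return (new - old) / old * 100

# Mobile share of retail site traffic on Black Friday: 24% in 2012 vs. 14.3% in 2011
print(round(pct_change(24, 14.3), 1))  # → 67.8
```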
The oddity of the week was the unexpected difference between Android and iOS (Apple) users that emerged after correcting for Android’s dominance in the smartphone race. On Black Friday iOS devices accounted for 77% of mobile shopping traffic while Android accounted for only 23%. This is an oddity because currently Android phones and tablets outnumber iOS phones and tablets by about 60/40. Work it all out and “iPhone (and iPad) users are about three times more engaged in shopping with their devices than Android users.”
Horace Dediu does an excellent job of unpacking the “Android engagement paradox,” which he attributes mostly to “later adopters” buying Android phones in numbers sufficient to have overcome Apple’s early lead in smartphones. But, in the end, he finds this answer unsatisfactory. I wonder if the Android/iOS “paradox” contains a message for marketing research about mobile sampling – should we be thinking about weighting our mobile samples or imposing Android/iOS quotas?
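If we did decide to weight, the simplest approach would be post-stratification: scale each respondent so the sample’s OS mix matches an assumed population mix. A minimal sketch, using the rough 60/40 install-base split cited above and the 23/77 traffic split as a stand-in for an observed sample (both figures are illustrative, not a recommendation):

```python
# Post-stratification sketch: weight = population share / sample share, per OS.
population = {"android": 0.60, "ios": 0.40}   # rough install-base split cited above
sample     = {"android": 0.23, "ios": 0.77}   # hypothetical observed sample mix

weights = {os: population[os] / sample[os] for os in population}

# Android respondents get up-weighted, iOS respondents down-weighted
print({os: round(w, 2) for os, w in weights.items()})  # → {'android': 2.61, 'ios': 0.52}
```

Whether weighting on OS is sensible depends on whether OS correlates with the behavior being measured – which is exactly what the “engagement paradox” suggests it might.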
But the main message for MR comes from some much more basic observations about mobile usage.
What caught my attention in IBM’s data wasn’t the comparison of this year’s Black Cyber Days to last year’s, but the column comparing Black Friday 2011 – last year’s Big Gun – with Friday, November 16, 2012 – seven days before this year’s opening round, a “normal” pre-holiday Friday, which I will henceforth refer to as “Normal Friday 2012.”
You may remember that the headlines for Black Friday 2011 were more or less the same as they were this year: online as a whole and mobile shopping in particular were way up. But “Normal Friday 2012” blew away Black Friday 2011 on several measures. For instance, sales increased 10.8% on retail sites compared to last year’s Black Friday, and, on average, a sale in 2012 involved 3 items more than a sale in 2011. The headline-making shopping news of 2011 now trails the new normal.
Mobile is the major factor:
(Chart: mobile data in blue; other data in orange; sales-related data outlined in red.)
Of the variables that increased on Normal Friday compared to Black Friday, most involve mobile; overall, mobile site traffic increased 4.6%. Although mobile sales increased, single-page sessions and abandoned shopping carts also increased on mobile. Moreover, with the exception of mobile sales, all the variables involving closing a sale fell, as did sessions in which a visitor placed an item in a cart.
Normal Friday 2012 obviously involved more looking and comparing, even though buying did manage an increase.
All of the other data collected over the past couple of years reinforces the obvious conclusion that mobile devices – smartphones and tablets – have become even more important components of shopping. Checking prices and features, sales, finding online coupons, and, for that matter, seeing advertising, are now normal parts of the shopping experience. As I said, we knew that.
But what seems to be fairly stunning is that in a single year the headline-making news of Black Friday is now the everyday expectation of Normal Friday.
The moral is that the retailers get it; they’re struggling with it but they get it. They all know that they have four screens to think about – TV, computer, tablet, and phone. They all know about showrooming: “when a customer visits a brick and mortar retail location to touch and feel a product and then goes online…to purchase the product.” And, the smartest of them have stopped complaining about it and are working on leveraging it with apps that provide extra services in-store – new items, sales notices, bar code scans – then let you “flip” to their website to take advantage of online deals. The same app lets you find their deals when showrooming in a competitor’s store, too. They’re adapting to the new normal.
Can this please be the year that marketing researchers – clients and suppliers – stop wondering whether to add a “cell phone segment” to a sample spec and accept the fact that we need to expand our data collection toolbox to fit the time, connectivity, and screen size constraints of mobile while also expanding our thinking? We should be focusing on leveraging the incredible capabilities that mobile presents for collecting new kinds of data in new kinds of situations. We need to walk away from the “online revolution” of the early 2000s and realize that we’re in a new, increasingly mobile-dominated age.
By the way, the average session on a mobile device on Black Friday 2011 took 4:03; by Normal Friday 2012 it had shrunk to 3:46. That session probably took place while the shopper was doing at least two other things at the same time, and everything that was going on was almost certainly of interest to MR. But we probably missed it.
December 4th, 2012
By Patti Fernandez, Research Director
The Marketing Research Event was buzzing with excitement and anticipation. What tales would we hear, what knowledge would we uncover, what trends would take center stage? And, in the end, on what new paths would we, as researchers, venture?
Insight development via storytelling and storytelling through data visualization were very much in the air. Many a session encouraged us, like Dorothy, to follow the yellow brick road toward our own Emerald City where insights break the confines of numbers and quotes and live within visually compelling stories.
But, in today’s data-driven world, how can we tell a story visually while seamlessly satisfying the needs of the data literalists? And, how can we shake the compulsion to show everything we’ve uncovered because (in our minds) every nugget matters?
The key is not only to tell a story, but also to approach the insight development process in the same way as story-creation. Here are five key elements to a solid storytelling approach:
Relevance is Key
- There is usually a rhyme and reason for everything that is included in a story (foreshadowing, plot-building, etc.).
- In that same way, results and insights should serve as key puzzle pieces that help build and complete a bigger picture.
- Relevance, though, takes time. We must first go treasure-hunting through all of our findings in order to determine which ones truly are worthy of supporting the key insights that need to be communicated.
Follow a Natural Order
- Stories follow a natural, rational order that keeps us alert and engaged with the plot.
- Our insights and findings, then, should follow the same path. They should help take the audience on a journey that makes sense and keeps them on the edge of their seats.
Create Conflict and Resolution
- Without conflict there is no resolution – without resolution there is no end to a story.
- Always aim to keep the plot of your story anchored. Your role as a researcher is to tell a story that ultimately helps resolve some sort of conflict.
Define Your Characters and Their Roles
- Characters have set roles in the story – they exist for a reason.
- In order to approach research in an organized and rational manner, we must first define who the characters are and what role they play.
- We may be swayed to think that the brand or product is the hero, but it is the consumer who should wear this badge. Brands are simply the tools that help the hero resolve conflict.
Bring Your Story to Life
- A good story will keep us turning the pages if it’s told in an engaging manner. Overly descriptive passages or circular plots can deter engagement and leave us tossing the story aside unfinished.
- And, just like a poorly written story, research results that are loaded with data that makes the audience have to work too hard to decipher the true message can fall flat.
- Using visual depictions of information to surprise and to make data easily digestible will not only make your research more engaging, but also make it easier to present the story in a personal and animated manner.
In the end, it’s not simply how you present your insights with iconic figures, captivating prose, and visually stimulating graphics – it’s how you approach the insight-finding process. So, take a leap of faith and follow the rabbit down the hole through a journey of discovery.
November 8th, 2012
By Bob Relihan, Senior Vice President
In a recent blog post, my good friend and colleague, Walt Dickie, has taken the success of Nate Silver’s data-focused and accurate prediction of last night’s election outcome and the failure of so many pundits to do the same as a metaphor for the power of big data and the twilight of the focus group moderator. His argument is that hard-eyed, statistically significant data, modeled and analyzed properly, trumped the instinct and expertise of many pundits and their years of experience feeling the winds of voter moods and sentiments. This is a clear death knell for the focus group and its moderator, who also applies years of experience and instinct to interpreting the often opaque feelings of consumers.
I have a certain amount of sympathy for this argument, particularly after listening to the hours of hot air expended by pundits over the past few (many?) months. I begin to have sympathy for the marketing managers who have to listen to countless presentations of findings from focus groups.
What is more, election night provided another example of the triumph of big data. I was able to read a table this morning that gave average wait times at the polls in different states. The data was the product of an analysis of all the Tweets yesterday. I certainly could not have done that, accurately or not, with focus groups. I am not certain I could have deployed a traditional survey to yield information so quickly.
But, does this all provide a hint of the demise of focus groups and skilled moderators? I don’t think so.
In the first place, not all pundits failed in their prediction of the election outcome. A scoring of the punditry revealed that left-leaning pundits were remarkably accurate. In fact, there were a few with better accuracy than Silver. Right-leaning pundits? Well, most were considerably wide of the mark. When I talk to consumers in a qualitative setting and bring my expertise and instincts to bear upon the comments, I believe I am being objective, as objective as I can be. That objectivity results, I believe, in reliable insights.
Silver’s much praised accuracy has limitations. He predicted the outcome of this specific election. He was asked to predict very specific and well-understood behavior taking place at a specific time. Rarely, as a focus group moderator have I had to answer so circumscribed a question. Rather, I am asked to develop hypotheses about the reactions of consumers in a range of possible futures. What are the attitudes and emotions of consumers that tell me how they might respond to a new entrant in a category? A new service they have never seen before? A message about an unheard of benefit of a well-known product?
Focus groups, conducted by sensitive, experienced analysts, can provide this kind of direction to marketers. And, they will for the foreseeable future.
November 8th, 2012
By Walt Dickie, Executive Vice President
Tuesday’s election is being hailed as “The Triumph of the Nerds.” Barack Obama won the presidential election, but Nate Silver won the war over how we understand the world.
The traditional pundits were on TV, in the papers and blogs, interpreting what they were hearing and feeling. Peggy Noonan:
“Something old is roaring back.” … “people (on a Romney rope line) wouldn’t let go of my hand” … “something is moving with evangelicals … quiet, unreported and spreading” … “the Republicans have the passion now, the enthusiasm. … In Florida a few weeks ago I saw Romney signs, not Obama ones. From Ohio I hear the same. From tony Northwest Washington, D.C., I hear the same.”
On the other side were the Moneyball data nerds, with Nate Silver carrying their standard:
“Among 12 national polls published on Monday, Mr. Obama led by an average of 1.6 percentage points. Perhaps more important is the trend in the surveys. On average, Mr. Obama gained 1.5 percentage points from the prior edition of the same polls, improving his standing in nine of the surveys while losing ground in just one. … Because these surveys had large sample sizes, the trend is both statistically and practically meaningful.”
The morning after, Paul Bradshaw posted that the US election was a wake-up call for data-illiterate journalists – the pundits – who “evaluate, filter, and order (information) through the rather ineffable quality alternatively known as ‘news judgment,’ ‘news sense,’ or ‘savvy.’”
Bradshaw, and the blogger Mark Coddington, whom he quotes, look beyond the question of which camp “won” or “lost” the election, and see an epistemological revolution in reporting the news:
Silver’s process — his epistemology — is almost exactly the opposite of (traditional punditry): “Where political journalists’ information is privileged, his is public, coming from poll results that all the rest of us see, too. Where political journalists’ information is evaluated through a subjective and nebulous professional/cultural sense of judgment, his evaluation is systematic and scientifically based. It involves judgment, too, but because it’s based in a scientific process, we can trace how he applied that judgment to reach his conclusions.”
But this blog post is about marketing research, not journalism, although as I’ve argued before the two fields have a lot in common.
When I read Bradshaw yesterday morning, I could hardly help re-writing his observations, since he could easily have been talking about the traditional approach to qualitative analysis. Here’s my re-write:
Qualitative analysts get access to information directly from consumers, then evaluate, filter, and order it through their “judgment,” “sense,” or “savvy.” This is how qualitative analysts say to their clients (and to themselves), “This is why you can trust what we say we know – because we found it out through this process.”
Journalistic intuition suffered a severe blow Tuesday, though I doubt it will prove fatal. The data-free intuition of focus group moderators is getting hammered by Silver-esque data-driven analysis, but it hasn’t succumbed yet, either.
Still, I have to wonder if the writing is on the wall. I’ll leave the last word to Bradshaw:
Journalists who professed to be political experts were shown to be well connected, well-informed perhaps, but – on the thing that ultimately decided the result: how people were planning to vote – not well educated. They were left reporting opinions, while Nate Silver and others reported research.
October 26th, 2012
By Walt Dickie, Senior Vice President
If you’ve ever read anything about SETI, the search for extraterrestrial intelligence, you’ve probably heard of the Drake Equation—well, you might have heard about it on Star Trek or The Big Bang Theory instead! Formulated by the astronomer Frank Drake, it estimates the number of communicative civilizations in our galaxy and is one of the foundations for the whole enterprise. SETI came into being when Drake, whose Project Ozma was the first systematic search for alien radio signals, addressed a meeting at the National Radio Astronomy Observatory in Green Bank in 1961 and offered an estimate of the odds that the project could succeed. The Drake Equation is justly famous as one of the first attempts to put the possible existence of little green men into mathematical perspective.
I was in high school when I first heard about SETI and the Drake Equation. For no good reason, I’ve often found myself musing about it, which is why it recently popped into my mind again when one of my partners was getting ready to moderate a discussion at an MR conference.
The conference, like most MR conferences these days, was heavy on presentations about New Methods and The Future of Marketing Research. Needless to say, not a few of the conference presenters agreed with almost every blogger in the industry that Major Change is Just Around the Corner and that the Future For Every Established MR Company Is Bleak.
My partner’s role at the conference was to moderate a discussion between the audience and one of the keynote speakers, and he was preparing by reviewing the speaker’s presentation, a copy of which he’d sent me. We were exchanging emails about the issues in the talk and the potential questions they raised when it occurred to me that MR needed its own version of the Drake Equation.
Just as the early SETI community needed some estimate of the likelihood of contacting alien life, the current MR community needs an estimate of the likelihood of being superseded by new research technology. It’s almost all we talk about these days, and, being mathematically inclined folks, we deserve a calculation of our odds.
So here, with apologies to Frank Drake, I propose the MR version of the Drake Equation as a contribution to SSRT, the Search for Superior Research Technology.
The Drake Equation
N = R* × fp × ne × fℓ × fi × fc × L
SETI: N = the number of civilizations in our galaxy with which communication might be possible
SSRT: N = The current number of competitors capable of putting your company out of business
| Factor | SETI (Drake’s definition) | SSRT (my definition) |
|---|---|---|
| R* | The average annual rate of star formation in our galaxy | The average annual rate at which significant new approaches for understanding some aspect of human behavior or thought appear |
| fp | The fraction of those stars that have planets | The fraction of those approaches that are applicable to the design/delivery/communication of consumer products or services |
| ne | The average number of planets that can potentially support life, per star that has planets | The fraction of the above that depend on data/inputs that can be collected using conceivable/deliverable/socially and ethically acceptable technology |
| fℓ | The fraction of the above that actually go on to develop life at some point | The fraction of the above that are commercialized at some point |
| fi | The fraction of the above that actually go on to develop intelligent life | The fraction of the above that are faster and/or less expensive than your company’s offerings |
| fc | The fraction of civilizations that develop a technology that releases detectable signs of their existence into space | The fraction of the above that will produce results that are more actionable/effective/predictive than your company’s offerings |
| L | The length of time for which such civilizations release detectable signals into space | The length of time that those approaches will be seen as valid/interesting/relevant as the basis for commercial products/services |
Some Comments on my Estimates:
R* I think my estimate of truly unique, significant new approaches coming along every two years may be generous. Not that new approaches or technologies don’t appear more often than that, but most of these are minor wrinkles on existing approaches – better ways to do something that’s already being done. If you’re a technological optimist you might go for more frequent discoveries. I think that one a year would be an optimistic estimate, but feel free to enter your own.
fp Almost anything – maybe not quite anything – that’s applicable to humans is applicable in some non-trivial way to consumer products and services. Optimistic estimate: .99
ne On the other hand, some of the new things that come along require approaches that either aren’t technically feasible, at least at scale, within any foreseeable future or would never pass a social/ethical or possibly legal challenge. I’ll allow that almost any technical obstacle can be overcome, but I’m not so sure about the social issues. I’d hesitate to make this a certainty under even the most optimistic scenario. Optimistic estimate: .9
fℓ With the rise of crowdsourcing augmenting venture capital and other ways to finance new business ventures, and entrepreneurial enthusiasm apparently being boundless, pretty much everything that can be commercialized will be, at some point. Optimistic estimate: 1
fi Not everything is quick and not everything can be made cheap. I scored this one a toss-up. If you believe that advances in technology will eventually bring down any conceivable cost, your optimistic estimate would be 1.
fc On the other hand, a lot of current technologies have already been pretty much optimized and aren’t advancing anymore, so something really new has a fair chance of bringing new insights to the party. That’s 2:1 in favor of the new stuff in my book. But it’s possible to reason that any legitimate theoretical advance will inevitably produce some significant new insight. Optimistic estimate: 1
L There is no useful data on the rate at which new basic approaches appear in the sciences or technology. (This question might be phrased in terms of the frequency of paradigm shifts, about which there is no consensus.) I went with my gut feeling that after about 25 years paradigms begin to be seen as played out. I think that an optimistic estimate might be 2-4 times longer than this.
Using my estimates, there are probably 2-3 technologies capable of putting your company out of business by undercutting the approaches your business is based on; using what I think are the most optimistic (pessimistic?) estimates for every variable in the SSRT Equation, there are somewhere between about 45 and 90.
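The SSRT arithmetic is just the product of the factors in the table; a minimal sketch in Python, plugging in values loosely reconstructed from my comments above (mostly the “optimistic” figures) — these particular numbers are one illustrative reading, not a definitive set of inputs:

```python
# SSRT sketch: N is the straight product of the seven factors.
# Values below are loosely read from the comments in this post; they are
# illustrative assumptions, not authoritative estimates.
factors = {
    "R*": 0.5,    # one significant new approach roughly every two years
    "fp": 0.99,   # fraction applicable to consumer products/services
    "ne": 0.9,    # fraction with feasible, socially acceptable data collection
    "fl": 1.0,    # fraction commercialized at some point
    "fi": 0.5,    # fraction faster and/or cheaper (scored a toss-up)
    "fc": 2 / 3,  # fraction more actionable/predictive (2:1 for the new stuff)
    "L": 25,      # years before the approach is seen as played out
}

N = 1.0
for value in factors.values():
    N *= value

print(f"N = {N:.1f} potential company-killing technologies")
```

With these inputs the product comes out a little under four, in the same neighborhood as the “2-3” figure above; swapping in your own values for any factor changes N proportionally.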
By the way, although Drake’s original values yield an estimate of 10 advanced civilizations in the galaxy, the consensus estimates developed at the first SETI conference yielded something between 1,000 and 100,000,000. I wonder if a reasonable argument can be made using both the SETI and SSRT versions of the equation together that not only are there plenty of companies capable of putting yours out of business, some of them are or will be run by aliens.
October 8th, 2012
By Bob Relihan, Senior Vice President
Hardly a day passes when an article about the fallibility of marketing research does not cross my desk. The latest is from Forbes, and I was compelled to read it by its somewhat incendiary title, “Why So Much Market Research Sucks.” Roger Dooley makes the typical arguments, although they are couched in the context of the strengths of neuromarketing. You have heard them. Consumers can’t explain why they do or prefer anything. They rationalize; they can look only backward, not forward.
There are always horror stories of findings that made little sense but were “followed” nonetheless. These problems lie not with the consumers or the “research,” but with the marketers and the researchers. They want the research to tell them what to do; they don’t want to use it to stimulate their thinking and help them make good decisions.
Dooley says surveys are fine for simple behavioral questions and little else. “If you want to get the real story on the behavior of your customers, readers, etc., don’t rely on self-reported data. While such data can be fine for simple facts, like, ‘Did you eat breakfast today?’ It will rarely answer questions like, ‘Why do you prefer Grey Goose vodka?’” But, the fact is it will.
A good researcher and listener can ask a Grey Goose partisan, “Tell me everything you can about vodka and drinking vodka.” And, she will respond with a laundry list of associations. The researcher will ask the same question of a Belvedere fan and generate another list. From the differences between those two lists (really, a sufficient number of similar lists) the savvy researcher will be able to infer why Grey Goose drinkers prefer that brand. Research, after all, is as much interpretation and analysis as it is data.
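The inference method here is essentially a contrast of association lists; a toy sketch in Python, with made-up associations that are purely illustrative (not real research data):

```python
# Illustrative only: associations each camp might volunteer when asked to
# "tell me everything you can about vodka." The words are invented.
grey_goose = {"smooth", "premium", "French", "gift-worthy", "celebration"}
belvedere = {"smooth", "Polish", "authentic", "connoisseur", "understated"}

# Associations unique to each camp hint at why its drinkers prefer the brand;
# shared associations are table stakes for the category.
distinctly_grey_goose = grey_goose - belvedere
distinctly_belvedere = belvedere - grey_goose
shared = grey_goose & belvedere

print("Grey Goose drivers:", sorted(distinctly_grey_goose))
print("Belvedere drivers:", sorted(distinctly_belvedere))
print("Category table stakes:", sorted(shared))
```

The set differences are only the starting point, of course; the interpretation of what those differences mean is where the researcher earns her keep.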
As this sanctimonious defense of traditional research was forming in my mind, a conflicting vision entered. We have all seen the new IBM “Smarter Marketing” ads targeting CMOs with the message of customer analytics. The “customer is the new boss.” Through comments, social networks, and reviews, she tells the company “what to make, what it’s made from, how it’s shipped, and the way it’s sold.” The customer has immediate impact on the company, and the company can respond in near real-time.
Now, this throws the model of traditional research into disarray because that paradigm is built on time. From the “Market Research Sucks” perspective, consumers cannot predict what they will do in the future based on their rationalizations of past behavior. From my ideal perspective, a sensitive analyst can infer future behavior based upon consumers’ descriptions of categories and past behavior.
Fine. But, what if a marketer is able to offer a consumer a red dress right after she tweets that she just “loves red dresses”? Her tweet may be just as much a rationalization as it would be had she told it to an interviewer, but in the moment it has emotional weight and validity. Six months hence, she might say “the dress isn’t for me,” but in the moment it feels like the marketer is talking right to her, and the dress is just what she wanted.
And, no more need for research to mediate time.
So, in this new world of immediate communication between marketers and customers, market research will still have a role in explaining the big trends, the big social movements. It will provide strategic insight that can drive large-scale planning. But, on the day-to-day tactical level, it may simply be the conduit of the marketer/consumer conversation, aggregating and synthesizing all that is said.
September 27th, 2012
By Bob Relihan, Senior Vice President
If you want to understand consumers, you have to know how they communicate. Pew has just released a report that is another bit of evidence that people are communicating more fluidly and less linearly. In other words, writing is being displaced, at least partly, by non-verbal means.
Pew finds that 46% of internet users post original photos or videos online and that 41% post photos or videos they find elsewhere on the internet. A majority, 56%, do one or the other, and a third of internet users do both. To be sure, some of this activity is no different than the sharing of vacation photos that has gone on since the first Brownie. But the ubiquity and frequency of photo sharing makes it a normal and expected form of behavior and communication.
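The Pew figures hang together under simple inclusion-exclusion; a quick sanity check (my arithmetic, not Pew’s):

```python
# Percentages of internet users, from the Pew figures cited above.
post_original = 46   # post original photos or videos
repost_found = 41    # post photos or videos found elsewhere
either = 56          # do one or the other

# By inclusion-exclusion: |A or B| = |A| + |B| - |A and B|
both = post_original + repost_found - either
print(f"{both}% do both")  # close to the "a third" Pew reports
```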
When my niece posts a picture on Facebook of a dog and a cat sleeping together, she is certainly saying, “Look at this; aren’t they cute?” But by displaying that picture publicly where all her friends can see it, she has created a badge. The picture speaks to her feelings, beliefs, and values. Moreover, she apparently feels no need to explain the values communicated in the picture. I am guessing that she thinks they are self-evident. I am also guessing that she actually could not explain them fully.
The more people create visual badges for themselves on Facebook, Pinterest, Tumblr, and the like, the less willing and able they will be to articulate the meanings and values those badges express.
This trend has profound implications for those of us who wish to understand what consumers communicate.
- If we wish to engage consumers and provide them with an opportunity to express what they believe or feel about their lives and our products, we will need to provide them with a space to express themselves visually. Simply asking questions with room for either structured or unstructured responses will not be sufficient.
- Visual communications will be the “new normal.” Those of us, and I am one, who have tacked projective exercises onto our group interviews in an effort to “dig deeper” will need to recognize that these visual activities may well be the first and only shovels available. They are not extra; they are central.
- And, if consumers are communicating visually rather than verbally, we need to understand the meaning of the different badges and images they use. The more consumers use these images, the more these meanings will be unique and less susceptible to being “translated” into conventional language. If I want to explain to you what my niece is thinking, my only means may well be showing you that picture of the dog and cat.
This will be a new world of research, and I am looking forward to engaging it.