March 29th, 2013
By Walt Dickie
Most marketing researchers are like sports fans: they’re not content to watch the game and see that Team A beat Team B last weekend. They need to work out a story about the strengths and weaknesses of the teams and develop a prediction about what that implies for next week’s games and the teams’ season performances.
I think this personality quirk explains at least some of the current obsession about the industry’s future. Some days it seems like every MR person in the universe is so busy reading and writing blog posts, making and listening to conference presentations, and sniffing out the next clue to the “Next Big Thing” or the “End of MR as We Know It” that no one is getting any paid work done. On this charge, by the way, I’m as guilty as the next guy.
But I’m conscious of feeling a growing antsiness about all of this – not because I think we’ve all gone overboard and should stop, but because I’m increasingly aware that the theory driving most of the blogging, conferencing, and sniffing is way too simplistic.
Here’s how I’d summarize the general approach behind most of the MR blogging I read (and some I’ve written):
There is a new technology/analytic approach/data gathering technique that has produced some very impressive results. A small number of innovative suppliers and their forward-thinking clients have been quietly developing this for a while now, and their efforts are beginning to move beyond the experimental stage. Real tools and platforms are starting to appear, and the leading edge tools and platforms are now coming out of beta and attracting both more investment and more client interest. The buzz is growing among clients/suppliers that this new approach could very well revolutionize/eliminate marketing research as we know it …
What is actually driving this model? What is the underlying mechanism that’s being proposed to drive the model from “some very impressive results” to “revolutionize/eliminate marketing research?”
As anyone who has ever tried to manage something from the “very impressive results” stage to an actual ongoing business – much less a business capable of overthrowing an industry – will tell you, early success doesn’t come close to guaranteeing sustainability. A theory that can be reduced to neophilia – Look! A new thing! New things rule! – just isn’t adequate.
So what would a more complete theory of change in marketing research look like?
A good place to start with many questions like this one is to ask, “What defines marketing research as a distinctive activity and business?” I’ve thought about this question a lot over the years, and here’s the best answer I’ve come up with:
“MR is research conducted among the consumers/businesses that purchase or may potentially purchase a product/service, and is carried out to support a corporate marketing decision-making process.”
There are two important points in this definition:
- It locates MR as a business between a paying client on the one hand and a customer base on the other. MR plays the role of middleman communicating between these two poles when the client is engaged in making decisions about the customer market.
- It distinguishes MR from other kinds of commercial research by specifying the particular institutional role it serves: the client’s marketing function. Other kinds of commercial research, even if they involve the client’s customer base, fall under some other “research” umbrella, not MR.
The usefulness of this approach is that it gives us leverage to predict where the drivers of change in MR are likely to be found: consumer communication and corporate decision making.
I’ve long argued that MR should be understood as being driven by communications – you can go so far as to say that at least the data collection side of the business can be best understood by recasting it in communications terms. How do we communicate with consumers in a way that will enable them to communicate their perceptions, opinions, beliefs, and behaviors so that we can produce, and subsequently communicate, useful analyses and guidance for our clients?
Not only the way we communicate with consumers, but also the content of those communications are determined by the mechanisms, technologies, and norms of communication in the culture. This is one of the drivers of change: we follow as consumers drop telephone and print and take up the web, email, texting/messaging, and social networking.
The same process also drives what we can communicate about: as consumers expand the sphere across which they are comfortable communicating and sharing personal information, they become more comfortable sharing those things with us. This is one way to understand some of the hesitancy we encounter from time to time.
Corporate Decision Support
In most large successful companies, the process of making marketing decisions is not free-form, intuitive, or invented on the fly. Most companies have expended a significant amount of thought and effort developing protocols for making decisions: fixed, well-defined procedures to be followed, questions to be asked, tests to be performed, information to be obtained, and criteria to be met.
Corporate marketing decisions can be seen as being made at two levels: a “higher,” more general level that concerns markets as a whole – their composition, structure, dynamics, and prospects – and a “lower,” more specific level that concerns the marketing of specific products/services within markets. “Higher level” research generally supports broad, strategic decisions; “lower level” research generally supports narrower, more tactical decisions – although this distinction is not always sharply drawn. In order to easily differentiate between the two levels, I’ll refer to these as “market level” research and “customer level” research in what follows.
These protocols are embodied in standard operating procedures, and are followed as a matter of course. Client teams negotiate exactly how to apply these protocols to the specific issues at hand, what corporate resources to expend in following them for a specific case, and how to fit that process into larger timelines. But they don’t question or re-invent the protocols themselves as a normal part of their business responsibilities.
Predicting Change in MR
Considering the different contributions of communication norms and decision protocols, and the way these two key drivers evolve, leads to what may be a somewhat startling conclusion:
- The aspects of marketing research driven by the cultural norms of communication – primarily the data collection side of the business – will change regularly and, possibly, rapidly in periods of rapid changes in communication. Such changes will be heavily driven by the processes of development and adoption of new technology.
- The aspects of marketing research driven by decision-making protocols – client RFPs, many aspects of study design, analysis, and reporting – will tend to change slowly because corporate decision-making standards are created by corporations to interlock with the larger corporate management environment of which marketing is but a small part.
In other words, rapidly changing new communications technology will be put to the service of unchanging corporate needs for some unspecified, but significant period of time. Put more fancifully and colorfully: even if quantum entanglement hyper-neurology is adopted as a data gathering technique, the MR industry will respond by adapting traditional project designs to use quantum entanglement hyper-neurological methods rather than explore radically different approaches made possible by QEHN as long as client decision-making protocols demand the same kind of output – QEHN will be used to evaluate large attribute grids with 10-point scales.
February 14th, 2013
By Shaili Bhatt, Senior Analyst
In this era of over-sharing, curated storytelling is imperative. While many of us own and use smartphones as cameras, it’s a challenge to remember to do something with the pictures and videos that we capture with these devices.
Many of us love to capture pictures and videos on our phones, and more often than not, we try to publish the most irresistible moments in our social galleries. (There’s an undeniable sense of accomplishment when I can share my stories and memories—perhaps you can relate.)
This is no different for our market research projects. If anything, it’s even more important to share key pictures and videos from our projects to help tell the story across clients’ reports and presentations.
Whether we want to make an album or an elaborate scrapbook, float it all in the Cloud on Facebook, Twitter, or Instagram, or deliver an outstanding report or presentation that really gets to the heart of the story with consumers and clients – most DIY options are overwhelming or expensive, and for some, they can feel like a chore.
It is time to rethink the way we share mobile pictures and videos—and consume other mobile media—and look for tech-savvy time-savers.
Recently, I was excited to run across a bunch of new, free iPhone apps with robust movie-making and storytelling capabilities: Qwiki, Givit, Magisto, Viddy and Splice. These tools allow anyone with an iPhone to turn pictures and videos into a brief movie that you can share in minutes! On my way home last night, I snapped some pictures and videos of advertising paraphernalia that I used to create this video collage (click image for link).
Again, I got that research-geek thrill as I uncovered how these tools could potentially benefit all areas of market research:
- “On-demand” Movie/Video Collage Activity
Our participants are able to capture so many timely moments for us on a variety of mobile devices (smartphones, iPads, digital cameras, you name it!). It’s really up to us to deliver a system that organizes and focuses all of this data.
Qwiki is my favorite app of the bunch, as it automates the picture/video selection process into a one-click movie. The app works by automatically stitching together media from the iPhone camera roll and creates a 30-second to one-minute mashup from a certain day or album.
Songs for a Qwiki can be selected from the phone’s music library or from “soundtracks” preloaded in the app. Perfectionists and those of us with additional interest can easily play around with media configurations and change the audio track, which gives us an even deeper look into the mood of the visuals.
Indeed, video collages and movie mashups can bring impressive creativity and flexibility to qualitative research.
- Introductions/Warm-up: share a brief video collage or slideshow of their family, interests, typical day
- Homework or “On-demand” Movie/Video Collage Activity: create video collages and movie mash-ups from experiences captured on their phones
- Reporting: integrate a video collage (that we’ve created) of participants’ key pictures and videos for a quick debrief/recap
- Presentations: show a similar video collage to get the discussion started, bringing the project to life!
The seamlessness of such videos adds richness and energy to our stories in all of the ways above, as well as in any others that we can dream up.
The days of creative departments and contracting with a video editor are by no means “over.” These apps lack the ability to output a professionally designed highlight reel with exact precision, multiple formats, or perfect image resolution, and their audio effects and capabilities are limited, at best.
Still, these new apps are remarkably efficient, and we’ve certainly found another cool solution to elevate our perceptions of the smarts in our smartphones! Telling a great story is even easier as we ramp up our use of video collages—and make them faster, better and cheaper with today’s mobile capabilities.
What are your suggestions and feedback around these new apps? How are you integrating videos into your current market research efforts? Post a comment to share what you think!
January 9th, 2013
By Bob Relihan, Senior Vice President
Walt Dickie has done a very nice job of knitting together the trends in the adoption of various electronic devices. Certainly PCs are flattening out and will eventually decline. And, I agree there will be a time when virtually every cell phone is a smartphone. Walt also plots a curve that predicts exponential growth in the tablet/e-reader market, but backs off from the implications. “I’ve gone with a growth curve that can’t be right in the long term – it has to flatten out – but might be okay in the short term.”
I am not so sure.
Now, it is likely true that no one but a few high flyers and the tech obsessed (as well as those involved in illicit activities) will ever have more than one smartphone. The device, after all, is tied to one’s personal phone number. But, the same constraining logic does not apply to tablets and e-readers, particularly when they merge into one category with vaguely similar features and price points ranging from $79 to over $800.
So, in the next few years, as more and more users acquire new tablets with better features and still have serviceable old ones on hand, it is easy to imagine a home with a first generation Kindle by the bedside, an older iPad on the kitchen table for reading the news and checking the weather in the morning, and the latest tablet sitting on the coffee table in front of the television. Will there be a television? Why carry a tablet with you? You can have one wherever you turn.
There was a time when the household was dominated by one large console television in the living room. Over the years conducting focus groups, I have asked in passing, “So, how many TVs do you have?” Seven is no longer an uncommon answer…in a household of two. A future with a tablet in every room is not that far fetched.
By Walt Dickie, Executive Vice President
I love the Pew Research Center, especially their Internet & American Life Project. I can always find something interesting to think about on their website, and I admire their invariably solid methods. We use Pew data to make strategic decisions, but we also go to Pew for inspiration when imagining future scenarios.
A recent Pew post, including data through September 2012 and based on a long-running tracking study plus a more recent update on smartphone ownership, brings together information about consumer ownership of desktop computers, laptops, cell phones, smartphones, and tablets among U.S. adults.
Some of the most important parts of the dataset are still a bit sketchy. Not because Pew didn’t do their usual excellent job collecting it, but because the number of data points is still pretty limited. In the spirit of fooling around with numbers and the informality of blogging, I decided to analyze this data to generate some hypotheses about smartphone and tablet adoption. The analysis that follows plays somewhat fast and loose–extrapolating trends beyond the range of the data and basing these trends on an inadequate number of data points. This sort of thing is fun and may be stimulating; it is not conclusive, nor is it meant to be.
Here’s a selection from Pew’s device ownership data plotted together on a single graph:
Desktops + Laptops (“PCs”)
I aggregated Pew’s data on desktop and laptop ownership to create this curve for traditional “personal computers,” which goes above 100% because it’s quite possible for someone to own one or more of each species. If you look at the original Pew data for laptops and desktops separately (not shown here), you’ll see that laptop sales are still rising while desktop sales are dropping precipitously. The curve shown represents a leveling-off of PC ownership at somewhere between 1.1 and 1.2 PCs per household, which the Pew data suggests will be mostly laptops as desktops die and are not replaced.
Cell Phones

I feel reasonably okay with fitting this flattening curve to the cell phone data, and although projecting curves forward in time like this always gives me the willies, the result looks at least somewhat plausible. Cell phone ownership is clearly flattening out, with something like 15% of the population being reported as doing without. Cell ownership will probably never hit 100% – landline phones never did – but it will certainly get further into the 80s, and maybe even into the 90s, as landlines pretty much fade away and all the people who grew up in the pre-cell phone era disappear. Of all the curves on this graph, I think this one may be the most realistic.
Smartphones

Fitting a curve to the four data points on smartphone ownership is clearly beyond the pale, but having decided to work with what we’ve got, why not do it anyway? An exponential growth curve may capture what, by all accounts, has been a startling adoption rate, but of course no trend, even a really powerful one, will continue into the future without slowing – probably more noticeably than is predicted here. Smartphone ownership seems to have paused in the middle of this year, but with the phenomenal sales figures being reported for the iPhone 5, maybe predicting penetration to continue its strong growth isn’t so bad, at least in the short term. In any case, I wanted an aggressive scenario to explore the impact of smartphone growth, and that’s what this curve represents.
Tablets and e-Readers

Not many data points (5) here either, but tablet adoption sure sounds like it’s accelerating according to the news reports, and an exponential curve fits the existing data almost perfectly. I’m writing this immediately in the wake of “Cyber Monday,” and the news outlets and blogosphere are reporting tales of tablet frenzy following the debut of the iPad mini and the Amazon Fire. Again, whether exponential growth will be sustained is questionable, but it doesn’t seem wildly off to estimate that we’ll experience that kind of growth in the short term.
Interestingly, the curve of e-Reader ownership shows an almost identical rate and pattern of growth, which raises the question of whether these are one species or two. Although tablets and readers started out as separate species, it’s hard to imagine how they continue to evolve without merging into a functional/price continuum competing in a single market, even if some of them continue to be distinguished by very different screen technologies adapting them for use under different lighting levels. If the two combine into a single product line the case for exponential growth may be strengthened.
So, again, with some basis in statistics and observation, but mostly because I want to create a best-case scenario for new device adoption, I’ve gone with a growth curve that can’t be right in the long term – it has to flatten out – but might be okay in the short term.
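For readers who want to fool around with the same kind of numbers, here is a minimal sketch of the fit described above – a least-squares exponential fit to a handful of adoption readings, extrapolated a few years out. The data points are illustrative stand-ins, not Pew’s published figures.

```python
import numpy as np

# Illustrative tablet-ownership readings (fraction of US adults) at
# half-year intervals -- stand-ins for the handful of real data points,
# not Pew's actual published figures.
t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])   # years since first reading
y = np.array([0.04, 0.06, 0.10, 0.16, 0.25])

# Fit y = a * exp(b * t) via a linear least-squares fit on log(y).
# np.polyfit returns the highest-degree coefficient first.
b, log_a = np.polyfit(t, np.log(y), 1)
a = np.exp(log_a)

def project(years):
    """Naive exponential extrapolation: fine short-term, wrong long-term."""
    return a * np.exp(b * years)

print(round(project(3.0), 2))  # → 0.64, i.e. ~64% ownership three years out
```

Pointed at the real Pew numbers, the same two fitting lines reproduce the exercise in the post; swapping in a logistic curve is the obvious next step once growth starts to flatten.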
Using some obviously bogus methods but maintaining at least a nodding relationship to “reality,” I hereby predict that smartphones will essentially eliminate “feature phones” from the cell phone marketplace in about 3 years, at which point everyone who owns a cell phone—something on the order of 85% of US adults—may own a smartphone. I further predict that in about 1 year ownership of tablets in the US will equal ownership of traditional PCs, with most households owning one or more mini/maxi/reader “tablets.”
All of which means that MR has a very short window for adapting its data collection methods from a PC-centric paradigm to one centered on smartphones and tablets.
This will be one of the biggest challenges in our immediate future for both C+R and the MR industry as a whole. We simply have to get “mobile” right. As long as our clients demand data-driven insights, we’re going to depend on consumers being willing to share their perceptions and opinions with us, and we depend on technological means to collect that data.
Looking back, the conversion from phone/mall/mail survey data collection to online methods at the end of the 90s seemed like a major revolution that upended almost everything and rang in a new era. But, in hindsight, although the mechanisms of research changed a lot during that period, what now seems to stand out is that the basic paradigm of question-and-answer surveys changed very little. Other than porting surveys from “CRT terminals” in the phone room to PC screens in the nation’s family rooms, dens, bedrooms, and kitchens, the underlying form of the survey hardly changed at all. Surveys grew images, videos, and Flash widgets but the great majority of MR surveys created today for online administration could be ported (back) to the phone room quickly and with ease.
But mobile is going to be different. Cellphone use is dominated by short interactions while few online surveys take less than 20 minutes (and many take more). Today’s surveys, though they can be taken using a mobile device, are a miserable experience because the industry mindset is still fixated on the PC. (My friend and colleague, Bob Relihan, says that the most miserable experience on a cell phone is trying to fill out a web form, and online surveys are composed of one web form after another.) A whole new survey paradigm will have to be invented to model a virtual “long survey” from a series of very short interactions on a mobile device or something resembling a Google Survey. And the sampling industry is going to have to reinvent itself once again.
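To make the idea concrete, here is a minimal sketch of what a “virtual long survey” engine might look like: a full questionnaire chunked into short mobile sessions, with answers persisted between them. Every name in this sketch is hypothetical – it is a thought experiment, not an existing platform or API.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualSurvey:
    """A long questionnaire served as a series of short mobile sessions."""
    questions: list                      # full questionnaire, in order
    chunk_size: int = 3                  # questions per mobile session
    answers: dict = field(default_factory=dict)

    def next_session(self):
        """Return the next short batch of unanswered questions."""
        remaining = [q for q in self.questions if q not in self.answers]
        return remaining[:self.chunk_size]

    def record(self, answered: dict):
        """Persist a session's answers so the survey can resume later."""
        self.answers.update(answered)

    @property
    def complete(self):
        return len(self.answers) == len(self.questions)

survey = VirtualSurvey(questions=[f"Q{i}" for i in range(1, 8)])
survey.record({q: "yes" for q in survey.next_session()})  # session 1: Q1-Q3
print(survey.next_session())  # → ['Q4', 'Q5', 'Q6']
```

The interesting design problems all live outside this sketch: how to keep context between sessions, how to handle respondents who never come back, and how to sample when completion is spread over days rather than minutes.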
The “PC revolution” (the 80s) and the “Web revolution” (the 90s) are going to give way to the “Mobile revolution.” The question is “When?” and the answer could easily be, “Very soon!”
December 19th, 2012
By Walt Dickie, Senior Vice President
(If you haven’t already, be sure to read part 1 to this blog post.)
I’m assuming that if you’re reading this you’re probably a marketing researcher. If that’s the case, you probably have access to a bunch of survey questionnaires. Look on your hard drive, or reach into your file drawer if you’re old school, and take out three or four. Look them over and back up far enough so you’re looking at their general structure rather than the details. What do you see? I’ll bet they all share a similar approach:
- First, you almost certainly see a bunch of screening questions designed to eliminate all but a pre-defined target group from taking the main survey. This makes sense because you pay for sample, pay incentives, and pay for resources to administer, manage, and analyze the survey and you don’t want to waste resources.
- Next your surveys probably have a section or two devoted to category-level questions, so you can profile results based on pre-established dimensions of opinions about and usage within whatever product/service category the survey focuses on. Again, this makes sense. It establishes context and in many cases may allow you to compare your findings to previous work.
- Then your surveys probably show or reveal something in order to get respondents’ reactions to it. Purchase interest. Likes and dislikes. Batteries of questions dissecting emotional reactions. Maybe some open-ended questions to lay bare the underpinnings of those emotions. Maybe you’ll go around a couple of times on the reveal/react carousel. Or dissect the combinatorial possibilities with some conjoint or discrete choice questions. All sensible choices. This is the meat of the survey and you want to be able to mine it for key findings and “insights,” which are, after all, what the client is paying for.
- Finally, you’ll almost surely find a bunch of classification questions – again, quite a sensible thing to include for establishing context and tying your work back to the client’s tracking data and historical survey archive.
Everything in your surveys is sensible, serious, and workmanlike.
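The four-part structure above is uniform enough to write down as a data structure. The sketch below is illustrative only – the section names follow the post, but the individual questions are invented for the example.

```python
# The canonical four-part survey skeleton: screener, category-level
# questions, stimulus reactions, classification. Question names are
# hypothetical placeholders, not taken from any real questionnaire.
SURVEY_SKELETON = [
    ("screener",       ["age", "category_usage", "past_participation"]),
    ("category",       ["brand_awareness", "usage_frequency", "attitudes"]),
    ("stimulus",       ["purchase_interest", "likes_dislikes", "emotional_battery"]),
    ("classification", ["income", "household_size", "region"]),
]

def qualifies(respondent, screener_questions):
    """Screeners gate the survey: any failed criterion terminates."""
    return all(respondent.get(q) for q in screener_questions)

sections = [name for name, _ in SURVEY_SKELETON]
print(sections)  # → ['screener', 'category', 'stimulus', 'classification']
```

The point of writing it out this way is how little changes from study to study: only the middle two sections vary, and even they vary within a narrow band.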
But back up and consider your surveys from the perspective of someone who, somehow, had never seen anything like them before. There’s really quite a lot about your respondents and what they do and think that your surveys make no mention of. Your surveys contain such a limited set of choices!
There is, for instance, no information at all about what their friends and family members think, or what they would say about either your product or your new idea. Nothing about what your respondents search for on Google or read online. Nothing about where they go every day as they travel around the places they live, or what they see or what they do there. Nothing about their inner lives: what they love, long for, dream about, pledge allegiance to, dislike, despise, or denigrate. Nothing about their pasts, really, or about their futures. And there’s nothing about them that they are not or cannot become conscious of and respond to in answer to your question. Not to mention the problem that all of your questions come in prefab form, so the issues they address are constrained to the issues the client already recognizes. And let’s not forget that there’s no information from anyone who fell outside of your carefully limited screening quotas. How can you be sure that there are no discoveries to be made among them?
You’ll object that you didn’t omit any of that because you aren’t interested in experimenting with it, or because you’d reject the possibility of finding value out of hand – you omitted it because you have a limited budget and only so much time to spend. The client won’t pay for experimentation and, besides, there’s a decision to be made. The client has a lot riding on the research and they’ve given a good deal of thought to the question of what they need to know to make that decision. You need and want to give them good value for their money.
I’d like to suggest that this is a powerful example of what drives bubbles and echo chambers. Marketing research has evolved to serve structured corporate decision making and that process has evolved to demand the inputs that marketing research provides.
Corporate decision making and marketing research have coevolved the MR survey with a closed set of constructs and almost ritualistic format. Unfortunately, the evolution of surveys has been driven by the imperatives of the Red Queen – running faster and faster to stay in the same place, becoming an ever-closer fit with the needs of corporate decision making, budgets, and timelines. Each step has not been determined by the possibilities of utilizing new data sources, analytic approaches, or even accurately predicting future behavior, but by the ever-closer interlocking of the decision process and the information machine that feeds it.
Like Republican pundits talking to Republican audiences and eventually creating a closed worldview that, among other things, mis-called the election, we’ve created a mutually evolved worldview that has satisfied us and our clients for many years, but, as the CEOs keep telling us, often seems to fail to predict the real world and doesn’t seem to be getting any better.
Will the appearance of data nerds bearing data on things like clickstreams, search histories, location, social influence, sentiment analysis, and facial expressions – complex real-world behavior and data that isn’t based on the ability to frame a conscious answer to a prefabricated question – be the beginning of the opening-up of that closed co-evolved world? Will MR break off the echoing conversation before our clients do? Will it take the equivalent of a public defeat on a national election stage to turn things around, or will MR put the genuine technical expertise, hard work, and ingenuity to work before we have to cobble together a last-minute concession speech?
December 17th, 2012
By Walt Dickie, Executive Vice President
I’ve been working in the MR industry since 1978, and the one unchanging theme of that long period has been the constant complaint by the senior corporate executives – who fund the industry by their demand for research-based decision making – that MR just isn’t very good at identifying opportunities or pointing to the new products, services, and ventures with the best chances of successfully capitalizing on them.
I remember being surprised when I first encountered a CEO denouncing the entire MR enterprise at a major conference – which probably happened sometime within my first or second year. Was my new, desperately needed first job after grad school, and maybe the whole industry, going to go down in flames before I even got started? It hasn’t, but the drumbeat of C-suite dissatisfaction has never lessened.
This all came to me as I was reading one of the many articles about the “nerdiest election,” in which Romney had not prepared a concession speech because – by all the accounts I’ve seen – he didn’t think he’d need one. It sounds like Romney really got blindsided. Not only did he not have a concession speech held in reserve, but he also planned to celebrate the election with a fireworks display over Boston Harbor, and his campaign even had a “Romney Wins” website up and ready. (Someone posted screen grabs, of course.)
As Slate’s John Dickerson said, “He got the numbers wrong … in the end Romney and Ryan had to watch CNN to find out how their campaign was doing.”
The blog posts and news stories lay blame on what David Frum labeled the “conservative entertainment complex.” And it looks like the Romney campaign’s “marketing research” group followed the media’s lead in asking the questions they wanted asked, hearing the answers they wanted to hear, and reinforcing an internal viewpoint that, in the end, failed as MR and left the CEO “gobsmacked” and angry.
I read all of this with the shock of recognition. My mind snapped back to 1985 and the introduction of New Coke. All the research that had been done! And, the magnitude of the disaster that followed! The astonishment of everyone inside Coca-Cola that their carefully constructed edifice had been built on sand. I had seen all of it and had enjoyed having the inside scoop thanks to two colleagues who had come from Coke’s MR department and understood the backstory, which had included a huge research effort.
The post-election stories seem to be revealing a distinctive, internal worldview shared by the campaign and its media supporters. Republican pollster Whit Ayres described the research as being driven by “rosy assumptions on a likely electorate…at…substantial variance with recent history.”
An “echo chamber” bubble developed when the campaign research staff and its clients elaborated a shared narrative about polling and sampling methods and how to interpret results. Outside that bubble, the academic social scientists and media stat nerds – with Nate Silver as their symbolic leader – were using different methods, asking different questions, and interpreting their findings differently.
They were right in the end, and the people in the bubble were wrong.
A case can be made that MR and our clients have created a similar bubble, in which we talk to each other in an echo chamber. And, a case can be made that the academic social scientists and techie stat nerds are “threatening” traditional MR with everything – including Big Data, social media and web based behavioral analytics, location data, remote facial analysis, and eye tracking – living in an entirely different world.
The GRIT survey, as well as reports coming out of consultancies like Cambiar (site registration required), says clearly that corporate research departments are eagerly welcoming all the new “non-MR” vendors knocking on their doors. What’s worrisome is what would happen if the CEOs started sending more work to the guys outside the MR world, and those guys started having some successes calling the game. That’s the scenario behind the voices that expect the leading “MR” vendors at the end of the decade to be companies like Google, Facebook, and Twitter.
Why didn’t all of these new approaches arise out of marketing research? At the very least, why weren’t traditional marketing research companies their earliest and most eager adopters? Why didn’t clients hear about all of these developments from their MR vendors? Is it because the conversation was “closed” and things like that simply had no place?
I’m not arguing that MR is operating in bad faith – only that they and their client audiences may have constructed a particular shared view of how research is commissioned and conducted that has become closed, limited, and overly rigid.
Marketing researchers are almost universally serious, sober people who see themselves as technical experts in a field that demands hard work, clear thinking, and ingenuity to provide vital information to decision makers, while often suffering from a lack of respect, diminished budgets, and constricting timelines. But make no mistake about it, marketing research has been working with the same basic approaches for at least a couple of generations. I made a stab at some of the qualitative issues in a previous post, so in part 2 of this post I will examine a few things about quantitative research.
December 10th, 2012
By Walt Dickie, Executive Vice President
I’ve been living in a sort of fugue state since the presidential election. I keep drifting into extended musing over the ways that the campaigns were acting out lessons for marketing researchers. I’m hoping that this will come to an end fairly soon; in fact, I think I can feel the effect beginning to wear off.
In the early stages of the rush, all of the lessons seemed to highlight weaknesses in either qualitative or quantitative MR. But now I’m getting more positive messages.
If you know where to look and how to spot them, you can see the influence of Operations in the Election Day stories. There seems to have been an Operations disaster, but I think I also see one of the rare cases, in my experience, where someone listened to Ops early on, took their advice, and nailed the project.
In the case of the ill-fated “Republican Party’s newest, unprecedented, and most technologically advanced plan to win the 2012 presidential election,” Project ORCA, the Ops guys apparently lost to the PR guys. I’m absolutely sure the campaign’s Operations guys insisted upon holding serious training sessions for the volunteers who would use the system, because I know there had to have been Ops geeks around and I know how Ops geeks think. But, according to a volunteer who blogged his experience (and disappointment), volunteers “were invited to take part in nightly conference calls. The calls were more of the slick marketing speech type than helpful training sessions.”
In the words of Zac Moffatt, the campaign’s digital director, the system “kind of buckled under the strain (of) the amount of information incoming.” The lack of training left the people out in the field unprepared and unable to communicate. Even with “800 Romney workers…staffing phones…the surge in traffic was so great that the system didn’t work for 90 minutes,” leaving the field workers scrambling and headquarters without field input.
I have to say that I felt awful reading this. I know that somewhere an Operations Director was crying, and that his or her advice had gone unheeded. I’m dead sure of it.
But the most interesting story about Ops guys and an election didn’t even happen in 2012 – it happened in the Democratic camp in 2008.
Amazingly enough, in 2008 the Democrats had built a system called Houdini that, like ORCA in 2012, was designed to “make the names of those who had already voted disappear from the Get Out The Vote lists” being maintained by volunteers in the field, who could then stop wasting time on people who had already voted and concentrate on the people who hadn’t.
Like ORCA, Houdini failed. Spectacularly. “On Election Day, the call volume was even more than anticipated and took out the entire phone system for the Obama campaign. It didn’t just [affect] the reporting of vote totals but [affected] anything that involve[d] a central campaign phone line.”
But here’s where the Ops guys come in. After Houdini’s failure the developers did a post mortem and scaled back the functionality of their 2012 system, Narwhal.
“It was basically determined that it wasn’t worth the risk or the amount of work for every precinct in the country. The creators of Houdini came in from Google and decided that it wasn’t possible to build a system that would scale that big.”
This is amazing – somebody decided to build a system with reduced functionality in order to get improved reliability. Do you suppose the political team or the marketing guys wanted less information on Election Day? Hell, no! I’m guessing they fought like cornered Tasmanian Devils not only to expand the scope but also to add new features. I wasn’t there, of course, but I’ve been in enough discussions between MR analysts, developers, and Operations people to be pretty sure who fought for what. The guys from Google were Ops savvy.
Operations people are under-appreciated. Their job is to execute somebody else’s brainchild, and we rely on them to make it a success. Like the hedgehog in the fable, they know one big thing: failure is not an option. Projects have to get done. On time. Within the budget. The data has to be right. There will not be a second chance. They’ll move mountains to get things done. Sometimes they go home at night wondering why no one sought or followed their advice and sailed right into the big rock they saw so clearly ahead.
But apparently as the developers proceeded from 2008’s Houdini to 2012’s Narwhal the Ops guys won one. I hope they got the credit they deserved because too often they don’t.
December 6th, 2012
By Walt Dickie, Executive Vice President
The torrent of shopping data from Black Friday and Cyber Monday is coming in. Although their names suggest a first-person shooter game involving Robocop in some futuristic battle, these two days are the now-traditional kickoff to the U.S. Christmas shopping orgy, and a clarion call to the armies of the commentariat and blogosphere.
And, once again, in what appears to be as much a part of the new American Christmas tradition as the shopping experience itself, the big headlines are all about the massive growth of online shopping.
If you’ve somehow missed all the frenzied scribbling, you can turn to IBM, which published the key data that almost everyone seems to have relied on in the IBM 2012 Holiday Benchmark Reports. There, in just a few pages of data, you’ll find both Friday and Monday’s online sales data broken down by retail category and compared to last year’s results.
The topline news story is, of course, the huge increase in online shopping: up 17.4% compared to last year’s Thanksgiving Day, up 20.7% compared to Black Friday 2011, and a whopping 30.3% up on Monday compared to a year ago.
Close behind is the news about mobile: IBM estimates that 24% of retail site traffic came from mobile devices on Black Friday this year, up from 14.3% last year – a monster increase of 67.8%. On Cyber Monday – traditionally understood as the day people went back to work and shopped their brains out on their employers’ broadband connections – mobile users were responsible for 18.4% of retail site traffic, which was up from 10.8% a year ago – an incredible 71.4% increase.
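These growth figures are simple relative changes. As a quick sanity check, here is the arithmetic behind the Black Friday mobile number, using only the percentages quoted above:

```python
def relative_increase(new, old):
    """Relative change from old to new, as a fraction."""
    return (new - old) / old

# Mobile share of retail site traffic on Black Friday: 14.3% -> 24%
growth = relative_increase(24.0, 14.3)
print(f"{growth:.1%}")  # -> 67.8%, matching IBM's reported increase
```

The Cyber Monday figure works the same way, though it lands a point or so off the reported 71.4%, presumably because IBM computed from unrounded shares.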
The oddity of the week was the unexpected difference between Android and iOS (Apple) users that emerged after correcting for Android’s dominance in the smartphone race. On Black Friday iOS devices accounted for 77% of mobile shopping traffic while Android accounted for only 23%. This is an oddity because currently Android phones and tablets outnumber iOS phones and tablets by about 60/40. Work it all out and “iPhone (and iPad) users are about three times more engaged in shopping with their devices than Android users.”
Horace Dediu does an excellent job of unpacking the “Android engagement paradox,” which he attributes mostly to “later adopters” buying Android phones in numbers sufficient to have overcome Apple’s early lead in smartphones. But, in the end, he finds this answer unsatisfactory. I wonder if the Android/iOS “paradox” contains a message for marketing research about mobile sampling – should we be thinking about weighting our mobile samples or imposing Android/iOS quotas?
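If we did decide to weight mobile samples toward the installed base, the arithmetic is ordinary post-stratification: each stratum's weight is its population share divided by its sample share. A minimal sketch, assuming a sample that skews iOS the way the Black Friday traffic did (77/23) against the roughly 60/40 Android/iOS installed base cited above:

```python
# Post-stratification weights: population share / sample share.
# The 60/40 and 77/23 splits are the figures quoted in the post;
# everything else here is illustrative.
population = {"android": 0.60, "ios": 0.40}
sample = {"android": 0.23, "ios": 0.77}

weights = {os: population[os] / sample[os] for os in population}
print(weights)  # Android respondents weighted up, iOS weighted down
```

Whether we *should* weight this way is the open question – engagement differences may be exactly the signal we want to measure, not a bias to correct.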
But the main message for MR comes from some much more basic observations about mobile usage.
What caught my attention in IBM’s data wasn’t the comparison of this year’s Black Cyber Days with last year’s, but the column comparing Black Friday 2011 – last year’s Big Gun – with Friday, November 16, 2012 – seven days before this year’s opening round, a “normal” pre-holiday Friday, which I will henceforth refer to as “Normal Friday 2012.”
You may remember that the headlines for Black Friday 2011 were more or less the same as they were this year: online as a whole and mobile shopping in particular were way up. But “Normal Friday 2012” blew away Black Friday 2011 on several measures. For instance, sales increased 10.8% on retail sites compared to last year’s Black Friday, and, on average, a sale in 2012 involved 3 items more than a sale in 2011. The headline-making shopping news of 2011 now trails the new normal.
Mobile is the major factor:
[Chart: Normal Friday 2012 vs. Black Friday 2011. Mobile data in blue; other data in orange; data involving sales outlined in red.]
Most of the variables that increased on Normal Friday compared to Black Friday involve mobile; overall, mobile site traffic was up 4.6%. Mobile sales increased, but so did single-page sessions and abandoned shopping carts on mobile. And with the exception of mobile sales, every variable involving closing a sale fell, as did sessions in which a visitor placed an item in a cart.
Normal Friday 2012 obviously involved more looking and comparing, even though buying still managed an increase.
All of the other data collected over the past couple of years reinforces the obvious conclusion that mobile devices – smartphones and tablets – have become even more important components of shopping. Checking prices and features, sales, finding online coupons, and, for that matter, seeing advertising, are now normal parts of the shopping experience. As I said, we knew that.
But what seems to be fairly stunning is that in a single year the headline-making news of Black Friday is now the everyday expectation of Normal Friday.
The moral is that the retailers get it; they’re struggling with it but they get it. They all know that they have four screens to think about – TV, computer, tablet, and phone. They all know about showrooming: “when a customer visits a brick and mortar retail location to touch and feel a product and then goes online…to purchase the product.” And, the smartest of them have stopped complaining about it and are working on leveraging it with apps that provide extra services in-store – new items, sales notices, bar code scans – then let you “flip” to their website to take advantage of online deals. The same app lets you find their deals when showrooming in a competitor’s store, too. They’re adapting to the new normal.
Can this please be the year that marketing researchers – clients and suppliers – stop wondering whether to add a “cell phone segment” to a sample spec and accept the fact that we need to expand our data collection toolbox to fit the time, connectivity, and screen size constraints of mobile, while also expanding our thinking? We should be focusing on leveraging the incredible capabilities that mobile presents for collecting new kinds of data in new kinds of situations. We need to walk away from the “online revolution” of the early 2000s and realize that we’re in a new, increasingly mobile-dominated age.
By the way, the average session on a mobile device on Black Friday 2011 took 4:03; on Normal Friday 2012 it had shrunk to 3:46. That session probably took place while the shopper was doing at least two other things at the same time, and everything that was going on was almost certainly of interest to MR. But we probably missed it.
December 4th, 2012
By Patti Fernandez, Research Director
The Marketing Research Event was buzzing with excitement and anticipation. What tales would we hear, what knowledge would we uncover, what trends would take center stage? And, in the end, on what new paths would we, as researchers, venture?
Insight development via storytelling and storytelling through data visualization were very much in the air. Many a session encouraged us, like Dorothy, to follow the yellow brick road toward our own Emerald City where insights break the confines of numbers and quotes and live within visually compelling stories.
But, in today’s data-driven world, how can we tell a story visually while seamlessly satisfying the needs of the data literalists? And, how can we shake the compulsion to show everything we’ve uncovered because (in our minds) every nugget matters?
The key is not only to tell a story, but also to approach the insight development process in the same way as story-creation. Here are five key elements to a solid storytelling approach:
Relevance is Key
- There is usually a rhyme and reason for everything that is included in a story (foreshadowing, plot-building, etc.).
- In that same way, results and insights should serve as key puzzle pieces that help build and complete a bigger picture.
- Relevance, though, takes time. We must first go treasure-hunting through all of our findings in order to determine which ones truly are worthy of supporting the key insights that need to be communicated.
Follow a Natural Order
- Stories follow a natural, rational order that keeps us alert and engaged with the plot.
- Our insights and findings, then, should follow the same path. They should help take the audience on a journey that makes sense and keeps them on the edge of their seats.
Create Conflict and Resolution
- Without conflict there is no resolution – without resolution there is no end to a story.
- Always aim to keep the plot of your story anchored. Your role as a researcher is to tell a story that ultimately helps resolve some sort of conflict.
Define Your Characters and Their Roles
- Characters have set roles in the story – they exist for a reason.
- In order to approach research in an organized and rational manner, we must first define who the characters are and what role they play.
- We may be swayed to think that the brand or product is the hero, but it is the consumer who should wear this badge. Brands are simply the tools that help the hero resolve conflict.
Bring Your Story to Life
- A good story will keep us turning the pages if it’s told in an engaging manner. An overly descriptive or circular plot can deter engagement and leave us tossing the story aside without finishing.
- And, just like a poorly written story, research results that are loaded with data that makes the audience have to work too hard to decipher the true message can fall flat.
- Using visual depictions of information to surprise and make data easily digestible will not only make your research more engaging, but also make it easier to present the story in a personal and animated manner.
In the end, it’s not simply how you present your insights with iconic figures, captivating prose, and visually stimulating graphics – it’s how you approach the insight-finding process. So, take a leap of faith and follow the rabbit down the hole through a journey of discovery.
November 8th, 2012
By Bob Relihan, Senior Vice President
In a recent blog post, my good friend and colleague, Walt Dickie, took the success of Nate Silver’s data-focused and accurate prediction of last night’s election outcome, and the failure of so many pundits to do the same, as a metaphor for the power of big data and the twilight of the focus group moderator. His argument is that hard-eyed, statistically significant data, modeled and analyzed properly, trumped the instinct and expertise of pundits with years of experience feeling the winds of voter moods and sentiments. This is a clear death knell for the focus group and its moderator, who also applies years of experience and instinct to interpreting the often opaque feelings of consumers.
I have a certain amount of sympathy for this argument, particularly after listening to the hours of hot air expended by pundits over the past few (many?) months. I begin to have sympathy for the marketing managers who have to listen to countless presentations of findings from focus groups.
What is more, election night provided another example of the triumph of big data. I was able to read a table this morning that gave average wait times at the polls in different states. The data was the product of an analysis of all the Tweets yesterday. I certainly could not have done that, accurately or not, with focus groups. I am not certain I could have deployed a traditional survey to yield information so quickly.
But, does this all provide a hint of the demise of focus groups and skilled moderators? I don’t think so.
In the first place, not all pundits failed in their prediction of the election outcome. A scoring of the punditry revealed that left-leaning pundits were remarkably accurate. In fact, there were a few with better accuracy than Silver. Right-leaning pundits? Well, most were considerably wide of the mark. When I talk to consumers in a qualitative setting and bring my expertise and instincts to bear upon the comments, I believe I am being objective, as objective as I can be. That objectivity results, I believe, in reliable insights.
Silver’s much praised accuracy has limitations. He predicted the outcome of this specific election. He was asked to predict very specific and well-understood behavior taking place at a specific time. Rarely, as a focus group moderator have I had to answer so circumscribed a question. Rather, I am asked to develop hypotheses about the reactions of consumers in a range of possible futures. What are the attitudes and emotions of consumers that tell me how they might respond to a new entrant in a category? A new service they have never seen before? A message about an unheard of benefit of a well-known product?
Focus groups, conducted by sensitive, experienced analysts, can provide this kind of direction to marketers. And, they will for the foreseeable future.