December 6th, 2012
By Walt Dickie, Executive Vice President
The torrent of shopping data from Black Friday and Cyber Monday is coming in. Although their names suggest a first-person shooter game involving Robocop in some futuristic battle, these two days are the now-traditional kickoff to the U.S. Christmas shopping orgy, and a clarion call to the armies of the commentariat and blogosphere.
And, once again, in what appears to be as much a part of the new American Christmas tradition as the shopping experience itself, the big headlines are all about the massive growth of online shopping.
If you’ve somehow missed all the frenzied scribbling, you can turn to IBM, which published the key data that almost everyone seems to have relied on in the IBM 2012 Holiday Benchmark Reports. There, in just a few pages of data, you’ll find both Friday and Monday’s online sales data broken down by retail category and compared to last year’s results.
The topline news story is, of course, the huge increase in online shopping: up 17.4% compared to last year’s Thanksgiving Day, up 20.7% compared to Black Friday 2011, and a whopping 30.3% up on Monday compared to a year ago.
Close behind is the news about mobile: IBM estimates that 24% of retail site traffic came from mobile devices on Black Friday this year, up from 14.3% last year – a monster increase of 67.8%. On Cyber Monday – traditionally understood as the day people went back to work and shopped their brains out on their employers’ broadband connections – mobile users were responsible for 18.4% of retail site traffic, which was up from 10.8% a year ago – an incredible 71.4% increase.
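Those growth rates follow directly from the traffic shares quoted above. Here is a quick back-of-the-envelope check (the function name is mine; computed from the rounded shares, the Cyber Monday increase comes out closer to 70% than 71.4%, presumably because IBM worked from unrounded shares):

```python
def yoy_growth(share_now, share_then):
    """Year-over-year growth, in percent, between two traffic shares."""
    return (share_now / share_then - 1) * 100

# Black Friday mobile share: 24% this year vs. 14.3% last year
print(round(yoy_growth(24.0, 14.3), 1))  # 67.8

# Cyber Monday mobile share: 18.4% this year vs. 10.8% last year
print(round(yoy_growth(18.4, 10.8), 1))  # 70.4
```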
The oddity of the week was the unexpected difference between Android and iOS (Apple) users that emerged after correcting for Android’s dominance in the smartphone race. On Black Friday iOS devices accounted for 77% of mobile shopping traffic while Android accounted for only 23%. This is an oddity because currently Android phones and tablets outnumber iOS phones and tablets by about 60/40. Work it all out and “iPhone (and iPad) users are about three times more engaged in shopping with their devices than Android users.”
Horace Dediu does an excellent job of unpacking the “Android engagement paradox,” which he attributes mostly to “later adopters” buying Android phones in numbers sufficient to have overcome Apple’s early lead in smartphones. But, in the end, he finds this answer unsatisfactory. I wonder if the Android/iOS “paradox” contains a message for marketing research about mobile sampling – should we be thinking about weighting our mobile samples or imposing Android/iOS quotas?
But the main message for MR comes from some much more basic observations about mobile usage.
What caught my attention in IBM’s data wasn’t the comparison of this year’s Black Cyber Days to last year’s, but the column comparing Black Friday 2011 – last year’s Big Gun – with Friday, November 16, 2012 – seven days before this year’s opening round, a “normal” pre-holiday Friday, which I will henceforth refer to as “Normal Friday 2012.”
You may remember that the headlines for Black Friday 2011 were more or less the same as they were this year: online as a whole and mobile shopping in particular were way up. But “Normal Friday 2012” blew away Black Friday 2011 on several measures. For instance, sales increased 10.8% on retail sites compared to last year’s Black Friday, and, on average, a sale in 2012 involved 3 items more than a sale in 2011. The headline-making shopping news of 2011 now trails the new normal.
Mobile is the major factor:
Mobile data is in blue; other data is in orange. Data involving sales is outlined in red.
Of the variables that increased on Normal Friday compared to Black Friday, most involve mobile; overall, mobile site traffic increased 4.6%. Although mobile sales increased, sessions that involved viewing only a single page rose on mobile, as did abandoned shopping carts. Moreover, with the exception of mobile sales, all the variables involved in closing a sale fell, as did sessions in which a visitor placed an item in a cart.
Normal Friday 2012 obviously involved more looking and comparing, even though buying did manage an increase.
All of the other data collected over the past couple of years reinforces the obvious conclusion that mobile devices – smartphones and tablets – have become even more important components of shopping. Checking prices and features, sales, finding online coupons, and, for that matter, seeing advertising, are now normal parts of the shopping experience. As I said, we knew that.
But what seems to be fairly stunning is that in a single year the headline-making news of Black Friday is now the everyday expectation of Normal Friday.
The moral is that the retailers get it; they’re struggling with it but they get it. They all know that they have four screens to think about – TV, computer, tablet, and phone. They all know about showrooming: “when a customer visits a brick and mortar retail location to touch and feel a product and then goes online…to purchase the product.” And, the smartest of them have stopped complaining about it and are working on leveraging it with apps that provide extra services in-store – new items, sales notices, bar code scans – then let you “flip” to their website to take advantage of online deals. The same app lets you find their deals when showrooming in a competitor’s store, too. They’re adapting to the new normal.
Can this please be the year that marketing researchers – clients and suppliers – stop wondering whether to add a “cell phone segment” to a sample spec and accept the fact that we need to expand our data collection toolbox to fit the time, connectivity, and screen size constraints of mobile while also expanding our thinking? We should be focusing on leveraging the incredible capabilities that mobile presents for collecting new kinds of data in new kinds of situations. We need to walk away from the “online revolution” of the early 2000s and realize that we’re in a new, increasingly mobile-dominated age.
By the way, the average session on a mobile device on Black Friday 2011 took 4:03; by Normal Friday 2012 it had shrunk to 3:46. That session probably took place while the shopper was doing at least two other things at the same time, and everything that was going on was almost certainly of interest to MR. But, we probably missed it.
December 4th, 2012
By Patti Fernandez, Research Director
The Marketing Research Event was buzzing with excitement and anticipation. What tales would we hear, what knowledge would we uncover, what trends would take center stage? And, in the end, on what new paths would we, as researchers, venture?
Insight development via storytelling and storytelling through data visualization were very much in the air. Many a session encouraged us, like Dorothy, to follow the yellow brick road toward our own Emerald City where insights break the confines of numbers and quotes and live within visually compelling stories.
But, in today’s data-driven world, how can we tell a story visually while seamlessly satisfying the needs of the data literalists? And, how can we shake the compulsion to show everything we’ve uncovered because (in our minds) every nugget matters?
The key is not only to tell a story, but also to approach the insight development process in the same way as story-creation. Here are five key elements to a solid storytelling approach:
Relevance is Key
- There is usually a rhyme and reason for everything that is included in a story (foreshadowing, plot-building, etc.).
- In that same way, results and insights should serve as key puzzle pieces that help build and complete a bigger picture.
- Relevance, though, takes time. We must first go treasure-hunting through all of our findings in order to determine which ones truly are worthy of supporting the key insights that need to be communicated.
Follow a Natural Order
- Stories follow a natural, rational order that keeps us alert and engaged with the plot.
- Our insights and findings, then, should follow the same path. They should help take the audience on a journey that makes sense and keeps them on the edge of their seats.
Create Conflict and Resolution
- Without conflict there is no resolution – without resolution there is no end to a story.
- Always aim to keep the plot of your story anchored. Your role as a researcher is to tell a story that ultimately helps resolve some sort of conflict.
Define Your Characters and Their Roles
- Characters have set roles in the story – they exist for a reason.
- In order to approach research in an organized and rational manner, we must first define who the characters are and what role they play.
- We may be swayed to think that the brand or product is the hero, but it is the consumer who should wear this badge. Brands are simply the tools that help the hero resolve conflict.
Bring Your Story to Life
- A good story will keep us turning the pages if it’s told in an engaging manner. An overly descriptive or circuitous plot can deter engagement and leave us tossing the story aside without finishing.
- And, just like a poorly written story, research results loaded with data that makes the audience work too hard to decipher the true message can fall flat.
- Using visual depictions of information to surprise and to make data easily digestible will not only make your research more engaging, but also make it easier to present the story in a personal and animated manner.
In the end, it’s not simply how you present your insights with iconic figures, captivating prose, and visually stimulating graphics – it’s how you approach the insight-finding process. So, take a leap of faith and follow the rabbit down the hole through a journey of discovery.
November 8th, 2012
By Bob Relihan, Senior Vice President
In a recent blog post, my good friend and colleague, Walt Dickie, has taken the success of Nate Silver’s data-focused and accurate prediction of last night’s election outcome – and the failure of so many pundits to do the same – as a metaphor for the power of big data and the twilight of the focus group moderator. His argument is that hard-eyed, statistically significant data, modeled and analyzed properly, trumped the instinct and expertise of many pundits and their years of experience feeling the winds of voter moods and sentiments. This is a clear death knell for the focus group and its moderator, who also applies years of experience and instinct to interpreting the often opaque feelings of consumers.
I have a certain amount of sympathy for this argument, particularly after listening to the hours of hot air expended by pundits over the past few (many?) months. I begin to have sympathy for the marketing managers who have to listen to countless presentations of findings from focus groups.
What is more, election night provided another example of the triumph of big data. I was able to read a table this morning that gave average wait times at the polls in different states. The data was the product of an analysis of all the Tweets yesterday. I certainly could not have done that, accurately or not, with focus groups. I am not certain I could have deployed a traditional survey to yield information so quickly.
But, does this all provide a hint of the demise of focus groups and skilled moderators? I don’t think so.
In the first place, not all pundits failed in their prediction of the election outcome. A scoring of the punditry revealed that left-leaning pundits were remarkably accurate. In fact, there were a few with better accuracy than Silver. Right-leaning pundits? Well, most were considerably wide of the mark. When I talk to consumers in a qualitative setting and bring my expertise and instincts to bear upon the comments, I believe I am being objective, as objective as I can be. That objectivity results, I believe, in reliable insights.
Silver’s much-praised accuracy has limitations. He predicted the outcome of this specific election. He was asked to predict very specific and well-understood behavior taking place at a specific time. Rarely, as a focus group moderator, have I had to answer so circumscribed a question. Rather, I am asked to develop hypotheses about the reactions of consumers in a range of possible futures. What are the attitudes and emotions of consumers that tell me how they might respond to a new entrant in a category? A new service they have never seen before? A message about an unheard-of benefit of a well-known product?
Focus groups, conducted by sensitive, experienced analysts, can provide this kind of direction to marketers. And, they will for the foreseeable future.
November 8th, 2012
By Walt Dickie, Executive Vice President
Tuesday’s election is being hailed as “The Triumph of the Nerds.” Barack Obama won the presidential election, but Nate Silver won the war over how we understand the world.
The traditional pundits were on TV, in the papers and blogs, interpreting what they were hearing and feeling. Peggy Noonan:
“Something old is roaring back.” … “people (on a Romney rope line) wouldn’t let go of my hand” … “something is moving with evangelicals … quiet, unreported and spreading” … the Republicans have the passion now, the enthusiasm. … In Florida a few weeks ago I saw Romney signs, not Obama ones. From Ohio I hear the same. From tony Northwest Washington, D.C., I hear the same.”
On the other side were the Moneyball data nerds, with Nate Silver carrying their standard:
“Among 12 national polls published on Monday, Mr. Obama led by an average of 1.6 percentage points. Perhaps more important is the trend in the surveys. On average, Mr. Obama gained 1.5 percentage points from the prior edition of the same polls, improving his standing in nine of the surveys while losing ground in just one. … Because these surveys had large sample sizes, the trend is both statistically and practically meaningful.”
The morning after, Paul Bradshaw posted that the US election was a wake-up call for data-illiterate journalists – the pundits – who “evaluate, filter, and order (information) through the rather ineffable quality alternatively known as ‘news judgment,’ ‘news sense,’ or ‘savvy.’”
Bradshaw, and the blogger Mark Coddington, whom he quotes, look beyond the question of which camp “won” or “lost” the election, and see an epistemological revolution in reporting the news:
Silver’s process — his epistemology — is almost exactly the opposite of (traditional punditry): “Where political journalists’ information is privileged, his is public, coming from poll results that all the rest of us see, too. Where political journalists’ information is evaluated through a subjective and nebulous professional/cultural sense of judgment, his evaluation is systematic and scientifically based. It involves judgment, too, but because it’s based in a scientific process, we can trace how he applied that judgment to reach his conclusions.”
But this blog post is about marketing research, not journalism, although as I’ve argued before the two fields have a lot in common.
When I read Bradshaw yesterday morning, I could hardly help re-writing his observations, since he could easily have been talking about the traditional approach to qualitative analysis. Here’s my re-write:
Qualitative analysts get access to information directly from consumers, then evaluate, filter, and order it through their “judgment,” “sense,” or “savvy.” This is how qualitative analysts say to their clients (and to themselves), “This is why you can trust what we say we know — because we found it out through this process.”
Journalistic intuition suffered a severe blow Tuesday, though I doubt it will prove fatal. The data-free intuition of focus group moderators is getting hammered by Silver-esque data-driven analysis, but it hasn’t succumbed yet, either.
Still, I have to wonder if the writing is on the wall. I’ll leave the last word to Bradshaw:
Journalists who professed to be political experts were shown to be well connected, well-informed perhaps, but – on the thing that ultimately decided the result: how people were planning to vote – not well educated. They were left reporting opinions, while Nate Silver and others reported research.
October 26th, 2012
By Walt Dickie, Senior Vice President
If you’ve ever read anything about SETI, the search for extraterrestrial intelligence, you’ve probably heard of Drake’s Equation—well, you might have heard about it on Star Trek or the Big Bang Theory instead! Formulated by the astronomer Frank Drake, it estimates the odds of intelligent life in our galaxy and is one of the foundations for the whole enterprise. SETI came into being when Drake, whose Project Ozma was the first systematic search for alien radio signals, addressed a meeting at the Green Bank National Radio Astronomy Observatory in 1960 and offered an estimate of the odds that the project could succeed. The Drake Equation is justly famous as one of the first attempts to put the possible existence of little green men into mathematical perspective.
I was in high school when I first heard about SETI and the Drake Equation. For no good reason, I’ve often found myself musing about it, which is why it recently popped into my mind again when one of my partners was getting ready to moderate a discussion at an MR conference.
The conference, like most MR conferences these days, was heavy on presentations about New Methods and The Future of Marketing Research. Needless to say, not a few of the conference presenters agreed with almost every blogger in the industry that Major Change is Just Around the Corner and that the Future For Every Established MR Company Is Bleak.
My partner’s role at the conference was to moderate a discussion between the audience and one of the keynote speakers, and he was preparing by reviewing the speaker’s presentation, a copy of which he’d sent me. We were exchanging emails about the issues in the talk and the potential questions they raised when it occurred to me that MR needed its own version of the Drake Equation.
Just as the early SETI community needed some estimate of the likelihood of contacting alien life, the current MR community needs an estimate of the likelihood of being superseded by new research technology. It’s almost all we talk about these days, and, being mathematically inclined folks, we deserve a calculation of our odds.
So here, with apologies to Frank Drake, I propose the MR version of the Drake Equation as a contribution to SSRT, the Search for Superior Research Technology.
The Drake Equation
N = R* × fp × ne × fℓ × fi × fc × L
SETI: N = the number of civilizations in our galaxy with which communication might be possible
SSRT: N = The current number of competitors capable of putting your company out of business
| Factor | Drake’s proposal (SETI) | My proposal (SSRT) |
| --- | --- | --- |
| R* | The average rate of star formation per year in our galaxy | The average annual rate at which significant new approaches for understanding some aspect of human behavior or thought appear |
| fp | The fraction of those stars that have planets | The fraction of those approaches that are applicable to the design/delivery/communication of consumer products or services |
| ne | The average number of planets that can potentially support life per star that has planets | The fraction of the above that depend on data/inputs that can be collected using conceivable/deliverable/socially and ethically acceptable technology |
| fℓ | The fraction of the above that actually go on to develop life at some point | The fraction of the above that are commercialized at some point |
| fi | The fraction of the above that actually go on to develop intelligent life | The fraction of the above that are faster and/or less expensive than your company’s offerings |
| fc | The fraction of civilizations that develop a technology that releases detectable signs of their existence into space | The fraction of the above that will produce results that are more actionable/effective/predictive than your company’s offerings |
| L | The length of time for which such civilizations release detectable signals into space | The length of time that those approaches will be seen as valid/interesting/relevant as the basis for commercial products/services |
Some Comments on my Estimates:
R* I think my estimate of truly unique, significant new approaches coming along every two years may be generous. Not that new approaches or technologies don’t appear more often than that, but most of these are minor wrinkles on existing approaches – better ways to do something that’s already being done. If you’re a technological optimist you might go for more frequent discoveries. I think that one a year would be an optimistic estimate, but feel free to enter your own.
fp Almost anything – maybe not quite anything – that’s applicable to humans is applicable in some non-trivial way to consumer products and services. Optimistic estimate: .99
ne On the other hand, some of the new things that come along require approaches that either aren’t technically feasible, at least at scale, within any foreseeable future or would never pass a social/ethical or possibly legal challenge. I’ll allow that almost any technical obstacle can be overcome, but I’m not so sure about the social issues. I’d hesitate to make this a certainty under even the most optimistic scenario. Optimistic estimate: .9
fℓ With the rise of crowdsourcing augmenting venture capital and other ways to finance new business ventures, and entrepreneurial enthusiasm apparently being boundless, pretty much everything that can be commercialized will be, at some point. Optimistic estimate: 1
fi Not everything is quick and not everything can be made cheap. I scored this one a toss-up. If you believe that advances in technology will eventually bring down any conceivable cost, your optimistic estimate would be 1.
fc On the other hand, a lot of current technologies have already been pretty much optimized and aren’t advancing anymore, so something really new has a fair chance of bringing new insights to the party. That’s 2:1 in favor of the new stuff in my book. But it’s possible to reason that any legitimate theoretical advance will inevitably produce some significant new insight. Optimistic estimate: 1
L There is no useful data on the rate at which new basic approaches appear in the sciences or technology. (This question might be phrased in terms of the frequency of paradigm shifts, about which there is no consensus.) I went with my gut feeling that after about 25 years paradigms begin to be seen as played out. I think that an optimistic estimate might be 2-4 times longer than this.
Using my estimates, there are probably 2-3 technologies capable of putting your company out of business by undercutting the approaches your business is based on; using what I think are the most optimistic (pessimistic?) estimates for every variable in the SSRT Equation, there are somewhere between about 45 and 90.
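For what it’s worth, the optimistic bounds can be reproduced from the comments above. A minimal sketch (the function is mine, and the 50-100 year lifetime range is my reading of the “2-4 times longer” note on L):

```python
def ssrt(r_star, fp, ne, fl, fi, fc, lifetime):
    """SSRT: N = R* x fp x ne x fl x fi x fc x L."""
    return r_star * fp * ne * fl * fi * fc * lifetime

# Optimistic values from the comments above: one new approach per year,
# fp = .99, ne = .9, fl = fi = fc = 1, and a paradigm lifetime of
# 50-100 years (2-4 times the 25-year gut estimate).
low = ssrt(1, 0.99, 0.9, 1, 1, 1, 50)    # about 45
high = ssrt(1, 0.99, 0.9, 1, 1, 1, 100)  # about 89
print(low, high)
```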
By the way, although Drake’s original estimates yield an estimate of 10 advanced civilizations in the galaxy, the consensus estimates developed at the first SETI conference yielded something between 1,000 and 100,000,000. I wonder if a reasonable argument can be made using both the SETI and SSRT versions of the equation together that not only are there plenty of companies capable of putting yours out of business, some of them are or will be run by aliens.
October 8th, 2012
By Bob Relihan, Senior Vice President
Hardly a day passes when an article about the fallibility of marketing research does not cross my desk. The latest is from Forbes, and I was compelled to read it by its somewhat incendiary title, “Why So Much Market Research Sucks.” Roger Dooley makes the typical arguments, although they are couched in the context of the strengths of neuromarketing. You have heard them. Consumers can’t explain why they do or prefer anything. They rationalize; they can look only backward, not forward.
There are always horror stories of findings that made little sense but were “followed” nonetheless. These problems lie not with the consumers or the “research,” but with the marketers and the researchers. They want the research to tell them what to do; they don’t want to use it to stimulate their thinking and help them make good decisions.
Dooley says surveys are fine for simple behavioral questions and little else. “If you want to get the real story on the behavior of your customers, readers, etc., don’t rely on self-reported data. While such data can be fine for simple facts, like, ‘Did you eat breakfast today?’ It will rarely answer questions like, ‘Why do you prefer Grey Goose vodka?’” But, the fact is it will.
A good researcher and listener can ask a Grey Goose partisan, “Tell me everything you can about vodka and drinking vodka.” And, she will respond with a laundry list of associations. The researcher will ask the same question of a Belvedere fan and generate another list. From the differences between those two lists (really, a sufficient number of similar lists) the savvy researcher will be able to infer why Grey Goose drinkers prefer that brand. Research, after all, is as much interpretation and analysis as it is data.
As this sanctimonious defense of traditional research was forming in my mind, a conflicting vision entered. We have all seen the new IBM “Smarter Marketing” ads targeting CMOs with the message of customer analytics. The “customer is the new boss.” Through comments, social networks, and reviews, she tells the company “what to make, what it’s made from, how it’s shipped, and the way it’s sold.” The customer has immediate impact on the company, and the company can respond in near real-time.
Now, this throws the model of traditional research into disarray because that paradigm is built on time. From the “Market Research Sucks” perspective, consumers cannot predict what they will do in the future based on their rationalizations of past behavior. From my ideal perspective, a sensitive analyst can infer future behavior based upon consumers’ descriptions of categories and past behavior.
Fine. But, what if a marketer is able to offer a consumer a red dress right after she tweets that she just “loves red dresses”? Her tweet may be just as much a rationalization as it would be had she told it to an interviewer, but in the moment it has emotional weight and validity. Six months hence, she might say “the dress isn’t for me,” but in the moment it feels like the marketer is talking right to her, and the dress is just what she wanted.
And, no more need for research to mediate time.
So, in this new world of immediate communication between marketers and customers, market research will still have a role in explaining the big trends, the big social movements. It will provide strategic insight that can drive large-scale planning. But, on the day-to-day tactical level, it may simply be the conduit of the marketer/consumer conversation, aggregating and synthesizing all that is said.
September 27th, 2012
By Bob Relihan, Senior Vice President
If you want to understand consumers, you have to know how they communicate. Pew has just released a report that is another bit of evidence that people are communicating more fluidly and less linearly. In other words, writing is being displaced, at least partly, by non-verbal means.
Pew finds that 46% of internet users post original photos or videos online and that 41% post photos or videos they find elsewhere on the internet. A majority, 56%, do one or the other, and a third of internet users do both. To be sure, some of this activity is no different from the sharing of vacation photos that has gone on since the first Brownie. But the ubiquity and frequency of photo sharing make it a normal and expected form of behavior and communication.
When my niece posts a picture on Facebook of a dog and a cat sleeping together, she is certainly saying, “Look at this; aren’t they cute?” But by displaying that picture publicly where all her friends can see it, she has created a badge. The picture speaks to her feelings, beliefs and values. Moreover, she apparently feels no need to explain the values communicated in the picture. I am guessing that she thinks they are self-evident. I am also guessing that she actually could not explain them fully.
The more people create visual badges for themselves on Facebook, Pinterest, Tumblr, and the like, the less willing and able they will be to articulate the meanings and values those badges express.
This trend has profound implications for those of us who wish to understand what consumers communicate.
- If we wish to engage consumers and provide them with an opportunity to express what they believe or feel about their lives and our products, we will need to provide them with a space to express themselves visually. Simply asking questions with room for structured or unstructured responses will not be sufficient.
- Visual communications will be the “new normal.” Those of us, and I am one, who have tacked projective exercises onto our group interviews in an effort to “dig deeper” will need to recognize that these visual activities may well be the first and only shovels available. They are not extra; they are central.
- And, if consumers are communicating visually rather than verbally, we need to understand the meaning of the different badges and images they use. The more consumers use these images, the more these meanings will be unique and less susceptible to being “translated” into conventional language. If I want to explain to you what my niece is thinking, my only means may well be showing you that picture of the dog and cat.
This will be a new world of research, and I am looking forward to engaging it.
August 15th, 2012
By Shaili Bhatt
How do voters in the United States view the current state of the economy compared with four years ago? Which international and domestic public policy issues matter to voters today? Are race, religion and personal wealth viewed as motivators or barriers for certain voters to support one candidate over another?
Political research in the USA is not only conducted on the candidates themselves, but very often research is conducted on the issues that drive each campaign. As Election Day on Tuesday, November 6th grows closer, candidates will reinforce and defend their positions from the podium, fine-tuning their messages to reassure ardent supporters and reach undecided voters.
For the complete article published in QRCA Views, click here.
July 27th, 2012
By Walt Dickie, Executive Vice President
In June, The Pew Foundation published some very interesting data on cellphone-based internet use that packs some worrisome implications for a lot of online marketing research.
Some 88% of U.S. adults own a cell phone of some kind as of April 2012, and more than half of these cell owners (55%) use their phone to go online … 31% of these current cell internet users say that they mostly go online using their cell phone, and not using some other device such as a desktop or laptop computer. That works out to 17% of all adult cell owners who … use their phone for most of their online browsing (my emphasis).
Pew also finds that 5% of cell phone owners use their cell phones and some other device equally for online access, 33% mostly use some other device, although they also use their cell phones to get online, while 45% of cell phone owners don’t go online at all using their phones.
So, let’s do a little back-of-the-envelope calculation: based on these stats, how many cell phone users should we be finding in our general-population online survey samples?
We have to make some assumptions. Pew asks their questions in terms of the respondent’s device choice for “most” online access. Let’s say that “most online browsing” means something like 75% of all browsing. In other words, let’s estimate that the 17% of adult cell owners who “mostly” use their phones are actually doing so for about 75% of their browsing. Similarly, let’s assume that the 33% who “mostly” use some other device are actually using their phones 25% of the time. Finally, we’ll assume that those who split their online access equally between phones and other devices are splitting the time 50/50.
Using those numbers and Pew’s overall cell ownership data, we should expect 0.88 × ((0.17 × 0.75) + (0.05 × 0.50) + (0.33 × 0.25)) = 20.7% of a general population sample to show up as cell phone users.
If the people who “mostly” use another device actually use their cell phones for only 10% of their online access, then this proportion would drop to 16.3%. In the extreme case, in which people who “mostly” use cell phone for access do so only 51% of the time, and people who “mostly” use another device actually choose their cell phones only 1% of the time, we would still expect to see cell access making up about 10% of a general population sample.
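The calculation above, along with the two sensitivity scenarios, can be sketched in a few lines of code. The per-segment browsing shares (75%, 50%, 25%, and so on) are the assumptions laid out earlier, not figures from Pew itself:

```python
# Back-of-the-envelope estimate of expected cell phone incidence in a
# general-population online sample, based on Pew's 2012 figures.

CELL_OWNERSHIP = 0.88  # share of U.S. adults who own a cell phone (Pew)

def expected_incidence(mostly_phone_share, equal_share, mostly_other_share):
    """Weighted share of cell owners browsing by phone, times ownership."""
    segments = [
        (0.17, mostly_phone_share),   # "mostly" go online with their phone
        (0.05, equal_share),          # split evenly between devices
        (0.33, mostly_other_share),   # "mostly" use some other device
    ]
    return CELL_OWNERSHIP * sum(size * share for size, share in segments)

# Base case: 75% / 50% / 25% assumed browsing shares
print(round(expected_incidence(0.75, 0.50, 0.25), 3))  # → 0.207

# "Mostly other" users at only 10% phone browsing
print(round(expected_incidence(0.75, 0.50, 0.10), 3))  # → 0.163

# Extreme case: 51% / 50% / 1%
print(round(expected_incidence(0.51, 0.50, 0.01), 3))  # → 0.101
```

Plugging in different browsing-share assumptions is an easy way to see how sturdy (or fragile) the 10%-to-20% range really is.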
So, based on Pew’s data, the incidence of cell phone access to general population surveys should be in the 10% to 20% range.
If that sounds problematic, the trend data that Pew offers seems even more so.
Pew doesn’t give tracking data on “cell mostly” users but they do give data on the growth of cell-based online access overall. It’s not unreasonable to assume that the “cell mostly” segment will grow at roughly the same rate as cell access as a whole. Here’s Pew’s data re-drawn and projected forward.
Pew’s data shows that phone-based internet access is growing at just about 10% per year. At that rate essentially 100% of a gen pop sample “should” be using a mobile device in about 10 years. If MR samples continue to under-represent people who access the internet via cell phone by 65% to 75%, then the “standard” MR sample sources will shrink by a comparable amount.
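A crude version of that projection is easy to sketch: take the ~21% expected incidence derived above and grow it by the roughly 10 percentage points per year that Pew's trend suggests. This is a toy calculation under those two assumptions, not a forecast; as noted below, the real trend line won't be linear.

```python
# Linear projection: years until cell-based access saturates a
# general-population sample, assuming a fixed starting share and a
# constant growth rate in percentage points per year.

def years_to_saturation(start_share=0.207, growth_per_year=0.10):
    """Years until the share reaches 100% at a constant linear rate."""
    years, share = 0, start_share
    while share < 1.0:
        share += growth_per_year
        years += 1
    return years

print(years_to_saturation())  # → 8, i.e. roughly a decade out
```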
Of course, this is a crude estimate – whatever happens, the trend line won’t be linear – and ten years is not tomorrow.
But still, assuming that these numbers are anywhere near the ballpark, this implies some big problems. We need to know what is keeping cell users away from online MR surveys, and we need to find ways of changing our approaches to make our research more amenable to mobile access.
Pew’s report doesn’t directly address the question of how to do this, of course, but it does have some strong hints about what might be involved in the reasons given for choosing cell phones as web access devices.
“Cell mostly” users say that their cell phone is a “simpler, more effective choice for going online” compared to other available options (18%); 7% say that they “do mostly basic activities when they go online”; and 6% “find their cell phone to be easier to use than a traditional computer.”
Would these people consider marketing research surveys simple and basic? How long can an online activity take and still be simple and basic? How complex? What kind of engagement can be involved? Is going through a battery of a dozen or so attributes and rating each one on a 1-to-something scale either simple or basic? Is viewing a succession of concept statements with accompanying images – over a wireless connection – simple and basic?
We know from many sources that cell phone use is dominated by short “sessions”: a quick text message, a visit to Google Maps for directions, checking Yelp to find a good restaurant nearby, a fast check of incoming email. This isn’t to say that people don’t sustain long periods of engagement – playing Angry Birds on the bus to work, reading the news, even reading a novel using the Kindle app. But many aspects of cell phones, from screen size to data plans to spotty coverage urge short bursts of use that generally don’t mesh well with anything resembling even a 10 or 20 minute questionnaire.
Although I don’t know for certain where the cell phone users that are missing from our general population samples have gone or why they’re not in our samples, I do have a hunch. They’re not in our samples because they’re not even in our world, which was built, sample panel on sample panel, river source on river source, on a PC/laptop model of online engagement and interaction. The “cell phone mostly” web users have simply moved on to something simpler and more basic.
I’ve blogged about this issue before and will again, I expect. There is a conflict between “marketing research” understood as “the collection of data designed for statistical analysis tailored to the needs of standardized corporate decision making procedures,” which is what drives a major portion of client MR activity, and marketing research defined as “collecting as much data relevant to marketing issues from as many sources and in as many modes as is possible via available technology.”
The incorporation of MR into corporate decision-making happened during an era when the technology at hand – phone and mall interviewing, then online surveys – created a certain style of research that demanded high focus and a fairly large time commitment from respondents. That kind of research is still quite possible, but its days may well be numbered.
By Walt Dickie, Executive Vice President
One of the big events of my formative years happened around 1967 at MIT when the guys in the Earth Science Department announced that they could “predict” what the weather was like in Cambridge an hour earlier.
What actually happened was that a meteorological model had been developed that forecast the weather pretty accurately 24 hours in advance. The problem was that the model took a little over a day to execute on the mainframe, so by the time it issued a forecast it was “predicting” the weather we had just experienced. It was a geeky story, and everyone I knew thought it was pretty funny. A computer model had been developed that was as good at forecasting the weather as looking out the window.
But we all knew that it was a really big deal. With improvements to the algorithm and some advances in the hardware, it would only be a matter of time before the model would run in an hour, then a minute, and it would soon be capable of forecasting the weather not 24 hours in advance but 48 hours, or a week, a month … who could guess?
I’ve watched the weather my whole life because I’ve always been addicted to outdoor sports – I ride a bike almost daily outside of the three- or four-month period when the Chicago winter drives all but the insane indoors. I’ve been a skier since childhood, as has my wife, and we’ve passed that mania on to our kids, so that gives us a reason to follow the winter weather. And I was bitten by sailing when growing up on a New England lake, which now takes me out on Lake Michigan and, again, has me poring over the weather sites.
Historically, there have only been a few ways to forecast future weather. Until very recently forecasters struggled to consistently beat the algorithm my mom used when I was a kid: “Tomorrow will be pretty much like today.” If you think about your local weather, you’ll probably see what we see here in Chicago – the weather mostly doesn’t change much from one day to the next, until it flips a switch and changes a lot. Fronts move through every few days – but in between, tomorrow is much like today. You might be right with that algorithm as often as 3 days out of 4, or even 4 out of 5. It was a struggle to develop a meteorological system better than that.
As recently as the early 20th century, “forecasters would chart the current set of observations, then look through a library of past maps to find the one that most resembled the new chart. Once you had found a reasonably similar map, you looked at how the past situation had evolved and based your forecast on that.” When the technology appeared giving forecasters a reasonably complete picture of today’s weather “upstream” from their location, they were able to adopt a variation on this technique and base tomorrow’s Chicago forecast on what was happening on the Great Plains today, rather than relying on an old map.
And then came the modelers’ breakthrough – the algorithm that forecast the weather an hour ago – and soon it was possible to base forecasts on scientific principles and mathematical calculation.
So, vastly simplified, the evolution of weather forecasting was this: predict that tomorrow will be like today, predict that tomorrow will be like it is today somewhere else, and, finally, calculate tomorrow’s weather by mathematically extrapolating the underlying physics of today into the future.
I’ve been thinking about this progression recently because C+R, like most MR firms these days, is spending an increasing amount of time trying to predict the future. The industry is changing; the economy is changing; maybe the entire global financial system is changing; technology is certainly changing. How do we navigate? How do we make business decisions for the future without some way of forecasting the future?
Everyone here is thinking about these issues, but there are three of us who are particularly involved because of our job responsibilities. And we’ve discovered that each of us has a personal method for forecasting.
Partner Number One is heavily, almost exclusively involved in sales, and has constant contact with clients and potential clients who are trying to articulate their research needs. Partner Number Two monitors the new products and services that our competitors and the industry as a whole are introducing, paying particular attention to successful leading-edge competitors. And Partner Number Three monitors emerging technology and social trends and tries to infer their likely impacts.
And guess what? Partner Number One says that, as near as can be told, tomorrow is going to be a lot like today. Although there is a lot of discussion about big changes online, at conferences, in speeches, and in trade publications, the projects that clients need today and expect to need in the immediate future are much the same as they’ve been in the recent past. Timelines are shorter and budgets may be tighter, but tomorrow’s weather looks like today’s.
Partner Number Two sees a lot of new product and service activity going on, and notices what seem to be some really amazing storms and lightning bolts as new firms with new offerings post double-digit growth rates year upon year while others seem to explode only to fizzle. Some amazing things are announced and then never heard from again. It seems like we can look around us on the map, but that it’s really hard to know which direction is “upstream” from where we’re located. It’s hard to tell if the weather being experienced elsewhere on the industry map will travel toward us or away from us.
Partner Number Three sometimes seems to detect solid trends. There really seem to be some clear trends in technology – both the technology that our client businesses are adopting and the technology that consumers are using. Some trends in consumer communication technology seem especially clear, and if data collection will play any role in our future then we can base that part of our forecast model on them. But other areas, particularly the “information ecology” of businesses, seem roiled up and hard to read. We have some pieces of a prediction model, but our algorithm for forecasting still needs a lot of work. It’s not clear that we’re anywhere near the point I witnessed back in college when the model first beat my mom’s forecasting approach.
I find myself torn between algorithms. Like the weather, many businesses have had one day follow another with little material change for long periods – until change overtakes them like a summer squall. Been to a bookstore lately? Just because an innovation lit up the landscape somewhere doesn’t necessarily mean that the same thing would happen elsewhere, or that the market would support two, three, or a dozen similar offerings. Maybe a competitor’s success is due to local conditions – like updrafts that spawn tornadoes on the Plains but almost never come east to Chicago. And although the trends seem to point toward weather systems coming in soon, the forecast model isn’t any better at this point than looking out the window and expecting more of the same.
So we try a little of each forecaster’s method, experimenting with the trends and borrowing from competitors’ successes, while finding today’s weather still largely unchanged from yesterday’s.