November 8th, 2012
By Walt Dickie, Executive Vice President
Tuesday’s election is being hailed as “The Triumph of the Nerds.” Barack Obama won the presidential election, but Nate Silver won the war over how we understand the world.
The traditional pundits were on TV, in the papers and blogs, interpreting what they were hearing and feeling. Peggy Noonan:
“Something old is roaring back.” … “people (on a Romney rope line) wouldn’t let go of my hand” … “something is moving with evangelicals … quiet, unreported and spreading” … “the Republicans have the passion now, the enthusiasm. … In Florida a few weeks ago I saw Romney signs, not Obama ones. From Ohio I hear the same. From tony Northwest Washington, D.C., I hear the same.”
On the other side were the Moneyball data nerds, with Nate Silver carrying their standard:
“Among 12 national polls published on Monday, Mr. Obama led by an average of 1.6 percentage points. Perhaps more important is the trend in the surveys. On average, Mr. Obama gained 1.5 percentage points from the prior edition of the same polls, improving his standing in nine of the surveys while losing ground in just one. … Because these surveys had large sample sizes, the trend is both statistically and practically meaningful.”
The morning after, Paul Bradshaw posted that the US election was a wake-up call for data-illiterate journalists – the pundits – who “evaluate, filter, and order (information) through the rather ineffable quality alternatively known as ‘news judgment,’ ‘news sense,’ or ‘savvy.’”
Bradshaw, and the blogger Mark Coddington, whom he quotes, look beyond the question of which camp “won” or “lost” the election, and see an epistemological revolution in reporting the news:
Silver’s process — his epistemology — is almost exactly the opposite of (traditional punditry): “Where political journalists’ information is privileged, his is public, coming from poll results that all the rest of us see, too. Where political journalists’ information is evaluated through a subjective and nebulous professional/cultural sense of judgment, his evaluation is systematic and scientifically based. It involves judgment, too, but because it’s based in a scientific process, we can trace how he applied that judgment to reach his conclusions.”
But this blog post is about marketing research, not journalism, although as I’ve argued before the two fields have a lot in common.
When I read Bradshaw yesterday morning, I could hardly help re-writing his observations, since he could easily have been talking about the traditional approach to qualitative analysis. Here’s my re-write:
Qualitative analysts get access to information directly from consumers, then evaluate, filter, and order it through their “judgment,” “sense,” or “savvy.” This is how qualitative analysts say to their clients (and to themselves), “This is why you can trust what we say we know — because we found it out through this process.”
Journalistic intuition suffered a severe blow Tuesday, though I doubt it will prove fatal. The data-free intuition of focus group moderators is getting hammered by Silver-esque data-driven analysis, but it hasn’t succumbed yet, either.
Still, I have to wonder if the writing is on the wall. I’ll leave the last word to Bradshaw:
Journalists who professed to be political experts were shown to be well connected, well-informed perhaps, but – on the thing that ultimately decided the result: how people were planning to vote – not well educated. They were left reporting opinions, while Nate Silver and others reported research.
October 26th, 2012
By Walt Dickie, Senior Vice President
If you’ve ever read anything about SETI, the search for extraterrestrial intelligence, you’ve probably heard of the Drake Equation—well, you might have heard about it on Star Trek or The Big Bang Theory instead! Formulated by the astronomer Frank Drake, it estimates the number of detectable civilizations in our galaxy and is one of the foundations of the whole enterprise. SETI came into being when Drake, whose Project Ozma was the first systematic search for alien radio signals, addressed a meeting at the Green Bank National Radio Astronomy Observatory in 1960 and offered an estimate of the odds that the project could succeed. The Drake Equation is justly famous as one of the first attempts to put the possible existence of little green men into mathematical perspective.
I was in high school when I first heard about SETI and the Drake Equation. For no good reason, I’ve often found myself musing about it, which is why it recently popped into my mind again when one of my partners was getting ready to moderate a discussion at an MR conference.
The conference, like most MR conferences these days, was heavy on presentations about New Methods and The Future of Marketing Research. Needless to say, not a few of the conference presenters agreed with almost every blogger in the industry that Major Change is Just Around the Corner and that the Future For Every Established MR Company Is Bleak.
My partner’s role at the conference was to moderate a discussion between the audience and one of the keynote speakers, and he was preparing by reviewing the speaker’s presentation, a copy of which he’d sent me. We were exchanging emails about the issues in the talk and the potential questions they raised when it occurred to me that MR needed its own version of the Drake Equation.
Just as the early SETI community needed some estimate of the likelihood of contacting alien life, the current MR community needs an estimate of the likelihood of being superseded by new research technology. It’s almost all we talk about these days, and, being mathematically inclined folks, we deserve a calculation of our odds.
So here, with apologies to Frank Drake, I propose the MR version of the Drake Equation as a contribution to SSRT, the Search for Superior Research Technology.
The Drake Equation
N = R* × fp × ne × fℓ × fi × fc × L
SETI: N = the number of civilizations in our galaxy with which communication might be possible
SSRT: N = The current number of competitors capable of putting your company out of business
| Variable | SETI | SSRT |
|---|---|---|
| R* | The average annual rate of star formation in our galaxy | The average annual rate at which significant new approaches for understanding some aspect of human behavior or thought appear |
| fp | The fraction of those stars that have planets | The fraction of those approaches that are applicable to the design/delivery/communication of consumer products or services |
| ne | The average number of planets that can potentially support life, per star that has planets | The fraction of the above that depend on data/inputs that can be collected using conceivable, deliverable, socially and ethically acceptable technology |
| fℓ | The fraction of the above that actually go on to develop life at some point | The fraction of the above that are commercialized at some point |
| fi | The fraction of the above that actually go on to develop intelligent life | The fraction of the above that are faster and/or less expensive than your company’s offerings |
| fc | The fraction of civilizations that develop a technology that releases detectable signs of their existence into space | The fraction of the above that will produce results that are more actionable/effective/predictive than your company’s offerings |
| L | The length of time for which such civilizations release detectable signals into space | The length of time that those approaches will be seen as valid/interesting/relevant as the basis for commercial products/services |
Some Comments on my Estimates:
R* I think my estimate of truly unique, significant new approaches coming along every two years may be generous. Not that new approaches or technologies don’t appear more often than that, but most of these are minor wrinkles on existing approaches – better ways to do something that’s already being done. If you’re a technological optimist you might go for more frequent discoveries. I think that one a year would be an optimistic estimate, but feel free to enter your own.
fp Almost anything – maybe not quite anything – that’s applicable to humans is applicable in some non-trivial way to consumer products and services. Optimistic estimate: .99
ne On the other hand, some of the new things that come along require approaches that either aren’t technically feasible, at least at scale, within any foreseeable future or would never pass a social/ethical or possibly legal challenge. I’ll allow that almost any technical obstacle can be overcome, but I’m not so sure about the social issues. I’d hesitate to make this a certainty under even the most optimistic scenario. Optimistic estimate: .9
fℓ With the rise of crowd sourcing augmenting venture capital and other ways to finance new business ventures, and entrepreneurial enthusiasm apparently being boundless, pretty much everything that can be commercialized will be, at some point. Optimistic estimate: 1
fi Not everything is quick and not everything can be made cheap. I scored this one a toss-up. If you believe that advances in technology will eventually bring down any conceivable cost, your optimistic estimate would be 1.
fc On the other hand, a lot of current technologies have already been pretty much optimized and aren’t advancing anymore, so something really new has a fair chance of bringing new insights to the party. That’s 2:1 in favor of the new stuff in my book. But it’s possible to reason that any legitimate theoretical advance will inevitably produce some significant new insight. Optimistic estimate: 1
L There is no useful data on the rate at which new basic approaches appear in the sciences or technology. (This question might be phrased in terms of the frequency of paradigm shifts, about which there is no consensus.) I went with my gut feeling that after about 25 years paradigms begin to be seen as played out. I think that an optimistic estimate might be 2-4 times longer than this.
Using my estimates, there are probably 2-3 technologies capable of putting your company out of business by undercutting the approaches your business is based on; using what I think are the most optimistic (pessimistic?) estimates for every variable in the SSRT Equation, there are somewhere between about 45 and 90.
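The product itself is easy to reproduce. Here is a quick Python sketch using only the optimistic values quoted in the comments above (one significant new approach per year, fp = 0.99, ne = 0.9, the remaining fractions at 1, and a paradigm lifetime of 2× to 4× 25 years); the variable names are mine, and the inputs are of course only the rough guesses discussed above:

```python
# Sketch of the SSRT product using the optimistic estimates from the comments.
def ssrt(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    # N = number of competitors capable of putting your company out of business
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

low = ssrt(r_star=1, f_p=0.99, n_e=0.9, f_l=1, f_i=1, f_c=1, lifetime=50)
high = ssrt(r_star=1, f_p=0.99, n_e=0.9, f_l=1, f_i=1, f_c=1, lifetime=100)
print(low, high)  # roughly 45 and 89, matching the "about 45 and 90" range
```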
By the way, although Drake’s original values yield an estimate of 10 advanced civilizations in the galaxy, the consensus estimates developed at the first SETI conference yielded something between 1,000 and 100,000,000. I wonder whether a reasonable argument can be made, using the SETI and SSRT versions of the equation together, that not only are there plenty of companies capable of putting yours out of business, but some of them are or will be run by aliens.
October 8th, 2012
By Bob Relihan, Senior Vice President
Hardly a day passes when an article about the fallibility of marketing research does not cross my desk. The latest is from Forbes, and I was compelled to read it by its somewhat incendiary title, “Why So Much Market Research Sucks.” Roger Dooley makes the typical arguments, although they are couched in the context of the strengths of neuromarketing. You have heard them. Consumers can’t explain why they do or prefer anything. They rationalize; they can look only backward, not forward.
There are always horror stories of findings that made little sense but were “followed” nonetheless. These problems lie not with the consumers or the “research,” but with the marketers and the researchers. They want the research to tell them what to do; they don’t want to use it to stimulate their thinking and help them make good decisions.
Dooley says surveys are fine for simple behavioral questions and little else. “If you want to get the real story on the behavior of your customers, readers, etc., don’t rely on self-reported data. While such data can be fine for simple facts, like, ‘Did you eat breakfast today?’ it will rarely answer questions like, ‘Why do you prefer Grey Goose vodka?’” But the fact is, it will.
A good researcher and listener can ask a Grey Goose partisan, “Tell me everything you can about vodka and drinking vodka.” And, she will respond with a laundry list of associations. The researcher will ask the same question of a Belvedere fan and generate another list. From the differences between those two lists (really, a sufficient number of similar lists) the savvy researcher will be able to infer why Grey Goose drinkers prefer that brand. Research, after all, is as much interpretation and analysis as it is data.
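To make the mechanics of that comparison concrete, here is a toy Python sketch. The association lists below are entirely invented for illustration, not actual respondent data; the point is only that associations shared by both camps drop out, while brand-specific ones surface as candidate explanations for the preference:

```python
from collections import Counter

# Invented association lists for illustration: nothing here comes from real
# respondents, and all brand attributes are hypothetical.
grey_goose_lists = [["smooth", "French", "status", "gift"],
                    ["French", "premium", "smooth"],
                    ["status", "smooth", "nightclub"]]
belvedere_lists = [["Polish", "rye", "authentic", "smooth"],
                   ["authentic", "Polish", "premium"],
                   ["rye", "smooth", "authentic"]]

def distinctive(own, other):
    """Associations volunteered for one brand but never for the other,
    ordered by how often they came up."""
    own_counts = Counter(word for lst in own for word in lst)
    other_words = {word for lst in other for word in lst}
    return [word for word, _ in own_counts.most_common() if word not in other_words]

gg_only = distinctive(grey_goose_lists, belvedere_lists)
print(gg_only)  # shared terms like "smooth" drop out; brand-specific ones remain
```

In practice the analyst interprets these differences rather than reading them off mechanically, which is the "interpretation and analysis" half of the argument above.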
As this sanctimonious defense of traditional research was forming in my mind, a conflicting vision entered. We have all seen the new IBM “Smarter Marketing” ads targeting CMOs with the message of customer analytics. The “customer is the new boss.” Through comments, social networks, and reviews, she tells the company “what to make, what it’s made from, how it’s shipped, and the way it’s sold.” The customer has immediate impact on the company, and the company can respond in near real-time.
Now, this throws the model of traditional research into disarray because that paradigm is built on time. From the “Market Research Sucks” perspective, consumers cannot predict what they will do in the future based on their rationalizations of past behavior. From my ideal perspective, a sensitive analyst can infer future behavior based upon consumers’ descriptions of categories and past behavior.
Fine. But, what if a marketer is able to offer a consumer a red dress right after she tweets that she just “loves red dresses”? Her tweet may be just as much a rationalization as it would be had she told it to an interviewer, but in the moment it has emotional weight and validity. Six months hence, she might say “the dress isn’t for me,” but in the moment it feels like the marketer is talking right to her, and the dress is just what she wanted.
And, no more need for research to mediate time.
So, in this new world of immediate communication between marketers and customers, market research will still have a role in explaining the big trends, the big social movements. It will provide strategic insight that can drive large-scale planning. But, on the day-to-day tactical level, it may simply be the conduit of the marketer/consumer conversation, aggregating and synthesizing all that is said.
September 27th, 2012
By Bob Relihan, Senior Vice President
If you want to understand consumers, you have to know how they communicate. Pew has just released a report that is another bit of evidence that people are communicating more fluidly and less linearly. In other words, writing is being displaced, at least partly, by non-verbal means.
Pew finds that 46% of internet users post original photos or videos online and that 41% post photos or videos they find elsewhere on the internet. A majority, 56%, do one or the other, and a third of internet users do both. To be sure, some of this activity is no different from the sharing of vacation photos that has gone on since the first Brownie. But the ubiquity and frequency of photo sharing makes it a normal and expected form of behavior and communication.
When my niece posts a picture on Facebook of a dog and a cat sleeping together, she is certainly saying, “Look at this; aren’t they cute?” But by displaying that picture publicly where all her friends can see it, she has created a badge. The picture speaks to her feelings, beliefs and values. Moreover, she apparently feels no need to explain the values communicated in the picture. I am guessing that she thinks they are self-evident. I am also guessing that she actually could not explain them fully.
The more people create visual badges for themselves on Facebook, Pinterest, Tumblr, and the like, the less willing and able they will be to articulate the meanings and values those badges express.
This trend has profound implications for those of us who wish to understand what consumers communicate.
- If we wish to engage consumers and provide them with an opportunity to express what they believe or feel about their lives and our products, we will need to provide them with a space to express themselves visually. Simply asking questions with room for either structured or unstructured responses will not be sufficient.
- Visual communications will be the “new normal.” Those of us, and I am one, who have tacked projective exercises onto our group interviews in an effort to “dig deeper” will need to recognize that these visual activities may well be the first and only shovels available. They are not extra; they are central.
- And, if consumers are communicating visually rather than verbally, we need to understand the meaning of the different badges and images they use. The more consumers use these images, the more these meanings will be unique and less susceptible to being “translated” into conventional language. If I want to explain to you what my niece is thinking, my only means may well be showing you that picture of the dog and cat.
This will be a new world of research, and I am looking forward to engaging it.
August 15th, 2012
By Shaili Bhatt
How do voters in the United States view the current state of the economy compared with four years ago? Which international and domestic public policy issues matter to voters today? Are race, religion and personal wealth viewed as motivators or barriers for certain voters to support one candidate over another?
Political research in the USA is conducted not only on the candidates themselves but very often on the issues that drive each campaign. As Election Day on Tuesday, November 6th grows closer, candidates will reinforce and defend their positions from the podium, fine-tuning their messages to reassure ardent supporters and reach undecided voters.
For the complete article published in QRCA Views, click here.
July 27th, 2012
By Walt Dickie, Executive Vice President
In June, the Pew Foundation published some very interesting data on cellphone-based internet use that packs some worrisome implications for a lot of online marketing research.
Some 88% of U.S. adults own a cell phone of some kind as of April 2012, and more than half of these cell owners (55%) use their phone to go online … 31% of these current cell internet users say that they mostly go online using their cell phone, and not using some other device such as a desktop or laptop computer. That works out to 17% of all adult cell owners who … use their phone for most of their online browsing (my emphasis).
Pew also finds that 5% of cell phone owners use their cell phones and some other device equally for online access, 33% mostly use some other device, although they also use their cell phones to get online, while 45% of cell phone owners don’t go online at all using their phones.
So, let’s do a little back-of-the-envelope calculation: based on these stats, how many cell phone users should we be finding in our general-population online survey samples?
We have to make some assumptions. Pew asks their questions in terms of the respondent’s device choice for “most” online access. Let’s say that “most online browsing” means something like 75% of all browsing. In other words, let’s estimate that the 17% of adult cell owners who “mostly” use their phones are actually doing so for about 75% of their browsing. Similarly, let’s assume that the 33% who “mostly” use some other device are actually using their phones 25% of the time. Finally, we’ll assume that those who split their online access equally between phones and other devices are splitting the time 50/50.
Using those numbers and Pew’s overall cell ownership data, we should expect .88*((.17*.75) + (.5*.05) + (.33*.25)) = 20.7% to show up as using cell phones in a general population sample.
If the people who “mostly” use another device actually use their cell phones for only 10% of their online access, then this proportion would drop to 16.3%. In the extreme case, in which people who “mostly” use cell phone for access do so only 51% of the time, and people who “mostly” use another device actually choose their cell phones only 1% of the time, we would still expect to see cell access making up about 10% of a general population sample.
So, based on Pew’s data, the incidence of cell phone access to general population surveys should be in the 10% to 20% range.
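The arithmetic is easy to check. Here is the same back-of-the-envelope calculation as a small Python sketch, with the usage splits (75/25 and so on) flagged as the assumptions they are:

```python
# Back-of-the-envelope share of browsing done on phones. Pew's figures: 88%
# of adults own a cell phone; of owners, 17% "mostly" browse on the phone,
# 5% split evenly, and 33% mostly use another device. The cell_frac and
# other_frac splits are this post's assumptions, not Pew data.
def cell_share(own=0.88, mostly_cell=0.17, equal=0.05, mostly_other=0.33,
               cell_frac=0.75, other_frac=0.25):
    return own * (mostly_cell * cell_frac + equal * 0.5 + mostly_other * other_frac)

print(f"{cell_share():.1%}")                                 # base case, ~20.7%
print(f"{cell_share(other_frac=0.10):.1%}")                  # ~16.3%
print(f"{cell_share(cell_frac=0.51, other_frac=0.01):.1%}")  # extreme case, ~10.1%
```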
If that sounds problematic, the trend data that Pew offers seems even more so.
Pew doesn’t give tracking data on “cell mostly” users but they do give data on the growth of cell-based online access overall. It’s not unreasonable to assume that the “cell mostly” segment will grow at roughly the same rate as cell access as a whole. Here’s Pew’s data re-drawn and projected forward.
Pew’s data shows that phone-based internet access is growing at just about 10% per year. At that rate essentially 100% of a gen pop sample “should” be using a mobile device in about 10 years. If MR samples continue to under-represent people who access the internet via cell phone by 65% to 75%, then the “standard” MR sample sources will shrink by a comparable amount.
Of course, this is a crude estimate – whatever happens, the trend line won’t be linear – and ten years is not tomorrow.
But still, assuming that these numbers are anywhere inside the ballpark, this implies some big problems. We need to know what is keeping cell users away from online MR surveys, and we need to find ways of changing our approaches to make our research more amenable to mobile access.
Pew’s report doesn’t directly address the question of how to do this, of course, but it does have some strong hints about what might be involved in the reasons given for choosing cell phones as web access devices.
“Cell mostly” users say that their cell phone is a “simpler, more effective choice for going online” compared to other available options (18%); 7% say that they “do mostly basic activities when they go online”; and 6% “find their cell phone to be easier to use than a traditional computer.”
Would these people consider marketing research surveys simple and basic? How long can an online activity take and still be simple and basic? How complex can it be? What kind of engagement can be involved? Is going through a battery of a dozen or so attributes and rating each one on a 1-to-something scale either simple or basic? Is viewing a succession of concept statements with accompanying images – over a wireless connection – simple and basic?
We know from many sources that cell phone use is dominated by short “sessions”: a quick text message, a visit to Google Maps for directions, checking Yelp to find a good restaurant nearby, a fast check of incoming email. This isn’t to say that people don’t sustain long periods of engagement – playing Angry Birds on the bus to work, reading the news, even reading a novel using the Kindle app. But many aspects of cell phones, from screen size to data plans to spotty coverage, encourage short bursts of use that generally don’t mesh well with anything resembling even a 10- or 20-minute questionnaire.
Although I don’t know for certain where the cell phone users that are missing from our general population samples have gone or why they’re not in our samples, I do have a hunch. They’re not in our samples because they’re not even in our world, which was built, sample panel on sample panel, river source on river source, on a PC/laptop model of online engagement and interaction. The “cell phone mostly” web users have simply moved on to something simpler and more basic.
I’ve blogged about this issue before and will again, I expect. There is a conflict between “marketing research” understood as “the collection of data designed for statistical analysis tailored to the needs of standardized corporate decision making procedures,” which is what drives a major portion of client MR activity, and marketing research defined as “collecting as much data relevant to marketing issues from as many sources and in as many modes as is possible via available technology.”
The incorporation of MR into corporate decision-making happened during an era when the technology at hand – phone and mall interviewing, then online surveys – created a certain style of research that demanded high focus and a fairly large time commitment from respondents. That kind of research is still quite possible, but its days may well be numbered.
July 19th, 2012
By Kat Figatner, Research Director
It can be challenging for clients to keep their teams involved in online qualitative studies. Unlike traditional focus groups, where clients are a captive audience soaking in the live research while snacking on peanut M&Ms, online studies can easily slip through the cracks of a busy schedule.
As a newer member of the online qual team, I am continually impressed by how C+R’s trifecta of stellar client service, operations, and research teams work seamlessly together to keep clients engaged in the research, thereby increasing the value they get out of it.
Our client service team sets up multiple touch points throughout a study:
- platform walk-throughs to demonstrate to clients how to interact with the online tools,
- mid-field “study halls” to watch the action unfold, and
- post-study debriefs to explore implications of the insights.
These steps guide clients throughout the journey of the online project so that they are immersed in the consumers’ lives alongside us.
Our operations team works behind the scenes to recruit and manage the respondents. By administering online screeners programmed internally, we are able to cast a wide net to find the right consumers who fit the target for each study. Once recruited, respondents are managed closely to make sure they fully participate in the research and produce illuminating data.
Finally, the crux of the trifecta is our analysts, who have an innate curiosity and passion for research. We design studies that both draw on traditional qualitative projective techniques and leverage the latest technology to create an interactive and robust discussion with consumers. Moderating and analyzing in teams means we elicit deeper responses from respondents and elevate the learnings to actionable insights.
C+R’s online qualitative team has grown exponentially in the past couple of years. Our trifecta of client service, operations, and research teams has convinced clients of the value of online qual research.
By Walt Dickie, Executive Vice President
One of the big events of my formative years happened around 1967 at MIT when the guys in the Earth Science Department announced that they could “predict” what the weather was like in Cambridge an hour earlier.
What actually happened was that a meteorological model had been developed that forecast the weather pretty accurately 24 hours in advance. The problem was that the model took a little over a day to execute on the mainframe, so by the time it issued a forecast it was “predicting” the weather we had just experienced. It was a geeky story, and everyone I knew thought it was pretty funny. A computer model had been developed that was as good at forecasting the weather as looking out the window.
But we all knew that it was a really big deal. With improvements to the algorithm and some advances in the hardware, it would only be a matter of time before the model would run in an hour, then a minute, and it would soon be capable of forecasting the weather not 24 hours in advance but 48 hours, or a week, a month … who could guess?
I’ve watched the weather my whole life because I’ve always been addicted to outdoor sports – I ride a bike almost daily outside of the three- or four-month period when the Chicago winter drives all but the insane indoors. I’ve been a skier since childhood, as has my wife, and we’ve passed that mania on to our kids, so that gives us a reason to follow the winter weather. And I was bitten by sailing when growing up on a New England lake, a mania that now takes me out on Lake Michigan and, again, has me poring over the weather sites.
Historically, there have only been a few ways to forecast future weather. Until very recently forecasters struggled to consistently beat the algorithm my mom used when I was a kid: “Tomorrow will be pretty much like today.” If you think about your local weather, you’ll probably see what we see here in Chicago – the weather mostly doesn’t change much from one day to the next, until it flips a switch and changes a lot. Fronts move through every few days – but in between, tomorrow is much like today. You might be right with that algorithm as often as 3 days out of 4, or even 4 out of 5. It was a struggle to develop a meteorological system better than that.
As recently as the early 20th century, “forecasters would chart the current set of observations, then look through a library of past maps to find the one that most resembled the new chart. Once you had found a reasonably similar map, you looked at how the past situation had evolved and based your forecast on that.” When the technology appeared giving forecasters a reasonably complete picture of today’s weather “upstream” from their location, they were able to adopt a variation on this technique and base tomorrow’s Chicago forecast on what was happening on the Great Plains today, rather than relying on an old map.
And then came the modelers’ breakthrough – the algorithm that forecast the weather an hour ago – and soon it was possible to base forecasts on scientific principles and mathematical calculation.
So, vastly simplified, the evolution of weather forecasting was this: predict that tomorrow will be like today, predict that tomorrow will be like it is today somewhere else, and, finally, calculate tomorrow’s weather by mathematically extrapolating the underlying physics of today into the future.
I’ve been thinking about this progression recently because C+R, like most MR firms these days, is spending an increasing amount of time trying to predict the future. The industry is changing; the economy is changing; maybe the entire global financial system is changing; technology is certainly changing. How do we navigate? How do we make business decisions for the future without some way of forecasting the future?
Everyone here is thinking about these issues, but there are three of us who are particularly involved because of our job responsibilities. And we’ve discovered that each of us has a personal method for forecasting.
Partner Number One is heavily, almost exclusively involved in sales, and has constant contact with clients and potential clients who are trying to articulate their research needs. Partner Number Two monitors the new products and services that our competitors and the industry as a whole are introducing, paying particular attention to successful leading-edge competitors. And Partner Number Three monitors emerging technology and social trends and tries to infer their likely impacts.
And guess what? Partner Number One says that, as near as can be told, tomorrow is going to be a lot like today. Although there is a lot of discussion about big changes online, at conferences, in speeches, and in trade publications, the projects that clients need today and expect to need in the immediate future are much the same as they’ve been in the recent past. Timelines are shorter and budgets may be tighter, but tomorrow’s weather looks like today’s.
Partner Number Two sees a lot of new product and service activity going on, and notices what seem to be some really amazing storms and lightning bolts as new firms with new offerings post double-digit growth rates year upon year while others seem to explode only to fizzle. Some amazing things are announced and then never heard from again. It seems like we can look around us on the map, but it’s really hard to know which direction is “upstream” from where we’re located. It’s hard to tell whether the weather being experienced elsewhere on the industry map will travel toward us or away from us.
Partner Number Three sometimes seems to detect solid trends. There really seem to be some clear trends in technology – both the technology that our client businesses are adopting and the technology that consumers are using. Some trends in consumer communication technology seem especially clear, and if data collection will play any role in our future, then we can base that part of our forecast model on them. But other areas, particularly the “information ecology” of businesses, seem roiled up and hard to read. We have some pieces of a prediction model, but our algorithm for forecasting still needs a lot of work. It’s not clear that we’re anywhere near the point I witnessed back in college when the model first beat my mom’s forecasting approach.
I find myself torn between algorithms. Like the weather, many businesses have had one day follow another with little material change for long periods – until change overtakes them like a summer squall. Been to a bookstore lately? Just because an innovation lit up the landscape somewhere doesn’t necessarily mean that the same thing would happen elsewhere, or that the market would support two, three, or a dozen similar offerings. Maybe a competitor’s success is due to local conditions – like updrafts that spawn tornadoes on the Plains but almost never come east to Chicago. And although the trends seem to point toward weather systems coming in soon, the forecast model isn’t any better at this point than looking out the window and expecting more of the same.
So we try a little of each forecaster's method, experimenting with the trends and borrowing from competitors' successes, while finding today's weather still largely unchanged from yesterday's.
By Walt Dickie, Executive Vice President
Having followed the dull roar of the MR commentariat on the future of marketing research (and having contributed to it in a minor way), I was fascinated by Adam Davidson’s article in last weekend’s New York Times Magazine, “Can Mom-and-Pop Shops Survive Extreme Gentrification?” Not because gentrification is terribly relevant to MR, but because of Davidson’s reaction to the news that some Mom-and-Pops still exist happily in heavily competitive, booming Greenwich Village.
Davidson, of NPR’s “Planet Money,” can be heard on “Morning Edition,” “All Things Considered” and “This American Life.” He’s well informed about economic issues, and both reliably insightful and entertaining. He also grew up in the Village when it was still “the Jane Jacobs ideal, a neighborhood crammed with small mom-and-pop stores.”
Now, of course, with Wall Street money coursing through New York’s veins, the “artists, weirdos and blue-collar families” that inhabited the Village of his childhood have been completely replaced by “guys in suits” and the Mom-and-Pops have been displaced by big, trendy names like “Marc Jacobs … Magnolia Bakery … Ralph Lauren, Jimmy Choo, (and) Burberry.”
You can see where this is going, right?
I was sitting on my back porch on Sunday morning, coffee in hand, idly riffling through the Times when I suddenly realized that Davidson’s tale of a trip back to the old neighborhood was giving off vibes about my own job and business. Mom-and-Pop marketing research firms have been disappearing lately into the acquisitive maws of both the multi-national organizations and the bankruptcy courts. Big name trendy firms, like Google, are suddenly seen around the old neighborhood. And the blogosphere is alight with a mixture of fear and cheerleading for the coming “disruptive” revolution.
So what was the secret of the successful Village Mom-and-Pops? Were they just the lucky ones that suddenly found their old-fashioned wares in demand? Were they somehow chic because they were retro, fashionable because they were so nonchalant about fashion?
When Davidson asked the owner of “the oldest Village business (he) could think of” how his business had changed, he found stasis: “It’s about the same … We’re not way richer or poorer … We’re about the same.” The owner of another old-line business, a tavern, “put it more bluntly. He’s surviving, he said, because he’s not an especially ambitious businessman.”
The best part of the article, for me, was Davidson’s response to the news that a business hadn’t changed or grown in a generation: “And this didn’t bother him.”
Adam Davidson, NPR economics guy, chronicler of the connected, digital, international economy, doesn’t get to talk a lot to guys who aren’t concerned with quarter-over-quarter growth, IPOs, or becoming billionaires. Guys who say stuff like this: “If I just cared about the money, I’d have closed a long time ago.” Guys who will keep the business running “as long as the place is covering the costs.”
“I wondered why Bowman, like her fellow proprietors, was disavowing economic theory and not trying to maximize her profits. Then I remembered one fascinating statistic about our economy. There are more than 27 million businesses in the United States. About a thousand are huge conglomerates seeking to increase profits. Another several thousand are small or medium-size companies seeking their big score. A vast majority, however, are what economists call lifestyle businesses. They are owned by people whose goal is to do what they like and to cover their nut. These surviving proprietors hadn’t merely been lucky. They loved their businesses so much that they found a way to hold on to them, even if it meant making bad business decisions. It’s a remarkable accomplishment in its own right.”
All of this made me wonder whether our current view of the MR industry is so focused on the prevailing worship of not only maximizing profits but doing so on a scale unthinkable even a generation ago that we're no longer looking at the full range of the data. It also made me wonder about Dunbar's number.
Robin Dunbar, a British anthropologist, proposed back in the '90s that there was a maximum group size beyond which social relationships could no longer be based purely on personal, individual relationships. Since then, Dunbar's number has become rather fashionable and influential. It's normally pegged at about 150 people.
"Mom and Pop" enterprises run by "lifestyle entrepreneurs" don't generally grow their workforces beyond Dunbar's number because, if they do, the "lifestyle" part is eroded by organizational issues. "Mom" and "Pop" find themselves spending more time and energy managing people than they can spend making shoes, selling coffee, tending bar, or … doing research.
Let's assume – just for the sake of making a guesstimate – that an MR company has 150 employees. Let's say their average salary is about $75K, taking into account everyone from the receptionist to the account people, the senior analysts, the operational folks, and the execs, and that rent, benefits and overhead add 50% to that. The gross margin for the business (sales minus direct project costs) is something like 30%, and Mom and Pop will keep the business running as long as they can cover their costs. At those rates they're going to need to do something like $55-56 million annually to keep the doors open.
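The guesstimate above can be sketched in a few lines of arithmetic. This is just the post's own back-of-envelope assumptions (150 people, $75K average salary, 50% overhead, 30% gross margin) written out, not data about any real firm:

```python
# Back-of-envelope break-even estimate for a hypothetical 150-person
# ("Dunbar-sized") MR shop, using the figures assumed in the post.

employees = 150
avg_salary = 75_000         # average across all roles
overhead_multiplier = 1.5   # rent, benefits, and overhead add 50%
gross_margin = 0.30         # sales minus direct project costs

# Annual fixed costs the business must cover.
fixed_costs = employees * avg_salary * overhead_multiplier

# Revenue needed so that revenue * gross_margin covers fixed costs.
break_even_revenue = fixed_costs / gross_margin

print(f"Fixed costs:        ${fixed_costs:,.0f}")         # $16,875,000
print(f"Break-even revenue: ${break_even_revenue:,.0f}")  # $56,250,000
```

Working the numbers through puts the break-even point in the mid-$50-millions annually.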
Which means that any MR shop smaller than that may well be a “Dunbar enterprise,” able to survive while ignoring the basic demand of modern economics – maximizing profits. They make poor blogging material – until an Adam Davidson writes a sentimental piece about them. “Nothing to see here. Move on” is not exactly SEO bait.
Maybe this is a good place to say that I’m not endorsing this approach. I came into MR over 30 years ago, and I’ve enjoyed the lifestyle of a Dunbar shop, but I’m also a technology junkie and a grow-and-change nerd. I find new methodologies, new communication technologies, new platforms, and new business models so much more interesting than the Mom-and-Pop lifestyle that I’d endorse their pursuit even if they didn’t also promise much profit or growth. That would be my personal lifestyle choice.
But I’m also a research guy with a social science background and I can’t ignore the countervailing view. Who is looking closely at the future of Dunbar enterprises in MR? What do the business models look like for businesses based on data collection and analysis as the neighborhood gentrifies? What customer segments will continue bringing their trade to the old line Village shops like McNulty’s Tea & Coffee Co., Imperial Vintner, Tavern on Jane, and the other Mom-and-Pops? If you know anyone researching this or blogging about it, I’d love to hear about it.
As Davidson observes: "Now the economics have changed. Any new operation needs deeper pockets and a stronger business plan, all of which will probably make it less interesting. … There are still some passionate people with exciting ideas who are making really bad — but entirely satisfying — business decisions."
June 21st, 2012
By Joy Boggio, Director of Online Qualitative Support
We are adopting new technologies so fast that what was cutting edge last year is passé this year. The Wondrous recently had a great post about technologies that will soon be obsolete. Think the TV remote or fax machines.
This reminded me of the debate over text analytics and verbatim management for online qualitative studies. The various TA software packages (Language Logic, Clarabridge) are said to move us into the frontier of the future by “machining” the findings that we have culled from our boards. This all sounds promising, and many of us assumed that this would save a moderator/analyst time and uncover insights buried beneath the vast amount of data that we no longer “live through.”
It seemed, at first glance, to be a great solution when we realized gleefully, "Hey, we have so much data!" Then, in the next breath, we realized, "OH! We have so much data!" Unlike traditional qualitative research, in which the moderators immerse themselves in the data as it is happening, the online qualitative moderator must sift through data that has been accumulating over days. We must find a way to juggle and make sense of it all just to find the nuggets of information.
But how can we identify those nuggets quickly and efficiently? At C+R Research, we have a seemingly overwhelming amount of data, so we have made many attempts to "machine" and organize it.
At TMTRE, many others also talked about their attempts at automating and coding these responses. Most have come to the same conclusion we have: you simply have to read the comments from the boards. The data set, while appearing to be tremendous, is still too small to get good results from any automated method of sorting or coding it. Many have tried Language Logic to categorize and NVivo to organize, but they both add time and almost always require a second analyst, which may cause the sub-text of the responses to be lost.
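The sample-size problem can be illustrated with a toy example. This is a hypothetical simulation (not any vendor's tool, and the code categories are made up): with a 25-respondent board, automated frequency counts spread across even a handful of themes leave each category with only a few mentions, so a shift of one or two comments swings a "result" by several percentage points:

```python
# Toy illustration of why automated coding is noisy on a typical
# 25-respondent bulletin board. Codes and responses are hypothetical.
import random
from collections import Counter

random.seed(0)  # make the simulation repeatable

codes = ["price", "taste", "packaging", "availability", "brand"]

# Simulate which theme each of 25 hypothetical comments touches on.
board = [random.choice(codes) for _ in range(25)]

counts = Counter(board)
for code, n in counts.most_common():
    print(f"{code:13s} {n:2d} mentions  ({n / 25:.0%})")

# With counts this small, one or two comments move a category by
# 4-8 points -- far too noisy to treat the frequencies as findings.
```

At a few hundred or more respondents the same counts start to stabilize, which is consistent with the observation above that automation earns its keep only on larger, multi-phase projects.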
Automating the work does seem to have a place when you are dealing with multi-phase projects or when you are talking to a few hundred or more respondents, but not so with an average bulletin board of 20-30 people. What was the overall consensus? “It’s QUAL, we shouldn’t strive to quantify it!”