September 27th, 2012
By Bob Relihan, Senior Vice President
If you want to understand consumers, you have to know how they communicate. Pew has just released a report that is another bit of evidence that people are communicating more fluidly and less linearly. In other words, writing is being displaced, at least partly, by non-verbal means.
Pew finds that 46% of internet users post original photos or videos online and that 41% post photos or videos they find elsewhere on the internet. A majority, 56%, do one or the other, and a third of internet users do both. To be sure, some of this activity is no different from the sharing of vacation photos that has gone on since the first Brownie. But the ubiquity and frequency of photo sharing make it a normal and expected form of behavior and communication.
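Incidentally, the "third who do both" squares with the other two figures by inclusion-exclusion. A quick consistency check, a minimal sketch using only the shares cited above:

```python
# Inclusion-exclusion on Pew's shares:
# (do both) = (post original) + (repost others') - (do either)
original, repost, either = 0.46, 0.41, 0.56
both = original + repost - either
print(f"{both:.0%} of internet users do both")  # -> 31%, i.e., about a third
```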
When my niece posts a picture on Facebook of a dog and a cat sleeping together, she is certainly saying, “Look at this; aren’t they cute?” But by displaying that picture publicly where all her friends can see it, she has created a badge. The picture speaks to her feelings, beliefs, and values. Moreover, she apparently feels no need to explain the values communicated in the picture. I am guessing that she thinks they are self-evident. I am also guessing that she actually could not explain them fully.
The more people create visual badges for themselves on Facebook, Pinterest, Tumblr, and the like, the less willing and able they will be to articulate the meanings and values those badges express.
This trend has profound implications for those of us who wish to understand what consumers communicate.
- If we wish to engage consumers and provide them with an opportunity to express what they believe or feel about their lives and our products, we will need to provide them with a space to express themselves visually. Simply asking questions with room for either structured or unstructured responses will not be sufficient.
- Visual communications will be the “new normal.” Those of us, and I am one, who have tacked projective exercises onto our group interviews in an effort to “dig deeper” will need to recognize that these visual activities may well be the first and only shovels available. They are not extra; they are central.
- And, if consumers are communicating visually rather than verbally, we need to understand the meaning of the different badges and images they use. The more consumers use these images, the more these meanings will be unique and less susceptible to being “translated” into conventional language. If I want to explain to you what my niece is thinking, my only means may well be showing you that picture of the dog and cat.
This will be a new world of research, and I am looking forward to engaging it.
August 15th, 2012
By Shaili Bhatt
How do voters in the United States view the current state of the economy compared with four years ago? Which international and domestic public policy issues matter to voters today? Are race, religion and personal wealth viewed as motivators or barriers for certain voters to support one candidate over another?
Political research in the USA is conducted not only on the candidates themselves but, very often, on the issues that drive each campaign. As Election Day on Tuesday, November 6th grows closer, candidates will reinforce and defend their positions from the podium, fine-tuning their messages to reassure ardent supporters and reach undecided voters.
For the complete article published in QRCA Views, click here.
July 27th, 2012
By Walt Dickie, Executive Vice President
In June, The Pew Foundation published some very interesting data on cellphone-based internet use that packs some worrisome implications for a lot of online marketing research.
Some 88% of U.S. adults own a cell phone of some kind as of April 2012, and more than half of these cell owners (55%) use their phone to go online … 31% of these current cell internet users say that they mostly go online using their cell phone, and not using some other device such as a desktop or laptop computer. That works out to 17% of all adult cell owners who … use their phone for most of their online browsing (my emphasis).
Pew also finds that 5% of cell phone owners use their cell phones and some other device equally for online access; 33% mostly use some other device, although they also use their cell phones to get online; and 45% of cell phone owners don’t go online at all using their phones.
So, let’s do a little back-of-the-envelope calculation: based on these stats, how many cell phone users should we be finding in our general-population online survey samples?
We have to make some assumptions. Pew asks their questions in terms of the respondent’s device choice for “most” online access. Let’s say that “most online browsing” means something like 75% of all browsing. In other words, let’s estimate that the 17% of adult cell owners who “mostly” use their phones are actually doing so for about 75% of their browsing. Similarly, let’s assume that the 33% who “mostly” use some other device are actually using their phones 25% of the time. Finally, we’ll assume that those who split their online access equally between phones and other devices are splitting the time 50/50.
Using those numbers and Pew’s overall cell ownership data, we should expect .88*((.17*.75) + (.05*.5) + (.33*.25)) = 20.7% of a general population sample to show up as using cell phones.
If the people who “mostly” use another device actually use their cell phones for only 10% of their online access, then this proportion would drop to 16.3%. In the extreme case, in which people who “mostly” use cell phones for access do so only 51% of the time, and people who “mostly” use another device actually choose their cell phones only 1% of the time, we would still expect to see cell access making up about 10% of a general population sample.
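For anyone who wants to test other assumptions, here’s the same arithmetic as a minimal sketch. The ownership and segment shares come from Pew via the figures above; the “mostly” proportions are the assumptions being varied:

```python
# Expected share of a gen-pop online sample that browses on a cell phone.
CELL_OWNERSHIP = 0.88  # share of U.S. adults owning a cell phone (Pew)
CELL_MOSTLY, EQUAL_SPLIT, OTHER_MOSTLY = 0.17, 0.05, 0.33  # segments of cell owners (Pew)

def expected_incidence(mostly_share, other_share, equal_share=0.50):
    """Each *_share = assumed fraction of that segment's browsing done on a phone."""
    return CELL_OWNERSHIP * (CELL_MOSTLY * mostly_share
                             + EQUAL_SPLIT * equal_share
                             + OTHER_MOSTLY * other_share)

print(f"{expected_incidence(0.75, 0.25):.1%}")  # baseline scenario -> 20.7%
print(f"{expected_incidence(0.75, 0.10):.1%}")  # lower 'other' use -> 16.3%
print(f"{expected_incidence(0.51, 0.01):.1%}")  # extreme case     -> 10.1%
```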
So, based on Pew’s data, the incidence of cell phone access to general population surveys should be in the 10% to 20% range.
If that sounds problematic, the trend data that Pew offers seems even more so.
Pew doesn’t give tracking data on “cell mostly” users but they do give data on the growth of cell-based online access overall. It’s not unreasonable to assume that the “cell mostly” segment will grow at roughly the same rate as cell access as a whole. Here’s Pew’s data re-drawn and projected forward.
Pew’s data shows that phone-based internet access is growing at just about 10% per year. At that rate, essentially 100% of a gen pop sample “should” be using a mobile device in about 10 years. If MR samples continue to under-represent people who access the internet via cell phone by 65% to 75%, then the “standard” MR sample sources will shrink by a comparable amount.
Of course, this is a crude estimate – whatever happens, the trend line won’t be linear – and ten years is not tomorrow.
But still, assuming that these numbers are anywhere near the ballpark, this implies some big problems. We need to know what is keeping cell users away from online MR surveys, and we need to find ways of changing our approaches to make our research more amenable to mobile access.
Pew’s report doesn’t directly address the question of how to do this, of course, but it does have some strong hints about what might be involved in the reasons given for choosing cell phones as web access devices.
“Cell mostly” users say that their cell phone is a “simpler, more effective choice for going online” compared to other available options (18%); 7% say that they “do mostly basic activities when they go online”; and 6% “find their cell phone to be easier to use than a traditional computer.”
Would these people consider marketing research surveys simple and basic? How long can an online activity take and still be simple and basic? How complex can it be? What kind of engagement can be involved? Is going through a battery of a dozen or so attributes and rating each one on a 1-to-something scale either simple or basic? Is viewing a succession of concept statements with accompanying images – over a wireless connection – simple and basic?
We know from many sources that cell phone use is dominated by short “sessions”: a quick text message, a visit to Google Maps for directions, checking Yelp to find a good restaurant nearby, a fast check of incoming email. This isn’t to say that people don’t sustain long periods of engagement – playing Angry Birds on the bus to work, reading the news, even reading a novel using the Kindle app. But many aspects of cell phones, from screen size to data plans to spotty coverage, encourage short bursts of use that generally don’t mesh well with anything resembling even a 10- or 20-minute questionnaire.
Although I don’t know for certain where the cell phone users that are missing from our general population samples have gone or why they’re not in our samples, I do have a hunch. They’re not in our samples because they’re not even in our world, which was built, sample panel on sample panel, river source on river source, on a PC/laptop model of online engagement and interaction. The “cell phone mostly” web users have simply moved on to something simpler and more basic.
I’ve blogged about this issue before and will again, I expect. There is a conflict between “marketing research” understood as “the collection of data designed for statistical analysis tailored to the needs of standardized corporate decision making procedures,” which is what drives a major portion of client MR activity, and marketing research defined as “collecting as much data relevant to marketing issues from as many sources and in as many modes as is possible via available technology.”
The incorporation of MR into corporate decision-making happened during an era when the technology at hand – phone and mall interviewing, then online surveys – created a certain style of research that demanded high focus and a fairly large time commitment from respondents. That kind of research is still quite possible, but its days may well be numbered.
July 19th, 2012
By Kat Figatner, Research Director
It can be challenging for clients to keep their teams involved in online qualitative research. Unlike traditional focus groups, where clients are a captive audience soaking in the live research while snacking on peanut M&Ms, online studies can easily slip through the cracks of a busy schedule.
As a newer member of the online qual team, I am continually impressed by how C+R’s trifecta of stellar client service, operations, and research teams work seamlessly together to keep clients engaged in the research, thereby increasing the value they get out of it.
Our client service team sets up multiple touch points throughout a study:
- platform walk-throughs to demonstrate to clients how to interact with the online tools,
- mid-field “study halls” to watch the action unfold, and
- post-study debriefs to explore implications of the insights.
These steps guide clients throughout the journey of the online project so that they are immersed in the consumers’ lives alongside us.
Our operations team works behind the scenes to recruit and manage the respondents. By administering online screeners programmed internally, we are able to cast a wide net to find the right consumers who fit the target for each study. Once recruited, respondents are managed closely to make sure they fully participate in the research and produce illuminating data.
Finally, the crux of the trifecta is our analysts, who have an innate curiosity and passion for research. We design studies that both draw on traditional qualitative projective techniques and leverage the latest technology to create an interactive and robust discussion with consumers. Moderating and analyzing in teams means we elicit deeper responses from respondents and elevate the learnings to actionable insights.
C+R’s online qualitative team has grown exponentially in the past couple of years. Our trifecta of client service, operations, and research teams has won clients over to the value of online qual research.
By Walt Dickie, Executive Vice President
One of the big events of my formative years happened around 1967 at MIT when the guys in the Earth Science Department announced that they could “predict” what the weather was like in Cambridge an hour earlier.
What actually happened was that a meteorological model had been developed that forecast the weather pretty accurately 24 hours in advance. The problem was that the model took a little over a day to execute on the mainframe, so by the time it issued a forecast it was “predicting” the weather we had just experienced. It was a geeky story, and everyone I knew thought it was pretty funny. A computer model had been developed that was as good at forecasting the weather as looking out the window.
But we all knew that it was a really big deal. With improvements to the algorithm and some advances in the hardware, it would only be a matter of time before the model would run in an hour, then a minute, and it would soon be capable of forecasting the weather not 24 hours in advance but 48 hours, or a week, a month … who could guess?
I’ve watched the weather my whole life because I’ve always been addicted to outdoor sports – I ride a bike almost daily outside of the three- or four-month period when the Chicago winter drives all but the insane indoors. I’ve been a skier since childhood, as has my wife, and we’ve passed that mania on to our kids, so that gives us a reason to follow the winter weather. And I was bitten by sailing when growing up on a New England lake, which now takes me out on Lake Michigan and, again, has me poring over the weather sites.
Historically, there have only been a few ways to forecast future weather. Until very recently forecasters struggled to consistently beat the algorithm my mom used when I was a kid: “Tomorrow will be pretty much like today.” If you think about your local weather, you’ll probably see what we see here in Chicago – the weather mostly doesn’t change much from one day to the next, until it flips a switch and changes a lot. Fronts move through every few days – but in between, tomorrow is much like today. You might be right with that algorithm as often as 3 days out of 4, or even 4 out of 5. It was a struggle to develop a meteorological system better than that.
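My mom’s rule is, in effect, a persistence forecast, and it’s easy to see how well it does whenever the weather doesn’t flip. A minimal sketch, scored against an invented run of days (not real Chicago data):

```python
# Persistence forecast: predict that tomorrow will be just like today.
days = ["sunny", "sunny", "sunny", "sunny", "rain", "rain", "rain",
        "sunny", "sunny", "sunny", "sunny", "cloudy", "cloudy"]

hits = sum(today == tomorrow for today, tomorrow in zip(days, days[1:]))
print(f"{hits / (len(days) - 1):.0%}")  # -> 75%, right except when a front moves through
```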
As recently as the early 20th century, “forecasters would chart the current set of observations, then look through a library of past maps to find the one that most resembled the new chart. Once you had found a reasonably similar map, you looked at how the past situation had evolved and based your forecast on that.” When the technology appeared giving forecasters a reasonably complete picture of today’s weather “upstream” from their location, they were able to adopt a variation on this technique and base tomorrow’s Chicago forecast on what was happening on the Great Plains today, rather than relying on an old map.
And then came the modelers’ breakthrough – the algorithm that forecast the weather an hour ago – and soon it was possible to base forecasts on scientific principles and mathematical calculation.
So, vastly simplified, the evolution of weather forecasting was this: predict that tomorrow will be like today, predict that tomorrow will be like it is today somewhere else, and, finally, calculate tomorrow’s weather by mathematically extrapolating the underlying physics of today into the future.
I’ve been thinking about this progression recently because C+R, like most MR firms these days, is spending an increasing amount of time trying to predict the future. The industry is changing; the economy is changing; maybe the entire global financial system is changing; technology is certainly changing. How do we navigate? How do we make business decisions for the future without some way of forecasting the future?
Everyone here is thinking about these issues, but there are three of us who are particularly involved because of our job responsibilities. And we’ve discovered that each of us has a personal method for forecasting.
Partner Number One is heavily, almost exclusively involved in sales, and has constant contact with clients and potential clients who are trying to articulate their research needs. Partner Number Two monitors the new products and services that our competitors and the industry as a whole are introducing, paying particular attention to successful leading-edge competitors. And Partner Number Three monitors emerging technology and social trends and tries to infer their likely impacts.
And guess what? Partner Number One says that, as near as can be told, tomorrow is going to be a lot like today. Although there is a lot of discussion about big changes online, at conferences, in speeches, and in trade publications, the projects that clients need today and expect to need in the immediate future are much the same as they’ve been in the recent past. Timelines are shorter and budgets may be tighter, but tomorrow’s weather looks like today’s.
Partner Number Two sees a lot of new product and service activity going on, and notices what seem to be some really amazing storms and lightning bolts as new firms with new offerings post double-digit growth rates year upon year while others seem to explode only to fizzle. Some amazing things are announced and then never heard from again. It seems like we can look around us on the map, but that it’s really hard to know which direction is “upstream” from where we’re located. It’s hard to tell if the weather being experienced elsewhere on the industry map will travel toward us or away from us.
Partner Number Three sometimes seems to detect solid trends. There really seem to be some clear trends in technology – both the technology that our client businesses are adopting and the technology that consumers are using. Some trends in consumer communication technology seem especially clear, and if data collection will play any role in our future then we can base that part of our forecast model on them. But other areas, particularly the “information ecology” of businesses, seem roiled up and hard to read. We have some pieces of a prediction model, but our algorithm for forecasting still needs a lot of work. It’s not clear that we’re anywhere near the point I witnessed back in college when the model first beat my mom’s forecasting approach.
I find myself torn between algorithms. Like the weather, many businesses have had one day follow another with little material change for long periods – until change overtakes them like a summer squall. Been to a bookstore lately? Just because an innovation lit up the landscape somewhere doesn’t necessarily mean that the same thing would happen elsewhere, or that the market would support two, three, or a dozen similar offerings. Maybe a competitor’s success is due to local conditions – like updrafts that spawn tornadoes on the Plains but almost never come east to Chicago. And although the trends seem to point toward weather systems coming in soon, the forecast model isn’t any better at this point than looking out the window and expecting more of the same.
So we try a little of each forecaster’s method, experimenting with the trends and borrowing from competitors’ successes, while finding today’s weather still largely unchanged from yesterday’s.
By Walt Dickie, Executive Vice President
Having followed the dull roar of the MR commentariat on the future of marketing research (and having contributed to it in a minor way), I was fascinated by Adam Davidson’s article in last weekend’s New York Times Magazine, “Can Mom-and-Pop Shops Survive Extreme Gentrification?” Not because gentrification is terribly relevant to MR, but because of Davidson’s reaction to the news that some Mom-and-Pops still exist happily in heavily competitive, booming Greenwich Village.
Davidson, of NPR’s “Planet Money,” can be heard on “Morning Edition,” “All Things Considered” and “This American Life.” He’s well informed about economic issues, and both reliably insightful and entertaining. He also grew up in the Village when it was still “the Jane Jacobs ideal, a neighborhood crammed with small mom-and-pop stores.”
Now, of course, with Wall Street money coursing through New York’s veins, the “artists, weirdos and blue-collar families” that inhabited the Village of his childhood have been completely replaced by “guys in suits” and the Mom-and-Pops have been displaced by big, trendy names like “Marc Jacobs … Magnolia Bakery … Ralph Lauren, Jimmy Choo, (and) Burberry.”
You can see where this is going, right?
I was sitting on my back porch on Sunday morning, coffee in hand, idly riffling through the Times when I suddenly realized that Davidson’s tale of a trip back to the old neighborhood was giving off vibes about my own job and business. Mom-and-Pop marketing research firms have been disappearing lately into the acquisitive maws of both the multi-national organizations and the bankruptcy courts. Big name trendy firms, like Google, are suddenly seen around the old neighborhood. And the blogosphere is alight with a mixture of fear and cheerleading for the coming “disruptive” revolution.
So what was the secret of the successful Village Mom-and-Pops? Were they just the lucky ones that suddenly found their old-fashioned wares in demand? Were they somehow chic because they were retro; fashionable because they were so nonchalant about fashion?
When Davidson asked the owner of “the oldest Village business (he) could think of” how his business had changed, he found stasis: “It’s about the same … We’re not way richer or poorer … We’re about the same.” The owner of another old-line business, a tavern, “put it more bluntly. He’s surviving, he said, because he’s not an especially ambitious businessman.”
The best part of the article, for me, was Davidson’s response to the news that a business hadn’t changed or grown in a generation: “And this didn’t bother him.”
Adam Davidson, NPR economics guy, chronicler of the connected, digital, international economy, doesn’t get to talk a lot to guys who aren’t concerned with quarter-over-quarter growth, IPOs, or becoming billionaires. Guys who say stuff like this: “If I just cared about the money, I’d have closed a long time ago.” Guys who will keep the business running “as long as the place is covering the costs.”
“I wondered why Bowman, like her fellow proprietors, was disavowing economic theory and not trying to maximize her profits. Then I remembered one fascinating statistic about our economy. There are more than 27 million businesses in the United States. About a thousand are huge conglomerates seeking to increase profits. Another several thousand are small or medium-size companies seeking their big score. A vast majority, however, are what economists call lifestyle businesses. They are owned by people whose goal is to do what they like and to cover their nut. These surviving proprietors hadn’t merely been lucky. They loved their businesses so much that they found a way to hold on to them, even if it meant making bad business decisions. It’s a remarkable accomplishment in its own right.”
All of this made me wonder whether our current view of the MR industry is so focused on the prevailing worship of not only maximizing profits but doing so on a scale unthinkable even a generation ago that we’re no longer looking at the full range of the data. It also made me wonder about Dunbar’s number.
Robin Dunbar, a British anthropologist, proposed back in the ’90s that there was a maximum group size beyond which social relationships could no longer be based purely on personal, individual relationships. Since then, Dunbar’s number has become rather fashionable and influential. It’s normally pegged at about 150 people.
“Mom and Pop” enterprises run by “lifestyle entrepreneurs” don’t generally grow their workforces beyond Dunbar’s number because, if they do, the “lifestyle” part is eroded by organizational issues. “Mom” or “Pop” find themselves spending more time and energy managing people than they can spend making shoes, selling coffee, tending bar, or … doing research.
Let’s assume – just for the sake of making a guesstimate – that an MR company has 150 employees. Let’s say their average salary is about $75K, taking into account everyone from the receptionist to the account people, the senior analysts, the operational folks, and the execs, and that rent, benefits, and overhead add 50% to that. The gross margin for the business (sales minus direct project costs) is something like 30%, and Mom and Pop will keep the business running as long as they can cover their costs. At those rates they’re going to need to do something like $50-55 million annually to keep the doors open.
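Here’s that guesstimate as a minimal sketch; every input is an assumption from the paragraph above, not a real figure for any firm:

```python
# Back-of-the-envelope breakeven for a 150-person "Dunbar shop."
employees = 150            # upper bound of a Dunbar-sized firm
avg_salary = 75_000        # blended average, receptionist through execs
overhead_multiplier = 1.5  # rent, benefits, and overhead add 50%
gross_margin = 0.30        # sales minus direct project costs

annual_costs = employees * avg_salary * overhead_multiplier
breakeven_revenue = annual_costs / gross_margin
print(f"costs ${annual_costs / 1e6:.1f}M -> breakeven sales ${breakeven_revenue / 1e6:.1f}M")
# -> costs $16.9M -> breakeven sales $56.2M, the ballpark of the estimate above
```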
Which means that any MR shop smaller than that may well be a “Dunbar enterprise,” able to survive while ignoring the basic demand of modern economics – maximizing profits. They make poor blogging material – until an Adam Davidson writes a sentimental piece about them. “Nothing to see here. Move on” is not exactly SEO bait.
Maybe this is a good place to say that I’m not endorsing this approach. I came into MR over 30 years ago, and I’ve enjoyed the lifestyle of a Dunbar shop, but I’m also a technology junkie and a grow-and-change nerd. I find new methodologies, new communication technologies, new platforms, and new business models so much more interesting than the Mom-and-Pop lifestyle that I’d endorse their pursuit even if they didn’t also promise much profit or growth. That would be my personal lifestyle choice.
But I’m also a research guy with a social science background and I can’t ignore the countervailing view. Who is looking closely at the future of Dunbar enterprises in MR? What do the business models look like for businesses based on data collection and analysis as the neighborhood gentrifies? What customer segments will continue bringing their trade to the old line Village shops like McNulty’s Tea & Coffee Co., Imperial Vintner, Tavern on Jane, and the other Mom-and-Pops? If you know anyone researching this or blogging about it, I’d love to hear about it.
Davidson again: “Now the economics have changed. Any new operation needs deeper pockets and a stronger business plan, all of which will probably make it less interesting. … There are still some passionate people with exciting ideas who are making really bad — but entirely satisfying — business decisions.”
June 21st, 2012
By Joy Boggio, Director of Online Qualitative Support
We are adopting new technologies so fast that what was cutting edge last year is passé this year. The Wondrous recently had a great post about technologies that will soon be obsolete. Think of the TV remote or the fax machine.
This reminded me of the debate over text analytics and verbatim management for online qualitative studies. The various TA software packages (Language Logic, Clarabridge) are said to move us into the frontier of the future by “machining” the findings that we have culled from our boards. This all sounds promising, and many of us assumed that this would save a moderator/analyst time and uncover insights buried beneath the vast amount of data that we no longer “live through.”
It seemed, at first glance, to be a great solution when we realized gleefully, “Hey, we have so much data!” Then, in the next breath we realized, “OH! We have so much data..!” Unlike traditional qualitative research in which the moderators immerse themselves in the data as it is happening, the online qualitative moderator must sift through data that has been accumulating over days. We must find a way to juggle and make sense of it all to just find the nuggets of information.
But, how can we identify those nuggets quickly and efficiently? At C+R Research, we have a seemingly overwhelming amount of data, so we have made many attempts to “machine” and organize qualitative “data.”
At TMTRE, many others also talked about their attempts at automating and coding these responses. Most have come to the same conclusion we have… you simply have to read the comments from the boards. The data set, while appearing to be tremendous, is still too small to get good results from any automated method of sorting or coding it. Many have tried Language Logic to categorize and NVivo to organize, but they both add time and almost always require a second analyst, which may cause the sub-text of the responses to be lost.
Automating the work does seem to have a place when you are dealing with multi-phase projects or when you are talking to a few hundred or more respondents, but not so with an average bulletin board of 20-30 people. What was the overall consensus? “It’s QUAL, we shouldn’t strive to quantify it!”
June 13th, 2012
C+R’s Hispanic research experts, Brenda Hurley, Senior Vice President, and Juan Ruiz, Research Director, will be speaking on July 18th at the Shopper Insights in Action Conference which is being held at the Chicago Marriott Downtown Magnificent Mile.
In their session “Filling Up the Shopping Cart – A Look at Bicultural Hispanic Shoppers,” Juan and Brenda will help you identify ways to reach your Hispanic customers and challenge you to think in new ways about this evolving segment. Topics discussed include understanding:
- How Hispanics choose where to shop;
- Brand loyalty;
- The main factors influencing new product trial;
- Attitudes and usage of national brands vs. store brands vs. Latin American brands;
- The influence of coupons, promotions, rebates, bilingual packaging, celebrity endorsements, etc.; and finally,
- The role that the Internet plays for them when shopping.
To register for the conference and receive 25% off your registration, use our discount code SHOP12CR on the Shopper Insights in Action website.
June 11th, 2012
By Mary McIlrath, Senior Vice President
Last week we posted our Top 5 Do’s and Don’ts for Online Qualitative Moderation, which sparked discussion of best practices in this fast-growing collection of methods.
This week our friends at iModerate posted their blog “The 5 Absolute, No Excuse, Must-do’s for Online Qualitative Researchers.”
Our post focused on how to best interact with respondents, to draw out their best responses in a non-face-to-face context. iModerate’s post took an operational perspective, answering the question of how to organize and execute high-quality online qualitative within a research company.
An important third part of this discussion is how to optimize the client experience. No matter how excellently the project is executed or moderated, a poor client experience can ruin the chances for repeat studies.
Across a gamut of client experience levels, we’ve found these Top 5 Keys to Happy Clients in online qualitative:
- Make the entire team comfortable with the project’s technology platform. It’s unusual to encounter an entire team who knows how to navigate a given platform. We’ve found it helpful to schedule a live walk-through several hours after the launch of a new project. A few respondents will have populated fields, so there is “real” data to demonstrate, but there’s not so much as to feel overwhelming. No need to demonstrate all the bells and whistles—just a basic 1–2–3:
- Here’s where to go first.
- Here’s how to move around.
- Here’s how to communicate with the moderator.
- Set clear deadlines for questions and stimulus. In focus groups, it’s normal for a client to arrive in the backroom half an hour before groups with a few changes to the guide and stimulus that the moderator is seeing for the first time. Online qualitative does allow some flexibility on the fly, but it also requires programming ahead of time. The more elaborate the technology (e.g., concept mark-up tools), the more lead time is needed for programming. Make sure clients know ahead of time what is needed from them for the project to run smoothly.
- Facilitate engagement. Asynchronous studies in particular are a double-edged sword. Clients can read posts at any time, which often means other “fires” take priority, and some won’t make time to read posts at all. Clients can be left feeling detached from what was said, and overwhelmed at going back and trying to catch up over several days. This reduces their sense of ownership, buy-in, and satisfaction with the project. Compare this to focus groups, where everyone commits to being out of the office and listening together. At C+R, we create a client engagement plan for every project, with a variety of tools. One favorite is the “study hall,” in which clients meet together in a conference room, with or without a moderator present, to read together in a shared experience. Another is the “buddy system,” in which, after the first day, we identify two or three “star respondents” for each client to follow. This is much less overwhelming than following 30 respondents. For some clients, we lead debriefs in which they are responsible for reporting back on their “buddies.” In these ways, we aid accountability, which increases engagement, buy-in, and satisfaction.
- Create “backroom” intimacy. Rapport between moderators and clients can go a long way towards client satisfaction in traditional qualitative. Just as moderators have to build empathy with respondents, it is important to internalize the clients’ goals. This enables you to translate their on-paper research objectives into insightful analysis. If we can’t meet the client in person, we like to have a frank discussion on the kickoff call, and as needed during the project, to help make sure we’re all on the same page. This is especially important for a new client—if they feel “heard” and understood, their comfort with what may be a strange new process will rise. Their comfort raises their confidence in the project and their ultimate happiness with the results.
- Drop clues. As a study unfolds, a focus group moderator can come in the backroom and check in with clients to make sure everyone is aligned on findings. Online, we provide weekly or more frequent “teaser” emails to the team with high-level developing insights, and a couple of supporting quotes. This serves a dual purpose of encouraging discussion from clients who may see things differently and of reminding clients to visit the platform and see what people have been saying.
At C+R, our history of client happiness has translated into loyalty, and we enjoy continuing to please.
May 31st, 2012
By Walt Dickie, Executive Vice President
If you’ve followed this argument so far, you know that it’s headed straight for Google Surveys. Like everyone else in the MR business, C+R has been debating the eventual significance of Google’s entrance into our world. (There’s no reason to rehearse this discussion in this space; you can find good, thoughtful analysis and commentary here, here, and here.)
I’d like to focus on Google’s unique approach to the structure of a questionnaire: “With Google Consumer Surveys, you can run multi-question surveys by asking people one question at a time.” This seems designed with interstitial use firmly in mind. It is pure “mobile first, web second” design.
What is sacrificed, of course, is a true respondent-level record. Instead, as is explained in the accompanying white paper, “Comparing Google Consumer Surveys to Existing Probability and Non-Probability Based Internet Surveys,” Google uses the power of its Big Data store to predict respondent-level data “using the respondent’s IP address and DoubleClick cookie” to generate “inferred demographic and location information,” then, on the back end, Google re-weights the data to better approximate the Internet population. Their “possible weighting options, ordered by priority, include: (age, gender, state), (age, gender, region), (age, state), (age, region), (age, gender), (age), (region), (gender).” The result of this manipulation “produces a close approximation to a random sample of the US Internet population and results that are as accurate as probability based panels.”
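To make the re-weighting idea concrete, here’s a minimal sketch of simple cell-based post-stratification on inferred (age, gender). The sample and target shares are invented for illustration; Google’s actual prioritized weighting procedure is more elaborate:

```python
from collections import Counter

# Inferred (age, gender) cells for six hypothetical respondents.
sample = [("18-34", "F"), ("18-34", "M"), ("18-34", "M"),
          ("35+", "F"), ("35+", "M"), ("35+", "M")]
# Assumed population shares for the US internet population (placeholders).
target = {("18-34", "F"): 0.25, ("18-34", "M"): 0.25,
          ("35+", "F"): 0.25, ("35+", "M"): 0.25}

counts = Counter(sample)
n = len(sample)
# Each respondent's weight = target share / observed share of their cell.
weights = {cell: target[cell] / (counts[cell] / n) for cell in counts}
for resp in sample:
    print(resp, round(weights[resp], 2))  # over-represented cells get weight < 1
```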
Today, in keeping with their long-established pattern, Google’s effort is somewhat primitive. Surveys are very short, and analytic cross-breaks are limited to demographic and location variables, although I see no reason in principle why the same predictive approach to creating a synthetic respondent record could not be extended further or why questionnaires could not be of unlimited length, given sufficiently large datasets and sample availability.
Once you get thinking about synthetic or predictive approaches to survey data, lots of opportunities open up that were closed before. I need to think more about these, and also need to think much more deeply about the modeling some would involve, but here are a few approaches that have come to mind so far:
- Pre-plan which variables must be associated within a single respondent record for analytic purposes, then break a long “web first, mobile second” survey into very short sub-sections, keeping associated variables together within modules.
- Create many short surveys such that every variable appears with every other variable enough times to model the “complete” data set (with the help of Google-style inferred variables); see the sketch after this list.
- Administer the long/full “web first, mobile second” survey to the “Immobile” or “Heritage” sample pool that will take a survey on a laptop/desktop. Break the long survey into short modules, and use Google-style inferred variables plus the long survey to model the “complete” dataset.
- Accept the diminishing representativeness of Immobile/Heritage sample as well as any other penalties/costs (time, expense, etc.) and conduct projects that require long surveys on Heritage sample only. (This is what happens today when projects use telephone rep sample.)
- Work with clients to systematically develop libraries of data from customer communities, social media, etc. with associated demographic and customer-graphic information. Sample from customer bases using the info in this dataset to collect data on necessary “ad hoc/custom” variables not represented in the library and model the “complete” dataset using a short survey plus the library. (Essentially a proprietary version of the Google model.)
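Here is the promised sketch of the module-splitting idea in the second option: divide a long variable list into short modules so that every pair of variables co-occurs in at least one module. The variable names and module size are placeholders, and a real design would also balance how many respondents see each module:

```python
from itertools import combinations

variables = ["brand_use", "satisfaction", "intent", "awareness", "demo_proxy"]
MODULE_SIZE = 3  # questions per short module

# Every unordered pair of variables we need to observe together at least once.
uncovered = set(combinations(sorted(variables), 2))
modules = []
while uncovered:
    module = set(next(iter(uncovered)))  # seed with any still-uncovered pair
    while len(module) < MODULE_SIZE:
        # Greedily add the variable that newly covers the most uncovered pairs.
        best = max((v for v in variables if v not in module),
                   key=lambda v: sum(tuple(sorted((v, m))) in uncovered for m in module))
        module.add(best)
    uncovered -= {tuple(sorted(p)) for p in combinations(module, 2)}
    modules.append(sorted(module))

print(modules)  # each respondent answers one short module, not the full survey
```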
I’m sure there are other options to consider, and anyone can see at a glance that these options present large analytic, statistical, operational, and cost challenges. But assuming that MR will continue to need surveys longer than could be fitted all at once into an interstitial moment, something along the lines of what Google is using will be necessary. And, if that’s right, does the industry have the resources – in expertise, in massive individualized datasets, and in investment capital – to compete with the likes of Google?