By Walt Dickie, Executive Vice President
One of the big events of my formative years happened around 1967 at MIT when the guys in the Earth Science Department announced that they could “predict” what the weather was like in Cambridge an hour earlier.
What actually happened was that a meteorological model had been developed that forecast the weather pretty accurately 24 hours in advance. The problem was that the model took a little over a day to execute on the mainframe, so by the time it issued a forecast it was “predicting” the weather we had just experienced. It was a geeky story, and everyone I knew thought it was pretty funny. A computer model had been developed that was as good at forecasting the weather as looking out the window.
But we all knew that it was a really big deal. With improvements to the algorithm and some advances in the hardware, it would only be a matter of time before the model would run in an hour, then a minute, and it would soon be capable of forecasting the weather not 24 hours in advance but 48 hours, or a week, a month … who could guess?
I’ve watched the weather my whole life because I’ve always been addicted to outdoor sports – I ride a bike almost daily outside of the three- or four-month period when the Chicago winter drives all but the insane indoors. I’ve been a skier since childhood, as has my wife, and we’ve passed that mania on to our kids, so that gives us a reason to follow the winter weather. And I was bitten by sailing when growing up on a New England lake, a bug that now takes me out on Lake Michigan and, again, has me poring over the weather sites.
Historically, there have only been a few ways to forecast future weather. Until very recently forecasters struggled to consistently beat the algorithm my mom used when I was a kid: “Tomorrow will be pretty much like today.” If you think about your local weather, you’ll probably see what we see here in Chicago – the weather mostly doesn’t change much from one day to the next, until it flips a switch and changes a lot. Fronts move through every few days – but in between, tomorrow is much like today. You might be right with that algorithm as often as 3 days out of 4, or even 4 out of 5. It was a struggle to develop a meteorological system better than that.
As recently as the early 20th century, “forecasters would chart the current set of observations, then look through a library of past maps to find the one that most resembled the new chart. Once you had found a reasonably similar map, you looked at how the past situation had evolved and based your forecast on that.” When the technology appeared giving forecasters a reasonably complete picture of today’s weather “upstream” from their location, they were able to adopt a variation on this technique and base tomorrow’s Chicago forecast on what was happening on the Great Plains today, rather than relying on an old map.
And then came the modelers’ breakthrough – the algorithm that forecast the weather an hour ago – and soon it was possible to base forecasts on scientific principles and mathematical calculation.
So, vastly simplified, the evolution of weather forecasting was this: predict that tomorrow will be like today, predict that tomorrow will be like it is today somewhere else, and, finally, calculate tomorrow’s weather by mathematically extrapolating the underlying physics of today into the future.
I’ve been thinking about this progression recently because C+R, like most MR firms these days, is spending an increasing amount of time trying to predict the future. The industry is changing; the economy is changing; maybe the entire global financial system is changing; technology is certainly changing. How do we navigate? How do we make business decisions for the future without some way of forecasting the future?
Everyone here is thinking about these issues, but there are three of us who are particularly involved because of our job responsibilities. And we’ve discovered that each of us has a personal method for forecasting.
Partner Number One is heavily, almost exclusively involved in sales, and has constant contact with clients and potential clients who are trying to articulate their research needs. Partner Number Two monitors the new products and services that our competitors and the industry as a whole are introducing, paying particular attention to successful leading-edge competitors. And Partner Number Three monitors emerging technology and social trends and tries to infer their likely impacts.
And guess what? Partner Number One says that, as near as can be told, tomorrow is going to be a lot like today. Although there is a lot of discussion about big changes online, at conferences, in speeches, and in trade publications, the projects that clients need today and expect to need in the immediate future are much the same as they’ve been in the recent past. Timelines are shorter and budgets may be tighter, but tomorrow’s weather looks like today’s.
Partner Number Two sees a lot of new product and service activity going on, and notices what seem to be some really amazing storms and lightning bolts as new firms with new offerings post double-digit growth rates year upon year while others seem to explode only to fizzle. Some amazing things are announced and then never heard from again. It seems like we can look around us on the map, but it’s really hard to know which direction is “upstream” from where we’re located. It’s hard to tell if the weather being experienced elsewhere on the industry map will travel toward us or away from us.
Partner Number Three sometimes seems to detect solid trends. There really seem to be some clear trends in technology – both the technology that our client businesses are adopting and the technology that consumers are using. Some trends in consumer communication technology seem especially clear, and if data collection will play any role in our future then we can base that part of our forecast model on them. But other areas, particularly the “information ecology” of businesses, seem roiled up and hard to read. We have some pieces of a prediction model, but our algorithm for forecasting still needs a lot of work. It’s not clear that we’re anywhere near the point I witnessed back in college when the model first beat my mom’s forecasting approach.
I find myself torn between algorithms. Like the weather, many businesses have had one day follow another with little material change for long periods – until change overtakes them like a summer squall. Been to a bookstore lately? Just because an innovation lit up the landscape somewhere doesn’t necessarily mean that the same thing would happen elsewhere, or that the market would support two, three, or a dozen similar offerings. Maybe a competitor’s success is due to local conditions – like updrafts that spawn tornadoes on the Plains but almost never come east to Chicago. And although the trends seem to point toward weather systems coming in soon, the forecast model isn’t any better at this point than looking out the window and expecting more of the same.
So we try a little of each forecaster’s method, experimenting with the trends and borrowing from competitors’ successes, while finding today’s weather still largely unchanged from yesterday’s.
By Walt Dickie, Executive Vice President
Having followed the dull roar of the MR commentariat on the future of marketing research (and having contributed to it in a minor way), I was fascinated by Adam Davidson’s article in last weekend’s New York Times Magazine, “Can Mom-and-Pop Shops Survive Extreme Gentrification?” Not because gentrification is terribly relevant to MR, but because of Davidson’s reaction to the news that some Mom-and-Pops still exist happily in heavily competitive, booming Greenwich Village.
Davidson, of NPR’s “Planet Money,” can be heard on “Morning Edition,” “All Things Considered” and “This American Life.” He’s well informed about economic issues, and both reliably insightful and entertaining. He also grew up in the Village when it was still “the Jane Jacobs ideal, a neighborhood crammed with small mom-and-pop stores.”
Now, of course, with Wall Street money coursing through New York’s veins, the “artists, weirdos and blue-collar families” that inhabited the Village of his childhood have been completely replaced by “guys in suits” and the Mom-and-Pops have been displaced by big, trendy names like “Marc Jacobs … Magnolia Bakery … Ralph Lauren, Jimmy Choo, (and) Burberry.”
You can see where this is going, right?
I was sitting on my back porch on Sunday morning, coffee in hand, idly riffling through the Times when I suddenly realized that Davidson’s tale of a trip back to the old neighborhood was giving off vibes about my own job and business. Mom-and-Pop marketing research firms have been disappearing lately into the acquisitive maws of both the multi-national organizations and the bankruptcy courts. Big name trendy firms, like Google, are suddenly seen around the old neighborhood. And the blogosphere is alight with a mixture of fear and cheerleading for the coming “disruptive” revolution.
So what was the secret of the successful Village Mom-and-Pops? Were they just the lucky ones that suddenly found their old-fashioned wares in demand? Were they somehow chic because they were retro; fashionable because they were so nonchalant about fashion?
When Davidson asked the owner of “the oldest Village business (he) could think of” how his business had changed, he found stasis: “It’s about the same … We’re not way richer or poorer … We’re about the same.” The owner of another old-line business, a tavern, “put it more bluntly. He’s surviving, he said, because he’s not an especially ambitious businessman.”
The best part of the article, for me, was Davidson’s response to the news that a business hadn’t changed or grown in a generation: “And this didn’t bother him.”
Adam Davidson, NPR economics guy, chronicler of the connected, digital, international economy, doesn’t get to talk a lot to guys who aren’t concerned with quarter-over-quarter growth, IPOs, or becoming billionaires. Guys who say stuff like this: “If I just cared about the money, I’d have closed a long time ago.” Guys who will keep the business running “as long as the place is covering the costs.”
“I wondered why Bowman, like her fellow proprietors, was disavowing economic theory and not trying to maximize her profits. Then I remembered one fascinating statistic about our economy. There are more than 27 million businesses in the United States. About a thousand are huge conglomerates seeking to increase profits. Another several thousand are small or medium-size companies seeking their big score. A vast majority, however, are what economists call lifestyle businesses. They are owned by people whose goal is to do what they like and to cover their nut. These surviving proprietors hadn’t merely been lucky. They loved their businesses so much that they found a way to hold on to them, even if it meant making bad business decisions. It’s a remarkable accomplishment in its own right.”
All of this made me wonder whether our current view of the MR industry is so focused on the prevailing worship of not only maximizing profits but doing so on a scale unthinkable even a generation ago that we’re no longer looking at the full range of the data. It also made me wonder about Dunbar’s number.
Robin Dunbar, a British anthropologist, proposed back in the ’90s that there was a maximum group size beyond which social relationships could no longer be based purely on personal, individual relationships. Since then, Dunbar’s number has become rather fashionable and influential. It’s normally pegged at about 150 people.
“Mom and Pop” enterprises run by “lifestyle entrepreneurs” don’t generally grow their workforces beyond Dunbar’s number because, if they do, the “lifestyle” part is eroded by organizational issues. “Mom” or “Pop” find themselves spending more time and energy managing people than they can spend making shoes, selling coffee, tending bar, or … doing research.
Let’s assume – just for the sake of making a guesstimate – that an MR company has 150 employees. Let’s say their average salary is about $75K, taking into account everyone from the receptionist to the account people, the senior analysts, the operational folks, and the execs, and that rent, benefits, and overhead add 50% to that. The gross margin for the business (sales minus direct project costs) is something like 30%, and Mom and Pop will keep the business running as long as they can cover their costs. At those rates they’re going to need to do something like $55 million annually to keep the doors open.
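For the curious, here is the arithmetic behind that guesstimate as a quick sketch; every input is just the assumption stated above, not an actual figure from any real shop.

```python
# Back-of-the-envelope check of the guesstimate above; all inputs are the
# stated assumptions, not real figures.
headcount = 150            # a Dunbar-sized shop
avg_salary = 75_000        # blended average, receptionist through execs
overhead_factor = 1.5      # rent, benefits, and overhead add ~50%
gross_margin = 0.30        # (sales - direct project costs) / sales

annual_costs = headcount * avg_salary * overhead_factor   # ~$16.9 million
breakeven_revenue = annual_costs / gross_margin           # ~$56 million
print(f"Revenue needed to cover costs: ${breakeven_revenue:,.0f}")
```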
Which means that any MR shop smaller than that may well be a “Dunbar enterprise,” able to survive while ignoring the basic demand of modern economics – maximizing profits. They make poor blogging material – until an Adam Davidson writes a sentimental piece about them. “Nothing to see here. Move on” is not exactly SEO bait.
Maybe this is a good place to say that I’m not endorsing this approach. I came into MR over 30 years ago, and I’ve enjoyed the lifestyle of a Dunbar shop, but I’m also a technology junkie and a grow-and-change nerd. I find new methodologies, new communication technologies, new platforms, and new business models so much more interesting than the Mom-and-Pop lifestyle that I’d endorse their pursuit even if they didn’t also promise much profit or growth. That would be my personal lifestyle choice.
But I’m also a research guy with a social science background and I can’t ignore the countervailing view. Who is looking closely at the future of Dunbar enterprises in MR? What do the business models look like for businesses based on data collection and analysis as the neighborhood gentrifies? What customer segments will continue bringing their trade to the old line Village shops like McNulty’s Tea & Coffee Co., Imperial Vintner, Tavern on Jane, and the other Mom-and-Pops? If you know anyone researching this or blogging about it, I’d love to hear about it.
Now the economics have changed. Any new operation needs deeper pockets and a stronger business plan, all of which will probably make it less interesting. … There are still some passionate people with exciting ideas who are making really bad — but entirely satisfying — business decisions.
June 21st, 2012
By Joy Boggio, Director of Online Qualitative Support
We are adopting new technologies so fast that what was cutting edge last year is passé this year. The Wondrous recently had a great post about technologies that will soon be obsolete. Think of the TV remote or the fax machine.
This reminded me of the debate over text analytics and verbatim management for online qualitative studies. The various TA software packages (Language Logic, Clarabridge) are said to move us into the frontier of the future by “machining” the findings that we have culled from our boards. This all sounds promising, and many of us assumed that this would save a moderator/analyst time and uncover insights buried beneath the vast amount of data that we no longer “live through.”
It seemed, at first glance, to be a great solution when we realized gleefully, “Hey, we have so much data!” Then, in the next breath we realized, “OH! We have so much data..!” Unlike traditional qualitative research in which the moderators immerse themselves in the data as it is happening, the online qualitative moderator must sift through data that has been accumulating over days. We must find a way to juggle and make sense of it all to just find the nuggets of information.
But, how can we identify those nuggets quickly and efficiently? At C+R Research, we have a seemingly overwhelming amount of data, so we have made many attempts to “machine” and organize qualitative “data.”
At TMTRE, many others also talked about their attempts at automating and coding these responses. Most have come to the same conclusion we have… you simply have to read the comments from the boards. The data set, while appearing to be tremendous, is still too small to get good results from any automated method of sorting or coding it. Many have tried Language Logic to categorize and Nvivo to organize, but they both add time and almost always require a second analyst, which may cause the sub-text of the responses to be lost.
Automating the work does seem to have a place when you are dealing with multi-phase projects or when you are talking to a few hundred or more respondents, but not so with an average bulletin board of 20-30 people. What was the overall consensus? “It’s QUAL, we shouldn’t strive to quantify it!”
June 13th, 2012
C+R’s Hispanic research experts, Brenda Hurley, Senior Vice President, and Juan Ruiz, Research Director, will be speaking on July 18th at the Shopper Insights in Action Conference which is being held at the Chicago Marriott Downtown Magnificent Mile.
In their session “Filling Up the Shopping Cart – A Look at Bicultural Hispanic Shoppers,” Juan and Brenda will help you identify ways to reach your Hispanic customers and challenge you to think in new ways about this evolving segment. Topics discussed include understanding:
- How Hispanics choose where to shop;
- Brand loyalty;
- The main factors influencing new product trial;
- Attitudes and usage of national brands vs. store brands vs. Latin American brands;
- The influence of coupons, promotions, rebates, bilingual packaging, celebrity endorsements, etc.; and finally,
- The role that the Internet plays for them when shopping.
To register for the conference and receive 25% off your registration, use our discount code SHOP12CR on the Shopper Insights in Action website.
June 11th, 2012
By Mary McIlrath, Senior Vice President
Last week we posted our Top 5 Do’s and Don’ts for Online Qualitative Moderation, which sparked discussion of best practices in this fast-growing collection of methods.
This week our friends at iModerate posted their blog “The 5 Absolute, No Excuse, Must-do’s for Online Qualitative Researchers.”
Our post focused on how to best interact with respondents, to draw out their best responses in a non-face-to-face context. iModerate’s post took an operational perspective, answering the question of how to organize and execute high-quality online qualitative within a research company.
An important third part of this discussion is how to optimize the client experience. No matter how excellently the project is executed or moderated, a poor client experience can ruin the chances for repeat studies.
Across a gamut of client experience levels, we’ve found these Top 5 Keys to Happy Clients in online qualitative:
- Make the entire team comfortable with the project’s technology platform. It’s unusual to encounter an entire team who knows how to navigate a given platform. We’ve found it helpful to schedule a live walk-through several hours after the launch of a new project. A few respondents will have populated fields, so there is “real” data to demonstrate, but there’s not so much as to feel overwhelming. No need to demonstrate all the bells and whistles—just a basic 1–2–3:
- Here’s where to go first.
- Here’s how to move around.
- Here’s how to communicate with the moderator.
- Set clear deadlines for questions and stimulus. In focus groups, it’s normal for a client to arrive in the backroom half an hour before groups with a few changes to the guide and stimulus the moderator is seeing for the first time. Online qualitative does allow some flexibility on the fly, but also requires programming ahead of time. The more elaborate the technology (e.g., concept mark-up tools), the more lead time is needed for programming. Make sure clients know ahead of time what is needed from them for the project to run smoothly.
- Facilitate engagement. Asynchronous studies in particular are a double-edged sword. Clients can read posts at any time, which often means other “fires” take priority, and some won’t make time to read posts at all. Clients can be left feeling detached from what was said, and overwhelmed at going back and trying to catch up over several days. This reduces their sense of ownership, buy-in, and satisfaction with the project. Compare this to focus groups, where everyone commits to being out of the office and listening together. At C+R, we create a client engagement plan for every project, with a variety of tools. One favorite is the “study hall,” in which clients meet together in a conference room, with or without a moderator present, to read in a shared experience. Another is the “buddy system,” in which, after the first day, we identify two or three “star respondents” for each client to follow. This is much less overwhelming than following 30 respondents. For some clients, we lead debriefs in which they are responsible for reporting back on their “buddies.” In these ways, we aid accountability, which increases engagement, buy-in, and satisfaction.
- Create “backroom” intimacy. Rapport between moderators and clients can go a long way toward client satisfaction in traditional qualitative. Just as moderators have to build empathy with respondents, it is important to internalize the clients’ goals. This enables you to translate their on-paper research objectives into insightful analysis. If we can’t meet the client in person, we like to have a frank discussion on the kickoff call, and as needed during the project, to help make sure we’re all on the same page. This is especially important for a new client—if they feel “heard” and understood, their comfort with what may be a strange new process will rise. Their comfort raises their confidence in the project and their ultimate happiness with the results.
- Drop clues. As a study unfolds, a focus group moderator can come in the backroom and check in with clients to make sure everyone is aligned on findings. Online, we provide weekly or more frequent “teaser” emails to the team with high-level developing insights, and a couple of supporting quotes. This serves a dual purpose of encouraging discussion from clients who may see things differently and of reminding clients to visit the platform and see what people have been saying.
At C+R, our history of client happiness has translated into loyalty, and we enjoy continuing to please.
May 31st, 2012
By Walt Dickie, Executive Vice President
If you’ve followed this argument so far, you know that it’s headed straight for Google Surveys. Like everyone else in the MR business, C+R has been debating the eventual significance of Google’s entrance into our world. (There’s no reason to rehearse this discussion in this space; you can find good, thoughtful analysis and commentary here, here, and here.)
I’d like to focus on Google’s unique approach to the structure of a questionnaire: “With Google Consumer Surveys, you can run multi-question surveys by asking people one question at a time.” This seems designed with interstitial use firmly in mind. It is pure “mobile first, web second” design.
What is sacrificed, of course, is a true respondent-level record. Instead, as is explained in the accompanying white paper, “Comparing Google Consumer Surveys to Existing Probability and Non-Probability Based Internet Surveys,” Google uses the power of its Big Data store to predict respondent-level data “using the respondent’s IP address and DoubleClick cookie” to generate “inferred demographic and location information,” then, on the back end, Google re-weights the data to better approximate the Internet population. Their “possible weighting options, ordered by priority, include: (age, gender, state), (age, gender, region), (age, state), (age, region), (age, gender), (age), (region), (gender).” The result of this manipulation “produces a close approximation to a random sample of the US Internet population and results that are as accurate as probability based panels.”
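Purely to make the mechanics concrete, here is a minimal sketch of the kind of cell-based re-weighting the white paper describes; this is not Google’s actual procedure, and the cell labels, population shares, and answers below are invented for illustration. Each respondent gets a weight equal to the population share of their (age, gender) cell divided by that cell’s share of the sample.

```python
from collections import Counter
import pandas as pd

# Toy sample with inferred demographics; the answers and shares are made up.
sample = pd.DataFrame({
    "age":    ["18-34", "18-34", "35-54", "55+",  "35-54", "18-34"],
    "gender": ["F",     "M",     "F",     "M",    "M",     "F"],
    "answer": [1, 0, 1, 1, 0, 1],
})

# Hypothetical (age, gender) shares of the US Internet population.
population_share = {
    ("18-34", "F"): 0.15, ("18-34", "M"): 0.15,
    ("35-54", "F"): 0.20, ("35-54", "M"): 0.20,
    ("55+",   "F"): 0.15, ("55+",   "M"): 0.15,
}

cells = list(zip(sample["age"], sample["gender"]))
cell_counts = Counter(cells)
n = len(cells)

# Weight = population share of the cell / sample share of the cell.
sample["weight"] = [population_share[c] / (cell_counts[c] / n) for c in cells]

# Weighted estimate of the survey question.
weighted_pct = (sample["answer"] * sample["weight"]).sum() / sample["weight"].sum()
print(sample)
print(f"Weighted share answering 1: {weighted_pct:.0%}")
```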
Today, following their long-established pattern, Google’s effort is somewhat primitive. Surveys are very short, and analytic cross-breaks are limited to demographic and location variables, although I see no reason in principle why the same predictive approach to creating a synthetic respondent record could not be extended further or why questionnaires could not be of unlimited length, given sufficiently large datasets and sample availability.
Once you get thinking about synthetic or predictive approaches to survey data, lots of opportunities open up that were closed before. I need to think more about these, and also need to think much more deeply about the modeling some would involve, but here are a few approaches that have come to mind so far (a rough sketch of the data-stitching idea follows the list):
- Pre-plan which variables must be associated within a single respondent record for analytic purposes, then break a long “web first, mobile second” survey into very short sub-sections, keeping associated variables together within modules.
- Create many short surveys such that every variable appears with every other variable enough times to model the “complete” data set (with the help of Google-style inferred variables).
- Administer the long/full “web first, mobile second” survey to the “Immobile” or “Heritage” sample pool that will take a survey on a laptop/desktop. Break the long survey into short modules, and use Google-style inferred variables plus the long survey to model the “complete” dataset.
- Accept the diminishing representativeness of Immobile/Heritage sample as well as any other penalties/costs (time, expense, etc.) and conduct projects that require long surveys on Heritage sample only. (This is what happens today when projects use telephone rep sample.)
- Work with clients to systematically develop libraries of data from customer communities, social media, etc. with associated demographic and customer-graphic information. Sample from customer bases using the info in this dataset to collect data on necessary “ad hoc/custom” variables not represented in the library and model the “complete” dataset using a short survey plus the library. (Essentially a proprietary version of the Google model.)
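To make the recurring “model the complete dataset” idea a bit more concrete, here is a deliberately naive sketch; the module contents, the demographics, and the hot-deck fill within demographic cells are all assumptions for illustration, not a validated method. Two short-survey modules that share inferred demographics are stacked, and each respondent’s missing module is filled in from demographically similar respondents who did answer it.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

def fake_module(question, n=300):
    """Hypothetical short-survey module carrying inferred demographics."""
    return pd.DataFrame({
        "age":    rng.choice(["18-34", "35-54", "55+"], n),
        "gender": rng.choice(["F", "M"], n),
        question: rng.integers(1, 6, n),   # 5-point answer scale
    })

module_a = fake_module("q_brand_rating")      # asked only of group A
module_b = fake_module("q_purchase_intent")   # asked only of group B

# Stack the modules; each respondent is now missing the other module's question.
combined = pd.concat([module_a, module_b], ignore_index=True)

# Naive hot-deck fill: within each (age, gender) cell, fill a missing answer
# by drawing at random from respondents in the same cell who did answer it.
questions = ["q_brand_rating", "q_purchase_intent"]
for _, idx in combined.groupby(["age", "gender"]).groups.items():
    cell = combined.loc[idx]
    for q in questions:
        donors = cell[q].dropna().to_numpy()
        missing = cell.index[cell[q].isna()]
        if len(donors) and len(missing):
            combined.loc[missing, q] = rng.choice(donors, size=len(missing))

# Every respondent now has a "complete" synthetic record; useful only for
# illustrating the workflow, not for drawing real conclusions.
print(combined.head())
```

In practice, of course, you would want something far more defensible than hot-deck filling (model-based imputation, validation against full-length surveys, and so on); the point here is only the shape of the data flow.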
I’m sure there are other options to consider, and anyone can see at a glance that these options present large analytic, statistical, operational, and cost challenges. But assuming that MR will continue to need surveys longer than could be fitted all at once into an interstitial moment, something along the lines of what Google is using will be necessary. And, if that’s right, does the industry have the resources – in expertise, massive individualized datasets, and investment – to compete with the likes of Google?
May 23rd, 2012
By Walt Dickie, Executive Vice President
Many news outlets have picked up the results from a recent report from the Pew Internet & American Life Project on “Just-in-time Information through Mobile Connections.” Here’s the topline: “Some 70% of all cell phone owners and 86% of smartphone owners have used their phones in the previous 30 days to perform at least one of the following activities: Coordinate a meeting or get-together … Solve an unexpected problem that they or someone else had encountered … Decide whether to visit a business, such as a restaurant … Find information to help settle an argument they were having … Look up a score of a sporting event … Get up-to-the-minute traffic or public transit information to find the fastest way to get somewhere … Get help in an emergency situation.”
I’m thankful that I haven’t encountered all of these in the past month. But I’ve certainly encountered all of them at one point or another, and I’ll bet you have, too. They’re so common, so “normal,” that they almost go unnoticed. Together they represent one of the main categories of things that mobile phones, especially smartphones, are “for” today. (Though I’m kind of surprised that “Get help using, setting up, or fixing something” is missing from the list.)
Pew calls these occasions “just-in-time,” which seems right, but there’s another dimension involved: they take place while people are in the midst of doing something else. Around C+R we’ve been calling this kind of mobile usage “interstitial,” and it is a particularly relevant construct, as marketing research needs to rely more on reaching people via mobile devices. Our current methods are very much “extrastitial.”
Everyone in MR knows that online surveys are going to rely increasingly on respondents accessing them on mobile phones, principally smartphones. But that’s just not happening yet. At least here at C+R, the share of smartphone-based attempts to access online surveys targeted to the general population (i.e., not specifically targeted to smartphone users) is generally in the single digits and flat over time. Why the disconnect between high and increasing mobile use and low and flat mobile access?
Online surveys (and most online qualitative methods) are designed to be viewed on a desktop/laptop computer, “leaning forward” to view a large screen, with input provided via a keyboard and mouse, and – most important – with the user fully focused on the task at hand, often for a significant period of time. Standard online surveys are designed to be “web first, mobile second,” while smartphone experiences (and the most successful apps) are “mobile first, web second” – “with you all the time and … used in moments of downtime.”
If this argument is right, it has huge implications for survey research.
The industry has never really tried to attract mobile users as a general-purpose sample source. Would the limited sample pool and dismal response rates being generated by the web first, mobile second approach be surpassed by a true mobile first, web second effort? Would a true mobile-oriented survey approach yield a much more representative, responsive sample?
Even more speculatively, as more and more online usage migrates to mobile devices, should we start thinking about mobile surveys as potentially becoming the eventual dominant form and traditional web surveys – conducted on “Immobile” or perhaps “Heritage” devices – the aging, declining technology? If so, then we’ll have to re-invent surveys to become interstitial. How will we do that and still be able to collect the extensive case data that some analyses require? (It feels like designing interstitial online qualitative experiences would be much more straightforward, but that’s a discussion for another day and post.)
Obviously we won’t be able to field 45-minute questionnaires anymore – a good thing from many angles – but how do we collect the information that we need for something like major category segmentations in a form that allows for interstitial usage?
Can we just encourage respondents to spread a long-form survey experience over several days? Something like 5 questions a day over the course of a week, giving the respondent the option at each juncture to continue now or resume tomorrow? This might look like the “remind me later” feature in Outlook: “What do you want to do now? (1) Answer 5 more questions, (2) Continue later” with “Continue later” bringing up a set list of delay periods – “remind me in XX hours” or something like that.
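For what it’s worth, a throwaway sketch of that flow might look like the snippet below; the chunk size, prompt wording, and reminder mechanism are all invented for illustration.

```python
from dataclasses import dataclass, field

CHUNK_SIZE = 5  # the "5 questions a day" idea above

@dataclass
class SurveySession:
    """Hypothetical resumable long-form survey session."""
    questions: list
    answers: dict = field(default_factory=dict)

    def next_chunk(self):
        # Serve the next few unanswered questions.
        remaining = [q for q in self.questions if q not in self.answers]
        return remaining[:CHUNK_SIZE]

    def record(self, question, answer):
        self.answers[question] = answer

    def prompt(self):
        # Offer the "continue now or later" choice after each chunk.
        if len(self.answers) >= len(self.questions):
            return "All done - thank you!"
        return "What do you want to do now? (1) Answer 5 more questions (2) Continue later"
```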
That’s logical, but it’s not appropriate for interstitial usage because it assumes scheduling and pre-planning. Interstitial usage “just pops up” between doing other things: talking to a friend, wondering about a sports team, figuring out where to go for lunch, waiting for a bus or train. (Go back and look at the Pew list of JIT occasions at the top of this post for confirmation on this.)
In our next post, we’ll explore Google Surveys and what they mean for MR business.
May 10th, 2012
By Robert Relihan, Senior Vice President
I noted recently that it is important to distinguish brand loyalists from simple brand users, even heavy brand users. Loyalists have a special relationship with your brand and are key to understanding a brand’s identity.
But, how do you identify these loyalists? Here are five behaviors that are worth developing in your customers because they are sure signs of a heightened bond with your brand. And they are qualities to look for when you want to investigate your brand’s essence with those most in touch with it.
So, you know you have a brand loyalist when…
- She talks about your brand and recommends it to friends.
This is the classic definition of a loyalist. If you want to talk to a true partisan of your brand, ask her if she has recommended it to someone in the past month.
- She uses your brand in unique ways.
I recently read some questions on the web from women complaining they could not find a particular kind of Kraft cheese. They seemed quite upset. The cheese was essential to a number of recipes they made. A true brand loyalist not only loves your brand for what it is; she also loves what she can do with it. Consumers who have these alternate uses for your brand have integrated it much more deeply into their lives and identity.
- She really “knows” your brand.
A brand loyalist really knows your brand. When I have wanted to talk with NBA fans, I made sure they watched a certain number of games every week. But, I also asked them to name the two teams in last year’s finals and four teams in the Eastern Conference. It was always amazing to me how many “fans” who watched a couple of games a week could not answer those questions. True Rice Krispies loyalists would know the names of the three signature characters. True Heinz ketchup loyalists would know in what colors other than red the product has been available.
- Your brand is part of her family’s rituals and traditions.
Does she set out milk and cookies for Santa on Christmas Eve or does she leave a bottle of Coca-Cola with a snack? When a true loyalist describes a family event, she includes the brand name in her story.
- She “owns” your brand.
The true NBA fan wears apparel with the logo of his favorite team. But it is possible to “own” many brands. A true Kraft Singles loyalist brings them home from the grocery store and puts them in a blue “Red and Ned” Kraft Singles box in her refrigerator. A true Starbucks fan drinks her favorite coffee from a Starbucks mug.
To be a loyalist is to take charge of the brand. A true brand loyalist defines the brand and the experience as much as any sophisticated positioning effort. Marketers have always developed “clubs” and the like to encourage an affinity with brands. What is remarkable now is that social media is essential to this effort and makes it so much easier for the consumer to wrest control of the brand from the marketer. It is truly the age of the loyalist.
At C+R, we are continually looking for new definitions of brand loyalty and ways for the marketer to find these true loyalists. Let us find yours.
I have been talking recently about metaphors and how better storytelling can make our insights more compelling. I don’t believe that Kurt Vonnegut was ever in the position of presenting a segmentation analysis, but his advice is worth considering. I think I always want “someone to root for,” whether I am reading a novel or an analysis of a series of ethnographies. Or, at least I should.
May 4th, 2012
By Robert Relihan, Senior Vice President
One of the most difficult experiences for a focus group moderator is to be in a room with eight people who are supposed to be “loyalists” of a particular brand and discover that most couldn’t care less. The assumption, of course, was that anyone who was a heavy user had to have an affinity for the brand. So, you made sure that everyone purchased the brand in the past month and had purchased the brand three times in the past six months; they were very heavy users.
But they stared at you blankly when you tried to discuss what the brand stood for or what it meant to them. The fact is, to develop a meaningful understanding of a brand’s essence, you can’t simply talk to users; you have to talk to people who are passionate about the brand.
So, here are four things to remember when looking for your true (blue) customers:
- Users are not always loyalists.
This is the most obvious point. When you need to talk to or understand people who are passionate about your brand, don’t be coy. Don’t create elaborate usage algorithms. You need to be assured that they do use your brand, but it is more important to ask them, “Is this your favorite brand?”
And a corollary might be that usage is not affinity. Efforts to build loyalty, such as loyalty programs, can stimulate and reinforce usage, but they do not necessarily build affinity and engagement.
Finally, loyalty and affinity may be as much a feature of the category as it is of your brand. Coke and Pepsi have spent years and fortunes building up the notion that it is important to be loyal to one or the other. How could I not be a Coke or Pepsi loyalist? In effect, you have to be passionate about soft drinks before you can be a Coke or Pepsi loyalist.
- Loyalty is a disposition.
Some consumers seem more disposed to being passionate about the brands they use. They want to engage with them. They derive satisfaction from the simple act of choosing a brand and feeling it is “theirs.”
- Loyalists bring as much to your brand as the brand gives to them.
As much as marketers like to see themselves as charting a brand’s essence, users have often been in the driver’s seat. In the years of Honda’s rise as a major automotive brand in the US, it crafted an image as a solid, reliable car. Yet, when I talked with Honda owners, I was always struck by how solid, reliable, and careful they were. How could a car not be reliable when it was driven by such owners?
- Look to their youth.
Loyalty to your brand does not come out of nowhere. If it is deep, it has been there for a long time. As a child I remember watching 20th Century on CBS television. It was sponsored by Prudential Insurance. The Rock of Gibraltar will always be in the back of my head when I think about my insurance.
If you are really looking to find your loyalists, you might well discover that they have had a relationship with your brand long before they actually purchased it. A large number of Porsches are driven by people who have wanted to own one since they were twelve. Ask the question, “What are your first memories of a brand?” If they can’t go back into their past, they probably aren’t a true loyalist.
This all seems to point to the importance of social media in building a brand and engaging loyalists. Social media give loyalists an arena to engage with brands, to define both themselves and the brand. But, there is a trap in social media for marketers. Recent research suggests that most individuals “friend” retailers on Facebook to get deals. That is, social media encourages usage. Does it create engagement and bonding? Is it a two-way street? That is what social media must do to encourage loyalty.
At C+R we are ready to help you understand your “loyal” customers and how to stimulate their engagement with you.
April 24th, 2012
By Walt Dickie, Executive Vice President
One of the advantages of sticking around an industry for a long time is that you have a decent chance to have known someone who left a mark on its development. I was lucky enough to have known Saul Ben-Zeev, one of the guys who developed the focus group as a marketing research tool. I went to work for Saul in 1978, fresh out of graduate school, and I worked beside him as a junior analyst, then a senior colleague, and then, eventually, as a partner in the business.
I say that Saul was, “one of the guys” because focus groups, like most things, came about because several people were working with similar ideas at about the same time, and, in this case, all of them contributed to the “group depth interview,” as focus groups were then known.
Groups were developed from the “focused” one-on-one interviews that were pioneered by Robert Merton and Patricia Kendall in the mid 1940s. The depth, or focused, technique was applied to groups for therapeutic purposes in the 50s, when therapy groups, or T-groups, became a standard tool for psychologists. Saul, a product of the University of Chicago psychology department, was familiar with the technique and was one of several practitioners who worked to re-purpose group interviews for MR starting in the late 50s, when Creative Research, the predecessor to C+R, was founded.
The motive behind “group depth interviewing,” for both psychology and marketing, was mining for insights. And a great deal of serious thought was given on the academic side to the nature and quality of the “insights” that could be discovered. It was something Saul had spent a great deal of time thinking about, both as a student and a research professional.
So it may seem surprising that I’ve never known anyone in the business who put less stock in “insights” than Saul. He was particularly tough on what he called “gurus” who traded in “insight” without the benefit of rigorous analysis or meticulously constructed argument. And he was equally dismissive of those focus group moderators – several of whom we hired over the years – who felt their wealth of “insights” made up for their poor analytic skills. “Insights,” Saul said many times, “are a dime a dozen.”
What cost much more than a dime, and was the only truly worthwhile goal in Saul’s mind, was the ability of closely reasoned logic to instill a sense of confidence in the reader of a report, specifically, the confidence to make a decision.
What Saul realized a half-century ago is something that the MR industry seems to struggle with, learning and re-learning. Most marketing research is conducted because someone has to make a decision. A team will have to align around that decision, argue for it, and support it through a process involving intense scrutiny and, often, intense pressure from other teams to take a different course.
The industry seems to have learned that no one in business today needs more data, but the blogosphere seems to be all over the idea that they all need “insight.” Saul will be 86 this summer, and he doesn’t come around the office very much anymore, but every time I hear that a client is “starving for insight” I can hear Saul’s voice dismissing the thought.
Dictionaries say “insights” are intuitive and that they reveal some deep truth or essence. Saul certainly recognized that clients needed deep truths, and he delivered them – week after week, report after report – over a long, distinguished career. I went to many presentations with him, and saw the way his clients idolized him. And I can tell you without hesitation that the insights poured out of his pen (or pencil – he never really did get comfortable with a keyboard).
The thing about insights is that they feel deep and intuitive when you hear them and you’ve got the context that they fit into ready in your mind. The key fits the lock, turns, and suddenly, you get it! Without that context, that set up, an insight doesn’t hold up. You may feel its rightness in your gut, but you’ll have difficulty getting your team to align behind it and even more defending it. (It’s more than interesting to look at some famous insights when you’re a bit removed from the right context; often they’re not much more than gibberish without the support structure. “The medium is the message.” “Business is like the Beatles.”)
For an insight to be insightful, the audience has to be ready to get it. And for one to have an impact, the people who got it have to be able to get others to get it, even in the face of opposition. And for that, as Saul taught all of us who worked with him, you need to provide the supports.
Maybe one reason that clients feel starved for insights is that they’ve seen too many that were nothing but intuition; insights that evaporated at the slightest hint of a challenge. Or maybe they’ve seen too many tortured arguments that never got down to the deep level where insight lies.
Clients are starved for insights wrapped in a well thought-out supporting structure.
Personally, I think this was one of the many things Saul got right. What’s really needed is an analyst who has the experience to understand the decision to be made, who carefully works through how what’s been learned relates to the issues that drive the decision, and who can then find insights that will feel deep and intuitive. Anything less really isn’t worth a dime.