December 12th, 2013
By Bob Relihan, Senior Vice President
“Make sure that we get a good regional representation.” That has often been the charge from the marketing manager to the insights director. There has been a belief that different cuisines, climates, and experiences would have an impact on attitudes and tastes that could affect how consumers react to new products. So, we would be certain to conduct focus groups in three different regions, or to set quotas assuring the sample represented the East, South, Midwest, and West equally. This was simply good research practice.
Then there was the evening I sat in a focus group room in Atlanta and discovered, as I went around the table, that every one of the participants was originally from New Jersey. I had just conducted focus groups in New Jersey the night before. Why had I made that flight?
Perspectives change. To be sure, a brand’s sales figures can vary from market to market, but I cannot remember the last time I concluded that differences in brand perceptions from one market to another were grounded in fundamental differences in the behavior or attitudes of consumers in those markets. “Place” as defined by traditional markets seems less relevant now. Differences are now defined by virtual communities, or by intra-regional contrasts such as those between an urban core and the suburbs. We are much more interested in ethnic or generational differences when we strive to be representative.
And, from an operational perspective, with more and more qualitative research being conducted virtually, it has become possible to assure that an online community has participants from all over the country. There is no need for travel to those four different markets to be “representative.”
But, a recent article by Richard Florida in The Atlantic Cities points to a large-scale study by a team of social psychologists, “Divided We Stand: Three Psychological Regions of the United States and Their Political, Economic, Social and Health Correlates.” The team analyzed data from a number of surveys stretching over a twelve-year period, representing 1.5 million people from the 48 contiguous United States. They mapped and clustered the occurrence of five personality traits – openness, conscientiousness, extroversion, agreeableness, and neuroticism.
This is Florida’s summary of their conclusions.
The study identifies three main regional types: friendly and conventional, relaxed and creative, and temperamental and uninhibited. The maps below, from the study, show how these line up across America’s states.
- The Friendly and Conventional Region is the blue area that runs from Michigan through the Midwest and much of the Sunbelt and traditional South. This region is defined by low levels of openness (the trait most closely associated with innovation, creativity and entrepreneurship), low levels of neuroticism (the counterpoint to which is a high level of emotional stability) and moderate to high levels of extroversion, agreeableness and conscientiousness. This composite of traits shapes a regional personality that is sociable, considerate, dutiful, and traditional.
As the authors note, “the psychological profile and all the social indicators betray a region that is marked by conservative social values.” This ethos maps onto a region whose residents are primarily white and politically conservative, less likely to move, and more likely to remain close to family and friends. They also have relatively lower levels of education, wealth, innovation, and social tolerance. This region has high levels of social capital and engagement in religious and traditional civic organizations. As the authors conclude, “taken together, the characteristics of this psychological region suggest a place where traditional values, family, and the status quo are important.”
- The Relaxed and Creative Region is the green area along the West Coast and Rocky Mountains through Idaho, Arizona, and New Mexico. There is also a weaker concentration, identified by the much lighter green shading in parts of the Sunbelt (especially North Carolina) and some of New England (including Massachusetts). This regional profile is high in openness and oriented toward creativity, innovation and entrepreneurship. It is also low in extroversion (less-outgoing, more introverted) and agreeableness and especially low in neuroticism (in other words, it has higher levels of emotional stability).
Demographically, the population includes relatively high levels of college grads, more affluent people and higher levels of ethnic diversity. “Social capital is comparatively low here, but tolerance for cultural diversity and alternative lifestyles is high,” the article notes. Befitting its historical origins as the destination for pioneers, it is an “area where significant numbers of people are choosing to settle, as indicated by the positive association with residential mobility…. It is also a place where residents are politically liberal, as well as psychologically and physically healthy.”
- The Temperamental and Uninhibited Region is the deep orange area that covers the Northeast, New England and Middle Atlantic states. There are also lighter concentrations in the contiguous areas of Ohio and Indiana, as well as Texas. This region’s psychological profile is defined by very high levels of neuroticism (hence the temperamental moniker), moderately high levels of openness, low levels of extroversion (or high levels of introversion) and very low levels of agreeableness and conscientiousness. This constellation of personality traits depicts a type of person that is “reserved, aloof, impulsive, irritable, and inquisitive,” while also being “passionate, competitive, and liberal.” This region is highly educated and affluent, with high levels of ethnic and cultural diversity and a liberal political orientation.
If nothing else, this analysis serves to confirm certain stereotypes we all hold of those from different parts of the country. But, the authors of the study make other connections, seeing their data as providing a psychological underpinning to the politically conservative character of the South and Midwest and the entrepreneurial and creative character of the West. In their view, both of these conclusions have policy implications.
But, from the perspective of marketing research, particularly research conducted in support of new product and communications development, this study gives support to a renewed concern for very specific geographic balance. One can hardly doubt that individuals fitting the three regional profiles described above are likely to have different reactions to new products and communications. Thus, assuring that the three regions are properly represented in any piece of research seems prudent. Perhaps the basic three-market focus group project should feature three markets that appear to be ground zero for the three clusters: New York, Omaha, and Phoenix?
August 30th, 2013
By Bob Relihan
One of the biggest flashes of insight I had about the grocery was the realization that it could be just like the jewelry store.
I was walking through a grocery store with a woman as she shopped. We weren’t even calling this a “shop-along” yet. She put something in her cart. I remember it being a jar of mustard. I looked at her, and she knew what I was thinking. “This is a little present for myself. No one else in the house really likes it.” She paused for a moment. “I really like getting presents. You can’t buy a new pair of earrings every day, after all.”
Note that she hadn’t purchased some rich chocolate or an indulgent pastry. Mustard was special to her. But, the experience even transcended the mustard itself. Looking for presents made the entire trip to the grocery store much more enjoyable. Every week was a little Christmas.
It was a revelation. Up until then, I had spent a good deal of time exploring consumer needs and the power of products to fulfill those needs. To be sure, one of those needs might be the desire for a particular hedonistic experience. But, the need was still “rational,” and I was looking at how the product’s attributes delivered that experience. It was a closed system — consumer needs and product attributes, each set mapping on the other. And, it was a system marketers seemed to accept, judging by the questions we agreed were important.
In other words, we assumed that the product or service delivered the benefits. The woman in the grocery store taught me that was not the case.
- When consumers describe the benefits of a product, they are not always describing the product. They are just as likely to be reacting to some aspect of the environment that the product touches. And, that experience may be idiosyncratic to that particular consumer.
- The shopping experience can endow products with meaning independent of the products themselves. In other words, there really is such a thing as “shopper insights.”
- The woman would likely have described the mustard as “special” or “a treat.” But, these were not really attributes of the mustard. They were the product of her being the only one in the household who liked the mustard. Meaning and value are the result of context; rarely are they intrinsic.
The ultimate lesson is that a product does not exist in a vacuum. Its value and meaning to a consumer cannot be separated from the desire to have it or the act of shopping for it.
January 9th, 2013
By Bob Relihan, Senior Vice President
Walt Dickie has done a very nice job of knitting together the trends in the adoption of various electronic devices. Certainly PCs are flattening out and will eventually decline. And, I agree there will be a time when virtually every cell phone is a smartphone. Walt also plots a curve that predicts exponential growth in the tablet/e-reader market, but backs off from the implications. “I’ve gone with a growth curve that can’t be right in the long term – it has to flatten out – but might be okay in the short term.”
I am not so sure.
Now, it is likely true that none but a few high flyers and the tech obsessed (as well as those involved in illicit activities) will ever have more than one smartphone. The device, after all, is tied to one’s personal phone number. But, the same constraining logic does not apply to tablets and e-readers, particularly when they merge into one category with vaguely similar features and price points ranging from $79 to over $800.
So, in the next few years, as more and more users acquire new tablets with better features and still have serviceable old ones on hand, it is easy to imagine a home with a first-generation Kindle by the bedside, an older iPad on the kitchen table for reading the news and checking the weather in the morning, and the latest tablet sitting on the coffee table in front of the television. Will there be a television? Why carry a tablet with you? You can have one wherever you turn.
There was a time when the household was dominated by one large console television in the living room. Over the years of conducting focus groups, I have asked in passing, “So, how many TVs do you have?” Seven is no longer an uncommon answer…in a household of two. A future with a tablet in every room is not that far-fetched.
December 4th, 2012
By Patti Fernandez, Research Director
The Marketing Research Event was buzzing with excitement and anticipation. What tales would we hear, what knowledge would we uncover, what trends would take center stage? And, in the end, on what new paths would we, as researchers, venture?
Insight development via storytelling and storytelling through data visualization were very much in the air. Many a session encouraged us, like Dorothy, to follow the yellow brick road toward our own Emerald City where insights break the confines of numbers and quotes and live within visually compelling stories.
But, in today’s data-driven world, how can we tell a story visually while seamlessly satisfying the needs of the data literalists? And, how can we shake the compulsion to show everything we’ve uncovered because (in our minds) every nugget matters?
The key is not only to tell a story, but also to approach the insight development process in the same way as story-creation. Here are five key elements to a solid storytelling approach:
Relevance is Key
- There is usually a rhyme and reason for everything that is included in a story (foreshadowing, plot-building, etc.).
- In that same way, results and insights should serve as key puzzle pieces that help build and complete a bigger picture.
- Relevance, though, takes time. We must first go treasure-hunting through all of our findings in order to determine which ones truly are worthy of supporting the key insights that need to be communicated.
Follow a Natural, Rational Order
- Stories follow a natural, rational order that keeps us alert and engaged with the plot.
- Our insights and findings, then, should follow the same path. They should help take the audience on a journey that makes sense and keeps them on the edge of their seats.
Create Conflict and Resolution
- Without conflict there is no resolution – without resolution there is no end to a story.
- Always aim to keep the plot of your story anchored. Your role as a researcher is to tell a story that ultimately helps resolve some sort of conflict.
Define Your Characters and Their Roles
- Characters have set roles in the story – they exist for a reason.
- In order to approach research in an organized and rational manner, we must first define who the characters are and what role they play.
- We may be swayed to think that the brand or product is the hero, but it is the consumer who should wear this badge. Brands are simply the tools that help the hero resolve conflict.
Bring Your Story to Life
- A good story will keep us turning the pages if it’s told in an engaging manner. Overly descriptive or circular plots can deter engagement and leave us tossing the story aside without finishing.
- And, just like a poorly written story, research results so loaded with data that the audience has to work too hard to decipher the true message can fall flat.
- Using visual depictions of information to surprise the audience and make data easily digestible will not only make your research more engaging, but also make it easier to present the story in a personal and animated manner.
In the end, it’s not simply how you present your insights with iconic figures, captivating prose, and visually stimulating graphics – it’s how you approach the insight-finding process. So, take a leap of faith and follow the rabbit down the hole through a journey of discovery.
November 8th, 2012
By Bob Relihan, Senior Vice President
In a recent blog post, my good friend and colleague, Walt Dickie, has taken the success of Nate Silver’s data-focused and accurate prediction of last night’s election outcome, and the failure of so many pundits to do the same, as a metaphor for the power of big data and the twilight of the focus group moderator. His argument is that hard-eyed, statistically significant data, modeled and analyzed properly, trumped the instinct and expertise of many pundits and their years of experience feeling the winds of voter moods and sentiments. This is a clear death knell for the focus group and its moderator, who also applies years of experience and instinct to interpreting the often opaque feelings of consumers.
I have a certain amount of sympathy for this argument, particularly after listening to the hours of hot air expended by pundits over the past few (many?) months. I begin to have sympathy for the marketing managers who have to listen to countless presentations of findings from focus groups.
What is more, election night provided another example of the triumph of big data. I was able to read a table this morning that gave average wait times at the polls in different states. The data was the product of an analysis of all the Tweets yesterday. I certainly could not have done that, accurately or not, with focus groups. I am not certain I could have deployed a traditional survey to yield information so quickly.
But, does this all provide a hint of the demise of focus groups and skilled moderators? I don’t think so.
In the first place, not all pundits failed in their prediction of the election outcome. A scoring of the punditry revealed that left-leaning pundits were remarkably accurate. In fact, there were a few with better accuracy than Silver. Right-leaning pundits? Well, most were considerably wide of the mark. When I talk to consumers in a qualitative setting and bring my expertise and instincts to bear upon the comments, I believe I am being objective, as objective as I can be. That objectivity results, I believe, in reliable insights.
Silver’s much-praised accuracy has limitations. He predicted the outcome of this specific election. He was asked to predict very specific and well-understood behavior taking place at a specific time. Rarely, as a focus group moderator, have I had to answer so circumscribed a question. Rather, I am asked to develop hypotheses about the reactions of consumers in a range of possible futures. What are the attitudes and emotions of consumers that tell me how they might respond to a new entrant in a category? A new service they have never seen before? A message about an unheard-of benefit of a well-known product?
Focus groups, conducted by sensitive, experienced analysts, can provide this kind of direction to marketers. And, they will for the foreseeable future.
November 8th, 2012
By Walt Dickie, Executive Vice President
Tuesday’s election is being hailed as “The Triumph of the Nerds.” Barack Obama won the presidential election, but Nate Silver won the war over how we understand the world.
The traditional pundits were on TV, in the papers and blogs, interpreting what they were hearing and feeling. Peggy Noonan:
“Something old is roaring back.” … “people (on a Romney rope line) wouldn’t let go of my hand” … “something is moving with evangelicals … quiet, unreported and spreading” … the Republicans have the passion now, the enthusiasm. … In Florida a few weeks ago I saw Romney signs, not Obama ones. From Ohio I hear the same. From tony Northwest Washington, D.C., I hear the same.”
On the other side were the Moneyball data nerds, with Nate Silver carrying their standard:
“Among 12 national polls published on Monday, Mr. Obama led by an average of 1.6 percentage points. Perhaps more important is the trend in the surveys. On average, Mr. Obama gained 1.5 percentage points from the prior edition of the same polls, improving his standing in nine of the surveys while losing ground in just one. … Because these surveys had large sample sizes, the trend is both statistically and practically meaningful.”
The morning after, Paul Bradshaw posted that the US election was a wake-up call for data-illiterate journalists – the pundits – who “evaluate, filter, and order (information) through the rather ineffable quality alternatively known as ‘news judgment,’ ‘news sense,’ or ‘savvy.’”
Bradshaw, and the blogger Mark Coddington, whom he quotes, look beyond the question of which camp “won” or “lost” the election, and see an epistemological revolution in reporting the news:
Silver’s process — his epistemology — is almost exactly the opposite of (traditional punditry): “Where political journalists’ information is privileged, his is public, coming from poll results that all the rest of us see, too. Where political journalists’ information is evaluated through a subjective and nebulous professional/cultural sense of judgment, his evaluation is systematic and scientifically based. It involves judgment, too, but because it’s based in a scientific process, we can trace how he applied that judgment to reach his conclusions.”
But this blog post is about marketing research, not journalism, although as I’ve argued before the two fields have a lot in common.
When I read Bradshaw yesterday morning, I could hardly help re-writing his observations, since he could easily have been talking about the traditional approach to qualitative analysis. Here’s my re-write:
Qualitative analysts get access to information directly from consumers, then evaluate, filter, and order it through their “judgment,” “sense,” or “savvy.” This is how qualitative analysts say to their clients (and to themselves), “This is why you can trust what we say we know — because we found it out through this process.”
Journalistic intuition suffered a severe blow Tuesday, though I doubt it will prove fatal. The data-free intuition of focus group moderators is getting hammered by Silver-esque data-driven analysis, but it hasn’t succumbed yet, either.
Still, I have to wonder if the writing is on the wall. I’ll leave the last word to Bradshaw:
Journalists who professed to be political experts were shown to be well connected, well-informed perhaps, but – on the thing that ultimately decided the result: how people were planning to vote – not well educated. They were left reporting opinions, while Nate Silver and others reported research.
July 27th, 2012
By Walt Dickie, Executive Vice President
In June, The Pew Foundation published some very interesting data on cellphone based internet use that packs some worrisome implications for a lot of online marketing research.
Some 88% of U.S. adults own a cell phone of some kind as of April 2012, and more than half of these cell owners (55%) use their phone to go online … 31% of these current cell internet users say that they mostly go online using their cell phone, and not using some other device such as a desktop or laptop computer. That works out to 17% of all adult cell owners who … use their phone for most of their online browsing (my emphasis).
Pew also finds that 5% of cell phone owners use their cell phones and some other device equally for online access, 33% mostly use some other device, although they also use their cell phones to get online, while 45% of cell phone owners don’t go online at all using their phones.
So, let’s do a little back-of-the-envelope calculation: based on these stats, how many cell phone users should we be finding in our general-population online survey samples?
We have to make some assumptions. Pew asks their questions in terms of the respondent’s device choice for “most” online access. Let’s say that “most online browsing” means something like 75% of all browsing. In other words, let’s estimate that the 17% of adult cell owners who “mostly” use their phones are actually doing so for about 75% of their browsing. Similarly, let’s assume that the 33% who “mostly” use some other device are actually using their phones 25% of the time. Finally, we’ll assume that those who split their online access equally between phones and other devices are splitting the time 50/50.
Using those numbers and Pew’s overall cell ownership data, we should expect 0.88 × ((0.17 × 0.75) + (0.05 × 0.50) + (0.33 × 0.25)) = 20.7% of a general population sample to show up as using cell phones.
If the people who “mostly” use another device actually use their cell phones for only 10% of their online access, then this proportion would drop to 16.3%. In the extreme case, in which people who “mostly” use cell phone for access do so only 51% of the time, and people who “mostly” use another device actually choose their cell phones only 1% of the time, we would still expect to see cell access making up about 10% of a general population sample.
So, based on Pew’s data, the incidence of cell phone access to general population surveys should be in the 10% to 20% range.
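For readers who want to play with the assumptions, the arithmetic above can be sketched in a few lines of Python. The usage weights (75%, 50%, 25%) are the guesses made in the text, not figures Pew reports, so treat the function arguments as knobs for sensitivity-testing rather than established fact.

```python
# Back-of-the-envelope estimate of the share of a general-population
# online sample that should show up as cell-phone users, based on
# Pew's 2012 figures. Segment sizes come from Pew; the usage weights
# are the assumptions stated in the text.

CELL_OWNERSHIP = 0.88  # share of U.S. adults owning a cell phone (Pew)

def cell_share(mostly_phone=0.75, split=0.50, mostly_other=0.25):
    """Expected share of a gen-pop sample browsing via cell phone.

    Each argument is the assumed fraction of browsing done on a phone
    by the corresponding Pew segment: 17% who "mostly" use their phone,
    5% who split equally, and 33% who "mostly" use another device.
    """
    return CELL_OWNERSHIP * (0.17 * mostly_phone
                             + 0.05 * split
                             + 0.33 * mostly_other)

print(f"{cell_share():.1%}")                   # baseline estimate, 20.7%
print(f"{cell_share(mostly_other=0.10):.1%}")  # lower assumption, 16.3%
print(f"{cell_share(0.51, 0.50, 0.01):.1%}")   # extreme case, about 10%
```

The three calls reproduce the 20.7%, 16.3%, and roughly 10% figures from the text, which makes it easy to see how sensitive the range is to the "mostly another device" assumption.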
If that sounds problematic, the trend data that Pew offers seems even more so.
Pew doesn’t give tracking data on “cell mostly” users but they do give data on the growth of cell-based online access overall. It’s not unreasonable to assume that the “cell mostly” segment will grow at roughly the same rate as cell access as a whole. Here’s Pew’s data re-drawn and projected forward.
Pew’s data shows that phone-based internet access is growing at just about 10 percentage points per year. At that rate, essentially 100% of a gen pop sample “should” be using a mobile device in about 10 years. If MR samples continue to under-represent people who access the internet via cell phone by 65% to 75%, then the “standard” MR sample sources will shrink by a comparable amount.
Of course, this is a crude estimate – whatever happens, the trend line won’t be linear – and ten years is not tomorrow.
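That crude extrapolation can be made explicit. The sketch below applies the roughly-10-points-per-year trend to the 20.7% gen-pop estimate from the calculation in the text; both numbers are assumptions carried over from above, and, as noted, the real curve would flatten well before 100%.

```python
# Crude linear projection: how long until essentially all of a
# general-population sample accesses the web by cell phone?
# The starting share (20.7%, the Pew-based estimate) and the
# 10-percentage-points-per-year gain are the text's assumptions;
# the actual trend line will not stay linear.

share = 0.207   # estimated gen-pop phone-browsing share in 2012
growth = 0.10   # assumed absolute gain per year

year = 2012
while share < 1.0:
    year += 1
    share = min(share + growth, 1.0)

print(year)  # prints 2020 under these assumptions
```

Under these assumptions the share saturates around 2020, about eight years out, which is in the same range as the "about 10 years" figure in the text.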
But still, assuming that these numbers are anywhere in the ballpark, this implies some big problems. We need to know what is keeping cell users away from online MR surveys, and we need to find ways of changing our approaches to make our research more amenable to mobile access.
Pew’s report doesn’t directly address the question of how to do this, of course, but it does have some strong hints about what might be involved in the reasons given for choosing cell phones as web access devices.
“Cell mostly” users say that their cell phone is a “simpler, more effective choice for going online” compared to other available options (18%); 7% say that they “do mostly basic activities when they go online”; and 6% “find their cell phone to be easier to use than a traditional computer.”
Would these people consider marketing research surveys simple and basic? How long can an online activity take and still be simple and basic? How complex can it be? What kind of engagement can be involved? Is going through a battery of a dozen or so attributes and rating each one on a 1-to-something scale either simple or basic? Is viewing a succession of concept statements with accompanying images – over a wireless connection – simple and basic?
We know from many sources that cell phone use is dominated by short “sessions”: a quick text message, a visit to Google Maps for directions, checking Yelp to find a good restaurant nearby, a fast check of incoming email. This isn’t to say that people don’t sustain long periods of engagement – playing Angry Birds on the bus to work, reading the news, even reading a novel using the Kindle app. But many aspects of cell phones, from screen size to data plans to spotty coverage urge short bursts of use that generally don’t mesh well with anything resembling even a 10 or 20 minute questionnaire.
Although I don’t know for certain where the cell phone users that are missing from our general population samples have gone or why they’re not in our samples, I do have a hunch. They’re not in our samples because they’re not even in our world, which was built, sample panel on sample panel, river source on river source, on a PC/laptop model of online engagement and interaction. The “cell phone mostly” web users have simply moved on to something simpler and more basic.
I’ve blogged about this issue before and will again, I expect. There is a conflict between “marketing research” understood as “the collection of data designed for statistical analysis tailored to the needs of standardized corporate decision making procedures,” which is what drives a major portion of client MR activity, and marketing research defined as “collecting as much data relevant to marketing issues from as many sources and in as many modes as is possible via available technology.”
The incorporation of MR into corporate decision-making happened during an era when the technology at hand – phone and mall interviewing, then online surveys – created a certain style of research that demanded high focus and a fairly large time commitment from respondents. That kind of research is still quite possible, but its days may well be numbered.
June 21st, 2012
By Joy Boggio, Director of Online Qualitative Support
We are adopting new technologies so fast that what was cutting edge last year is passé this year. The Wondrous recently had a great post about technologies that will soon be obsolete. Think the TV remote or fax machines.
This reminded me of the debate over text analytics and verbatim management for online qualitative studies. The various TA software packages (Language Logic, Clarabridge) are said to move us into the frontier of the future by “machining” the findings that we have culled from our boards. This all sounds promising, and many of us assumed that this would save a moderator/analyst time and uncover insights buried beneath the vast amount of data that we no longer “live through.”
It seemed, at first glance, to be a great solution when we realized gleefully, “Hey, we have so much data!” Then, in the next breath, we realized, “OH! We have so much data…!” Unlike traditional qualitative research, in which the moderators immerse themselves in the data as it is happening, the online qualitative moderator must sift through data that has been accumulating over days. We must find a way to juggle it all and make sense of it just to find the nuggets of information.
But, how can we identify those nuggets quickly and efficiently? At C+R Research, we have a seemingly overwhelming amount of data, so we have made many attempts to “machine” and organize qualitative “data.”
At TMTRE, many others also talked about their attempts at automating and coding these responses. Most have come to the same conclusion we have… you simply have to read the comments from the boards. The data set, while appearing to be tremendous, is still too small to get good results from any automated method of sorting or coding it. Many have tried Language Logic to categorize and NVivo to organize, but both add time and almost always require a second analyst, which may cause the sub-text of the responses to be lost.
Automating the work does seem to have a place when you are dealing with multi-phase projects or when you are talking to a few hundred or more respondents, but not so with an average bulletin board of 20-30 people. What was the overall consensus? “It’s QUAL, we shouldn’t strive to quantify it!”
April 24th, 2012
By Walt Dickie, Executive Vice President
One of the advantages of sticking around an industry for a long time is that you have a decent chance to have known someone who left a mark on its development. I was lucky enough to have known Saul Ben-Zeev, one of the guys who developed the focus group as a marketing research tool. I went to work for Saul in 1978, fresh out of graduate school, and I worked beside him as a junior analyst, then a senior colleague, and then, eventually, as a partner in the business.
I say that Saul was “one of the guys” because focus groups, like most things, came about because several people were working with similar ideas at about the same time, and, in this case, all of them contributed to the “group depth interview,” as focus groups were then known.
Groups were developed from the “focused” one-on-one interviews pioneered by Robert Merton and Patricia Kendall in the mid-1940s. The depth, or focused, technique was applied to groups for therapeutic purposes in the 1950s, when therapy groups, or T-groups, became a standard tool for psychologists. Saul, a product of the University of Chicago psychology department, was familiar with the technique and was one of several practitioners who worked to re-purpose group interviews for MR starting in the late 1950s, when Creative Research, the predecessor to C+R, was founded.
The motive behind “group depth interviewing,” for both psychology and marketing, was mining for insights. And a great deal of serious thought was given on the academic side to the nature and quality of the “insights” that could be discovered. It was something Saul had spent a great deal of time thinking about, both as a student and a research professional.
So it may seem surprising that I’ve never known anyone in the business who put less stock in “insights” than Saul. He was particularly tough on what he called “gurus” who traded in “insight” without the benefit of rigorous analysis or meticulously constructed argument. And he was equally dismissive of those focus group moderators – several of whom we hired over the years – who felt their wealth of “insights” made up for their poor analytic skills. “Insights,” Saul said many times, “are a dime a dozen.”
What cost much more than a dime, and was the only truly worthwhile goal in Saul’s mind, was the ability of closely reasoned logic to instill a sense of confidence in the reader of a report, specifically, the confidence to make a decision.
What Saul realized a half-century ago is something that the MR industry seems to struggle with, learning and re-learning. Most marketing research is conducted because someone has to make a decision. A team will have to align around that decision, argue for it, and support it through a process involving intense scrutiny and, often, intense pressure from other teams to take a different course.
The industry seems to have learned that no one in business today needs more data, but the blogosphere seems to be all over the idea that they all need “insight.” Saul will be 86 this summer, and he doesn’t come around the office very much anymore, but every time I hear that a client is “starving for insight” I can hear Saul’s voice dismissing the thought.
Dictionaries say “insights” are intuitive and that they reveal some deep truth or essence. Saul certainly recognized that clients needed deep truths, and he delivered them – week after week, report after report – over a long, distinguished career. I went to many presentations with him, and saw the way his clients idolized him. And I can tell you without hesitation that the insights poured out of his pen (or pencil – he never really did get comfortable with a keyboard).
The thing about insights is that they feel deep and intuitive when you hear them and you’ve got the context that they fit into ready in your mind. The key fits the lock, turns, and suddenly, you get it! Without that context, that setup, an insight doesn’t hold up. You may feel its rightness in your gut, but you’ll have difficulty getting your team to align behind it and even more difficulty defending it. (It’s more than interesting to look at some famous insights when you’re a bit removed from the right context; often they’re not much more than gibberish without the support structure. “The medium is the message.” “Business is like the Beatles.”)
For an insight to be insightful, the audience has to be ready to get it. And for an insight to have an impact, you have to be able to get others to get it, even in the face of opposition. And for that, as Saul taught all of us who worked with him, you need to provide the supports.
Maybe one reason that clients feel starved for insights is that they’ve seen too many that were nothing but intuition; insights that evaporated at the slightest hint of a challenge. Or maybe they’ve seen too many tortured arguments that never got down to the deep level where insight lies.
Clients are starved for insights wrapped in a well thought-out supporting structure.
Personally, I think this was one of the many things Saul got right. What’s really needed is an analyst who has the experience to understand the decision to be made, who carefully works through how what’s been learned relates to the issues that drive the decision, and who can then find insights that will feel deep and intuitive. Anything less really isn’t worth a dime.
April 11th, 2012
By Robert Relihan, Senior Vice President
Since my blog post on becoming better storytellers, I have been thinking quite a bit about metaphors. The ability to think metaphorically is crucial to our understanding of consumers and brands. We ask consumers to talk about their experiences in metaphors because it enables them to give voice to feelings that might otherwise remain hidden. As marketers, metaphors enable us to encapsulate the many facets of a brand in a single image.
But I am not always certain that we understand metaphors and their power. Or perhaps we fear that power because it cannot always be controlled. Either way, it’s important for us to understand that power.
We all know that a metaphor is a comparison. In its traditional definition, a metaphor does not use like or as — that’s a simile. It’s also important to know that metaphors are figurative, not literal. We feel a metaphor; we sense the connection. It isn’t telegraphed to us.
One way to understand metaphors is to consider two ways in which they are misused or underused.
- We, as researchers, often ask participants in focus groups to express their feelings about a brand or activity in terms of something else — to create a metaphor. We believe that this activity will force them to take a fresh perspective and unlock perceptions they had not considered. But that is not what happens. We ask a classic qualitative question: “If Brand X were a dog, what breed of dog would it be?” Here is where things begin to go wrong. The respondent thinks, “Well, I like Brand X, and it makes me feel good. Golden Retrievers are friendly and make me feel good. Therefore, a Golden Retriever feels like Brand X.” Rather than expand her vision of the brand, the respondent has simply expressed a single dimension in different words. There is no expansion of meaning. But here is also where we go wrong. In the press of time, we let that pat answer stand. What we need to do is engage the respondent in an extended discussion about Golden Retrievers. There may be a meaningful metaphor there after all. What were the individual’s first memories of Golden Retrievers? What is it like to walk with a Golden Retriever? To sit with one? All of these answers enrich the respondent’s vision of a Golden Retriever and, through the logic of the metaphor, enrich our understanding of the brand.
- As marketers, we often make the same mistake as we think about our brands. We want metaphors that capture the essence of a brand in a single, memorable image. That metaphor can energize and give focus to the brand team. So, after much research and brainstorming, we decide that our brand of ketchup or soup — or whatever — is a ‘hero.’ It rescues consumers from humdrum meals. It helps them conquer the adversity of routine meals. We use the metaphor in a limited, self-congratulatory way. It becomes static, but a metaphor is always active. Every time we return to it, it should enrich our understanding of the brand. This is possible only if the metaphor is specific. If our brand is a hero, is it Odysseus? Robin Hood? Jack Bauer? If we reflect on any one of these heroes, we might discover different qualities in our brand. That is the power of a metaphor. It does not express what we know; it illuminates what we do not.
That’s how to use metaphors. A good metaphor reveals insights, and it does so repeatedly. At C+R we are committed to helping you discover the metaphors that give life to the essence of your brands.