March 15th, 2013
By Jorge Martinez, Director, LatinoEyes
In case you missed it, controversial Venezuelan president Hugo Chavez Frías died earlier this month after a long battle with cancer. To many, Mr. Chavez was a righteous leader and brave revolutionary who deserved the admiration of thousands all over the world; to others, he was a tyrant who imposed his views and mishandled the responsibilities that were vested in him when he became Venezuela’s president… and every subsequent time he successfully extended his stay in power. It seems that with Mr. Chavez there were no in-betweens: you either loved his legacy or flat-out hated it.
This note, however, is not about the former president or his policies, but rather about one of the effects of his rise to power, one that, in my opinion, will inevitably influence the fabric of the Hispanic population in the U.S.: the growing Venezuelan community here.
After Mr. Chavez rose to power in 1999, many of the changes he introduced caused a gradual but constant exodus of middle and upper-middle-class Venezuelans who felt either forced or compelled to look for better opportunities outside of the otherwise prosperous Venezuelan economy. Statistics from the Department of Homeland Security indicate that the number of Venezuelans obtaining legal permanent resident status just about doubled in a matter of years, from a little over 5,000 in 2002 to over 10,000 in 2005; since then, the flow has remained constant.
Many of those who fled Venezuela are highly educated and entrepreneurial individuals who now call cities like Katy in Texas and Weston in Florida home (each city has its own Venezuelan-themed moniker: Katyzuela and Westonzuela), and their arrival has begun to shape and influence their respective communities and the Hispanic community overall.
As a researcher, I have been fortunate to meet and become friends with some of those Venezuelans who now live in this country (on both ends of the Chavez-liking spectrum); having access to them outside of the research arena has broadened my understanding of the Venezuelan culture and people, knowledge that comes in handy when you’re a qualitative researcher. I know that just as I have benefited from my immersion in Venezuelan culture, so have many others and, most importantly, so has society in general, as Venezuelans leave their footprint in a wide array of fields like the arts (Gustavo Dudamel), sports (Omar Infante), and fashion (Carolina Herrera), among others. A quick look at Hispanic media is enough to notice the advent of Venezuelan talent in TV and music. At a more local level, I bet there are several arepa lovers out there thanking Venezuelans for introducing them to this corn deliciousness, or to hallacas and pan de jamon.
In my view, cultures become richer as societies become more diverse and Venezuelans have contributed to enriching the ways in which Hispanic communities are defined throughout the US. Whether it is through entrepreneurial ventures, entertainment, or via academic and cultural experiences, their influence is palpable among Hispanics who have been exposed to the Venezuelan way of speaking, dancing, cooking and living life as a Hispanic in a foreign culture. As a moderator and researcher, I see this influence in a focus group, and I have come to appreciate the experience of a different Hispanic identity being present in consumer research.
The death of Mr. Chavez makes me wonder: what will happen to those who fled Venezuela and came to the US? Will they have a burning desire to return to their home country, or will they continue to make their Hispanic identity in the US stronger? What does the possibility of a Venezuelan reverse exodus mean to us researchers and to brands that target Hispanics? Many, like those in Katyzuela and Westonzuela, have developed deep roots in this country and now have US-born children, which I imagine makes the possibility of returning to Venezuela more challenging.
Hugo Chavez is dead. Now what?
December 17th, 2012
By Walt Dickie, Executive Vice President
I’ve been working in the MR industry since 1978, and the one unchanging theme of that long period has been the constant complaint by senior corporate executives – who fund the industry through their demand for research-based decision making – that MR just isn’t very good at identifying opportunities or pointing to the new products, services, and ventures with the best chances of successfully capitalizing on them.
I remember being surprised when I first encountered a CEO denouncing the entire MR enterprise at a major conference – which probably happened sometime within my first or second year. Was my new, desperately needed first job after grad school, and maybe the whole industry, going to go down in flames before I even got started? It hasn’t, but the drumbeat of C-suite dissatisfaction has never lessened.
This all came to me as I was reading one of the many articles about the “nerdiest election,” which reported that Romney had not prepared a concession speech because – by all the accounts I’ve seen – he didn’t think he’d need one. It sounds like Romney really got blindsided. Not only did he not have a concession speech held in reserve, but he also planned to celebrate the election with a fireworks display over Boston Harbor, and his campaign even had a “Romney Wins” website up and ready. (Someone posted screen grabs, of course.)
As Slate’s John Dickerson said, “He got the numbers wrong … in the end Romney and Ryan had to watch CNN to find out how their campaign was doing.”
The blog posts and news stories lay blame on what David Frum labeled the “conservative entertainment complex.” And it looks like the Romney campaign’s “marketing research” group followed the media’s lead in asking the questions they wanted asked, hearing the answers they wanted to hear, and reinforcing an internal viewpoint that, in the end, failed as MR and left the CEO “gobsmacked” and angry.
I read all of this with the shock of recognition. My mind snapped back to 1985 and the introduction of New Coke. All the research that had been done! And, the magnitude of the disaster that followed! The astonishment of everyone inside Coca-Cola that their carefully constructed edifice had been built on sand. I had seen all of it and had enjoyed having the inside scoop thanks to two colleagues who had come from Coke’s MR department and understood the backstory, which had included a huge research effort.
The post-election stories seem to be revealing a distinctive, internal worldview shared by the campaign and its media supporters. Republican pollster Whit Ayres described the research as being driven by “rosy assumptions on a likely electorate…at…substantial variance with recent history.”
An “echo chamber” bubble developed when the campaign research staff and its clients elaborated a shared narrative about polling and sampling methods and how to interpret results. Outside that bubble, the academic social scientists and media stat nerds – with Nate Silver as their symbolic leader – were using different methods, asking different questions, and interpreting their findings differently.
They were right in the end, and the people in the bubble were wrong.
A case can be made that MR and our clients have created a similar bubble, in which we talk to each other in an echo chamber. And a case can be made that the academic social scientists and techie stat nerds – living in an entirely different world – are “threatening” traditional MR with everything from Big Data, social media and web-based behavioral analytics, and location data to remote facial analysis and eye tracking.
The GRIT survey, as well as reports coming out of consultancies like Cambiar (site registration required), say clearly that corporate research departments are eagerly welcoming all the new “non-MR” vendors knocking on their doors. What’s worrisome is what would happen if the CEOs started sending more work to the guys outside the MR world, and those guys started having some successes calling the game. That’s the scenario envisioned by the voices that expect the leading “MR” vendors at the end of the decade to be companies like Google, Facebook, and Twitter.
Why didn’t all of these new approaches arise out of marketing research? At the very least, why weren’t traditional marketing research companies their earliest and most eager adopters? Why didn’t clients hear about all of these developments from their MR vendors? Is it because the conversation was “closed” and things like that simply had no place?
I’m not arguing that MR is operating in bad faith – only that researchers and their client audiences may have constructed a particular shared view of how research is commissioned and conducted, a view that has become closed, limited, and overly rigid.
Marketing researchers are almost universally serious, sober people who see themselves as technical experts in a field that demands hard work, clear thinking, and ingenuity to provide vital information to decision makers while often suffering little respect, diminished budgets, and constricting timelines. But make no mistake about it, marketing research has been working with the same basic approaches for at least a couple of generations. I made a stab at some of the qualitative issues in a previous post, so in part 2 of this post, I will examine a few things about quantitative research.
December 10th, 2012
By Walt Dickie, Executive Vice President
I’ve been living in a sort of fugue state since the presidential election. I keep drifting into extended musing over the ways that the campaigns were acting out lessons for marketing researchers. I’m hoping that this will come to an end fairly soon; in fact, I think I can feel the effect beginning to wear off.
In the early stages of the rush, all of the lessons seemed to highlight weaknesses in either qualitative or quantitative MR. But now I’m getting more positive messages.
If you know where to look and how to spot them, you can see the influence of Operations in the Election Day stories. There seems to have been an Operations disaster, but I think I also see one of the rare cases, in my experience, where someone listened to Ops early on, took their advice, and nailed the project.
In the case of the ill-fated “Republican Party’s newest, unprecedented, and most technologically advanced plan to win the 2012 presidential election,” Project ORCA, the Ops guys apparently lost to the PR guys. I’m absolutely sure the campaign’s Operations guys insisted upon holding serious training sessions for the volunteers who would use the system, because I know there had to have been Ops geeks around and I know how Ops geeks think. But, according to a volunteer who blogged his experience (and disappointment), volunteers “were invited to take part in nightly conference calls. The calls were more of the slick marketing speech type than helpful training sessions.”
In the words of Zac Moffatt, the campaign’s digital director, the system “kind of buckled under the strain (of) the amount of information incoming.” The lack of training left the people out in the field unprepared and unable to communicate. Even with “800 Romney workers…staffing phones…the surge in traffic was so great that the system didn’t work for 90 minutes,” leaving the field workers scrambling and headquarters without field input.
I have to say that I felt awful reading this. I know that somewhere an Operations Director was crying, and that his or her advice had gone unheeded. I’m dead sure of it.
But the most interesting story about Ops guys and an election didn’t even happen in 2012 – it happened in the Democratic camp in 2008.
Amazingly enough, in 2008 the Democrats had built a system called Houdini that, like ORCA in 2012, was designed to “make the names of those who had already voted disappear from the Get Out The Vote lists” being maintained by volunteers in the field, who could then stop wasting time on people who had already voted and concentrate on the people who hadn’t.
Like ORCA, Houdini failed. Spectacularly. “On Election Day, the call volume was even more than anticipated and took out the entire phone system for the Obama campaign. It didn’t just [affect] the reporting of vote totals but [affected] anything that [involved] a central campaign phone line.”
But here’s where the Ops guys come in. After Houdini’s failure the developers did a post mortem and scaled back the functionality of their 2012 system, Narwhal.
“It was basically determined that it wasn’t worth the risk or the amount of work for every precinct in the country. The creators of Houdini came in from Google and decided that it wasn’t possible to build a system that would scale that big.”
This is amazing – somebody decided to build a system with reduced functionality in order to get improved reliability. Do you suppose the political team or the marketing guys wanted less information on Election Day? Hell, no! I’m guessing they fought like cornered Tasmanian Devils not only to expand the scope but also to add new features. I wasn’t there, of course, but I’ve been in enough discussions between MR analysts, developers, and Operations people to be pretty sure who fought for what. The guys from Google were Ops savvy.
Operations people are under-appreciated. Their job is to execute somebody else’s brainchild, and we rely on them to make it a success. Like the hedgehog in the fable, they know one big thing: failure is not an option. Projects have to get done. On time. Within the budget. The data has to be right. There will not be a second chance. They’ll move mountains to get things done. Sometimes they go home at night wondering why no one sought or followed their advice before sailing right into the big rock they saw so clearly ahead.
But apparently as the developers proceeded from 2008’s Houdini to 2012’s Narwhal the Ops guys won one. I hope they got the credit they deserved because too often they don’t.
November 29th, 2012
By Erin Barber, Vice President
TMRE is the place for all that is new in research. This year was no exception. Not only did we hear about big data — the hottest of “hot” topics — we also heard about mobile, eye tracking, neuroscience, visual storytelling, “infotainment,” social media listening, online communities, Millennials, and much more. So what now? How do we process all of this information?
What do all of these technologies, methods, and reporting requirements mean for the research industry and its future? At the end of the day, what we took away from TMRE is that traditional research methods and the need to present crucial insights will never go away.
TMRE showed us that whatever new approaches might appear on the horizon, they will derive power and validity from being integrated with tried-and-true methods. It is just like life; the new always succeeds by building on what has gone before. We can never forget the “shoulders of giants.”
Well, that’s exactly what we can do with all of the great, newer information we took away from TMRE. As corporations try to make sense of Big Data:
- We will always need surveys, but now we need to think about how they are taken on mobile devices and tablets. We need to ensure not only that respondents have a good experience so they provide accurate answers, but also that the new mode provides us with the data we need. Mobile is here to stay and we need to account for it. It’s not just a tool for specific projects where we need in-the-moment experiences; it means that our respondents are taking surveys on mobile devices whether we want them to or not.
- Eye tracking and neuroscience provide us with a more complete story of how consumers shop, but we need our surveys and in-store shop-alongs (among other methods) to begin the story, giving us a starting point for understanding how consumers plan and some context around why they chose not to buy something they stared at for minutes. These methods also allow us to compare against data from years ago and add new insight.
- The same goes for social media. It’s been incorporated into companies’ research repertoires, but it’s only a piece of the puzzle and alone does not give a perfect answer.
- Our clients have a brief window during which they provide their leadership with solutions. As such, we need to recognize this reality. We need to craft clear stories that our clients can present in 10 minutes or less. We need to help them paint the picture. We may love the “great” information we uncover, but let’s just answer the key questions for our clients and not inundate their stakeholders with data.
November 27th, 2012
By Mary McIlrath, Senior Vice President
We’re just returning from The Market Research Event in Boca Raton, FL, and in between the keynotes, breakouts, and cocktail hours, the buzz of activity centered around the Exhibition Hall. Our presentation, Falling Dow + Rising Tao: The New Quest for Balance and What It Means for Your Brand, touches on themes of juggling all aspects of the ecosystem of one’s life, and TMRE is a microcosm of that juggling act.
How much time should be spent working on projects back home? How much time should be spent networking with other industry professionals? Chatting with prospective vendors? And, arguably just as important, how to decide which pieces of swag from the 93 exhibitors should make it into the suitcase home?
Anyone who’s been to an exhibit hall knows that it’s easy to get the “fever” to collect as many free baubles as possible (excluding the premium giveaways like the Kindle Fire HD we raffled off). Quirk’s gathered bags and bags to give away to readers. But when the rubber hits the road and it’s time to pack to go home, what makes the cut?
When I looked at what I really brought home instead of abandoning in the hotel room, the “best” swag met three key criteria: 1) It made a logical, even clever tie back to the exhibitor’s brand equity, 2) It was practically useful in a way I was confident I would exploit, and 3) It didn’t take up too much space.
With those criteria in mind, I give you the Top 5 Swag Items from TMRE:
5. Brain stress ball keychain from Ideas to Go: It suggests creativity and mobility, and will make a great post-conference trinket for a colleague back home.
4. Chocolate cell phone from Gongos: A delicious way to remind clients of their mobile capabilities, and slim enough to fit in a briefcase pocket for choco-mergencies on the way home.
3. Remote control car from Research Now: A reminder of their driving video game in the exhibit hall. Way to bring “gamification” to the conference, guys!
2. Digital caricatures from GutCheck: They fit with the company’s “instant gratification” platform, yet took long enough to create that models became a brief captive audience to hear about its services. The caricatures created a lot of buzz for the company around the conference.
1. “Chicago Mix” Garrett’s popcorn from C+R: If I do say so myself, there were many, many sticky orange fingers roaming the exhibit hall. Anyone who has ever lived in Chicago flocked to the freshly-flown-in delicacies. And by the time we presented on the final day of the conference, we were well-known as the company from Chicago with the tastiest treats.
It was hard to name just five. There were many items that came in handy (like the Pert Group’s sunglasses during a blazing hot al fresco lunch) and many pens, desk toys, and some T-shirts too. But these five set the bar for memorability. Now, colleagues, let’s start conspiring for next year’s top items…see you in Nashville for TMRE 2013!
November 8th, 2012
By Bob Relihan, Senior Vice President
In a recent blog post, my good friend and colleague, Walt Dickie, took the success of Nate Silver’s data-focused and accurate prediction of last night’s election outcome, and the failure of so many pundits to do the same, as a metaphor for the power of big data and the twilight of the focus group moderator. His argument is that hard-eyed, statistically significant data, modeled and analyzed properly, trumped the instinct and expertise of many pundits and their years of experience feeling the winds of voter moods and sentiments. This is a clear death knell, he suggests, for the focus group and its moderator, who also applies years of experience and instinct to interpreting the often opaque feelings of consumers.
I have a certain amount of sympathy for this argument, particularly after listening to the hours of hot air expended by pundits over the past few (many?) months. I begin to have sympathy for the marketing managers who have to listen to countless presentations of findings from focus groups.
What is more, election night provided another example of the triumph of big data. I was able to read a table this morning that gave average wait times at the polls in different states. The data was the product of an analysis of all the Tweets yesterday. I certainly could not have done that, accurately or not, with focus groups. I am not certain I could have deployed a traditional survey to yield information so quickly.
But, does this all provide a hint of the demise of focus groups and skilled moderators? I don’t think so.
In the first place, not all pundits failed in their prediction of the election outcome. A scoring of the punditry revealed that left-leaning pundits were remarkably accurate. In fact, there were a few with better accuracy than Silver. Right-leaning pundits? Well, most were considerably wide of the mark. When I talk to consumers in a qualitative setting and bring my expertise and instincts to bear upon the comments, I believe I am being objective, as objective as I can be. That objectivity results, I believe, in reliable insights.
Silver’s much praised accuracy has limitations. He predicted the outcome of this specific election. He was asked to predict very specific and well-understood behavior taking place at a specific time. Rarely, as a focus group moderator have I had to answer so circumscribed a question. Rather, I am asked to develop hypotheses about the reactions of consumers in a range of possible futures. What are the attitudes and emotions of consumers that tell me how they might respond to a new entrant in a category? A new service they have never seen before? A message about an unheard of benefit of a well-known product?
Focus groups, conducted by sensitive, experienced analysts, can provide this kind of direction to marketers. And, they will for the foreseeable future.
November 8th, 2012
By Walt Dickie, Executive Vice President
Tuesday’s election is being hailed as “The Triumph of the Nerds.” Barack Obama won the presidential election, but Nate Silver won the war over how we understand the world.
The traditional pundits were on TV, in the papers and blogs, interpreting what they were hearing and feeling. Peggy Noonan:
“Something old is roaring back.” … “People (on a Romney rope line) wouldn’t let go of my hand.” … “Something is moving with evangelicals … quiet, unreported and spreading.” … The Republicans have the passion now, the enthusiasm. … “In Florida a few weeks ago I saw Romney signs, not Obama ones. From Ohio I hear the same. From tony Northwest Washington, D.C., I hear the same.”
On the other side were the Moneyball data nerds, with Nate Silver carrying their standard:
“Among 12 national polls published on Monday, Mr. Obama led by an average of 1.6 percentage points. Perhaps more important is the trend in the surveys. On average, Mr. Obama gained 1.5 percentage points from the prior edition of the same polls, improving his standing in nine of the surveys while losing ground in just one. … Because these surveys had large sample sizes, the trend is both statistically and practically meaningful.”
The morning after, Paul Bradshaw posted that the US election was a wake-up call for data-illiterate journalists – the pundits – who “evaluate, filter, and order (information) through the rather ineffable quality alternatively known as ‘news judgment,’ ‘news sense,’ or ‘savvy.’”
Bradshaw, and the blogger Mark Coddington, whom he quotes, look beyond the question of which camp “won” or “lost” the election, and see an epistemological revolution in reporting the news:
Silver’s process — his epistemology — is almost exactly the opposite of (traditional punditry): “Where political journalists’ information is privileged, his is public, coming from poll results that all the rest of us see, too. Where political journalists’ information is evaluated through a subjective and nebulous professional/cultural sense of judgment, his evaluation is systematic and scientifically based. It involves judgment, too, but because it’s based in a scientific process, we can trace how he applied that judgment to reach his conclusions.”
But this blog post is about marketing research, not journalism, although as I’ve argued before the two fields have a lot in common.
When I read Bradshaw yesterday morning, I could hardly help re-writing his observations, since he could easily have been talking about the traditional approach to qualitative analysis. Here’s my re-write:
Qualitative analysts get access to information directly from consumers, then evaluate, filter, and order it through their “judgment,” “sense,” or “savvy.” This is how qualitative analysts say to their clients (and to themselves), “This is why you can trust what we say we know – because we found it out through this process.”
Journalistic intuition suffered a severe blow Tuesday, though I doubt it will prove fatal. The data-free intuition of focus group moderators is getting hammered by Silver-esque data-driven analysis, but it hasn’t succumbed yet, either.
Still, I have to wonder if the writing is on the wall. I’ll leave the last word to Bradshaw:
Journalists who professed to be political experts were shown to be well connected, well-informed perhaps, but – on the thing that ultimately decided the result: how people were planning to vote – not well educated. They were left reporting opinions, while Nate Silver and others reported research.
October 26th, 2012
By Walt Dickie, Senior Vice President
If you’ve ever read anything about SETI, the search for extraterrestrial intelligence, you’ve probably heard of Drake’s Equation—well, you might have heard about it on Star Trek or the Big Bang Theory instead! Formulated by the astronomer Frank Drake, it estimates the odds of intelligent life in our galaxy and is one of the foundations for the whole enterprise. SETI came into being when Drake, whose Project Ozma was the first systematic search for alien radio signals, addressed a meeting at the Green Bank National Radio Astronomy Observatory in 1960 and offered an estimate of the odds that the project could succeed. The Drake Equation is justly famous as one of the first attempts to put the possible existence of little green men into mathematical perspective.
I was in high school when I first heard about SETI and the Drake Equation. For no good reason, I’ve often found myself musing about it, which is why it recently popped into my mind again when one of my partners was getting ready to moderate a discussion at an MR conference.
The conference, like most MR conferences these days, was heavy on presentations about New Methods and The Future of Marketing Research. Needless to say, not a few of the conference presenters agreed with almost every blogger in the industry that Major Change is Just Around the Corner and that the Future For Every Established MR Company Is Bleak.
My partner’s role at the conference was to moderate a discussion between the audience and one of the keynote speakers, and he was preparing by reviewing the speaker’s presentation, a copy of which he’d sent me. We were exchanging emails about the issues in the talk and the potential questions they raised when it occurred to me that MR needed its own version of the Drake Equation.
Just as the early SETI community needed some estimate of the likelihood of contacting alien life, the current MR community needs an estimate of the likelihood of being superseded by new research technology. It’s almost all we talk about these days, and, being mathematically inclined folks, we deserve a calculation of our odds.
So here, with apologies to Frank Drake, I propose the MR version of the Drake Equation as a contribution to SSRT, the Search for Superior Research Technology.
The Drake Equation
N = R* × fp × ne × fℓ × fi × fc × L
SETI: N = the number of civilizations in our galaxy with which communication might be possible
SSRT: N = The current number of competitors capable of putting your company out of business
Each SETI factor maps onto an SSRT counterpart (my proposed values for each factor are discussed in the comments below):

R* – SETI: the average rate of star formation per year in our galaxy. SSRT: the average annual rate at which significant new approaches for understanding some aspect of human behavior or thought appear.

fp – SETI: the fraction of those stars that have planets. SSRT: the fraction of those approaches that are applicable to the design, delivery, or communication of consumer products or services.

ne – SETI: the average number of planets that can potentially support life per star that has planets. SSRT: the fraction of the above that depend on data/inputs that can be collected using conceivable, deliverable, and socially and ethically acceptable technology.

fℓ – SETI: the fraction of the above that actually go on to develop life at some point. SSRT: the fraction of the above that are commercialized at some point.

fi – SETI: the fraction of the above that actually go on to develop intelligent life. SSRT: the fraction of the above that are faster and/or less expensive than your company’s offerings.

fc – SETI: the fraction of civilizations that develop a technology that releases detectable signs of their existence into space. SSRT: the fraction of the above that will produce results that are more actionable, effective, or predictive than your company’s offerings.

L – SETI: the length of time for which such civilizations release detectable signals into space. SSRT: the length of time that those approaches will be seen as valid, interesting, or relevant as the basis for commercial products/services.
Some Comments on my Estimates:
R* I think my estimate of truly unique, significant new approaches coming along every two years may be generous. Not that new approaches or technologies don’t appear more often than that, but most of these are minor wrinkles on existing approaches – better ways to do something that’s already being done. If you’re a technological optimist you might go for more frequent discoveries. I think that one a year would be an optimistic estimate, but feel free to enter your own.
fp Almost anything – maybe not quite anything – that’s applicable to humans is applicable in some non-trivial way to consumer products and services. Optimistic estimate: .99
ne On the other hand, some of the new things that come along require approaches that either aren’t technically feasible, at least at scale, within any foreseeable future or would never pass a social/ethical or possibly legal challenge. I’ll allow that almost any technical obstacle can be overcome, but I’m not so sure about the social issues. I’d hesitate to make this a certainty under even the most optimistic scenario. Optimistic estimate: .9
fℓ With the rise of crowd sourcing augmenting venture capital and other ways to finance new business ventures, and entrepreneurial enthusiasm apparently being boundless, pretty much everything that can be commercialized will be, at some point. Optimistic estimate: 1
fi Not everything is quick and not everything can be made cheap. I scored this one a toss-up. If you believe that advances in technology will eventually bring down any conceivable cost, your optimistic estimate would be 1.
fc On the other hand, a lot of current technologies have already been pretty much optimized and aren’t advancing anymore, so something really new has a fair chance of bringing new insights to the party. That’s 2:1 in favor of the new stuff in my book. But it’s possible to reason that any legitimate theoretical advance will inevitably produce some significant new insight. Optimistic estimate: 1
L There is no useful data on the rate at which new basic approaches appear in the sciences or technology. (This question might be phrased in terms of the frequency of paradigm shifts, about which there is no consensus.) I went with my gut feeling that after about 25 years paradigms begin to be seen as played out. I think that an optimistic estimate might be 2-4 times longer than this.
Using my estimates, there are probably 2-3 technologies capable of putting your company out of business by undercutting the approaches your business is based on; using what I think are the most optimistic (pessimistic?) estimates for every variable in the SSRT Equation, there are somewhere between about 45 and 90.
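The “45 to 90” range is just the Drake-style product of the optimistic values listed above, with L run from 50 to 100 years (2–4 times the 25-year baseline). A minimal sketch of that arithmetic follows; the function and parameter names are mine, not from the original notes:

```python
# Drake-style SSRT product: N = R* · fp · ne · fℓ · fi · fc · L
# The values below are the "optimistic" estimates from the notes above;
# the function and its parameter names are illustrative assumptions.

def ssrt(rate, f_products, f_feasible, f_commercial, f_cheap, f_insight, lifetime):
    """Expected number of technologies alive and threatening at any one time."""
    return (rate * f_products * f_feasible * f_commercial
            * f_cheap * f_insight * lifetime)

# Optimistic case: one new approach per year, paradigm lifetime 50-100 years.
low = ssrt(1.0, 0.99, 0.9, 1.0, 1.0, 1.0, 50)    # about 44.6
high = ssrt(1.0, 0.99, 0.9, 1.0, 1.0, 1.0, 100)  # about 89.1

print(low, high)
```

Since every factor except L and R* is at or near 1 in the optimistic case, the product is dominated by those two terms, which is why the range is so wide.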
By the way, although Drake’s original estimates yield an estimate of 10 advanced civilizations in the galaxy, the consensus estimates developed at the first SETI conference yielded something between 1,000 and 100,000,000. I wonder if a reasonable argument can be made using both the SETI and SSRT versions of the equation together that not only are there plenty of companies capable of putting yours out of business, some of them are or will be run by aliens.
September 6th, 2012
By Bob Relihan, Senior Vice President
The internet has been buzzing the past few days with the story (rumor?) that Bruce Willis intends to sue Apple for the right to leave his iTunes library to his daughters. I hesitate to provide a link; there are so many. The story has even popped up on Forbes. Clearly, the story resonates with people, even beyond the image of John McClane vanquishing an internet tyrant. We all have a love/hate relationship with Apple, don’t we?
The story crystallizes a trend that has been growing over the past couple of decades. Willis, or his mythical stand-in, feels he owns all of those songs he has on iTunes. But, of course, he has really just bought the right to listen to them. This is a parable of the death of the culture of ownership.
Wealth, status, and position have been defined by one’s possessions, by what one owns. When this country was founded, being a citizen meant owning property. We haven’t all moved to communes yet, but…
- iTunes is only one instance of the shift away from ownership of media. I once had a huge collection of VHS tapes, and my library of books covered an entire wall of my living room. Now I get all the movies I want through Netflix, and what I read is in the cloud somewhere, accessed on my iPad.
- Think of the proliferation of sites that will rent you anything from a car to a designer handbag. To be sure, we have had car rental agencies for a long time. But, Hertz and Avis have represented themselves as a convenience for the traveler or as the answer to an emergency. Zipcar positions itself explicitly as an alternative to ownership.
- Even the development of community gardens and tool sharing programs suggests that a livable community can eliminate some of the elements of ownership.
- It is even possible to see the trend toward eating out as one that allows a household to keep a small pantry and own fewer utensils. Maintaining a larder is no longer a requirement for having a home.
- And, Millennials apparently are abandoning car and home ownership as much to avoid the hassles of ownership as to cope with the weak economy.
We are moving to a culture where experiences and sharing define who we are more than what we own or have. In effect, we are developing a culture that prizes the sharing and collaboration my mother taught me over the paternalism and distance I learned from my father.
And what does all of this have to do with research? Much of what we do is predicated on ownership. In almost every case we recruit individuals who “purchase and use” a product. The acts of purchasing and owning are taken as signs of commitment to a product. They identify an individual as serious about a product or category, someone to listen to.
What do we do if society places less value on ownership? What do we do when any number of individuals friend a product on Facebook or follow it on Twitter without ever actually buying it? I may never own a Porsche or an Omega, but I love finding out the latest news about these two brands; I know as much about them as any owner. So, the next time I want to talk to brand champions, do I need owners? I don’t think so.
By Walt Dickie, Executive Vice President
One of the big events of my formative years happened around 1967 at MIT when the guys in the Earth Science Department announced that they could “predict” what the weather was like in Cambridge an hour earlier.
What actually happened was that a meteorological model had been developed that forecast the weather pretty accurately 24 hours in advance. The problem was that the model took a little over a day to execute on the mainframe, so by the time it issued a forecast it was “predicting” the weather we had just experienced. It was a geeky story, and everyone I knew thought it was pretty funny. A computer model had been developed that was as good at forecasting the weather as looking out the window.
But we all knew that it was a really big deal. With improvements to the algorithm and some advances in the hardware, it would only be a matter of time before the model would run in an hour, then a minute, and it would soon be capable of forecasting the weather not 24 hours in advance but 48 hours, or a week, a month … who could guess?
I’ve watched the weather my whole life because I’ve always been addicted to outdoor sports – I ride a bike almost daily outside of the three- or four-month period when the Chicago winter drives all but the insane indoors. I’ve been a skier since childhood, as has my wife, and we’ve passed that mania on to our kids, which gives us a reason to follow the winter weather. And I was bitten by sailing while growing up on a New England lake, a passion that now takes me out on Lake Michigan and, again, has me poring over the weather sites.
Historically, there have only been a few ways to forecast future weather. Until very recently forecasters struggled to consistently beat the algorithm my mom used when I was a kid: “Tomorrow will be pretty much like today.” If you think about your local weather, you’ll probably see what we see here in Chicago – the weather mostly doesn’t change much from one day to the next, until it flips a switch and changes a lot. Fronts move through every few days – but in between, tomorrow is much like today. You might be right with that algorithm as often as 3 days out of 4, or even 4 out of 5. It was a struggle to develop a meteorological system better than that.
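Mom’s rule is what forecasters call a persistence baseline: predict that tomorrow equals today, and you are wrong only on the days a front moves through. A toy sketch (the weather sequence here is invented for illustration, not real data) shows how that produces roughly 3-out-of-4 accuracy:

```python
# Persistence baseline: forecast tomorrow's weather as a copy of today's.
# The sequence below is an invented example; the weather holds for a few
# days at a time and then flips when a front passes, as described above.

days = ["sun", "sun", "sun", "sun", "rain", "rain", "rain", "sun", "sun"]

# The forecast for each day is simply the previous day's weather.
hits = sum(1 for today, tomorrow in zip(days, days[1:]) if today == tomorrow)
accuracy = hits / (len(days) - 1)

print(f"persistence is right {hits} of {len(days) - 1} days ({accuracy:.0%})")
```

The baseline only fails on the two front-passage days in the sequence, which is exactly why it was so hard for early meteorological models to beat.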
As recently as the early 20th century, “forecasters would chart the current set of observations, then look through a library of past maps to find the one that most resembled the new chart. Once you had found a reasonably similar map, you looked at how the past situation had evolved and based your forecast on that.” When the technology appeared giving forecasters a reasonably complete picture of today’s weather “upstream” from their location, they were able to adopt a variation on this technique and base tomorrow’s Chicago forecast on what was happening on the Great Plains today, rather than relying on an old map.
And then came the modelers’ breakthrough – the algorithm that forecast the weather an hour ago – and soon it was possible to base forecasts on scientific principles and mathematical calculation.
So, vastly simplified, the evolution of weather forecasting was this: predict that tomorrow will be like today, predict that tomorrow will be like it is today somewhere else, and, finally, calculate tomorrow’s weather by mathematically extrapolating the underlying physics of today into the future.
I’ve been thinking about this progression recently because C+R, like most MR firms these days, is spending an increasing amount of time trying to predict the future. The industry is changing; the economy is changing; maybe the entire global financial system is changing; technology is certainly changing. How do we navigate? How do we make business decisions for the future without some way of forecasting the future?
Everyone here is thinking about these issues, but there are three of us who are particularly involved because of our job responsibilities. And we’ve discovered that each of us has a personal method for forecasting.
Partner Number One is heavily, almost exclusively involved in sales, and has constant contact with clients and potential clients who are trying to articulate their research needs. Partner Number Two monitors the new products and services that our competitors and the industry as a whole are introducing, paying particular attention to successful leading-edge competitors. And Partner Number Three monitors emerging technology and social trends and tries to infer their likely impacts.
And guess what? Partner Number One says that, as near as can be told, tomorrow is going to be a lot like today. Although there is a lot of discussion about big changes online, at conferences, in speeches, and in trade publications, the projects that clients need today and expect to need in the immediate future are much the same as they’ve been in the recent past. Timelines are shorter and budgets may be tighter, but tomorrow’s weather looks like today’s.
Partner Number Two sees a lot of new product and service activity going on, and notices what seem to be some really amazing storms and lightning bolts as new firms with new offerings post double-digit growth rates year upon year while others seem to explode only to fizzle. Some amazing things are announced and then never heard from again. It seems like we can look around us on the map, but that it’s really hard to know which direction is “upstream” from where we’re located. It’s hard to tell if the weather being experienced elsewhere on the industry map will travel toward us or away from us.
Partner Number Three sometimes seems to detect solid trends. There really seem to be some clear trends in technology – both the technology that our client businesses are adopting and the technology that consumers are using. Some trends in consumer communication technology seem especially clear, and if data collection will play any role in our future then we can base that part of our forecast model on them. But other areas, particularly the “information ecology” of businesses, seem roiled up and hard to read. We have some pieces of a prediction model, but our algorithm for forecasting still needs a lot of work. It’s not clear that we’re anywhere near the point I witnessed back in college when the model first beat my mom’s forecasting approach.
I find myself torn between algorithms. Like the weather, many businesses have had one day follow another with little material change for long periods – until change overtakes them like a summer squall. Been to a bookstore lately? Just because an innovation lit up the landscape somewhere doesn’t necessarily mean that the same thing would happen elsewhere, or that the market would support two, three, or a dozen similar offerings. Maybe a competitor’s success is due to local conditions – like updrafts that spawn tornadoes on the Plains but almost never come east to Chicago. And although the trends seem to point toward weather systems coming in soon, the forecast model isn’t any better at this point than looking out the window and expecting more of the same.
So we try a little of each forecaster’s method, experimenting with the trends and borrowing from competitors’ successes, while finding today’s weather still largely unchanged from yesterday’s.