Re-Imagining Marketing Research Survey Design
I'm not a news junkie, but I've always read newspapers. Having grown up and gone to college in New England, I've been reading the Times since I was a kid, and I can't remember a time when www.nytimes.com wasn't the home page on my computer.
My opinion of the Grey Lady has varied over the years. I've always admired how the Times tried to be the newspaper of record, even when it fell short of its own standards. And although I've thought they made some remarkably dumb moves over the years (see "Paywall, failure of") I've increasingly come to admire the Times as an online innovator.
Although it's commonplace today for online companies to have an API, the Times was the first major newspaper I was aware of to do so. It's also the only newspaper I know of that has an experimental development arm, beta620. And although it's almost impossible to imagine a news site without graphics, the Times has been especially thoughtful about the graphics they build and how they relate to the principles of journalism.
But the Times has also been working on data collection projects - I first noticed these around the last Presidential election cycle, but there may be earlier examples. I remember sending friends of mine links to an online "mood tracker" that aggregated answers to the question, "What are you concerned about right now?" into a timeline and used type size to indicate the number of people sharing a common concern. It was simple, informative, constantly shifting, and sometimes surprising.
More recently, the Times took a step further and created what looks to my eyes - practiced as they are at looking at surveys - like a truly re-imagined survey. It appeared in a year-end piece about the future of computing, and it asked readers "to make predictions and collaboratively edit (a) timeline." Here's a small screen shot of what the display looks like:
If you hover over any of the entries, it will open up and show a longer description of the topic. Here's what you see under "Routine Voice Interaction":
This is a survey question - something on the order of, "In what year do you expect routine voice interaction to be available on all computing devices?" But, besides displaying the question, the Times survey shows who suggested it (they also link to information about the proposer), the current consensus answer, the number of respondents who have previously disagreed with the then-current answer, and an invitation to vote on moving the date forward or backward. Imagine how grueling this would be to set up in a typical online survey. The Times version has 53 separate events - I can hardly imagine a way to make responding to a survey grid that size bearable.
A lot of things are going on here; one being better graphic design than most survey tools can support, of course. But the more I think about this, the more it strikes me that the big change is that the Times has shifted the concept of this "survey" from a model of "we ask questions, and you answer them," to one that puts more power in the hands (and mouse) of the "respondent," who is invited to explore this space of topics, to think about topics in a context of their own choice rather than in the sequence that the questionnaire author chose, and to ignore topics in which they have no interest and/or knowledge. The "big idea" here is a significant shift of control from the researcher to the respondent.
I encourage you to go to the site and try it - especially if you're a tech person who knows and cares about computing. You'll find that you don't just respond to one topic at a time - you can, for instance, look across the items that are now all clustered around the same year and think about whether, in your mind, they should all belong together. You may advance one of the items a few years, which may make you re-think one of your earlier answers, so you go back and adjust that one to bring it into line with how you're thinking now. And you'll almost certainly give some items a pass.
Let me be clear here: I know the analytic reasons for questionnaire order and error messages that say, "You must answer all of the questions on this page before continuing." And I'm not suggesting that this exercise represents the future of all surveys or that it's even the best possible expression of a more user-driven alternative.
But ask yourself whether the ability to conduct certain kinds of statistical analysis - based on certain kinds of sampling models, with control over order effects and incomplete data records - would always and in every case outweigh the value of considered responses from involved respondents. The MR industry's answer to data quality issues almost always involves a critique of survey design, so why do we continue to produce surveys that perpetuate the designs we created for phone interviews in 1992?