The Perils of Online Surveys

Paid Respondents: Do we need to change the way we sample?

When online surveys were first introduced to the marketing research community, there was justifiably great excitement and optimism. Online surveys promised numerous and substantive advantages over both phone and mall-intercept interviews: shorter field times, lower costs per interview, virtually complete elimination of interviewer bias, infinitely greater interview process control (e.g., randomizations, customizations, complex yet flawlessly executed skip patterns, logic checks, etc.), and elimination of manual data entry and, therefore, of keypunch errors. Last but not least, online surveys offered a novelty that was attractive to the general public we wanted to interview.

Oh, how times have changed.

I recently googled “paid surveys” and got 2.8 million hits. Yes, 2.8 million. I went to just one of the many websites that promise to list all the best paid panels. It listed six pages of U.S. panels, a U.K. page, an Australian page, a New Zealand page, an Irish page and many more.

Another site recommended that I fill out a Form 1099, which the IRS requires for anyone earning more than $600 a year from a single source. Anyone else see a red flag here? I signed up at two well-known sites just to see how this worked. I was inundated daily with offers to complete surveys, with typical pay rates of $2 to $3 for a 15- to 20-minute survey. Daily. More than daily. If I wanted to, I could easily have spent 10 hours a day filling out surveys and been paid all the while.

Money as a Motivator

The anonymity that eliminates interviewer bias also creates a temptation to cheat. Coupled with the very real cash incentives panels are being forced to dole out, this gives us a situation begging for dishonest activity.

Panel companies realize this. They work hard to communicate to panel members that they will not be paid if their responses are not valid. Respondents are explicitly warned about speeding (finishing far too quickly to have actually read the questions). Payment is delayed until interviews have been examined and deemed valid, so, presumably, flatlining (giving the same answer to every item) is also commonly rejected. However, if your motivation is to make as much money as you can as quickly as you can, don’t you think you would fairly quickly learn that speeding and flatlining don’t get you paid? Doesn’t behavior modification theory give us the same answer that common sense does? We may be teaching respondents how to game the system.
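To see how little such screens actually catch, here is a minimal sketch of the kind of automated checks a panel might run; the function names and thresholds are my own illustrative assumptions, not any vendor’s actual rules:

```python
import random

def is_speeder(duration_min, median_min, floor=0.4):
    """Flag a complete that took well under the median interview length."""
    return duration_min < floor * median_min

def is_flatliner(grid_answers, max_share=0.9):
    """Flag a grid where (nearly) every item received the same response."""
    top = max(grid_answers.count(a) for a in set(grid_answers))
    return top / len(grid_answers) >= max_share

# Answering at random sails straight through the flatline check:
random_grid = [random.randint(1, 5) for _ in range(20)]
print(is_flatliner(random_grid))  # almost always False
```

Notice that both checks test only the form of the answers, not their truthfulness.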

If I were of the mildly criminal persuasion (and aren’t we all?), I could work three or four (or 10) surveys simultaneously so that the interview length on any one survey would not appear too short. As the sketch above suggests, I could answer all questions at random so that flatlining would not be detected. Most surveys are still written as if they were being conducted by phone or mall-intercept, so I could very often determine what respondent profile was being sought and give untruthful answers to qualify for the survey. Invalidating my responses could prove very difficult.

If I didn’t give considered responses, I could fill out a 20-minute survey in less than five minutes. By doing simultaneous surveys, I could complete perhaps 12 or more surveys an hour; at the typical $2 to $3 apiece, that is $24 to $36 an hour. If I’m unemployed, how does $25/hour sitting at home watching TV sound?

When Data Suffer

I have begun thinking about this because, entirely anecdotally, it seems to me that data quality may be deteriorating. A client of mine recently conducted a B-to-B survey in the high-tech sector, an admittedly difficult population to poll. He contracted with a highly respected high-tech sector panel firm but added a couple of questions to the survey to ensure data quality.

One question was quite simple: “For quality control purposes, please check option 3 below.” (The choices were option 1, option 2, option 3 and option 4.) Five of 49 respondents failed this question. The second question was a bit more subtle: “Which of the following brands do you have installed?” (The options were three fictitious brand names and “none of the above.”) Twenty-seven of the remaining 44 failed this question. That left 17 of the original 49, so a panel company that was supposedly able to deliver a 100 percent qualified B-to-B sample actually delivered a 35 percent qualified sample (or less). Without the trick questions, what would that data quality have been?
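For the record, here is the attrition arithmetic spelled out; the variable names are mine, but the counts are from the survey described above:

```python
# Trap-question attrition from the B-to-B survey described above.
respondents = 49
failed_option_3_check = 5     # missed "please check option 3"
failed_fake_brand_check = 27  # claimed to have a fictitious brand installed

qualified = respondents - failed_option_3_check - failed_fake_brand_check
print(qualified)                              # 17
print(round(100 * qualified / respondents))   # 35 (percent)
```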

For a different client, I tossed in the quality-control question (the first one above) just for fun. The category was a mainstream consumer packaged good, and the population was anyone in the United States who was currently breathing. The panel company was one of the most highly regarded and well-known in the industry. Thirteen percent of respondents failed the “check option 3” question. That means at least 13 percent of the respondents were paying no attention at all to the survey. Thirteen percent may not seem high to you, but do you really want to add 13 percent more noise to a commercial data set? Don’t we have enough noise already?

Other Options

Given human nature, it seems inevitable to me that this situation exists. It seems inevitable to me that it will get worse. So it seems inevitable to me that we will need to find other ways to study our customers.

The Internet is becoming central to virtually all aspects of our lives. My wife spent much of her time while vacationing in Europe this summer posting pictures on her Facebook page and trading comments with our friends back home (and she’s no techie). I wrote this column while connected to the Internet and staring at the crystal blue sea of the Mediterranean. (We live in California.) This shift in human behavior offers new ways to study our customers.

Many companies offer observational data that track Internet activities: where you went, how long you stayed, what ads you were exposed to, what buttons you clicked, what products you bought, etc. No worries about paid respondents gaming this system, at least not yet.

Surveys, mostly brief ones, are popping up when you visit specific sites. If I want to survey hockey players, it may soon be more efficient to post surveys on sites that hockey players visit than to buy a panel. Mall-intercepts resurrected in cyber form.

Social media is a cultural force so powerful it is quickly pervading virtually all aspects of our lives (see wife, vacation and Facebook above). As the social media theme of this issue indicates, it is quickly becoming a marketing tool to be reckoned with. Scrubbing content from social media sites is just the first and most obvious way for researchers to leverage this fecund resource.

Whatever the new world order in survey research becomes, the goal has always been and will always be to find out what customers truly want and why they want it. The world is changing and, if we want to remain relevant, we have to change with it. Maybe we need to change the way we sample.