Pitfalls in Online Polling

Web surveying can be seductive. In most cases it is cheaper, faster, and easier than traditional methods of data collection, and clients love the sound of that. But buyer beware: it introduces a slew of issues and biases that many in the scientific research community are still trying to address. Here are a few of them:

  • Coverage Bias: If those without internet access cannot participate, any results from online polling will be skewed, period. As the chart above shows, 40% of American households do not have regular access to email or the internet. Some pollsters use “opt-in” panels, but these too exclude the non-internet population. (A simple arithmetic sketch of this skew appears after this list.)
  • Volunteer Bias: When respondents select themselves (i.e., opt in or volunteer) rather than being chosen as part of a random sample, it becomes impossible to project the research findings to the population of interest as a whole. Volunteers cannot be considered a representative sample, and they are not average consumers or voters.

In fact, at a recent Marketing Research Association (MRA) meeting, research papers were presented showing that:

  1. The average volunteer panelist belongs to eight other research companies’ panels; such repeat players are known as “professional respondents”;
  2. 30% of volunteer surveys are completed by less than 0.25% of the desired universe;
  3. These professional respondents took an average of 80 online surveys each in the last 90 days.
  • Non-Response Bias: In many opt-in panels, only a small fraction of those invited actually participate in a poll, and little effort is made to convert non-respondents. In fact, many pollsters’ panels are just databases of professional respondents and, over time, become overrun with nonparticipating panelists. These professional respondents eventually lose interest in taking surveys, yet they are not removed from the database on a regular basis. Low participation among invited (or randomly selected) panelists undermines the validity of the results.
  • Completion Bias: Many research companies do not offer real-time troubleshooting during the online fielding process, and many respondents do not complete the surveys they start. Online surveying also gives researchers more room for length and complexity than telephone surveying does, because on the phone the respondent never sees the questionnaire and the interviewer must keep things simple. Consequently, pollsters can “pile it on” with more questions and more complex answering schemes, which drives down the completion rate of online polls; after all, it is easier to quit an online poll than to hang up on a live interviewer.
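
To make the coverage problem from the first bullet concrete, here is a minimal back-of-the-envelope sketch in Python. Only the 40% no-access share comes from the chart referenced above; the support figures are invented assumptions used purely to show the arithmetic.

```python
# Hypothetical illustration of coverage bias. Only the 40% "no regular
# internet access" share comes from the chart above; the support figures
# are invented assumptions used purely to show the arithmetic.
offline_share = 0.40    # households an online poll can never reach
online_support = 0.55   # assumed support for some measure among the online population
offline_support = 0.45  # assumed support among the offline population

# What the full population actually thinks:
true_support = offline_share * offline_support + (1 - offline_share) * online_support

# What an online-only poll measures, no matter how large the sample gets:
online_only_estimate = online_support

print(f"True support:          {true_support:.1%}")                           # 51.0%
print(f"Online-only estimate:  {online_only_estimate:.1%}")                   # 55.0%
print(f"Coverage bias:         {online_only_estimate - true_support:+.1%}")   # +4.0%
```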

Finally, the data from online polls can be questionable as well. At least one prominent retest found that the same survey, fielded on the same opt-in panel, produced different results when re-run one week later: a sign of disturbing weaknesses in online panel methodologies and panel maintenance.

Some researchers have even suggested “propensity weights” to address online sampling issues, but weighting can only adjust for characteristics that are measured and weighted on; it cannot systematically remove the bias introduced by self-selection.
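
As a rough illustration of why weighting falls short, here is a minimal Python sketch of demographic, post-stratification-style weighting on a hypothetical opt-in panel. Every cell share and support number is an invented assumption; the point is only that weights can re-balance measured traits such as age, not the unmeasured differences between volunteers and everyone else.

```python
# Hypothetical sketch of demographic (post-stratification-style) weighting on an
# opt-in panel. Every number here is an invented assumption, not real panel data.
# Each age cell: its share of the opt-in sample, its share of the target
# population, and the answer observed among panelists in that cell.
cells = {
    "18-34": {"sample_share": 0.50, "pop_share": 0.30, "support": 0.60},
    "35-54": {"sample_share": 0.35, "pop_share": 0.35, "support": 0.50},
    "55+":   {"sample_share": 0.15, "pop_share": 0.35, "support": 0.40},
}

# The unweighted estimate simply mirrors the panel's skewed composition.
unweighted = sum(c["sample_share"] * c["support"] for c in cells.values())

# Weighting re-balances the cells so they match the population's demographics.
weighted = sum(c["pop_share"] * c["support"] for c in cells.values())

print(f"Unweighted opt-in estimate: {unweighted:.1%}")  # 53.5%
print(f"Demographically weighted:   {weighted:.1%}")    # 49.5%

# The weights fix the age mix, but only the age mix. If volunteers within each
# cell differ from non-volunteers in ways that were never measured (heavier
# internet use, survey-taking habits, stronger opinions), the weighted estimate
# is still biased, and no weight can recover people who were never sampled.
```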

In the end, validity and reliability are hard to come by when cheaper, faster, and easier becomes the path chosen by pollsters and market researchers alike.