9 Questions Every Journalist Should Be Asking

Here are the nine questions every journalist should ask before reporting the results of any poll.

Who conducted the poll?

Was it a reputable, independent pollster?  If not, regard its findings with caution.  If you are not sure, one test is its willingness to answer the questions below.  A reputable pollster should provide you with the information you need to evaluate the survey.

Who paid for the poll and why was it done?

If it was conducted for a respected media outlet, or for independent researchers, there is a good chance it was conducted impartially.  If it was conducted for a partisan client, such as a company, interest group or political party, it might still be a good survey (although readers/listeners/viewers should be told who the client was).  The validity of the poll depends on whether it was conducted by an organization that used a scientific approach to sampling and questionnaire design, whether it asked impartial questions, and whether full information about the questions asked and the results obtained is provided.

If such information is provided, then the quality of the survey stands or falls on its intrinsic merits.  If such information is not provided, then the poll should be treated with caution.  In either event, watch out for loaded questions and selective findings designed to bolster the view of the client rather than report public opinion fully and objectively.

What was the sample size for the survey?

The more people, the better, although a small-sample scientific survey is ALWAYS better than a large-sample self-selecting survey.  Note, however, that the total sample size is not always the only relevant number.  For example, voting-intention surveys often show figures excluding "don't knows", respondents considered unlikely to vote, and those who refuse to disclose their preference.  While excluding these groups ensures that the poll reports the opinion of the most relevant group (likely voters), the reported voting-intention sample size may be significantly lower than the total sample, and the risk of sampling error therefore greater.
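To illustrate how a smaller reported sample widens the margin of error, here is a minimal sketch in Python using the standard margin-of-error formula for a simple random sample at the 95% confidence level; the figures of 1,000 total respondents and 600 remaining likely voters are hypothetical.

    import math

    def margin_of_error(n, p=0.5, z=1.96):
        """Approximate 95% margin of error for a simple random sample of
        size n, assuming a proportion p (worst case is p = 0.5)."""
        return z * math.sqrt(p * (1 - p) / n)

    # Full hypothetical sample of 1,000 respondents
    print(round(margin_of_error(1000) * 100, 1))  # about 3.1 percentage points

    # Only 600 "likely voters" remain after exclusions
    print(round(margin_of_error(600) * 100, 1))   # about 4.0 percentage points

In this hypothetical case, dropping from 1,000 respondents to 600 likely voters widens the margin of error from roughly three to roughly four percentage points.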

How were those respondents chosen?

Is it clear who was included in the sample and who was left out?  If the poll claims to represent the public as a whole (or a significant group of the public), has the polling company employed one of the methods outlined in the questions above?  If the poll was self-selecting – such as readers of a newspaper or magazine or television viewers writing in, or certain web surveys – then it should NEVER be presented as a representative survey.  If the poll was conducted in certain locations but not others, for example in cities but not rural areas, then this information should be made clear in any report.

When was the poll done?

Events have a dramatic impact on poll results.  The interpretation of a poll should depend on when it was conducted relative to key events.  Even the freshest poll results can be overtaken by events.  Poll results that are several months old may be perfectly valid, for example, if they concern underlying cultural attitudes or behaviors rather than topical events, but the date when the poll was conducted (as distinct from published) should always be disclosed.  The date of the fieldwork is particularly important for pre-election polls where voting intention can change right up to the moment the voter records their vote.

How were the interviews conducted?

There are four main methods: in person, by telephone, online, or by mail.  Each method has its strengths and weaknesses.  Telephone surveys do not reach those who do not have a telephone.  Online surveys reach only those people with internet access.  All methods depend on the availability and voluntary cooperation of the respondents approached, and response rates can vary widely.  In all cases, reputable companies have developed statistical techniques to address these issues and convert their raw data into representative results.
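To make that last point concrete, here is a minimal sketch of cell weighting (post-stratification), one common way of adjusting raw survey data so that it matches known population proportions; the age groups, population shares and sample counts are invented for illustration and not drawn from any particular poll.

    # Hypothetical population shares and raw respondent counts by age group
    population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
    sample_count     = {"18-34": 150,  "35-54": 350,  "55+": 500}

    total = sum(sample_count.values())  # 1,000 respondents in total

    # Each respondent in a group receives the same weight:
    # (population share of the group) / (share of the group in the sample)
    weights = {
        group: population_share[group] / (sample_count[group] / total)
        for group in population_share
    }

    print(weights)  # 18-34 respondents are up-weighted (weight 2.0),
                    # 55+ respondents are down-weighted (weight 0.7)

More sophisticated methods, such as raking across several variables at once, follow the same basic idea of up-weighting under-represented groups and down-weighting over-represented ones.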

What were respondents asked?

Try to get a copy of the full questionnaire, not just the published questions.  A reputable organization should publish the questionnaire on its website, or provide it on request.  Decide if the questions were balanced and be cautious about the results if the interview was structured in a way which seemed to lead the respondent towards a particular conclusion.

Are the results in line with other polls?

If possible, check other polls to see whether the results are similar or very different.  Surveys covering the same topic should come to similar conclusions.  If the answers are very different, the reasons may become apparent when the questionnaire or the sampling method is examined.

Has the pollster met the minimum disclosure requirements and abided by the code of best practices and ethical standards set out by the public opinion polling industry?

Reputable pollsters have pledged to maintain high standards of scientific competence and integrity in conducting, analyzing, and reporting their work; in their relations with survey respondents (such as observing the Respondent's Bill of Rights); with their clients; with those who eventually use the research for decision-making purposes; and with the general public. They have also pledged to reject all tasks or assignments that would require activities inconsistent with the principles of this code.

For more information on polling best practices and the code for professional ethics, see the AAPOR website.