Frequently Asked Questions

What is an opinion poll?

An opinion poll is a scientific, representative survey designed to measure the views of a specific group – for example, likely voters (for most political polls), parents, or trade union members.

What does SoonerPoll do with my individual answers or my answers to demographic questions?

A respondent’s individual responses to any and all questions remain completely confidential and are not shared with anyone, including the sponsor of the survey. ONLY the overall results of a public opinion poll are reported to the sponsor or the media, unless the respondent gives approval for something more, such as a quote for a media article or story. Demographic questions, such as education and income, are used ONLY to classify responses for more in-depth analysis and are kept in complete confidence.

Is SoonerPoll a telemarketer?

No. Telemarketers use the telephone for sales purposes. SoonerPoll is a public opinion pollster; we are not selling anything when we call to conduct polling or market research in Oklahoma.

What are your rights if you are interviewed by SoonerPoll?

SoonerPoll respects your rights as a poll participant and abides by the Respondent Bill of Rights developed by the Council for Marketing and Opinion Research (CMOR). What are your rights as a respondent? Click here.

What about the “Do Not Call” List?

The “Do Not Call” list was developed to give the public the option of not being called by telemarketers – those who want to sell them something on the phone. As a result, do-not-call registries, both Oklahoma and federal, have reduced the number of intrusive requests for people’s time by telephone solicitors. But the U.S. government has recognized the importance of survey and opinion research and has ruled that survey research is not telemarketing and is not subject to the Do Not Call registry.

Survey research and political opinion polling fall outside these categories because both are allowed and encouraged by government agencies and legislators as important means of understanding opinions, preferences, needs and wants related to products, services and companies, and of informing elected officials on key public policy issues.

Is SoonerPoll a political campaign pollster?

No. While we may ask political or campaign questions, our research is primarily for the media. While SoonerPoll does conduct polling for associations and interest groups, it is best classified as a media pollster engaging in public opinion research. This is why we call ourselves Oklahoma’s public opinion pollster.

Why are almost all poll results based on ‘likely’ voters?

Likely voters are, by definition, those most likely to vote in an upcoming election. Here is an in-depth discussion of the importance of likely voters.

What makes a survey “scientific”?

The two main characteristics of scientific surveys are:

  • respondents are chosen by the research organization according to explicit criteria to ensure representativeness, rather than being self-selected; and
  • questions are worded in a balanced way that does not lead the respondent towards a particular answer.

For example, if the population being sampled contains 52% who are women and 30% who are over 55, a scientific opinion poll will represent those groups in those same proportions.

Why is survey research important?

Survey research is a critical tool that American businesses, the government and others use to help shape the products and services people want and need and to inform public policy. By cooperating with legitimate survey researchers, members of the public make their opinions known to the people who have the power to make changes. Research participants influence the type of products developed, the quality of customer service they receive and, in some cases, public and government policy. Through public involvement, survey research has made Americans’ lives easier and more enjoyable.

What do polling companies have to do to achieve representative samples?

While well-conducted random and quota samples provide a broad approximation of the public, there are all kinds of reasons why they might contain slightly too many of some groups and slightly too few of others. What normally happens is that polling companies ask respondents not only about their views but also about themselves. This information is then used to compare the sample with, for example, census statistics. The raw numbers from the poll may then be adjusted slightly, up or down, to match the profile of the population being surveyed; this adjustment is known as weighting.
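
As a rough illustration of that adjustment, here is a minimal sketch of single-dimension weighting in Python. The figures are hypothetical, not real census or poll numbers: suppose the census says 52% of the population is female, but the raw sample came back 58% female.

    # Hypothetical shares; a real poll would use census statistics here.
    population_share = {"female": 0.52, "male": 0.48}  # target profile
    sample_share = {"female": 0.58, "male": 0.42}      # observed in the raw poll

    # Each respondent is weighted by (population share) / (sample share)
    # for their group, so over-represented groups count slightly less.
    weights = {
        group: population_share[group] / sample_share[group]
        for group in population_share
    }
    print(weights)  # roughly {'female': 0.897, 'male': 1.143}

Real polls typically weight on several dimensions at once (age, region, education and so on), often using iterative methods such as raking, but the principle is the same: nudge the raw numbers up or down to match the known profile of the population.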

Are other kinds of surveys bound to be wrong?

No. Just as a stopped clock tells the right time twice a day, unscientific surveys will occasionally produce the right percentages. But they are far more likely to be badly wrong. The most common forms of unscientific surveys are phone-in polls conducted by television programs and self-selected surveys conducted over the internet. These suffer from two defects. First, their samples are self-selecting: such polls tend to attract people who feel passionately about the subject of the poll, rather than a representative sample. Second, such polls seldom collect the kind of extra information (such as gender and age) that would allow some judgment to be made about the nature of the sample.

But surely a phone-in or write-in poll in which, say, one million people take part is likely to be more accurate than an opinion poll sample of 1,000?

Not so. A biased sample is a biased sample, however large it is. One celebrated example of this was the US presidential election of 1936. A magazine, the Literary Digest, sent out 10 million postcards asking people how they would vote, received almost 2.3 million back, and reported that Alfred Landon was leading Franklin Roosevelt by 57–43 percent. But the Digest had sent its postcards primarily to individuals with telephones and automobiles, so its “sample” included few working-class people. A young pollster, George Gallup, employed a much smaller sample (though, at 50,000, it was much larger than those normally used today); because he ensured that it was representative, he correctly showed Roosevelt on course to win by a landslide.

How can you possibly tell what millions of people think by asking just 300 or 500 respondents?

In much the same way that a chef can judge a large vat of soup by tasting just one spoonful. Provided that the soup has been well stirred, so that the spoonful is properly “representative”, one spoonful is sufficient. Polls operate on the same principle: achieving a representative sample is broadly akin to stirring the soup. A non-scientific survey is like an unstirred vat of soup: a chef could drink a large amount from the top of the vat and still obtain a misleading view if some of the ingredients have sunk to the bottom. Just as the trick in checking soup is to stir well, rather than to drink lots, so the essence of a scientific poll is to secure a representative sample, rather than a vast one.
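
To make the soup analogy concrete, here is a short, self-contained Python simulation using made-up numbers: we “stir” a population of one million people, exactly 40% of whom hold some view, by sampling at random, and see how close spoonfuls of 500 come to the true figure.

    import random

    random.seed(1)  # fixed seed so the sketch is repeatable

    # Hypothetical population: 1,000,000 people, exactly 40% hold the view.
    population = [1] * 400_000 + [0] * 600_000

    # Ten well-stirred "spoonfuls" of 500 respondents each.
    for trial in range(1, 11):
        spoonful = random.sample(population, 500)
        estimate = sum(spoonful) / len(spoonful)
        print(f"trial {trial}: {estimate:.1%} (true value 40.0%)")

Each spoonful lands within a few percentage points of 40%, which is exactly the behavior the sampling-error discussion below describes.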

But isn’t there some risk of sampling error in a poll of 300 or 500 people?

Yes. Statistical theory allows us to estimate this. Imagine a country that divides exactly equally on some issue: 50% hold one view while the other 50% think the opposite. Statistical theory tells us that, in a random poll of 500 people, 19 times out of 20 the poll will be accurate to within about 4 percentage points. In other words, it will record at least 46%, and no more than 54%, for each view. But there is a one-in-20 chance that the poll will fall outside this range.
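
That “4 percentage points” figure comes from the standard formula for the margin of error of a proportion in a simple random sample: roughly z × √(p(1−p)/n), with z ≈ 1.96 for 95% confidence. Here is a minimal Python sketch, assuming simple random sampling:

    import math

    def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
        """95% margin of error for a sample proportion p with n respondents."""
        return z * math.sqrt(p * (1 - p) / n)

    # A 50/50 split is the worst case (largest margin of error).
    for n in (300, 500, 1000):
        print(f"n={n}: +/- {margin_of_error(0.5, n):.1%}")
    # n=300: +/- 5.7%
    # n=500: +/- 4.4%
    # n=1000: +/- 3.1%

Note that the exact figure for 500 respondents is closer to 4.4 points; poll write-ups often round it to 4.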

I have seen polls conducted by different, well-regarded, companies on the same issue produce very different results. How come?

There are a number of possible reasons, beyond ordinary sampling error:

  • The polls might have been conducted at different times, even if they are published at the same time.  If the views of many people are fluid, and liable to change in response to events, then it might be that both polls are broadly right, and that the public mood shifted between the earlier and the later survey.
  • The polls may have used different definitions of the group they are representing (e.g., different ages, regions, ethnic groups, etc.).
  • They might have been conducted using different methods.  Results can be subject to “mode effects”: that is, some people might, consciously or subconsciously, give different answers depending on whether they are asked questions in person by an interviewer, or impersonally in self-completion surveys sent by post or email/internet.  There is some evidence that anonymous self-completion surveys may secure greater candor on some sensitive issues than face-to-face or telephone surveys.
  • The polls might have asked different questions.  Wording matters, especially on subjects where many people do not have strong views.  It is always worth checking the exact wording when polls appear to differ.
  • There might be an “order effect”.  One poll might ask a particular question “cold”, at the beginning of a survey; another poll might ask the same question “warm”, after a series of other questions on the same topic.  Differences sometimes arise between the two sets of results; again, when many people do not have strong views, some may give different answers depending on whether they are asked the question out of the blue or after being invited to consider some aspects of the issue first.

Does the way the question is asked influence the answer?

There is a great deal of knowledge about how questions should be worded, based on what we know about how people process information, but much of it is really a matter of common sense.  It is important to look at the exact question that was asked and, if possible, to check the questions asked before it.  A question can contain concepts that lead the respondent in a certain direction, e.g., “There seem to be fewer policemen on the streets, and a lot of people around here are concerned about rising crime; do you think the police in this area are overstretched?”  A question can also contain more than one concept while only one is reported, e.g., “How well is the city council dealing with traffic congestion and the lack of public transport?” reported as the level of concern with public transport.  Questions like these will not provide clear or helpful answers about what people really think of the police or public transport.

The context in which questions are asked can obviously influence the way in which people respond.  If a question about concern with crime is asked after a series of questions about whether people have ever felt nervous on public transport or have a relative or friend who has been mugged, etc., it is likely that more people will say they are concerned than if the question had been asked before the others.

When using answers to questions like this, it is important to be aware that the questions were biased or ambiguous, and therefore the answers cannot be an accurate reflection of what the people answering them really believe.  This type of questioning is particularly popular with pressure groups, which use it to try to win media coverage for their point of view.  Responsible journalists and commentators should either decline to report such polls or draw attention to the misleading questions when reporting the results.