Questions to ask about polls and surveys

This topic is archived.
Home » Discuss » Archives » General Discussion (01/01/06 through 01/22/2007)
 
TechBear_Seattle (1000+ posts) Tue Jan-24-06 11:57 AM
Original message
Questions to ask about polls and surveys
A very big pet peeve of mine, having been a math major in college, is how readily people accept the results of polls, surveys and other opinion research at face value. That problem -- and it is a problem, a major one -- has prompted me to write down a few questions you should ask yourself any time you look at a poll. I hope folks find them helpful.

1. What is the population being sampled?
The whole point of a poll is to model the opinions of a larger population. The first question you must ask is, "What population is being represented?" For example, is the poll supposed to reflect the views of all citizens, or just "likely voters"? The distribution of opinions will most likely vary between these groups, so knowing the population is important.

2. What was the methodology for conducting the poll?
When modeling the views of a population, researchers must decide how they will select the people who will represent the population. How these people are selected and questioned is the poll's methodology.

Pollsters can very easily get desired results by using flawed methodologies. A survey on the health effects of smoking might sample only young smokers who have not yet developed emphysema or lung cancer, for example. Or a candidate survey might sample only people who live in small, conservative communities while ignoring or underrepresenting people who live in more liberal urban areas. "Man on the street" polls will get very different results when the same questions are asked at a NASCAR event, an anti-war rally, "social hour" after services at a Baptist church, or outside Neiman Marcus.

The way the poll is conducted is also very important to know. A survey made by telephone automatically excludes the homeless, people who do not have a telephone, people who do not have a published telephone number, people who use only a cell phone rather than a land line, people who are at work when the pollster calls, and people who are too busy to talk. As a result, telephone surveys overrepresent the wealthy and middle classes and the elderly.
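To see how much a coverage gap like that can matter, here is a small simulation sketch. All of the numbers are invented for illustration; the point is only that when phone access is correlated with the opinion being measured, a landline-only poll will tend to miss the true figure.

```python
import random

random.seed(1)

# Hypothetical population of 10,000: support for a candidate is
# correlated with having a landline phone.
population = ([{"landline": True,  "supports": random.random() < 0.55}
               for _ in range(6_000)] +
              [{"landline": False, "supports": random.random() < 0.40}
               for _ in range(4_000)])

def support_rate(people):
    """Fraction of a group that supports the candidate."""
    return sum(p["supports"] for p in people) / len(people)

true_rate = support_rate(population)
landline_only = [p for p in population if p["landline"]]

# A poll that can only reach landlines will tend to overstate support,
# because phone access is correlated with the opinion being measured.
polled_rate = support_rate(random.sample(landline_only, 1_000))
```

No amount of extra sample size fixes this: the error comes from who can be reached at all, not from how many are asked.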

Any poll results which do not mention the methodology used should be taken with a very, very big grain of salt.

3. What is the sample size?
A poll's sample size is the number of people who responded to the poll. The more people included in the results, the more accurately the results will reflect the opinions of the population. The "sufficient" sample size -- the number of people needed to give reasonable confidence in the results (see question 4) -- varies with the methodology used to conduct the poll. A targeted, accurate methodology can model the opinions of likely voters in the United States with a sample of only about a thousand people. Other methodologies might require three or four thousand respondents to reach the same level of confidence. And some methodologies, such as internet polls, are so inaccurate that the sufficient sample size approaches the size of the entire population.
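For what it's worth, the standard simple-random-sample formula produces that "about a thousand" figure directly. This is only a sketch of the textbook calculation; real polls also adjust for design effects from their methodology.

```python
import math

def sample_size(moe, p=0.5, z=1.96):
    """Respondents needed for a given margin of error at 95% confidence,
    assuming a simple random sample and the worst case p = 0.5."""
    return math.ceil(z**2 * p * (1 - p) / moe**2)

print(sample_size(0.03))  # 1068 -- roughly the "about a thousand" above
print(sample_size(0.04))  # 601
```

Notice that the population size does not appear: once the population is much larger than the sample, a national poll and a statewide poll need about the same number of respondents.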

4. What is the margin of error / level of confidence?
First off, let me say: The margin of error does not represent "wiggle room" in the results of the poll. Treating it that way is common, but it misrepresents the poll's results and is not supported by any of the math behind statistics.

Every poll or survey has a margin of error. It is calculated from the size of the population being modeled, the sample size and the methodology. It is one way of expressing the poll's level of confidence, i.e., how accurately the answers reflect the views of the population. A reasonably accurate poll will have a margin of error of 3% or 4%; that means that if the poll were repeated with a different sample, the second results should be within 3% or 4% of the first. If no margin of error is reported, you should assume that the poll does not accurately reflect the population.
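Here is a sketch of how that margin is usually computed for a simple random sample; the 1.96 is the standard multiplier for 95% confidence, and p = 0.5 is the worst case.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case margin of error for a simple random sample of n people
    at 95% confidence (z = 1.96)."""
    return z * math.sqrt(p * (1 - p) / n)

print(round(100 * margin_of_error(1000), 1))  # 3.1 (percent)
```

Because the sample size sits under a square root, quadrupling the sample only halves the margin of error, which is a big part of why pollsters stop at roughly a thousand respondents.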

5. What were the actual questions used to obtain the information?
Even a poll using an accurate, unbiased methodology and a large sample size can easily be skewed towards a desired result by using questions designed to elicit certain responses. Even if they ask for essentially the same information, how a question is phrased can alter a person's answer. Take, for example:

"Should women be allowed to make their own medical decisions without government interference?"
"Should women be prohibited from ending the lives of their fetuses?"

The bias is further complicated by the fact that demagogues will turn around and report both questions as showing support for, or opposition to, abortion rights. If a poll does not include the exact questions asked, it should be treated with extreme suspicion.
Vinnie From Indy (1000+ posts) Tue Jan-24-06 12:15 PM
Response to Original message
1. Well done!
Edited on Tue Jan-24-06 12:17 PM by Vinnie From Indy
It appears to me that many of our polling organizations in America are suspect. We have seen BushCo flex its muscle in all areas of media to get favorable coverage, and to think that this does not include polling companies is the height of folly. BushCo's effort to "control the message" is not accidental. It uses and combines any and all tools available, from quid pro quo to blackmail to paying for propaganda. The level of success they have achieved at creating and filtering the vast majority of the news we receive in America is breathtaking and tragic.
 
TechBear_Seattle (1000+ posts) Tue Jan-24-06 12:19 PM
Response to Reply #1
2. My point exactly. n/t
 
TahitiNut (1000+ posts) Tue Jan-24-06 12:38 PM
Response to Original message
3. Just one word about "not scientific" ...
We should realize that this mantra merely disavows a poll as being statistically verifiable as representing a larger population. It does not mean that the poll doesn't represent a larger population. It means that NO conclusion can be validly drawn regarding any larger population. To emphasize this point, I can merely observe that a General Election is "not scientific" either! Those who vote in a General Election are self-selected just as those who vote in many polls. Thus, such polls represent only those who vote. Period.


As a side note, any polling has a possibility of fraud, some more than others. The safeguards against "ballot-box-stuffing" and manipulation of counts all have circumventions, some easier than others. The net materiality of such frauds is always arguable and almost never certain.
 
Gormy Cuss (1000+ posts) Tue Jan-24-06 12:42 PM
Response to Original message
4. I'd like to add a few more notes about interpreting studies.
Edited on Tue Jan-24-06 12:42 PM by Gormy Cuss
Significance, when used to describe poll results, can mean several things. It can refer to a substantive finding or, as is more typical these days, to statistical significance, which describes the degree to which a finding may be the result of chance. Significance is determined through standard statistical tests -- it is not a sign of how important the result is to public opinion or policy.

Random sampling in a proper scientific poll or study doesn't mean haphazard. In the simplest terms, it means every eligible subject has an equal chance of being selected, thus controlling bias. In the real world, straight random sampling requires larger samples than the resources would support, so other sampling methods are used. Telephone surveys that use random digit dialing are often stratified to balance replies in key categories (for example, political polls are often stratified to ensure sufficient responses from Dems and Repubs).
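A toy sketch of stratified sampling may help; the party tags, frame size and per-stratum count here are all made up for illustration.

```python
import random

random.seed(0)

# Hypothetical sampling frame: each person carries a party tag.
frame = [{"id": i, "party": random.choice(["Dem", "Rep", "Ind"])}
         for i in range(10_000)]

def stratified_sample(frame, key, per_stratum):
    """Draw an equal-size simple random sample from each stratum,
    guaranteeing every category enough respondents."""
    strata = {}
    for person in frame:
        strata.setdefault(person[key], []).append(person)
    return {k: random.sample(v, per_stratum) for k, v in strata.items()}

sample = stratified_sample(frame, "party", 100)
# Every party now has exactly 100 respondents, however rare it is
# in the frame.
```

A straight random sample of the same total size might, by chance, land too few respondents in a small category to say anything about it; stratification rules that out.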

Experimental design is a tool used to randomly assign subjects either to the treatment group (those who use a product or service) or to the control group (those who do not). If the subjects in the two groups are similar and the selection is done by random assignment, differences in outcome may be attributable to the product or service. For example, if the treatment group was offered a job skills program and the control group was not, and two years later the treatment group showed significantly higher wages than the control group, the job skills program may be the reason.
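The random assignment step itself is simple enough to sketch; the subject IDs here are hypothetical.

```python
import random

def random_assignment(subjects, seed=None):
    """Shuffle the subjects and split them evenly into a treatment
    group and a control group."""
    rng = random.Random(seed)
    pool = list(subjects)
    rng.shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]

treatment, control = random_assignment(range(200), seed=42)
# Every subject had an equal chance of landing in either group, so
# the two groups should be similar on average.
```

The shuffle is the whole trick: because assignment ignores every characteristic of the subjects, those characteristics end up balanced between the groups on average.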

Sometimes it is impossible to use an experimental design, and researchers pair subjects with a similar group of people called a comparison group. The risk of selection bias is much higher here, but well chosen comparison groups are better than trying to interpret results with only the study subjects' outcomes.

If the study results do not discuss methodology, be very suspicious.

 
TahitiNut (1000+ posts) Tue Jan-24-06 01:16 PM
Response to Reply #4
5. About sampling methods ...
To eliminate bias, the method of sampling must incorporate selection characteristics that are NOT correlated to the variables being surveyed. Thus, "random" sampling is (more than) sufficient but not necessary. In telephone polling, the presumption that telephone numbers (and their distribution) are NOT correlated to any of the (opinion) variables being sampled is challengeable. To some degree, the 'normalization' of a sample (adjusting results according to demographic references) assumes both a degree of bias in the sampling method and a degree of reliability in the demographic references. Such a posteriori adjustments introduce opportunities for organizational biases by proxy, i.e., biases based on the demographic references.
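That normalization step is usually done as post-stratification weighting, which can be sketched as below. The age groups, counts and population shares are invented for illustration.

```python
def post_stratification_weights(sample_counts, population_shares):
    """Weight each demographic group so the weighted sample matches the
    population's known composition."""
    n = sum(sample_counts.values())
    return {group: population_shares[group] / (sample_counts[group] / n)
            for group in sample_counts}

# Hypothetical: a phone sample that over-represents people 65 and older.
weights = post_stratification_weights(
    sample_counts={"18-64": 600, "65+": 400},
    population_shares={"18-64": 0.80, "65+": 0.20},
)
# Each 65+ response now counts half (0.20 / 0.40 = 0.5), and each
# 18-64 response counts a third more (0.80 / 0.60 = 1.33...).
```

Note that the correction is only as good as the reference shares it divides by, which is exactly the bias-by-proxy risk described above.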

 
Gormy Cuss (1000+ posts) Tue Jan-24-06 01:29 PM
Response to Reply #5
7. Oops! you just made a lot of eyes glaze over here.
Wasn't a failure to recognize the demographic bias in telephone sampling one of the reasons the pollsters predicted Dewey would beat Truman?
 
TahitiNut (1000+ posts) Tue Jan-24-06 01:40 PM
Response to Reply #7
8. Yes. It was the very beginning of telephone polling.
The results were consistent between the three major polling organizations. They all failed to consider the affluence factor of telephone owners and their availability to answer their phones at the time the calls were made. It's not clear, however, that 'normalization' according to demographic references attains materially more reliable results.

I should add one caveat to my remarks. Within the major polling organizations, the most bias is probably introduced by the design of the questions and the overall sequence (minor sequences are typically varied to achieve balance). The client (usually partisan) often has a LOT to say about the wording of the questions.
 
Gormy Cuss (1000+ posts) Tue Jan-24-06 02:00 PM
Response to Reply #8
9. Oh, those clients are why I never strayed over into market research.
It was bad enough having to explain to the economists and other analysts why wording and sequencing were important, and why asking an open-ended question was sometimes much more reliable than trying to guess the likely responses ahead of time in an effort to save money.

At one point I told a newly minted Master's level analyst that if he didn't want to pay for pretests, he might as well not fund the survey.
 
kay1864 (1000+ posts) Tue Jan-24-06 01:29 PM
Response to Original message
6. Thanks for all the great points!
Very good synopsis of items to look out for.

That said, AFAIK the polls on pollingreport.com (the only polls I ever look at--"web polls" and the like are useless) do in fact follow all of the above criteria.

I for one don't think BushInc. is influencing the polls--if they are, they're doing a pretty lousy job! Maybe they've got "heckuva job" Brownie in charge of the poll-influencing?

Also, I get annoyed at DUers who say "I don't trust the polls, all polls are crap", since there's no backup offered for such claims.

Fact is, if a poll is well-designed, with neutral questions, and a randomized sample of sufficient size, there's no reason to dispute it. Unless, for example, it is consistently in conflict with similarly-designed polls (Rasmussen is often accused of this). My conclusion would be not "well goes to show you, all polls are crap", but rather "maybe Rasmussen doesn't have genuinely randomized samples--maybe they're fudging the normalization, maybe they're cherry-picking". So what do I do? I take Rasmussen with a grain of salt, and I look at the other current polls, which are actually reflective of the populace.

I'm with you TechBear--I don't accept the online "polls" and I rarely "DU this poll!" since the end results don't amount to anything. Note that CNN doesn't trumpet its own online polls, since *they* know such polls aren't scientifically accurate (wish I could say the same for the evening local news!). But I do (generally) accept the 1000+ sample-size polls, since they're a different animal altogether.
 

