General Discussion
Study says type X people prefer Y. Study says group A experiences less B.
"Hey! That's WRONG! I'm type X, and I hate Y! I'm in group A, and B happens to me all the time!"
I dread these inevitable stupid reactions to reports of statistical studies.
Popular press reports have to take part of the blame, often oversimplifying and distorting the actual contents of the studies they're reporting on for the sake of punchier headlines, or simply because the reporters are as bad as many of their readers at processing statistical and probabilistic information.
No matter how accurate or conscientious the reporting, however, it seems a lot of people just can't deal with associations and correlations on anything but a black-and-white level.
I'll make up an imaginary example, hopefully avoiding the emotional triggers often associated with real examples: Suppose naturally purple hair were common, and a study came out saying that, on average, people with purple hair rated lower on math ability.
Yes, that study could be wrong for any number of reasons (poor study design, bad sampling, biased testing, inaccurate assumptions), but it's not WRONG!!11!! because your sister has purple hair and got an 800 on her SATs. A correlation can be real and strong or real and weak, with few exceptions or with plenty of them.
The study could be wrong, but it's not wrong because the results upset your idea of fairness. It's not wrong just because someone who is prejudiced against the purple-haired might use the study to justify their prejudice. It's not wrong because a stupidly oversimplified caricature of the real study (all people with purple hair are bad at math) is wrong.
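The real-but-weak point is easy to see in a toy simulation (all numbers made up for the purple-hair example, not from any actual study): the group-level gap is genuine, yet a huge fraction of individuals run against it.

```python
import random

random.seed(0)

# Toy model with made-up numbers: purple-haired test-takers average
# slightly lower, but individual variation dwarfs the group gap.
purple = [random.gauss(495, 100) for _ in range(10_000)]
other = [random.gauss(505, 100) for _ in range(10_000)]

mean_purple = sum(purple) / len(purple)
mean_other = sum(other) / len(other)

# The group difference is real...
print(f"group gap: {mean_other - mean_purple:.1f} points")

# ...and yet close to half of the purple-haired scores beat the
# OTHER group's average, so "my sister aced her SATs" refutes nothing.
above = sum(s > mean_other for s in purple) / len(purple)
print(f"purple-haired scoring above the other group's mean: {above:.0%}")
```

A weak association like this one guarantees plenty of exceptions; pointing at one of them doesn't make the average difference disappear.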
Xipe Totec
(43,890 posts)

RC
(25,592 posts)

Never mind, I figured out they are the other type
Silent3
(15,204 posts)

Wounded Bear
(58,647 posts)

People who divide people into two groups, and people who don't.
bemildred
(90,061 posts)

Edit: sometimes I think that is even what they are FOR.
Silent3
(15,204 posts)

Are you making a distinction between stupid reporting of some studies and the studies themselves? Do you think some questions are unworthy of being asked, or are wrong to ask?
In any event, I don't buy the idea that the stupidity, or lack thereof, of a study has much bearing on the stupidity of the responses. If a study strikes some people as humorous, that will certainly generate more smart-ass comments, but that's not quite the same as grossly misunderstanding statistical and probabilistic distributions, and it's not the same as being unable to grasp the distinction between tendencies and loose associations on the one hand and hard-and-fast, black-and-white rules on the other.
sibelian
(7,804 posts)

but, you know, the study you're clearly talking about... I mean, whaaaaaa? WHO thought that up? It's not exactly an inspiring, life-enriching use of the scientific method, is it? I think you're right about the rest of what you say, tho.
Silent3
(15,204 posts)

...but the kinds of reactions I'm talking about apply to many different studies when mentioned on various online discussion boards.
Imagine that, at the end of a twenty-year longitudinal study, regular consumption of quantity X of sugary soda and candy led to a 12% increase in diagnoses of Type 2 diabetes compared to a control group.
Stupid headline for the story: Sugar Causes Diabetes
Stupid reader reaction #1: I told you, sugar is POISON!!!!!
Stupid reader reaction #2: That's bullshit! My aunt drank four liters of Pepsi and ate five Snickers bars every day, never got diabetes, and lived to 97 when she got hit by a bus!
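For what it's worth, here's a back-of-the-envelope sketch (all counts hypothetical, invented for this imaginary study) of what a "12% increase" typically means, and why the aunt with the Pepsi habit doesn't touch it:

```python
# Hypothetical counts for the imaginary soda-and-candy study.
control_n, control_cases = 5000, 400  # 8.00% baseline diabetes rate
exposed_n, exposed_cases = 5000, 448  # 8.96% among heavy consumers

control_rate = control_cases / control_n
exposed_rate = exposed_cases / exposed_n

relative_increase = (exposed_rate - control_rate) / control_rate
absolute_increase = exposed_rate - control_rate

print(f"relative increase: {relative_increase:.0%}")         # 12%
print(f"absolute increase: {absolute_increase:.2%} points")  # 0.96% points

# Over 91% of BOTH groups never develop diabetes, so any number of
# individual counterexamples is exactly what the study predicts.
```

A relative increase on a small baseline rate is fully compatible with most people in the exposed group never getting the disease.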
sibelian
(7,804 posts)

It's that sort of nonsense that "contradicts" statistical information regarding global warming. "It snowed heer. Globl warming is HOAX. That Logic." It's like everybody suddenly acquired the IQ of a herd of lolcats.
bemildred
(90,061 posts)

Questions, like hypotheses, are a dime a dozen, but finding good, meaningful, illuminating questions is work, sometimes a lot of work. And then you still have to figure out how to collect evidence that supports them.
On the other hand, if you just formulate some catchy question or hypothesis, go out and question some people about it, sort their answers into your little bins, and compute some numbers, then you have nothing at all, except perhaps an exciting segment on some TV show.
That's what I think.
Something along the lines of the critique labelled "Poll Dance" in the letters section linked here:
Poll Dance
I suggest reconsideration of your PBK Presidents Poll, which appears neither to be a poll nor to have been conducted uniformly among college presidents.
We're advised only that this project collected responses from 70 individuals in an attempt to survey leaders of colleges and universities on issues facing higher education. There's no description of sampling methodology, sample composition, sample weighting, questionnaire design, data validation or any computation of sampling error or statistical reliability.
What constitutes a poll is worthy of discussion. In the context of news reporting, which appears to be your aim, a poll is a study of attitudes or behavior among a randomly selected group of individuals whose characteristics are reliably representative of the broader population from which the sample was drawn. A probability-based sample, as required by this definition, is essential for the application of inferential statistics, the principle being that inferences about a full set can be made by examination of a randomly selected subset.
Many other niceties are involved (best practices in questionnaire design and accuracy in data analysis, for instance) and particulars can be debated. But the day starts with sampling. Without it we have a compilation of anecdote, not reliably quantifiable in a representative sense, and thus not a poll, presidents or otherwise. The stylebook of The New York Times, for one, says that the words "poll" and "survey" are to be limited to scientific soundings of public opinion.
What we have, rather than a poll, may be an attempted census. To be successful, this would have to include, as close as possible, the presidents of all 280 academic institutions with PBK chapters. Without awareness of, and if needed correction for, differential nonresponse, a 25 percent completion rate in a census, sorry to say, doesn't cut it.
http://theamericanscholar.org/responses-to-our-winter-2013-issue/
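The letter's complaint about the missing sampling-error computation can be made concrete. As a rough sketch (charitably assuming a simple random sample, which the letter argues this wasn't), the standard 95% margin of error for a proportion from 70 respondents is around plus or minus 12 points:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a sample proportion p from n respondents,
    assuming a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case p = 0.5 with the 70 responses the letter mentions.
moe = margin_of_error(0.5, 70)
print(f"margin of error: +/- {moe:.1%}")  # +/- 11.7%
```

And even that figure only applies when respondents were randomly selected; with self-selected responders and 25 percent completion, no margin of error rescues the numbers.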