We're so good at medical studies that most of them are wrong

This topic is archived.
n2doc Donating Member (1000+ posts) Send PM | Profile | Ignore Tue Mar-09-10 11:20 AM
Original message
We're so good at medical studies that most of them are wrong
By John Timmer | Last updated 6 days ago

It's possible to get the mental equivalent of whiplash from the latest medical findings, as risk factors are identified one year and exonerated the next. According to a panel at the American Association for the Advancement of Science, this isn't a failure of medical research; it's a failure of statistics, and one that is becoming more common in fields ranging from genomics to astronomy. The problem is that our statistical tools for evaluating the probability of error haven't kept pace with our own successes, in the form of our ability to obtain massive data sets and perform multiple tests on them. Even given a low tolerance for error, the sheer number of tests performed ensures that some of them will produce erroneous results at random.

The panel consisted of Suresh Moolgavkar from the University of Washington, Berkeley's Juliet P. Shaffer, and Stanley Young from the National Institute of Statistical Sciences. The three gave talks that partially overlapped, at least when it came to describing the problem, so it's most informative to tackle the session at once, rather than by speaker.

Why we can't trust most medical studies

Statistical validation of results, as Shaffer described it, simply involves testing the null hypothesis: that the pattern you detect in your data occurs at random. If you can reject the null hypothesis—and science and medicine have settled on rejecting it when there's only a five percent or less chance that it occurred at random—then you accept that your actual finding is significant.

The problem now is that we're rapidly expanding our ability to do tests. Various speakers pointed to data sources as diverse as gene expression chips and the Sloan Digital Sky Survey, which provide tens of thousands of individual data points to analyze. At the same time, the growth of computing power has meant that we can ask many questions of these large data sets at once, and each one of these tests increases the prospects that an error will occur in a study; as Shaffer put it, "every decision increases your error prospects." She pointed out that dividing data into subgroups, which can often identify susceptible subpopulations, is also a decision, and increases the chances of a spurious result. Smaller populations are also more prone to random associations.
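Shaffer's point is easy to simulate (an illustrative sketch, not from the talk; the test count and seed are invented): under the null hypothesis, p-values are uniformly distributed, so every additional test is another draw at the 5% threshold.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Under the null hypothesis every p-value is a uniform draw on [0, 1],
# so each test has a 5% chance of landing below the 0.05 threshold
# even though there is no real effect anywhere in the data.
n_tests = 100
p_values = [random.random() for _ in range(n_tests)]
false_positives = sum(p < 0.05 for p in p_values)
# On average, about 5 of every 100 null tests come out "significant".
```

Nothing here depends on the data being medical; any pipeline that runs enough tests on noise will produce "findings."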

In the end, Young noted, by the time you reach 61 tests, there's a 95 percent chance that you'll get a significant result at random. And, let's face it—researchers want to see a significant result, so there's a strong, unintentional bias towards trying different tests until something pops out.
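Young's arithmetic can be checked directly (a minimal sketch; the only inputs are the article's 0.05 threshold and an assumption of independent tests):

```python
import math

def family_wise_error_rate(n_tests, alpha=0.05):
    """Chance of at least one false positive across n independent tests."""
    return 1 - (1 - alpha) ** n_tests

# Smallest number of independent tests at alpha = 0.05 for which the
# chance of at least one spurious "significant" result reaches 95%.
crossover = math.ceil(math.log(0.05) / math.log(0.95))
```

Under strict independence the crossover lands at 59 tests; Young's figure of 61 is the same ballpark under slightly different assumptions.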

more
http://arstechnica.com/science/news/2010/03/were-so-good-at-medical-studies-that-most-of-them-are-wrong.ars
Zoeisright Donating Member (1000+ posts) Send PM | Profile | Ignore Tue Mar-09-10 11:28 AM
Response to Original message
1. Do you have a degree in science? Have you ever conducted research?
If not, you don't know what the hell you're talking about. Look up statistical significance.
 
drm604 Donating Member (1000+ posts) Send PM | Profile | Ignore Tue Mar-09-10 11:33 AM
Response to Reply #1
3. Maybe you should direct your comments at the author of the article,
rather than at n2doc, who simply posted the article.

The author is John Timmer: http://arstechnica.com/author/john-timmer
John Timmer
Science Editor
Observatory moderator

John got a Bachelor of Arts in Biochemistry (yes, that's possible) from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. He's done over a decade's worth of research in genetics and developmental biology at places like Cornell Medical College and the Memorial Sloan-Kettering Cancer Center. In addition to being Ars' science content wrangler, John still teaches at Cornell and does freelance writing, editing, and programming, often with a scientific focus. When physically separated from his keyboard, John tends to respond by seeking out a volleyball court, bicycle, or a scenic location for communing with his hiking boots.
 
HuckleB Donating Member (1000+ posts) Send PM | Profile | Ignore Tue Mar-09-10 01:00 PM
Response to Reply #1
6. The author is a researcher.
And he's not saying anything most researchers I know and read don't say.
 
Jim__ Donating Member (1000+ posts) Send PM | Profile | Ignore Tue Mar-09-10 04:46 PM
Response to Reply #1
8. Is there anything specific in the article that you disagree with?
Edited on Tue Mar-09-10 04:58 PM by Jim__
The mathematics seems correct. If researchers normally run the number of different tests they're talking about, 61 on a given data set, then their claim seems valid. It also seems like this can be addressed, but at some cost.
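One standard fix of the kind Jim__ alludes to is a Bonferroni-style correction: demand a stricter per-test threshold so the family-wise error rate stays near 5%. A sketch with assumed numbers (the 61 comes from the article; the rest is illustration):

```python
alpha = 0.05   # desired family-wise error rate
n_tests = 61   # the figure quoted in the article

# Bonferroni: test each hypothesis at alpha / n instead of alpha.
per_test_alpha = alpha / n_tests  # roughly 0.0008
family_wise = 1 - (1 - per_test_alpha) ** n_tests  # stays just under 0.05
```

The "cost" is statistical power: any real effect whose p-value falls between the corrected threshold and 0.05 now gets missed.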
 
drm604 Donating Member (1000+ posts) Send PM | Profile | Ignore Tue Mar-09-10 11:29 AM
Response to Original message
2. So then why should we trust this study?
Edited on Tue Mar-09-10 11:34 AM by drm604
:evilgrin:
 
bemildred Donating Member (1000+ posts) Send PM | Profile | Ignore Tue Mar-09-10 11:50 AM
Response to Original message
4. Only too true.
Statistics is based on probability theory, and I often see statistical results interpreted with a faith that borders on religion. Sometimes, as in the "news" media, this is obviously a matter of propagandizing the public; in scientific areas it appears to be a sign of the corrupting effect of money with strings attached, and it has propaganda as a motive too. One sees very modest correlations used to spook the medical herd in this direction or that all the time.

Skepticism is always warranted where statistics are involved. People simply underestimate the complexity of the world we live in and overestimate how orderly it is, and selection bias is built into human intelligence, because order is what intelligence is all about; disorder is not interesting.

* -- I have a BA in Math and an MS in Computer Science, and I DO understand chaos theory, combinatorics, probability, and statistics.
 
Warpy Donating Member (1000+ posts) Send PM | Profile | Ignore Tue Mar-09-10 11:58 AM
Response to Reply #4
5. The problem would seem to be one of sample size
especially in studies done before computer tabulation was possible. I'm sure we're doing a lot of sound medical practice that will be discredited in the future.
 
bemildred Donating Member (1000+ posts) Send PM | Profile | Ignore Tue Mar-09-10 02:54 PM
Response to Reply #5
7. There are many different problems.
Edited on Tue Mar-09-10 03:11 PM by bemildred
You are talking about methodology, which is one thing, and an important subject, but I am talking about theory and attitudes, which is another and also important. When you enter the world of probability, you forgo determinism at the outset; that is what probability is about: the mathematics of uncertainty, of "chance" and "randomness". One says: "Well, if we cannot have complete order, and we cannot say much of anything interesting about complete disorder, the stasis of maximum entropy, then what happens if we have some order and some disorder all mixed together, where we have some rules but they do not compel?"

Scientists WANT to get results. Statistical results are inherently "fuzzy". Proper attention to sample sizes and sampling methodology theoretically allows you to arbitrarily reduce the "fuzziness", but your only method of checking that is yet more statistical work, more samples. There is never any deductive process that allows you to say "this is precisely so" or "this will certainly happen". Even in Physics they come up against this. It was a big comedown for Physics when it had to admit probabilistic methods to describe what it saw. Before quantum theory, all you had to worry about was "error" in your measurements, and statistics was a way to guard against that, to increase accuracy. Quantum mechanics moved the uncertainty out of the reach of methodological attacks. You inherently never know where any particular particle is or what any particular particle is going to do. How much more is that so in the enormously complex worlds of biology and sociology and finance?

There is an inherent circularity to statistical evidence, which must be guarded against. You see only what you look for. You never really know that you looked at all the right things. This is true of all observational science. It is why we are always finding new things.

There is a notion that large sample sizes lead to a form of certainty, but in fact this is so only in special cases, when the assumptions the study is based on are sound. One assumes that a certain population exists with certain attributes which may be measured by certain methodologies. Those are each and every one unproven assumptions. Suppose that your population is in fact better thought of as three populations with regard to the attributes that you want to study? How would you know? If the assumptions that your study is based on are incorrect, how do you surely know that? Are you really going to want to find that out?
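The hidden-subpopulation worry can be made concrete with a toy simulation (all numbers are invented for illustration): two subgroups respond strongly in opposite directions, and a large pooled sample just produces confident evidence of "no effect."

```python
import random
import statistics

random.seed(1)  # reproducible toy data

# Hypothetical mixture: half the population is helped by a treatment,
# half is harmed, by the same amount.
helped = [random.gauss(+2.0, 1.0) for _ in range(5000)]
harmed = [random.gauss(-2.0, 1.0) for _ in range(5000)]
pooled = helped + harmed

pooled_mean = statistics.mean(pooled)  # near zero: looks like "no effect"
pooled_sd = statistics.pstdev(pooled)  # inflated by the hidden split
```

A bigger sample only tightens the confidence interval around the misleading pooled mean; it cannot reveal a split the study design never looked for.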

Statistics is only a tool. Used in the proper way, on the proper sorts of problems, with due application of critical thinking, it can inform and guide you; in the right sort of situation it even offers a fuzzy sort of predictive capacity, as in playing cards. But it never compels; the real world is full of "error".

The fundamental problem is that people WANT to know and they WANT to be certain and with a certain amount of statistical razzle-dazzle it is easy to pretend to do that, and difficult for anyone to prove you wrong. Good medical people see this easily when the subject is "alternative medicine", and rightly so, but are much less skeptical when it comes from within their own community. Doctors WANT to help, and they WANT to cure disease, and that colors their judgement and constrains their skepticism much as with anyone else.
 