Failing to distinguish technically sound studies from poor ones

This topic is archived.
jazzhound (1000+ posts), Sun May-02-10 07:37 PM
Original message
Failing to distinguish technically sound studies from poor ones
Edited on Sun May-02-10 07:40 PM by jazzhound
"ALL STUDIES ARE CREATED EQUAL": FAILING TO DISTINGUISH TECHNICALLY SOUND STUDIES FROM POOR ONES


Reviews of large bodies of research studies can be misleading if the reviewers implicitly give equal weight to all studies. Most of the research done in the guns-violence field, especially that published in medical journals, is technically primitive, relying on research methods that most social scientists would regard as reflective of the technical standards of the mid-1960’s or earlier. More specifically, the research commonly (1) uses simple univariate or bivariate analysis procedures rather than multivariate procedures that control for variables that may confound the relationship between violence and guns or gun control, (2) ignores the possible two-way relationship between guns and violence (gun levels may increase violence rates, but higher violence rates may also increase gun acquisition for defensive purposes), (3) uses primitive, invalid measures of gun availability (or none at all), and (4) relies on small local samples that are not representative of any larger population.

If the strong studies yielded the same findings as the weak ones, this would not be a problem. Unfortunately, in general the research supporting the ideas that guns cause violence and that gun laws reduce violence is nearly all of the technically primitive variety, while technically competent studies tend to support the null hypotheses that gun levels and gun laws have no significant net effect on violence rates. For example, among studies of the relationship between gun levels and homicide rates, technically inferior studies ignore the effects of violence rates on gun levels, find positive associations, and erroneously interpret them as reflecting the effect of gun levels on violence rates (e.g., Brearley 1932; Newton and Zimring 1969; Seitz 1972; Fisher 1976; Phillips, Votey, and Howell 1976; Brill 1977; Cook 1979; Lester 1988b). The technically better studies that use complex statistical procedures to take account of the possible two-way relationship generally find no evidence of a net positive effect of gun levels on violence rates.

Consequently, it can be misleading when reviewers of the research literature engage in “research democracy,” acting as if “all studies are created equal” (Kleck 1985). Whenever scholars summarize evidence on a topic by simply listing studies, without comment on the relative methodological adequacy of each study, they are practicing research democracy. In drawing conclusions, serious scholars are supposed to weight evidence by the soundness of the methods used to generate it. To merely count up studies favoring a particular conclusion would generally lead to an outcome dominated by the technically inferior studies, since these tend to be more numerous. Probably in most fields poor research is more common than good research, but this is especially likely to be true in fields that generate intense emotions and ideologically based conflict, and it is certainly the case with work on guns and violence.

Dr. Gary Kleck – “Targeting Guns – Firearms and Their Control” pp. 32 & 33
(emphasis added, reprinted with permission)
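Kleck's points (1) and (2) above are easy to demonstrate with a toy simulation. Here's a minimal sketch in Python (the variable names and numbers are made up for illustration, not taken from any of the studies he cites) of how a bivariate analysis "finds" a gun effect that a multivariate analysis controlling for a confounder reveals to be spurious:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)            # hypothetical confounder (e.g. some social condition)
g = 0.8 * z + rng.normal(size=n)  # "gun level", partly driven by z
v = 0.8 * z + rng.normal(size=n)  # "violence rate", driven by z but NOT by g

bivariate = sm.OLS(v, sm.add_constant(g)).fit()
multivariate = sm.OLS(v, sm.add_constant(np.column_stack([g, z]))).fit()

print("bivariate coefficient on g:    %.3f" % bivariate.params[1])    # ~0.39, spurious
print("multivariate coefficient on g: %.3f" % multivariate.params[1]) # ~0.00, the true effect

The bivariate regression reports a sizable positive "effect" of g on v even though, by construction, there is none; controlling for the confounder makes it vanish. Handling the two-way relationship in point (2) takes heavier machinery still (simultaneous equations, instrumental variables), which is exactly what the technically primitive studies omit.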

SlipperySlope (1000+ posts), Wed Jun-09-10 01:45 AM
Response to Original message
1. Reminds me of Wikipedia
Reminds me of Wikipedia, where too often the threshold for inclusion is "it was published" instead of applying critical thought.
 
Travis Coates (489 posts), Wed Jun-16-10 11:00 PM
Response to Original message
2. Kicked
I'm not shocked at all that not a single anti responded to this thread.
 
spin (1000+ posts), Thu Jun-17-10 06:48 PM
Response to Original message
3. K&R (nt)
 
Euromutt (1000+ posts), Thu Jun-17-10 10:52 PM
Response to Original message
4. A large problem with the medical/public health research is that it ignores its own standards
Edited on Thu Jun-17-10 10:53 PM by Euromutt
John Ioannidis published an article almost five years ago titled "Why Most Published Research Findings Are False" (http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0020124). This is not, broadly speaking, a problem; science is largely a process of elimination, so a lot of hypotheses that are researched tend not to pan out. Initial research may indicate the possible presence of something significant, which subsequent research then shows to have been a statistical fluke.
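To see how quickly flukes accumulate, here's a small simulation of my own (not from the Ioannidis paper; the 10% share of true hypotheses and the effect size are assumptions): when true effects are rare, a large share of "significant" findings are false positives.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_studies, n_per_arm, alpha = 10000, 50, 0.05
is_real = rng.random(n_studies) < 0.10  # assume only 10% of hypotheses are true
effect = np.where(is_real, 0.3, 0.0)    # assumed true effect size when real

true_pos = false_pos = 0
for real, d in zip(is_real, effect):
    a = rng.normal(0.0, 1.0, n_per_arm)  # control group
    b = rng.normal(d, 1.0, n_per_arm)    # "treatment" group
    _, p = stats.ttest_ind(a, b)
    if p < alpha:
        if real: true_pos += 1
        else:    false_pos += 1

print("significant findings:", true_pos + false_pos)
print("share that are false: %.0f%%" % (100 * false_pos / (true_pos + false_pos)))

Under these assumptions, roughly half or more of the "significant" results are flukes, even though every individual study applied the conventional 5% threshold correctly.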

The problem with the "epidemiological" approach to research is that it relies pretty much exclusively on retrospective studies, i.e. ones that "look back" by examining data already gathered for other purposes. This type of study has its place, e.g. case-control studies in medicine, as a comparatively cheap way of seeing whether there's enough substance to a hypothesis to justify further research. But, in the words of research oncologist and medical science blogger David Gorski (http://www.sciencebasedmedicine.org/?p=2962):

<...> here’s one thing to remember about retrospective studies in general. They often find associations that later turn out not to hold up under study using prospective studies or randomized trials or, alternatively, turn out to be much weaker than the retrospective study showed.


As a result, the findings from any retrospective study should not be accepted as valid until confirmed by prospective studies and/or randomized trials.

Now here's the thing about the "epidemiological" approach to firearms research: the material is all retrospective studies. The entire body of work never goes beyond establishing that there's a hypothesis that merits further research. It's all very cute for someone like Charles Branas to claim his team used "the same approach that epidemiologists have historically used to establish links between such things as smoking and lung cancer," but that conveniently overlooks that a) Doll's retrospective study produced a much stronger association than Branas' (90% of lung cancer patients studied by Doll turned out to be cigarette smokers, as opposed to the 6% of shooting victims studied by Branas who were carrying), and b) Doll's research still had to be validated by subsequent cohort studies. And when it comes to firearms, those validating studies are never done.

Why this reluctance among researchers who take the "epidemiological" approach to do follow-up studies, including of research by others in the same field? The most obvious hypothesis (if anyone can come up with something more plausible, let me know) is that they're quite aware that the associations their retrospective studies have generated will evaporate, or at least be severely weakened, in prospective studies. The associations generated by retrospective studies are the best (hell, the only) evidence they have to support their agenda, so they can't afford to jeopardize it. But avoiding evidence that would undermine your hypothesis reduces your work to pseudoscience, and it's hard to escape the impression that this is what the "epidemiological" approach in firearms research is.
 
jazzhound (1000+ posts), Tue Jun-22-10 02:23 AM
Response to Reply #4
6. The pro-"control" lobby suppresses facts that discredit

their agenda on a regular basis. There is absolutely no refuting it at this stage. Kleck speaks of more than one occasion when Cook, Hemenway and pals were literally sitting in a conference room with him discussing studies that they later suppressed, destroying any plausible deniability they could claim regarding ignorance of those studies.

On another occasion, these clowns had the audacity to attack a study that they themselves had the opportunity to shape.

The integrity of these putzes is no more highly developed than that of those who "argue" the pro-"control" position on this board.
 
Taitertots (1000+ posts), Sat Jun-19-10 07:20 PM
Response to Original message
5. If someone has not taken college level statistics courses they are generally not capable...
of distinguishing technically sound studies from biased or misleading studies.

This includes reading the full text of the study and going over its data analysis techniques.
 
Euromutt (1000+ posts), Tue Jun-22-10 10:02 AM
Response to Reply #5
7. I'm not sure I agree with that statement
It's certainly easier to substantiate your argument if you have some grounding in statistical technique, but a modicum of critical thinking ability goes a long way toward spotting technically unsound studies. All the more so because, not to put too fine a point on it, a lot of statistical techniques (especially in econometric modeling) look suspiciously like an attempt to "baffle 'em with bullshit," coupled with what Ted Goertzel (professor of sociology at Rutgers, http://crab.rutgers.edu/~goertzel/) calls "statistical one-upmanship" (see his piece "Myths of Murder and Multiple Regression" here: http://crab.rutgers.edu/~goertzel/mythsofmurder.htm).

If you can convincingly articulate why the authors of any given study appear to have confused correlation with causation (a very common pitfall), you're more than halfway there. If the only response is to point out how complex their math is, then you're all the way there, because then you can apply Occam's Razor. After all, Ptolemaeus had the math to "prove" that the Sun orbits the Earth, but that didn't make him right; in fact, the very complexity of the math needed to "prove" the predetermined conclusion was a good indicator he was wrong.
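For a concrete (and entirely hypothetical) version of the correlation-vs-causation trap, try correlating two series that merely trend over time, as many social time series do. Two independent random walks, which by construction have nothing to do with each other, routinely produce impressive-looking correlations:

import numpy as np

rng = np.random.default_rng(2)
corrs = []
for _ in range(1000):
    x = np.cumsum(rng.normal(size=200))  # independent random walk #1
    y = np.cumsum(rng.normal(size=200))  # independent random walk #2
    corrs.append(abs(np.corrcoef(x, y)[0, 1]))

corrs = np.array(corrs)
print("median |r| between unrelated trending series: %.2f" % np.median(corrs))
print("share of pairs with |r| > 0.5: %.0f%%" % (100 * np.mean(corrs > 0.5)))

This is the classic "spurious regression" problem: with trending data, a strong correlation is the default outcome, not evidence of causation, no matter how sophisticated the model wrapped around it.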

To illustrate, above I unfavorably compared Branas et al.'s "Investigating the link between gun possession and gun assault" (http://www.ncbi.nlm.nih.gov/pubmed/19762675) to Richard Doll's work regarding the association between cigarette smoking and lung cancer. As I pointed out, Doll found that ~90% of the lung cancer patients studied turned out to be (then-)current or former smokers; it's fairly intuitive to tentatively conclude from that finding that smoking is a major causal factor in lung cancer. Compare this to UPenn's press release concerning Branas' article (http://www.uphs.upenn.edu/news/News_Releases/2009/09/gun-possession-safety/). The sub-headline cites the study's estimate "that people with a gun were 4.5 times more likely to be shot in an assault than those not possessing a gun."

Now, you don't have to have a huge grounding in statistics to conclude that with that kind of ratio, the percentage of people shot while carrying a gun should, all other things being equal, be a sizable majority. To compare, based on U.S. statistics, smokers are ~10.7 times as likely to develop lung cancer as non-smokers, so it's not surprising that ~87% of lung cancer patients are current or former smokers. Those figures are fairly consistent with each other. But Branas et al. found that the percentage of assaultive shooting victims studied who were carrying a firearm at the time of the shooting was around 6%. Which means, granting that the researchers didn't trip over the needless complexity of their own math or do a really bad job of selecting a control group (which, by the way, they did), all other things were not equal, and other causal factors played a much larger part in one's likelihood of getting shot (e.g. being a drug dealer or other petty criminal). Certainly, the findings didn't justify the conclusions or the press release, both emphasizing the risk of carrying a firearm, not with shooting victims without firearms outnumbering those with firearms 15 to 1.
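The arithmetic behind that comparison is simple enough to check yourself. A minimal sketch (the exposure rates plugged in here are my own assumptions, chosen to reproduce the percentages cited above; they're not figures from either study):

def victim_exposure_share(rr, exposure_rate):
    # If exposure multiplies risk by rr, what share of victims were exposed?
    exposed = rr * exposure_rate
    unexposed = 1.0 - exposure_rate
    return exposed / (exposed + unexposed)

# Smoking: RR ~10.7 and an assumed ~40% smoking rate in the studied population
# gives ~88% of lung cancer patients being smokers, consistent with the ~87% cited.
print("%.0f%%" % (100 * victim_exposure_share(10.7, 0.40)))

# Guns: RR 4.5. For only ~6% of shooting victims to have been carrying,
# the carry rate among comparable people would have to be only ~1.4%.
print("%.1f%%" % (100 * victim_exposure_share(4.5, 0.014)))

In other words, the smoking numbers hang together, while the Branas numbers only hang together if almost nobody in the comparison population carries, which is precisely why quoting the 4.5x figure without the base rates is misleading.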
 