Science
Related: Irreproducible Scientific Results
This report on the irreproducibility of some scientific experimental results may be a very big deal for the scientists in the audience.
It has been jarring to learn in recent years that a reproducible result may actually be the rarest of birds. Replication, the ability of another lab to reproduce a finding, is the gold standard of science, reassurance that you have discovered something true. But that is getting harder all the time. With the most accessible truths already discovered, what remains are often subtle effects, some so delicate that they can be conjured up only under ideal circumstances, using highly specialized techniques.
Fears that this is resulting in some questionable findings began to emerge in 2005, when Dr. John P. A. Ioannidis, a kind of meta-scientist who researches research, wrote a paper pointedly titled "Why Most Published Research Findings Are False."
Paradoxically, the hottest fields, with the most people pursuing the same questions, are the most prone to error, Dr. Ioannidis argued. If one of five competing labs is alone in finding an effect, that result is the one likely to be published. But there is a four-in-five chance that it is wrong. Papers reporting negative conclusions are more easily ignored.
Putting all of this together, Dr. Ioannidis devised a mathematical model supporting the conclusion that most published findings are probably incorrect.
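Dr. Ioannidis's arithmetic can be checked with a toy simulation. This is a sketch, not his actual model: the effect-prevalence, statistical-power, and false-positive numbers below are illustrative assumptions, and "publication" is modeled crudely as "at least one lab got a positive result."

```python
import random

random.seed(1)

def run_field(n_labs=5, true_effect_rate=0.1, power=0.8, alpha=0.05, trials=10_000):
    """Toy model: several labs chase the same question; a positive finding,
    if any lab gets one, is the result that gets published."""
    published_true = published_false = 0
    for _ in range(trials):
        effect_is_real = random.random() < true_effect_rate
        # Each lab independently "finds" the effect: with probability
        # `power` if it is real, `alpha` (a false positive) if it is not.
        p_find = power if effect_is_real else alpha
        positives = sum(random.random() < p_find for _ in range(n_labs))
        if positives:  # at least one lab publishes a positive result
            if effect_is_real:
                published_true += 1
            else:
                published_false += 1
    return published_false / (published_true + published_false)

print(f"share of published positives that are false: {run_field():.2f}")
```

With these (assumed) numbers, roughly two-thirds of the published positive findings are false, even though every individual lab is running a perfectly honest 5% false-positive test. Selective publication of positives does all the damage.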
Xipe Totec
(43,889 posts)
longship
(40,416 posts)
Particularly in medical research, where negative outcomes are often file-drawered as somehow unimportant.
Hence Science Based Medicine.
(All the examples in the article were medical research.)
I concur with the article and with proponents of science based medicine.
FiveGoodMen
(20,018 posts)
bemildred
(90,061 posts)
Statistical factoids.
Reproducibility is a very stringent criterion; that's why it's the gold standard.
GliderGuider
(21,088 posts)
It's so much easier to get peers whose agreement will give results the aura of validity than it is to actually reproduce those results, especially in biochemical fields.
Peer review is necessary, but not sufficient.
bemildred
(90,061 posts)
Scientists would do better to drop the pretense of objectivity (there is no such thing) and to develop better countermeasures for their subjectivity. (Peer review is one such, but as you point out, it can be quite subjective too.) Peer review is more of a pre-screen, in my view.
I think people mostly find what they look for. And they mostly look for what everybody else is looking for.
There is also what I consider the bizarre habit of treating modest correlations between statistical attributes as though they signify something causal. They don't. At worst they are artifacts of your methodology, at best they are clues about where you ought to be looking.
And sometimes I think nobody knows what a significant digit is anymore; how many digits to report seems to be more of an aesthetic choice.
Peer review is essential where you cannot resort to empirical methods: astronomy, math sometimes, the "soft" sciences. It's not much, but it's all you've got.
On the other hand, economics, for example, could use a large dose of empiricism; most of macroeconomics seems to be counterfactual dogma. Or maybe the problem lies at the political end of economics, but either way, current neoliberal economic theory looks more like self-serving twaddle than a description of what one can actually see happening.
And then there is the money, which corrupts as it nourishes.
Edit: </end rant>
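The point above about modest correlations being artifacts of methodology can be made concrete: comb through enough pairs of purely random "attributes" and a respectable-looking correlation always turns up. A quick sketch; every number here (200 variables, 30 observations) is an arbitrary choice for illustration:

```python
import random
import statistics

random.seed(0)

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (sx * sy)

# 200 unrelated "attributes", 30 observations each -- all pure noise.
n_obs, n_vars = 30, 200
data = [[random.gauss(0, 1) for _ in range(n_obs)] for _ in range(n_vars)]

# Scan every pair for the strongest correlation, as a fishing
# expedition over a dataset would.
best = max(abs(pearson(data[i], data[j]))
           for i in range(n_vars) for j in range(i + 1, n_vars))
print(f"strongest correlation found among pure noise: r = {best:.2f}")
```

The "best" correlation found this way is well above what anyone would call modest, and it signifies nothing at all: it is a pure artifact of searching many pairs with few observations each.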
bemildred
(90,061 posts)
You can have the best encryption algorithm then known, and if somebody obtains the key through subterfuge, it's useless.
Our human nature, normally a good thing, undermines our attempts at theoretical rigor. We have needs. We want to be nice, we want to get along. We want to succeed.
Government and industry meddle in science constantly, and they are no bastions of objectivity either.
All that monkey politics undermines the process by which work is selected for publication; there are both a publication bias and a confirmation bias. "Publish or perish" guarantees a lot of it will be low quality.
GliderGuider
(21,088 posts)
bemildred
(90,061 posts)eppur_se_muova
(36,256 posts)
bemildred
(90,061 posts)
The problem is always figuring out which 90% is crap. That is where letting a thousand harpies attack it on the internet can add clarity.
eppur_se_muova
(36,256 posts)
That sounds like an idea that deserves a name of its own. I guess it's a variation on the "Mongolian Hordes technique" (already familiar to programmers).
bemildred
(90,061 posts)
Troublemakers. I used to go looking for them to test my work. The good ones had an instinctive grasp of what to do to make it crash. It's not good for your ego, but it does make your work better.
EvolveOrConvolve
(6,452 posts)
GliderGuider
(21,088 posts)
The idea was to make sure that research on which Amgen was spending millions of development dollars still held up. They figured that a few of the studies would fail the test (that is, the original results couldn't be reproduced) because the findings were especially novel or described fresh therapeutic approaches.
But what they found was startling: Of the 53 landmark papers, only six could be proved valid. "Even knowing the limitations of preclinical research," observed C. Glenn Begley, then Amgen's head of global cancer research, "this was a shocking result."
Unfortunately, it wasn't unique. A group at Bayer HealthCare in Germany similarly found that only 25% of published papers on which it was basing R&D projects could be validated, suggesting that projects in which the firm had sunk huge resources should be abandoned. Whole fields of research, including some in which patients were already participating in clinical trials, are based on science that hasn't been, and possibly can't be, validated.
"The thing that should scare people is that so many of these important published studies turn out to be wrong when they're investigated further," says Michael Eisen, a biologist at UC Berkeley and the Howard Hughes Medical Institute. The Economist recently estimated spending on biomedical R&D in industrialized countries at $59 billion a year. That's how much could be at risk from faulty fundamental research.
kristopher
(29,798 posts)
Dec 10, 2013 by Bob Yirka
(Phys.org) Randy Schekman, winner (with colleagues) of this year's Nobel Prize in Physiology or Medicine for work describing how materials are carried to different parts of cells, has stirred up a hornet's nest in the scientific community by publishing an article in The Guardian lashing out at three of the top science journals: Science, Cell and Nature.
In the article Schekman claims that scientific research is being "disfigured by inappropriate incentives." He maintains that the top science journals artificially inflate their stature by keeping the number of articles they publish low. He asserts that the practices of the top journals are causing undue difficulties for young researchers, who have become convinced that the only true measure of success is publication in one of the top-tier journals.
He continues by suggesting that because the top tier journals are run by editors, rather than scientists, it's often the flashiest articles that get published, rather than the best or most relevant.
Schekman offered hints of his dissatisfaction with the publication process when he took a position as an editor at eLife, an online science journal that prints research papers; it is also peer reviewed, but doesn't charge an access fee.
In his article he suggests that many researchers and organizations cut corners in order to focus more clearly on the "wow" factor and...
http://phys.org/news/2013-12-nobel-scientist-boycott-science-journals.html
Schekman's article in The Guardian:
The incentives offered by top journals distort science, just as big bonuses distort banking
Randy Schekman
The Guardian, Monday 9 December 2013 14.30 EST
The journal Science has recently retracted a high-profile paper reporting links between littering and violence. Photograph: Alamy/Janine Wiedel
I am a scientist. Mine is a professional world that achieves great things for humanity. But it is disfigured by inappropriate incentives. The prevailing structures of personal reputation and career advancement mean the biggest rewards often follow the flashiest work, not the best. Those of us who follow these incentives are being entirely rational (I have followed them myself), but we do not always best serve our profession's interests, let alone those of humanity and society.
We all know what distorting incentives have done to finance and banking. The incentives my colleagues face are not huge bonuses, but the professional rewards that accompany publication in prestigious journals, chiefly Nature, Cell and Science.
These luxury journals are supposed to be the epitome of quality, publishing only the best research. Because funding and appointment panels often use place of publication as a proxy for quality of science, appearing in these titles often leads to grants and professorships. But the big journals' reputations are only partly warranted. While they publish many outstanding papers, they do not publish only outstanding papers. Neither are they the only publishers of outstanding research.
These journals aggressively curate their brands, in ways more conducive to selling subscriptions than to stimulating the most important research. Like fashion designers who create limited-edition handbags or suits, they know scarcity stokes demand, so they artificially restrict the number of papers they accept. The exclusive brands are then marketed with a gimmick called "impact factor": a score for each journal, measuring the number of times its papers are cited by subsequent research. Better papers, the theory goes, are cited more often, so better journals boast higher scores. Yet it is a deeply flawed measure; pursuing it has become an end in itself, and it is as damaging to science as the bonus culture is to banking.
It is common, and encouraged by many journals, for research to be judged by the impact factor of the journal that publishes it. But...
http://www.theguardian.com/commentisfree/2013/dec/09/how-journals-nature-science-cell-damage-science
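For readers unfamiliar with the metric Schekman is criticizing: the standard two-year impact factor is simply a ratio of citations to citable items. A minimal sketch; the journal figures used below are invented for illustration:

```python
def impact_factor(citations_to_recent: int, citable_items_recent: int) -> float:
    """Two-year impact factor: citations received this year to the journal's
    papers from the previous two years, divided by the number of citable
    items it published in those two years."""
    return citations_to_recent / citable_items_recent

# Hypothetical journal: 800 citable items over two years, drawing
# 24,000 citations this year.
print(impact_factor(24_000, 800))  # -> 30.0
```

Note what the ratio averages over: a handful of blockbuster papers can carry thousands of citations while most papers in the same journal collect few, which is one reason judging an individual paper by its journal's score is so misleading.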
http://www.democraticunderground.com/122825671
http://bjoern.brembs.net/comment-n815.html
Last week we saw that the most prominent way of ranking scholarly journals, Thomson Reuters' Impact Factor (IF), isn't a very good measure for predicting how many citations your scientific paper will attract. Instead, there is evidence that IF is much better at predicting the chance that your paper might get retracted.
Now, I've just been sent a paper (subscription required) which provides evidence that the reliability of some research papers correlates negatively with journal IF. In other words, the higher the journal's IF in which the paper was published, the less reliable the research is.
<snip>
http://www.democraticunderground.com/1228903
caraher
(6,278 posts)
That's how the author of a chapter on how science works, one I've often assigned to students, characterizes the peer-reviewed literature. I think it's the best characterization I've seen, and it holds in the "hard" sciences too. (Though I can say I've personally run across several instances of "obviously wrong" papers in highly reputed journals.)