
reformist2

(9,841 posts)
Tue Feb 18, 2014, 02:07 AM

In cancer science, many discoveries don't hold up.


This applies not only to cancer research, but to all sorts of scientific research, especially when a certain result is desired:


IN CANCER SCIENCE, MANY DISCOVERIES DON'T HOLD UP


By Sharon Begley


(Reuters) - A former researcher at Amgen Inc has found that many basic studies on cancer -- a high proportion of them from university labs -- are unreliable, with grim consequences for producing new medicines in the future.

During a decade as head of global cancer research at Amgen, C. Glenn Begley identified 53 "landmark" publications -- papers in top journals, from reputable labs -- for his team to reproduce. Begley sought to double-check the findings before trying to build on them for drug development.

Result: 47 of the 53 could not be replicated. He described his findings in a commentary piece published on Wednesday in the journal Nature.

"It was shocking," said Begley, now senior vice president of privately held biotechnology company TetraLogic, which develops cancer drugs. "These are the studies the pharmaceutical industry relies on to identify new targets for drug development. But if you're going to place a $1 million or $2 million or $5 million bet on an observation, you need to be sure it's true. As we tried to reproduce these papers we became convinced you can't take anything at face value."

The failure to win "the war on cancer" has been blamed on many factors, from the use of mouse models that are irrelevant to human cancers to risk-averse funding agencies. But recently a new culprit has emerged: too many basic scientific discoveries, done in animals or cells growing in lab dishes and meant to show the way to a new drug, are wrong.

Begley's experience echoes a report from scientists at Bayer AG last year. Neither group of researchers alleges fraud, nor would they identify the research they had tried to replicate.

But they and others fear the phenomenon is the product of a skewed system of incentives that has academics cutting corners to further their careers.

...

On Tuesday, a committee of the National Academy of Sciences heard testimony that the number of scientific papers that had to be retracted increased more than tenfold over the last decade; the number of journal articles published rose only 44 percent.

Ferric Fang of the University of Washington, speaking to the panel, said he blamed a hypercompetitive academic environment that fosters poor science and even fraud, as too many researchers compete for diminishing funding.

"The surest ticket to getting a grant or job is getting published in a high-profile journal," said Fang. "This is an unhealthy belief that can lead a scientist to engage in sensationalism and sometimes even dishonest behavior."

The academic reward system discourages efforts to ensure a finding was not a fluke. Nor is there an incentive to verify someone else's discovery. As recently as the late 1990s, most potential cancer-drug targets were backed by 100 to 200 publications. Now each may have fewer than half a dozen.

"If you can write it up and get it published you're not even thinking of reproducibility," said Ken Kaitin, director of the Tufts Center for the Study of Drug Development. "You make an observation and move on. There is no incentive to find out it was wrong."

http://www.reuters.com/article/2012/03/28/us-science-cancer-idUSBRE82R12P20120328
4 replies
In cancer science, many discoveries don't hold up. (Original Post) reformist2 Feb 2014 OP
Cool (and depressing) piece on the larger issue of "confidence" in Nature Recursion Feb 2014 #1
"Good" science is all about trying to disprove something, falsifying hypotheses. LAGC Feb 2014 #2
As XKCD said... Recursion Feb 2014 #4
kr. fraud is being incentivized. El_Johns Feb 2014 #3

Recursion

(56,582 posts)
1. Cool (and depressing) piece on the larger issue of "confidence" in Nature
Tue Feb 18, 2014, 02:12 AM
http://www.nature.com/news/scientific-method-statistical-errors-1.14700

P values have always had critics. In their almost nine decades of existence, they have been likened to mosquitoes (annoying and impossible to swat away), the emperor's new clothes (fraught with obvious problems that everyone ignores) and the tool of a “sterile intellectual rake” who ravishes science but leaves it with no progeny. One researcher suggested rechristening the methodology “statistical hypothesis inference testing”, presumably for the acronym it would yield.

The irony is that when UK statistician Ronald Fisher introduced the P value in the 1920s, he did not mean it to be a definitive test. He intended it simply as an informal way to judge whether evidence was significant in the old-fashioned sense: worthy of a second look. The idea was to run an experiment, then see if the results were consistent with what random chance might produce. Researchers would first set up a 'null hypothesis' that they wanted to disprove, such as there being no correlation or no difference between two groups. Next, they would play the devil's advocate and, assuming that this null hypothesis was in fact true, calculate the chances of getting results at least as extreme as what was actually observed. This probability was the P value. The smaller it was, suggested Fisher, the greater the likelihood that the straw-man null hypothesis was false.
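To make that procedure concrete, here is a minimal Python sketch of the logic the paragraph describes (the sketch and all the numbers are mine, not the article's): state a null hypothesis of "no difference between two groups", then estimate the chance of a difference at least as extreme as the observed one by randomly reshuffling the group labels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical measurements for two groups (values invented for illustration).
group_a = np.array([5.1, 4.9, 6.2, 5.8, 5.5, 6.0])
group_b = np.array([4.2, 4.8, 5.0, 4.5, 4.7, 4.3])
observed = group_a.mean() - group_b.mean()

# Under the null hypothesis the group labels are arbitrary, so shuffle them
# many times and record the difference each random relabeling produces.
pooled = np.concatenate([group_a, group_b])
n_a = len(group_a)
null_diffs = []
for _ in range(100_000):
    rng.shuffle(pooled)
    null_diffs.append(pooled[:n_a].mean() - pooled[n_a:].mean())

# The P value: the chance, under the null, of a result at least as extreme
# as the one actually observed (two-sided).
p_value = np.mean(np.abs(null_diffs) >= abs(observed))
print(f"observed difference = {observed:.3f}, P = {p_value:.4f}")
```

A small P here only says the data would be surprising if chance alone were at work, which is exactly the "worthy of a second look" reading Fisher intended.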

For all the P value's apparent precision, Fisher intended it to be just one part of a fluid, non-numerical process that blended data and background knowledge to lead to scientific conclusions. But it soon got swept into a movement to make evidence-based decision-making as rigorous and objective as possible. This movement was spearheaded in the late 1920s by Fisher's bitter rivals, Polish mathematician Jerzy Neyman and UK statistician Egon Pearson, who introduced an alternative framework for data analysis that included statistical power, false positives, false negatives and many other concepts now familiar from introductory statistics classes. They pointedly left out the P value.
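Those Neyman-Pearson quantities are easy to see in simulation. The sketch below is my own illustration, not the article's: it assumes SciPy's standard two-sample t-test, and the sample size, effect size, and alpha are invented. It runs many experiments where the null is true and many where a real effect exists, applies a fixed rejection rule, and tallies false positives and false negatives.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, effect, trials, alpha = 20, 0.8, 5_000, 0.05

false_positives = 0  # null true, but the test rejects it anyway
false_negatives = 0  # effect real, but the test fails to reject

for _ in range(trials):
    control = rng.normal(0.0, 1.0, n)
    same = rng.normal(0.0, 1.0, n)        # no real difference from control
    treated = rng.normal(effect, 1.0, n)  # genuine effect of size `effect`
    if stats.ttest_ind(control, same).pvalue < alpha:
        false_positives += 1
    if stats.ttest_ind(control, treated).pvalue >= alpha:
        false_negatives += 1

print(f"false-positive rate ~ {false_positives / trials:.3f} (tracks alpha)")
print(f"power ~ {1 - false_negatives / trials:.3f} (1 minus false-negative rate)")
```

In this framework the error rates are properties of the decision rule chosen in advance, which is a different idea from Fisher's P value, and mixing the two is the hybrid the next paragraph describes.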

But while the rivals feuded — Neyman called some of Fisher's work mathematically “worse than useless”; Fisher called Neyman's approach “childish” and “horrifying [for] intellectual freedom in the west” — other researchers lost patience and began to write statistics manuals for working scientists. And because many of the authors were non-statisticians without a thorough understanding of either approach, they created a hybrid system that crammed Fisher's easy-to-calculate P value into Neyman and Pearson's reassuringly rigorous rule-based system. This is when a P value of 0.05 became enshrined as 'statistically significant', for example. “The P value was never meant to be used the way it's used today,” says [Stanford biostatistician Steven] Goodman.

LAGC

(5,330 posts)
2. "Good" science is all about trying to disprove something, falsifying hypotheses.
Tue Feb 18, 2014, 02:14 AM

But it seems more often than not, those funding the grants want to see results, so scientists are encouraged to manufacture evidence that isn't necessarily there. The answer, of course, is to greatly increase government funding in the sciences so that folks can practice real unbiased science without having to gin up results.

But in this T-bagger environment, public investment in the sciences is the first thing to go.
