Post by Enter Nations on Jan 30, 2018 18:59:12 GMT
SHE’S PROBABLY MOSTLY kidding when she tells the origin story this way, but Kathy Hudson—until last year the deputy director for science, outreach, and policy at the National Institutes of Health—says that a massive update to the NIH’s rules for funding science started with humiliation. A pal who ran approvals at the Food and Drug Administration, Hudson says, “used to walk around and talk about how NIH funded small, crappy trials, and they would say it at big gatherings.” This was Washington, in front of congresspeople—or at conferences full of leading researchers. “I would get so pissed off,” Hudson says.
But then, well, she took it to heart. “I started to look at our trials and what kinds of policies we had, to make sure investments in clinical trials were well spent,” Hudson says. It turned out they were not.
This week, after almost a decade of work, some new rules go into effect for researchers funded by NIH. If they’re using human beings in their experiments, most of them now have to register their methodologies on a government-built website, clinicaltrials.gov. They have to promise to share whatever they find, even if they don’t prove what they hoped—especially if they don’t prove it. They have to get trained up in modern clinical practices.
Philosophically, almost no one disagrees with the intent. Make science more open, more ethical, and smarter. But some researchers think the rule change will bring with it more than just confusing, possibly burdensome new bureaucracy, and maybe even set back all of basic bioscience. They’re just as pissed off as Hudson used to get.
The changes to the rules aren’t small potatoes. The agency awards tens of thousands of grants ($17 billion in 2016); it’s a key source of money for US scientists and a primary driver of new biomedical knowledge. The process for getting one of those grants is competitive, whether you’re doing basic science, preliminary investigations, or giant clinical trials that attempt to figure out whether a new drug or therapy cures a disease. “Clinical trials are super-special, because people are involved and at risk, and it matters,” Hudson says. “So we should make sure they’re really good.”
The new rules expand the definition of clinical trials to include work with human subjects that didn’t use to count as clinical. Yet the NIH’s bureaucratic requirements still ask for information on those experiments that maps onto the old definition, and much of that doesn’t apply to smaller studies. The point is, if a researcher has to figure all this out, they might just give up altogether, and not do the science.
Back in the early 2010s, Hudson and Francis Collins, the director of the NIH, set out to get the clinical trial rules sorted. That meant trials had to be well-designed, with enough statistical power to answer the question they set out to answer, and researchers would have to pre-register those designs to make sure they didn’t try any shenanigans at the end, like changing the thing they said they were trying to measure so their data looks more convincing. “We invest in clinical studies where we tell human beings, ‘your participation in this clinical study may not benefit you, but it will benefit other people because we will learn from your contribution,’” Hudson says. “Too frequently that is an outright, blatant lie. Something like 5 percent of all clinical studies terminate without generating any data.”
So another condition: Share the data, no matter what. “People, academics in particular, have an incentive system that rewards publication and getting grants,” Hudson says. “Posting data on clinicaltrials.gov is not a citable thing that you put on your CV.”
NIH leadership was making an argument based on economics and ethics. “When it is research that involved human volunteers, regardless of whether they’re giving of their time or bodies or they’re engaged in higher-risk late-phase clinical trials, we had an ethical obligation to make sure those results saw the light of day,” says Carrie Wolinetz, Associate Director for Science Policy at the NIH. “Also, if you were to ask us—and Congress did—‘at any given time, NIH, how many clinical trials are you funding,’ we could actually answer those questions.”
As a bonus, the rules for pre-registering methodologies and sharing data also happen to meet the philosophical goals of Open Science, a set of principles designed to deal with science’s ongoing reproducibility crisis. Academic and social pressures—journals tend to only want to publish surprising, positive results (“hypothesis confirmed!”)—lead to bad science.
At least, that was the hypothesis.
In practice, when the research community started to understand what the new rules would mean, lots of people freaked out. They thought using the infrastructure for registering all-out clinical trials, and changing the definition of “clinical trial” to include, it seemed, every experiment with human beings, would mean basic research and simpler behavioral studies just wouldn’t get funding. In late 2017, more than 3,500 researchers signed a petition to the NIH asking that the new rules be delayed and rethought. “We support the goals of transparency and replicability. Unfortunately, the current effort to improve transparency and replicability in basic science does so by mislabeling basic research as a clinical trial,” the petition said.
Their fear was that even something as innocuous as monitoring a research subject’s stress levels would count as an “intervention” in the eyes of an NIH grant review committee. Those kinds of studies are a lot more potent than mere observation, but letters from the Association for Psychological Science and a crossover-event-sized team-up of academic and university associations worried that redefining all human interventional science as clinical would drive lots of researchers to switch to those simpler observational studies.
Source: WIRED