
Is science broken? New important report outlines problems in research



Photo: Dan Kitwood (Getty Images)

A new report released this week by the National Academies of Sciences, Engineering, and Medicine weighs in on a contentious debate in the world of science: the idea that scientific research is fundamentally flawed because it relies on published results that often cannot be reproduced or replicated by other scientists, a problem also known as the replication and reproducibility crisis.

The report, a collaboration of more than a dozen experts from universities and the private research world, stops short of calling the situation a crisis. However, it calls for far-reaching improvements in the way scientists do their work, and it also has demands for the scientists and journalists who sometimes over-hype the latest research.

For years, some scientists have raised alarms about the overall quality of published research. The most common problems these scientists highlighted include fraudulent, poorly conducted, or over-hyped studies with embellished results based on small sample sizes; statistical manipulation of a study's results during or after the experiment to obtain a desired outcome; and studies with negative conclusions that are suppressed by their authors or rejected by scientific journals, which can then distort the medical literature on a particular topic, such as the efficacy of a drug.

The most blatant symptom of these problems is that many of the most influential or striking findings in science, especially in psychology, cannot be reproduced (meaning other researchers cannot get the same results when they use the same raw data obtained from the original study) or replicated (meaning other researchers cannot get the same results when they rerun the experiment from scratch).

In some surveys, a majority of scientists have agreed that irreproducibility is a legitimate problem, and initiatives have emerged to re-examine widely accepted, landmark studies. Some prominent researchers, however, have accused these watchdogs of "methodological terrorism," while scientists whose work has been questioned have pushed back and accused their critics of malicious motives. At the other end of the spectrum, researchers like John Ioannidis (an early voice in this debate) have argued that most published results are wrong.

Into this battlefield steps the National Academies, one of the world's leading and most trusted scientific organizations, with its latest report. And it seems to stake out a middle ground between the two camps.

Although the report notes that there are serious systemic gaps in the way scientists carry out and communicate their research, it does not conclude that a genuine "crisis" threatens science, or the public's view of it.

"The emergence of new scientific insights that displace or supersede prior knowledge should not be interpreted as a weakness in science," the report's authors wrote. "Scientific knowledge builds on previous studies and tested theories, and progress is often not linear. Science is in a process of continuous refinement to get closer and closer to the truth."

At the same time, it offers scientists, policymakers, and even the media a way forward, setting out guidelines for better data transparency and rigor in original studies; criteria for when those studies merit a reproduction or replication attempt; and recommendations for how journalists should vet and report on them.

For example, one problem the report addresses is that many studies do not provide the complete data that other researchers would need to reproduce their results. Scientists also misuse statistical tools such as the p-value (with a threshold of normally 0.05 used to decide when a finding counts as statistically significant). A p-value of less than 0.05 means the study's results would be unlikely to occur if the scientists' predicted effect did not exist (i.e., if the null hypothesis were true). In other words, the p-value should help us say whether the results of a study are a fluke or not. It does not, however, directly indicate whether a drug is doing what it should do, for example, nor does it tell us whether a treatment is meaningful and clinically effective in the real world.
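To make that distinction concrete, here is a minimal sketch of what a p-value actually measures, using a simple permutation test on entirely made-up numbers (the "drug" and "placebo" scores below are illustrative, not from any real study):

```python
import random
import statistics

random.seed(0)

# Hypothetical outcome scores for a "drug" group and a "placebo" group.
drug = [5.1, 6.3, 5.8, 7.0, 6.5, 5.9, 6.8, 6.1]
placebo = [4.9, 5.2, 5.5, 4.8, 5.3, 5.0, 5.6, 5.1]

observed = statistics.mean(drug) - statistics.mean(placebo)

# Permutation test: if the null hypothesis were true (the group labels
# don't matter), reshuffling the labels should produce a difference at
# least as large as the observed one fairly often. The p-value estimates
# how often that happens.
pooled = drug + placebo
trials = 10_000
n_extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:8]) - statistics.mean(pooled[8:])
    if abs(diff) >= abs(observed):
        n_extreme += 1

p_value = n_extreme / trials
print(f"observed difference: {observed:.2f}, p-value ~ {p_value:.4f}")
# A small p-value says the data would be surprising under the null
# hypothesis. It says nothing about whether the effect size is large
# enough to be clinically meaningful.
```

Note what the code does and does not tell you: a tiny p-value here flags that the group difference is unlikely to be a fluke, but the same small p-value could accompany an effect far too small to matter to a patient, which is exactly the distinction the report says gets lost.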

That said, the report also notes that the debate has not weakened the American public's trust in science in recent years, despite prominent news articles discussing the "crisis" in psychology and elsewhere. And it turns out that even scientists who have criticized the current state of affairs are not entirely on board with calling it a crisis.

"What is the rate of irreproducibility of research results in science and technology in general? The simple answer is that we do not know," said Brian Nosek, co-founder and director of the Center for Open Science, during a panel last year. "I do not like the term 'crisis' because it implies many things that we do not know are true."

The committee, which called for a streamlined approach to replication and reproducibility studies and for better storage and availability of the datasets behind them, concludes that scientists should not worry too much about whether any single study replicates.

"An overwhelming focus on the reproducibility of individual studies is an inefficient way to ensure the reliability of scientific knowledge," they wrote. "Reviewing cumulative evidence on a topic to assess both overall effect size and generalizability is a more useful way to gain confidence in the state of scientific knowledge."

Of course, it is not just research methodology that could improve. The report also singles out journalists, citing a survey showing that 73 percent of Americans believe that "the biggest problem with news about scientific research is the way news reporters report it."

The report recommends that journalists cover scientific studies with as much context and nuance as the medium allows, "especially if the research is complicated," if it contradicts most similar studies on the same topic, or if the researchers involved have potential conflicts of interest, such as previous or current industry funding.

That probably also means telling readers when a study involved mice rather than humans.
