
In psychology and other social sciences, many studies fail the reproducibility test



The world of the social sciences got a rude awakening a few years ago, when researchers concluded that many studies in the field appeared to be deeply flawed: two-thirds could not be replicated in other laboratories.

Some of those researchers now report that these problems are still common, even in the most prestigious scientific journals.

But their study, published Monday in Nature Human Behaviour, also finds that social scientists are remarkably good at spotting which results are dubious.

First, the findings. Brian Nosek, a psychology researcher at the University of Virginia and director of the Center for Open Science, decided to focus on social science studies published in the most prominent journals, Science and Nature.

"Some people have hypothesized that because they are the most famous outlets, they have the highest severity," says Nosek. "Others have hypothesized that the most prestigious outlets are the ones most likely to choose 'very sexy' outcomes and may therefore be less reproducible."

To find out, he worked with scientists around the world to see whether they could reproduce the results of key experiments from 21 studies in Science and Nature, typically psychology experiments with students as subjects. On average, the new studies recruited five times as many volunteers, to produce results that were less likely to be statistical flukes.
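The logic behind those bigger samples is statistical power: with more subjects, a real effect is more likely to clear the significance threshold, so a positive result is less likely to be a fluke. Here is a minimal Python simulation illustrating the idea; the effect size (a standardized mean difference of 0.3) and the group sizes are illustrative assumptions, not numbers taken from the study.

```python
# Toy power simulation: how often does a two-sample t-test detect a
# modest true effect at a small vs. a fivefold-larger sample size?
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
TRUE_EFFECT = 0.3   # assumed standardized mean difference (illustrative)
TRIALS = 5000       # simulated experiments per sample size

def estimated_power(n_per_group: int) -> float:
    """Fraction of simulated experiments reaching p < 0.05."""
    hits = 0
    for _ in range(TRIALS):
        control = rng.normal(0.0, 1.0, n_per_group)
        treated = rng.normal(TRUE_EFFECT, 1.0, n_per_group)
        _, p = stats.ttest_ind(treated, control)
        if p < 0.05:
            hits += 1
    return hits / TRIALS

for n in (30, 150):  # original-sized sample vs. a fivefold replication
    print(f"n = {n:3d} per group -> power ~ {estimated_power(n):.2f}")
```

Under these assumptions the small sample detects the true effect only around a fifth of the time, while the fivefold-larger sample detects it roughly three-quarters of the time, which is why the replication teams recruited so many more volunteers.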

The results were better than the average from a previous review of the psychology literature, but still far from perfect. Of the 21 studies, the experimenters were able to reproduce 13. And the effects they observed were, on average, only about half as strong as had been reported in the original studies.

The remaining eight were not reproduced.

"An essential part of the literature is reproducible," concludes Nosek. "We get evidence that someone can replicate independently [these findings] and there is a surprising number [of studies] that can not be repeated."

One of the eight studies that failed this test came from the lab of Will Gervais, dating from when he was getting his doctorate at the University of British Columbia. He and a colleague had run a series of experiments to see whether people who are more analytical are less likely to hold religious beliefs. In one test, students looked at pictures of statues.

"Half of our attendees saw a picture of the sculpture" The Thinker ", in which this guy is intensively concerned with thoughts," says Gervais. "And in our control state, they would see the famous stature of a man throwing a discus."

People who saw The Thinker, a sculpture by Auguste Rodin, expressed more religious disbelief, Gervais reported in Science. And he says that, based on all the evidence from his lab and others, there is still reasonable support for the underlying conclusion. But he realizes that the statue experiment itself was quite weak.

"Our studies were downright silly in retrospect," says Gervais, who is now assistant professor at the University of Kentucky.

An earlier study had also failed to reproduce his experimental findings, so the new analysis came as no surprise.

But what interests him most about the new reproducibility project is that scientists had predicted that his study, along with the seven others that failed to replicate, was unlikely to hold up.

As part of the reproducibility study, about 200 social scientists were asked to judge which results would withstand retesting and which would not. The scientists filled out a survey predicting the winners and losers. They also took part in a "prediction market," in which they could buy or sell tokens that reflected their views.

"They take bets against each other, against us," said Anna Dreber, economics professor at the Stockholm School of Economics and co-author of the new study.

It turns out, "these researchers were very good at predicting which studies would replicate," she says. "I think that's good news for science."
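For readers curious how such a market aggregates opinions mechanically, below is a toy Python sketch using the logarithmic market scoring rule (LMSR), a standard mechanism for prediction markets. The article does not describe the study's actual trading platform; the liquidity setting and the trades below are invented for illustration.

```python
# Toy binary prediction market using the logarithmic market scoring rule.
# One "yes" token pays out 1 if the study replicates, 0 otherwise.
import math

class LMSRMarket:
    def __init__(self, liquidity: float = 10.0):
        self.b = liquidity   # higher b = prices move less per trade
        self.q_yes = 0.0     # outstanding "replicates" tokens
        self.q_no = 0.0      # outstanding "fails to replicate" tokens

    def _cost(self, q_yes: float, q_no: float) -> float:
        # LMSR cost function: C(q) = b * ln(e^(q_yes/b) + e^(q_no/b))
        return self.b * math.log(math.exp(q_yes / self.b) + math.exp(q_no / self.b))

    def price_yes(self) -> float:
        """Current 'yes' price = the market's implied replication probability."""
        e_yes = math.exp(self.q_yes / self.b)
        e_no = math.exp(self.q_no / self.b)
        return e_yes / (e_yes + e_no)

    def buy(self, outcome: str, amount: float) -> float:
        """Buy tokens on one side; returns what the trade costs the trader."""
        before = self._cost(self.q_yes, self.q_no)
        if outcome == "yes":
            self.q_yes += amount
        else:
            self.q_no += amount
        return self._cost(self.q_yes, self.q_no) - before

market = LMSRMarket()
print(f"opening probability: {market.price_yes():.2f}")   # 0.50
market.buy("no", 8)    # a skeptic bets the finding will not replicate
market.buy("yes", 3)   # an optimist pushes back, less strongly
print(f"implied replication probability: {market.price_yes():.2f}")
```

In this sketch, the skeptic's larger bet pushes the implied probability of replication well below 50 percent. An aggregate price of that kind is the signal the researchers compared against the actual replication outcomes.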

These predictions could help accelerate the scientific process. If pools of experts can weigh in on exciting new results, the field could spend less time chasing down erroneous findings, known as false positives.

"A false positive may result in other researchers and the original researcher spending a lot of time, energy and money on results that do not last," she says. "And that's a waste of resources and inefficient, so the sooner we find out that a result does not last, the better."

But if social scientists are really so good at identifying flawed studies, why did the editors and peer reviewers at Science and Nature let these eight questionable studies through their review process?

"The likelihood that a result will be replicated or not is part of what an examiner would consider," says Nosek. "But other things could affect the decision to publish, it may be that this realization is probably not true, but if it's true, it's very important, so we want to release it because we want to get it."

Nosek acknowledges that although the new studies were more rigorous than the ones they tried to replicate, that does not guarantee that the old studies are wrong and the new ones are right. No single scientific study gives a definitive answer.

Forecasts could be a powerful tool to accelerate this quest for truth.

However, this may not work in an arena where the stakes are far higher: medical research, where the answers can be a matter of life and death.

Jonathan Kimmelman of McGill University, who was not involved in the new study, says that when he has asked medical researchers to make predictions about studies, the predictions have generally fallen flat.

"This is probably not a skill that is widely used in medicine," he says. It is possible that the social scientists selected to make the predictions in the latest study have in-depth knowledge in the analysis of data and statistics and that their knowledge of the psychological content is less important.

And predictions are just one tool for improving the rigor of the social sciences.

"Social and behavioral sciences are in the midst of a reformation," says Nosek. Scientists are taking steps to increase transparency, so potential issues are emerging quickly. Scientists increasingly announce in advance the hypothesis they are testing; They provide their data and computer code so that their colleagues can rate and review their results.

Perhaps most important, some scientists recognize that it's better to run fewer studies, but with more experimental subjects, to reduce the odds of getting a result by chance.

"The way to get ahead and get a job is to publish many and many articles," says Gervais. "And it's hard to do that if you can do fewer studies, but in the end, I think it's the right way – to slow down our science and be stricter in the run-up."

Gervais says that when he started his first professorship at the University of Kentucky, he sat down with his department chair and said he would follow this path of publishing fewer but higher-quality studies. He says he got the nod. He sees it as part of a broader cultural shift in the social sciences aimed at making the field more robust.

You can reach Richard Harris at rharris@npr.org.

Copyright 2018 NPR. To see more, visit http://www.npr.org/.

