
Hundreds of extreme self-citing scientists revealed in new database



According to newly released data, the world's most-cited researchers are a strangely eclectic bunch. Nobel laureates and eminent polymaths rub shoulders with lesser-known names such as Sundarapandian Vaidyanathan from Chennai in India. What stands out about Vaidyanathan and hundreds of other researchers is that many of the citations to their work come from their own papers or from those of their co-authors.

Vaidyanathan, a computer scientist at the Vel Tech R&D Institute of Technology, a privately run institute, is an extreme example: a study published in PLoS Biology this month 1 found that, up to 2017, he had received 94% of his citations from himself or his co-authors. He is not alone. The dataset, which lists some 100,000 researchers, shows that at least 250 scientists have amassed more than 50% of their citations from themselves or their co-authors, while the median self-citation rate is 12.7%.

The study could help to flag potential extreme self-promoters, and possibly "citation farms", in which clusters of scientists massively cite each other, researchers say. "I think that self-citation farms are far more common than we believe," says John Ioannidis, a physician at Stanford University in California who specializes in meta-research, the study of how science is done, and who led the work. "Those with more than 25% self-citation are not necessarily engaging in unethical behaviour, but closer scrutiny may be needed," he says.

The data are by far the largest collection of self-citation metrics ever published. And they come at a time when funding agencies, journals and others are grappling with the potential problems caused by excessive self-citation. In July, the Committee on Publication Ethics (COPE), a London-based advisory body for publishers, highlighted extreme self-citation as one of the main forms of citation manipulation. The issue feeds into broader concerns that citation metrics are relied on too heavily in decisions about hiring, promotions and research funding.

"If we combine career advancement and take citation numbers too much into account, we promote self-citation," says psychologist Sanjay Srivastava of the University of Oregon in Eugene.

Although many scientists agree that excessive self-citation is a problem, there is little consensus on how much is too much or what to do about it. This is partly because researchers have many legitimate reasons to cite their own work or that of colleagues. Ioannidis warns against using the study to vilify particular researchers for their self-citation rates, not least because these can vary between disciplines and career stages. "It just offers complete, transparent information. It should not be used for judgements such as deciding that too high a self-citation rate equals a bad scientist," he says.

Data dive

Ioannidis and his co-authors did not publish their data in order to focus on self-citation. It is just one part of their study, which includes a large number of standardized citation-based metrics for the roughly 100,000 most-cited researchers across 176 scientific disciplines over the past two decades. Ioannidis assembled the data together with Richard Klavans and Kevin Boyack of the analytics firm SciTech Strategies in Albuquerque, New Mexico, and Jeroen Baas, director of analytics at the Amsterdam-based publisher Elsevier. The data all come from Elsevier's proprietary Scopus database. The team hopes that the work will make it possible to identify factors that influence citations.

The most eye-catching part of the dataset, however, is the self-citation metrics. The number of times an author has cited their own work can already be seen in subscription databases such as Scopus and Web of Science. But without looking across research fields and career stages, it is difficult to put these numbers in context and compare one researcher with another.

Vaidyanathan's record is among the most extreme, and it has brought rewards. Last year, the Indian politician Prakash Javadekar, who is now the country's environment minister but was responsible for higher education at the time, presented Vaidyanathan with an award of 20,000 rupees (US$280) as one of the country's best researchers, based on a measure of productivity and performance that includes citation metrics. Vaidyanathan did not respond to Nature's request for comment, but he has previously defended his citation record in response to questions about Vel Tech posted on Quora, the online question-and-answer platform. In 2017, he wrote that because research is a continuous process, "the next work cannot be carried out without referring to previous work", and that his self-citations were not made with the intention of misleading others.

Two other highly cited researchers with strong self-citation records are Theodore Simos, a mathematician whose website lists affiliations with King Saud University in Riyadh; Ural Federal University in Yekaterinburg, Russia; and Democritus University of Thrace in Komotini, Greece; and Claudiu Supuran, a medicinal chemist at the University of Florence, Italy, who also lists an affiliation with King Saud University. Both Simos, who has amassed around 76% of his citations from himself or his co-authors, and Supuran (62%) were named last year on a list of 6,000 "world-class researchers selected for their exceptional research performance" produced by Clarivate Analytics, an information-services firm in Philadelphia, Pennsylvania, that owns the Web of Science. Neither Simos nor Supuran responded to Nature's requests for comment. Clarivate said that it was aware of the problem of unusual self-citation patterns and that the methodology behind its list might change.

What to do about self-citation?

In recent years, researchers have paid growing attention to self-citation. A 2016 preprint, for example, suggested that male academics self-cite 56% more on average than female academics 2, although a reanalysis last year suggested that this could be an effect of higher self-citation among prolific authors of either sex, who have more of their own earlier work to cite 3. A 2017 study showed that scientists in Italy began citing themselves more heavily after a controversial policy was introduced in 2010 that required academics to meet productivity thresholds to be eligible for promotion 4. And last year, Indonesia's research ministry said that some researchers had inflated their scores using unethical methods, including excessive self-citation and groups of scientists citing one another. The ministry said it has stopped funding 15 researchers and intends to exclude self-citations from its formula, although researchers tell Nature that this has not yet happened.

Whether to report researchers' self-citation rates, or to score them using metrics corrected for self-citation, is hotly debated. In a paper published last month 5, for example, COPE argued against excluding self-citations from metrics, because doing so "does not allow for a nuanced understanding of when self-citation makes scholarly sense".

In 2017, Justin Flatt, then a biologist at the University of Zurich in Switzerland, called for greater clarity about scientists' self-citation records 6. Flatt, who is now at the University of Helsinki, proposed publishing a self-citation index, or s-index, modelled on the h-index of productivity that many researchers use. An h-index of 20 indicates that a researcher has published 20 papers with at least 20 citations each; similarly, an s-index of 10 would mean that a researcher has published 10 papers that have each received at least 10 self-citations.
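To make those definitions concrete, here is a minimal Python sketch (the function and the sample citation counts are illustrative assumptions, not data from the article): the same construction yields the h-index when fed total citations per paper, and Flatt's proposed s-index when fed self-citations per paper.

```python
# Illustrative sketch (not from the article): the h-index and the
# proposed s-index share one construction, applied to different counts.

def h_like_index(counts):
    """Largest h such that at least h papers each have a count of at least h."""
    h = 0
    for rank, count in enumerate(sorted(counts, reverse=True), start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical researcher: citations and self-citations per paper.
citations_per_paper = [48, 33, 30, 22, 21, 20, 19, 4, 3, 1]
self_citations_per_paper = [12, 11, 10, 10, 9, 2, 1, 0, 0, 0]

print("h-index:", h_like_index(citations_per_paper))       # 7: 7 papers with >= 7 citations
print("s-index:", h_like_index(self_citations_per_paper))  # 5: 5 papers with >= 5 self-citations
```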

Flatt, who has obtained funding to collect data for the s-index, agrees with Ioannidis that the aim of this kind of work should not be to set thresholds for acceptable scores, or to name and shame researchers with high self-citation rates. "It was never about criminalizing self-citation," he says. But as long as academics continue to compete on the h-index, he argues, the s-index would provide useful context.

The context matters

An unusual feature of Ioannidis' study is its broad definition of self-citation, which also includes citations from co-authors. This is intended to help reveal possible cases of citation farming. But it also inflates self-citation rates, notes Marco Seeber, a sociologist at Ghent University in Belgium. In particle physics and astronomy, for example, papers often have hundreds or even thousands of co-authors, which pushes up the average self-citation rate across the whole field.
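As a rough illustration of how that broad definition works, and why large co-author lists inflate it, here is a minimal Python sketch; the data model (papers as sets of author names) and the counting of citing papers rather than individual citations are simplifying assumptions, not the study's actual method.

```python
# Illustrative sketch of the study's broad self-citation definition
# (hypothetical data model, not the paper's actual code): a citation
# counts as a self-citation if the citing paper shares an author with
# the researcher's co-author circle.

def self_citation_rate(author, own_papers, citing_papers):
    """own_papers, citing_papers: lists of sets of author names."""
    circle = {author}
    for authors in own_papers:
        circle |= authors  # the researcher plus everyone they have co-authored with

    self_cites = sum(1 for authors in citing_papers if authors & circle)
    return self_cites / len(citing_papers) if citing_papers else 0.0

own = [{"X", "A"}, {"X", "B"}]                   # X's papers and co-authors
citing = [{"A", "C"}, {"D"}, {"X"}, {"E", "F"}]  # papers that cite X
print(f"{self_citation_rate('X', own, citing):.0%}")  # 50%: 2 of 4 citing papers
```

For a researcher on a thousand-author collaboration, the co-author circle becomes so large that almost any citing paper intersects it, which is the inflation Seeber describes.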

Ioannidis says it is possible to account for some systematic differences by comparing researchers with the averages for their country, career stage and discipline. More generally, he says, the list draws attention to cases that deserve a closer look. And there is another way to spot problems: examining the ratio of the citations a researcher has received to the number of papers providing those citations. Simos, for example, has received 10,458 citations from just 1,029 papers, meaning that, on average, each paper citing his work cites it more than 10 times. Combined with the self-citation metric, Ioannidis says, this ratio is a good indicator of potentially excessive self-promotion.
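The arithmetic behind that flag is straightforward; this small Python sketch (the function name is illustrative) reproduces it using the figures quoted above.

```python
# Illustrative sketch using the Simos figures quoted in the article.

def citations_per_citing_paper(total_citations, citing_papers):
    """Average number of times each citing paper cites the researcher."""
    return total_citations / citing_papers

ratio = citations_per_citing_paper(10_458, 1_029)
print(f"{ratio:.1f} citations per citing paper")  # ~10.2, i.e. more than 10 on average
```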


Source: Jeroen Baas, unpublished analysis of the Scopus database.

In unpublished work, Elsevier's Baas has applied a similar analysis to a much larger dataset of 7 million scientists: all authors listed in Scopus who have published more than five papers. According to Baas, the median self-citation rate in this dataset is 15.5%, but some 7% of authors have self-citation rates above 40%. That proportion is much higher than among the most-cited scientists, because many of the 7 million researchers have only a few citations or are early in their careers. Early-career scientists tend to have higher self-citation rates because their work has not yet had time to attract many citations from others (see 'The youth effect').


Source: Jeroen Baas, unpublished analysis of the Scopus database.

According to Baas, Russia and Ukraine stand out for high average rates of self-citation (see 'Country by country'). His analysis also shows that some fields, such as nuclear and particle physics and astronomy and astrophysics, stand out because of their many papers with large author lists (see 'Physics envy'). However, Baas says he has no plans to release his dataset.


Source: Jeroen Baas, unpublished analysis of the Scopus database.

Not good for science?

The PLoS Biology study identifies some extreme self-citers and suggests ways to look for others. But some researchers are not convinced that the dataset will be useful for identifying them, in part because self-citation metrics vary so much by research discipline and career stage. "Self-citation is much more complex than it seems," says Vincent Larivière, an information scientist at the University of Montreal in Canada.

Srivastava adds that the best way to fight excessive self-citation, and other kinds of citation gaming, is not necessarily to publish ever more detailed standardized tables and composite metrics for comparing researchers. These could have shortcomings of their own, and such an approach could draw scientists even deeper into a world of individual-level, metrics-based assessment, which is precisely the problem driving the gaming in the first place.

"We should ask the editors and reviewers should look for unjustified self-citations," says Srivastava. "And maybe some of those rough metrics are useful to take a closer look. Ultimately, however, the solution must be to refocus the professional evaluation by expert opinion and not to double the measurement data. "Cassidy Sugimoto, an information scientist at Indiana University Bloomington, agrees that more measurement data may not be the answer:" Ranking scientists is not good for science. "

Ioannidis, however, says his work is needed. "People already rely heavily on individual-level metrics anyway. The question is how to make sure that the information is collected as accurately, carefully and systematically as possible," he says. "Citation metrics cannot and should not disappear. We should make the best possible use of them, while fully acknowledging their many limitations."

