Worldwide diversification of research continues

Last Wednesday, we published the new edition of the Leiden Ranking. The results are quite interesting. The range of countries with universities that score high on the proportion of highly cited publications is increasing. Thirteen countries are now represented in the top hundred of the world: the US (57 universities), UK (16), Switzerland and the Netherlands (each 6), China (4), Singapore, Canada and Germany (each 2), and Israel, Denmark, Ireland, South Korea and Australia (each with 1 university).

Clearly, the US is still dominating. The first 12 universities are all based in the US. Like last year, MIT leads the ranking, with no less than one quarter of its publications among the 10% most cited in their field (a calculation that also takes the publication year into account). The largest research university in the world, Harvard, is number five, with an impressive one-fifth of its papers published between 2008 and 2011 scoring among the 10% most cited in their field. Note that when the option “fractional counting” is selected, a paper is attributed in equal fractions to all universities mentioned in the author addresses. This prevents double counting, but it does not reflect the total number of papers originating from a university. For example, Harvard has produced almost 57,000 papers, but many of them jointly with other universities, which results in a “fractionalized” count of almost 30,000 papers, of which one-fifth scores in the 10% most cited segment.
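To make the arithmetic concrete, here is a minimal sketch of how fractional counting and the top-10% indicator can be computed. This is not the actual Leiden Ranking code: the papers below are invented, and the field- and year-specific citation thresholds are assumed to have been determined already.

```python
# Minimal sketch of fractional counting and the top-10% indicator,
# using made-up example data (not Leiden Ranking code or data).

from collections import defaultdict

# Each hypothetical paper lists the universities in its author addresses
# and whether it belongs to the 10% most cited of its field and year.
papers = [
    {"universities": ["Harvard", "MIT"], "top10": True},
    {"universities": ["Harvard"], "top10": False},
    {"universities": ["Leiden", "Harvard", "Utrecht"], "top10": True},
]

frac_total = defaultdict(float)   # fractionalized number of papers per university
frac_top10 = defaultdict(float)   # fractionalized number of top-10% papers

for paper in papers:
    # With fractional counting, each listed university receives an equal
    # fraction of the paper, so a paper is never counted more than once in total.
    fraction = 1.0 / len(paper["universities"])
    for uni in paper["universities"]:
        frac_total[uni] += fraction
        if paper["top10"]:
            frac_top10[uni] += fraction

for uni in frac_total:
    share = frac_top10[uni] / frac_total[uni]
    print(f"{uni}: {frac_total[uni]:.2f} fractional papers, "
          f"{share:.0%} in the top-10% segment")
```

Because each paper contributes exactly 1.0 in total, divided over its contributing universities, heavily collaborating institutions end up with far fewer fractionalized papers than full papers; this is why Harvard's nearly 57,000 papers shrink to roughly 30,000 in the fractional count.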

China is steadily increasing the impact of its research. Whereas in the recent past China rose quickly in terms of the production of scientific papers but not so much in terms of scientific influence, we now see that research from Chinese universities is gaining citations. Two Chinese universities, Nankai and Hunan, even score higher on the highly cited indicator than the highest-ranking Dutch universities (Leiden University and Utrecht University): almost 14.5% of their publications belong to the top 10% most cited in their field. The diversification is also visible outside the top 100 universities. For example, the Leiden Ranking 2013 includes 37 Chinese universities (6 of which are newcomers), 5 Iranian universities (all of them new), and 10 Brazilian universities (2 newcomers). This trend is the result of three effects. First, many universities are increasing their share of global scientific production. Second, the total number of scientific papers is rising, which results in a steady increase in the size of the Web of Science database on which the Leiden Ranking is based. Third, we have become better at correctly identifying universities in the address fields of scientific publications. We suspect, for example, that this contributes to the rise of Iran in the Leiden Ranking.

Of course, the ranking also shows areas in which the citation impact is lower than expected. What struck me is that the Japanese universities (including the prestigious Tokyo University) all score lower than the world average. The same holds for all universities from some of the newcomers, such as Iran, and also, somewhat more surprisingly, for those from Norway, Brazil, Poland, Italy, Greece, Portugal, Russia, Turkey, and Taiwan.

Fraud in Flemish science

Almost half of Flemish medical researchers have witnessed a form of scientific fraud in their direct environment. One in twelve has engaged in data fraud or in “massaging” data in order to make the results fit the hypothesis. Many mention “publication pressure” as an important cause of this behaviour. This is the outcome of the first public survey among Flemish medical researchers about scientific fraud. The survey was conducted in November and December 2012 by the journal Eos. Joeri Tijdink, who had conducted a similar survey among medical professors in the Netherlands, supervised the Flemish survey.

It is not clear to what extent the survey results are representative of the conduct of all medical researchers in Flanders. The survey was distributed through the deans of the medical faculties in the form of an anonymous questionnaire. The response rate was fairly low: 19% of the 2,548 researchers responded, and 315 (12%) filled it in completely. Yet the results indicate that fraud may be a much more serious problem than is usually acknowledged in the Flemish scientific system. Since the installation of the Flemish university committees on scientific integrity, no more than 4 cases of scientific misconduct have been recognized (3 involved plagiarism; 1 researcher committed fraud). This is clearly lower than expected. The survey, however, consistently reports a higher incidence of scientific misconduct than comparable international surveys do. For example, 14% of researchers report having witnessed misconduct according to a meta-study by Daniele Fanelli, but in Flanders this is 47%. Internationally, 2% of researchers admit to having been involved in data massage or fraud themselves, whereas in Flanders this is 8%. The discrepancy can be explained in two ways. One is that the university committees are not yet effective in bringing the truth to light. The other is that this survey is biased towards researchers who have witnessed misconduct in some way. Given that both explanations seem plausible, the gap between the survey results and the formal record of misconduct in Flanders may best be explained by a combination of both mechanisms. After all, it is hard to understand why Flemish medical researchers would be more (or less) prone to misconduct than medical researchers in, say, the Netherlands, the UK, or France.

According to Eos, publication pressure is one of the causes of misconduct. This remains to be proven. However, both in the earlier survey by Tijdink and Smulders and in this survey, a large number of researchers mention “publication pressure” as a driving factor. As has been argued in the Dutch debate about the fraud by psychologist Diederik Stapel, mentioning “publication pressure” as a cause may be motivated by a desire for legitimation. After all, all researchers are pressured to publish on a regular basis, while only a small minority is involved in misconduct (as far as we know now). So the response may be part of a justification discourse rather than a causal analysis. My own intuition is that the problem is not publication pressure but reputation pressure, a subtle but important difference. Nevertheless, if a large minority of researchers (47% of the Flemish respondents, for example) points to “publication pressure” as a cause of misconduct, we may have a serious problem in the scientific system, whether or not these researchers are right. A problem that can no longer be ignored.

Literature:

Fanelli D (2009) How Many Scientists Fabricate and Falsify Research? A Systematic Review and Meta-Analysis of Survey Data. PLoS ONE 4(5): e5738. doi:10.1371/journal.pone.0005738

Tijdink JK, Vergouwen ACM, Smulders YM (2012) Ned Tijdschr Geneeskd 156:A5715.