International networks start to drive research

Networks of collaborating scientists spanning the globe are increasingly shaping the research landscape. The share of papers co-authored by researchers from different countries is growing steadily: more than one third of all papers are now based on an international collaboration, up from one quarter fifteen years ago. On top of this, these internationally co-authored papers have a higher citation impact. Each additional foreign partner increases a paper's potential to be cited, up to a tipping point of approximately ten countries.

The dynamics of these international networks, together with sustained investments in scientific research by a growing number of countries, are producing a much more multipolar world. Not surprisingly, China is rising fast. Ranked on the number of scientific papers produced, China is now number 2 with a share of 10 % of international scientific production, and it is expected to become number 1 within a few decades. Brazil and India are also emerging as powerful players on the international scene. But the rise of new scientific centres is not restricted to the BRICS countries. In the Middle East, both Turkey and Iran are investing strongly, with an enormous growth in authors and papers as a result. While Iran published a little more than 700 papers in 1993, by 2008 this had already grown to more than 13 thousand. Turkey published four times as many papers in 2008 as in 1996, and its number of researchers has grown by 43 %.

Still, the current heavyweights dominate the rankings based on citation numbers. Despite a decreasing share of total publications (down from 26 to 21 %), the United States still attracts by far the most citations: more than 30 % of all publications cite work originating in the United States. Chinese papers have significantly less impact: with 10 % of the papers, China collects only 3 % of the citations.

These are some of the highlights of the recent report of the Royal Society (UK), “Knowledge, Networks and Nations: Global scientific collaboration in the 21st century”. The report is based on an analysis of all papers in the Scopus database (Elsevier) published between 2004 and 2008, compared with the production between 1993 and 2003. It combines these findings with five case studies of prominent international research initiatives in health research, physics, and climate research. I think this report is a goldmine of interesting facts and sometimes surprising developments, and a must-read for all science policy actors.

For European Science Policy makers, the report should moreover give pause for reflection. The fast rise of international networks is particularly relevant for Europe because of the rise of anti-immigration parties, which currently have a big impact on policy in general and therefore also on Science Policy. The share of internationally co-authored papers in the European countries is rising, which means that researchers in Europe need to be supported in building more international collaborations. This simply cannot be combined with an anti-immigration policy focused on blocking the international exchange of scientific personnel. In Europe, very much unlike Asia, the general political climate therefore seems out of step with developments in the world of science and scholarship. A creative Science Policy requires an open attitude, eager for the international exchange of ideas and people, not least with colleagues in Turkey and Iran. And Turkey should become a member of the European Union as soon as possible.

The report also shows nicely that internationalization is not a simple process. Overall, the number of internationally co-authored papers is on the rise. In the current scientific centres, this goes together with an increase in the share of international papers in total national scientific production. But in China and Brazil, the share of international papers is decreasing while the absolute number of internationally co-authored papers is rising; Turkey and Iran show comparable, though less pronounced, trends. The explanation is that in these countries national research capacity is building up faster than international collaboration is growing.
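
A minimal numerical sketch shows how both trends can hold at the same time. The figures below are made up purely to illustrate the mechanism and are not taken from the report:

```python
# Made-up figures, purely to illustrate the mechanism described above:
# international output doubles, but total national output triples.
intl_early, total_early = 1_000, 5_000    # internationally co-authored vs. all papers
intl_late, total_late = 2_000, 15_000

print(f"International papers: {intl_early} -> {intl_late} (absolute number rises)")
print(f"International share:  {intl_early / total_early:.0%} -> "
      f"{intl_late / total_late:.0%} (share falls)")
```

The absolute number of international papers doubles, yet their share drops from 20 % to about 13 %, simply because domestic output grows even faster.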

Does ranking drive reputation?

The recent Times Higher Education Reputation Ranking also raises a number of more fundamental questions about the production of reputation. If we compare the reputation ranking with the overall THE World University Rankings, it is striking that the reputation ranking is much more skewed. The top 6 universities eat almost the whole reputation pie. University number 50 (Osaka) has only 6 % of the "amount of reputation" of number 1 (Harvard), whereas number 50 in the overall THE ranking (Vanderbilt University) still has 69 % of the rating of number 1 (again Harvard). The reputation ranking is based on a survey (of which the validity is unclear), but how do the respondents determine the reputation of universities of which they have no direct knowledge (for example because they do not work there)?
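
To make the difference in skew concrete, the ratio between the score of the number-50 university and the number-1 university can be read off directly. The scores below are illustrative values chosen only to match the percentages quoted above; they are not the exact THE figures:

```python
# Illustrative scores, chosen only to reproduce the percentages quoted above.
reputation_rank1, reputation_rank50 = 100.0, 6.0   # Harvard vs. Osaka
overall_rank1, overall_rank50 = 96.0, 66.0         # Harvard vs. Vanderbilt

print(f"Reputation ranking, rank 50 vs. rank 1: {reputation_rank50 / reputation_rank1:.0%}")
print(f"Overall THE ranking, rank 50 vs. rank 1: {overall_rank50 / overall_rank1:.0%}")
# -> roughly 6 % against 69 %: the reputation scores fall off far more steeply
```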

A recent issue of The New Yorker has an interesting analysis by Malcolm Gladwell of the ranking of American colleges (The order of things: What college rankings really tell us, The New Yorker, February 14 & 21, 2011, pp. 68-75). His topic is another ranking, perhaps even more famous than the THE ranking: the Best Colleges Guide published by U.S. News & World Report. This guide is also based on a survey in which university teachers are asked to rank American colleges. When a university president is asked to assess the performance of a college, "he relies on the only source of detailed information at his disposal that assesses the relative merits of dozens of institutions he knows nothing about: U.S. News." According to Michael Bastedo, an educational sociologist at the University of Michigan, "rankings drive reputation". Gladwell therefore concludes that the U.S. News ratings are "a self-fulfilling prophecy".

The extremely skewed distribution of reputation is in itself an indication that this might also be true for the THE ranking. Performance ratings are usually skewed because of network and scaling effects. A big research institute can mobilize more resources to produce top-quality research, will therefore attract more external funding, and so on: this sustains a positive feedback loop. But if the resulting rankings also strongly influence the data that feed into the next ranking, the skewness of the rankings becomes even stronger.
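
A toy model can illustrate how such a loop sharpens an initially mild skew. This is my own sketch, not a model from the report or from Gladwell; it assumes that a university's visibility among survey respondents grows faster than linearly with its previous reputation score, because highly ranked places are mentioned and recalled more often:

```python
# Toy model of the ranking-feeds-reputation loop (assumption: visibility
# grows superlinearly with the previous score, exponent alpha > 1).

def next_survey_round(scores, alpha=1.3):
    """New reputation scores, proportional to (old score) ** alpha, rescaled to 100."""
    visibility = [s ** alpha for s in scores]
    total = sum(visibility)
    return [100 * v / total for v in visibility]

scores = [30.0, 25.0, 20.0, 15.0, 10.0]   # mildly skewed starting distribution
print("start:          ", scores)
for _ in range(5):
    scores = next_survey_round(scores)
print("after 5 rounds: ", [round(s, 1) for s in scores])
# The top university's slice of the "reputation pie" grows every round while
# the tail shrinks: a mild initial skew becomes an extreme one.
```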

This would mean that the THE Reputation Ranking not only shows that, in the perception of the respondents, a few American universities plus Oxford dominate the world; it also indicates that these respondents use the THE ranking, and comparable rankings, to fill in the forms that subsequently determine the next ranking.

Thus, this type of ranking creates its own reality and its own truth.
