Two weeks before the start of the 2011 academic season, the latest issue of the Academic Ranking of World Universities (ARWU) was published. The response to this ranking in the Netherlands says a great deal about the importance ascribed to global university rankings. Utrecht University saw its position improve by two places, rising to number 48. Leiden University climbed 15 places and is now second among the Dutch universities, at number 65. All Dutch universities are now listed among the 500 “best universities” in the world. The organization of Dutch universities, the VSNU, was thrilled. This was an “excellent performance”, according to the organization, because “the Shanghai Ranking is in itself already a selection of the five hundred best universities in the world. This means that the Dutch universities belong to the best 3 percent of the total universities in the world (17,000).” In our view, this response shows that the VSNU has not really understood the point of this ranking and the rationales behind its construction.
All measurements are preceded by decisions pertaining to the object(s) and focus of measurement. In this categorization process, certain factors are labeled as relevant and others as less relevant or irrelevant. Decisions are made about the parameters of the categories that will be taken into account, and these decisions fundamentally shape the subsequent measurements. The ARWU ranking is based on the data of 1,000 universities (the other 16,000 are not taken into account), and it strongly favours large universities. Because Nobel Prizes and Fields Medals have a strong impact on the total ranking score, while other prestigious prizes are not taken into account, the ARWU advantages Anglo-Saxon universities and universities focused on the exact and medical sciences. Since its beginning in 2003, the ARWU ranking has been led by US universities, with Harvard as number one. The only non-US universities among the top ten are Oxford and Cambridge.
The way research performance is measured in the Shanghai ranking is also problematic. The number of articles in the journals Nature and Science determines 20 % of the ranking score, whereas prestigious monodisciplinary journals such as Cell or Physica Acta carry far less weight. Influential humanities researchers are almost invisible in the ranking. Just before the summer, the European University Association pointed to the disadvantages of the most popular global university rankings: in effect, they only rank the elite of the international university system. Moreover, composite rankings like the Shanghai Ranking merge different aspects of university performance (research, teaching, valorization, social impact) into a single number. How this composite number is calculated is rather arbitrary and not always transparent. It is therefore unclear to what extent a change in position reflects any change in performance.
For example, it is almost certain that Utrecht University’s small improvement is a fluctuation without any significance. Moreover, even a seemingly robust improvement in a university’s performance can be caused by a single outlier. According to the website Transfer, the three Dutch universities whose positions improved most strongly owed this to three individual researchers. Radboud University went up thanks to Nobel Prize winner Konstantin Novoselov. Eindhoven University of Technology should send flowers to computer scientist Wil van der Aalst, and Maastricht rose thanks to behavioural psychologist Gerjo Kok. The fact that individual researchers can have such a strong influence on a university’s position in this ranking may trigger all sorts of perverse behaviour, such as attempts to lure staff away from competing universities.