The Times Higher Education World University Rankings is now in its seventh year. Every year it attracts a great deal of attention, and hardly any university can afford to ignore it. But what does it actually represent?
This year the ranking has undergone a complete overhaul: it is now based on a thoroughly revised methodology and a new form of data collection. According to its makers, it should be seen as the start of a new annual series; in other words, it can no longer be compared in a meaningful way to the THES rankings of earlier years. The makers are confident that their rankings “represent the most accurate picture of global higher education we have ever produced”. According to the THES website, it took 10 months of hard work to produce this new ranking.
The quantitative data on scientific output are based on Thomson Reuters’ Web of Science. The resulting ranking combines these quantitative data with additional data and qualitative expert judgment, with input from more than 50 “leading figures” in 15 different countries across all continents. This has resulted in no fewer than 13 different performance indicators. They represent activity, output and impact in five main areas: teaching, research, citations, industry income, and internationalization. The first three each determine roughly one-third of the ranking position, whereas internationalization and acquired industry income carry weights of 5% and 2.5%, respectively.
This weighting is rather arbitrary; there is no hard and fast rule for it. And indeed, the THES offers personalization in different applications, such as an iPhone app, which enables users to play with the presented data on the 400 largest universities in the world and create their own methodologies. Of course, this has its limitations, since the underlying dataset is itself not available.
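To make the weighting concrete, here is a minimal Python sketch of the kind of composite score such an app might compute. The weights roughly follow the proportions described above, but the exact split and the university data are invented for illustration.

```python
# A minimal sketch of a weighted composite ranking score. The weights
# roughly follow the proportions described in the text (teaching,
# research and citations at about one-third each, internationalization
# at 5%, industry income at 2.5%); the university data are invented.

WEIGHTS = {
    "teaching": 0.30,
    "research": 0.30,
    "citations": 0.325,
    "internationalization": 0.05,
    "industry_income": 0.025,
}

# Hypothetical indicator scores on a 0-100 scale.
universities = {
    "University A": {"teaching": 72, "research": 80, "citations": 90,
                     "internationalization": 60, "industry_income": 40},
    "University B": {"teaching": 85, "research": 70, "citations": 75,
                     "internationalization": 90, "industry_income": 65},
}

def composite_score(indicators):
    """Weighted sum of the five indicator scores."""
    return sum(WEIGHTS[name] * value for name, value in indicators.items())

# Rank by composite score, highest first.
for rank, name in enumerate(
        sorted(universities, key=lambda u: composite_score(universities[u]),
               reverse=True), start=1):
    print(rank, name, round(composite_score(universities[name]), 1))
```

Changing a single entry in `WEIGHTS` immediately reshuffles the list, which is exactly the point of the “create your own methodology” feature, and also why the choice of weights deserves scrutiny.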
The THES ranking is a good example of an attempt to represent all relevant dimensions of the performance and impact of universities in a complex set of indicators. Because many of these aspects cannot easily be captured in simple statistics, the judgments of experts have had a strong influence on the results. For example, the teaching score is mostly based on the results of a survey about perceived teaching reputation among 15,000 respondents. Inevitably, experts in different countries will have varying perspectives on teaching and will assign different scores accordingly. It was unclear to me, while reading the THES website, whether the survey tried to check for this type of inconsistency among respondents. In any case, the ranking is to no small extent a ranking of perceived reputation and impact.
Next to the academic survey, citation data play an important role. These have been delivered by Thomson Reuters, the partner of the THES ranking. A strong point of the newly devised THES ranking is the so-called “normalization” of the citation data. Fields differ quite strongly in terms of referencing behaviour. In the medical and life sciences, for example, researchers cite quite a lot of other papers in their publications. As a result, the “citation density” in these fields is relatively high. Other fields, such as mathematics, have much shorter reference lists and therefore a much lower citation density. So if one wants to compare mathematicians with medical researchers, and this is necessary if one wants to rank overall university performance, one needs to correct for this type of difference. In the past, rankings often did not do so, so this is certainly a step forward.

It does not solve all problems, however. For example, academic output in the humanities and social sciences is covered only very incompletely in the Web of Science database. The same holds for local scientific journals and many applied sciences. Moreover, correcting for field differences is not an easy job. Fields are not clearly defined, and this type of correction is very sensitive to exactly how one draws the boundaries between fields. This is a vexing issue for all rankers. The THES ranking has used the subject categories that are assigned to journals in the Web of Science. These subject categories are in many respects problematic as markers of fields. This is not such a big problem if one uses the Web of Science to locate interesting articles. As soon as one starts to base evaluations on them, however, extra care is needed. For this reason, finding better ways to delineate research fields is an important topic in scientometric research.
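The idea behind normalization can be sketched in a few lines of Python. I am assuming here the common scientometric approach of dividing a paper’s citation count by the world average for its field; whether THES and Thomson Reuters implement it exactly this way is not spelled out, and all numbers below are invented.

```python
# A sketch of field-normalized citation impact, assuming the common
# approach of dividing each paper's citations by the world average for
# its field. Field labels stand in for Web of Science subject
# categories; all numbers are invented.

# (field, citation count) for one university's papers.
papers = [
    ("clinical medicine", 45),
    ("clinical medicine", 30),
    ("mathematics", 4),
    ("mathematics", 7),
]

# Hypothetical world-average citations per paper, per field: citation
# density is far higher in medicine than in mathematics.
world_average = {"clinical medicine": 25.0, "mathematics": 3.0}

def normalized_impact(papers, world_average):
    """Mean ratio of citations to the field average.
    A value of 1.0 means 'cited exactly at world average'."""
    ratios = [cites / world_average[field] for field, cites in papers]
    return sum(ratios) / len(ratios)

print(round(normalized_impact(papers, world_average), 2))  # ~1.67
```

Note how sensitive the result is to the `world_average` table: reassign a paper to a neighbouring category with a different average and its normalized score changes, which is precisely the boundary problem described above.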
It is also interesting that the weight of the indicators has been influenced both by the availability of the data and by the opinion of experts about the validity of an indicator. For example, the ratio between students and staff is seen as a proxy measure for the intensity and quality of teaching: the lower this ratio, the more staff are available per student, and hence the higher the teaching quality may be. Yet this indicator does not influence the ranking strongly, because the THES consultation “expressed some concerns” about it. In other cases, indicators were assigned a relatively low weight because the universities did not provide enough data.
This points to an often overlooked problem. The more precisely we want to measure the activities and impact of academics, the more registration of their output and working processes is required. However, universities are still based on the model of a relatively autonomous academic workforce. Doing scientific research and teaching at an academic level requires skills that often escape formal description and standardization. This makes it extremely difficult to measure the relevant dimensions without turning the universities into a Kafkaesque nightmare. Universities are therefore confronted with the challenge of finding the right balance between measurement and regulation on the one hand, and providing enough freedom to enable fundamental and groundbreaking research (which is by definition an adventure into the unknown) on the other.
The THES ranking is only one instance of an attempt to solve this type of dilemma in the management and information provision of academia. Because it aims to represent all relevant aspects of the universities’ performance and standing, it becomes very complex. The ranking cannot be more than a partial representation of global academia. On top of this, the presentation of the results in a one-dimensional list inevitably gives the impression that it makes a big difference where exactly one ends up. However, with the obvious exception of the few huge universities at the top (such as the academic giant Harvard University), many universities are actually very close to each other. A difference of 100 places in the list does not necessarily mean that performance really differs. For most universities, this type of ranking is therefore actually not very meaningful. If one wants to know where one’s institution stands in terms of relative performance, one needs a serious benchmark study which also takes the particular mission of the university into account.
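A toy calculation, with invented numbers, illustrates how little a 100-place gap may mean when composite scores cluster closely:

```python
# Toy illustration (invented numbers): when composite scores bunch
# together, 100 places in the ranking can correspond to a tiny score gap.
import random

random.seed(0)
# 400 hypothetical universities with scores clustered around 50 (0-100 scale).
scores = sorted((random.gauss(50, 3) for _ in range(400)), reverse=True)

rank_a, rank_b = 150, 250  # 100 places apart in the list
gap = scores[rank_a - 1] - scores[rank_b - 1]
print(f"score gap across 100 ranks: {gap:.2f} points")  # typically ~2 points
```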
So, if your university did not score so high in the THES ranking, do not lose too much sleep over it. It may mean much less than it seems.