New York Times: “Questionable science behind academic rankings”

Link: http://www.nytimes.com/2010/11/15/education/15iht-educLede15.html?scp=1&sq=times%20higher%20education%20ranking&st=cse

The problematic nature of one-dimensional rankings of university performance has been underlined by a recent story in the New York Times. The story focuses on the University of Alexandria, which was ranked 147th in the world in the recent Times Higher Education ranking. In other words, the university is positioned as a strong player in the second tier of globally relevant universities. According to the New York Times, Ann Mroz, editor of the THES, even congratulated the Egyptian university on making it “into this table of truly world class”.

However, the reason for this high position is the performance of exactly one (1) academic: Mohamed El Naschie, who published 323 articles in the Elsevier journal Chaos, Solitons and Fractals, of which he is the founding editor. His articles frequently cite other articles by the same author in the same journal. On many indicators, the University of Alexandria does not score very high, but on the citations indicator it scores 99.8, which makes it the 4th most highly cited university in the world. This result clearly does not make any sense. Apparently, the methodology used by the THES is problematic not only because it puts a high weight on surveys and perceived reputation. It is also problematic because the way the THES counts citations and makes them comparable across fields (in technical terms, the way these counts are normalized) is unable to filter out this form of self-promotion through self-citation. In other words: the way the THES uses citation analysis does not meet one of the requirements of sound indicators: robustness against simple forms of manipulation.
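
To illustrate the kind of robustness check that appears to be missing, here is a minimal sketch (in Python, with invented data and field names, not the actual Thomson Reuters/THES pipeline) of how journal and author self-citations could be filtered out before citation counts feed into an indicator.

```python
# Minimal sketch: filter out journal and author self-citations before counting.
# The data structures are illustrative only.

def non_self_citations(paper, citing_papers):
    """Count citations to `paper`, ignoring citations from the same authors
    or from the same journal as the cited paper."""
    count = 0
    for citing in citing_papers:
        same_authors = bool(set(citing["authors"]) & set(paper["authors"]))
        same_journal = citing["journal"] == paper["journal"]
        if not (same_authors or same_journal):
            count += 1
    return count

# Hypothetical example: a paper cited ten times, eight of those by the same
# author publishing in the same journal.
paper = {"authors": ["El Naschie"], "journal": "Chaos, Solitons and Fractals"}
citations = (
    [{"authors": ["El Naschie"], "journal": "Chaos, Solitons and Fractals"}] * 8
    + [{"authors": ["Someone Else"], "journal": "Another Journal"}] * 2
)
print(non_self_citations(paper, citations))  # 2, not 10
```

Even a crude filter of this kind would have flagged the Alexandria case long before publication of the ranking.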

Even within the narrow terms of the technical expertise of bibliometrics, this is a fatal flaw. So far, the THES website has not (yet?) mentioned the problem, and if you click on the University of Alexandria in the ranking, nothing warns you that its position is the result of one extremely “productive” scholar. The THES was quite proud of having outsourced its citation analysis to Thomson Reuters; they thought they had acquired world-class methods. Clearly, they were wrong. I do not want to be smug about this, but as we know quite well at CWTS, and sometimes have had to learn the hard way in the past, a company that is able to provide large-scale data is not necessarily good at analysing those data in the most sophisticated way possible. The THES should surely reconsider its citation analysis methodology.


The Times Higher Education Supplement University Ranking

The ranking of universities published as the Times Higher Education World University Rankings is now in its seventh year. Every year it attracts a great deal of attention, and hardly any university can afford to ignore it. But what does it actually represent?

This year, the ranking has undergone a complete overhaul: it is now based on a thoroughly changed methodology and data collection. According to the makers, it should be seen as the start of a new annual series. In other words, it can no longer be compared in a meaningful way to the THES rankings of earlier years. The makers are confident that their rankings “represent the most accurate picture of global higher education we have ever produced”. According to the THES website, it has taken 10 months of hard work to produce this new ranking.

The quantitative data on scientific output are based on Thomson Reuters’ Web of Science. The resulting ranking combines these quantitative data with additional data and qualitative expert judgement, with input from more than 50 “leading figures” in 15 different countries across all continents. This has resulted in no fewer than 13 different performance indicators. They represent activity, output and impact in five main areas: teaching, research, citations, industry income, and internationalization. The first three each determine roughly a third of the ranking position, whereas internationalization and acquired industry income have a weight of 5% and 2.5% respectively.
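
As a rough illustration of how such a weighted combination works, the sketch below merges per-area scores (on a 0–100 scale) into one overall score. The weights follow the approximate proportions just mentioned; the exact split and the example scores are assumptions for illustration, not the actual THES figures.

```python
# Illustrative weighted combination of area scores into an overall ranking
# score. Weights: teaching, research and citations roughly a third each,
# internationalization 5%, industry income 2.5%. Scores are invented.

WEIGHTS = {
    "teaching": 0.30,
    "research": 0.30,
    "citations": 0.325,
    "internationalization": 0.05,
    "industry_income": 0.025,
}

def overall_score(scores):
    """Weighted sum of area scores (each on a 0-100 scale)."""
    return sum(WEIGHTS[area] * scores[area] for area in WEIGHTS)

example = {
    "teaching": 40.0,
    "research": 35.0,
    "citations": 99.8,  # one extreme indicator...
    "internationalization": 50.0,
    "industry_income": 30.0,
}
print(round(overall_score(example), 1))  # ...lifts the overall score considerably
```

With citations carrying roughly a third of the weight, a single inflated citation score can move a university a long way up the list.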

This weighting is rather arbitrary; there is no hard and fast rule for it. And indeed, the THES offers personalization in various applications, such as an iPhone app, which enables users to play with the presented data on the 400 largest universities in the world and create their own methodologies. Of course, this has its limitations, since the underlying dataset itself is not available.

The THES ranking is a good example of an attempt to represent all relevant dimensions of the performance and impact of universities in a complex set of indicators. Because many of these aspects cannot be captured easily in simple statistics, the judgements of experts have had a strong influence on the results. For example, the teaching score is mostly based on a survey of perceived teaching reputation among 15 thousand respondents. Inevitably, experts in different countries will have varying perspectives on teaching and will therefore assign different scores. It was unclear to me, while reading the THES website, whether the survey tried to check for this type of inconsistency among respondents. In any case, the ranking is to no small extent a ranking of perceived reputation and impact.
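
One simple way to probe such inconsistencies, sketched below under the assumption that raw per-respondent scores were available (which the THES has not published), is to compare score distributions per respondent country and standardize within countries before aggregating.

```python
# Illustrative check for country-level differences in survey scoring
# behaviour: compare each country's mean, and z-score responses within
# country before aggregating. The data are invented.
from statistics import mean, pstdev

responses = {
    # country -> reputation scores given by respondents in that country
    "Country A": [90, 85, 95, 88],  # generous scorers
    "Country B": [55, 60, 50, 58],  # much stricter scorers
}

def standardized(scores):
    """Z-score a list of survey scores (0.0 if there is no spread)."""
    m, s = mean(scores), pstdev(scores)
    return [(x - m) / s for x in scores] if s else [0.0 for _ in scores]

for country, scores in responses.items():
    print(country, round(mean(scores), 1),
          [round(z, 2) for z in standardized(scores)])
```

Without some correction of this kind, a university rated mainly by generous scorers looks better than one rated mainly by strict ones, regardless of actual teaching quality.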

Next to the academic survey, citation data play an important role. These have been delivered by Thomson Reuters, the partner of the THES ranking. A strong point of the newly devised THES ranking is the so-called “normalization” of the citation data. Fields differ quite strongly in terms of referencing behaviour. In the medical and life sciences, for example, researchers cite quite a lot of other papers in their publications. As a result, the “citation density” in these fields is relatively high. Other fields, such as mathematics, have much shorter reference lists and therefore a much lower citation density. So if one wants to compare mathematicians with medical researchers, and this is necessary if one wants to rank overall university performance, one needs to correct for these types of differences. In the past this was often not done, so this is certainly a step forward. It does not solve all problems, however. For example, the academic output in the humanities and social sciences is covered only very incompletely in the Web of Science database. The same holds for local scientific journals and many applied sciences. Moreover, correcting for field differences is not an easy job. Fields are not clearly defined, and this type of correction is very sensitive to exactly how one draws the boundaries between fields. This is a vexing issue for all rankers. The THES ranking has used the subject categories that are assigned to journals in the Web of Science. These subject categories are in many respects problematic as markers of fields. This is not such a big problem if one uses the Web of Science to locate interesting articles. As soon as one starts to base evaluations on them, however, extra care is needed. For this reason, finding better ways to delineate research fields is an important topic in scientometric research.
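
The basic idea of field normalization can be illustrated with a toy calculation: a paper's citation count is divided by the average citation count of papers in the same field (and, in practice, the same publication year and document type), so that a mathematics paper and a medical paper become comparable. The field averages below are invented for the sake of the example.

```python
# Toy example of field normalization: citations relative to the field's
# world average. A value of 1.0 means "cited at the world average".

FIELD_AVERAGE = {
    "mathematics": 2.0,        # low citation density (invented value)
    "clinical medicine": 15.0, # high citation density (invented value)
}

def normalized_impact(citations, field):
    """Citations divided by the average citation count in the field."""
    return citations / FIELD_AVERAGE[field]

# Six citations mean very different things in different fields.
print(normalized_impact(6, "mathematics"))        # 3.0 -> well above average
print(normalized_impact(6, "clinical medicine"))  # 0.4 -> below average
```

The example also makes clear why the delineation of fields matters so much: move a journal from one category to another and the denominator, and hence the score, changes.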

It is also interesting that the weight of the indicators has been influenced both by the availability of the data and by the opinion of experts about the validity of an indicator. For example, the ratio between students and staff is seen as a proxy measure for the intensity and quality of teaching: the lower this ratio, the more staff are available per student, and hence the higher the teaching quality may be. Yet this indicator does not influence the rating strongly, because the THES consultation “expressed some concerns” about it. In other cases, indicators were assigned a relatively low weight because the universities did not provide enough data.

This points to an often overlooked problem. The more precisely we want to measure the activities and impact of academics, the more registration of their output and working processes is required. However, universities are still based on the model of a relatively autonomous academic workforce. Doing scientific research and teaching at an academic level require skills that often escape formal description and standardization. This makes it extremely difficult to measure the relevant dimensions without turning universities into a Kafkaesque nightmare. Universities are therefore confronted with the challenge of finding the right balance between measurement and regulation on the one hand, and providing enough freedom to enable fundamental and groundbreaking research (which is by definition an adventure into the unknown) on the other.

The THES ranking is only one attempt to solve this type of dilemma in the management of, and information provision about, academia. Because it aims to represent all relevant aspects of the universities’ performance and standing, it becomes very complex. Even so, the ranking cannot be more than a partial representation of global academia. On top of this, the presentation of the results as a one-dimensional list inevitably gives the impression that it makes a big difference where exactly one ends up. However, with the obvious exception of the few huge universities at the top (such as the academic giant Harvard University), many universities are actually very close to each other. A difference of 100 places in the list does not necessarily mean that performance really differs. For most universities, this type of ranking is therefore actually not so meaningful. If one wants to know where one’s institution stands in terms of relative performance, one needs a serious benchmark study which also takes the particular mission of the university into account.

So, if your university did not score so high in the THES ranking, do not lose too much sleep over it. It may mean much less than it seems.
