The Facebook-ization of academic reputation? ResearchGate and everyday neoliberalism

Guest blog post by Alex Rushforth


How do we explain the endurance of neoliberal modes of government following the 2008 financial crisis, which could surely have been its death knell? This is the question of a long, brilliant book by the historian of science and economics Philip Mirowski, called ‘Never Let a Serious Crisis Go to Waste’. Mirowski argues that explanations of the crisis to date have accounted for only part of the answer. The persistence of neoliberal ideals of personhood and markets comes not just directly from ‘the government’ or particular policies, but also results from very mundane practices and technologies that surround us in our everyday lives.

I think this book can tell us a lot about the new ways in which our lives as academics are increasingly being governed. Consider web platforms like ResearchGate: following Mirowski, these academic professional networking sites might be understood as technologies of ‘everyday neoliberalism’. These websites bear a number of resemblances to social networking sites like Facebook – which Mirowski takes as the exemplar par excellence of this phenomenon. He argues that Facebook teaches its users to become ‘entrepreneurs of themselves’ by fragmenting the self and reducing it to something transient (ideals emanating from the writings of Hayek and Friedman), to be actively and promiscuously re-drawn out of various click-enabled associations (accumulated in indicators like numbers of ‘likes’, ‘friends’ and comments) (Mirowski, 2013, p. 92).

Let us briefly consider what kind of academic ResearchGate encourages and teaches us to become. Part of the seductiveness of these technologies for academics, I suspect, is that we already compete within reputational work organisations (cf. Whitley, 2000), where self-promotion has always been part and parcel of producing new knowledge. However, such platforms also intensify and reinforce dominant ideas and practices for evaluating research and researchers, which – with the help of Mirowski’s text – appear to be premised on neoliberal doctrines. Certainly the websites build on the idea that the individual (as author) is the central locus of knowledge production. Yet what is distinctly neoliberal, perhaps, is how the individual – through the architecture and design of the websites – experiences their field of knowledge production as a ‘marketplace of ideas’ (on the neoliberal roots of this idea, see Mirowski, 2011).

This is achieved through ‘dashboards’ that display a smorgasbord of numerical indicators. When you upload your work, the interface generates the Impact Factor of the journals you have published in and various other algorithmically generated scores (the ResearchGate score, anyone?). There are also social networking elements like ‘contacts’, enabling you to follow and be followed by other users of the platform (your ‘peers’). This in turn produces a count of how well ‘networked’ you are. In short, checking one’s scores, contacts, downloads, views, and so on is supposed to give an impression of an individual user’s market standing, especially as one can compare these with the scores of other users. Regular email notifications provide reminders to continue internalizing these demands and to report back regularly to the system. These scores and notices are not final judgments but a record of accomplishments so far, motivating the user to carry on with the determination to do better. Given the aura of ‘objectivity’ and the ‘market knows best’ mantra these indicators present to us, any ‘failings’ are the responsibility of the individual. Felt anger is to be turned back inward on the self, rather than outwards on the social practices and ideas through which such ‘truths’ are constituted. A marketplace of ideas indeed.

Like Facebook, what these academic professional networking sites do seems largely unremarkable and uncontroversial, forming part of the background infrastructures which simply nestle into our everyday research practices. One of their fascinating features is to promulgate a mode of power that is not directed at us ‘from above’ – no manager or formal audit exercise coerces researchers into signing up. We are able to join and leave of our own volition (many academics don’t even have accounts). Yet these websites should be understood as component parts of a wider ‘assemblage’ of metrics and evaluation techniques with which academics currently juggle, and which in turn generate certain kinds of tyrannies (see Burrows, 2012).

Mirowski’s book provides a compelling set of provocations for digital scholars, sociologists of science, science studies, higher education scholars and others to work with. Many studies have documented reforms to the university bearing various hallmarks of neoliberal political-philosophical doctrines (think audits, university rankings, temporary labour contracts, competitive funding schemes and the like). Yet these techniques may only be the tip of the iceberg: Mirowski has given us cause to think more imaginatively about how ‘everyday’ or ‘folk’ neoliberal ideas and practices become embedded in our academic lives through quite mundane infrastructures, the effects of which we have barely begun to recognise, let alone understand.


Burrows, R. (2012). Living with the h-index? Metric assemblages in the contemporary academy. Sociological Review, 60, 355–372.

Mirowski, P. (2011). Science-Mart: Privatizing American Science. Cambridge, MA: Harvard University Press.

Mirowski, P. (2013). Never Let a Serious Crisis Go to Waste: How Neoliberalism Survived the Financial Meltdown. New York: Verso.

Whitley, R. (2000). The Intellectual and Social Organization of the Sciences. Oxford: Oxford University Press.



Stick to Your Ribs: Interview with Paula Stephan — Economics, Science, and Doing Better

A good interview about what is wrong with the current incentives system in science and scholarship.

Bibliometrics of individual researchers

The demand for measures of individual performance in the management of universities and research institutes has been growing, in particular since the early 2000s. The publication of the h-index in 2005 (Hirsch, 2005) and its popularisation by the journal Nature (Ball, 2005) gave this a strong stimulus. According to Hirsch, his index seemed the perfect indicator to assess the scientific performance of an individual author because “it is transparent, unbiased and very hard to rig”. The h-index balances productivity with citation impact: an author with an h-index of 14 has produced 14 publications that have each been cited at least 14 times. So neither an author with a long list of mediocre publications nor an author with one wonder hit is rewarded by this indicator. Nevertheless, the h-index turned out to have too many disadvantages to wear the crown of “the perfect indicator”. As Hirsch himself acknowledged, it cannot be used for cross-disciplinary comparison: a field in which many citations are exchanged among authors will produce a much higher average h-index than a field with far fewer citations and references per publication. Moreover, the older one gets, the higher one’s h-index will be. And, as my colleagues have shown, the index is mathematically inconsistent, which means that rankings based on the h-index may be influenced in rather counter-intuitive ways (Waltman & Van Eck, 2012). At CWTS, we therefore prefer an indicator like the number (or percentage) of highly cited papers over the h-index (Bornmann, 2013).
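The definition above is easy to make concrete. A minimal sketch of the computation (an illustration, not any official implementation):

```python
def h_index(citations):
    """Return the largest h such that the author has h papers
    with at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:  # the rank-th most cited paper still has >= rank citations
            h = rank
        else:
            break
    return h

# Five papers with these citation counts give an h-index of 3:
# three papers have at least 3 citations, but not four with at least 4.
print(h_index([10, 8, 5, 2, 1]))  # -> 3
```

Note how the single-number compression works: the author of one wonder hit with 1,000 citations still has an h-index of 1, which is exactly the trade-off (and the loss of information) discussed above.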

Still, none of the bibliometric indicators can claim to be the perfect indicator to assess the performance of the individual researcher. This raises the question of how bibliometricians and science managers should use statistical information and bibliometric indicators. Should they be avoided and should the judgment of candidates for a prize or a membership of a prestigious academic association only be informed by peer review? Or can numbers play a useful role? What guidance should the bibliometric community then give to users of their information?

This was the key topic at a special plenary at the 14th ISSI Conference two weeks ago in Vienna. The plenary was an initiative taken by Jochen Gläser (Technical University Berlin), Ismael Rafols (SPRU, University of Sussex, and Ingenio, Polytechnical University Valencia), Wolfgang Glänzel (Leuven University) and myself. The plenary aimed to give a new stimulus to the debate on how to apply, and how not to apply, performance indicators of individual scientists and scholars. Although this is not a new debate – the pioneers of bibliometrics already paid attention to this problem – it has become more urgent because of the almost insatiable demand for objective data and indicators in the management of universities and research institutes. For example, many biomedical researchers mention the value of their h-index on their CV. In publication lists, one regularly sees the value of the Journal Impact Factor mentioned after the journal’s name. In some countries, for example Turkey and China, one’s salary can be determined by the value of either one’s h-index or the impact factor of the journals one has published in. The Royal Netherlands Academy of Arts and Sciences also seems to ask for this kind of statistics in its forms for new members in the medical and natural sciences. Although robust systematic evidence is still lacking (we are working hard on this), the use of performance indicators in the judgment of individual researchers for appointments, funding, and memberships seems widespread, opaque and unregulated.

This situation is clearly not desirable. If researchers are being evaluated, they should be aware of the criteria used, and these criteria should be justified for the purpose at hand. This requires that users of performance indicators have clear guidelines. It seems rather obvious that the bibliometric community has an important responsibility to provide such guidelines. However, at the moment there is no consensus about them. Individual bibliometric centres do indeed inform their clients about the use and limitations of their indicators. Moreover, all bibliometric centres have the habit of publishing their work in the scientific literature, often including technical details of their indicators. However, this published work is not easily accessible to non-expert users such as faculty deans and research directors. The literature is too technical and distributed over too many journals and books. It needs synthesizing and translating into plain language that is easily understandable.

To initiate a process of more professional guidance for the application of bibliometric indicators in the evaluation of individual researchers, we asked the organizers of the ISSI conference to devote a plenary to this problem, which they kindly agreed to. At the plenary, Wolfgang Glänzel and I presented “The dos and don’ts in individual level bibliometrics”. We do not regard this as a final list, but rather as a good start with ten dos and don’ts. Some examples: “do not reduce individual performance to a single number”, “do not rank scientists according to a single indicator”, “always combine quantitative and qualitative methods”, “combine bibliometrics with career analysis”. To prevent misunderstandings: we do not want to initiate a bibliometric police with absolute rules. The context of the evaluation should always determine which indicators and methods to use. Therefore, some don’ts on our list may sometimes be perfectly usable, such as the application of bibliometric indicators to make a first selection among a large number of candidates.

Our presentation was commented on by Henk Moed (Elsevier) with a presentation on “Author Level Bibliometrics” and by Gunnar Sivertsen (NIFU, Oslo University) on the basis of his extensive experience in research evaluation. Henk Moed built on the concept of the multi-dimensional research matrix, which was published by the European Expert Group on the Assessment of University Based Research in 2010, of which he was a member (Assessing Europe’s University-Based Research – Expert Group on Assessment of University-Based Research, 2010). This matrix aims to give global guidance on the use of indicators at various levels of the university organization. However, it does not focus on the problem of how to evaluate individual researchers. Still, the matrix is surely a valuable contribution to the development of more professional standards in the application of performance indicators. Gunnar Sivertsen made clear that the discussion should not be restricted to the bibliometric community itself. On the contrary, the main audience for guidelines should be the researchers themselves and administrators in universities and funding agencies.

The ensuing debate led to a large number of suggestions. They will be included in the full report of the meeting, which will be published in the upcoming issue of ISSI’s professional newsletter in September 2013. A key point was perhaps the issue of responsibility: it is clear that researchers themselves and the evaluating bodies should carry the main responsibility for the use of performance indicators. However, they should be able to rely on clear guidance from the technical experts. How should this balance be struck? Should bibliometricians refuse to deliver indicators when they think their application would be unjustified? Should the association of scientometricians publicly comment on misapplications? Or should this be left to the judgment of the universities themselves? The plenary did not solve these issues. However, a consensus is emerging that more guidance by bibliometricians is required, and that researchers should have a clear address to which they can turn with questions about the application of performance indicators, either by themselves or by their evaluators.

What next? The four initiators of this debate in Vienna have also organized a thematic session on individual level bibliometrics at the next conference on science & technology indicators, the STI Conference “Translational twists and turns: science as a socio-economic endeavour”, which will take place in Berlin, 4-6 September 2013. There, we will take the next step in specifying guidelines. In parallel, this conference will also host a plenary session on the topic of bibliometric standards in general, organized by iFQ, CWTS and Science-Metrix. In 2014, we will then organize a discussion with the key stakeholders, such as faculty deans, administrators, and of course the research communities themselves, on the best guidelines for evaluating individual researchers.

Stay tuned.


Assessing Europe’s University-Based Research – Expert Group on Assessment of University-Based Research. (2010). Research Policy. European Commission. doi:10.2777/80193

Ball, P. (2005). Index aims for fair ranking of scientists. Nature, 436(7053), 900.

Bornmann, L. (2013). A better alternative to the h index. Journal of Informetrics, 7(1), 100. doi:10.1016/j.joi.2012.09.004

Hirsch, J. E. (2005). An index to quantify an individual’s scientific research output. Proceedings of the National Academy of Sciences of the United States of America, 102(46), 16569–72. doi:10.1073/pnas.0507655102

Waltman, L., & Van Eck, N. J. (2012). The inconsistency of the h-index. Journal of the American Society for Information Science and Technology, 63(2), 406–415. doi:10.1002/asi

Vice Rector University of Vienna calls for a new scientometrics

At the opening of the biennial conference of the International Society for Scientometrics and Informetrics (ISSI) in Vienna on July 16, Susanne Weigelin-Schwiedrzik, Vice Rector of the University of Vienna, called upon the participants to reorient the field of scientometrics in order to better meet the need for research performance data. She explained that Austrian universities are nowadays obliged by law to base all their decisions regarding promotion, personnel, research funding and the allocation of research funds to departments on formal external evaluation reports. “You are hosted by one of the oldest universities in Europe, it was founded in 1365. In the last couple of years, this prestigious institution has been reorganized using your scientometric data. This puts a tremendous responsibility on your field. You are no longer in the Kindergarten stage. Without your data, we cannot take decisions. We use your data to allocate research funds. We have to think twice before using your data. But you have the responsibility to realize your role in a more fundamental way. You also have to address the criticism of scientometric data, and what they represent.”

Weigelin’s passionate call for a more reflexive and critical type of scientometrics is motivated by a strong shift in Austrian university policy with respect to human resource management and research funding. In the past, the system was basically a closed shop, with many university staff members staying within their original university. The system was not very open to exchanges among universities, let alone international exchange. Nowadays, university managers need to explicitly base their decisions on external evaluations, in order to make clear that their decisions meet international quality standards. As a consequence, the systems of control at Austrian universities have exploded. To support this decision-making machinery, the University of Vienna has created a specific quality management department and a bibliometric department. The university has an annual budget of 380 million euros and needs to meet annual targets that are included in target agreements with the government.

On the second day of the ISSI conference, Weigelin repeated her plea in a plenary session on the merits of altmetrics. After a couple of presentations by Elsevier and Mendeley researchers, she said she was “not impressed”. “I do not see how altmetrics, such as download and usage data, can help solve our problem. We need to take decisions on the basis of data on impact. We look at published articles and at Impact Factors. As a researcher, I know that this is incorrect, since these indicators do not directly reflect quality. But as a manager, I do not know what else to do. We are supposed to simplify the world of science. That is why we rely on your data and on the misconception that impact is equal to quality. I do not see a solution in altmetrics.” She told the audience, which was listening intently, that she receives a constant flow of evaluation reports and that the average quality of these reports is declining. “And I must say that a fair amount of the reports that are pretty useless are based on scientometric data.” Nowadays, Weigelin no longer accepts recommendations for the promotion of scientific staff that only mention bibliometric performance measures without a substantive interpretation of what the staff member is actually contributing to her scientific field.

In other words, at the opening of this important scientometric conference, the leadership of the University of Vienna has formulated a clear mission for the field of scientometrics. The task is to be more critical with respect to the interpretation of indicators and to develop new forms of strategically relevant statistical information. This mission resonates strongly with the new research program we have developed at CWTS. Happily, the resonance among the participants of the conference was strong as well. The program of the conference shows many presentations and discussions that promise to at least contribute, albeit sometimes in a modest way, to solving Weigelin’s problems. It seems therefore clear that many scientometricians are eager to meet the challenge and indeed develop a new type of scientometrics for the 21st century.

May university rankings help uncover problematic or fraudulent research?

Can one person manipulate the position of a whole university in a university ranking such as the Leiden Ranking? The answer is, unfortunately, sometimes yes – provided the processes of quality control in journals do not function properly. A Turkish colleague recently alerted us to the position of Ege University in the most recent Leiden Ranking in the field of mathematics and computer science. This university, not previously known as one of the prestigious Turkish research universities, ranks second, with an astonishing PP(top 10%) value of almost 21%. In other words, 21% of the mathematics and computer science publications of Ege University belong to the top 10% most frequently cited in their field. This means that Ege University is supposed to have produced twice as many highly cited papers as expected. Only Stanford University has performed better.
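The arithmetic behind the PP(top 10%) indicator can be sketched as follows. This is a deliberately simplified illustration with made-up numbers; the actual Leiden Ranking uses field- and year-normalized citation distributions rather than a single fixed threshold per field:

```python
def pp_top10(paper_citations, field_thresholds):
    """Fraction of a university's papers whose citation count reaches
    the top-10% threshold of their field (simplified sketch).

    paper_citations: list of (citations, field) tuples
    field_thresholds: dict mapping field -> minimum citations needed
                      to belong to that field's top 10% most cited
    """
    if not paper_citations:
        return 0.0
    top = sum(1 for c, f in paper_citations if c >= field_thresholds[f])
    return top / len(paper_citations)

# Hypothetical data: 3 of 6 papers reach their field's top-10% threshold,
# giving PP(top 10%) = 0.5 - five times the 10% one would expect by chance.
papers = [(25, "math"), (3, "math"), (40, "cs"),
          (16, "cs"), (1, "math"), (9, "cs")]
thresholds = {"math": 10, "cs": 15}
print(pp_top10(papers, thresholds))  # -> 0.5
```

By construction, an average university scores around 10% on this indicator, which is why a value of 21% on only 210 papers immediately stands out.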

In mathematics and computer science, Ege University has produced 210 publications (Stanford produced almost ten times as many). Because this is a relatively small number of publications, the reliability of the ranking position is fairly low, which is indicated by a broad stability interval (an indication of the uncertainty in the measurement). Of the 210 Ege University publications, no fewer than 65 were created by one person, a certain Ahmet Yildirim. This is an extremely high productivity in only four years in this specialty. Moreover, the Yildirim publications are indeed responsible for the high ranking of Ege University: without them, Ege University would rank around position 300 in this field, which is probably a much better reflection of its performance. Yildirim’s publications have attracted 421 citations, excluding self-citations. Mathematics is not a very citation-dense field, so this level of citations can strongly influence both the PP(top 10%) and the MNCS indicators.

An investigation into Yildirim’s publications has not yet started, as far as we know. But suspicions of fraud and plagiarism are rising, both in Turkey and abroad. One of his publications, in the journal Mathematical Physics, has recently been retracted by the journal because of evident plagiarism (pieces of an article by a Chinese author were copied and presented as original). Interestingly, the author has not agreed with this retraction. A fair number of Yildirim’s publications have appeared in journals with a less than excellent track record in quality control. The Elsevier journal Computers & Mathematics with Applications (11 articles by Yildirim) has recently retracted an article by a different author because it turned out to have “no scientific content”. Actually, it was an almost empty publication. According to Retraction Watch, the journal’s editor Ervin Rodin was replaced at the end of last year. He was also relieved of his editorial position at the journal Applied Mathematics Letters – An International Journal of Rapid Publication, another Elsevier imprint. Rodin was also editor of Mathematical and Computer Modelling, in which Yildirim published 5 articles. The latter journal currently does not accept any submissions “due to an editorial reconstruction”.

How did Yildirim’s publications attract so many citations? His 65 publications are cited by 285 publications, giving 421 citations in total. This group of citing publications has strong internal citation traffic: together they have attracted almost 1,200 citations, of which a bit more than half are generated within the group. In other words, this set of publications seems to represent a closely knit group of authors, though not one completely isolated from other authors. If we look at the universities citing Ege University, none of them ranks highly in the Leiden Ranking, with the exception of Penn State University (ranked 112), which has cited Yildirim once. If we zoom in on mathematics and computer science, virtually none of the citing universities ranks highly either, with the exception of Penn State (1 publication) and Gazi University (also 1 publication). The rank position of the latter university, by the way, is not very reliable either, as indicated by a stability interval that is almost as wide as in the case of Ege University.

The bibliometric evidence allows for two different conclusions. One is that Yildirim is a member of a community which works closely together on an important mathematical problem. The alternative interpretation is that this group is a distributed citation cartel which not only exchanges citations but also produces very similar publications in journals that are functioning mainly as citation generating devices. A cursory look at a sample of the publications and the way the problems are formulated seems to support the second interpretation more than the first.

But from this point, the experts in mathematics should take over. Bibliometrics is currently not able to properly distinguish sense from nonsense in scientific publications. Expertise in the field is required for this task. We have informed the rector of Ege University that the ranking of his university is doubtful and requested more information from him about the position of the author. We have not yet received a reply. If Ege University wishes to be taken seriously, it should start a thorough investigation of the publications by Yildirim and his co-authors.

If you see other strange rankings in our Leiden Ranking or in any other ranking, please do notify us. It may help us create better tools to uncover fraudulent behaviour in academic scholarship.

Worldwide diversification of research continues

Last Wednesday, we published the new edition of the Leiden Ranking. The results are quite interesting. The range of countries with universities that score highly on their number of highly cited publications is increasing. Thirteen countries are now represented in the top hundred of the world: the US (57 universities), UK (16), Switzerland and the Netherlands (each 6), China (4), Singapore, Canada and Germany (each 2), and Israel, Denmark, Ireland, South Korea and Australia (each with 1 university).

Clearly, the US is still dominating. The first 12 universities are all based in the US. Like last year, MIT leads the ranking, with no less than one quarter of its publications among the 10% most cited of their field (this calculation also takes the publication year into account). The largest research university in the world, Harvard, is number five, with an impressive one-fifth of its papers published between 2008 and 2011 scoring in the 10% most cited papers of their field. Note that when the option “fractional counting” is ticked, a paper is attributed as an equal fraction to each university mentioned in the author addresses. This prevents double counting, but does not reflect the total number of papers originating from a university. For example, Harvard has produced almost 57,000 papers, but many of them with other universities, which results in a “fractionalized” count of almost 30,000 papers, of which one-fifth scores in the 10% most cited segment.
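The fractional counting described above is straightforward to sketch. A toy illustration (not the Leiden Ranking code; the affiliations below are hypothetical):

```python
from collections import defaultdict

def fractional_counts(papers):
    """Each paper contributes 1/k to each of its k affiliated
    universities, so the totals over all universities sum to the
    number of papers - this is what prevents double counting."""
    counts = defaultdict(float)
    for affiliations in papers:
        share = 1 / len(affiliations)
        for uni in affiliations:
            counts[uni] += share
    return dict(counts)

# A paper co-authored at Harvard and MIT counts as 0.5 for each:
papers = [["Harvard"], ["Harvard", "MIT"], ["MIT", "Leiden"]]
print(fractional_counts(papers))
# -> {'Harvard': 1.5, 'MIT': 1.0, 'Leiden': 0.5}
```

With full (non-fractional) counting, the same three papers would be counted as 2 for Harvard, 2 for MIT and 1 for Leiden, summing to 5 rather than 3, which is exactly the double counting the option avoids.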

China is steadily increasing the impact of its research. Whereas in the recent past China rose quickly in terms of the production of scientific papers but not so much in terms of scientific influence, we now see that research from Chinese universities is gaining citations. Two Chinese universities, Nankai and Hunan, even score higher on the highly cited indicator than the highest-ranking Dutch universities (Leiden University and Utrecht University): almost 14.5% of their publications belong to the top 10% most cited in their field. The diversification also shows outside the top 100 universities. For example, the Leiden Ranking 2013 includes 37 Chinese universities (6 of them newcomers), 5 Iranian universities (all of them new) and 10 Brazilian universities (2 newcomers). This trend is the result of three effects. First, many universities are increasing their share of the scientific production. Second, at the same time, the overall number of scientific papers is rising, which results in a steady increase in the size of the Web of Science database on which the Leiden Ranking is based. Third, we have become better at correctly identifying universities in the address field of scientific publications. We suspect, for example, that this contributes to the rise of Iran in the Leiden Ranking.

Of course, the ranking also shows areas in which the citation impact is lower than expected. What struck me is that the Japanese universities (including the prestigious Tokyo University) all score lower than the world average. This is also true for all universities from some of the newcomer countries, such as Iran, but also, somewhat more surprisingly, for those from Norway, Brazil, Poland, Italy, Greece, Portugal, Russia, Turkey, and Taiwan.

Plea for assessments, against bean counting – part 3

Valorization of research has become an increasingly important pillar in research evaluation. The LERU report “Research universities and research assessment” does acknowledge this development. The report does not take a strong stand but limits itself to a cautious preliminary assessment of impact assessment. It gives an overview of the British, US, and European approaches to “impact” as an evaluation criterion. In the UK, one fifth of the grade in the new Research Excellence Framework will be awarded based on a combination of the “reach” of the impact and its “significance”. Universities are asked to present case studies as empirical evidence of the societal impact of their research. The LERU report points to the resource intensiveness of this approach as well as to the novelty of this type of measurement for academia. Panel members will have to develop expertise in this area. Also, the way research may have wider impact in society will vary strongly by research field.

In the US a different, large-scale data-oriented route has been taken with the STAR METRICS project, funded by NIH, NSF and the White House Office of Science and Technology Policy. There is no lack of ambition for the US project. According to Francis Collins, Director of NIH, STAR METRICS will “yield a rigorous, transparent review of how our science investments are performing. In the short term, we’ll know the impact on jobs. In the long term, we’ll be able to measure patents, publications, citations, and business start-ups”. LERU warns that this might be too optimistic. “Already anecdotal evidence suggests that a number of anomalies appear to be occurring. There is concern about coverage especially in disciplines that focus on highly selective and tightly focused conference proceedings, traditional journals being deemed too slow. In addition, it is thought that there may be perverse effects on young new investigators.”

Not mentioned in the report is the role of commercial companies in research assessment. This is a growing market, and the increasing pressure on university budgets has the paradoxical effect of making research assessments and bibliometric analyses even more important. As a result, commercial companies have developed aggressive strategies to attract universities as clients. Some universities have developed spin-off companies, and CWTS itself is in fact an exemplar of such a hybrid of research centre and commercial service provider. This has been the state of affairs from the very beginning of scientometrics as a field of research, so there is nothing new here. Still, universities need to be aware of potential conflicts of interest between themselves and the companies producing information about research. A good strategy might be to always maintain ownership of the data produced by the university and to promote open access where possible. Universities are starting to develop campus-wide policies, and they might have profited from LERU advice on this topic.

Last, but not least, the LERU report does not discuss the changing demographics of the research population at universities and the acute need for universities to develop a more future-oriented career policy. According to many specialists, the way universities develop their human resource management might very well decide how they will fare. An important question is how research evaluations are affecting the development of research careers and to what extent they are producing perverse effects. The fact that this is not mentioned at all in the LERU report is a missed opportunity in an otherwise balanced and carefully written policy report.

Plea for assessments, against bean counting – part 2

The LERU report “Research universities and research assessment” is partly inspired by problems that university managers have encountered in their attempts to evaluate the performance of their institutions. In her presentation at the launch event of the report, Mary Phillips concluded that universities use assessments in a variety of ways. First, they want to know their output, impact and quality for the allocation of funds, performance improvement, and maximization of ‘return on investment’. Second, research assessments are used to inform strategic planning. Third, they are applied to identify excellent researchers in the context of attracting and retaining them. Fourth, they are used to monitor the performance of individual units in the university, such as departments or faculties. Fifth, research assessments are used to identify current and potential partners for scientific collaboration. And last, they are used at the level of the universities to benchmark against their peers.

Given this variety of assessment applications, it is not surprising that universities encounter a number of problems. The report identifies a number of them. The relevant data are diverse and currently often not integrated in tools. There is a lack of agreement on definitions and standards. For example, who counts as a ‘researcher’ may differ across university systems. Also, the funding mechanisms are still dominantly national and they differ significantly. And finally, many databases that are used in research assessments are proprietary and cannot be controlled by the universities themselves. Moreover, Phillips signals that perverse effects can be expected from current assessment procedures. Measurement cultures may “distract from the academic mission”. It is important to be aware of disciplinary differences, for example with respect to the numbers of citations and the relevant time frames of the measurement. Last, the report mentions that academics may feel threatened by research assessments.

In addition to these problems and dangers, the report identifies two important novel developments in research assessment: the European project to rank universities in multiple dimensions (U-Multirank), and the recent emphasis on the societal impact of research in evaluations. The report is rather critical of U-Multirank. The project, in which CWTS participates, aims to address a major problem in current university rankings. Apart from the research-focused rankings, such as the Leiden Ranking or the Scimago Ranking, global rankings have combined different dimensions such as the quality of education and research output in an arbitrary way. Also, they apply one model to all universities. However, universities may have very different missions. Therefore, it makes more sense to compare universities with similar missions. “According to the multidimensional approach a focused ranking does not collapse all dimensions into one rank, but will instead provide a fair picture of institutions (‘zooming in’) within the multi-dimensional context provided by the full set of dimensions.” (U-Multirank) In principle, LERU supports this approach, and it was also involved in the first-stage feasibility study. However, a number of concerns have led LERU to disengage from the project.

“Our main concerns relate to the lack of good or relevant data in several dimensions, the problems of comparability between countries in areas such as funding, the fact that U-Multirank will not attempt to evaluate the data collected, i.e. there will be no “reality-checks”, and last but by no means least, the enormous burden put upon universities in collecting the data, resulting in a lack of involvement from a good mix of different types of universities from all over the world, which renders the resulting analyses and comparisons suspect.” These concerns have led the organization to turn away from rankings as an instrument in assessment. The European Commission has not followed this reasoning and has recently decided to publish a call for the second stage of the U-Multirank project. The consortium has not yet publicly replied to LERU’s critique.

Plea for assessments, against bean counting – part 1

“Above all, universities should stand firm in defending the long-term value of their research activity, which is not easy to assess in a culture where return on investment is measured in very short time spans.” This is the central message of a new position paper recently published by the League of European Research Universities (LERU) about the way universities should handle the evaluation of research. In many ways, it is a sensible report which tries to strike a careful balance between the different interests involved. The report is written by Mary Phillips, former director of Research Planning at University College London and currently an adviser to Academic Analytics, a for-profit consultancy in the area of research evaluation (and hence one of CWTS’ competitors). The report is a plea for the combined application of peer review and bibliometrics by university management. It also contains a number of principles that LERU would like to see implemented by universities in their assessment procedures.

The point of departure of the report is the observation that assessments have become part and parcel of the university. At the same time, the types of assessments possible and the different methodologies have exploded. This leads to the stimulation of “an obsession with measurement and monitoring, which may result in a ‘bean counting’ culture detracting from the real quality of research”. Indeed, this has already begun. The dilemmas are made worse by the fact that universities need to deal with large quantities of data and require sophisticated human resource and research management tools, which they often currently lack. On top of all this, funding regimes tend to create incentives which may tempt universities to, as the report with a feel for understatement expresses, “behave in certain ways, sometimes with unfortunate consequences”.

One of the implications is that any assessment system must be sensitive to possible perverse incentives, should take disciplinary differences into account and have a long enough time frame, at least five years according to the report. Assessments should “reflect the reality of research”, including the aspirations of the researchers involved. “Thus, senior administrators and academics must take account of the views of those “at the coal-face” of research”. Assessments should be “as transparent as possible”. Universities are advised to improve their data management systems. And researchers “should be encouraged (or compelled) when publishing, to use a unique personal and institutional designation, and to deposit all publications into the university’s publications database”.

Universities demand full transparency of university rankings

Last week, I attended a two-day conference of the rectors of 65 Latin American universities about global university rankings in Mexico City. The meeting concluded by adopting a “Final Declaration” signed by the majority of attending universities. At times, it was a debate in which the emotions ran high. Clearly, many university leaders had the feeling that they were badly served by most global university rankings. In this, they were supported by the keynote speaker, Simon Marginson, a higher education expert from the University of Melbourne (Australia). He gave an excellent speech in which he showed how most rankings are based on a particular model of higher education as a globalized market. In this framework US universities are dominant. Many rectors were of the opinion that the social mission of the Latin American universities will not be valued in this model. Moreover, performance at the international research front is dominant in most rankings, including in our Leiden Ranking. Latin American universities do not score high, if they make it into the rankings at all.

The meeting was organized by Imanol Ordorika, director of institutional evaluation of the National Autonomous University of Mexico. A former leader of the 1987 student demonstrations, he is focused both on international research (in the field of higher education) and on the social role of the universities. The countries in Latin America are confronted with high levels of corruption, enormous economic and social inequalities, and the need for much better mass education. Although these universities are huge (UNAM has more than 300 thousand students), they still cannot accommodate all young people who aspire to study. Approximately one-fifth of Latin America’s youth neither studies nor works. No wonder that university rectors not only worry about their international research effort, but at least as much or more about their role in improving the educational system in their countries.

Against this background, the well-known deficiencies of many global university rankings are even more urgent. This was also the reason to organize the conference. Increasingly, universities that score low or not at all in the rankings – such as the Times Higher Education World University Rankings, the QS World University Rankings, the Academic Ranking of World Universities (the Shanghai Ranking), or the Ranking Web of World Universities – are questioned about their performance. According to the declaration adopted at the conference, the current rankings have many undesirable effects, such as a homogenizing impact in which the elite US-based research university is dominant, a bias in the perception of the performance of Latin American universities, an undermining of the legitimacy of the national higher education institutions, and the mistaken tendency to see rankings as information systems.

Key problems in the global rankings discussed were: the arbitrary way in which different indicators are combined into one composite indicator; the lack of visibility of the humanities and social sciences; the neglect of the social and cultural impact of the universities; and last but not least the lack of transparency of both the methods and the data that are used to calculate the indicators. The Leiden Ranking was praised for its transparency and its focus, as was the SCImago Ranking. It was seen as helpful that these rankings make very explicit what they measure and what they do not measure. Of course, these rankings do not make it possible to compare universities’ social missions. For this, other measures are needed.

The “Final Declaration” demanded that governments in Latin America avoid using the rankings as elements in evaluating the universities’ performance. They were also advised to encourage the creation of public databases that permit a well-founded knowledge of the performance of the higher education system. The ranking producers were called upon to adhere to the 2006 “Berlin Principles on Ranking of Higher Education Institutions”. Rankings should be 100% transparent. Ranking producers should also engage in more interaction with the universities. The declaration notes that there is currently no consensus on criteria for measuring the quality of universities. “Any selection of parameters or quantitative indicators to sum up the qualities of universities is rather arbitrary”. The media are admonished to provide a more balanced coverage of the rankings. And the universities in Latin America are encouraged to adopt policies that promote transparency, accountability and open access. Rankings can play a role here. However, universities should not sacrifice “our fundamental responsibilities” in order to implement “superficial strategies designed to improve our standings in the rankings”.

Paul Wouters
