Plea for assessments, against bean counting – part 3

Valorization of research has become an increasingly important pillar in research evaluation. The LERU report “Research universities and research assessment” acknowledges this development. The report does not take a strong stand but limits itself to a cautious preliminary appraisal of impact assessment. It gives an overview of the British, US, and European approaches to “impact” as an evaluation criterion. In the UK, one fifth of the grade in the new Research Excellence Framework will be awarded based on a combination of the “reach” of the impact and its “significance”. Universities are asked to present case studies as empirical evidence of the societal impact of their research. The LERU report points to the resource intensiveness of this approach as well as to the novelty of this type of measurement for academia. Panel members will have to develop expertise in this area. Also, the way research may have wider impact in society varies strongly by research field.

In the US a different, large-scale data-oriented route has been taken with the STAR METRICS project funded by NIH, NSF and the White House Office of Science and Technology Policy. There is no lack of ambition for the US project. According to Francis Collins, Director of NIH, STAR METRICS will “yield a rigorous, transparent review of how our science investments are performing. In the short term, we’ll know the impact on jobs. In the long term, we’ll be able to measure patents, publications, citations, and business start-ups”. LERU warns that this might be too optimistic. “Already anecdotal evidence suggests that a number of anomalies appear to be occurring. There is concern about coverage especially in disciplines that focus on highly selective and tightly focused conference proceedings, traditional journals being deemed too slow. In addition, it is thought that there may be perverse effects on young new investigators.”

Not mentioned in the report is the role of commercial companies in research assessment. This is a growing market, and the increasing pressure on university budgets has the paradoxical effect of making research assessments and bibliometric analyses even more important. As a result, commercial companies have developed aggressive strategies to attract universities as clients. Some universities have developed spin-off companies, and CWTS itself is in fact an example of such a hybrid of research centre and commercial service provider. This has been the state of affairs from the very beginning of scientometrics as a field of research, so there is nothing new here. Still, universities need to be aware of potential conflicts of interest between themselves and the companies producing information about research. A good strategy might be to always maintain ownership of the data produced by the university and to promote open access where possible. Universities are starting to develop campus-wide policies and could have profited from LERU advice on this topic.

Last, but not least, the LERU report does not discuss the changing demographics of the research population at universities and the acute need for universities to develop a more future-oriented career policy. According to many specialists, the way universities develop their human resource management might very well decide how they will fare. An important question is how research evaluations are affecting the development of research careers and to what extent they are producing perverse effects. That this is not mentioned at all in the LERU report is a missed opportunity in an otherwise balanced and carefully written policy report.


Plea for assessments, against bean counting – part 2

The LERU report “Research universities and research assessment” is partly inspired by problems that university managers have encountered in their attempts to evaluate the performance of their institutions. In her presentation at the launch event of the report, Mary Phillips concluded that universities use assessments in a variety of ways. First, they want to know their output, impact and quality for the allocation of funds, performance improvement, and maximization of ‘return on investment’. Second, research assessments are used to inform strategic planning. Third, they are applied to identify excellent researchers in the context of attracting and retaining them. Fourth, they are used to monitor the performance of individual units in the university, such as departments or faculties. Fifth, research assessments are used to identify current and potential partners for scientific collaboration. And last, they are used at the level of the universities to benchmark against their peers.

Given this variety of assessment applications, it is not surprising that universities encounter a number of problems. The report identifies several of them. The relevant data are diverse and currently often not integrated in tools. There is a lack of agreement on definitions and standards. For example, who counts as a ‘researcher’ may differ across university systems. Also, the funding mechanisms are still dominantly national and they differ significantly. And finally, many databases that are used in research assessments are proprietary and cannot be controlled by the universities themselves. Moreover, Phillips signals that perverse effects can be expected from current assessment procedures. Measurement cultures may “distract from the academic mission”. It is important to be aware of disciplinary differences, for example with respect to the numbers of citations and the relevant time frames of the measurement. Last, the report mentions that academics may feel threatened by research assessments.

In addition to these problems and dangers, the report identifies two important novel developments in research assessment: the European project to rank universities in multiple dimensions (U-Multirank), and the recent emphasis on the societal impact of research in evaluations. The report is rather critical of U-Multirank. The project, in which CWTS participates, aims to address a major problem in current university rankings. Apart from the research-focused rankings, such as the Leiden Ranking or the Scimago Ranking, global rankings have combined different dimensions, such as the quality of education and research output, in an arbitrary way. Also, they apply one model to all universities. However, universities may have very different missions. Therefore, it makes more sense to compare universities with similar missions. “According to the multidimensional approach a focused ranking does not collapse all dimensions into one rank, but will instead provide a fair picture of institutions (‘zooming in’) within the multi-dimensional context provided by the full set of dimensions.” (U-Multirank) In principle, LERU supports this approach, and it was also involved in the first-stage feasibility study. However, a number of concerns have led LERU to disengage from the project.

“Our main concerns relate to the lack of good or relevant data in several dimensions, the problems of comparability between countries in areas such as funding, the fact that U-Multirank will not attempt to evaluate the data collected, i.e. there will be no “reality-checks”, and last but by no means least, the enormous burden put upon universities in collecting the data, resulting in a lack of involvement from a good mix of different types of universities from all over the world, which renders the resulting analyses and comparisons suspect.” These concerns have led the organization to turn away from rankings as an instrument in assessment. The European Commission has not followed this reasoning and has recently decided to publish a call for the second stage of the U-Multirank project. The consortium has not yet publicly replied to LERU’s critique.

Plea for assessments, against bean counting – part 1

“Above all, universities should stand firm in defending the long-term value of their research activity, which is not easy to assess in a culture where return on investment is measured in very short time spans.” This is the main theme of a new position paper recently published by the League of European Research Universities (LERU) about the way universities should handle the evaluation of research. In many ways, it is a sensible report which tries to strike a careful balance between the different interests involved. The report is written by Mary Phillips, former director of Research Planning at University College London and currently adviser to Academic Analytics, a for-profit consultancy in the area of research evaluation (and hence one of CWTS’ competitors). The report is a plea for the combined application of peer review and bibliometrics by university management. It also contains a number of principles that LERU would like to see implemented by universities in their assessment procedures.

The point of departure of the report is the observation that assessments have become part and parcel of the university. At the same time, the types of assessments possible and the different methodologies have exploded. This stimulates an “obsession with measurement and monitoring, which may result in a ‘bean counting’ culture detracting from the real quality of research”. Indeed, this has already begun. The dilemmas are made worse by the fact that universities need to deal with large quantities of data and require sophisticated human resource and research management tools, which they often currently lack. On top of all this, funding regimes tend to create incentives which may tempt universities to, as the report with a feeling for understatement expresses it, “behave in certain ways, sometimes with unfortunate consequences”.

One of the implications is that any assessment system must be sensitive to possible perverse incentives, should take disciplinary differences into account, and should have a long enough time frame, at least five years according to the report. Assessments should “reflect the reality of research”, including the aspirations of the researchers involved. “Thus, senior administrators and academics must take account of the views of those “at the coal-face” of research”. Assessments should be “as transparent as possible”. Universities are advised to improve their data management systems. And researchers “should be encouraged (or compelled) when publishing, to use a unique personal and institutional designation, and to deposit all publications into the university’s publications database”.
