“Above all, universities should stand firm in defending the long-term value of their research activity, which is not easy to assess in a culture where return on investment is measured in very short time spans.” This is the main motif of a new position paper recently published by the League of European Research Universities (LERU) on how universities should handle the evaluation of research. In many ways it is a sensible report that tries to strike a careful balance between the different interests involved. It was written by Mary Phillips, former Director of Research Planning at University College London and currently an adviser to Academic Analytics, a for-profit consultancy in the area of research evaluation (and hence one of CWTS’ competitors). The report is a plea for the combined application of peer review and bibliometrics by university management. It also contains a number of principles that LERU would like to see implemented by universities in their assessment procedures.
The report’s point of departure is the observation that assessments have become part and parcel of university life. At the same time, the range of possible assessment types and methodologies has exploded. This stimulates “an obsession with measurement and monitoring, which may result in a ‘bean counting’ culture detracting from the real quality of research”. Indeed, this has already begun. The dilemmas are made worse by the fact that universities need to deal with large quantities of data and require sophisticated human resource and research management tools, which they currently often lack. On top of all this, funding regimes tend to create incentives that may tempt universities to, as the report puts it with a feeling for understatement, “behave in certain ways, sometimes with unfortunate consequences”.
One of the implications is that any assessment system must be sensitive to possible perverse incentives, should take disciplinary differences into account, and should have a long enough time frame, at least five years according to the report. Assessments should “reflect the reality of research”, including the aspirations of the researchers involved: “Thus, senior administrators and academics must take account of the views of those ‘at the coal-face’ of research”. Assessments should be “as transparent as possible”. Universities are advised to improve their data management systems. And researchers “should be encouraged (or compelled) when publishing, to use a unique personal and institutional designation, and to deposit all publications into the university’s publications database”.