Bibliometrics of individual researchers – the debate in Berlin

The lively debate we had at the ISSI conference in Vienna continued at the STI2013 conference, “Translational twists and turns: science as a socio-economic endeavour”, held 4-6 September in Berlin. A full plenary, chaired by Ben Martin (SPRU), was devoted to the challenges of, and the dilemmas in, applying bibliometrics to the (self-)evaluation of individual researchers. Martin opened the session with the tale of the rise and fall of a star researcher in economics in Germany. Based on a single dataset created in his PhD project, the economist published an impressive number of articles. Because he was so productive, he was able to attract more external research funding. When a German university was seeking to increase its chances of winning one of the Excellence Initiative grants, he seemed the perfect person to hire. A few members of the hiring committee then started to actually read his publications. They were all rather similar, which is not very surprising given that the research was all based on the single dataset from his PhD project. It turned out that he had published a large number of variations of basically the same article in different journals without anyone noticing the duplication. It was the beginning of the end. A number of journals began retracting these publications, though without the cooperation of the researcher, and this process is still ongoing. A sobering tale, according to Martin. He told the story at the start of the debate to warn against the misuse of performance indicators such as the number of publications. For a recent overview of cases of fraud, and of Martin’s experiences as editor of Research Policy, see Martin (2013).

The plenary had a series of presentations, ranging from the state of the debate, to examples of a portfolio approach to individual evaluation, to tensions in science policy with respect to indicator-based assessments, to the ethics of the evaluation of individual researchers. A report of the meeting will be published shortly in the ISSI Newsletter (Wouters et al., 2013). Here I wish to highlight the ethical questions that were the focus of Jochen Gläser’s presentation. Currently, there is no agreement on these questions in the field; it was even questioned whether we actually have an ethical problem at all. According to Peter van den Besselaar, we may have more of a knowledge problem than an ethical problem: it is often not clear what the different patterns in the indicator measurements mean. This is partly because scientometricians often use only a very limited set of databases, such as Scopus or the Web of Science, which, according to Van den Besselaar, makes it more difficult to make the measurements robust. I agree that combining a variety of databases and other data sources (such as surveys, interviews, or national statistical materials) is the way to go; the strongest studies in science studies have often drawn on a diversity of materials.

Nevertheless, I don’t think this absolves us from facing ethical dilemmas, in particular whenever individual researchers are assessed with the help of metrics. In his presentation, Gläser discussed whether we need more explicit ethical guidelines. After all, the bibliometric centres have developed guidelines and include extensive explanations of the limits of their indicator reports. Moreover, the details of the performance indicators are also published in the bibliometric literature. Still, he argued in favour of more attention to the ethics of bibliometrics, because the position of bibliometrics has changed over the years. He identified three relevant developments: an increased demand for bibliometric services in research management; the emergence of “amateur bibliometrics” thanks to the wider availability of data and indicators; and an increased effectiveness of bibliometrics due to more advanced indicators and the increased availability of data sets (including web data). The scope of bibliometric practices is therefore growing, and this requires a more explicit set of guidelines on how to apply bibliometric analyses. This holds for scientometric evaluation in general, but it is particularly pertinent when individual researchers are being assessed. Two indicators play an important role in these assessments, the h-index and the Journal Impact Factor, and neither of them is fit for this role (see Bornmann, 2013, on the h-index). Gläser put forward a number of proposals. In the short term, he proposed to start collecting experiences and case descriptions in which things seem to go wrong in research assessments. In the medium term, he proposed that we develop, as an expert community, a set of guidelines that are made available to research directors, managers, science policy officials and deans, and in which the field reaches some consensus with respect to the state of the art. He also supported a suggestion I had made in a parallel session in Berlin to create an ombuds office for research evaluation, which should be able to look into complaints about the use of bibliometrics by universities and institutes in research management.
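As an aside, since the h-index figures so prominently in these discussions: the minimal Python sketch below (my own illustration, not taken from any of the presentations) computes an h-index from a list of citation counts and shows how two very different publication records can collapse into the same single number, which is part of why the indicator sits so uneasily with individual-level assessment.

    def h_index(citations):
        """Return the h-index: the largest h such that at least h
        publications have at least h citations each."""
        # Sort citation counts from highest to lowest.
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Two rather different careers, same h-index of 4:
    print(h_index([50, 40, 30, 5, 1]))   # a few highly cited papers -> 4
    print(h_index([4, 4, 4, 4, 4, 4]))   # many modestly cited papers -> 4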

We can expect that this debate will continue at the next indicator conferences.

References:

Bornmann, L. (2013). A better alternative to the h index. Journal of Informetrics, 7(1), 100. doi:10.1016/j.joi.2012.09.004

Martin, B. R. (2013). Whither research integrity? Plagiarism, self-plagiarism and coercive citation in an age of research assessment. Research Policy, 42(5), 1005–1014. doi:10.1016/j.respol.2013.03.011

Wouters, P. F., Glänzel, W., Gläser, J., & Rafols, I. (2013). The dilemmas of performance indicators of individual researchers – an urgent debate in bibliometrics. ISSI Newsletter (forthcoming).
