We should urgently improve the ways in which the output of research is assessed by universities and funding agencies, and the dominance of the Journal Impact Factor in these evaluations should end. This is the gist of a call published a week ago by a large group of prominent researchers and research institutes: the San Francisco Declaration on Research Assessment (DORA). The initiative started at a conference in San Francisco last December, organized by the American Society for Cell Biology, and this origin shows in the list of signatories and in the accompanying editorials. The declaration went live together with editorials in Science and in life-science journals such as EMBO Journal, Molecular Biology of the Cell, eLife, and Traffic. At the time of writing, more than a thousand individual researchers and over a hundred scientific institutions have signed the declaration. Among them are AAAS, the Wellcome Trust, EMBO, HEFCE, PNAS, PLOS, and the Open Knowledge Foundation.
DORA has mostly been welcomed by experts in scientometrics and bibliometrics, by science policy scholars, and by leaders of academic institutions, and rightly so. This is not because they are declared enemies of the Journal Impact Factor (JIF), but because of the narrowness of assessment systems centered on a single indicator, which by definition can capture only a small slice of the dimensions relevant to assessing scientific performance. DORA focuses on the JIF, produced by Thomson Reuters in its Journal Citation Reports, but some of its arguments hold for performance indicators in general. The strength of DORA is its plea for recognition of the diversity of types of scientific output. This diversity should be met by a diversity of measures, both qualitative and quantitative. Moreover, the increasingly web-based style of working in science and scholarship enables more advanced and refined measures of production, impact, and influence than rather crude approximations such as the JIF (but this depends on what one wants to measure!).
DORA cites the critique of the JIF as it has been developed in decades of bibliometric and science policy research since the early 1990s. The main problems mentioned are: the strong variation of JIF values across fields, which makes it meaningless to compare JIF values between different fields or even sub-fields; the skewed distribution of citations over the articles within a journal, which means that the journal average is a poor predictor of the prospective citation score of any individual article; and the relative ease with which the JIF can be gamed by journal editors. This body of research is fairly well summarized, albeit not cited in a comprehensive way.
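To make the skewness point concrete, here is a minimal sketch using made-up citation counts for a hypothetical journal (not real data): a single highly cited paper pulls the mean citation rate, which is essentially what a JIF-style indicator reports, far above what a typical article in the journal actually receives.

```python
# Hypothetical citations per article in one journal (illustrative numbers only).
citations = [0, 0, 0, 1, 1, 1, 2, 2, 3, 3, 4, 5, 7, 12, 95]

mean = sum(citations) / len(citations)            # JIF-like average, inflated by one outlier
median = sorted(citations)[len(citations) // 2]   # a more typical article
share_below_mean = sum(c < mean for c in citations) / len(citations)

print(f"mean (JIF-like): {mean:.1f}")              # ~9.1
print(f"median article:  {median}")                 # 2
print(f"articles below the mean: {share_below_mean:.0%}")  # ~87%
```

In this toy distribution most articles sit far below the journal average, which is why the average tells us little about the citation prospects of any individual paper.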
The main weaknesses of DORA show in its specific recommendations and in some confusion between problems specific to the JIF and more generic problems of performance indicators. For example, DORA seems to want to do away with journal-based indicators entirely, while at the same time recommending additional journal indicators. (More on this in a later post.)
Yet the main thrust of DORA is in line with the need to correct for, or warn against, too heavy a reliance on formalized indicators in many universities and institutes. This reliance may have developed at the expense of well-balanced, informed peer review, although we should not underestimate the large amount of well-designed evaluation work that is carried out every day. Of course, peer review itself must also be kept honest, among other things by well-developed indicators of the various dimensions of the process of knowledge creation (such as network positions and gender relationships).
Last year, CWTS published its new research program. One of its main themes is precisely the urgent need to innovate the current systems of research assessment, and the related need to support this with a new research agenda in scientometrics. (More on this in a later post.) At CWTS we are also coordinating the European research project ACUMEN, which aims to support researchers in evaluations with a portfolio of qualitative and quantitative evidence that is valid and reliable at the level of the individual researcher. This project is a large-scale collaboration with a host of scientometric, webometric, and science policy experts and researchers. And we know that many of our colleagues are thinking along the same lines. So it should definitely be possible to build a strong coalition in favor of evaluation practices that are more conducive to the further development of science and creativity.
Next post: a summary of the evidence on JIF