Booming bibliometrics in biomedicine: the Dutch case

Last week, I gave a talk at a research seminar organized by the Centre for the History of Science, Technology and Medicine at the University of Manchester. The talk was based on exploratory archival research on the introduction of bibliometrics in Dutch biomedicine.

Why did performance-based measurement catch on so quickly in the Dutch medical sciences? Of course, this is part of a larger picture: from the 1980s onward, an unprecedented growth of evaluation institutions and procedures took place across scientific research. In tandem with the development and first applications of performance indicators, discussions about “gaming the system” surfaced (cf. MacRoberts and MacRoberts 1989). In the talk, I presented results from a literature search on how strategic behavior has been discussed in international peer-reviewed and professional medical journals from the 1970s onwards. The authors’ main concerns boiled down to three things. The first was irresponsible authorship (co-authorship, salami slicing, etc.). The second was a growing pressure to publish and its relationship to scientific fraud. The third was the rise of a group of evaluators with growing influence but seemingly without a clear consensus about their own professional standards. Typically, these concerns started to be voiced from the beginning of the 1980s onwards.

Around the same time, two relevant developments took place in the Netherlands. First, the earliest Dutch science policy document on assessing the sciences was published; it focused entirely on the medical sciences (RAWB 1983). The report was promoted as a model for priority setting in scientific research and was the first to break with internal peer review as the sole basis for research assessment by including citation analysis (Wouters 1999). Second, a new allocation system was introduced at our own university here in Leiden in 1975. Anticipating a move at the national level from block grant funding to separate funding channels for teaching and research, a procedure was introduced that effectively required faculties to submit existing and new research projects for assessment to a separate funding channel, in order to avoid a decrease in research support in the near future. Ton van Raan, future director of CWTS, outlined specific methods for creating this separate funding model in the International Journal of Institutional Management in Higher Education (Van Raan & Frankfort 1980). Van Raan and his co-author, at the time affiliated with the university’s Science Policy Unit, argued that Leiden should move away from an ‘inefficient’ allocation system based on institutional support via student numbers, because it hindered scientific productivity and excellence. According to Van Raan [personal communication], this so-called ‘Z-procedure’ created the breeding ground for the establishment of a bibliometric institute in Leiden some years later.

Leiden University started the Z-procedure project inventories in 1975, dividing projects into those inside and those outside of research priorities. The university started to include publication counts from 1980 onwards. As far as the medical sciences are concerned, the annual Scientific Reports of 1978 to 1993 show that their total number of publications rose from 1401 in 1981 to 2468 in 1993. This number went up to roughly 7500 in 2008 (source: NFU). More advanced bibliometrics were introduced in the mid-1980s. This shift from counting ‘brute numbers’ to assembling complex multidimensional operations (cf. Bowker 2005) also entailed a new representation of impact and quality: aggregated and normalized citation counts.
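To make concrete what such a normalized indicator involves, here is a minimal sketch in Python in the spirit of a field-normalized citation score (akin to what later became known as the crown indicator or MNCS). All data, names, and baseline values below are hypothetical illustrations, not CWTS’s actual method or figures: each publication’s citation count is divided by the average citation count of publications from the same field and year, and the results are averaged.

```python
# Minimal sketch of a field-normalized citation indicator.
# All records and baseline values are hypothetical, for illustration only.

from statistics import mean

# Hypothetical publication records: (citations, field, year)
publications = [
    (12, "cardiology", 1990),
    (3,  "cardiology", 1990),
    (25, "immunology", 1991),
    (0,  "immunology", 1991),
]

# Hypothetical average citation rates per (field, year) reference set
field_baselines = {
    ("cardiology", 1990): 8.0,
    ("immunology", 1991): 10.0,
}

def normalized_citation_score(pubs, baselines):
    """Average of citation counts divided by their field/year baseline;
    a score of 1.0 means citation impact at the reference average."""
    return mean(c / baselines[(field, year)] for c, field, year in pubs)

print(f"Normalized citation score: "
      f"{normalized_citation_score(publications, field_baselines):.2f}")
# Prints 1.09: this hypothetical set is cited slightly above its baselines.
```

The point of the sketch is that the indicator is no longer a ‘brute number’: it depends on the choice of reference sets, baselines, and aggregation method, which is precisely the kind of multidimensional construction referred to above.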

Back to the larger picture. A growing use of performance indicators from the 1980s onwards can be ascribed to, among other things: an increased economic and social role of science and technology; an increase in the scale of research institutes; the limitations and costs of peer review procedures; and a general move towards formal evaluation of professional work. It is usually argued that, under the influence of new public management and neoliberalism, authorities decided to model large parts of the public sector, including higher education, on control mechanisms that were formerly reserved for the private sector (cf. Power 1999; Strathern 2000). It is necessary to dig deeper into the available historical sources to find out whether these explanations suffice. If so, aggregated citation scores may have come to prosper in a specific political economy that values efficiency, transparency, and quality assurance models. In the discussion after my talk, Vladimir Jankovic suggested that I also look into Benjamin Hunt’s The Timid Corporation (2003). Hunt argues that while neoliberalism is often associated with economically motivated deregulation, what has in fact been going on from the 1980s onward is socially oriented regulation of individuals and groups, aimed at taming the risks and impact of change through formal procedures. Two additional ways of analyzing the rise of such a precautionary approach may be found in the work of sociologists Frank Furedi (“Culture of Fear”, 1997) and Ulrich Beck (“Risk Society”, 1992). When aversion to risk and fear of change come to be perceived as enduring conditions, a greater reliance on procedures and performance indicators may increasingly be seen as a means to control openness and uncertainty. It is worth exploring whether these sociological explanations can help account for some of the dynamics in biomedicine I alluded to above. It may be a first step in finding out whether there is indeed something specific to medical research that makes it particularly receptive to metrics-based research evaluation.


Book release

Today we are witnessing dramatic changes in the way scientific and scholarly knowledge is created, codified, and communicated. This transformation is connected to the use of digital technologies and the virtualization of knowledge. In this book, scholars from a range of disciplines consider just what, if anything, is new when knowledge is produced in new ways. Does knowledge itself change when the tools of knowledge acquisition, representation, and distribution become digital? Issues of knowledge creation and dissemination go beyond the development and use of new computational tools. The book, which draws on work from the Virtual Knowledge Studio, brings together research on scientific practice, infrastructure, and technology. Focusing on issues of digital scholarship in the humanities and social sciences, the contributors discuss who can be considered legitimate knowledge creators, the value of “invisible” labor, the role of data visualization in policy making, the visualization of uncertainty, the conceptualization of openness in scholarly communication, data floods in the social sciences, and how expectations about future research shape research practices. The contributors combine an appreciation of the transformative power of the virtual with a commitment to the empirical study of practice and use.

Virtual Knowledge (MIT Press), edited by Paul Wouters, Anne Beaulieu, Andrea Scharnhorst and Sally Wyatt.
