On citation stress and publication pressure

Our article on citation stress and publication pressure in biomedicine went online this week – co-authored with colleagues from the Free University and University Medical Centre Utrecht:

Tijdink, J.K., S. de Rijcke, C.H. Vinkers, Y.M. Smulders, P.F. Wouters, 2014. Publicatiedrang en citatiestress: De invloed van prestatie-indicatoren op wetenschapsbeoefening [Publication pressure and citation stress: the influence of performance indicators on scientific practice]. Nederlands Tijdschrift voor Geneeskunde 158: A7147.

* Dutch only *


Tales from the field: On the (not so) secret life of performance indicators

* Guest blog post by Alex Rushforth *

In the coming months Sarah de Rijcke and I will present at conferences in Valencia and Rotterdam on research from CWTS’s nascent EPIC working group. We very much look forward to drawing on collaborative work from our ongoing ‘Impact of indicators’ project on biomedical research in University Medical Centers (UMCs) in the Netherlands. One of our motivations for the project is that recent years have produced a wealth of social science literature on the effects of formal evaluation in public sector organisations, including universities. Yet too few studies have taken seriously the presence of indicators in the context of one of the university’s core missions: knowledge creation. Fewer still have turned an ethnographic lens on the dynamics of indicators in the day-to-day work of academic knowledge-making. These are deficits we hope to begin addressing through these conferences and beyond.

The puzzle we will address appears, at least at first glance, straightforward enough: what is the role of bibliometric performance indicators in the biomedical knowledge production process? Yet on comparing provisional findings from two contrasting case studies of research groups from the same UMC – one a molecular biology group, the other a statistics group – it quickly becomes apparent that there can be no general answer to this question. We therefore aim to provide not only an inventory of the different ‘roles’ indicators play in these two cases, but also to pose the more interesting analytical question: what conditions and mechanisms explain the observed variation in the roles indicators come to perform?

Owing to their persistent recurrence in the data so far, the indicators we will analyze are the journal impact factor, the H-index, and ‘advanced’ citation-based bibliometric indicators. It should be stressed that our focus on these particular indicators has emerged inductively from observing first-hand the metrics that research groups attended to in their knowledge-making activities. So what have we found so far?

Dutch UMCs constitute particularly apt sites through which to explore this problem, given how central bibliometric assessments have been to the formal evaluations carried out since the UMCs’ inception in the early 2000s. On one level, researchers in both cases encounter such metrics as ‘governance/managerial devices’, that is, as forms of information required of them by external agencies on whom they rely for resources and legitimacy. Examples arise when funding applications, annual performance appraisals, or job descriptions demand such information about an individual’s or group’s past performance. As the findings will show, the information the two groups need to produce their work effectively, and the types of demands made on them by ‘external’ agencies, vary considerably despite their common location in the same UMC. This is one important reason why the role of indicators differs between the cases.

However, this coercive ‘power over’ account is only one dimension of a satisfying answer to our question about the role of indicators. Emerging analysis also reveals the surprising finding that in fields characterized by particularly integrated forms of coordination and standardization (Whitley, 2000) – like our molecular biologists – indicators can in fact function as a core feature of the knowledge-making process. For instance, a performance indicator like the journal impact factor was routinely mobilized informally in researchers’ decision-making as an ad hoc standard against which to evaluate the likely uses of information and resources, and to decide whether time and resources should be spent pursuing them. By contrast, in the less centralized and integrated field of statistical research such an indicator was not so indispensable to routine knowledge-making activities. In the case of the statisticians, we can speculate that indicators are more likely to emerge intermittently as conditions to be met for gaining social and cultural acceptance from external agencies, but are less likely to inform day-to-day decisions. Through our ongoing analysis we aim to unpack further how disciplinary practices interact with the organisation of Dutch UMCs to produce these quite different engagements with indicators.
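For readers less familiar with the metric our molecular biologists leaned on: the standard two-year journal impact factor for year Y is the number of citations received in Y by items the journal published in Y−1 and Y−2, divided by the number of citable items it published in those two years. A minimal sketch with invented numbers (not data from our case studies):

```python
def impact_factor(citations_in_year, citable_items):
    """Two-year journal impact factor for a given year Y.

    citations_in_year: citations received in year Y to items
        the journal published in years Y-1 and Y-2.
    citable_items: citable items (articles, reviews) the journal
        published in years Y-1 and Y-2 combined.
    """
    return citations_in_year / citable_items

# Hypothetical journal: 1,200 citations in 2014 to items from
# 2012-2013, which together comprised 400 citable items.
print(impact_factor(1200, 400))  # -> 3.0
```

The simplicity of the calculation is part of the point: a single average per journal is easy to mobilize in quick, informal decisions about where to send a manuscript.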

The extent to which indicators play central or peripheral roles in research production across academic contexts is an important sociological problem, and posing it enhances our understanding of the complex role of performance indicators in academic life. Much of the existing literature on the evaluation of public organisations has, we feel, painted an exaggerated picture of formal evaluation and research metrics as synonymous with empty ritual and legitimacy (e.g. Dahler-Larsen, 2012). Emerging results here show that, at least in the realm of knowledge production, the picture is more subtle. This theoretical insight prompts us to call for further empirical studies of scholarly fields with different patterns of work organisation, in order to compare our results and develop middle-range theorizing on the mechanisms through which metrics infiltrate knowledge production to fundamental or peripheral degrees. In future this could mean venturing into fields far outside biomedicine, such as history, literature, or sociology. For now, though, we look forward to expanding the biomedical project by conducting analogous case studies at a second UMC.

Indeed, it is through such theoretical developments that we can consider not only the appropriateness of one-size-fits-all models of performance evaluation, but also unpack and problematize discourses about what constitutes ‘misuse’ of metrics. And how convinced should we be that academic life is now saturated and dominated by deleterious metric indicators?


Dahler-Larsen, P. (2012). The Evaluation Society. Stanford, CA: Stanford Business Books.

Whitley, R. (2000). The Intellectual and Social Organization of the Sciences. Oxford: Oxford University Press.

Why do neoliberal universities play the numbers game?

Performance measurement has brought on a crisis in academia. At least, that’s what Roger Burrows (Goldsmiths, University of London) claims in a recent article for The Sociological Review. According to Burrows, academics are at great risk of becoming overwhelmed by a ‘deep, affective, somatic crisis’. This crisis is brought on by the ‘cultural flattening of market economic imperatives’ that fires up increasingly convoluted systems of measure. Burrows places this emergence of quantified control in academia within the broader context of neoliberalism. Though this has been argued before, Burrows gives the discussion a theoretical twist. He does so by drawing on Gane’s (2012) analysis of Foucault’s (1978-1979) lectures on the relation between market and state under neoliberalism. According to Foucault, neoliberal states can only guarantee the freedom of markets by applying the same ‘market logic’ to themselves. In this view, the standard depiction of neoliberalism as passive statecraft is incorrect. This type of management is not ‘laissez-faire’, but actively stimulates competition and privatization strategies.

In the UK, Burrows contends, the simulation of neoliberal markets in academia has largely been channelled through the introduction of audit and performance measures. He argues that these control mechanisms become autonomous entities: they are increasingly used outside the original context of evaluations and take on a much more active role in shaping the everyday work of academics. According to Burrows, neoliberal universities provide fertile ground for a “co-construction of statistical metrics and social practices within the academy.” Among other things, this leads to a reification of individual performance measures such as the H-index. Burrows:

“[I]t is not the conceptualization, reliability, validity or any other set of methodological concerns that really matter. The index has become reified; (…) a number that has become a rhetorical device with which the neoliberal academy has come to enact ‘academic value’.” (p. 361)
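For readers unfamiliar with the index Burrows singles out: a researcher’s h-index is the largest number h such that h of their papers have each been cited at least h times. A minimal sketch, using an invented citation record rather than any real researcher’s data:

```python
def h_index(citation_counts):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers all have >= rank citations
        else:
            break
    return h

# Invented record: five papers cited 10, 8, 5, 4, and 3 times.
print(h_index([10, 8, 5, 4, 3]))  # -> 4
```

That the whole measure reduces to a few lines of sorting and counting underscores Burrows’ point: its appeal lies not in methodological sophistication but in producing a single number per person.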

Interestingly, Burrows’ line of reasoning can in some respects itself be seen as a result of a broader neoliberal context. Neoliberal policies applaud personal autonomy and the individual’s responsibility for their own well-being and professional success. Burrows directly addresses fellow academics (‘we need to obtain critical distance’; ‘we need to understand ourselves as academics’; ‘why do we feel the way we do?’) and concludes that we are all implicated in the ‘autonomization of metric assemblages’ in the academy. Arguably, it is exactly this neoliberal political climate that justifies Burrows’ focus on individual academics’ affective states. With it comes a delegation of responsibility to the level of the individual researcher. It is our own choice if we comply with the metricization of academia. It is our own choice if we decide to work long hours, spend our weekends writing grant proposals and articles and grading students’ exams. According to Gill (2010), academics tend to justify working so hard because they possess a passionate drive for self-expression and pleasure in intellectual work. Paradoxically, Gill argues, it is this drive that feeds a whole range of disciplinary mechanisms and that lets academics internalize a neoliberal subjectivity. We play ‘the numbers game’, as Burrows calls it, because of “a deep love for the ‘myth’ of what we thought being an intellectual would be like.” (p. 15)

Though Burrows raises concerns that are shared by many academics, it is unfortunate that he does not substantiate his claims with empirical data. Apart from his own experience and anecdotal evidence, how do we know that today’s researchers experience the metricization of academia as a ‘deep, affective somatic crisis’? Does it apply to all researchers, is it the same everywhere, and does it hold for all disciplines? These are empirical questions that Burrows does not answer. That said, there is a great need for the types of analyses Burrows and Gill provide, analyses that assess, situate and historicize academic audit cultures. It is not a coincidence that Burrows’ polemic piece emerges from the field of sociology. The social sciences and humanities are increasingly confronted with what Burrows calls the ‘rhetoric of accountability’. It has become a commonplace to argue that they, too, should be held accountable for the taxpayers’ money that is being spent on them. These disciplines, too, should be made auditable by way of standardized, transparent performance measures. I agree with Burrows that this rhetoric should be problematized. In large parts of these fields it is not at all clear how performance should be ‘measured’ in the first place, for example because of differences in publication cultures within these fields and as compared to the natural sciences. And it is precisely because the discussion is ongoing that we are allowed a clear view of the performative effects of a very specific and increasingly dominant evaluation culture that was not modelled by and on these disciplines. What are the consequences? And are there more constructive alternatives?
