Tales from the field: On the (not so) secret life of performance indicators

* Guest blog post by Alex Rushforth *

In the coming months Sarah De Rijcke and I have been accepted to present at conferences in Valencia and Rotterdam on research from CWTS’s nascent EPIC working group. We very much look forward to drawing on collaborative work from our ongoing ‘Impact of indicators’ project on biomedical research in University Medical Centers (UMCs) in the Netherlands. One of our motivations for the project is that there has been a wealth of social science literature in recent times about the effects of formal evaluation in public sector organisations, including universities. Yet too few studies have taken seriously the presence of indicators in the context of one of the universities’ core missions: knowledge creation. Fewer still have taken an ethnographic lens to the dynamics of indicators in the day-to-day context of academic knowledge work. These are deficits we hope to begin addressing through these conferences and beyond.

The puzzle we will be addressing here appears – at least at first glance – straightforward enough: what is the role of bibliometric performance indicators in the biomedical knowledge production process? Yet comparing provisional findings from two contrasting case studies of research groups from the same UMC – one a molecular biology group and the other a statistics group – it quickly becomes apparent that there can be no general answer to this question. As such, we aim to provide not only an inventory of the different ‘roles’ of indicators in these two cases, but also to pose the more interesting analytical question: what conditions and mechanisms explain the observed variations in the roles indicators come to perform?

Owing to their persistent recurrence in the data so far, the indicators we will analyze are the journal impact factor, the H-index, and ‘advanced’ citation-based bibliometric indicators. It should be stressed that our focus on these particular indicators has emerged inductively from observing first-hand the metrics that research groups attended to in their knowledge-making activities. So what have we found so far?

Dutch UMCs constitute particularly apt sites through which to explore this problem, given how central bibliometric assessments have been to the formal evaluations carried out since their inception in the early 2000s. On one level, it is argued that researchers in both cases encounter such metrics as ‘governance/managerial devices’, that is, as forms of information required of them by external agencies on whom they are reliant for resources and legitimacy. Examples can be seen when funding applications, annual performance appraisals, or job descriptions demand such information about an individual’s or group’s past performance. As the findings will show, the information the two groups need to produce their work effectively, and the types of demands made on them by ‘external’ agencies, vary considerably despite their common location in the same UMC. This is one important reason why the role of indicators differs between the cases.

However, this coercive ‘power over’ account is but one dimension of a satisfying answer to our question about the role of indicators. Emerging analysis also reveals the surprising finding that in fields characterized by particularly integrated forms of coordination and standardization (Whitley, 2000) – like our molecular biologists – indicators can in fact function as a core feature of the knowledge-making process. For instance, a performance indicator like the journal impact factor was routinely mobilized informally in researchers’ decision-making as an ad hoc standard against which to evaluate the likely usefulness of information and resources, and to decide whether time and resources should be spent pursuing them. By contrast, in the less centralized and integrated field of statistical research, such an indicator was not so indispensable to routine knowledge-making activities. In the case of the statisticians it is possible to speculate that indicators are more likely to emerge intermittently, as conditions to be met for gaining social and cultural acceptance from external agencies, but are less likely to inform day-to-day decisions. Through our ongoing analysis we aim to unpack further how disciplinary practices interact with the organisation of Dutch UMCs to produce quite varying engagements with indicators.

The extent to which indicators play central or peripheral roles in research production processes across academic contexts is an important sociological problem, and posing it should enhance our understanding of the complex role of performance indicators in academic life. We feel much of the existing literature on the evaluation of public organisations has tended to paint an exaggerated picture of formal evaluation and research metrics as synonymous with empty ritual and legitimacy (e.g. Dahler-Larsen, 2012). Emerging results here show that – at least in the realm of knowledge production – the picture is more subtle. This theoretical insight prompts us to suggest that further empirical studies are needed of scholarly fields with different patterns of work organisation, in order to compare our results and develop middle-range theorizing on the mechanisms through which metrics infiltrate knowledge production processes to fundamental or peripheral degrees. In future this could mean venturing into fields far outside biomedicine, such as history, literature, or sociology. For now, though, we look forward to expanding the biomedical project by conducting analogous case studies in a second UMC.

Indeed, it is through such theoretical developments that we can not only consider the appropriateness of one-size-fits-all models of performance evaluation, but also unpack and problematize discourses about what constitutes ‘misuse’ of metrics. And indeed, how convinced should we be that academic life is now saturated and dominated by deleterious metric indicators?

References

Dahler-Larsen, P. (2012). The Evaluation Society. Stanford, CA: Stanford Business Books.

Whitley, R. (2000). The Intellectual and Social Organization of the Sciences. Oxford: Oxford University Press.

Fraud in Flemish science

Almost half of Flemish medical researchers have witnessed a form of scientific fraud in their direct environment. One in twelve has engaged in data fraud or in “massaging” data to make the results fit the hypothesis. Many mention “publication pressure” as an important cause of this behaviour. This is the outcome of the first public survey among Flemish medical researchers about scientific fraud. The survey was conducted in November and December 2012 by the journal Eos. Joeri Tijdink, who had conducted a similar survey among medical professors in the Netherlands, supervised the Flemish survey.

It is not clear to what extent the survey results are representative of the conduct of all medical researchers in Flanders. The survey was distributed through the deans of the medical faculties in the form of an anonymous questionnaire. The response rate was fairly low (19% of the 2,548 researchers responded, and 315 (12%) filled it in completely). Yet the results indicate that fraud may be a much more serious problem than is usually acknowledged in the Flemish scientific system. Since the installation of the Flemish university committees on scientific integrity, no more than 4 cases of scientific misconduct have been recognized (3 involved plagiarism; 1 researcher committed fraud). This is clearly lower than expected. The survey, however, consistently reports a higher incidence of scientific misconduct than comparable international surveys do. For example, having witnessed misconduct is reported by 14% of researchers according to a meta-study by Daniele Fanelli, but in Flanders this is 47%. Internationally, 2% of researchers admit to having been involved themselves in data massage or fraud, whereas in Flanders this is 8%. The discrepancy can be explained in two ways. One is that the university committees are not yet effective in getting out the truth. The other is that the survey is biased towards researchers who have witnessed misconduct in some way. Given that both explanations seem plausible, the gap between the survey results and the formal record of misconduct in Flanders may best be explained by a combination of both mechanisms. After all, it is hard to understand why Flemish medical researchers would be more (or less) prone to misconduct than medical researchers in, say, the Netherlands, the UK, or France.
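As a back-of-the-envelope check, the sketch below simply re-derives the response figures and the Flanders-versus-international ratios quoted above; the variable names and the rounding are our own assumptions and are not part of the Eos survey analysis.

```python
# Minimal sketch (our own reconstruction, not the Eos survey analysis):
# re-derive the response figures and the Flanders vs. international ratios.

invited = 2548                      # researchers reached via the deans
responded = round(0.19 * invited)   # ~19% response rate -> roughly 484
completed = 315                     # fully completed questionnaires

print(f"responded: {responded} ({responded / invited:.0%} of invited)")
print(f"completed: {completed} ({completed / invited:.0%} of invited)")

# Reported incidence: Flanders vs. the international meta-analysis (Fanelli, 2009)
witnessed = {"Flanders": 0.47, "international": 0.14}
admitted = {"Flanders": 0.08, "international": 0.02}

print(f"witnessed misconduct: {witnessed['Flanders'] / witnessed['international']:.1f}x the international figure")
print(f"admitted misconduct: {admitted['Flanders'] / admitted['international']:.1f}x the international figure")
```

On these reported figures, the Flemish rates are roughly three to four times the international benchmarks, which is what motivates the two candidate explanations discussed above.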

According to Eos, publication pressure is one of the causes of misconduct. This still remains to be proven. However, both in the earlier survey by Tijdink and Smulders and in this survey, a large number of researchers mention “publication pressure” as a driving factor. As has been argued in the Dutch debate about the fraud by the psychologist Diederik Stapel, mentioning “publication pressure” as a cause may be motivated by a desire for legitimation. After all, all researchers are pressured to publish on a regular basis, while only a small minority is involved in misconduct (as far as we know now). So the response may be part of a justification discourse rather than a causal analysis. My own intuition is that the problem is not publication pressure but reputation pressure, a subtle but important difference. Nevertheless, if a large minority of researchers (47% of the Flemish respondents, for example) point to “publication pressure” as a cause of misconduct, we may have a serious problem in the scientific system, whether or not these researchers are right. A problem that can no longer be ignored.

Literature:

Fanelli, D. (2009). How Many Scientists Fabricate and Falsify Research? A Systematic Review and Meta-Analysis of Survey Data. PLoS ONE, 4(5), e5738. doi:10.1371/journal.pone.0005738

Tijdink, J.K., Vergouwen, A.C.M., & Smulders, Y.M. (2012). Ned Tijdschr Geneeskd, 156, A5715.
