From 13 to 15 September 2012, the departments of Sociology and Anthropology at Goldsmiths are hosting an interdisciplinary conference on ‘practicing comparisons’. Here’s the call for papers. Together with Roland Bal and Iris Wallenburg (Institute of Health Policy and Management, Erasmus University), we submitted the following abstract. This cooperation is part of a new line of research on the impacts of evaluation processes on knowledge production.
“Comparing comparisons. On rankings and accounting in hospitals and academia
Not much research has yet been done on the ways in which rankings affect academic and hospital performance. The little evidence there is focuses on the university sector. Here, interest in rankings is driven by a competition in which universities are made comparable on the basis of ‘impact’. The rise of performance-based funding schemes is one of the driving forces. Some studies suggest that shrinking governmental research funding from the 1980s onward has resulted in “academic capitalism” (cf. Slaughter & Leslie 1997). By now, universities have set up special organizational units and devised specific policy measures in response to ranking systems. Recent studies point to the normalizing and disciplining powers associated with rankings, and to ‘reputational risk’, as explanations for organizational change (Espeland & Sauder 2007; Power et al. 2009; Sauder & Espeland 2009). Similar claims have been made for the hospital sector in relation to the effects of benchmarks (Triantafillou 2007). Here, too, we witness a growing emphasis on ‘reputation management’ and on the use of rankings in quality assessment policies.
The modest body of empirical research done thus far mainly focuses on higher management levels and/or on large institutional infrastructures. Instead, we propose to analyze hospital and university responses to rankings from a whole-organization perspective. Our work zooms in on so-called composite performance indicators, which combine many underlying specific indicators (e.g. patient experience, outcome, process, and structure indicators in the hospital setting, and citation impact, international outlook, and teaching in university rankings). Among other things, we are interested in the kinds of ordering mechanisms (Felt 2009) that rankings bring about at multiple organizational levels, ranging from the managers’ office and the offices of coding staff to the lab benches and hospital beds.
In the paper, we first analyze how rankings contribute to making organizations auditable and comparable. Second, we focus on how rankings translate, purify, and simplify heterogeneity into an ordered list of comparable units, and on the kinds of realities that are enacted through these rankings. Third, drawing on recent empirical philosophical and anthropological work (Mol 2002, 2011; Strathern 2000, 2011; Verran 2011), we ask how we as analysts ‘practice comparison’ in our own attempt to make hospital and university rankings comparable.”