On organizational responses to rankings

From 13 to 15 September 2012, the departments of Sociology and Anthropology at Goldsmiths are hosting an interdisciplinary conference on ‘practicing comparisons’. Here’s the call for papers. We submitted the following abstract together with Roland Bal and Iris Wallenburg (Institute of Health Policy and Management, Erasmus University). This cooperation is part of a new line of research on the impacts of evaluation processes on knowledge production.

“Comparing comparisons. On rankings and accounting in hospitals and academia

Not much research has yet been done on the ways in which rankings affect academic and hospital performance. The little evidence there is focuses on the university sector. Here, an interest in rankings is driven by a competition in which universities are being made comparable on the basis of ‘impact’. The rise of performance-based funding schemes is one of the driving forces. Some studies suggest that shrinking governmental research funding from the 1980s onward has resulted in “academic capitalism” (cf. Slaughter & Leslie 1997). By now, universities have set up special organizational units and have devised specific policy measures in response to ranking systems. Recent studies point to the normalizing and disciplining powers associated with rankings and to ‘reputational risk’ as explanations for organizational change (Espeland & Sauder 2007; Power et al. 2009; Sauder & Espeland 2009). Similar claims have been made for the hospital sector in relation to the effects of benchmarks (Triantafillou 2007). Here, too, we witness a growing emphasis on ‘reputation management’ and on the use of rankings in quality assessment policies.

The modest empirical research done thus far mainly focuses on higher management levels and/or on large institutional infrastructures. Instead, we propose to analyze hospital and university responses to rankings from a whole-organization perspective. Our work zooms in on so-called composite performance indicators that combine many underlying specific indicators (e.g. patient experiences, outcome, and process and structure indicators in the hospital setting, and citation impact, international outlook, and teaching in university rankings). Among other things, we are interested in the kinds of ordering mechanisms (Felt 2009) that rankings bring about on multiple organizational levels – ranging from the managers’ office and the offices of coding staff to the lab benches and hospital beds.

In the paper, we first of all analyze how rankings contribute to making organizations auditable and comparable. Secondly, we focus on how rankings translate, purify, and simplify heterogeneity into an ordered list of comparable units, and on the kinds of realities that are enacted through these rankings. Thirdly, and drawing on recent empirical philosophical and anthropological work (Mol 2002, 2011; Strathern 2000, 2011; Verran 2011), we ask how we as analysts ‘practice comparison’ in our attempt to make hospital and university rankings comparable.”
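As an aside for readers less familiar with how such composite indicators are built: they typically rescale each underlying indicator and then aggregate the results with fixed weights into a single score that can be ranked. The sketch below is purely illustrative; the indicator names, weights, and values are our own assumptions and do not correspond to any actual hospital or university ranking.

```python
# Illustrative sketch of a composite performance indicator:
# normalize each sub-indicator to 0-1, then take a weighted sum.
# Indicator names, weights, and values are invented for illustration only.

hospitals = {
    "Hospital A": {"patient_experience": 7.8, "outcome": 0.92, "process": 0.85},
    "Hospital B": {"patient_experience": 8.4, "outcome": 0.88, "process": 0.90},
    "Hospital C": {"patient_experience": 6.9, "outcome": 0.95, "process": 0.78},
}
weights = {"patient_experience": 0.4, "outcome": 0.4, "process": 0.2}

def make_normalizer(values):
    """Return a min-max normalizer for one sub-indicator."""
    lo, hi = min(values), max(values)
    return lambda v: (v - lo) / (hi - lo) if hi > lo else 0.0

normalizers = {
    ind: make_normalizer([h[ind] for h in hospitals.values()]) for ind in weights
}

# Composite score: weighted sum of the normalized sub-indicators.
scores = {
    name: sum(weights[ind] * normalizers[ind](vals[ind]) for ind in weights)
    for name, vals in hospitals.items()
}

# The ordered list is what the ranking publishes; the heterogeneity disappears.
for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:.2f}")
```

Every step in such a calculation – the choice of indicators, the normalization, the weights – is itself a comparison practice of the kind the abstract proposes to study.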


“Looking-glass upon the wall, Who is fairest of us all?” (Part 4)

In our last post, we discussed four arguments in favour of alternative metrics (more details can be found in our recent report on altmetrics, “Users, narcissism, and control”). To recapitulate, the four arguments are: openness, speed, scholarly output diversity, and the measurement of more impact dimensions. How do these arguments relate to the available empirical evidence?

Speed is probably the weakest argument. Of course, it is seductive to feel able to monitor “in real time” how a publication reverberates through the communication system. The Altmetrics Manifesto (Priem, Taraborelli, Groth, & Neylon, 2010) even advocates the use of “real-time recommendation and collaborative filtering systems” in funding and promotion decisions. But how wise is this? To really know what a particular publication has contributed takes time, if only because the publication must be read by enough people. Faster is not always better. It may even be the other way around, as the sociologist Dick Pels has argued in his book celebrating “slow science” (Pels, 2003).

Moreover – and this relates to the fourth argument – we do not yet know enough about scholarly communication to see what all the measurable data might mean. For example, it does not make much sense to celebrate one instance of a correlation between the number of tweets and citations if we do not fully understand what a tweet might mean (Davis, 2012). An early signal that research may be interesting plays a very different role from a later-stage scholarly citation. And different modalities of communication may also represent different dimensions of research quality. For example, a recent study compared research blogging in the area of chemistry with journal publications. It found that blogging is more oriented towards the social implications of research, tends to focus on high-impact journals, is more immediate than scientific publishing, and provides more context for the research (Groth & Gurney, 2010). We need many more such studies before we jump to conclusions about the value of measuring blogs, websites, tweets, and so on. In other words, the fourth argument for alternative metrics is an important research agenda in itself.
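To make concrete what such a tweet–citation comparison amounts to, here is a minimal sketch. The counts are invented, and the choice of a Spearman rank correlation is our assumption; it merely illustrates the kind of calculation behind the correlations discussed above.

```python
# A minimal, illustrative sketch of a tweet-citation correlation analysis.
# The counts below are invented; real studies draw on large article samples.
from scipy.stats import spearmanr

# Hypothetical per-article counts: (tweets in the first week, citations after two years)
articles = [
    (120, 15), (3, 4), (45, 9), (0, 2), (310, 22),
    (8, 1), (60, 30), (2, 0), (15, 7), (95, 12),
]

tweets = [t for t, _ in articles]
citations = [c for _, c in articles]

# Spearman's rho is often preferred here because both distributions are heavily skewed.
rho, p_value = spearmanr(tweets, citations)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")

# Even a sizeable rho on one sample says nothing about *why* articles are tweeted,
# which is precisely the interpretive gap discussed above.
```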

This also holds for the third argument: diversity. Researchers write blogs, update databases, build instruments, do field work, conduct applied research to solve societal problems, train future generations of researchers, develop prototypes, and contribute their expertise to countless panels and newspaper columns. All of this is not well represented in international peer-reviewed journals (although it is sometimes reflected indirectly). Traditional citation analysis captures an important slice of scholarly and scientific output, provided the field is well represented in the Web of Science (which is not the case for most of the humanities). Yet, however valuable, it is still only a thin slice of this diverse scientific production. Perhaps alternative metrics will be able to reflect this diversity in a more satisfactory way than citation analysis. Before we can affirm that this is indeed the case, we need much more case study research.

This brings me to the last argument, openness. The two most popular citation indexes (Web of Science and Scopus) are both proprietary. Together with their relatively narrow focus, this has led many scholars to look for open, freely accessible alternatives. And some think they have found one in Google Scholar, the most popular search engine for scholarly work. I think it is indisputable that the publication system is moving towards a future with more open access media as default options. But there is a snag. Although Google Scholar is freely available, its database is certainly not open. On the contrary, how it is created and presented to the users of the search engine is one of the better-kept secrets of the for-profit company Google. In fact, for the purpose of evaluation, it is less rather than more transparent than the Web of Science or Scopus. In the framework of research evaluation, transparency and consistency of data and indicators may actually be more important than free availability.

References:

Davis, P. M. (2012). Tweets, and Our Obsession with Alt Metrics. The Scholarly Kitchen. Retrieved January 8, 2012, from http://scholarlykitchen.sspnet.org/2012/01/04/tweets-and-our-obsession-with-alt-metrics/

Groth, P., & Gurney, T. (2010). Studying Scientific Discourse on the Web using Bibliometrics: A Chemistry Blogging Case Study. Retrieved from http://journal.webscience.org/308/2/websci10_submission_48.pdf

Pels, D. (2003). Unhastening science: Autonomy and reflexivity in the social theory of knowledge. Routledge.

Priem, J., Taraborelli, D., Groth, P., & Neylon, C. (2010). Altmetrics: A manifesto. Retrieved January 8, 2012, from http://altmetrics.org/manifesto/
