On exploding ‘evaluation machines’ and the construction of alt-metrics

The emergence of web-based ways to create and communicate new knowledge is affecting long-established scientific and scholarly research practices (cf. Borgman 2007; Wouters, Beaulieu, Scharnhorst, & Wyatt 2013). This move to the web is spawning a need for tools to track and measure a wide range of online communication forms and outputs. By now there is considerable differentiation both in the kinds of social web tools (e.g. Mendeley, F1000, Impact Story) and in the outputs they track (e.g. code, datasets, nanopublications, blogs). The expectations surrounding the explosion of tools and big ‘alt-metric’ data (Priem et al. 2010; Wouters & Costas 2012) marshal resources at various scales and gather highly diverse groups in pursuing new projects (cf. Brown & Michael 2003; Borup et al. 2006 in Beaulieu, de Rijcke & Van Heur 2013).

Today we submitted an abstract for a contribution to Big Data? Qualitative approaches to digital research (edited by Martin Hand & Sam Hillyard and under contract with Emerald). In the abstract we propose to zoom in on a specific set of expectations around altmetrics: their alleged usefulness for research evaluation. Of particular interest to this volume is how altmetric information is expected to enable a more comprehensive assessment of (1) social scientific outputs, which are under-represented in citation databases, and (2) wider types of output associated with societal relevance, which are not covered in citation analysis and are allegedly more prevalent in the social sciences.

In our chapter we address a number of these expectations by analyzing (1) the discourse of the “altmetrics movement” and the expectations and promises formulated by key actors involved in “big data” (including commercial entities); and (2) the construction of these altmetric data and their alleged validity for research evaluation purposes. We will combine discourse analysis with bibliometric, webometric and altmetric methods, using each approach to interrogate the assumptions of the others (Hicks & Potter 1991).

Our contribution will show, first of all, that altmetric data do not simply ‘represent’ other types of outputs; they also actively create a need for these types of information. These needs will have to be aligned with existing accountability regimes. Secondly, we will argue that researchers will develop forms of regulation that will partly be shaped by these new types of altmetric information. They are not passive recipients of research evaluation but play an active role in assessment contexts (cf. Aksnes & Rip 2009; Van Noorden 2010). Thirdly, we will show that the emergence of altmetric data for evaluation is another instance (following the creation of the citation indexes and the use of web data in assessments) of transposing traces of communication into a framework of evaluation and assessment (Dahler-Larsen 2012, 2013; Wouters 2014).

By making explicit the implications of transferring altmetric data from the framework of the communication of science to the framework of research evaluation, we aim to contribute to a better understanding of the complex dynamics within which new generations of researchers will have to work and be creative.

Aksnes, D. W., & Rip, A. (2009). Researchers’ perceptions of citations. Research Policy, 38(6), 895–905.

Beaulieu, A., van Heur, B., & de Rijcke, S. (2013). Authority and Expertise in New Sites of Knowledge Production. In P. Wouters, A. Beaulieu, A. Scharnhorst, & S. Wyatt (Eds.), Virtual Knowledge: Experimenting in the Humanities and the Social Sciences (pp. 25–56). MIT Press.

Borup, M., Brown, N., Konrad, K., & van Lente, H. (2006). The sociology of expectations in science and technology. Technology Analysis & Strategic Management, 18(3/4), 285–298.

Brown, N., & Michael, M. (2003). A sociology of expectations: Retrospecting prospects and prospecting retrospects. Technology Analysis & Strategic Management, 15(1), 3–18.

Costas, R., Zahedi, Z. & Wouters, P. (n.d.). Do ‘altmetrics’ correlate with citations? Extensive comparison of altmetric indicators with citations from a multidisciplinary perspective.

Dahler-Larsen, P. (2012). The Evaluation Society. Stanford University Press.

Dahler-Larsen, P. (2013). Constitutive Effects of Performance Indicators. Public Management Review, (May), 1–18.

Galligan, F., & Dyas-Correia, S. (2013). Altmetrics: Rethinking the Way We Measure. Serials Review, 39(1), 56–61.

Hicks, D., & Potter, J. (1991). Sociology of Scientific Knowledge: A Reflexive Citation Analysis of Science Disciplines and Disciplining Science. Social Studies of Science, 21(3), 459 –501.

Priem, J., Taraborelli, D., Groth, P., & Neylon, C. (2010). Altmetrics: A manifesto. http://altmetrics.org/manifesto/

Van Noorden, R. (2010). Metrics: A profusion of measures. Nature, 465, 864–866.

Wouters, P., & Costas, R. (2012). Users, narcissism and control: Tracking the impact of scholarly publications in the 21st century. Utrecht: SURF Foundation.

Wouters, P. (2014). The Citation: From Culture to Infrastructure. In B. Cronin & C. R. Sugimoto (Eds.), Next Generation Metrics: Harnessing Multidimensional Indicators Of Scholarly Performance (Vol. 22, pp. 48–66). MIT Press.

Wouters, P., Beaulieu, A., Scharnhorst, A., & Wyatt, S. (Eds.) (2013). Virtual Knowledge: Experimenting in the Humanities and the Social Sciences. MIT Press.


Selling science to Nature

On Saturday 22 December, the Dutch national newspaper NRC published an interview with Hans Clevers, professor of molecular genetics and president of the Royal Netherlands Academy of Arts and Sciences (KNAW). The interview is the latest in a series of public performances following Clevers’ installation as president in 2012, in which he responds to current concerns about the need for revisions in the governance of science. The recent Science in Transition initiative, for instance, stirred quite some debate in the Netherlands, also within the Academy. One of the most hotly debated issues is quality control, an issue that encompasses the implications of increasing publication pressure, purported flaws in the peer review system, impact factor manipulation, and the need for new forms of data quality management.

Clevers currently combines the KNAW presidency with his group leadership at the Hubrecht Institute in Utrecht. In both roles he actively promotes data sharing. He told the NRC that he encourages his own researchers to share all findings. “Everything is for the entire lab. Asians in particular sometimes need to be scolded for trying to keep things to themselves.” When it comes to publishing the findings, it is Clevers who decides who contributed most to a particular project and who deserves to be first author. “This can be a big deal for the careers of PhD students and post-docs.” The articles for ‘top journals’ like Nature or Science he always writes himself. “I know what the journals expect. It requires great precision. A title consists of 102 characters. It should be spot-on in terms of content, but it should also be exciting.”

Clevers does acknowledge some of the problems with the current governance of science: the issue of data sharing and mistrust mentioned above, but also, for instance, the systematic imbalance in the academic reward system when it comes to appreciation for teaching. However, he does not seem very concerned about publication pressure. He has argued on numerous occasions that publishing is simply part of daily scientific life. According to him, the number of articles is not a leading criterion; in most fields, it is the quality of the papers that matters most. With these statements Clevers places himself squarely within the mainstream view on scientific management. But there are also dissenting opinions, and sometimes they are voiced by other prominent scientists from the same field. Last month, Nobel Prize winner Randy Schekman, professor of molecular and cell biology at UC Berkeley, declared a boycott of three top-tier journals at the Nobel Prize ceremony in Stockholm. Schekman argued that Nature, Cell, Science and other “luxury” journals are damaging the scientific process by artificially restricting the number of papers they accept, by making improper use of the journal impact factor as a marketing tool, and by depending on editors who favor spectacular findings over the soundness of results.

The Guardian published an article in which Schekman reiterated his critique. The newspaper also made an inventory of the reactions of the editors-in-chief of Nature, Cell and Science. They washed their hands of the matter; some even shifted the problems onto the scientists themselves. Philip Campbell, editor-in-chief of Nature, referred to a recent survey by the Nature Publishing Group which revealed that “[t]he research community tends towards an over-reliance in assessing research by the journal in which it appears, or the impact factor of that journal.”

In a previous blog post we discussed a call for an in-depth study of the editorial policies of Nature, Science, and Cell by Jos Engelen, president of the Netherlands Organization for Scientific Research (NWO). It is worth reiterating some parts of his argument. According to Engelen, the reputation of these journals, published by commercial publishers, is based on ‘selling’ innovative science derived from publicly funded research. Their “extremely selective publishing policy” has turned these journals into ‘brands’ that have ‘selling’ as their primary interest, and not, for example, “promoting the best researchers.” Here we see the contours of a disagreement with Clevers. Without wanting to read too much into his statements, we note that Clevers on more than one occasion treats the status and quality of Nature, Cell and Science as apparently self-evident, as the main current of thought would have it. But in the NRC interview Clevers also does something else: by explaining his policy of writing the ‘top papers’ himself, he reveals that these papers are as much the result of craft, reputation and access as they are an ‘essential’ quality of the science behind them. Knowing how to write attractive titles is a start, but it is certainly not the only skill needed in this scientific reputation game.

The stakes in scientific publishing are high; that much is clear. Articles in ‘top’ journals can make, break or sustain careers. One possible explanation for the status of these journals is of course that researchers have become highly reliant on external funding for the continuation of their research. And highly cited papers in high-impact journals have become the main ‘currency’ in science, as theoretical physicist Jan Zaanen called it in a lecture at our institute. The fact that articles in top journals serve as de facto proxies for the quality of researchers is perhaps not problematic in itself (or is it?). But it certainly becomes tricky if these same journals increasingly treat short-term newsworthiness as an important criterion in their publishing policies, and if peer review committee work also increasingly revolves around selecting the projects that are most likely to have short-term success. Frank Miedema (one of the initiators of Science in Transition), among others, argues in his booklet Science 3.0 that this is indeed the case. Clearly, there is a need for thorough research into these dynamics. How prevalent are they? And what are the potential consequences for longer-term research agendas?
