On exploding ‘evaluation machines’ and the construction of alt-metrics

The emergence of web-based ways to create and communicate new knowledge is affecting long-established scientific and scholarly research practices (cf. Borgman 2007; Wouters, Beaulieu, Scharnhorst, & Wyatt 2013). This move to the web is spawning a need for tools to track and measure a wide range of online communication forms and outputs. By now there is a wide variety of social web tools (e.g. Mendeley, F1000, Impact Story) and of the outputs they track (e.g. code, datasets, nanopublications, blogs). The expectations surrounding the explosion of tools and big ‘alt-metric’ data (Priem et al. 2010; Wouters & Costas 2012) marshal resources at various scales and bring highly diverse groups together in pursuing new projects (cf. Brown & Michael 2003; Borup et al. 2006 in Beaulieu, de Rijcke & Van Heur 2013).

Today we submitted an abstract for a contribution to Big Data? Qualitative approaches to digital research (edited by Martin Hand and Sam Hillyard, under contract with Emerald). In the abstract we propose to zoom in on a specific set of expectations around altmetrics: their alleged usefulness for research evaluation. Of particular interest to this volume is how altmetric information is expected to enable a more comprehensive assessment of 1) social scientific outputs (under-represented in citation databases) and 2) wider types of output associated with societal relevance (not covered in citation analysis and allegedly more prevalent in the social sciences).

In our chapter we address a number of these expectations by analyzing 1) the discourse in the “altmetrics movement”, that is, the expectations and promises formulated by key actors involved in “big data” (including commercial entities); and 2) the construction of these altmetric data and their alleged validity for research evaluation purposes. We will combine discourse analysis with bibliometric, webometric and altmetric methods, using each approach to interrogate the other’s assumptions (Hicks & Potter 1991).

Our contribution will show, first of all, that altmetric data do not simply ‘represent’ other types of outputs; they also actively create a need for these types of information. These needs will have to be aligned with existing accountability regimes. Secondly, we will argue that researchers will develop forms of regulation that will partly be shaped by these new types of altmetric information. They are not passive recipients of research evaluation but play an active role in assessment contexts (cf. Aksnes & Rip 2009; Van Noorden 2010). Thirdly, we will show that the emergence of altmetric data for evaluation is another instance (following the creation of the citation indexes and the use of web data in assessments) of transposing traces of communication into a framework of evaluation and assessment (Dahler-Larsen 2012, 2013; Wouters 2014).

By making explicit the implications of transferring altmetric data from the framework of science communication to the framework of research evaluation, we aim to contribute to a better understanding of the complex dynamics in which a new generation of researchers will have to work and be creative.

Aksnes, D. W., & Rip, A. (2009). Researchers’ perceptions of citations. Research Policy, 38(6), 895–905.

Beaulieu, A., van Heur, B., & de Rijcke, S. (2013). Authority and Expertise in New Sites of Knowledge Production. In P. Wouters, A. Beaulieu, A. Scharnhorst, & S. Wyatt (Eds.), Virtual Knowledge: Experimenting in the Humanities and the Social Sciences (pp. 25–56). MIT Press.

Borup, M., Brown, N., Konrad, K., & van Lente, H. (2006). The sociology of expectations in science and technology. Technology Analysis & Strategic Management, 18(3/4), 285–298.

Brown, N., & Michael, M. (2003). A sociology of expectations: Retrospecting prospects and prospecting retrospects. Technology Analysis & Strategic Management, 15(1), 3–18.

Costas, R., Zahedi, Z. & Wouters, P. (n.d.). Do ‘altmetrics’ correlate with citations? Extensive comparison of altmetric indicators with citations from a multidisciplinary perspective.

Dahler-Larsen, P. (2012). The Evaluation Society. Stanford University Press.

Dahler-Larsen, P. (2013). Constitutive Effects of Performance Indicators. Public Management Review, (May), 1–18.

Galligan, F., & Dyas-Correia, S. (2013). Altmetrics: Rethinking the Way We Measure. Serials Review, 39(1), 56–61.

Hicks, D., & Potter, J. (1991). Sociology of Scientific Knowledge: A Reflexive Citation Analysis of Science Disciplines and Disciplining Science. Social Studies of Science, 21(3), 459 –501.

Priem, J., Taraborelli, D., Groth, P., & Neylon, C. (2010). Altmetrics: A manifesto. http://altmetrics.org/manifesto/

Van Noorden, R. (2010). Metrics: A profusion of measures. Nature, 465, 864–866.

Wouters, P., & Costas, R. (2012). Users, narcissism and control: Tracking the impact of scholarly publications in the 21st century. Utrecht: SURF foundation.

Wouters, P. (2014). The Citation: From Culture to Infrastructure. In B. Cronin & C. R. Sugimoto (Eds.), Next Generation Metrics: Harnessing Multidimensional Indicators Of Scholarly Performance (Vol. 22, pp. 48–66). MIT Press.

Wouters, P., Beaulieu, A., Scharnhorst, A., & Wyatt, S. (Eds.) (2013). Virtual Knowledge: Experimenting in the Humanities and the Social Sciences. MIT Press.


“Looking-glass upon the wall, Who is fairest of us all?” (Part 2)

As indicated in the last post about our recent report on alternative impact metrics, “Users, narcissism and control”, we have tried to give an overview of 16 novel impact measurement tools and to present their strengths and weaknesses as thoroughly as we could. Many of the tools have an attractive user interface and present impact results fairly quickly. Moreover, almost all of them are freely available, although some require a free registration. All of them provide metrics at the level of the individual article, manuscript or book. Taken together, these three characteristics make the tools attractive to individual researchers and scholars: they make it possible to see statistical evidence about impact, usage or influence quickly and without too much effort.
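As an aside, the sketch below illustrates what retrieving such article-level metrics programmatically can look like. It assumes the public Altmetric.com v1 DOI endpoint and a handful of its JSON field names; the endpoint, the field names and the example DOI are our own assumptions for the sake of illustration rather than part of the report, and the same pattern applies to whichever of the 16 tools exposes an API.

```python
# Minimal sketch: fetching article-level metrics for a single DOI.
# Assumes the public Altmetric.com v1 endpoint (api.altmetric.com/v1/doi/<doi>)
# and a few of its JSON field names; treat both as assumptions and adapt
# the code to whichever tool you actually use.
import requests

def fetch_article_metrics(doi):
    """Return a small dict of article-level indicators, or None if the DOI is not tracked."""
    resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}", timeout=10)
    if resp.status_code == 404:          # DOI unknown to this source
        return None
    resp.raise_for_status()
    data = resp.json()
    return {
        "title": data.get("title"),
        "score": data.get("score"),                      # composite attention score
        "posts": data.get("cited_by_posts_count", 0),    # blog and social media posts
        "tweets": data.get("cited_by_tweeters_count", 0),
        "mendeley_readers": data.get("readers", {}).get("mendeley"),
    }

if __name__ == "__main__":
    # Assumed DOI for Van Noorden (2010), cited above; swap in any DOI you want to inspect.
    print(fetch_article_metrics("10.1038/465864a"))
```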

At the same time, the impact monitors still suffer from some crucial disadvantages. An important problem has to do with the underlying data. Most of the tools do not (yet?) enable the user to inspect the data for completeness and accuracy, which means that these web-based tools may compute statistics and indicators on incorrect data. The second problem relates to field differences. Scientific fields differ considerably in their communication characteristics. In clinical research, for example, citation counts are very high because a very large number of researchers is active, reference lists are relatively long, and many articles are co-authored, sometimes by tens of authors per paper. As a result, the average clinical researcher is cited more frequently than the average mathematician, who works in a much smaller community with relatively short reference lists and many single-author articles. It would therefore be irresponsible to use raw citation counts as a proxy for scientific impact when comparing units whose output comes from very different fields.

In many evaluation contexts it is therefore desirable to be able to normalise impact indicators for such field differences; most tools do not accommodate this (a minimal normalisation sketch follows below). The third problem is that data coverage is sometimes rather limited: some of the tools only cover the biomedical fields, for example. The tools have further limitations as well. Almost none of them provide metrics at other levels of aggregation, such as research institutes or journals, and most do not offer easy ways to download or manage the underlying data. Although less severe than the problems above, these limitations also diminish the usability of many of these tools in more formal research assessments.
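To make the normalisation point concrete, here is a minimal sketch of one simple form of field normalisation, in the spirit of a mean-normalised citation score: each publication’s citation (or mention) count is divided by the average count of a reference set for its field and publication year. The field labels, years and counts below are invented purely for illustration; a real exercise would build its reference sets from a citation database.

```python
# Minimal sketch of field normalisation: each publication's count is divided
# by the average count of its field/year reference set ("world" baseline).
# All data below are invented for illustration only.

def field_normalised_scores(publications, reference_sets):
    """publications: list of dicts with 'id', 'field', 'year', 'citations'.
    reference_sets: dict mapping (field, year) -> list of citation counts
    for all publications in that field and year (the baseline)."""
    baselines = {
        key: sum(counts) / len(counts)
        for key, counts in reference_sets.items() if counts
    }
    scores = {}
    for pub in publications:
        baseline = baselines.get((pub["field"], pub["year"]))
        # A score of 1.0 means "cited exactly as often as the average for this
        # field and year"; raw counts alone would hide this difference.
        scores[pub["id"]] = pub["citations"] / baseline if baseline else None
    return scores

# Invented example: 30 citations is below average in clinical medicine
# but far above average in mathematics.
reference_sets = {
    ("clinical medicine", 2011): [10, 40, 55, 80, 15],   # mean = 40
    ("mathematics", 2011): [2, 5, 8, 1, 4],              # mean = 4
}
publications = [
    {"id": "A", "field": "clinical medicine", "year": 2011, "citations": 30},
    {"id": "B", "field": "mathematics", "year": 2011, "citations": 30},
]
print(field_normalised_scores(publications, reference_sets))
# {'A': 0.75, 'B': 7.5}
```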

