“Looking-glass upon the wall, Who is fairest of us all?” (Part 3)

How do the conclusions of our recent report “Users, narcissism, and control” relate to the wider discussion about altmetrics? We found that four arguments are regularly put forward in favor of the new metrics as an alternative to more traditional citation analysis.

Perhaps one of the best representatives of this body of work is the Altmetrics Manifesto (Priem, Taraborelli, Groth, & Neylon, 2010). The manifesto notes that traditional forms of publication in the current system of journals and books are increasingly supplemented by other forms of science communication. These include: the sharing of ‘raw science’ like datasets, code, and experimental designs; new publication formats such as the ‘nanopublication’, basically a format for the publication of data elements (Groth, Gibson, & Velterop, 2010); and widespread self-publishing via blogging, microblogging, and comments or annotations on existing work (Priem et al., 2010).

The first argument in favor of new impact metrics is diversity and filtering. Because web-based publishing and communication have become so diverse, we need an equally diverse set of tools to act upon these traces of communication. Altmetrics tools start out as information filters for this diverse stream and build on that role to measure some forms of impact (often defined differently from citation impact).

The second argument is speed. It takes time for traditional publications to pick up citations, and citation analysis only becomes reliable after an initial period (which varies by field). The promise of altmetrics is an almost instant measurement window. ‘The speed of altmetrics presents the opportunity to create real-time recommendation and collaborative filtering systems: instead of subscribing to dozens of tables-of-contents, a researcher could get a feed of this week’s most significant work in her field. This becomes especially powerful when combined with quick “alt-publications” like blogs or preprint servers, shrinking the communication cycle from years to weeks or days. Faster, broader impact metrics could also play a role in funding and promotion decisions.’ (Priem et al., 2010).
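As a toy illustration of the filtering idea in this quote, the sketch below ranks recently mentioned items by their number of online mentions. The data structure, field names, and one-week window are invented purely for illustration; they are not taken from the manifesto or from our report.

    # Toy sketch of the 'real-time filtering' idea: rank items mentioned in
    # the past week by how many online mentions they received.
    # All field names and numbers below are invented for illustration.
    from datetime import datetime, timedelta

    def weekly_top(items, now=None, top_n=5):
        """items: list of dicts with 'title', 'mentions', and 'mentioned_at' (datetime)."""
        now = now or datetime.now()
        week_ago = now - timedelta(days=7)
        recent = [i for i in items if i["mentioned_at"] >= week_ago]
        return sorted(recent, key=lambda i: i["mentions"], reverse=True)[:top_n]

    example = [
        {"title": "Preprint on topic X", "mentions": 42, "mentioned_at": datetime.now()},
        {"title": "Blog post on topic Y", "mentions": 7, "mentioned_at": datetime.now() - timedelta(days=10)},
    ]
    print([i["title"] for i in weekly_top(example)])  # only the recent item survives the filter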

The third argument is openness. Because the data can be collected through Application Programming Interfaces (APIs), the data coverage is completely transparent to the user. This also holds for the algorithms and code used to calculate the indicators. An important advantage discussed in the literature is the possibility of ending the dependency on commercial databases such as Thomson Reuters’ Web of Science or Elsevier’s Scopus. The difficulties entailed in building a completely new usage, impact, or citation index from the bottom up are, however, usually not mentioned. Still, this promise of a non-commercial index that can be used to measure impact or other dimensions of scientific performance should not be disregarded. In the long term, this may be the direction in which the publication system is moving.
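As a rough illustration of what this openness means in practice, here is a minimal sketch that retrieves an article-level metric through a public API. It uses the Crossref REST works endpoint and its “is-referenced-by-count” field as one example of an openly documented source; the exact endpoint and field names are an assumption based on Crossref’s public documentation and are not part of our report.

    # Minimal sketch: retrieving an article-level metric through an open API.
    # The Crossref works endpoint and the "is-referenced-by-count" field are
    # used as an example of an openly documented source; adjust as needed.
    import json
    import urllib.parse
    import urllib.request

    def citation_count(doi):
        """Return the citation count Crossref reports for a DOI, or None if absent."""
        url = "https://api.crossref.org/works/" + urllib.parse.quote(doi)
        with urllib.request.urlopen(url) as response:
            record = json.load(response)
        return record.get("message", {}).get("is-referenced-by-count")

    # DOI taken from the reference list below (Groth, Gibson, & Velterop, 2010).
    print(citation_count("10.3233/ISU-2010-0613"))

Because both the endpoint and the code are open, anyone can inspect exactly which records are counted, which is the transparency claim made above.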

The fourth argument is that many web-based traces of scientific communication activity can be used to measure aspects of scientific performance that are not captured by citation analysis or peer review. For example, download data could be used to measure the actual use of one’s work. The number of hyperlinks to one’s website might also indicate some form of impact. Indeed, since the 1990s the fields of internet research, webometrics, and scientometrics have developed a body of work comparing the roles of citations and hyperlinks and the possibility of building impact measurements on these analogies (Bar-Ilan & Peritz, 2002; Björneborn & Ingwersen, 2001; Hewson, 2003; Hine, 2005; Rousseau, 1998; Thelwall, 2005).

So, do these four arguments stand up against the empirical results of our study?

References:

Bar-Ilan, J., & Peritz, B. C. (2002). Informetric Theories and Methods for Exploring the Internet: An Analytical Survey of Recent Research Literature. Library Trends, 50(3), 371-392.

Björneborn, L., & Ingwersen, P. (2001). Perspectives of webometrics. Scientometrics, 50(1), 65-82.

Groth, P., Gibson, A., & Velterop, J. (2010). The anatomy of a nanopublication. Information Services & Use, 30, 51-56. doi:10.3233/ISU-2010-0613

Hewson, C. (2003). Internet research methods: a practical guide for the social and behavioural sciences. London etc.: Sage.

Hine, C. (2005). Virtual Methods: Issues in Social Research on the Internet. Berg.

Priem, J., Taraborelli, D., Groth, P., & Neylon, C. (2010). Altmetrics: A manifesto. Retrieved January 8, 2012, from http://altmetrics.org/manifesto/

Rousseau, R. (1998). Sitations: an exploratory study. Cybermetrics, 1(1), 1. Retrieved from http://www.cindoc.csic.es/cybermetrics/articles/v1i1p1.html

Thelwall, M. (2005). Link Analysis: An Information Science Approach. San Diego: Academic Press.


“Looking-glass upon the wall, Who is fairest of us all?” (Part 2)

As indicated in the last post about our recent report on alternative impact metrics, “Users, narcissism, and control”, we have tried to give an overview of 16 novel impact measurement tools and to present their strengths and weaknesses as thoroughly as we could. Many of the tools have an attractive user interface and present impact results fairly quickly. Moreover, almost all of them are freely available, although some require a free registration. All of them provide metrics at the level of the article, manuscript, or book. Taken together, these three characteristics make the tools attractive to individual researchers and scholars: they can quickly see statistical evidence regarding impact, usage, or influence without too much effort.

At the same time, the impact monitors still suffer from some crucial disadvantages. An important problem has to do with the underlying data. Most of the tools do not (yet?) enable the user to inspect the data on criteria such as completeness and accuracy. This means that these web-based tools may produce statistics and indicators on the basis of incorrect data. The second problem relates to field differences. Scientific fields differ considerably in their communication characteristics. For example, citation counts in clinical research are very high because a very large number of researchers is active, the reference lists per article are relatively long, and there are many co-authored articles, sometimes with tens of authors per paper. As a result, the average clinical researcher has a higher citation frequency than the average mathematician, who operates in much smaller communities with relatively short reference lists and many single-authored articles. As a consequence, it would be irresponsible to use raw citation counts as a proxy measure of scientific impact when comparing units whose output comes from very different fields.

In many evaluation contexts, it is therefore desirable to be able to normalise impact indicators. Most tools do not accommodate this. The third problem is that the data coverage is sometimes rather limited (some of the tools only cover the biomedical fields, for example). The tools have further limitations: there are almost no tools that provide metrics at other levels of aggregation, such as research institutes or journals, and most tools do not provide easy ways to download and manage the data. Although less severe than the problems above, these limitations also diminish the usability of many of these tools in more formal research assessments.
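To make the normalisation point above concrete, here is a minimal sketch of item-level field normalisation, in which each paper’s citation count is divided by the average for its field. The field labels and citation counts are invented purely for illustration and do not come from the report; a real indicator would also match on publication year and document type.

    # Minimal sketch of field normalisation: divide each paper's citation
    # count by the average citation count of its field.
    # All field labels and citation counts below are invented examples.
    from collections import defaultdict

    def field_normalised_scores(papers):
        """papers: list of dicts with 'id', 'field', and 'citations' keys."""
        sums = defaultdict(int)
        counts = defaultdict(int)
        for p in papers:
            sums[p["field"]] += p["citations"]
            counts[p["field"]] += 1
        averages = {f: sums[f] / counts[f] for f in sums}
        return {p["id"]: p["citations"] / averages[p["field"]] for p in papers}

    example = [
        {"id": "A", "field": "clinical medicine", "citations": 40},
        {"id": "B", "field": "clinical medicine", "citations": 10},
        {"id": "C", "field": "mathematics", "citations": 4},
        {"id": "D", "field": "mathematics", "citations": 1},
    ]
    print(field_normalised_scores(example))
    # A score of 1.0 means "cited exactly as often as the field average":
    # paper C (4 citations) now scores as high as paper A (40 citations).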


“Looking-glass upon the wall, Who is fairest of us all?” (Part 1)

What is the impact of my research? This question has of course always intrigued scholars and scientists. For most of the history of science, it was a difficult question to answer: it was simply too hard to know who was reading one’s work, apart from the relatively small circle of direct colleagues. This has changed somewhat with the advent of the web and in particular with the exploding popularity of social media such as Facebook and LinkedIn. Many of these tools not only enable one to push one’s work directly to “the world”, but also facilitate tracking some dimensions of its reception. So is it now possible to measure the impact of one’s latest research publication immediately?

This is the key question in a recent study we conducted in the framework of the SURFSHARE program of the Dutch universities, “Users, narcissism, and control” (Wouters & Costas, 2012). As far as we know, it is the first attempt at a comprehensive empirical analysis of the “Cambrian explosion of metrics” (Van Noorden, 2010) on the web. We collected detailed information about 16 recent publication impact monitors.

We conclude that these novel tools are quite interesting for individual researchers who want a quick (and sometimes dirty) impression of their possible impact. We call the tools used for this purpose “technologies of narcissism”. At the same time, it would be irresponsible to use these numbers in more formal research evaluations, which we refer to as “technologies of control” (Beniger, 1986). We therefore advise against incorporating these tools in such formalized settings for now, mainly because of the problems with data verification and control, and to a lesser extent because of the difficulties with normalizing the indicators to correct for field differences.

These new web-based information services vary from enhanced peer review systems to tracking readers and downloads of publications to free citation services. For example, the Public Library of Science gives extensive usage and citation statistics for articles published in PLoS ONE. The new website Faculty of 1000 (F1000) brings together reviews and rankings of biomedical publications, enabling researchers to quickly zoom in on the publications most relevant to their own research. Google Scholar, probably the most popular academic search engine, has recently started a new service, Google Scholar Citations, with which one can reconstruct one’s citation impact on the basis of mentions in the Google Scholar database. Microsoft started a competing search engine with slightly different capabilities: Microsoft Academic Search. It gives more options through the use of APIs (application programming interfaces), so users can collect data with the help of scripts rather than having to do everything manually.

An interesting development is also the creation of new types of reference management software. For example, Mendeley is a combination of a social networking site and a reference manager, which enables scholars to efficiently exchange their reading tips and bibliographies. This, in turn, has made it possible to know how many readers one has. These usage statistics are public. As a result, other services can harvest these data and present them in a combined impact monitor. One of these new tools is TotalImpact, which aims to present “the invisible impact” of scientific and scholarly articles on the basis of a document ID (such as a DOI or a URN). It harvests usage data from a variety of sources, Mendeley being one of them.
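To illustrate the aggregation pattern behind DOI-based monitors such as TotalImpact, here is a hedged sketch: given a document identifier, query several sources and merge whatever counts they return. The fetch functions and the DOI are hypothetical placeholders, not real APIs; an actual service would call each provider’s documented interface.

    # Sketch of the aggregation pattern behind DOI-based impact monitors:
    # ask several sources for their counts and merge the answers.
    # The fetch_* functions and the DOI are hypothetical placeholders.
    def fetch_reader_count(doi):
        raise NotImplementedError("call a reference manager's public API here")

    def fetch_citation_count(doi):
        raise NotImplementedError("call an open citation source here")

    SOURCES = {
        "readers": fetch_reader_count,
        "citations": fetch_citation_count,
    }

    def collect_metrics(doi):
        """Return whatever metrics the configured sources can supply for a DOI."""
        metrics = {}
        for name, fetch in SOURCES.items():
            try:
                metrics[name] = fetch(doi)
            except Exception:
                metrics[name] = None  # source unavailable or not yet implemented
        return metrics

    print(collect_metrics("10.1234/example-doi"))  # hypothetical identifier

The design point is simply that the document ID is the glue: any source that can be queried by DOI can be added to such a monitor without changing the rest of the pipeline.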

Literature:

Beniger, J. R. (1986). The Control Revolution: Technological and Economic Origins of the Information Society. Cambridge, Massachusetts, and London, England: Harvard University Press.

Van Noorden, R. (2010). Metrics: A profusion of measures. Nature, 465(7300), 864-866. Retrieved from http://www.nature.com/news/2010/100616/full/465864a.html

Wouters, P., & Costas, R. (2012). Users, narcissism and control: Tracking the impact of scholarly publications in the 21st century (pp. 1-50). Utrecht.
