“Looking-glass upon the wall, Who is fairest of us all?” (Part 4)

In our last post, we discussed four arguments in favour of alternative metrics (more details can be found in our recent report on altmetrics, “Users, narcissism, and control”). To recapitulate, the four arguments are: openness, speed, diversity of scholarly output, and the measurement of more dimensions of impact. How do these arguments relate to the available empirical evidence?

Speed is probably the weakest argument. Of course, it is seductive to feel that one can monitor “in real time” how a publication reverberates through the communication system. The Altmetrics Manifesto (Priem, Taraborelli, Groth, & Neylon, 2010) even advocates the use of “real-time recommendation and collaborative filtering systems” in funding and promotion decisions. But how wise is this? Really knowing what a particular publication has contributed takes time, if only because the publication must be read by enough people. Faster is not always better. It may even be the other way around, as the sociologist Dick Pels has argued in his book celebrating “slow science” (Pels, 2003).

Moreover – and this relates to the fourth argument – we do not yet know enough about scholarly communication to see what all the measurable data might mean. For example, it does not make much sense to be happy about a single instance of correlation between the number of tweets and citations if we do not fully understand what a tweet might mean (Davis, 2012). The early signalling of possibly interesting research may play a very different role from a later-stage scholarly citation. And different modalities of communication may also represent different dimensions of research quality. For example, a recent study compared research blogging in chemistry with journal publications. It found that blogging is more oriented towards the social implications of research, tends to focus on high-impact journals, is more immediate than scientific publishing, and provides more context for the research (Groth & Gurney, 2010). We need many more studies of this kind before we jump to conclusions about the value of measuring blogs, websites, tweets, and so on. In other words, the fourth argument for alternative metrics is an important research agenda in itself.

This also holds for the third argument: diversity. Researchers write blogs, update databases, build instruments, do field work, conduct applied research to solve societal problems, train future generations of researchers, develop prototypes, and contribute their expertise to countless panels and newspaper columns. Little of this is well represented in international peer-reviewed journals (although it is sometimes reflected indirectly). Traditional citation analysis captures an important slice of scholarly and scientific output, provided the field is well covered in the Web of Science (which is not the case in most of the humanities). Yet, however valuable, it is still only a thin slice of the diverse scientific production. Perhaps alternative metrics will be able to reflect this diversity more satisfactorily than citation analysis does. Before we can affirm that this is indeed the case, we need much more case study research.

This brings me to the last argument, openness. The two most popular citation indexes (Web of Science and Scopus) are both proprietary. Together with their relatively narrow focus, this has led many scholars to look for open, freely accessible alternatives. Some think they have found one in Google Scholar, the most popular search engine for scholarly work. I think it is indisputable that the publication system is moving towards a future in which open access media are the default. But there is a snag. Although Google Scholar is freely available, its database is certainly not open. On the contrary, how it is created and presented to the users of the search engine is one of the better kept secrets of the for-profit company Google. In fact, for the purpose of evaluation, it is less rather than more transparent than the Web of Science or Scopus. In the framework of research evaluation, transparency and consistency of data and indicators may actually be more important than free availability.

References:

Davis, P. M. (2012). Tweets, and Our Obsession with Alt Metrics. The Scholarly Kitchen. Retrieved January 8, 2012, from http://scholarlykitchen.sspnet.org/2012/01/04/tweets-and-our-obsession-with-alt-metrics/

Groth, P., & Gurney, T. (2010). Studying Scientific Discourse on the Web using Bibliometrics: A Chemistry Blogging Case Study. Retrieved from http://journal.webscience.org/308/2/websci10_submission_48.pdf

Pels, D. (2003). Unhastening science: Autonomy and reflexivity in the social theory of knowledge. Routledge.

Priem, J., Taraborelli, D., Groth, P., & Neylon, C. (2010). Altmetrics: A manifesto. Retrieved January 8, 2012, from http://altmetrics.org/manifesto/
