“Looking-glass upon the wall, Who is fairest of us all?” (Part 2)

As indicated in the last post about our recent report on alternative impact metrics, “Users, narcissism, and control”, we have tried to give an overview of 16 novel impact measurement tools and to present their strengths and weaknesses as thoroughly as we could. Many of the tools have an attractive user interface and are able to present impact results fairly quickly. Moreover, almost all of them are freely available, although some require a form of free registration. All of them provide metrics at the level of the article, manuscript, or book. Taken together, these three characteristics make these tools attractive to individual researchers and scholars: they make it possible to quickly see statistical evidence regarding impact, usage, or influence without too much effort.

At the same time, the impact monitors still suffer from some crucial disadvantages. An important problem has to do with the underlying data. Most of the tools do not (yet?) enable the user to inspect the data on criteria such as completeness and accuracy. This means that these web-based tools may produce statistics and indicators based on incorrect data. The second problem relates to field differences. Scientific fields differ considerably in their communication characteristics. For example, the numbers of citations in clinical research are very high because a very large number of researchers are active, the lists of references per article are relatively long, and there are many co-authored articles, sometimes with tens of authors per paper. As a result, the average clinical researcher has a higher citation frequency than the average mathematician, who operates in much smaller communities with relatively short reference lists and many single-author articles. As a consequence, it would be irresponsible to use raw citation counts as a proxy measure of scientific impact when comparing units whose output comes from very different fields.

In many evaluation contexts, it is therefore desirable to be able to normalise impact indicators for these field differences; most tools do not accommodate this (a minimal sketch of what such a normalisation involves is given at the end of this post). The third problem is that data coverage is sometimes rather limited (some of the tools only cover the biomedical fields, for example). The tools have some further limitations: there are almost no tools that provide metrics at other levels of aggregation, such as research institutes or journals, and most tools do not provide easy ways to download or manage the underlying data. Although less severe than the problems above, these limitations also diminish the usability of many of these tools in more formal research assessments.
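To make the normalisation point concrete, here is a minimal sketch of one common approach: each publication’s citation count is divided by the average citation count of publications from the same field and publication year, so that a score of 1 means “cited at the average level for that field and year”. The field labels, years, and citation counts below are invented purely for illustration; none of the tools in the report expose their data in this form.

```python
from collections import defaultdict

def mean_normalised_citation_scores(publications):
    """Compute a field- and year-normalised citation score per publication.

    Each publication is a dict with 'field', 'year', and 'citations'.
    The score is the citation count divided by the mean citation count
    of all publications in the same field and year, so 1.0 means
    "cited exactly at the average" for that field-year group.
    """
    # Group citation counts by (field, year) to compute reference averages.
    totals = defaultdict(lambda: [0, 0])  # (field, year) -> [sum, count]
    for pub in publications:
        key = (pub["field"], pub["year"])
        totals[key][0] += pub["citations"]
        totals[key][1] += 1

    scores = []
    for pub in publications:
        total, count = totals[(pub["field"], pub["year"])]
        mean = total / count
        # Guard against field-year groups in which nothing has been cited yet.
        scores.append(pub["citations"] / mean if mean > 0 else 0.0)
    return scores

# Invented toy data: a clinical paper with 40 citations can score *lower*
# than a mathematics paper with 4, once field baselines are taken into account.
pubs = [
    {"field": "clinical medicine", "year": 2011, "citations": 40},
    {"field": "clinical medicine", "year": 2011, "citations": 60},
    {"field": "mathematics", "year": 2011, "citations": 4},
    {"field": "mathematics", "year": 2011, "citations": 2},
]
print(mean_normalised_citation_scores(pubs))
# -> [0.8, 1.2, 1.333..., 0.666...]
```

In practice the reference averages would come from a full bibliographic database rather than from the small set being evaluated, but even this toy example shows why raw counts mislead: the clinical paper with 40 citations sits below its own field average, while the mathematics paper with only 4 sits well above its own.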