High time to start this blog again! November and December were too busy to keep up with it, as I had to combine getting to know CWTS better with preparing the transfer of the Virtual Knowledge Studio to the e-Humanities Group at the KNAW. I am currently being overwhelmed by positive responses to the inaugural lecture I gave last Friday in the beautiful Academy Building of Leiden University. In the lecture I sketched my plans for future research at CWTS against the backdrop of the history of performance measurement in the sciences and of the field of scientometrics. The hall was packed and I have received many enthusiastic emails since. This means we will have a firm foundation on which to build this research agenda.
So let me summarize the main points. In the past decades, research evaluation has increased in size and complexity, and formal performance indicators now play a crucial role. This is very different indeed from the times when Ton van Raan started his scientometric research and CWTS in the 1980s. The competition between different indicator research groups and scientometric institutes has also led to a proliferation of indicators. The differences between them are not always clear, nor is the exact way in which they are defined, measured and computed. This means that it is becoming more urgent to include the critique of indicators in the creation of new ones, and to spell out the limitations of these indicators to audiences that are not yet accustomed to them. This is also why CWTS will publish a manual on our indicators later this year.
What does citation actually mean? This is the first research theme that I will explore in the coming years. The question was already tackled by the students of the American historian of science Robert Merton in the early days of scientometrics, and it is still highly relevant. It is also a bit of a puzzle. At higher levels of aggregation, such as large groups of researchers or universities, many studies have shown a correlation between citation frequency and the quality of research, the reputation of researchers, or the scientific relevance of the work. However, as soon as we look at the underlying mechanisms at a more fine-grained level, to understand where this correlation comes from, the correlation seems to disappear. Of course, this may simply mean that it depends on the level of aggregation and on the exact definitions of quality, reputation, and relevance. In itself this is not strange, but it remains unsatisfactory. I will try to dig into this in the coming years, also in relation to the renewed interest in citation theories.

A related, and perhaps more important, line of work in this research theme is the impact of evaluation and performance indicators on research. How do evaluations actually play out in large research organizations such as universities and hospitals? Are researchers changing their communication and research practices because of the use of citation frequencies in evaluation? Are they citing with this in mind? How will the organization of research be affected? We do not know much about these implications of the rise of citation cultures in research, yet it is urgent to understand them better in order to improve the quality of evaluations.
The second research theme I will contribute to has already started at CWTS in the last year: fundamental research into the mathematical and statistical properties of performance indicators. Do we actually need all these indicators that we see parading through the pages of Scientometrics? How do they relate to each other in terms of their mathematical properties and definitions? And how do they behave when applied to the existing citation databases and research groups? We know that some of these indicators, such as the Journal Impact Factor and the Hirsch index, are actually not fit for use in research evaluation. (Yet these are among the most popular indicators!) But we currently do not have a systematic overview of the properties of all performance indicators. Consistency and reliability are important issues in this line of work. In this area, I am particularly interested in the connection between the mathematical questions and the sociological questions. Can this combination bring us more robust general design principles for performance indicators? In addition, I will contribute by building simulations of the scientific publication and communication system. I hope this will in the long term grow into an experimental environment and a set of tools with which to simulate indicators before they are applied in a management or policy context.
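One small, concrete example of the consistency issues meant here, with entirely hypothetical citation counts: when two researchers both add identical new achievements to their records, their h-index ranking can reverse.

```python
def h_index(citations):
    """Largest h such that h publications have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, cites in enumerate(ranked, start=1) if cites >= rank)

a = [4, 4, 4, 4]  # researcher A: four papers, 4 citations each
b = [7, 7, 7]     # researcher B: three papers, 7 citations each
assert h_index(a) == 4 and h_index(b) == 3  # A ranks above B

# Both publish two further papers that each collect 7 citations...
a += [7, 7]
b += [7, 7]
assert h_index(a) == 4 and h_index(b) == 5  # ...and now B ranks above A
```

A ranking that identical additions can flip is hard to defend as a measure of relative performance; it is exactly this kind of property that a systematic mathematical overview of indicators should expose.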
The third research theme, which I think will be very exciting in the coming years, is the area of data and knowledge visualization. It is now possible to create sophisticated science maps on the basis of large data sets on scientific research. The recent publication of the Atlas of Science by Katy Börner is a beautiful contribution to this work and has shown its promise. Her book also shows how sensitive these maps are to the underlying assumptions about science and scientific work. Maps have a reality effect and tend to be read as three-dimensional geographical maps. However, the use of science maps is important precisely because they can present many different dimensions. This calls for a more systematic study of the design principles of science maps. Once we have established these, more user-oriented questions become pertinent. Will it be possible to present most scientometric research in the form of interactive maps of science, where the user can dig into the underlying data sets, and where uncertainties and missing values are clearly indicated?
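Many science maps ultimately rest on simple co-occurrence counts extracted from citation data. A minimal sketch with hypothetical toy data, counting how often two works are cited together (co-citation), which can then feed the clustering and layout algorithms behind a map:

```python
from collections import Counter
from itertools import combinations

# Hypothetical toy data: the reference list of each citing paper.
reference_lists = [
    ["A", "B", "C"],
    ["A", "B"],
    ["B", "C", "D"],
    ["A", "B", "D"],
]

# Co-citation strength: how often two works appear in the same reference list.
cocitation = Counter()
for refs in reference_lists:
    for pair in combinations(sorted(set(refs)), 2):
        cocitation[pair] += 1

print(cocitation.most_common(3))
# ("A", "B") is the most frequently co-cited pair: 3 times
```

The choices hidden in such a pipeline (which counts, which normalization, which layout) are precisely the underlying assumptions that make the resulting maps so sensitive, and that design principles for science maps would need to make explicit.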
The fourth research line I will explore in the coming years with my colleagues at CWTS is the question of data sources. It is clear that the current situation is unsatisfactory. Citation databases do not cover all scholarly fields; the humanities and social sciences in particular are only partially represented. For many interesting evaluation and research questions, combinations of citation data and other data (investments in research, personnel, patents, cultural impacts) are needed. In small research projects this is often not too difficult, but when we speak of large-scale research evaluation and management, it requires a qualitative leap in data infrastructures and data integration. In the end, scientometrics is and remains a data science.