The question of the societal relevance and economic impact of research is increasingly raised in peer review of research projects, annual appraisal interviews, and institutional research assessment exercises. This is part of a broader trend in accountability practices across a variety of societal sectors and in the way strategic intelligence is managed in business processes. An important task for CWTS will be to contribute actively to the development of sound criteria for this relatively new research assessment module. CWTS recently hired Dr Ingeborg Meijer – previously senior consultant at Technopolis and scientific officer at the Advisory Council on Health Research – to take on this task. Ingeborg presented her plans at last week’s CWTS research seminar.
The increasing focus on the societal impact of research has created a serious problem for both researchers and evaluators, because these wider impacts of research outcomes (as distinct from the narrower research output in the form of publications) are very difficult to demonstrate and evaluate. This is mainly due to the complex nature of the interactions between academia, industry, and the public sector. There is no straightforward way out of this problem. Assessing the social, economic, cultural, and ecological impact of scientific research is not simply a matter of developing performance indicators for ‘societally relevant’ research activity and an accompanying technological infrastructure for data collection. The methods and techniques for evaluating societal relevance will themselves also affect how ‘societal impact’ is defined and operationalized.
Using indicators and methods for research assessment is not merely a descriptive but also a prescriptive practice. Bear in mind some of the perverse effects of quantitative performance indicators for scientific impact: in some fields the citation culture seems to have resulted in an unhealthy interest in one-dimensional output measures, such as the number of articles published in high-impact journals and the number of times these articles are cited. If we take seriously that research assessment is a social technology, we should also acknowledge these undesirable effects. It may indeed be beneficial for researchers if there is more balance in the types of activities they are held accountable for. As Ingeborg pointed out, making visible the ‘societally relevant’ work researchers are already doing (by collecting data on the web or by asking researchers to list activities, for instance) is a promising start. In addition, and considering the performative effects of indicators, policy makers, researchers, and evaluation officers should also develop an overarching vision on the kinds of work they deem crucial and ‘societally relevant’. The activities that are currently being mapped out are undertaken within (and will therefore reflect) the parameters of the present evaluation system, which lean towards counting international peer-reviewed articles. Perhaps researchers should be encouraged to develop a much more variegated set of activities than they currently receive credit for.