Diversity in publication cultures

Last December, the “Young Academy” (DJA), a part of the Royal Netherlands Academy of Arts and Sciences, published an interesting Dutch-language booklet about the experiences of its members (somewhat younger professors and assistant professors) with publishing and evaluations, “Kennis over publiceren. Publicatietradities in de wetenschap” (“Knowledge about publishing. Publication traditions in science and scholarship”). It tries to chart the enormous diversity in publication and citation traditions across, and even within, disciplines. The booklet aims to contribute to the increasingly important discussion about the best way to communicate scientific and scholarly results and about the current evaluation protocols. It combines a general overview of the debate and part of the literature on publishing and citation with text boxes in which DJA members are interviewed about their own careers. These interviews are the most interesting part of the booklet.

The DJA publication confirms the main themes of the new CWTS research programme, which we also published last December. First, we are witnessing an increasingly formalized evaluation culture in science and scholarship, which now also covers the humanities and social sciences. Second, there is no single proper way to publish and evaluate one’s work. What works well in the geosciences may be very inappropriate for chemistry. Books are still very important in some fields, and virtually non-existent in others. Third, these differences do not map neatly onto the boundaries between the natural sciences, social sciences and humanities. There are important differences within these areas and even within disciplines.

This creates a challenge for research evaluations, and this theme is the main thread in our new research programme “Merit, Expertise and Measurement”. On the one hand, in order to be fair, evaluation criteria need to be standardized and generic. On the other hand, the characteristics of different fields need to be taken into account. The current evaluation system in the Netherlands, as well as the way project proposals are evaluated by the Dutch Science Foundation NWO, tends to be biased towards standardized criteria based on an implicit natural science model. As a result, publications in international journals with high impact factors have become the gold standard. This disadvantages other forms of scholarly communication, tends to devalue translational research aimed at societal impact rather than scientific excellence, and makes life more difficult for researchers who publish books or work in non-English languages.

These differences in publication cultures are often underestimated. For example, most people involved in evaluation tend to think that peer review is universally accepted. But according to Janneke Gerards, a legal scholar in human rights, it is common practice in her field that journal editorial boards decide about submitted manuscripts themselves. She does not see much added value in external peer review. In other fields, however, it would be unacceptable not to have external peer review; often, even double-blind reviews are required. A comparable variation can be found regarding the value of citation analysis for research assessment. For paleo-climatologist Appy Sluijs, citation analysis is a reasonably good method. But according to experimental physicist Tjerk Oosterkamp, whose field is likewise oriented towards international journals, citation analysis is “not at all a good instrument”. His guess is that it promotes mainstream research at the cost of truly innovative work. The evidence for this expectation is lacking, however.

This is true of the booklet as a whole. The authors as well as the interviewees mostly extrapolate from their own experiences. The DJA publication therefore does not offer a solid overall theoretical or empirical framework, nor does it offer much that is new for the field of bibliometrics. Still, I think it has been a very valuable exercise because it foregrounds the life experiences of some members of the new generation of excellent scholars (measured by their success in building a career in ground-breaking research). Building on this type of experience in a more systematic way is the main goal of our new research programme. It would be great to collaborate more in order to improve our understanding of how evaluation in research really works.
