As noted in the previous post on diversity in publication cultures, the recent DJA publication, “Kennis over publiceren. Publicatietradities in de wetenschap” (“Knowledge about publishing. Publication traditions in science”), presents interesting and valuable personal experiences. At the same time, the booklet tends to cut corners and makes rather crude statements about the role of evaluation and indicators. The individual life stories are often not properly contextualized. For example, the physicist Tjerk Oosterkamp claims that citation analysis is “not at all” appropriate for experimental physics. According to him, the use of citation scores in evaluation encourages researchers to stick to “simple things” and shy away from more daring and risky projects. But is this true? Many projects that initially seemed risky attracted quite a lot of citations later. As far as I know, we do not yet have much evidence about the effect of evaluations and performance indicators on risk behaviour in science. We do have some indications that researchers tend to avoid risky projects, especially when writing applications for externally funded projects. Yet we do not know whether this means that researchers are taking fewer risks across the board.
Another objection is that citation patterns may reflect current fashions rather than the most valuable research. This is an important point. For example, the recent hype around graphene research in physics may prove less valuable than expected. Citations represent short-term impact on communication within the relevant research communities, which is different from long-term impact on the body of knowledge. The two types of impact are related, but they are certainly not identical.
A second example of cutting corners is the editors’ statement in one of the essays of the DJA publication that “there is not much support among scientists for bibliometric analysis” (p. 25). Well, to be honest, this varies quite strongly. In many areas of the natural and biomedical sciences, quantitative performance analysis is actually quite hot. We also see a tendency in the humanities and social sciences to seek a cure for the lack of publication data in Google Scholar, which often, albeit not always, has much better coverage of these areas. Researchers in these fields are sometimes even willing to turn a blind eye to the quite considerable problems with the accuracy and reliability of these data. So the picture is much more complicated than the image of bibliometrics being imposed top-down on the unhappy researcher.
Notwithstanding these shortcomings, the DJA booklet presents important dilemmas and problems. Perhaps the legal scholar Carla Sieburgh states the problem most clearly: quality can, in the end, only be judged by experts. However, there is no time to have external reviewers read all the material. Hence the shift towards measurement. But this tends to lead us away from the content. Every discipline needs to find a solution to this dilemma, probably by striking a discipline-specific balance between objectified analysis from outside and internalized quality control by experts. This search for the optimal balance is especially important in fields where quality control has been introduced relatively recently.