Changing publication practices in the “confetti factory”

When do important reorientations or shifts in research agendas come about in scientific fields? A brief brainstorm led us to formulate three possible causes. First, a scarcity of resources can bring about shifts in research agendas, for instance at the institutional level (because research management decides to cut the budgets of ill-performing research units). A second, related cause is the alignment of agendas through strategic (interdisciplinary) alliances for the purpose of obtaining funding. A third cause for the reconsideration of research agendas is a situation of crisis, for instance one brought about by large-scale scientific misconduct or by debates on the undesirable consequences of measuring productivity only in terms of the number of articles.

Zooming in on the latter point: anxiety over the consequences of a culture of ‘bean counting’ seems to be growing. Unfortunately, solid analyses that tease out these exact consequences for the knowledge produced are rare. A recent contribution to the European Journal of Social Psychology does, however, offer such an analysis. In the article, appropriating Piet Vroon’s metaphor of the ‘exploded confetti factory’, professor Naomi Ellemers voices her concern over the production of ever larger numbers of gradually shorter articles in social psychology (a field in crisis), the decreasing number of references to books, and the very small 5-year citation window that researchers tend to stick to (cf. Van Leeuwen 2013). Ellemers laments the drift toward publishing very small isolated effects (robust, but meaningless), which leaves less and less room for ‘connecting the dots’, i.e. cumulative knowledge production. According to Ellemers, the current way of assessing productivity and research standing has the opposite effect: it leads to a narrowing of focus. Concentrating on the number of (preferably first-authored) articles in high-impact journals does not stimulate social psychologists to aim for connection, but instead leads them to focus on ‘novelty’ and difference. A second way to attain more insight, build a solid knowledge base and generate new lines of research is through intra- and interdisciplinary cooperation, she argues. If her field really wants to tackle important problems in their full complexity – including the wider implications of specific findings – methodological plurality is imperative. Ellemers recommends that the field extend its existing collaborations – mainly with the ‘harder’ sciences – to include the other social sciences as well. A third way to connect the dots, and at least as important for ‘real impact’, is to transfer social-psychological insights to the general public:

“There is a range of real-life concerns we routinely refer to when explaining the focal issues in our discipline for the general public or to motivate the investment of tax payers’ money in our research programs. These include the pervasiveness of discrimination, the development and resolution of intergroup conflict, or the tendency toward suboptimal decision making. A true understanding of these issues requires that we go beyond single study observations, to assess the context-dependence of established findings, explore potential moderations, and examine the combined effect of different variables in more complex research designs, even if this is a difficult and uncertain strategy.” (Ellemers 2013, p. 5)

This also means, Ellemers specifies, that social psychologists should perform more conceptual replications, and should always specify how their own research fits in with and complements existing theoretical frameworks. It means that they should not refrain from writing meta-analyses and periodic reviews, or from including references to sources older than 10 years. All this, Ellemers concludes, would contribute to the goal of cumulative knowledge building, and would hopefully put an end to the collecting of unconnected findings, ‘presented in a moving window of references’.

What makes Ellemers’ contribution stand out is that she not only links recent debates about the reliability of social-psychological findings and the ensuing ‘methodological fetishism’ to the current evaluation culture, but also that she does not leave it at that. She subsequently outlines a research agenda for social psychology, in which she also argues for more methodological leniency, room for creativity and more comprehensive theory formation about psychological processes and their consequences. Though calls for science-informed research management are also voiced in other fields and are certainly much needed, truly content-based evaluation procedures are very difficult to arrive at without substantive discipline-specific contributions like the one Ellemers provides.


Diversity in publication cultures II

As noted in the previous post on the topic of diversity in publication cultures, the recent DJA publication, “Kennis over publiceren. Publicatietradities in de wetenschap”, presents interesting and valuable personal experiences. At the same time, the booklet tends to cut corners and make rather crude statements about the role of evaluation and indicators. Often, the individual life stories are not properly contextualized. For example, physicist Tjerk Oosterkamp claims that citation analysis is “not at all” appropriate for experimental physics. According to him, the use of citation scores in evaluation would encourage researchers to stick to “simple things” and shy away from more daring and risky projects. But is this true? Many initially risky projects have attracted quite a lot of citations later on. As far as I know, we do not yet have much evidence about the effect of evaluations and performance indicators on risk behavior in science. We do have some indications that researchers tend to avoid risky projects, especially when writing applications for externally funded projects. Yet we do not know whether this means that researchers are taking fewer risks across the board.

Another objection is that citation patterns may reflect current fashions rather than the most valuable research. I think this is an important point. For example, the recent hype about graphene research in physics may prove to be less valuable than expected. Citations represent short-term impact on communication within the relevant research communities. This is different from long-term impact on the body of knowledge. The two types of impact are related, but they are certainly not identical.

A second example of cutting corners is the statement by the editors in one of the essays of the DJA publication that “there is not much support among scientists for bibliometric analysis” (p. 25). Well, to be honest, this varies quite strongly. In many areas of the natural and biomedical sciences quantitative performance analysis is actually quite hot. We also see a tendency in the humanities and social sciences to turn to Google Scholar as a cure for the lack of publication data, since it often, albeit not always, has much better coverage of these areas. Researchers are sometimes even willing to turn a blind eye to the quite considerable problems with the accuracy and reliability of these data. So the picture is much more complicated than the image of bibliometrics being imposed top-down on the unhappy researcher.

Notwithstanding these shortcomings, the DJA booklet presents important dilemmas and problems. Perhaps the legal scholar Carla Sieburgh presents the problem most clearly: quality can in the end only be judged by experts. However, there is no time to have external reviewers read all the material. Hence the shift towards measurement. But this tends to lead us away from the content. In every discipline, some solution to this dilemma needs to be found, probably by striking a discipline-specific balance between objectified analysis from the outside and internalized quality control by experts. This search for the optimal balance is especially important in those fields where quality control has been introduced relatively recently.
