Knowledge, control and capitalism: what epistemic capitalism makes us see

Guest blog post by Thomas Franssen

On February 5th Max Fochler (University of Vienna) gave a talk during an extended EPIC research group seminar at the CWTS in Leiden. Fochler posed a crucial and critical question regarding knowledge production in the 21st century: can we understand contemporary practices of knowledge production in and outside academia as practices of epistemic capitalism? With this term, defined as ‘the accumulation of capital through the act of doing research’, Max wanted to address the regimes of worth that play a crucial role in life science research in Austria.

Max was interested in exploring the concept of capitalism as it denotes both forms of ascribing worth or value to something (in this case to knowledge and doing research), and the sets of practices in which these forms of worth are embedded. In this way it allows one to talk about which registers or regimes of value are visible as well as the institutional context in which these forms of worth ‘count’ for something.

Using research on the life sciences (partly done with Ulrike Felt and Ruth Müller) Max compared the regimes of value found in biographical interviews with postdocs working in Austrian academia to those of founders, managers and employees of research-active small biotech companies in Austria.

Results showed that the postdocs in their study are preoccupied with their own future employability, and that they assess their own worth in terms of the capital that they can accumulate. This capital consists of publications, impact factors, citations and grant money. What is especially critical in this respect is that potential sites of work, social relations with others, and choices for particular research topics or model organisms are scrutinized in relation to the effect they might have on the accumulation of capital. Importantly, also for research policy and higher education studies, this is the only strategy that this sample of postdocs sees as viable. They do not see other regimes of valuation available to them. As such, they either comply with the rules of the game or opt out of the academic system entirely.

In biotech companies the situation is very different. The accumulation of epistemic capital plays a smaller role in the biographies of those working for biotech companies. The main difference, Max observed, is that failure and success are attributed to companies rather than individuals. The fierce competition and focus on the individual experienced by postdocs in the life sciences is felt far less in biotech. As such, the essence of working in biotech is not the accumulation of capital, but the development of the company. Capital is not an end in itself but is used strategically when possible.

Thinking through epistemic capitalism with biodiversity

Esther Turnhout (Wageningen University) was invited by Sarah to comment on Max Fochler’s talk. Turnhout’s research focuses on information and accountability infrastructures in forest certification and auditing in global value chains. She started her response by asking whether the concept of epistemic capitalism made her look at her own case study materials differently, and if so, how – not to interrogate the concept and test it empirically, but rather to make clear what it highlights and what it affords.

Her criticism of the term came down to two aspects, which she explained using the case of biodiversity: the concept of epistemic capitalism ties the development of knowledge directly to the accumulation of capital, and it tends to reduce everything it captures to one single mechanism or logic.

To make her case, Esther traced the knowledge making practices in biodiversity research historically. She did so by focusing on the rise of so-called ecosystem services. Within ecosystem services biodiversity knowledge has become mainly utilitarian, and biodiversity itself an object that presents economic value because it has not yet been destroyed. Think for example of forest carbon, which represents a value on the carbon market as long as it is locked in the forest itself.

So here, in the commodification of biodiversity, knowledge and capital are again closely related. This, however, is not the main argument that Esther took from this example. Rather, she argued that in many ways ecosystem services are very similar to the earlier history of biodiversity knowledge. In all cases, the knowledge produced must be rendered technical, its use is assumed to be linear, and it privileges scientific expertise. More importantly, there is a preoccupation with ‘complete knowledge’, which is seen as necessary for effective conservation. This type of knowledge is also increasingly used for managerial purposes, to measure the success or effectiveness of policy.

As such, disconnected from capitalist or economic concerns, three logics come together in biodiversity knowledge: a technocratic logic, a managerial logic, and a logic of control. For her case, a focus on epistemic capitalism and the accumulation of capital does not work so well. The issue of a technocratic ideal of total control would disappear from view if ecosystem services were regarded only as a commodification of nature. It is the issue of control, which can be understood from a range of logics (technocratic, managerial, even aesthetic), that currently prevents urgently needed action. This is because there is an experienced lack of ‘total information’, a total which – seen from technocratic and managerial logics – is needed to act. According to Turnhout it is this utopian ideal of ‘technocratic control through complete information’ that should be criticised much more strongly.

On citation stress and publication pressure

Our article on citation stress and publication pressure in biomedicine went online this week – co-authored with colleagues from the Free University and University Medical Centre Utrecht:

Tijdink, J.K., S. de Rijcke, C.H. Vinkers, Y.M. Smulders, P.F. Wouters, 2014. Publicatiedrang en citatiestress: De invloed van prestatie-indicatoren op wetenschapsbeoefening. Nederlands Tijdschrift voor Geneeskunde 158: A7147.

* Dutch only *

NWO president Jos Engelen calls for in-depth study of editorial policies of Science and Nature

The Netherlands Organization for Scientific Research (NWO) wants to start an in-depth study of the editorial policies of the most famous scientific journals, such as Science, Nature, Cell, The Lancet, The New England Journal of Medicine, and Brain. NWO president Jos Engelen announced this in a lecture on open access publishing on 11 December in Nijmegen. The lecture was given in the framework of the Honors Program of Radboud University on “Ethos in Science”.

According to Engelen, it is urgent to assess the role of these journals in the communication of scientific knowledge. Engelen wants the scientific system to shift to free dissemination of all scientific results. He sees three reasons for this. First, it is “a moral obligation to grant members of the public free access to scientific results that were obtained through public funding, through taxpayers’ money.” Engelen gets “particularly irritated when I read in my newspaper that new scientific results have been published on, say, sea level rise, to find out that I have to buy the latest issue of Nature Magazine to be properly informed.” Second, scientific knowledge gives a competitive edge to the knowledge economy and should therefore freely flow into society and the private sector. Third, science itself will profit from the free flow of knowledge between fields. “In order to face the ‘grand challenges’ of today scientific disciplines have to cooperate and new disciplines will emerge.”

Engelen wants to investigate the editorial policies of the most famous scientific journals because they stand in the way of open access. These journals feel no reason to shift their business model to open access, because “their position is practically impregnable”. Engelen takes the journal Science, published by the American Association for the Advancement of Science, as an example. “Its reputation is based on an extremely selective publishing policy and its reputation has turned ‘Science’ into a brand that sells”. Engelen remarks that the same is true for Nature, Cell and other journals published by commercial publishers. “Scientific publications are only a part, not even the dominant part of ‘the business’, but the reputation of the journal is entirely based on innovative science emanating from publicly funded research. Conversely, the reputation of scientists is greatly boosted by publications in these top-journals; top-journals with primarily an interest in selling and not in, for example, promoting the best researchers.”

Engelen concludes this part of his lecture on open access with a clear shot across the bow. “It has puzzled me for a while already that national funding organisations are not more critical about the authority that is almost automatically imputed to the (in some cases full time, professional, paid) editors of the top-journals. I think an in depth, objective study of the editorial policies, and the results thereof, commissioned by research funders, is highly desirable and in fact overdue. I intend to take initiatives along this line soon!”

The need for change in the governance of science – II

Turbulent times at the Trippenhuis, home of the Royal Netherlands Academy of Arts and Sciences (KNAW). Last Thursday and Friday the Academy opened its doors for the Science in Transition conference: two days of debate between representatives of science, industry, and policy-making aimed at revising some of the checks and balances of the scientific and scholarly system. We already blogged about some of the most problematic aspects of current quality control mechanisms last week. Interestingly, there was remarkable consensus among conference participants on a number of points relating to these mechanisms. Most keynotes, commentators, and members of the audience seemed to want to avoid:

  • Research agendas that are not driven by content and relevance;
  • Excessive competition and careerism;
  • A publish or perish culture that favors quantity over quality, promotes cherry picking of results and salami slicing, and discourages validation, verification and replication;
  • An ill-functioning peer review system that lacks incentives for sound quality judgment;
  • One-size-fits-all evaluation procedures;
  • Perverse allocation models and career policy mechanisms (in which, for instance, student numbers directly affect the number of fte spent on research, and young researchers are hired on short-term contracts funded through external grants: ‘PhD and postdoc factories’).

But of course there was still a lot left to debate. As a result of the successful media campaign and the subsequent hype around Science in Transition, some speakers felt that they needed to ‘stand up for science’. Hans Clevers, president of the KNAW, and Jos Engelen, chairman of the Netherlands Organisation for Scientific Research (NWO), were noticeably unhappy about the portrayal in the media of science ‘in crisis’. Both stressed that Dutch science is doing well, judging for instance from the scores on university rankings. Both made no secret of their irritation at painting an ambiguous picture of science to outsiders, because of the potential risks of feeding already existing scepticism and mistrust. At the same time it was telling that these key figures in the landscape of Dutch governance of science were supportive of the debate and the fundamental points raised by the organisers.

Like Clevers and Engelen, Lodi Nauta (dean of the faculty of philosophy in Groningen) argued that not everything is going astray in science. According to him there are still many inspiring examples of solid, prize-worthy, trustworthy, interdisciplinary, societally relevant research. But Nauta also signaled that there is much ‘sloppy science’. Not all symposium participants agreed on how much, and on whether there is indeed a general increase. Peter Blom, CEO of Triodos Bank, made an important aside: he thought it rather arrogant that, while basically every other sector is in crisis, science should think it could distance itself from these economic and socio-political currents. But many participants took a cautionary stance: if there is indeed such a thing as a crisis, we should not lose sight of the nuances. It is not all bad everywhere, at the same time, and for everyone. Some argued that young researchers suffer most from current governance structures and evaluation procedures; that certain fields are more resilient than others; and that compared to other countries the Dutch scientific and scholarly system is not doing that badly at all. Henk van Houten, general manager of Philips Research, on the contrary argued that ‘university as a whole has a governance issue’: the only moment universities have actual influence is when they appoint professors to particular chairs. These professors, however, are subsequently mainly held accountable to external funders. One is left to wonder which governance model is to be preferred: this one, or the models companies like Philips put in practice.

At the heart of the debate on being open about the present crisis lies a rather dated desire to leave ‘the black box of science’ unopened. Whilst Lodi Nauta for instance argued – with Kant – that an ideal-typical image of science is necessary as a ‘regulatory idea’, the Science in Transition initiators deemed it pointless to keep spreading a fairytale about ‘a perfect scientific method by individuals with high moral values without any bias or interests’. Van Houten (Philips) and Blom (Triodos) also argued that science does not take its publics seriously enough if it sticks to this myth. Letting go of this myth does not amount to ‘science bashing’ – on the contrary. It is valuable to explain how science ‘really’ works, how objective facts are made, where the uncertainties lie, which interests are involved, and how science contributes through trained judgment and highly specialized expertise.

A hotly debated matter also relates to ‘black-boxing’ science: who gets to have a say about proper quality assessment and the shaping of research agendas? André Knottnerus, chairman of the Scientific Council for Government Policy (WRR), pointed at an ambivalence in discussions on these matters: we tend to take criticism of performance measurement seriously only if it is delivered by researchers who score high on these same measures. There were also differences of opinion about the role of industry in defining research agendas (for example, the detrimental effects of pharmaceutical companies on clinical research; obviously Philips was invited to serve as a counter-example of positive bonds between (bio-medical) research and commercial partners). And what about society at large? Who speaks for science, and to whom are we to be held accountable, Sheila Jasanoff asked. (How) should researchers pay more attention to mobilizing new publics and participatory mechanisms, and a productive democratisation of the politics of science?

Most speakers were of the opinion that we should move away from narrow impact measurement towards contextually sensitive evaluation systems: systems that reward mission-oriented research, collaboration and interdisciplinarity, and that accommodate not only short-term production but also the generation of deep knowledge. These (ideal-typical?) systems should allow for diversification in talent selection, and grant academic prestige through balanced reward mechanisms and ‘meaningful metrics’. Though the symposium did a lot of the groundwork, how to arrive at such systems is of course the biggest challenge (see also Miedema’s ‘toolbox for Science in Transition’ for concrete suggestions) – assuming it is possible at all. But perhaps we need this ideal-typical image as a ‘regulatory idea’.

The need for change in the governance of science

Tomorrow, a two-day conference will be held, Science in Transition, at the beautiful headquarters of the Royal Netherlands Academy of Arts and Sciences, the Trippenhuis in Amsterdam. Five researchers with backgrounds in medical research, history, and science & technology studies have taken the lead in what they hope will become a strong movement for change in the governance of science and scholarship. The conference builds on a series of three workshops held earlier this year on “image and trust”, “quality and corruption”, and “communication and democracy”. On the eve of the conference, the initiators published their agenda for change. In this document, seven issues are defined as key topics and a large number of questions about the necessary direction of change are formulated. These issues are: the image science has in the public view; public trust in science; quality control; fraud and deceit; new challenges in science communication; the relationship between science, democracy and policy; and the connection between education and research.

With this list, the agenda is rather encompassing and broad. The thread running through the document as well as through the supporting “position paper” is discontent with the current governance of the scientific and scholarly system. The position paper is strong in that it is based on the professional experience of the authors, some of whom have been leading and managing research for many years. At the same time, this is also the source of some obvious weaknesses. The situation in the medical sciences is here and there a bit too dominant in the description of reality in science, whereas the situation in the humanities and social sciences is rather different (although equally problematic). Because the agenda is so broad, the position paper in its current version tends to lump together problems of quite different sources as if they are all of a kind. The subtleties that are so important in the daily practices of scientists and scholars tend to disappear from view. But then again, some of this may be inevitable if one wishes to push an agenda for change. A quite strong feature of the position paper is that it does not try to justify or deny the problematic aspects of science (of which fraud and corruption are only the most visible forms) but attempts to confront them head-on.

This is the reason that I think Science in Transition is an excellent initiative which deserves strong support from all participants and users in the current system of knowledge creation. Certainly in the Netherlands, which is the focus of most experiences the initiative builds on, but also more globally, the current ways of governing the increasingly complex scientific system are hitting their limits. Let me focus on the matter of quality control, the issue we deal with regularly in this blog. The peer review system is straining under increasing pressure. Data-intensive research requires new forms of data quality control that are not yet in place. Fraudulent journals have become a major source of profit for shady publishers. Open access to both publications and research data is increasingly needed, but at the same time it threatens to introduce corrupt business models into science and may harm the publication of books in the humanities (if not done carefully). Simplified but easily accessible indicators, such as the h-index and the Journal Impact Factor, have in many biomedical fields become goals in themselves. Editors of journals feel pressured to increase their impact factor in sound and less sound ways. The economics of science is dominated by a huge supply of easily replaceable temporary labour, and for many PhD students there is no real career prospect in meaningful research. Peer review tends to favour methodological soundness over scientific or societal relevance. Publicly funded budgets are not always sufficient to perform research as thoroughly as is needed. Current publication cultures tend to prefer positive results over negative ones (especially dangerous in the context of pharmaceutical research).

I realize that this short summary of some of the better known problems is as generalizing as the position paper. Of course, these problems are not acute in every field. Some journals are not afflicted with impactitis, but manage to focus on pushing the research front in their area. Universities behave differently in the ecology of higher education and research. Many researchers are delivering a decent or excellent performance. Scientific specialties differ strongly in epistemic styles as well as in publication cultures. And the solutions are certainly not easy. Nevertheless, the governance of science requires some fundamental adaptations, including a possible revision of the role of universities and other institutions of higher education. Science in Transition deserves to be applauded for having put this complex problem forcefully on the agenda.

I am also enthusiastic about the project because it resonates so well with the research agenda of CWTS. We have even created a new working group which focuses on the detailed, ethnographic study of actual evaluation practices in science and scholarship: EPIC (Evaluation Practices in Context). We need a much more detailed understanding of what actually goes on in the laboratories, hospitals, and research institutes at universities. This is the only way we can supplement generalizing and normative statements about trends in scientific governance with “thick descriptions” of the complex reality of current science.

The more complex the research system has become, the more important quantitative information, including indicators, is for researchers, research managers and science policy makers. This requires more advanced methodologies in the field of scientometrics (and not only in bibliometrics), such as science mapping, the topic of another CWTS working group. It requires more accurate data collection, including better accounting systems for the costs of scientific research. (Currently, universities do not actually know how much their research costs.) But it also requires vigilance against “management by indicators”. If young PhD students aim to publish mainly in order to increase their performance indicators so that they can have a career, as many a senior researcher in a hospital has experienced, we know that the system is in trouble.

Accounting systems are certainly sometimes necessary, but they should be put in place in such a way that they do not derail the primary processes (such as knowledge creation) that they are supposed to support. In the scientific system in the Netherlands, we therefore need a renewed balance between performance measurement and expert judgement in quality control mechanisms. This is what we mean by our new CWTS motto: meaningful metrics. The future of scientometrics lies not in the production of ever more indicators, but in more effectively supporting researchers in their endeavour to create new knowledge.

Event: crafting your career

On Wednesday October 30, the Rathenau Instituut and CWTS are jointly organizing an event for researchers, Crafting your Career. A growing demand on researchers to perform seems to distract from other competencies needed or desired to be a good scientist. Or does it? Crafting your Career will facilitate debate among early-stage and senior researchers about career challenges in the current research system, and about ways to solve these either individually or together. The afternoon is also a platform to meet researchers from various fields, career coaches, experts and role models: an afternoon full of debate, information, networking and entertainment.

The evidence on the Journal Impact Factor

The San Francisco Declaration on Research Assessment (DORA), see our most recent blogpost, focuses on the Journal Impact Factor, published by Thomson Reuters on the basis of the Web of Science. It is a strong plea to base research assessments of individual researchers, research groups and submitted grant proposals not on journal metrics but on article-based metrics combined with peer review. DORA cites a few scientometric studies to bolster this argument. So what is the evidence we have about the JIF?

In the 1990s, the Norwegian researcher Per Seglen, based at our sister institute, the Institute for Studies in Higher Education and Research (NIFU) in Oslo, and a number of CWTS researchers (in particular Henk Moed and Thed van Leeuwen) developed a systematic critique of the JIF, of its validity as well as of the way it is calculated (Moed & Van Leeuwen, 1996; Moed & Van Leeuwen, 1995; Seglen, 1997). This line of research has since blossomed in a variety of disciplinary contexts, and has identified three main reasons not to use the JIF in research assessments of individuals and research groups.

First, although the value of the JIF of a particular journal depends on the aggregated citation rates of its individual articles, the JIF cannot be used as a stand-in for the latter in research assessments. This is because a small number of articles are cited very heavily, while a large number of articles are cited only once in a while, and some are not cited at all. This skewed distribution is a general phenomenon in citation patterns and it holds for all journals. Therefore, if a researcher has published an article in a high-impact journal, this does not mean that her particular piece of research will also have a high impact.
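
To see why this skewness undermines the JIF as an article-level predictor, here is a minimal simulation sketch (hypothetical, randomly generated citation counts, not real journal data): the journal-level mean, which is what the JIF reflects, is pulled up by a handful of heavily cited papers and ends up well above what most articles in the journal actually receive.

```python
# Toy illustration (simulated data): why a journal's mean citation rate
# says little about the citedness of any individual article in it.
import random
import statistics

random.seed(42)

# Assume a hypothetical journal whose articles follow a heavily skewed
# (lognormal) citation distribution, as is typical for citation data.
citations = [int(random.lognormvariate(mu=0.5, sigma=1.2)) for _ in range(500)]

mean_citations = statistics.mean(citations)      # what a JIF-like average reflects
median_citations = statistics.median(citations)  # what a "typical" article gets
share_below_mean = sum(c < mean_citations for c in citations) / len(citations)

print(f"mean (JIF-like): {mean_citations:.2f}")
print(f"median:          {median_citations:.1f}")
print(f"articles cited less than the mean: {share_below_mean:.0%}")
# Typically well over half of the articles score below the journal average,
# because a few highly cited papers pull the mean up.
```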

Second, fields differ strongly in their usual JIF values. A field with a rapid turnover of research publications and long reference lists (such as fields in biomedical research) will tend to have much higher JIF values for its journals than a field with short reference lists in which older publications remain relevant much longer (such as fields in mathematics). Moreover, smaller fields usually have a smaller number of journals, resulting in fewer opportunities to publish in high-impact journals. As a result, it does not make sense to compare JIF values across fields. Although virtually everybody knows this, implicit comparisons are still common, for example when publications are compared on their JIF values in multidisciplinary settings (such as grant proposal reviews).

Third, the way in which the JIF is calculated in the Web of Science has a number of technical characteristics that make it relatively easy for journal editors to game. The JIF divides the number of citations received in a given year by the journal’s publications from the two preceding years by the number of “citeable publications” in those years. Some publications do not count as “citeable”, although any citations they receive still contribute to the numerator. By increasing the relative share of such publications in the journal, an editor can try to artificially increase the JIF. This can also be accomplished by increasing the number of publications that are more frequently cited, such as review articles, long articles, or clinical trials. Last, the editor can try to convince or pressure submitting authors to cite more publications in the journal itself. All three forms of manipulation occur, although we do not really know how frequently. Sometimes the manipulation is plainly visible: editors have written editorials about their citation impact, citing all publications from the past two years in their own journal and admonishing authors to increase their JIF!
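
As a back-of-the-envelope illustration of this mechanism (with made-up numbers, not data about any real journal), the sketch below computes a JIF-like ratio and shows how adding non-citeable items that still attract some citations inflates it.

```python
# A minimal sketch (hypothetical numbers) of the JIF calculation described above,
# and of how the "citeable items" denominator can be gamed.
def impact_factor(citations_to_all_items: int, citeable_items: int) -> float:
    """Citations received in year Y to items published in Y-1 and Y-2,
    divided by the number of 'citeable' items published in Y-1 and Y-2."""
    return citations_to_all_items / citeable_items

# Baseline: 200 citeable articles attracting 400 citations -> JIF-like value of 2.0
print(impact_factor(citations_to_all_items=400, citeable_items=200))

# Now add 50 editorials/news items: they do not count as 'citeable', but suppose
# they attract 60 extra citations, which still enter the numerator -> 2.3
print(impact_factor(citations_to_all_items=400 + 60, citeable_items=200))
```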

A more general problem with using the JIF in research assessment is that not all fields have meaningful JIF values, since these are only calculated for journals covered in the Web of Science. Scholarly fields focusing on books or technical designs are therefore disadvantaged in evaluations in which the JIF is important.

In response to these problems, five main journal impact indicators have been developed as improvements upon, or alternatives to, the JIF. First, the CWTS Journal to Field Impact Score (JFIS) improves upon the JIF because it does away with the mismatch between numerator and denominator regarding “citeable items” and because it takes field differences in citation density into account. Second, the SCImago Journal Rank (SJR) indicator follows the same logic as Google’s PageRank algorithm: citations from highly cited journals have more influence than citations from lowly cited ones. SCImago, based in Madrid, calculates the SJR not on the basis of the Web of Science but on the basis of the Scopus citation database (published by Elsevier). A similar logic is applied in two other journal impact indicators from the Eigenfactor.org research project, based at the biology department of the University of Washington (Seattle): the Eigenfactor and the Article Influence Score (AIS). These are often calculated on the basis of the Web of Science and use a ‘citation window’ of five years (citations to articles from the previous five years count), whereas this window is two years for the JIF and three years for the SJR.
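
For readers who want a feel for the recursive weighting behind SJR, the Eigenfactor and the AIS, the following toy sketch applies a bare-bones PageRank-style power iteration to a small, invented journal-to-journal citation matrix. The real indicators add damping factors, citation windows and rules on journal self-citation, so this is only meant to convey the principle.

```python
# Toy sketch of the "important citations count more" idea behind SJR/Eigenfactor:
# journal weights are the leading eigenvector of a column-normalized
# journal-to-journal citation matrix. Hypothetical data for three journals.
import numpy as np

# C[i, j] = citations from journal j to journal i
C = np.array([[ 0., 30., 10.],
              [20.,  0., 40.],
              [ 5., 10.,  0.]])

P = C / C.sum(axis=0)        # each citing journal distributes one unit of influence
w = np.full(3, 1 / 3)        # start from equal weights
for _ in range(100):         # power iteration until the weights stabilize
    w = P @ w
    w /= w.sum()

print(np.round(w, 3))        # journals cited by influential journals rank higher
```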

The fifth journal impact indicator is computed by CWTS on the basis of Scopus: the Source Normalized Impact per Paper (SNIP) indicator, invented by Henk Moed and further developed by Nees Jan van Eck, Thed van Leeuwen, Martijn Visser and Ludo Waltman (Waltman, Van Eck, Van Leeuwen, & Visser, 2013). This indicator also weights citations, but not on the basis of the number of citations to the citing journal; instead it uses the number of references in the citing article. Basically, the citing paper is seen as giving out one vote, which is distributed over all cited papers. As a result, a citation from a paper with 10 references adds 1/10th to the citation frequency, whereas a citation from a paper with 100 references adds only 1/100th. The effect is that the SNIP indicator cancels out differences in citation density across fields (though certainly not all relevant differences between disciplines, such as the amount of work needed to publish an article). The Eigenfactor also uses this principle in its implementation of the PageRank algorithm.
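
This citing-side normalization can likewise be illustrated with a small sketch (hypothetical papers and reference counts; this shows only the fractional-counting principle, not the full SNIP definition):

```python
# Toy illustration of citing-side (fractional) citation counting: each citing
# paper hands out one "vote", divided equally over its reference list.
from collections import defaultdict

# Hypothetical citations: (cited paper, number of references in the citing paper)
citations = [
    ("paper_A", 10),    # citation from a paper with 10 references -> weight 1/10
    ("paper_A", 100),   # citation from a paper with 100 references -> weight 1/100
    ("paper_B", 20),
    ("paper_B", 20),
]

weighted_counts = defaultdict(float)
for cited, n_refs in citations:
    weighted_counts[cited] += 1 / n_refs

print(dict(weighted_counts))
# paper_A ~= 0.11 (1/10 + 1/100), paper_B = 0.10 (1/20 + 1/20): fields with long
# reference lists no longer dominate simply because their papers cite more.
```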

The improved journal impact indicators do solve a number of the problems that have emerged in the use of the JIF. Nevertheless, careless use of journal impact indicators in research assessments is still not justified. All journal impact indicators are in the end based on the number of citations to the individual articles in the journal, but this correlation is too weak to legitimize using a journal indicator instead of assessing the articles themselves if one wishes to evaluate those articles. When journal indicators take differences between fields into account, the citation counts of the sets of articles produced by research groups as a whole tend to show a somewhat stronger correlation with the journal indicators. Still, the statistical correlation remains very modest: research groups tend to publish across a whole range of journals with both higher and lower impact factors. It will therefore usually be much more accurate to analyze the influence of these bodies of work directly rather than fall back on journal indicators.

To sum up, the bibliometric evidence confirms the main thrust of DORA: it is not sensible to use the JIF or any other journal impact indicator as a predictor of the citedness of a particular paper or set of papers. But does this mean, as DORA seems to suggest, that journal impact factors do not make any sense at all? Here I think DORA is wrong. At the level of the journal, the improved impact indicators do give interesting information about the role and position of the journal, especially if this is combined with qualitative information about the peer review process, an analysis of who is citing the journal and in which context, and its editorial policies. No editor would want to miss the opportunity to analyze their journal’s role in the scientific communication process, and journal indicators can play an informative, supporting role in this. Also, it makes perfect sense in the context of research evaluation to take into account whether a researcher has been able to publish in a high-quality scholarly journal. But journal impact factors should not rule the world.

Literature:

Moed, H. F., & Van Leeuwen, T. N. (1996). Impact factors can mislead. Nature, 381(6579), 186.

Moed, H. F., & Van Leeuwen, T. N. (1995). Improving the accuracy of Institute for Scientific Information’s journal impact factors. Journal of the American Society for Information Science, 46(6), 461–467. Retrieved from http://www.iem.ac.ru/~kalinich/rus-sci/ISI-CI-IF.pdf

Seglen, P. O. (1997). Why the impact factor of journals should not be used for evaluating research. BMJ (Clinical research ed.), 314(7079), 498–502. Retrieved from http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=2126010&tool=pmcentrez&rendertype=abstract

Waltman, L., Van Eck, N. J., Van Leeuwen, T. N., & Visser, M. S. (2013). Some modifications to the SNIP journal impact indicator. Journal of Informetrics, 7(2), 272–285. Retrieved from http://www.sciencedirect.com/science/article/pii/S1751157712001010

Acknowledgement:

I would like to thank Thed van Leeuwen and Ludo Waltman for their comments on an earlier draft of this post.

Changing publication practices in the “confetti factory”

When do important reorientations or shifts in research agendas come about in scientific fields? A brief brainstorm led us to formulate three possible causes. First, a scarcity of resources can bring about shifts in research agendas, for instance at an institutional level (because research management decides to cut the budgets of ill-performing research units). A second, related cause is the alignment of agendas through strategic (interdisciplinary) alliances, for the purpose of obtaining funding. A third cause for reconsidering research agendas is a situation of crisis, for instance one brought about by large-scale scientific misconduct or by debates on the undesirable consequences of measuring productivity only in terms of numbers of articles.

Zooming in on the latter point: anxiety over the consequences of a culture of ‘bean counting’ seems to be growing. Unfortunately, solid analyses that tease out exactly what these consequences are for the knowledge produced are rare. A recent contribution to the European Journal of Social Psychology does, however, offer such an analysis. In the article, appropriating Piet Vroon’s metaphor of the ‘exploded confetti factory’, professor Naomi Ellemers voices her concern over the production of increasing amounts of gradually shorter articles in social psychology (a field in crisis), the decreasing number of references to books, and the very small 5-year citation window that researchers tend to stick to (cf. Van Leeuwen 2013). Ellemers laments the drift toward publishing very small isolated effects (robust, but meaningless), which leaves less and less room for ‘connecting the dots’, i.e. cumulative knowledge production. According to Ellemers, the current way of assessing productivity and research standing has the opposite effect: it leads to a narrowing of focus. Concentrating on the number of (preferably first-authored) articles in high-impact journals does not stimulate social psychologists to aim for connection, but instead leads them to focus on ‘novelty’ and difference. A second way to attain more insight, build a solid knowledge base and generate new lines of research is through intra- and interdisciplinary cooperation, she argues. If her field really wants to tackle important problems in their full complexity – including the wider implications of specific findings – methodological plurality is imperative. Ellemers recommends that the field extend its existing collaborations – mainly with the ‘harder’ sciences – to also include other social sciences. A third way to connect the dots, and at least as important for ‘real impact’, is to transfer social-psychological insights to the general public:

“There is a range of real-life concerns we routinely refer to when explaining the focal issues in our discipline for the general public or to motivate the investment of tax payers’ money in our research programs. These include the pervasiveness of discrimination, the development and resolution of intergroup conflict, or the tendency toward suboptimal decision making. A true understanding of these issues requires that we go beyond single study observations, to assess the context-dependence of established findings, explore potential moderations, and examine the combined effect of different variables in more complex research designs, even if this is a difficult and uncertain strategy.” (Ellemers 2013, p. 5)

This also means, Ellemers specifies, that social psychologists should perform more conceptual replications, and should always specify how their own research fits in with and complements existing theoretical frameworks. It means that they should not refrain from writing meta-analyses and periodic reviews, or from including references to sources older than 10 years. This, Ellemers concludes, would all contribute to the goal of cumulative knowledge building, and would hopefully put an end to the collecting of unconnected findings ‘presented in a moving window of references’.

What makes Ellemers’ contribution stand out is that she not only links recent debates about the reliability of social-psychological findings and ensuing ‘methodological fetishism’ to the current evaluation culture, but also that she doesn’t leave it at that. Ellemers subsequently outlines a research agenda for social psychology, in which she also argues for more methodological leniency, room for creativity and more comprehensive theory-formation about psychological processes and their consequences. Though calls for science-informed research management are also voiced in other fields and are certainly much needed, truly content-based evaluation procedures are very difficult to arrive at without substantive discipline-specific contributions like the one Ellemers provides.

Viridiana Jones and the privatization of US science

Recently a deluge of books saw the light on the commercialization of academia and the political climate that allegedly enabled this development: neo-liberalism. Examples include If You’re So Smart, Why Aren’t You Rich? (Lorenz 2008), Weten is meer dan Meten (Reijngoud 2012), The Fall of the Faculty (Ginsberg 2011), The Commodification of Academic Research (Radder (ed.) 2010), How Economics Shapes Science (Stephan 2012), and Creating the Market University (Popp Berman 2011). A recent book in this trend I would like to bring to the attention of our blog readers is Philip Mirowski’s Science-Mart: Privatizing American Science (Harvard UP, 2011). Mirowski is Carl Koch Professor of Economics and the History and Philosophy of Science at the University of Notre Dame. He is the author of The Effortless Economy of Science? (2004), Science Bought and Sold (with Esther-Mirjam Sent, eds., 2002), The Road from Mont Pèlerin: The Making of the Neoliberal Thought Collective (with Dieter Plehwe, eds., 2009), and a host of articles on the topic. That Mirowski knows a thing or two about his subject also becomes apparent through his writing: he combines an impressive amount of interdisciplinary knowledge with what he calls ‘empirical meditations on the state of contemporary science’. I think he successfully counters shallower explanations for the commercialization of (US) academic research that rely on misunderstood versions of neoliberalism. How? By zooming in on the more subtle conjunctions of circumstances that ultimately led to the installment of exactly that very hard-to-counter grand narrative called ‘neoliberalism’, and by demonstrating how specific professions, disciplines and strands of theory abstained from, or could not come up with, an equally convincing alternative to ‘render the totality of academic life coherent’. Occasionally, Mirowski himself also falls into the trap of the attractive overarching narrative, for instance when he describes the recent history of the rise and increasing use of citation analysis and of performance indicators in academia as a development from a neutral information tool to a ‘bureaucratic means of surveillance’. He also assumes – and I think this is a simplification – a causal link between privately owned citation data and the erection of a ‘Science Panopticon’. Nonetheless, Science-Mart stands out from a number of the books mentioned above, not least because of Mirowski’s daring and ironic tone of voice. (A reference to the first chapter may suffice, in which the author uses a fictional researcher called Viridiana Jones to set the scene of the book.)

Booming bibliometrics in biomedicine: the Dutch case

Last week, I gave a talk at a research seminar organized by the University of Manchester, Centre for the History of Science, Technology and Medicine. The talk was based on exploratory archival research on the introduction of bibliometrics in Dutch biomedicine.

Why did performance-based measurement catch on so quickly in the Dutch medical sciences? Of course this is part of a larger picture: from the 1980s onward, an unprecedented growth of evaluation institutions and procedures took place across all scientific research. In tandem with the development and first applications of performance indicators, discussions about “gaming the system” surfaced (cf. MacRoberts and MacRoberts 1989). In the talk, I presented results from a literature search on how strategic behavior has been discussed in international peer-reviewed and professional medical journals from the 1970s onwards. The authors’ main concerns boiled down to three things. The first was irresponsible authorship (co-authorship, salami slicing, etc.). Authors also signaled a growing pressure to publish and discussed its relationship with scientific fraud. The third concern had to do with the rise of a group of evaluators with growing influence but seemingly without a clear consensus about their own professional standards. Typically, these concerns started to be voiced from the beginning of the 1980s onwards.

Around the same time, two relevant developments took place in the Netherlands. First, the earliest Dutch science policy document on assessing the sciences was published. It focused entirely on the medical sciences (RAWB 1983). The report was promoted as a model for priority setting in scientific research, and was the first to overthrow internal peer review as the sole source for research assessment by including citation analysis (Wouters 1999). Second, a new allocation system was introduced at our own university here in Leiden in 1975. Anticipating a move at the national level from block grant funding to separate funding channels for teaching and research, a procedure was introduced that basically forced faculties to present existing and new research projects for assessment to a separate funding channel, in order to avoid a decrease in research support in the near future. Ton van Raan, future director of CWTS, outlined specific methods for creating this separate funding model in the International Journal of Institutional Management in Higher Education (Van Raan & Frankfort 1980). Van Raan and his co-author – at the time affiliated with the university’s Science Policy Unit – argued that Leiden should move away from an ‘inefficient’ allocation system based on institutional support via student numbers, because this hindered scientific productivity and excellence. According to Van Raan [personal communication], this so-called ‘Z-procedure’ created the breeding ground for the establishment of a bibliometric institute in Leiden some years later.

Leiden University started the Z-procedure project inventories in ’75, dividing projects into those within and those outside of the priority areas. The university started to include publication counts from 1980 onwards. As far as the medical sciences are concerned, the yearly Scientific Reports of ’78 to ’93 show that their total number of publications rose from 1401 in 1981 to 2468 in 1993. This number went up to roughly 7500 in 2008 (source: NFU). More advanced bibliometrics were introduced in the mid-80s. This shift from counting ‘brute numbers’ to assembling multidimensional complex operations (cf. Bowker 2005) also entailed a new representation of impact and quality: aggregated and normalized citation counts.

Back to the larger picture. The growing use of performance indicators from the 80s onwards can be ascribed to, among other things: an increased economic and social role of science and technology; an increase in the scale of research institutes; the limitations and costs of peer review procedures; and a general move towards formal evaluation of professional work. It is usually argued that, under the influence of the emergence of new public management and neoliberalism, authorities decided to model large parts of the public sector, including higher education, on control mechanisms that were formerly reserved for the private sector (cf. Power 1999; Strathern 2000). It is necessary to dig deeper into the available historical sources to find out whether these explanations suffice. If so, aggregated citation scores may have come to prosper in a specific political economy that values efficiency, transparency and quality assurance models. In the discussion after my talk Vladimir Jankovic suggested that I also look into Benjamin Hunt’s The Timid Corporation (2003). Hunt argues that while neoliberalism is often associated with economically motivated de-regulation, what has in fact been going on from the 80s onward is socially oriented regulation of individuals and groups, aimed at taming the risks and impact of change through formal procedures. Two additional ways of analyzing the rise of such a precautionary approach may be found in the work of the sociologists Frank Furedi (“Culture of Fear”, 1997) and Ulrich Beck (“Risk Society”, 1992). When aversion to risk and fear of change come to be perceived as enduring, a greater reliance on procedures and performance indicators may increasingly be seen as a means to control openness and uncertainty. It is worth exploring whether these sociological explanations can help explain some of the dynamics in biomedicine I alluded to above. It may be a first step in finding out whether there is indeed something particular about medical research that makes it especially receptive to metrics-based research evaluation.