Changing publication practices in the “confetti factory”

When do important reorientations or shifts in research agendas come about in scientific fields? A brief brainstorm led us to formulate three possible causes. First, a scarcity of resources can bring about shifts in research agendas, for instance at the institutional level (because research management decides to cut the budgets of ill-performing research units). A second, related cause is the alignment of agendas through strategic (interdisciplinary) alliances, for the purpose of obtaining funding. A third cause for the reconsideration of research agendas is a situation of crisis, for instance one brought about by large-scale scientific misconduct or by debates on the undesirable consequences of measuring productivity only in terms of the number of articles.

Zooming in on the latter point: anxiety over the consequences of a culture of ‘bean counting’ seems to be growing. Unfortunately, solid analyses that tease out the exact consequences for the knowledge produced are rare. A recent contribution to the European Journal of Social Psychology does, however, offer such an analysis. In the article, appropriating Piet Vroon’s metaphor of the ‘exploded confetti factory’, Professor Naomi Ellemers voices her concern over the production of ever larger numbers of gradually shorter articles in social psychology (a field in crisis), the decreasing number of references to books, and the very small 5-year citation window that researchers tend to stick to (cf. Van Leeuwen 2013). Ellemers laments the drift toward publishing very small isolated effects (robust, but meaningless), which leaves less and less room for ‘connecting the dots’, i.e. cumulative knowledge production. According to Ellemers, the current way of assessing productivity and research standing has the opposite effect, leading to a narrowing of focus. Concentrating on the number of (preferably first-authored) articles in high-impact journals does not stimulate social psychologists to aim for connection, but instead leads them to focus on ‘novelty’ and difference. A second way to attain more insight, build a solid knowledge base and generate new lines of research is through intra- and interdisciplinary cooperation, she argues. If her field really wants to tackle important problems in their full complexity – including the wider implications of specific findings – methodological plurality is imperative. Ellemers recommends that the field extend its existing collaborations – mainly with the ‘harder’ sciences – to also include other social sciences. A third way to connect the dots, and at least as important for ‘real impact’, is to transfer social-psychological insights to the general public:

“There is a range of real-life concerns we routinely refer to when explaining the focal issues in our discipline for the general public or to motivate the investment of tax payers’ money in our research programs. These include the pervasiveness of discrimination, the development and resolution of intergroup conflict, or the tendency toward suboptimal decision making. A true understanding of these issues requires that we go beyond single study observations, to assess the context-dependence of established findings, explore potential moderations, and examine the combined effect of different variables in more complex research designs, even if this is a difficult and uncertain strategy.” (Ellemers 2013, p. 5)

This also means, Ellemers specifies, that social psychologists should perform more conceptual replications, and always specify how their own research fits in with and complements existing theoretical frameworks. It means that they should not refrain from writing meta-analyses and periodic reviews, or from including references to sources older than 10 years. All this, Ellemers concludes, would contribute to the goal of cumulative knowledge building, and would hopefully put an end to the collecting of unconnected findings, ‘presented in a moving window of references’.

What makes Ellemers’ contribution stand out is not only that she links recent debates about the reliability of social-psychological findings and the ensuing ‘methodological fetishism’ to the current evaluation culture, but also that she doesn’t leave it at that. Ellemers subsequently outlines a research agenda for social psychology, in which she also argues for more methodological leniency, room for creativity, and more comprehensive theory formation about psychological processes and their consequences. Though calls for science-informed research management are also voiced in other fields and are certainly much needed, truly content-based evaluation procedures are very difficult to arrive at without substantive discipline-specific contributions like the one Ellemers provides.


Viridiana Jones and the privatization of US science

Recently a deluge of books has appeared on the commercialization of academia and the political climate that allegedly enabled this development: neoliberalism. Examples include If You’re So Smart, Why Aren’t You Rich? (Lorenz 2008), Weten is meer dan Meten (Reijngoud 2012), The Fall of the Faculty (Ginsberg 2011), The Commodification of Academic Research (Radder (ed.) 2010), How Economics Shapes Science (Stephan 2012), and Creating the Market University (Popp Berman 2011). A recent book in this trend I would like to bring to the attention of our blog readers is Philip Mirowski’s Science Mart: Privatizing American Science (Harvard UP, 2011).

Mirowski is Carl Koch Professor of Economics and the History and Philosophy of Science at the University of Notre Dame. He is the author of The Effortless Economy of Science? (2004), Science Bought and Sold (with Esther-Mirjam Sent, eds., 2002), The Road from Mont Pèlerin: The Making of the Neoliberal Thought Collective (with Dieter Plehwe, eds., 2009), and a host of articles on the topic. That Mirowski knows a thing or two about his subject also becomes apparent through his writing: he combines an impressive amount of interdisciplinary knowledge with what he calls ‘empirical meditations on the state of contemporary science’. I think he successfully counters shallower explanations for the commercialization of (US) academic research that rely on misunderstood versions of neoliberalism. How? By zooming in on the more subtle conjunctions of circumstances that ultimately led to the entrenchment of that very hard to counter grand narrative called ‘neoliberalism’, and by demonstrating how specific professions, disciplines and strands of theory abstained from offering – or couldn’t come up with – an equally convincing alternative to ‘render the totality of academic life coherent’.

Occasionally, Mirowski himself also falls into the trap of the attractive overarching narrative, for instance when he describes the recent history of the rise and increasing use of citation analysis and performance indicators in academia as a development from a neutral information tool to a ‘bureaucratic means of surveillance’. He also assumes – and I think this is a simplification – a causal link between privately owned citation data and the erection of a ‘Science Panopticon’. Nonetheless, Science Mart stands out from a number of the books mentioned above, not least due to Mirowski’s daring and ironic tone of voice. (A reference to the first chapter may suffice, in which the author uses a fictional researcher called Viridiana Jones to set the scene of the book.)

Booming bibliometrics in biomedicine: the Dutch case

Last week, I gave a talk at a research seminar organized by the University of Manchester, Centre for the History of Science, Technology and Medicine. The talk was based on exploratory archival research on the introduction of bibliometrics in Dutch biomedicine.

Why did performance-based measurement catch on so quickly in the Dutch medical sciences? Of course this is part of a larger picture: from the 1980s onward, an unprecedented growth of evaluation institutions and procedures took place in all scientific research. In tandem with the development and first applications of performance indicators, discussions about “gaming the system” surfaced (cf. MacRoberts and MacRoberts 1989). In the talk, I presented results from a literature search on how strategic behavior has been discussed in international peer-reviewed and professional medical journals from the 1970s onwards. The authors’ main concerns boiled down to three things. The first was irresponsible authorship (co-authorship, salami slicing, etc.). Authors also signaled a growing pressure to publish and discussed its relationship to scientific fraud. The third concern had to do with the rise of a group of evaluators with growing influence but seemingly without a clear consensus about their own professional standards. Typically, these concerns started to be voiced from the beginning of the 1980s onwards.

Around the same time, two relevant developments took place in the Netherlands. First of all, the earliest Dutch science policy document on assessing the sciences was published. It focused entirely on the medical sciences (RAWB 1983). The report was promoted as a model for priority setting in scientific research, and was the first to overthrow internal peer review as the sole source for research assessment by including citation analysis (Wouters 1999). Secondly, a new allocation system was introduced at our own university here in Leiden in 1975. Anticipating a move at the national level from block grant funding to separate funding channels for teaching and research, a procedure was introduced that basically forced faculties to present existing and new research projects for assessment to a separate funding channel, in order to avoid a decrease in research support in the near future. Ton van Raan, future director of CWTS, outlined specific methods for creating this separate funding model in the International Journal of Institutional Management in Higher Education (Van Raan & Frankfort 1980). Van Raan and his co-author – at the time affiliated with the university’s Science Policy Unit – argued that Leiden should move away from an ‘inefficient’ allocation system based on institutional support via student numbers, because this hindered scientific productivity and excellence. According to Van Raan [personal communication], this so-called ‘Z-procedure’ created the breeding ground for the establishment of a bibliometric institute in Leiden some years later.

Leiden University started the Z-procedure project inventories in ’75, dividing projects into those inside and those outside of the priority areas. The university started to include publication counts from 1980 onwards. As far as the medical sciences are concerned, the annual Scientific Reports of ’78 to ’93 show that their total number of publications rose from 1401 in 1981 to 2468 in 1993. This number went up to roughly 7500 in 2008 (source: NFU). More advanced bibliometrics were introduced in the mid-80s. This shift from counting ‘brute numbers’ to assembling multidimensional complex operations (cf. Bowker 2005) also entailed a new representation of impact and quality: aggregated and normalized citation counts.
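To give a rough sense of what such aggregated and normalized counts involve, here is a minimal sketch with purely hypothetical numbers – my own illustration, not the actual CWTS methodology or data. The basic idea is to divide each publication’s citation count by the citation rate expected for its field and publication year, and then aggregate the ratios:

    # Minimal sketch of a field-normalized citation indicator (hypothetical data).
    # Each paper's citation count is divided by the expected (world average)
    # citation rate for its field and year; the mean of these ratios gives an
    # aggregated, normalized score in which 1.0 equals the world average.

    papers = [
        # (citations, field, publication year) -- hypothetical records
        (12, "cardiology", 1991),
        (3, "immunology", 1992),
        (25, "cardiology", 1993),
    ]

    # Hypothetical baselines: average citations per paper in each field and year.
    expected = {
        ("cardiology", 1991): 8.0,
        ("immunology", 1992): 5.0,
        ("cardiology", 1993): 10.0,
    }

    def normalized_citation_score(papers, expected):
        ratios = [c / expected[(field, year)] for c, field, year in papers]
        return sum(ratios) / len(ratios)

    print(round(normalized_citation_score(papers, expected), 2))  # prints 1.53

Indicators in this family – such as the ‘crown indicator’ later developed at CWTS – also correct for document type and differ in detail (for instance, in whether one averages the ratios or divides aggregate averages), but the normalization step is essentially this division by a field-specific expected value.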

Back to the larger picture. A growing use of performance indicators from the 80s onwards can be ascribed to, among other things: an increased economic and social role of science and technology; an increase in the scale of research institutes; the limitations and costs of peer review procedures; and a general move towards the formal evaluation of professional work. It is usually argued that, under the influence of the emergence of new public management and neoliberalism, authorities decided to model large parts of the public sector, including higher education, on control mechanisms that were formerly reserved for the private sector (cf. Power 1999; Strathern 2000). It is necessary to dig deeper into the available historical sources to find out whether these explanations suffice. If so, aggregated citation scores may have come to prosper in a specific political economy that values efficiency, transparency and quality assurance models.

In the discussion after my talk, Vladimir Jankovic suggested that I also look into Benjamin Hunt’s The Timid Corporation (2003). Hunt argues that while neoliberalism is often associated with economically motivated de-regulation, what has in fact been going on from the 80s onward is socially oriented regulation of individuals and groups, aimed at taming the risks and impact of change through formal procedures. Two additional ways of analyzing the rise of such a precautionary approach may be found in the work of sociologists Frank Furedi (“Culture of Fear” 1997) and Ulrich Beck (“Risk Society” 1992). When aversion to risk and fear of change come to be perceived as pervasive, a greater reliance on procedures and performance indicators may increasingly be seen as a means to control openness and uncertainty. It is worth exploring whether these sociological explanations can help us explain some of the dynamics in biomedicine I alluded to above. It may be a first step in finding out whether there is indeed something about medical research that makes it particularly receptive to metrics-based research evaluation.

Seminar on Intellectual Property, Science, Patenting and Publishing

We hereby cordially invite you to this afternoon’s open seminar at the Centre for Science and Technology Studies (CWTS), Leiden University.

Time: May 11th, 15.00-16.30h
Location: CWTS Common room, Willem Einthoven building, 5th floor, Wassenaarseweg 62a, Leiden

Title:
“In the Interest of Disinterestedness: the Intellectual Properties of Marie Curie”
Prof. Eva Hemmungs Wirtén (Uppsala University, Sweden)

Abstract:

In 1923, Marie Curie wrote:

“My husband, as well as myself, always refused to draw from our discovery any material profit. We have published, since the beginning, without any reserve, the process that we used to prepare radium. We took out no patent and we did not reserve any advantage in any industrial exploitation. No detail was kept secret, and it is due to the information we gave in our publications that the industry of radium has been rapidly developed. Up to the present time this industry hardly uses any methods except those established by us.”

Five sentences, four actions, one result. This is as close as we get to a last will and testament over the principles, acts, and legacy Marie Curie wanted readers to associate with the Curie name. Though brief and clipped in style, the statement works to her advantage, deploying an efficient rhetorical strategy in which statements of action immediately follow upon statements of non-action. Yes, material profit was refused, but on the other hand publishing took place without reserve. No advantage was reserved in industrial application, but no detail was kept secret and information was given freely. Finally, and interestingly enough considering their avowed non-proprietary stance and negation of patenting, the result of their actions is not the opening up of new scientific frontiers, but the blossoming of a radium industry. But as she clearly demarcated what she and her husband did or did not do, she provided more than a snapshot representation of their particular mindset. She indicated the presence of a structural and ongoing tension in science between a gift/market dichotomy, between two distinct systems of credit and reward. This tension is at the heart of my talk, which focuses on the two specific materialities, the two textual expressions that Marie Curie placed on either side of the gift/market precipice: patents and publications. Patents represented an “interested” perspective where you “reserved advantage.” Choosing to “publish without reserve” and keeping “no detail secret” instead epitomized the values of disinterestedness. In approaching this formative representational dichotomy between patenting and publishing in the making of the Curie persona and myth, my aim is to consider what purposes are served by keeping them on their separate ledges, which, as a consequence, means understanding something of where and how they converge. As I hope to be able to show, the gifting/patenting of radium took place on a decidedly more hybrid territory than Curie’s quote implies.

Bio

Eva Hemmungs Wirtén is Professor in Library and Information Science and Associate Professor [Docent] in Comparative Literature at the Department of ALM, Uppsala University, Sweden. She is the author of two peer-reviewed monographs published by the University of Toronto Press: No Trespassing: Authorship, Intellectual Property Rights, and the Boundaries of Globalization (2004) and Terms of Use: Negotiating the Jungle of the Intellectual Commons (2008). Recent articles include “A Diplomatic Salto Mortale: Translation Trouble in Berne, 1884-1886” in Book History (14) 2011, and “Colonial Copyright, Postcolonial Publics: the Berne Convention and the 1967 Stockholm Diplomatic Conference Revisited” in SCRIPTed, A Journal of Law, Technology & Society, December 2010 (7) 3. Forthcoming in 2012 are two book chapters: “Plants, Pills, and Patents: Circulating Knowledge” in Intellectual Property and Emerging Biotechnologies (eds. Matthew Rimmer and Alison McLennan) and “Swedish Subtitling Strike Called Off! Fan-to-Fan Piracy, Translation, and the Primacy of Authorization” in Amateur Media: Social, Cultural and Legal Perspectives (eds. Dan Hunter, Ramon Lobato, Megan Richardson and Julian Thomas). She is currently funded by HERA (Humanities in the European Research Area) and is writing a book, due for completion in the summer of 2013, preliminarily entitled Making Marie Curie: Intellectual Property, Science, and the Power of Print.

For more information, please see http://www.socialsciences.leiden.edu/cwts/news/cwts-seminar-20120511.html

What kind of societal relevance do we want?

The question of the societal relevance and economic impact of research is increasingly put forward in peer review of research projects, annual appraisal interviews, and institutional research assessment exercises. This is part of a more generic trend in accountability practices in a variety of societal sectors and in the way strategic intelligence is managed in business processes. An important task for CWTS will be to contribute actively to the build-up of conscientious criteria for this relatively new research assessment module. CWTS recently hired Dr. Ingeborg Meijer – previously senior consultant at Technopolis and scientific officer at the Advisory Council on Health Research – to take on this task. Ingeborg presented her plans at last week’s CWTS research seminar.

The increasing focus on the societal impact of research has created a serious problem for both researchers and evaluators, because these wider impacts of the outcomes of research (different from the narrower research output in the form of publications) are very difficult to prove and evaluate. This is mainly caused by the complex nature of the interactions between academia, industry, and the public sector. There is no straightforward way out of this problem. Assessing the social, economic, cultural and ecological impact of scientific research is not simply a matter of developing performance indicators for ‘societally relevant’ research activity and an accompanying technological infrastructure for data collection. The methods and techniques for evaluating societal relevance will themselves also affect how ‘societal impact’ is defined and operationalized.

Using indicators and methods for research assessment is not merely a descriptive but also a prescriptive practice. Bear in mind some of the perverse effects of quantitative performance indicators for scientific impact: in some fields the citation culture seems to have resulted in an unhealthy interest in one-dimensional output measures such as the number of articles published in high-impact journals and the number of times these articles are cited. If we take seriously the idea that research assessment is a social technology, we should also acknowledge these undesirable effects. It may indeed be beneficial for researchers if there is more balance in the types of activities they are held accountable for. As Ingeborg pointed out, making visible the ‘societally relevant’ work researchers are already doing (by collecting data on the web or by asking researchers to list activities, for instance) is a promising start. In addition, and considering the performative effects of indicators, policy makers, researchers and evaluation officers should also develop an overarching vision on the kinds of work they deem crucial and ‘societally relevant’. The activities that are currently being mapped out are undertaken within (and will therefore reflect) the parameters of the present evaluation system, which lean towards counting international peer-reviewed articles. Perhaps researchers should be encouraged to develop a much more variegated set of activities than they are currently receiving credit for.

Perspectives on computer simulation and data visualization

When it comes to critical analysis of the role of computers, data visualization, simulation and modeling in the sciences, much can be learned from humanities scholars. I’m currently teaching a course on the role of computer-generated images in contemporary science and visual culture at Utrecht University. Yesterday I learned that the New Media department is hosting two very interesting events. Today, Tuesday October 18, there’s a workshop on software applications as active agents in shaping knowledge. The two keynote speakers are Dr Eckhart Arnold (University of Stuttgart), an expert in the field of simulation technologies, and Dr Bernhard Rieder (University of Amsterdam), who researches how computers and software organize knowledge.

A week later, on October 25, Setup will host an event on data visualization at the Wolff Cinema movie theatre in Utrecht. Some of the most striking recent data visualization projects will be displayed on screen, and the following questions will be addressed: What makes data visualizations so appealing? Do they bring across the same message as the ‘raw’ data they originate from? Ann-Sophie Lehmann (associate professor of New Media and Art History, UU) will discuss the visualizations and shed light on some of the effects they have on viewers. One question that came to my mind is what this particular context (a movie theater) does to the visualizations and their reception, compared to a web-based interaction on a laptop or PC, for instance.
