Viridiana Jones and the privatization of US science

Recently a deluge of books has appeared on the commercialization of academia and the political climate that allegedly enabled this development: neoliberalism. Examples include If You're So Smart, Why Aren't You Rich? (Lorentz 2008), Weten is meer dan Meten (Reijngoud 2012), The Fall of the Faculty (Ginsberg 2011), The Commodification of Academic Research (Radder (ed.) 2010), How Economics Shapes Science (Stephan 2012), and Creating the Market University (Popp Berman 2011). A recent book in this trend that I would like to bring to the attention of our blog readers is Philip Mirowski's Science-Mart: Privatizing American Science (Harvard UP, 2011).

Mirowski is Carl Koch Professor of Economics and the History and Philosophy of Science at the University of Notre Dame. He is the author of The Effortless Economy of Science? (2004), Science Bought and Sold (with Esther-Mirjam Sent, eds., 2002), The Road from Mont Pèlerin: The Making of the Neoliberal Thought Collective (with Dieter Plehwe, eds., 2009), and a host of articles on the topic. That Mirowski knows a thing or two about his subject also becomes apparent from his writing: he combines an impressive amount of interdisciplinary knowledge with what he calls "empirical meditations on the state of contemporary science".

I think he successfully counters the shallower explanations for the commercialization of (US) academic research that rely on misunderstood versions of neoliberalism. How? By zooming in on the more subtle conjunctions of circumstances that ultimately led to the establishment of that very hard-to-counter grand narrative called "neoliberalism", and by demonstrating how specific professions, disciplines, and strands of theory abstained from, or could not come up with, an equally convincing alternative to "render the totality of academic life coherent". Occasionally, Mirowski himself also falls into the trap of the attractive overarching narrative, for instance when he describes the recent history of the rise and increasing use of citation analysis and performance indicators in academia as a development from a neutral information tool into a "bureaucratic means of surveillance". He also assumes (and I think this is a simplification) a causal link between privately owned citation data and the erection of a "Science Panopticon". Nonetheless, Science-Mart stands out from a number of the books mentioned above, not least because of Mirowski's daring and ironic tone of voice. (A reference to the first chapter may suffice, in which the author uses a fictional researcher called Viridiana Jones to set the scene of the book.)


Should science studies pay more attention to scientific fraud?

Last week, the Dutch scientific community was rocked by the publication of the final report on the large-scale fraud committed by former professor of social psychology Diederik Stapel. Three committees performed an extraordinarily thorough examination of the full scientific publication record produced by Stapel and his 70 co-authors. Stapel was known in the Dutch media as the "golden boy" of social psychology. The scientific establishment was also blinded by his apparent success in producing large numbers of supposedly ingenious experiments. He was appointed a fellow of the Royal Netherlands Academy of Arts and Sciences (KNAW) early in his career and secured large amounts of funding from the Dutch research funding organisation NWO.

In at least 55 publications the data were fully or partially fabricated, and this was done cunningly, since at least 1996. Stapel has cooperated with the investigation, but the report mentions that he "did not recognize himself" in the image the report sketches of a manipulating and at times intimidating schemer. As if to emphasize his role as poseur, Stapel published a book about his fraud the day after the formal report was made public, and last weekend he even started a tour of signing sessions in the most prestigious academic bookshops in the Netherlands. Shamelessness has always been a defining characteristic of con men. An investigation by the Dutch public prosecutor is still ongoing to determine whether Stapel can be brought to justice for fraudulent behavior or financial misdemeanors. So it remains to be seen how long he will be free to go where he pleases.

Perhaps more important than the fraud itself (the report concludes that Stapel did not have much impact on his field) is the conclusion that there is something fundamentally wrong with the research culture in social psychology. On top of the usual publication bias (journals prefer positive results over negative ones, even when the latter are actually more important), the committees found a strong verification bias: researchers did everything they could to confirm their hypotheses, including selectively editing data, misrepresenting experiments, copying data from one experiment to another, and so on. The report also notes a glaring lack of statistical knowledge among co-authors of quantitative research publications. Since the discovery of the Stapel fraud, social psychologists have taken a number of initiatives to remedy the situation, including strict data-handling and data-sharing protocols and initiatives to promote replication of experiments and secondary data analysis.

The question is whether this is enough. Social psychology is not the only field confronted with large-scale fraud, and the damage done by fraudulent or low-quality research in the medical sciences may actually be greater. The Erasmus University Rotterdam is now confronted with the gigantic task of checking more than 600 publications written by a cardiovascular researcher suspected of misconduct, who denies the accusations. Apparently the system of peer review not only fails to discover fraud in social psychology; there is a potentially far bigger problem in the medical and clinical sciences. The anti-fraud measures that will be taken in these fields over the next few years will strongly influence research agendas. It therefore seems natural to expect that science studies experts, specialized in analyzing the politics, culture, and economics of scientific and scholarly research, should be able to make a serious contribution.

Yet this has not yet happened. The key players in the Stapel discovery were the whistle-blowers (three PhD students), ex-presidents of the KNAW, social psychologists, and statistical experts. Science studies experts were not involved. This is not new: journalists are often more active in discovering fraud than science studies scholars. I do not think this is coincidental; I see both a practical and a more fundamental explanation. The practical one is that science studies researchers often do not have the data needed to play a role in detecting and analyzing fraud. Most steps in the quality control processes of science, based on peer review, are confidential. For example, I once tried to get access to the archive of a scientific journal in order to study the history of that journal, a rather innocent request, and even that was denied. Moreover, quantitative science studies such as citation analysis cannot detect fraud, because effective fraudulent papers are cited in the same way as sound scientific articles. Bibliometrics does not measure quality directly; it basically measures how the scientific community responds to new papers. If a community fails collectively, bibliometrics fails as well.

The more fundamental reason is that constructivism in science studies has developed a strictly neutral attitude ("symmetry") with respect to the prevailing epistemic cultures. Science studies mostly abstains from a normative perspective and instead tries to analyze how research "really happens". Since Trevor Pinch's 1979 article on parapsychology, science studies has questioned the way the scientific establishment demarcates science from non-science. Recently, renewed attention has been paid to the ways science is appropriated and steered by powerful political and commercial interests, such as the manipulation of medical research by the pharmaceutical industry. This new emphasis on a more normative research program in science studies may now need to be stimulated further.

In other words, it may make sense for science studies scholars to question their current priorities in the wake of the link between fraud and epistemic cultures. Let me suggest some components of a research agenda. First of all, what kind of phenomenon is scientific fraud, actually? When does fraud manifest itself, how is it defined, and by whom? These questions fit comfortably within the dominant constructivist paradigm. Answering them would be an important contribution, because there are many grey areas between the formal scientific ideology (as represented, for instance, by first-year textbooks) and actual research practice in a particular lab or institute. Second, we may need to become more normative. How can we detect fraud? What circumstances enable fraud? What configurations of power, accountability, and incentives may hinder it? I think there is considerable scope for case studies, histories, and quantitative research to help tackle these questions, as the sketch below illustrates.
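To make the "detection" part of this agenda concrete, consider one simple form of quantitative screening: an arithmetic consistency check on reported summary statistics. For integer-valued responses (such as Likert-scale items), a true mean can only ever be a whole number divided by the sample size, so some reported means are arithmetically impossible. The sketch below is a minimal illustration in Python; the function name, the example papers, and the values are all hypothetical, invented for illustration rather than taken from any actual investigation.

```python
import math

def mean_is_possible(reported_mean: float, n: int, decimals: int = 2) -> bool:
    """Check whether a reported mean of integer-valued data is
    arithmetically possible for a sample of size n.

    Any true mean must equal k / n for some integer k, so we test
    whether a nearby k / n rounds to the reported mean at the
    reported precision.
    """
    total = reported_mean * n
    # Candidate sums of integer responses closest to the reported mean:
    for k in (math.floor(total), math.ceil(total)):
        if round(k / n, decimals) == round(reported_mean, decimals):
            return True
    return False

# Hypothetical screening of reported means from a set of papers:
reports = [
    {"paper": "A", "mean": 3.14, "n": 25},  # 3.14 * 25 = 78.5 -> impossible
    {"paper": "B", "mean": 3.16, "n": 25},  # 79 / 25 = 3.16   -> possible
]
for r in reports:
    if not mean_is_possible(r["mean"], r["n"]):
        print(f"Paper {r['paper']}: mean {r['mean']} is impossible "
              f"for n = {r['n']} -- worth a closer look")
```

A check like this can never establish fraud on its own (rounding conventions, exclusions, and weighting all produce innocent inconsistencies), but it shows how cheap, automatable filters could direct scarce investigative attention.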

Quantitative science studies may also contribute. An obvious question is to what extent retracted publications still circulate in the scholarly record. A more difficult one is whether the combination of citation analysis and full-text analysis can help detect patterns that identify potential cases of fraud. Given the role of citation counts in performance indicators such as the Journal Impact Factor and the Hirsch index, we may also want to be more active in detecting "citation clubs", in which researchers set up cartels to boost each other's citation records. I do not think that purely algorithmic approaches will be able to establish cases of fraud, but they may serve as an information filter that allows us to zoom in on suspect cases.
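As a rough illustration of what such an algorithmic information filter might look like, the Python sketch below computes the Hirsch index from a list of citation counts and flags author pairs whose mutual citation counts are high in both directions, the elementary signature of a "citation club". The author-level citation data and the threshold are assumptions made up for this example; a real filter would need calibrated thresholds and much richer data.

```python
from itertools import combinations

def h_index(citation_counts: list[int]) -> int:
    """Hirsch index: the largest h such that h papers have >= h citations."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

def flag_citation_clubs(cites: dict[tuple[str, str], int],
                        min_mutual: int = 20) -> list[tuple[str, str]]:
    """Flag author pairs (a, b) where a cites b and b cites a at least
    min_mutual times each. A crude filter, not proof of misconduct."""
    authors = {a for pair in cites for a in pair}
    flagged = []
    for a, b in combinations(sorted(authors), 2):
        if cites.get((a, b), 0) >= min_mutual and cites.get((b, a), 0) >= min_mutual:
            flagged.append((a, b))
    return flagged

# Hypothetical data: cites[(a, b)] = times a's papers cite b's papers.
cites = {("alice", "bob"): 31, ("bob", "alice"): 28,
         ("alice", "carol"): 3, ("carol", "bob"): 1}
print(h_index([12, 9, 7, 7, 4, 1]))  # -> 4 (four papers with >= 4 citations)
print(flag_citation_clubs(cites))    # -> [('alice', 'bob')]
```

Even a filter like this only narrows the haystack: flagged pairs may simply be close collaborators, so each case still requires the kind of qualitative scrutiny of research cultures in which science studies specializes.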

Last but not least, it is high time to take a hard look at the evaluation culture in science, the recurring theme of this blog. The Stapel affair shows that the review committees in psychology have basically failed to detect fundamental weaknesses in the research culture of social psychology. The report asks whether this may be due to publication pressure, an excuse that Stapel's co-authors frequently invoked for being sloppy with the quality standards of an article. We know from many areas of science that the pressure to publish as fast as possible is felt acutely by many researchers. I do not think that publication pressure as such is a sufficient explanation for fraud (after all, most researchers are not fraudulent). But there is certainly a problem with the way researchers are held accountable. Formal criteria (how often did you publish in high-prestige journals?) dominate, at the cost of more substantive criteria (what contribution did you make to knowledge?). Metrics are often used out of context. This evaluation culture should end. We need to go back to meaningful metrics, in which the quality and content of one's contribution to knowledge become primary again. As Dick Pels has formulated it, it is high time to "unhasten science". At CWTS, we wish to contribute to this goal with our new research program as well as with our bibliometric services.

Literature:

Pels, D. (2003). Unhastening science: Autonomy and reflexivity in the social theory of knowledge. Routledge.

Pinch, T. J. (1979). Normal Explanations of the Paranormal: The Demarcation Problem and Fraud in Parapsychology. Social Studies of Science, 9(3), 329–348. doi:10.1177/030631277900900303
