How does science go wrong?

We are happy to announce that our abstract has been accepted for the 2014 Conference of the European Consortium for Political Research (ECPR), which will be held in Glasgow from 3 to 6 September. Our paper has been selected for a panel on ‘The role of ideas and indicators in science policies and research management’, organised by Luis Sanz-Menéndez and Laura Cruz-Castro (both at CSIC-IPP).

Title of our paper: How does science go wrong?

“Science is in need of fundamental reform.” In 2013, five Dutch researchers took the lead in what they hope will become a strong movement for change in the governance of science and scholarship: Science in Transition. SiT appears to voice concerns about the need for change in the governance of science that are heard well beyond national borders (cf. The Economist 19 October 2013; THE 23 Jan. 2014; Nature 16 Oct. 2013; Die Zeit 5 Jan. 2014). One of the most hotly debated concerns is quality control, which encompasses the implications of a perceived increase in publication pressure, purported flaws in the peer review system, impact factor manipulation, irreproducibility of results, and the need for new forms of data quality management.

One could argue that SiT landed on fertile ground. In recent years, a number of severe fraud cases drew attention to possible ‘perverse effects’ in the management system of science and scholarship. Partly due to the juicy aspects of most cases of misconduct, these debates tend to focus on ‘bad apples’ and shy away from more fundamental problems in the governance of science and scholarship.

Our paper articulates how key actors construct the notion of ‘quality’ in these debates, and how they respond to each other’s positions. By making these constructions explicit, we shift the focus back to the self-reinforcing ‘performance loops’ that most researchers are caught up in at present. Our methodology combines the mapping of media-wave dynamics (Vasterman, 2005) with discourse analysis (Gilbert & Mulkay, 1984).

References

A revolutionary mission statement: improve the world. Times Higher Education, 23 January 2014.

Chalmers, I., Bracken, M. B., Djulbegovic, B., Garattini, S., Grant, J., Gülmezoglu, A. M., & Oliver, S. (2014). How to increase value and reduce waste when research priorities are set. The Lancet, 383(9912), 156–165.

Gilbert, G. N., & Mulkay, M. J. (1984). Opening Pandora’s Box: A Sociological Analysis of Scientists’ Discourse. Cambridge: Cambridge University Press.

Research evaluation: Impact. (2013). Nature, 502(7471), 287.

Rettet die Wissenschaft! [Save science!]: “Die Folgekosten können hoch sein” [“The follow-on costs can be high”]. Die Zeit, 5 January 2014.

Trouble at the lab. The Economist, 19 October 2013.

Vasterman, P. L. M. (2005). Media-Hype. European Journal of Communication, 20(4), 508–530.


On exploding ‘evaluation machines’ and the construction of alt-metrics

The emergence of web-based ways to create and communicate new knowledge is affecting long-established scientific and scholarly research practices (cf. Borgman 2007; Wouters, Beaulieu, Scharnhorst, & Wyatt 2013). This move to the web is spawning a need for tools to track and measure a wide range of online communication forms and outputs. By now, there is a large differentiation in the kinds of social web tools (e.g. Mendeley, F1000, Impact Story) and in the outputs they track (e.g. code, datasets, nanopublications, blogs). The expectations surrounding the explosion of tools and big ‘alt-metric’ data (Priem et al. 2010; Wouters & Costas 2012) marshal resources at various scales and gather highly diverse groups in pursuing new projects (cf. Brown & Michael 2003; Borup et al. 2006 in Beaulieu, de Rijcke & Van Heur 2013).

Today we submitted an abstract for a contribution to Big Data? Qualitative approaches to digital research (edited by Martin Hand & Sam Hillyard and contracted with Emerald). In the abstract we propose to zoom in on a specific set of expectations around altmetrics: their alleged usefulness for research evaluation. Of particular interest to this volume is how altmetric information is expected to enable a more comprehensive assessment of (1) social scientific outputs (under-represented in citation databases) and (2) wider types of output associated with societal relevance (not covered in citation analysis and allegedly more prevalent in the social sciences).

In our chapter we address a number of these expectations by analyzing (1) the discourse of the “altmetrics movement” and the expectations and promises formulated by key actors involved in “big data” (including commercial entities), and (2) the construction of these altmetric data and their alleged validity for research evaluation purposes. We will combine discourse analysis with bibliometric, webometric and altmetric methods, letting each set of methods interrogate the other’s assumptions (Hicks & Potter 1991).
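To give a concrete flavour of the bibliometric side of this combination, here is a minimal sketch, assuming a hypothetical dataset with one row per publication (the file name and column names are ours for illustration, not an actual pipeline), of the kind of rank-correlation check that underlies the question of whether altmetric counts track citations (cf. Costas, Zahedi & Wouters, n.d.):

```python
# Minimal sketch, not an actual analysis pipeline: do altmetric mention
# counts track citation counts? The CSV file and its columns are
# hypothetical illustrations.
import pandas as pd
from scipy.stats import spearmanr

# One row per publication: a citation count and an altmetric mention
# count (e.g. Mendeley readers or tweets).
df = pd.read_csv("publications.csv")  # columns: doi, citations, altmetric_mentions

# Spearman rank correlation is the natural choice: both distributions
# are heavily skewed, and only rank agreement matters here.
rho, p = spearmanr(df["citations"], df["altmetric_mentions"])
print(f"Spearman rho = {rho:.2f} (p = {p:.3g}, n = {len(df)})")
```

A weak correlation would suggest that altmetric counts capture something other than citation impact, which is precisely what is at stake in the claim that they enable a more comprehensive assessment.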

Our contribution will show, first of all, that altmetric data do not simply ‘represent’ other types of outputs; they also actively create a need for these types of information. These needs will have to be aligned with existing accountability regimes. Secondly, we will argue that researchers will develop forms of regulation that will partly be shaped by these new types of altmetric information. They are not passive recipients of research evaluation but play an active role in assessment contexts (cf. Aksnes & Rip 2009; Van Noorden 2010). Thirdly, we will show that the emergence of altmetric data for evaluation is another instance (following the creation of the citation indexes and the use of web data in assessments) of transposing traces of communication into a framework of evaluation and assessment (Dahler-Larsen 2012, 2013; Wouters 2014).

By making explicit the implications of transferring altmetric data from the framework of science communication to the framework of research evaluation, we aim to contribute to a better understanding of the complex dynamics in which a new generation of researchers will have to work and be creative.

Aksnes, D. W., & Rip, A. (2009). Researchers’ perceptions of citations. Research Policy, 38(6), 895–905.

Beaulieu, A., de Rijcke, S., & van Heur, B. (2013). Authority and Expertise in New Sites of Knowledge Production. In P. Wouters, A. Beaulieu, A. Scharnhorst & S. Wyatt (Eds.), Virtual Knowledge: Experimenting in the Humanities and the Social Sciences (pp. 25–56). MIT Press.

Borgman, C. L. (2007). Scholarship in the Digital Age: Information, Infrastructure, and the Internet. MIT Press.

Borup, M., Brown, N., Konrad, K., & van Lente, H. (2006). The sociology of expectations in science and technology. Technology Analysis & Strategic Management, 18(3/4), 285–298.

Brown, N., & Michael, M. (2003). A sociology of expectations: Retrospecting prospects and prospecting retrospects. Technology Analysis & Strategic Management, 15(1), 3–18.

Costas, R., Zahedi, Z. & Wouters, P. (n.d.). Do ‘altmetrics’ correlate with citations? Extensive comparison of altmetric indicators with citations from a multidisciplinary perspective.

Dahler-Larsen, P. (2012). The Evaluation Society. Stanford University Press.

Dahler-Larsen, P. (2013). Constitutive Effects of Performance Indicators. Public Management Review, (May), 1–18.

Galligan, F., & Dyas-Correia, S. (2013). Altmetrics: Rethinking the Way We Measure. Serials Review, 39(1), 56–61.

Hicks, D., & Potter, J. (1991). Sociology of Scientific Knowledge: A Reflexive Citation Analysis of Science Disciplines and Disciplining Science. Social Studies of Science, 21(3), 459–501.

Priem, J., Taraborelli, D., Groth, P., & Neylon, C. (2010). Altmetrics: A manifesto. http://altmetrics.org/manifesto/

Van Noorden, R. (2010). Metrics: A profusion of measures. Nature, 465, 864–866.

Wouters, P., & Costas, R. (2012). Users, narcissism and control: Tracking the impact of scholarly publications in the 21st century. Utrecht: SURF Foundation.

Wouters, P. (2014). The Citation: From Culture to Infrastructure. In B. Cronin & C. R. Sugimoto (Eds.), Beyond Bibliometrics: Harnessing Multidimensional Indicators of Scholarly Impact (pp. 48–66). MIT Press.

Wouters, P., Beaulieu, A., Scharnhorst, A., & Wyatt, S. (Eds.) (2013). Virtual Knowledge: Experimenting in the Humanities and the Social Sciences. MIT Press.

Who is the modern scientist? Lecture by Steven Shapin

There are now many historical studies of what have been called scientists’ personæ: the typifications, images, and expectations attached to people who do scientific work. There has been much less interest in the largely managerial and bureaucratic exercises of counting scientists: finding out how many there are, of what sorts, working in what institutions. This talk first describes how and why scientists came to be counted from about the middle of the twentieth century, and then relates those statistical exercises to changing senses of who the scientist was, what scientific inquiry was, and what it was good for.

Here’s more information, including how to register.

Date: Thursday 28 November 2013

Time: 5-7 pm

Place: Felix Meritis (Teekenzaal), Keizersgracht 324, Amsterdam

Update Crafting Your Career (CYC)

Crafting your Career (the event co-organised by CWTS and the Rathenau Instituut, 30 October 2013) is attracting a lot of attention. With only two weeks to go, 173 people have registered (we’re aiming for 200) and over 1,000 people have taken our researcher motivation test. CYC will facilitate a balanced discussion about the pros and cons of recent trends in research evaluation and their effects on scientific research and scientific careers. While we are busy putting together a program leaflet, our moderators are contacting speakers about the details of the interviews and panel debate. Our Rathenau colleagues are working out the details of the ‘fair’ that takes place during the extended break, and Laurens Hessels and I will have a short meeting tomorrow with KNAW president prof. Hans Clevers to discuss his opening address.

One of our speakers, Dr. Ruth Müller, was interviewed by ScienceGuide on the occasion of our event. Here’s what she has to say about how post-docs structure academic careers in the life sciences, and the pressures they are experiencing.

Vacancy post-doctoral researcher

The Centre for Science and Technology Studies of the Faculty of Social Sciences of Leiden University wishes to announce a vacancy for the following position:

POST-DOCTORAL RESEARCHER (38 hours per week)

Vacancy number: 13-062

The Centre for Science and Technology Studies (CWTS)

The Centre for Science and Technology Studies (CWTS) is an interdisciplinary institute at Leiden University. Our research staff comes from many fields, ranging from psychology, political science, literature studies and information science to computer science, economics, physics and chemistry. We study the dynamics of science and its connections to technology and innovation. In other words, we study scientific and scholarly research from a scientific point of view. CWTS uses large databases that enable us to study quantitatively the growth of scientific publishing, patterns of collaboration, the impact of science, and many other aspects of science, such as scholarly communication and evidence-based performance assessment.

Our research is also used to provide high-quality services, via a university-owned company (CWTS BV), to research institutes for evaluating the impact of their publications and their standing in the international scientific community. In addition, we analyse the development of scientific careers and the impact of research assessment on knowledge production, by way of mixed-methods research (including surveys and ethnographic methods).

Since 2012, we have focused our activities and interests within the framework of a new research program (www.cwts.nl/pdf/cwts_research_programme_2012-2015.pdf). CWTS has three chairs for full professors (Scientometrics; Science & Innovation studies; Science policy studies) as well as five working groups on key research themes (Advanced bibliometric methodologies; Evaluation practices in context; Social sciences and humanities; Scientific careers; Societal impact of research). The centre hosts a dynamic group of senior researchers and talented juniors who welcome collaboration with colleagues internationally and nationally. We can accommodate internships and provide students with supervision for Master’s and PhD theses.

Job description

We are inviting applications for a post-doctoral position in our new research program. The post-doc is expected to carry out research in the context of the Evaluation Practices in Context (EPIC) working group at CWTS. This new line of research focuses on the implications of research assessment, and of the performance criteria applied, for scientific and scholarly communication and knowledge production. The post-doc project will be drawn up in close consultation with prof.dr. Paul Wouters (Scientometrics chair) and dr. Sarah de Rijcke (EPIC working group leader). The post-doc will be encouraged to carry out comparative research with other EPIC group members. Results of the research will be disseminated through publications for a range of audiences.

Evaluation Practices in Context (EPIC)

The working group Evaluation Practices in Context (EPIC) examines the politics and practices of research evaluation in connection with contemporary forms of governance of research and scholarship. EPIC combines and contributes to theoretical frameworks and detailed empirical studies from Science and Technology Studies (STS) broadly defined (including scientometrics, and history, sociology and anthropology of science), organizational studies and higher education studies. The working group pays particular attention to the implications of research assessment, and the performance criteria applied, for scientific and scholarly communication and knowledge production. Important STS perspectives that we draw on have demonstrated that ‘science’ and ‘politics’ or ‘knowledge’ and ‘power’ should not be seen as separate spheres of action, but are involved in a constant process of mutual embedding and stabilization. Accordingly, our work analyzes the co-constitution of knowledge in relation to specific epistemic cultures, evaluation systems, publication practices, and governance contexts.

Profile post-doctoral researcher

We are looking for a candidate with a PhD in the social sciences or humanities, preferably in science, technology and innovation studies or related fields (e.g. sociology, law, anthropology, political science, history of science, organizational studies, cultural studies). The candidate must have strong skills in designing, organizing and executing qualitative research, especially interviews and ethnographic fieldwork. Experience with computer-supported qualitative analysis (e.g. ATLAS.ti) is desirable but not necessary. Preference will be given to candidates with an academic drive who can provide clear evidence of, or potential for, international excellence in published research. The candidate should be able to work independently as well as cooperate in an interdisciplinary team. S/he should be fluent in English and have good written and oral communication skills. Fluency in Dutch is considered an asset, but not a condition.

Appointment

We offer a temporary position as a researcher for a period of two years. Depending upon qualifications and experience, the gross monthly salary will be between €3227 and €4418 (scale 11), based on full time employment.

Benefits include pension contribution, annual holiday premium of 8% and an end-of-year premium of 8.3%. Non-Dutch nationals may be eligible for a substantial tax break (30% ruling).

Applicants should have the right to work in the Netherlands for the duration of the contract.

Additional Information

Further information about this position can be obtained from dr. Sarah de Rijcke, tel. +31 71 5276853 (office) or e-mail s.de.rijcke@cwts.leidenuniv.nl.

Application

Letters of application should be accompanied by a full curriculum vitae and two or three references.

Applications should reach the university by March 28, 2013 and can be sent electronically to our Human Resource Department at vacature@fsw.leidenuniv.nl.

When your application reaches us we will send you confirmation by e-mail. If you have not received a confirmation within three days after sending the e-mail, please phone us at +31 71 527 3427.

We will schedule interviews on the 3rd and 10th of April 2013.

Should science studies pay more attention to scientific fraud?

Last week, the Dutch scientific community was rocked by the publication of the final report on the large-scale fraud committed by Diederik Stapel, a former professor of social psychology. Three committees performed an extraordinarily thorough examination of the full scientific publication record produced by Stapel and his 70 co-authors. Stapel was known in the Dutch media as the “golden boy” of social psychology. The scientific establishment was also blinded by his apparent success in producing massive amounts of supposedly ingenious experiments. He was appointed a fellow of the Royal Netherlands Academy of Arts and Sciences (KNAW) early in his career and collected large amounts of grant funding from the Dutch science foundation NWO.

In at least 55 publications, the data were fully or partially fabricated, in a cunning way and since at least 1996. Stapel has cooperated with the investigation, but the report mentions that he “did not recognize himself” in the image the report sketches of a manipulative and at times intimidating schemer. As if to emphasize his role as poseur, Stapel published a book about his fraud the day after the formal report was made public, and last weekend he even started a tour of signing sessions in the most prestigious academic bookshops in the Netherlands. Shamelessness has always been a defining characteristic of con men. An investigation by the Dutch prosecutor is still ongoing to determine whether Stapel can be brought to justice for fraudulent behavior or financial misdemeanors. So it remains to be seen how long he can move about as he pleases.

Perhaps more important than the fraud itself (the report concludes that Stapel did not have much impact on his field) is the conclusion that there is something fundamentally wrong with the research culture in social psychology. On top of the “usual publication bias” (journals prefer positive results over negative results, even when the latter are actually more important), the committees found a strong verification bias. Researchers did everything they could to confirm their hypotheses, including doctoring the data, misrepresenting the experiments, and copying data from one experiment to another. The report also notes a glaring lack of statistical knowledge among co-authors of quantitative research publications. Since the discovery of the Stapel fraud, social psychologists have taken a number of initiatives to remedy the situation, including stricter data and data-sharing protocols and initiatives to promote replication of experiments and secondary data analysis.

The question is whether this is enough. Social psychology is not the only field confronted with large-scale fraud. The damage caused by fraudulent or low-quality research in the medical sciences, for example, may actually be greater. The Erasmus University Rotterdam is now confronted with the gigantic task of checking more than 600 publications written by a cardiac researcher suspected of misconduct, who denies the accusations. Apparently, the peer review system not only fails to discover fraud in social psychology; there is a potentially far bigger problem in the medical and clinical sciences. Anti-fraud measures taken in these fields over the next few years will strongly influence research agendas. It therefore seems natural to expect that science studies experts, specialized in analyzing the politics, culture, and economics of scientific and scholarly research, should be able to make a serious contribution.

Yet this has not happened so far. The key players in the Stapel discovery were the whistle-blowers (three PhD students), ex-presidents of the KNAW, social psychologists, and statistical experts. Science studies experts have not been involved. This is not new: journalists are often more active in discovering fraud than science studies scholars. I do not think this is coincidental. I see a more fundamental and a more practical explanation. The practical one is that science studies researchers often do not have the data needed to play a role in detecting and analyzing fraud. Most steps in the quality control processes of science, based on peer review, are confidential. For example, I once tried to get access to the archive of a scientific journal in order to study the history of that journal, a rather innocent request, and even that was denied. Moreover, quantitative science studies such as citation analysis cannot detect fraud, because effective fraudulent papers are cited in the same way as sound scientific articles. Bibliometrics does not measure quality directly; it basically measures how the scientific community responds to new papers. If a community fails collectively, bibliometrics fails as well.

The more fundamental reason is that constructivism in science studies has developed a strongly neutral attitude (“symmetry”) with respect to the prevailing epistemic cultures. Science studies mostly abstains from a normative perspective and instead tries to analyze how research “really happens”. Since Trevor Pinch’s article on parapsychology in 1979, science studies has questioned the way science and non-science are demarcated by the scientific establishment. Recently, renewed attention has been paid to the ways science is appropriated and steered by powerful political and commercial interests, such as the manipulation of medical research by the pharmaceutical industry. This new emphasis on a more normative research program in science studies may now need to be stimulated further.

In other words, it may make sense for science studies scholars to question their current priorities in the wake of the link between fraud and epistemic cultures. Let me suggest some components of a research agenda. First of all, what kind of phenomenon is scientific fraud actually? When does fraud manifest itself, how is it defined, and by whom? These questions fit comfortably with the dominant constructivist paradigm. Answering them would be an important contribution because there are many grey areas between the formal scientific ideology (such as represented by first year text books) and the actual research practice in a particular lab or institute. Second, we may need to become more normative. How can we detect fraud? What circumstances enable fraud? What kind of configurations of power, accountability and incentives may hinder fraud? I think there is considerable scope for case studies, histories and quantitative research to help tackle these questions.

Quantitative science studies may also contribute. An obvious question is to what extent retracted publications still circulate in the scholarly archive. A more difficult one is whether the combination of citation analysis and full-text analysis may help detect patterns that identify potential fraud cases. Given the role of citation counts in performance indicators such as the Journal Impact Factor and the Hirsch index, we may also want to be more active in detecting “citation clubs”, where researchers set up cartels to boost each other’s citation records. I do not think that purely algorithmic approaches will be able to establish cases of fraud, but they may serve as an information filter that helps zoom in on suspect cases, as the sketch below illustrates.
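To illustrate what such an information filter might look like, here is a minimal sketch, assuming a hypothetical list of (citing author, cited author) pairs extracted from a citation database; the data, names and threshold are illustrative assumptions, not a validated detector:

```python
# Minimal sketch of a "citation club" filter: flag author pairs whose
# combined citation record is dominated by reciprocal citations.
# The input data and the 0.5 threshold are hypothetical illustrations.
from collections import Counter
from itertools import combinations

# Hypothetical input: (citing_author, cited_author) pairs.
citations = [
    ("A", "B"), ("B", "A"), ("A", "B"), ("B", "A"),
    ("C", "A"), ("D", "B"), ("A", "B"),
]

pair_counts = Counter(citations)                     # directed counts per pair
received = Counter(cited for _, cited in citations)  # total citations received

authors = {author for pair in citations for author in pair}
for x, y in combinations(sorted(authors), 2):
    mutual = pair_counts[(x, y)] + pair_counts[(y, x)]
    total = received[x] + received[y]
    reciprocated = pair_counts[(x, y)] > 0 and pair_counts[(y, x)] > 0
    if total > 0 and reciprocated and mutual / total > 0.5:
        # Reciprocal citations dominate this pair's record: a candidate
        # "citation club" worth manual inspection, nothing more.
        print(f"Suspect pair: {x} <-> {y} ({mutual} of {total} citations are mutual)")
```

Any pair flagged this way would still require manual inspection: high reciprocity can also reflect legitimate close collaboration in a small subfield, which is why such scores can filter, but never establish, misconduct.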

Last but not least, it is high time to take a hard look at the evaluation culture in science, the recurring theme of this blog. The Stapel affair shows how the review committees in psychology basically failed to detect fundamental weaknesses in the research culture of social psychology. The report asks whether this may be due to publication pressure, an excuse that Stapel’s co-authors frequently invoked for being sloppy with the quality standards for an article. We know from many areas of science that the pressure to publish as fast as possible is felt acutely by many researchers. I do not think that publication pressure as such is a sufficient explanation for fraud (after all, most researchers are not fraudulent). But there is certainly a problem with the way researchers are held accountable. Formal criteria (how often did you publish in high-prestige journals?) are dominant, at the cost of more substantive criteria (what contribution did you make to knowledge?). Metrics are often used out of context. This evaluation culture should end. We need to go back to meaningful metrics, in which the quality and content of one’s contribution to knowledge become primary again. As Dick Pels formulated it, it is high time to “unhasten science”. At CWTS, we wish to contribute to this goal with our new research program as well as with our bibliometric services.

Literature:

Pels, D. (2003). Unhastening Science: Autonomy and Reflexivity in the Social Theory of Knowledge. Routledge.

Pinch, T. J. (1979). Normal Explanations of the Paranormal: The Demarcation Problem and Fraud in Parapsychology. Social Studies of Science, 9(3), 329–348. doi:10.1177/030631277900900303

Book release

Today we are witnessing dramatic changes in the way scientific and scholarly knowledge is created, codified, and communicated. This transformation is connected to the use of digital technologies and the virtualization of knowledge. In this book, scholars from a range of disciplines consider just what, if anything, is new when knowledge is produced in new ways. Does knowledge itself change when the tools of knowledge acquisition, representation, and distribution become digital? Issues of knowledge creation and dissemination go beyond the development and use of new computational tools. The book, which draws on work from the Virtual Knowledge Studio, brings together research on scientific practice, infrastructure, and technology. Focusing on issues of digital scholarship in the humanities and social sciences, the contributors discuss who can be considered legitimate knowledge creators, the value of “invisible” labor, the role of data visualization in policy making, the visualization of uncertainty, the conceptualization of openness in scholarly communication, data floods in the social sciences, and how expectations about future research shape research practices. The contributors combine an appreciation of the transformative power of the virtual with a commitment to the empirical study of practice and use.

Edited by Paul Wouters, Anne Beaulieu, Andrea Scharnhorst and Sally Wyatt.
