Ethics and misconduct – Review of a play organized by the Young Academy (KNAW)

This is a guest blog post by Joost Kosten. Joost is a PhD student at CWTS and a member of the EPIC working group. His research focuses on the use of research indicators from the perspective of public policy. Joost obtained an MSc in Public Administration (Leiden University) and was also trained in Political Science (Stockholm University) and Law (VU University Amsterdam).

Scientific (mis)conduct – The sins, the drama, the identification

On Tuesday 18 November 2014, the Young Academy of the Royal Netherlands Academy of Arts and Sciences (KNAW) organized a performance of the play Gewetenschap by Tony Maples at Leiden University. Pandemonia Science Theater is currently touring the Netherlands to perform the piece at several universities. Gewetenschap was inspired by troubles concerning ethics and integrity that recently occurred in Dutch science and scholarship. Although these troubles involved grave violations of the scientific code of conduct (i.e., the cardinal sins of fraud, fabrication, and plagiarism), the play focuses on common dilemmas in a researcher’s everyday life. The title Gewetenschap is an invented word that combines the Dutch words geweten (conscience) and wetenschap (science).

The playwright used confidential interviews with members of the Young Academy to gain insight into the ethical dilemmas researchers most frequently have to deal with. In the play, Professor Karin de Zwaan is a research group leader who has hardly any time to do research herself. She puts much effort into securing grants, attracting new students, and organizing her research group. Post-doc Jeroen Dreef is a very active researcher who does not have enough time to take his organizational responsibilities seriously; a tenure-track position is all he wants. Given their other pressing activities, Karin and Jeroen hardly have any time to supervise PhD student Lotte, and one could question the kind of support they do give her.

Judging by the reactions to several scenes, the audience clearly recognized the topics presented. Afterwards, the dilemmas touched upon during the play were presented for discussion by Prof. Bas Haring. The audience discussed the following topics:

  • Is there a conflict between the research topics a researcher is personally interested in and what the research group expects him or her to work on?
  • In one of the scenes, the researchers were delighted by the acceptance of a publication. Haring asked whether that is “natural behaviour”: shouldn’t a researcher be happy with good results rather than with a publication being accepted? One of the participants replied that a publication functions as a reward.
  • What do you do with your data? Is endlessly applying a diversity of analysis methods until you find nice results a responsible approach?
  • What about impact factors (IF)? Bas Haring remarked that his own IF is 0: “Do you think I am an idiot?” What role do numbers such as the IF play in your opinion of colleagues? Opinions varied considerably. An early career researcher said that everyone knows these numbers are nonsense. An experienced scientist pointed out that there is a correlation between scores and quality. Someone else expressed optimism, expecting that the focus on numbers will be over within ten years, which prompted another to respond that there was competition in the past too, but of a different kind.
  • When is someone a co-author? This question resulted in a lively debate. Apparently, there are considerable differences from field to field. In the medical fields, a co-authorship can be a way to express gratitude to people who have played a vital role in a research project, such as those who were able to recruit experimental subjects. In this way, a co-authorship becomes a tradeable commodity. A professor of medicine pointed out that in his field the position of co-authorships on a curriculum vitae can be used to trace the development of a researcher’s status, and can thus serve as a criterion for judging grant proposals: a good researcher should start with co-authorships in the first position, later hold co-authorships somewhere between the first and last author, and end his career with papers on which he is the last author. In other words, the further a career has progressed, the closer the author’s name should be to the end of the author list. Another participant stated that one can deal with co-authorships in three different ways: (1) co-authors always take full responsibility for everything in the paper; (2) similar to the credits at the end of a movie, papers make clear what each co-author’s contribution was; (3) only those who actually contributed to writing the paper can be co-authors. The participant admitted that this last proposal works in his own field but might not work in others.
  • Can a researcher exaggerate his findings when presenting them to journalists? Should you keep control over a journalist’s work to prevent him from presenting things differently? Is it acceptable to present untrue information to support your case, just because a proper scientific argument would be too complex for the general public?
  • Is it acceptable to present your work as having more societal relevance than you really expect it to have? One reaction was that researchers are forced to express the societal relevance of their work when they apply for a grant, whereas by the very nature of scientific research it is hardly possible to indicate clearly what society will gain from the results.
  • What does a good relationship between a PhD student and a supervisor look like? What is a good balance between serving the interests of PhD students, serving organizational interests (e.g. securing the future of the organization by attracting new students and grants), and the researcher’s own interests?

The discussion did not touch upon the following dilemmas, which were also presented in Gewetenschap:

  • To what extent are the requirements for grant proposals contradictory? On the one hand, researchers are expected to think ‘out of the box’, while on the other hand they have to meet a large number of requirements. Moreover, should one propose new ideas, with the risks they entail, or is it better to stay on the beaten path in order to guarantee success?
  • Should colleagues who did not show you respect be given the same treatment if you get the chance to review their work? Should you always judge scientific work on its merits? Are there any principles of ‘due process’ that should guide peer review?
  • Who owns the data if someone who contributed to them moves to another research group or institute?


How does science go wrong?

We are happy to announce that our abstract has been accepted for the 2014 Conference of the European Consortium for Political Research (ECPR), which will be held in Glasgow from 3 to 6 September. Our paper has been selected for a panel on ‘The role of ideas and indicators in science policies and research management’, organised by Luis Sanz-Menéndez and Laura Cruz-Castro (both at CSIC-IPP).

Title of our paper: How does science go wrong?

“Science is in need of fundamental reform.” In 2013, five Dutch researchers took the lead in what they hope will become a strong movement for change in the governance of science and scholarship: Science in Transition (SiT). SiT appears to voice concerns, also heard beyond national borders, about the need for change in the governance of science (cf. The Economist, 19 October 2013; Times Higher Education, 23 January 2014; Nature, 16 October 2013; Die Zeit, 5 January 2014). One of the most hotly debated concerns is quality control, which encompasses the implications of a perceived increase in publication pressure, purported flaws in the peer review system, impact factor manipulation, irreproducibility of results, and the need for new forms of data quality management.

One could argue that SiT fell on fertile ground. In recent years, a number of severe fraud cases drew attention to possible ‘perverse effects’ in the management system of science and scholarship. Partly due to the juicy aspects of most cases of misconduct, these debates tend to focus on ‘bad apples’ and shy away from more fundamental problems in the governance of science and scholarship.

Our paper articulates how key actors construct the notion of ‘quality’ in these debates, and how they respond to each other’s position. By making these constructions explicit, we shift focus back to the self-reinforcing ‘performance loops’ that most researchers are caught up in at present. Our methodology is a combination of the mapping of the dynamics of media waves (Vasterman, 2005) and discourse analysis (Gilbert & Mulkay, 1984).

References

A revolutionary mission statement: improve the world. Times Higher Education, 23 January 2014.

Chalmers, I., Bracken, M. B., Djulbegovic, B., Garattini, S., Grant, J., Gülmezoglu, A. M., & Oliver, S. (2014). How to increase value and reduce waste when research priorities are set. The Lancet, 383(9912), 156–165.

Gilbert, G. N., & Mulkay, M. J. (1984). Opening Pandora’s Box. A Sociological Analysis of Scientists’ Discourse. Cambridge: Cambridge University Press.

Research evaluation: Impact. (2013). Nature, 502(7471), 287.

Rettet die Wissenschaft!: “Die Folgekosten können hoch sein.” Die Zeit, 5 January 2014.

Trouble at the lab. The Economist, 19 October 2013.

Vasterman, P. L. M. (2005). Media-Hype. European Journal of Communication, 20(4), 508–530.

Fraud in Flemish science

Almost half of Flemish medical researchers have witnessed a form of scientific fraud in their direct environment. One in twelve have themselves engaged in data fraud or in “massaging” data to make the results fit the hypothesis. Many mention “publication pressure” as an important cause of this behaviour. This is the outcome of the first public survey about scientific fraud among Flemish medical researchers. The survey was conducted in November and December 2012 by the journal Eos. Joeri Tijdink, who had conducted a similar survey among medical professors in the Netherlands, supervised the Flemish survey.

It is not clear to what extent the survey results are representative of the conduct of all medical researchers in Flanders. The survey was distributed through the deans of the medical faculties in the form of an anonymous questionnaire. The response rate was fairly low: 19% of the 2,548 researchers responded, and 315 (12%) filled in the questionnaire completely. Yet the results indicate that fraud may be a much more serious problem than is usually acknowledged in the Flemish scientific system. Since the installation of the Flemish university committees on scientific integrity, no more than 4 cases of scientific misconduct have been recognized (3 involved plagiarism; 1 researcher committed fraud). This is clearly lower than expected. The survey, however, consistently reports a higher incidence of scientific misconduct than comparable international surveys do. For example, having witnessed misconduct is reported by 14% of researchers according to a meta-analysis by Daniele Fanelli, whereas in Flanders this is 47%. Internationally, 2% of researchers admit to having been involved in data massage or fraud themselves, whereas in Flanders this is 8%. The discrepancy can be explained in two ways. One is that the university committees are not yet effective in bringing the truth to light. The other is that this survey is biased towards researchers who have witnessed misconduct in some way. Given that both explanations seem plausible, the gap between the survey results and the formal record of misconduct in Flanders may best be explained by a combination of both mechanisms. After all, it is hard to understand why Flemish medical researchers would be more (or less) prone to misconduct than medical researchers in, say, the Netherlands, the UK, or France.
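To put the reported figures side by side, here is a minimal back-of-the-envelope sketch in Python. It uses only the numbers mentioned above; the respondent count of roughly 484 is derived from the 19% response rate and is an illustrative estimate, not a figure published by Eos.

```python
# Back-of-the-envelope check of the survey figures reported above.
invited = 2548            # researchers who received the questionnaire
response_rate = 0.19      # reported share that responded
complete = 315            # respondents who filled in the questionnaire completely

respondents = round(invited * response_rate)   # ~484 (derived, not reported by Eos)
complete_rate = complete / invited             # ~0.12, the 12% mentioned in the text

# Reported incidence in Flanders vs. comparable international surveys
# (Fanelli, 2009 meta-analysis).
witnessed = {"Flanders": 0.47, "International": 0.14}
admitted = {"Flanders": 0.08, "International": 0.02}

print(f"Estimated respondents: {respondents} ({response_rate:.0%} of {invited} invited)")
print(f"Complete responses: {complete} ({complete_rate:.0%} of invited)")
for group in witnessed:
    print(f"{group}: witnessed misconduct {witnessed[group]:.0%}, "
          f"admitted involvement {admitted[group]:.0%}")
```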

According to Eos, publication pressure is one of the causes of misconduct. This remains to be proven. However, both in the earlier survey by Tijdink and Smulders and in this survey, a large number of researchers mention “publication pressure” as a driving factor. As has been argued in the Dutch debate about the fraud committed by psychologist Diederik Stapel, mentioning “publication pressure” as a cause may be motivated by a desire for legitimation. After all, all researchers are under pressure to publish on a regular basis, while only a small minority is involved in misconduct (as far as we now know). So the response may be part of a justification discourse rather than a causal analysis. My own intuition is that the problem is not publication pressure but reputation pressure, a subtle but important difference. Nevertheless, if a large minority of researchers (for example, 47% of the Flemish respondents) point to “publication pressure” as a cause of misconduct, we may have a serious problem in the scientific system, whether or not these researchers are right. A problem that can no longer be ignored.

Literature:

Fanelli, D. (2009). How Many Scientists Fabricate and Falsify Research? A Systematic Review and Meta-Analysis of Survey Data. PLoS ONE, 4(5), e5738. doi:10.1371/journal.pone.0005738

Tijdink, J. K., Vergouwen, A. C. M., & Smulders, Y. M. (2012). Ned Tijdschr Geneeskd, 156, A5715.