Journal ranking biased against interdisciplinary research

The widespread use of journal rankings in research institutes and universities puts interdisciplinary research at a disadvantage in assessment exercises such as the British Research Excellence Framework. This is the conclusion of a paper presented at the 2011 Annual Meeting of the Society for Social Studies of Science in Cleveland (US) by Ismael Rafols (SPRU, Sussex University), Loet Leydesdorff (University of Amsterdam), and Alice O’Hare, Paul Nightingale and Andy Stirling (all SPRU, Sussex University). The study provides the first quantitative evidence that researchers working at the boundaries between research fields may be disadvantaged compared with their monodisciplinary colleagues. It also argues that citation analysis, if properly applied, is a better measurement instrument than a ranked journal list.

The study is highly relevant for research management at universities and research institutes. Journal lists have become a popular management tool: in many departments, researchers are required to publish in a limited set of journals, and some departments, for example in economics, have even been reorganized on the basis of their staff’s publications in such journals. How these lists are composed varies: sometimes a group of experts decides whether a journal belongs on the list, sometimes the Journal Impact Factor published by ISI/Thomson Reuters is the determining factor.

The study by Rafols et al. analyzes one such list: the ranked journal list used by the British Association of Business Schools. The list is based on a mix of citation statistics and peer review, and ranks scholarly journals in business and management studies in five categories, from “modest standard journals” in category 1 up to “world elite journals” in category 4*. This scheme mirrors the categories researchers know from the Research Assessment Exercise. The ranked journal list is meant to serve a variety of management goals: it advises researchers on the best venue for their manuscripts, libraries are supposed to use it in their acquisition policies, and, last but not least, it is used in research assessments and personnel evaluations. Although the actual use of the list is an interesting research topic in itself, we can safely assume that it has had a serious impact on researchers in the British business school community.

The study shows, first of all, that a journal’s position in the ranked list correlates negatively with its degree of interdisciplinarity: the higher the ranking, the narrower the journal’s disciplinary focus. (The study used several indicators of interdisciplinarity, each capturing a different aspect of what it means to be interdisciplinary; a sketch of one such indicator follows below.) Rewarding researchers for publishing primarily in highly ranked journals may therefore discourage interdisciplinary work.
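To give a flavour of what such an indicator looks like, the sketch below computes the Rao-Stirling diversity index (Stirling 2007), one widely used measure in this literature: it combines how many subject categories a journal cites, how evenly its references are spread over them, and how cognitively distant those categories are. This is a minimal illustrative sketch in Python, not the authors’ code; the category names and distance values are invented for the example.

    from itertools import combinations

    def rao_stirling(proportions, distance):
        """Rao-Stirling diversity: sum of p_i * p_j * d_ij over category pairs.

        proportions: dict mapping a subject category to the share of a
            journal's cited references in that category (shares sum to 1).
        distance: dict mapping a frozenset of two categories to their
            cognitive distance in [0, 1], e.g. 1 minus the cosine
            similarity of the categories' global citation profiles.
        """
        # Summed over unordered pairs; some papers sum ordered pairs,
        # which doubles the value but changes no ranking.
        return sum(
            proportions[a] * proportions[b] * distance[frozenset((a, b))]
            for a, b in combinations(proportions, 2)
        )

    # Hypothetical journal citing three subject categories.
    p = {"Management": 0.6, "Economics": 0.3, "Psychology": 0.1}
    d = {
        frozenset(("Management", "Economics")): 0.4,
        frozenset(("Management", "Psychology")): 0.7,
        frozenset(("Economics", "Psychology")): 0.8,
    }
    print(rao_stirling(p, d))  # ~0.138; higher = more interdisciplinary

Note that a journal concentrating all its references in a single category scores 0 on this measure, however high it sits on a ranked journal list.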

The study confirms this effect by comparing business and management studies with innovation studies. Both fields are subject to the same evaluation regime in the Research Excellence Framework, and intellectually they are very close. They differ markedly, however, in their interdisciplinary nature: researchers in business schools show more traditional publication behaviour than their innovation studies colleagues, and the research units in innovation studies are consistently more interdisciplinary than the business and management schools.

Of course, publication behaviour is shaped by a variety of influences. Peer review may be biased against interdisciplinary work because its quality is harder to assess, and many top journals are not eager to publish it. This study is the first to show convincingly that such existing biases are made even stronger by the use of ranked journal lists as a research management tool. The study demonstrates this by comparing an assessment based on the ranked journal list with one based on citation analysis: in the latter, innovation studies research is not penalized for its more interdisciplinary character, as it is in an assessment based on the journal list. The paper concludes with a discussion of the negative implications for research groups working at the boundaries of different fields in terms of funding and acquiring resources.

The paper will be published in a forthcoming issue of Research Policy and received the best paper award at the Atlanta Conference on Science and Innovation Policy in September 2011.

Reference: Ismael Rafols, Loet Leydesdorff, Alice O’Hare, Paul Nightingale, & Andy Stirling, “How journal rankings can suppress interdisciplinary research: A comparison between innovation studies and business & management,” paper presented at the Annual Meeting of the Society for Social Studies of Science (4S), Cleveland, OH, November 2011; available at http://arxiv.org/abs/1105.1227.
