The Facebook-ization of academic reputation? ResearchGate and everyday neoliberalism

Guest blog post by Alex Rushforth


How do we explain the endurance of neoliberal modes of government following the 2008 financial crisis, which could surely have been their death knell? This is the question of a long, brilliant book by historian of science and economics Philip Mirowski, called ‘Never let a serious crisis go to waste’. Mirowski states that explanations of the crisis to date have accounted for only part of the answer. Part of the persistence of neoliberal ideals of personhood and markets comes not just directly from ‘the government’ or particular policies, but also from very mundane practices and technologies which surround us in our everyday lives.

I think this book can tell us a lot about new ways in which our lives as academics are increasingly being governed. Consider web platforms like ResearchGate: following Mirowski, these academic professional networking sites might be understood as technologies of ‘everyday neoliberalism’. These websites share a number of resemblances with social networking sites like Facebook – which Mirowski takes as an exemplar par excellence of this phenomenon. He argues that Facebook teaches its users to become ‘entrepreneurs of themselves’, by fragmenting the self and reducing it to something transient (ideals emanating from the writings of Hayek and Friedman), to be actively and promiscuously re-drawn out of various click-enabled associations (accumulated in indicators like numbers of ‘likes’, ‘friends’, and comments) (Mirowski, 2013, 92).

Let us briefly consider what kind of academic ResearchGate encourages and teaches us to become. Part of the seductiveness of these technologies for academics, I suspect, is that we already compete within reputational work organisations (c.f. Whitley, 2000), where self-promotion has always been part-and-parcel of producing new knowledge. However, such platforms also intensify and reinforce dominant ideas and practices for evaluating research and researchers, which – with the help of Mirowski’s text – appear to be premised on neoliberal doctrines. Certainly the websites build on the idea that the individual (as author) is the central locus of knowledge production. Yet what is distinctly neoliberal, perhaps, is how the individual – through the architecture and design of the websites – experiences their field of knowledge production as a ‘marketplace of ideas’ (on the neoliberal roots of this idea, see Mirowski, 2011).

This is achieved through ‘dashboards’ that display a smorgasbord of numerical indicators. When you upload your work, the interface generates the Impact Factor of the journals you have published in and various other algorithmically generated scores (ResearchGate score, anyone?). There are also social networking elements like ‘contacts’, enabling you to follow and be followed by other users of the platform (your ‘peers’). This in turn produces a count of how well ‘networked’ you are. In short, checking one’s scores, contacts, downloads, views, and so on is supposed to give an impression of an individual user’s market standing, especially as one can compare these with the scores of other users. Regular email notifications provide reminders to continue internalizing these demands and to report back regularly to the system. These scores and notices are not final judgments but a record of accomplishments so far, motivating the user to carry on with the determination to do better. Given the aura of ‘objectivity’ and the ‘market knows best’ mantra these indicators present to us, any ‘failings’ are the responsibility of the individual. Felt anger is to be turned inward on the self, rather than outwards on the social practices and ideas through which such ‘truths’ are constituted. A marketplace of ideas indeed.

Like Facebook, what these academic professional networking sites do seems largely unremarkable and uncontroversial, forming part of background infrastructures which simply nestle into our everyday research practices. One of their fascinating features is to promulgate a mode of power that is not directed to us ‘from above’ – no manager or formal audit exercise is coercing researchers into signing-up. We are able to join and leave of our own volition (many academics don’t even have accounts). Yet these websites should be understood as component parts of a wider ‘assemblage’ of metrics and evaluation techniques with which academics currently juggle, which in turn generate certain kinds of tyrannies (see Burrows, 2012).

Mirowski’s book provides a compelling set of provocations for digital scholars, sociologists of science, science studies, higher education scholars and others to work with. Many studies have documented reforms to the university which have borne various hallmarks of neoliberal political philosophical doctrines (think audits, university rankings, temporary labour contracts, competitive funding schemes and the like). Yet these latter techniques may only be the tip of the iceberg: Mirowski has given us cause to think more imaginatively about how ‘everyday’ or ‘folk’ neoliberal ideas and practices become embedded in our academic lives through quite mundane infrastructures, the effects of which we have barely begun to recognise, let alone understand.


Burrows, R. 2012. Living with the h-index? Metric assemblages in the contemporary academy. Sociological Review, 60, 355-372.

Mirowski, P. 2011. Science-Mart: Privatizing American Science. Cambridge, MA: Harvard University Press.

Mirowski, P. 2013. Never Let a Serious Crisis Go to Waste: How Neoliberalism Survived the Financial Meltdown. New York: Verso.

Whitley, R. 2000. The Intellectual and Social Organization of the Sciences. Oxford: Oxford University Press.



In Search of Excellence? Debating the Merits of Introducing an Elite Dutch University Model

Report by Alex Rushforth

Should the Netherlands strive for excellence in its university system? Will maintaining quality suffice? This was the topic of a recent panel debate at the WTMC annual meeting on 21 November 2014 in De Balie, Amsterdam. Organised and chaired by Willem Halffman, the session focused on an article published by Barend van der Meulen in the national newspaper De Volkskrant, which advocated the need to produce two excellent universities that excel in internationally published rankings, thereby creating a new top tier in the Dutch higher education system.

Both Van der Meulen and Halffman presented their views, with an opposing position also coming from Sally Wyatt. Completing the panel, CWTS’s very own Paul Wouters provided results from recent empirical work on rankings.

Barend van der Meulen’s call for an elite university stemmed from the fact that Dutch universities perennially sit outside the top 50 in the Shanghai and Times Higher Education rankings. For him the message is clear: the Netherlands is repeatedly failing to enhance its reputation as an elite player among global universities, a position which ought to cause concern. Van der Meulen stated that his call for an elite university model is part of a need to create an expanded repertoire of what universities are and what they should do in the Netherlands. The pursuit of rankings through this vehicle is therefore tightly coupled with a rejection of the status quo. Rankings are a social technology which ought to be harnessed for quality improvement and for promoting democratic participation, equipping students and policymakers with tools to make judgments and exert some influence over universities. Alternative modes of evaluation like peer review provide closed systems in which only other academics can make judgments, leaving university activities unaccountable to external modes of evaluation. This ‘ivory tower’ situation, reminiscent of the 1980s, is an image Van der Meulen wishes to escape, as it ultimately damages the credibility and legitimacy of universities. The reliance on public money for research and education makes the moral case for university improvement and accountability particularly pressing in the Netherlands. For Van der Meulen, the ‘good enough’ university (see Wyatt’s argument below) is not enough, given that excellence is imposing itself as a viable and increasingly important alternative.

First to oppose the motion in favour of elite universities was Willem Halffman, whose talk built on a reply co-authored with Roland Bal, also in De Volkskrant. In the talk Halffman questioned the very foundations of the idea that ‘excellence’ ought to be pursued. Drawing unflattering comparisons between the research budget of Harvard University and that of the entire Netherlands, he argued that competing within a global superleague would require a radical expansion of existing research budgets and wage structures across the Dutch university system, which he felt was unrealistic and unreasonable against a backdrop of crisis in public finances. Halffman also questioned the desirability of ranking systems which promote academic stars and reproduce national elites, and the consequences this brings for institutions of science in general and Dutch universities in particular. Football-style league tables provide poor models for rating universities: in contrast with sport, where a winner-takes-all logic is central, universities embody a broad repertoire of societal functions, and it is not clear what ‘winning’ would mean or how it could be made visible and commensurable through performance indicators.

Sally Wyatt recounted the shock she experienced when studying and working in British universities in the 1980s, having grown up in Canada during a period of prosperity and social mobility. These experiences fired a series of warning shots against going down the road of pursuing excellence. When a move to the Netherlands came about in 1999, it promised an oasis away from the turmoil the British university system had faced as a result of Thatcherite policy reforms. In Britain, the emergence of the Research Assessment Exercise (RAE) and its ranking logic had brought a rise in managerial positions and policies, a decline in working conditions, and a widening gender gap. Added to this was a latent class system engrained in the culture of British universities, with dominant elite institutions the site of social stratification reproduced across generations, which rankings merely encourage and reinforce. Despite the erosion of certain positive attributes in universities since her arrival in the Netherlands, Wyatt argued that the Dutch system still preserves enough of a ‘level playing field’ in terms of funding allocation to merit fierce resistance to any introduction of an elite university model. For Wyatt it is sometimes better to promote the ‘good enough’ than to chase an imperialist and elitist vision of ‘excellence’.

Drawing on work on university and hospital rankings carried out with Sarah de Rijcke (CWTS), Iris Wallenburg and Roland Bal (Erasmus MC, Rotterdam), Paul Wouters’ talk advocated more fine-grained STS investigations into the kinds of work that go into rankings, who is doing it, and in what situations. What is at stake in studying rankings, then, is not simply the critique of this or that tool, but a more pervasive (and sometimes invisible) logic and set of practices encountered across public organisations like universities and hospitals. Wouters advocated combining audit society critiques (which tend to be top-down) with STS insights into how ranking is practiced across the various organisational levels of universities. This would provide a more promising platform from which to inform debates of the kind playing out over the desirability of the elite university.

So the contrast between positions was stark. Are rankings – these seemingly ubiquitous ordering mechanisms of contemporary social life – something the Netherlands can afford to back away from in governing its universities? If they are being pursued anyway, shouldn’t policy intervene and assist a more systematic climb up the rankings, enabling more pronounced successes? Or is it necessary to oppose the very notion that the Netherlands needs to excel in a ‘globally competitive’ race, particularly given the seeming arbitrariness of many of the metrics according to which prestige gets attributed via ranking mechanisms? Despite polarization on what is to be done, the potential for extending STS’s conceptual and empirical apparatus to mediate these discussions seemed to strike a chord among panelists and audience alike. No doubt this stimulating debate touches on a set of issues that will not be going away quickly, and one on which the WTMC community is surely well placed to intervene.

Quality in the age of the impact factor

Isis, the most prestigious journal in the history of science, moved house last September: its central office is now located at the Descartes Centre for the History and Philosophy of the Sciences and Humanities at Utrecht University. The Dutch historian of science H. Floris Cohen took up the position of editor-in-chief of the journal. No doubt this underlines the international reputation of the community of historians of science in the Netherlands. Being the editor of the central journal in one’s field is surely a mark of esteem and quality.

The opening of the editorial office in Utrecht was celebrated with a symposium entitled “Quality in the age of the impact factor”. Since the quality of research in history is intimately intertwined with the quality of writing, it seemed particularly apt to call attention to the role of impact factors in the humanities. I used the occasion to pose the question of how we actually define scientific and scholarly quality. How do we recognize quality in our daily practices? And how can this variety of practices be understood theoretically? Which approaches in the field of science and technology studies are most relevant?

In the same month, Pleun van Arensbergen defended a very interesting PhD dissertation dealing with some of these issues, “Talent Proof: Selection Processes in Research Funding and Careers”. Van Arensbergen did her thesis work at the Rathenau Institute in The Hague. The quality of research is increasingly seen as mainly the result of the quality of the people involved. Hence, universities “have openly made it one of their main goals to attract scientific talent” (van Arensbergen, 2014, p. 121). A specific characteristic of this “war for talent” in the academic world is that there is an oversupply of talents and a relative lack of career opportunities, leading to a “war between talents”. The dissertation is a thorough analysis of success factors in academic careers. It is an empirical analysis of how the Dutch science foundation NWO selects early career talent in its Innovational Research Incentives Scheme. The study surveyed researchers about their definitions of quality and talent, and combines this with an analysis of both the outcome and the process of this talent selection. Van Arensbergen paid specific attention to the gender distribution and to the differences between successful and unsuccessful applicants.

Her results point to a discrepancy between the common notion among researchers that talent is immediately recognizable (“you know it when you see it”) and the fact that there are very small differences between candidates that get funded and those that do not. The top and the bottom of the distribution of quality among proposals and candidates are relatively easy to detect. But the group of “good” and “very good” proposals is still too large to be funded. Van Arensbergen and her colleagues did not find a “natural threshold” above which the successful talents can be placed. On the contrary, in one of her chapters they find that researchers who leave the academic system due to a lack of career possibilities regularly score higher on a number of quality indicators than those who are able to continue a research career: “This study does not confirm that the university system always preserves the highly productive researchers, as leavers were even found to outperform the stayers in the final career phase” (van Arensbergen, 2014, p. 125).

Based on the survey, her case studies and her interviews, Van Arensbergen also concludes that productivity and publication records have become rather important for academic careers. “Quality nowadays seems to a large extent to be defined as productivity. Universities seem to have internalized the performance culture and rhetoric to such an extent that academics even define and regulate themselves in terms of dominant performance indicators like numbers of publications, citations or the H-index. (…) Publishing seems to have become the goal of academic labour.” (van Arensbergen, 2014, p. 125). This does not mean, however, that these indicators determine the success of a career. The study questions “the overpowering significance assigned to these performance measures in the debate, as they were not found to be entirely decisive.” (van Arensbergen, 2014, p. 126) An extensive publication record is a condition but not a guarantee for success.

This relates to another finding: the group process of panel discussions is also very important. With a variety of examples, Van Arensbergen shows how the organization of the selection process shapes the outcome. The face-to-face interview of the candidate with the panel, for example, is crucial for the final decision. In addition, the influence of the external peer reports was found to be modest.

A third finding in the talent dissertation is that success in obtaining grants feeds back into one’s scientific and scholarly career. This creates a self-reinforcing mechanism, which the sociologist of science Robert Merton dubbed the Matthew effect, after the biblical passage: “For unto every one that hath shall be given, and he shall have abundance: but from him that hath not shall be taken even that which he hath” (Merton, 1968). Van Arensbergen concludes that this means that differences between scholars may initially be small but will increase over time as a result of funding decisions: “Panel decisions convert minor differences in quality into enlarged differences in recognition.”

Combining these three findings leads to some interesting conclusions about how we actually define and shape quality in academia. Although panel decisions about whom to fund are strongly shaped by the organization of the selection process as well as by a host of other contextual factors (including chance), and although all researchers are aware of the uncertainties in these decisions, this does not mean that these decisions are given less weight. On the contrary, obtaining external grants has become a cornerstone of successful academic careers. Universities even devote considerable resources to making their researchers better able to acquire prestigious grants and external funding in general. Although this is clearly instrumental for the organization, Van Arensbergen thinks that grants have become part of the symbolic capital of a researcher and research group, and she refers to Pierre Bourdieu’s theory of symbolic capital to better understand the implications.

This brings me to my short lecture at the opening of the editorial office of Isis in Utrecht. Although experts on bibliometric indicators generally do not see the Journal Impact Factor as an indicator of quality, socially it seems to partly function as one. But indicators are not alone in shaping how we in practice identify, and thereby define, talent and quality. They flow together with the way quality assurance and measurement processes are organized, the social psychology of panel discussions, the extent to which researchers are visible in their networks, and so on. In these complex contextual interactions, indicators do not determine outcomes; they are ascribed meaning depending on the situation in which the researchers find themselves.

A good way to think about this, in my view, has been developed in the field of material semiotics. This approach, which has its roots in the French actor-network theory of Bruno Latour and Michel Callon, does not accept a fundamental rupture in reality between the material and the symbolic. Reality as such is the result of complex and interacting translation processes. This is an excellent philosophical basis for understanding how scientific and scholarly quality emerges. I see quality not as an attribute of an academic persona or of a particular piece of work, but as the result of the interaction between a researcher (or a manuscript) and the already existing scientific or scholarly infrastructure (e.g. the body of published studies). If this interaction creates a productive friction (meaning that there is enough novelty in the contribution, but not so much that it is incompatible with the already existing body of work), we see the work or scholar as of high quality. In other words, quality simply does not (yet) exist outside of the systems of quality measurement. The implication is that quality itself is a historical category: it is not an invariant, but a culturally and historically specific concept that changes and morphs over time. In fact, the history of science is the history of quality. I hope historians of science will take up the challenge to map this history with more empirical and theoretical sophistication than has been done so far.


Merton, R. K. (1968). The Matthew Effect in Science. Science, 159, 56–62.

Van Arensbergen, P. (2014). Talent proof: Selection processes in research funding and careers. The Hague, Netherlands: Rathenau Institute.


Developing guiding principles and standards in the field of evaluation – lessons learned

This is a guest blog post by professor Peter Dahler-Larsen. The reflections below are a follow-up of his keynote at the STI conference in Leiden (3-5 September 2014) and the special session at STI on the development of quality standards for science & technology indicators. Dahler-Larsen holds a chair at the Department of Political Science, University of Copenhagen. He is former president of the European Evaluation Society and author of The Evaluation Society (Stanford University Press, 2012).

Lessons learned about the development of guiding principles and standards in the field of evaluation – A personal reflection

Professor Peter Dahler-Larsen, 5 October 2014

Guidelines are symbolic, not regulatory

The limited institutional status of guiding principles and standards should be understood as a starting point for the debate. In the initial phases of developing such standards and guidelines, people often have very strong views. But only the state can enforce laws. To the extent that guidelines and standards merely express the official views of a professional association which has no institutional power to enforce them, they will have limited direct consequences for practitioners. The discussion becomes clearer once it is recognized that standards and guidelines thus primarily have a symbolic and communicative function, not a regulatory one. Practitioners will continue to be free to practice however they like, even after guidelines have been adopted.

Design a process of debate and involvement

All members of a professional association should have a possibility to comment on a draft version of guidelines/standards. An important component in the adoption of guidelines/standards is the design of a proper organizational process that involves the composition of a draft by a select group of recognized experts, an open debate among members, and an official procedure for the adoption of standards/guidelines as organizational policy.

Acknowledge the difference between minimum and maximum standards

Minimum standards must be complied with in all situations. Maximum standards are ideal principles worth striving for, although they will not be fully accomplished in any particular situation. It often turns out that there are many maximum principles in a set of guidelines, although that is not what most people understand by “standards.” For that reason I personally prefer the term guidelines or guiding principles rather than “standards.”

Think carefully about guidelines and methodological pluralism

Advocates of a particular method often think that the methodological rules connected to their own method define quality as such across the whole field. For that reason, they are likely to insert their own methodological rules into the set of guidelines. As a consequence, guidelines can be used politically to promote one set of methods or one particular paradigm over another. Great care should be exercised in the formulation of guidelines to make sure that pluralism remains protected. For example, in evaluation the rule is that if you subscribe to a particular method, you should have high competence in the chosen method. But that goes for all methods.

Get beyond the “but that’s obvious” argument

Some argue that it is futile to formulate a set of guidelines because, at that level of generality, it is only possible to state some very broad and obvious principles with which every sensible person must agree. The argument sounds plausible when you hear it, but my experience suggests otherwise, for a number of reasons. First, some people have simply not thought about a very bad practice (for example, doing evaluation without written Terms of Reference). Once you see that someone has formulated a guideline against it, you are likely to start paying attention to the problem. Just because a principle is obvious to some does not mean that it is obvious to all. Second, although there may be general agreement about a principle (such as “do no unnecessary harm” or “take general social welfare into account”), there can be strong disagreement about the interpretations and implications of the principle in practice. Third, a good set of guiding principles will often comprise at least two principles that are somewhat in tension with each other, for example the principle of being quick and useful versus the principle of being scientifically rigorous. Sorting out exactly which kind of tension between these two principles one can live with in a concrete case turns out to be a matter of complicated professional judgment. So, get beyond the “that’s obvious” argument.

Recognize the fruitful uses of guidelines

Among the most important uses of guidelines in evaluation are:

– In application situations, good evaluators can explain their practice with reference to broader principles

– In conferences, guidelines can stimulate insightful professional discussions about how to handle complicated cases

– Books and journals can make use of guidelines as inspiration for developing ethical awareness among practitioners. See, for example, Michael Morris’s work in the field of evaluation (Morris, 2008).

– There is great use of guidelines in teaching and in other forms of socialization of evaluators.

Respect the multiplicity of organizations

If, say, the European Evaluation Society wants to adopt a set of guidelines, it should be respected that, say, the German and the Swiss association already have their own guidelines. Furthermore, some professional associations (say, psychologists) also have guidelines. A professional association should take such overlaps seriously and find ways to exchange views and experiences with guidelines across national and organizational borders.

Professionals are not alone, but relations can be described in guidelines, too

It is often argued that one of the major problems behind bad evaluation practice is the behavior of commissioners. Some therefore think that guidelines describing good evaluation practice are in vain until the behavior of commissioners (and perhaps other users of evaluation) is included in the guidelines, too. However, there is no particular reason why guidelines cannot describe a good relation and good interaction between commissioners and evaluators. Remember, guidelines have no regulatory power: they merely express the official norms of the professional association. Evaluators are allowed to express what they think a good commissioner should or should not do. In fact, explicit guidelines can help clarify mutual and reciprocal role expectations.

Allow for regular reflection, evaluation and revision of guidelines

At regular intervals, guidelines should be debated, evaluated and revised. The AEA guidelines, for example, have been revised and now reflect values regarding culturally competent evaluation that were not in earlier versions. Guidelines are organic and reflect a particular socio-historical situation.


Morris, M. (2008). Evaluation Ethics for Best Practice. Guilford Press.

American Evaluation Association Guiding Principles.

Selling science to Nature

On Saturday 22 December, the Dutch national newspaper NRC published an interview with Hans Clevers, professor of molecular genetics and president of the Royal Netherlands Academy of Arts and Sciences (KNAW). The interview is the latest in a series of public performances following Clevers’ installation as president in 2012, in which he responds to current concerns about the need for revisions in the governance of science. The recent Science in Transition initiative, for instance, stirred quite some debate in the Netherlands, also within the Academy. One of the most hotly debated issues is that of quality control, an issue that encompasses the implications of increasing publication pressure, purported flaws in the peer review system, impact factor manipulation, and the need for new forms of data quality management.

Clevers is currently combining the KNAW-presidency with his group leadership at the Hubrecht Institute in Utrecht. In both roles he actively promotes data sharing. He told the NRC that he stimulates his own researchers to share all findings. “Everything is for the entire lab. Asians in particular sometimes need to be scolded for trying to keep things to themselves.” When it comes to publishing the findings, it is Clevers who decides who contributed most to a particular project and who deserves to be first author. “This can be a big deal for the careers of PhD students and post-docs.” The articles for ‘top journals’ like Nature or Science he always writes himself. “I know what the journals expect. It requires great precision. A title consists of 102 characters. It should be spot-on in terms of content, but it should also be exciting.”

Clevers does acknowledge some of the problems with the current governance of science — the issue of data sharing and mistrust mentioned above, but for instance also the systematic imbalance in the academic reward system when it comes to appreciation for teaching. However, he does not seem very concerned with publication pressure. He has argued on numerous occasions that publishing is simply part of daily scientific life. According to him, the number of articles is not a leading criterion. In most fields, it’s the quality of the papers that matters most. With these statements Clevers clearly puts himself in the mainstream view on scientific management. But there are also dissenting opinions, and sometimes they are voiced by other prominent scientists from the same field. Last month, Nobel Prize winner Randy Schekman, professor of molecular and cell biology at UC Berkeley, declared a boycott on three top-tier journals at the Nobel Prize ceremony in Stockholm. Schekman argued that Nature, Cell, Science and other “luxury” journals are damaging the scientific process by artificially restricting the number of papers they accept, by making improper use of the journal impact factor as a marketing tool, and by depending on editors who favor spectacular findings over the soundness of results.

The Guardian published an article in which Schekman reiterated his critique. The newspaper also made an inventory of the reactions of the editors-in-chief of Nature, Cell and Science. They washed their hands of the matter. Some even delegated the problems to the scientists themselves. Philip Campbell, editor-in-chief of Nature, referred to a recent survey by the Nature Publishing Group which revealed that “[t]he research community tends towards an over-reliance in assessing research by the journal in which it appears, or the impact factor of that journal.”

In a previous blog post we paid attention to a call by Jos Engelen, president of the Netherlands Organization for Scientific Research (NWO), for an in-depth study of the editorial policies of Nature, Science, and Cell. It is worth reiterating parts of his argument. According to Engelen, the reputation of these journals, published by commercial publishers, is based on ‘selling’ innovative science derived from publicly funded research. Their “extremely selective publishing policy” has turned these journals into ‘brands’ that have ‘selling’ as their primary interest, and not, for example, “promoting the best researchers.” Here we see the contours of a disagreement with Clevers. Without wanting to read too much into his statements, Clevers on more than one occasion treats the status and quality of Nature, Cell and Science as apparently self-evident — as the main current of thought would have it. But in the NRC interview Clevers also does something else: by explaining his policy of writing the ‘top-papers’ himself, he reveals that these papers are as much the result of craft, reputation and access as they are an ‘essential’ quality of the science behind them. Knowing how to write attractive titles is a start – but it is certainly not the only skill needed in this scientific reputation game.

The stakes are high with regard to scientific publishing — that much is clear. Articles in ‘top’ journals can make, break or sustain careers. One possible explanation for the status of these journals is of course that researchers have become highly reliant on external funding for the continuation of their research. And highly cited papers in high-impact journals have become the main ‘currency’ in science, as theoretical physicist Jan Zaanen called it in a lecture at our institute. The fact that articles in top journals serve as de facto proxies for the quality of researchers is perhaps not problematic in itself (or is it?). But it certainly becomes tricky if these same journals increasingly treat short-term newsworthiness as an important criterion in their publishing policies, and if peer review committee work also increasingly revolves around selecting those projects that are most likely to have short-term success. Frank Miedema (one of the initiators of Science in Transition), among others, argues in his booklet Science 3.0 that this is the case. Clearly, there is a need for thorough research into these dynamics. How prevalent are they? And what are the potential consequences for longer-term research agendas?

Who is the modern scientist? Lecture by Steven Shapin

There are now many historical studies of what has been called scientists’ personae: the typifications, images, and expectations attached to people who do scientific work. There has been much less interest in the largely managerial and bureaucratic exercises of counting scientists: finding out how many there are, of what sorts, working in what institutions. This talk first describes how and why scientists came to be counted from about the middle of the twentieth century, and then relates those statistical exercises to changing senses of who the scientist was, what scientific inquiry was, and what it was good for.

Here’s more information, including how to register

Date: Thursday 28 November 2013

Time: 5-7 pm

Place: Felix Meritis (Teekenzaal), Keizersgracht 324, Amsterdam

Why do neoliberal universities play the numbers game?

Performance measurement has brought on a crisis in academia. At least, that is what Roger Burrows (Goldsmiths, University of London) claims in a recent article for The Sociological Review. According to Burrows, academics are at great risk of becoming overwhelmed by a ‘deep, affective, somatic crisis’. This crisis is brought on by the ‘cultural flattening of market economic imperatives’ that fires up increasingly convoluted systems of measure. Burrows places this emergence of quantified control in academia within the broader context of neoliberalism. Though this has been argued before, Burrows gives the discussion a theoretical twist. He does so by drawing on Gane’s (2012) analysis of Foucault’s (1978-1979) lectures on the relation between market and state under neoliberalism. According to Foucault, neoliberal states can only guarantee the freedom of markets when they apply the same ‘market logic’ to themselves. In this view, the standard depiction of neoliberalism as passive statecraft is not correct. This type of management is not ‘laissez-faire’, but actively stimulates competition and privatization strategies.

In the UK, Burrows contends, the simulation of neoliberal markets in academia has largely been channelled through the introduction of audits and performance measures. He argues that these control mechanisms become autonomous entities that are increasingly used outside the original context of evaluations, and take on a much more active role in shaping the everyday work of academics. According to Burrows, neoliberal universities provide fertile ground for a “co-construction of statistical metrics and social practices within the academy.” Among other things, this leads to a reification of individual performance measures such as the H-index. Burrows:

“[I]t is not the conceptualization, reliability, validity or any other set of methodological concerns that really matter. The index has become reified; (…) a number that has become a rhetorical device with which the neoliberal academy has come to enact ‘academic value’.” (p. 361)
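For readers unfamiliar with the metric Burrows singles out: the H-index is defined as the largest number h such that a researcher has h papers with at least h citations each. A minimal sketch of that computation (the function name and example citation counts are illustrative, not from Burrows' article):

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited paper first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:  # the rank-th paper still has >= rank citations
            h = rank
        else:
            break
    return h

# Five papers cited [10, 8, 5, 4, 3] times: four papers have >= 4 citations.
print(h_index([10, 8, 5, 4, 3]))  # 4
# One blockbuster paper barely moves the index: h is still 3 here.
print(h_index([25, 8, 5, 3, 3]))  # 3
```

The second example illustrates part of what makes the reification Burrows describes contentious: the index compresses very different citation profiles into a single number.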

Interestingly, Burrows’ line of reasoning can in some respects itself be seen as a result of a broader neoliberal context. Neoliberal policies applaud personal autonomy and the individual’s responsibility for one’s own well-being and professional success. Burrows directly addresses fellow academics (‘we need to obtain critical distance’; ‘we need to understand ourselves as academics’; ‘why do we feel the way we do?’) and concludes that we are all implicated in the ‘autonomization of metric assemblages’ in the academy. Arguably, it is exactly this neoliberal political climate that justifies Burrows’ focus on individual academics’ affective states. With it comes a delegation of responsibility to the level of the individual researcher. It is our own choice if we comply with the metricization of academia. It is our own choice if we decide to work long hours, spend our weekends writing grant proposals and articles, and grade students’ exams. According to Gill (2010), academics tend to justify working so hard because they possess a passionate drive for self-expression and pleasure in intellectual work. Paradoxically, Gill argues, it is this very drive that feeds a whole range of disciplinary mechanisms and that lets academics internalize a neoliberal subjectivity. We play ‘the numbers game’, as Burrows calls it, because of “a deep love for the ‘myth’ of what we thought being an intellectual would be like.” (p. 15)

Though Burrows raises concerns that are shared by many academics, it is unfortunate that he does not substantiate his claims with empirical data. Apart from his own experience and anecdotal evidence, how do we know that today’s researchers experience the metricization of academia as a ‘deep, affective, somatic crisis’? Does it apply to all researchers, is it the same everywhere, and does it hold for all disciplines? These are empirical questions that Burrows does not answer. That said, there is a great need for the types of analyses Burrows and Gill provide: analyses that assess, situate and historicize academic audit cultures. It is no coincidence that Burrows’ polemical piece emerges from the field of sociology. The social sciences and humanities are increasingly confronted with what Burrows calls the ‘rhetoric of accountability’. It has become commonplace to argue that they, too, should be held accountable for the taxpayers’ money being spent on them; that these disciplines, too, should be made auditable by way of standardized, transparent performance measures. I agree with Burrows that this rhetoric should be problematized. In large parts of these fields it is not at all clear how performance should be ‘measured’ in the first place, for example because of differences in publication cultures within these fields and as compared to the natural sciences. And it is precisely because the discussion is ongoing that we are allowed a clear view of the performative effects of a very specific and increasingly dominant evaluation culture that was modelled neither by nor on these disciplines. What are the consequences? And are there more constructive alternatives?
