New CWTS blog

Dear readers,

It is with great pleasure that we announce a new platform for blog posts emanating from our institute: the CWTS blog.


How important is the number of citations to my scientific work really? How is evaluation influencing knowledge production? Should my organisation support the DORA declaration? When does it (not) make sense to use the h-index? What does a competitive yet conscientious career system look like? What is the relation between scientific and social impact of research? How can we value diversity in scholarship?

The CWTS blog brings together ideas, commentary, and (book) reviews about the latest developments in scientometrics, research evaluation, and research management. It is written for those interested in bibliometric and scientometric indicators and tools, implications of monitoring, measuring, and managing research, and the potential of quantitative and qualitative methods for understanding the dynamics of scientific research.

This is a moderated blog with a small editorial team consisting of Sarah de Rijcke, Ludo Waltman, and Paul Wouters. The blog posts are written by researchers affiliated with CWTS.

With the launch of the new blog, the current Citation Culture blog will be discontinued. Thank you all very much for your dedicated readership. We hope you will enjoy reading the new blog!

You can subscribe to the mailing list or RSS feed at www.cwts.nl/blog


The Facebook-ization of academic reputation?

Guest blog post by Alex Rushforth

The Facebook-ization of academic reputation? ResearchGate, Academia.edu and Everyday neoliberalism

How do we explain the endurance of neoliberal modes of government following the 2008 financial crisis, which could surely have been its death knell? This is the question posed in a long, brilliant book by the historian of science and economics Philip Mirowski, called ‘Never let a serious crisis go to waste’. Mirowski states that explanations of the crisis to date have accounted for only part of the answer. Part of the persistence of neoliberal ideals of personhood and markets comes not just directly from ‘the government’ or particular policies, but is a result of very mundane practices and technologies which surround us in our everyday lives.

I think this book can tell us a lot about new ways in which our lives as academics are increasingly being governed. Consider web platforms like ResearchGate and Academia.edu: following Mirowski, these academic professional networking sites might be understood as technologies of ‘everyday neoliberalism’. These websites share a number of resemblances with social networking sites like Facebook – which Mirowski takes as an exemplar par excellence of this phenomenon. He argues Facebook teaches its users to become ‘entrepreneurs of themselves’, by fragmenting the self and reducing it to something transient (ideals emanating from the writings of Hayek and Friedman), to be actively and promiscuously re-drawn out of various click-enabled associations (accumulated in indicators like numbers of ‘likes’, ‘friends’, comments) (Mirowski, 2013, 92).

Let us briefly consider what kind of academic Academia.edu and ResearchGate encourage and teach us to become. Part of the seductiveness of these technologies for academics, I suspect, is that we already compete within reputational work organisations (c.f. Whitley, 2000), where self-promotion has always been part and parcel of producing new knowledge. However, such platforms also intensify and reinforce dominant ideas and practices for evaluating research and researchers, which – with the help of Mirowski’s text – appear to be premised on neoliberal doctrines. Certainly the websites build on the idea that the individual (as author) is the central locus of knowledge production. Yet what is distinctly neoliberal, perhaps, is how the individual – through the architecture and design of the websites – experiences their field of knowledge production as a ‘marketplace of ideas’ (on the neoliberal roots of this idea, see Mirowski, 2011).

This is achieved through ‘dashboards’ that display a smorgasbord of numerical indicators. When you upload your work, the interface displays the Impact Factor of the journals you have published in and various other algorithmically generated scores (ResearchGate score, anyone?). There are also social networking elements like ‘contacts’, enabling you to follow and be followed by other users of the platform (your ‘peers’). This in turn produces a count of how well ‘networked’ you are. In short, checking one’s scores, contacts, downloads, views, and so on is supposed to give an impression of an individual user’s market standing, especially as one can compare these with the scores of other users. Regular email notifications provide reminders to continue internalizing these demands and to report back regularly to the system. These scores and notices are not final judgments but a record of accomplishments so far, motivating the user to carry on with the determination to do better. Given the aura of ‘objectivity’ and the ‘market knows best’ mantra these indicators present to us, any ‘failings’ are the responsibility of the individual. Felt anger is to be turned back inward on the self, rather than outwards on the social practices and ideas through which such ‘truths’ are constituted. A marketplace of ideas indeed.

Like Facebook, what these academic professional networking sites do seems largely unremarkable and uncontroversial, forming part of background infrastructures which simply nestle into our everyday research practices. One of their fascinating features is that they promulgate a mode of power that is not directed at us ‘from above’ – no manager or formal audit exercise is coercing researchers into signing up. We are able to join and leave of our own volition (many academics don’t even have accounts). Yet these websites should be understood as component parts of a wider ‘assemblage’ of metrics and evaluation techniques which academics currently juggle, and which in turn generate certain kinds of tyrannies (see Burrows, 2012).

Mirowski’s book provides a compelling set of provocations for digital scholars, sociologists of science, science studies scholars, higher education researchers and others to work with. Many studies have documented reforms to the university which bear various hallmarks of neoliberal political-philosophical doctrines (think audits, university rankings, temporary labour contracts, competitive funding schemes and the like). Yet these techniques may only be the tip of the iceberg: Mirowski has given us cause to think more imaginatively about how ‘everyday’ or ‘folk’ neoliberal ideas and practices become embedded in our academic lives through quite mundane infrastructures, the effects of which we have barely begun to recognise, let alone understand.

References

Burrows, R. 2012. Living with the h-index? Metric assemblages in the contemporary academy. Sociological Review, 60, 355-372.

Mirowski, P. 2011. Science-Mart: Privatizing American science. Cambridge, MA: Harvard University Press.

Mirowski, P. 2013. Never let a serious crisis go to waste: How neoliberalism survived the financial meltdown. New York: Verso.

Whitley, R. 2000. The intellectual and social organization of the sciences. Oxford; New York: Oxford University Press.


CWTS in new European consortium

Good news came our way yesterday! CWTS will be a partner in a new project funded by the Swedish Riksbankens Jubileumsfond: Knowledge in science and policy. Creating an evidence base for converging modes of governance in policy and science (KNOWSCIENCE). The project is coordinated by Merle Jacob (Lund University, Sweden). Other partners in the consortium are Dietmar Braun (Lausanne University, Switzerland), Tomas Hellström (Department of Business Administration, Lund University), Niilo Kauppi (CNRS, Strasbourg, France), Duncan Thomas & Maria Nedeva (Manchester Institute of Innovation Research, Manchester Business School, UK), Rikard Stankiewitz (Lund University), and Sarah de Rijcke & Paul Wouters (CWTS).

KNOWSCIENCE focuses on deepening our understanding of the interplay between policy instruments intended to govern the structural organization of higher education and research (HER) and the informal rules and processes that organisations have developed for ensuring the validity and quality of the knowledge they produce. KNOWSCIENCE refers to this as the interplay between structural and epistemic governance, and argues that an understanding of this relationship is necessary for building sustainable knowledge-producing arrangements and institutions and for securing society’s long-term knowledge provision.

The main research question guiding the project is: ‘How do the policy and science systems co-produce the conditions for sustainable knowledge provision?’ Specifically, we ask:

(a) How are HER policy steering mechanisms enabled, disabled and transformed throughout the HER sector via the academic social system?

(b) What are the most significant unintended consequences of HER policy on the HER system? and

(c) What types of policy frameworks would be required to meet these challenges?

The announcement can be found on the RJ website.

Indicator-considerations eclipse other judgments on the shop-floor | Keynote Sarah de Rijcke ESA Prague, 26 August 2015

This invited lecture at the ESA conference in Prague drew on insights from the Leiden Manifesto and from two recent research projects carried out in our institute’s Evaluation Practices in Context research group. These research projects show how indicators influence knowledge production in the life sciences and social sciences, and how inclusion and exclusion mechanisms get built into the scientific system through certain uses of evaluative metrics. Our findings point to a rather self-referential focus on metrics and a lack of space for responsible, relevant research in the scientific practices under study. On the basis of these findings, I argued in the talk that we need an alternative moral discourse in research assessment, centered on the need to address growing inequalities in the science system. Secondly, the talk considered which issues raised in the Leiden Manifesto for research metrics (Hicks, Wouters, Waltman, De Rijcke & Rafols, Nature, 23 April 2015) are most pertinent for the community of sociologists.

http://www.slideshare.net/sarahderijcke/slideshelf

See also:

Rushforth, A.D. & De Rijcke, S. (2015). Accounting for Impact? The Journal Impact Factor and the making of biomedical research in the Netherlands. Minerva, 53(2), 117-139.

De Rijcke, S. & Rushforth, A.D. (2015). To intervene, or not to intervene, is that the question? On the role of scientometrics in research evaluation. Journal of the Association for Information Science and Technology, 66 (9), 1954-1958.

Hicks, D., Wouters, P.F., Waltman, L., De Rijcke, S. & Rafols, I. (2015). The Leiden Manifesto for research metrics. Nature, 520, 429-431 (23 April 2015).

Hammarfelt, B. & De Rijcke, S. (2015). Accountability in context: Effects of research evaluation systems on publication practices, disciplinary norms, and individual working routines in the faculty of Arts at Uppsala University. Research Evaluation, 24(1), 63-77.

CWTS part of H2020 COST Action to stimulate integrity and responsible research

Good news came our way recently! Thed van Leeuwen, Paul Wouters and I will be part of an EC-funded H2020 COST Action on Promoting Integrity as an Integral Dimension of Excellence in Research (PRINTEGER). Main applicants Hub Zwart and Willem Halffman (Radboud University Nijmegen) brought together highly skilled partners for this network from the Free University Brussels, the University of Tartu (Estonia), Oslo and Akershus University College, Leiden University, and the Universities of Bonn, Bristol, and Trento.

The primary goal of the COST Action is to encourage a research culture that treats integrity as an integral part of doing research, rather than as an externally driven steering mechanism. Our starting point: in order to stimulate integrity and responsible research, new forms of governance are needed that are firmly grounded in and informed by research practice.

Concretely, the work entailed in the project will consist of A) a systematic review of integrity cultures and practices; B) an analysis and assessment of current challenges, pressures, and opportunities for research integrity in a demanding and rapidly changing research system; and C) the development and testing of tools and policy recommendations enabling key players to effectively address issues of integrity, specifically directed at science policy makers, research managers and future researchers.

CWTS will contribute to the network in the following ways:

  • A bibliometric analysis of ‘traces of fraud’ (e.g. retracted articles, manipulative editorials, non-existent authors and papers, fake journals, bogus conferences, non-existent universities), against the background of general shifts in publication patterns, such as changing co-authoring practices, instruments as authors, or the rise of hyper-productive authors;
  • Two in-depth case studies of research misconduct, not of the evident or spectacular kind, but cases reflecting the dilemmas and conflicts that occur in grey areas. Every partner will provide two cases; ours will most likely focus on the questionable integrity of journal editors (for example, cases of impact factor manipulation);
  • Acting as task leader on the formulation of advice for research support organisations, including advice on IT tools. This task will draw conclusions from the research on the operation of the research system, specifically publication infrastructures such as journals, libraries, and data repositories;
  • Setting up, like all other partners in the network, small local advisory panels consisting of five to ten key stakeholders of the project: research policy makers, research leaders or managers, research support organisations, and early career scientists. These panels will meet for a scoping consultation at the start of the project, for a halfway consultation to discuss intermediate results and further choices to be made, and for a near-end consultation to test the pertinence of tools and advice at a point where we can still make changes to accommodate stakeholder input.

Quality in the age of the impact factor

Isis, the most prestigious journal in the field of the history of science, moved house last September, and its central office is now located at the Descartes Centre for the History and Philosophy of the Sciences and Humanities at Utrecht University. The Dutch historian of science H. Floris Cohen took up the position of editor-in-chief of the journal. No doubt this underlines the international reputation of the community of historians of science in the Netherlands. Being the editor of the central journal in one’s field surely is a mark of esteem and quality.

The opening of the editorial office in Utrecht was celebrated with a symposium entitled “Quality in the age of the impact factor”. Since the quality of research in history is intimately intertwined with the quality of writing, it seemed particularly apt to call attention to the role of impact factors in the humanities. I used the occasion to pose the question of how we actually define scientific and scholarly quality. How do we recognize quality in our daily practices? How can this variety of practices be understood theoretically? And which approaches in the field of science and technology studies are most relevant?

In the same month, Pleun van Arensbergen defended a very interesting PhD dissertation dealing with some of these issues, “Talent Proof: Selection Processes in Research Funding and Careers”. Van Arensbergen did her thesis work at the Rathenau Institute in The Hague. The quality of research is increasingly seen as mainly the result of the quality of the people involved. Hence, universities “have openly made it one of their main goals to attract scientific talent” (van Arensbergen, 2014, p. 121). A specific characteristic of this “war for talent” in the academic world is that there is an oversupply of talent and a relative lack of career opportunities, leading to a “war between talents”. The dissertation is a thorough analysis of success factors in academic careers. It is an empirical analysis of how the Dutch science foundation NWO selects early career talent in its Innovational Research Incentives Scheme. The study surveyed researchers about their definitions of quality and talent, and combines this with an analysis of both the outcome and the process of this talent selection. Van Arensbergen paid specific attention to the gender distribution and to the difference between successful and unsuccessful applicants.

Her results point to a discrepancy between the common notion among researchers that talent is immediately recognizable (“you know it when you see it”) and the fact that there are only very small differences between candidates who get funded and those who do not. The top and the bottom of the distribution of quality among proposals and candidates are relatively easy to detect, but the group of “good” and “very good” proposals is still too large to be funded. Van Arensbergen and her colleagues did not find a “natural threshold” above which the successful talents can be placed. On the contrary, in one of her chapters they find that researchers who leave the academic system due to a lack of career possibilities regularly score higher on a number of quality indicators than those who are able to continue a research career: “This study does not confirm that the university system always preserves the highly productive researchers, as leavers were even found to outperform the stayers in the final career phase” (van Arensbergen, 2014, p. 125).

Based on the survey, her case studies and her interviews, Van Arensbergen also concludes that productivity and publication records have become rather important for academic careers. “Quality nowadays seems to a large extent to be defined as productivity. Universities seem to have internalized the performance culture and rhetoric to such an extent that academics even define and regulate themselves in terms of dominant performance indicators like numbers of publications, citations or the H-index. (…) Publishing seems to have become the goal of academic labour” (van Arensbergen, 2014, p. 125). This does not mean, however, that these indicators determine the success of a career. The study questions “the overpowering significance assigned to these performance measures in the debate, as they were not found to be entirely decisive” (van Arensbergen, 2014, p. 126). An extensive publication record is a condition, but not a guarantee, for success.

This relates to another finding: the group processes in panel discussions are also very important. With a variety of examples, Van Arensbergen shows how the organization of the selection process shapes the outcome. The face-to-face interview of the candidate with the panel, for example, is crucial for the final decision. In addition, the influence of the external peer reports was found to be modest.

A third finding in the dissertation is that success in obtaining grants feeds back into one’s scientific and scholarly career. This creates a self-reinforcing mechanism, which the sociologist of science Robert Merton dubbed the Matthew effect, after the quote from the Bible: “For unto every one that hath shall be given, and he shall have abundance: but from him that hath not shall be taken even that which he hath.” (Merton, 1968). Van Arensbergen concludes that this means that differences between scholars may initially be small but will increase over time as a result of funding decisions: “Panel decisions convert minor differences in quality into enlarged differences in recognition.”
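
To make the mechanism concrete, here is a deliberately simple toy simulation (a sketch of my own, not taken from Van Arensbergen’s dissertation or from Merton’s paper; the parameters and numbers are invented for illustration): two candidates start from nearly identical positions, each funding round is won with a probability proportional to accumulated recognition, and every win adds to that recognition.

```python
# Illustrative toy model of cumulative advantage (the "Matthew effect"); all values invented.
import random

def simulate(rounds=20, seed=1):
    random.seed(seed)
    recognition = {"A": 1.00, "B": 1.01}  # near-identical starting positions
    for _ in range(rounds):
        total = sum(recognition.values())
        # chance of winning this round is proportional to recognition accumulated so far
        winner = "A" if random.random() < recognition["A"] / total else "B"
        recognition[winner] += 0.5  # a grant adds to the winner's symbolic capital
    return recognition

print(simulate())  # a tiny initial gap typically grows into a much larger one
```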

Combining these three findings leads to some interesting conclusions about how we actually define and shape quality in academia. Although panel decisions about whom to fund are strongly shaped by the organization of the selection process as well as by a host of other contextual factors (including chance), and although all researchers are aware of the uncertainties in these decisions, this does not mean that these decisions are given less weight. On the contrary, obtaining external grants has become a cornerstone of successful academic careers. Universities even devote considerable resources to making their researchers better able to acquire prestigious grants as well as external funding in general. Although this is clearly instrumental for the organization, Van Arensbergen argues that grants have become part of the symbolic capital of a researcher and research group, and she refers to Pierre Bourdieu’s theory of symbolic capital to better understand the implications.

This brings me to my short lecture at the opening of the editorial office of Isis in Utrecht. Although experts on bibliometric indicators generally do not see the Journal Impact Factor as an indicator of quality, socially it seems to function partly as one. But indicators are not alone in shaping how we identify, and thereby define, talent and quality in practice. They flow together with the way quality assurance and measurement processes are organized, the social psychology of panel discussions, the extent to which researchers are visible in their networks, and so on. In these complex contextual interactions, indicators do not determine outcomes; they are ascribed meaning depending on the situation in which researchers find themselves.

A good way to think about this, in my view, has been developed in the field of material semiotics. This approach, which has its roots in the French actor-network theory of Bruno Latour and Michel Callon, does not accept a fundamental rupture in reality between the material and the symbolic. Reality as such is the result of complex and interacting translation processes. This is an excellent philosophical basis for understanding how scientific and scholarly quality emerges. I see quality not as an attribute of an academic persona or of a particular piece of work, but as the result of the interaction between a researcher (or a manuscript) and the already existing scientific or scholarly infrastructure (e.g. the body of published studies). If this interaction creates a productive friction (meaning that there is enough novelty in the contribution, but not so much that it is incompatible with the already existing body of work), we see the work or the scholar as being of high quality.

In other words, quality simply does not (yet) exist outside of the systems of quality measurement. The implication is that quality itself is a historical category. It is not an invariant but a culturally and historically specific concept that changes and morphs over time. In fact, the history of science is the history of quality. I hope historians of science will take up the challenge to map this history with more empirical and theoretical sophistication than has been done so far.

Literature:

Merton, R. K. (1968). The Matthew Effect in Science. Science, 159, 56–62.

Van Arensbergen, P. (2014). Talent proof : selection processes in research funding and careers. The Hague, Netherlands: Rathenau Institute. Retrieved from http://www.worldcat.org/title/talent-proof-selection-processes-in-research-funding-and-careers/oclc/890766139&referer=brief_results


Tales from the field: On the (not so) secret life of performance indicators

* Guest blog post by Alex Rushforth *

In the coming months, Sarah de Rijcke and I will be presenting at conferences in Valencia and Rotterdam on research from CWTS’s nascent EPIC working group. We very much look forward to drawing on collaborative work from our ongoing ‘Impact of indicators’ project on biomedical research in University Medical Centers (UMCs) in the Netherlands. One of our motivations behind the project is that there has been a wealth of social science literature in recent times about the effects of formal evaluation in public sector organisations, including universities. Yet too few studies have taken seriously the presence of indicators in the context of one of the university’s core missions: knowledge creation. Fewer still have taken an ethnographic lens to the dynamics of indicators in the day-to-day context of academic knowledge work. These are deficits we hope to begin addressing through these conferences and beyond.

The puzzle we will be addressing here appears – at least at first glance – straightforward enough: what is the role of bibliometric performance indicators in the biomedical knowledge production process? Yet comparing provisional findings from two contrasting case studies of research groups from the same UMC – one a molecular biology group and the other a statistics group – it quickly becomes apparent that there can be no general answer to this question. As such, we aim to provide not only an inventory of the different ‘roles’ of indicators in these two cases, but also to pose the more interesting analytical question of which conditions and mechanisms explain the observed variations in the roles indicators come to perform.

Owing to their persistent recurrence in the data so far, the indicators we will analyze are the journal impact factor, the h-index, and ‘advanced’ citation-based bibliometric indicators. It should be stressed that our focus on these particular indicators has emerged inductively, from observing first-hand the metrics that research groups attend to in their knowledge-making activities. So what have we found so far?
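
For readers less familiar with these measures, the short sketch below shows how the first two are commonly defined (a minimal, purely illustrative sketch in Python; it is not part of our project’s methods, and the function names and example numbers are invented). A researcher has h-index h if h of their publications have each received at least h citations; the two-year journal impact factor divides the citations a journal received in a given year to items it published in the previous two years by the number of citable items it published in those two years.

```python
# Illustrative definitions only; not the tooling used in the project described above.

def h_index(citation_counts):
    """A researcher has index h if h of their papers have at least h citations each."""
    ranked = sorted(citation_counts, reverse=True)
    h = 0
    for rank, citations in enumerate(ranked, start=1):
        if citations >= rank:
            h = rank
        else:
            break
    return h

def two_year_impact_factor(citations_to_previous_two_years, citable_items_previous_two_years):
    """Citations received this year to items from the two previous years,
    divided by the number of citable items published in those two years."""
    return citations_to_previous_two_years / citable_items_previous_two_years

print(h_index([12, 9, 7, 5, 3, 1]))                # -> 4
print(round(two_year_impact_factor(410, 120), 2))  # -> 3.42
```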

Dutch UMCs constitute particularly apt sites through which to explore this problem, given how central bibliometric assessments have been to the formal evaluations carried out since their inception in the early 2000s. On one level, we argue that researchers in both cases encounter such metrics as ‘governance/managerial devices’, that is, as forms of information required of them by external agencies on whom they are reliant for resources and legitimacy. Examples can be seen when funding applications, annual performance appraisals, or job descriptions demand such information about an individual’s or group’s past performance. As the findings will show, the information needed by the two groups to produce their work effectively and the types of demands made on them by ‘external’ agencies vary considerably, despite their common location in the same UMC. This is one important reason why the role of indicators differs between the cases.

However, this coercive ‘power over’ account is but one dimension of a satisfying answer to our question about the role of indicators. Emerging analysis also reveals the surprising finding that in fields characterized by particularly integrated forms of coordination and standardization (Whitley, 2000) – like our molecular biologists – indicators in fact have the propensity to function as a core feature of the knowledge-making process. For instance, a performance indicator like the journal impact factor was routinely mobilized informally in researchers’ decision-making, as an ad hoc standard against which to evaluate the likely uses of information and resources and to decide whether time and resources should be spent pursuing them. By contrast, in the less centralized and integrated field of statistical research, such an indicator was not so indispensable to the routines of knowledge-making activities. In the case of the statisticians, it is possible to speculate that indicators are more likely to emerge intermittently, as conditions to be met for gaining social and cultural acceptance by external agencies, but are less likely to inform day-to-day decisions. Through our ongoing analysis we aim to unpack further how disciplinary practices interact with the organisation of Dutch UMCs to produce quite varying engagements with indicators.

The extent to which indicators play central or peripheral roles in research production processes across academic contexts is an important sociological problem, one which needs to be posed in order to enhance our understanding of the complex role of performance indicators in academic life. We feel much of the existing literature on the evaluation of public organisations has tended to paint an exaggerated picture of formal evaluation and research metrics as synonymous with empty ritual and legitimacy (e.g. Dahler-Larsen, 2012). Emerging results here show that – at least in the realm of knowledge production – the picture is more subtle. This theoretical insight prompts us to suggest that further empirical studies are needed of scholarly fields with different patterns of work organisation, in order to compare our results and develop middle-range theorizing on the mechanisms through which metrics infiltrate knowledge production processes to fundamental or peripheral degrees. In future this could mean venturing into fields far outside of biomedicine, such as history, literature, or sociology. For now, though, we look forward to expanding the biomedical project by conducting analogous case studies in a second UMC.

Indeed, it is through such theoretical developments that we can consider not only the appropriateness of one-size-fits-all models of performance evaluation, but also unpack and problematize discourses about what constitutes ‘misuse’ of metrics. And how convinced should we be that academic life is now saturated and dominated by deleterious metric indicators?

References

Dahler-Larsen, P. 2012. The evaluation society. Stanford, CA: Stanford Business Books, an imprint of Stanford University Press.

Whitley, R. 2000. The intellectual and social organization of the sciences. Oxford; New York: Oxford University Press.

How does science go wrong?

We are happy to announce that our abstract has been accepted for the 2014 Conference of the European Consortium for Political Research (ECPR), which will be held in Glasgow from 3 to 6 September. Our paper has been selected for a panel on ‘The role of ideas and indicators in science policies and research management’, organised by Luis Sanz-Menéndez and Laura Cruz-Castro (both at CSIC-IPP).

Title of our paper: How does science go wrong?

“Science is in need of fundamental reform.” In 2013, five Dutch researchers took the lead in what they hope will become a strong movement for change in the governance of science and scholarship: Science in Transition. SiT appears to voice concerns heard beyond national borders about the need for change in the governance of science (cf. The Economist, 19 October 2013; THE, 23 January 2014; Nature, 16 October 2013; Die Zeit, 5 January 2014). One of the most hotly debated concerns is quality control, which encompasses the implications of a perceived increase in publication pressure, purported flaws in the peer review system, impact factor manipulation, the irreproducibility of results, and the need for new forms of data quality management.

One could argue that SiT landed in fertile ground. In recent years, a number of severe fraud cases drew attention to possible ‘perverse effects’ in the management system of science and scholarship. Partly due to the juicy aspects of most cases of misconduct, these debates tend to focus on ‘bad apples’ and shy away from more fundamental problems in the governance of science and scholarship.

Our paper articulates how key actors construct the notion of ‘quality’ in these debates, and how they respond to each other’s position. By making these constructions explicit, we shift focus back to the self-reinforcing ‘performance loops’ that most researchers are caught up in at present. Our methodology is a combination of the mapping of the dynamics of media waves (Vasterman, 2005) and discourse analysis (Gilbert & Mulkay, 1984).

References

A revolutionary mission statement: improve the world. Times Higher Education, 23 January 2014.

Chalmers, I., Bracken, M. B., Djulbegovic, B., Garattini, S., Grant, J., Gülmezoglu, A. M., Oliver, S. (2014). How to increase value and reduce waste when research priorities are set. The Lancet, 383 (9912), 156–165.

Gilbert, G. N., & Mulkay, M. J. (1984). Opening Pandora’s Box. A Sociological Analysis of Scientists’ Discourse. Cambridge: Cambridge University Press.

Research evaluation: Impact. (2013). Nature, 502(7471), 287–287.

Rettet die Wissenschaft!: “Die Folgekosten können hoch sein.” Die Zeit, 5 January 2014.

Trouble at the lab. The Economist, 19 October 2013.

Vasterman, P. L. M. (2005). Media-hype. European Journal of Communication, 20(4), 508–530.

On exploding ‘evaluation machines’ and the construction of alt-metrics

The emergence of web-based ways to create and communicate new knowledge is affecting long-established scientific and scholarly research practices (cf. Borgman 2007; Wouters, Beaulieu, Scharnhorst, & Wyatt 2013). This move to the web is spawning a need for tools to track and measure a wide range of online communication forms and outputs. By now, there is a large differentiation in the kinds of social web tools (e.g. Mendeley, F1000, Impact Story) and in the outputs they track (e.g. code, datasets, nanopublications, blogs). The expectations surrounding the explosion of tools and big ‘alt-metric’ data (Priem et al. 2010; Wouters & Costas 2012) marshal resources at various scales and gather highly diverse groups in pursuing new projects (cf. Brown & Michael 2003; Borup et al. 2006 in Beaulieu, de Rijcke & Van Heur 2013).

Today we submitted an abstract for a contribution to Big Data? Qualitative Approaches to Digital Research (edited by Martin Hand & Sam Hillyard and contracted with Emerald). In the abstract we propose to zoom in on a specific set of expectations around altmetrics: their alleged usefulness for research evaluation. Of particular interest to this volume is how altmetric information is expected to enable a more comprehensive assessment of 1) social scientific outputs (under-represented in citation databases) and 2) wider types of output associated with societal relevance (not covered in citation analysis and allegedly more prevalent in the social sciences).

In our chapter we address a number of these expectations by analyzing 1) the discourse in the “altmetrics movement”, i.e. the expectations and promises formulated by key actors involved in “big data” (including commercial entities); and 2) the construction of these altmetric data and their alleged validity for research evaluation purposes. We will combine discourse analysis with bibliometric, webometric and altmetric methods, in which the methods will also interrogate each other’s assumptions (Hicks & Potter 1991).

Our contribution will show, first of all, that altmetric data do not simply ‘represent’ other types of outputs; they also actively create a need for these types of information. These needs will have to be aligned with existing accountability regimes. Secondly, we will argue that researchers will develop forms of regulation that will partly be shaped by these new types of altmetric information. They are not passive recipients of research evaluation but play an active role in assessment contexts (cf. Aksnes & Rip 2009; Van Noorden 2010). Thirdly, we will show that the emergence of altmetric data for evaluation is another instance (following the creation of the citation indexes and the use of web data in assessments) of transposing traces of communication into a framework of evaluation and assessment (Dahler-Larsen 2012, 2013; Wouters 2014).

By making explicit the implications of transferring altmetric data from the framework of science communication to the framework of research evaluation, we aim to contribute to a better understanding of the complex dynamics in which a new generation of researchers will have to work and be creative.

Aksnes, D. W., & Rip, A. (2009). Researchers’ perceptions of citations. Research Policy, 38(6), 895–905.

Beaulieu, A., van Heur, B. & de Rijcke, S. (2013). Authority and Expertise in New Sites of Knowledge Production. In A. Beaulieu, A. Scharnhorst, P. Wouters and S. Wyatt (Eds.), Virtual Knowledge: Experimenting in the Humanities and the Social Sciences (pp. 25-56). MIT Press.

Borup, M., Brown, N., Konrad, K. & van Lente, H. (2006). The sociology of expectations in science and technology. Technology Analysis & Strategic Management, 18(3/4), 285-298.

Brown, N. & Michael, M. (2003). A sociology of expectations: Retrospecting prospects and prospecting retrospects. Technology Analysis & Strategic Management, 15(1), 3-18.

Costas, R., Zahedi, Z. & Wouters, P. (n.d.). Do ‘altmetrics’ correlate with citations? Extensive comparison of altmetric indicators with citations from a multidisciplinary perspective.

Dahler-Larsen, P. (2012). The Evaluation Society. Stanford University Press.

Dahler-Larsen, P. (2013). Constitutive Effects of Performance Indicators. Public Management Review, (May), 1–18.

Galligan, F., & Dyas-Correia, S. (2013). Altmetrics: Rethinking the Way We Measure. Serials Review, 39(1), 56–61.

Hicks, D., & Potter, J. (1991). Sociology of Scientific Knowledge: A Reflexive Citation Analysis of Science Disciplines and Disciplining Science. Social Studies of Science, 21(3), 459 –501.

Priem, J., Taraborelli, D., Groth, P., & Neylon, C. (2010). Altmetrics: a manifesto. http://altmetrics.org/manifesto/

Van Noorden, R. (2010) “Metrics: A Profusion of Measures.” Nature, 465, 864–866.

Wouters, P., Costas, R. (2012). Users, narcissism and control: Tracking the impact of scholarly publications in the 21st century. Utrecht: SURF foundation.

Wouters, P. (2014). The Citation: From Culture to Infrastructure. In B. Cronin & C. R. Sugimoto (Eds.), Next Generation Metrics: Harnessing Multidimensional Indicators Of Scholarly Performance (Vol. 22, pp. 48–66). MIT Press.

Wouters, P., Beaulieu, A., Scharnhorst, A., & Wyatt, S. (eds.) (2013). Virtual Knowledge – Experimenting in the Humanities and the Social Sciences. MIT Press.

Who is the modern scientist? Lecture by Steven Shapin

There are now many historical studies of what’s been called scientists’ personæ – the typifications, images, and expectations attached to people who do scientific work. There has been much less interest in the largely managerial and bureaucratic exercises of counting scientists – finding out how many there are, of what sorts, working in what institutions. This talk first describes how and why scientists came to be counted from about the middle of the twentieth century, and then relates those statistical exercises to changing senses of who the scientist was, what scientific inquiry was, and what it was good for.

Here’s more information, including how to register

Date: Thursday 28 November 2013

Time: 5-7 pm

Place: Felix Meritis (Teekenzaal), Keizersgracht 324, Amsterdam
