New CWTS blog

Dear readers,

It is with great pleasure that we announce a new platform for blog posts emanating from our institute: the CWTS blog.


How important is the number of citations to my scientific work really? How is evaluation influencing knowledge production? Should my organisation support the DORA declaration? When does it (not) make sense to use the h-index? What does a competitive yet conscientious career system look like? What is the relation between scientific and social impact of research? How can we value diversity in scholarship?

The CWTS blog brings together ideas, commentary, and (book) reviews about the latest developments in scientometrics, research evaluation, and research management. It is written for those interested in bibliometric and scientometric indicators and tools, implications of monitoring, measuring, and managing research, and the potential of quantitative and qualitative methods for understanding the dynamics of scientific research.

This is a moderated blog with a small editorial team consisting of Sarah de Rijcke, Ludo Waltman, and Paul Wouters. The blog posts are written by researchers affiliated with CWTS.

With this move, the current Citation Culture blog will be discontinued. Thank you all very much for your dedicated readership. We hope you will enjoy reading the new blog!

You can subscribe to the mailing list or RSS feed at www.cwts.nl/blog

The Facebook-ization of academic reputation?

Guest blog post by Alex Rushforth

The Facebook-ization of academic reputation? ResearchGate, Academia.edu and everyday neoliberalism

How do we explain the endurance of neoliberal modes of government following the 2008 financial crisis, which could surely have been its death knell? This is the question posed by a long, brilliant book by historian of science and economics Philip Mirowski, called ‘Never let a serious crisis go to waste’. Mirowski argues that explanations of the crisis offered to date account for only part of the answer. Part of the persistence of neo-liberal ideals of personhood and markets comes not just directly from ‘the government’ or particular policies, but is a result of very mundane practices and technologies which surround us in our everyday lives.

I think this book can tell us a lot about new ways in which our lives as academics are increasingly being governed. Consider web platforms like ResearchGate and Academia.edu: following Mirowski, these academic professional networking sites might be understood as technologies of ‘everyday neoliberalism’. These websites bear a number of resemblances to social networking sites like Facebook – which Mirowski takes as an exemplar par excellence of this phenomenon. He argues that Facebook teaches its users to become ‘entrepreneurs of themselves’ by fragmenting the self and reducing it to something transient (ideals emanating from the writings of Hayek and Friedman), to be actively and promiscuously re-drawn out of various click-enabled associations (accumulated in indicators like numbers of ‘likes’, ‘friends’, and comments) (Mirowski, 2013, 92).

Let us briefly consider what kind of academic Academia.edu and ResearchGate encourage and teach us to become. Part of the seductiveness of these technologies for academics, I suspect, is that we already compete within reputational work organisations (cf. Whitley, 2000), where self-promotion has always been part and parcel of producing new knowledge. However, such platforms also intensify and reinforce dominant ideas and practices for evaluating research and researchers, which – with the help of Mirowski’s text – appear to be premised on neoliberal doctrines. Certainly the websites build on the idea that the individual (as author) is the central locus of knowledge production. Yet what is distinctly neoliberal, perhaps, is how the individual – through the architecture and design of the websites – experiences their field of knowledge production as a ‘marketplace of ideas’ (on the neo-liberal roots of this idea, see Mirowski, 2011).

This is achieved through ‘dashboards’ that display a smorgasbord of numerical indicators. When you upload your work, the interface displays the Impact Factors of the journals you have published in and various other algorithmically generated scores (ResearchGate score, anyone?). There are also social networking elements like ‘contacts’, enabling you to follow and be followed by other users of the platform (your ‘peers’). This in turn produces a count of how well ‘networked’ you are. In short, checking one’s scores, contacts, downloads, views, and so on is supposed to give an impression of an individual user’s market standing, especially as one can compare these with the scores of other users. Regular email notifications provide reminders to continue internalizing these demands and to report back regularly to the system. These scores and notices are not final judgments but a record of accomplishments so far, motivating the user to carry on with the determination to do better. Given the aura of ‘objectivity’ and the ‘market knows best’ mantra these indicators present to us, any ‘failings’ are the responsibility of the individual. Felt anger is to be turned back inward on the self, rather than outwards on the social practices and ideas through which such ‘truths’ are constituted. A marketplace of ideas indeed.

Like Facebook, what these academic professional networking sites do seems largely unremarkable and uncontroversial, forming part of background infrastructures that simply nestle into our everyday research practices. One of their fascinating features is to promulgate a mode of power that is not directed at us ‘from above’ – no manager or formal audit exercise is coercing researchers into signing up. We are able to join and leave of our own volition (many academics don’t even have accounts). Yet these websites should be understood as component parts of a wider ‘assemblage’ of metrics and evaluation techniques with which academics currently juggle, and which in turn generate certain kinds of tyrannies (see Burrows, 2012).

Mirowski’s book provides a compelling set of provocations for digital scholars, sociologists of science, science studies scholars, higher education scholars and others to work with. Many studies have documented reforms to the university that bear various hallmarks of neoliberal political-philosophical doctrines (think audits, university rankings, temporary labour contracts, competitive funding schemes and the like). Yet these techniques may only be the tip of the iceberg: Mirowski has given us cause to think more imaginatively about how ‘everyday’ or ‘folk’ neoliberal ideas and practices become embedded in our academic lives through quite mundane infrastructures, the effects of which we have barely begun to recognise, let alone understand.

References

Burrows, R. 2012. Living with the h-index? Metric assemblages in the contemporary academy. Sociological Review, 60, 355–372.

Mirowski, P. 2011. Science-Mart: Privatizing American Science. Cambridge, MA: Harvard University Press.

Mirowski, P. 2013. Never Let a Serious Crisis Go to Waste: How Neoliberalism Survived the Financial Meltdown. New York: Verso.

Whitley, R. 2000. The Intellectual and Social Organization of the Sciences. Oxford: Oxford University Press.

Leiden Manifesto for research metrics published in Nature

We’re happy to announce the publication of ten principles to guide the use of metrics in research evaluation – a collaboration between Diana Hicks (Georgia Tech), Ismael Rafols (Ingenio/SPRU), Paul Wouters, Sarah de Rijcke and Ludo Waltman (CWTS). The principles were formulated on the basis of input from the scientometric community during a special session on the development of good practices for metrics use at the STI/ENID conference in Leiden (September 2014).

Developing guiding principles and standards in the field of evaluation – lessons learned

This is a guest blog post by Professor Peter Dahler-Larsen. The reflections below are a follow-up to his keynote at the STI conference in Leiden (3-5 September 2014) and to the special session at STI on the development of quality standards for science & technology indicators. Dahler-Larsen holds a chair at the Department of Political Science, University of Copenhagen. He is a former president of the European Evaluation Society and author of The Evaluation Society (Stanford University Press, 2012).

Lessons learned about the development of guiding principles and standards in the field of evaluation – A personal reflection

Professor Peter Dahler-Larsen, 5 October 2014

Guidelines are symbolic, not regulatory

The limited institutional status of guiding principles and standards should be understood as a starting point for the debate. In the initial phases of the development of such standards and guidelines, people often have very strong views. But only the state can enforce laws. To the extent that guidelines and standards merely express the official views of a professional association that has no institutional power to enforce them, they will have limited direct consequences for practitioners. The discussion becomes clearer once it is recognized that standards and guidelines thus primarily have a symbolic and communicative function, not a regulatory one. Practitioners will continue to be free to do whatever kind of practice they like, even after guidelines have been adopted.

Design a process of debate and involvement

All members of a professional association should have the opportunity to comment on a draft version of the guidelines/standards. An important component in the adoption of guidelines/standards is the design of a proper organizational process that involves the composition of a draft by a select group of recognized experts, an open debate among members, and an official procedure for the adoption of the standards/guidelines as organizational policy.

Acknowledge the difference between minimum and maximum standards

Minimum standards must be complied with in all situations. Maximum standards are ideal principles worth striving for, although they will not be fully accomplished in any particular situation. It often turns out that a set of guidelines contains many maximum principles, although that is not what most people understand by "standards." For that reason I personally prefer the term guidelines or guiding principles rather than "standards."

Think carefully about guidelines and methodological pluralism

Advocates of a particular method often think that the methodological rules connected to their own method define quality as such for the whole field. For that reason, they are likely to insert their own methodological rules into the set of guidelines. As a consequence, guidelines can be used politically to promote one set of methods or one particular paradigm rather than another. Great care should be exercised in the formulation of guidelines to make sure that pluralism remains protected. For example, in evaluation the rule is that if you subscribe to a particular method, you should have high competence in the chosen method. But that goes for all methods.

Get beyond the "but that's obvious" argument

Some argue that it is futile to formulate a set of guidelines because, at that level of generality, it is only possible to state some very broad and obvious principles with which every sensible person must agree. The argument sounds plausible when you hear it, but my experience suggests otherwise, for a number of reasons. First, some people have simply not thought about a very bad practice (for example, doing evaluation without written Terms of Reference). Once you see that someone has formulated a guideline against it, you are likely to start paying attention to the problem. Just because a principle is obvious to some does not mean that it is obvious to all. Second, although there may be general agreement about a principle (such as "do no unnecessary harm" or "take general social welfare into account"), there can be strong disagreement about the interpretations and implications of the principle in practice. Third, a good set of guiding principles will often comprise at least two principles that are somewhat in tension with each other, for example the principle of being quick and useful versus the principle of being scientifically rigorous. Sorting out exactly which kind of tension between these two principles one can live with in a concrete case turns out to be a matter of complicated professional judgment. So, get beyond the "that's obvious" argument.

Recognize the fruitful uses of guidelines

Among the most important uses of guidelines in evaluation are:

– In application situations, good evaluators can explain their practice with reference to broader principles

– In conferences, guidelines can stimulate insightful professional discussions about how to handle complicated cases

– Books and journals can make use of guidelines as inspiration for the development of an ethical awareness among practitioners. See, for example, Michael Morris's work in the field of evaluation.

– Guidelines are of great use in teaching and in other forms of socialization of evaluators.

Respect the multiplicity of organizations

If, say, the European Evaluation Society wants to adopt a set of guidelines, it should respect the fact that, say, the German and the Swiss associations already have their own guidelines. Furthermore, some professional associations (say, of psychologists) also have guidelines. A professional association should take such overlaps seriously and find ways to exchange views and experiences with guidelines across national and organizational borders.

Professionals are not alone, but relations can be described in guidelines, too

It is often argued that one of the major causes of bad evaluation practice is the behavior of commissioners. Some therefore think that guidelines describing good evaluation practice are in vain until the behavior of commissioners (and perhaps other users of evaluation) is included in the guidelines, too. However, there is no particular reason why guidelines cannot describe a good relation and a good interaction between commissioners and evaluators. Remember, guidelines have no regulatory power. They merely express the official norms of the professional association. Evaluators are allowed to express what they think a good commissioner should do or not do. In fact, explicit guidelines can help clarify mutual and reciprocal role expectations.

Allow for regular reflection, evaluation and revision of guidelines

At regular intervals, guidelines should be debated, evaluated and revised. The AEA guidelines, for example, have been revised and now reflect values regarding culturally competent evaluation that were not in earlier versions. Guidelines are organic and reflect a particular socio-historical situation.

Sources:

Michael Morris (2008). Evaluation Ethics for Best Practice. Guilford Press.

American Evaluation Association Guiding principles

The Leiden Manifesto in the making: a set of principles on the use of assessment metrics proposed at the S&T indicators conference

Summary

A set of guiding principles (a manifesto) on the use of quantitative metrics in research assessment was proposed by Diana Hicks (Georgia Tech) during a panel session on quality standards for S&T indicators at the STI conference in Leiden last week. Various participants in the debate agreed on the responsibility of the scientometric community to better support the use of scientometrics. Finding the choice of specific indicators too constraining, many voices supported the idea of a joint publication of a set of principles that should guide the responsible use of quantitative metrics. The session also included calls for scientometricians to take a more proactive role as engaged and responsible stakeholders in the development and monitoring of metrics for research assessment, as well as in wider debates on data governance, such as infrastructure and ownership.

At the close of the conference, the association of scientometric institutes ENID (European Network of Indicators Designers), with Ton van Raan as president, offered to play a coordinating role in writing up and publishing a consensus version of the manifesto.

Full report of the plenary session at the 2014 STI conference in Leiden on Quality standards for evaluation: Any chance of a dream come true?

The need to debate these issues has come to the forefront in light of reports that the use of certain easy-to-use and potentially misleading metrics for evaluative purposes has become a routine part of academic life, despite misgivings within the profession itself about their validity. A central aim of the special session was to discuss the need for a concerted response from the scientometric community to produce more explicit guidelines and expert advice on good scientometric practices. The session continued from the 2013 ISSI and STI conferences in Vienna and Berlin, where full plenary sessions were convened on the need for standards in evaluative bibliometrics, and on the ethical and policy implications of individual-level bibliometrics.

This year’s plenary session started with a summary by Ludo Waltman (CWTS) of the pre-conference workshop on technical aspects of advanced bibliometric indicators. The workshop, co-organised by Ludo, was attended by some 25 participants, and the topics addressed included (1) advanced bibliometric indicators (strengths and weaknesses of different types of indicators; field normalization; country-level and institutional-level comparisons); (2) statistical inference in bibliometric analysis; and (3) journal impact metrics (strengths and weaknesses of different journal impact metrics; use of the metrics in the assessment of individual researchers). The workshop discussions were very fruitful and some common ground was found, but there also remained significant differences of opinion. Some topics that need further discussion are the technical and mathematical properties of indicators (e.g., ranking consistency); strong correlations between indicators; the need to distinguish between technical issues and usage issues; purely descriptive approaches vs. statistical approaches; and the importance of user perspectives for technical aspects of indicator production. There was a clear interest in continuing these discussions at a future conference. The slides of the workshop are available on request.

Ludo’s summary was followed by a short talk by Sarah de Rijcke (CWTS), to set the scene for the ensuing panel discussion. Sarah provided an historical explanation for why previous responses by the scientometric community about misuses of performance metrics and the need for standards have fallen on deaf ears. Evoking Paul Wouters’ and Peter Dahler-Larsen’s introductory and keynote lectures, she argued that the preferred normative position of scientometrics (‘We measure, you decide’) and the tendency to provide upstream solutions no longer serve the double role of the field very well. As an academic as well as a regulatory discipline, scientometrics not only creates reliable knowledge on metrics, but also produces social technologies for research governance. As such, evaluative metrics attain meaning in a certain context, and they also help shape that context. Though parts of the community now acknowledge that there is indeed a ‘social’ problem, ethical issues are often either conveniently bracketed off or ascribed to ‘users lacking knowledge’. This reveals unease with taking any other-than-technical responsibility. Sarah plugged the idea of a short joint statement on proper uses of evaluative metrics, which had been proposed at the international workshop at OST in Paris (12 May 2014). She concluded with a plea for a more long-term reconsideration of the field’s normative position. If the world of research governance is indeed a collective responsibility, then scientometrics should step up and accept its part. This would put the community in a much better position to actually engage productively with stakeholders in the process of developing good practices.

In the ensuing panel discussion, Stephen Curry (professor of Structural Biology at Imperial College London and member of the HEFCE steering group) expressed a deep concern about the seductive power of metrics in research assessment and saw a shared, collective responsibility for the creation and use of metrics on the side of bibliometricians, researchers and publishers alike. According to him, technical and usage aspects of indicators should therefore not be artificially separated.

Lisa Colledge (representing Elsevier as Snowballmetrics project director) talked about the Snowballmetrics initiative, presenting it as a bottom-up and practical approach aimed at meeting the needs of funding organizations and senior university management. According to Lisa, while the initiative primarily addresses research officers, feedback from the bibliometrics community is highly appreciated as a contribution to the empowerment of indicator users.

Stephanie Haustein (University of Montreal) was not convinced that social media metrics (a.k.a. altmetrics) lend themselves to standardization, due to the heterogeneity of data sources (tweets, views, downloads) and their constantly changing nature. She stated that the meaning of altmetrics data is highly ambiguous (attention vs. significance) and that quality control similar to the peer review system in scientific publishing does not yet exist.

Jonathan Adams (Chief scientist at Digital Science) endorsed the idea of drawing up a statement but emphasized that it would have to be short, precise and clear in order to catch the attention of government bodies, funding agencies and senior university management, who are uninterested in technical details. Standards will have to keep up with fast-paced change (data availability, technological innovations). He was critical of any fixed set of indicators, since this would not accommodate the strategic interests of every organization.

Diana Hicks (Georgia Institute of Technology) presented a first draft of a set of statements (the “Leiden Manifesto”), which she proposed should be published in a top-tier journal like Nature or Science. The statements are general principles on how scientometric indicators should be used, for example ‘Metrics properly used support assessments; they do not substitute for judgment’ or ‘Metrics should align with strategic goals’.

In the ensuing debate, many participants in the audience proposed initiatives and identified problems that need to be solved. These were partially summarized by Paul Wouters, who distinguished four issues around which the debate revolved. First, he proposed that a central issue is the connection between assessment procedures and the primary process of knowledge creation. If this connection is severed, assessments lose part of their usefulness for researchers and scholars.

The second question is what kind of standards are desirable. Who sets them? How open are they to new developments and different stakeholders? How comprehensive and transparent are or should standards be? What interests and assumptions are included within them? In the debate it became clear that scientometricians do not want to determine the standards themselves. Yet standards are being developed by database providers and universities, which are now busy building up new research information systems. Wouters proposed that the scientometric community set itself the goal of monitoring and analyzing evolving standards. This could help to better understand problems and pitfalls and also provide technical documentation.

The third issue highlighted by Wouters is the question of who is responsible. While the scientometric community cannot assume full responsibility for all evaluations in which scientometric data and indicators play a role, it can certainly broaden its agenda. Perhaps an even more fundamental question is how public stakeholders can remain in control of the responsibility for publicly funded science when more and more meta-data is being privatized. Wouters pleaded for strengthening the public nature of the infrastructure of meta-data, including current research information systems, publication databases and citation indexes. This view does not deny the important role of for-profit companies, which are often more innovative. Fourth, Wouters suggested that taking these issues together provides an inspiring collective research agenda for the scientometrics community.

Diana Hicks’ suggestion of a manifesto or set of principles was followed up on the second day of the STI conference at the annual meeting of ENID (European Network of Indicators Designers). The ENID assembly, and Ton van Raan as president, offered to play a coordinating role in writing up the statement. Diana Hicks’ draft will serve as a basis, and it will also be informed by opinions from the community, important stakeholders and intermediary organisations, as well as those affected by evaluations. The debate on standardization and use will be continued in upcoming science policy conferences, with a session confirmed for the AAAS (San José, February) and expected sessions in the STI and ISSI conferences in 2015.

(Thanks to Sabrina Petersohn for sharing her notes of the debate.)

Ismael Rafols (Ingenio (CSIC-UPV) & SPRU (Sussex); Session chair); Sarah de Rijcke (CWTS, Leiden University); Paul Wouters (CWTS, Leiden University)

The new Dutch research evaluation protocol

From 2015 onwards, the societal impact of research will be a more prominent measure of success in the evaluation of research in the Netherlands. Less emphasis will be placed on the number of publications, while vigilance about research integrity will be increased. These are the main elements of the new Dutch Standard Evaluation Protocol, which was published a few weeks ago.

The new protocol aims to guarantee, improve, and make visible the quality and relevance of scientific research at Dutch universities and institutes. Three aspects are central: scientific quality, societal relevance, and the feasibility of the research strategy of the research groups involved. As is already the case in the current protocol, research assessments are organized by institution, and the institutional board is responsible. Nationwide comparative evaluations by discipline are possible, but the institutions involved have to agree explicitly to organize their assessments in a coordinated way to realize this. In contrast to performance-based funding systems, the Dutch system does not have a tight coupling between assessment outcomes and funding for research.

This does not mean, however, that research assessments in the Netherlands have no consequences. On the contrary, these may be quite severe, but they will usually be implemented by university management with considerable leeway for interpretation of the assessment results. The main channel through which Dutch research assessments have implications is via the reputation gained or lost by the research leaders involved. The effectiveness of the assessments is often decided by the way the international committee that performs the evaluation goes about its work. If its members see it as their main mission to celebrate their nice Dutch colleagues (as has happened in the recent past), the results will be complimentary but not necessarily very informative. On the other hand, they may also punish groups by using criteria that are not actually valid for those specific groups although they may be standard for the discipline as a whole (this has also happened, for example when book-oriented groups work in a journal-oriented discipline).

The protocol does not include a uniform set of requirements or indicators. The specific mission of the research institutes or university departments under assessment is the guiding principle. As a result, a group whose research is mainly aimed at having practical impact may be evaluated with different criteria than a group that aims to work at the international frontier of basic research. The protocol is not unified around substance but around procedure. Each group has to be evaluated every six years. Another new element in the protocol is that the scale for assessment has been changed from a five-point to a four-point scale, ranging from “unsatisfactory”, via “good” and “very good”, to “excellent”. This scale will be applied to all three dimensions: scientific quality, societal relevance, and feasibility.

The considerable freedom that the peer committees have in evaluating Dutch research has been maintained in the new protocol. Therefore, it remains to be seen what the effects of the novel elements in the protocol will be. In assessing the societal relevance of research, the Dutch are following their British peers. Research groups will have to construct “narratives” that explain the impact their research has had on society, understood broadly. It is not yet clear how these narratives will be judged according to the scale. The criteria for feasibility are even less clear: according to the protocol, a group has “excellent” feasibility if it is “excellently equipped for the future”. Well, we’ll see how this works out.

With less emphasis on the number of publications in the new protocol, the Dutch universities, the funding agency NWO and the academy of sciences KNAW (which are collectively responsible for the protocol) have also responded to the increased anxiety about “perverse effects” in the research system triggered by the ‘Science in Transition’ group and to recent cases of scientific fraud. The Dutch minister of education, culture and science, Jet Bussemaker, welcomed this change. “Productivity and speed should not be leading considerations for researchers”, she said on receiving the new protocol. I fully agree with this statement, yet this aspect of the protocol will also have to stand the test of practice. In many ways, the number of publications is still a basic building block of scientific or scholarly careers. For example, the h-index is very popular in the medical sciences (Tijdink, De Rijcke, Vinkers, Smulders, & Wouters, 2014). This index combines the number of publications of a researcher with the citation impact of those articles, in such a way that the h-index can never be higher than the total number of publications. This means that if researchers are compared according to the h-index, the most productive ones will tend to prevail. We will have to wait and see whether the new evaluation protocol will be able to withstand this type of reward for high levels of article production.
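To make that last point concrete, here is a minimal sketch in Python of how the h-index is commonly computed (an illustration added here, not part of the protocol or the cited paper); it shows why the index is capped by the total number of publications.

    # Sketch for illustration only: the h-index is the largest h such that
    # a researcher has at least h publications cited at least h times each.
    def h_index(citation_counts):
        ranked = sorted(citation_counts, reverse=True)  # most-cited papers first
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank  # there are still at least `rank` papers with >= rank citations
            else:
                break
        return h

    # Three blockbuster papers can never yield more than h = 3 ...
    print(h_index([900, 450, 300]))           # -> 3
    # ... while a longer list of moderately cited papers scores higher.
    print(h_index([12, 11, 10, 9, 8, 7, 6]))  # -> 6

The cap is visible in the examples: however heavily cited, a short publication list limits the attainable h-index, which is why comparisons based on the h-index tend to favour prolific authors.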

Reference: Tijdink, J. K., De Rijcke, S., Vinkers, C. H., Smulders, Y. M., & Wouters, P. (2014). Publicatiedrang en citatiestress. Nederlands Tijdschrift voor Geneeskunde, 158, A7147.

Metrics in research assessment under review

This week the Higher Education Funding Council for England (HEFCE) published a call to gather “views and evidence relating to the use of metrics in research assessment and management” (http://www.hefce.ac.uk/news/newsarchive/2014/news87111.html). The council has established an international steering group that will perform an independent review of the role of metrics in research assessment. The review is supposed to contribute to the next installment of the Research Excellence Framework (REF) and will be completed in spring 2015.

Interestingly, two members of the European ACUMEN project (http://research-acumen.eu/) are members of the 12-person steering group – Mike Thelwall (professor of cybermetrics at Wolverhampton University, http://cybermetrics.wlv.ac.uk/index.html) and myself – and it is led by James Wilsdon, professor of Science and Democracy at the Science Policy Research Unit (SPRU) at the University of Sussex. The London School of Economics scholar Jane Tinkler, co-author of the book The Impact of the Social Sciences, is also a member and has put together some reading material on their blog (http://blogs.lse.ac.uk/impactofsocialsciences/2014/04/03/reading-list-for-hefcemetrics/). So there will be ample input from the social sciences to analyze both the promises and the pitfalls of using metrics in the British research assessment procedures. The British clearly see this as an important issue. The creation of the steering group was announced by the British minister for universities and science, David Willetts, at the Universities UK conference on April 3 (https://www.gov.uk/government/speeches/contribution-of-uk-universities-to-national-and-local-economic-growth). In addition to science & technology studies experts, the steering group consists of scientists from the most important stakeholders in the British science system.

At CWTS, we responded enthusiastically to the invitation by HEFCE to contribute to this work, because this approach resonates so well with the CWTS research programme (http://www.cwts.nl/pdf/cwts_research_programme_2012-2015.pdf). The review will focus on: identifying useful metrics for research assessment; how metrics should be used in research assessment; ‘gaming’ and strategic use of metrics; and the international perspective.

All the important questions about metrics have been put on the table by the steering group, among them:

– What empirical evidence (qualitative or quantitative) is needed for the evaluation of research, research outputs and career decisions?

– What metric indicators are useful for the assessment of research outputs, research impacts and research environments?

– What are the implications of the disciplinary differences in practices and norms of research culture for the use of metrics?

– What evidence supports the use of metrics as good indicators of research quality?

– Is there evidence that the move to more open access to the research literature enables new metrics to be used or enhances the usefulness of existing metrics?

– What evidence exists around the strategic behaviour of researchers, research managers and publishers responding to specific metrics?

– Has strategic behaviour invalidated the use of metrics and/or led to unacceptable effects?

– What are the risks that some groups within the academic community might be disproportionately disadvantaged by the use of metrics for research assessment and management?

– What can be done to minimise ‘gaming’ and ensure the use of metrics is as objective and fit-for-purpose as possible?

The steering group also calls for evidence on these issues from other countries. If you wish to contribute evidence to the HEFCE review, please make it clear in your response whether you are responding as an individual or on behalf of a group or organisation. Responses should be sent to metrics@hefce.ac.uk by noon on Monday 30 June 2014. The steering group will consider all responses received by this deadline.