One of the long-term challenges in transitioning scholarly communication to open access is reliance on bibliometrics. Many authors and organizations are working to address this challenge. The purpose of this post is to share some highlights of my work in progress, a book chapter (preprint) designed to explain the current state of bibliometrics in the context of a critique of global university rankings. Some brief reflections that are new and relevant to advocates of open access and of change in the evaluation of scholarly work follow.
- Impact: it is not logical to equate impact with quality, and further, it is dangerous to do so. Most approaches to the evaluation of scholarly work assume that impact is a good thing, an indicator of quality research. I argue that this reflects a major logical flaw, and a dangerous one at that. We should be asking whether it makes sense for an individual research study (as opposed to the weight of evidence gained and confirmed over many studies) to have impact. If impact is good and more impact is better, then the since-refuted study that equated vaccination with autism must be an exceptionally high-quality study, whether measured by traditional citations or by the real-world impact of the return of diseases such as measles. I argue that this is not a fluke, but rather a reasonable expectation of reward systems that favour innovation over caution. Irreproducible research, in this sense, is not an accident but a logical outcome of the current evaluation of scholarly work.
- New metrics (or altmetrics) serve many purposes and should be developed and used, but they should be avoided when evaluating the quality of scholarship because of the risks of bias and manipulation. It should be obvious that metrics that go beyond traditional academic citations are likely to reflect and amplify existing social biases (e.g. gender, ethnicity), and non-academic metrics such as tweets are in addition subject to manipulation by interested parties, including industry and special interest groups (e.g. big pharma, big oil, big tobacco).
- New metrics are likely to change scholarship, but not necessarily in the ways anticipated by the open access movement. For example, the replacement of journal-level citation impact by article-level citations is already well advanced, with Elsevier in a strong position to dominate this market. Scopus metrics data is already in use by university rankings and is being sold by Elsevier to the university market.
- It is possible to evaluate scholarly research without recourse to metrics. The University of Ottawa’s collective agreement with full-time faculty reflects a model that not only avoids the problems of metrics, but is also an excellent model for change in scholarly communication because it recognizes that scholarly works may take many forms. For details, see the APUO Collective Agreement 2018–2021, section 23.3.1 (excerpt):
23.3.1. General Whenever this agreement calls for an assessment of a Faculty Member’s scholarly activities, the following provisions shall apply.
a) The Member may submit for assessment articles, books or contributions to books, the text of presentations at conferences, reports, portions of work in progress, and, in the case of literary or artistic creation, original works and forms of expression
b) Works may be submitted in final published form, as galley proofs, as preprints of material to be published, or as final or preliminary drafts. Material accepted for publication shall be considered as equivalent to actually published material…
h) It is understood that since methods of dissemination may vary among disciplines and individuals, dissemination shall not be limited to publication in refereed journals or any particular form or method.
There may be other models; if so, I would be interested in hearing about them. Please add a comment to this post or send an e-mail.
The full book chapter preprint is available here: https://ruor.uottawa.ca/handle/10393/39088
Excerpt
This chapter begins with a brief history of scholarly journals and the origins of bibliometrics, and an overview of how metrics feed into university rankings. Journal impact factor (IF), a measure of average citations to articles in a particular journal, was until quite recently the sole universal standard for assessing the quality of journals and articles. IF has been widely critiqued; even Clarivate Analytics, the publisher of the Journal Citation Reports / IF, cautions against the use of IF for research assessment. In the past few years there have been several major calls for change in research assessment: the 2012 San Francisco Declaration on Research Assessment (DORA), the 2015 Leiden Manifesto (translated into 18 languages), and the 2017 Science Europe New vision for meaningful research assessment. Meanwhile, due to rapid change in the underlying technology, practice is changing far more rapidly than most of us realize. IF has already largely been replaced by item-level citation data from Elsevier’s Scopus in university rankings. Altmetrics illustrating a wide range of uses, including but moving beyond citation data, such as downloads and social media use, are prominently displayed on publishers’ websites. The purpose of this chapter is to provide an overview of how these metrics work at present, to move beyond technical critique (reliability and validity of metrics) to introduce major flaws in the logic behind metrics-based assessment of research, and to call for even more radical thought and change towards a more qualitative approach to assessment. The collective agreement of the University of Ottawa is presented as one model for change.
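As background to the excerpt: the journal impact factor referred to above is conventionally calculated as a two-year ratio (this is the standard Journal Citation Reports definition; the year shown here is only illustrative):

\[
\mathrm{IF}_{2018} = \frac{\text{citations received in 2018 to items published in 2016 and 2017}}{\text{number of citable items published in 2016 and 2017}}
\]

Because it is an average over a whole journal, the IF says nothing about the citations (let alone the quality) of any individual article, which is one reason even its publisher cautions against using it for research assessment.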
Cite as:
Morrison, H. (2019). What counts in research? Dysfunction in knowledge creation & moving beyond. Sustaining the Knowledge Commons / Soutenir Les Savoirs Communs. Retrieved from https://sustainingknowledgecommons.org/2019/05/22/what-counts-in-research-dysfunction-in-knowledge-creation-moving-beyond/
We are just talking about setting up systems where scientists assess the quality of research, rather than using citation metrics.
If we do this, we need to get it right to avoid the perverse incentives of the current impact-factor-based system. Do think along with us.
https://gitlab.com/publishing-reform/discussion/issues/94#note_172547077
Thanks Victor. The overlay journal model (post preprints and have peer review serve as an overlay or virtual journal) is one of the better models for transformation, from my perspective. We already have numerous systems to assess the quality of scholarly work, including research: evaluation, feedback, and grading in the case of students; qualitative peer review in the case of publishing and conferences; and evaluation of grant applications, job applicants, and applications for promotion and tenure. Evaluating for “which works to read” is a different question, and one that I think we should be thinking about differently. For example, if more scholars were engaged in coordinating roles, such as reading the literature in a given field and summarizing it through living reviews, meta-analyses, evidence summaries, and so on, or gathering and pointing to the literature through projects like Retraction Watch or the Open Access Tracking Project, that would help us cope with the vast amounts of literature. For this to happen, this work (along with critique of existing studies and attempts at replication) needs to be recognized as the important work that it is. My avoidance of the word “science” is deliberate; science is only one of our types of knowledges. All are necessary, even for science, as I explain here: https://sustainingknowledgecommons.org/2019/04/09/science-lets-talk-your-friend-all-other-knowledges/
I think overlay journals are a great solution at the end of the transition, but I feel that grassroots journals are better at making the transition happen. Overlay journals only review manuscripts posted on repositories, not articles published in journals. That makes the added value of the open grassroots reviews much larger.
My background is physics and climatology, so I am used to scholarship being done and reviewed in article-sized chunks. Maybe the situation is different in the humanities. That is one reason I am reaching out to other fields, to get feedback on whether the idea would work for them.
I agree that ideas that are not science can be just as valuable. It is hard to define science, but I do not see writing reviews and curating a grassroots review journal as much different from writing review articles or being the editor of a scientific journal. So these are things scientists are used to doing, although often less openly and systematically.
I agree wholeheartedly that grassroots journals are the best way to approach the transition in the interim. Overlay and grassroots journals are two approaches to taking back leadership of scholarly communication; the differences between them are merely technical, and I see the technologies of journals, repositories, and conference proceedings as converging.