What counts in research? Dysfunction in knowledge creation & moving beyond

One of the long-term challenges in transitioning scholarly communication to open access is the reliance on bibliometrics. Many authors and organizations are working to address this challenge. The purpose of this post is to share some highlights of my work in progress, a book chapter (preprint) designed to explain the current state of bibliometrics in the context of a critique of global university rankings. Brief reflections that are new and relevant to advocates of open access and of change in the evaluation of scholarly work follow.

  • Impact: it is not logical to equate impact with quality, and it is dangerous to do so. Most approaches to the evaluation of scholarly work assume that impact is a good thing, an indicator of quality research. I argue that this reflects a major logical flaw, and a dangerous one at that. We should be asking whether it makes sense for an individual research study (as opposed to the weight of evidence gained and confirmed over many studies) to have impact. If impact is good and more impact is better, then the since-refuted study that equated vaccination with autism must be an exceptionally high-quality study, whether measured by traditional citations or by the real-world impact of the return of diseases such as measles. This is not a fluke, but rather a reasonable expectation of reward systems that favour innovation over caution. Irreproducible research, in this sense, is a logical outcome of the current evaluation of scholarly work.
  • New metrics (or altmetrics) serve many purposes and should be developed and used, but they should not be used to evaluate the quality of scholarship, in order to avoid bias and manipulation. It should be obvious that metrics that go beyond traditional academic citations are likely to reflect and amplify existing social biases (e.g. gender, ethnicity), and non-academic metrics such as tweets are, in addition, subject to manipulation by interested parties including industry and special interest groups (e.g. big pharma, big oil, big tobacco).
  • New metrics are likely to change scholarship, but not necessarily in the ways anticipated by the open access movement. For example, the replacement of journal-level citation impact by article-level citations is already well advanced, with Elsevier in a strong position to dominate this market. Scopus metrics data is already in use by university rankings and is being sold by Elsevier to the university market.
  • It is possible to evaluate scholarly research without recourse to metrics. The University of Ottawa’s collective agreement with full-time faculty reflects a model that not only avoids the problems of metrics but also serves as an excellent model for change in scholarly communication, as it recognizes that scholarly works may take many forms. For details, see the APUO Collective Agreement 2018 – 2021, section 23.3.1 – excerpt:

23.3.1. General. Whenever this agreement calls for an assessment of a Faculty Member’s scholarly activities, the following provisions shall apply.

a) The Member may submit for assessment articles, books or contributions to books, the text of presentations at conferences, reports, portions of work in progress, and, in the case of literary or artistic creation, original works and forms of expression

b) Works may be submitted in final published form, as galley proofs, as preprints of material to be published, or as final or preliminary drafts. Material accepted for publication shall be considered as equivalent to actually published material…

h) It is understood that since methods of dissemination may vary among disciplines and individuals, dissemination shall not be limited to publication in refereed journals or any particular form or method.

There may be other models; if so, I would be interested in hearing about them. Please add a comment to this post or send an e-mail.

The full book chapter preprint is available here: https://ruor.uottawa.ca/handle/10393/39088

Excerpt

This chapter begins with a brief history of scholarly journals and the origins of bibliometrics, and an overview of how metrics feed into university rankings. Journal impact factor (IF), a measure of average citations to articles in a particular journal, was until quite recently the sole universal standard for assessing the quality of journals and articles. IF has been widely critiqued; even Clarivate Analytics, the publisher of the Journal Citation Reports / IF, cautions against the use of IF for research assessment. In the past few years there have been several major calls for change in research assessment: the 2012 San Francisco Declaration on Research Assessment (DORA), the 2015 Leiden Manifesto (translated into 18 languages), and the 2017 Science Europe New vision for meaningful research assessment. Meanwhile, due to rapid change in the underlying technology, practice is changing far more rapidly than most of us realize. IF has already largely been replaced by item-level citation data from Elsevier’s Scopus in university rankings. Altmetrics reflecting a wide range of uses, including but moving beyond citation data (such as downloads and social media use), are prominently displayed on publishers’ websites. The purpose of this chapter is to provide an overview of how these metrics work at present, to move beyond technical critique (reliability and validity of metrics) to introduce major flaws in the logic behind metrics-based assessment of research, and to call for even more radical thought and change towards a more qualitative approach to assessment. The collective agreement of the University of Ottawa is presented as one model for change.
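For readers less familiar with the calculation, the standard two-year impact factor described above is simply a journal-level average, per the Journal Citation Reports definition:

\[
\mathrm{IF}_{Y} = \frac{\text{citations received in year } Y \text{ to items published in years } Y-1 \text{ and } Y-2}{\text{number of citable items published in years } Y-1 \text{ and } Y-2}
\]

For example (illustrative numbers only): a journal with 250 citable items published in 2017–2018 that received 500 citations to those items in 2019 would have a 2019 IF of 500 / 250 = 2.0. Because this is an average across a whole journal, it says nothing about the citations to any individual article, which is one reason even Clarivate cautions against using IF for research assessment.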

Cite as:

Morrison, H. (2019). What counts in research? Dysfunction in knowledge creation & moving beyond. Sustaining the Knowledge Commons / Soutenir Les Savoirs Communs. Retrieved from https://sustainingknowledgecommons.org/2019/05/22/what-counts-in-research-dysfunction-in-knowledge-creation-moving-beyond/