Irrational rationality: critique of metrics-based evaluation of researchers and universities

According to one of the most consulted global university rankings services, the QS World University Rankings 2022, the University of Toronto is the top-ranked university in Canada. It should take no more than a brief pause to see the fiction, even pseudoscience, in what is presented as objective empirical information. In the real world, it is mid-June 2021. The empirical “facts” on which QS is based are still in progress, in a pandemic year of considerable uncertainty: data on 2021 cannot be complete until the year is over. Meanwhile, QS is already reporting stats for 2022; perhaps they are psychic?

Scratching slightly at the surface, anyone with even a little familiarity with Canadian universities is probably aware that the University of Toronto is currently under a rare censure due to a “serious breach of the principles of academic freedom” in a hiring decision. Censure is a “rarely invoked sanction in which academic staff in Canada and internationally are asked to not accept appointments, speaking engagements or distinctions or honours at the University of Toronto, until satisfactory changes are made”. I don’t know the details of the QS algorithms, but I think it is fair to speculate that neither support for academic freedom nor a university’s ability to attract top faculty for appointments, speeches, distinctions or honours is factored in, or, if factored in, weighted appropriately.

Digging just a little deeper, someone with a modicum of understanding of the university system in Canada, and Ontario in particular, would know that the University of Toronto is one of Ontario’s 23 public universities, all of which have programs approved and regularly reviewed for quality by the same government, are funded under the same formulae, and provide the same economic support for students. Degrees at a particular level are considered equivalent locally, and courses are often transferable between institutions. When not under censure, the University of Toronto is indeed a high-quality university; so are the University of Ottawa, where I work, Carleton (the other Ottawa-based university), and all the other Ontario universities. Specific programs frequently undergo additional accreditation: my department offers a Master of Information Studies program accredited by the American Library Association (ALA). Both the Ontario government and the ALA require actual data in their quality assurance / accreditation processes. This includes evidence of strategic planning, but not guesswork about future output.

If QS is this far off base in its assessment of universities in the largest province of a G7 country (the epitome of the Global North), how accurate are QS and the other global university rankings in the Global South? According to Stack (2021) and the authors of the newly released book Global University Rankings and the Politics of Knowledge (http://hdl.handle.net/2429/78483), global university rankings such as QS and THE, and the push for the Global South to develop globally competitive “world class universities”, are more about reproducing colonial relations, marketizing higher education and commercializing research than about assuring high-quality education. The attention paid to such rankings distracts universities, and even countries, from what matters locally. As Chou points out, the focus on rankings leads scholars in Taiwan to publish in English rather than Mandarin, although Mandarin is the local language. A focus on publishing in international, English-language journals creates a disincentive to conduct research of local importance almost everywhere.

My chapter in this work focuses on the intersection of critique of metrics-based evaluation of research and how this feeds into the university rankings system. The first part of the chapter, “Dysfunction in knowledge creation and moving beyond”, provides a brief history and context of bibliometrics, the development of traditional and new metrics-based approaches, and major critique and advocacy efforts to change practice (the San Francisco Declaration on Research Assessment (DORA) and the Leiden Manifesto). The unique contribution of this chapter is a critique of the belief underlying both traditional and alternative metrics-based approaches to assessing research and researchers: the assumption that impact is good and an indicator of quality research, and that it therefore makes sense to measure impact, the only question being whether particular technical measures of impact are accurate. For example, if impact is necessarily good, then the retracted study by Wakefield et al. that falsely correlated vaccination with autism is good research by any metric: many academic citations both before and after retraction, citations in popular and social media, and arguably a factor in the real-world impact of the anti-vaccination movement, the subsequent return of preventable illnesses like measles, and the challenge of fighting COVID through vaccination. An alternative approach is suggested, using the University of Ottawa’s collective agreement with the APUO (the union of full-time professors) as a traditional means of evaluation that considers many different types of publications, and that treats quantity of publication in a way that gives evaluators the flexibility to take into account the kind of research and research output.

References

Morrison, H. (2021). What counts in research? Dysfunction in knowledge creation and moving beyond. In Stack, M. (Ed.), Global University Rankings and the Politics of Knowledge (pp. 109–130). http://ruor.uottawa.ca/handle/10393/39088

Stack, M. (2021). Global University Rankings and the Politics of Knowledge. http://hdl.handle.net/2429/78483

DOAJ, Impact Factor and APCs

by César Villamizar and Heather Morrison

In May 2015 we conducted a pilot study correlating OA APCs and the journal impact factor, using data from 2010, 2013 and 2014. Here are some early results:

  • about 10% of the journals listed in JCR are DOAJ journals
  • over 10% of the journals listed in DOAJ have an impact factor
  • about 40% of the DOAJ IF journals had an APC as of May 2015 (an estimate; higher than the proportion of DOAJ journals overall with an APC)
  • average APC of IF journals in 2014 more than double overall average APC ($1,948 as compared with overall average of $964)
  • average APCs of IF journals increased by 7% in the 5-month period from December 2013 to May 2014, and by 16% from 2010 to 2014
  • over 80% of APC / IF journals increased price by 6% or more in a 5-month period from December 2013 to May 2014
  • about 20% of APC / IF journals increased price by 10% or more in a 5-month period from December 2013 to May 2014
  • 7% of APC / IF journals increased price by 20% or more in a 5-month period from December 2013 to May 2014

Conclusion: about 10% of DOAJ journals have impact factors, and about 10% of impact factor journals are DOAJ journals. Open access journals (or some OA journals) using the APC business model may be exploiting impact factor status as a means to raise prices. Further investigation warranted.

Details

As of May 3, 2015, Thomson Reuters’ Journal Citation Reports (JCR) listed 11,619 journals with an impact factor (IF). Of these, 1,146 are listed in the Directory of Open Access Journals (DOAJ). As of May 15, 2015, 10,532 journals were listed in DOAJ. This means that 9.9% of the titles listed in JCR are DOAJ titles, and 10.9% of DOAJ journals have an IF.
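These shares can be checked directly from the counts above (a quick Python sketch, rounding to one decimal place):

```python
# Overlap between journals with an impact factor (JCR) and DOAJ,
# using the May 2015 counts reported in the text.
jcr_total = 11619    # journals with an IF in JCR, May 3, 2015
doaj_total = 10532   # journals listed in DOAJ, May 15, 2015
overlap = 1146       # JCR journals also listed in DOAJ

share_of_jcr = overlap / jcr_total * 100    # share of JCR titles in DOAJ
share_of_doaj = overlap / doaj_total * 100  # share of DOAJ titles with an IF
print(f"{share_of_jcr:.1f}% of JCR titles are in DOAJ")
print(f"{share_of_doaj:.1f}% of DOAJ titles have an IF")
```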

The pilot involved selecting half of the DOAJ journals with an IF (572 journals from both the sciences and social sciences, selected alphabetically by abbreviated title, A through J Otolaryngol-Head) and looking up the quartile and subject ranking of each. Of these titles, 169 were included in the May 2014 OA APC sample. For 126 journals, data was available for both December 2013 and May 2014; these are the basis of the 2013–2014 calculations. Assuming that the proportion of APC-charging journals is the same among non-sampled journals, this yields an estimate of 229 journals with both an IF and an APC, 40% of the total. This is higher than the 26% of journals overall with APCs as of May 2014.

Stats of the 572 in DOAJ with impact factor (pilot):

  • 42.1% of the journals are in quartile four (Q4), 27.2% in quartile three (Q3), 18.9% in quartile two (Q2), and 11.8% in quartile one (Q1)
    • 69% of the journals are in the Q4 and Q3
    • 31% of the journals are in the Q2 and Q1

[Figure: DOAJ journals with an impact factor, by quartile]

  • Out of the 572 journals, APC data is available by year as follows (S&B = Solomon & Björk; SKC = Sustaining the Knowledge Commons survey):
    • 2010 S&B: 176
    • Dec 2013 SKC: 129
    • May 2014 SKC: 169
  • 126 journals have APC information collected in both Dec 2013 SKC and May 2014 SKC
  • 110 journals have APC information collected in 2010 S&B, Dec 2013 SKC, and May 2014 SKC

Stats of the 126 journals with APC Data (Dec 2013 SKC – May 2014 SKC)

  • 17.5% of the journals are in quartile four (Q4), 38.1% in quartile three (Q3), 30.2% in quartile two (Q2), and 14.3% in quartile one (Q1)
    • 55.6% of the journals are in Q4 and Q3
    • 44.4% of the journals are in Q2 and Q1
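The reported quartile percentages are consistent with whole-journal counts out of 126. A small sketch (assuming, as the post does not state explicitly, that each percentage is a count rounded to one decimal place) recovers the counts:

```python
# Recover integer journal counts from the reported quartile percentages.
# Assumption: each percentage is a count out of 126 journals, rounded
# to one decimal place.
total = 126
reported = {"Q4": 17.5, "Q3": 38.1, "Q2": 30.2, "Q1": 14.3}

counts = {
    # the integer count whose share of 126 is closest to the reported value
    q: min(range(total + 1), key=lambda n: abs(n / total * 100 - pct))
    for q, pct in reported.items()
}
print(counts)                 # → {'Q4': 22, 'Q3': 48, 'Q2': 38, 'Q1': 18}
print(sum(counts.values()))   # → 126
```

The recovered counts sum exactly to 126, which supports reading the percentages as rounded whole-journal shares.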

[Figure: the DOAJ impact factor journals with APC data, by quartile]

  • 3.2% of the journals decreased their APC (3 journals, 2 of them Hindawi journals; Hindawi as of May 2014 had a practice of rotating free publication, and these 2 journals had APCs of 0 in 2014 but have substantial prices today: Bioinorganic Chemistry and Applications is now $1,250 and International Journal of Genomics is now $1,500). The third journal with an apparent small price decrease, Experimental Animals, from $200 to $198 USD, is likely an anomaly due to a weakening of the main currency, the Japanese yen, against the USD. In other words, all price decreases appear to be temporary anomalies.
  • 14.3% of the journals maintained their APC
  • 82.5% of the journals increased their APC by at least 6.4%
    • 3.1% increased their APC between 6.4% and 7.49%
    • 54.8% increased their APC between 7.5% and 9.49%
    • 15% increased their APC between 9.5% and 13.9%
    • 7% increased their APC between 14% and 25%
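The banding above can be reproduced mechanically from per-journal APC pairs. A minimal sketch follows; the APC pairs are invented for illustration and are not the study’s data:

```python
# Classify a journal's APC change (Dec 2013 -> May 2014) into the bands
# used above.
def apc_band(apc_old, apc_new):
    pct = (apc_new - apc_old) / apc_old * 100  # percent change
    if pct < 0:
        return "decreased"
    if pct == 0:
        return "unchanged"
    if 6.4 <= pct < 7.5:
        return "6.4-7.49%"
    if 7.5 <= pct < 9.5:
        return "7.5-9.49%"
    if 9.5 <= pct <= 13.9:
        return "9.5-13.9%"
    if 14 <= pct <= 25:
        return "14-25%"
    return "other"  # the post's bands do not cover every possible change

# Illustrative (hypothetical) APC pairs only:
sample = [(2060, 2215), (200, 198), (1000, 1000), (1500, 1800)]
print([apc_band(old, new) for old, new in sample])
# → ['7.5-9.49%', 'decreased', 'unchanged', '14-25%']
```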

The following figure reflects the 123 titles remaining after removing the anomalous titles discussed above.

[Figure: APC increases 2013–2014 for the remaining 123 titles]

The following chart illustrates the percentage of journals by price increase from 2013 to 2014.

[Figure: percentage of journals by price increase, 2013–2014]

                       APC 2010 USD   APC 2013 USD   APC 2014 USD
Max                    2,165          2,420          2,650
Min                    500
Min greater than zero  500            200            198
Median                 1,825          2,060          2,215
Mode                   1,825          2,060          2,215
Average                1,637          1,808          1,948
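The growth in the average APC can be computed directly from the averages in the table. Note that these ratio-of-averages figures differ slightly from the per-journal increases reported earlier (7% and 16%), presumably because those were computed over the smaller matched sets (126 and 110 journals):

```python
# Growth in the average APC, computed from the table's averages (USD).
avg_apc = {2010: 1637, 2013: 1808, 2014: 1948}

growth_13_14 = (avg_apc[2014] - avg_apc[2013]) / avg_apc[2013] * 100
growth_10_14 = (avg_apc[2014] - avg_apc[2010]) / avg_apc[2010] * 100
print(f"2013 -> 2014: {growth_13_14:.1f}%")  # → 7.7%
print(f"2010 -> 2014: {growth_10_14:.1f}%")  # → 19.0%
```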
  • Medicine, and Biology and Life Science, represent 81.1% of the journals in categories likely to charge APCs
    • 3% of the journals in these two categories increased their APC by at least 6.4%
    • 9% increased their APC between 6.4% and 7.49%
    • 1% increased their APC between 7.5% and 9.49%
    • 50% increased their APC between 9.5% and 13.9%
    • 8% increased their APC between 14% and 25%

Note and references

2010 data courtesy of Solomon, D.J. & Björk, B.C. (2012). A study of open access journals using article processing charges. Journal of the American Society for Information Science and Technology. Retrieved May 31, 2015 from http://www.openaccesspublishing.org/apc2/ (data unpublished)

2014 data: Morrison H, Salhab J, Calvé-Genest A, Horava T. Open Access Article Processing Charges: DOAJ Survey May 2014. Publications. 2015; 3(1):1-16. http://www.mdpi.com/2304-6775/3/1/1

Cite as:

Villamizar, C., & Morrison, H. (2015). DOAJ, Impact Factor and APCs. Sustaining the Knowledge Commons / Soutenir Les Savoirs Communs. Retrieved from https://sustainingknowledgecommons.org/2015/06/01/doaj-impact-factor-and-apcs/