[Cross-posted at Revise and Dissent.]
The Research Quality Framework (RQF) was a proposal by the previous Australian federal government to introduce a set of metrics by which the research output of university departments could be measured. Something like the Research Assessment Exercise in Britain, certainly in principle (I don't know enough about either to say how alike they were in practice). The new federal government scrapped the RQF earlier this year. It's gone, dead, buried. Instead we're getting something called the Excellence in Research for Australia (ERA) Initiative, which is completely different, and much better. Personally, I think I preferred the RQF -- more interesting possibilities for backronyms there.
I don't really have an objection to this type of thing, in principle. But as everyone knows (including, I'm sure, those who devise them), performance metrics can lead to perverse incentives. The average length of time people have to wait for elective surgery would seem to be a good one for hospitals, but not if they reduce it by turning people away rather than by hiring more doctors or expanding facilities. Or, even worse, by discharging patients before they are fully recovered.
So the precise metrics used matter. And one of the ERA metrics seems to be causing a lot of concern: the ranking of journals, both international and Australian, in terms of quality. Publishing in high-quality journals scores more highly than publishing in low-quality journals, and in the end this presumably translates into more dollars. Seems fair enough on the face of it: obviously most historians would prefer to publish in high-quality journals whenever possible anyway, with or without an ERA. But who decides which journal gets what rank?
The short answer is: not historians. The longer answer is the Australian Research Council (ARC), which is the peak body in this country for distributing research grants. In the first instance they are relying on journal impact factors (a measure of how often articles from a journal are cited by other journals), which at first glance would seem to discriminate against historians, for whom monographs are a hugely important means of publishing research. Maybe there's a way of correcting for that, I don't know. Anyway, there are four ranks, ranging from C at the bottom, through B and A, to A* at the top. [Spinal Tap reference here] A* is defined as follows:
Typically an A* journal would be one of the best in its field or subfield in which to publish and would typically cover the entire field/subfield. Virtually all papers they publish will be of a very high quality. These are journals where most of the work is important (it will really shape the field) and where researchers boast about getting accepted. Acceptance rates would typically be low and the editorial board would be dominated by field leaders, including many from top institutions.
This is supposed to represent the top 5% of all journals in a field or subfield. A is like A*, only less so, and represents the next 15%; B, the next 30%; C, the bottom 50%. I can see a danger of perverse incentives here, at least for Australian journals (international journals won't notice a couple of submissions more or less): C-ranked journals might get even fewer quality articles submitted to them, because these will be directed to the A*s, As and Bs first. How can they then hope to climb up to B? So ranking journals in this way not only measures their quality, it might actually fix them in place: a self-fulfilling metric.
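To make the banding concrete, here's a minimal sketch of how those thresholds would sort journals into ranks. The 'percentile' score is a hypothetical input of my own invention -- the ARC doesn't publish anything of the sort -- so this is purely an illustration of the bands, not of how ranks are actually assigned:

```python
# Toy illustration of the stated ERA bands (not the ARC's method):
# top 5% = A*, next 15% = A, next 30% = B, bottom 50% = C.
# 'percentile' is a hypothetical quality score, where 100 = best in field.

def era_rank(percentile: float) -> str:
    """Map a hypothetical quality percentile to a draft ERA rank band."""
    if percentile >= 95:    # top 5%
        return "A*"
    if percentile >= 80:    # next 15%
        return "A"
    if percentile >= 50:    # next 30%
        return "B"
    return "C"              # bottom 50%

# A journal judged better than 90% of its field or subfield lands in A:
print(era_rank(90))  # -> A
```

The sharp cut-offs are part of the worry: a journal sitting just below a threshold gets the lower rank, and the ranking itself may then help keep it there.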
At least the ARC is seeking input from historians (and indeed, all researchers in Australia in all fields) about the proposed ranks, but what effect this will have is anyone's guess. The ARC has already extended the deadline for submissions from next week to mid-August, so they're clearly aware of the 'large interest' the journal ranks have aroused.
So, to see how the draft ranks play out in my own field (what that field is is debatable, but let's call it modern British military history), I trawled through my bibliography files to gather together a list of journals which have published articles useful to me in the last 25 years, discarding any which appeared only once (on the basis that they're evidently only of marginal relevance).
A*:
- Historical Journal
- Journal of Contemporary History
- Journal of Modern History
A:
- War in History
B:
- Journal of Military History
- War and Society
C:
- International History Review
- Journal of British Studies
- Twentieth Century British History
One thing leaps out: no military history journal is ranked A*. There's an A (War in History) and two Bs. This is troubling for military historians: we will be disadvantaged relative to historians working in other subfields if even our best journals are ranked low. It could be argued that some journals are too specialised to be in the top rank, but then Ajalooline Ajakiri Tartu Ulikooli Ajaloo Osakonna Ajakiri, which I think focuses on Baltic history, is given an A* rank, alongside American Historical Review and Past & Present.
Is it really the case that military history doesn't deserve a place in the top 5%? Are the other journal ranks relatively fair? What do people think?
Gavin
It's hard to see how WiH could be any better than it is. If it isn't publishing enough ambitious field-defining articles then maybe no-one's writing them. But WiH is clearly the best in its field and it's an important field. (But obviously I have a vested interest in the one journal that I've appeared in being ranked really high!)
I'm surprised that Journal of British Studies is ranked so low. I thought it was better than that.
Brett Holman
I'd tend to agree with you on WiH, and if I made only one change it would be to make it A*. I think, at least, the positions of the military history journals relative to each other are about right. I've been advised that it would be better to publish the paper I'm working on in JMH than WiH, but my gut feeling is that while both have published important work in my area, WiH's is more interesting overall, which I guess equates to better! (The other issue is that JMH is more tolerant of long papers than WiH, so WiH might not be right for this paper anyway.) There's also JCH, which seems willing to publish military history, broadly defined.
Maybe JBS has declined in recent years? I've got a lot of refs from it in my db, but only 1 after 1985.
Mike Cosgrave
A fundamental problem for many humanities scholars is that publications within your own "national" context - on Australian or Irish History or literature for example - may not be of great interest to international journals whereas scientific articles are universal. It is quite possible for an academic in the humanities (or indeed the social sciences) to do a great deal of good work which will be "below the radar" of these criteria.
There is also a problem with assessment based on citations - we have science academics in my uni with over 13,000 citations while the top-cited history staff only have a few hundred.
The other problem with all these journal-based assessments is that papers in the sciences typically have multiple authors, including research supervisors, heads of dept and others, whereas a humanities paper is usually a solo effort. The head of a productive dept in the sciences may accumulate many co-authored papers and many, many citations, whereas his or her (hir?) humanities colleagues will have fewer.
Humanities academics really have to keep on making these points to the "bean counters".
Brett Holman
Excellent points, Mike. I can demonstrate some of them from my own brief, unillustrious career in science, from which I garnered 4 publications: 1 in a refereed journal and 3 in conference proceedings (the exceedingly low publication rate for humanities conferences doesn't help either). I wrote only one of these myself; none has fewer than 3 authors (one has 6). The refereed paper has, so far, had a total of 46 citations. All this without really trying! In fact, the journal paper was published nearly 4 years after I left the field and 8 years after my major contribution to the project ended.
An easy way to correct for this would be to weight a publication's value by the number of authors, i.e. a single-authored paper is worth 1, one with 2 authors is worth 1/2 to each author, one with 3 authors is worth 1/3, etc. But then people working in fields like particle physics, where there are hundreds or even thousands of co-authors, would complain ... Maybe science and humanities just work too differently to be measured by the same metrics, but we're measured using those drawn from the science model.
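To put rough numbers on that, here's a minimal sketch of the fractional-credit idea, using assumed author counts loosely based on my own case above and a purely hypothetical solo-author humanities colleague:

```python
# Fractional authorship credit: each paper is worth 1/N to each of its
# N authors. Purely illustrative -- not an ERA or ARC formula.

def fractional_credit(author_counts):
    """Total credit for one researcher, given each paper's author count."""
    return sum(1 / n for n in author_counts)

# Assumed author counts for my four science publications (one had 6 authors,
# the rest at least 3 -- taken as exactly 3 here for the sake of the example):
science_papers = [3, 3, 3, 6]
# A hypothetical humanities researcher with four single-authored papers:
humanities_papers = [1, 1, 1, 1]

print(fractional_credit(science_papers))     # ~1.17 credits for 4 papers
print(fractional_credit(humanities_papers))  # 4.0 credits for 4 papers
```

Even with the same number of publications, the solo author comes out well ahead under this weighting -- which is the sort of correction Mike is after, though as I say it would then be the big collaborative sciences complaining.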
Brett Holman
Antoine Capet sent the above post and comments to H-Albion. On 24 July, Peter Stansky replied as follows:
Brett Holman
Some more comments from H-Albion, from 27 July. The first is from Robert McNamara:
The second is from Ian Welch:
And the third is from William Anthony Hay: