Inspired by @emilygathergood, @isaacsoon2, and others (like @emuehlbe) who did the work of naming the gender imbalance in NTS and other EC journals (and prompted by @JillHicksKeeton who reminded me it’s people like me who should be doing this work), a thread:
I took a deep dive into journal metric systems to think about how we assess prestige in biblical studies journals. I’m not an expert on this, so please correct my errors. Here’s what I found:
Among academic journals, the major metric for assessing prestige is citation, meaning that the more an article or journal is cited, the more prestigious it is considered to be. On one level, this makes intuitive sense: more popular = more useful = more important
But one can also imagine the flaws in this logic pretty quickly: more citation does not map directly onto quality, rigor, or impact on fields of knowledge. It can also be easily manipulated. Most major citation databases now try to correct for manipulation.
The major databases for citation are Journal Citation Reports (JCR), SCImago, and Google Scholar. JCR was originally a Thomson Reuters product and is now run by Clarivate, while SCImago draws its data from Scopus, which is a product of Elsevier.
JCR and SCImago seem largely to have been designed to create metrics for the hard sciences and to help librarians decide which journals (many of them published by Elsevier and the other large commercial publishers) to subscribe to.
Thus the major publishers of academic journals, who make large profits off of academic labor, also provide the bulk of the data on how their journals should be assessed.
You can interact with these databases yourself to see how they represent data, both for individual journals and for other metrics: scholars, articles, books, countries, disciplines.
scimagojr.com
jcr.clarivate.com
scholar.google.com/citations
Scopus (scopus.com) feeds directly into SCImago and also makes it possible to look up citations of individual authors, books, and articles. Almost anyone who checks Scopus will find that its record of their own citations insufficiently captures reality.
For biblical studies, the first thing I noticed was what was missing. Not all of these databases contain the same journals. Some skim for data from blogs and social media. Some include data from book publishers.
If you compare citation across these platforms for your work, you’ll see wide variations in what is included and counted.
JCR and SCImago use eigenvector centrality (EC) as part of their analysis, though the variations in their databases mean that we cannot directly compare their respective ECs. EC measures how central a node is within a network: a citation counts for more when it comes from a journal that is itself heavily cited.
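To make the EC idea concrete, here is a minimal sketch in Python of eigenvector centrality on a toy citation network, computed by simple power iteration. The journals and citation counts are invented for illustration; the actual JCR and SCImago calculations run far more elaborate versions of this on their own proprietary data.

import numpy as np

journals = ["NTS", "JBL", "NovTest", "CBQ"]
# cites[i][j] = number of times journal i cites journal j (invented numbers)
cites = np.array([
    [0, 40, 10, 5],    # NTS
    [35, 0, 20, 10],   # JBL
    [30, 25, 0, 5],    # NovTest
    [15, 25, 10, 0],   # CBQ
], dtype=float)

# A journal's score is proportional to the scores of the journals that cite it,
# so we repeatedly push scores along incoming citations until they stabilize.
x = np.ones(len(journals))
for _ in range(200):
    x = cites.T @ x
    x /= np.linalg.norm(x)

for name, score in sorted(zip(journals, x), key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")

The upshot: a journal cited often by journals that are themselves heavily cited rises to the top, even if its raw citation count is not the largest.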
For biblical studies journals, the data is…unplentiful. In JCR, JBL only has one year of information available (2019). The NTS data goes back to 2017 and shows a precipitous drop in citations of NTS articles over that period.
It also shows that the citing half-life for NTS is 25.5 years, meaning that half of the works cited in NTS articles were published within the last 25.5 years and the other half are older. For JBL it is 20.6 years; NovTest 28.8; CBQ 23.9.
This means that the turnover rate for scholarly conversations in these journals is quite slow (compare JAAR, which has a citing half-life of 13.7 years). In effect, the conversation in NTS is centered as though it were still 1995.
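For what it's worth, a citing half-life is simple to compute: it is roughly the median age of the works a journal's articles cite in a given year. A minimal Python illustration, using an invented list of reference years (JCR derives the real figures from its own indexed data):

import statistics

current_year = 2020
# Publication years of the works cited in a journal's articles this year (invented data)
reference_years = [2018, 2015, 2012, 2005, 1998, 1995, 1989, 1975, 1968, 1955]

ages = [current_year - year for year in reference_years]
citing_half_life = statistics.median(ages)
# Half of the cited works are newer than this, half are older.
print(f"Citing half-life: {citing_half_life} years")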
SCImago's data also shows how many of the articles a journal publishes go uncited. For biblical studies this number is often quite large. See here the graph for NTS articles:
The use of citation-based analysis to determine prestige puts the power of biblical studies journals in perspective. Though it is not a fair comparison, here are several major biblical studies journals compared with Science by SJR ranking (SCImago's measure of average prestige per article):
Here is a slightly fairer comparison with the journal Sociology:
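For those curious how SJR's "average prestige per article" is built, it can be imitated crudely: run a PageRank-style calculation over the citation network, then divide each journal's share of prestige by the number of articles it published. The sketch below (Python, using networkx) is not the actual SJR algorithm, which works on Scopus's full dataset with a multi-year citation window and limits on self-citation; every number here is invented.

import networkx as nx

G = nx.DiGraph()
# An edge A -> B with weight w means "journal A cites journal B w times" (invented counts)
G.add_weighted_edges_from([
    ("NTS", "JBL", 40), ("JBL", "NTS", 35), ("CBQ", "JBL", 25),
    ("NovTest", "NTS", 30), ("JBL", "NovTest", 20), ("CBQ", "NovTest", 10),
])
articles_published = {"NTS": 45, "JBL": 40, "NovTest": 35, "CBQ": 50}

prestige = nx.pagerank(G, weight="weight")  # network-wide prestige per journal
per_article = {j: prestige[j] / n for j, n in articles_published.items()}
for journal, score in sorted(per_article.items(), key=lambda pair: -pair[1]):
    print(f"{journal}: {score:.4f}")

Dividing by output is what makes the score size-independent: in principle a small journal can outrank a much larger one if its articles carry more weight in the network.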
Biblical studies journals have a pretty low level of prestige within the citational networks of the field, at least as these metrics calculate it. This suggests to me that the reason we see these journals as important is not based on how they are cited and used.
Some other thoughts before I wrap up this thread:
Though Scopus tries to track citation via social media, blogs, and other internet data sweeps, neither it nor other databases do even a reasonably comprehensive job of assessing “impact” via non-journal citation. These databases hide as much as they reveal.
Further, many open access platforms are not included in JCR and SCImago. For biblical studies, that includes @ancientjew, @JIBS_Journal, @AABNER15 and The Bible and Critical Theory, places where a great deal of important work is being done to rethink the field.
Based on this dive into citational databases, I have some thoughts and suggestions (though please correct me and add your thoughts to the conversation):
1) the systems used to track prestige show that our “major” journals are not as prestigious as major journals in other fields. We need not be beholden to them as gatekeepers.
2) the data shows that the conversations in these journals are moving forward at a glacial pace. The conversation is still centered in the 1990s.
3) much of what good scholarship includes cannot be measured by citation-based models, which cannot capture, for example, negative citations or the processes of evaluation before publication. Also, much of what is interesting and new is not included.
4) our evaluations of prestige are largely subjective, based on what our advisors learned from their advisors and so on. Most of us probably can’t name the board members, editors, or procedures of the major journals, meaning that the processes and the actors remain largely opaque.
Secrecy and the work that is done to maintain it, as Hugh Urban’s work has shown us, are often part of how authority is constructed within religious groups. The lack of transparency by journals is likely part of how they have maintained prestige.
5) because our evaluations of prestige are largely subjective, we can and should create new norms and values to suit a changing field. If journals like NTS cannot be transparent or demonstrate clear plans to change, (certain) scholars should take their publications elsewhere.
Our notions of prestige, because they are rooted in “what we’ve always done,” thus maintain a white, male-centered conception of what counts as scholarship at the center of the pipeline to publication.
Because the current regime of prestige creates this problem, and in order to avoid harming early-career scholars, established scholars should lead the way:
1) by shifting their publication plans; 2) by withholding their labor from journals; 3) by writing new evaluation metrics within their departments for tenure, promotion, and hiring; 4) by building new platforms.
Journals also have a role to play. 1) make submission data public and collect more of it. Scholars, usually from historically marginalized groups, should not have to hunt through back catalogues to get statistics; 2) rethink peer review. Double blind does not equal objective.
3) attend to how we pick reviewers, both in peer review and in formal book reviews. Many reviewers treat review as a gauntlet to force scholarship through rather than seeing themselves as shepherds of new work. Boards need to assess their reviewers.
Since we decide on which journals are prestigious, we can demand progress on these and other metrics in order to accord prestige. We can also withhold our scholarly labor from journals that refuse to change.
Finally, it is worth reflecting on what citation analysts call “coercive citation,” a practice in which citation of certain journals is coerced, either to skew the data on a particular publication or to enforce knowledge cliques.
Citation algorithms look for this in their databases, but we as a guild do this too.
Journal editors, reviewers, advisors, and mentors all need to think about what sort of coercive citation practices they may be unwittingly enforcing when they advise, review, or evaluate work.
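At the database level, the simplest version of such screening is a flag for journals whose incoming citations are unusually concentrated, for instance in self-citations. A toy sketch in Python (the threshold and the numbers are invented; the actual screening done by JCR and Scopus is more involved):

# Toy screen for suspicious citation concentration (invented data and threshold)
incoming_citations = {
    # journal: {citing journal: count}
    "Journal A": {"Journal A": 120, "Journal B": 30, "Journal C": 10},
    "Journal B": {"Journal A": 25, "Journal C": 40, "Journal D": 35},
}

SELF_CITATION_THRESHOLD = 0.5  # flag if more than half of a journal's citations come from itself

for journal, sources in incoming_citations.items():
    total = sum(sources.values())
    self_share = sources.get(journal, 0) / total
    status = "flag for review" if self_share > SELF_CITATION_THRESHOLD else "ok"
    print(f"{journal}: {self_share:.0%} self-citation ({status})")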
dear editors: if you need labor to help diversify your table of contents, I'm willing and available.
dear library friends (like @ToMakeItFall): any advice on how to gather better data and interpret it?
