LLMs like ChatGPT may mean trouble for SEO and online ads, fine. But maybe they will finally make academia stop using simple productivity/popularity metrics like citation or publication counts as measures of research quality.
These metrics "work" only because writing and publishing an article carries real cost and effort (finding an idea, doing the research, writing it up, etc.). If the cost of writing drops to almost zero while the incentive to publish stays this high, the metric gets inflated and stops meaning anything.
Which will essentially be a good thing. We will be forced to find better proxies for quality. But they will also have to be better than asking PeerReviewGPT to do the research assessment for us... #AcademicTwitter #scientometrics
• • •
Fighting misleading content will not be the only challenge for academia in the post-ChatGPT era. It has suddenly become easy to run academic paper mills at scale, set up credible-looking scam journals, or even run money-laundering schemes. Can we imagine a systemic way out of it? 🧵
If you’ve never worked in academia, you’ve probably never heard that academic publishing is dominated by huge, very profitable companies that use the “publish-or-perish” pressure put on scientists to earn big money (the 30%-profit-margin type of money).
How come? Scientists are required to publish articles in academic journals and to refer to other people’s work. Articles are reviewed by experts – their colleagues, employed at other scientific institutions – in a form of brief fact-checking called peer review.
Today I asked ChatGPT about the topic I wrote my PhD on. It produced reasonable-sounding explanations and plausible-looking citations. So far so good – until I fact-checked the citations. And things got spooky when I asked about a physical phenomenon that doesn’t exist.
I wrote my thesis on multiferroics, and I was curious whether ChatGPT could serve as a tool for scientific writing. So I asked it to provide a shortlist of citations related to the topic. ChatGPT refused to openly give me citation suggestions, so I had to use a “pretend” trick.
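(Aside: fact-checking LLM-suggested citations can be partly automated. A minimal sketch, assuming the suggestions come with DOIs: a syntactic screen that flags strings that cannot be valid DOIs at all. The function name and the example DOIs are hypothetical; a well-formed DOI would still need an online lookup, e.g. against Crossref, to confirm it resolves to the claimed paper.)

```python
import re

# A DOI always starts with "10.", a 4-9 digit registrant prefix, a slash,
# and a suffix. Anything failing this pattern is certainly not a real DOI;
# passing it only means "plausible", not "verified".
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def looks_like_real_doi(doi: str) -> bool:
    """Crude offline screen for LLM-suggested citations (hypothetical helper)."""
    return bool(DOI_PATTERN.match(doi.strip()))

# Hypothetical LLM output: one plausibly formatted DOI, one obvious fake.
suggested = ["10.1103/PhysRevLett.95.057205", "not-a-doi"]
for doi in suggested:
    print(doi, looks_like_real_doi(doi))
```

This only catches the laziest fabrications; an LLM can hallucinate a syntactically perfect DOI, so the online resolution step is the one that actually matters.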
When asked about its selection criteria, it gave a generic, non-DORA-compliant answer. I asked about the criteria a few times, and it pretty much always gave some version of “number-of-citations-is-the-best-metric”. Sometimes it would refer to a “prestigious journal”.