Idea: part of why ChatGPT seems so appealing as a substitute for a search engine is that most of us don't know a good method for knowledge management. Nobody taught us how to build a second brain or make smart notes, so we just keep everything in our heads and inevitably struggle
ChatGPT creates the illusion of a knowledge base that can be queried in natural language, so that we wouldn't have to remember anything anymore. But as I read more about #Zettelkasten and other methods, it seems each of us could've had such a KB even w/o a computer
Imagine that early in school you'd learn how to create your own system of interconnected notes containing the material you learn. You'd then use it as a seed for your second brain, which you keep building throughout your life and filling with personal and professional knowledge.
Imagine that making smart notes, organizing and updating them, and re-using them later for all kinds of intellectual work would be as natural to you as brushing your teeth. Think of all the knowledge you have learned and forgotten w/o such a system. How much further would you be?
And it's a low-tech solution. Niklas Luhmann did it with pen, paper and a slip box, not even a computer. He didn't need to rely on a proprietary AI with an unspecified training set. So just with simple digital tools, each of us could be a super-powered version of ourselves.
So it seems like we should start paying attention to knowledge literacy as another crucial personal skill. And teaching and learning simple knowledge management methods could become a way to counteract overreliance on ChatGPT and similar tools.
• • •
Fighting misleading content won't be the only challenge for academia in the post-ChatGPT era. It has suddenly become easy to run academic paper mills at scale, set up credible-looking scam journals, or even operate money laundering schemes. Can we imagine a systemic way out of it?🧵
If you’ve never worked in academia, you’ve probably never heard that academic publishing is dominated by huge, very profitable companies that exploit the “publish-or-perish” pressure put on scientists to earn big money (the 30%-profit-margin type of money).
How come? Scientists are required to publish articles in academic journals and to cite other people’s work. Articles are reviewed by experts – colleagues employed at other scientific institutions – in a form of brief fact-checking called peer review.
Today I asked ChatGPT about the topic I wrote my PhD thesis on. It produced reasonable-sounding explanations and plausible-looking citations. So far so good – until I fact-checked the citations. And things got spooky when I asked about a physical phenomenon that doesn’t exist.
I wrote my thesis about multiferroics, and I was curious whether ChatGPT could serve as a tool for scientific writing. So I asked it to provide a shortlist of citations relating to the topic. ChatGPT refused to openly give me citation suggestions, so I had to use a “pretend” trick.
When asked about its selection criteria, it gave a generic answer not compliant with DORA (the Declaration on Research Assessment, which discourages judging work by citation counts and journal prestige). I asked about the criteria a few times and it pretty much always gave some version of “number-of-citations-is-the-best-metric”. Sometimes it would refer to a “prestigious journal”.