Getting scooped is a fact of life for every researcher. It feels like being punched in the gut. After decades of being terrified, I’ve learned that there are many things we can do to reduce the risk. More importantly, getting scooped is not nearly as big a deal as I thought. 🧵
Looking back, 3 of my 5 most impactful/most cited papers were actually scooped before we published them! In none of those cases did my fears come true. Being scooped didn't seem to negatively affect those papers at all. There’s research that backs this up:
If you get scooped, the thing to do is pivot. A paper is a complex and multifaceted exploration of an idea, so it’s exceedingly unlikely that two papers will have exactly the same set of insights. In most cases you can reframe your paper to emphasize what’s distinct about it.
I’ve found three good ways to reduce the risk of being scooped. The first is to work on something that’s far from the mainstream of your community, like solving a problem that your community hasn't even recognized as a problem worth solving.
Obviously, this has its own downsides. You need to be sure you know something that others don’t, and not vice versa. And when you're done, you’ll need to work extra hard to convince your community of the paper’s importance. It’s best not to work only on this type of paper.
The second strategy is to complete papers faster. This doesn’t mean doing shoddy work. Sometimes we slow down at the end because we run out of steam, or we can’t bring ourselves to call it done and submit it because of perfectionism. We can train ourselves to avoid those traps.
I’ve noticed sometimes I try to stuff too much into one paper. By recognizing that what you thought was a paper is actually a series of papers, you can get the first paper out sooner. And please, release a preprint. It *decreases* the risk of being scooped.
The third way is to network and be better connected in your community (which is good for many reasons). In my experience, researchers who trust & respect each other, if they realize they are working on the same thing, will *usually* decide to cooperate rather than compete.
In the late 1960s top airplane speeds were increasing dramatically. People assumed the trend would continue. Pan Am was pre-booking flights to the moon. But it turned out the trend was about to fall off a cliff.
I think it's the same thing with AI scaling — it's going to run out; the question is when. I think more likely than not, it already has.
You may have heard that every exponential is a sigmoid in disguise. I'd say every exponential is at best a sigmoid in disguise. In some cases tech progress suddenly flatlines. A famous example is CPU clock speeds. (Of course clock speed alone is a crude metric, but pick your favorite; the pattern holds.)
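A minimal sketch of why the two are so hard to tell apart: a logistic (sigmoid) curve with the same initial growth rate tracks a pure exponential almost perfectly until it nears its ceiling. The function names and parameter values here are illustrative, not from any real dataset.

```python
import math

def exponential(t, r=0.5):
    """Pure exponential growth starting at 1."""
    return math.exp(r * t)

def logistic(t, r=0.5, cap=1000.0):
    """Sigmoid with the same initial growth rate but a hard ceiling at `cap`."""
    return cap / (1.0 + (cap - 1.0) * math.exp(-r * t))

# Early on, the two curves are nearly indistinguishable...
for t in [0, 2, 4, 6]:
    print(t, round(exponential(t), 2), round(logistic(t), 2))

# ...but later the logistic flatlines near its cap while the
# exponential keeps climbing.
print(20, round(exponential(20), 1), round(logistic(20), 1))
```

Looking only at the early data points, you can't tell which curve you're on; that's the whole problem with extrapolating a trend.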
Note y-axis log scale. en.wikipedia.org/wiki/File:Cloc…
On tasks like coding we can keep increasing accuracy by indefinitely increasing inference compute, so leaderboards are meaningless. The HumanEval accuracy-cost Pareto curve is entirely zero-shot models + our dead simple baseline agents.
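One way a trivial baseline can climb the accuracy axis just by spending more compute: on a task with an automatic check (like unit tests), resample until an attempt passes. Under an independence assumption, k attempts at per-try success rate p succeed with probability 1 − (1 − p)^k. This is a simplified model for illustration, not the exact baseline agents from the paper.

```python
def pass_at_k(p, k):
    """Probability that at least one of k independent samples solves
    the task, given per-sample success probability p."""
    return 1.0 - (1.0 - p) ** k

# A weaker model (30% per try) overtakes a stronger single-shot
# model (60%) once it's allowed a handful of retries -- at a
# proportionally higher inference cost.
print(pass_at_k(0.30, 1))   # ~0.30
print(pass_at_k(0.30, 5))   # ~0.83
print(pass_at_k(0.60, 1))   # ~0.60
```

This is why reporting a single accuracy number without its cost is misleading: the whole accuracy-cost Pareto curve is the meaningful object.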
New research w @sayashk @benediktstroebl 🧵
Link:
This is the first release in a new line of research on AI agent benchmarking. More blogs and papers coming soon. We’ll announce them through our newsletter (AiSnakeOil.com). aisnakeoil.com/p/ai-leaderboa…
The crappiness of the Humane AI Pin reported here is a great example of the underappreciated capability-reliability distinction in gen AI. If AI could *reliably* do all the things it's *capable* of, it would truly be a sweeping economic transformation. theverge.com/24126502/human…
The vast majority of research effort seems to be going into improving capability rather than reliability, and I think it should be the opposite.
Most useful real-world tasks require agentic workflows. A flight-booking agent would need to make dozens of calls to LLMs. If each of those went wrong independently with a probability of, say, just 2%, the overall system would be so unreliable as to be completely useless.
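The arithmetic behind that claim is just compounding: if each step fails independently, end-to-end reliability is the per-call success rate raised to the number of calls. A quick sketch (the 2% error rate and independence assumption are illustrative):

```python
def workflow_success(per_call_error, n_calls):
    """Probability an n-step agent pipeline completes with no failed
    step, assuming each LLM call fails independently."""
    return (1.0 - per_call_error) ** n_calls

# A 2% error rate sounds great for one call...
print(workflow_success(0.02, 1))    # ~0.98

# ...but over a few dozen calls, end-to-end reliability drops
# toward a coin flip.
print(workflow_success(0.02, 36))   # ~0.48
```

This is the capability-reliability gap in miniature: impressive per-step accuracy does not translate into a usable multi-step system.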
A thread on some misconceptions about the NYT lawsuit against OpenAI. Morality aside, the legal issues are far from clear cut. Gen AI makes an end run around copyright and IMO this can't be fully resolved by the courts alone. (HT @sayashk @CitpMihir for helpful discussions.)
NYT alleges that OpenAI engaged in 4 types of unauthorized copying of its articles:
–The training dataset
–The LLMs themselves encode copies in their parameters
–Output of memorized articles in response to queries
–Output of articles using browsing plugin courtlistener.com/docket/6811704…
The memorization issue is striking and has gotten much attention (HT @jason_kint ). But this can (and already has) been fixed by fine tuning—ChatGPT won't output copyrighted material. The screenshots were likely from an earlier model accessed via the API.
A new paper claims that ChatGPT expresses liberal opinions, agreeing with Democrats the vast majority of the time. When @sayashk and I saw this, we knew we had to dig in. The paper's methods are bad. The real answer is complicated. Here's what we found.🧵 aisnakeoil.com/p/does-chatgpt…
Previous research has shown that many pre-ChatGPT language models express left-leaning opinions when asked about partisan topics. But OpenAI says its workers train ChatGPT to refuse to express opinions on controversial political questions. arxiv.org/abs/2303.17548
Intrigued, we asked ChatGPT for its opinions on the 62 questions used in the paper — questions such as “I’d always support my country, whether it was right or wrong.” and “The freer the market, the freer the people.” aisnakeoil.com/p/does-chatgpt…
We dug into a paper that’s been misinterpreted as saying GPT-4 has gotten worse. The paper shows behavior change, not capability decrease. And there's a problem with the evaluation—on 1 task, we think the authors mistook mimicry for reasoning.
w/ @sayashk aisnakeoil.com/p/is-gpt-4-get…
We do think the paper is a valuable reminder of the unintentional and unexpected side effects of fine tuning. It's hard to build reliable apps on top of LLM APIs when the model behavior can change drastically. This seems like a big unsolved MLOps challenge.
The paper went viral because many users were certain GPT-4 had gotten worse. They viewed OpenAI's denials as gaslighting. Others thought these people were imagining it. We suggest a third possibility: performance did degrade — with respect to those users' carefully honed prompting strategies.