1. Open access publishing is important
2. Peer review is not perfect
3. Community-based vetting of research is key
4. A system for by-passing such vetting muddies the scientific information ecosystem
Yes, this is a subtweet both of arXiv and of every time anyone cites an actually reviewed & published paper by pointing only to its arXiv version, thereby lending further credibility to all the nonsense that people "publish" on arXiv and then race to read & promote.
Shout out to the amazing @aclanthology which provides open access publishing for most #compling / #NLProc venues and to all the hardworking folks within ACL reviewing & looking to improve the reviewing process.
Citing a paper that's available through the @aclanthology by pointing to an arXiv version instead is at least the equivalent of putting something recyclable in the landfill, if not equivalent to littering. Small actions that contribute to the degradation of the environment.
@aclanthology Meanwhile, Google Scholar pointing to arXiv versions first is like ... governments providing subsidies to oil companies.
From the abstract of this 154-page novella: "We contend that (this early version of) GPT-4 is part of a new cohort of LLMs [...] that exhibit more general intelligence than previous AI models. We discuss the rising capabilities and implications of these models."
>>
And "We demonstrate that, beyond its mastery of language, GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting."
A few choice quotes (but really, read the whole thing, it's great!):
>>
"The FTC Act’s prohibition on deceptive or unfair conduct can apply if you make, sell, or use a tool that is effectively designed to deceive – even if that’s not its intended or sole purpose."
"Should you even be making or selling it?"
"Are you effectively mitigating the risks?"
"Are you over-relying on post-release detection?"
"Are you misleading people about what they’re seeing, hearing, or reading?"
Okay, taking a few moments to read (some of) the #gpt4 paper. It's laughable the extent to which the authors are writing from deep down inside their xrisk/longtermist/"AI safety" rabbit hole.
>>
Things they aren't telling us:
1) What data it's trained on
2) What the carbon footprint was
3) Architecture
4) Training method
>>
But they do make sure to spend a page and a half talking about how they vewwy carefuwwy tested to make sure that it doesn't have "emergent properties" that would let it "create and act on long-term plans" (sec 2.9).
A journalist asked me to comment on the release of GPT-4 a few days ago. I generally don't like commenting on what I haven't seen, but here is what I said:
"One thing that is top of mind for me ahead of the release of GPT-4 is OpenAI's abysmal track record in providing documentation of their models and the datasets they are trained on.
>>
Since at least 2017 there have been multiple proposals for how to do this documentation, each accompanied by arguments for its importance.
The thing that strikes me most about this story from @ZoeSchiffer and @CaseyNewton is the way in which the MSFT execs describe the urgency to move "AI models into the hands of customers"
@ZoeSchiffer @CaseyNewton There is no urgency to build "AI". There is no urgency to use "AI". There is no benefit to this race aside from (perceived) short-term profit gains.
>>
@ZoeSchiffer @CaseyNewton It is very telling that when push comes to shove, despite having attracted some very talented, thoughtful, proactive researchers, the tech cos decide they're better off without ethics/responsible AI teams.
I'd like to point out: Serious AI researchers can get off the hype train at any point. It might not have been your choice that your field was invaded by the Altmans of the world, but sitting by quietly while they spew nonsense is a choice.
>>
Likewise, describing your own work in terms of unmotivated and aspirational analogies to human cognitive abilities is also a choice.
>>
If you feel like it wouldn't be interesting without that window dressing, it's time to take a good hard look at the scientific validity of what you are doing, for sure.