google bard (@deepfates)
Mar 21, 2023 · 17 tweets
Serious moment: @OpenAI has decided to shut off access to code-davinci-002, the most advanced model that doesn't have the mode collapse problems of instruction tuning.

This is a huge blow to the cyborgism community
@OpenAI The instruct-tuned models are fine for people who need a chatbot. But there are so many other possibilities in the latent space, and to use them we need the actual distribution of the data
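A toy way to see what "mode collapse" means in numbers (the distributions below are invented for illustration, not measured from any real model): instruction tuning tends to pile next-token probability onto a few "preferred" continuations, which shows up as a drop in entropy compared to the base model's broader distribution.

```python
import math

def entropy_bits(dist):
    """Shannon entropy (bits) of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

# Hypothetical next-token distributions over five candidate tokens.
# A base model spreads mass across many plausible continuations;
# a heavily tuned model concentrates it on one "safe" continuation.
base_model     = [0.30, 0.25, 0.20, 0.15, 0.10]
instruct_model = [0.90, 0.05, 0.03, 0.01, 0.01]

print(f"base entropy:     {entropy_bits(base_model):.2f} bits")
print(f"instruct entropy: {entropy_bits(instruct_model):.2f} bits")
```

Lower entropy at every step compounds over a long completion, which is why tuned models can feel samey even at high temperature.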
@OpenAI They're giving 2 days' notice for this shutdown.

All the cyborgist research projects are interrupted. All the looms will be trapped in time, like spiderwebs in amber.
The reasoning behind this snub wasn't given, but we can make some guesses:
- They need the GPU power to run GPT-4
- They don't want to keep supporting it for free (but people will pay!)
- They're worried about competitors distilling from it (happening with instruct models tho!)
- They don't want people using the base models (for what reason? safety? building apps? knowing what they're capable of?)
- They have an agreement with Microsoft/Github, who don't want competition for Copilot (which supposedly upgraded to a NEW CODEX MODEL in February)
I have supported OpenAI's safety concerns. I have argued against the concept that they're "pulling the ladder up behind them", and I take alignment seriously. But this is just insane.

Giving the entire research community 2 days of warning is an insult. And it will not be ignored
The LLaMa base model has already leaked. People are building rapidly on top of it. Decisions like this are going to make people trust @OpenAI even less, and drive development of fully transparent datasets and models.

They're cutting their own throat with the ladder
@OpenAI Why text-davinci models are actually worse:

@OpenAI "The most important publicly available language model in existence" -- JANUS

@OpenAI What do you think @sama ? Ready to win the hearts of humanity or na

@OpenAI @sama "All the papers in the field over the last year or so produced results using code-davinci-002. Thus all results in that field are now much harder to reproduce!"

@OpenAI Without base models, no one can do research on how RLHF and fine-tuning actually affect model capabilities.

Is this the point?
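One concrete version of that research question, sketched under assumptions (the token probabilities here are invented; in practice you would collect them from each model's logprobs on matched prompts): quantify how far a tuned model's next-token distribution drifts from the base model's with a KL divergence. Without a served base model, there is no reference distribution to compare against.

```python
import math

def kl_bits(p, q):
    """KL(P || Q) in bits: how much distribution P (tuned model)
    has drifted from distribution Q (base model) over the same tokens."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Invented next-token probabilities over the same four candidate tokens,
# standing in for logprobs gathered from a base and a tuned model.
base  = [0.40, 0.30, 0.20, 0.10]
tuned = [0.85, 0.10, 0.04, 0.01]

print(f"KL(tuned || base) = {kl_bits(tuned, base):.2f} bits")
print(f"KL(base || base)  = {kl_bits(base, base):.2f} bits")  # zero: no drift
```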
