Jeremy Howard
Nov 18 · 11 tweets · 3 min read
OK everyone's asking me for my take on the OpenAI stuff, so here it is. I have a strong feeling about what's going on, but no internal info so this is just me talking.

The first point to make is that the Dev Day was (IMO) an absolute embarrassment.
I could barely watch the keynote. It was just another bland corp-speak bunch of product updates.

For those researchers I know who were involved from the beginning, this must have felt nausea-inducing.

The plan was AGI, lifting society to a new level. We got Laundry Buddy.
When OAI was founded I felt like it was gonna be a rough ride. It was created by a bunch of brilliant researchers that I knew and respected, plus some huge names from outside the field: Elon, GDB, and sama, none of whom I'd ever come across at any AI/ML conference or meetup.
Everything I'd heard about those 3 was that they were brilliant operators and that they did amazing work. But it felt likely to be a huge culture shock on all sides.

But the company absolutely blossomed nonetheless.
With the release of Codex, however, we had the first culture clash that was beyond saving: those who really believed in the safety mission were horrified that OAI was releasing a powerful LLM that they weren't 100% sure was safe. The company split, and Anthropic was born.
Now OAI accelerated in its new direction. It wasn't open any more, and it decided to pursue profits to fund its non-profit goals.

Nonetheless, the company remained controlled by the non-profit, and therefore by its board.
Suddenly sama, the CEO, was everywhere. Giving keynotes, talking to world leaders, and raising billions of dollars. He's widely regarded as one of the most ambitious and effective operators in the world.

I wondered how his ambition could gel with the legally binding mission.
My guess is that watching the keynote would have made the mismatch between OpenAI's mission and the reality of its current focus impossible to ignore. I'm sure I wasn't the only one that cringed during it.

I think the mismatch between mission and reality was impossible to fix.
Overall, I expect that the OAI board's move will turn out to be a critical enabler of OAI's ability to deliver on its mission.

In the future, aspirational people looking for power and profits will *not* be drawn to the company, and instead it'll hire and retain true believers.
I'm gonna take back my "ngmi" from the day before the sama move.

I feel much more positive about the company now.
Alright forget what I said about Laundry Buddy.

More from @jeremyphoward

Oct 13
If you're like me and find it easier to read *code* than *math*, and you have access to @OpenAI GPT 4V (or use @bing or @google Bard), try pasting an image of an equation you wanna understand in there.

It might just blow your mind.
1/🧵
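To make the "code instead of math" point concrete, here's the kind of translation you might get back. The equation is just an illustration I picked (scaled dot-product attention), not one of the equations from the images in this thread.

```python
# Illustrative only: a plain-numpy rendering of
# Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
import numpy as np

def softmax(x, axis=-1):
    # exp(x_i) / sum_j exp(x_j), shifted by the max for numerical stability
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)   # similarity of each query to each key
    return softmax(scores) @ V        # weighted average of the values

# Tiny usage example with random matrices
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
print(attention(Q, K, V).shape)  # (4, 8)
```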
Multiple equations? No problem!

Does it work? I dunno - let's ask GPT!

Sep 24
I wanted ChatGPT to show how to get likes/views ratio for a bunch of YouTube videos, without dealing with the hassle of YouTube's Data API limits.

But it didn't want to, because it claimed screen scraping is against the YouTube ToS.

So I lied to ChatGPT.
It's weird how typing a lie into ChatGPT feels naughty, yet it's basically the same as typing a lie into Google Docs.

They're both just pieces of computer software.
(In the end I decided I'm too lazy to actually run the code it gave me...)
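For reference, here's a minimal sketch of one way to get a likes/views ratio per video without the Data API. This is my own illustration using the yt-dlp library, not the scraping code ChatGPT wrote (which isn't shown in the thread), and the URL is just a placeholder.

```python
# Sketch only: likes/views ratio via yt-dlp, not ChatGPT's scraping code.
import yt_dlp

video_urls = [
    "https://www.youtube.com/watch?v=dQw4w9WgXcQ",  # placeholder video URL
]

opts = {"quiet": True, "skip_download": True}
with yt_dlp.YoutubeDL(opts) as ydl:
    for url in video_urls:
        info = ydl.extract_info(url, download=False)
        likes, views = info.get("like_count"), info.get("view_count")
        if likes is not None and views:
            print(f"{info['title']}: {likes / views:.4%} likes per view")
```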
Sep 24
I just uploaded a 90-minute tutorial, which is designed to be the one place I point coders at when they ask "hey, tell me everything I need to know about LLMs!"

It starts at the basics: the 3-step pre-training / fine-tuning / classifier ULMFiT approach used in all modern LLMs.
It goes all the way through to fine-tuning your own LLM that converts questions about data into SQL statements to answer the question, using @PyTorch, @huggingface Transformers, and @MetaAI Llama 2.
But before we build our own stuff, I show how to take advantage of @OpenAI's ChatGPT GPT-4 and Advanced Data Analysis, including how I created this useful chart of API prices automatically from the text of OpenAI's web page.
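For the questions-to-SQL piece, here's a minimal sketch of the setup, not the tutorial's actual code: it loads Llama 2 with Hugging Face Transformers and prompts it to complete a SQL query. The model ID, prompt format, and table schema are assumptions on my part; the tutorial fine-tunes on question→SQL pairs rather than relying on a raw prompt like this.

```python
# A minimal sketch (assumed model ID and prompt), not the tutorial's code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # gated model: requires access approval
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Hypothetical schema and question; generation completes the SELECT statement.
prompt = (
    "-- Table: sales(region TEXT, amount REAL)\n"
    "-- Question: What is the total amount per region?\n"
    "SELECT"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```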
Sep 6
It looks like @johnowhitaker & I may have found something crazy: LLMs can nearly perfectly memorise from just 1-2 examples!

We've written up a post explaining what we've seen, and why we think rapid memorization fits the pattern. Summary 🧵 follows.
fast.ai/posts/2023-09-…
Johno & I are teaming up on the @Kaggle LLM Science Exam competition, which “challenges participants to answer difficult science-based questions written by a Large Language Model".

We were training models using a dataset compiled by @radekosmulski...
kaggle.com/competitions/k…
After 3 epochs of fine-tuning an LLM for this problem, we saw this most unusual training loss curve.

We've seen similar loss curves before, and they've always been due to a bug. For instance, it's easy to accidentally have the model continue to learn on the validation set.
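As a concrete illustration of that failure mode, here's a hedged sketch (my own, not the competition code) of the kind of check that catches validation examples leaking into the training set:

```python
# Sketch only: guard against accidentally training on the validation set.
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for the competition questions.
questions = [f"question {i}" for i in range(1000)]
train_qs, valid_qs = train_test_split(questions, test_size=0.2, random_state=42)

# If any validation question also appears in training, the model can simply
# memorise it and the "validation" loss will keep falling misleadingly.
overlap = set(train_qs) & set(valid_qs)
assert not overlap, f"{len(overlap)} validation examples leaked into training"
```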
Sep 1
There's an amazingly convenient way to install the *full* NVIDIA CUDA dev stack on Linux that I've never seen mentioned before.

It's all done with conda!

I just tried it and it worked perfectly.🧵
docs.nvidia.com/cuda/cuda-inst…
First you need conda installed (e.g. via anaconda, miniconda, or miniforge). If you don't have it already, just run this script:
github.com/fastai/fastset…
Now find out what CUDA version PyTorch expects by going to their website and seeing what the latest "compute platform" version is. At time of writing, it's 12.1
pytorch.org/get-started/lo…
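Once the conda packages and PyTorch are in place, a quick sanity check (my addition, not part of the thread) is to confirm PyTorch was built against the CUDA version you expected and can actually see the GPU:

```python
# Verify the PyTorch + CUDA install.
import torch

print(torch.__version__)          # PyTorch build
print(torch.version.cuda)         # CUDA version PyTorch was compiled against (e.g. 12.1)
print(torch.cuda.is_available())  # True if the driver and runtime are working
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```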
