Itamar Golan
Mar 30 · 5 tweets · 2 min read
The LLM tip of the day #1-

I have discovered a small yet crucial tip for producing superior code with GPT-4, which can help you increase productivity, deliver faster results, and enhance accuracy.

#LLMTipOfTheDay

>>>Thread>>>
When you ask an LLM, say GPT-4, to write code, it is fair to say (though I am simplifying a bit) that its output converges toward the expected quality level of its training data.

What do I mean?

>>>
It was trained on code from less skilled programmers (many) as much as on code from rockstar programmers (few). So by default it generates code that is roughly an average of what it has learned.

But you want highly skilled code, right? So just ask for it!
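
To make that intuition slightly more precise (my sketch, not part of the original thread), think of it in terms of the conditional distribution the model learned:

```latex
% Default: samples land near the average of the training distribution
p_\theta(\text{code} \mid \text{prompt}) \approx p_{\text{data}}(\text{code} \mid \text{prompt})
% With an expert persona: conditioning shifts samples toward the
% higher-quality slice of the training data
p_\theta(\text{code} \mid \text{prompt}, \text{persona}) \approx p_{\text{data}}(\text{code} \mid \text{expert context})
```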

>>>
In this example, I ask for it in the System section, framing the model as Jeff Dean, a world-class expert in coding and algorithm design at Google.

Surprisingly (or not), in roughly 75% of the examples I checked, the code was much better!
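
The prompt screenshot didn't survive the unroll, so here is a minimal sketch of the trick, assuming the OpenAI Python client (openai >= 1.0); the system-prompt wording and the user task below are my illustrative stand-ins, not the author's exact prompt.

```python
# Minimal sketch of the persona trick (assumes the OpenAI Python client,
# openai >= 1.0, and an OPENAI_API_KEY in the environment). The prompt
# wording is illustrative, not the author's exact text.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        # The persona goes in the System section:
        {
            "role": "system",
            "content": (
                "You are Jeff Dean, a world-class expert in coding and "
                "algorithm design at Google. Write clean, efficient, "
                "production-quality code."
            ),
        },
        # The actual coding task goes in the user message:
        {
            "role": "user",
            "content": "Implement an LRU cache with O(1) get and put in Python.",
        },
    ],
)
print(response.choices[0].message.content)
```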

>>>
Obviously, this paradigm can be utilized in other domains, from poetry to product marketing.

More from @ItakGol

Mar 31
Introducing HuggingGPT🔥🚀

HuggingGPT is a collaborative system that consists of an LLM as the controller and numerous expert models as collaborative executors (from HuggingFace Hub).

github.com/microsoft/JARV…

The workflow of HuggingGPT consists of four stages:
>>>
1/
Task Planning: ChatGPT analyzes the user's request to understand the intention and decomposes it into solvable sub-tasks.
2/
Model Selection: Based on the sub-tasks, ChatGPT invokes the corresponding expert models hosted on HuggingFace.
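
A conceptual sketch of that control flow (my pseudocode, not the actual JARVIS implementation; the last two stage names come from the HuggingGPT paper, and every helper below is hypothetical):

```python
# Conceptual sketch of the four-stage HuggingGPT loop (my pseudocode, not
# the actual JARVIS code; all helper names here are hypothetical).
def hugginggpt(user_request: str, llm, hub) -> str:
    # Stage 1 - Task planning: the controller LLM parses the request
    # and decomposes it into solvable sub-tasks.
    sub_tasks = llm.plan_tasks(user_request)

    results = {}
    for task in sub_tasks:
        # Stage 2 - Model selection: pick an expert model from the
        # HuggingFace Hub based on the sub-task description.
        expert = hub.select_model(task)
        # Stage 3 - Task execution: run the selected expert model.
        results[task] = expert.run(task)

    # Stage 4 - Response generation: the controller LLM integrates all
    # expert outputs into a single answer for the user.
    return llm.generate_response(user_request, results)
```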
Mar 30
Wondering how to create ChatGPT from GPT-3? 🤓

Reinforcement Learning from Human Feedback (RLHF)!

A complete guide (13 tweets) for RLHF.

Thread>>>
1/
It is unlikely that supervised learning is going to lead us to true artificial intelligence. Before Deep Learning, most reinforcement learning applications were impractical.
2/
Now, I would expect most human-like behavior of computer applications to be learned through reinforcement learning strategies. ChatGPT is not "better" than GPT-3, it is just more aligned with what humans expect in terms of conversational skills.
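
The preview cuts off before the mechanics, so here is a structural sketch of the standard three-stage RLHF recipe (my summary, with hypothetical helper objects, not the thread's own code):

```python
# Structural sketch of the standard three-stage RLHF recipe (my summary;
# the trainer object and its methods are hypothetical).
def rlhf(base_model, trainer, demonstrations, comparisons, prompts):
    # Stage 1 - Supervised fine-tuning (SFT) on human demonstrations.
    sft_model = trainer.supervised_fine_tune(base_model, demonstrations)

    # Stage 2 - Fit a reward model on human preference comparisons
    # (pairs of outputs labeled "this one is better").
    reward_model = trainer.fit_reward_model(sft_model, comparisons)

    # Stage 3 - Optimize the policy against the reward model with a
    # policy-gradient method (PPO in the InstructGPT/ChatGPT lineage).
    return trainer.ppo(sft_model, reward_model, prompts)
```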
Mar 28
Curious about how your life will change with ChatGPT's Browsing mode?

Check out this 12-tweet thread for early access insights.

Absolutely mind-blowing >> 🤯🤯🤯

1 / 12: Initializing...

2 / 12: "Apple in the news"
Mar 11
Introducing OpenChatKit 🚀 -
The first open-source alternative to ChatGPT!

A team of ex-OpenAI fellows at Together has released a 20B-parameter chat model, fine-tuned for chat from EleutherAI's GPT-NeoX-20B on over 43 million instructions, under the Apache-2.0 license.

>>>
This instruction-tuned large language model has been optimized for chat on 100% carbon-negative compute.

OpenChatKit includes four essential components:

>>>
- An instruction-tuned large language model, fine-tuned for chat using EleutherAI's GPT-NeoX-20B with over 43 million instructions on carbon-negative compute.
- Customization recipes that help fine-tune the model to deliver high-accuracy results for specific tasks.
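
A minimal sketch of trying the released model with the HuggingFace transformers library (my addition; the repo id and the <human>/<bot> turn format are my best-guess assumptions, so check the model card on the Hub):

```python
# Sketch of loading the OpenChatKit model with HuggingFace transformers.
# The repo id and the <human>/<bot> prompt format are assumptions; verify
# them against the model card on the HuggingFace Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "togethercomputer/GPT-NeoXT-Chat-Base-20B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "<human>: Summarize RLHF in one sentence.\n<bot>:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```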
Mar 2
Birthday Paradox Explained

The birthday paradox is a surprising, counterintuitive result in probability theory: even in a relatively small group, it is likely that two people share the same birthday.

1 / 9
The paradox is often misunderstood as being about the probability that someone in the group shares your specific birthday, but it is actually about the probability that any two people in the group share a birthday.

2 / 9
This means that even if nobody in the group matches your own birthday, the paradox still applies: some other pair is likely to match.

Here's an example: let's say you're in a room with 23 people, yourself included. What's the probability that at least two people in the room share the same birthday?

3 / 9
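
As a quick sanity check (my addition; the preview cuts off before the thread's own answer), the probability is easy to compute exactly under the usual assumption of 365 equally likely birthdays:

```python
# Probability that at least two of n people share a birthday, assuming
# 365 equally likely birthdays and ignoring leap years.
def p_shared_birthday(n: int) -> float:
    p_all_distinct = 1.0
    for k in range(n):
        p_all_distinct *= (365 - k) / 365  # k-th person avoids all earlier ones
    return 1.0 - p_all_distinct

print(f"{p_shared_birthday(23):.3f}")  # ~0.507: better than even at 23 people
```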
Feb 23
*** The History Behind ChatGPT ***

OpenAI's ChatGPT is a remarkable NLP model that has gotten a lot of attention, but it is important to note that the technology behind it has a rich history of research and development spanning several decades.

<1 / 14> THREAD
RNNs, first introduced in 1986 by David Rumelhart, form the foundation of it all. RNNs are specialized artificial neural networks designed to work with time-series or sequence data (paper: lnkd.in/d4jeAZnJ).

<2 / 14> THREAD
In 1997, Sepp Hochreiter and Jürgen Schmidhuber created LSTM networks, an RNN variant with a special memory cell that enables the network to retain information from past inputs over extended periods.

<3 / 14> THREAD
