Yann Leflour
Mar 21 • 16 tweets • 7 min read
🏁 On with Part 4 of my #gpt4 saga

Between pricing (I'm cheap as f😡k) and rate limiting, I'm now spending half my time writing

Truth is, I quite enjoy it so let's gooooo πŸ‘‡

1/15
So at least #gpt4's API is available for prompting.

But I don't like the playground

πŸ‘ΆπŸ» I need pairprog-webview, nextjs, tailwind to chat with your own API
πŸ€– Sure thing
πŸ‘΅πŸ» Wow, this API call looks quite outdated and doesn't work
πŸ€– ermmm...

How do I solve this? πŸ€”

2/15
Open up platform.openai.com/docs/api-refer…

Copy, paste into @euangoddard's ✨magnificent✨ "Paste as markdown" page, copy back, paste into #gpt4

πŸ‘ΆπŸ» Here's the updated doc
πŸ€– K boss
βœ‚οΈ > πŸ“‹ > ▢️

✨ It works ✨

3/15
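For reference, the fix boils down to targeting the current chat completions endpoint. Here's a minimal sketch of the call as a proxy route, assuming a Next.js API route (the file name and the `OPENAI_API_KEY` variable are just illustrative):

```ts
// pages/api/chat.ts — minimal sketch of the corrected call against
// /v1/chat/completions; error handling is deliberately elided.
import type { NextApiRequest, NextApiResponse } from "next";

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  const { messages } = req.body; // [{ role: "user", content: "..." }, ...]

  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({ model: "gpt-4", messages }),
  });

  const data = await response.json();
  res.status(200).json({ reply: data.choices[0].message.content });
}
```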
But the #chatgpt site's response streaming feature is cool, I want it tooooo 😭

Fortunately, there's an answered issue on the repo covering this

So I need to copy, paste as markdown, STOP βœ‹. It's getting tiring
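For the curious, consuming the stream ends up looking roughly like this. A sketch assuming `stream: true` and the SSE-style `data:` chunks the API sends back; the parsing is simplified (no buffering of partial lines) and the key is passed in as a parameter just for brevity:

```ts
// Sketch of reading the streamed completion token by token.
// Assumes "data: {...}" chunks terminated by "data: [DONE]".
async function streamChat(
  apiKey: string,
  messages: { role: string; content: string }[],
  onToken: (token: string) => void
) {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({ model: "gpt-4", messages, stream: true }),
  });

  const reader = response.body!.getReader();
  const decoder = new TextDecoder();

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    for (const line of decoder.decode(value).split("\n")) {
      const payload = line.replace(/^data: /, "").trim();
      if (!payload || payload === "[DONE]") continue;
      const token = JSON.parse(payload).choices[0]?.delta?.content;
      if (token) onToken(token); // append to the UI as tokens arrive
    }
  }
}
```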

Wait, wait, wait... Guess it's time for more rambling πŸ˜ƒ

4/15
One of the highest costs of training an LM is gathering and formatting the training data

But the web is currently made for humans, not πŸ€–

So while companies working on large ML have the resources to do so, at a consumer level, we don't

Couple of things could happen πŸ‘‡

5/15
1️⃣ The web starts to develop a "Prompt" presentation layer

Imagine if every site using @docusaurus had a "copy as prompt" button for each section/page

Now humans can fetch and feed updated docs into #chatgpt in a matter of seconds

Right @sebastienlorber πŸ˜‰ ?

6/15
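To make the idea concrete, here's a hypothetical sketch of such a button (Docusaurus is React-based, but nothing like this ships today; the component name and prompt wording are made up):

```tsx
// Hypothetical "Copy as prompt" button a docs site could render next
// to each section. This does not exist in Docusaurus today.
import React from "react";

export function CopyAsPrompt({ sectionId }: { sectionId: string }) {
  const handleClick = async () => {
    const section = document.getElementById(sectionId);
    if (!section) return;
    // Plain text plus a small preamble is close enough to "prompt-formatted".
    const prompt = `Here is the up-to-date documentation, use it to answer my next question:\n\n${section.innerText}`;
    await navigator.clipboard.writeText(prompt);
  };

  return <button onClick={handleClick}>Copy as prompt</button>;
}
```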
Your doc doesn't have it yet? I might not use your tool

Even better: a dedicated endpoint to fetch the doc as prompt-formatted text

With the current advent of COSS (Commercial Open Source Software), such accessibility features should become a given

7/15
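Again purely hypothetical, but such an endpoint could be as simple as serving the raw markdown with a prompt preamble (the route, file layout and preamble are all made up here):

```ts
// Hypothetical /api/prompt?page=... endpoint serving docs as prompt text.
// No framework ships this today; path handling is naive on purpose.
import { readFile } from "node:fs/promises";
import type { NextApiRequest, NextApiResponse } from "next";

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  const page = String(req.query.page); // e.g. "getting-started"
  const markdown = await readFile(`docs/${page}.md`, "utf8");

  res.setHeader("Content-Type", "text/markdown; charset=utf-8");
  res.send(
    `You are answering questions about the following documentation page:\n\n${markdown}`
  );
}
```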
But fetching is still tiresome, and searching is a time waster (more on this in another part probably)

So let's take a step back and get meta

What is my main tool to get answers right now?

Yep, it starts with "Chat" and ends with "GPT" πŸ˜—

8/15
2️⃣ LM to LM interfaces

What's better than a generic #gpt4 LM at answering a precise question about the latest release of a framework?

Another LM trained and updated on this framework's specificities.

What if #chatgpt could ask #chat-nextjs for specific information? 🀯

9/15
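Pure speculation, but in code the delegation could look something like this (the specialist endpoint, its response shape and the routing heuristic are all imaginary):

```ts
// Speculative sketch of a generalist LM delegating to a specialist LM.
// The https://chat-nextjs.example.com endpoint does not exist.
async function answer(question: string): Promise<string> {
  if (/next\.?js/i.test(question)) {
    // Hand the question to the (imaginary) framework-specific model,
    // retrained on every release and able to cite its sources.
    const res = await fetch("https://chat-nextjs.example.com/ask", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ question }),
    });
    const { answer: text, references } = await res.json();
    return `${text}\n\nSources: ${references.join(", ")}`;
  }
  // Otherwise fall back to the generic model, e.g. via the /api/chat
  // route sketched earlier in the thread.
  const res = await fetch("/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ messages: [{ role: "user", content: question }] }),
  });
  return (await res.json()).reply;
}
```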
This could be the advent of distributed LMs and a new version of the web

Responses will always be up to date, with higher quality, and likely to provide references (please do so πŸ™)

Want SEO (well PEO)? Be the place to find the answer to a specific question with a prompt

10/15
3️⃣ Knowledge becomes closed source behind prompts πŸ”’

Smaller LMs would be great, but we still have an issue with large LMs (think #gpt)

If large LMs can simply gobble up the knowledge you share openly and replace you, will you still be willing to share it? 🫒

11/15
We already started trading exploration for personalisation with web 2.0

Do you still go to YouTube hoping to discover new creators? If you lean left, do right-leaning news sites show up in your Google results?

Well, this is worse 😞

12/15
Think Google is a black box? At least we have a basic understanding of how it works and picks results

In an LLM web, you'll never know how you got the info

Actually, most don't care. Reacting to Reddit posts based solely on the headline is already a meme at this point

13/15
And it's already happening: @OpenCage had to put up a blog post to contradict #chatgpt about its own service. Its own service!

blog.opencagedata.com/post/dont-beli…

Can't fix the result, can't find why #chatgpt thinks this, yet thousands of people take it at face value

14/15
I could go on but this is getting way too convoluted for me

So back to being a simple-minded dev writing a VSCode extension

Yay πŸ₯³

15/15
Well, what did you expect?

Of course it's the end of part 4. This extension won't be writing itself while I tweet

If you ❀️ this thread, Like/Retweet the first tweet below if you can

You can also hit @yleflour & πŸ”” for more rambling...

... cause there's a lot more coming 😝
