Yann Leflour
Mar 23 • 11 tweets • 4 min read
🏁 And on with part 6

Let's take a turn and work on the Chrome extension cause I believe I'll be getting it out way before the VSCode one

I'm not waiting for the web to get promptful, so I'm taking matters into my own hands

1/11
As always, #gpt4 is setting up and creating everything for me admirably. Although I do see some questionable eslint, prettier, and tsconfig configs when it tries to provide them

Oh 💩!

2/11
I've been keeping the same chat window to keep as much context as possible. But because of this, I now have to deal with resets

I need to be able to get a new chat window up to speed as quickly as possible ⏱

3/11
Fortunately, I've been updating 2 documents along the way

🔸 .pairprog/system_prompt.md
🔸 .pairprog/project_prompt.md

Those help me get the system up to speed.

"But what's inside" you ask ?

4/11
1️⃣ The system prompt acts as the base definition of the AI

"You are pAIrprog, a pair programming companion"

"You can use `// COMMAND path/file.ext` in code blocks to provide me with instructions"

"I can use CODE_REVIEW with content"

"Do not apologize" as it's starting to anoy me πŸ™„

5/11
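For the curious, here's a minimal sketch of what a system_prompt.md along those lines could contain, pieced together from the fragments quoted above (the wording is illustrative, not the actual file):

```markdown
You are pAIrprog, a pair programming companion.

You can use `// COMMAND path/file.ext` in code blocks to provide me with instructions.

I can use CODE_REVIEW followed by file content to ask you for a review.

Do not apologize.
```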
My system prompt needs to be applied to every response in every project

It is the base configuration of pAIrprog's personality 🤖

If using the API, there is a dedicated field for this

In the webview, you need to remind it from time to time, which gets annoying fast

6/11
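To make the difference concrete, here's a minimal sketch of that dedicated field with the chat completions API, assuming Node 18+ (global fetch), an OPENAI_API_KEY environment variable, and the .pairprog file from above (the askPairprog helper name is just illustrative):

```typescript
// Minimal sketch: with the API, the system prompt goes into a dedicated
// "system" message, so it doesn't have to be repeated in the conversation.
import { readFile } from "node:fs/promises";

async function askPairprog(userMessage: string): Promise<string> {
  // pAIrprog's base personality, kept under version control
  const systemPrompt = await readFile(".pairprog/system_prompt.md", "utf8");

  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4",
      messages: [
        { role: "system", content: systemPrompt }, // the dedicated field
        { role: "user", content: userMessage },
      ],
    }),
  });

  const data = await response.json();
  return data.choices[0].message.content;
}
```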
2️⃣ Project prompt

My project is now composed of 6 submodules. I don't want each of them to use a different stack

So I defined a project prompt that gets appended to my system prompt: "use pnpm, typescript, ..." and my workspace list of modules with descriptions

Works 👌

7/11
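A rough sketch of what such a project_prompt.md could look like (module names and descriptions here are placeholders, not the real workspace):

```markdown
Always use pnpm, TypeScript, ESLint and Prettier.

The workspace is a pnpm monorepo with these modules:
- pairprog-webview: chat UI for talking to the OpenAI API
- pairprog-chrome: Chrome extension that copies web content into prompts
- ...
```

In the API sketch above, this file would simply be appended to the system prompt before it goes into the system message.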
🔒 So now I can boot up a new chat with a clean context in a matter of seconds to work as pAIrprog

With the other commands, I can add the remaining context and give #gpt4 a dir tree or such

But the base context is getting too big...

... too big & too pricey 🤑

8/11
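As an example of the kind of context dump that helps, here's a tiny hypothetical helper (not part of pAIrprog, just a sketch) that prints a directory tree to paste into the chat:

```typescript
// Hypothetical helper: dump a directory tree to paste into the chat as context.
import { readdirSync } from "node:fs";
import { join } from "node:path";

function dirTree(dir: string, indent = ""): string {
  return readdirSync(dir, { withFileTypes: true })
    .filter((entry) => entry.name !== "node_modules" && entry.name !== ".git")
    .map((entry) =>
      entry.isDirectory()
        ? `${indent}${entry.name}/\n${dirTree(join(dir, entry.name), indent + "  ")}`
        : `${indent}${entry.name}`
    )
    .join("\n");
}

console.log(dirTree("."));
```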
System prompt + project prompt = ~1,500 tokens according to the #gpt3 tokenizer

That's 1,500 tokens minimum PER API call (more on that later)

~5 cents per request on the 8K context
~9 cents on the 32K context

And this doesn't take into account #gpt4's completion cost, obviously... 😱

9/11
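The back-of-the-envelope math, assuming the published GPT-4 prompt-token prices at the time ($0.03 per 1K tokens on the 8K model, $0.06 per 1K on the 32K model):

```typescript
// Rough prompt-cost estimate for the ~1,500-token base context.
// Prices are the published GPT-4 prompt-token rates at the time of writing;
// completion tokens are billed separately and cost more.
const BASE_CONTEXT_TOKENS = 1_500;

const promptPricePer1KTokens: Record<string, number> = {
  "gpt-4 (8K context)": 0.03,
  "gpt-4-32k (32K context)": 0.06,
};

for (const [model, price] of Object.entries(promptPricePer1KTokens)) {
  const costPerCall = (BASE_CONTEXT_TOKENS / 1000) * price;
  console.log(`${model}: ~$${costPerCall.toFixed(3)} per call, before any completion`);
}
// gpt-4 (8K context):   ~$0.045 per call (~5 cents)
// gpt-4-32k (32K context): ~$0.090 per call (~9 cents)
```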
As of today, I now know that pricing will have an impact on the features of any #GPT4 project

So we may have to think about what a context truly is and how it should be tailored to use cases instead of being generic 🧐

More on that later on

10/11
End of today's 🧵

Thanks for reading. If you enjoyed it, you can subscribe, 🔔, and ❤️ that first tweet. That would help me tremendously

Updates are now dropping every weekday at 1PM Paris time / 8AM EDT so be on the lookout

✌️

11/11
