Exciting news! PromptPerfect now offers auto prompt engineering for #GPT4 and @LexicaArt. I'm really impressed by GPT4's complex reasoning and math problem-solving! Let's see some examples 🧵
Can we create a story with only three-letter words? Let's see how GPT4 responds. Without prompt optimization, the first sentence is already wrong: land, hill, and tiny are all four letters. After optimization, it is pretty good. This shows the effectiveness of PromptPerfect even on…
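The three-letter constraint is easy to check mechanically. A minimal sketch (the full story text isn't shown in the tweet, so the sentences below are illustrative, not GPT4's actual output):

```python
import re

def violating_words(text: str) -> list[str]:
    # Split on word characters and return every word whose length is not 3,
    # i.e. every word that breaks the "only three-letter words" constraint.
    words = re.findall(r"[A-Za-z']+", text)
    return [w for w in words if len(w.replace("'", "")) != 3]

# An unoptimized-style opening sentence fails the check:
violating_words("The sun met the sea and hill and tiny land")
# → ["hill", "tiny", "land"]

# A compliant sentence passes:
violating_words("The cat ran far")
# → []
```

A validator like this could also be looped back into the prompt as automatic feedback.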
Here is a simple math problem. Both before and after give the correct answer. With an optimized prompt, a disclaimer about the assumptions is added at the end. What surprises me is that the two answers are exactly the same apart from this disclaimer! How can that be possible for an LLM…
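One plausible explanation (an assumption on my part, since the decoding settings aren't shown): with temperature at or near 0, the model decodes greedily, so the shared prefix of the two prompts can produce token-for-token identical output. A toy sketch of why low temperature collapses sampling onto one token:

```python
import math

def softmax_with_temperature(logits: list[float], t: float) -> list[float]:
    # As t -> 0, the distribution concentrates on the argmax token,
    # making generation effectively deterministic for a fixed prompt.
    scaled = [x / t for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]

softmax_with_temperature([2.0, 1.0, 0.5], 0.1)
# The first probability is > 0.99: nearly all mass on the top token.
```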
Last but not least, we added a prompt optimizer for @LexicaArt. Stunning results as always! @sharifshameem, great work on Aperture2, and looking forward to 3!
Our prompt engineering algorithm has been further improved, making it more effective than ever before. Don't miss out on this release! Check out PromptPerfect 0.5 yourself at promptperfect.jina.ai
• • •
While migrating from davinci003 to the ChatGPT API that @OpenAI released yesterday, we found two interesting observations. Good or bad? You tell me. First, the `assistant` role in the new API always addresses itself in the first person. This can be convenient in conversational UX.
But in many non-conversation cases, this is pretty problematic. Compare the two optimized prompts from davinci003 (left, correct) and ChatGPT (gpt-3.5-turbo, incorrect): first person and second person are completely reversed between the two. What's more interesting is that
ChatGPT (gpt-3.5-turbo) tends to add a legal disclaimer to its output; see the redlines. In most downstream applications, these auto-added disclaimers are unnecessary and unwanted; downstream apps should handle disclaimers themselves.
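The shape of the migration itself may be part of the story: davinci003 takes one free-form prompt, while gpt-3.5-turbo takes role-tagged messages, and everything it returns comes back under the `assistant` role. A minimal sketch of the two request payloads (the prompt text is illustrative, not PromptPerfect's actual prompt; the SDK call is omitted):

```python
# Legacy completion API: a single free-form prompt string.
legacy_request = {
    "model": "text-davinci-003",
    "prompt": "Rewrite the following sentence in a formal tone:\nhi there",
}

# New chat API: a list of role-tagged messages.
# The reply arrives under the `assistant` role, which tends to speak
# in the first person — fine for chat UX, awkward for pipelines.
chat_request = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "system", "content": "You rewrite text in a formal tone."},
        {"role": "user", "content": "hi there"},
    ],
}
```

When a davinci003 prompt written in the second person ("You are…") is dropped into a `user` message unchanged, the perspective flip described above is a plausible failure mode.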