ChatGPT does not reliably provide factual scientific information. It generates answers that sound reasonably accurate and can be hard even for a qualified expert to distinguish from correct content, but in the end they turn out to be wrong.
It also struggles with the Cognitive Reflection Test on the first attempt: it gives the intuitive answer, which is incorrect. If the prompt asks it to think step by step, it produces the correct solution.
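As a rough illustration of that prompting difference (this is a sketch, not from the original thread), here is how the two prompts could be compared using the legacy OpenAI completions API with text-davinci-003 as a stand-in; the model name, parameters, and wording are assumptions:

```python
# Sketch: direct prompt vs. "think step by step" prompt on a classic
# Cognitive Reflection Test question, using the legacy `openai` client (pre-1.0).
import openai

CRT_QUESTION = (
    "A bat and a ball cost $1.10 in total. "
    "The bat costs $1.00 more than the ball. "
    "How much does the ball cost?"
)

def ask(prompt: str) -> str:
    # temperature=0 keeps the outputs comparable across runs
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=256,
        temperature=0,
    )
    return response["choices"][0]["text"].strip()

# Direct question: tends to elicit the intuitive (wrong) answer, $0.10.
print(ask(CRT_QUESTION))

# Nudging the model to reason step by step usually yields the correct $0.05.
print(ask(CRT_QUESTION + "\nLet's think step by step."))
```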
All the amazing use cases ChatGPT can handle
👇🏼👇🏼👇🏼
Programming
• Clone a repo
• Convert JSONL into a JSON array (see the sketch after this list)
• Convert text to an AWS IAM policy
• Create a summarizer app
• Create a React component
• Explain programming problems in a particular style
• Explain a regular expression
• Find & explain bugs in code
• Generate code to be presented to management for review
• How to build a neural network using PyTorch
• Optimize code
• Refactor code
• Write a Blender 3D plugin
• Write an exploit for code
• Convert PHP + jQuery code to Next.js + Tailwind
• Write a Kubernetes deployment file
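To ground one of these use cases, here is a minimal sketch of the JSONL-to-JSON-array conversion, roughly the kind of script ChatGPT produces for such a request; the file names are placeholders:

```python
# Sketch: convert a JSONL file (one JSON object per line) into a single JSON array.
# "input.jsonl" and "output.json" are placeholder file names.
import json

def jsonl_to_json_array(src_path: str, dst_path: str) -> None:
    records = []
    with open(src_path, "r", encoding="utf-8") as src:
        for line in src:
            line = line.strip()
            if line:  # skip blank lines
                records.append(json.loads(line))
    with open(dst_path, "w", encoding="utf-8") as dst:
        json.dump(records, dst, indent=2)

jsonl_to_json_array("input.jsonl", "output.json")
```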
When ChatGPT was released, it gave the world new superpowers. We discover something new about the model every day, but we are limited to sharing screenshots of it.
3/ What if we could convert our conversation into an app and let users interact with that app?
What if anyone with basic conversational skills could write a prompt and turn it into an app?
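One possible shape for that idea (purely a sketch, not something described in the thread) is to wrap a fixed prompt template around user input and expose it behind a tiny web endpoint. The Flask route, prompt template, and model choice below are all assumptions for illustration:

```python
# Sketch: turning a single prompt into a minimal web app.
# Assumes Flask and the legacy `openai` client; the summarizer prompt is made up for illustration.
import openai
from flask import Flask, jsonify, request

app = Flask(__name__)

PROMPT_TEMPLATE = "Summarize the following text in two sentences:\n{text}\n"

@app.route("/summarize", methods=["POST"])
def summarize():
    data = request.get_json(force=True)
    completion = openai.Completion.create(
        model="text-davinci-003",
        prompt=PROMPT_TEMPLATE.format(text=data.get("text", "")),
        max_tokens=200,
    )
    return jsonify({"summary": completion["choices"][0]["text"].strip()})

if __name__ == "__main__":
    app.run()
```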
I played around with it. Sharing some observations about its output. 🔍
2/ ChatGPT is good at conversing in a human, natural way. I asked for gift suggestions for my mom. It first added a filler paragraph, then the suggestions, and then a conclusion. This is similar to how a human would answer.
3/ I continued the conversation and asked for more details about the photo frame, which was one of the suggestions. ChatGPT clearly preserved the context and generated a good response. This is a significant improvement, as most chatbots fail to preserve context.
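For readers curious how that kind of context preservation can be reproduced on top of a plain completions API, a common approach (a sketch of the general technique, not a description of ChatGPT's internals) is to replay the accumulated conversation with every request:

```python
# Sketch: preserving conversational context by replaying prior turns in each prompt.
# Mimics the follow-up behavior described above; not how ChatGPT is implemented internally.
import openai

history = []  # list of (speaker, text) turns

def chat(user_message: str) -> str:
    history.append(("User", user_message))
    # Flatten the whole conversation so far into a single prompt.
    prompt = "\n".join(f"{speaker}: {text}" for speaker, text in history) + "\nAssistant:"
    completion = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=300,
        stop=["User:"],
    )
    reply = completion["choices"][0]["text"].strip()
    history.append(("Assistant", reply))
    return reply

print(chat("Can you suggest a gift for my mom?"))
print(chat("Tell me more about the photo frame."))  # follow-up relies on the replayed history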
1/ OpenAI recently released a new GPT-3 model called text-davinci-003. It has several enhancements over the previous versions.
I compared text-davinci-002 and text-davinci-003 with the same prompt and input settings.
Here are the observations 🔍
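For context, the comparison setup can be as simple as sending one prompt to both models with identical settings; the prompt and parameter values below are illustrative assumptions, not the exact ones used in the thread:

```python
# Sketch: sending the same prompt and settings to text-davinci-002 and text-davinci-003
# so their outputs can be compared side by side. Prompt and settings are placeholders.
import openai

PROMPT = "Generate an HTML snippet for a pricing card with a title, price, and button."
SETTINGS = {"max_tokens": 400, "temperature": 0.7, "top_p": 1.0}

for model in ("text-davinci-002", "text-davinci-003"):
    completion = openai.Completion.create(model=model, prompt=PROMPT, **SETTINGS)
    print(f"--- {model} ---")
    print(completion["choices"][0]["text"].strip())
```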
2/ davinci003 doesn't make assumptions even at higher temperature settings, whereas davinci002 does.
Here, davinci002 assumed that a CSS library would be available, so its output contained class names. davinci003 made no such assumption and produced inline CSS styles instead.
3/ davinci003 is good at generating long-form content.
Both davinci002 and davinci003 correctly inferred web frameworks for the Python language, but the output produced by davinci003 is longer, to the point, and meaningful, while davinci002's output is more generic, filler-like text.