GPT-3 (Generative Pre-trained Transformer 3) is an extremely sophisticated language predictor created by @OpenAI, accessible via an API: a human gives the model some text, and GPT-3 returns its best guess at the text that should follow.
The model was trained on a vast swath of the text on the internet. It makes predictions by identifying statistical patterns in that data: the output is driven entirely by the surrounding text, and the model has no real understanding of the words it reads or produces.
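The pattern-matching idea can be illustrated with a toy sketch: a bigram model that predicts the next word by counting which word most often follows it in training text. GPT-3's transformer is vastly more sophisticated, but the core task is the same — predict what comes next from patterns, with no understanding involved. The corpus and function names below are purely illustrative.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words tend to follow it."""
    words = text.split()
    following = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def predict_next(following, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    if word not in following:
        return None
    return following[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(predict_next(model, "the"))   # "cat" follows "the" more often than "mat"
```

The model has no idea what a cat is; it only knows that, in its training data, "cat" usually comes after "the". GPT-3 operates on the same principle at an enormously larger scale.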
For a few reasons:
1. It's the most powerful AI language model that has been released
2. It's currently in closed-access and limited releases create hype and FOMO
3. People are sharing some very cool things they've created with it.
No. Because it has zero comprehension, GPT-3 does not fact-check itself, tends to lose coherence over long passages, can contradict itself at any time, and cannot produce genuinely original thought.
My goal is to provide tools that make it as easy as possible for you to spend time on the tasks you love and are great at.