Free #AI "tip" for editorial teams & publishers (or other content-heavy businesses)
1. Create embeddings of ALL your content (every review, news article, guide, video, podcast...) for vector search. This can be done with @OpenAI, but also with other models from @CohereAI or @huggingface
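A minimal sketch of step 1, assuming the pre-1.0 `openai` Python SDK and the text-embedding-ada-002 model (both just one option; a Cohere or Hugging Face sentence-transformers model works the same way):

```python
# Sketch: embed every article/review/transcript once and keep the vectors around.
# Assumes the pre-1.0 `openai` Python SDK and the text-embedding-ada-002 model.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def embed_texts(texts):
    """Return one embedding vector per input text."""
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
    return [item["embedding"] for item in resp["data"]]

# Chunk long articles first (e.g. ~500-800 tokens per chunk), then embed each chunk.
chunks = ["First chunk of a review ...", "Second chunk ..."]
vectors = embed_texts(chunks)
# Store (chunk_text, vector) pairs in a vector database (or a flat file for small
# catalogues); cosine similarity over these vectors is the "search".
```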
2. Train your own fine-tuned LLMs (again, this can be #GPT3, but also others like #BLOOM or Google's #FlanT5) for Q&A, recommendations, chat, and more...
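As a rough illustration of step 2, a fine-tuning dataset in the prompt/completion JSONL format that the legacy GPT-3 fine-tuning endpoint (and many open-source trainers) accept. The question/answer pairs here are invented placeholders; in practice you would generate them from your own archive, e.g. by prompting a larger LLM:

```python
# Sketch: write a small prompt/completion JSONL file for fine-tuning.
# The examples are invented placeholders, not real editorial data.
import json

examples = [
    {
        "prompt": "Q: Which budget phone did our 2022 review rate highest?\nA:",
        "completion": " The XYZ-100, mainly for its battery life.",   # placeholder answer
    },
    {
        "prompt": "Q: Summarise our verdict on the XYZ-500 headphones.\nA:",
        "completion": " Great sound for the price, but the app is clunky.",  # placeholder answer
    },
]

with open("finetune_dataset.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex, ensure_ascii=False) + "\n")

# The same JSONL can feed a smaller open model (BLOOM, Flan-T5) via Hugging Face
# `transformers` instead of a hosted GPT-3 fine-tune.
```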
3. Connect your embeddings with your "own" LLMs and put a well-designed interface (text and voice) on your homepage and under every article or video you have.
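Step 3 is essentially retrieval-augmented answering. A minimal sketch, assuming the same pre-1.0 `openai` SDK; the function names and the prompt wording are illustrative, not a fixed API:

```python
# Sketch: embed the reader's question, find the closest content chunks by cosine
# similarity, and hand them to the LLM as context for the answer.
import numpy as np
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def cosine(a, b):
    a, b = np.asarray(a), np.asarray(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def answer(question, chunks, vectors, top_k=3):
    """chunks: list of text snippets; vectors: their precomputed embeddings."""
    q = openai.Embedding.create(model="text-embedding-ada-002", input=[question])
    q_vec = q["data"][0]["embedding"]
    ranked = sorted(zip(chunks, vectors), key=lambda cv: cosine(q_vec, cv[1]), reverse=True)
    context = "\n\n".join(chunk for chunk, _ in ranked[:top_k])
    prompt = (
        "Answer the reader's question using only the editorial content below.\n\n"
        f"Content:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    resp = openai.Completion.create(model="text-davinci-003", prompt=prompt, max_tokens=300)
    return resp["choices"][0]["text"].strip()
```

The same retrieval step works in front of a self-hosted model; only the final completion call changes.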
4. Users will explore more, understand more, find real answers, and can access ALL your content (not just the latest pieces) with just a few words.
Especially on mobile, where attention spans are short and the UX of most websites is far from optimal.
5. Users will stay longer and connect with the editorial content much more. Only expert teams have this kind of info "treasure". ChatGPT is not good in niches, and Google will try even harder to keep users inside its own platforms and ecosystems.
6. For example: instead of just reading a review, they can ask and "interrogate" the text and get additional information. What are this author's favorite XYZ? How is this "thing" better or worse than that other "thing"?
7. Editors could focus more on reporting (aka real-life experiences) and connecting the dots.
Just imagine your website had a ChatGPT-like interface where users could ask questions and interact for hours.
8. People will pay for this (if you are a real expert in your area), or you could serve perfectly targeted ads, because the user is actually telling you what she or he is interested in.
If you think this is too expensive, not secure, or too complicated: it's not.
Embedding 3,000 pages with OpenAI (I don't know the prices of other services) costs about 1 dollar. You can use LLMs to create the datasets for your own fine-tuning. And you can fine-tune smaller, cheaper LLMs; it does not have to be 'Davinci'.
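A back-of-envelope check of that "~1 dollar for 3,000 pages" figure. The tokens-per-page estimate and the ada-002 price are assumptions based on early-2023 numbers; plug in current figures for your own estimate:

```python
# Rough cost estimate for embedding an archive (assumed numbers, not official pricing).
pages = 3000
tokens_per_page = 800            # assumption: roughly 600 words per page
price_per_1k_tokens = 0.0004     # text-embedding-ada-002 list price at the time, in USD

total_tokens = pages * tokens_per_page
cost = total_tokens / 1000 * price_per_1k_tokens
print(f"{total_tokens:,} tokens -> ~${cost:.2f}")   # 2,400,000 tokens -> ~$0.96
```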
And last but not least, OpenAI now runs on Azure with all the security certifications you need. Google and others offer affordable cloud solutions with open-source LLMs, too. #noadvertisement
Use your own treasures or get "owned" by AI companies that don't care about editorial content.
• • •
Did some "testing" with the #ChatGPT (or #GPT3) detector by Hello-SimpleAI.
1. It's much more accurate than the GPT-2 detector. All zero-shot prompt texts were detected. (But to be honest, anybody who works with ChatGPT on a daily basis can spot a ChatGPT text in 2 seconds.)
2. Long-prompt generations (especially from the GPT-3 Playground) and summaries: not so much. About 8 out of 10 texts were not detected.
3. Reworked text or "text combinations" (several layers of AI interaction): zero detection.
4. German translations: almost zero detection.
Tl;dr: a helpful tool for ChatGPT detection. But the bigger question is: should text (content) that is improved / rewritten / corrected / translated / transformed by an #LLM be detectable? And if so, why don't we label heavily altered or enhanced pictures, videos, or audio?
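For anyone who wants to repeat the test: a minimal sketch of running the detector locally with Hugging Face `transformers`. The model ID below is the RoBERTa-based detector published by the Hello-SimpleAI team (assumed; check the hub for the exact name and its label scheme), and the sample texts are placeholders:

```python
# Sketch: score texts with the Hello-SimpleAI ChatGPT detector via transformers.
from transformers import pipeline

detector = pipeline("text-classification", model="Hello-SimpleAI/chatgpt-detector-roberta")

samples = [
    "Paste a zero-shot ChatGPT answer here ...",        # placeholder
    "Paste a human-written review paragraph here ...",  # placeholder
]
for text in samples:
    # truncation=True keeps long articles within the model's input limit
    print(detector(text, truncation=True))  # prints label + confidence score
```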