At #Neeva, we are harnessing the power of #AI to transform search from a game of 10 blue links to an experience that combines the best of ChatGPT with the authority and timeliness of search.
First, #NeevaAI provides single answer summaries with sources.
We show a synthesized single answer summarizing the sites most relevant to a query.
References and citations are embedded directly in the answer, enabling users to assess the authenticity and trustworthiness of the information.
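A minimal sketch of the idea (not Neeva's actual schema; every field name and URL below is invented): each sentence of the synthesized answer carries citation indices that map back to the source URLs it was drawn from.

```python
# Hypothetical answer structure: sentences point back at their sources,
# so a reader can check where each claim came from.
answer = {
    "sentences": [
        {"text": "Vitamin D supports bone health.", "citations": [0]},
        {"text": "Most adults need 600-800 IU daily.", "citations": [0, 1]},
    ],
    "sources": [
        "https://example.org/health/vitamin-d",
        "https://example.org/nutrition/guidelines",
    ],
}

# Render the answer with inline citation markers like [1], [1][2].
for s in answer["sentences"]:
    marks = "".join(f"[{i + 1}]" for i in s["citations"])
    print(f'{s["text"]} {marks}')
```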
We're excited to share what's new in #NeevaAI:
✅ Answer support for verified health sites, official programming sites, blogs, etc.
✅ Availability in the News tab
🧵
First, at @Neeva we're passionate about building generative search engines that combine the best of search & AI.
But it's clear generative AI systems have no notion of sources or authority.
Their content is based on their reading of source material, which is often a copy of the entire Web.
Search engines, on the other hand, care deeply about authority.
#PageRank (the algorithm that got @Google going) scored pages with an authority signal based on the citations they received from other high-scoring pages.
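For intuition, here's a toy power-iteration PageRank in Python (a textbook sketch, not Google's production algorithm): a page's score grows when it is cited by other high-scoring pages.

```python
import numpy as np

def pagerank(adj, damping=0.85, iters=50):
    """Toy power-iteration PageRank. adj[i][j] = 1 means page i links to j."""
    n = adj.shape[0]
    out_degree = adj.sum(axis=1, keepdims=True)
    out_degree[out_degree == 0] = 1  # avoid division by zero for sink pages
    # Column-stochastic transition matrix: each page splits its vote
    # evenly across its outgoing links.
    transition = (adj / out_degree).T
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        rank = (1 - damping) / n + damping * transition @ rank
    return rank

# Three pages: 0 and 1 both cite page 2, so page 2 ends up ranked highest.
links = np.array([[0, 0, 1],
                  [0, 0, 1],
                  [1, 0, 0]])
print(pagerank(links))  # page 2 gets the largest score
```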
Have you seen ChatGPT combine info on multiple entities into an answer that’s completely WRONG? 😬
Generative AI models and LLMs can mix up names or concepts & confidently regurgitate frankenanswers.
Neeva is solving this problem on our AI-powered search engine.
Here’s how 🧵
FYI This is a two-part thread series.
Today, with the help of @rahilbathwal, we'll explain the technical reasons these problems happen.
Tomorrow, we’ll talk through how we’re implementing our solution with our AI/ML team.
Make sure you're following... 👀
In frankenanswers, a generative AI model combines information about multiple possible entities into an answer that’s wrong.
Ex) On this query for "imran ahmed" from our early test builds, you can see a mix-up of many intents corresponding to different entities with the same name. 👇
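A hypothetical sketch of why this happens (the passages below are invented for illustration): retrieved snippets about different people who share a name get concatenated into one prompt, and a summarizer with no notion of entity identity blends them together.

```python
# Two retrieved passages about DIFFERENT people who share a name.
# (Both strings are made up for illustration.)
passages = [
    "Imran Ahmed is an author who writes about technology.",   # entity A
    "Imran Ahmed scored a century in yesterday's match.",      # entity B
]

# Naive concatenation erases the entity boundary before generation.
prompt = "Summarize: " + " ".join(passages)

# A summarizer with no entity grounding can now produce something like
# "Imran Ahmed, the technology author who scored a century ..."
# - a frankenanswer that is wrong about both people.
print(prompt)
```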
2/ First off, we found far fewer resources available for optimizing encoder-decoder models compared to encoder models like BERT and decoder models like GPT.
We hope this thread will fill in the void and serve as a good resource. 📂
3/ We started with a flan-T5-large model and fine-tuned it on our dataset. We picked the large variant because we found it generated better summaries with fewer hallucinations and fluency issues.
The problem? The latency is too high for a search product.
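As a rough sketch of the setup (not our production code; the prompt is invented), here's how you could load flan-t5-large with Hugging Face transformers and time a single generation to see the latency problem yourself:

```python
import time
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# ~780M parameters: noticeably better summaries than the base variant,
# but slow to run on CPU or a small GPU.
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-large")

prompt = "Summarize: Neeva is an ad-free search engine that ..."
inputs = tokenizer(prompt, return_tensors="pt")

start = time.perf_counter()
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
print(f"latency: {time.perf_counter() - start:.2f}s")  # too slow for a search results page
```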