Neeva · Jan 6, 2023
🥁 Introducing #NeevaAI 🪄

🌟 Powerful AI
🌟 Cutting-edge LLMs
🌟 Authoritative answers in real-time (solving major drawbacks of ChatGPT)

It’s like nothing we, or anyone, have built before.

US users w/ a Neeva account can try it now or create one here: neeva.com/search?q=shoul…
While ChatGPT is a groundbreaking product, it comes with two significant drawbacks.

1️⃣ ChatGPT’s output does not include sources or references, making it impossible to determine the credibility of an answer.

2️⃣ ChatGPT does not retrieve real-time data or information.
#NeevaAI solves both of these problems. 💁‍♀️

At #Neeva, we are harnessing the power of #AI to transform search from a game of 10 blue links to an experience that combines the best of ChatGPT with the authority and timeliness of search.
First, #NeevaAI provides single answer summaries with sources.

We show a single synthesized answer summarizing the sites most relevant to the query.

References and citations are directly embedded in the answer, enabling users to judge the authenticity and trustworthiness of the information.
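To make that concrete, here is a hypothetical sketch of what an answer with embedded citations could look like; the field names, query, and URLs are made up for illustration and are not Neeva's actual API or data model:

```python
# Hypothetical shape of a cited answer; all names and values here are illustrative.
cited_answer = {
    "query": "how does photosynthesis work",
    "answer": (
        "Plants convert light, water, and carbon dioxide into glucose and oxygen [1], "
        "and the reactions take place mainly in the chloroplasts [2]."
    ),
    "sources": [
        {"id": 1, "title": "Photosynthesis overview", "url": "https://example.org/photosynthesis"},
        {"id": 2, "title": "Inside the chloroplast", "url": "https://example.org/chloroplast"},
    ],
}

# Each bracketed marker in the answer points at an entry in `sources`,
# so a reader can check where a claim came from.
for source in cited_answer["sources"]:
    print(f"[{source['id']}] {source['title']}: {source['url']}")
```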
Second, #NeevaAI returns current information.

We built one of the largest independent search stacks crawling hundreds of millions of pages a day.

By combining #AI with our in-house search stack, results are ⚡fast, timely, and relevant.
🎩🔮 The magic of #NeevaAI is doing all of this in real-time as the web changes.

As of today, NeevaAI triggers on the majority of searches, and that number will grow in the coming months! 📈

Get more details at our blog ⤵️
neeva.com/blog/introduci…
While we're thrilled to offer this transformative experience, the product is in beta. Keep in mind we won’t get everything right.

As you use #NeevaAI, send us your honest feedback!

Get in touch with us:
📧 Email feedback@neeva.co
📮 neeva.com/p/feedback
👇 In the comments


More from @Neeva

Mar 15, 2023
What if you could control the sources that go into your AI search engine?

At @Neeva, we're making that a reality.

We're excited to share what's new in #NeevaAI:
✅ Answer support for verified health sites, official programming sites, blogs, etc.
✅ Availability in the News tab

🧵
First, at @Neeva we're passionate about generative search engines combining the best of search & AI.

But it's clear generative AI systems have no notion of sources or authority.

Their content is based on their reading of source material, which is often a copy of the entire Web.
On the other hand, search engines care deeply about authority.

#PageRank (the algorithm that got @Google going) scored pages with an authority signal built from the citations they received from other high-scoring pages.
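As a toy illustration of that idea, here is a minimal power-iteration sketch (not Google's production algorithm): authority flows along links, and pages cited by high-scoring pages end up scoring higher themselves.

```python
# Minimal PageRank power iteration on a toy link graph (illustrative only).
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}

damping = 0.85
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):  # iterate until the scores stabilize
    new_rank = {p: (1 - damping) / len(pages) for p in pages}
    for page, outlinks in links.items():
        share = rank[page] / len(outlinks)
        for target in outlinks:
            # a citation from a high-scoring page passes along more authority
            new_rank[target] += damping * share
    rank = new_rank

print(sorted(rank.items(), key=lambda kv: -kv[1]))  # pages ranked by authority
```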
Mar 11, 2023
Considering using an LLM in production, but it's too slow?

Well, fear not! Today you can, with Double Pseudo Labeling!

Here's why the @Neeva team did it, and we've never looked back!

🧵
Before we get started, let's briefly go over #NeevaAI and what it is...

✅ Our AI generates answers
✅ Our Search makes sure answers are timely & factual

All in all, @Neeva combines search + AI to generate a single, cited answer for each of your searches.
#NeevaAI works by combining search with two phases of generative LLMs to generate AI answers:

Phase 1️⃣: Run per-doc summarization and question-answering models on the top results

Phase 2️⃣: Run a cross-document attributed summarizer to synthesize a single answer for the query

Example ⬇️
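As a rough illustration of how a two-phase pipeline like this could be wired together, here is a hedged sketch; every name below is a placeholder and the model calls are stubbed out, so treat it as a shape, not Neeva's internal code:

```python
# Hedged sketch of a two-phase answer pipeline; the "models" are stubs.
from dataclasses import dataclass

@dataclass
class Doc:
    url: str
    text: str

def phase1_summarize(query: str, doc: Doc) -> str:
    """Phase 1: per-document summarization / question answering (stub).
    A real system would run a fine-tuned seq2seq or QA model here."""
    return doc.text[:200]

def phase2_synthesize(query: str, per_doc: list[tuple[str, str]]) -> str:
    """Phase 2: cross-document attributed summarization (stub).
    A real system would generate one fluent answer while keeping a
    citation marker for each source that contributed a claim."""
    return " ".join(f"{summary} [{i + 1}]" for i, (_, summary) in enumerate(per_doc))

def answer(query: str, top_results: list[Doc]) -> tuple[str, list[str]]:
    per_doc = [(d.url, phase1_summarize(query, d)) for d in top_results]
    return phase2_synthesize(query, per_doc), [url for url, _ in per_doc]

docs = [
    Doc("https://example.org/a", "Text of the first relevant page..."),
    Doc("https://example.org/b", "Text of the second relevant page..."),
]
text, sources = answer("example query", docs)
```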
Feb 10, 2023
Yesterday, we talked about why AI chatbots generate frankenanswers.

As promised, today we're going over how NeevaAI implements our solution to frankenanswers.

BIG thanks to our talented AI/ML team: @avinashparchuri @rahilbathwal

Keep reading for how we tackle this problem ⤵️
And if you didn't read our thread on this yesterday, check out part 1 below.

(we promise it makes more sense if you do! 😜)
To fix frankenanswers, we need to take a step back and ask ourselves... what would we like NeevaAI to do in such cases?
Feb 9, 2023
Have you seen ChatGPT combine info on multiple entities into an answer that’s completely WRONG? 😬

Generative AI and LLM models can mix up names or concepts & confidently regurgitate frankenanswers.

Neeva is solving this problem on our AI-powered search engine.

Here’s how 🧵
FYI This is a two-part thread series.

Today, with the help of @rahilbathwal, we'll explain the technical reasons these problems happen.

Tomorrow, we’ll talk through how we’re implementing our solution with our AI/ML team.

Make sure you're following... 👀
In frankenanswers, a generative AI model combines information about multiple possible entities into an answer that’s wrong.

Ex) On this query for 'imran ahmed' from our early test builds, you see a mix-up of many intents corresponding to different entities that share the same name. 👇
Feb 6, 2023
1/ NeevaAI serves abstractive summaries of web pages that are generated in real-time.

We achieved this with a ~10x latency reduction on a fine-tuned t5-large encoder-decoder model.

TY @asimshankar, @rajhans_samdani, @AshwinDevaraj3 + @spacemanidol

See our lessons learned.. 🧵
2/ First off, we found that there are far fewer resources available for optimizing encoder-decoder models (when compared to encoder models like BERT and decoder models like GPT).

We hope this thread will fill in the void and serve as a good resource. 📂
3/ We started with a flan-T5-large model and tuned it on our dataset. We picked the large variant because we found it to generate better summaries with fewer hallucinations and fluency issues.

The problem? The latency is too high for a search product.
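For a sense of the baseline, here is roughly what single-request inference looks like with the public flan-t5-large checkpoint via Hugging Face transformers (used here as a stand-in for the fine-tuned model, which isn't public); timing one generate call makes the latency problem easy to see:

```python
# Baseline summarization with the public google/flan-t5-large checkpoint,
# used as a stand-in for a fine-tuned model we don't have access to.
import time
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-large")
model.eval()

page_text = "Replace this with the cleaned text of a web page to summarize."
inputs = tokenizer("summarize: " + page_text, return_tensors="pt", truncation=True)

start = time.perf_counter()
output_ids = model.generate(**inputs, max_new_tokens=64)
elapsed = time.perf_counter() - start

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
print(f"generation latency: {elapsed:.2f}s")  # typically seconds on CPU, too slow for a search product
```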
Jan 18, 2023
1/ At #Neeva, design is the act of giving form to an idea: we gather data and inspiration, think, make, and iterate through feedback. 💡

Here's how our team, working alongside the ✨Neeva Community✨, shaped our latest news tool, #BiasBuster...

(read on 📖)
2/ To improve the news experience on Neeva, we solicited insights from users of various news outlets.

One early finding 👉 the journey to get daily news typically started from news providers' sites and apps, but NOT from a search engine.

🤔
3/ So we asked ourselves, when does a search engine become necessary and helpful in the journey? 💭

Several users shared that they searched for specific events and stories about which they wanted to learn more.

An avid news user put it this way: "Search is for focused topics."
