Neeva · Mar 15 · 12 tweets · 7 min read
What if you could control the sources that go into your AI search engine?

At @Neeva, we're making that a reality.

We're excited to share what we've added to #NeevaAI:
✅ Answer support for verified health sites, official programming sites, blogs, etc.
✅ Availability in the News tab

🧵
First, at @Neeva we're passionate about generative search engines that combine the best of search & AI.

But it's clear generative AI systems have no notion of sources or authority.

Their content is based on their reading of source material, which is often a copy of the entire Web.
Search engines, on the other hand, care deeply about authority.

#PageRank (the algorithm that got @Google going) was built around a better authority signal: a page's score is based on the citations it receives from other high-scoring pages.
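For intuition, here's a toy version of that idea in Python: a tiny hand-made link graph, the standard 0.85 damping factor, and a simple power iteration. This is illustrative only, not Google's production algorithm.

```python
# Toy PageRank: a page's score is built from the scores of pages linking to it.
links = {                      # page -> pages it links to (hand-made example graph)
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}    # start with uniform scores
damping = 0.85

for _ in range(50):                            # power iteration
    new_rank = {p: (1 - damping) / len(pages) for p in pages}
    for page, outlinks in links.items():
        share = damping * rank[page] / len(outlinks)
        for target in outlinks:
            new_rank[target] += share          # citations from high-scoring pages count more
    rank = new_rank

print(sorted(rank.items(), key=lambda kv: -kv[1]))
```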
At @Neeva, we believe a great AI search engine should allow you to control the sources that go into your answers.

Here's an example of how we're doing it differently ⤵️
Just two weeks back, #NeevaAI went multi-perspective by adding support for @Neeva’s Bias Buster feature.

This provides multiple perspectives on any topic, transparently.

A deeper dive on our multi-perspective, authoritative, & personalizable AI ⬇️
As we continue to imagine the possibilities of multi-perspective AI, we're proud to add #NeevaAI support for verified health sites & authoritative programming websites.

Here's an example 🔎: [why water is important for maintaining good health]
You may be looking for a more authoritative answer from only verified sites.

In that case, just:
1️⃣ Click on the verified facet
2️⃣ See an authoritative #NeevaAI answer from Neeva verified sites.

It's that simple! 👍
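Behind a facet like this, you can picture a simple allowlist: only results from verified domains feed the answer. Here's a minimal sketch of that idea; the domain list, result shape, and summarize() call are illustrative placeholders, not production code.

```python
# Hypothetical sketch: restrict the sources feeding an AI answer to verified sites.
VERIFIED_HEALTH_SITES = {"cdc.gov", "who.int", "nih.gov", "mayoclinic.org"}  # illustrative entries

def verified_only(results, allowlist=VERIFIED_HEALTH_SITES):
    """Keep only results whose domain is on the verified allowlist."""
    return [r for r in results if r["domain"] in allowlist]

results = [
    {"domain": "mayoclinic.org", "snippet": "Water keeps every cell ..."},
    {"domain": "random-blog.example", "snippet": "My top 10 hydration hacks ..."},
]

sources = verified_only(results)
print([r["domain"] for r in sources])   # -> ['mayoclinic.org']
# answer = summarize(query, sources)    # hypothetical: the answer now cites only verified sites
```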
Here's another example. We typed in 🔎: [fp32 vs tf32]

1️⃣ Click on Official Docs
2️⃣ See an authoritative #NeevaAI answer!
In addition, users can now access a single #NeevaAI summary directly via the @Neeva News tab.

This includes citation cards: search results surfaced alongside the summary that highlight authoritative information about the topic. 🙌

Here's an example ⬇️
Without links embedded in AI answers, users won't find their way to a publisher's website, affecting their referral traffic.

The resulting drops in traffic will have a material impact on ad revenue, especially at a time when most publishers are fighting just to stay afloat.
At @Neeva, we are committed to building an equitable ecosystem where content creators and publishers are a part of the conversation.

We are working to help publishers integrate fluent AI search natively into their websites, allowing users to discover & consume content seamlessly.
We are always looking for ways to put you in control of your own AI. Not the other way around.

We will continue to use this thinking as we upgrade #NeevaAI search.

Start using our authoritative answers by signing up for a free @Neeva account at neeva.com

• • •

More from @Neeva

Mar 11
Considering using an LLM for production, but it's too slow?

Well fear not! Today you can with Double Pseudo Labeling!

Here's why the @Neeva team did it, and we've never looked back!

🧵
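The preview below cuts off before the "Double Pseudo Labeling" recipe itself, but the core pseudo-labeling idea is simple: let a big, slow model label data, then train a smaller, faster model on those pseudo-labels. Here's a toy single-round sketch of plain pseudo-labeling, using a scikit-learn classification stand-in rather than the actual LLM setup or the "double" variant.

```python
# Toy stand-in for pseudo-labeling (classification instead of text generation):
# the teacher is accurate but too slow to serve; the student learns from its labels.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, random_state=0)
X_labeled, X_unlabeled, y_labeled, _ = train_test_split(X, y, train_size=500, random_state=0)

teacher = RandomForestClassifier(n_estimators=300, random_state=0)
teacher.fit(X_labeled, y_labeled)               # big, slow "teacher" model
pseudo_labels = teacher.predict(X_unlabeled)    # label the cheap, unlabeled pool

student = LogisticRegression(max_iter=1000)     # small, fast model for production serving
student.fit(X_unlabeled, pseudo_labels)
```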
Before we get started, let's briefly go over #NeevaAI and what it is...

✅ Our AI generates answers
✅ Our Search makes sure answers are timely & factual

All in all, @Neeva combines search + AI to generate a single, cited answer for each of your searches.
#NeevaAI works by combining search with two phases of generative LLMs to generate AI answers:

Phase 1️⃣: Run per-doc summarization and question-answering models on the top results

Phase 2️⃣: Run a cross-document attributed summarizer to synthesize a single answer for the query

Example ⬇️
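Roughly, in pseudo-code — the search_index, per_doc_model, and cross_doc_model objects here are hypothetical placeholders standing in for the components described above, not internal APIs.

```python
# Pseudo-code sketch of the two-phase flow described above.
def neeva_ai_answer(query, search_index, per_doc_model, cross_doc_model, k=8):
    top_results = search_index.search(query, limit=k)   # retrieval keeps the answer timely & factual

    # Phase 1: per-document summarization / question answering on each top result.
    per_doc = [
        {"url": doc.url, "summary": per_doc_model.summarize(query, doc.text)}
        for doc in top_results
    ]

    # Phase 2: a cross-document attributed summarizer synthesizes one answer,
    # keeping track of which source each claim came from (the citations).
    return cross_doc_model.synthesize(query, per_doc)
```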
Feb 10
Yesterday, we talked about why AI chatbots generate frankenanswers.

As promised, today we're going over how NeevaAI implements our solution to frankenanswers.

BIG thanks to our talented AI/ML team: @avinashparchuri @rahilbathwal

Keep reading for how we tackle this problem ⤵️
And if you didn't read our thread on this yesterday, check out part 1 below.

(we promise it makes more sense if you do! 😜)
To fix frankenanswers, we first had to take a step back and ask ourselves... what would we like NeevaAI to do in such cases?
Feb 9
Have you seen ChatGPT combine info on multiple entities into an answer that’s completely WRONG? 😬

Generative AI and LLM models can mix up names or concepts & confidently regurgitate frankenanswers.

Neeva is solving this problem on our AI-powered search engine.

Here’s how 🧵
FYI This is a two-part thread series.

Today, with the help of @rahilbathwal, we’ll explain why the problems happen technically.

Tomorrow, we’ll talk through how we’re implementing our solution with our AI/ML team.

Make sure you're following... 👀
In frankenanswers, a generative AI model combines information about multiple possible entities into an answer that’s wrong.

Ex) On this query for 'imran ahmed' from our early test builds, you see a mix-up of many intents corresponding to different entities with the same name. 👇
Feb 6
1/ NeevaAI serves abstractive summaries of web pages that are generated in real-time.

We achieved this through a ~10x reduction in the latency of a fine-tuned t5-large encoder-decoder model.

TY @asimshankar, @rajhans_samdani, @AshwinDevaraj3 + @spacemanidol

See our lessons learned... 🧵
2/ First off, we found that there are far fewer resources available for optimizing encoder-decoder models (when compared to encoder models like BERT and decoder models like GPT).

We hope this thread will fill that void and serve as a good resource. 📂
3/ We started with a flan-T5-large model and tuned it on our dataset. We picked the large variant because we found it to generate better summaries with fewer hallucinations and fluency issues.

The problem? The latency is too high for a search product.
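The preview doesn't include the specific optimizations, but here's what a baseline latency measurement looks like with an off-the-shelf flan-t5-large summarizer in Hugging Face transformers. The fp16 weights, greedy decoding, and capped output length shown are generic levers for cutting generation time, not necessarily the ones used in production; a CUDA GPU is assumed.

```python
import time
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Baseline latency check for a flan-t5-large summarizer (assumes a CUDA GPU).
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large")
model = AutoModelForSeq2SeqLM.from_pretrained(
    "google/flan-t5-large", torch_dtype=torch.float16
).to("cuda").eval()

article = "..."  # web page text to summarize
inputs = tokenizer("summarize: " + article, return_tensors="pt",
                   truncation=True, max_length=1024).to("cuda")

start = time.perf_counter()
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64, num_beams=1)  # greedy, short output
print(tokenizer.decode(output[0], skip_special_tokens=True))
print(f"latency: {time.perf_counter() - start:.2f}s")
```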
Jan 18
1/ At #Neeva, design is the act of giving form to an idea: we gather data and inspiration, think, make, and iterate through feedback. 💡

Here's how our team, working alongside the ✨Neeva Community✨, shaped our latest news tool, #BiasBuster...

(read on 📖)
2/ To improve the news experience on Neeva, we solicited insights from users of various news outlets.

One early finding 👉 the journey to get daily news typically started from news providers' sites and apps, but NOT from a search engine.

🤔
3/ So we asked ourselves, when does a search engine become necessary and helpful in the journey? 💭

Several users shared that they searched for specific events and stories about which they wanted to learn more.

As one avid news user put it, "Search is for focused topics."
Jan 17
1/ Have you heard? Bias Buster is now available in #Neeva's main search tab!

🔎 Try a search here: neeva.com/search?q=calif…

And if you're wondering how we crawled and evaluated topics to create our 5 point scale slider, stay tuned! 🤓

We dive into it in this thread 🧵…
2/ Our goal? 👉 Show a variety of POVs on particular news topics.

To reach this goal, we categorized results into 5 buckets to ensure a smooth experience while interacting with the slider:

🪣 Far Left
🪣 Left Leaning
🪣 Center
🪣 Right Leaning
🪣 Far Right
3/ So, how do we categorize our results to fit these buckets?

By using third party media bias tools, such as @AllSidesNow and @MBFC_News.

Each result is categorized by its respective domain.
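Conceptually, that's a lookup from a result's domain to one of the five buckets, using ratings drawn from those third-party tools. A minimal sketch of one simple reading of the mechanism — the domain-to-bucket entries below are made-up placeholders, not actual ratings of real outlets.

```python
from urllib.parse import urlparse

BUCKETS = ["Far Left", "Left Leaning", "Center", "Right Leaning", "Far Right"]

# Hypothetical mapping built from third-party media-bias ratings; entries are placeholders.
DOMAIN_BIAS = {
    "example-left-news.com": "Left Leaning",
    "example-wire.com": "Center",
    "example-right-news.com": "Right Leaning",
}

def bucket_for(url):
    """Categorize a result by its domain; unknown domains get no bucket."""
    domain = urlparse(url).netloc.removeprefix("www.")
    return DOMAIN_BIAS.get(domain)

def results_at_slider(results, slider_bucket):
    """One possible slider behavior: show only results whose domain falls in the selected bucket."""
    return [r for r in results if bucket_for(r["url"]) == slider_bucket]
```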
