Introducing Nomic Embed - the first fully open, long-context text embedder to beat OpenAI
- Open source, open weights, open data
- Beats OpenAI text-embedding-3-small and Ada on both short- and long-context benchmarks
- Day 1 integrations with @langchain, @llama-index, @MongoDB
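To get a feel for the release, here's a minimal sketch of embedding text with the nomic Python client; the function and model name follow the launch docs, so verify against the current SDK reference before copying.

```python
# Minimal sketch, assuming the nomic Python client is installed and an API key
# is configured (pip install nomic, then nomic login).
from nomic import embed

output = embed.text(
    texts=["Nomic Embed supports sequences up to 8192 tokens."],
    model="nomic-embed-text-v1",
)
print(len(output["embeddings"]))     # one vector per input text
print(len(output["embeddings"][0]))  # 768 dimensions for nomic-embed-text-v1
```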
Open source models are not replicable unless you have access to their training data.
We release our training dataset of 235M curated text pairs to enable anyone to replicate Nomic Embed from scratch.
Native GPT4All Integration
Chat with your data locally powered by Nomic Embed. gpt4all.io
We're also launching the Nomic Embedding API:
- 1M Free tokens!
- Production-ready embedding inference API, including task-specific embedding customizations (sketched after this list)
- Deep integration with Atlas Datasets
- New models incoming 👀
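As a hedged sketch of those task-specific customizations, the snippet below embeds documents and queries with different task types; the task_type parameter and its values are assumptions drawn from the Nomic Embed docs, so check the current API reference.

```python
# Hedged sketch: asymmetric retrieval with task-specific embeddings.
# The task_type values ("search_document", "search_query") are assumptions
# based on the Nomic Embed usage docs; verify against the current API reference.
from nomic import embed

doc_vecs = embed.text(
    texts=["Nomic Embed is a fully open 8192-context text embedding model."],
    model="nomic-embed-text-v1",
    task_type="search_document",
)["embeddings"]

query_vecs = embed.text(
    texts=["Which open embedding models handle long documents?"],
    model="nomic-embed-text-v1",
    task_type="search_query",
)["embeddings"]
```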
- First general purpose Mixture-of-Experts (MoE) embedding model
- SOTA performance on the multilingual MIRACL benchmark for its size
- Support for 100+ languages
- Truly open source - open training data, weights, & code
- Apache 2.0 License
Why Mixture-of-Experts? An MoE model activates only a subset of its parameters during training and inference, routing each input to the most relevant experts. This maintains strong performance on downstream tasks while cutting compute costs and memory usage. A minimal routing sketch follows.
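The layer below is an illustrative top-k MoE block in PyTorch: a generic sketch of the technique, not Nomic's actual architecture or hyperparameters.

```python
# Generic top-k Mixture-of-Experts sketch (illustrative, not Nomic's architecture):
# each token is routed to only k of num_experts feed-forward blocks, so only a
# fraction of the layer's parameters run for any given input.
import torch
import torch.nn as nn

class TopKMoE(nn.Module):
    def __init__(self, d_model=768, num_experts=8, k=2):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)  # scores each expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(num_experts)
        )
        self.k = k

    def forward(self, x):                               # x: (num_tokens, d_model)
        scores = self.router(x)                         # (num_tokens, num_experts)
        weights, idx = scores.topk(self.k, dim=-1)      # keep only the top-k experts
        weights = weights.softmax(dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e                # tokens routed to expert e
                if mask.any():
                    w = weights[mask, slot].unsqueeze(-1)
                    out[mask] += w * expert(x[mask])
        return out

# Usage: y = TopKMoE()(torch.randn(16, 768))
```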
Today, every Nomic-Embed-Text embedding becomes multimodal. Introducing Nomic-Embed-Vision:
- a high quality, unified embedding space for image, text, and multimodal tasks
- outperforms both OpenAI CLIP and text-embedding-3-small
- open weights and code to enable indie hacking, research, and experimentation
- released in collaboration with @MongoDB, @llama_index, @LangChainAI, @huggingface, @awscloud, @digitalocean, @LambdaAPI
Existing text-image embedding models, including OpenAI’s CLIP, dramatically underperform specialized text encoders on text retrieval tasks. This forces developers to deploy several embedding models and store several vector indices for multimodal applications. With Nomic-Embed-Vision, developers can use a single vector space to power both their text-text and text-image retrieval tasks.
We’ve been honored by the reception of Nomic-Embed-Text, which has grown into one of the most downloaded models on @huggingface.
We designed Nomic-Embed-Vision to be compatible with Nomic-Embed-Text out of the box, making it easy for developers using Nomic-Embed-Text to extend their applications with multimodal features.
Put simply, any vector created using Nomic-Embed-Text can be used to query vectors created by Nomic-Embed-Vision, and vice versa.
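As a hedged illustration of that shared space, the sketch below embeds an image and a text query and compares them directly; the embed.image call and the vision model name are assumptions taken from the launch docs, so check the current SDK reference.

```python
# Hedged sketch of text-to-image retrieval in the shared Nomic embedding space.
# The embed.image call and the "nomic-embed-vision-v1" model name are assumptions
# based on the launch docs; cat_photo.jpg is an illustrative local file.
import numpy as np
from nomic import embed

img_out = embed.image(images=["cat_photo.jpg"], model="nomic-embed-vision-v1")
txt_out = embed.text(
    texts=["a photo of a cat"],
    model="nomic-embed-text-v1",
    task_type="search_query",
)

image_vec = np.array(img_out["embeddings"][0])
text_vec = np.array(txt_out["embeddings"][0])

# One shared space means plain cosine similarity ranks images against text queries.
cosine = image_vec @ text_vec / (np.linalg.norm(image_vec) * np.linalg.norm(text_vec))
print(cosine)
```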
- Deduplicate your text, image and embedding datasets in your web browser.
- Scales to millions of datapoints (e.g. English Wikipedia)
- Cross-correlate with real-time regex search and semantic lassos.
Duplicate detection is a critical component of curating datasets for AI training.
Atlas is the only dataset platform that lets you perform this operation both in your web browser and programmatically.
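For intuition about what deduplication does under the hood, here's a generic embedding-based near-duplicate sketch in plain NumPy. It is not the Atlas API, just the underlying idea of flagging pairs of datapoints whose vectors are nearly identical.

```python
# Generic near-duplicate detection sketch (illustrative; not the Atlas API):
# embed every datapoint, then flag pairs whose cosine similarity exceeds a threshold.
import numpy as np

def near_duplicates(embeddings: np.ndarray, threshold: float = 0.98):
    # Normalize rows so the dot product equals cosine similarity.
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T
    pairs = []
    n = len(embeddings)
    for i in range(n):
        for j in range(i + 1, n):
            if sims[i, j] >= threshold:
                pairs.append((i, j))
    return pairs
```

At Wikipedia scale you would swap the brute-force pairwise comparison for an approximate nearest-neighbor index.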
- Variable-sized embeddings with Matryoshka representation learning and an 8192-token context.
- Outperforms OpenAI text-embedding-3-small across output sizes.
- Open source, open training code, open data.
Day 0 in @LangChainAI, @llama_index and @MongoDB
Performance is critical for the production use of embeddings, but what about the memory, storage, and bandwidth footprint of the vectors?
Nomic Embed v1.5 lets you trade memory footprint against performance, all in one model (sketched below).
Blog: blog.nomic.ai/posts/nomic-em…
You can use Nomic Embed v1.5 in production through the Nomic Embedding API or run the open weights yourself. docs.nomic.ai/reference/endp…
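As a hedged sketch of that trade-off, the snippet below requests a smaller vector from the API and shows the equivalent local trick of truncating and renormalizing a Matryoshka embedding; the dimensionality parameter name is an assumption drawn from the v1.5 docs.

```python
# Hedged sketch: resizable embeddings with Nomic Embed v1.5.
# The dimensionality argument is an assumption based on the v1.5 docs; check the
# API reference for the exact name and the supported sizes.
import numpy as np
from nomic import embed

small_vecs = embed.text(
    texts=["Matryoshka embeddings pack the most information into the leading dims."],
    model="nomic-embed-text-v1.5",
    task_type="search_document",
    dimensionality=256,                  # smaller vectors, smaller index
)["embeddings"]

# The same idea applied locally: truncate a full Matryoshka vector to a prefix
# and renormalize it before computing cosine similarities.
full_vec = np.random.randn(768)          # stand-in for a full 768-d embedding
prefix = full_vec[:128]
prefix = prefix / np.linalg.norm(prefix)
```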
Local LLMs in GPT4All are now 2x faster on Apple Silicon ⚡
- Supports all LLaMA models
- Exclusive support for the Replit model, generating code at 23 tok/s for a local Copilot!
Watch the 13B parameter Hermes model run at 15 tok/s locally! gpt4all.io
To make this possible, GPT4All hackers had to implement several custom Apple Metal kernels for LLM ops (e.g. Alibi) and maintain a custom fork of llama.cpp!
Excited to get these changes upstream! github.com/nomic-ai/llama…
Apple Silicon support is ready for all GPT4All bindings:
- Python
- TypeScript
- Golang
- Java
Start building with powerful, open-source and *fast* local LLMs!
Documentation: docs.gpt4all.io
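For example, a first run with the Python bindings might look like the sketch below; the model filename is illustrative, so substitute any model from the GPT4All catalog.

```python
# Minimal sketch with the GPT4All Python bindings (pip install gpt4all).
# The model filename is illustrative, not a recommendation; the file is
# downloaded automatically on first use.
from gpt4all import GPT4All

model = GPT4All("nous-hermes-llama2-13b.Q4_0.gguf")
with model.chat_session():
    print(model.generate("Write a haiku about local LLMs.", max_tokens=100))
```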
GPT4All LocalDocs allows you to chat with your private data!
- Drag and drop files into a directory that GPT4All will query for context when answering questions.
- Supports 40+ filetypes
- Cites sources. gpt4all.io
LocalDocs enables any GPT4All model to cite its sources.
When GPT4All decides that it can improve response factuality by using your documents, it does so and tells you which documents it used.
Install the universal local LLM client from gpt4all.io, go to settings and enable the plugin!