What are the latest research trends in AI?
Explore all NeurIPS submissions from 1987 to 2022 in Atlas. atlas.nomic.ai/map/neurips
Learn how it works and how to make it yourself👇
Each point is an accepted abstract at NeurIPS between 1987 and 2022.
Clusters of points represent research topics. For example, all papers about graph neural networks are here:
Atlas lets you interact with unstructured datasets over time. Filtering by submission year shows us how submissions to NeurIPS evolve:
'80s and early '90s: kernels, speech recognition, and models of the brain.
'90s-'00s: RL, clustering, and active learning become popular.
2010s: deep learning theory, ConvNets, causal inference, and adversarial attacks.
2018-2022: self-supervised learning, pruning, bandit problems, and 3D deep learning.
Searching the map for `transformer` surfaces the prevalence of the architecture across research topics:
Language models, Vision, Speech, 3D modeling, RL, EEG, Pruning and compression.
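Under the hood, a map search like this is just nearest-neighbor lookup in embedding space. Here's a toy sketch with hand-made 4-dim vectors standing in for real abstract embeddings (the titles and vectors are illustrative, not actual Atlas data):

```python
import numpy as np

# Hypothetical, hand-made 4-dim "embeddings" for a few abstracts;
# a real map would use vectors from a text embedding model.
abstracts = {
    "Attention Is All You Need":        np.array([0.9, 0.1, 0.0, 0.1]),
    "Vision Transformers at Scale":     np.array([0.8, 0.2, 0.1, 0.0]),
    "Q-Learning with Function Approx.": np.array([0.1, 0.9, 0.2, 0.0]),
    "Kernel Methods for Speech":        np.array([0.0, 0.1, 0.9, 0.3]),
}
query = np.array([1.0, 0.0, 0.0, 0.0])  # stand-in vector for "transformer"

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank every abstract by similarity to the query vector
ranked = sorted(abstracts, key=lambda t: cosine(abstracts[t], query), reverse=True)
```

The transformer-related titles rank first because their vectors point in the query's direction — the same mechanism that lights up transformer papers across every cluster of the map.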
So exciting to get a chance to collaborate with @Wikipedia & @Wikimedia on the first full multilingual Wikipedia map! Even more excited that the entire pipeline (encoder, article vectors, and visualization method) is open source 🧵
@Wikipedia is an incredible resource for both machine and human learning, but it lacked the infrastructure to be fully utilized in open source. We wanted to change that.
@cohere was the first to make strides in this area with their open dataset of simple-wiki embeddings. Unfortunately, that dataset was neither comprehensive nor openly reproducible.
- First general purpose Mixture-of-Experts (MoE) embedding model
- SOTA performance on the multilingual MIRACL benchmark for its size
- Support for 100+ languages
- Truly open source - open training data, weights, & code
- Apache 2.0 License
Why Mixture-of-Experts? An MoE layer activates only a subset of model parameters during training and inference, routing each input to the experts most relevant to it. This maintains strong performance on downstream tasks while cutting compute costs and memory usage.
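The routing idea can be sketched in a few lines of numpy. This is a generic top-k MoE layer with made-up sizes (8 experts, top-2 routing), not the actual architecture or parameters of the model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_experts, d, k = 8, 16, 2  # illustrative sizes: 8 experts, top-2 routing

W_gate = rng.standard_normal((d, n_experts))              # router weights
experts = [rng.standard_normal((d, d)) for _ in range(n_experts)]

def moe_layer(x):
    logits = x @ W_gate
    top_k = np.argsort(logits)[-k:]        # only k of the 8 experts run
    weights = np.exp(logits[top_k])
    weights /= weights.sum()               # softmax over the selected experts
    out = sum(w * (x @ experts[i]) for w, i in zip(weights, top_k))
    return out, top_k

x = rng.standard_normal(d)
y, used = moe_layer(x)                     # 6 of 8 expert matrices never touched
```

Only `k` expert matrices participate in each forward pass, which is where the compute and memory savings come from.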
Today, every Nomic-Embed-Text embedding becomes multimodal. Introducing Nomic-Embed-Vision:
- a high quality, unified embedding space for image, text, and multimodal tasks
- outperforms both OpenAI CLIP and text-embedding-3-small
- open weights and code to enable indie hacking, research, and experimentation
- released in collaboration with @MongoDB, @llama_index, @LangChainAI, @huggingface, @awscloud, @digitalocean, @LambdaAPI
Existing text-image embedding models, including OpenAI’s CLIP, dramatically underperform specialized text encoders on text retrieval tasks. This forces developers to deploy several embedding models and store several vector indices for multimodal applications. With Nomic-Embed-Vision, developers can use a single vector space to power both their text-text and text-image retrieval tasks.
We’ve been honored by the reception of Nomic-Embed-Text, which has grown into one of the most downloaded models on @huggingface.
We designed Nomic-Embed-Vision to be compatible with Nomic-Embed-Text out of the box, making it easy for developers using Nomic-Embed-Text to extend their applications with multimodal features.
Put simply, any vector created using Nomic-Embed-Text can be used to query vectors created by Nomic-Embed-Vision, and vice versa.
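Cross-modal retrieval in a shared space then reduces to the same similarity search as text-text retrieval. A toy sketch with stand-in 3-dim vectors (real vectors would come from the two aligned encoders; filenames and values here are hypothetical):

```python
import numpy as np

# Toy image-side vectors; near-duplicates of what a vision encoder would emit.
image_index = {
    "dog.jpg": np.array([0.9, 0.1, 0.0]),
    "cat.jpg": np.array([0.1, 0.9, 0.0]),
    "car.jpg": np.array([0.0, 0.1, 0.9]),
}
# Stand-in for the text-side embedding of "a photo of a dog"
text_query = np.array([1.0, 0.0, 0.1])

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# One index, one similarity function, both directions of retrieval
best = max(image_index, key=lambda name: cosine(image_index[name], text_query))
```

Because both encoders write into the same space, the same index serves text-to-text and text-to-image queries — no second vector store needed.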
- Deduplicate your text, image and embedding datasets in your web browser.
- Scales to millions of datapoints (e.g. English Wikipedia)
- Cross-correlate with real-time regex search and semantic lassos.
Duplicate detection is a critical component of curating datasets for AI training.
Atlas is the only dataset platform that lets you perform this operation both in your web browser and programmatically.
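Embedding-based dedup boils down to flagging pairs whose vectors are nearly identical. A minimal sketch, assuming a cosine-similarity threshold (the vectors and the 0.99 cutoff are illustrative, not what Atlas uses internally):

```python
import numpy as np

# Toy embeddings; near-duplicates have nearly identical vectors.
docs = {
    "a": np.array([1.0, 0.0, 0.0]),
    "b": np.array([0.99, 0.01, 0.0]),   # near-duplicate of "a"
    "c": np.array([0.0, 1.0, 0.0]),
}
THRESHOLD = 0.99  # similarity above this is flagged as duplicate (tunable)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

names = list(docs)
dupes = [(names[i], names[j])
         for i in range(len(names)) for j in range(i + 1, len(names))
         if cosine(docs[names[i]], docs[names[j]]) > THRESHOLD]
```

At millions of points the all-pairs loop gets replaced by an approximate nearest-neighbor index, but the flagging rule is the same.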
- Variable-sized embeddings with Matryoshka representation learning and an 8192-token context.
- Outperforms OpenAI text-embedding-3-small across output sizes.
- Open source, open training code, open data.
Day 0 in @LangChainAI, @llama_index and @MongoDB
Performance is critical for the production use of embeddings but what about the memory, storage, and bandwidth footprint of the vectors?
Nomic Embed v1.5 lets you trade off memory footprint for performance, all in one model.
Blog: blog.nomic.ai/posts/nomic-em…
You can use Nomic Embed v1.5 in production through the Nomic Embedding API or run the open-weights yourself. docs.nomic.ai/reference/endp…
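The resizing trick can be sketched in plain numpy: under Matryoshka training, the leading coordinates carry most of the signal, so you can truncate a vector and renormalize it. The 768 and 256 sizes below are illustrative, not prescriptive:

```python
import numpy as np

rng = np.random.default_rng(0)
full = rng.standard_normal(768)
full /= np.linalg.norm(full)        # unit-norm full-size embedding

def truncate(v, dim):
    """Keep the leading `dim` coordinates and renormalize to unit length."""
    small = v[:dim].copy()
    return small / np.linalg.norm(small)

v256 = truncate(full, 256)          # 3x less storage and bandwidth per vector
```

The truncated vector stays unit-norm, so it drops into the same cosine-similarity pipeline as the full-size one — you only pay a (benchmarked) accuracy cost, not an integration cost.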
Introducing Nomic Embed - the first fully open long-context text embedder to beat OpenAI
- Open source, open weights, open data
- Beats OpenAI text-embedding-3-small and Ada on short- and long-context benchmarks
- Day 1 integrations with @LangChainAI, @llama_index, @MongoDB
Open source models are not replicable unless you have access to their training data.
We release our training dataset of 235M curated text pairs to enable anyone to replicate Nomic Embed from scratch.
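Training on text pairs typically means an in-batch contrastive objective: each pair's partner is the positive, every other example in the batch is a negative. A minimal sketch of such an InfoNCE-style loss (the temperature and shapes are illustrative; this is not Nomic's actual training code):

```python
import numpy as np

def info_nce(queries, docs, temp=0.07):
    """In-batch contrastive loss: pair i's doc is the positive,
    every other doc in the batch serves as a negative."""
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    d = docs / np.linalg.norm(docs, axis=1, keepdims=True)
    logits = q @ d.T / temp
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))   # mean NLL of the positives

rng = np.random.default_rng(0)
pairs = rng.standard_normal((8, 32))
aligned_loss = info_nce(pairs, pairs)            # perfectly aligned pairs
shuffled_loss = info_nce(pairs, rng.standard_normal((8, 32)))  # unrelated docs
```

With the released 235M pairs, plugging real encoder outputs into a loss of this shape is the core of the replication recipe.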