Cohere · Give your technology language.
Apr 4 6 tweets 2 min read
Today, we’re introducing Command R+: a state-of-the-art RAG-optimized LLM designed to tackle enterprise-grade workloads and speak the languages of global business.

Our R-series model family is now available on Microsoft Azure, and coming soon to additional cloud providers.

Command R+ offers best-in-class Retrieval-Augmented Generation (RAG) capabilities, providing accurate, enterprise-ready answers with citations that reduce hallucinations.
Jun 2, 2023 5 tweets 1 min read
1/ Nabila Abraham introduces a detailed guide on implementing semantic search using OpenSearch and Cohere, a powerful combination for searching large data sets. Follow the link for a comprehensive demo: 🔍 txt.cohere.com/semantic-searc…

2/ The demo shows how to leverage OpenSearch's support for vector search and Cohere’s high-quality embeddings to improve text search capabilities. This brings more context and relevance to search results than traditional keyword-based methods. 💡
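A minimal sketch of the pattern from the demo, assuming the Cohere Python SDK and opensearch-py with the k-NN plugin enabled; the model name, index settings, and 4096-dim embedding size are assumptions, not the demo's exact code:

```python
import cohere
from opensearchpy import OpenSearch

co = cohere.Client("YOUR_API_KEY")
client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])

docs = ["Cohere embeddings capture the meaning of text.",
        "OpenSearch supports approximate k-NN vector search."]

# 1. Create an index with a knn_vector field sized to the embedding model
client.indices.create(index="docs", body={
    "settings": {"index": {"knn": True}},
    "mappings": {"properties": {
        "text": {"type": "text"},
        "embedding": {"type": "knn_vector", "dimension": 4096},  # assumption
    }},
})

# 2. Embed the documents and index them alongside the raw text
vectors = co.embed(texts=docs, model="embed-english-v2.0").embeddings
for i, (doc, vec) in enumerate(zip(docs, vectors)):
    client.index(index="docs", id=i, body={"text": doc, "embedding": vec}, refresh=True)

# 3. Embed the query and run a k-NN (vector) search
query_vec = co.embed(texts=["how do I search by meaning?"], model="embed-english-v2.0").embeddings[0]
hits = client.search(index="docs", body={
    "size": 2,
    "query": {"knn": {"embedding": {"vector": query_vec, "k": 2}}},
})
for hit in hits["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["text"])
```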
May 23, 2023 5 tweets 2 min read
1/5: Interested in Transformer Models in machine learning? They are incredibly good at keeping track of context, which is why the text they write makes sense. Check out this video for more on their architecture and functionality:

2/5: Introduced in "Attention Is All You Need," Transformer Models are used for everything from writing creative content to interacting with humans, thanks to their architecture. For a deeper dive into their components, visit LLM University: docs.cohere.com/docs/transform…
Apr 14, 2023 5 tweets 2 min read
1/ 🚀 Exciting news! Cohere's multilingual embedding model now enables cross-lingual text classification in 100+ languages! 🌟 Read our latest blog post by @Nils_Reimers, @amrmkayid, & Elliott Choi to learn how you can leverage this groundbreaking tech:
txt.cohere.ai/cross-lingual-…

2/ With this model, you can excel in sentiment analysis, content moderation, and intent recognition, all while outperforming the alternatives! 💪🎯
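A hypothetical sketch of the idea, assuming the Cohere Python SDK and scikit-learn; the texts, labels, and model name are illustrative only:

```python
import cohere
from sklearn.linear_model import LogisticRegression

co = cohere.Client("YOUR_API_KEY")
MODEL = "embed-multilingual-v2.0"  # assumption: the multilingual embedding model

train_texts = [
    "I love this product",          # English
    "C'est vraiment décevant",      # French
    "¡El servicio fue excelente!",  # Spanish
    "Das war eine Enttäuschung",    # German
]
train_labels = ["positive", "negative", "positive", "negative"]

# All languages land in one embedding space, so a single classifier covers them all
X = co.embed(texts=train_texts, model=MODEL).embeddings
clf = LogisticRegression().fit(X, train_labels)

# Classify text in a language the classifier never saw during training
test = co.embed(texts=["この製品は素晴らしい"], model=MODEL).embeddings
print(clf.predict(test))
```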
Apr 13, 2023 4 tweets 3 min read
1/ 🤔 Should we care about machine learning model interpretability? Professor @hima_lakkaraju tackles questions about model understanding and its implications for real-world use cases of large language models. 🌐

2/ 🎓 Harvard Prof. Lakkaraju demonstrates TalkToModel, an interactive dialogue system that explains ML models through conversation. 🗣️ It makes a compelling case for conversational explainable AI (XAI) interfaces.
Apr 12, 2023 7 tweets 2 min read
1/ 🧠 Ever wondered how Transformer Models work and why they're such a big deal in machine learning? 🤖 @luis_likes_math breaks it down! 🧵
txt.cohere.ai/what-are-trans…

2/ 📚 Transformers were introduced in the paper "Attention Is All You Need" & can do amazing things like writing stories, answering ❔s, & even passing exams! 🎓 They're great at keeping track of context, which is why their generated text makes sense. 😮
arxiv.org/abs/1706.03762
Apr 6, 2023 12 tweets 16 min read
(1/12) 🚀 Don't fall behind! Stay ahead of the game with March 2023's top NLP papers 📄 Curated by @forai_ml, this list covers the latest advancements in NLP.
Get up to speed with the latest language AI advancements now! 🔥
Post generated with Cohere. 🧵
txt.cohere.ai/unlocking-new-…

(2/12) PaLM-E: An Embodied Multimodal Language Model
Authors: @DannyDriess, @xf1280, Mehdi S. M. Sajjadi, @coreylynch, @achowdhery, @brian_ichter, @ayzwah, @JonathanTompson, @QuanVng, @TianheYu,
@wenlong_huang, @YevgenChebotar, @psermanet, @duck, et al.
Mar 29, 2023 10 tweets 6 min read
When trying to map out the rapidly growing generative AI landscape, where do you even start? 💥

@JayAlammar shares some observations on the value of the AI technology stack.

We’ll also take a look at where some of the technical moats might be. (Thread)

A natural place to start is an AI tech stack made up of these three layers: Application, Models, and Cloud Platform.
Mar 24, 2023 6 tweets 2 min read
1/ 🚀 Turbocharge Semantic Search with AI in 5 Easy Steps! Learn how to build Cofinder, an AI-powered semantic search app using Cohere's API and Streamlit! 🎯💻🧵
txt.cohere.ai/turbocharge-se…

2/ 🌐 Cofinder helps the Cohere Community find relevant content based on personal goals. Just ask natural language questions & get the most relevant content, answers, & context! 🗂️🔍
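The post builds the real Cofinder app step by step; below is a stripped-down sketch of the same pattern, assuming the Cohere Python SDK and Streamlit. The document list, model name, and scoring are placeholders, not the app's actual code:

```python
# streamlit_app.py
import cohere
import numpy as np
import streamlit as st

co = cohere.Client("YOUR_API_KEY")
MODEL = "embed-english-v2.0"  # assumption

docs = [
    "How to fine-tune a generative model",
    "Getting started with the Embed endpoint",
    "Building a Discord bot with Cohere",
]
doc_vecs = np.array(co.embed(texts=docs, model=MODEL).embeddings)

st.title("Cofinder-style semantic search")
query = st.text_input("Ask a question about the Cohere community")

if query:
    q = np.array(co.embed(texts=[query], model=MODEL).embeddings[0])
    # Cosine similarity between the query and every document
    scores = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
    for i in np.argsort(-scores)[:3]:
        st.write(f"{scores[i]:.2f}  {docs[i]}")
```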
Mar 24, 2023 18 tweets 9 min read
You have so many cool ideas for building LLM-powered applications. If only there were a faster way to build and test them out. ⚡

Good news — you can do that with Cohere and @LangChainAI. Let’s see how. (Thread)

With Cohere, you get access to large language models (LLMs) via a simple API, without needing machine learning know-how.

You get two key types of language processing capabilities — text generation and text embedding — each served by a different family of models.
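A minimal sketch of both capabilities through LangChain, assuming the import paths of the early-2023 LangChain releases (they have since moved, so treat them as an assumption):

```python
from langchain.embeddings import CohereEmbeddings
from langchain.llms import Cohere

# 1. Text generation: the LLM wrapper calls Cohere's generation models
llm = Cohere(cohere_api_key="YOUR_API_KEY")
print(llm("Write a one-line tagline for a pet store."))

# 2. Text embedding: a separate model family turns text into vectors
embeddings = CohereEmbeddings(cohere_api_key="YOUR_API_KEY")
vector = embeddings.embed_query("What is semantic search?")
print(len(vector))  # the embedding's dimensionality
```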
Mar 9, 2023 15 tweets 3 min read
Semantic search is a very effective way to search documents with a query.

But what exactly does the word “semantic” mean here?

Probably the best way to understand semantic search is to understand what is *not* semantic search.

Let’s take a look. (Thread)

Before semantic search, the most popular way of searching was keyword search.

Imagine you have a list of many sentences. When you ask a question (query), keyword search looks for the sentence (response) with the largest number of words in common with the query.
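A toy sketch of that word-overlap scoring (plain Python, just for illustration; the sentences are made up):

```python
import re

def keyword_score(query: str, sentence: str) -> int:
    """Count the words a query and a candidate response have in common."""
    q = set(re.findall(r"\w+", query.lower()))
    s = set(re.findall(r"\w+", sentence.lower()))
    return len(q & s)

query = "What is the capital of the United States?"
sentences = [
    "Washington, D.C. is the capital of the United States.",
    "The United States has many beautiful national parks.",
]
for s in sentences:
    print(keyword_score(query, s), s)  # the first sentence shares more words with the query
```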
Mar 1, 2023 15 tweets 4 min read
A custom language model excels at a specific task, meaning you get an extra performance gain.

Training your own custom Cohere models is easy, and it doesn’t require machine learning skills.

Let’s see how to do that. (Thread)

First, let’s see why and when you might want to create a custom model.

A base generative model is already trained on massive volumes of data, making it great at capturing patterns on a broad scale.

But sometimes, your task contains highly specific nuances. For example:
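As a hypothetical illustration of what comes after training (the training itself happens in the Cohere dashboard, no ML code required), you call a custom model by its ID just like a base model. A sketch, assuming the Cohere Python SDK; the model ID and prompt are placeholders:

```python
import cohere

co = cohere.Client("YOUR_API_KEY")

response = co.generate(
    model="your-custom-model-id",  # placeholder: copied from the Cohere dashboard
    prompt="Write a product description for noise-cancelling headphones:",
    max_tokens=100,
)
print(response.generations[0].text)
```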
Feb 17, 2023 12 tweets 3 min read
Our text generation endpoint doesn’t just return text in its response. 💬

It also returns “likelihood” values.

Let’s deconstruct the response and see what these values mean. (Thread)

The Generate endpoint accepts a text input (the prompt) and outputs a “Generation” object.
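A sketch of pulling those values out of a response, assuming the Cohere Python SDK's classic Generate API (field names may differ across SDK versions):

```python
import cohere

co = cohere.Client("YOUR_API_KEY")

response = co.generate(
    prompt="Write a tagline for an ice cream shop:",
    max_tokens=20,
    return_likelihoods="GENERATION",  # also accepts "ALL" or "NONE"
)

gen = response.generations[0]
print(gen.text)
print(gen.likelihood)            # average log-likelihood of the generated tokens
for t in gen.token_likelihoods:  # one entry per generated token
    print(t.token, t.likelihood)
```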
Jan 27, 2023 9 tweets 3 min read
Traditional keyword search has its limitations — it often doesn’t find the relevant information that matches the user’s search intent. 🔍

Here’s how Cohere’s multilingual text understanding model solves this problem. (Thread)

Let’s take an example. Here’s a simple search query: “What is the capital of the United States?” A traditional keyword-based search would produce the following results.
Jan 25, 2023 16 tweets 4 min read
Word and sentence embeddings are the bread and butter of language models. 📖

Here is a very simple introduction to embeddings. (Thread)

The quintessential task of natural language processing (NLP) is to understand human language.

However, there is a big disconnect — humans speak in words and sentences, but computers only understand and process numbers.
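Embeddings bridge that gap by turning text into numbers. A minimal sketch, assuming the Cohere Python SDK (the model name is an assumption):

```python
import cohere

co = cohere.Client("YOUR_API_KEY")
vector = co.embed(texts=["Hello, world"], model="embed-english-v2.0").embeddings[0]

print(len(vector))  # a few thousand numbers...
print(vector[:5])   # ...each of them a plain float the computer can work with
```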
Jan 11, 2023 10 tweets 2 min read
How to find ideas for your next generative AI app? 💡

By identifying your goal - the problem you want to solve. 🎯

Let’s look at some example goals that are well-suited for large language models. (Thread)

1. Automate repetitive tasks

When you want to produce outputs on a consistent basis with a certain format and quality. 

Example use cases:
- Producing ad copy
- Creating product descriptions
- Extracting phone numbers from text
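A minimal sketch of the last use case above, assuming the Cohere Python SDK (the prompt and parameters are illustrative, not a recommended recipe):

```python
import cohere

co = cohere.Client("YOUR_API_KEY")

text = "Call our sales team at 555-0134 or support at 555-0199 for help."
response = co.generate(
    prompt=f"Extract all phone numbers from the text below.\n\nText: {text}\n\nPhone numbers:",
    max_tokens=30,
    temperature=0,  # keep the output consistent for a repetitive task
)
print(response.generations[0].text)
```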
Dec 8, 2022 12 tweets 5 min read
How do you get text-generation AI to give you what you need?

At the heart of it is designing a good prompt. With Cohere's models, you can construct a prompt in two ways:

1. Prompting by Instruction 🎙
2. Prompting by Example 📚

Let’s see how to use them. 🧵

1. Prompting by Instruction 🎙

This works best with our Command-Xlarge model (currently in beta). When using this model, you want to tell rather than show.

This type of prompt tends to generate more open-ended responses, with more variety in the output format.
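A hypothetical side-by-side of the two prompt styles, assuming the Cohere Python SDK (the prompts are my own illustration, not taken from the thread):

```python
import cohere

co = cohere.Client("YOUR_API_KEY")

# 1. Prompting by Instruction 🎙: tell the model what you want
instruction = "Write a short, upbeat product description for a stainless steel water bottle."
print(co.generate(prompt=instruction, max_tokens=60).generations[0].text)

# 2. Prompting by Example 📚: show an input/output pair and let the model continue the pattern
example_prompt = """Product: wireless mouse
Description: A sleek wireless mouse that glides smoothly and lasts for weeks on one charge.

Product: stainless steel water bottle
Description:"""
print(co.generate(prompt=example_prompt, max_tokens=60).generations[0].text)
```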