LangChain
Oct 18 · 14 tweets · 10 min read
⭐️ Prompt Trends + Highlights ⭐️

We recently launched the LangChain Hub to support prompt sharing + workshopping.

We collected hundreds of prompts across many use-cases.

Here, we distill major themes and highlight interesting examples.

Blog:
blog.langchain.dev/the-prompt-lan…
Reasoning 🧠

Simple instructions ("think step by step") can improve many reasoning tasks.

Great thread from @_jasonwei w/ trade-offs:

Recent @GoogleDeepMind work compares accuracy across many such instructions:

arxiv.org/abs/2309.03409
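As a minimal sketch, zero-shot chain-of-thought is just an instruction appended to the question. The helper below is illustrative; the second instruction is one the DeepMind paper above reports finding via prompt optimization:

```python
# Minimal sketch of zero-shot chain-of-thought prompting: append a reasoning
# instruction to the question before sending it to the model.
REASONING_INSTRUCTIONS = [
    "Let's think step by step.",
    "Take a deep breath and work on this problem step-by-step.",
]

def cot_prompt(question: str, instruction: str = REASONING_INSTRUCTIONS[0]) -> str:
    """Wrap a question with a chain-of-thought style instruction."""
    return f"Q: {question}\nA: {instruction}"
```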
Writing ✍️

@mattshumer_ has shared some of our favorite prompts to improve your writing:



Also nice prompts for content generation (tests c/o @GregKamradt, threads c/o @HardKothari):

smith.langchain.com/hub/rlm/matt-s…
smith.langchain.com/hub/rlm/matt-s…
smith.langchain.com/hub/gregkamrad…
smith.langchain.com/hub/hardkothar…
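A rough sketch of what a writing-improvement prompt can look like (wording here is illustrative, not the exact hub prompts linked above):

```python
# Illustrative writing-improvement prompt: fix a draft while preserving
# its meaning and tone.
WRITING_TEMPLATE = (
    "You are an expert editor. Rewrite the following draft to be clearer "
    "and more concise while preserving its meaning and tone:\n\n{draft}"
)

def improve_writing_prompt(draft: str) -> str:
    """Build an editing prompt around a draft."""
    return WRITING_TEMPLATE.format(draft=draft)
```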
SQL 🗄️

@fpingham + others have done great work on text-to-SQL.

Giving the LLM a CREATE TABLE description plus example rows (via a SELECT statement) improves SQL generation.

Prompt:
smith.langchain.com/hub/rlm/text-t…

Paper:
arxiv.org/pdf/2204.00498…
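The pattern can be sketched as a prompt builder; the comment style loosely follows the paper's format, and the arguments are illustrative:

```python
# Sketch of the text-to-SQL prompt pattern: show the model the schema
# (CREATE TABLE) plus a few example rows, then ask the question in SQL comments.
def text_to_sql_prompt(create_table: str, table: str, sample_rows: str, question: str) -> str:
    """Build a text-to-SQL prompt from schema, example rows, and a question."""
    return (
        f"{create_table}\n\n"
        f"/*\n3 example rows:\nSELECT * FROM {table} LIMIT 3;\n{sample_rows}\n*/\n\n"
        "-- Using valid SQL, answer the following question.\n"
        f"-- Question: {question}\nSELECT"
    )
```

Ending the prompt with `SELECT` nudges the model to complete a query rather than answer in prose.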
Brainstorming 🧑‍🏫

@mattshumer_ shared a great prompt using multiple user personas to ideate on business plans:


Also, a prompt from NASA to emulate the strategies used by living things for design ideation:

smith.langchain.com/hub/hwchase17/…
smith.langchain.com/hub/bruffridge…
www1.grc.nasa.gov/research-and-e…
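A rough sketch of the multi-persona ideation pattern (personas and wording are invented for illustration):

```python
# Illustrative multi-persona brainstorming prompt: ask the model to critique
# an idea from several user perspectives in turn.
PERSONAS = ["budget-conscious student", "busy parent", "small-business owner"]

def persona_brainstorm_prompt(idea: str, personas: list[str] = PERSONAS) -> str:
    """Build a prompt that ideates on a business idea from multiple personas."""
    roles = "\n".join(f"- {p}" for p in personas)
    return (
        f"Business idea: {idea}\n\n"
        f"Adopt each of the following personas in turn and give feedback "
        f"on the idea from that perspective:\n{roles}"
    )
```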
Extraction (1/2) 📒

LLMs + function calling are powerful for extraction.

See @jxnlco's great work on structured prompting for additional context:


We've seen several prompts to support function calling, such as this:
jxnl.github.io/instructor/tip…
smith.langchain.com/hub/homanp/sup…
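The function-calling pattern can be sketched as a JSON Schema tool definition; the field names here are illustrative:

```python
# Sketch of a function/tool definition for extraction. This dict would be
# passed to a chat model as a tool; the model then returns arguments matching
# the schema instead of free-form text.
extract_person = {
    "name": "extract_person",
    "description": "Extract structured information about a person from text.",
    "parameters": {
        "type": "object",
        "properties": {
            "name": {"type": "string", "description": "The person's full name"},
            "age": {"type": "integer", "description": "Age in years, if stated"},
        },
        "required": ["name"],
    },
}
```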
Extraction (2/2) 📒

@yoheinakajima's Instagraph is a great example of extraction (knowledge graph triples):


Here's a prompt we have used for triple extraction:
github.com/yoheinakajima/…
smith.langchain.com/hub/langchain/…
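Triple extraction can be sketched as a prompt asking for delimited triples plus a parser for the output (the pipe-delimited format is illustrative, not the exact hub prompt):

```python
# Illustrative knowledge-graph triple extraction: prompt for pipe-delimited
# triples, then parse the model's output into tuples.
TRIPLE_TEMPLATE = (
    "Extract (subject, predicate, object) triples from the text below. "
    "Return one triple per line as: subject | predicate | object.\n\nText: {text}"
)

def parse_triples(output: str) -> list[tuple[str, ...]]:
    """Parse 'subject | predicate | object' lines into tuples."""
    return [
        tuple(part.strip() for part in line.split("|"))
        for line in output.splitlines()
        if line.count("|") == 2
    ]
```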
RAG 📓

Retrieval augmented generation (RAG) is one of the most popular LLM applications:


We've seen prompt adaptation to support RAG w/ instructions for many open-source LLMs (LLaMA 2, Mistral, etc.):

smith.langchain.com/hub/rlm/rag-pr…
smith.langchain.com/hub/rlm/rag-pr…
smith.langchain.com/hub/rlm/rag-pr…
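A basic RAG prompt can be sketched as stuffing retrieved chunks into a template (wording is illustrative):

```python
# Illustrative RAG prompt: place retrieved document chunks in the context and
# instruct the model to answer only from them.
RAG_TEMPLATE = (
    "Use the following context to answer the question. "
    "If the answer is not in the context, say you don't know.\n\n"
    "Context:\n{context}\n\nQuestion: {question}\nAnswer:"
)

def rag_prompt(docs: list[str], question: str) -> str:
    """Join retrieved chunks and fill the RAG template."""
    return RAG_TEMPLATE.format(context="\n\n".join(docs), question=question)
```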
LLM Graders ✏️

Using LLMs as graders is a powerful idea in evaluation workflows.

Lots of work in LangSmith has focused on this:


Some useful prompts:

docs.smith.langchain.com/evaluation?ref…
smith.langchain.com/hub/simonp/mod…
smith.langchain.com/hub/wfh/automa…
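The LLM-as-grader pattern can be sketched as a grading prompt plus a verdict parser (rubric and verdict format are illustrative). Note the parser checks INCORRECT first, since "CORRECT" is a substring of it:

```python
# Illustrative grader prompt: ask for a fixed verdict string plus reasoning,
# so the verdict is easy to parse programmatically.
GRADER_TEMPLATE = (
    "You are grading a student answer for correctness against a reference answer.\n"
    "Question: {question}\nReference: {reference}\nStudent answer: {answer}\n\n"
    "Respond with GRADE: CORRECT or GRADE: INCORRECT, then one sentence of reasoning."
)

def parse_grade(output: str) -> bool:
    """Return True only if the grader's verdict is CORRECT."""
    verdict = output.upper()
    if "GRADE: INCORRECT" in verdict:
        return False
    return "GRADE: CORRECT" in verdict
```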
Synthetic Data generation 📚

Gathering training data to support LLM fine-tuning is a challenge.

There's some great work from @AnthropicAI and others on this:


A few prompts to generate synthetic datasets:

evals.anthropic.com/model-written/…
smith.langchain.com/hub/homanp/que…
smith.langchain.com/hub/gitmaxd/sy…
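A sketch of generating synthetic QA pairs from a document, a common way to build fine-tuning or eval sets (format is illustrative):

```python
# Illustrative synthetic-data prompt: generate grounded question/answer pairs
# from a source document in an easy-to-parse format.
SYNTHETIC_QA_TEMPLATE = (
    "Generate {n} diverse question-answer pairs grounded in the document below. "
    "Format each pair as:\nQ: <question>\nA: <answer>\n\nDocument:\n{document}"
)

def synthetic_qa_prompt(document: str, n: int = 5) -> str:
    """Build a prompt asking for n QA pairs grounded in the document."""
    return SYNTHETIC_QA_TEMPLATE.format(n=n, document=document)
```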
Prompt Optimization 🤖

LLMs can serve as translation modules between human instruction and LLM-optimized prompts.

We've seen a few of these, e.g., for @midjourney: "Freddie Mercury performing at the 2023 San Francisco Pride Parade":

smith.langchain.com/hub/hardkothar…
smith.langchain.com/hub/aemonk/mid…
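The prompt-optimizer pattern can be sketched as one LLM call that rewrites a plain request into a richer, model-specific prompt (wording is illustrative):

```python
# Illustrative prompt optimizer for an image model: expand a terse user
# request into a detailed generation prompt.
OPTIMIZER_TEMPLATE = (
    "Rewrite the user's request as a detailed image-generation prompt. "
    "Add style, lighting, and composition details; keep the subject unchanged.\n\n"
    "Request: {request}\nOptimized prompt:"
)

def optimize_request(request: str) -> str:
    """Build the meta-prompt that asks an LLM to optimize a user request."""
    return OPTIMIZER_TEMPLATE.format(request=request)
```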
Code Understanding and Generation 👩‍💻

Code analysis is one of the most popular LLM use-cases (e.g., Copilot, Code Interpreter, etc.).

We've seen many prompts for code review and generation:

smith.langchain.com/hub/chuxij/ope…
smith.langchain.com/hub/homanp/git…
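A sketch of what a code-review prompt can look like (the rubric is illustrative, not the exact hub prompts):

```python
# Illustrative code-review prompt: ask for bugs, style issues, and missing
# error handling, each with a suggested fix.
CODE_REVIEW_TEMPLATE = (
    "Review the following {language} code. Point out bugs, style issues, and "
    "missing error handling, each with a suggested fix.\n\nCode:\n{code}"
)

def code_review_prompt(code: str, language: str = "python") -> str:
    """Build a review prompt for a code snippet."""
    return CODE_REVIEW_TEMPLATE.format(language=language, code=code)
```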
Summarization ⏳

Content summarization is a powerful LLM use-case.

Longer-context LLMs, such as @AnthropicAI's Claude 2, can ingest 100+ pages for summarization:


Techniques like chain of density offer a complementary approach:
smith.langchain.com/hub/hwchase17/…
smith.langchain.com/hub/lawwu/chai…
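One step of the chain-of-density idea can be sketched as a prompt that folds missing entities into the summary without growing it (wording is illustrative of the technique, not the exact hub prompt):

```python
# Illustrative chain-of-density step: identify entities missing from the
# current summary and rewrite the summary to include them at the same length.
# Running this repeatedly yields progressively denser summaries.
DENSITY_STEP_TEMPLATE = (
    "Article:\n{article}\n\nCurrent summary:\n{summary}\n\n"
    "Identify 1-3 informative entities from the article that are missing from "
    "the summary, then rewrite the summary to include them without making it longer."
)

def density_step_prompt(article: str, summary: str) -> str:
    """Build one densification step over the current summary."""
    return DENSITY_STEP_TEMPLATE.format(article=article, summary=summary)
```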
Workshop and test any prompt in LangChain hub.

It offers a playground w/ a wide variety of LLMs:
+ @thefireworksai: OSS models (e.g., LLaMA2)
+ OpenAI
+ Anthropic
+ Google PaLM
+ ... and more

