LLMs unlock a natural language interface with structured data. Part 4 of our initiative to improve @LangChainAI docs shows how to use LLMs to write / execute SQL queries w/ chains and agents. Thanks @manuelsoria_ for work on the docs:
https://t.co/CyOqp5I3TM python.langchain.com/docs/use_cases…
1/ Text-to-SQL is an excellent LLM use case: many people can describe what they want in natural language, but have difficulty mapping that to a specific SQL query. LLMs can bridge this gap, e.g., see:
https://t.co/b0NMkHPe9x arxiv.org/pdf/2204.00498…
2/ create_sql_query_chain() maps from natural language to a SQL query: pass the question and the database into the chain, and get SQL out. Run the query on the database easily:
3/ The LangSmith trace is a great way to see that the chain employs ideas from the paper above: give the LLM a CREATE TABLE description for each table and three example rows in a SELECT statement. This gives the LLM context about the db structure:
https://t.co/Pqu86RFcJP smith.langchain.com/public/c8fa52e…
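That table-info prompt can be sketched with stdlib sqlite3 — `table_context` is a hypothetical helper for illustration, not LangChain's actual implementation:

```python
import sqlite3

def table_context(conn, sample_rows=3):
    """Build LLM context: each table's CREATE TABLE statement plus a few
    example rows, as described in the paper above."""
    cur = conn.cursor()
    cur.execute("SELECT name, sql FROM sqlite_master WHERE type='table'")
    parts = []
    for name, create_sql in cur.fetchall():
        rows = cur.execute(f"SELECT * FROM {name} LIMIT {sample_rows}").fetchall()
        cols = [d[0] for d in cur.description]
        sample = "\n".join(str(r) for r in rows)
        parts.append(f"{create_sql}\n/*\n{sample_rows} rows from {name}:\n"
                     f"{' '.join(cols)}\n{sample}\n*/")
    return "\n\n".join(parts)

# Toy database to show the output shape.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE artists (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO artists VALUES (?, ?)",
                 [(1, "AC/DC"), (2, "Accept"), (3, "Aerosmith")])
context = table_context(conn)
```

The resulting string (schema + sampled rows) is what gets prepended to the question so the model knows the column names and value formats.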
4/ Extending this, SQLDatabaseChain will generate the query, execute it, and also synthesize the result in natural language. This creates a natural language wrapper around a SQL DB w/ input and output:
https://t.co/avT4kRVIio smith.langchain.com/public/7f202a0…
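The generate → execute → synthesize loop can be sketched in plain Python with a stubbed model — `sql_qa` and `fake_llm` are made-up names for illustration, not the SQLDatabaseChain API:

```python
import sqlite3

def sql_qa(question, conn, llm):
    # 1) generate SQL, 2) execute it, 3) synthesize a natural-language answer.
    sql = llm(f"Write a SQL query answering: {question}")
    rows = conn.execute(sql).fetchall()
    return llm(f"Question: {question}\nSQL result: {rows}\nAnswer in plain English:")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT)")
conn.executemany("INSERT INTO employees VALUES (?)", [("Ann",), ("Bo",)])

def fake_llm(prompt):
    # Stand-in for a real model so the sketch runs offline.
    if prompt.startswith("Write a SQL"):
        return "SELECT COUNT(*) FROM employees"
    return "There are 2 employees."

answer = sql_qa("How many employees are there?", conn, fake_llm)
```

Swapping `fake_llm` for a real model call gives the natural-language-in, natural-language-out wrapper the tweet describes.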
5/ Finally, SQL agents can be used for more complex tasks (multi-query) and can recover from errors. The trace shows how a ReAct agent can use a toolkit of SQL operations (read table, write query, run query):
https://t.co/zUrXja5bzV smith.langchain.com/public/a86dbe1…
6/ For more, see blog post and webinar w/ @fpingham and @JonZLuo:
Getting structured LLM output is hard! Part 3 of our initiative to improve @LangChainAI docs covers this w/ functions and parsers (see @GoogleColab ntbk). Thanks to @fpingham for improving the docs on this:
2/ Functions (e.g., using OpenAI models) have been a great way to tackle this problem, as shown by the work of @jxnlco and others. LLM calls a function and returns output that follows a specified schema. wandb.ai/jxnlco/functio…
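The schema-following idea can be sketched with a hand-rolled validator — the `record_person` spec and `parse_function_call` helper below are hypothetical, not the OpenAI SDK or @jxnlco's library:

```python
import json

# Hypothetical function spec: the model is told it may "call" this function,
# and returns its arguments as a JSON string.
PERSON_SCHEMA = {
    "name": "record_person",
    "parameters": {
        "type": "object",
        "properties": {"name": {"type": "string"}, "age": {"type": "integer"}},
        "required": ["name", "age"],
    },
}

def parse_function_call(arguments_json, schema):
    # Parse the model's arguments payload and check required fields,
    # giving structured output that downstream code can rely on.
    args = json.loads(arguments_json)
    missing = [k for k in schema["parameters"]["required"] if k not in args]
    if missing:
        raise ValueError(f"missing required fields: {missing}")
    return args

# What a model's function_call arguments might look like:
args = parse_function_call('{"name": "Ada", "age": 36}', PERSON_SCHEMA)
```

Real libraries add type checking and automatic retries on validation failure, but the contract is the same: schema in, validated dict out.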
We've kicked off a community-driven effort to improve @LangChainAI docs, starting w/ popular use cases. Here is the new use case doc on Summarization w/ @GoogleColab notebook for easy testing ...
https://t.co/e6QYl8pEsH python.langchain.com/docs/use_cases…
1/ Context window stuffing: adding full documents into the LLM context window for summarization is the easiest approach and is increasingly feasible as LLMs (e.g., @AnthropicAI Claude w/ 100k token window) get larger context windows (e.g., fits hundreds of pages).
https://t.co/aClREUqtPd
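Stuffing can be sketched in a few lines — `stuff_prompt` is a hypothetical helper, and the ~4-characters-per-token estimate is a crude rule of thumb, not a real tokenizer:

```python
def stuff_prompt(docs, question, max_tokens=100_000):
    """Concatenate full documents into one prompt; use a rough ~4 chars/token
    heuristic to check the result fits the model's context window."""
    body = "\n\n".join(docs)
    est_tokens = len(body) // 4
    if est_tokens > max_tokens:
        raise ValueError(f"~{est_tokens} tokens won't fit in {max_tokens}")
    return f"Summarize the following documents.\n\n{body}\n\nQuestion: {question}"

prompt = stuff_prompt(["Doc one text.", "Doc two text."], "What are the themes?")
```

When the estimate blows the budget you fall back to map-reduce or the sampling approach below.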
2/ Embed-cluster-sample: @GregKamradt demoed a cool approach w/ @LangChainAI to chunk, embed, cluster, and sample representative chunks that are passed to the LLM context window. A nice approach to save cost by reducing tokens sent to the LLM.
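The embed-cluster-sample idea can be sketched with a tiny hand-rolled k-means; the 2-d "embeddings" here are a toy stand-in (a real pipeline would use an embedding model), and all names are made up for illustration:

```python
import random

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(vs):
    return [sum(col) / len(vs) for col in zip(*vs)]

def embed_cluster_sample(chunks, embed, k, iters=10, seed=0):
    # Embed every chunk, k-means cluster the vectors, then keep the one
    # chunk nearest each centroid as that cluster's representative.
    vecs = [embed(c) for c in chunks]
    centroids = random.Random(seed).sample(vecs, k)
    for _ in range(iters):
        assign = [min(range(k), key=lambda j: dist2(v, centroids[j]))
                  for v in vecs]
        for j in range(k):
            members = [v for v, a in zip(vecs, assign) if a == j]
            if members:
                centroids[j] = mean(members)
    reps = []
    for j in range(k):
        idxs = [i for i, a in enumerate(assign) if a == j]
        if idxs:
            reps.append(min(idxs, key=lambda i: dist2(vecs[i], centroids[j])))
    return [chunks[i] for i in sorted(reps)]

# Toy "embedding": [length, count of 'a'] just to make the sketch runnable.
chunks = ["an apple a day", "apples and bananas", "zzz zzz zzz", "zzzz zz"]
reps = embed_cluster_sample(chunks, lambda t: [len(t), t.count("a")], k=2)
```

Only the `k` representative chunks are sent to the LLM, which is where the token savings come from.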
... there's a new loader for Etherscan transactions. Folks like @punk9059 may have a pulse on applications w/in the larger crypto community. Always interesting to learn about: python.langchain.com/docs/integrati…
Web research is a great LLM use case. @hwchase17 and I are releasing a new retriever to automate web research that is simple, configurable (can run in private-mode w/ Llama-v2, GPT4all, etc), & observable (use LangSmith to see what it's doing). Blog:
https://t.co/LU0PWDmrBE blog.langchain.dev/automating-web…
Projects like @assaf_elovic's gpt-researcher are great examples of research agents; we started with an agent, but landed on a simple retriever that executes LLM-generated search queries in parallel, indexes the loaded pages, and retrieves relevant chunks. LangSmith trace:
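The retriever's shape — fan the generated queries out in parallel, then dedupe the loaded pages — can be sketched with a stubbed search function; everything here (`PAGES`, `search`, `research`) is a toy stand-in, not the actual retriever:

```python
from concurrent.futures import ThreadPoolExecutor

# Toy corpus standing in for live web pages; a real retriever would send the
# LLM-generated queries to a search API and load the result URLs.
PAGES = {
    "langchain retriever": "A retriever returns documents relevant to a query.",
    "web research": "Web research automates searching, loading and reading pages.",
}

def search(query):
    # Stubbed search: return page texts whose key shares a word with the query.
    return [text for key, text in PAGES.items()
            if set(key.split()) & set(query.split())]

def research(queries):
    # Run all generated queries in parallel, then dedupe the loaded "pages".
    with ThreadPoolExecutor() as pool:
        results = pool.map(search, queries)
    seen, docs = set(), []
    for batch in results:
        for doc in batch:
            if doc not in seen:
                seen.add(doc)
                docs.append(doc)
    return docs

docs = research(["what is a langchain retriever", "how web research works"])
```

The real pipeline then chunks and indexes `docs` so only the relevant pieces reach the final answer prompt.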
The retriever is compatible w/ private workflows. Here's a trace running on my laptop (~50 tok/sec) w/ Llama-v2 and @nomic_ai GPT4all embeddings + @trychroma: LLM will generate search queries and also be used for the final answer generation. See docs: https://t.co/I5V51LVdOF python.langchain.com/docs/modules/d…
Document splitting is common for vector storage / retrieval, but useful context can be lost. @LangChainAI has 3 new "context-aware" text splitters that keep metadata about where each split came from. Works for code (py, js) c/o @cristobal_dev, PDFs c/o @CorranMac, and Markdown ..
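For the Markdown case, the idea fits in a few lines: split on headers and keep the header each chunk came from as metadata (`split_markdown` is a hypothetical toy, not the LangChain splitter):

```python
def split_markdown(text):
    """Split on '#' headers, attaching the header each chunk came from
    as metadata — a toy version of context-aware splitting."""
    chunks, header, lines = [], None, []
    def flush():
        if lines:
            chunks.append({"metadata": {"header": header},
                           "content": "\n".join(lines).strip()})
    for line in text.splitlines():
        if line.startswith("#"):
            flush()  # close out the previous section before switching header
            header, lines = line.lstrip("#").strip(), []
        else:
            lines.append(line)
    flush()
    return chunks

md = "# Intro\nWelcome.\n# Usage\nRun it."
chunks = split_markdown(md)
```

At retrieval time the header metadata tells you which section a hit came from, which is exactly the context a plain character splitter loses.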
.. the newest @LangChainAI release (v0.0.220) has a contribution from @CorranMac that uses Grobid for context-aware splitting of PDFs; great for scientific articles or large docs. Each text chunk retains the section of the paper it came from. See here .. https://t.co/tqKedGTwLC python.langchain.com/docs/modules/d…
.. earlier this week, @cristobal_dev added context-aware splitting for .js and .py, which will keep the class or function that each split comes from. He also added helpful documentation on usage here .. python.langchain.com/docs/modules/d…
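For Python source, the stdlib `ast` module is enough to sketch the idea: each top-level function or class becomes a chunk that remembers its name (a toy illustration, not the implementation that landed in LangChain):

```python
import ast

def split_python(source):
    """Split a Python file into top-level defs, keeping the class/function
    name each chunk came from as metadata."""
    tree = ast.parse(source)
    chunks = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef,
                             ast.ClassDef)):
            chunks.append({"metadata": {"name": node.name},
                           "content": ast.get_source_segment(source, node)})
    return chunks

code = "def add(a, b):\n    return a + b\n\nclass Greeter:\n    msg = 'hi'\n"
chunks = split_python(code)
```

Because splits never cross a def boundary, a retrieved chunk always carries the name of the function or class it belongs to.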
@karpathy's YouTube course is one of the best educational resources on LLMs. In this spirit, I built a Q+A assistant for the course and open sourced the repo, which shows how to use @LangChainAI to easily build and evaluate LLM apps: karpathy-gpt.vercel.app github.com/rlancemartin/k…
1/ @LangChainAI has a new document loader for YouTube URLs. Simply pass in URLs and get the resulting text back (using the @OpenAI Whisper API). The repo shows how to use this to get the text for all @karpathy course videos in a few lines of code ...