A unified platform to help developers debug, test, evaluate, and monitor their LLM applications.
Integrates seamlessly with LangChain, but doesn't require it.
✒️Blog
What we built, where we're going, and how our Alpha partners put LangSmith to use
🙏🏽 Thanks to companies like @klarna, @SnowflakeDB, @streamlit, @BCG, @DeepLearningAI_, @fintual, @mendableai, @multion_ai & @quivr_brain for helping us shape LangSmith
4⃣ GPT4All embeddings
🗃️Async support (and more) for @qdrant_engine vecstore
🧠 Tongyi Qianwen LLM
Let's take a look 🧵
🗃️Async support (and more) for @qdrant_engine vecstore
@LukawskiKacper's latest contribution adds full async support along with MMR search and deletion capabilities to the already very capable Qdrant vector store interface.
First up: how to use OpenAI functions to return a structured response
One of the simplest yet most powerful use cases for OpenAI functions is using them to structure responses

This is the main use case we focus on in our documentation
This involves:
- Passing the schema you want as a function
- Forcing the LLM to respond using that function
- Parsing the function call and treating that as the response
With `create_structured_output_chain` we do all that setup for you, and you get an LLMChain that just works
♻️Self-query support for @MyScaleDB by GH mpskex
🕸️Integrations with GH oobabooga's text-generation-webui by GH lonestriker
♒️Revamp of @activeloopai's Deep Lake vector store by GH adolkhan
🌤️ Improvements to AnalyticDB vector store by GH wangxuqi
and even more...
3/4