Hard Kothari
Aug 28, 2023
Are you using @LangChainAI but finding it difficult to debug?

Not anymore with LangSmith

It makes tracing each LLM call very easy and intuitive.

It's like looking under the hood of the system.

After getting beta access, I explored it over the last week & below are my 🔑 takeaways:

🧵
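If you want to try it yourself, here's a minimal sketch of turning tracing on, assuming the beta's environment-variable setup (variable names and the endpoint may differ in your SDK version; the project name is just a placeholder):

```python
import os

# Assumed LangSmith beta configuration via environment variables;
# check the LangSmith docs for the exact names in your version.
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_ENDPOINT"] = "https://api.smith.langchain.com"
os.environ["LANGCHAIN_API_KEY"] = "<your-langsmith-api-key>"
os.environ["LANGCHAIN_PROJECT"] = "langsmith-demo"  # hypothetical project name

# From here on, chains and LLM calls are traced automatically.
```

Once tracing is on, every chain and LLM call shows up as a run in the LangSmith UI.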
Clear Input / Output:

It provides a clear picture of what goes in and what comes out, from the highest level down to the most granular, depending on what you want to see.

You can view the input/output of an individual LLM call or of the whole chain together.
Whole Chain Breakdown
Overall input/output of the entire sequence of chains
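To make that concrete, here's a tiny illustrative chain (prompt, model, and values are made up): the top-level run shows the overall input/output, and the nested LLM run shows the raw prompt and completion.

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser

# A tiny chain: in LangSmith the parent run shows the dict we pass in and the
# final string out, while the nested ChatOpenAI run shows the raw messages.
prompt = ChatPromptTemplate.from_template("Summarize in one sentence: {text}")
chain = prompt | ChatOpenAI(temperature=0) | StrOutputParser()

print(chain.invoke({"text": "LangSmith traces every step of a LangChain run."}))
```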
Individual Chain Details:

If you have multiple chains in your definition, it gives you an analysis of each one individually, with details on the input, output, prompt used, and any history passed.

Individual Chain Input/Output
Prompt Input/Output
LLM Input/Output
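A hedged sketch of what such a multi-chain setup might look like (both chains and their prompts are made up): LangSmith traces the sequential chain as the parent run, and each sub-chain, with its prompt and LLM call, as its own nested run.

```python
from langchain.chains import LLMChain, SimpleSequentialChain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate

llm = ChatOpenAI(temperature=0)

# Chain 1: draft a short outline.
outline_chain = LLMChain(
    llm=llm,
    prompt=ChatPromptTemplate.from_template("Write a 3-point outline about {topic}."),
)

# Chain 2: expand the outline into a paragraph.
expand_chain = LLMChain(
    llm=llm,
    prompt=ChatPromptTemplate.from_template("Expand this outline into one paragraph:\n{outline}"),
)

# Each LLMChain appears individually in the trace, with its own
# input, output, and prompt.
overall = SimpleSequentialChain(chains=[outline_chain, expand_chain])
print(overall.run("debugging LLM apps"))
```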
Analytics:

It gives you the time spent and tokens used per step, as well as the totals at the top.

This helps you spot bottlenecks in the chain and optimize its performance.

The token count helps in understanding the cost per call.
Analytics: time and tokens per step
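You can pull the same numbers programmatically too. A rough sketch using the LangSmith client, assuming the Run objects expose start/end times and a total_tokens field (field names and the execution_order filter may differ across SDK versions; the project name is a placeholder):

```python
from langsmith import Client

client = Client()  # picks up LANGCHAIN_API_KEY from the environment

# Walk the top-level runs of a project and print latency and token usage.
# execution_order=1 (root runs only) and total_tokens are assumptions
# about the beta SDK and may differ in your version.
for run in client.list_runs(project_name="langsmith-demo", execution_order=1):
    latency_s = (run.end_time - run.start_time).total_seconds() if run.end_time else None
    print(run.name, run.run_type, latency_s, getattr(run, "total_tokens", None))
```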
JSON/YAML:

You can toggle the input/output view to JSON or YAML, whichever is more comfortable for you to review.

This makes life much easier: review in YAML, then copy/paste the JSON to use in code if needed.
YAML Mode
JSON Mode
Share:

You can share your trace with anyone using the share button. It generates a public link that anyone can open for review.

This makes it very easy to collaborate with others and walk the team through what's happening behind the scenes.
Share with a public link
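If you'd rather script it, the client appears to offer the same thing; a sketch assuming a share_run method that returns the public URL (naming may vary by SDK version, and the project name is a placeholder):

```python
from langsmith import Client

client = Client()

# Take the most recent top-level run of the project and publish a share link.
# share_run returning the public URL is an assumption about the SDK.
latest = next(client.list_runs(project_name="langsmith-demo", execution_order=1))
public_url = client.share_run(latest.id)
print("Share this with the team:", public_url)
```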
Metadata & Feedback:

It also provides metadata for the whole sequence, each individual chain, and each individual LLM call.

Feedback, if any is associated with these elements, is also shown.

Metadata gives you all the information related to each chain, call, or step.
Sequence Metadata
LLM call metadata
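A rough sketch of both sides of this: attaching metadata/tags when invoking a chain, and logging feedback against a run afterwards. The keys (user_id, env, user_rating), the chain, and the project name are purely illustrative.

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langsmith import Client

# Metadata and tags passed in the config show up on the trace in LangSmith.
chain = ChatPromptTemplate.from_template("Answer briefly: {question}") | ChatOpenAI(temperature=0)
chain.invoke(
    {"question": "What does LangSmith trace?"},
    config={"metadata": {"user_id": "u-123", "env": "dev"}, "tags": ["experiment-a"]},
)

# Attach feedback to the most recent top-level run of the project.
client = Client()
latest = next(client.list_runs(project_name="langsmith-demo", execution_order=1))
client.create_feedback(latest.id, key="user_rating", score=1, comment="Looks correct to me")
```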
LLM Dashboard:

The dashboard gives a bird's-eye view of the overall analytics for all calls made in the project: how many calls were made, and how much time and how many tokens they used.

This really helps to keep tabs on the cost of the project and the reports associated with it.
LLM Dashboard
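The same totals can be pulled with the client if you want them in your own report; a sketch under the same assumptions as before (total_tokens on the Run objects, placeholder project name):

```python
from langsmith import Client

client = Client()

# Project-wide totals, roughly what the dashboard shows at a glance.
calls, tokens = 0, 0
for run in client.list_runs(project_name="langsmith-demo", run_type="llm"):
    calls += 1
    tokens += getattr(run, "total_tokens", 0) or 0
print(f"{calls} LLM calls, {tokens} tokens in total")
```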
Graphs Dashboard:

This is one of the coolest features added to the dashboard: graphs.

We all love graphs, and they give a bird's-eye view of everything happening with your project 🙂
Graphs
If you found this information helpful, follow me @HardKothari for more such content on AI and Automation.

Show your support by:
- Liking 💕
- Retweeting
- Sharing your thoughts

Thank you! 💬

