Hard Kothari
AI & Automation Advocate | Passionate about Smart Home & Workflow Tools | Exploring Automations for Enterprises | Embracing Web Development Challenges
Oct 31, 2023 9 tweets 3 min read
🔎Exploring @LangChainAI's Self-Querying:

Last week, we covered what it is and walked through an overview with a visual diagram.

Today, I'm excited to take you through a practical code example demonstrating how to retrieve relevant documents.

Let's dive in together in the 🧵👇
def self_querying_default_retriever(llm: Union[ChatOpenAI, OpenAI] = ChatOpenAI(temperature=0),
                                    embedding_fn: embeddings = OpenAIEmbeddings(),
                                    documents: List[Document] = document_movies,
                                    document_content_description: str = "Name of a movie",
                                    metadata_field_info: List[AttributeInfo] = movies_metadata_field_info):
    embeddings = embedding_fn
    vectorstore = Chroma.from_documents(documents, embeddings)
    document_content_description ...

1. Dependencies:

Let's first import the dependencies that we will need further on in the code.
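The imports themselves appear as an image in the original tweet, so here is a minimal sketch of what they and the rest of the snippet would typically look like, assuming the standard LangChain self-query setup with Chroma and OpenAI; the movie documents and metadata fields below are illustrative stand-ins for the thread's own example data:

```python
# Minimal sketch of the imports and a likely completion of the snippet above,
# assuming the standard LangChain self-query setup with Chroma + OpenAI.
# (The self-query parser also needs the `lark` package installed.)
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.schema import Document
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever

# Illustrative stand-ins for `document_movies` and `movies_metadata_field_info`.
document_movies = [
    Document(page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose",
             metadata={"year": 1993, "rating": 7.7, "genre": "science fiction"}),
    Document(page_content="Toys come alive and have a blast doing so",
             metadata={"year": 1995, "rating": 8.3, "genre": "animated"}),
]
movies_metadata_field_info = [
    AttributeInfo(name="genre", description="The genre of the movie", type="string"),
    AttributeInfo(name="year", description="The year the movie was released", type="integer"),
    AttributeInfo(name="rating", description="A 1-10 rating for the movie", type="float"),
]

# Build the vector store, then let SelfQueryRetriever translate a natural-language
# question into a structured query (metadata filter + search string) on its own.
vectorstore = Chroma.from_documents(document_movies, OpenAIEmbeddings())
retriever = SelfQueryRetriever.from_llm(
    ChatOpenAI(temperature=0),
    vectorstore,
    "Name of a movie",
    movies_metadata_field_info,
    verbose=True,
)

docs = retriever.get_relevant_documents("Movies about dinosaurs with a rating above 7")
```

The key point of the design is that the LLM, not the caller, writes the metadata filter: ask for "a rating above 7" and the retriever builds the comparison against the rating field before running the similarity search.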
Sep 12, 2023 5 tweets 2 min read
✈️Finally good to be back from a long vacation.

Have you ever wanted to use a YT video to:
- post on @X
- write a blog
- share with your audience in your newsletter
- summarize for yourself

What if I said you can do all of this in one click? 🙀

No need to prompt it, no need to copy-paste anything into ChatGPT.

YT -> Content Creator 🪄

You can do so with my latest small project, which I have been building using @LangChainAI and @streamlit.

This is an extension of my previous summarization project, making it more useful in one go!!

Please check out the link in 🧵👇🏽

Note: The creation might take a while depending on how many content checkboxes you select while creating.
App 🔗
yt-content-creator.streamlit.app
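The app's source isn't shown in this thread, so here is a minimal sketch of the kind of pipeline it describes, assuming LangChain's YoutubeLoader, a summarize chain, and a small Streamlit front end; everything here is illustrative rather than the actual app code:

```python
# Illustrative sketch of a "YT -> content" pipeline with LangChain + Streamlit.
# Not the actual app code: loader, chains, and prompts are assumptions.
import streamlit as st
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders import YoutubeLoader
from langchain.chains.summarize import load_summarize_chain
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

st.title("YT -> Content Creator")
url = st.text_input("YouTube video URL")
want_summary = st.checkbox("Summary")
want_tweet = st.checkbox("Tweet thread")
want_blog = st.checkbox("Blog post")

if st.button("Create") and url:
    # Pull the transcript once, then reuse it for every selected output.
    docs = YoutubeLoader.from_youtube_url(url, add_video_info=True).load()
    transcript = " ".join(d.page_content for d in docs)
    llm = ChatOpenAI(temperature=0)

    if want_summary:
        st.subheader("Summary")
        st.write(load_summarize_chain(llm, chain_type="map_reduce").run(docs))

    def generate(instruction: str) -> str:
        # One small chain per content type, all working off the same transcript.
        prompt = PromptTemplate.from_template("{instruction}\n\nTranscript:\n{transcript}")
        chain = LLMChain(llm=llm, prompt=prompt)
        return chain.run(instruction=instruction, transcript=transcript[:12000])

    if want_tweet:
        st.subheader("Tweet thread")
        st.write(generate("Write a short tweet thread about this video."))
    if want_blog:
        st.subheader("Blog post")
        st.write(generate("Write a blog post based on this video."))
```

Running one generation per selected output against the same transcript is also why creation takes longer the more checkboxes you tick.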
Aug 28, 2023 10 tweets 5 min read
Are you using @LangChainAI but finding it difficult to debug?

Not anymore with LangSmith

It makes tracing each LLM call very easy and intuitive.

It's like looking under the hood of the system.

After getting beta access, I explored it over the last week & below are my 🔑 takeaways:

🧵 Clear Input / Output:

It provides a clear picture of what goes in and what comes out, at the highest or the most granular level, depending on what you want to see.

You can view the input/output of an individual LLM call, or of a whole chain together.
Whole chain breakdown
Overall input/output of the entire sequence of chains
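If you want to try this yourself, here is a minimal sketch of how tracing is typically switched on, assuming the standard LangSmith environment variables; the project name and API-key placeholder are illustrative:

```python
# Minimal sketch: enable LangSmith tracing for a LangChain app via env vars.
import os

os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_ENDPOINT"] = "https://api.smith.langchain.com"
os.environ["LANGCHAIN_API_KEY"] = "<your-langsmith-api-key>"
os.environ["LANGCHAIN_PROJECT"] = "self-query-debugging"  # illustrative project name

from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# Every LLM call and chain step in this run now shows up as a trace in LangSmith,
# with inputs/outputs viewable per call or for the chain as a whole.
llm = ChatOpenAI(temperature=0)
prompt = PromptTemplate.from_template("Summarize in one line: {text}")
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run(text="LangSmith makes tracing each LLM call easy and intuitive."))
```

Once the environment variables are set, the tracing happens automatically; no further code changes are needed to see the runs in the LangSmith UI.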