It’s not uncommon to run into issues with LLM APIs. In production, you need to handle these issues gracefully.
We’ve introduced Fallbacks to the LangChain Expression Language (LCEL) to help with just that.
Available in 🦜🔗 Python and JS! A 🧵:
🙅Handling API Errors
A request to an LLM API can fail for a variety of reasons: the API could be down, you could have hit a rate limit, and so on. Here’s how we can handle this with fallbacks:
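A minimal sketch, assuming the `langchain-openai` and `langchain-anthropic` partner packages are installed (the model names here are illustrative, swap in whatever you use):

```python
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic

# Primary model: if a call raises (outage, rate limit, timeout, ...),
# the fallbacks are tried in order until one succeeds.
openai_llm = ChatOpenAI(model="gpt-4o-mini")
anthropic_llm = ChatAnthropic(model="claude-3-5-sonnet-latest")

llm = openai_llm.with_fallbacks([anthropic_llm])

# The result is still an ordinary LCEL runnable.
print(llm.invoke("Why did the chicken cross the road?").content)
```

By default any exception triggers the fallback; `with_fallbacks` also takes an `exceptions_to_handle` argument if you only want to fall back on specific errors (e.g. rate limits).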
🚃Fallbacks for Sequences
When one model fails, you may need to adjust more than just the model for downstream code to keep working. For example, you may need a different prompt as well as different output parsing. Luckily, you can create fallbacks for entire sequences of LCEL objects:
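A sketch of sequence-level fallbacks, under the same assumptions as above (prompts and model names are illustrative). Each model gets its own prompt, and the whole prompt → model → parser pipeline falls back as a unit:

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic

# Primary chain: prompt -> model -> parser, composed as one LCEL sequence.
openai_prompt = ChatPromptTemplate.from_messages([
    ("system", "You're a helpful assistant."),
    ("human", "Why did the {animal} cross the road?"),
])
openai_chain = openai_prompt | ChatOpenAI(model="gpt-4o-mini") | StrOutputParser()

# Fallback chain: a different prompt, tuned for the fallback model.
anthropic_prompt = ChatPromptTemplate.from_messages([
    ("human", "Answer in one sentence: why did the {animal} cross the road?"),
])
anthropic_chain = (
    anthropic_prompt
    | ChatAnthropic(model="claude-3-5-sonnet-latest")
    | StrOutputParser()
)

# If any step of the primary chain raises, the entire fallback chain runs instead.
chain = openai_chain.with_fallbacks([anthropic_chain])
print(chain.invoke({"animal": "turtle"}))
```

Because the fallback is a full chain rather than a bare model, every downstream step stays consistent with whichever model actually answered.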