LLMs are bad at acknowledging what they don't know.
But they are good at answering questions from a well-defined context.
So instead, you can provide a set of information via RAG (retrieve it and supply it in the prompt) and tell the LLM to *only* answer questions from that data.
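A minimal sketch of that pattern. The `retrieve` function and the prompt wording here are illustrative assumptions, not a real library API — the point is just: fetch relevant docs, inject them as context, and instruct the model to answer only from that context.

```typescript
type Doc = { id: string; text: string };

// Toy retriever: rank docs by keyword overlap with the question.
// (A real system would use embeddings + a vector store.)
function retrieve(question: string, docs: Doc[], topK = 2): Doc[] {
  const terms = question.toLowerCase().split(/\W+/).filter(Boolean);
  return docs
    .map((d) => ({
      doc: d,
      score: terms.filter((t) => d.text.toLowerCase().includes(t)).length,
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK)
    .map((r) => r.doc);
}

// Build a prompt that constrains the model to the retrieved context.
function buildPrompt(question: string, docs: Doc[]): string {
  const context = retrieve(question, docs)
    .map((d) => `[${d.id}] ${d.text}`)
    .join("\n");
  return [
    "Answer using ONLY the context below.",
    'If the answer is not in the context, say "I don\'t know."',
    "",
    "Context:",
    context,
    "",
    `Question: ${question}`,
  ].join("\n");
}

const docs: Doc[] = [
  { id: "refunds", text: "Refunds are issued within 14 days of purchase." },
  { id: "shipping", text: "Standard shipping takes 3-5 business days." },
];

console.log(buildPrompt("How long do refunds take?", docs));
```

The "only answer from this data" instruction is what pushes the model toward "I don't know" instead of a confident hallucination.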
Don't be that teammate who blindly refactors code and only makes things worse.
Let's look at some good vs bad refactoring patterns with real examples 🧵
Let's take this code.
I hired a developer once who saw us calling `functions.runWith(...)` repeatedly with different options and decided to consolidate it all into one `createApi` function.
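The thread's actual code isn't shown, so here's a hypothetical reconstruction of the shape of that refactor. `functions` is a local stub standing in for the Firebase Functions SDK, and the shared-mutable-defaults bug is one *plausible* failure mode, not necessarily the one from the real story.

```typescript
type RuntimeOptions = { timeoutSeconds?: number; memory?: string };
type Handler = (req: unknown, res: unknown) => void;

// Local stub of the Firebase Functions builder API (not the real SDK).
const functions = {
  runWith(opts: RuntimeOptions) {
    return {
      https: {
        onRequest(handler: Handler) {
          return { opts, handler }; // pretend this registers a cloud function
        },
      },
    };
  },
};

// Before: each API spells out its own runtime options.
const getUser = functions
  .runWith({ timeoutSeconds: 60, memory: "256MB" })
  .https.onRequest((req, res) => {
    /* ... */
  });

// After: a consolidated helper like the one the developer introduced.
// Hypothetical bug: the default options object is shared and mutated,
// so one API's options silently leak into every other API.
const defaults: RuntimeOptions = { timeoutSeconds: 60 };

function createApi(handler: Handler, opts: RuntimeOptions = defaults) {
  opts.memory = opts.memory ?? "256MB"; // mutates the shared `defaults` object!
  return functions.runWith(opts).https.onRequest(handler);
}

const listOrders = createApi((req, res) => {
  /* ... */
});
```

The consolidation looks harmless in review; the danger only shows up once several functions are deployed with options they never asked for.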
This new, consolidated code had a huge issue.
Can you see it?
When we started deploying these APIs, they began breaking left and right.
In all seriousness though, I know I post a lot of coding tips and "do this, not that" best-practices stuff, but I want to use this as a reminder to point out...
I don't write perfect code. No one does. My code has been complained about by fellow engineers as much as anyone's, if not more.