I should have spoken up earlier about how easily GPT-3 can be made to produce toxic outputs. By posting a demo without emphasizing that it can exhibit harmful biases, I probably exacerbated the hype. The paradigm shift towards few-shot learning is cool, but there are consequences.
I also think it’s possible to believe in #ResponsibleAI and be optimistic about the potential of language models. I currently wouldn’t trust a language model to make decisions that determine the fates of humans, but I’m excited by cute apps that enhance my developer productivity.
I’m glad people are speaking up sooner rather than later about the consequences of models exhibiting bias at scale, and I hope more people continue to work on solutions to these problems.
• • •
first, humans tend to underspecify the first version of their prompt. if they're in an environment where they can get a near-instantaneous LLM response in the same interface (e.g., ChatGPT, Claude, the OpenAI Playground), they just want to see what the LLM can do
there's a lot of literature on LLM sensemaking from the HCI community here (our own "who validates the validators" paper is one of many), but I still think LLM sensemaking is woefully unexplored, especially with respect to where it sits in the MLOps lifecycle
Our (first) DocETL preprint is now on arXiv! "DocETL: Agentic Query Rewriting and Evaluation for Complex Document Processing" It has been almost 2 years in the making, so I am very happy we hit this milestone :-) arxiv.org/abs/2410.12189
DocETL is a framework for LLM-powered unstructured data processing and analysis. The big new idea in this paper is to automatically rewrite user-specified pipelines into a sequence of finer-grained and more accurate operators.
I'll mention two big contributions in this paper. First, we present a rich suite of operators, including three entirely new ones for decomposing complex documents: split, gather, and resolve.
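To make the rewrite idea concrete, here's a rough Python sketch (not DocETL's actual API; `call_llm` and the word-based chunking are placeholder assumptions) of turning one coarse map over a long document into split → map-per-chunk → gather:

```python
# Hypothetical sketch of the rewrite idea: one coarse "list every theme
# in this whole document" map becomes split -> map-per-chunk -> gather.
# call_llm is a placeholder for any chat-completion call.

def call_llm(prompt: str) -> str:
    raise NotImplementedError  # plug in your provider here

def coarse_pipeline(doc: str) -> str:
    # Naive version: the whole document in one prompt.
    # Degrades or fails when doc exceeds the context window.
    return call_llm(f"List every key theme in this document:\n{doc}")

def rewritten_pipeline(doc: str, chunk_words: int = 1000) -> str:
    # Split: break the document into chunks the model can handle.
    words = doc.split()
    chunks = [" ".join(words[i:i + chunk_words])
              for i in range(0, len(words), chunk_words)]
    # Map: extract themes per chunk (finer-grained, more accurate).
    per_chunk = [call_llm(f"List the key themes in this excerpt:\n{c}")
                 for c in chunks]
    # Gather/reduce: merge and deduplicate the per-chunk outputs.
    merged = "\n".join(per_chunk)
    return call_llm(f"Merge these theme lists, removing duplicates:\n{merged}")
```

The point of the rewrite is that each finer-grained call sees a manageable slice of the document, so accuracy doesn't collapse the way it does when everything is stuffed into one prompt.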
DocETL is our agentic system for LLM-powered data processing pipelines. Time for this week’s technical deep dive on _gleaning_, our automated technique to improve accuracy by iteratively refining outputs 🧠🔍 (using LLM-as-judge!)
2/ LLMs often don't return perfect results on the first try. Consider extracting insights from user logs with an LLM. An LLM might miss important behaviors or include extraneous information. These issues could lead to misguided product decisions or wasted engineering efforts.
3/ DocETL's gleaning feature uses the power of LLMs themselves to validate and refine their own outputs, creating a self-improving loop that significantly boosts output quality.
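4/ A minimal sketch of what a gleaning-style loop looks like (the prompts and `call_llm` placeholder are illustrative, not our exact implementation):

```python
# Gleaning sketch: generate, have an LLM judge critique the output,
# and refine until the judge is satisfied or we hit a round cap.

def call_llm(prompt: str) -> str:
    raise NotImplementedError  # plug in any chat-completion call

def gleaning_loop(task_prompt: str, document: str, max_rounds: int = 3) -> str:
    # Initial attempt.
    output = call_llm(f"{task_prompt}\n\n{document}")
    for _ in range(max_rounds):
        # LLM-as-judge: ask a model to critique the current output.
        critique = call_llm(
            "You are validating an LLM output against its task.\n"
            f"Task:\n{task_prompt}\n\nOutput:\n{output}\n\n"
            "List anything missing or extraneous, or reply DONE if it's good."
        )
        if critique.strip() == "DONE":
            break  # the judge found no further issues
        # Refine: feed the critique back so the next attempt addresses it.
        output = call_llm(
            f"{task_prompt}\n\n{document}\n\n"
            f"A reviewer flagged these issues with a previous attempt:\n{critique}\n"
            "Produce an improved output that addresses them."
        )
    return output
```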
LLMs have made exciting progress on hard tasks! But they still struggle to analyze complex, unstructured documents (even today's Gemini 1.5 Pro 002).
2/ Let's illustrate DocETL with an example task: analyzing presidential debates over the last 40 years to see what topics candidates discussed, & how the viewpoints of Democrats and Republicans evolved. The combined debate transcripts span ~740k words, exceeding context limits of most LLMs.
3/ But even Gemini 1.5 Pro (2M-token context limit), when given the entire dataset at once, reports on the evolution of only 5 themes across all the debates! And the reports get progressively worse as the output goes on. docetl.com/#demo-gemini-o…
recently been studying prompt engineering through a human-centered (developer-centered) lens. here are some fun tips i’ve learned that don’t involve acronyms or complex words
if you don’t specify exactly the structure you want the response to take, down to the headers, parentheses, or valid attributes, the response structure may vary between LLM calls, which makes it unusable in production
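e.g., a quick sketch of pinning the structure down so every call parses the same way (the field names and `call_llm` placeholder are made up for illustration):

```python
import json

def call_llm(prompt: str) -> str:
    raise NotImplementedError  # plug in any chat-completion call

# Pin the output format down to exact keys so responses parse reliably
# across calls. Field names here are hypothetical.
SCHEMA_EXAMPLE = '{"action_items": [{"owner": "<name>", "task": "<one sentence>"}]}'

def extract_action_items(notes: str) -> list[dict]:
    prompt = (
        "Extract the action items from the meeting notes below.\n"
        "Respond with ONLY a JSON object, no prose, shaped exactly like:\n"
        f"{SCHEMA_EXAMPLE}\n\nNotes:\n{notes}"
    )
    raw = call_llm(prompt)
    return json.loads(raw)["action_items"]  # fails loudly if the structure drifts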
play around with the simplest prompt you can think of & run it a bunch of times on different inputs to build intuition for how LLMs “behave” for your task. then start adding instructions to your prompt in the form of rules, e.g., “do not do X”
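a tiny sketch of that intuition-building loop (again, `call_llm` is a placeholder for whatever provider you use):

```python
def call_llm(prompt: str) -> str:
    raise NotImplementedError  # plug in any chat-completion call

def probe_prompt(inputs: list[str], runs: int = 5) -> None:
    # Run the bare-bones prompt repeatedly per input and eyeball the drift.
    for text in inputs:
        print(f"=== input: {text[:40]}...")
        for i in range(runs):
            out = call_llm(f"Summarize in one sentence: {text}")
            print(f"  run {i + 1}: {out}")

# Once you've seen how outputs drift, start appending rules to the
# prompt: "Do not mention dates.", "Do not exceed 20 words.", etc.
```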
thinking about how, in the last year, > 5 ML engineers have told me, unprompted, that they want to do less ML & more software engineering. not because it’s more lucrative to build ML platforms & devtools, but because models can be too unpredictable & make for a stressful job
imo the biggest disconnect between ML-related research & production is that researchers aren’t aware of the human-centric effort required to sustain ML performance. It feels great to prototype a good model, but on-call shifts spent battling unexpected failures chip away at that success
imagine that your career & promos depend not on demonstrating good performance on a fixed dataset, but on how quickly, on average, you can respond to every issue some stakeholder has with some prediction. that is just not sustainable IMO