I joined @sheplusplus because I wanted to be part of a community of ambitious women and to help other ambitious women make strides in their careers (namely in tech). But quarterly/annual dues were a large part of the reason I decided not to join a sorority in college. (2/7)
When the financial barrier to entry is high for minorities in the field, the only minorities who can join the group are the ones who are already privileged. To actually increase diversity in the field, we need to make an effort to include those who can't afford large membership dues. (3/7)
I learned this the hard way. A she++ program I ran, the #include Fellowship, offered a free trip to the Bay Area for select high school students who demonstrated an interest in CS education. We ended up indirectly selecting HS kids who had spent lots of $$ hosting hackathons, etc. (4/7)
The majority of the kids we selected were from large cities (Bay Area, NYC, Seattle) and already had coding experience. I feel like we could have made a lot more of an impact if we had targeted HS kids in more rural areas and not required a demonstrated interest in CS education. (5/7)
Bottom line: it's super important to empower women, but it's even more important to be as inclusive as possible of all identifying women. I want to see more initiatives (especially in tech) that reach women from lower socioeconomic backgrounds, more rural areas, etc. (6/7)
@the_wing could drastically lower membership costs for individuals and (maybe) make $$ by adopting an accelerator business model. I know it's hard, but I'd love to hear others' thoughts on sustainable economics behind coworking spaces for minorities in a field! (7/7)
what makes LLM frameworks feel unusable is that there's still so much burden on the user to figure out the bespoke amalgamation of LLM calls needed to ensure end-to-end accuracy. in DocETL (docetl.org), we've found that relying on an agent to do this requires lots of scaffolding
first there needs to be a way of getting theoretically valid task decompositions. simply asking an LLM to break down a complex task over lots of data may yield a logically incorrect plan. for example, the LLM might choose the wrong data operation (projection instead of aggregation), which results in an entirely different pipeline.
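to make the projection-vs-aggregation failure concrete, here's a toy sketch in plain Python (not DocETL code, and no LLMs involved): the same task, "what themes appear across these reviews?", answered with a per-document projection vs. a cross-document aggregation.

```python
from collections import Counter

# Toy illustration (not DocETL code): the two plans below are NOT equivalent.
reviews = [
    {"text": "battery dies fast", "theme": "battery"},
    {"text": "battery swells",    "theme": "battery"},
    {"text": "screen cracked",    "theme": "screen"},
]

# Wrong plan: a projection. Each output is local to one review, so the
# cross-document fact that "battery" dominates is never computed.
projection = [{"theme": r["theme"]} for r in reviews]

# Right plan: an aggregation that groups across documents.
aggregation = Counter(r["theme"] for r in reviews)

print(projection)   # [{'theme': 'battery'}, {'theme': 'battery'}, {'theme': 'screen'}]
print(aggregation)  # Counter({'battery': 2, 'screen': 1})
```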
to solve this problem, DocETL uses hand-defined rewrite directives that can enumerate theoretically-equivalent decompositions/pipeline rewrites. the agent is then limited to creating prompts/output schemas for newly synthesized operations, according to the rewrite rules, which bounds its errors.
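here's a hypothetical sketch of that division of labor (the function and `agent.*` names are illustrative, not DocETL's actual API): the rewrite rule fixes the operator sequence, and the agent only fills in prompts and output schemas.

```python
# Hypothetical sketch: a rewrite directive that decomposes a map over long
# documents into split -> map -> reduce. The *rule* fixes the pipeline
# shape; the agent cannot change the operator sequence, only author the
# prompt text and output schemas for the newly synthesized operations.
def rewrite_map_over_long_docs(map_op: dict, agent) -> list[dict]:
    split_op = {"type": "split", "chunk_size": 2000}  # structure fixed by the rule
    chunk_map = {
        "type": "map",
        "prompt": agent.write_prompt(map_op, scope="per-chunk"),       # agent-authored
        "output_schema": agent.write_schema(map_op),                   # agent-authored
    }
    combine = {
        "type": "reduce",
        "prompt": agent.write_prompt(map_op, scope="combine-chunks"),  # agent-authored
        "output_schema": map_op["output_schema"],  # must match the original op
    }
    return [split_op, chunk_map, combine]
```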
first, humans tend to underspecify the first version of their prompt. if they're in an environment where they can get a near-instantaneous LLM response in the same interface (e.g., ChatGPT, Claude, the OpenAI playground), they just want to see what the LLM can do
there's a lot of literature on LLM sensemaking from the HCI community (our own "who validates the validators" paper is one of many), but I still think LLM sensemaking is woefully underexplored, especially with respect to where it sits in the MLOps lifecycle
Our (first) DocETL preprint is now on arXiv! "DocETL: Agentic Query Rewriting and Evaluation for Complex Document Processing" It has been almost 2 years in the making, so I am very happy we hit this milestone :-) arxiv.org/abs/2410.12189
DocETL is a framework for LLM-powered unstructured data processing and analysis. The big new idea in this paper is to automatically rewrite user-specified pipelines into a sequence of finer-grained and more accurate operators.
I'll mention two big contributions from this paper. First, we present a rich suite of operators, including three entirely new ones for decomposing complex documents: split, gather, and resolve.
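Roughly how these compose, as a sketch; the fields below approximate the operator descriptions, not DocETL's exact config syntax.

```python
# Illustrative pipeline sketch (field names are approximations, not
# DocETL's real config format).
pipeline = [
    # split: chunk each long document into context-window-sized pieces
    {"type": "split", "split_key": "document", "chunk_size_tokens": 4000},
    # gather: augment each chunk with peripheral context (e.g., summaries
    # of preceding chunks) so the LLM can interpret it in isolation
    {"type": "gather", "peripheral_context": {"previous": "summary"}},
    # map over the now self-contained chunks
    {"type": "map", "prompt": "Extract every named entity and its role..."},
    # resolve: canonicalize near-duplicate entities pulled from different
    # chunks (e.g., "Sen. Smith" vs "Senator Jane Smith") before aggregating
    {"type": "resolve", "comparison_prompt": "Are these the same entity?"},
]
```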
DocETL is our agentic system for LLM-powered data processing pipelines. Time for this week’s technical deep dive on _gleaning_, our automated technique to improve accuracy by iteratively refining outputs 🧠🔍 (using LLM-as-judge!)
2/ LLMs often don't return perfect results on the first try. Consider extracting insights from user logs with an LLM. An LLM might miss important behaviors or include extraneous information. These issues could lead to misguided product decisions or wasted engineering efforts.
3/ DocETL's gleaning feature uses the power of LLMs themselves to validate and refine their own outputs, creating a self-improving loop that significantly boosts output quality.
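A minimal sketch of the gleaning loop; the prompts, round limit, and `llm()` stand-in are illustrative, not DocETL's actual implementation.

```python
def llm(prompt: str) -> str:
    """Stand-in for any chat-completion call; swap in your provider."""
    return "<model response>"

def glean(task_prompt: str, document: str, max_rounds: int = 3) -> str:
    # Initial attempt
    output = llm(f"{task_prompt}\n\nDocument:\n{document}")
    for _ in range(max_rounds):
        # LLM-as-judge: critique the current output against the task
        critique = llm(
            f"Task: {task_prompt}\nOutput: {output}\n"
            "List anything missing, extraneous, or wrong. "
            "Reply DONE if the output fully satisfies the task."
        )
        if critique.strip() == "DONE":
            break  # validator is satisfied; stop refining
        # Refine: regenerate with the critique folded in as feedback
        output = llm(
            f"{task_prompt}\n\nDocument:\n{document}\n"
            f"Previous output:\n{output}\nValidator feedback:\n{critique}\n"
            "Produce an improved output that addresses the feedback."
        )
    return output
```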
LLMs have made exciting progress on hard tasks! But they still struggle to analyze complex, unstructured documents (including today's Gemini 1.5 Pro 002).
2/ Let's illustrate DocETL with an example task: analyzing presidential debates over the last 40 years to see what topics candidates discussed, & how the viewpoints of Democrats and Republicans evolved. The combined debate transcripts span ~740k words, exceeding context limits of most LLMs.
3/ But even Gemini 1.5 Pro (2M token context limit), when given the entire dataset at once, reports on the evolution of only 5 themes across all the debates! And the reports get progressively worse as the output goes on. docetl.com/#demo-gemini-o…
recently been studying prompt engineering through a human-centered (developer-centered) lens. here are some fun tips i’ve learned that don’t involve acronyms or complex words
if you don’t specify exactly the structure you want the response to take, down to the headers, parentheses, and valid attribute values, the response structure may vary between LLM calls, which makes it hard to use in production
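e.g., a hypothetical before/after sketch:

```python
# Hypothetical before/after: the second prompt pins the structure down so
# it can't drift between calls.
vague_prompt = "Summarize this support ticket."

pinned_prompt = """Summarize this support ticket in exactly this format:

## Issue
(one sentence)

## Severity
(exactly one of: low, medium, high)

## Next step
(one imperative sentence)

Do not add any other headers or commentary."""
```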
play around with the simplest prompt you can think of & run it a bunch of times on different inputs to build intuition for how LLMs “behave” for your task. then start adding instructions to your prompt in the form of rules, e.g., “do not do X”
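a sketch of that workflow, with an illustrative `llm()` stub:

```python
def llm(prompt: str) -> str:
    """Stand-in for any chat-completion call; swap in your provider."""
    return "<model response>"

# Step 1: run the simplest viable prompt over several inputs, several
# times each, to build intuition for how outputs vary.
inputs = ["log entry 1 ...", "log entry 2 ...", "log entry 3 ..."]
base_prompt = "Extract the user's intent from this log entry:"

for text in inputs:
    for trial in range(3):  # repeats surface nondeterminism
        print(trial, llm(f"{base_prompt}\n{text}"))

# Step 2: after inspecting the variation, add rules one at a time.
ruled_prompt = base_prompt + (
    "\nRules:\n"
    "- do not speculate beyond what the text says\n"
    "- reply in 10 words or fewer"
)
```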