Introducing OpenBench 0.1: Open, Reproducible Evals 🧵
Evaluating large language models today is messy: every eval framework has its own way of prompting, parsing responses, and measuring accuracy. That makes apples-to-apples comparisons impossible. How do you know Anthropic and OpenAI evaluate MMLU the same way?
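To make that concrete, here's a tiny illustrative sketch (mine, not OpenBench code): two answer extractors a harness might use for MMLU that disagree on the exact same completion, so the same model gets marked wrong under one harness and right under another.

```python
# Illustrative only: two common ways an eval harness might extract an MMLU
# answer from a model's output. They disagree, which shifts measured accuracy.
import re

def parse_strict(output: str) -> str | None:
    # Accept only a bare letter answer like "B"
    m = re.fullmatch(r"\s*([ABCD])\s*", output)
    return m.group(1) if m else None

def parse_lenient(output: str) -> str | None:
    # Accept the first standalone A-D letter anywhere, e.g. "The answer is (B)."
    m = re.search(r"\b([ABCD])\b", output)
    return m.group(1) if m else None

output = "The answer is (B)."
print(parse_strict(output))   # None -> scored as wrong
print(parse_lenient(output))  # 'B'  -> scored as correct
```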
Jul 16, 2024 • 8 tweets • 3 min read
Introducing Eris: A Novel Evaluation Framework Using Debate Simulations
Eris pits leading AI models against each other in structured debates, assessing reasoning, knowledge, and communication skills simultaneously. 1/ 🧵
How Eris works:
- Two LLMs are assigned opposing positions on a randomly selected topic
- They engage in a full academic debate structure: constructive speeches, cross-examinations, rebuttals, and closing arguments
- A separate judge LLM (currently Claude 3.5 Sonnet) evaluates the debate on multiple criteria
- Results are aggregated across many debates to produce win rates and comparative metrics (a rough sketch of the loop is below)
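For the curious, here's a minimal sketch of how a debate loop like this could be wired up, assuming a generic `complete(model, prompt)` helper that you supply; the names are illustrative, not the actual Eris API.

```python
# Hypothetical sketch of an Eris-style debate loop (names are illustrative,
# not the real Eris code). `complete(model, prompt)` is any function that
# calls the named model and returns its text response.
import json
from typing import Callable

STAGES = ["constructive", "cross-examination", "rebuttal", "closing"]

def run_debate(complete: Callable[[str, str], str],
               model_a: str, model_b: str, topic: str, judge: str) -> dict:
    transcript = []
    sides = {model_a: "PRO", model_b: "CON"}
    for stage in STAGES:
        for model, side in sides.items():
            prompt = (f"Debate topic: {topic}\nYour side: {side}\n"
                      f"Stage: {stage}\nTranscript so far:\n"
                      f"{json.dumps(transcript, indent=2)}\n"
                      "Write your speech for this stage.")
            transcript.append({"model": model, "stage": stage,
                               "speech": complete(model, prompt)})
    # A separate judge model scores the finished debate on multiple criteria.
    verdict = complete(judge,
                       "Judge this debate on reasoning, knowledge, and clarity. "
                       f"Reply with only the winning model's name.\n{json.dumps(transcript)}")
    return {"topic": topic, "winner": verdict.strip(), "transcript": transcript}

def win_rates(results: list[dict], models: list[str]) -> dict[str, float]:
    # Aggregate many debates into a per-model win rate.
    return {m: sum(r["winner"] == m for r in results) / max(len(results), 1)
            for m in models}
```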
Jul 11, 2024 • 6 tweets • 3 min read
🚨New Benchmark Alert!🚨
Introducing Set-Eval: a novel multimodal benchmark for testing visual reasoning capabilities of large language models.
Claude 3.5 Sonnet scores double what GPT-4o does, and both are below 15%!
More details, precise scores, and analysis below: 🧵
First, what are the rules of Set?
- 12 cards are laid out
- Each card has 4 features: color, shape, number, and shading
- A valid set is 3 cards where, for each feature, the values are either all the same or all different across the 3 cards
- No two cards can be identical
The model's task is to identify a single valid set among the 12 cards.
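Since the validity rule is purely combinatorial, here's a small hypothetical Python checker (not the benchmark's own code) that captures it: for every feature, the three cards must be all the same or all different.

```python
from itertools import combinations

FEATURES = ("color", "shape", "number", "shading")

def is_valid_set(cards: list[dict]) -> bool:
    # Valid set: for each feature, the 3 values are all equal or all distinct.
    if len(cards) != 3:
        return False
    for feature in FEATURES:
        values = {card[feature] for card in cards}
        if len(values) == 2:  # exactly two alike, one different -> invalid
            return False
    return True

def find_sets(board: list[dict]) -> list[tuple]:
    # Brute-force search over all 3-card combinations on a 12-card board.
    return [combo for combo in combinations(board, 3) if is_valid_set(list(combo))]

# Example: same color, all-different shape/number/shading -> a valid set.
cards_example = [
    {"color": "red", "shape": "oval",     "number": 1, "shading": "solid"},
    {"color": "red", "shape": "diamond",  "number": 2, "shading": "striped"},
    {"color": "red", "shape": "squiggle", "number": 3, "shading": "open"},
]
print(is_valid_set(cards_example))  # True
```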
Mar 12, 2024 • 7 tweets • 2 min read
I hacked together a quick implementation of @alexalbert__'s prompt engineering workflow! An explanation 🧵:
1/ github.com/AarushSah/prom…
@alexalbert__ 1/ The prompt optimizer is a variation of Alex's workflow that automates test-case creation and prompt refinement, while still keeping a human in the loop.
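For a rough sense of the idea, here's a hypothetical Python sketch of such a loop; `ask_model` stands in for any LLM call, and none of the names come from the actual repo, so treat it as an illustration rather than the implementation.

```python
# Rough, hypothetical sketch of a test-case-driven prompt refinement loop
# in the spirit of the workflow above. `ask_model(prompt)` is a placeholder
# for any LLM call; the names here are not from the real repo.
from typing import Callable

def generate_test_cases(ask_model: Callable[[str], str], task: str, n: int = 5) -> list[str]:
    # Have the model propose diverse example inputs for the task.
    raw = ask_model(f"Write {n} diverse test inputs for this task, one per line:\n{task}")
    return [line.strip() for line in raw.splitlines() if line.strip()][:n]

def refine_prompt(ask_model: Callable[[str], str], task: str, rounds: int = 3) -> str:
    # Assumes the drafted template contains an "{input}" placeholder.
    prompt = ask_model(f"Draft a prompt template (with an {{input}} slot) for this task:\n{task}")
    cases = generate_test_cases(ask_model, task)
    for _ in range(rounds):
        outputs = [ask_model(prompt.replace("{input}", case)) for case in cases]
        # Human in the loop: review the outputs and give freeform feedback.
        for case, out in zip(cases, outputs):
            print(f"INPUT: {case}\nOUTPUT: {out}\n")
        feedback = input("Feedback on these outputs (blank to stop): ")
        if not feedback:
            break
        prompt = ask_model(f"Improve this prompt given the feedback.\n"
                           f"Prompt:\n{prompt}\nFeedback:\n{feedback}")
    return prompt
```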