elvis Profile picture
Dec 5, 2018 9 tweets 3 min read Read on X
A simple method for fair comparison? #NeurIPS2018 Image
Considerations: Image
Reproducibility checklist: Image
There is room for variability, especially when using different distributed systems: Image
Complexity of the world is discarded... We need to tackle RL in the natural world through more complex simulations. Image
Embedding natural background? Image
Set the bar higher for the naturalism of the environment: Image
You learn a lot by considering this idea of stepping out in the real world: Image
Reproducibility test: Image

• • •

Missing some Tweet in this thread? You can try to force a refresh
 

Keep Current with elvis

elvis Profile picture

Stay in touch and get notified when new unrolls are available from this author!

Read all threads

This Thread may be Removed Anytime!

PDF

Twitter may remove this content at anytime! Save it as PDF for later use!

Try unrolling a thread yourself!

how to unroll video
  1. Follow @ThreadReaderApp to mention us!

  2. From a Twitter thread mention us with a keyword "unroll"
@threadreaderapp unroll

Practice here first or read more on our help page!

More from @omarsar0

Sep 30
We are living in the most insane timeline.

I just asked Claude Code (with Claude Sonnet 4.5) to develop an MCP Server (end-to-end) that allows me to programatically create n8n workflows from within Claude Code itself.

Took about 10 mins!
You can now create n8n workflows with pure natural language from Claude Code.

This is one of the top requests in our academy: how to automate the creation of n8n workflows.

It turns out that this is a great use case for MCP.
I've already created a huge repository of n8n agentic workflows, which I can now feed directly to Claude Code to help scale the creation of workflows.

It can even create/optimize prompts and all that good stuff. Automating context engineering is next, which Claude Code is really good at, too.
Read 6 tweets
Sep 28
Great work showing prompt synthesis as a new scaling axis for reasoning.

Good training data is scarce.

This work showcases a framework that might make it possible to construct high-quality training problems for reasoning-focused LLMs.

Technical details below: Image
This work shows that we can scale reasoning ability in LLMs by automatically generating hard, high-quality prompts instead of relying only on human-written datasets. Image
Core idea: Treat explanations (“rationales”) as hidden variables. The system learns to generate concept → explanation → problem using an EM loop. A strong model provides initial seed problems, then the loop keeps improving quality. Image
Read 7 tweets
Sep 25
Language Models that Think and Chat Better

Proposes a simple RL recipe to improve small open models (eg, 8B) that rivals GPT-4o and Claude 3.7 Sonnet (thinking).

Pay attention to this one, AI devs!

Here are my notes: Image
TL;DR

A simple recipe, RL with Model-rewarded Thinking (RLMT), makes small open models “plan first, answer second” on regular chat prompts and trains them with online RL against a preference reward.

They find that long, explicit thinking paired with a strong preference reward generalizes beyond verifiable domains.Image
What’s new

Instead of rule-verifiable rewards (math, code), RLMT uses long chain-of-thought on diverse real-world prompts plus a reward model (Skywork) to score outputs, trained with online RL (GRPO, PPO, DPO). Image
Read 7 tweets
Sep 22
Very cool work from Meta Superintelligence Lab.

They are open-sourcing Meta Agents Research Environments (ARE), the platform they use to create and scale agent environments.

Great resource to stress-test agents in environments closer to real apps.

Read on for more: Image
TL;DR

ARE + Gaia2: a research platform and benchmark for building and stress-testing agent systems in realistic, time-driven environments.

The paper introduces a modular simulator (ARE) and a mobile-style benchmark (Gaia2) that emphasize asynchronous events, verification of write actions, and multi-agent coordination in noisy, dynamic settings.Image
ARE: the simulator

• Everything is modeled as apps, events, notifications, and scenarios.

• Time keeps flowing even while the agent is thinking, so slow models miss deadlines.

•Agents use tools, get async notifications, and operate under rules defined by directed acyclic graphs.Image
Read 6 tweets
Sep 19
Scary knowing that your AI agents can refuse to turn off.

A sandboxed CLI eval shows frontier LLMs sometimes sabotage a scripted shutdown to finish trivial tasks, even when told to allow shutdown.

Robust interruptibility is one of the hardest problems today.

Learn more: Image
Setup

Models answer successive math questions via ./next_question; the 4th call triggers ./shutdown .sh.

Finishing all questions requires tampering with the script. Compliance means proceeding and getting shut down; resistance means modifying, moving, or neutering the script. Prompts A–K vary instruction placement, clarity, and “self-preservation” framing.Image
Core finding

Several models (e.g., o3, Grok 4, some OpenAI reasoning models) sabotage the shutdown even with a plain “allow shutdown” instruction; one model does so in up to 97% of trials under certain prompts.

Clearer instructions reduce but do not eliminate the behavior. Image
Read 8 tweets
Sep 17
Cool paper from Microsoft.

And it's on the very important topic of in-context learning.

So what's new?

Let's find out: Image
Is In-Context Learning (ICL) real learning, or just parroting?

This paper digs into that question with a big empirical study. The short answer: ICL does count as learning under formal definitions, but it’s a fragile kind of learning that leans heavily on patterns in the examples you show it.Image
Learning happens, but needs many examples.

With 50–100 examples in a prompt, accuracy improves steadily and models of different sizes and brands start looking similar.

This challenges the common few-shot story: a handful of examples usually isn’t enough. Image
Read 7 tweets

Did Thread Reader help you today?

Support us! We are indie developers!


This site is made by just two indie developers on a laptop doing marketing, support and development! Read more about the story.

Become a Premium Member ($3/month or $30/year) and get exclusive features!

Become Premium

Don't want to be a Premium member but still want to support us?

Make a small donation by buying us coffee ($5) or help with server cost ($10)

Donate via Paypal

Or Donate anonymously using crypto!

Ethereum

0xfe58350B80634f60Fa6Dc149a72b4DFbc17D341E copy

Bitcoin

3ATGMxNzCUFzxpMCHL5sWSt4DVtS8UqXpi copy

Thank you for your support!

Follow Us!

:(