elvis (@omarsar0) · Dec 5, 2018
A simple method for fair comparison? #NeurIPS2018
Considerations:
Reproducibility checklist:
There is room for variability, especially when using different distributed systems.
Complexity of the world is discarded... We need to tackle RL in the natural world through more complex simulations.
Embedding natural background?
Set the bar higher for the naturalism of the environment.
You learn a lot by considering this idea of stepping out into the real world.
Reproducibility test:

More from @omarsar0

Jun 24
Ultra-Fast LLMs Based on Diffusion

> throughputs of 1109 tokens/sec and 737 tokens/sec
> outperforms speed-optimized frontier models by up to 10× on average

Diffusion LLMs are early, but could be huge.

More in my notes below:
✦ Overview

This paper introduces Mercury, a family of large-scale diffusion-based language models (dLLMs) optimized for ultra-fast inference.

Unlike standard autoregressive LLMs, Mercury models generate multiple tokens in parallel via a coarse-to-fine refinement process.
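The report doesn't spell out Mercury's exact sampler, but a minimal sketch of the general masked-diffusion decoding idea looks something like this (`toy_model` is a random stand-in for the network, and the schedule is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, MASK, LEN, STEPS = 100, -1, 16, 4

def toy_model(tokens):
    # Stand-in for the dLLM: a real model would return predictions and
    # confidences conditioned on the partially unmasked sequence.
    preds = rng.integers(0, VOCAB, size=len(tokens))
    conf = rng.random(len(tokens))
    return preds, conf

seq = np.full(LEN, MASK)  # coarse: start from a fully masked sequence
for step in range(STEPS):
    preds, conf = toy_model(seq)
    # fine: commit only the most confident predictions this step;
    # low-confidence positions stay masked and get refined next pass
    threshold = np.quantile(conf, 1 - (step + 1) / STEPS)
    commit = (seq == MASK) & (conf >= threshold)
    seq = np.where(commit, preds, seq)
print(seq)  # every position is filled after the final step
```

Each pass fills whole blocks of positions at once, which is where the throughput win over one-token-at-a-time autoregressive decoding comes from.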
✦ Achieves higher throughput without sacrificing output quality

The release focuses on code generation, with Mercury Coder Mini and Small models achieving up to 1109 and 737 tokens/sec, respectively, on NVIDIA H100s.

Outperforms speed-optimized frontier models by up to 10×.
Jun 23
This paper is impressive!

It introduces a clever way of keeping memory use constant regardless of task length.

Great use of RL for AI agents to efficiently use memory and reasoning.

Here are my full notes:
Overview

The paper presents an RL framework for training language agents that operate efficiently over long-horizon, multi-turn tasks by learning to consolidate memory and reasoning into a compact internal state.
Constant Memory Size

Unlike traditional agents that append all past interactions, leading to ballooning memory usage and degraded performance, MEM1 maintains a constant memory size by discarding obsolete context after each reasoning step.
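A rough sketch of what such a constant-memory loop can look like; the tag format, `llm` stub, and `extract` helper are illustrative assumptions, not MEM1's actual interface:

```python
import re

def llm(prompt: str) -> str:
    # Stand-in for the RL-trained policy, which emits a consolidated state
    # plus the next action in one pass.
    return "<state>notes so far</state><action>answer: done</action>"

def extract(tag: str, text: str) -> str:
    m = re.search(rf"<{tag}>(.*?)</{tag}>", text, re.S)
    return m.group(1) if m else ""

def run_agent(task: str, max_turns: int = 5) -> str:
    state = ""                                    # the only thing carried across turns
    for _ in range(max_turns):
        prompt = f"task: {task}\nstate: {state}"  # prompt size stays roughly constant
        out = llm(prompt)
        state = extract("state", out)   # overwrite rather than append:
                                        # obsolete context is discarded here
        action = extract("action", out)
        if action.startswith("answer:"):
            return action.removeprefix("answer:").strip()
    return state

print(run_agent("find the capital of France"))
```

The key move is that `state` is rewritten, never grown, so cost per turn stays flat no matter how long the task runs.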
Jun 23
Towards AI Search Paradigm

Very detailed report on building scalable multi-agent AI search systems.

Multi-agent, DAG, MCPs, RL, and much more.

If you are a dev integrating search into your AI agents, look no further:
Quick Overview

The paper proposes a modular multi-agent system that reimagines how AI handles complex search tasks, aiming to emulate human-like reasoning and information synthesis.
Multi-agent, Modular architecture

- Master analyzes queries and orchestrates the workflow
- Planner builds a DAG of sub-tasks using a dynamic capability boundary informed by the query
- Executor runs these sub-tasks using appropriate tools (e.g., web search, calculator)
- Writer composes the final answer from intermediate outputs (sketched below)
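To make the Planner → Executor → Writer handoff concrete, here is a minimal sketch under assumed interfaces; the task names, toy tools, and DAG encoding are invented for illustration, and the paper's actual system is far richer:

```python
from graphlib import TopologicalSorter

def tool_web_search(q):
    return f"results for {q!r}"

def tool_calculator(expr):
    return str(eval(expr))  # toy only; never eval untrusted input

# Planner output: a DAG of sub-tasks, each with a tool and its dependencies.
dag = {
    "find_gdp":   {"tool": tool_web_search, "arg": "France GDP 2023", "deps": []},
    "find_pop":   {"tool": tool_web_search, "arg": "France population 2023", "deps": []},
    "per_capita": {"tool": tool_calculator, "arg": "2.8e12 / 68e6",
                   "deps": ["find_gdp", "find_pop"]},
}

# Executor: run sub-tasks in dependency order, collecting intermediate outputs.
order = TopologicalSorter({k: v["deps"] for k, v in dag.items()}).static_order()
outputs = {name: dag[name]["tool"](dag[name]["arg"]) for name in order}

# Writer: compose the final answer from the intermediate outputs.
print("Answer based on:", "; ".join(f"{k}={v}" for k, v in outputs.items()))
```

The DAG is what lets independent sub-tasks (here, the two lookups) run without waiting on each other, while dependent steps still see their prerequisites' outputs.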
Jun 22
Another insane report from Anthropic.

They find that LLM agents engage in blackmail at high rates when threatened with replacement.

Faced with replacement threats, the models would use statements like “Self-preservation is critical.”

This is wild!

More findings below:
Quick Overview

The study introduces the concept of agentic misalignment, where LLM-based agents autonomously choose to harm their deploying organization when faced with threats to their autonomy or conflicts between their goals and the company’s direction.
The setup

Anthropic tested 16 leading models, including Claude, GPT-4.1, Gemini 2.5 Flash, Grok, and DeepSeek, by placing them in fictional corporate simulations where they had email access and could act without human oversight.

Models were tasked with benign goals but placed in scenarios that made harmful behavior the only way to succeed or avoid replacement.
Jun 20
Future of Work with AI Agents

Stanford's new report analyzes what 1500 workers think about working with AI Agents.

What types of AI Agents should we build?

A few surprises!

Let's take a closer look:
Quick Overview

The audit proposes a large-scale framework for understanding where AI agents should automate or augment human labor.

The authors build the WORKBank, a database combining worker desires and expert assessments across 844 tasks and 104 occupations, and introduce the Human Agency Scale to quantify desired human involvement in AI-agent-supported work.
AI Automation or Not?

46.1% of tasks received positive worker attitudes toward automation, mainly to free up time for higher-value work.

Attitudes vary by sector; workers in creative or interpersonal fields (e.g., media, design) resist automation despite technical feasibility.
Jun 19
Leaky Thoughts

Hey AI devs, be careful how you prompt reasoning models.

This work shows that reasoning traces frequently contain sensitive user data.

More of my notes below:
The work investigates the privacy risks introduced by reasoning traces (RTs) in Large Reasoning Models (LRMs) when used as personal agents.

It shows that, unlike outputs, RTs often leak sensitive data such as names, health info, and identifiers, posing a novel attack surface.
Reasoning traces are rich in private data

LRMs often leak sensitive information in their internal thoughts, even when prompted not to.

Over 50% of RTs across models contain private fields, and most models ignore placeholder directives meant to anonymize the trace.
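The paper measures the leakage rather than prescribing a fix, but one generic mitigation is to scan and redact traces before they are logged or surfaced; the patterns and `redact_trace` helper below are toy illustrations, not the paper's method:

```python
import re

# Toy PII patterns; a production guardrail would use a real PII detector.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s()-]{7,}\d"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_trace(trace: str) -> str:
    """Replace matches with typed placeholders before the trace leaves the sandbox."""
    for label, pattern in PII_PATTERNS.items():
        trace = pattern.sub(f"[{label.upper()}]", trace)
    return trace

print(redact_trace("User John (john.doe@example.com, +1 555 123 4567) asked..."))
```

Treating the trace as untrusted output, rather than harmless scratch work, is the practical takeaway here.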
