elvis
Dec 5, 2018 · 9 tweets · 3 min read
A simple method for fair comparison? #NeurIPS2018
Considerations:
Reproducibility checklist:
There is room for variability, especially when using different distributed systems.
The complexity of the world is discarded... We need to tackle RL in the natural world through more complex simulations.
Embedding natural backgrounds?
Set the bar higher for the naturalism of the environment.
You learn a lot by considering this idea of stepping out into the real world.
Reproducibility test.

More from @omarsar0

Aug 12
Unlocking Long-Horizon Agentic Search

AI agents still struggle with long-horizon tasks.

This paper sheds light on how to improve long-horizon agentic search with RL.

Here are my notes:
Overview

It introduces ASearcher, an open-source framework for training LLM-based search agents capable of long-horizon, expert-level search.

Addresses 2 major limitations in prior open-source approaches: short turn limits (≤10) and lack of large-scale, high-quality QA data.
Fully asynchronous RL for long-horizon search

Unlike batch generation RL, ASearcher decouples trajectory execution from model updates, avoiding bottlenecks from long trajectories.

This enables relaxed turn limits (up to 128), with training showing >40 tool calls and >150k tokens in a single trajectory.
Aug 11
Getting huge productivity boosts by combining Claude Code with Obsidian vaults.

Everything in Obsidian is .md, so this is like the most delicious context for LLMs.

Everything is in one place: notes, bookmarks, instructions, LLM context, AI outputs, and so on.
The part I like about Obsidian is that, finally, I feel like I own my notes.

I can access them everywhere.

Modify them when I want.

And leverage them with LLMs all the time.
To be clear, the free version of Obsidian is more than enough. I like that I can sync notes/bookmarks across different devices.

I can edit the .md files with any editor, too. This is what I really love about it: you own the notes.

You can also version your notes with Git, for those who want to leverage Claude Code + GitHub Actions on the road.
Aug 7
BREAKING: OpenAI introduces GPT-5

Here's everything you need to know:
Altman claims that with GPT-5, it is now like talking to an expert.

It can write entire programs from scratch. Software-on-demand is a defining characteristic.

PhD-level experts in your pocket.
GPT-5 brings a higher level of reasoning.

It thinks just the perfect amount to generate the perfect answer.

Good for math, physics, law, and many other domains.

They claim that GPT-5 is the best coding model today.
Aug 3
The Agentic Web is upon us!

If you want to learn about the Agentic Web, look no further.

This new report is a banger!

It presents a detailed framework to understand and build the agentic web.

Here is everything you need to know:
Agentic Web

This paper introduces the concept of the Agentic Web, a transformative vision of the internet where autonomous AI agents, powered by LLMs, act on behalf of users to plan, coordinate, and execute tasks.
It proposes a structured framework for understanding this shift, situating it as a successor to the PC and Mobile Web eras.

It's defined by a triplet of core dimensions (intelligence, interaction, and economics) and involves fundamental architectural and commercial transitions.
Aug 2
Hierarchical Reasoning Model

This is one of the most interesting ideas on reasoning I've read in the past couple of months.

It uses a recurrent architecture for impressive hierarchical reasoning.

Here are my notes:
The paper proposes a novel, brain-inspired architecture that replaces CoT prompting with a recurrent model designed for deep, latent computation.
It moves away from token-level reasoning by using two coupled modules: a slow, high-level planner and a fast, low-level executor.

The two recurrent networks operate at different timescales to collaboratively solve tasks.

Leads to greater reasoning depth and efficiency with only 27M parameters and no pretraining!
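Here's a toy sketch of the two-timescale idea (my own illustration, not the paper's actual architecture): a fast executor updates its state every step, conditioned on the plan, while a slow planner revises its state only every K steps, using the executor's accumulated state as feedback:

```python
import math
import random

# Toy two-timescale recurrent sketch (illustrative, not the paper's model):
# a fast low-level executor updates every step; a slow high-level planner
# updates once per K steps. All sizes and weights are arbitrary.

random.seed(0)
D = 4   # hidden size (illustrative)
K = 4   # planner acts once per K executor steps
W_plan = [[random.gauss(0, 0.3) for _ in range(D)] for _ in range(D)]
W_exec = [[random.gauss(0, 0.3) for _ in range(2 * D)] for _ in range(D)]
b_exec = [random.gauss(0, 0.3) for _ in range(D)]  # keeps executor state non-trivial

def matvec(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def step(h_plan, h_exec, t):
    # Fast executor: recurrent update conditioned on the current plan state.
    pre = matvec(W_exec, h_exec + h_plan)  # list concat = [h_exec; h_plan]
    h_exec = [math.tanh(p + b) for p, b in zip(pre, b_exec)]
    # Slow planner: revises the plan only every K-th step, taking the
    # executor's accumulated state as feedback.
    if (t + 1) % K == 0:
        h_plan = [math.tanh(x)
                  for x in matvec(W_plan, [a + b for a, b in zip(h_plan, h_exec)])]
    return h_plan, h_exec

h_plan, h_exec = [0.0] * D, [0.0] * D
plan_updates = 0
for t in range(16):
    prev = list(h_plan)
    h_plan, h_exec = step(h_plan, h_exec, t)
    plan_updates += int(h_plan != prev)
print(plan_updates)  # the planner's state changed only 16 // K = 4 times
```

Separating timescales like this is what lets the slow module hold a stable "plan" while the fast module does many steps of latent computation under it.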
Jul 30
Graph-R1

New RAG framework just dropped!

Combines agents, GraphRAG, and RL.

Here are my notes:
Introduces a novel RAG framework that moves beyond traditional one-shot or chunk-based retrieval by integrating graph-structured knowledge, agentic multi-turn interaction, and RL.
Graph-R1 is an agent that reasons over a knowledge hypergraph environment by iteratively issuing queries and retrieving subgraphs using a multi-step “think-retrieve-rethink-generate” loop.

Unlike prior GraphRAG systems that perform fixed retrieval, Graph-R1 dynamically explores the graph based on the evolving agent state.
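A toy sketch of the think-retrieve-rethink-generate loop (the hypergraph, retrieval rule, and stopping condition below are my inventions for illustration, not Graph-R1's implementation): the agent grows its entity frontier from retrieved hyperedges until retrieval yields nothing new:

```python
# Toy multi-turn retrieval loop over a knowledge hypergraph (illustrative).
# A hyperedge links a *set* of entities to a fact, unlike a plain edge.
HYPERGRAPH = [
    ({"Graph-R1", "RL"}, "Graph-R1 is trained end-to-end with RL"),
    ({"Graph-R1", "hypergraph"}, "Graph-R1 retrieves over a knowledge hypergraph"),
    ({"RL", "reward"}, "RL optimizes an outcome-based reward"),
]

def retrieve(entities: set[str]) -> list[str]:
    """Return facts whose hyperedge overlaps the agent's current entities."""
    return [fact for nodes, fact in HYPERGRAPH if nodes & entities]

def agent(question_entities: set[str], max_turns: int = 4) -> list[str]:
    entities = set(question_entities)  # think: seed entities from the question
    facts: list[str] = []
    for _ in range(max_turns):
        new = [f for f in retrieve(entities) if f not in facts]  # retrieve
        if not new:                    # rethink: nothing new, stop exploring
            break
        facts.extend(new)
        for nodes, fact in HYPERGRAPH:  # rethink: expand frontier via new facts
            if fact in new:
                entities |= nodes
    return facts                        # generate: answer from gathered facts

answer_facts = agent({"Graph-R1"})
print(len(answer_facts))  # 3: the second hop pulls in the RL-reward fact
```

The multi-hop behavior is the point: the "reward" fact shares no entity with the question, so one-shot retrieval would miss it, but the evolving entity set reaches it on the second turn.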
