elvis
Dec 5, 2018 · 9 tweets · 3 min read
A simple method for fair comparison? #NeurIPS2018
Considerations:
Reproducibility checklist:
There is room for variability, especially when using different distributed systems:
Complexity of the world is discarded... We need to tackle RL in the natural world through more complex simulations.
Embedding natural background?
Set the bar higher for the naturalism of the environment:
You learn a lot by considering this idea of stepping out in the real world:
Reproducibility test:

More from @omarsar0

Mar 13
Prompt Engineering is NOT dead!

If you develop seriously with LLMs and are building complex agentic flows, you don't need convincing about this.

I've built the most comprehensive, up-to-date course on prompting LLMs, including reasoning LLMs.

4 hours of content! All Python!
Check it out if you're building AI Agents or RAG systems -- prompting tips, emerging use cases, advanced prompting techniques, enhancing LLM reliability, and much more.

All code examples use pure Python and the OpenAI SDKs. That's it!
This course is for devs and AI engineers looking for a proper overview of LLM design patterns and prompting best practices.

We offer support, a forum, and live office hours too.

DM me for discount options. Students & teams also get special discounts.

dair-ai.thinkific.com/courses/prompt…
Read 5 tweets
Mar 11
NEW: OpenAI announces new tools for building agents.

Here is everything you need to know:
OpenAI has already launched two big agent products: Deep Research and Operator.

Those capabilities are now coming to the API so developers can build their own agents.
The first built-in tool is called the web search tool.

This allows the models to access information from the internet for up-to-date and factual responses. It's the same tool that powers ChatGPT search.

Powered by a fine-tuned model under the hood...
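
For a concrete picture, here is a minimal sketch of what calling the built-in web search tool through the new Responses API might look like. The tool type string, model name, and query are assumptions based on the announcement; check the current OpenAI SDK docs for exact names.

```python
# Minimal sketch (not official sample code): built-in web search via the
# Responses API. Tool type, model, and field names are assumptions based on
# the announcement; verify against the current SDK docs.
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="gpt-4o",
    tools=[{"type": "web_search_preview"}],  # the built-in web search tool
    input="Summarize the latest developer tools OpenAI announced for building agents.",
)

# output_text concatenates the model's text output; citations come back in the
# structured output items.
print(response.output_text)
```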
Read 16 tweets
Mar 5
A Few Tokens Are All You Need

Can you cut the fine-tuning costs of an LLM by 75% and keep strong reasoning performance?

A new paper from the Tencent AI Lab claims that it might just be possible.

Let's find out how:
The First Few Tokens

The paper shows that a tiny prefix is all you need to improve your model's reasoning: no labels or massive datasets required!

It uses an unsupervised prefix fine-tuning method (UPFT) that requires only prefix substrings (as few as 8 tokens) of generated solutions.
Task template for Prefix Tuning

They use a simple task template for prefix tuning. By training on just a few leading tokens of the solution, the model learns a consistent starting approach without needing complete, correct final answers. Other approaches require entire reasoning traces.
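
As a rough illustration of that idea (not the paper's exact recipe), the sketch below keeps only the first few tokens of each sampled solution and pairs them with the question as fine-tuning targets. The tokenizer, prefix length, and dataset fields are assumptions.

```python
# Illustrative sketch of prefix-only training data in the spirit of UPFT.
# Tokenizer choice, prefix length, and dataset format are assumptions,
# not the paper's exact setup.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")
PREFIX_TOKENS = 8  # the thread mentions prefixes as short as 8 tokens

def make_prefix_example(question: str, sampled_solution: str) -> dict:
    """Keep only the first few tokens of a model-sampled solution."""
    ids = tokenizer(sampled_solution, add_special_tokens=False)["input_ids"]
    prefix = tokenizer.decode(ids[:PREFIX_TOKENS])
    # Fine-tune on (question -> prefix): no gold labels or full reasoning traces needed.
    return {"prompt": question, "completion": prefix}

example = make_prefix_example(
    "What is 17 * 24?",
    "Let's break the product into 17 * 20 + 17 * 4 = 340 + 68 = 408.",
)
print(example)
```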
Read 8 tweets
Feb 27
Say goodbye to Chain-of-Thought.

Say hello to Chain-of-Draft.

To address the issue of latency in reasoning LLMs, this work introduces Chain-of-Draft (CoD).

Read on for more:
What is it about?

CoD is a new prompting strategy that drastically cuts down verbose intermediate reasoning while preserving strong performance.
Minimalist intermediate drafts

Instead of long step-by-step CoT outputs, CoD asks the model to generate a concise, information-dense draft for each reasoning step.

This yields up to 80% fewer tokens per response yet maintains accuracy on math, commonsense, and other benchmarks.
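
Here is a minimal sketch of what a Chain-of-Draft style prompt can look like with the OpenAI SDK. The instruction wording is a paraphrase of the idea, not necessarily the paper's exact prompt, and the model name is an assumption.

```python
# Illustrative Chain-of-Draft style prompt (paraphrased, not the paper's
# verbatim wording). Model name is an assumption.
from openai import OpenAI

client = OpenAI()

COD_INSTRUCTION = (
    "Think step by step, but keep only a minimal draft for each step, "
    "five words at most. Return the final answer after '####'."
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": COD_INSTRUCTION},
        {"role": "user", "content": "Jason had 20 lollipops, gave some to Denny, and now has 12. How many did he give away?"},
    ],
)
print(resp.choices[0].message.content)  # expect terse drafts, then '#### 8'
```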
Read 7 tweets
Feb 20
NEW: Sakana AI introduces The AI CUDA Engineer.

It's an end-to-end agentic system that can produce highly optimized CUDA kernels.

This is wild! They used AI to discover ways to make AI run faster!

Let's break it down:
The Backstory

Sakana AI's mission is to build more advanced and efficient AI using AI.

Their previous work includes The AI Scientist, LLMs that produce more efficient methods to train LLMs, and automating the creation of new AI foundation models.

And now they just launched The AI CUDA Engineer.
Why is this research a big deal?

Writing efficient CUDA kernels is challenging for humans.

The AI CUDA Engineer is an end-to-end agent that automatically produces and optimizes CUDA kernels.
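
To make the loop concrete, here is a rough, heavily simplified propose-compile-time-refine sketch. It is not Sakana AI's actual system, which also verifies numerical correctness against reference implementations and searches over many candidates; the model name, prompts, and timing convention are assumptions.

```python
# Rough conceptual sketch of a propose -> compile -> time -> refine loop.
# NOT Sakana AI's pipeline; model, prompts, and conventions are assumptions.
import pathlib, subprocess, tempfile
from openai import OpenAI

client = OpenAI()

def draft_program(task: str, feedback: str) -> str:
    prompt = (
        f"Write a complete, self-contained CUDA .cu program that {task}. "
        "Time the kernel and print only the runtime in milliseconds. "
        "Output raw source code with no markdown fences. " + feedback
    )
    resp = client.chat.completions.create(
        model="gpt-4o", messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

def compile_and_time(source: str) -> float:
    workdir = pathlib.Path(tempfile.mkdtemp())
    src = workdir / "kernel.cu"
    src.write_text(source)
    exe = workdir / "kernel"
    subprocess.run(["nvcc", "-O3", str(src), "-o", str(exe)], check=True)
    out = subprocess.run([str(exe)], capture_output=True, text=True, check=True)
    return float(out.stdout.strip())

feedback, best_ms = "", float("inf")
for _ in range(3):  # a few optimization rounds
    program = draft_program("multiplies two 1024x1024 float matrices", feedback)
    ms = compile_and_time(program)
    best_ms = min(best_ms, ms)
    feedback = f"The previous version ran in {ms:.2f} ms; make it faster."
print(f"Best runtime: {best_ms:.2f} ms")
```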
Read 14 tweets
Feb 19
NEW: Google introduces AI co-scientist.

It's a multi-agent AI system built with Gemini 2.0 to help accelerate scientific breakthroughs.

2025 is truly the year of multi-agents!

Let's break it down:
What's the goal of this AI co-scientist?

It can serve as a "virtual scientific collaborator to help scientists generate novel hypotheses and research proposals, and to accelerate the clock speed of scientific and biomedical discoveries."
How is it built?

It uses a coalition of specialized agents inspired by the scientific method.

It can generate, evaluate, and refine hypotheses.

It also has self-improving capabilities.
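
Below is a toy generate-critique-refine loop in the spirit of that agent coalition. It is an illustration only, not Google's implementation; the agent prompts, model name, and research goal are made up.

```python
# Toy generate -> critique -> refine loop in the spirit of the description
# above. NOT Google's AI co-scientist; prompts, model, and goal are
# illustrative assumptions.
from openai import OpenAI

client = OpenAI()

def agent(role_prompt: str, content: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": role_prompt},
            {"role": "user", "content": content},
        ],
    )
    return resp.choices[0].message.content

goal = "Propose a testable hypothesis about drug repurposing for a rare disease."
hypothesis = agent("You are a generation agent. Propose one concise, testable hypothesis.", goal)

for _ in range(2):  # self-improvement rounds
    critique = agent("You are a review agent. Critique the hypothesis rigorously.", hypothesis)
    hypothesis = agent(
        "You are a refinement agent. Revise the hypothesis to address the critique.",
        f"Hypothesis: {hypothesis}\n\nCritique: {critique}",
    )

print(hypothesis)
```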
Read 11 tweets
