elvis · Dec 10, 2019
Machine learning for single cell biology: insights and challenges, by Dana Pe’er. #NeurIPS2019

The representation challenge.

On visualizing and modeling the data.

On clustering single-cell data.
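The clustering challenge can be made concrete with a minimal sketch of a common single-cell recipe (illustrative only, not from the talk): log-normalize a cells-by-genes counts matrix, reduce dimensionality with PCA, then cluster the embedding. The simulated data below stands in for a real counts matrix.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Simulated counts matrix: 300 cells x 2000 genes, with one subpopulation
# (the first 150 cells) over-expressing the first 50 genes.
counts = rng.poisson(1.0, size=(300, 2000)).astype(float)
counts[:150, :50] += rng.poisson(5.0, size=(150, 50))

log_counts = np.log1p(counts)  # variance-stabilizing log transform
embedding = PCA(n_components=10).fit_transform(log_counts)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embedding)
```

Real pipelines typically also select highly variable genes first and often prefer graph-based clustering (e.g., Leiden) over k-means.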
The challenge of inferring the temporal progression of cell phenotypes.

An effort to map different cell types.

There are many open challenges in how to analyze single-cell data.

Data harmonization is a critical challenge.

Ways deep learning is used to understand cells.

Other challenges.

Understanding response to therapy.

Segmenting and analyzing cells is challenging.

More from @omarsar0

Apr 9
NEW: Google announces Agent2Agent

Agent2Agent (A2A) is a new open protocol that lets AI agents securely collaborate across ecosystems regardless of framework or vendor.

Here is all you need to know:
Universal agent interoperability

A2A allows agents to communicate, discover each other’s capabilities, negotiate tasks, and collaborate even if built on different platforms. This enables complex enterprise workflows to be handled by a team of specialized agents.
Built for enterprise needs

The protocol supports long-running tasks (e.g., supply chain planning), multimodal collaboration (text, audio, video), and secure identity/auth flows (matching OpenAPI-grade auth). Agents share JSON-based “Agent Cards” for capability discovery, negotiate UI formats, and sync task state with real-time updates.
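The capability-discovery step above revolves around the JSON "Agent Card" an agent publishes. A hypothetical card might look like the sketch below; the field names follow the general shape described in the announcement and are illustrative, not the normative A2A schema.

```python
import json

# Hypothetical A2A Agent Card: the JSON document an agent publishes so
# other agents can discover what it can do. Endpoint URL and skill ids
# are made up for illustration.
agent_card = {
    "name": "supply-chain-planner",
    "description": "Plans multi-step supply chain workflows.",
    "url": "https://agents.example.com/a2a",  # assumed endpoint
    "capabilities": {"streaming": True, "pushNotifications": False},
    "skills": [
        {
            "id": "plan",
            "name": "Plan shipment",
            "description": "Produce a shipment plan from constraints.",
        }
    ],
}

card_json = json.dumps(agent_card, indent=2)
```

A client agent would fetch this document, inspect `skills` and `capabilities`, and then negotiate a task with the advertised endpoint.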
Apr 5
Llama 4 is here!

- Llama 4 Scout & Maverick are up for download
- Llama 4 Behemoth (preview)
- Advanced problem solving & multilingual capabilities
- Supports long context up to 10M tokens
- Great for multimodal apps & agents
- Image grounding
- Top performance at the lowest cost
- Can be served within $0.19-$0.49/M tokens
LMArena ELO score vs. cost

"To deliver a user experience with a decode latency of 30ms for each token after a one-time 350ms prefill latency, we estimate that the model can be served within a range of $0.19-$0.49 per million tokens (3:1 blend)" Image
It's great to see native multimodal support for Llama 4. Image
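The "3:1 blend" in the quote refers to a 3:1 mix of input to output tokens. The blended-price arithmetic can be sketched as follows; the per-direction prices used here are hypothetical, chosen only so the blend lands inside the quoted $0.19-$0.49 range.

```python
def blended_price(price_in, price_out, ratio_in=3, ratio_out=1):
    """Blended $/M-token price for an input:output token mix (default 3:1)."""
    total = ratio_in + ratio_out
    return (ratio_in * price_in + ratio_out * price_out) / total

def request_cost(input_tokens, output_tokens, price_in, price_out):
    """Dollar cost of one request, given $/M-token prices per direction."""
    return (input_tokens * price_in + output_tokens * price_out) / 1e6

# Hypothetical prices: $0.10/M input, $0.76/M output.
p = blended_price(0.10, 0.76)  # ~0.265 $/M tokens at a 3:1 blend
```

So a request with 750K input tokens and 250K output tokens (exactly a 3:1 mix) costs the same as 1M tokens at the blended rate.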
Mar 13
Prompt Engineering is NOT dead!

If you develop seriously with LLMs and are building complex agentic flows, you don't need convincing about this.

I've built the most comprehensive, up-to-date course on prompting LLMs, including reasoning LLMs.

4 hours of content! All Python!
Check it out if you're building AI Agents or RAG systems -- prompting tips, emerging use cases, advanced prompting techniques, enhancing LLM reliability, and much more.

All code examples use pure Python and the OpenAI SDKs. That's it!
This course is for devs and AI engineers looking for a proper overview of LLM design patterns and prompting best practices.

We offer support, a forum, and live office hours too.

DM me for discount options. Students & teams also get special discounts.

dair-ai.thinkific.com/courses/prompt…
Mar 11
NEW: OpenAI announces new tools for building agents.

Here is everything you need to know:
OpenAI has already launched two major agent products: Deep Research and Operator.

These tools are now coming to the APIs so developers can build their own agents.
The first built-in tool is called the web search tool.

This allows the models to access information from the internet for up-to-date and factual responses. It's the same tool that powers ChatGPT search.

Powered by a fine-tuned model under the hood...
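A minimal sketch of what a request to the web search tool looks like, based on the shape OpenAI described at launch (the helper function and model choice are ours; the `web_search_preview` tool type is the one documented for the new Responses API):

```python
def build_web_search_request(query, model="gpt-4o"):
    """Construct the payload for a Responses API call with web search enabled."""
    return {
        "model": model,
        "tools": [{"type": "web_search_preview"}],
        "input": query,
    }

payload = build_web_search_request("What happened in AI this week?")
# With the official OpenAI SDK this payload would be sent as:
#   client.responses.create(**payload)
```

The model decides when to invoke the tool; responses come back with inline citations to the sources it consulted.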
Mar 5
A Few Tokens Are All You Need

Can you cut the fine-tuning costs of an LLM by 75% and keep strong reasoning performance?

A new paper from the Tencent AI Lab claims that it might just be possible.

Let's find out how:
The First Few Tokens

It shows that a tiny prefix is all you need to improve your model’s reasoning: no labels or massive datasets are required!

It uses an unsupervised prefix fine-tuning method (UPFT) that requires only prefix substrings (as few as 8 tokens) of generated solutions.
Task template for Prefix Tuning

They use a simple task template for prefix tuning. By using a few leading tokens of the solution, the model learns a consistent starting approach without requiring complete, correct final answers. Other approaches require entire reasoning traces.
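The data-preparation step can be sketched as follows. This is a simplification of the idea, not the paper's code: it keeps only the first few tokens of each model-generated solution as the fine-tuning target, and uses whitespace splitting where UPFT would use the model's own tokenizer.

```python
def prefix_targets(solutions, prefix_len=8):
    """Keep only the leading `prefix_len` tokens of each generated solution."""
    targets = []
    for sol in solutions:
        tokens = sol.split()  # stand-in for the model tokenizer
        targets.append(" ".join(tokens[:prefix_len]))
    return targets

sols = ["First, note that the sum of the first n odd numbers is n squared."]
short = prefix_targets(sols)  # 8-token prefixes become the training targets
```

Fine-tuning then maximizes the likelihood of these short prefixes given the problem prompt, which is where the ~75% cost reduction comes from.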
Feb 27
Say goodbye to Chain-of-Thought.

Say hello to Chain-of-Draft.

To address the issue of latency in reasoning LLMs, this work introduces Chain-of-Draft (CoD).

Read on for more:
What is it about?

CoD is a new prompting strategy that drastically cuts down verbose intermediate reasoning while preserving strong performance.
Minimalist intermediate drafts

Instead of long step-by-step CoT outputs, CoD asks the model to generate concise, information-dense tokens for each reasoning step.

This yields up to 80% fewer tokens per response while maintaining accuracy on math, commonsense, and other benchmarks.
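The contrast between the two prompting strategies can be sketched as a pair of system prompts. The wording below paraphrases the paper's idea (the five-word-per-step cap is the kind of draft limit CoD proposes); the helper function is ours.

```python
# Chain-of-Thought: full, verbose intermediate reasoning.
COT_PROMPT = (
    "Think step by step to answer the question. "
    "Explain each reasoning step in full sentences, then give the answer."
)

# Chain-of-Draft: minimal drafts per step, cutting intermediate tokens.
COD_PROMPT = (
    "Think step by step, but keep only a minimal draft for each step, "
    "with at most 5 words per step. Return the final answer after '####'."
)

def build_messages(system_prompt, question):
    """Assemble a chat-style message list for either prompting strategy."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": question},
    ]

msgs = build_messages(COD_PROMPT, "A bat and ball cost $1.10 total...")
```

Swapping `COT_PROMPT` for `COD_PROMPT` is the entire intervention; the model and decoding settings stay the same, which is what makes the latency comparison clean.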
