TuringPost
Jun 26, 2021
The Adversarial Robustness Toolbox (ART) = a framework for defending deep learning models against adversarial security attacks and for evaluating their robustness

Thread⬇️
Despite the name overlap with GANs (generative adversarial networks), the attacks ART defends against target a model's predictions and training pipeline rather than relying on GANs.

Attacks fall into two threat models, by attacker knowledge:
+White-box attacks: the adversary has access to the training environment and knowledge of the training algorithm
+Black-box attacks: the adversary has no additional knowledge and can only query the model
2/⬇️
The goal of ART = to provide a framework to evaluate the robustness of a neural network.

The current version of ART focuses on four types of adversarial attacks:
+evasion
+inference
+extraction
+poisoning
3/⬇️
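As a rough illustration, each category has its own attack module in the library. A few representative classes, assuming a recent ART release (module paths can shift between versions):

```python
# One representative ART attack class per category (names as in recent
# ART releases; check the docs for your installed version).
from art.attacks.evasion import FastGradientMethod           # evasion
from art.attacks.extraction import CopycatCNN                # extraction
from art.attacks.poisoning import PoisoningAttackBackdoor    # poisoning
from art.attacks.inference.membership_inference import (
    MembershipInferenceBlackBox,                             # inference
)
```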
ART is a generic Python library. It provides native integration with several deep learning frameworks such as @TensorFlow, @PyTorch, #Keras, @ApacheMXNet

@IBM open-sourced ART at github.com/IBM/adversaria….
4/⬇️
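As a minimal sketch of how the pieces fit together, here is an evasion attack on a toy, untrained PyTorch model wrapped in ART's PyTorchClassifier (the model, shapes, and eps value are placeholder assumptions; argument names follow recent ART releases):

```python
import numpy as np
import torch.nn as nn
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

# Toy untrained classifier standing in for a real model.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

# Wrap the model so ART's framework-agnostic attacks can drive it.
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

# Fast Gradient Method: a classic white-box evasion attack.
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x = np.random.rand(8, 1, 28, 28).astype(np.float32)  # dummy inputs
x_adv = attack.generate(x=x)  # perturbed copies meant to flip predictions
```

The same attack object works unchanged against a TensorFlow or Keras model wrapped in the corresponding ART estimator, which is the point of the framework-agnostic design.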
For more in-depth coverage of ART, follow the link below to TheSequence Edge#7, our educational newsletter.
thesequence.substack.com/p/edge7
5/5

More from @TheTuringPost

Jun 27
Chain-of-Experts (CoE) - a new kind of model architecture.

It builds on the Mixture-of-Experts (MoE) idea that a model can choose a different set of experts each round.

➡️ The new addition: experts work in a sequence, one after the other, within a layer.

CoE keeps the number of active experts the same as before, but:

- Uses up to 42% less memory
- Unlocks over 800× more effective expert combinations
- Improves performance

Here's how it works:
1. In CoE:

- The model picks a small group of experts.
- Each expert transforms the current hidden state of a token.
- The outputs are combined using gating weights.
- A residual connection helps keep the information stable.

So, the final result is the token after it has been processed by C rounds of experts, with each round building on the output of the last.
2. Adaptive routing:

Each iteration has its own router, so the model can "change its mind" about which experts to use as it learns more. For example:

- In the first step, it might send the token to general experts.
- In later steps, it can route to more specialized ones, depending on how the token has evolved.
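Putting steps 1 and 2 together, here's a minimal PyTorch sketch of the mechanism (expert shape, dimensions, and counts are illustrative assumptions, not the paper's configuration):

```python
import torch
import torch.nn as nn

class ChainOfExperts(nn.Module):
    """Sketch: C sequential rounds of top-k expert mixing inside one
    layer, a fresh router per round, a residual around each round."""

    def __init__(self, d_model=64, n_experts=8, top_k=2, n_rounds=2):
        super().__init__()
        self.top_k = top_k
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, 4 * d_model),
                nn.GELU(),
                nn.Linear(4 * d_model, d_model),
            )
            for _ in range(n_experts)
        )
        # One router per round, so routing can change round to round.
        self.routers = nn.ModuleList(
            nn.Linear(d_model, n_experts) for _ in range(n_rounds)
        )

    def forward(self, h):                        # h: (n_tokens, d_model)
        for router in self.routers:              # C sequential rounds
            gates = router(h).softmax(dim=-1)    # (n_tokens, n_experts)
            w, idx = gates.topk(self.top_k, dim=-1)
            w = w / w.sum(dim=-1, keepdim=True)  # renormalize top-k gates
            out = torch.zeros_like(h)
            for slot in range(self.top_k):       # combine chosen experts
                for e, expert in enumerate(self.experts):
                    mask = idx[:, slot] == e
                    if mask.any():
                        out[mask] += w[mask, slot].unsqueeze(-1) * expert(h[mask])
            h = h + out                          # residual keeps info stable
        return h

h = torch.randn(5, 64)            # five token hidden states
print(ChainOfExperts()(h).shape)  # torch.Size([5, 64])
```

The per-round routers are what allow the model to "change its mind": gates are recomputed from the hidden state as it evolves, while the residual keeps earlier information intact.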
Jun 26
Models, datasets and benchmarks to pay attention to:

▪️ Gemini 2.5 Flash and Pro, plus Gemini 2.5 Flash-Lite
▪️ MiniMax-M1
▪️ Kimi-Dev-72B

▪️ SHADE-Arena benchmark
▪️ ESSENTIAL-WEB V1.0 dataset

🧵
1. @Google introduced Gemini 2.5 Flash and Pro as stable and production-ready, and launched Gemini 2.5 Flash-Lite in preview – the fastest and most cost-efficient of the family.

Flash-Lite outperforms 2.0 Flash-Lite on coding, math, science, reasoning, and multimodal benchmarks. It offers lower latency, supports a 1-million-token context and multimodal input, and connects to tools like Google Search and code execution.

storage.googleapis.com/deepmind-media…
Jun 19
Models and datasets to pay attention to:

▪️ Institutional Books 1.0 - a 242B token dataset
▪️ o3-pro from @OpenAI
▪️ FGN from @GoogleDeepMind
▪️ Magistral by @MistralAI
▪️ Resa: Transparent Reasoning Models via SAEs
▪️ Multiverse (Carnegie+NVIDIA)
▪️ Ming-Omni
▪️ Seedance 1.0 by ByteDance
▪️ Sentinel

🧵
1. Institutional Books 1.0: A 242B token dataset from Harvard Library's collections, refined for accuracy and usability

Sourced from 1,075,899 scanned books across 250+ languages via the Google Books project, the dataset includes both raw and post-processed text and detailed metadata.

arxiv.org/abs/2506.08300
2. o3-pro from @OpenAI

A high-reliability LLM for math, science, and coding. It beats o1-pro and o3 in expert tests for clarity, instruction-following, and accuracy. It includes tool access (web search, code execution, vision) but responds more slowly.

Replaces o1-pro for Pro/Team users (OpenAI also dropped the price of o3 by 80%).

help.openai.com/en/articles/96…
Jun 18
The latest AI/ML news of the week:

▪️ @HuggingFace helps to find the best model based on size
▪️ NVIDIA’s Jensen Huang and @ylecun disagree with Anthropic’s Dario Amodei predictions
▪️ @AIatMeta’s Superintelligence Gambit
▪️ @Google adds a voice to Search
▪️ Mattel and @OpenAI: brains to Barbie
▪️ Projects in ChatGPT

Details 🧵
1. Hugging Face insists, “Bigger isn’t better”
2. @Nvidia’s Jensen Huang: “I disagree with almost everything he says”
At VivaTech in Paris, he took aim at Anthropic’s Dario Amodei, scoffing at his dire predictions about AI replacing half of entry-level jobs.

Huang argues for open, responsible development – not “dark room” AI monopolies. @ylecun agrees 👇
Jun 10
The freshest research papers:

▪️ Self-Challenging Language Model Agents
▪️ Reflect, Retry, Reward
▪️ ProRL
▪️ Beyond the 80/20 Rule
▪️ REASONING GYM
▪️ AlphaOne
▪️ Unleashing the Reasoning Potential...Critique Fine-Tuning
▪️ ARIA
▪️ Incentivizing Reasoning...Instruction Following
▪️ OThink-R1

▪️ Reasoning Like an Economist
▪️ A Controllable Examination for Long-Context LLMs
▪️ SuperWriter

▪️ Protocol Models
▪️ AReaL
▪️ StreamBP
▪️ Taming LLMs by Scaling Learning Rates

▪️ Diagonal Batching
▪️ Inference-Time Hyper-Scaling with KV Cache Compression
▪️ Unified Scaling Laws for Compressed Representations

▪️ GUI-Actor
▪️ Surfer-H Meets Holo1

▪️ Qwen3 Embedding
▪️ Aligning Latent Spaces with Flow Priors
▪️ Large Language Models are Locally Linear Mappings

▪️ Establishing Trustworthy LLM Evaluation
▪️ Evaluation is All You Need
▪️ Datasheets Aren't Enough

🧵
1. Self-Challenging Language Model Agents by @AIatMeta, @UCBerkeley

Trains agents to create and solve their own tool-use tasks using code-based problem generation and RL

arxiv.org/abs/2506.01716
2. Reflect, Retry, Reward

Enhances model performance by rewarding useful self-reflection after incorrect answers, using only binary feedback

arxiv.org/abs/2505.24726
Jun 7
Log-linear attention is a new type of attention proposed by @MIT which is:

- as fast and efficient as linear attention
- as expressive as softmax attention

It uses a small but growing set of memory slots whose count increases logarithmically with the sequence length.

Here's how it works:
1. Input:

At each time step t, you have:

- Query vector (Q): what the model is asking
- Key vector (K): what the model remembers
- Value vector (V): what the model retrieves

They are computed from the input using learned linear projections.
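In code, that's just three learned linear maps applied to the same input. A tiny sketch (the dimension is an arbitrary assumption):

```python
import torch
import torch.nn as nn

d_model = 64  # illustrative model width
proj_q = nn.Linear(d_model, d_model, bias=False)  # learned projection for Q
proj_k = nn.Linear(d_model, d_model, bias=False)  # ... for K
proj_v = nn.Linear(d_model, d_model, bias=False)  # ... for V

x_t = torch.randn(d_model)  # input at time step t
q, k, v = proj_q(x_t), proj_k(x_t), proj_v(x_t)
```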
2. Partition past tokens into buckets:

Using Fenwick tree-style hierarchical memory partitioning, the system divides the past tokens into logarithmically many disjoint buckets:

• Each bucket size is a power of two.
• The most recent token forms its own smaller bucket
• Older tokens are grouped into larger buckets

And here's why 👇
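To make the partitioning concrete, here is a small sketch of a Fenwick-style bucket decomposition (an illustration of the idea, not the paper's implementation). Positions 1..t are split along the binary decomposition of t, so the number of buckets grows like log t:

```python
def fenwick_buckets(t: int) -> list[tuple[int, int]]:
    """Split positions 1..t into disjoint buckets whose sizes are
    powers of two (Fenwick style): the most recent positions land in
    the smallest bucket, older positions in progressively larger ones."""
    buckets = []
    hi = t
    while hi > 0:
        size = hi & -hi                  # lowest set bit = bucket size
        buckets.append((hi - size + 1, hi))
        hi -= size
    return buckets[::-1]                 # oldest (largest) bucket first

print(fenwick_buckets(13))  # [(1, 8), (9, 12), (13, 13)]
```

With at most ⌊log₂ t⌋ + 1 buckets per step, attending to one summarized state per bucket keeps recent tokens fine-grained while compressing older context, which is where the log-linear cost comes from.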
