TuringPost #CES2025
Newsletter exploring AI & ML: AI 101, ML techniques, AI business insights, global dynamics, ML history. Led by @kseniase_. Save hours of research 👇🏼
Dec 22 9 tweets 3 min read
Tree-of-Code (ToC) is a new way to help LLM-based agents perform better decision-making and execution.

It combines 2 powerful ideas:

- Tree-of-Thought for structured problem-solving
- CodeAct, which generates Python code for actions, for task-planning efficiency

ToC treats code as a way of thinking and builds a tree-like system.

▪️ Tests show that ToC is more reliable than Tree-of-Thought and more accurate than CodeAct.
▪️ It works with different AI models without needing extra training.

So, how does ToC work? 🧵

1. Tree-of-Code (ToC) creates a complete pipeline where the AI plans and solves tasks step-by-step, turning its reasoning into clear, executable code.

It uses a tree-like system to explore multiple ways of generating code and solving problems, making the process more reliable and accurate.
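The loop described above — generate code candidates, execute them, and keep only the branches that run — can be sketched in a few lines. This is a toy illustration, not the paper's implementation; `generate_candidates` is a hypothetical stand-in for an LLM call, and execution is reduced to a compile check:

```python
# Toy Tree-of-Code-style search: expand a tree of code drafts and prune
# branches that fail execution feedback. Purely illustrative.

def generate_candidates(task, parent_code):
    # Hypothetical stand-in for an LLM proposing two continuations:
    # one valid refinement and one deliberately broken draft.
    good = parent_code + f"result = {task!r}\n"
    bad = parent_code + "def broken(:\n"  # syntax error -> pruned below
    return [good, bad]

def executes_ok(code):
    # Stand-in for sandboxed execution; here we only check it compiles.
    try:
        compile(code, "<candidate>", "exec")
        return True
    except SyntaxError:
        return False

def tree_of_code(task, depth=2):
    """Breadth-first expansion of code candidates, keeping executable ones."""
    frontier = [""]
    for _ in range(depth):
        frontier = [cand
                    for node in frontier
                    for cand in generate_candidates(task, node)
                    if executes_ok(cand)]
    return frontier

solutions = tree_of_code("sort a list")
print(len(solutions))  # 1 surviving branch: the broken drafts were pruned
```

The pruning step is what makes the tree "reliable": only branches that survive execution feedback keep growing.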
Dec 1 13 tweets 5 min read
Top 10 GitHub Repositories to master ML, AI and Data Science:

• 100 Days of ML Code
• Data Science For Beginners
• Awesome Data Science
• Data Science Masters
• Homemade Machine Learning
• 500+ AI Projects List with Code
• Awesome Artificial Intelligence
• Machine Learning Design Interview
• Data Science Interviews
• Data Science Best Resources
+ Our twitter library

Don't forget to save the list!

Check out the links below 👇

1. 100 Days of ML Code - 45.6k stars

A plan for studying Machine Learning topics such as data preprocessing, simple and multiple linear regression, logistic regression, the math behind ML, and much more.

github.com/Avik-Jain/100-…
Nov 29 7 tweets 3 min read
Natural Language Reinforcement Learning (NLRL) redefines Reinforcement Learning (RL).

The main idea:
In NLRL, the core parts of RL like goals, strategies, and evaluation methods are reimagined using natural language instead of rigid math.

What are the benefits?

- NLRL uses rich language feedback, not just single numbers
- Interpretable and easier to understand
- Human-like decision-making

Let's explore this approach in more detail🧵

1. Text-based MDP (Markov Decision Process):

In NLRL, the states, actions, and feedback from the environment are described using natural language. For example, NLRL starts with a language goal, like "reach the goal" or "open the door."
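The text-based MDP above can be made concrete with a toy sketch in which states, actions, and feedback are all strings. The environment below is invented for illustration, not taken from the NLRL paper:

```python
# Toy text-based MDP: every RL ingredient is natural language.
from dataclasses import dataclass

@dataclass
class TextStep:
    state: str     # language description of the situation
    action: str    # language description of what the agent does
    feedback: str  # language evaluation instead of a scalar reward

def toy_env(state: str, action: str) -> TextStep:
    # Hypothetical environment with the language goal "open the door".
    if "open the door" in action:
        return TextStep(state, action, "Good: the door is open, goal reached.")
    return TextStep(state, action, "The door is still closed; try the key.")

step = toy_env("You stand before a locked door, holding a key.",
               "use the key to open the door")
print(step.feedback)  # Good: the door is open, goal reached.
```

Note how the feedback string carries more detail than a scalar reward would.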
Nov 9 7 tweets 2 min read
In one of our first "A path towards AGI" posts we discussed Neuro-symbolic systems.

Here's a new example of their implementation👇

Neuro-Symbolic Predicates (NSPs) are smart rules that help robots think by combining visual perception (neural) with logical rules (symbolic). With NSPs, robots can more easily plan and tackle complex tasks.

NSPs use programming basics (conditions, loops) and can connect with VLMs that understand images and text.

Here are the details about:
- 2 types of NSPs
- selecting NSPs
- task planning with learning High-Level Actions (HLAs)

🧵

2 types of NSPs:

• Primitive NSPs: Directly interact with what the robot can see or feel. A primitive NSP might ask the VLM if the robot is holding something or if the gripper is open.

• Derived NSPs: Depend on other NSPs rather than direct observations. For example, they determine if an object is on a plate by checking if it's on another object that is on the plate.
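The two predicate types can be sketched as follows. This is a toy, with the VLM mocked by a lookup table; all names and the scene are invented:

```python
# Toy primitive vs. derived NSPs; the VLM is mocked with a lookup table.
PERCEPTION = {            # stand-in for VLM answers about the current scene
    ("on", "cup", "tray"): True,
    ("on", "tray", "plate"): True,
}

def primitive_on(a: str, b: str) -> bool:
    """Primitive NSP: queries perception (here, the mock table) directly."""
    return PERCEPTION.get(("on", a, b), False)

def derived_on_plate(obj: str, plate: str, others: list) -> bool:
    """Derived NSP: composed from other NSPs, not from direct observation.
    True if obj is on the plate, or on something that is on the plate."""
    if primitive_on(obj, plate):
        return True
    return any(primitive_on(obj, m) and primitive_on(m, plate) for m in others)

print(derived_on_plate("cup", "plate", ["tray"]))  # True, via the tray
```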
Nov 8 12 tweets 4 min read
Do LoRA and full fine-tuning actually change the model in the same way?

@MIT_CSAIL identified key differences between LoRA and full fine-tuning:

- How each method adapts to the task
- Performance
- LoRA's intruder dimensions

What are these intruder dimensions and what impacts them?👇

1. Intruder dimensions:

LoRA fine-tuning introduces new directions into the model’s weights, called intruder dimensions. These don’t appear in full fine-tuning and show up as low-similarity, “outlier” directions in the weight matrices.
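One way to see such directions, consistent with the description above but using toy matrices rather than the paper's exact protocol: compare each singular direction of the tuned weights against all singular directions of the base weights and look for poor matches.

```python
import numpy as np

rng = np.random.default_rng(0)
W_base = rng.normal(size=(8, 8))

# Simulate a LoRA-style update: a large rank-1 change in a fresh direction.
u = rng.normal(size=(8, 1))
v = rng.normal(size=(1, 8))
W_tuned = W_base + 5.0 * u @ v

U_base, _, _ = np.linalg.svd(W_base)
U_tuned, _, _ = np.linalg.svd(W_tuned)

# For each tuned singular direction, its best cosine match among base ones.
sims = np.abs(U_base.T @ U_tuned).max(axis=0)
print("weakest match:", round(float(sims.min()), 3))  # low value -> intruder
```

The rank-1 update injects a direction absent from the base model, so at least one tuned singular vector matches no base direction well.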
Nov 7 15 tweets 10 min read
The freshest AI/ML research of the week, part 2

▪️ Hybrid Preferences: Learning To Route Instances For Human Vs. AI Feedback
▪️ LongReward
▪️ Accelerating Direct Preference Optimization With Prefix Sharing
▪️ BITSTACK
▪️ NeuZip
▪️ AgentStore
▪️ OS-ATLAS
▪️ Document Parsing Unveiled
▪️ Neural Fields In Robotics: A Survey
▪️ Teaching Embodied RL Agents
▪️ Personalization Of LLMs: A Survey
▪️ Survey Of UI Design And Interaction In Generative AI

🧵
1. Hybrid Preferences: Learning To Route Instances For Human Vs. AI Feedback

Balances human and AI feedback to improve performance on preference tasks while reducing costs.

arxiv.org/abs/2410.19133
Code: github.com/allenai/hybrid…
Oct 16 10 tweets 3 min read
We need to achieve human-level AI. But how can we get there?

@ylecun proposes Objective-Driven AI as a promising new direction for achieving human-like reasoning and planning.

Here are the main points from his keynote at the Hudson Forum:
1. Current limitations:

AI models, especially LLMs, lack persistent memory, true understanding, reasoning, and complex planning abilities.

Trained mainly on text and on predicting word sequences, they struggle to interact with the physical world.
Sep 22 6 tweets 3 min read
.@NVIDIA's new NVLM multimodal models use:

- A powerful processor kept unchanged during training
- Image division and pixel shuffle for effective processing

NVLM architectures:
• Decoder-only
• Cross-attention based
• Hybrid

Let's explore their differences and find the best👇

1. NVLM-D (Decoder-only):

Uses a vision encoder to convert images into tokens and an LM to handle text. They are connected by an MLP module that aligns the image and text information.

It divides big images into smaller tiles and uses tile tags to keep track of the image's structure.
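The wiring of NVLM-D can be sketched with toy tensors. The dimensions and random weights below are made up purely for illustration; only the data flow (vision encoder output → MLP projector → one joint sequence for the LM) follows the description above:

```python
import numpy as np

rng = np.random.default_rng(1)
img_tokens = rng.normal(size=(16, 1024))  # 16 patch features from the vision encoder

# Two-layer MLP projector mapping vision features (1024-d) into the
# LM's embedding space (4096-d); weights are random for illustration.
W1 = rng.normal(size=(1024, 2048)) * 0.01
W2 = rng.normal(size=(2048, 4096)) * 0.01

def mlp_project(x):
    h = np.maximum(x @ W1, 0.0)  # ReLU hidden layer
    return h @ W2

aligned = mlp_project(img_tokens)
text_embeds = rng.normal(size=(8, 4096))  # 8 text-token embeddings

# The decoder-only LM consumes image and text tokens as one sequence.
sequence = np.concatenate([aligned, text_embeds], axis=0)
print(sequence.shape)  # (24, 4096)
```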
Sep 8 6 tweets 2 min read
That's really interesting! Mini-Omni LLM demonstrates parallel processing in real-time conversations. It can hear, process and talk at the same time.

How does Mini-Omni achieve this?

🧵

1. Audio generation with text instruction:

The model converts text instructions into real-time speech. It generates a text response and quickly transforms it into spoken words, using text-to-speech technology.

This allows faster and smoother conversations.
Jul 10 7 tweets 3 min read
Let's dive into one of the newest concepts in synthetic data generation: active inheritance.

Proposed by @CohereForAI, it's a strategy used in ML to intentionally design synthetic data to achieve specific goals.

Here's how active inheritance works:

1. What's in the base?

At its base is knowledge distillation, a technique where a smaller LLM (the student) learns from a larger, more powerful model (the teacher).

The student tries to mimic the teacher's outputs for the same input prompts by learning from the data the teacher generates.
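The distillation objective behind this can be sketched with a standard soft-target loss: the student is penalized for diverging from the teacher's output distribution. The logits below are toy numbers, not from any real model:

```python
import math

def softmax(logits, T=1.0):
    exps = [math.exp(x / T) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(teacher_logits, student_logits, T=2.0):
    """KL(teacher || student) over temperature-softened distributions."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [2.0, 1.0, 0.1]
matched = kd_loss(teacher, [2.0, 1.0, 0.1])   # student mimics the teacher
mismatched = kd_loss(teacher, [0.1, 1.0, 2.0])
print(matched < mismatched)  # True: mimicking the teacher lowers the loss
```

Active inheritance then shapes *which* teacher-generated data the student sees, steering it toward chosen properties.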
Jun 3 11 tweets 3 min read
A new model, Meteor, leverages multifaceted information and a Mamba architecture to enhance comprehension and response capabilities in vision-language tasks.

Let's explore its architecture and training strategy👇

1. Meteor's architecture includes:

- a vision encoder (CLIP-L/14)
- vision and tor projectors (MLP modules with GELU activation)
- the Mamba-130M architecture for computational efficiency
- the InternLM2-7B as the backbone LLM.
Apr 11 10 tweets 2 min read
TimeGPT is the first foundation model specifically designed for time series analysis.

It excels at generating precise forecasts across a diverse range of datasets and domains.

Here's what you need to know about it:

1/8

The model leverages a Transformer-based architecture, optimized for time series data, with self-attention mechanisms that handle temporal dependencies and patterns across varied frequencies and characteristics.

2/8
Mar 3 9 tweets 3 min read
8 Free Courses to Master Large Language Models:

1. @cohere LLM University
2. @huggingface NLP course
3. @databricks courses
and more!

🧵


1. @cohere LLM University

The course offers insights into how LLMs work and their practical applications, and guides participants on using LLMs to build and deploy applications.

docs.cohere.com/docs/llmu
Feb 19 7 tweets 2 min read
DoRA (Weight-Decomposed Low-Rank Adaptation) sets a new standard for optimizing AI models.

It combines the benefits of full model fine-tuning and LoRA.

How does it do that? Let's see 👇🏼

1/7

The genius of DoRA lies in its unique handling of pre-trained weights.

It separates these weights into two parts:

1. one that determines the size (magnitude)
2. one that determines the orientation (direction) of the weight vectors

2/7
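The magnitude/direction split described above is just a per-column norm decomposition, which a tiny sketch makes concrete (toy matrix, not real model weights):

```python
import numpy as np

W = np.array([[3.0, 0.0],
              [4.0, 2.0]])  # toy pretrained weight matrix

magnitude = np.linalg.norm(W, axis=0)  # per-column norms: the "size" part
direction = W / magnitude              # unit columns: the "orientation" part

# The two parts multiply back into the original weights exactly:
print(magnitude)                              # [5. 2.]
print(np.allclose(W, direction * magnitude))  # True
```

DoRA can then adapt the two parts separately, applying a LoRA-style update to the direction while tuning the magnitude directly.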
Dec 27, 2023 18 tweets 7 min read
Want to understand foundation models, generative AI models, and transformers?

Here is your FREE list of 15+ resources to do that:

1. Efficient Transformers: A Survey explores the evolution of Transformer models in various domains. It provides a comprehensive overview of different Transformer variants (X-formers) to guide researchers.
arxiv.org/abs/2009.06732
Dec 10, 2023 8 tweets 3 min read
7 resources to master prompt engineering:

1. Prompt Engineering Guide
2. Learn Prompting course
3. ChatGPT Prompt Engineering for Developers course
...

🧵


1. Zero-shot, few-shot, and the chain-of-thought lineage of prompting techniques, explained.

Check our article detailing various prompt engineering techniques at: turingpost.com/p/cot
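The chain-of-thought idea can be illustrated with a tiny one-shot prompt; all of the text below is made up for demonstration:

```python
# A tiny one-shot chain-of-thought prompt: the worked example shows the
# model a reasoning chain to imitate before answering the new question.
prompt = """Q: A shop has 12 apples and sells 5. How many remain?
A: Start with 12, subtract the 5 sold: 12 - 5 = 7. The answer is 7.

Q: A train has 8 cars and 3 are detached. How many remain?
A:"""

print(prompt.endswith("A:"))  # the model continues from here, reasoning first
```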
Dec 8, 2023 9 tweets 4 min read
8 free courses to master large language models:

- Cohere LLM University
- Hugging Face NLP course
- DeepLearning AI courses
- Weights & Biases course
- Introduction to LLMs course by Google Cloud
...

🧵


1. @cohere LLM University

The course offers insights into how LLMs work and their practical applications, and guides participants on using LLMs to build and deploy applications.

docs.cohere.com/docs/llmu
Dec 8, 2023 9 tweets 2 min read
Why Are Vector Embeddings Key for LLMs?

Vector embeddings turn complex data into numerical forms, a crucial step for Foundation Models & LLMs.

Let's dive into how they redefine AI’s capabilities: (Image source: Pinecone)

1. Semantic Information Capture:

Vector embeddings are adept at encoding both semantic & syntactic information. This allows models to grasp context and meaning, a fundamental aspect for understanding natural language.
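The core mechanic is that similar meanings map to nearby vectors, measured with cosine similarity. The 3-d vectors below are hand-made stand-ins, not output of a real embedding model:

```python
import math

def cosine(a, b):
    """Cosine similarity: 1.0 for identical directions, ~0 for unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return dot / (norm(a) * norm(b))

# Hand-made 3-d vectors standing in for real embedding-model output:
king  = [0.90, 0.80, 0.10]
queen = [0.85, 0.82, 0.15]
car   = [0.10, 0.20, 0.95]

print(cosine(king, queen) > cosine(king, car))  # True: related words are closer
```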
Dec 2, 2023 11 tweets 6 min read
10 surveys about transfer learning and domain adaptation you need to read.

Domain: Computer vision

🧵


1. A Survey on Transfer Learning (2010)

The survey categorizes and reviews progress in transfer learning for classification, regression, and clustering, discussing its relationship with domain adaptation, multitask learning, and covariate shift.

cse.ust.hk/~qyang/Docs/20…
Dec 2, 2023 6 tweets 2 min read
Adapting LLMs to your specific needs is key to making the most out of these powerful tools in your business.

Ways to do this:

▪️ Prompting techniques
▪️ Retrieval-Augmented Generation (RAG)
▪️ Fine-Tuning

Let's dive into each of them:

1. Prompt engineering

Designing specific input prompts that guide the model to apply its general knowledge in a way that's relevant to the task.
Nov 22, 2023 24 tweets 6 min read
Quantization is a technique used to reduce the size and increase the efficiency of deep learning models.

Here is a list of 23 LLM quantization techniques you need to know about:

1. LUT-GEMM: Quantized Matrix Multiplication based on LUTs for Efficient Inference in Large-Scale Generative Language Models
arxiv.org/abs/2206.09557
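The basic idea all of these techniques refine can be shown with plain uniform 8-bit quantization: map float weights to small integers with a scale factor. This is a generic sketch, not LUT-GEMM itself:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization."""
    scale = float(np.abs(w).max()) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.03, 1.0], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
print(q.tolist())                              # [50, -127, 3, 100]
print(float(np.max(np.abs(w - w_hat))) < s)    # reconstruction error < scale
```

Storing int8 codes plus one scale cuts memory 4x versus float32, at the cost of the small rounding error shown above.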