KL divergence has its origins in information theory. The primary goal of information theory is to quantify how much information is in data, and its most important metric is called entropy.
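A minimal sketch of both quantities for discrete distributions, assuming probabilities given as plain Python lists (Shannon entropy in bits, and KL divergence defined only where q > 0 wherever p > 0):

```python
import math

def entropy(p):
    """Shannon entropy of a discrete distribution p, in bits."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def kl_divergence(p, q):
    """D_KL(p || q) in bits; assumes q > 0 wherever p > 0."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A fair coin carries exactly 1 bit of entropy.
print(entropy([0.5, 0.5]))                      # 1.0
# KL divergence of a distribution from itself is 0.
print(kl_divergence([0.5, 0.5], [0.5, 0.5]))    # 0.0
```

Intuitively, KL divergence measures the extra bits wasted when encoding data from p using a code optimized for q.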
A MASSIVE 303-page survey from leading Chinese labs.
The paper explains how code focused language models are built, trained, and turned into software agents that help run parts of development.
These models read natural language instructions, like a bug report or feature request, and try to output working code that matches the intent.
The authors first walk through the training pipeline, from collecting and cleaning large code datasets to pretraining, where the model absorbs coding patterns at scale.
They then describe supervised fine tuning and reinforcement learning, which are extra training stages that reward the model for following instructions, passing tests, and avoiding obvious mistakes.
On top of these models, the paper surveys software engineering agents, which wrap a model in a loop that reads issues, plans steps, edits files, runs tests, and retries when things fail.
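That read-plan-edit-test-retry loop can be sketched in a few lines. This is a hypothetical outline, not the survey's code: `model`, `repo`, and `run_tests` stand in for an LLM call, a workspace, and a test harness.

```python
# Minimal sketch of a software-engineering agent loop: read the issue,
# plan, propose edits, run tests, and retry with failure feedback.
# model, repo, and run_tests are hypothetical stand-ins, not a real API.

def agent_loop(issue, model, repo, run_tests, max_retries=3):
    """Return a patch whose tests pass, or None after max_retries attempts."""
    plan = model.plan(issue)                 # read the issue, draft steps
    feedback = ""                            # test output fed back on failure
    for _ in range(max_retries):
        patch = model.edit(plan, feedback)   # propose concrete file edits
        repo.apply(patch)
        passed, output = run_tests()
        if passed:
            return patch                     # tests green: done
        repo.revert(patch)                   # undo the failed attempt
        feedback = output                    # retry with the failure output
    return None
```

The key design choice is feeding test output back into the next edit attempt, which is what turns a one-shot code generator into an agent.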
Across the survey, they point out gaps like handling huge repositories, keeping generated code secure, and evaluating agents reliably, and they share practical tricks that current teams can reuse.
Overview of the evolution of code large language models (Code-LLMs) and related ecosystems from 2021 to 2025.
Evolution of programming development and research landscapes in AI-powered code generation.
Agents, robots, and us: Skill partnerships in the age of AI
- Today’s technologies could theoretically automate more than half of current US work hours, a sign of how profoundly work may change.
- By 2030, about $2.9 trillion of economic value could be unlocked in the United States
- Demand for AI fluency—the ability to use and manage AI tools—has grown 7X in two years, faster than for any other skill in US job postings. The surge is visible across industries and likely marks the beginning of much bigger changes ahead.
Two-thirds of US work hours require only nonphysical capabilities.
"The Impact of Artificial Intelligence on Human Thought"
A big 132-page report.
AI is shifting real thinking work onto external systems, which boosts convenience but can weaken the effort that builds understanding and judgment. The paper frames this pattern through cognitive offloading and cognitive load theory, then tracks it into social effects: standardized language, biased information flows, and manipulation tactics that target human psychology.
It says use AI to cut noise and routine steps, keep humans doing the heavy mental lifting, and add controls because personalization, deepfakes, and opaque models can steer choices at scale.
🧵 Read on 👇
🧵2/n. ⚙️ The Core Concepts
Cognitive load theory says working memory is limited, so AI helps when it reduces extraneous load and hurts when it replaces the germane load needed to build skill.
In plain terms, let tools clean up the interface and fetch data, but keep people doing the analysis, explanation, and sense‑making.
🧵3/n. 🧰 Offloading and memory
Handing memory, calculation, or choosing to an external aid frees attention now, yet steady offloading can dull recall and critical habits later.
The paper casts web search, note apps, and assistants as a human‑machine transactive memory system, useful when sources are reliable, risky when they are biased or wrong.
That is why trust and verification routines matter as much as speed.
🧵4/n. 🫧 Filter bubbles and uniform language
With AI, personalized feeds lock users into filter bubbles, so views polarize across groups while language and reasoning become more uniform inside each group.
You can fly the Jetson without a pilot license after a quick 5-day training course.
It needs an $8,000 deposit and comes with backup batteries, a ballistic parachute, and radar that handles auto-landing.
On safety features: the Jetson can keep flying after a single motor failure, and it includes hands-free hover and emergency functions, redundant battery propulsion, a ballistic parachute with rapid deployment, and a radar-sensor auto-landing system. Jetson also published a separate update on its airframe parachute system with test deployments.
Jetson published range testing that repeatedly achieved 11.02 miles at a cruise speed of 60 km/h, consistent with an 18-20 minute endurance window depending on conditions and pilot weight.
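A quick sanity check on those numbers (the unit conversion and division below are mine, not Jetson's figures):

```python
MILES_TO_KM = 1.609344

range_km = 11.02 * MILES_TO_KM            # 11.02 mi is about 17.7 km
cruise_kmh = 60.0                         # 60 km/h is 1 km per minute
endurance_min = range_km / cruise_kmh * 60

print(round(range_km, 1), round(endurance_min, 1))
```

At cruise that works out to roughly 17.7 minutes aloft, which lines up with the quoted 18-20 minute window once you allow for conditions and pilot weight.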
In the US this fits FAA Part 103 ultralight rules, which means no pilot license and no aircraft registration. Operations are limited to daylight or civil-twilight with a strobe, not over congested areas, and not in controlled airspace without ATC authorization.