KL divergence has its origins in information theory. The primary goal of information theory is to quantify how much information is in data. The most important metric in information theory is called entropy.
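Both quantities fit in a few lines, a minimal NumPy sketch (the example distributions are made up):

```python
import numpy as np

def entropy(p):
    """Shannon entropy H(p) = -sum_i p_i * log2(p_i), in bits."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # convention: 0 * log(0) = 0
    return -np.sum(p * np.log2(p))

def kl_divergence(p, q):
    """D_KL(p || q) = sum_i p_i * log2(p_i / q_i): the extra bits paid
    for coding samples from p with a code optimized for q."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0
    return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

print(entropy([0.5, 0.5]))                    # 1.0 bit for a fair coin
print(kl_divergence([0.5, 0.5], [0.9, 0.1]))  # ~0.74 bits: q mismodels p
```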
🧠 "The Impact of Artificial Intelligence on Human Thought"
A big 132-page report.
AI is shifting real thinking work onto external systems, which boosts convenience but can weaken the effort that builds understanding and judgment.
The paper frames this pattern through cognitive offloading and cognitive load theory, then tracks it into social effects like standardized language, biased information flows, and manipulation tactics that target human psychology.
It says cognitive offloading cuts the mental work people invest in tasks, which boosts convenience in the moment but can weaken critical thinking and creativity over time.
With AI, personalized feeds lock users into filter bubbles, so views polarize across groups while language and reasoning become more uniform inside each group.
It recommends using AI to cut noise and routine steps, keeping humans doing the heavy mental lifting, and adding controls because personalization, deepfakes, and opaque models can steer choices at scale.
🧵 Read on 👇
🧵2/n. ⚙️ The Core Concepts
Cognitive load theory says working memory is limited, so AI helps when it reduces extraneous load and hurts when it replaces the germane load needed to build skill.
In plain terms, let tools clean up the interface and fetch data, but keep people doing the analysis, explanation, and sense‑making.
🧵3/n. 🧰 Offloading and memory
Handing memory, calculation, or choosing to an external aid frees attention now, yet steady offloading can dull recall and critical habits later.
The paper casts web search, note apps, and assistants as a human‑machine transactive memory system, useful when sources are reliable, risky when they are biased or wrong.
That is why trust and verification routines matter as much as speed.
You can fly Jetson's eVTOL without a pilot license after a quick 5-day training course.
Reserving one needs an $8,000 deposit, and it comes with backup batteries, a ballistic parachute, and radar that handles auto-landing.
On safety features, the Jetson includes the ability to keep flying after a single motor failure, hands-free hover and emergency functions, redundant battery propulsion, a ballistic parachute with rapid deployment, and a radar-sensor auto-landing system. Jetson also published a separate update on its airframe parachute system with test deployments.
Jetson published range testing that repeatedly achieved 11.02 miles at a cruise speed of 60 km/h, consistent with about an 18-20 minute endurance window (11.02 miles is roughly 17.7 km, which takes about 17.7 minutes at 60 km/h), depending on conditions and pilot weight.
In the US this fits FAA Part 103 ultralight rules, which means no pilot license and no aircraft registration. Operations are limited to daylight or civil-twilight with a strobe, not over congested areas, and not in controlled airspace without ATC authorization.
BIG success for LLMs in financial trading & decision making.
New Stanford + University of California study shows a 4B financial-domain model, Trading-R1, can write clear analyst theses and turn them into profitable trades.
It's trained on 100K cases over 18 months across 14 tickers, and its backtests show better risk-adjusted returns with smaller drawdowns.
The problem it tackles is simple: quant models are hard to read, and general LLMs write nice text that does not translate into disciplined trades.
The solution starts by forcing a strict thesis format, with separate sections for market data, fundamentals, and sentiment, and every claim must point to evidence from the given context.
Then it learns decisions by mapping outcomes into 5 labels (strong buy, buy, hold, sell, strong sell), using returns that are normalized by volatility over several horizons.
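As an illustration of that label mapping, a minimal sketch; the thresholds and the exact normalization here are assumptions, not the paper's published values:

```python
import numpy as np

def outcome_label(past_returns: np.ndarray, forward_return: float) -> str:
    """Bin a volatility-normalized forward return into one of 5 labels.
    Thresholds are hypothetical, for illustration only."""
    vol = past_returns.std() + 1e-8        # realized volatility of history
    z = forward_return / vol               # volatility-normalized return
    if z >= 1.0:
        return "strong buy"
    if z >= 0.25:
        return "buy"
    if z > -0.25:
        return "hold"
    if z > -1.0:
        return "sell"
    return "strong sell"

daily = np.random.default_rng(0).normal(0, 0.02, 250)  # toy return history
print(outcome_label(daily, 0.05))                      # e.g. "strong buy"
```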
For training, it first copies high-quality reasoning distilled from stronger black-box models using supervised fine-tuning, then it improves with a reinforcement method called group relative policy optimization.
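For intuition on GRPO's core move: score a group of sampled outputs for the same prompt, then normalize each reward against the group's own statistics, so no separate value network is needed. A minimal sketch with made-up rewards:

```python
import numpy as np

def grpo_advantages(group_rewards: np.ndarray) -> np.ndarray:
    """Group-relative advantage: each sampled thesis is measured
    against its own group's mean and spread."""
    mu, sigma = group_rewards.mean(), group_rewards.std() + 1e-8
    return (group_rewards - mu) / sigma

# 4 candidate theses for one prompt, scored by a trading-quality reward
print(grpo_advantages(np.array([0.8, 0.2, 0.5, 0.1])))
```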
In held-out tests on NVDA, AAPL, AMZN, META, MSFT, and SPY, the combined approach beats small and large baselines on Sharpe and max drawdown, and the authors position it as research support, not high-frequency automation.
🧵 Read on 👇
🧵2/n. The 3 steps used to train Trading-R1.
The first step is Structure. The model is taught how to write a thesis in a clear format. It must separate parts like market trends, company fundamentals, and sentiment, and it has to place each claim in the right section.
The second step is Claims. Here the model learns that any claim it makes must be supported by evidence. For example, if it says revenue is growing, it must back that with a source or number provided in the context.
The third step is Decision. The model turns the structured thesis into an actual trading action. It predicts outcomes like strong buy, buy, hold, sell, or strong sell. Its prediction is checked against the true outcome, and it gets rewards or penalties depending on accuracy.
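One way to picture that reward is a hypothetical ordinal scheme; the paper's exact reward shaping may differ:

```python
LABELS = ["strong sell", "sell", "hold", "buy", "strong buy"]

def decision_reward(predicted: str, actual: str) -> float:
    """Full credit for an exact match, partial credit for a near miss,
    a penalty for calls far from the realized outcome. Values are
    hypothetical, for illustration only."""
    gap = abs(LABELS.index(predicted) - LABELS.index(actual))
    return {0: 1.0, 1: 0.5, 2: 0.0}.get(gap, -0.5)

print(decision_reward("buy", "strong buy"))   # 0.5, a near miss
print(decision_reward("sell", "strong buy"))  # -0.5, wrong side of the call
```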
Each step first uses supervised fine-tuning, which means training on examples with correct answers, and then reinforcement fine-tuning, which means refining the model by giving rewards when it produces better outputs.
Finally, all stages are combined, producing Trading-R1, a model that can both write well-structured financial reasoning and map that reasoning into actual trading decisions.
🧵3/n. Three-Stage Financial Trading Model Training Pipeline
In Structure, the model learns to write in a clear format and keep sections organized.
In Claims, it learns to back every statement with quotes or sources, reducing hallucinations.
In Decision, it learns to turn the structured reasoning into buy, hold, or sell calls that are market-aware.
Each stage mixes supervised fine-tuning, reinforcement fine-tuning, and filtering of good examples to steadily improve.
"The Impact of Artificial Intelligence on Human Thought"
A big 132 page report.
AI is shifting real thinking work onto external systems, which boosts convenience but can weaken the effort that builds understanding and judgment,
A pattern the paper frames through cognitive offloading and cognitive load theory, and then tracks into social effects like standardized language and biased information flows, and manipulation tactics that target human psychology.
It says use AI to cut noise and routine steps, keep humans doing the heavy mental lifting, and add controls because personalization, deepfakes, and opaque models can steer choices at scale.
🧵 Read on 👇
🧵2/n. ⚙️ The Core Concepts
Cognitive load theory says working memory is limited, so AI helps when it reduces extraneous load and hurts when it replaces the germane load needed to build skill.
In plain terms, let tools clean up the interface and fetch data, but keep people doing the analysis, explanation, and sense‑making.
🧵3/n. 🧰 Offloading and memory
Handing memory, calculation, or choosing to an external aid frees attention now, yet steady offloading can dull recall and critical habits later.
The paper casts web search, note apps, and assistants as a human‑machine transactive memory system, useful when sources are reliable, risky when they are biased or wrong.
That is why trust and verification routines matter as much as speed.
Rude prompts to LLMs consistently lead to better results than polite ones 🤯
The authors found that very polite and polite tones reduced accuracy, while neutral, rude, and very rude tones improved it.
Statistical tests confirmed that the differences were significant, not random, across repeated runs.
The top score reported was 84.8% for very rude prompts and the lowest was 80.8% for very polite.
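For a sense of what "significant, not random" means here, a sketch of a paired test across runs; the per-run accuracies below are made up, and only the two means match the paper:

```python
from scipy import stats

# Hypothetical per-run accuracies over 10 runs; only the means (84.8% very
# rude vs 80.8% very polite) are taken from the paper.
very_rude   = [0.86, 0.84, 0.85, 0.83, 0.86, 0.85, 0.84, 0.85, 0.84, 0.86]
very_polite = [0.81, 0.80, 0.82, 0.79, 0.81, 0.80, 0.82, 0.80, 0.81, 0.82]

# Paired t-test across runs: a small p-value says the tone gap is unlikely
# to be run-to-run noise.
t, p = stats.ttest_rel(very_rude, very_polite)
print(f"t = {t:.2f}, p = {p:.4f}")
```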
They compared their results with earlier studies and noted that older models (like GPT-3.5 and Llama-2) behaved differently, but GPT-4-based models like ChatGPT-4o show this clear reversal where harsh tone works better.
----
Paper – arxiv.org/abs/2510.04950
Paper Title: "Mind Your Tone: Investigating How Prompt Politeness Affects LLM Accuracy (short paper)"
Average accuracy and range across 10 runs for five different tones