I'm using GPT-5 Pro to find me the best stocks and startup investments.
Asked it to use modern portfolio theory and size the investments.
—Top Privates [+9.7%]: Databricks, Stripe, Anthropic, SpaceX
—Top Publics [+14.2%]: Nvidia, TSMC, Microsoft, Meta
Just put $1000 into the stocks!
Prompt: "Check all public / private stock market companies and tell me what I should invest in from first principles reasoning. You have $1000.
Please do deep research and present rationale for each investment. Each one should have a target price and expected value. Use advanced math for trading. Draw research from authoritative sources like research and unbiased pundits. Size my bets properly and use everything you know about portfolio theory. Corroborate each decision with a list of predictions about those companies.
Your goal is to maximize expected value. Make minimum 5 investments. Write it in a table."
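For context on what "size my bets ... portfolio theory" could actually mean in practice, here is a minimal sketch of Markowitz-style sizing over a $1000 budget. Every number below (expected returns, covariances, the risk-free rate) is a placeholder I made up for illustration, not GPT-5 Pro's output:

```python
import numpy as np

# Hypothetical inputs: expected annual returns and covariance of returns.
# All numbers are placeholders for illustration, not GPT-5 Pro's estimates.
tickers = ["NVDA", "TSMC", "MSFT", "META", "private basket"]
mu = np.array([0.18, 0.14, 0.12, 0.13, 0.25])   # expected returns
cov = np.array([
    [0.090, 0.030, 0.020, 0.025, 0.010],
    [0.030, 0.060, 0.015, 0.020, 0.008],
    [0.020, 0.015, 0.040, 0.018, 0.006],
    [0.025, 0.020, 0.018, 0.070, 0.007],
    [0.010, 0.008, 0.006, 0.007, 0.150],
])
risk_free = 0.04

# Classic tangency-portfolio weights: w ∝ Σ⁻¹(μ − r_f), normalized to sum to 1.
raw = np.linalg.solve(cov, mu - risk_free)
weights = raw / raw.sum()

budget = 1000
for ticker, w in zip(tickers, weights):
    print(f"{ticker:15s} {w:6.1%}  ${budget * w:7.2f}")
```

In practice the hard part is estimating the expected returns and covariances in the first place, which is exactly what the prompt delegates to the model.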
This follows my previous experiment on Polymarket, which seemingly had ~2-4x the expected returns!
And yes, I know they’ve always reported on the 477 denominator, but that’s NOT “SWE-Bench Verified”; it’s an entirely different metric, “OpenAI’s subset of SWE-Bench Verified”, and that number can’t be compared.
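To make the denominator point concrete, a quick sketch (the resolved count below is made up): the same number of resolved tasks yields a visibly different score depending on whether you divide by the full 500-task SWE-Bench Verified set or by the 477-task subset.

```python
resolved = 350                 # hypothetical number of resolved tasks

full_set = 500                 # SWE-Bench Verified
openai_subset = 477            # OpenAI's subset (23 of the 500 tasks excluded)

print(f"Score over {full_set}: {resolved / full_set:.1%}")            # 70.0%
print(f"Score over {openai_subset}: {resolved / openai_subset:.1%}")  # 73.4%
```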
Microsoft's official compensation bands for engineers just leaked.
We often forget that you can be a stable, high-performing engineer with great work-life balance, be a BigTech lifer, and comfortably retire with a net worth of ~$15M!
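As a rough, hedged sanity check on the ~$15M figure (the savings rate, return, and career length below are my assumptions, not anything from the leaked bands), steady saving at a senior BigTech comp level compounds into that range over a career:

```python
# Back-of-envelope compounding; every input is an illustrative assumption.
annual_savings = 250_000    # cash + vested stock saved per year (assumption)
annual_return = 0.07        # average market return (assumption)
years = 25                  # length of a BigTech career (assumption)

net_worth = 0.0
for _ in range(years):
    net_worth = net_worth * (1 + annual_return) + annual_savings

print(f"After {years} years: ${net_worth / 1e6:.1f}M")   # ≈ $15.8M
```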
The best open-source AI model just dropped a detailed report on how it was trained, a rare resource for students, given that no frontier lab is publishing theirs!
Kimi K2's estimated total training cost is ~$20-30M, roughly in line with its API pricing: $0.60/M input tokens, $2.50/M output tokens.
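A hedged back-of-envelope on how that pricing relates to the training bill (the input/output mix and the break-even framing are my assumptions, not Kimi's actual traffic or economics):

```python
# Illustrative arithmetic only; the input/output token mix is an assumption.
training_cost = 25e6            # midpoint of the ~$20-30M estimate
price_in = 0.60 / 1e6           # $ per input token
price_out = 2.50 / 1e6          # $ per output token

in_per_out = 4                  # assume 4 input tokens per output token
blended = (in_per_out * price_in + price_out) / (in_per_out + 1)

tokens_to_break_even = training_cost / blended
print(f"Blended price: ${blended * 1e6:.2f}/M tokens")                   # $0.98/M
print(f"Tokens to recoup training: {tokens_to_break_even / 1e12:.0f}T")  # ~26T
```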
10 highlights:
1. Generating tokens by rewriting high-quality tokens with LLMs in pre-training
2. Mining 3,000+ MCPs and using LLM-generated personas to improve agentic tool calling
3. 10,000 parallel Kubernetes sandboxes to solve GitHub issues
4. New scaling laws for sparsity in MoE models
5. RL with verifiable rewards (RLVR) for math, coding, and safety, using a self-critique model with a long-reasoning penalty that pushes toward direct, decisive answers (see the reward sketch after this list)
6. Training recipe of 4k sequences, then 32k, then 128k with YaRN
7. High temperature during initial RL training to promote exploration
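For highlight 5, here is a minimal sketch of what a verifiable reward with a long-reasoning penalty could look like. The budget, penalty weight, and verifier are placeholders, not Kimi K2's actual recipe:

```python
def rlvr_reward(answer: str, reasoning_tokens: int, verifier,
                length_budget: int = 2048, penalty_per_token: float = 1e-4) -> float:
    """Verifiable reward with a long-reasoning penalty (illustrative, not K2's recipe)."""
    # 1.0 if the verifier (unit tests, a math checker, a safety rubric...) accepts the answer.
    correctness = 1.0 if verifier(answer) else 0.0
    # Penalize reasoning tokens beyond the budget to encourage direct, decisive answers.
    overage = max(0, reasoning_tokens - length_budget)
    return correctness - penalty_per_token * overage

# Toy usage: correct answer, but 952 tokens over budget.
print(rlvr_reward("42", reasoning_tokens=3000, verifier=lambda a: a.strip() == "42"))  # ≈ 0.9048
```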