Wild set of stories in Canada recently: most famous Indigenous author, 2nd most famous writer, most famous singer, most famous director, a federal cabinet minister, a big university president, famous law professor...all turned out to be faking their Native ancestry. 1/3
In the case of King, he's getting favorable press about how "devastated" he is to have learned his absentee father wasn't actually Native. But online you can find discussions going back a dozen years from Cherokee genealogists noting he ignored them even back then. 2/3
The "why" seems to be "it's a more fun identity". I see younger folks saying "who would pretend to be a member of a downtrodden group?" There's of course been discrimination, but...Native folks have been cool my whole life! Even Dylan pretended to be Native to be "authentic". 3/3
(Take care: huge jump in Native identity on censuses in US/Canada since 2010 should be taken with a massive grain of salt. Also, I love that my 10th great grandmother is Wampanoag, so somehow at 1/4096th Native I beat Buffy. Don't worry, I don't claim it!) history.vineyard.net/daggett.htm#jo…
Another two weeks in various parts of India. I have said before that India feels like China 2005, where I briefly worked, in ambition and growth. However, three major issues need to be solved. 1/5
First, the License Raj mentality is still all over government services. Almost every govt interaction is poor from the first step (joke of an e-visa site, need to scan passport for Wi-Fi, blink your eyes on video to get a SIM, queue at office 1 then hand token to office 2...) 2/5
If govt in India has too many rules, individuals have too few. Anarchy everywhere, from supply chain fraud to pollution control to, of course, the roads. China 2005 was not like this. Relational contracts - that is, trust - are essential for growth. 3/5
@joshgans and I did an internal talk on AI for research. Mostly demos, so no slides, but broadly: 1) Research should be efficient, open & replicable. 2) AI helps with all three. 3) Always use the best model. 4) Structure your processes/tools/etc. so you can continue to do 3. 1/15
What I mean by "structure your process/tools/etc" is first that everything you do - code, writing, editing, collabs, slides - should have plain text as the substrate. This means no matter what, you can always use every AI tool now and in the future to interact with it. 2/x
And second, "structure your process/tools/etc" means training yourself on how to complement AI. E.g., if you don't know how to peer review code (or worse yet have never seen a diff b/c you write it all yourself), you are not setting yourself up to use AI in your workflow. 3/x
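For those who have never seen a diff: here is a minimal sketch of what one looks like, using Python's stdlib difflib to produce the unified format that code review tools display. The toy `area` function and filenames are my own illustration, not from the thread.

```python
import difflib

# Two versions of the same tiny file: "before" an AI edit and "after".
before = "def area(r):\n    return 3.14 * r * r\n"
after = "import math\n\ndef area(r):\n    return math.pi * r ** 2\n"

# unified_diff yields the familiar ---/+++/@@ hunk format line by line.
diff_text = "".join(difflib.unified_diff(
    before.splitlines(keepends=True),
    after.splitlines(keepends=True),
    fromfile="v1.py", tofile="v2.py",
))
print(diff_text)
```

Lines prefixed with `-` were removed and lines with `+` were added; reviewing AI-generated changes is mostly reading hunks like this one.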
AI economists and AI researchers: this is *excellent*. Details below, but as I feel like I've said in every talk on this topic since my slides said "GPT-2", 1) AI technical capabilities are better and improving quicker than you think, 2) impact on economy *much* slower. 1/13
Hinton famously said (at our event!) a decade ago to stop training radiologists. Today, radiologist hiring is very strong, w/ avg income of US$520k. What went wrong? As a technical matter, Hinton was right - anomalies on images are very easy for AI today (this article undersells that!). 2/13
But human prediction uses things like gender and race - AI often can't for legal reasons. Humans work with humans and incentives are set for that world - AI as a tool often causes shirking. 3/13
The "reasoning doesn't exist" Apple paper drives me crazy. Take logic puzzle like Tower of Hanoi w/ 10s to 1000000s of moves to solve correctly. Check first step where an LLM makes mistake. Long problems aren't solved. Fewer thought tokens/early mistakes on longer problems. 1/11
But the fundamental problem is that there are foundation *models*, and then there is *the way we train these models to cut off thought*. Imagine you say "hello". Try it on o3 and 4o. o3 takes longer because it "thinks" about other things you could mean. Unnecessary in this case. 2/x
ALL LLMs just predict the next token (~word) based on the distribution of words in training data plus reinforcement learning. "Thinking models" just add RL on "thinking tokens" to check assumptions more carefully before responding. If you don't think that can create "reasoning", fine. 3/x
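The "predict the next token from the training distribution" core can be sketched with a toy bigram model. This is a deliberate oversimplification for illustration only: real LLMs are transformers over subword tokens, and this toy has no RL "thinking" step.

```python
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the rat".split()

# Count bigrams: the empirical distribution of next word given current word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_token(word, rng=random):
    """Sample the next word in proportion to its training frequency."""
    words, weights = zip(*counts[word].items())
    return rng.choices(words, weights=weights)[0]

# "the" is followed by cat (2x), mat (1x), rat (1x) in the corpus,
# so "cat" is the most likely continuation but not the only one.
print(next_token("the"))
```

Reinforcement learning then reweights which continuations the model prefers, including continuations made of "thinking tokens".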
Gobsmacked that the Toner-Rodgers neural network for scientific discovery paper has been withdrawn. This was a very strong claim about how 2023 era AI was improving the speed of science. I don't have details on the issues, but wanted to pass along this retraction. 1/2
That said, I am personally aware of 2025 automated lab scientific progress using LLMs and other techniques, so don't update too much in the other direction! MIT note here: economics.mit.edu/news/assuring-…
A colleague points out this paper was also a rare claim that AI helped top workers pull away more than it helped bottom workers catch up. I believe this is true in many real-world use cases, but to be intellectually honest, we no longer have good empirical evidence for it in an important setting.
On the NBA and economic theory: dynamic mechanism design tells us how to redo the draft. A good rule of thumb (due to Reny?) is that stochastic mechanisms like draft lotteries usually aren't best. How to prevent tanking while still having worst team get top pick? Simple! 1/x
Theory is beautiful. Myerson's revelation principle: look only at mechanisms that get teams to reveal their quality "type" (w/ dynamic caveats!). Better teams will get some information rents, else they tank. Let's find the mechanism with no tanking that maximizes the prob the worst team gets the top pick. 2/x
Let V be the value of winning it all, W<<V the value of getting the top pick, p(n,w,l) the prob of winning it all after n games with w wins and l losses. True quality is a random walk over the season from a baseline unknown to the mech designer. It costs 0 to give up in a game, c to try. If both teams try, the win prob is based on true quality. 3/x
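Given this setup, a team's tanking decision reduces to comparing expected payoffs. A sketch with hypothetical numbers (the function name and probability values are my own illustration, not from the thread):

```python
def should_tank(p_try, p_tank, q_try_pick, q_tank_pick, V, W, c):
    """Tank iff expected payoff from giving up exceeds that from trying.

    p_try / p_tank:            prob of winning the title if trying / tanking
    q_try_pick / q_tank_pick:  prob of landing the top pick if trying / tanking
    V, W, c:                   title value, top-pick value, per-game effort cost
    """
    payoff_try = p_try * V + q_try_pick * W - c   # trying costs c
    payoff_tank = p_tank * V + q_tank_pick * W    # giving up costs 0
    return payoff_tank > payoff_try

# An eliminated team: title prob ~0 either way, but losing raises its
# lottery odds. With W large relative to c, tanking dominates.
print(should_tank(p_try=0.0, p_tank=0.0,
                  q_try_pick=0.10, q_tank_pick=0.25,
                  V=100.0, W=20.0, c=1.0))
```

The mechanism design problem is to choose the pick probabilities q so that this inequality never holds for any team at any point in the season, while keeping q as favorable as possible for the worst team.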