Dan Elton
Problems are solvable - so let's solve some!
Feb 18, 2023 15 tweets 5 min read
1/ Trying to signal boost w a short 🧵

Amateur Go player Kellin Pelrine can consistently beat "KataGo", an AI system that was once classified as "strongly superhuman".

Strikingly, the strategies used to beat the AI do not work against other amateur human players.
ft.com/content/175e53…

2/ I believe this saga started when @MIT researchers found Go problems that are easy for humans, hard for AI.

In this example, black has a guaranteed win if it simply chains together the black columns. Yet KataGo, playing against itself, loses as black 60% of the time.
Jan 28, 2023 5 tweets 2 min read
When I first saw @bryan_johnson's Blueprint, I thought it looked absurd.

However, I now find it extremely inspiring. He's showing us what is possible in the realm of human health. Nothing like it has been done before. I find it more inspiring than, e.g., winning gold at the Olympics. Blueprint: blueprint.bryanjohnson.co

Two days ago Bryan released a 1 hr 44 min video overview of all the measurements etc. he's doing.
On the surface, spending so much money on this looks selfish, but he's actually doing a huge public service with this incredible experiment.
Dec 1, 2022 8 tweets 3 min read
RE:"Human-level Diplomacy was my fire alarm" and the people who upvoted it on Less Wrong:

I've been wanting to highlight this excellent post by Ernest Davis about how Cicero works:
garymarcus.substack.com/p/what-does-me…

Cicero is a specialized narrow AI, actually ~7 modules linked together! One may wonder, though, whether the same system could be trained to do similar tasks. Yes, but in general not easily: it took them three years of work and a massive training dataset of tens of thousands of online games to create it. Even then, they had to have experts annotate "intent" data.
Dec 1, 2022 15 tweets 6 min read
This piece is making two major arguments:

1. Billionaires are funding EA. Billionaires are bad, therefore EA is bad.

2. EA AI safety work is promoting scaling & developing LLMs. LLMs are dangerous, therefore EA AI safety is dangerous.

Both of these are really bad arguments! 🧵⬇️

Regarding the first argument: it makes sense that your average Joe isn't donating to AI safety. There are many more salient issues, like local homelessness or the latest natural disaster.

EA is all about using abstract philosophical reasoning to find neglected cause areas.
Sep 4, 2021 5 tweets 2 min read
I'm calling on @Twitter to streamline threading & introduce an AI-powered autothread feature, so people can migrate their long-form commenting from Facebook to Twitter. The idea is to use AI to automatically chunk text into ~280-character pieces, with an option to display them seamlessly (a rough sketch of the chunking idea is below).

I find Facebook almost impossible to use, to the extent that I sometimes wonder if I'm going crazy. Yesterday I tried to post something three times (once on my desktop and twice on my phone) and ended up posting the same thing twice. I like pages to load fast, too...
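For concreteness, here is a minimal sketch of what such an autothread chunker could look like. Everything in it (the function name, the "(i/n)" counter format) is hypothetical and not an actual Twitter feature or API; only the 280-character limit comes from Twitter.

```python
# Minimal sketch of the "autothread" idea: greedily pack a long post into
# tweet-sized chunks at word boundaries, appending an "(i/n)" counter.
# Purely illustrative; function name and counter format are hypothetical.
import textwrap

TWEET_LIMIT = 280

def autothread(text: str, limit: int = TWEET_LIMIT) -> list[str]:
    reserve = len(" (99/99)")  # leave room for the thread counter
    chunks = textwrap.wrap(
        text,
        width=limit - reserve,
        break_long_words=False,
        break_on_hyphens=False,
    )
    n = len(chunks)
    return [f"{chunk} ({i}/{n})" for i, chunk in enumerate(chunks, start=1)]

if __name__ == "__main__":
    long_post = "This is a stand-in for a long Facebook-style comment. " * 20
    for tweet in autothread(long_post):
        print(len(tweet), "|", tweet[:60])
```

A smarter version would split at sentence boundaries rather than arbitrary word breaks, but the greedy word-packing above already keeps every chunk under the limit.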
Sep 2, 2021 5 tweets 2 min read
I have a new @medRxivpreprint that should be of interest to the #longevity community. We show how to predict risk of cardiovascular disease & all-cause mortality from a routine abdominal CT using deep learning, with accuracy on par with the CAC score. Thread cont.: medrxiv.org/content/10.110…

In this case the scans were taken for CT colonography. The accuracy of the risk prediction is on par with that obtained from a CAC score from a dedicated cardiac CT scan, but without the additional cost & radiation dose. CT scans contain a wealth of health data which is currently mostly unutilized.
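For readers curious what this kind of pipeline looks like in outline, here is a purely hypothetical sketch of a 3D CNN producing a risk score from a CT volume, in PyTorch. The architecture, input size, and preprocessing are assumptions for illustration and do not reflect the actual model in the preprint.

```python
# Hypothetical sketch: a tiny 3D CNN that maps a CT volume to a risk score
# in [0, 1]. Not the model from the preprint; illustration only.
import torch
import torch.nn as nn

class RiskNet3D(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),          # global average pooling
        )
        self.head = nn.Linear(32, 1)          # single logit: event risk

    def forward(self, ct_volume):             # ct_volume: (batch, 1, D, H, W)
        x = self.features(ct_volume).flatten(1)
        return torch.sigmoid(self.head(x))

model = RiskNet3D()
dummy_ct = torch.randn(1, 1, 64, 128, 128)   # stand-in abdominal CT volume
print(model(dummy_ct))                        # predicted risk in [0, 1]
```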
Sep 2, 2021 4 tweets 2 min read
Any #LongCOVID study without a control group is trash. (I've seen several)

Many of the top long COVID symptoms are very common [fatigue, anxiety, feeling worried/sad (40% of adolescents), headache, dizziness, shortness of breath].

Many symptoms can also be psychosomatic. And the .@FT headline is completely misleading...

I looked at the paper (preprint) (researchsquare.com/article/rs-798…) and I can't find anywhere that "1 in 7 have Long COVID" (~14.5%) is stated as the conclusion.
Jul 31, 2020 6 tweets 2 min read
Some failure modes for #GPT3. It seems GPT-3 lacks common-sense explanatory theories and therefore can't extrapolate to new situations. I.e., it lacks "good explanations" in the sense of @DavidDeutschOxf, which he defines (a bit vaguely) as "hard-to-vary". 1/n
rationalconspiracy.com/2020/07/31/fun…

Explanatory theories don't have to be as all-encompassing and as difficult to apply as the laws of physics. Rather, they can be "common sense understandings" ("background knowledge") which are generalizable. An example would be that liquids cause things to be wet. 2/n