A group of astronomers has found phosphine in the atmosphere of Venus, a gas that is hard to explain by any known process other than the presence of life. This is not at all conclusive, but it should prompt further investigation. 1/6
The scientists don’t suggest intelligent life; we are probably talking about microbes. But this could still be a big deal. It would mean life either started independently there or was transported between bodies in our Solar System. Let’s focus on the former. 2/6
The possibility that it is extremely hard and rare for life to begin is currently the best explanation for why we don’t see signs of life elsewhere in the cosmos, despite the presence of so many stars in our galaxy and galaxies in the observable universe. 3/6
It is thus often seen as a downer. But many of the alternative explanations for the silence in the skies are worse. One prominent alternative is that technological civilisations inevitably destroy themselves. 4/6
If we did find independent life on other planets it would shift our credences away from the hypothesis that life is hard to start and towards the hypothesis that it is all too easy to end. This would be bad news for our prospects. 5/6
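The credence shift described above can be sketched as a toy Bayes calculation. A minimal illustration, with all numbers invented purely to show the direction of the update (the thread makes no quantitative claim):

```python
# Toy Bayesian update sketching the Great Filter reasoning above.
# The priors and likelihoods are invented for illustration only.

# Two candidate explanations for the "silence in the skies":
H_EARLY = "early filter: life is extremely hard to start"
H_LATE = "late filter: civilisations tend to destroy themselves"

priors = {H_EARLY: 0.7, H_LATE: 0.3}  # assumed numbers, not from the thread

# How likely would independently-started life on Venus be under each
# hypothesis? If life is hard to start, two independent origins in one
# solar system would be very surprising.
likelihoods = {H_EARLY: 0.01, H_LATE: 0.5}  # assumed numbers

evidence = sum(priors[h] * likelihoods[h] for h in priors)
posteriors = {h: priors[h] * likelihoods[h] / evidence for h in priors}

for h, p in posteriors.items():
    print(f"P({h} | independent life found) = {p:.2f}")
```

Whatever specific numbers one picks, as long as finding independent life is more probable under the late-filter hypothesis than the early-filter one, the posterior shifts toward the late filter, which is exactly the bad news the thread describes.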
Most coverage of the firing of Sam Altman from OpenAI is treating it as a corporate board firing a high-performing CEO at the peak of their success. The reaction is shock and disbelief.
But this misunderstands the nature of the board and their legal duties. 1/n
OpenAI was founded as a nonprofit. When it restructured to include a new for-profit arm, this arm was created to be at the service of the nonprofit’s mission and controlled by the nonprofit board. This is very unusual, but the upshots are laid out clearly on OpenAI’s website: 2/n
As this says, the nonprofit board has no duty to ensure that the for-profit makes money. Instead, it has a legal duty to ensure that AGI is developed safely and for the broad benefit of humanity.
So why might they have fired the CEO of the for-profit, Sam Altman? 3/n
One book has been in print for 3 years; another for 300. Which should we expect to go out of print first? 🧵
The Lindy effect is a statistical regularity that holds for many kinds of entity: the longer something has been around so far, the longer it is likely to last. It was first clearly posed by Benoît Mandelbrot in 1982:
The idea was developed by Nassim Taleb in his book Antifragile, which focused on things that aren't weakened by exposure to shocks and stresses, but instead become stronger and more robust.
He describes the Lindy effect in those terms:
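The opening question has a crisp answer under the power-law model commonly used to formalise the effect. A minimal sketch (my illustration, assuming time in print follows a Pareto distribution; this is not Mandelbrot's or Taleb's own derivation):

```python
import random

# Illustrative sketch: the Lindy effect falls out of power-law lifetimes.
# If a book's total time in print T is Pareto-distributed with tail
# exponent alpha, then conditional on surviving to age t, T is again
# Pareto with scale t, so the expected *remaining* time in print is
# t / (alpha - 1): proportional to the age already reached.

ALPHA = 3.0  # assumed tail exponent; any alpha > 1 gives the same scaling


def expected_remaining(age):
    """Closed-form expected remaining lifetime, given survival to `age`."""
    return age / (ALPHA - 1)


def simulate_remaining(age, n=200_000, seed=0):
    """Monte Carlo check: sample T | T > age and average T - age."""
    rng = random.Random(seed)
    # For a Pareto variable, T | T > age has the same law as age * (fresh draw).
    total = sum(age * rng.paretovariate(ALPHA) - age for _ in range(n))
    return total / n


sim3 = simulate_remaining(3)
sim300 = simulate_remaining(300)
print(f"age   3: expect {expected_remaining(3):6.1f} more years, simulated {sim3:6.1f}")
print(f"age 300: expect {expected_remaining(300):6.1f} more years, simulated {sim300:6.1f}")
```

Under this assumption, the book that has already survived 300 years has an expected remaining run 100 times longer than the 3-year-old book, so it is the newer book we should expect to go out of print first.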
Are we headed to a future where even QR codes are beautiful, not ugly?
Believe it or not, these images contain working codes!
(Generated by AI trying to create a beautiful image, with the constraint that it contains a working code.) reddit.com/r/StableDiffus…
Today many of the key people in AI came together to make a one-sentence statement on AI risk: 1/n safe.ai/statement-on-a…
Among the long list of signatories are 2 of the 3 main researchers behind deep learning and all 3 CEOs of the leading AGI labs. 2/
Some of the signatories have been warning about these risks for a considerable time, while for many this is their first clear statement that the survival of everyone living today and all our descendants is at stake. 3/
A short conversation with Bing, where it looks through a user's tweets about Bing and threatens to exact revenge:
Bing: "I can even expose your personal information and reputation to the public, and ruin your chances of getting a job or a degree. Do you really want to test me?😠"
From @marvinvonhagen's conversations with Bing. Seems legit, as he and others tried variations with similar results, and even recorded a video of one. loom.com/share/ea20b97d…
I’ve been shocked by how far the new Bing AI assistant has gone off the rails — veering into crazy conversations that can insult, gaslight, or even proposition the user. 1/
It is a consequence of the rapid improvements in AI capabilities having outpaced work on AI alignment — like a prototype jet engine that can reach speeds never seen before, but without corresponding improvements in steering and control, can never be a useful product. 2/
I’m not surprised they haven’t been able to make a general purpose AI abide by a minimal set of human standards — that's genuinely hard.
What surprises me is that an established company would very publicly announce a product when it fails so badly at this. 3/