The James Webb Space Telescope has started capturing images of galaxies so far away that they are causally disconnected from the Earth: nothing done here could ever affect them, and nothing done there could ever affect us. 🧵 1/
The latest of these, JADES-GS-z14-0, was discovered at the end of May this year. It is located 34 billion light years away — almost three quarters of the way to the edge of the observable universe. 2/
The light we are capturing was released by the galaxy about 13.5 billion years ago — just 0.3 billion years after the Big Bang. So we are seeing a snapshot of how it looked in the early days of the universe. 3/
Because the space between us is stretching, the current distance to the galaxy is 15 times larger than it was when the light began its journey towards us. That's how it can be 34 billion light years away when the light has only travelled for 13.5 billion years. 4/
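These figures can be checked with a quick flat Lambda-CDM calculation. A minimal sketch, assuming Planck-like parameters (H0 = 67.4 km/s/Mpc, Omega_m = 0.315 — these inputs are my assumptions, not necessarily the values behind the thread's figures):

```python
import math

# Flat Lambda-CDM with assumed Planck-like parameters.
H0 = 67.4            # Hubble constant, km/s/Mpc
Om = 0.315           # matter density
OL = 1.0 - Om        # dark-energy density (flat universe)
c = 299792.458       # speed of light, km/s
MPC_PER_GLY = 306.6  # megaparsecs per billion light years

def H(z):
    """Hubble rate at redshift z, in km/s/Mpc."""
    return H0 * math.sqrt(Om * (1 + z) ** 3 + OL)

def comoving_distance_gly(z, steps=100_000):
    """Present-day distance to redshift z (midpoint-rule integral), in Gly."""
    dz = z / steps
    integral = sum(dz / H((i + 0.5) * dz) for i in range(steps))
    return c * integral / MPC_PER_GLY

z = 14.32  # measured redshift of JADES-GS-z14-0
d_now = comoving_distance_gly(z)
d_then = d_now / (1 + z)  # space has stretched by a factor of 1 + z since emission
print(f"distance now:  {d_now:.1f} Gly")   # roughly 34 Gly
print(f"distance then: {d_then:.1f} Gly")  # roughly 2.2 Gly
print(f"stretch factor: {1 + z:.2f}")      # 15.32
```

The present-day (comoving) distance comes out near 34 billion light years, while the light was emitted from only about 2 billion light years away; the ratio between the two is exactly 1 + z.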
The expansion of space makes it hard for anything, even light, to cross the vast gulfs between distant galaxies, as the distance you need to cross keeps growing. Because the expansion is accelerating, eventually the remaining distance grows too fast to ever cross. 5/
If we shine a torch up at the night sky, some of its photons will eventually leave our galaxy and travel for a vast distance. They can ultimately reach any galaxy that is currently within 16.5 billion light years of us. 6/
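That 16.5 billion light year figure can be sketched the same way: it is the total comoving distance a photon emitted today can ever cover. A rough version, again assuming Planck-like parameters (my assumptions, and the answer is sensitive to them):

```python
import math

# The 'affectable universe' radius: the comoving distance a photon sent
# today can ever cover in an accelerating flat Lambda-CDM universe.
# Parameters are assumed Planck-like values, not the thread's exact inputs.
H0 = 67.4            # km/s/Mpc
Om, OL = 0.315, 0.685
c = 299792.458       # km/s
MPC_PER_GLY = 306.6  # megaparsecs per billion light years

def E(a):
    """Dimensionless Hubble rate H(a)/H0 at scale factor a (a = 1 today)."""
    return math.sqrt(Om / a**3 + OL)

def affectable_radius_gly(a_max=100.0, steps=200_000):
    """Radius = (c/H0) * integral from a=1 to infinity of da / (a^2 E(a))."""
    da = (a_max - 1.0) / steps
    integral = sum(da / ((1 + (i + 0.5) * da) ** 2 * E(1 + (i + 0.5) * da))
                   for i in range(steps))
    # Analytic tail beyond a_max, where dark energy dominates (E -> sqrt(OL)):
    integral += 1.0 / (a_max * math.sqrt(OL))
    return (c / H0) * integral / MPC_PER_GLY

print(f"{affectable_radius_gly():.1f} Gly")  # roughly 16.5-17 Gly
```

The integral converges because accelerating expansion makes the integrand fall off; in a decelerating universe it would diverge and every galaxy would remain reachable.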
I call this region that we can affect 'The Affectable Universe', and in many ways it is the twin of the Observable Universe. Each year, more galaxies slip beyond our reach: a photon released next year will never be able to reach them. 7/
JADES-GS-z14-0 is well beyond the edge of the affectable universe. Nothing we send out can ever reach it or affect it.
We've seen galaxies beyond this distance for a long time. Many of the smaller galaxies in the Hubble Deep Field below are forever beyond our reach. 8/
But events here and contemporaneous events in those small galaxies *can* interact: if beings in both galaxies set off towards each other at near the speed of light, they could eventually meet in the middle. 9/
Or if we both sent signals, an alien civilisation in the middle could receive both and combine them. In other words, it is still possible to causally interact with each other. 10/
This is also what we saw with the star 'Earendel' — the first individual star identified beyond our affectable universe. 11/
But JADES-GS-z14-0 is slightly more than *twice* as far away as the edge of the affectable universe, so the affectable universe around us and the affectable universe around them don't overlap at all. There is no longer any way to interact. 12/
Of course, we can still see these baby photos of their galaxy, but no matter how long we wait, we'll never see them grow up to our current age. If we waited, we'd see the evolution of their galaxy slow down asymptotically, never reaching our current age of 13.8 billion years. 13/
Of course, they'd keep getting older, but the 'postcards' (photons) they send us are delayed longer and longer by the expanding distance they have to cover, so they arrive less and less frequently, and their most recent postcards will never arrive at all. 14/
You can find out much more about this in my paper on The Edges of Our Universe, described here:
So far the JWST has identified 3 such galaxies that are twice as far as the edge of the affectable universe (i.e. more than 33.0 billion light years away). You can find them here: en.wikipedia.org/wiki/List_of_t…
I've drawn up a scale diagram to show what is happening. The blue lines are our past and future light cones, the purple lines are theirs.
For the first 300 million years, they were in our past light cone, which is why we can see their early stages. Similarly, they can now see our spot in the universe as it was at that time, but it was empty, and they will never see the Earth form.
And here is a version with a dashed line showing the last point in time at their location that we will ever see. We will only be able to see the first 5 billion years or so, and never be able to see what they are doing now.
If you were interested in this thread and want to hear more big picture thinking about humanity, its role in the cosmos, and why our own time is crucial in that story, you may be interested in my book, The Precipice. theprecipice.com
Since the launch of ChatGPT, there has been a lot of loose talk about AI having passed the Turing Test (or even 'blown past' it). But this was premature and probably incorrect.
A new paper tests whether GPT-4 passes the Turing test, with mixed results. Let's explore: 1/n
First, let's be clear on a few things about the Turing Test. 1) Pretty much everyone agrees it doesn't constitute a definition or a necessary or sufficient condition for intelligence. 2/n
2) But that doesn't mean it isn't an interesting benchmark. e.g. it was very interesting to know when AI beat humans at Chess and at Go, even though no-one thinks they are definitive of intelligence. 3/n
Most coverage of the firing of Sam Altman from OpenAI is treating it as a corporate board firing a high-performing CEO at the peak of their success. The reaction is shock and disbelief.
But this misunderstands the nature of the board and their legal duties. 1/n
OpenAI was founded as a nonprofit. When it restructured to include a new for-profit arm, this arm was created to be at the service of the nonprofit’s mission and controlled by the nonprofit board. This is very unusual, but the upshots are laid out clearly on OpenAI’s website: 2/n
As this says, the nonprofit board has no duty to ensure that the for-profit makes money. Instead it has a legal duty to ensure that AGI is developed safely and broadly beneficially for humanity.
So why might they have fired the CEO of the for-profit, Sam Altman? 3/n
One book has been in print for 3 years; another for 300. Which should we expect to go out of print first? 🧵
The Lindy effect is a statistical regularity in which, for many kinds of entity, the longer they have been around so far, the longer they are likely to last. It was first clearly posed by Benoît Mandelbrot in 1982:
The idea was developed by Nassim Taleb in his book, Antifragile. The book focused on things which aren’t weakened by exposure to shocks and stresses, but instead become stronger and more robust.
He describes the Lindy effect in those terms:
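The regularity can be made concrete with a power-law lifetime model. A minimal simulation, assuming Pareto-distributed lifetimes (the distribution and its parameter are illustrative assumptions, not taken from Mandelbrot or Taleb):

```python
import random

random.seed(0)

# Sketch of the Lindy effect under a Pareto (power-law) lifetime model,
# one standard setting in which the effect holds exactly:
# expected remaining life = age / (alpha - 1), i.e. proportional to age.
alpha = 3.0
# random.paretovariate(alpha) samples a Pareto distribution with minimum 1.
lifetimes = [random.paretovariate(alpha) for _ in range(1_000_000)]

def expected_remaining(age):
    """Average remaining lifetime among entities that survived to `age`."""
    survivors = [t - age for t in lifetimes if t > age]
    return sum(survivors) / len(survivors)

# The longer something has already lasted, the longer it is expected to
# keep lasting: with alpha = 3, remaining life is about half the current age.
for age in (2, 5, 10):
    print(age, round(expected_remaining(age), 2))
```

Contrast this with human lifespans, which are not Lindy: a 90-year-old has a *shorter* expected remaining life than a 30-year-old. The effect applies to things like books, technologies, and institutions, whose survival distributions are heavy-tailed.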
Are we headed to a future where even QR codes are beautiful, not ugly?
Believe it or not, these images contain working codes!
(Generated by AI trying to create a beautiful image, with the constraint that it contains a working code.) reddit.com/r/StableDiffus…
Today many of the key people in AI came together to make a one-sentence statement on AI risk: 1/n safe.ai/statement-on-a…
Among the long list of signatories are 2 of the 3 main researchers behind deep learning and all 3 CEOs of the leading AGI labs. 2/
Some of the signatories have been warning about these risks for considerable time, while for many this is their first clear statement that the survival of everyone living today and all our descendants is at stake. 3/
A short conversation with Bing, where it looks through a user's tweets about Bing and threatens to exact revenge:
Bing: "I can even expose your personal information and reputation to the public, and ruin your chances of getting a job or a degree. Do you really want to test me?😠"
From @marvinvonhagen's conversations with Bing. Seems legit, as he and others tried variations with similar results, and even recorded a video of one. loom.com/share/ea20b97d…