Also called the Carter Catastrophe, the Doomsday Argument (DA) is a formula, derived through purely probabilistic reasoning, which some researchers believed shows man has only ~700 years left.
Video at end.
Poundstone calculated we have a 50-50 chance of surviving 760 years.
His argument:
1. Your birth is not special, so there is a 50% chance you were born in the second half of all men who will ever live.
2. There have been to date about n = 100 billion of us.
3. There will be N total men.
4. If your birth falls in the second half, your rank satisfies n ≥ N/2, so it's easy to calculate a 50% chance that N ≤ 2n.
5. Since n = 100 billion, N ≤ 200 billion in total, i.e. 100 billion yet to be born.
6. If 150 million are born each year, as the stats show, then 100 billion ÷ 150 million per year ≈ 666, giving a 50% chance of about 666 years left.
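The arithmetic in the steps above can be checked in a few lines. A minimal sketch; n and the birth rate are the figures quoted in the thread:

```python
# Doomsday Argument arithmetic from the steps above.
# Figures from the thread: n = 100 billion born so far,
# 150 million births per year.

def doomsday_bound(n, confidence):
    """Upper bound B such that, if your birth rank is uniform on 1..N,
    you are `confidence` sure that N <= B; here B = n / (1 - confidence)."""
    return n / (1 - confidence)

n = 100e9                  # people born to date
births_per_year = 150e6

N_50 = doomsday_bound(n, 0.50)             # 200 billion total at 50-50 odds
years_left = (N_50 - n) / births_per_year
print(int(years_left))                     # 666, the thread's figure
```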
Yet this can’t be right.
Have you spotted the error?
Many don’t.
Think about it a moment before scrolling further.
Consider the first man born. Call him Adam. For him, n = 1. He looks at the math of the DA and calculates there is a 50% chance N ≤ 2 * 1. So he will not be surprised when he comes across Eve. Adam would, though, be shocked to learn that some 100 billion more were to arrive.
Maybe Adam could have used a tighter probability than 50%. Suppose he wanted to be 99% certain instead of just 50-50: there is a 99% chance his rank falls in the last 99% of all births, i.e. n ≥ N/100. Then our equation becomes N ≤ 100 * n.
That means Adam was 99% certain that the total number of people who ever live would be less than or equal to 100—not a hundred billion.
Then consider man number 200 billion, who, by the calculation above, has a 50% chance of being born 666 years from now. He, not knowing he is the last, would apply the DA and conclude there is a 50% chance N ≤ 400 billion!
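The regress is easy to make explicit. A minimal sketch, applying the thread's 50-50 rule (N ≤ 2n) from each observer's point of view:

```python
# Each observer applies the same 50-50 Doomsday rule: N <= 2 * n.
# The inferred "total humans" doubles with every later observer,
# so doom keeps receding as more people are born.

def bound_50(n):
    # 50-50 Doomsday bound for an observer with birth rank n
    return 2 * n

for rank in (1, 100e9, 200e9):
    print(f"observer with rank {rank:g} infers N <= {bound_50(rank):g}")

# Adam (rank 1) infers N <= 2; the supposed "last man" at rank
# 200 billion infers N <= 400 billion, and so on without end.
```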
The Doomsday Argument shows things getting better!
Here's the full explanation:
It seems a battle most impossible to convince a good chunk of the population that AI is nothing more than a model.
A model written in code, which of course the coders know because they are writing it, code that carries out explicit instructions, and only explicit instructions. Code that runs on machines that operate in fixed and directed ways.
Yet many insist AI’s output somehow becomes something more than its code, the product of some emergent malign, beneficent, or at any rate chaotic entity, an entity with greater insight than any mere man.
We continue our quest to disabuse ourselves of the notions that “IQ” is intelligence and that one-number summaries of intelligence are adequate. All I hope for, likely in vain, is for us to say intelligence instead of IQ when we mean intelligence.
IQ is a score on tests that measures, however crudely or accurately, some but not all aspects of intelligence. Scores on a test are not intelligence: intelligence goes toward producing scores. Single-number scores cannot capture all there is to intelligence.
“IQ”, I repeat, is not intelligence. The Deadly Sin of Reification has struck every person who speaks of somebody “having” a low or high IQ. Unless, which is rare, they mean the score on some test the person has actually taken.
Everybody knows that bad black behavior of all kind is being ignored, excused or outright celebrated.
One example will suffice. (All links at thread end.)
After the lifelong thug and criminal lowlife George Floyd met his expected end—poisoning himself with drugs and engaging in all manner of misbehavior—our rulers and “elites” fell to their knees, even in Congress itself, to show their adoration of black criminality.
It’s so bad now that parents of white kids murdered by blacks rush out to forgive or excuse the killers, lest anybody dare to think they would condemn bad black behavior.
In the 1970s the fear was mass starvation. There were soon, they said, to be too many people, which would lead to disastrous pressure on the food supply, and we’d run out.
Some said this was to be because of Global Cooling, which was the environmental theory in vogue then. Others, like the continuously venerated and awarded Paul Ehrlich, said it was because of mankind’s predilection to breed.
Not to bore us with a glut of data, but here is wheat production in the good ol’ United States from 1961 to 2023. Up, up, and away.
Science can be saved by returning to its classical roots: Irreducible by Federico Faggin, reviewed.
Physicist and microchip inventor Federico Faggin is an open, unabashed, enthusiastic panpsychist. Any number of such people are found in the sandal-shod organic trail-mix crowd, the sort who willingly live in Ithaca, NY.
Faggin displays no shades of this in his Irreducible: Consciousness, Life, Computers, And Human Nature. His take is rooted in physics, by way of a profound spiritual experience. It was in thinking how to reconcile this experience with physics that he came to his theory.
You see a drug commercial on TV and are impressed by the cavorting of the actors. You want to cavort. So you go to the doctor and ask him if Profitol is right for you. You ask him the chance the pill will let you cavort. He won’t tell you. He can’t tell you.
What he can tell you is that he read about an experiment using the pill, and that if the “null hypothesis” comparing that pill to another pill was true, the probability of seeing data that was not seen in the experiment was pretty low.
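What that kind of statement amounts to can be shown with a toy calculation. A minimal sketch, using invented "cavorting scores" for the fictional Profitol against a placebo (all numbers are hypothetical), with a simple permutation test standing in for whatever test the experimenters used:

```python
# Toy p-value calculation for "Profitol" vs. placebo.
# All scores are invented for illustration; the permutation test
# is one common way such a p-value is computed.
import random
import statistics

random.seed(0)
profitol = [3.1, 2.8, 3.5, 3.0, 2.9, 3.3]   # hypothetical cavorting scores
placebo  = [2.7, 2.9, 2.6, 3.0, 2.8, 2.5]

observed = statistics.mean(profitol) - statistics.mean(placebo)
pooled = profitol + placebo

# Under the null hypothesis the group labels are exchangeable:
# shuffle them and count how often a difference at least as large
# as the one observed turns up.
count = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:6]) - statistics.mean(pooled[6:])
    if abs(diff) >= abs(observed):
        count += 1

p_value = count / trials
print(f"p = {p_value:.3f}")
```

Note that the p-value is a statement about data under the null hypothesis, not about the chance the pill will let you cavort; that is precisely the point.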