What's the earliest you could have predicted that Python would be one of the top five programming languages? (Round to nearest year.)
I’m going to say 1995, but hear me out. (1) We already knew that attacking problems with computers, in a sort of ad hoc fashion, was going to be a big deal. *All* problems had a computational side, not just a special class.
(2) We knew that domain knowledge mattered—that the more direct access the “ordinary” practitioner had, the greater the payoff. No separate “computer group”.
(3) Moore’s law was holding up. Interpreted languages are much easier to work with, but they’re also much, much slower. It would take another decade until Python could attack a standard “real” problem, but it would come.
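To put a rough number on that interpreter penalty (a minimal sketch I'm adding; the numpy comparison is illustrative and the exact ratio will vary by machine and workload):

```python
import time

import numpy as np

N = 10_000_000

# Pure-Python loop: every iteration pays interpreter dispatch costs.
start = time.perf_counter()
total = 0
for i in range(N):
    total += i
py_seconds = time.perf_counter() - start

# The same reduction pushed down into compiled code via numpy.
arr = np.arange(N, dtype=np.int64)
start = time.perf_counter()
np_total = arr.sum()
np_seconds = time.perf_counter() - start

print(f"pure Python: {py_seconds:.3f}s  numpy: {np_seconds:.4f}s  "
      f"ratio ~{py_seconds / np_seconds:.0f}x")
```

On typical hardware the gap is one to two orders of magnitude—which is exactly the gap a decade of Moore's law could absorb.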
Predicting that Python (and similar) would dominate is *much* easier than predicting that ’90s neural nets would pay off in C21.
The best line on the rationalists, said to me once: these are the people who read the New York Times and took it seriously. What follows is the response to that disappointment...
Effective Altruism is the most obvious (the Times says first-world philanthropy will save the third world, and yet it hasn’t happened); multiple different diagnoses for Neoreaction, I think.
A lot of people want to trace things back to SV culture, but this seems wrong. E.g., early Paul Graham essays, which are mostly about how to make money on startups, or inside baseball about code.
Thoughts on tails and Hayekian (neo) liberal economics. The standard story is that markets “learn”—absorbing trades and propagating the implicit information by setting prices. It’s sufficiently intelligent to bring me tea from Japan. But tails *can’t* be learned...
They don’t occur often enough to train the reinforcement algorithm that the price system runs. And when they do occur, their surface logic is unrepeatable (GameStop, really?).
Tails can, however, be reasoned about. We can think about what would be out of scope, even if we can’t anticipate its details. We can ask if this or that profit is in some sense “unreasonable”—emphasis on reason, not inference.
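A toy illustration of the "can't be learned" point (my sketch, assuming a Pareto-tailed world; the parameters are illustrative): the empirical frequency of a tail event swings wildly from run to run, even at sample sizes a price-setting process would be lucky to see within a single regime.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 1.5         # tail index: heavy-tailed, infinite variance
threshold = 100.0   # the "tail event" the price system would need to learn
true_p = threshold ** -alpha  # exact P(X > t) for a Pareto(alpha), x_min = 1

for n in (1_000, 100_000):
    # five independent "training runs" of n observations each
    runs = [float((rng.pareto(alpha, n) + 1 > threshold).mean())
            for _ in range(5)]
    print(f"n={n:>7}  true p={true_p:.4f}  empirical estimates={runs}")
```

At n = 1,000 the estimates bounce between zero and several multiples of the truth; no gradient signal survives that noise.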
Someone tosses a coin ten times; it comes up heads every time. What's the probability it comes up heads on the next toss? (Pretty darn high—part of @nntaleb's work is unprogramming you from your high-school rules of thumb.) Now consider the (related) Gambler's fallacy...
In this case, it's a theory about compensation: the worse one's luck, the more likely a reversal. On the surface, it's irrational: the more bad luck you have, the more evidence you accumulate that the system is rigged.
But there's also an anthropic component. If the luck is bad enough, it starts to become inconsistent with your survival. You've accumulated evidence for correlations in the environment, but those correlations may be inconsistent with people like you being in this environment at all.
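To make the ten-heads point concrete (a minimal sketch; the mixture prior is my assumption, not Taleb's): put even a small prior weight on the coin being rigged, and ten straight heads pushes the posterior hard toward "rigged", so the predictive probability of another head climbs well above the high-school answer of 1/2.

```python
# Mixture model: coin is fair with prob 0.99, two-headed with prob 0.01.
p_rigged = 0.01

# Likelihood of ten heads in a row under each hypothesis
lik_fair = 0.5 ** 10    # ~0.00098
lik_rigged = 1.0 ** 10  # 1.0

# Posterior probability the coin is rigged, by Bayes' rule
post_rigged = (p_rigged * lik_rigged) / (
    p_rigged * lik_rigged + (1 - p_rigged) * lik_fair
)

# Predictive probability of heads on toss eleven
p_heads_next = post_rigged * 1.0 + (1 - post_rigged) * 0.5
print(f"P(rigged | 10 heads) = {post_rigged:.3f}")  # ~0.912
print(f"P(heads next)        = {p_heads_next:.3f}")  # ~0.956
```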
Say we gain complete control over genetics—we can code development like Python. After 10,000 years, which is the most likely outcome of our decision-making?
Can’t believe the elimination of men and permanent sexual immaturity/liminal stage are getting so few votes.
I don’t think speciation events are likely. Silicon software has been converging on near-total compatibility for decades.
Student loans will (finally) be understood as a bubble on a par with the housing market in 2008.
Some of the fallout will be temporarily explained away by COVID, but many universities below the billion-dollar endowment mark (and some above) will find themselves short.
You guys told me Julia was cool, but it turns out that it indexes arrays beginning with one? I'm sort of not joking: what possible justification is there for violating the near-universal standard?
This is very Bluebeard's Castle. What else is in the cupboard, "Julia"?
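For the record, the near-universal standard in question (a small illustration I'm adding, using Python's zero-based, half-open convention; the Julia contrast is in the comments):

```python
a = [10, 20, 30, 40, 50]

# Zero-based, half-open: a[i:j] has exactly j - i elements,
# and adjacent slices tile the list with no off-by-one bookkeeping.
assert a[0] == 10            # first element (Julia: a[1])
assert a[0:2] + a[2:5] == a  # [i, j) intervals concatenate cleanly
assert len(a[1:4]) == 4 - 1  # length falls straight out of the endpoints
```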