1/ I was rereading Softwar (the 2004 book on Larry Ellison and Oracle) this morning, and the main thing that stood out to me is that pretty much every idea in software today was already around 20 years ago
2/ Twenty years ago, everybody in the software industry was already debating whether "best of breed" applications would triumph over integrated solutions from Accenture ("one throat to choke")
3/ Twenty years ago, everybody in software was already complaining about how every company's data was getting siloed across a hundred different databases daisy-chained together by hacky ETL scripts, instead of a single system of record
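(To make the complaint concrete, here's a toy sketch of what those hacky ETL scripts look like - database names and schema are entirely hypothetical, Python with sqlite3:)

```python
# Hypothetical "hacky ETL script": a nightly job that copies orders
# from the sales silo into the finance silo, because there's no
# single system of record. All names and schemas here are made up.
import sqlite3

src = sqlite3.connect("sales.db")    # silo #1
dst = sqlite3.connect("finance.db")  # silo #2

# Pretend the sales team already populated its own orders table.
src.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, amount REAL)")
src.executemany("INSERT OR REPLACE INTO orders VALUES (?, ?)", [(1, 9.99), (2, 24.50)])
src.commit()

# Extract and load (no real transform) - then repeat this daisy-chain
# for the other 99 databases...
dst.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, amount REAL)")
rows = src.execute("SELECT id, amount FROM orders").fetchall()
dst.executemany("INSERT OR REPLACE INTO orders VALUES (?, ?)", rows)
dst.commit()
```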
4/ Twenty years ago, the software industry was already talking about transitioning datacenters from scale-up to scale-out computing, i.e. "making a bunch of cheap little computers 'look' like a big computer"
(Notice this is the whole premise of modern cloud computing)
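(Here's a minimal scatter/gather sketch of that scale-out idea - my toy illustration in Python, not anything from the book: split one big job across several worker processes so they collectively "look" like one big computer.)

```python
# Toy scale-out sketch: scatter chunks of work across a pool of
# "cheap little computers" (worker processes), then gather the
# partial results so the caller sees one big computer.
from multiprocessing import Pool

def partial_sum(chunk):
    # Each "little computer" works on its slice independently.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_workers = 4
    chunks = [data[i::n_workers] for i in range(n_workers)]  # scatter
    with Pool(n_workers) as pool:
        total = sum(pool.map(partial_sum, chunks))           # gather
    print(total)  # same answer a single big machine would give
```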
5/ *Thirty* years ago, the software industry was already talking about how "internet computing will centralize data storage in huge databases and processing on large servers, while distributing information on demand across a global network"
...aka the Cloud, 11 years before AWS
6/ Twenty years ago, the software industry was already convinced that the future of the business model would be selling "software as a service"!
7/ Contrary to popular belief, scale-out computing, the cloud, SaaS, etc. were all ubiquitous ideas by the 90s. The irony of Softwar is that Oracle had little to do with the ultimate success of these ideas
Predicting the future is easy, it's just making money that's hard!
• • •
1/ The more time I spend in the corporate world, the more I understand why everybody just hires ex-consultants and investment bankers
It’s not because McKinsey or Goldman Sachs actually teach you how to do the job, per se
It’s because hiring undergrads is a free rider problem: whichever firm pays to train them bears the cost, while competitors can free-ride by poaching them once they’re useful
2/ Ultimately, new graduates don’t actually know how to do anything
This is less intended as a value judgment (I was the same when I graduated from Amherst), and more as a statement of fact that elite American universities are not intended to be trade schools
3/ Even if you already know “hard skills” like accounting or SQL, you usually still need 2-3 years to acquire the requisite soft skills to work independently, e.g. how to present to execs, how to make your ideas clear, how to convince coworkers to do stuff they don’t want to do…
It’s an awesome feeling when you come across a new blog or Substack, read a half dozen of their posts, and think “wow, every single one of these is good.” Only happens to me once or twice a year
I have no idea who @dynomight7 is but you’re cool, keep up the good work
Some posts I’ll call out:
Why first discoveries in science are overrated, given the frequency of simultaneous invention (good ideas tend to be products of their unique time and place, at which point they become obvious to everyone around them)
1/ This article on Pinduoduo (the 3rd largest e-commerce platform in China) is ostensibly focused on their new US venture Temu, but it quickly devolves into the most outrageous profile of Chinese corporate culture that I’ve ever read
2/ It starts out talking about how Temu’s org chart is structured into two warring factions trying to take each other over
To be fair, this is relatively tame - pitting two competing teams against each other on a single product is a fairly common practice in China tech
3/ Every new customer on Temu is randomly assigned to one of multiple internal teams, who compete to drive the highest sales or face dissolution
Again, this is sort of a common China tech practice, so not so much a Pinduoduo-specific thing
1/ The 2012 “AlexNet” paper is usually considered the turning point for machine learning, when neural networks went from a 1980s punchline to the basis of all modern AI
But a new paper asks an interesting historical question I hadn’t seen before - was AlexNet really first?
2/ The following excerpts all come from Appendix D of the recently-updated Sevilla et al. (2023): arxiv.org/abs/2202.05924
3/ Sevilla et al. make 3 arguments -
First, AlexNet was famously trained on just two NVIDIA GTX 580s in late 2011, proving GPU architectures could drive a step-change in AI compute
But researchers had already discovered you could use GPUs to train ML models as early as 2005
1/ A Google colleague recently observed to me that computer science tends to reinvent the wheel every 20 years: “a new generation just reimplements old ideas, but with more compute”
What’s fascinating is this 20 year cycle seems to coincide with the timing of tech stock bubbles
2/ Let’s do the math. For starters, I think everyone acknowledges at this point that 2022 was the popping of a once-in-a-generation tech bubble…
3/ Now, let’s scroll back 21 years - most people also recall that 2001 was the year the original late-90s dotcom / telco / growth bubble popped
Something I’ve been pondering - the single biggest advantage that startups have over incumbents is probably the willingness to take insane reputational risks
I usually avoid discussing Tesla on Twitter, but Tesla in the early 2010s might be the best example of this ever
In 2015, the European auto analyst at my old employer Bernstein went to Germany to interview a bunch of top execs in BMW, Mercedes, and Audi’s electric vehicle programs about the disruptive threat of Tesla
Some of these direct quotes look incredibly risk-averse 7 years later
For instance, Tesla had released the Supercharger network just 3 years earlier
It turns out: the Germans also had fast charging tech, but had refused to release it because they feared it’d accelerate battery degradation and cause EVs to fail as fast as “a new laptop or phone”