Whenever I hear about some FAANG layoffs, I keep coming back to @GergelyOrosz's discussion of "peacetime vs. wartime" companies.
I'd posit that many big tech companies that have been in peacetime mode **for most of our entire careers** are now in wartime mode.
And so what seems like a weird layoff or reorg is mostly because we have become so used to thinking of those companies as being in peacetime mode. And now they aren't.
• • •
I write private notes to myself. Mostly these are very personal. But, I want to share one since it relates to 2024.
Disclaimer: This isn't some grand theory of how society should work. This isn't prescriptive to you. This isn't about you. This is me talking to myself about my life.
Looking back from around 2020 until now, the biggest lesson I will take into 2024 is this:
**Focus my attention on things I want to accomplish AND I have significant control over.**
People have said roughly the same thing in many ways, such as "Play your own game," but it all boils down to focusing on the things that matter to you and that you can affect in a meaningful way.
Last week a16z released their "Techno-Optimist Manifesto". Notably, in a section titled "The Enemy" they listed tech ethics -- an inclusion which has received a lot of negative attention.
I spent the week thinking about it, and here are some thoughts.
A quick disclaimer: If you want to know why a16z included it, you have to ask them. I don't have any relationship with a16z, they've never funded anything I've done, I don't go to their Christmas party or anything. Basically I don't have any insight into their mind.
Let's begin.
I've worked on open source humanitarian tech, social good tech, or whatever you want to call it for a decade. I've worked on software for election monitoring, human rights, anti-corruption, disaster relief, and more.
Not academic papers, but shipping open source software.
Sigh. I don't normally talk about this stuff because it isn't relevant to machine learning, but my PhD is in quantitative political science, specifically on the public health effects of modern armed conflict. A friend asked me what was going to happen, so here you go.
Disclaimer: I don't work in this field. I used to study it a decade ago. So that is the perspective you get.
There is a long and deep history researching the health effects of armed conflict, going back to the 1800s. I say this to tell you that this is an OLD and well understood topic, not some flashy thing someone thought about for the first time in 2023.
If you are interested in AI, here's why I think you should get into it right now.
Disclaimer: If you are looking for some AI hype thread, go somewhere else.
TL;DR when a new thing comes along, there is a window of time where nobody is an expert. There are only people interested in it, playing around with it, and talking with each other. But eventually the thing matures and the window closes. After that, the barriers to entry are much higher.
I have been a part of three of these windows.
The first was blogging (2003-2009). What ChatGPT/AI is today, blogging was in 2003. Nobody was a "big name" in blogging. Everyone was just some random person writing about a topic on their website.
Someone at Pixar deleted all of Toy Story 2, the backups hadn't been working for a month, and the only reason we ever saw that movie is because someone on maternity leave had a copy of it on her home computer.
Her name is Galyn Susman and she is now the producer for the new Lightyear movie!