Yesterday's tweet about exiting the cloud and cutting my monthly AWS bill by 10x went viral: x.com/rameerez/statu…
Many people had strong opinions about it
Here are some observations and thoughts:
- A lot of people have magical ideas about how datacenters actually work. They think servers in datacenters are fragile and volatile, like things that can just vanish into thin air. Someone thought a lightning strike could take down an entire datacenter and nuke your business out of existence. These are mostly fear-driven opinions, the result of a successful cloud psyops campaign.
The reality is modern datacenters already account for all these problems and are equipped with many protections: not only against things as mundane as lightning, but against pretty much anything that can compromise uptime. They have plenty of redundant systems: redundant power sources, redundant cooling, tons of physical security... Everything in a datacenter is designed with resiliency and redundancy in mind to guarantee uptime.
Disasters can happen (OVH 2021), sure, and you should have backups to recover from them – but in my ~15 years of running servers they've been rare, and I've never had downtime of more than a few minutes. Your server is probably going to be okay.
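For reference, a minimal offsite-backup sketch – it assumes a Postgres database and an S3-compatible bucket, and every name here is a placeholder, not my actual setup:

```
# Hypothetical nightly crontab entry: dump the database, compress it, and
# ship it to object storage at a DIFFERENT provider/region, so even an
# OVH-style datacenter disaster can't take your backups down with it.
0 4 * * * pg_dump myapp_production | gzip | aws s3 cp - s3://my-offsite-backups/db-$(date +\%F).sql.gz
```

The point is just: backups must live somewhere other than the server (and ideally the datacenter) they're protecting.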
- Anyone who has managed servers for long enough knows you spend most of your time on the initial setup; after that, servers tend to be relatively stable. Hardware failures are rare, and once a server is up and running it usually runs flawlessly for years without much intervention. Managing your own servers is not a full-time job. You don't need a 5-person devops team. You don't even need to hire a server guy: you can just do things yourself! It's not that difficult. Claude and ChatGPT usually have a good understanding of Linux systems and how to manage them; ask them for help
- "You're using the cloud wrong" / "You just need to know how to use the cloud" were common arguments as well. Look, I'm definitely not an expert but I know my way around AWS. Hell, I've even studied AWS certs. My infra was not overprovisioned. Yes, I had optimized costs before moving off the cloud, and yes I do know about reserved instances. My take: reserved instances only make the problem worse – they create vendor lock-in, and they essentially go against everything I've been trying to argue. The least thing you want if you're considering getting off the cloud because it keeps being too expensive is lock yourself into it with a 3-year contract (yes, I also do know you can resell RIs, thanks, my argument still stands)
- Many people thought this was my first time running servers and claimed I was overly optimistic about what running a server actually means. It's difficult to say these things without coming across as arrogant, but I've been managing servers since 2006. I started, as many did, editing PHP scripts and uploading them to my FTP server. I first had to learn how to install WordPress, then ventured a bit further and started editing WP templates, and everything else followed. It was an invaluable experience for me. It taught me the basics. It taught me what Linux was and how to navigate it, and by 2007 I was requesting Ubuntu CD-ROMs from Canonical that would arrive in the mail, which I'd use to install Ubuntu on a partition of my parents' computer to learn more. Those early experiences taught me the basics of web development; everything else is built on top of them. Which leads me to the next point:
- I think the new generations of devs (Gen Z, etc.) are absolutely out of touch with the hardware that runs their software. They lack these kinds of foundational experiences. They were born into an era where a random guy on YouTube shilled them one specific vendor and taught them to run one very specific command that magically solved all their infrastructure problems. It's only reasonable that they have magical assumptions about what servers are and how they work. They rant and rant about how you can just do things "serverless" without realizing they're just running their code in many different boxes. Of course, many of them go on to learn more about Linux and servers, but the average bootcamp grad, let's say, lacks the hands-on Linux experience that FTP hackers would have had 20 years ago. I'm not making moral judgements about this: I'm not arguing it's good or bad – it is what it is. The current state of web dev.
- I noticed the more experienced developers who are currently in the cloud have developed some sort of Stockholm syndrome about it. It's essentially the sunk cost fallacy, but it turns them into very irrational creatures. They develop this weird resistance to challenging the status quo and changing their opinion on things. They throw irrational arguments left and right, repeating the AWS sales landing pages' talking points one by one, without stopping to think whether those things are even of use to them. They got tricked into believing something, and once you touch belief systems, people get irrational. It doesn't matter how good my optimizations were. I could have cut costs 100x instead of just 10x – or I could have claimed something outrageous, like running all my infrastructure for just $1. It doesn't matter. These people would still be ranting and arguing that I'm doomed because now I don't have things like "infinite scalability capabilities" or "automated failovers with automated replica recovery". These are things I've never needed or used, things that I'm sure 99.999% of people in the thread have never needed or used either, but that they throw at you in a vain attempt to build an argument
- The majority of devs are clueless about, or have forgotten, how we got here. I remember very clearly how the cloud marketing psyops campaign started in the early 2010s. It was a deliberate move by companies to shill their enterprise cloud technology to early-stage startups, trying to get them locked in as early as possible so they could milk them as they raised rounds. I remember when AWS started giving out credits specifically for startups. They would literally go startup accelerator by startup accelerator trying to get everyone onboard. And AWS was not the only one: I remember attending an IBM cloud event in 2014, when I was the CTO of a small startup. They were very specifically targeting startups; in fact, we got the invitation via our accelerator, IIRC. We ran everything on Heroku at the time, and it worked just fine. I remember thinking: what the hell is all this cloud stuff, and how do I use it? I vividly remember feeling like they were trying to sell us something that was not designed for us. I spent some time looking into it and just couldn't wrap my head around the whole cloud thing. All of it sounded so alien to us. But these companies poured literal millions upon millions of dollars into the cloud shilling campaign over the following years, tricking early startups into adopting enterprise technology. They ended up being successful at it – and the aftermath is the current state of web development in 2024. Zero interest rates through most of the last decade definitely helped get us here. There is now a counterculture movement, mainly led by @dhh and the Rails community, and it feels like something fundamentally fresh, right, and aligned with the reality of MOST software businesses on Earth. Which leads me to:
- Many people are absolutely out of touch with what most software businesses look like in the real world. They think in terms of the Fortune 500; they truly believe enterprise is the norm. They think the average business needs all the bells and whistles the cloud has to offer: high availability, multi-zone replication, automatic failovers, distributed Kubernetes clusters... The reality is that only a teeny tiny fraction of all software businesses need anything like that. Most businesses will always be small, by a simple power law, and the ones that need real computing power can do incredibly well without the cloud up until a very high point. Scaling vertically can get you very, very far nowadays. Most devs wildly overestimate scaling requirements; they have a remarkably low bar for what "high traffic" means. Here's a reference point: my current two-server setup serves millions of requests a day for millions of monthly visitors. The same is true for many other indie makers, like @levelsio, who even managed to get everything down to one single server. Most devs have never tried running a project of their own, with actual users and actual production traffic, on a single server – and it shows.
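To put "millions of requests a day" in perspective, here's a quick back-of-the-envelope (the daily volume and peak factor are illustrative assumptions, not measurements):

```python
# "Millions of requests a day" sounds huge; per second, it's modest.
requests_per_day = 3_000_000
average_rps = requests_per_day / 86_400   # 86,400 seconds in a day
peak_rps = average_rps * 5                # assume a generous 5x traffic spike

print(f"average: {average_rps:.0f} req/s, assumed peak: {peak_rps:.0f} req/s")
# -> average: 35 req/s, assumed peak: 174 req/s
```

A single modern server runs a typical web app at hundreds of requests per second without breaking a sweat, which is why "high traffic" claims deserve this kind of division first.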
- Devs also wildly overestimate other technical requirements. Again, only a tiny fraction of software businesses need the bells and whistles. Of course those businesses exist, but they're rare, and they usually have very good reasons for their technical decisions. Netflix, for example, needs to transcode and stream enormous amounts of video to customers all over the world. That's where you need distributed systems, CDNs, edge computing, all of that stuff. Your little app with one thousand users that just sends some JSON objects around definitely does not need it. I feel like most devs have this magical notion in their heads that their project is something like Netflix. It's wishful thinking, and I get it – you want to be as successful as Netflix. But it makes you make the wrong technical decisions, and all of a sudden you think you need distributed servers all over the world because your users will somehow notice a few milliseconds' difference in latency when they tap a button. It's wild.
- Cloudflare can get you really far. Some argued that running your own server is somehow less secure than running AWS's servers, as if EC2 instances were magically protected against hackers or something. Just lock the box: ask ChatGPT how to harden your Linux server and follow basic security practices (like: don't use password auth, only strong SSH keys) and you're 90% there. Then, for an extra layer of protection, run Cloudflare on top of everything: proxy your server's IP behind their DNS so you don't expose it, and you're golden. You also get DDoS protection, edge caching, and a top-tier DNS essentially for free.
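As a concrete starting point, a minimal hardening sketch – it assumes an Ubuntu/Debian box, and the open ports and packages are my assumptions, not a complete checklist:

```
# /etc/ssh/sshd_config – SSH keys only, no passwords, no root login:
PasswordAuthentication no
PermitRootLogin no
PubkeyAuthentication yes

# Default-deny firewall, then open only what you serve:
#   ufw default deny incoming
#   ufw allow 22/tcp && ufw allow 80/tcp && ufw allow 443/tcp
#   ufw enable
# Unattended security patches + basic brute-force protection:
#   apt install unattended-upgrades fail2ban
```

If you're proxying through Cloudflare, you can go further and allow ports 80/443 only from Cloudflare's published IP ranges, so nobody can even talk to your web server directly.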