Every now and then, someone from the Rails sphere makes a statement about performance (1/2 req/core/second as the norm 🤦, or 15K reqs from a mega machine) that sounds ridiculous to outsiders, who then start talking about how slow Ruby is and how delusional Ruby devs are
1/n
Ruby is not slow, in the sense that for building web apps it is in the same league as PHP, Python, and similar interpreted languages.
You can build really fast web apps in Ruby, with microsecond request latencies, and the toolbox offers plenty of options to help you get there
2/n
But all that doesn't come close to what something as big as Rails is doing. Rails does A LOT of work per request, and that's an understatement. All the flexibility you get comes from that heavy lifting under the hood. Replicating everything Rails does will most likely yield similar performance
3/n
That's not entirely true though, because some parts of Rails were built with extreme flexibility in mind and are, as a result, dog slow. ActionView, for example, supports bizarre options at the cost of being very expensive CPU-wise
The Rails sphere's response is always that it doesn't matter
4/n
But the reality is a bit more subtle: if your cost per req is lower than your revenue per req, then all is good; you can even invest in building JIT engines to shrink the cost further.
If your app is high usage with low revenue per req, at some point it won't make economic sense
5/n
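To make the cost-per-request point concrete, here's a back-of-envelope calculation in Ruby. All the numbers are made up for illustration; plug in your own server cost and throughput:

```ruby
# Illustrative only: what does one request cost on a given server?
server_cost_per_month = 200.0   # hypothetical monthly server bill, in dollars
reqs_per_second       = 300.0   # hypothetical sustained throughput

reqs_per_month = reqs_per_second * 60 * 60 * 24 * 30
cost_per_req   = server_cost_per_month / reqs_per_month

puts format("cost per request: $%.8f", cost_per_req)
```

If that number stays below your revenue per request, the economics work; as traffic grows with flat revenue per request, it eventually won't.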
As a community, we can make the situation 10x better though: we can fix ActionView and similar bottlenecks
We can promote solutions like Phlex and have a much faster Rails today. Or someone can deliver an ERB-style engine that's really fast, and we can serve all tastes
6/n
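As a stdlib-only illustration of why the template layer matters: re-parsing an ERB template on every render is far more expensive than compiling it once and reusing it. The template and iteration counts below are made up; this only shows the shape of the cost, not ActionView's internals:

```ruby
require "erb"
require "benchmark"

template = "<h1><%= title %></h1>\n" * 50   # arbitrary template body

# Naive: parse the template on every render.
naive = Benchmark.realtime do
  1_000.times { ERB.new(template).result_with_hash(title: "Hello") }
end

# Cached: compile once, render many times.
compiled = ERB.new(template)
cached = Benchmark.realtime do
  1_000.times { compiled.result_with_hash(title: "Hello") }
end

puts format("naive: %.3fs cached: %.3fs", naive, cached)
```

Fast template engines win largely by doing this kind of work once, ahead of time.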
We need to appreciate performance more, especially at the core of Rails. We should not settle for, or advise others to settle for, low performance in their Ruby/Rails applications.
7/7
In 2014, I had two Ruby apps with over 1M MAUs combined, both were hosted on a single server that cost us ~$75.
The setup was composed of multiple Ruby processes running Sinatra, processing requests in fibers, with each process connected to a BerkeleyDB shard (with fixed hashing)
Each BDB shard held data belonging to a set of users; shards could be combined or split, up to 256 of them. Each process had exclusive access to a single shard.
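A fixed-hashing shard lookup like the one described can be sketched in a few lines. The function name and the use of MD5 here are illustrative, not the actual code we ran:

```ruby
require "digest"

SHARD_COUNT = 256   # upper bound on shards, per the setup above

# Hypothetical lookup: hash the user id and map it onto a shard.
# The same user id always lands on the same shard.
def shard_for(user_id)
  Digest::MD5.hexdigest(user_id.to_s).to_i(16) % SHARD_COUNT
end

puts shard_for(12345)
```

The trade-off of fixed hashing is that resharding means moving data, which is why combining/splitting shards was an explicit operation.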
Since this was running web games, we had to have validation logic running on the server replicating the game logic
⬇️
We used background, stateless node.js processes for that purpose: the Ruby process would pick up a backend, send the game state to it for processing, and get the results back
By using fibers we did all these requests concurrently, and the node.js backend could scale as we needed
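The control flow of that fiber-based concurrency can be sketched with stdlib fibers alone. Real IO multiplexing (as in the actual setup) needs a fiber scheduler or event loop; this only shows how each in-flight backend call parks itself and is resumed when its result is ready:

```ruby
# Each Fiber represents one in-flight call to a backend process.
results = []
fibers = 3.times.map do |i|
  Fiber.new do
    Fiber.yield                      # pretend we're waiting on the backend
    results << "state #{i} processed"
  end
end

fibers.each(&:resume)   # kick off all requests concurrently
fibers.each(&:resume)   # "responses" arrive; each fiber finishes
puts results
```

The point is that one Ruby process can keep many requests in flight without one blocked call stalling the rest.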
A typical Ruby/Rails web app performance is usually limited by one of the following (depending on usage patterns):
1. Compute
2. Memory
3. IO (network & disk)
The good thing is, you can ease these without having to touch your application code.
Read on ⬇️
1. Compute
It's everywhere; even DB ops can be compute intensive. Though Ruby itself is usually the culprit, especially if there is a lot of logic or template/JSON rendering
Quick fixes:
- Enable YJIT
- Oj gem for JSON
- A fibered server like Falcon (less context switching)
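For the YJIT tip: since Ruby 3.3 you can also enable it from code at boot instead of passing `--yjit`. A guarded sketch (it no-ops on builds compiled without YJIT):

```ruby
# Enable YJIT at boot if this Ruby build supports it (Ruby 3.3+ API).
if defined?(RubyVM::YJIT) && RubyVM::YJIT.respond_to?(:enable)
  RubyVM::YJIT.enable
  puts "YJIT enabled: #{RubyVM::YJIT.enabled?}"
else
  puts "YJIT not available in this Ruby build"
end
```

In a Rails app this kind of snippet would typically live in an initializer, so the JIT is on for every process the server forks.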
2. Memory
Ruby is known for being memory hungry; your app processes can consume a lot of memory, especially when you run many processes/threads
Quick fixes:
- Use Jemalloc
- A preforked server like Falcon
- A fibered server like Falcon (less memory fragmentation)
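The preforking tip works because workers forked from a parent that has already loaded the app share read-only memory pages copy-on-write. A minimal stdlib sketch of that model (Unix only; real servers like Falcon or Puma add sockets, supervision, and graceful restarts on top):

```ruby
# Parent loads the app (expensive) once...
app_tables = Array.new(10_000) { |i| i.to_s }   # stand-in for loaded app code/data

# ...then forks workers that share those pages until they write to them.
worker_pids = 2.times.map do
  fork do
    # A real worker would accept connections here; we just prove we share state.
    puts "worker #{Process.pid} sees #{app_tables.size} entries"
  end
end

worker_pids.each { |pid| Process.wait(pid) }
```

The memory win comes from the fork, not from the sketch's details: N workers don't pay N times the boot-time memory cost.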