Performance of a typical Ruby/Rails web app is usually limited by one of the following (depending on usage patterns):
1. Compute
2. Memory
3. IO (network & disk)
The good news is that you can ease these bottlenecks without touching your application code.
Read on ⬇️
1. Compute
Compute is everywhere; even DB operations can be compute intensive. Ruby itself is usually the culprit, though, especially when there is a lot of logic or template/JSON rendering
Quick fixes:
- Enable YJIT
- Oj gem for JSON
- A fibered server like Falcon (less context switching)
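The first two fixes can live in a single initializer. A minimal sketch, assuming Ruby 3.3+ (for programmatic YJIT control) and the Oj gem in your Gemfile:

```ruby
# config/initializers/performance.rb — a sketch, not a drop-in file.

# Turn YJIT on at boot (alternatively, set RUBY_YJIT_ENABLE=1 in the env).
RubyVM::YJIT.enable if defined?(RubyVM::YJIT)

# Route Rails' JSON encoding/decoding through Oj's optimized C extension.
require "oj"
Oj.optimize_rails
```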
2. Memory
Ruby is known for being memory hungry; your app processes can consume a lot of memory, especially when you run many processes/threads
Quick fixes:
- Use Jemalloc
- A preforked server like Falcon
- A fibered server like Falcon (less memory fragmentation)
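jemalloc is enabled outside Ruby: either compile Ruby with `--with-jemalloc`, or preload the library at runtime via something like `LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2` (the path varies by distro). A quick sanity check for the compiled-in case:

```ruby
# Inspect how this Ruby was linked; "-ljemalloc" appears in MAINLIBS when
# Ruby was built with --with-jemalloc. If it's absent, you can still
# preload jemalloc through LD_PRELOAD before starting your server.
require "rbconfig"

puts RbConfig::CONFIG["MAINLIBS"]
```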
3. IO
A lot of your app's time is spent on IO, whether on the network or on whatever data store you are using.
Quick fixes:
- Use SQLite (via Litestack; eliminates the network hop and a lot of Ruby overhead)
- A fibered server like Falcon to increase concurrency
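Why does a fibered server help with IO? Fibers let one process park a request while its IO is in flight and work on another, instead of blocking. A toy sketch with plain stdlib Fibers (Falcon does this for real sockets via a fiber scheduler):

```ruby
# Cooperative concurrency in miniature: two simulated "requests" interleave
# by yielding at their IO wait point, so neither blocks the other.
log = []
requests = %w[a b].map do |name|
  Fiber.new do
    log << "#{name}: start IO"
    Fiber.yield            # park while "IO" is in flight
    log << "#{name}: IO done"
  end
end

requests.each(&:resume)    # both requests issue their IO first...
requests.each(&:resume)    # ...then both complete, interleaved
log # => ["a: start IO", "b: start IO", "a: IO done", "b: IO done"]
```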
In summary:
- Falcon as your server
- Litestack for data
- Oj for JSON
- Jemalloc
- Yjit
The best part? All of these are transparent (or mostly so) and require no changes to your app beyond some config params
Configure them once and get a faster & leaner app
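Put together, the whole stack is a few gem declarations plus two environment variables. A sketch (versions and the jemalloc path are placeholders for your setup):

```ruby
# Gemfile — the summary stack as gem declarations.
gem "falcon"    # fibered, preforking server
gem "litestack" # SQLite-backed data layer
gem "oj"        # fast JSON

# Outside the Gemfile, in the server environment:
#   RUBY_YJIT_ENABLE=1                 # enable YJIT
#   LD_PRELOAD=<path to libjemalloc>   # route malloc through jemalloc
```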
In 2014, I had two Ruby apps with over 1M MAUs combined, both were hosted on a single server that cost us ~$75.
The setup was composed of multiple Ruby processes running Sinatra, processing requests in fibers, with each process connecting to a BerkeleyDB shard (with fixed hashing)
Each BDB shard held data belonging to a set of users; shards could be combined or split, up to 256 of them. Each process had exclusive access to a single shard.
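Fixed hashing here just means a deterministic mapping from user to shard. A minimal sketch (the function name and shard count cap are from the description above; CRC32 is my assumption for the hash):

```ruby
require "zlib"

SHARD_COUNT = 256 # upper bound on shards

# Map a user id deterministically to one of the shards, so the process
# owning that shard has exclusive access to the user's data.
def shard_for(user_id)
  Zlib.crc32(user_id.to_s) % SHARD_COUNT
end

shard_for(12345) # stable across runs and processes
```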
Since this was running web games, we had to run validation logic on the server that replicated the game logic
⬇️
We used background, stateless Node.js processes for that purpose: the Ruby process would pick up a backend, send the game state to it for processing, and get back the results
By using fibers we made all these requests concurrently, and the Node.js backends could scale as needed
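The "pick up a backend" step can be as simple as a round-robin over stateless workers, since any of them can validate any game state. A sketch (class name and backend URLs are hypothetical; the real version would POST the state over HTTP from inside a fiber):

```ruby
# Round-robin picker over interchangeable, stateless validation backends.
class Dispatcher
  def initialize(backends)
    @backends = backends.cycle # endless round-robin enumerator
  end

  # In the real app this would send the game state to the chosen backend
  # and return the validated result; here we just return the pick.
  def pick
    @backends.next
  end
end

d = Dispatcher.new(["http://validator-1:3000", "http://validator-2:3000"])
d.pick # => "http://validator-1:3000"
d.pick # => "http://validator-2:3000"
d.pick # => "http://validator-1:3000"
```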