Let's talk a bit more about setting work_mem in #PostgreSQL. 1/n
The basic rule is "as high as required, and no higher." But how high is required? 2/n
A lot of tools and websites will give you formulae, usually based on some combination of system memory and number of connections. These aren't *wrong* exactly, but... 3/n
... they're really no more than a starting point, and not a huge improvement on just setting it at 4MB to start. 4/n
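(An aside, with made-up numbers: the usual formula shape is something like work_mem ≈ (RAM − shared_buffers) / (max_connections × 2). On a 64GB box with 16GB of shared_buffers and 200 connections, that's 48GB / 400 ≈ 120MB. Plausible-looking, but it knows nothing about what your queries actually do.)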
The actual value is workload-dependent. Very simple OLTP workloads can get away with a very small setting. One-gigantic-query-at-a-time data warehouses need a much higher number. 5/n
Setting it too low can cause sorts and other operations to spill to disk, which is vastly slower than doing them in memory. Setting it too high can result in queries failing with out-of-memory errors. 6/n
(or the OOM killer on Linux killing off PostgreSQL entirely, but that's a different topic.) 7/n
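If you want to see a spill directly, EXPLAIN ANALYZE will show it (a sketch; the table and column names are made up):

  EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM big_table ORDER BY created_at;
  --  Sort Method: external merge  Disk: 102400kB   <- spilled to disk
  --  Sort Method: quicksort  Memory: 25kB          <- fit in work_mem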
The best way is to run the system under a realistic production workload with log_temp_files = 0, and look in the logs for temp file creation. pgbadger is a great help here in processing the logs. pgbadger.darold.net 8/n
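For reference, turning that on doesn't need a restart (these are the stock knobs, nothing exotic):

  ALTER SYSTEM SET log_temp_files = 0;  -- 0 = log every temp file, whatever its size
  SELECT pg_reload_conf();              -- reload config without restarting

(log_temp_files takes a size threshold in kB; -1, the default, disables the logging.)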
Set work_mem high enough to get rid of the temp files. Generally, 2-3x the largest temp file is a good target. But remember that every sort or hash in every concurrent query can grab that much, so if many temp files are being created in parallel, go lower. 9/n
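(Made-up numbers to show the math: if the largest temp file is 200MB, the first target is ~400-600MB. But if 20 sessions each run a query with 2 sorts, the worst case is 20 × 2 × 600MB = 24GB of RAM, so you'd back off well below that.)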
Of course, be judicious. If the largest temp file is 12GB, you probably don't want to set work_mem to 36GB unless you have a huge system and that's the only query running that will grab that much memory. 10/n
work_mem can be set on a per-role, per-session, and even per-transaction basis, so you can set it higher for the role that does analytic queries, or higher for that session just before those queries, leaving it lower for everyone else. 11/n
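Concretely, all three levels look like this (the role name and values are only examples):

  ALTER ROLE analytics SET work_mem = '512MB';  -- per-role: applies to that role's new sessions
  SET work_mem = '512MB';                       -- per-session: lasts until the session ends
  BEGIN;
  SET LOCAL work_mem = '2GB';                   -- per-transaction: reverts at COMMIT/ROLLBACK
  COMMIT;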
And if you don't see any temp files at all, or they're all very small (less than 1K), you don't need to mess with work_mem at all. Yay! 12/end
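If you'd rather query than read logs, there's a standard stats view for a quick sanity check (counters are cumulative since the last stats reset):

  SELECT datname, temp_files, temp_bytes
  FROM pg_stat_database
  WHERE datname = current_database();

If temp_files stays at 0 under real traffic, leave work_mem alone.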