Using JMC to view your allocation/lock/IO profiles (regardless of where your JFR data came from)?
The profiles don't reflect the quantity you are interested in, and that can be very misleading.
Consider allocation profiling for instance. 1/n
The "Memory/TLAB" tabs in JMC will show you lists of allocation quantity/ration broken up by instance class/method/thread. The chart will show an allocation bar chart. The units here are appropriate: byte/KiB/MiB/GiB.
But if you look at the profile, the unit has changed! 2/n
The profile units are always "samples". This is appropriate when looking at a profile where all the samples have the same "weight". But for allocation/lock/IO samples this is not the case. Allocations come in 2 types:
* TLAB: <allocSize, tlabSize>
* Out of TLAB: <allocSize> 3/n
How should we weigh the profile? The answer that adds up correctly here is:
`weight = TLAB ? tlabSize : allocSize`
Because each TLAB sample is a random sample standing in for a whole TLAB's worth of allocation (and TLABs come in different sizes), while out-of-TLAB allocations are all captured. The weights then sum to the total allocated bytes. 4/n
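To make that concrete, here is a minimal sketch (my own code, not JMC's or Async-Profiler's) of applying that rule over a recording with the `jdk.jfr.consumer` API. The event and field names (`jdk.ObjectAllocationInNewTLAB`, `jdk.ObjectAllocationOutsideTLAB`, `allocationSize`, `tlabSize`) are what recent JDKs emit; check your recording's metadata if yours differs.

```java
import java.nio.file.Path;
import jdk.jfr.consumer.RecordedEvent;
import jdk.jfr.consumer.RecordingFile;

public class AllocationWeight {
    public static void main(String[] args) throws Exception {
        long samples = 0;
        long weightedBytes = 0;
        for (RecordedEvent e : RecordingFile.readAllEvents(Path.of(args[0]))) {
            String type = e.getEventType().getName();
            if (type.equals("jdk.ObjectAllocationInNewTLAB")) {
                samples++;
                weightedBytes += e.getLong("tlabSize");      // sample stands in for a whole TLAB
            } else if (type.equals("jdk.ObjectAllocationOutsideTLAB")) {
                samples++;
                weightedBytes += e.getLong("allocationSize"); // out-of-TLAB allocations are all recorded
            }
        }
        System.out.println(samples + " samples ~ " + weightedBytes + " allocated bytes");
    }
}
```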
The difference in the resulting profile can be huge, particularly for applications where a large portion of allocations happens outside the TLAB.
For locks, the profile weight should be time (the duration spent blocked). For IO, time is probably the right default as well. 5/n
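For illustration, a hedged sketch of the lock case, assuming the standard `jdk.JavaMonitorEnter` contended-lock event, whose event duration is the time spent blocked (the same loop shape works for IO events such as `jdk.FileRead`/`jdk.SocketRead`):

```java
import java.nio.file.Path;
import jdk.jfr.consumer.RecordedEvent;
import jdk.jfr.consumer.RecordingFile;

public class LockWeight {
    public static void main(String[] args) throws Exception {
        long blockedNanos = 0;
        for (RecordedEvent e : RecordingFile.readAllEvents(Path.of(args[0]))) {
            if (e.getEventType().getName().equals("jdk.JavaMonitorEnter")) {
                blockedNanos += e.getDuration().toNanos(); // weight = time blocked, not 1 sample
            }
        }
        System.out.println("Total time blocked on monitors: " + blockedNanos / 1_000_000 + " ms");
    }
}
```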
How can we extract and view the correctly weighted profile? There are several options. If you want a FlameGraph, Async-Profiler has your back with its handy converter tool (example below):
- default: produces the "samples" profile
- `--total`: produces the correctly weighted profile 6/n
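For example (invocation as in the Async-Profiler 2.x converter jar; exact jar and flag names may differ in your version, so treat this as a sketch):

```sh
# default: allocation flame graph weighted by sample count
java -cp converter.jar jfr2flame --alloc recording.jfr alloc-samples.html

# --total: allocation flame graph weighted by allocated bytes
java -cp converter.jar jfr2flame --alloc --total recording.jfr alloc-bytes.html
```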
Alternatively, you can use another library to extract the profile into the collapsed stacks format and view it in IDEA (`Run/Open profiler snapshot`), or in SpeedScope, or convert it to a FlameGraph, or just manipulate it as text. 7/n
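As a sketch of what such an extraction could look like (again my own code, with simplified frame names), this aggregates the re-weighted allocation stacks into the collapsed format, one `frame;frame;frame weight` line per stack:

```java
import java.nio.file.Path;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import jdk.jfr.consumer.RecordedEvent;
import jdk.jfr.consumer.RecordedFrame;
import jdk.jfr.consumer.RecordedMethod;
import jdk.jfr.consumer.RecordingFile;

public class CollapsedAllocStacks {
    public static void main(String[] args) throws Exception {
        Map<String, Long> stacks = new HashMap<>();
        for (RecordedEvent e : RecordingFile.readAllEvents(Path.of(args[0]))) {
            String type = e.getEventType().getName();
            boolean inTlab = type.equals("jdk.ObjectAllocationInNewTLAB");
            if (!inTlab && !type.equals("jdk.ObjectAllocationOutsideTLAB")) continue;
            if (e.getStackTrace() == null) continue;
            long weight = inTlab ? e.getLong("tlabSize") : e.getLong("allocationSize");
            // Collapsed stacks are root-first, frames joined by ';'
            List<RecordedFrame> frames = e.getStackTrace().getFrames();
            StringBuilder sb = new StringBuilder();
            for (int i = frames.size() - 1; i >= 0; i--) {
                RecordedMethod m = frames.get(i).getMethod();
                if (sb.length() > 0) sb.append(';');
                sb.append(m.getType().getName()).append('.').append(m.getName());
            }
            stacks.merge(sb.toString(), weight, Long::sum);
        }
        stacks.forEach((stack, weight) -> System.out.println(stack + " " + weight));
    }
}
```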
I love JMC, but the profile views leave something to be desired in this case. They clearly state "samples", but that is easy to overlook, and they offer no alternate weighting mechanism.
For now: extract and re-weight. 8/8