Overall, JPEG XL at default cjxl speed outperforms AVIF even when using a very slow libaom setting (s1, >30 times slower). At a more reasonable libaom s7 (about half as fast as default cjxl), the improvement JPEG -> AVIF is comparable to the improvement AVIF -> JPEG XL.
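For anyone who wants to reproduce this kind of speed comparison, here is a minimal sketch, assuming reasonably recent cjxl and avifenc builds are on PATH (the quality values are placeholders, not the settings used in the study, and avifenc's -q flag needs a fairly new libavif):

```python
import os
import subprocess
import time

def encode(cmd, outfile):
    """Run one encoder command and return (wall-clock seconds, output size in bytes)."""
    t0 = time.perf_counter()
    subprocess.run(cmd, check=True, capture_output=True)
    return time.perf_counter() - t0, os.path.getsize(outfile)

src = "input.png"  # placeholder source image

# (label, command, output file); quality values are illustrative only
jobs = [
    ("jxl default", ["cjxl", "-q", "90", src, "out.jxl"], "out.jxl"),              # default cjxl effort (e7)
    ("avif s7",     ["avifenc", "-s", "7", "-q", "60", src, "s7.avif"], "s7.avif"),  # libaom speed 7
    ("avif s1",     ["avifenc", "-s", "1", "-q", "60", src, "s1.avif"], "s1.avif"),  # libaom speed 1 (very slow)
]
for label, cmd, out in jobs:
    secs, size = encode(cmd, out)
    print(f"{label:12s} {secs:7.2f} s  {size / 1024:8.1f} KiB")
```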
Of course, behind the overall picture there are differences depending on the image content. For example, for images of sports or rooms, AVIF actually does (slightly) better than JPEG XL (if you don't mind the extra encode time).
For landscapes or portraits, on the other hand, JPEG XL has a clearer advantage.
The video-based image formats (WebP and AVIF) particularly struggle with images containing subtle textures, like clothing or cloudy skies. For those, they can be even worse than (moz)JPEG. Overly aggressive deblocking filters are probably to blame for this.
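A toy illustration (mine, not from the test set) of why that happens: the kind of lowpass a deblocking filter applies across block boundaries also flattens low-amplitude texture along with the blocking artifacts.

```python
import numpy as np

rng = np.random.default_rng(0)

# A flat mid-grey 1D signal with subtle texture (think fabric or a cloudy sky)
signal = 128 + 2.0 * rng.standard_normal(256)

def deblock(x, block=8, taps=5):
    """Toy 'deblocking': a box blur applied around every block edge."""
    y = x.copy()
    half = taps // 2
    blurred = np.convolve(x, np.ones(taps) / taps, mode="same")
    for edge in range(block, len(x), block):
        y[edge - half:edge + half + 1] = blurred[edge - half:edge + half + 1]
    return y

smoothed = deblock(signal)
print(f"texture std before: {signal.std():.2f}")    # ~2.0
print(f"texture std after:  {smoothed.std():.2f}")  # lower: the fine texture is partly smoothed away
```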
Encoder consistency is another aspect, and a very important one for deployment. "mozjpeg q80" has an average subjective quality (DMOS) of 85, with a standard deviation of 5, so it can easily be 80 or 90. More complicated encoders tend to have less consistent results. Except for JPEG XL.
If you try to improve encoder consistency — or just evaluate encoders — using objective metrics (as opposed to subjective testing, which is of course much harder to do), be careful which metric you use. Simple metrics correlate only poorly with subjective results.
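As a sketch of what such a check looks like (purely hypothetical scores, scipy assumed available): put subjective DMOS values and an objective metric side by side for the same encoded images, look at the spread at a fixed setting, and look at the correlation between the two.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Hypothetical per-image scores for one encoder setting:
# a subjective DMOS-style score (higher = better) and an objective metric value.
dmos   = np.array([85, 78, 92, 80, 88, 74, 90, 83], dtype=float)
metric = np.array([41.2, 39.0, 43.5, 38.8, 42.1, 40.5, 42.9, 39.9])  # e.g. some dB-scale metric

# Consistency of the encoder at this setting (the "q80 can easily be 80 or 90" point):
print(f"subjective mean {dmos.mean():.1f}, std {dmos.std(ddof=1):.1f}")

# How well does the objective metric track the subjective scores?
rho, _ = spearmanr(metric, dmos)
r, _ = pearsonr(metric, dmos)
print(f"Spearman rho {rho:.2f}, Pearson r {r:.2f}")
```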
What about HEIC? And better, proprietary AVIF encoders? We also have data on that. At half the encode speed of cjxl, HEIC (x265) more or less matches JPEG XL. Aurora, at one third the speed of cjxl, matches it at the high end but not at the low end (somewhat surprisingly).
One category the video-codec-derived formats are particularly good at is (lossy) non-photographic images: logos, text, diagrams, etc. On such images they get excellent results. JXL has some catching up to do there (and it can: there is still significant room for encoder improvements).
Note that hardware encoders (not tested here) will almost certainly perform significantly worse than software encoders, since hw design is inherently about cutting corners. E.g. hw JPEG is a lot worse than mozjpeg, Apple's hw HEIC encoder is likely a lot worse than x265, etc.
"Desperate times call for desperate measures."
Hamas and the Israeli regime have in common that they justify their own unjustifiable, inhumane actions by pointing to the unjustifiable, inhumane actions of the other side, leading to a vicious cycle of desperation.
Although it is linguistically perhaps not so clear in English, the only antidote for desperation is HOPE. Hope is the optimistic, constructive feeling that we can work together to build a better world, a world of peace, freedom, solidarity and social justice.
Bombs and rockets shatter hope and feed desperation. Bloodshed feeds bloodlust. No weapon can ever stop this cycle of violence. No wall can ever bring freedom.
So the Chrome team (or is it the AVIF team? I am a bit confused now) finally released the data that was the basis for the decision to remove JPEG XL support in Chrome. Here it is: storage.googleapis.com/avif-compariso…
It would be good if everyone with experience in image compression took a closer look at this data and perhaps tried to reproduce it. I will certainly do so myself. Let me already give some initial remarks/impressions.
Decode speed: why is this measured in Chrome version 92, and not a more recent version? Improvements in jxl decode speed have been made since then — that version of Chrome is more than 1.5 years old.
@laughinghan @atax1a @atomicthumbs A video codec intra frame from today is indeed better, compression-wise, than image formats from the 80s and 90s like JPEG and PNG.
But there are some downsides. Let me elaborate a bit.
@laughinghan @atax1a @atomicthumbs 1) Video codecs are designed for low bitrate, since they need to handle lots of frames per second, so bandwidth is a bigger concern and you don't have time to look at each single frame anyway. Compression techniques for low bitrate are different from those for high fidelity, though.
@laughinghan @atax1a @atomicthumbs (e.g. low bitrate benefits from directional prediction modes and aggressive deblocking filters that can hide artifacts well; high bitrate benefits from context modeling and high precision)
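A much-simplified toy version of a directional intra prediction mode (my own sketch, not any codec's actual implementation): a block is predicted from its already-decoded neighbours, so for smooth or edge-dominated content only a tiny residual has to be coded.

```python
import numpy as np

def predict_block(top, left, mode):
    """Predict an NxN block from its reconstructed neighbours (toy version).

    top:  the N pixels directly above the block
    left: the N pixels directly to its left
    """
    n = len(top)
    if mode == "vertical":    # copy the row above straight down
        return np.tile(top, (n, 1))
    if mode == "horizontal":  # copy the column on the left straight across
        return np.tile(left.reshape(n, 1), (1, n))
    if mode == "dc":          # flat block at the neighbours' mean
        return np.full((n, n), (top.mean() + left.mean()) / 2)
    raise ValueError(mode)

# A block containing a vertical edge: the vertical mode predicts it almost for free.
top   = np.array([10, 10, 200, 200], dtype=float)
left  = np.array([10, 10, 10, 10], dtype=float)
block = np.tile(top, (4, 1))   # the "real" pixels happen to match the row above
residual = block - predict_block(top, left, "vertical")
print(residual)                # all zeros: nothing left to quantize and entropy-code
```

Fine, noise-like texture cannot be predicted this way, which is part of why the high-fidelity toolbox looks so different.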
Chrome's decision may actually turn out to be a blessing in disguise, in the long run. Allow me to explain. 🧵
WebP and AVIF were created specifically for one use case: web image delivery. In both cases, the reasoning was "we have this video codec in the browser anyway, so we might as well use it for images too".
They are not very suitable for other use cases, like image authoring, printing, or capture, since they inherit the limitations of video codecs designed for web delivery: the compression is not designed for high fidelity, and the features are limited to what is useful in video.
There's something about color spaces and their transfer curves that has been bothering me for a while. Put bluntly: image (and video) codecs can be racist. Of course not intentionally, and it's a pretty subtle thing, but 'being bad at dark shades' has implications. An example. 🧵
Consider these two images. I'm hoping twitter recompression will not ruin them too much — these are the original images, before compression.
Let's use an AVIF encoder to significantly reduce their filesize. Using the exact same encode setting for both images, I get the following result. Of course both images have artifacts. Which one has the worst artifacts though?