I wish AVIF images had more reasonable computational requirements. I find the format inferior to JPEG XL, but the difference isn't that huge - both are good enough imo. Sadly, a folder full of AVIF files will make pretty much any consumer CPU in existence chug like mad; it's completely unusable for the kind of everyday browsing an average end user does unless you happen to have something silly like a 64-core Epyc. JXL is already slower than I'd like, but it's good enough on a modern machine. AVIF… isn't.
This is exactly my experience. On paper and in laboratory settings with 64-core machines and 128GB memory, AV1/AVIF is better in every way but in the real world it's too taxing on ordinary hardware.
AVIF uses AV1 as the codec, so I assume the hardware units used to accelerate video decoding should work for images too, at least when it matches the profiles supported by the hardware.
Hardware video decoding APIs often have significantly more latency than software decoders, to the point that it's a noticeable several hundred milliseconds of delay. If they have this delay, they're unusable for images.
What hardware are you using? My Macs seem to handle large amounts of AVIF just fine without chugging.
TIL: AVIF can be nicely embedded and looped like a gif. That's awesome. With most of the video embedding I've seen so far, the issue has always been that it was embedded as a video player.
Huh? What is wrong with
<video autoplay muted loop playsinline preload="auto">
  <source src="video.mp4" type="video/mp4">
</video>
If you can't host the video yourself, get a host that offers a direct link to the video file (e.g. Vimeo).
Sadly, you can almost never upload an animated AVIF, WebP or APNG file anywhere. Not even places that support GIFs or MP4s.
I'm looking at you, Reddit o_O
Why .avifs when we have .webm already? Seems like an overcomplicated replacement for an already existing de-facto standard.
Nitpick - the question should have been:
> Why .avifs when we have .webps already?
WebP and AVIF are both image containers, and both support animated images. The key difference here between avif|mp4 and webp|webm is the MIME type and the associated UX with each of these. Image types are presented without controls, and loop by default if they have multiple frames. Video types are presented with controls and with many other options.
It's a good question as to why AVIF, though. WebP is entirely sufficient in most situations. Where AV1 as a codec shines is more advanced compression, which may not be necessary for a simple looped GIF analog, but you'll still get some gains. The gains come at a processing-speed tradeoff though, so they're best used when you have advanced hardware on a low-bandwidth connection. I personally don't find the tradeoff worth it, so all of my media encoding pipelines opt for WebP/WebM by default.
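For what it's worth, if you do want to serve the AVIF version only where the client can handle it, a <picture> element does the negotiation; a rough sketch with placeholder file names - the browser takes the first type it can decode and falls back to WebP or a plain GIF otherwise:
<picture>
  <source srcset="anim.avif" type="image/avif">
  <source srcset="anim.webp" type="image/webp">
  <img src="anim.gif" alt="looping animation">
</picture>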
1. How does one say that webm is the de facto standard? It’s common but not so ubiquitous that it can’t be replaced easily.
2. AVIF is a better codec with better compression, bit depth and hardware decode support.
I think OP is referring to the container rather than the codec when they talk about .avif and .webm - https://www.webmproject.org/docs/container/ (e.g. MP4 or MKV are container formats that support multiple codecs within them, like Opus, AVC, HEVC, AV1, MP3, etc.).
1. It's VERY common, sometimes pretending to be a .gif file. Many major image hosts are serving .webm even if users upload GIF files.
2. AVIF is not a codec but a container. WebM can also contain AV1 video (but usually contains VP9). Also, the difference between VP9 and AV1 is not big enough to be noticeable on small GIF-like animated pictures.
If it has better hardware decode support, why are there complaints in another thread that a folder full of avifs would slow a computer to a crawl? I'd expect hardware-accelerated decoding to be smooth and efficient.
Hardware acceleration requires your device to have the requisite hardware support. Unlike AVC or HEVC, hardware support for AV1 has been quite limited and only recently has seen a slow uptick (for example, Intel CPUs now offer AV1 hardware decoding). Not sure if Apple supports it yet though. But yeah, if it requires special hardware support to be "smooth", in my mind, it is clearly inferior to its competitor codecs that work fine with software decoding (i.e. running on the CPU).
How do we know those people have both a system with hardware support and a decoder that uses it?
Without specifics of hardware it’s hard to know.
WebP also supports animation.
Very different UX. They autoplay, no controls, no fullscreen, no sound, easy to copy, etc.
videos have all of this too
What is the advantage of using avif over using an MP4 file?
Alpha transparency is supported, which you won't get with (normal) AV1 in MP4. The author seems more interested in the default behavior, which seems like a waste since AV1 will do video better.
Smaller file size, transparency support, and circumventing autoplay restrictions on Safari.
Since every frame is independent, they can be decoded quickly.
Very cool. I didn't know that AVIFs had a full alpha layer, which seems to imply that we now have a format that can do animation AND transparency at reasonable file sizes in all modern browsers.
Perhaps I have a niche use case, but I need to display animated images (or video) with adaptive transparency (meaning the transparent area changes over time - for example, think of a bonfire with smoke as an isolated object).
Previously I was using some Frankenstein setup with old encoders so I could feature detect and serve either WebMs (which have animation+transparency) or Apple ProRes for Safari (because Safari does support WebM, but not the full spec). Frustrating, and a workflow nightmare!
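If anyone needs the same thing: the serving side can stay simple with a multi-source <video>, where each browser takes the first file it can actually play. A rough sketch with placeholder file names (the .mov standing in for whatever alpha-capable encode your Apple-side toolchain produces):
<video autoplay muted loop playsinline>
  <source src="bonfire.mov" type="video/quicktime">
  <source src="bonfire.webm" type="video/webm">
</video>
Browsers that can't handle the QuickTime source just skip down to the WebM.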
WebP also has animation and transparency, does it not?
That's a good alternative; however, I need full programmatic control (start/stop/events). You can't do that with images.
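To illustrate, a minimal sketch of the control you get from a <video> element (the id and file name are made up); there's no equivalent API or event model on <img>:
<video id="fx" muted playsinline>
  <source src="smoke.webm" type="video/webm">
</video>
<script>
  // Hypothetical id and file name; play(), pause() and media events
  // exist on <video>/<audio> elements, not on <img>.
  const fx = document.getElementById('fx');
  fx.addEventListener('ended', () => console.log('clip finished'));
  fx.play();   // start (allowed without a user gesture because it's muted)
  // ...later, from application logic:
  fx.pause();  // stop
</script>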
WebM supports transparency.
Checking, neither AVIF with transparency nor WebM with transparency is supported in Safari ... sigh
(A little off topic?) but your comment made me appreciate AVIF.
Now, if I may ask (I'm not familiar with the encoder space): how good would AVIF be for, say, video streaming?
Also, what's the best encoder for live video streaming in general (and is AVIF in the picture at all)? An additional question: which format is good for text-heavy videos where there might be a huge amount of text that changes (think a person showing a terminal, etc.)? Yes, transmitting the text changes directly would be the most efficient, and I'm just asking out of curiosity, but what would be the second-best option?
AVIF frames are independent; you'd want an actual video codec for streaming. AV1 is probably the most advanced video codec that has wide support.
You have the right intuition – AVIF is an image format based on AV1 (which is a really, really good video codec).
If you're interested in video, you might be interested to know that AV2 is in development.
> If you're interested in video, you might be interested to know that AV2 is in development.
Oh, interesting to know that! What would be the differences between AV1 and AV2?
Found a website (https://www.geekextreme.com/av1-vs-av2-video-codec/) which gave me some interesting results
> AV2 delivers 30% better compression efficiency than AV1, which already compresses 30% better than HEVC (H.265).
> AV2 encoding demands 2-3 times more computational power than AV1, requiring advanced hardware like RTX 5090 for practical use.
> AV2 will officially release by end of 2025, with widespread hardware support expected around 2027 or later.
> AV2 introduces advanced features like split-screen delivery, enhanced AR/VR support, and dynamic bitrate switching for adaptive streaming.
> 88% of AOMedia members plan to implement AV2 within two years, despite infrastructure and hardware compatibility challenges.
If there are any other differences, let me know too. Honestly, I'm a bit curious that it mentions requiring an RTX 5090.
Wouldn't this be a little bad for the market too? Sure, it compresses 30% more, but not everybody has an RTX 5090.
Are we going to see multi-codec setups in things like, say, Netflix, where devices that don't support AV2 get sent AV1, but AV2 is preferred when the hardware supports it?
Just in case you missed it, your quote was referring to encoding requirements. Decoding (e.g. Netflix users) will have a different set of requirements. The situation will also improve over time as dedicated hardware encoders and decoders become available.
For the moment, I don't really mind if it requires more GPU power to encode media, since it only needs to happen once. I expect it will still be possible on a weaker card, but it would just take longer.
Great, yet another image format that cannot be controlled by the browser's "disable autoplay" setting.