A key tip that wasn't in the blog post: be extremely judicious about taking on new code dependencies.
For many apps the main source of memory usage isn't stack or heap memory consumed at runtime; it's the binary itself being loaded into memory. I've seen some wild app binary sizes (200MB+ for relatively modest apps without that much functionality).
One thing that's endlessly frustrating about mobile dev is that the vast majority of devs are thoughtless about dependencies, how they are built/linked, and how that impacts memory use. Far and away the dominant mode of dependencies in mobile-land is just "statically link it and forget about it".
This is the single biggest contributor to app size bloat, and pretty high up there for runtime memory consumption as well.
A double whammy here is that modern iOS apps are actually multiple binaries (main app, watch extensions, widget extensions, etc.), and if you heavily use static linking you're carrying multiple copies of the bloat.
A few actionable things:
- Be very, very judicious about taking on new dependencies. Is a library offering enough value and marginal functionality to be worth the weight? (e.g., I don't think AFNetworking in 2025 meets the bar for the vast majority of people, and yet it's still everywhere.)
- Dynamically link, especially if you're a complex app with multiple targets (see the sketch after this list).
- Dead-strip aggressively. I cannot emphasize this enough. The default build settings will not dead-strip unused public symbols in static libraries. Fix this.
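To make the dynamic-linking point concrete, here's a minimal sketch of a Swift Package manifest that exposes a shared module as a dynamic library, so the main app and each extension link one copy of it instead of embedding their own (the "AppCore" name and layout are hypothetical):

    // swift-tools-version:5.9
    import PackageDescription

    // Hypothetical shared module consumed by the main app, the widget
    // extension, and the watch extension.
    let package = Package(
        name: "AppCore",
        products: [
            // .dynamic builds a framework that every target links against,
            // rather than statically copying the code into each binary.
            .library(name: "AppCore", type: .dynamic, targets: ["AppCore"])
        ],
        targets: [
            .target(name: "AppCore")
        ]
    )

On the app targets themselves you'd still want dead-code stripping enabled (Xcode's DEAD_CODE_STRIPPING setting, which passes -dead_strip to the linker) so unused symbols don't ship.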
I've noticed that iOS 18+ seems to issue low-memory warnings more frequently than older iOS versions (at least on devices with <=4GB RAM). It can now happen when the app is using less than half the device's RAM, which I don't recall seeing before. I assume this is because the OS is using more RAM for AI and such.
Yes, that matches my observations. I develop photography software, and in early versions of iOS 18, saving a large image to the camera roll would sometimes fail because the daemon in charge of carrying out photo library transactions would get killed due to low memory. This only happened on devices that ran Apple Intelligence. Fortunately they seem to have fixed that bug around 18.1 or 18.2, if I recall.
iOS used to be hyper memory-efficient. Somewhere along the line that stopped being the case, and the same goes for app responsiveness, with app sizes increasing every year.
For developers, the choice used to be between a memory-efficient application or no application at all. Nowadays there’s a third option: “not as memory efficient as possible, but tomorrow instead of next month/quarter, and you won’t have to hire and pay experts to tune things.”
For many developers, the choice between those three is easy.
Gross.
At least with storage, bloated apps show up at the top of the "iPhone Storage" list and are at greater risk of the user tapping [Delete App]. There's no UI highlighting memory hogs.
In many ways, images are a big reason why things naturally become inefficient. We are up to a 3x scale factor now, so just by virtue of the higher-resolution display an image at a given display size needs 3x the pixels in each dimension (9x the pixel count of a 1x asset).
And apps are much more image-heavy now, with network/CPU speeds improving to support real-time image loading/decoding.
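To put rough numbers on that (hypothetical figures, assuming an uncompressed 4-bytes-per-pixel decode):

    // Back-of-the-envelope decoded size of one 300x300 pt image at 3x.
    // Real cost varies with pixel format and row padding.
    let pointSize = 300.0
    let scale = 3.0
    let bytesPerPixel = 4.0                                  // RGBA, 8 bits per channel
    let pixels = (pointSize * scale) * (pointSize * scale)   // 810,000 px
    let megabytes = pixels * bytesPerPixel / 1_048_576       // ~3.1 MB of dirty memory
    print(megabytes)

A screenful of those in a feed and the decoded bitmaps quickly dwarf the compressed files that came over the network.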
Showing a splash screen with the last app contents while the app loaded was a good idea on slower hardware.
Now that we have faster hardware with more memory, developers use frameworks which no longer load fast enough, so we get the splash screen with the last app state, followed by the UI being rebuilt from the ground up with something else.
And the splash screen becomes actively deceptive.
Load images lazily, only when needed, to avoid wasting memory up front. AsyncImage and Kingfisher are common options.
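For the SwiftUI route, a minimal sketch of what that looks like (FeedView and photoURLs are made-up names): LazyVStack only builds rows as they scroll into view, and AsyncImage fetches each URL on demand.

    import SwiftUI

    struct FeedView: View {
        let photoURLs: [URL]

        var body: some View {
            ScrollView {
                LazyVStack {
                    ForEach(photoURLs, id: \.self) { url in
                        // AsyncImage starts loading only when the row is created,
                        // which LazyVStack defers until it nears the viewport.
                        AsyncImage(url: url) { image in
                            image.resizable().scaledToFit()
                        } placeholder: {
                            ProgressView()
                        }
                    }
                }
            }
        }
    }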
Kingfisher and AsyncImage decompress images into dirty memory, and images are big. They're fairly CPU and memory inefficient from that perspective. FastImageCache uses memory mapping and is very efficient, but it's 10 years old: https://github.com/path/FastImageCache
If anyone wants to build a modern rkyv + FastImageCache hybrid…
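For anyone curious, the core of the approach looks roughly like this: keep a pre-decoded BGRA bitmap on disk, memory-map it, and hand the mapped bytes to a CGImage, so the pages stay file-backed (clean) and the kernel can evict them under pressure. The dimensions and file layout here are hypothetical; FastImageCache itself does more, like packing images into sprite sheets.

    import CoreGraphics
    import Foundation

    // Wrap a memory-mapped, pre-decoded 32-bit BGRA bitmap file in a CGImage.
    func mappedImage(at url: URL, width: Int, height: Int) -> CGImage? {
        guard let data = try? Data(contentsOf: url, options: .alwaysMapped),
              let provider = CGDataProvider(data: data as CFData) else { return nil }

        let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedFirst.rawValue)
            .union(.byteOrder32Little)                     // BGRA layout
        return CGImage(width: width,
                       height: height,
                       bitsPerComponent: 8,
                       bitsPerPixel: 32,
                       bytesPerRow: width * 4,
                       space: CGColorSpaceCreateDeviceRGB(),
                       bitmapInfo: bitmapInfo,
                       provider: provider,
                       decode: nil,
                       shouldInterpolate: false,
                       intent: .defaultIntent)
    }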
Nuke (https://github.com/kean/Nuke) is a modern alternative that handles efficient memory management with features like progressive decoding and intelligent prefetching.
It does progressive CGImage decoding right; I do remember looking at that! Efficient for network-to-pixels. But it stores PNGs on disk.
So does FastImageCache as far as I can tell? But it also maps it to a packed sprite sheet before writing
I feel like there’s this misconception that mapping data from disk is free (this is also mentioned in the README you linked). It’s not. If the pages are clean then you do get the benefit of being able to discard them when pressure is high, but if you need them again they have to be paged in. And for most apps there is no sharing of this data, so loading it will be accounted for against your process rather than being amortized across several like system libraries might be. Sure, it’s definitely better than dirty anonymous pages, but it’s not free.
The two approaches seem to serve different use cases. Mapping to a sprite sheet (how FastImageCache describes itself) doesn't matter if images aren't actually reused often (e.g., doom scrolling). But if you do tend to either reuse the same image or scroll up/down on the same content, then yes, it will come out ahead.
I've seen a lot of async image loaders over the years on iOS and for the most part they were all fine. The async offload and pushing remote URL loading to the background is the most important part of what was historically missing from the first-party iOS API.
I largely agree. The other thing that happened is that phones got faster: it just doesn't matter as much anymore. A slow disk cache is fine, but if it does anything on the main thread: straight to jail.
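For reference, keeping the decode itself off the main thread is mostly a matter of running ImageIO's downsampling path on a background queue; a rough sketch (the pixel cap and the queue choice are illustrative, not any particular library's API):

    import ImageIO
    import UIKit

    // Decode and downsample on a background queue so the main thread only
    // ever receives a ready-to-draw image.
    func loadDownsampledImage(at url: URL,
                              maxPixelSize: Int = 600,
                              completion: @escaping (UIImage?) -> Void) {
        DispatchQueue.global(qos: .userInitiated).async {
            let sourceOptions = [kCGImageSourceShouldCache: false] as CFDictionary
            guard let source = CGImageSourceCreateWithURL(url as CFURL, sourceOptions) else {
                DispatchQueue.main.async { completion(nil) }
                return
            }
            let thumbnailOptions = [
                kCGImageSourceCreateThumbnailFromImageAlways: true,
                kCGImageSourceShouldCacheImmediately: true,    // decode now, not at first draw
                kCGImageSourceCreateThumbnailWithTransform: true,
                kCGImageSourceThumbnailMaxPixelSize: maxPixelSize
            ] as CFDictionary
            let cgImage = CGImageSourceCreateThumbnailAtIndex(source, 0, thumbnailOptions)
            DispatchQueue.main.async {
                if let cgImage = cgImage {
                    completion(UIImage(cgImage: cgImage))
                } else {
                    completion(nil)
                }
            }
        }
    }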
The article is about memory efficiency, though! In order to get equivalent memory behavior the buffers would need to be marked as purgeable (from the article) so that kernel can empty the cache if needed. Not sure if any of these libraries do that.
The main situation where cache speed is tested is app launch: load every image for the current viewport simultaneously. It can take some time if your images are slow to decode.
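On the purgeable point, here's a minimal sketch of the access-bracketing pattern using Foundation's NSPurgeableData (just an illustration of the concept, not something these libraries are known to do):

    import Foundation

    // A cache entry whose bytes the kernel may discard under memory pressure.
    // Every access has to be bracketed, and the "already purged" case handled
    // (typically by re-decoding from disk).
    final class PurgeableImageEntry {
        private let storage = NSPurgeableData()

        init(bitmapBytes: Data) {
            storage.append(bitmapBytes)
            // NSPurgeableData starts with an access in progress; end it so the
            // contents become eligible for purging until the next access.
            storage.endContentAccess()
        }

        /// Runs `body` with the bytes, or returns nil if they were purged.
        func withBytes<T>(_ body: (Data) -> T) -> T? {
            guard storage.beginContentAccess() else { return nil }  // purged
            defer { storage.endContentAccess() }
            return body(storage as Data)
        }
    }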
Is Nuke any different? I know its author wrote a lot about performance but I've never looked into it.
Nuke also uses a simple filesystem cache, so it's the same: https://github.com/kean/Nuke/blob/main/Sources/Nuke/Caching/...
> The framework is lean and compiles in under 2 seconds
It compiles fast, though!
Back on the iPhone 3G, PNG decoding alone was slow enough that I had to cache pre-rendered bitmap representations into SQLite blobs to achieve "glassy" scrolling of image-heavy grids; I'd do the SQLite loading on a worker thread and dump the pre-rendered bitmap representation right back into a CGImage wrapper. I recall there being some inline NEON assembly involved as well.
Within 1-2 years, those optimizations were totally unnecessary.
> This dramatically reduces memory usage for lists or grids with many items
I'd love to see a specific example with before/after numbers.
> Lottie [...] ends up loading every frame as a raw bitmap.
Isn't Lottie rendering the vector data directly to screen?
> Isn't Lottie rendering the vector data directly to screen?
It has to rasterize if there's no native vector support, unless it's converting the vector to GL calls. Maybe it targets the lowest common denominator for the platforms it supports in order to have a stable output.
iOS also didn't support native vector display until fairly recently, and I'm not sure if it's limited to baked assets in an image atlas or not. It also had limitations in that it only allowed PDF vectors rather than SVG directly. I haven't kept up to know what the current state is.