Zip with no compression is a nice contender for a container format that shouldn't be slept on. It effectively reduces the I/O while, unlike TAR, allowing direct random access to the files without "extracting" them or seeking through the entire archive. This works even via mmap, over HTTP range requests, etc.
You can still get the compression benefits by serving files with Content-Encoding: gzip or whatever. Zip has built-in per-file compression, but you can simply not use it and apply external compression instead, especially over the wire.
It's pretty widely used, though often dressed up as something else: JAR files, APK files, and so on.
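To make the "random access without extracting" point concrete, here's a minimal Go sketch using the standard archive/zip package; the archive and member names are made up:

    package main

    import (
        "archive/zip"
        "fmt"
        "io"
        "log"
    )

    func main() {
        // Open the archive; only the central directory is read up front.
        r, err := zip.OpenReader("assets.zip")
        if err != nil {
            log.Fatal(err)
        }
        defer r.Close()

        for _, f := range r.File {
            if f.Name != "textures/grass.png" {
                continue
            }
            // With Method == zip.Store (no compression), Open just reads
            // the member's bytes in place at its recorded offset.
            rc, err := f.Open()
            if err != nil {
                log.Fatal(err)
            }
            data, err := io.ReadAll(rc)
            rc.Close()
            if err != nil {
                log.Fatal(err)
            }
            fmt.Printf("read %d bytes, stored uncompressed: %v\n", len(data), f.Method == zip.Store)
        }
    }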
I think the article's complaint about lacking Unix access rights and metadata is a bit strange. That seems like a feature more than a bug, as I wouldn't expect this to be something that transfers between machines. I don't want to unpack an archive and have to scrutinize it for files with o+rxst permissions, or have their creation date be anything other than when I unpacked them.
Isn't this what is already common in the Python community?
> I don't want to unpack an archive and have to scrutinize it for files with o+rxst permissions, or have their creation date be anything other than when I unpacked them.
I'm the opposite: when I pack and unpack something, I want the files to be identical, including attributes. Why should I throw away all the timestamps just because the files were temporarily in an archive?
> Why should I throw away all the timestamps just because the files were temporarily in an archive?
In case anyone is unaware, you don't have to throw away all the timestamps when using "zip with no compression". The metadata for each zipped file includes one timestamp (originally rounded to an even number of seconds, in local time).
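A quick Go sketch to illustrate (the archive name is a placeholder): archive/zip exposes a per-file modification time, using the higher-resolution extended timestamp field when the archiver wrote one:

    package main

    import (
        "archive/zip"
        "fmt"
        "log"
    )

    func main() {
        r, err := zip.OpenReader("archive.zip")
        if err != nil {
            log.Fatal(err)
        }
        defer r.Close()

        for _, f := range r.File {
            // Modified falls back to the classic 2-second-resolution,
            // local-time DOS timestamp when no extended field is present.
            fmt.Printf("%s\t%s\n", f.Name, f.Modified)
        }
    }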
I am a big last-modified-timestamp fan and am often discouraged that scp, git, and even many zip utilities don't preserve it (at least by default).
Yes, it's a lossy process.
If your archive drops it, you can't get it back.
If you don't want it, you can just chmod -R u=rw,go=r,a-x
> If your archive drops it, you can't get it back.
Hence, the common archive format is tar, not zip.
> Zip with no compression is a nice contender for a container format that shouldn't be slept on
SquashFS with zstd compression is used by various container runtimes, and is popular in HPC where filesystems often have high latency. It can be mounted natively or with FUSE, and the decompression overhead is not really felt.
Wouldn't you still have a lot of syscalls?
Yes, but with much lower latency. The squashfs image keeps the files close together, and you benefit a lot from the filesystem cache.
Headline is wrong: I/O wasn't the bottleneck, syscalls were.
Stupid question: why can't we get a syscall to load an entire directory into an array of file descriptors (minus an array of paths to ignore), instead of calling open() on every individual file in that directory? Seems like the simplest solution, no?
One aspect of the question is that "permissions" are mostly regulated at open time, and user code should check for failures. This was a driving inspiration for the tiny 27-line C virtual machine in https://github.com/c-blake/batch that lets you, e.g., synthesize a single call that mmaps a whole file https://github.com/c-blake/batch/blob/64a35b4b35efa8c52afb64... which seems like it would also have helped the article author.
Not sure, I'd like that too
You could use io_uring, but IMO that API is annoying, and I remember hitting limitations. One thing you could do with io_uring is use openat (the op, not the syscall) with the dir fd (which you get from the syscall), so you can asynchronously open and read files; however, you couldn't open directories for some reason. There's a chance I'm remembering wrong.
What comes closest is scandir [1], which gives you an iterator of direntries, and can be used to avoid lstat syscalls for each file.
Otherwise you can open a dir and pass its fd to openat together with a relative path to a file, to reduce the kernel overhead of resolving absolute paths for each file.
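A minimal Go sketch of both ideas, assuming Linux and the golang.org/x/sys/unix package (the directory name is a placeholder): os.ReadDir lists entries without a per-file stat, and openat opens each file relative to an already-open directory fd:

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/sys/unix"
    )

    func main() {
        dir := "data"

        // Built on getdents: names and types come back without a stat per file.
        entries, err := os.ReadDir(dir)
        if err != nil {
            log.Fatal(err)
        }

        // Open the directory once, then resolve each file relative to it,
        // so the kernel doesn't walk the full path every time.
        dirfd, err := unix.Open(dir, unix.O_RDONLY|unix.O_DIRECTORY, 0)
        if err != nil {
            log.Fatal(err)
        }
        defer unix.Close(dirfd)

        buf := make([]byte, 64*1024)
        for _, e := range entries {
            if e.IsDir() {
                continue
            }
            fd, err := unix.Openat(dirfd, e.Name(), unix.O_RDONLY, 0)
            if err != nil {
                log.Fatal(err)
            }
            var size int64
            for {
                n, err := unix.Read(fd, buf)
                if n > 0 {
                    size += int64(n)
                }
                if n <= 0 || err != nil {
                    break
                }
            }
            unix.Close(fd)
            fmt.Printf("%s: %d bytes\n", e.Name(), size)
        }
    }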
>why can't we get a syscall to load an entire directory into an array of file descriptors (minus an array of paths to ignore), instead of calling open() on every individual file in that directory?
You mean like a range of file descriptors you could use if you want to save files in that directory?
If you don't need the security checks at all, then yes. Otherwise you need to check the permissions for every file.
Something that struck me earlier this week while profiling certain workloads: I'd really like a flame graph that includes wall time spent waiting on I/O, be it a database call, the filesystem, or another RPC.
For example, our integration test suite on a particular service has become quite slow, but it's not particularly clear where the time is going. I suspect a decent amount of time is being spent talking to Postgres, but I'd like a low-touch way to profile this.
There's prior work: https://www.brendangregg.com/FlameGraphs/offcpuflamegraphs.h...
There are a few challenges here:
- Off-CPU time is missed by the sampling interrupt that normally collects stack traces, so you either instrument a full timeline as threads move on and off the CPU, or you periodically walk every thread for its stack trace.
- Applications have many idle threads, and waiting for I/O is a common thread-pool case, so it's more challenging to distinguish the thread that's waiting on a pool doing delegated I/O from the pool's idle worker threads.
Some solutions:
- I've used Nsight Systems for non-GPU stuff to visualize off-CPU time equally with on-CPU time.
- gdb's "thread apply all bt" is slow but does full call-stack walking. In Python, we have py-spy dump for supported interpreters.
- Remember that anything you can represent as call stacks and integers can be converted easily to a flame graph, e.g. taking strace durations by tid (and maybe fd) and aggregating them into a flame graph; see the sketch after this list.
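A rough Go sketch of that last bullet: it assumes strace -f -T output on stdin and emits the folded "frames value" lines that flamegraph.pl consumes (real strace output also has unfinished/resumed lines and signals, which this simply skips):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
        "strconv"
    )

    // Matches lines like: 1234  openat(AT_FDCWD, "x.txt", O_RDONLY) = 3 <0.000041>
    var lineRe = regexp.MustCompile(`^(\d+)\s+(\w+)\(.*<([\d.]+)>$`)

    func main() {
        totals := map[string]float64{} // "tid;syscall" -> seconds
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 1024*1024), 1024*1024)
        for sc.Scan() {
            m := lineRe.FindStringSubmatch(sc.Text())
            if m == nil {
                continue // unfinished/resumed lines, signals, exit notices
            }
            secs, err := strconv.ParseFloat(m[3], 64)
            if err != nil {
                continue
            }
            totals[m[1]+";"+m[2]] += secs
        }
        // Use microseconds as the "sample count" for each stack.
        for stack, secs := range totals {
            fmt.Printf("%s %d\n", stack, int64(secs*1e6))
        }
    }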
See if you can wrap the underlying library call to pg.query or whatever it is with a generic wrapper that logs time spent in the query function. Should be easy in a dynamic lang.
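Even outside a dynamic language the wrapper is only a few lines; a hedged Go sketch with database/sql, a made-up DSN, and github.com/lib/pq picked arbitrarily as the driver:

    package main

    import (
        "context"
        "database/sql"
        "log"
        "time"

        _ "github.com/lib/pq" // arbitrary choice of Postgres driver
    )

    // timedQuery wraps db.QueryContext and logs how long each query took.
    func timedQuery(ctx context.Context, db *sql.DB, query string, args ...any) (*sql.Rows, error) {
        start := time.Now()
        rows, err := db.QueryContext(ctx, query, args...)
        log.Printf("query %q took %s (err=%v)", query, time.Since(start), err)
        return rows, err
    }

    func main() {
        db, err := sql.Open("postgres", "postgres://localhost/test?sslmode=disable")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        rows, err := timedQuery(context.Background(), db, "SELECT 1")
        if err != nil {
            log.Fatal(err)
        }
        rows.Close()
    }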
A tracing profiler can do exactly that; you don't need a dynamic lang.
"I/O is the bottleneck" is only true in the loose sense that "reading files" is slow.
Strictly speaking, the bottleneck was latency, not bandwidth.
profile profile profile
Sounds more like the VFS layer/FS is the bottleneck. It would be interesting to try another FS or operating system to see how it compares.
Some say Mac OS X is (or used to be) slower than Linux, at least for certain syscalls.
https://github.com/golang/go/issues/28739#issuecomment-10426...
https://stackoverflow.com/questions/64656255/why-is-the-c-fu...
https://github.com/valhalla/valhalla/issues/1192
https://news.ycombinator.com/item?id=13628320
Not sure what the root cause is, though.
This would not be surprising at all! An impressive amount of work has gone into making the Linux VFS and filesystem code fast and scalable. I'm well aware that Linux didn't invent the RCU scheme, but it uses variations on RCU liberally to make filesystem operations minimally contentious, and it caches aggressively. (I've also learned recently that the Linux VFS abstractions are quite different from BSD/UNIX, and they don't really map to each other. Linux has many structures, like dentries and generic inodes, that map to roughly one structure in BSD/UNIX, the vnode. I'm not positive this has huge performance implications, but it does seem like Linux is aggressive at caching dentries, which may make a difference.)
That said, I'm certainly no expert on filesystems or OS kernels, so I wouldn't know if Linux would perform faster or slower... But it would be very interesting to see a comparison, possibly even with a hypervisor adding overhead.
there are a loooot of languages/compilers for which the most wall-time expensive operation in compilation or loading is stat(2) searching for files
I actually ran into this issue building dependency graphs of a Go monorepo. We analyzed the CPU trace and found that the program was doing a lot of GC, so we reduced allocations. That turned out to be noise, though: the runtime was just making use of time spent waiting on I/O, because we had shelled out to go list to get a JSON dep graph from the CLI. go list turns out to be slow due to stat calls and reading from disk. We replaced our usage of go list with a custom package-import-graph parser using the standard library parser packages, and instead of reading from disk we give the parser byte blobs from git, also using git ls-files to "stat" the files. I don't remember the specifics, but I believe we brought the time to build the dep graph down from 30-45s to 500ms.
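For reference, the core of such a parser is tiny; a minimal sketch with the standard go/parser package on an in-memory blob (the source literal stands in for bytes you'd read out of git):

    package main

    import (
        "fmt"
        "go/parser"
        "go/token"
        "log"
    )

    func main() {
        src := []byte("package demo\n\nimport (\n\t\"fmt\"\n\t\"net/http\"\n)\n\nfunc Handler() { fmt.Println(http.StatusOK) }\n")

        fset := token.NewFileSet()
        // parser.ImportsOnly stops after the import block, so this stays
        // cheap even for large files.
        f, err := parser.ParseFile(fset, "demo.go", src, parser.ImportsOnly)
        if err != nil {
            log.Fatal(err)
        }
        for _, imp := range f.Imports {
            fmt.Println(imp.Path.Value) // quoted import path, e.g. "fmt"
        }
    }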
Amazing article, thanks for sharing. I really appreciate the deep investigations in response to the comments
The same thing applies to other system aspects:
Compressing the kernel makes it load into RAM faster even though it still has to execute the decompression step. Why?
Because loading from disk to RAM is a larger bottleneck than decompressing on the CPU (rough sketch below).
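A tiny, unscientific Go sketch of that disk-vs-CPU tradeoff (file names are placeholders, and a warm page cache will happily skew the numbers):

    package main

    import (
        "compress/gzip"
        "fmt"
        "io"
        "log"
        "os"
        "time"
    )

    func timeIt(name string, read func() (int64, error)) {
        start := time.Now()
        n, err := read()
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%s: %d bytes in %s\n", name, n, time.Since(start))
    }

    func main() {
        timeIt("raw read", func() (int64, error) {
            f, err := os.Open("image.bin")
            if err != nil {
                return 0, err
            }
            defer f.Close()
            return io.Copy(io.Discard, f)
        })
        timeIt("gzip read + decompress", func() (int64, error) {
            f, err := os.Open("image.bin.gz")
            if err != nil {
                return 0, err
            }
            defer f.Close()
            zr, err := gzip.NewReader(f)
            if err != nil {
                return 0, err
            }
            defer zr.Close()
            return io.Copy(io.Discard, zr)
        })
    }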
The same applies to algorithms: always find the largest bottleneck in your chain of dependent executions and apply changes there, as the rest of the pipeline is waiting on it. Often picking the right algorithm "solves it", but it may be something else, like waiting for I/O or coordinating across actors (mutexes, if concurrency is done the way it used to be).
That's also part of the counterintuitive take that more concurrency brings more overhead and not necessarily faster execution (a topic largely discussed a few years ago around async concurrency and immutable structures).