We tried to use this on our compute cluster for silicon design/verification. We gave up in the end and just went with the traditional TCL (now Lua) modules.
The problems are:
1. You can't have apptainers that use each other. The most common case was things like Make, GCC, Git etc. If Make is in a different apptainer from GCC then it won't work, because as soon as you're inside the Make container it can't see GCC any more.
2. It doesn't work if any of your output artefacts depend on things inside the container. For example you use your GCC apptainer to compile a program. It appears to work, but when you run it you find it actually linked to something in the apptainer that isn't visible any more. This is also a problem for C headers.
3. We had constant issues with PATH getting messed up so you can't see things outside the apptainer that should have been available.
All in all it was a nice idea but ended up causing way more hassle than it was worth. It was much easier just to use an old OS (RHEL8) and get everything to work directly on that.
I think of using Apptainer/Singularity as more like Docker than anything else (without the full networking configs). These are all issues with traditional Docker containers as well, so I’m not sure how you were using the containers or what you were expecting.
For my workflows on HPC, I use apptainers as basically drop-in replacements for Docker, and for that, they work quite well. The biggest benefit is that the containers are unprivileged. This means you can’t do a lot of things (in particular complex networking), but it also makes it much more secure for multi-tenant systems (like HPC).
(I know Docker and Apptainer are slightly different beasts, but I’m speaking in broad strokes in a general sense without extra permissions).
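As a rough sketch of what "drop-in" looks like in practice (the image and script names are just placeholders), Apptainer can pull and run OCI/Docker images directly:

    apptainer pull myenv.sif docker://python:3.11
    apptainer exec --nv myenv.sif python train.py

The docker:// URI and the --nv flag for GPU access are standard Apptainer features; everything else here is made up for illustration.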
You can also run Docker itself in rootless mode[1]. And if for some reason you don't want to run Docker, you can also use Podman or Incus instead; they both support Docker images, as well as running unprivileged. Finally, there's also Flox[2], which is a Nix-based application sandbox that I believe would align more towards your (and OP's) use case (unless you specifically require Docker image compatibility).
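For instance, rootless podman needs nothing special at all -- running it as an unprivileged user already gives you user-namespace containers:

    podman run --rm -it docker.io/library/alpine sh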
So unfortunately your example doesn't illustrate why Apptainer is a better option.
So you're using a container tool, and your biggest problem with it is that it's doing exactly what's advertised -> containers :D
If you want to be unnecessarily dismissive about the problems with containers, sure.
You don't mix and match pieces of containers, just like you wouldn't mix and match binaries from different distributions of Linux.
You can use a container as a single environment in which to do development, and that works fine. But they are by definition an isolated environment with different dependencies than other containers. The result of compiling something in a container necessarily needs to end up in its own container.
...that said, you could use the exact same container base image, and make many different container images from it, and those files would be compatible (assuming you shipped all needed dependencies).
> you wouldn't mix and match binaries from different distributions of Linux.
You can absolutely mix and match lots of different binaries from different sources on one Linux system. That's exactly what we're doing now with TCL modules.
> and make many different container images from it
Well yes, that's the problem. You end up either putting everything in one container (in which case why bother with a container?), or with a combinatorial explosion of every piece and version of software you might use.
TCL modules are better. They don't let you cheat like containers do, but in return you get a better system.
> You can absolutely mix and match lots of different binaries from different sources on one Linux system. That's exactly what we're doing now with TCL modules.
Doing this across different Linux distributions is inherently prone to failure. I don't know about your TCL modules specifically, but unless you have an identical and completely reproducible software toolchain across multiple linux distributions, it's going to end with problems.
Honestly, it sounds like you just don't understand these systems and how they work. TCL modules aren't better than containers, this is like comparing apples and orangutans.
> Doing this across different Linux distributions is inherently prone to failure.
Sure if you just take binaries from one distro that link with libraries on that distro and try and run it on a different one... But that's not what we're doing. All of our TCL modules are either portable binaries (e.g. commercial software) or compiled from source.
> Honestly, it sounds like you just don't understand these systems and how they work.
I do, but well done for being patronising.
> TCL modules aren't better than containers,
They are better for our use case.
> this is like comparing apples and orangutans.
If apples and orangutans were potential solutions to a single problem why couldn't you compare them?
How do you guys use Lua modules?
We put all of our software in a module and then we have a script for our projects that loads the appropriate software. I'm not sure I understand the question...
Great to see Apptainer getting some attention. It generally excels over other container options (like Docker and Podman) in these scenarios:
- Need to run more than one activity in a single container (this is an anti-pattern in other container technologies)
- HPC (and sometimes college) environments
- Want single-file distribution model (although doesn't support deltas)
- Cryptographically sign a SIF file without an external server
- Robust GPU support
> Want single-file distribution model (although doesn't support deltas)
You can achieve that with docker by `docker save image-name | gzip > image-name.tar.gz` and `docker load --input image-name.tar.gz`.
It likewise doesn't support deltas but there was a link here on HN recently to something called "unregistry" which allows for doing "docker push" to deploy an image to a remote machine without a registry, and that thing does take deltas into account.
OCI image repositories are pretty ubiquitous nowadays and are trivial to set up.
I am sure that a lot of people have them deployed and don't even realize it. If you are using Gitea, Gitlab, Github, or any of their major forks/variations, you probably already have a place to put your images.
So I really don't know what the advantage of 'single file distribution model' is here.
This is probably why people don't bother sharing tarballs of docker images with one another even though it has been an option this entire time.
If you're sharing with lots of people registries are great.
If you're deploying to a server, I don't see a point in setting up a registry, regardless of how trivial it is. It seems even more trivial to just send the deployment package to the server.
> (this is an anti-pattern in other container technologies)
It is? I have no issues packing my development containers full of concurrent running processes. systemd even supports running as a "container init" out of the box, so you can get something that looks very similar to a full VM.
Apptainer and singularity ce are quite common in HPC. While both implementations fork the old singularity project, they are not really identical.
We use singularity in the HPCs (like Leonardo, LUMI, Fugaku, NeSI NZ, Levante) but some devs and researchers have apptainer installed locally.
We found a timezone bug a few days ago in our Python code (matplotlib,xarray,etc.), but that didn't happen with apptainer.
As the code bases are still a bit similar, I could confirm apptainer fixed it but singularity ce was still affected by the bug -- singularity replaces the UTC timezone file with the user's timezone, Helsinki EEST in our case on the LUMI HPC.
> Apptainer and singularity ce are quite common in HPC. While both implementations fork the old singularity project, they are not really identical.
Apptainer is not a fork of the old Singularity project: Apptainer is the original project, but the community voted to change its name. It also came under the umbrella of the Linux Foundation:
* https://apptainer.org/news/community-announcement-20211130/
Sylabs (where the original Singularity author first worked) was the one that forked off the original project.
Oh, that's correct. Thanks!
Luckily they’re still compatible with each others containers. Can use Apptainer to build the container then run it on Singularity and vice-versa.
Yeah, we haven't found any issues so far besides this one with time zone. Other than that, we've been able to run the same containers with both.
Funny this is here. Apptainer is Singularity, described here:
https://journals.plos.org/plosone/article?id=10.1371/journal...
If you ever use a shared cluster at a university or run by the government, Apptainer will be available, and Podman / Docker likely won't be.
In these environments, it is best not to use containers at all, and instead get to know your sysadmin and understand how he expects the cluster to be used.
Why are docker/podman less common? And why do you say it's better not to use containers? Performance?
docker and podman expect to extract images to disk, then use fancy features like overlayfs, which doesn't work on network filesystems -- and in hpc, most filesystems users can write to persistently are network filesystems.
apptainer images are straight filesystem images with no overlayfs or storage driver magic happening -- just a straight loop mount of a disk image.
this means your container images can now live on your network filesystem.
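the usual pattern is roughly this (paths made up): build once, drop the single .sif on the shared filesystem, and every node just loop-mounts it at job time:

    apptainer build /shared/project/env.sif docker://ubuntu:24.04
    apptainer exec /shared/project/env.sif ./my_analysis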
Do the compute instances not have hard disks? Because it seems like whoever's running these systems doesn't understand Linux or containers all that well.
If there's a hard disk on the compute nodes, then you just run the container from the remote image registry, and it downloads and extracts it temporarily to disk. No need for a network filesystem.
If the containerized apps want to then work on common/shared files, they can still do that. You just mount the network filesystem on the host, then volume-mount that into the container's runtime. Now the containerized apps can access the network filesystem.
This is standard practice in AWS ECS, where you can mount an EFS filesystem inside your running containers in ECS. (EFS is just NFS, and ECS is just a wrapper around Docker)
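Concretely (paths and image name hypothetical), with Docker that's just a bind mount of the already-mounted network filesystem:

    docker run --rm -v /mnt/lustre/project:/data myimage:latest ./process /data/input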
At the scale of data we see on our HPC, it is way better performance per £/$ to use Lustre mounted over a fast network. We would spend far too much time shifting data otherwise. Local storage should be used for tmp and scratch purposes.
A docker image is exactly that kind of scratch use.
Imagine copying an 8GB image to 96,000 ranks over the network.
It's called caching layers bruv, container images do it. Plus you can stagger registries in a tiered cache per rack/cage/etc. OTOH, constantly re-copying the same executable over and over every time you execute or access it over a network filesystem wastes bandwidth and time, and a network filesystem cache is both inefficient and runs into cache invalidation issues.
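A tiered cache is basically just a pull-through registry per rack/cage, e.g. something like the standard registry image in proxy mode (port and upstream here only as an example):

    docker run -d -p 5000:5000 \
      -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
      registry:2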
yes, nodes have local disks, but any local filesystem the user can write to is often wiped between jobs as the machines are shared resources.
there is also the problem of simply distributing the image and mounting it up. you don't want to waste cluster time at the start of your job pulling down an entire image to every node, then extract the layers -- it is way faster to put a filesystem image in your home directory, then loop mount that image.
> yes, nodes have local disks, but any local filesystem the user can write to is often wiped between jobs as the machines are shared resources.
This is completely compatible with containerized systems. Immutable images stay in a filesystem directory users have no access to, so there is no need to wipe them. Write-ability within a running container is completely controlled by the admin configuring how the container executes.
> you don't want to waste cluster time at the start of your job pulling down an entire image to every node, then extract the layers -- it is way faster to put a filesystem image in your home directory, then loop mount that image
This is actually less efficient over time as there's a network access tax every time you use the network filesystem. On top of that, 1) You don't have to pull the images at execution time, you can pull them immediately as soon as they're pushed to a remote registry, well before your job starts, and 2) Containers use caching layers so that only changed layers need to be pulled; if only 1 file is changed in a new container image layer, you only pull 1 file, not the entire thing.
there generally is no central shared immutable image store because every job is using its own collection of images.
what you're describing might work well for a small team, but when you have a few hundred to thousand researchers sharing the cluster, very few of those layers are actually shared between jobs
even with a handful of users, most of these container images get fat at the python package installation layer, and that layer is one of the most frequently changed layers, and is frequently only used for a single job
> container images get fat at the python package installation layer, and that layer is one of the most frequently changed layers
This might be mitigated by having a standard set of packages, which you install in a lower layer, and then changing ones, at a higher layer.
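A sketch of that split in a Dockerfile (file names made up): the stable layer stays cached on the nodes, and only the small per-job layer gets re-pulled:

    FROM python:3.11-slim
    # stable, shared packages -- cached, rarely re-pulled
    COPY requirements-base.txt .
    RUN pip install -r requirements-base.txt
    # per-job packages -- only this layer changes between jobs
    COPY requirements-job.txt .
    RUN pip install -r requirements-job.txt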
Just to review, here are the options:
1. Create an 8gb file on network storage which is loopback-mounted. Accessing the file requires a block store pull over the network for every file access. According to your claim now, these giant blobs are rarely shared between jobs?
2. Create a Docker image in a remote registry. Layers are downloaded as necessary. According to your claim now, most of the containers will have a single layer which is both huge and changed every time python packages are changed, which you're saying is usually done for each job?
Both of these seem bad.
For the giant loopback file, why are there so many of these giant files which (it would seem) are almost identical except for the python differences? Why are they constantly changing? Why are they all so different? Why does every job have a different image?
For the container images, why are they having bloated image layers when python packages change? Python files are not huge. The layers should be between 5-100MB once new packages are installed. If the network is as fast as you say, transferring this once (even at job start) should take what, 2 seconds, if that? Do it before the job starts and it's instantaneous.
The whole thing sounds inefficient. If we can make kubernetes clusters run 10,000 microservices across 5,000 nodes and make it fast enough for the biggest sites in the world, we can make an HPC cluster (which has higher performance hardware) work too. The people setting this up need to optimize.
example tiny hpc cluster...
100 nodes. 500gb nvme disk per node. maybe 4 gpus per node. 64 cores? all other storage is network. could be nfs, beegfs, lustre.
100s of users that change over time. say 10 go away and 10 new ones come every 6 months. everyone has 50tb of data. tiny amount of code. cpu and/or gpu intensive.
all those users do different things and use different software. they run batch jobs that go for up to a month. and those users are first and foremost scientists. they happen to write python scripts too.
edit: that thing about optimization.. most of the folks who setup hpc clusters turn off hyperthreading.
Container orchestrators all have scheduled jobs that clean up old cached layers. The layers get cached on the local drive (only 500gb? you could easily upgrade to 1tb, they're dirt cheap, and don't need to be "enterprise-grade" for ephemeral storage on a lab rackmount. not that the layers should reach 500gb, because caching and cleanup...). The bulk data is still served over network storage and mounted into the container at runtime. GPU access works.
This is how systems like AWS ECS, or even modern CI/CD providers, work. It's essentially a fleet of machines running Docker, with ephemeral storage and cached layers. For the CI/CD providers, they have millions of random jobs running all the time by tens of thousands of random people with random containers. Works fine. Requires tweaking, but it's an established pattern that scales well. They even re-schedule jobs from a particular customer to the previous VM for a "warm cache". Extremely fast, extremely large scale, all with containers.
It's made better by using hypervisors (or even better: micro-VMs) rather than bare-metal. Abstract the allocations of host, storage and network, makes maintenance, upgrades, live-migration, etc easier. I know academia loves its bare metal, but it's 2025, not 2005.
the "network tax" is not really a network tax. the network is generally a dedicated storage network using infiniband or roce if you cheap out. the storage network and network storage is generally going to be faster than local nvme.
on a compute node, / is maybe 500gb of nvme. that's all the disk it has.
the users mount their $home over nfs. and get whatever quota we assign. can be 100s of tb.
i actually allow rootless podman to run. but frown at it. it's not very hard for a few jobs to use up all that 500gb if everyone is using podman.
i don't care if you run apptainer/singularity though. since it exists entirely within your own $home and doesn't use the local disk.
Flatpak is considering moving from OSTree to containers, citing the well-maintained tooling as a major plus point. How would that differ from Apptainer?
Maybe the idea is that flatpak can have better sandbox control over applications running in flatpak using xdg-dbus, i.e. you can select the permissions that you want to give to a flatpak application, so sometimes it can act near native and not be completely isolated like containers.
Also I am not sure if apptainers are completely isolated.
Though I suppose through tools like https://containertoolbx.org/ such a point also becomes moot, and then I guess if they move to containers, doesn't it sort of become like toolbx?
To be honest, I think a lot of tools can have a huge overlap b/w them and I guess that's okay too
Containers in Linux are more a conceptual collection of different isolation techniques. Mostly just based on Linux namespaces. But things like cgroups, Linux capabilities, occasionally MAC (selinux, etc) and a few other items often get thrown in the mix.
https://www.redhat.com/en/blog/7-linux-namespaces
After a quick view of the apptainer documentation it looks like it minimally takes advantage of user and mount namespaces. So each apptainer gets its own idea of what the users/groups are and what the file system looks like.
Flatpak is more about desktop application sandboxing. So while it does use user and mount namespaces like apptainer, it takes advantage of more Linux features than that to help enhance the isolation.
Which appears to be the opposite of the point of apptainer. Apptainer wants to use containers that integrate tightly with the rest of the system with very little isolation versus Flatpak wants to be maximally isolated with only the permissions necessary for the application.
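You can see that default integration directly (example commands, nothing project-specific about them): by default Apptainer runs you as the same user and bind-mounts your home directory into the container:

    apptainer exec some.sif id         # same uid/gid as on the host
    apptainer exec some.sif ls $HOME   # home is mounted through by default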
That isn't to say that apptainer can't use more Linux features to increase isolation. It supports the use of cgroups for resource quotas and can take advantage of different types of namespaces for network isolation among other things.
Now as far as the "OSTree vs containers" statement you are replying to... this is kinda misleading.
OSTree is designed to manage binary files in a way similar to how git manages text files. It isn't a type of container technology in itself. It is just used for managing how objects on the file system are arranged and managed.
It is used by some flatpak applications, but it is used for things besides flatpak.
The 'containers' he mentioned is really a reference to the OCI container image format.
An OCI container image is, again, a way to manage the file system contents typically used in containers. It isn't a container technology itself.
It is like a tarball, but for file system images.
OCI container images are a standardized version of Docker images.
Due to the popularity and ubiquity of OCI image related tools and hosting software it makes sense for Flatpak to support it.
OCI images, when combined with bootc, also can be used to deploy Linux container images to "bare hardware". Which is gaining popularity in helping to create and deploy "immutable" or "atomic" Linux distributions. Fedora Atomic-based OSes seem to be moving to use Bootc with OCI over pure OSTree approach... although they still use OSTree in some capacity.
Incidentally Apptainer supports the use of OCI images (in addition to its native SIF) as well as other commonly used container technologies like CNI. CNI is the container network interface and is used with Kubernetes among other things.
Thanks a crazy lot for writing this as it actually made me genuinely understand the differences.
And also, I must say that one of the most underrated parts you mentioned, which I didn't know about, was that apptainer can be "unisolated?", i.e. we don't have to do crazy shenanigans for it to access my files and it can just do it simply.
Like someone else had mentioned https://nixery.dev/ and I wanted to see if I could use nix tools via docker and use them as if they were installed on my own system, and apptainer really nailed it. I read that nixery.dev had to do some shenanigans to avoid the layer limit (150 or something) but I suppose SIF doesn't have to deal with it, so I am actually excited a little too haha. Thanks a lot!!
Side Note: I think that there might be better ways to run nix apps like nix-appimage but I am just trying out things because why not. Its fun.
I agree with Havoc, the message is unclear: Is Apptainer a replacement for Flatpak on the desktop, or is it targeting the server?
Server - but this is kind of the wrong question. Apptainer is for running cli applications in immutable, rootless containers. The closest tool I can think of is Fedora Toolbx [1]. Apptainer is primarily used for distributing and reusing scientific computing tools b/c it doesn't allow root, doesn't allow changes to the rootfs in each container, automatically mounts the working directory and works well with GPUs (that last point I can't personally attest to).
[1]: https://docs.fedoraproject.org/en-US/fedora-silverblue/toolb...
> Apptainer is for running cli applications in immutable, rootless containers.
What's the appeal of using this over unshare + chroot to a mounted tarball with a tmpfs union mount where needed? Saner default configuration? Saner interface to cgroups?
Usability, for one. I know how to `apptainer run ...`. I don't know how to do what you're describing, and I'd say I'm more Linux savvy than most HPC users.
Apptainer, like the vast majority of container solutions for Linux, takes advantage of Linux namespaces, which are a lot more robust and flexible than simple chroot.
In Linux (docker, podman, lxc, apptainer, etc) containers are produced by combining underlying Linux features in different ways. All of them use Linux namespaces.
https://www.redhat.com/en/blog/7-linux-namespaces
When using docker/podman/apptainer you can pick and choose when and how to use namespaces. Like I can use just the 'mount' namespace to create a unique view of file systems, but not use the 'process', 'networking', and 'user' namespaces, so that the container shares all of those things with the host OS.
For example when using podman the default is to use the networking namespace so it gets its own IP address. When you are using rootless (unprivileged) mode it will use "usermode networking" in the form of slirp4netns. This is good enough for most things, but it is limited and slow.
Well I can turn that off, so that applications running in a podman container share networking with the host OS. I do this for things like 'syncthing', so that the containerized version of it runs with the same performance as non-containered services, without requiring special permissions for setting up rootful networking (ie: macvlans or linux bridges with veth devices, etc).
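Concretely that's something like (image name just as an example):

    podman run -d --network=host docker.io/syncthing/syncthing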
By default apptainer just uses mount and user namespaces. But it can take advantage of more Linux isolation features if you want it to.
So the process ids, networking, and the rest of it is shared with the host OS.
The mount namespace is like chroot on steroids. It is relatively trivial to break out of chroot jails. In fact it can happen accidentally.
And it makes it easier to take advantage of container image formats (like apptainer's SIF or more traditional OCI containers).
This is Linux's approach, as opposed to the BSD one of BSD Jails, where the traditional limited chroot feature was enhanced to make it robust.
I'm aware of all of this, it might not be clear if you've not used it directly yourself, but unshare(1) is your shell interface to namespaces. You still need to use a chroot if you want the namespace to look like a normal system. Just try it without chrooting:
unshare --mount -- /bin/bash
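versus something like this for the chrooted version (assuming you have an extracted rootfs at /path/to/rootfs):

    unshare --mount --map-root-user -- chroot /path/to/rootfs /bin/bash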
> It is relatively trivial to break out of chroot jails. In fact it can happen accidentally.
Same is true for namespaces actually.
https://www.helpnetsecurity.com/2025/05/20/containers-namesp...
Very good, thank you. I did miss the significance of 'unshare' in your post.
Not a constructive comment, but I find the name "Apptainer" doesn't really work. Rolls funny on the tongue and feels just "wrong" to me.
In my environment, the number one reason Apptainer is used has nothing to do with deployment, isolation, or software availability: it is to work around inode limits.
On our HPC cluster, each user has a quota of inodes on the shared filesystem. This makes installing some software with lots of files problematic (like Anaconda). An Apptainer image is a single file on the filesystem though (basically squashfs) so you can have those with as many files as you want in each.
Installing the same software normally is easy and works fine though, you just exhaust your quota.
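With the image approach, a whole conda/Anaconda stack collapses into one inode on the shared filesystem (image name here just an example):

    apptainer build conda-stack.sif docker://continuumio/miniconda3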
If any developers are looking to isolate different dev projects from each other using containers, I wrote a tool for it (using podman), maybe someone finds it useful or can thrash its security.
Find the code on https://github.com/evertheylen/probox or read my blog post on https://evertheylen.eu/p/probox-intro/
Why didn't toolbox fit your needs? I found toolbox to be a very reasonable way to install development dependencies on a per project basis while not managing multiple hidden filesystems.
toolbx is not actually intended to provide any security or isolation, see e.g. https://github.com/containers/toolbox/issues/183
It would be more accurate to say that toolbx is based on Podman, but is intended to provide tight configuration with your user's outside environment.
If you want to use toolbx for more isolation you'll end up having to turn off a bunch of features and configure it in weird ways that ultimately defeat the purpose of having toolbx in the first place....
It is a lot easier to just cut out the middle man and use podman directly.
Fully agree, that's why my python script is ultimately just a simple wrapper for podman but it makes my life a lot easier anyway.
It is very handy in SLURM clusters and servers where you have no sudo.
Some friction using it though: is there a good in-depth book about it?
I used it on a SLURM cluster before. The documentation is done well and should be a good introduction. The only issue I had was that the cluster didn't have fakeroot or sudo, so I had to build the apptainer locally and transfer it to the server.
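Roughly this workflow, with names made up:

    # build on a machine where you have root (or working fakeroot)
    sudo apptainer build env.sif env.def
    scp env.sif cluster:~/
    # on the cluster: no sudo needed to run it
    apptainer exec env.sif ./run.sh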
Exactly: in most HPC clusters / servers one has no sudo.
I have yet to see a container technology that doesn't break a myriad of things.
I thought the "hardened images" were a step in the right direction. It's a pain to have to deal with vulnerabilities on ephemeral short-lived containers/instances. Having something hyper up to date is welcome.
https://www.docker.com/blog/introducing-docker-hardened-imag...
Could be, I have to see it before I believe it. This container tech is too fragile.
To some extent I understand the problem that these solutions are trying to address, I'm just not sure that simply stuffing things into containers is really the right solution.
Perhaps the problems need to be addressed on a more fundamental level.
https://dl.acm.org/doi/10.1145/3126908.3126925
This paper might help
I tried looking into this, but I do development work on Mac, which is not supported, and so it became a non starter.
See: https://apptainer.org/docs/admin/main/installation.html#mac
Just like with Docker, it spins up a Linux VM that integrates with Apptainer. You can install/use it with Lima (much like Docker).
You can also install it with `brew install lima` and then run `limactl start template://apptainer` to get a running Apptainer compatible VM running.
Hm, not sure why I missed that, or maybe I didn't miss it and for some reason decided to just go with Docker. Either way thanks for pointing it out, I'll keep this in mind.
It's not perfect... For example, I don't think there is an easy way to use Apptainer containers w/ VS Code. But it's there if you want to play with it.
I was only partially aware of it as I tend to use Colima more than Lima, but have started to move towards Lima more in general.
That said, I still stick to Docker-style containers personally as they are more widely supported (e.g. VS Code). However, I also work a lot in HPC, so migrating workflows cross-platform to Apptainer containers is a goal of mine.
Whoever made this is trying to sell something to people who don't know how containers work. The "encryptable" part is giving major snake oil vibes. Is there some clueless administrator somewhere demanding encryption, not really getting what it's for?
Why doesn’t the OS simply provide this by default? I’ve never understood that.
Process isolation should be the default. You should be able to opt out of certain parts of it as required by your application.
This should not be something you add on top of the OS, nor should it be something that configures existing OS functionality for you. Isolation should be the default.
Only MacOS does anything like this out of the box, that I’m aware of, and I’m not sure that it is granular enough for my liking as it is today. I often see apps asking for full disk access or local network access and deny them, because they don’t need those things, they maybe need a subset of it, but I can’t allow a subset of “full disk access” or “local network access” if the application is running as myself.
Very interesting .. I was recently tasked with getting a bespoke AI/ML environment ready to ship/deploy to, what can only be considered, foreign environments .. and this has proven to be quite a hassle, because, of course: python.
So I guess Apptainer is the solution to this use case - anyone had any experience with using it to bundle up an AI/ML application for redistribution? Thoughts/tips?
I did start to use them for AI development on the HPC I have access to and it worked well (GPU pass-through basically automatically, the performance seemed basically the same) - but I mostly use them because I do not want to argue with administrators anymore that it's probably time they update Cuda 11.7 (as well as python 3.6) - the only version of Cuda currently installed on the cluster.
use conda?
Ah, right. So, no matter what container comes along to solve this problem, there's still the BOFH factor to deal with ..
Curious though, how are you doing this work without admin privs?
It's a bit annoying, but you can install conda without admin privileges and apptainer was installed for compliance with some EuroHPC project and luckily made accessible to all users. The container allows me to have an environment where I have "root" access and can install software.
The most annoying thing is not the lack of privileges, but that the compute nodes do not have internet access (because "security") beside connecting to the headnode, so there is the whole song and dance of running the container (or installing conda packages) on the headnode so I can download everything I need, then saving the state and running them on the compute node.
Seems to me that you might benefit from containerizing your apptainer development environment. ;)
/ducks
Apptainer excels for AI/ML distribution because it handles GPU access and MPI parallelization natively, with better performance than Docker in HPC environments. The --fakeroot feature lets you build containers without sudo, and the SIF file format makes distribution simpler than managing Docker layers.
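As a sketch of that workflow (the def file and script names are placeholders):

    apptainer build --fakeroot model-env.sif model-env.def
    apptainer run --nv model-env.sif python infer.py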
conda
Honestly switched to systemd isolation features (chroot, ro/rw mounts, etc) and never looked back.
How do you use them to give you portability and reproducibility of builds?
Wish these sorts of projects would do a better job articulating what the value proposition is over leading existing ones.
Like why should I put time into learning this instead of rootless podman? Aside from this secret management thing it sounds like the same feature set.
From the Introduction [1]
Many container platforms are available, but Apptainer is focused on:
- Verifiable reproducibility and security, using cryptographic signatures, an immutable container image format, and in-memory decryption.
- Integration over isolation by default. Easily make use of GPUs, high speed networks, parallel filesystems on a cluster or server by default.
- Mobility of compute. The single file SIF container format is easy to transport and share.
- A simple, effective security model. You are the same user inside a container as outside, and cannot gain additional privilege on the host system by default. Read more about Security in Apptainer.
[1] https://apptainer.org/docs/user/main/introduction.html
This project is way older than (rootless) podman.
But aren't their premises just the same though? I wonder how different "learning" apptainer is compared to "learning" podman, given that, at least with podman-compose and similar tools, podman really is equivalent to docker in a lot of scenarios, with mostly a 1:1 mapping.
You should put time into learning this if you are going to be running HPC jobs on clusters, because some HPC clusters support this for jobs and not much else.
So is this popular in science or data analysis / forecasting or something like that?
I'm not familiar with it (I don't know if it changed names or just didn't notice)
I don't think people use it for reproducible environments on their own machines the same way Docker is sometimes used, I've mostly encountered it in academic compute clusters as a way to install the libraries/languages you're using onto the cluster in an easy to remove way. So it's popular in HPC clusters specifically and not really tied to a field of research beyond that.
Used to be called “Singularity”
Argh, yet another way to distribute userland images. AppImage does it right by including the run-time with the image itself - no prior installation needed.
More nix less containers, btw.
E.g. docker run -ti nixery.dev/shell/cowsay bash for on-the-fly containers based on Nix.
fewer containers, surely :)
Of course, setting aside the taking-shots-at-a-project-just-for-existing thing...
I actually really like the nixery.dev idea. Sounds kinda neat.
If I am being really honest, there are a lot of ways to go about this tbh; there are ways to run nix inside of docker and docker inside of nix too.
There are ways to convert docker images into an OS too, and there are tools like coreos.
There is nix-shell and someone on hackernews told me about comma and I am still figuring out comma (haha! Thanks to them!)
And if one just wants isolation, they can use bubblewrap or (pledge by jart) and I guess there is complete beauty and art in such container-esque space and I truly love this space a lot.
I am actually wondering right now whether traefik (as load balancer) + nats (for a modular monolith) + podman/coreos + (cloudflare tunnels?) + any vps -- and you can use nix to build those containers too, or go the other way around with nixos on the vps plus traefik + nats -- could be a really good alternative to kubernetes.
I mean, there is docker swarm too if you don't want any of that complexity, but people say it's less worked on. Still, I guess there is a sort of fun in reinventing the wheel of kubernetes. But I guess I don't have too many problems with kubernetes, I suppose, because of the existence of helm charts (I haven't used kubernetes). Helm charts are written in go templates and I think they are a bit clunky, but I still love golang and feel like I would be okay with writing helm charts. I guess I am one of the people who just believes in scaling horizontally first rather than vertically, until the economics break down and it's cheaper to use kubernetes / learn it than not.
Appimage is terrible. It works by trying to make applications in appimages adhere to a lowest common denominator between Linux distros... which amounts to forcing application developers to develop for the oldest version of Linux appimage supports.
Nix is a huge pain to deal with.
Nix makes me think of the old Zawinski joke of:
"Some people, when confronted with a problem, think 'I know, I'll use regular expressions.' Now they have two problems,"
Except there are fewer upsides to using Nix over something like OCI.