SmartOS (docs.smartos.org) - submitted by ofrzeta a day ago
  • re-lre-l a day ago

    I am a huge fan of SmartOS. Back in the 2010s (around 2012), I was advocating its use in production at a small startup where I worked. The SunOS kernel, ZFS, zero install, an immutable core, a convenient way to manage containers and VMs together - all of it looked great on paper, especially containers.

    In reality, I ended up running almost everything in VMs. The only thing that worked well natively was nginx. MongoDB, MySQL, even our PHP backend (some libraries) had issues, unfortunately.

    A year ago, I considered SmartOS again as a home lab driver, and again no success; Linux just has better support: drivers, PCI passthrough, etc. - and now containers + VMs through Proxmox or anything else. You can even run k8s + KubeVirt with ZFS practically out of the box, as complete overkill though.

    • fridder a day ago

      Not sure if you have given FreeBSD a chance yet; it has an in-progress jail/VM frontend: https://github.com/AlchemillaHQ/Sylve

      • rtaylorgarlock a day ago

        Ah, very cool. Thanks for sharing; will try it out

      • abrookewood 17 hours ago

        You can get some of that with IncusOS (https://linuxcontainers.org/incus-os/introduction/), which includes ZFS, immutability and manages both containers and VMs. I haven't used the OS yet, but have been enjoying Incus + Ubuntu.

        • rufugee 16 hours ago

          Using incus heavily on Omarchy here and love it. I created a script to read yaml configs and create ephemeral incus containers with certain capabilities and certain directories mounted within. It's a wonderful experience for sandboxing Claude Code.
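
          For anyone curious, a rough sketch of the idea (not the actual script; the YAML schema, image, and paths below are made up for illustration, and it assumes the incus CLI and PyYAML are installed):

```python
#!/usr/bin/env python3
"""Launch an ephemeral Incus container from a small YAML config.

Sketch only: the config keys (name/image/mounts) are illustrative,
not the schema of the script described above.
"""
import subprocess
import yaml

def run(*args):
    # Thin wrapper so a failed incus command stops the script immediately.
    subprocess.run(args, check=True)

def launch(config_path):
    with open(config_path) as f:
        cfg = yaml.safe_load(f)

    # --ephemeral instances are deleted when stopped, which is what
    # makes this pleasant for throwaway sandboxes.
    run("incus", "launch", cfg["image"], cfg["name"], "--ephemeral")

    # Bind-mount selected host directories into the container.
    for i, mount in enumerate(cfg.get("mounts", [])):
        run("incus", "config", "device", "add", cfg["name"], f"mount{i}",
            "disk", f"source={mount['source']}", f"path={mount['path']}")

if __name__ == "__main__":
    launch("sandbox.yaml")
```

          With a sandbox.yaml along the lines of name: claude, image: images:debian/12, mounts: [{source: /home/me/project, path: /work}], `incus exec claude -- bash` then gets you a shell, and stopping the instance deletes it.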

          • abrookewood 15 hours ago

            Interesting idea. How short lived are the containers?

        • gr4vityWall 20 hours ago

          These days, you're indeed better off using Illumos/SmartOS to run GNU/Linux zones/VMs, rather than native applications, from what I hear.

          • gigatexal 19 hours ago

            If you’re just going to run things in VMs then one might as well use QubesOS

          • Zaskoda a day ago

            So many PHP libraries are just wrappers for some other library. I think that's mostly a strength, but in this case it was clearly a weakness.

          • nwilkens a day ago

            SmartOS is the core operating system for Triton DataCenter -- https://www.tritondatacenter.com -- Triton is the orchestration layer for SmartOS compute nodes.

            Code + issues are active under https://github.com/TritonDataCenter (smartos-live, illumos-joyent, triton, etc.), and docs are at https://docs.smartos.org/.

            SmartOS is released every two weeks, and Triton is released every 8 weeks -- see https://www.tritondatacenter.com/downloads

            And Triton object storage will have S3 support in the next release!

            [edit: removed semicolon from link!]

          • nZac a day ago

            > SmartOS is a "live OS", it is always booted via PXE, ISO, or USB Key and runs entirely from memory, allowing the local disks to be used entirely for hosting virtual machines without wasting disks for the root OS.

            Does anyone know if something like this is possible with Proxmox? I've got three servers I'm thinking of setting up as a small cluster and would like to boot them from a single image instead of manually setting up PVE on each. Ansible or Salt is an option, but that tends to degrade over time.

            • ktm5j 21 hours ago

              It's close, but there are some missing pieces, I think. The way it manages storage pools would fit your use case: if you import a zpool, for example, it will scan the datasets and can figure out which zvols should be attached to which VMs.

              But there's also VM config info under `/etc/pve` or something similar. I'm pretty sure that's some kind of FUSE filesystem that's supposed to be synchronized between cluster members; you might be able to host that externally somehow, but that'll probably take some effort.

              You'll also need to figure out how to configure `/etc/network/interfaces` on boot for your network config. But that's doable.
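
              For the interfaces piece, a minimal sketch of what generating it at boot could look like (everything here - the NIC name eno1, the per-host address map, the subnet - is a placeholder; Proxmox normally expects a vmbr0 bridge):

```python
#!/usr/bin/env python3
"""Render /etc/network/interfaces at boot for a stateless Proxmox-like host.

Sketch only: the hostname-to-address map, NIC name (eno1), and subnet are
placeholders. Something would need to run this from an early boot unit,
before networking comes up.
"""
import socket

# Hypothetical per-node static addresses, keyed on hostname.
ADDRESSES = {
    "pve1": "192.168.1.11/24",
    "pve2": "192.168.1.12/24",
    "pve3": "192.168.1.13/24",
}

TEMPLATE = """auto lo
iface lo inet loopback

auto eno1
iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
    address {address}
    gateway 192.168.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
"""

def main():
    address = ADDRESSES[socket.gethostname()]
    with open("/etc/network/interfaces", "w") as f:
        f.write(TEMPLATE.format(address=address))

if __name__ == "__main__":
    main()
```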

              Would be pretty neat.

              • jeffbee a day ago

                It depends on what "this" you meant, but in general the ways of netbooting an OS are many and varied. You'd have to declare what kind of root device you ultimately want, such as root on iSCSI.

                Personally, I feel that "smartOS does not support booting from a local block device like a normal, sane operating system" might be a drawback and is a peculiar thing to brag about.

                • cyberpunk a day ago

                  There was a brilliant incident back in the Joyent days where they accidentally rebooted an entire datacenter and ended up DoSing their DHCP server ;)

                  • ptribble 19 hours ago

                    SmartOS can, of course, boot from a local zfs pool, but it treats it logically as just another source for the bootable image. See the piadm(8) command.

                    • nZac a day ago

                      What I'm looking to achieve is three identical Proxmox host boxes. As soon as you finish the install you now have three snowflakes, no matter how hard you try.

                      In the case of SmartOS (which I've never used) it would seem like that is achieved by design, because the USB isn't changing. Reboot and you are back to a clean slate.

                      Isn't this how game arcades boot machines? They all netboot from a single image for the game you have selected? That seems to be what SmartOS is doing, but maybe I'm missing the point.

                      • ekropotin 21 hours ago

                        It doesn't look like it's achievable with vanilla Proxmox.

                        I think if you really, really want declarative host machines, you'd need to ditch Proxmox in favor of Incus on top of NixOS.

                        There is also https://github.com/SaumonNet/proxmox-nixos, but it's pretty new and therefore full of rough edges.

                    • xenophonf a day ago

                      You can boot ProxMox VMs via PXE:

                      https://blog.kail.io/pxe-booting-on-proxmox.html

                      But why bother? A read-only disk image would be simpler.

                      • ktm5j 21 hours ago

                        Pretty sure they want to boot the hypervisor itself via PXE, not the VMs.

                      • oooyay 21 hours ago

                        Kind of. You can run Talos on Proxmox so I don't see why you couldn't run this, but frankly I'd just install Talos or SmartOS on the metal like god intended.

                      • QuantumNomad_ a day ago

                        I remember several years ago, SmartOS was being mentioned many times on HN.

                        Joyent, the company behind SmartOS, was since acquired, and I don’t usually see anyone talking about SmartOS nowadays.

                        Is anyone on HN using SmartOS these days?

                        • DvdGiessen 21 hours ago

                          Running a number of production services on-premise on a big machine using native zones, a few using LX zones (the built-in Linux compatibility layer), and a single bhyve zone. Actually, years ago this machine was the very first server we set up when our company was just getting started, and for the first few years it ran pretty much everything. Zones were ideal for that, also to allow us to pack more services on less hardware while having decent separation, with everything snapshotted/backed up using ZFS. Nowadays we have a bunch more servers, with varying *nix operating systems (SmartOS, Debian, FreeBSD), as well as macOS and even Windows for some specific CI functions. (:

                          The global zone works great as a hypervisor if you prefer working over SSH in a real shell, and being able to run a lot of services natively just makes things like memory allocation to VMs and having a bird's-eye view of performance easier. Being able to CoW cp/mv files between zones, because it's actually the same filesystem, makes certain operations much easier than with actual VMs. Bhyve works well for the things that need an actual Linux kernel or other OS, at the cost of losing some of the zone benefits mentioned earlier.
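
                          For anyone who hasn't seen it, provisioning from the global zone is driven by JSON manifests fed to vmadm(8); a rough sketch of creating an LX zone (the image UUID, alias, sizes, and addresses below are placeholders, not from this setup, and the image would first need an `imgadm import`):

```python
#!/usr/bin/env python3
"""Create an LX-branded zone via vmadm from the SmartOS global zone.

Sketch only: image_uuid, alias, NIC settings, and sizes are placeholders,
not taken from the setup described above.
"""
import json
import subprocess

manifest = {
    "brand": "lx",                     # "joyent" for a native zone, "bhyve" for a VM
    "alias": "example-lx",
    "image_uuid": "00000000-0000-0000-0000-000000000000",  # placeholder; imgadm import first
    "kernel_version": "4.3.0",         # kernel version advertised to the Linux userland
    "max_physical_memory": 1024,       # MiB
    "quota": 20,                       # GiB of disk
    "nics": [{
        "nic_tag": "admin",
        "ip": "192.168.1.50",
        "netmask": "255.255.255.0",
        "gateway": "192.168.1.1",
    }],
}

# vmadm reads the manifest on stdin and prints the new zone's UUID on success.
subprocess.run(["vmadm", "create"], input=json.dumps(manifest).encode(), check=True)
```

                          After that, `vmadm list` shows it and `zlogin <uuid>` drops you straight into the zone.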

                          Highlighting a few things we run on SmartOS today, grouped by their technology stacks: C (haproxy, nginx, PostgreSQL, MariaDB), PHP (various web apps), Java (Keycloak), Elixir/Phoenix (Plausible, fork of Firezone), Rust (rathole, some internal glue services), Go (Grafana, Consul, Prometheus). Most of those are readily available in the package manager, and a few offer native Solaris binaries which run fine on illumos. For the others we do local builds in a utility zone before copying the binary package to the zone where it actually runs.

                          On LX zones we also run a number of services without problems, usually because they have Debian packaging available but are not in pkgsrc (for example Consul/Nomad, Fabio, and some internal things that were already Linux-specific and we haven't bothered to port yet).

                          And at home a LX zone also runs Jellyfin just fine. (:

                          • cyberpunk 20 hours ago

                            I used it for smaller scale (low 10s of physical servers) back in the day also. But my problems with it started when I needed a lot more; the sysadmin/devops/whatever story doesn't scale.

                            Yes, Ansible exists, but it's actually quite hard to run Ansible on a few hundred machines -- you need lots of RAM just to run the playbook -- and beyond your first hundred or so separate deployments you do need to reach for something like Kubernetes.

                            As for LX, why emulate Linux when it's... right there? The Linux kernel is not a lot of overhead versus having to justify emulating the Linux ABI on an OS the industry has largely abandoned.

                          • samtheDamned 21 hours ago

                            I use it. It's not the most practical I'll admit but I like the simplicity (I'm not super experienced with linux server administration so both of them feel similarly foreign, but SmartOS is pretty minimal and has been pretty straightforward to manage), and I won't lie it's a fun gimmick to be running a descendant of SunOS for random household services.

                            • cthalupa a day ago

                              Home stuff was the last holdout for me, but even that has been replaced by Proxmox these days. I used SmartOS for a solid 7-8 years, though, and liked it for most of that time.

                              I couldn't point to any one single major reason that prompted the switch - just lots of small annoyances stemming from the world expecting you to be running Linux instead of Solaris, and once you move away from zones, you lose one of the most compelling reasons for being on SmartOS

                              • mirashii a day ago

                                Oxide Computer Company certainly still makes some use of illumos, which is strongly related to SmartOS

                                • mbreese a day ago

                                  But isn't the end goal with Oxide to run primarily Linux(/Windows?) VMs on an Illumos host?

                                  Are there any workloads (other than as a VM host) that run on SunOS derived OSes?

                                  • panick21_ a day ago

                                    The whole cloud orchestration platform and everything you need for that.

                                    But that is the same for most server images nowadays.

                                    What is important is that Oxide upstreams all their work, so 'traditional' users should benefit from it too.

                                • irusensei 21 hours ago

                                  I think the big hit on Solaris and FreeBSD popularity since then was OpenZFS adopting Linux as their main reference operating system.

                                  • EvanAnderson a day ago

                                    I have a personal box I keep updated running some utility zones and a couple VMs. I enjoy the tooling very much but it's so niche that I'm wary of using it for Customer engagements.

                                    I never used Solaris in my real life but I can understand the appeal for people who did.

                                    • rjzzleep a day ago

                                      It was acquired by Samsung, which is notoriously bad at open source. But the reason why it quietly faded into the background wasn't that. It was that Joyent's ex Sun people had an annoying elitism that made them not care about working with the community.

                                    • eduction 20 hours ago

                                      I use it for a home server. Zones provide a secure way to have, on one physical machine with one physical network interface, some stuff you can only get to on the local network, some things you can get to over the public internet, and some things reachable via the internet only if you have the right SSH key - each natively contained from the others. The Crossbow firewall provides a nice way to contain traffic securely as well. ZFS let me set up two big external USB drives as a RAID array; the resulting pool can (IIRC) hold multiple filesystems for use by multiple zones, although I only use it from one right now, for the LAN-only zone. That zone shares via SMB to my network so I can use it for backups and media streaming.
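
                                      For reference, the ZFS/SMB part of that is only a handful of commands on illumos; a rough sketch (the device, pool, and dataset names are placeholders, and joining a workgroup plus enabling SMB passwords via PAM is left out):

```python
#!/usr/bin/env python3
"""Mirror two external disks into a pool and share a dataset over SMB (illumos).

Sketch only: the device names (c2t0d0/c3t0d0), pool, and dataset names are
placeholders; workgroup join and SMB password setup are omitted.
"""
import subprocess

def run(*args):
    subprocess.run(args, check=True)

# Two-disk ZFS mirror on the external USB drives.
run("zpool", "create", "tank", "mirror", "c2t0d0", "c3t0d0")

# One filesystem per purpose; more can be carved out of the same pool later.
run("zfs", "create", "tank/media")

# illumos ships an in-kernel SMB server; sharing is just a dataset property.
run("svcadm", "enable", "-r", "smb/server")
run("zfs", "set", "sharesmb=on", "tank/media")
```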

                                      I’ve been able to do almost everything in native zones. I had a bhyve zone set up to run a photo related GitHub code base that really needed Linux.

                                      SMF is a joy to use for services, and package management with pkgsrc is great. The whole thing just feels very thoughtfully put together.

                                      You can probably achieve all this on Linux with Docker and the right iptables (or whatever succeeded it) config, I imagine? But on SmartOS I am using facilities that are deeply integrated into the OS, going back like 20 years now. I also just prefer the old Sun stuff.

                                    • bluesounddirect 14 hours ago

                                      I ran a fairly large SmartOS deployment with project-fifo.net as the controller for a number of years. We also had an amusing time with Pluribus Networks switching too; when they started they were also an illumos-based switch OS. Bhyve on illumos came out of Pluribus, and happened partly due to a number of FreeBSD users heckling the "SUN GODS" over why the official KVM port was a piece of shit.

                                      In any case, in the 6 years of SmartOS we never had any data loss from failed disks. Sure, FiFo and SmartOS had their warts, but LX zones work amazingly well, and I think we got Garrett D'Amore to go back to BSD land for some time. In the end we had to jump to VMware when Heinz gave up on FiFo.

                                      snarl/howl/chunter

                                      https://project-fifo.net/ https://www.arista.com/en/support/pluribus-resources

                                      • tombert a day ago

                                        Genuine question: in 2026, what does SmartOS (or any other Illumos/Solaris OS) buy you over something like Linux or FreeBSD?

                                        • __tyler__ 21 hours ago

                                          Oxide uses Helios, their own illumos-based distro: https://github.com/oxidecomputer/helios

                                          They’ve written up their reasoning in this RFD: https://rfd.shared.oxide.computer/rfd/0026#_comparison_illum...

                                          • stonogo 21 hours ago

                                            After reading that document twice, I'm still not sure why they based it on Illumos. I strongly suspect it's just down to the personal preference of the founders, which is a perfectly valid reason. This document very much reads like "here are the pieces we will use, let's work our way back to why"

                                            • ironhaven 18 hours ago

                                              The reasoning can be simplified to two things. 1. Linux does not have the bhyve hypervisor ported 2. Maintaining a Linux distribution will require more effort and have more churn than illumos.

                                              Because Linux is just a kernel and users have to provide all of their own user space and system services there is a lot of opportunity for churn. Illumos is a traditional operating system that goes from the kernel to the systemd layer. Illumos is also very stable at this point so most of the churn is managed up front

                                               The choice is between porting a handful of apps to illumos or jumping onto the Debian treadmill while pioneering a new-to-Linux hypervisor. Would Linux have enabled a faster development cycle, or just an easier MVP?

                                              • stonogo 17 hours ago

                                                There's no churn in a graveyard, either. Debian's not much of a treadmill on stable; it's famous for it.

                                                 The justifications for bhyve over KVM are similarly inscrutable; you can simply not build the code you don't want. Nobody's forcing you to use shadow paging. Comments like "reportedly iffy on AMD" are bizarre. What does "iffy" mean? This wasn't worth testing? Why should I, a potential customer, believe that these people are better at this than the established companies who have been producing nearly-identical products for twenty years? At the level of development they're discussing, why bother using an x86_64 processor from a manufacturer who does not bother to push code into the kernel you've chosen?

                                                Again, it's their company, and if they (as I suspect) chose these tools because they're familiar, that's a totally supportable position. I just can't understand why we get handwaving and assurances instead of any meat.

                                                • bcantrill 16 hours ago

                                                   You may disagree with our rationale, but it is absolutely absurd to complain that RFD 26[0] does not have "any meat." This is in fact dense technical content (10,000+ words!), for which I would expect a thorough read to take on the order of an hour. Not that I think you read it thoroughly: you skimmed parts, perhaps -- but certainly glossed over aspects that are assuredly not your domain of expertise (or, to be fair, of interest to you): postmortem debuggability, service management, fault management, etc. These things don't matter to you, but they matter to us quite a bit -- and they are absolutely meaty topics.

                                                  Now, in your defense, an update on RFD 26 is likely merited: the document itself is five years old, and in the interim we built the whole thing, shipped to customers, are supporting it, etc. In short, we have learned a lot and it merits elucidating it. Of course, given the non-attention you gave to the document, it's unlikely you would read any update either, so let me give you the tl;dr: in addition to the motivation outlined in RFD 26, there are quite a few reasons -- meaty ones! -- that we didn't anticipate that give us even greater resolve in the decision that we made.

                                                  [0] https://rfd.shared.oxide.computer/rfd/0026

                                                  • stonogo 15 hours ago

                                                    I did indeed read your document (twice, as I explicitly reported). I didn't address those parts because I found them better-supported. Instead, I addressed the parts I found confusing, and since your rebuttal here is just whining about what you think my behavior is, I continue to be mystified. That's okay; nobody expects you to explain yourself to me. If I thought it would help, I would suggest that perhaps a more effective defense would involve answering literally any of the questions I already asked. However, I don't appreciate accusations of bad faith based on your unwarranted assumptions about what I did or did not do and, bizarrely, what you imagine my motivations are. I'll just assume that the answers to the "why" questions I asked are rooted in similar wild-ass speculation.

                                                    • simeonmiteff 8 hours ago

                                                      There is a reasonable explanation for the "foregone conclusion" flavour of the RFD that doesn't cast aspersions (quite as much as you are) on the authors:

                                                      It is simultaneously an assertion of the culturally determined preferences of a group of people steeped in Sun Microsystems engineering culture (and Joyent trauma?), and a clinical assessment of the technology. The key is that technology options are evaluated against values of that culture (hence the outcome seems predictable).

                                                      For example, if you value safety over performance, you'll prioritise the safety of the DTrace interpreter over "performance at all costs" JIT of eBPF. This and many other value judgements form the "meat" of the document.

                                                      The ultimate judge is the market. Does open firmware written in Rust result in higher CSAT? This is one of the many bets Oxide is making.

                                                      Frankly, I don't think Oxide would capture so much interest among technical folks if it was just the combination of bcantrill fandom + radically open engineering. The constant stream of non-conformist/NIH technology bets is why everyone is gripping their popcorn. I get to shout "Ooooooh, nooo! Tofino is a mistake!" into my podcast app, while I'm feeding the dog, and that makes my life just a little bit richer.

                                                      • benmmurphy 5 hours ago

                                                         I'm not sure the DTrace interpreter was safer than eBPF. I guess in theory it should be, because a JIT is just extra surface area, but I'm not sure in practice. Both eBPF and DTrace had bugs. Also, I always thought the eBPF JIT was just a translation to machine code and didn't do any kind of optimization pass, so it should be very similar to how DTrace works. They both ship bytecode to the kernel. But I guess the big difference is that eBPF relies more on a verification pass, while I think most of DTrace's safety verification was performed while executing the bytecode. I remember there was a lot of stuff in eBPF where the verifier was meant to be able to statically determine you were only accessing memory you were allowed to. I think there were a lot of bugs around this because the verifier would assume slightly different behaviour than what the runtime was producing. But this is also not necessarily a JIT problem; you could have an interpreter that relied on a static safety pass as well.

                                                      • linksnapzz 13 hours ago

                                                        ...but your top post didn't ask any questions; certainly not ones that would justify a detailed answer.

                                                        It was several assertions, plus your admission of confusion. I mean, there are no stupid questions, but there wasn't even a question there, so I don't blame anyone for thinking you're communicating poorly.

                                            • linksnapzz a day ago

                                              I prefer SMF to systemd, mdb is pretty nice, and Real Actual RBAC, and in-kernel smb service.

                                          • EvanAnderson 21 hours ago

                                            SmartOS has grown a web UI in the last couple years, too. I haven't gotten around to trying it out on my last remaining SmartOS homelab box. I enjoy the CLI tooling very much. For some, though, the web UI might be worthwhile: https://docs.smartos.org/web-interface/

                                            • ofrzeta a day ago

                                              I was intrigued by the idea that in the Manta object store you could schedule computations on the storage nodes. However I am not sure how much improvement that brings in practice. Any practical experience with this?

                                              https://apidocs.tritondatacenter.com/manta/index.html

                                              • cmdrk a day ago

                                                bcantrill gave a great talk many years ago about compute-data locality. would be nice to know if those ideas panned out for some customers, but it seems the world has by-and-large continued to schlep data back and forth.

                                                it's too bad too. The concepts behind Manta were such a great idea. I still want tools that combine traditional unix pipes with services that can map-reduce over a big farm of hyperconverged compute/storage. I'm somewhat surprised that the kubernetes/cncf-adjacent world didn't reinvent it.

                                                • cyberpunk a day ago

                                                  I did use it on a project; it was meh, alright? In the end the main cost of our processing wasn’t storage latency but code, and this quite arcane scheduler was one barrier too many for most of our team.

                                                  I believe it was removed shortly after I left the project.

                                                • jama211 a day ago

                                                  I’m confused by the wording “without wasting disks for the base OS” - I wouldn’t normally consider this a “waste”, would anyone else? There are big downsides to running off of a USB key all the time unless I’m missing something

                                                  • zja a day ago

                                                    > This architecture has a variety of advantages including increased security, no need for patching, fast upgrades and recovery.

                                                    SmartOS was developed by Joyent for their cloud computing product; its primary use case isn't desktop computing. I think the advantages mentioned above were probably a bigger factor than the disk space. I would also guess that PXE would be the standard way to boot in a datacenter, not USB.

                                                  • fennec-posix 18 hours ago

                                                    Glad to see SunOS/Solaris still alive in some form in the Open Source space. Had heard of Illumos and OpenIndiana, but didn't know about SmartOS. It's definitely won me over with the NetBSD pkgsrc package manager... Very nice.

                                                    • keeganpoppen a day ago

                                                      Wait, this seems totally awesome? I hadn't remembered until reading the comments that this was a Joyent thing, and that somehow it has largely disappeared despite seeming like an awesome way to do all sorts of things.

                                                      • gr4vityWall 20 hours ago

                                                        I know someone who runs SmartOS for their home server. They only had good things to say about it. It's been working well for a few years now.

                                                        • sneak a day ago

                                                          So, Solaris > OpenSolaris > Illumos > SmartOS? Do I have that right?

                                                          • linolevan a day ago

                                                            I believe SmartOS is a distro of Illumos (in the same way that debian is a distro of linux).

                                                            • 0x457 21 hours ago

                                                              OpenSolaris was an official OSS version of Solaris which was essentially Solaris, but developed in the open by Sun.

                                                              Illumos started as "remove all the closed-source bits and replace them with OSS"; after Oracle closed down OpenSolaris, Illumos became a full-on fork - Solaris-like rather than another version of Solaris.

                                                              From there, multiple distros were born (because Illumos didn't want to be a distro itself), notably OpenIndiana and SmartOS: OpenIndiana is a general-purpose distro of Illumos, while SmartOS went with something like "an OS for HCI datacenters".

                                                              So it's Solaris > OpenSolaris > Illumos.

                                                            • nicolasjungers a day ago

                                                              An opportunity for a SmartOS successor is IncusOS.

                                                              • ekropotin 21 hours ago

                                                                Curious why the separate OS should exist. NixOS + Incus sounds like a good solution for declarative provisioning of hypervisor machines.

                                                                • abrookewood 17 hours ago

                                                                  Not really ... "IncusOS is built on top of Debian 13". It has some similarities (immutable, ZFS, containers & VMs etc), but it isn't a derivative of Solaris.

                                                                • calvinmorrison a day ago

                                                                  Great product. Sadly dead! Bought up by Samsung, and now the braintrust has left to go work at a much cooler place

                                                                  • ptribble 19 hours ago

                                                                    Not dead, still going strong under new ownership (MNX).

                                                                      • gertrunde 19 hours ago

                                                                        Nice to hear it's not dead, as the website and github repos do give that impression.

                                                                        I'll have to give it a spin.

                                                                        • gr4vityWall 3 hours ago

                                                                          Their Github repos seem fairly active, from a quick look: https://github.com/TritonDataCenter

                                                                          Their website is indeed out of date. Reminds me of Haxe in that aspect. The language itself is receiving significant development, but the website looks abandoned, and no new blog posts have been posted in a while.

                                                                    • fsflover a day ago

                                                                      See also: Qubes OS, which is a desktop OS based on virtualization, https://qubes-os.org

                                                                      • znpy a day ago

                                                                        This looks OT?

                                                                        judging by https://doc.qubes-os.org/en/latest/_images/qubes-trust-level... it looks very linux-centered.

                                                                        • eduction 19 hours ago

                                                                          No it's a very similar idea, just on workstations instead of servers. Qubes is built around "do everything in VMs" as SmartOS is "do everything in zones."

                                                                          It's just a usage detail that Qubes may have a slightly higher percentage of Linux guests than SmartOS - at this point, in terms of usage, both are probably mostly running Linux guests anyway. (Qubes can also do Windows VMs, and they amped up support for this in the latest release, while SmartOS has native zones, and I believe you can do FreeBSD and maybe others on bhyve.)

                                                                          Differences are many, including that Qubes has no concept of a "native" VM (dom0 is just a thin fedora wrapper around Xen) and that the global zone in SmartOS is significantly beefier than dom0 in Qubes, since Qubes offloads networking and usb io and bluetooth and sound to independent service qubes (VMs). And their development has been entirely separate. But they are spiritually siblings. I think it's an inspired comparison.