Yocto can be incredibly simple; this is my favorite example: https://github.com/bootlin/simplest-yocto-setup/
Only the kernel and bootloader usually need to be specialized for most modern arm boards: the userland can be generic. Most of the problems people have with yocto are due to layers from hardware vendors which contain a lot of unnecessary cruft.
Yocto can appear incredibly simple.
Until something somewhere deep inside the build process breaks, or you need to enable a peripheral the default device-tree for your board doesn't enable, or a gnat farts on the other side of the world, and it completely stops working.
The more your hardware vendors work upstream, the more Yocto will simplify your life.
If you buy hardware from a vendor who hands you a "meta-bigco" layer with their own fork of u-boot and the kernel, you're gonna have a bad time...
I spent at least a week trying to understand Yocto, and started reading a book. I couldn't wrap my head around it, so I just went back to the RPi OS image builder scripts.
Which book was that? The bootlin course slides are pretty good
Last time I tried Yocto, some people here on HN suggested that I try Buildroot instead.
I don’t see so many mentions of Buildroot in this thread yet.
If you are interested in Yocto it might be worth having a look at Buildroot as well. I liked it a lot when I tried it.
My thread from years ago, where people told me about Buildroot:
https://news.ycombinator.com/item?id=18083506
The website of Buildroot: https://buildroot.org/
My experience with buildroot is that it's really slow to compile, because it doesn't compile packages in parallel: you only get the parallelism of an individual package's build system, with sequential stuff in between. And you end up recompiling from source a whole lot, because it doesn't do dependency tracking between packages; if you change a library, you either have to remember to manually recompile the whole chain of dependents, or do a clean build. Yocto, on the other hand, compiles packages in parallel and tracks which packages need to be recompiled due to a changed recipe or config file.
Buildroot has package-parallel builds when using BR2_PER_PACKAGE_DIRECTORIES (see https://buildroot.org/downloads/manual/manual.html#top-level...). It's for some reason still marked as experimental in the docs but it has been solid for me for many years.
The lack of dependency tracking isn't great, but other than working around it like you described, just using ccache has worked pretty well for me. My Buildroot images at work do full recompiles in under 10 minutes that way.
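For reference, the relevant knobs look roughly like this (the ccache path shown is Buildroot's default):

    # .config / defconfig fragment
    BR2_PER_PACKAGE_DIRECTORIES=y                # per-package dirs, enables top-level parallelism
    BR2_CCACHE=y                                 # wrap the compiler in ccache
    BR2_CCACHE_DIR="$(HOME)/.buildroot-ccache"   # default cache location

    # then build with a top-level parallel make:
    make -j$(nproc)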
Meanwhile, the Yocto projects I've worked on had a ton of chaff that caused partial rebuilds with trivial changes to take longer than that. This probably isn't an inherent Yocto/BitBake thing, but the majority of Yocto projects out there seem to take a very kitchen-sink approach, so it's what you'll end up having to deal with in practice.
There’s also SkiffOS (https://github.com/skiffos/SkiffOS).
It’s a project that uses buildroot to create a small Linux for a specific device that’s only used to start a container.
I’ve wanted to try it sometime, after getting headaches with both Buildroot and Yocto. In particular, adding more libraries tends to break things.
That doesn't sound very performant.
Why not? The only overhead I can see is some storage and memory overhead due to duplicate libraries, and some possible small startup time penalty? Containers are just normal Linux processes after all, it's not like there's a VM involved
I think, in a lot of cases, the choice between Buildroot and Yocto comes down to "which one does the SoC vendor support."
I think that's fair, but it does depend on what you want the relationship with your SoC vendor and the Yocto community to be. A lot of SoCs have pretty good community support in Yocto (and probably Buildroot), and using a community-maintained BSP meta layer will make things easier for you in some ways. SoC vendors aren't always great at following Yocto best practices. Plus, unless you have excellent support contracts with your vendor and are prepared to use it, you'll probably go to the Yocto community for support with weird Yocto issues you run into; and Yocto developers are (understandably) much more helpful if you say you use mainline Linux with a BSP maintained by the Yocto project than if you use a vendor's kernel fork with a BSP maintained by the SoC vendor.
Yocto is synonymous with low-end IoT these days, and causes more problems than it solves in the long-term for many folks.
Also, bootstrapping your own application launcher shell on a raw kernel is usually not a difficult task (depending on vendor firmware). Some folks just drop in a full Lua environment, for an OS that fits in an ISO under 2.7MB even with a modern kernel.
Nir Lichtman posted a tutorial for mere mortals here:
https://www.youtube.com/watch?v=u2Juz5sQyYQ
Highly recommended exercise for students =3
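For anyone curious how small that bring-up really is, here's a sketch (mine, not from the video) of a minimal PID 1 in C; boot a kernel with init=/init pointing at a static build of it and you have a running userspace:

    /* init.c -- build with: gcc -static -o init init.c */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        printf("hello from PID 1\n");
        /* PID 1 must never exit, or the kernel panics */
        for (;;)
            pause();
    }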
buildroot to bringup, yocto to ship
Q: How do you guys centrally update field devices?
I am working on professionalizing our IoT setup, which currently consists of a few dozen Raspberry Pis that run Docker containers. They are individually updated by sshing into them and running apt update manually. Docker containers are deployed with a commercial solution. I want a centralized way to update the OSes, but it does not really make sense for our small team to build up Yocto knowledge, because that would make us fall even further behind the development schedule. Also, the hardware needs are just too boring to justify rolling our own OS. I have not yet found a hardware-independent Linux distro that can be reliably updated in an IoT context.
I am now looking at whether we can buy ourselves out of this problem. Ubuntu Core goes in the right direction, but we don't want to make ourselves dependent on the Snap store. Advantech has a solution for central device management with OTA updates; maybe we are going that route.
How do you guys update field devices centrally? Thanks!
> have not yet found a hardware independent Linux distro that can be reliably updated in an IOT context
I'm part of the team that builds an immutable distro based on OSTree (https://www.torizon.io) that does exactly that.
Docker/Podman support is first-class, as the distro is just a binary, Yocto-based one that we maintain so users don't have to. You can try our cloud for free with the "maker" tier. To update a device you just drop a compose file in the web UI, and you can update a whole fleet at once. You can even use hardware acceleration from the containers using our reference OCI images.
The layer is open (https://github.com/torizon/meta-toradex-torizon) and will get Raspberry Pi support soon, but you can already integrate it easily with meta-raspberrypi (we can also do this for you very quickly ;-)).
Happy to answer any questions.
I would suggest taking a look at bootc (https://github.com/containers/bootc), which enables you to use OCI/Docker containers as a transport and delivery system for OS updates. That makes much of the tooling used to build and deliver container images available for delivering OS updates.
Such possibilities include the various registries available for storing OS updates and branches; tooling for security scanning, SBOM generation, and signing; and Docker or Podman for building the image.
It's important to note that the container image itself is not executed upon boot, but rather unpacked beforehand.
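A sketch of what that looks like in practice (base image and package are just examples):

    # Containerfile: the OS update is an ordinary container build
    FROM quay.io/fedora/fedora-bootc:40
    RUN dnf -y install htop && dnf clean all

    # push it to any registry, then on the device:
    #   bootc upgrade    # pulls the new image and stages it for next boot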
I've used RAUC (https://rauc.io/) professionally for a couple of projects and am happy with it. There's a RAUC meta layer which provides bbclasses for generating rauc bundles from an image recipe. It's not that complicated to set up boot partition selection in u-boot.
For embedded systems, I strongly prefer the "full immutable system image update" approach over the "update individual packages with a package manager" approach. Plus you get rollbacks "for free": if the system doesn't boot into the new image, it automatically falls back to booting into the previous image.
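For a taste, a minimal A/B layout in RAUC's system.conf looks roughly like this (the compatible string and device paths are examples); the bootname values are what the u-boot boot-selection logic switches on:

    [system]
    compatible=my-board
    bootloader=uboot

    [slot.rootfs.0]
    device=/dev/mmcblk0p2
    type=ext4
    bootname=A

    [slot.rootfs.1]
    device=/dev/mmcblk0p3
    type=ext4
    bootname=B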
+1 for "full immutable system image update"
People who suggest updating individual packages (or, even worse, individual deb packages) have never deployed any large-scale IoT/embedded project. These devices are very different from servers/desktops and will break in ways you can't imagine. We started out using deb packages at Screenly before moving to Ubuntu Core, and by that point the amount of error/recovery logic we had written to recover from broken deb package state was insane.
Not what you're looking for, but https://sbabic.github.io/swupdate/swupdate.html
It's meant (I think?) for immutable-style systems like the ones Yocto builds. You basically create a cpio archive and a manifest of which file goes in which partition (plus bells and whistles like cryptography). It's a good idea to have double buffering, so that if boot fails to reach a reasonable state, the device reverts after a few tries.
IMO the mutable distro model is way too fragile for long-term automated updates. Errors and irregularities accumulate with each change. Besides, the whole "update while the system is running" thing is not actually well-defined behaviour even for Linux; it just happens to work most of the time.
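The manifest (sw-description) is pleasantly boring. A minimal single-image example, with the device path and filename as placeholders:

    software =
    {
        version = "1.0.0";
        images: (
            {
                filename = "rootfs.ext4.gz";
                device = "/dev/mmcblk0p2";
                compressed = "zlib";
            }
        );
    }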
+1 for swupdate. Implemented it three or so different times at this point and really like it.
I would look at Balena if you are already using Raspberry Pis and Docker. Alternatively, maybe look into RAUC, but I don't know if it supports Docker. The SD cards will be your biggest failure point, so select them wisely.
I’ve deployed Ubuntu Core at scale. It’s great but does have its learning curve. There’s also somewhat of a lock-in, even if you can run everything yourself. However, their security is really good.
Yocto + Mender is one option, but you don’t want the Yocto pain. We are trying Balena at the moment and liking it. It manages both the OS and the Docker bit.
With Balena you are shipping entire Linux distros that you did not build, right? How do you deal with licences?
E.g. if you ship an Ubuntu container, you have to honour the licences of all the packages that you are shipping inside that Ubuntu container. Do you?
Yocto is pretty great! Unfortunately I feel like it gets a lot of criticism, but usually from people who haven't gotten to learn it. Like "I had to spend 2h on Yocto and this thing suuuuucks, I threw a docker image there and called it a day".
Which is a pity, because when used correctly it's really powerful!
From the article, I can't help but mention that one third of the "key terminology" is about codenames. What is it with people and codenames? I can count, and easily know that 5 comes after 4. But I don't know how to compare Scarthgap and Dunfell (hell, I can't even remember them).
Part of why it gets so much criticism is that Yocto’s learning curve is pure brutality.
Out of the box configurations for Yocto images and recipes are fabulous.
Trying to modify those configurations below the application layer… you’re gonna have a bad time. Opaque error messages, the whole layers vs recipes vs meta issues, etc. I also can’t shake the feeling that Yocto was made to solve a chip company’s problems (i.e. supporting Linux distros for three hundred different SoCs) rather than my problems (i.e. shipping working embedded software for one or two SoC platforms).
I’ve had a lot more success with buildroot as an embedded Linux build system and I recommend it very highly.
> pure brutality
And that's not hyperbole.
It's an odd mix of convention and bespoke madness. The convention part is that you set up a few variables and if the build system of the software is a good fit to common convention, things will just tend to work.
The bespoke madness comes in when there are slight departures from common convention and you must work out what variables to set and functions to define to fix it.
There are parts of the build system that are highly reminiscent of 1980s era BASIC programming. For example, I have seen build mechanisms where you must set variables first and then include or require a file. This is analogous to setting global variables in BASIC and then calling a subroutine with GOSUB because functions with arguments haven't been invented yet.
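A made-up fragment of that pattern, for flavor (all names hypothetical):

    # set the "arguments" first...
    WIDGET_VARIANT = "model-b"
    EXTRA_OEMAKE = "VARIANT=${WIDGET_VARIANT}"
    # ...then "GOSUB": the include reads whatever happens to be set above
    require widget-common.inc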
I've done both, and I'll add that the one thing I miss about Yocto is that it could package up an SDK with an installer that could be deployed on a different machine. With a single install you have the correct crosstools, libraries, and headers to build directly for the target. And when we used to develop with Qt, that was a huge advantage in helping others get started.
But now I use Buildroot and I get things done without all the extra anxiety.
Fair point, but Buildroot reached parity with that feature by allowing you to tar up a toolchain and then point to it as an external toolchain tarball.
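Roughly, on each side (image name and paths are examples):

    # Yocto: build a self-extracting SDK installer for app developers
    bitbake -c populate_sdk core-image-minimal

    # Buildroot: produce a relocatable toolchain/SDK tarball...
    make sdk
    # ...and consume it from another tree as an external toolchain:
    #   BR2_TOOLCHAIN_EXTERNAL=y
    #   BR2_TOOLCHAIN_EXTERNAL_CUSTOM=y
    #   BR2_TOOLCHAIN_EXTERNAL_PATH="/path/to/unpacked/sdk"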
"Part of why it gets so much criticism is that Yocto’s learning curve is pure brutality."
At one time SoCs were RAM-lean... and build-specific patching, stripping, and static linking were considered an acceptable tradeoff in Yocto-built systems for IoT etc. Those use-cases are extremely difficult to justify these days with 256MB of RAM on a $5 SoC...
However, the approach was commercially unsustainable by maintainability, security, and memory-page cache-hit efficiency metrics. It should be banned, given it still haunts the lower systems like a rancid fart in an elevator. =3
From experience, none of the difficulty of Yocto comes from the fact that it strips binaries; it builds stripped packages and puts debug info in separate -dbg packages, which is super standard in the Linux world.
Yocto doesn't do static linking unless you specifically ask for it, libraries end up as .so files in /usr/lib like on all other Linux systems.
When Yocto carries patches, it's typically because those patches are necessary to fix bad assumptions upstreams make which Yocto breaks, or to fix bugs, not to reduce RAM usage.
I don't understand where you're coming from at all.
Buddy what the fuck are you talking about
Yocto launched in 2010
Buildroot launched in 2005
Both of these ecosystems coexisted in the era of sub $100 embedded Linux dev boards with way more than 256MB RAM
Yocto has no excuse for making toolchain and system configuration modifications as difficult as it does.
There is a big difference in just about everything relating to selling something with a sub $10 BOM and something approximating a “sub $100 dev board.”
The difference in unit volumes drives wide variances in tolerances of additional development difficulty/cost.
Indeed, the economics of chip component choices at scale change development priorities. Depends on the use-case, and how much people are willing to compromise on the design. Performant SoC and Flash memory are no longer premium budget choices.
Some people seem irrationally passionate about the code smell of their own brand. =3
It's powerful, but bitbake wasn't so much designed as emerged from a primordial soup. It's easy to go completely insane trying to debug it due to the amount of action-at-a-distance recipes and layers can create. (Try playing the "where did this compile flag come from?" game.)
I don't disagree, but wanted to note: if you're ever stuck trying to trace a value, you can see everything that went into its calculation by using "bitbake -e".
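For example (recipe and variable are arbitrary):

    # dump the evaluated environment; each variable is preceded by
    # comment lines recording every file that touched it
    bitbake -e busybox > env.txt
    grep -B 20 '^CFLAGS=' env.txt

    # recent releases also ship a targeted helper:
    bitbake-getvar -r busybox CFLAGS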
Yeah, I was a bit scared off by it and both the terminology and the curious mixing of Python and bash can be a bit confusing. But it’s powerful and also very extensible without (generally) having to fork upstream layers.
I'm honestly impressed by how... well it works. Considering it's building an entire, totally custom Linux distro from scratch, it requires surprisingly little hand-holding.
I agree. I don't understand how people prefer Buildroot. Buildroot feels like an ad-hoc system of glued-together Makefiles, whereas Yocto actually feels like it was built for purpose.
Yocto feels like a ball of mud duct-taped together, but thankfully has good documentation. It reminds me of CMake. Buildroot is nice for relatively simple situations. NixOS is arguably better than both.
Love Yocto! It has a learning curve but it took about a week from nothing to an embedded image including Swift and Flutter apps, U-Boot, etc. A curve worth climbing.
I always found Buildroot a lot easier to fathom and harness. And certainly flexible enough, with the ability to patch every included recipe and package.
I cut my teeth on Buildroot but greatly prefer Yocto now. Buildroot is fast and loose, where Yocto forces you to do the right thing.
I think it is easier. But for some projects it becomes harder to maintain.
Yeah, it definitely isn’t straightforward. But it is complicated for good reasons, given how much more complicated stuff it does behind the scenes.
A few years ago I had to build a custom embedded image for a high-quality scientific instrument that was going into production, and I made a pass at Yocto - but ultimately decided it wasn't worth the heavy load to get everything in place to do a full build, for the specific SO-DIMM module we were using, so ended up with a custom build script to build the bootable image and all intended embedded applications. This worked out, but I've always been bothered that Yocto didn't pass my first sniff test.
I ended up completing the project on time and under budget by adopting a strict "compiler on-board" approach (i.e. no cross-compiling), so that's where I got a bit dissatisfied with the Yocto approach of having a massive cross-compiling tooling method to deal with.
I'll have to give it another go, but I do find that if I have to have a really beefy machine to get started on an embedded project, something's not quite right.
>you can’t run “apt update”
if you want to get a little weird, you can tell yocto to compile everything into deb packages and host them yourself with something like aptly
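Sketch of the moving parts (repo name hypothetical):

    # local.conf: emit .deb packages instead of the default
    PACKAGE_CLASSES = "package_deb"

    # then publish the resulting deploy dir with aptly
    aptly repo create -distribution=stable myos
    aptly repo add myos tmp/deploy/deb/
    aptly publish repo myos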
Yeah that’s true. But if these are embedded devices, you probably want an A/B partition scheme with full transactional updates and rollback.
I once built a Yocto system that had both... We'd use our package index for quick hotfixes, and push a full OS image to the A/B partition for larger, riskier changes. It was nice to have options.
That’s neat!
Or use opkg
Or you can, you know, just run Debian.
I was going to just comment "but systemd", but I just found out Debian ostensibly supports uninstalling it and installing OpenRC instead, and that makes me like Debian more. I use Debian for generic VMs; prod VMs are 50/50 Gentoo and Ubuntu. I've been messing with Devuan too as my primary Linux VM on my desktop PC. At one point I had it booting to fully logged in in around 8 seconds (after the bootloader selection thing). Unfortunately I broke that feature, so now it takes like 40 seconds. But it is also OpenRC (there's a pattern here).
Also Yocto supports systemd. I’m using it in my build.
I am actually scared of switching jobs in case my next job doesn't involve yocto.
How would I make use of the countless hours I have already invested in this piece of software? Countless keywords and the dark magic of the ever-changing syntax.
But when it works it works..
> How would I make use of the countless hours I have already invested in this piece of software? Countless keywords and the dark magic of the ever changing syntax.
That sounds like sunk-cost fallacy. What if you switch jobs and they use something else that just works without needing dark magic syntax? If it's the best tool then so be it, but I question your reason for clinging to it.
Just curious, what is the procedure that does NOT involve Yocto? I guess a ton of shell scripts? Where can I learn it (i.e. build a Linux system for any embedded system without using Yocto or similar tools)? Is the LFS project the first place I should visit?
Background: I just switched to Ubuntu 22.04 for my daily use (mostly coding for side projects) but TBH I'm just using it as Windows. I use a Macbook Pro for work and know a bit of shell scripting, some Python, a bit of C and C++. Basically your typical incompetent software developer.
> Just curious, what is the procedure that does NOT involve Yocto? I guess a ton of shell scripts? Where can I learn it (i.e. build a Linux system for any embedded system without using Yocto or similar tools)? Is the LFS project the first place I should visit?
There are other tools in the same space like buildroot, but I would personally tend to recommend LFS to start from the fundamentals and work up, yes.
Your ability to learn and apply such dark magic is the more general skill. If you can wrangle Yocto, you can wrangle Buildroot. Or the Android SDK, or whatever else.
As someone in the software supply chain business: Yocto SBOMs are considered low quality because they include things that do and do not exist in the final compiled artifact. When you compare what physically exists inside a binary, what is included in the manifest, and what is generated in the build root, you will find they never align unless you get creative and map artifacts together. Today they are accepted as meeting the compliance checkbox, but once the industry matures, that approach will need to change.
May I ask what you recommend?
Since it is easy for me, I prefer the Yocto SBOM, but the security side forces Black Duck binary scanning on us, which, while finding most things in the binary, constantly misidentifies versions, resulting in a lot of manual work.
It also does not know which patches Yocto has applied for fixing CVEs.
And none of these can figure out what is in the kernel, and therefore they trigger an ungodly amount of CVEs in parts of the kernel we don't even have compiled in.
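(For anyone who wants the Yocto-native side of this, it's two classes in local.conf; class names as of recent releases:)

    INHERIT += "create-spdx"   # SPDX SBOMs per package and per image
    INHERIT += "cve-check"     # CVE report that accounts for patches applied in recipes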
There is no tool at the moment that solves this, but it is being worked on amongst some players in the industry by those that fundamentally understand the problem. It is a very niche skill set that the greater compliance world doesn’t understand the need for yet. I would say we are 1-3 years away from solving the noise problem of SCA/BCA.
How would yocto adjust their approach to improve their SBOM output?
It would seem to be a nearly impossible thing to automate.
To be clear, it isn’t just a yocto problem. It is an industry wide issue and usually requires resolution between binary, build, and manifest or SCA. But at the end of the day developers are still very creative.
Ah BitBake and OpenEmbedded. That’s what Palm used for WebOS. It was simultaneously amazing and a nightmare. In 2024 you should not be using it. There are better alternatives.
> There are better alternatives.
Such as?
Nix, Bazel, Buck come immediately to mind
What I would really like is something like Docker to build images for my Raspberry Pis. Just a single file, shell commands, that's it. I feel that Yocto is already too complicated if you want a reproducible setup for your Raspberry Pi at home.
I've been working on something recently that you might find interesting: https://github.com/makrocosm/makrocosm
It's not a shell script, but it has makefile rules that make it relatively simple to build a Docker image for your architecture, export it and turn it into a filesystem image, build a kernel, U-Boot, etc. The referenced "example project" repo builds a basic Alpine image for the Raspberry Pi (https://github.com/makrocosm/example-project/tree/main/platf...) and others.
It was motivated by frustrations with Yocto at a new job, after 8 or so years working on firmware for network equipment using an offshoot of uClinux. Hoping to convince the new job to use Makrocosm before we settle on Yocto.
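The core trick is small enough to sketch in shell (image name hypothetical):

    # build a rootfs with ordinary Docker tooling...
    docker build -t rpi-rootfs .
    # ...then flatten the container filesystem into a tarball
    docker export "$(docker create rpi-rootfs)" > rootfs.tar
    # from there, a makefile rule can mkfs an image and unpack the tar into it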
Maybe https://github.com/RPi-Distro/pi-gen works for you.
That’s what Balena does. Base immutable OS and docker images.
I think long-term Yocto and Buildroot are going to be replaced by container tooling. There's not that big of a difference between compiling an OS image and building a container image.
Well, one difference is that Docker lives well above the metal, in a nice cozy environment on mostly standard operating systems, and Yocto builds that nice cozy environment for all kinds of nonstandard hardware.
It's crazy that you have to use this custom "embedded" tooling when the vendor should be implementing support in vanilla Linux distros.
It is not "custom embedded tooling"! It is tooling you run on your main machine to build a custom distro. Imagine you follow the "Linux from Scratch" tutorial, then start writing scripts to automate parts of that, and eventually create a framework that allows you to create a custom Linux from Scratch. After years of work you may end up with something that looks like Yocto.
The whole point of using Yocto is that you want a custom distro. You could build a totally "standard" distro with Yocto but... at this point you can also just use Gentoo or Debian or whatever works.
I agree that vendors should upstream, but into packages that are upstream of even the distros. It's not like x64 where the same binaries can support multiple systems. My product OS can't even boot on the chip maker's devkit and it's impossible to make a binary that works on both.
A vanilla distro doesn't want to support a platform with a few hundred thousand units and maybe a few dozen people on the planet that ever log into anything but the product's GUI. That's the realm of things like OpenWRT, and even they are targeting more popular devices.
I understand the hobbyist angle, and we don't stand in their way. But it's much cheaper to buy a SBC with a better processor. For the truly dedicated, I don't think expecting a skill level of someone who can take our yocto layer on top of the reference design is asking too much.
There's a lot more to Yocto than just building the kernel. It's still useful when kernel support is upstreamed, such as including vendor tooling and test programs.
Upstreaming also takes a very long time and is usually incomplete. Even when some upstream support is available you will often have to use the vendor specific kernel if you want to use certain features of the chip.
Nobody can wait around for upstream support for everything. It takes far too long and likely won't ever cover every feature of a modern chip.
This is way harder to do with SBCs than you would think. You don't have a BIOS.
This comment makes zero sense. It's a meta-distribution: it builds a custom one for you. Professional custom embedded distros are a different beast altogether from the vanilla distros.
The one thing I still don't like about Yocto is the setup process. You need to check out multiple layer repositories, make sure you check out the right commit from each repository (need reproducibility!), put everything in the correct directory structure, and then set up `bblayers.conf` and `local.conf`.
I've got a script that does all this, but it's still a pain.
I've been thinking about putting everything in a monorepo, and adding poky, the third-party layers, and my proprietary layers as submodules. Then, when the build server needs to check out the code or a new developer needs to be onboarded, they just `git clone` and `git submodule update`. When it's time to update to the latest version of Yocto, update your layer submodules to the new branch. If you need to go back in time and build an older version of your firmware image, just roll back to the appropriate tag from your monorepo.
Anyone else have another solution to this issue?
Oh yeah, and the build times. It's crazy disk I/O bound. But if you're using something like Jenkins on an AWS instance with 96GB of RAM, set up your build job to use `/tmp` as your work directory and you can do a whole-OS CI build in minutes.
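Back on the checkout question: the submodule version of this is only a few commands (layer names and branch hypothetical):

    git init my-firmware && cd my-firmware
    git submodule add -b scarthgap https://git.yoctoproject.org/poky
    git submodule add -b scarthgap https://git.openembedded.org/meta-openembedded
    git commit -m "pin layers"
    # onboarding / CI is then just:
    #   git clone --recurse-submodules <repo-url>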
I recently found out about the 'kas' tool that tries to be a better version of the hacky scripts we all write for this. Here's a link to an example YAML config to give you a taste: https://kas.readthedocs.io/en/1.0/userguide.html#project-con...
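Roughly, a kas file rolls the repos, pins, and local.conf fragments into one place (field names as of recent kas releases; the commit is a placeholder):

    # kas.yml -- build with: kas build kas.yml
    header:
      version: 14   # schema version depends on your kas release
    machine: qemux86-64
    distro: poky
    target: core-image-minimal
    repos:
      poky:
        url: https://git.yoctoproject.org/poky
        commit: <pinned-sha>
        layers:
          meta:
          meta-poky:
    local_conf_header:
      my-settings: |
        PACKAGE_CLASSES = "package_ipk"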
There's ongoing work on an official setup solution, "bitbake-setup". See https://lists.openembedded.org/g/openembedded-core/topic/111....
Shameless plug, there is also my own tool, yb. It's very early days though: https://github.com/Agilent/yb
I just use git submodules because, whilst they can be frustrating, it's a workflow I'm familiar with. Other options would be kas or gclient.
You could see whether kas (mentioned above) could help you there. It fixes some of the manual steps, while adding tons of goodies.
I read just the title and wondered if this was a yocto post.
I have (by accident) become the Yocto SME at my $dayjob. Probably the biggest positive has been free SBOM generation, and cooking things like kSLOC counts into recipes.
The learning curve stinks, but the build suite is very powerful.
> One limitation of the current disk image for Rock Pi is that you don’t have a functional TTY.
I believe on systemd-based systems these are service units you need to enable (and with Yocto, possibly install?):
systemctl enable --now getty@tty1 (etc.)
Or something like that. I’ve experienced similar issues while working on an x86-based NAS and also on the RPi when enabling serial consoles.
Oh nice! Thanks. Will give that a try.
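If it's the serial console that's missing, also check the machine's SERIAL_CONSOLES variable; Yocto images normally generate their getty (or serial-getty) instances from it (device name here is an example):

    # machine .conf or local.conf, format "baud;device"
    SERIAL_CONSOLES = "115200;ttyS0"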
This toolchain is about half my dayjob.
Bitbake is a meta-compiler, and the tool suite is very powerful. Just realize that this means you need to be an expert error-message debugger, able to jump into (usually C/C++) code to address issues and to flow patches upstream.
It really is gratifying when you finally kick out a working image.
Yocto error messages are absurdly awful. Hope you like fishing out the one gcc error out of 10,000 lines of garbage output.
There's nothing as disappointing as starting a build, going out for a couple hours, and coming back to a terminal full of red.
But when it works, it works.