I was feeling a bit like the Petunia and thought "Oh no, not again." :-) One of the annoyances of embedded programming can be having the wheel re-invented a zillion times. I was pleased to see that the author was just describing good software architecture that creates portable code on top of an environment specific library.
For doing 'bare metal' embedded work in C you need crt0, the weirdly named C startup code that satisfies the assumptions the C compiler made when it compiled your code, plus a set of primitives to do what the I/O drivers of an operating system would have been doing for you. And voila, your C program runs on 'bare metal.'
Another good topic associated with this is setting up hooks to make STDIN and STDOUT work for your particular setup, so that when you call printf() it just automagically works.
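With newlib, for example, that hook is the _write() stub; a host-runnable sketch, where uart_putc() is a hypothetical driver routine simulated with a RAM sink:

```c
/* Hypothetical driver routine: on an MCU this would poll the UART's
   TX-ready flag and write its data register. Simulated with a RAM
   sink here so the sketch runs anywhere. */
static char sink[128];
static int  sink_len;
static void uart_putc(char c) { sink[sink_len++] = c; }

/* Newlib funnels stdout/stderr through _write(); supplying this one
   function is usually enough to make printf() "just work". */
int _write(int fd, const char *buf, int len)
{
    (void)fd;                   /* same sink for stdout and stderr here */
    for (int i = 0; i < len; i++) {
        if (buf[i] == '\n')     /* most terminals expect CRLF */
            uart_putc('\r');
        uart_putc(buf[i]);
    }
    return len;                 /* report everything as written */
}
```

(Newlib's own prototype takes a non-const char pointer; match whatever your libc headers declare.)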
This will also introduce you to the concept of a basic input/output system, or BIOS, which exports those primitives. Then you can put that code in flash/EPROM, load a compiled binary into memory, and start it, and now you've got a monitor or a primitive one-application-at-a-time OS like CP/M or DOS.
It's a fun road to go down for students who really want to understand computer systems.
At my school, we did the following project: https://github.com/lse/k
It is a small kernel, going from a bootloader all the way to running ELF files.
It has about 10 syscalls, if I remember correctly.
It is very fun, and it really makes you understand the ton of legacy support still in modern x86_64 CPUs, and what the OS underneath is doing with privilege levels and task switching.
I even implemented a small rom for it that has an interactive ocarina from Ocarina of Time.
This is really neat. So many engineers come out of school without ever having had this sort of 'start to finish' level of hands on experience. If you ever want to do systems or systems analysis this kind of thing will really, really help.
What is your school? I thought it was the London School of Economics, but it’s another LSE.
It's EPITA, in France.
LSE is the systems laboratory of EPITA (https://www.lse.epita.fr/)
No BIOS necessary when we're talking about bare-metal systems. printf() will just resolve to a low-level UART-based routine that writes to a FIFO, to be played out to the UART when it's not busy. Hell, I've seen systems that forgo the FIFO and just write to the UART, blocking while writing.
I hope nobody was confused into thinking I thought a BIOS was required; I was pointing out the evolution from this to a monitor. I've written some code[1] that runs on the STM32 series and uses the newlib printf(). I created the UART code[2], which is interrupt driven[3], which gives you the fun feature that you can hit ^C and have it reset the program (useful when your code goes into an unexpected place :-)).
[1] https://github.com/ChuckM/
[2] https://github.com/ChuckM/nucleo/blob/master/f446re/uart/uar...
[3] https://github.com/ChuckM/nucleo/blob/master/f446re/common/u...
This was my attempt at a minimal bare-metal C environment:
That's awesome. Back in the day this was the strong point of eCos, which was a bare-metal "platform" for running essentially one application on x86 hardware. The x86 ecosystem has gotten so complicated that being able to do this can get you better performance for an "embedded" app than running on top of Linux or another embedded OS. That translates into your appliance-type device using lower cost chips, which is a win. When I was playing around with eCos a lot of the digital signage market was using it.
Does anyone still do it that way?
With AMD64-style chips? Probably not. Multi-core systems really need a scheduler to get the most out of them, so perhaps there are some very specific applications where that would be a win, but I cannot think of anything that isn't super specific. For ARM64 chips with a small number of cores, sure, that is still a very viable tool for appliance-type (application-specific) applications.
This sounds fascinating and absolutely alien to me, a Python dev. Any good books or other sources to learn more you can recommend?
You can start here, https://wiki.osdev.org/Expanded_Main_Page
Also, regardless of what others say, you can have a go at feeling what it was like to use BASIC on 8-bit computers to do everything their hardware exposed, or even 16-bit systems like MS-DOS, but with Python.
Get an ESP32 board and have a go at it with MicroPython or CircuitPython:
https://docs.micropython.org/en/latest/esp32/quickref.html
https://learn.adafruit.com/circuitpython-with-esp32-quick-st...
There's always the Minix book!
// QEMU UART registers - these addresses are for QEMU's 16550A UART
#define UART_BASE 0x10000000
#define UART_THR (*(volatile char *)(UART_BASE + 0x00)) // Transmit Holding Register
#define UART_RBR (*(volatile char *)(UART_BASE + 0x00)) // Receive Buffer Register
#define UART_LSR (*(volatile char *)(UART_BASE + 0x05)) // Line Status Register
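In use, these registers get driven roughly like the sketch below (LSR bit 5 is "transmit holding register empty" on the 16550 family; the register file is faked with an array here so it runs hosted, whereas on QEMU's virt machine UART_BASE really is 0x10000000):

```c
#include <stdint.h>

/* Fake 16550 register file so this compiles and runs on a host. */
static volatile uint8_t fake_uart[8] = { [5] = 0x20 };  /* LSR: THR empty */
#define UART_BASE ((uintptr_t)fake_uart)
#define UART_THR (*(volatile uint8_t *)(UART_BASE + 0x00))
#define UART_LSR (*(volatile uint8_t *)(UART_BASE + 0x05))
#define LSR_THRE 0x20  /* bit 5: transmit holding register empty */

static void uart_putc(char c)
{
    while (!(UART_LSR & LSR_THRE))  /* busy-wait until the UART can take a byte */
        ;
    UART_THR = (uint8_t)c;          /* hardware latches the byte and shifts it out */
}
```

Writes to offset 0 hit the THR, reads from the same offset hit the RBR; that aliasing is exactly the quirk discussed below.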
This looks odd. Why are receive and transmit buffer the same, and why would you use such a weird offset? IIRC RISC-V allows that, but my gut says I'd still align this to the word size.

My sweet summer child… this is backwards compatibility with the I/O register set of NatSemi/Intel's 8250 UART chip…
…from 1978.
https://en.m.wikipedia.org/wiki/8250_UART
The definitions are correct; look up a 16550 datasheet if you want to lose some sanity :)
Oh damn, thanks!
Newlib is huge and complex (it even includes old K&R syntax) and adapting the build process to a new system is not trivial. I spent a lot of time with it when I re-targeted chibicc and cparser to EiGen, and finally switched to PDCLib for libc and a part of uClibc for libm; see https://github.com/rochus-keller/EiGen/tree/master/ecc/lib. The result is platform independent besides essentially one file.
For a static library it does not matter whether it is huge and complex, because you will typically link only a small number of its functions into your embedded application.
I have used a part of newlib with many different kinds of microcontrollers and its build process has always been essentially the same as a quarter of a century ago, so the script that I wrote the first time, before 2000, has always worked without problems, regardless of the target CPU.
The only tricky part that I had to figure out the first time was how to split the compilation of the gcc cross-compiler into a part that is built before newlib and a part that is built after newlib.
However that is not specific to newlib, but is the method that must be used when compiling a cross-gcc with any standard C library and it has been simplified over the years, so that now there is little more to it than choosing the appropriate make targets when executing the make commands.
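The split looks roughly like this these days (directory names, prefix, and the target triplet are illustrative; exact configure flags vary by gcc release):

```shell
# stage 1: compiler only, no C library yet
mkdir gcc-build && cd gcc-build
../gcc-src/configure --target=arm-none-eabi --with-newlib --prefix="$PREFIX"
make all-gcc && make install-gcc

# newlib, built with the stage-1 cross-compiler now on $PATH
mkdir ../newlib-build && cd ../newlib-build
../newlib-src/configure --target=arm-none-eabi --prefix="$PREFIX"
make && make install

# stage 2: the rest of gcc (libgcc, libstdc++, ...) against the new libc
cd ../gcc-build && make && make install
```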
I have never needed to change the build process of newlib for a new system; I just needed to replace a few functions, for things like I/O peripherals or memory allocation. However, I have never used much of newlib, mostly only stdio and the memory/string functions.
> it does not matter whether it is huge and complex
I was talking about the migration effort and usage complexity, not what the compiler or linker actually sees. It may well be that Newlib can be configured for every conceivable application, but it was more important to me not to have such a behemoth and bag full of surprises in the project, with preprocessor rules and dependencies that a single developer can hardly understand or keep track of. My solution is lean, complete, and works with standard-conforming compilers on every platform I need it on.
220k just to include stdio? That's insane. I have 12k and still do I/O. Just without the overblown stdio and sbrk; uart_puts is enough. And only in DEBUG mode.
I thought this was going to talk about how printf is implemented. I worked with a tiny embedded processor that had 8k imem, and printf alone is about 100k. Crazy. Switched to a more basic implementation that was around 2k, and it ran much, much faster. It seems printf is pretty bloated, though I guess typical people don't care.
I implemented a secure printf_s, and its API is the problem. You cannot dead-code-eliminate all the unused methods. And it's type-unsafe. There are much better APIs for implementing a safe printer with all the formatting options; printf-style format strings are not one of them.
In most C standard libraries intended for embedded applications, including newlib, there is some configuration option to provide a printf that does not support any of the floating-point format specifiers.
That is normally enough to reduce the footprint of printf by more than an order of magnitude, making it compatible with small microcontrollers.
In school we were taught that the OS does the printf. I think the professors were just trying to generalize to not go on tangents. But, once I learned that no embedded libc variants had printf just no output path, it got a lot easier to figure out how to get it working. I wish I had known about SWO and the magic of semihosting back then. I don't think those would be hard to explain, and interestingly it's one of the few things students asked about that, now that I'm in the field, coworkers also ask me how to do (setting up _write).
> But, once I learned that no embedded libc variants had printf just no output path
Did you mean "once I learned that no, embedded libc variants have printf"?
To clarify as I had to check, embedded libc variants do indeed have some (possibly stripped-down) implementation of printf and as you say they just lack the output path (hence custom output backends like UART, etc).
Has anybody played with newlib, but grown the complexity as the system came together?
It seems like one thing to get a bare-bones printf() working to get you started on a bit of hardware, but as the complexity of the system grows you might want to move on from (say) pushing characters out of a serial interface to pushing them onto a bitmapped display.
Does newlib allow you to put different hooks in there as the complexity of the system increases?
Newlib provides both a standard printf, which is necessarily big, and a printf that does not support any of the floating-point format specifiers.
The latter is small enough so that I have used it in the past with various small microcontrollers, from ancient types based on PowerPC or ARM7TDMI to more recent MCUs with Cortex-M0+.
You just need to make the right configuration choice.
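With GCC-based ARM toolchains that configuration choice usually boils down to which spec file you link with (flags as shipped in the GNU Arm Embedded toolchain; verify against your own toolchain's docs):

```shell
# newlib-nano: printf without floating-point support by default -- much smaller
arm-none-eabi-gcc -Os main.c --specs=nano.specs -o app.elf

# opt float formatting back in only where you really need it
arm-none-eabi-gcc -Os main.c --specs=nano.specs -u _printf_float -o app.elf
```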
You can always write a printf replacement that takes a minimal control block that provides put, get, control, and a context.
That way you can print to a serial port, an LCD Display, or a log.
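A minimal sketch of that control-block idea (all names here are illustrative, not from any particular library; only put and a context are shown, but get/control slot in the same way):

```c
#include <stdarg.h>
#include <string.h>

/* The "control block": the printing core only ever calls ops->put, so
   the same code can drive a UART, an LCD, or a RAM log. */
struct out_ops {
    void (*put)(void *ctx, char c);
    void *ctx;
};

/* Deliberately tiny core: %s, %d, %x, %c and %% only. */
static void mini_vprintf(const struct out_ops *o, const char *fmt, va_list ap)
{
    for (; *fmt; fmt++) {
        if (*fmt != '%') { o->put(o->ctx, *fmt); continue; }
        switch (*++fmt) {
        case '\0': return;              /* stray '%' at end of format */
        case 's': {
            for (const char *s = va_arg(ap, const char *); *s; s++)
                o->put(o->ctx, *s);
            break;
        }
        case 'd': {
            int v = va_arg(ap, int);
            unsigned u = (unsigned)v;
            char tmp[10]; int n = 0;
            if (v < 0) { o->put(o->ctx, '-'); u = 0u - u; }
            do { tmp[n++] = (char)('0' + u % 10); } while (u /= 10);
            while (n--) o->put(o->ctx, tmp[n]);
            break;
        }
        case 'x': {
            unsigned u = va_arg(ap, unsigned);
            char tmp[8]; int n = 0;
            do { tmp[n++] = "0123456789abcdef"[u & 0xf]; } while (u >>= 4);
            while (n--) o->put(o->ctx, tmp[n]);
            break;
        }
        case 'c': o->put(o->ctx, (char)va_arg(ap, int)); break;
        default:  o->put(o->ctx, *fmt); break;  /* covers %% too */
        }
    }
}

static void mini_printf(const struct out_ops *o, const char *fmt, ...)
{
    va_list ap;
    va_start(ap, fmt);
    mini_vprintf(o, fmt, ap);
    va_end(ap);
}

/* Example backend: append into a RAM log buffer. */
static char logbuf[64];
static int  loglen;
static void log_put(void *ctx, char c) { (void)ctx; logbuf[loglen++] = c; }
```

Retargeting is then just handing in a different out_ops; the formatting core never changes.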
Meaning, seriously: the standard printf is late-1970s hot garbage and no one should use it.
char buffer[100];
printf("Type something: ");
scanf("%s", buffer);
Come on, it’s 2025, there’s no need to write trivial buffer overflows anymore.

It’s a feature to rewrite your OS kernel on the fly.
It's 1990, maybe 1999, in embedded land.
honestly love reading about this stuff - always makes me realize how much gets glossed over in school. you think modern cpus and all the abstraction layers help or just make things messier for folks trying to learn the real basics?
While "newlib" is an interesting idea, the approach taken here is, in many cases, the wrong one.
You see, the printf() family of functions doesn't actually require _any_ metal, bare or otherwise, beyond the ability to print individual characters.
For this reason, a popular approach for the case of not having a full-fledged standard library is to have a fully cross-platform implementation of the family which "exposes" a symbol dependency on a character printing function, e.g.:
    void putchar_(char c);

and variants of the printf functions which take the character-printing function as a runtime parameter:

    int fctprintf(void (*out)(char c, void* extra_arg), void* extra_arg, const char* format, ...);
    int vfctprintf(void (*out)(char c, void* extra_arg), void* extra_arg, const char* format, va_list arg);
this is the approach taken in the standalone printf implementation I maintain, originally by Marco Paland:

As replied on your other comment, when you introduce a custom printf for an embedded platform it makes more sense to just edit in support for your local I/O backend rather than having the complexity of a putch() callback function pointer.
cf. https://news.ycombinator.com/item?id=43811191 for other notes.
I always felt with these kinds of things you strip out `stdio.h` and your new API/ABI/blackbox becomes `syscall` for `write()`, etc.
I am coding RISC-V assembly (which I run on x86_64 with a mini-interpreter), but I am careful to avoid the use of pseudo-instructions and register aliases (no compressed instructions, of course). I have a little tool to generate constant-loading code, as one-liners (semicolon-separated instructions).
And as a pre-processor I use a simple C preprocessor (I don't want to tie the code to the pre-processor of a specific assembler): I did that for x86_64 assembly, and I could assemble with gas, nasm and fasmng (fasm2) transparently.
What's wrong with compressed instructions?
I don't feel comfy using duplicate instructions for a 'R'educed instruction set.
That said, I know in some cases they could increase performance, since the code would use less memory (and there are certainly other effects I don't know about, because I am not into modern advanced CPU micro-architecture design).
I was very confused by the title, expected someone writing their own printf — i.e. the part that parses the format string, grabs varargs, converts numbers, lines up strings, etc.
I'd have called it "Bare metal puts()" or "Bare metal write()" or something along those lines instead.
(FWIW, FreeBSD's printf() is quite easy to pluck out of its surrounding libc infrastructure and adapt/customize.)
FreeBSD’s printf is my go-to, too! It’s indeed enormously simple to pluck out, instantly gives you a full-featured printf, and has added features such as dumping memory as hex.
Funnily enough we're not even referring to the same one, the hexdump thing is in FreeBSD's kernel printf, I was looking at the userspace one :). Haven't looked at the kernel one myself but nice to hear it's also well-engineered.
(The problem with '%D' hexdumps is that it breaks compiler format checking… and also 'D' is a length modifier for _Decimal64 starting in ISO C23… that's why our hexdump is hooked in as '%.*pHX' instead [which still gives a warning because %p is not supposed to have a precision, but at least it's not entirely broken.])
Is it? Could you elaborate/provide links to examples of this?
What customization would it support? Say, compared to these options:
https://github.com/eyalroz/printf?tab=readme-ov-file#cmake-o...
> Is it? Could you elaborate/provide links to examples of this?
https://github.com/FRRouting/frr/tree/master/lib/printf
Disclaimer: my work.
Customised to support %pHX, %pI4, %pFX, etc. - docs at https://docs.frrouting.org/projects/dev-guide/en/latest/logg... for what these do.
> What customization would it support?
I don't understand your question. It's reasonably readable and understandable source code. You edit the source code. That's the customisation?
> Say, compared to these options: https://github.com/eyalroz/printf?tab=readme-ov-file#cmake-o...
First, it is customary etiquette to indicate when linking your own code/work.
Second, that is not a POSIX compatible printf, it lacks support for '%n$' (which is used primarily for localisation). Arguably can make sense to omit for tiny embedded platforms - but then why is there FP support?
Third, cmake and build options really seem to be overkill for something like this. Copy the code into the target project, edit it. If you use your own printf, you probably need a bunch of other custom stuff anyway.
Fourth, the output callback is a reasonable idea, but somewhat self-contradictory. You're bringing in your own printf. Just adapt it to your own I/O backend, like libc has FILE*.
> You edit the source code. That's the customisation?
I meant, customization where you don't have to write the customized code yourself, just choose some build options, or at most set preprocessor variables.
> First, it is customary etiquette to indicate when linking your own code/work.
You're right, although I was only linking to the table of CMake options. And it's only partially my code, since I'm the maintainer rather than the original author.
> You're bringing in your own printf. Just adapt it to your own I/O backend, like libc has FILE*.
One can always do that, but - with the output callback - you can bring in an already-compiled object, which is sometimes convenient.
> If you use your own printf, you probably need a bunch of other custom stuff anyway.
My personal use case (and the reason I adopted the library) was printf deficiencies in CUDA GPU kernels. And I really needed nothing other than the printf functions. Other people just use sprintf to format the output of their mostly, or wholly, self-contained functions, which write output to buffers and such. Different strokes for different folks etc.
But - I will definitely check out the link.
> Second, that is not a POSIX compatible printf, it lacks support for '%n$' (which is used primarily for localisation).
That is true. But C99 printf and C++ printf do not support that either. ATM, the aim is completing C99 printf support (when I actually work on the library, which is not that often). So, my priority would be FP denormals and binary FP (with "%a"), before other things.
> Arguably can make sense to omit for tiny embedded platforms - but then why is there FP support?
It's there because people wanted it / needed it; and so far, there's not been any demand for numbered position specification.
> I meant, customization where you don't have to write the customized code yourself, just choose some build options, or at most set preprocessor variables.
Honestly, if you're shying away from customising a 1-2 kloc piece of code, you probably shouldn't be using a custom printf().
Case in point: function pointers are either costly or even plain unsupported on GPU architectures. I would speculate that you aren't using the callbacks there?