C sounds nice if your task is simple enough, or at least if you can decompose it into a series of loosely-connected, simple-enough tasks.
But sometimes there is inherent complexity in what you are trying to implement, and then C becomes way, way more complex than C++. You build stuff, and there are so many manual steps, and none of them can be skipped, or things will subtly break.
A good example is "gstreamer", the multimedia streaming framework. It is implemented in pure C. It needs basic data structures, so it uses GLib. It also needs to support a runtime-defined connection graph, so it is built on top of GObject.
Yes, build times are amazingly fast. But you pay for it - look at something simple, like the display_name property[0]. There is a display_name member, a PROP_DISPLAY_NAME enum, a switch case in the setter (don't forget to free the previous value!), a switch case in the getter, it's manually installed in class_init, and you need to manually free it in dispose (except they forgot this).
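Roughly, the moving parts look like this - a condensed sketch of the GObject property pattern with a hypothetical MyElement type, not the actual ximagesrc code, and with the G_DECLARE_FINAL_TYPE / G_DEFINE_TYPE boilerplate omitted:

enum { PROP_0, PROP_DISPLAY_NAME };              /* 1. the property id */

struct _MyElement {
    GObject parent_instance;
    gchar *display_name;                         /* 2. the member itself */
};

static void my_element_set_property(GObject *object, guint prop_id,
                                    const GValue *value, GParamSpec *pspec)
{
    MyElement *self = MY_ELEMENT(object);
    switch (prop_id) {
    case PROP_DISPLAY_NAME:                      /* 3. setter switch case */
        g_free(self->display_name);              /* free the previous value! */
        self->display_name = g_value_dup_string(value);
        break;
    default:
        G_OBJECT_WARN_INVALID_PROPERTY_ID(object, prop_id, pspec);
    }
}

static void my_element_get_property(GObject *object, guint prop_id,
                                    GValue *value, GParamSpec *pspec)
{
    MyElement *self = MY_ELEMENT(object);
    switch (prop_id) {
    case PROP_DISPLAY_NAME:                      /* 4. getter switch case */
        g_value_set_string(value, self->display_name);
        break;
    default:
        G_OBJECT_WARN_INVALID_PROPERTY_ID(object, prop_id, pspec);
    }
}

static void my_element_finalize(GObject *object)
{
    g_free(MY_ELEMENT(object)->display_name);    /* 5. manual cleanup (easy to forget) */
    G_OBJECT_CLASS(my_element_parent_class)->finalize(object);
}

static void my_element_class_init(MyElementClass *klass)
{
    GObjectClass *gobject_class = G_OBJECT_CLASS(klass);
    gobject_class->set_property = my_element_set_property;
    gobject_class->get_property = my_element_get_property;
    gobject_class->finalize = my_element_finalize;
    g_object_class_install_property(gobject_class, PROP_DISPLAY_NAME,
        g_param_spec_string("display-name", "Display name",
            "X display to capture from", NULL, G_PARAM_READWRITE));  /* 6. manual install */
}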
So many places just for a single property, something that would have been 1 or 2 lines in a well-structured C++ app. And unlike C++, you cannot make simple rules like "never use char* except for 3rd-party libs" - it's all those multi-point checklists which are not even written down.
[0] https://github.com/GStreamer/gst-plugins-good/blob/master/sy...
> It needs to use basic data structures, so it uses GLib. It also need to support runtime-defined connection graph, so it is built on top of GObject.
That's running into the age-old trap of trying to shoehorn an OOP system into C. Just don't do that ;) E.g. don't design your systems around the OOP paradigm in the first place.
If only C solutions took advantage of abstract data types, as advocated by the modular design approaches that predate OOP. But no, it is all reaching into field data directly with macros, and clever pointer tricks that fall down.
There are several books on the matter, which obviously very few have read.
Here is one paper example from 1985 on the subject: "Modular programming in C: an approach and an example".
> If at least C solutions took advantage of abstract data types as advocated by modular design approaches
People have been writing C code with ADTs and "Modules" from the very beginning.
Two excellent examples which come to mind are Andrew Tanenbaum's Minix book, Operating Systems Design and Implementation, and David Hanson's C Interfaces and Implementations: Techniques for Creating Reusable Software.
And of course the Linux Kernel is full of great modular C techniques which one can study.
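For flavor, the classic opaque-handle module pattern looks roughly like this - a minimal sketch with a hypothetical stack module, error handling mostly omitted:

/* stack.h - public interface: only the type name and the operations are visible. */
typedef struct Stack Stack;          /* incomplete type, fields stay hidden */
Stack *stack_create(void);
void   stack_push(Stack *s, int value);
int    stack_pop(Stack *s);
void   stack_destroy(Stack *s);

/* stack.c - the representation is invisible to clients. */
#include <stdlib.h>

struct Stack {
    int    *items;
    size_t  count;
    size_t  capacity;
};

Stack *stack_create(void)
{
    return calloc(1, sizeof(Stack));
}

void stack_push(Stack *s, int value)
{
    if (s->count == s->capacity) {
        s->capacity = s->capacity ? s->capacity * 2 : 8;
        s->items = realloc(s->items, s->capacity * sizeof *s->items);
    }
    s->items[s->count++] = value;
}

int stack_pop(Stack *s)
{
    return s->items[--s->count];
}

void stack_destroy(Stack *s)
{
    free(s->items);
    free(s);
}

Callers never see the fields, so the representation can change without touching any client code - the same discipline Hanson's book builds its interfaces on.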
Unfortunately I have seen plenty of counter-examples since 1991.
Starting with RatC from "Book on C", 1988 edition, through Turbo C 2.0 in 1991, all the way to modern times.
That is just not how most C codebases look.
Nope, you are just generalizing from your opinion, which is not quite true. My (and my colleagues') experience studying and programming C/C++ since the beginning of the 90's has been pretty good.
When the PC explosion happened, a lot of programmers without any CS background started with C programming, and hence of course there is a lot of code (usually not long-lasting) which does not adhere to software engineering principles. But quite a lot more C code was written in a pretty good style, which was what one picked up at work if not already exposed to it during studies.
I still remember the books from the late-80's/early-90's on the PC side, by authors like Al Stevens (utils/GUIs/apps using Turbo C) who wrote for Dr. Dobb's Journal. On the Unix side, of course, you had Richard Stevens, P.J. Plauger, Thomas Plum etc. They all taught good C programming principles which are still relevant and practiced today.
Each one is their own anecdote.
I also have all those books and magazines; pity that most of the coders whose code I have seen in my lifetime don't.
The regular developers, those who don't give a shit that online forums other than Stack Overflow exist, and who go home to do non-computer-related stuff after work.
As I said, you cannot generalize from your experiences alone.
You have to look at the programming community as a whole and the industry practices developed and adopted over time in the real world.
There is enough data here to show that C does not deserve the negativity that I often see here on HN.
You know what gstreamer does, right? It's a dynamic multimedia framework - you give it a pipeline defined by a string, like:
ximagesrc display_name=:1 ! video/x-raw,framerate=20/1 ! videoscale ! videoconvert ! x264enc tune=zerolatency bitrate=500 speed-preset=superfast ! rtph264pay ! udpsink host=127.0.0.1 port=5000
and it automatically loads .so files, creates all those components and connects them to each other. Super handy for all sorts of fun audio/video processing.
So all that C ceremony is required because the user _should_ be able to say "ximagesrc display_name=:1", and possibly dynamically change this attribute to something else via a script command (because a lot of the time gstreamer is embedded in other apps).
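Concretely, the application side looks something like this - a hedged sketch using the public GStreamer API; the pipeline and element name are illustrative and error handling is trimmed:

#include <gst/gst.h>

int main(int argc, char **argv)
{
    gst_init(&argc, &argv);

    /* The whole graph is described by a string and built at runtime. */
    GError *err = NULL;
    GstElement *pipeline = gst_parse_launch(
        "ximagesrc name=src display-name=:1 ! videoconvert ! autovideosink",
        &err);
    if (!pipeline) {
        g_printerr("parse error: %s\n", err->message);
        return 1;
    }
    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    /* Later, e.g. from a script command: look the element up by name and
       change the property - this is what the GObject machinery buys you. */
    GstElement *src = gst_bin_get_by_name(GST_BIN(pipeline), "src");
    g_object_set(src, "display-name", ":2", NULL);
    gst_object_unref(src);

    /* ... run a main loop, then tear down ... */
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}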
So if you know how to achieve the same without trying to "shoehorn an OOP system into C", do let me know. But I bet whatever solution you come up with would be very close to what GStreamer ended up doing, if not even more complex.
(Unless what you are trying to say is: "If a problem's most-efficient representation is OOP-like, don't use it with C, because C is for simpler problems only. Use complex languages for complex tasks." If that's the case, I fully agree.)
> and it automatically loads .so files, creates all those components and connects them to each other. Super handy for all sorts of fun audio/video processing.
I created a quite similar OOP system for C around 1995 (as I guess did most programmers at that time who were fascinated by Objective-C), classes were implemented in DLLs and were loaded on demand, classes were objects themselves, the tree of class objects could be inspected (e.g. runtime-type-information and -reflection), and the whole system was serializable - this was for a PC game (https://en.wikipedia.org/wiki/Urban_Assault).
It looked like a neat thing at the time, but nothing a couple of structs with function pointers or a switch-case message dispatcher wouldn't be able to do just as well, definitely not something that should be the base for more than one product, and most definitely nothing that should have survived the OOP hype of the 90's ;)
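Something like the following is usually enough - a rough sketch with hypothetical names, i.e. a hand-rolled vtable without any OOP framework:

#include <stdio.h>

struct object;

struct vtable {
    void (*update)(struct object *self, float dt);
    void (*destroy)(struct object *self);
};

struct object {
    const struct vtable *vt;
    float x, y;
};

static void ship_update(struct object *self, float dt)
{
    self->x += 10.0f * dt;                  /* ship-specific behaviour */
}

static void ship_destroy(struct object *self)
{
    printf("ship at %.1f,%.1f destroyed\n", self->x, self->y);
}

static const struct vtable ship_vt = { ship_update, ship_destroy };

static void tick(struct object *objs[], int n, float dt)
{
    for (int i = 0; i < n; i++)
        objs[i]->vt->update(objs[i], dt);   /* dynamic dispatch, no framework */
}

/* Usage:
 *   struct object ship = { &ship_vt, 0.0f, 0.0f };
 *   struct object *scene[] = { &ship };
 *   tick(scene, 1, 0.016f);
 */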
This kind of automatic property serialization/deserialization, however, has traditionally been a sore spot for C++ as well.
You can do it, but you will have to either repeat yourself at least a little, use some very ugly macros, or use a code generator.
And many of those ugly macro tricks work in C as well. So do code generators.
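For instance, the classic X-macro trick, sketched here for a hypothetical settings struct: the field list is written once and expanded into both the struct definition and a crude serializer, in plain C.

#include <stdio.h>

#define SETTINGS_FIELDS(X) \
    X(int,   width)        \
    X(int,   height)       \
    X(float, volume)

struct settings {
#define DECLARE_FIELD(type, name) type name;
    SETTINGS_FIELDS(DECLARE_FIELD)
#undef DECLARE_FIELD
};

static void settings_print(const struct settings *s)
{
#define PRINT_FIELD(type, name) printf(#name " = %g\n", (double)s->name);
    SETTINGS_FIELDS(PRINT_FIELD)
#undef PRINT_FIELD
}

Adding a field to the list updates the struct and the serializer at the same time - ugly, but it does keep the repetition in one place.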
That said, as C++ has added features, this type of metaprogramming has gotten easier and easier, and more and more distinct from C. This culminates in C++26 reflection which will finally make it possible to just define a struct and then automatically generate serialization/deserialization for it, without hacks. Once reflection is implemented and widely adopted, then I will agree with you that this should be 1 or 2 lines in a well-structured C++ app.
Unfortunately your insightful comment is 30 years too late. You'll have to find a time-machine and go back to the 1990s and tell GNU/GTK/Gnome/etc that they are doing it wrong.
Good luck making any sort of UI without OOP-like methods. The moment you have grouped state (say "button_enabled", "button_shown", and "button_checked") and corresponding methods, you get something OOP-like.
The only way to work around it is immediate-mode UI, but that requires a fairly powerful CPU, so it's only feasible on modern machines. Certainly not something that people would have wanted about 30 years ago; they still cared about performance back then.
Immediate mode UI doesn't require a powerful CPU and was invented in 2002, so about 24 years ago. I think the belief that it necessarily sacrifices performance is somewhat of a misconception.
Compare a hypothetical "Immediate Mode" counter:
void render_and_handle_button_click(struct ctx *ctx) {
    draw_text(ctx, "Count: %d", ctx->count);
    if (draw_button(ctx, "Increment")) {
        ctx->count++;
    }
}
To the equivalent "Retained" counter:

void render(struct ctx *ctx) {
    set_textbox_text(ctx->textbox, "Count: %d", ctx->count);
}

void handle_button_click(void *ctx_in) {
    struct ctx *ctx = ctx_in;
    ctx->count++;
    render(ctx);
}

void init(struct ctx *ctx) {
    ctx->textbox = create_textbox(ctx);
    ctx->button = create_button(ctx);
    set_button_text(ctx->button, "Increment");
    set_button_click_handler(ctx->button, ctx, handle_button_click);
    render(ctx);
}
The only difference I see here is whether the stateful "cache" of UI components (ctx->textbox and ctx->button) is library-side or application-side.

You are looking at it from the user side, while all the interesting parts are on the implementation side:
- If you get a partial redraw request, can you quickly locate _only_ the controls which are covered, and only redraw those?
- If you are clicking on something, can you quickly locate which component will receive the click?
- If you are moving the mouse, or dragging a selection, can you quickly determine whether any components should change state? Can you avoid running the full event loop on every mouse-move event?
- If your application state has updated, do you need to force redraw or not? (Many immediate mode UIs fail badly here, never going to idle even if nothing is happening)
This is all trivial in the old-style UI - efficient redraw / mouse mapping is table stakes in the older GUIs. Whereas all that immediate mode can do is keep running the redraw loop _every_ _single_ _time_ something as trivial as a mouse move happens, just in case it might change the highlighted item or something.
(Unless the "immediate mode UI" is just a thin veneer and the library is using good old OOP-based GUI components under the hood... but in that case, it's still faster to cut out the middleman and control the components yourself.)
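To make it concrete, here is roughly what hit testing and partial redraw look like when the widget tree is retained - an illustrative sketch with hypothetical types, not any particular toolkit's API; the stored rectangles answer both questions without re-running any UI code:

struct rect { int x, y, w, h; };

struct widget {
    struct rect bounds;
    void (*on_click)(struct widget *self);
    void (*draw)(struct widget *self);
};

static int rect_contains(struct rect r, int px, int py)
{
    return px >= r.x && px < r.x + r.w && py >= r.y && py < r.y + r.h;
}

static int rects_intersect(struct rect a, struct rect b)
{
    return a.x < b.x + b.w && b.x < a.x + a.w &&
           a.y < b.y + b.h && b.y < a.y + a.h;
}

/* Route a click to exactly one widget, topmost first. */
static void dispatch_click(struct widget *widgets[], int n, int px, int py)
{
    for (int i = n - 1; i >= 0; i--)
        if (rect_contains(widgets[i]->bounds, px, py)) {
            widgets[i]->on_click(widgets[i]);
            return;
        }
}

/* Repaint only the widgets covered by a damage rectangle. */
static void redraw_damaged(struct widget *widgets[], int n, struct rect damage)
{
    for (int i = 0; i < n; i++)
        if (rects_intersect(widgets[i]->bounds, damage))
            widgets[i]->draw(widgets[i]);
}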
And yes, back when I was taking a "game development" class in college, around that time, I used immediate-mode UI for menus. That makes sense - games run in the foreground _anyway_, and they basically consume 100% of the CPU anyway. But for regular apps? Please don't.
Example: I just opened https://www.egui.rs/#demo in background tab... The browser's task manager shows this tab never goes idle, consuming between 0.1% and 3%. Considering I often have 40+ tabs open, this can take a significant part of CPU.
Immediate mode GUIs, unless used in an app which already uses 100% CPU, like a game or video player, will always be less efficient than classical event-driven ones.
You appear to be making assumptions about immediate mode UI limitations based on some implementations you've worked with and not based on what's actually dictated by the interface itself. You touch on this somewhat by claiming that it's possible to be fast as long as the UI is merely a "thin veneer" over something more stateful, but that isn't a distinction I care about.
I'm not a good advocate for IMGUI; there are resources available online which explain it better than I can. I'm just trying to point out that the claim that immediate mode GUIs are some sort of CPU hog isn't really true. That's what I meant by "doesn't necessarily sacrifice performance," not that there is literally zero overhead (although I wouldn't be surprised if that were the case!).
> ...The browser's task manager shows this tab never goes idle...
As far as I can tell, the demo you linked appears to be bugged. You can see from poking around in a profiler (and from the frame timer under the "backend" popout) that the UI code does in fact go idle, but the requestAnimationFrame loop is never disabled for some reason. Regardless, even if this particular GUI library has problems going idle, that's not an inherent problem with the paradigm. I get the impression you understand this already, so I'm not sure why you've brought it up.
> Many immediate mode UIs fail badly here, never going to idle even if nothing is happening
ImGui's author deliberately doesn't fix this, because it is one of the main issues keeping ImGui from being widely adopted on desktop - fixing it could attract too many users at once while he lacks the capacity to support all of them.
https://github.com/ocornut/imgui/issues/7892#issuecomment-22...
> You build stuff, and there is so many manual steps
"The real goal isn’t to write C once for a one-off project. It’s to write it for decades. To build up a personal ecosystem of practices, libraries, conventions, and tooling that compound over time."
You mean, you are not worried about the high complexity of the codebase because you work with it every day for decades, so you know all this complexity by heart?
This basically requires one to be working solo, neither receiving nor sharing source code with others, and treating third-party libraries as black boxes.
I guess this can work for some people, but I don't think it would work for everyone.
> so you know all this complexity by heart?
No. It's that you've built up a personal database of libraries, best-practices, idioms, et al. over decades.
When you move on to a new project, this personal database comes with you. You don't need to wonder if version X of Y framework or library has suddenly changed and then spend a ton of time studying its differences.
Of course, the response to this is: "You can do this in any language!"
And you'd be right, but after 20 years straight of working in C alongside teams working in Java, Perl, Python, Scheme, OCaml, and more, I've only ever seen experienced C programmers hold on to this kind of digital scrapbook.
I don't see how this can work.
You have your personal string library... and you move to a new project, and it has its own string library (that's pretty much a given, because the C stdlib string handling sucks). So what next? Do you rewrite the entire project to use _your_ string library, and enjoy the familiar environment, until the next "experienced C programmer" comes along? Or do you give up on your own string library and start learning whatever the project uses?
And this applies to basically everything. The "personal database" becomes pretty useless the moment the second person with such database joins the project.
(This is a big part of Python's popularity, IMHO. Maybe "str" and "logger" are not the best string and logger classes in the world, but they are good enough and in stdlib, so you never have to learn them when you start a new project)
It's not the "string library" that's important, but standardized interface types - so that different libraries can pass strings to each other while still being able to select the best string library that matches the project's requirements.
In C this standardized string interface type happens to be the pointer to a zero-terminated bag of bytes - not exactly perfect in hindsight, but as long as everybody agrees to that standard, the actual code working on strings can be replaced with another implementation just fine.
E.g. a minimal stdlib should mostly be concerned about standardized interface types, not about the implementation behind those types.
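A tiny sketch of the idea (hypothetical libraries): as long as both sides agree on the zero-terminated char* as the interface type, neither needs to see the other's internals, and either side can be swapped out.

#include <stdio.h>

/* "Library A": some string builder; its internals are irrelevant to callers. */
struct strbuf { char *data; size_t len, cap; };

static const char *strbuf_cstr(const struct strbuf *sb) { return sb->data; }

/* "Library B": only knows about the standard interface type, const char*. */
static void log_line(const char *msg) { printf("LOG: %s\n", msg); }

int main(void)
{
    char storage[] = "hello";
    struct strbuf sb = { .data = storage, .len = 5, .cap = sizeof storage };
    log_line(strbuf_cstr(&sb));   /* the two sides only share the char* contract */
    return 0;
}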
Are you saying that if you join an existing project which uses acuozzo_strzcpy, and you need to do some string copying, then instead of using the same functions that everyone already uses, you'll bring your own library and start using flohofwoe_strjcpy in all the code _you_ write (assuming both of those work on char* types, that is)?
This.. does not seem like a very good idea if you want your contributions to be received well.
> neither receiving nor sharing source code with others, treating third-party libraries as black boxes.
Tbh, this is an intriguing idea. Determine the size of a library (or module in a bigger system) by what one programmer can manage to build and maintain. Let the public interface be the defining feature of a library, not its implementation.
Right.
The article is well written and the author has touched upon all the points which make C still attractive today.
People point to GObject, say it's complicated, and compare it to C++ classes. But doing only what C++ classes do would also be far simpler in C. GObject is so complicated because it is essentially runtime creation and modification of classes (not objects - classes). Doing that in C++ would also be a lot of work and look ridiculously complicated.
> Code gets simpler because it has to, and architecture becomes explicit.
> The real goal isn’t to write C once for a one-off project. It’s to write it for decades. To build up a personal ecosystem of practices, libraries, conventions, and tooling that compound over time. Each project gets easier not because I've memorized more tricks, but because you’ve invested in myself and my tools.
I deeply appreciate this in the C code bases I work in (scientific computing, small team)
Agreed.
I generally try to use C++ as a "better C" before the design complexity makes me model higher-level abstractions "the C++ way". All abstractions have a cognitive cost and C makes it simpler and explicit.
Personally, I tried that, but it already breaks down for me once I try to separate allocation from initialization, so I am back to C really quickly. And then I want to take the address of temporaries or define types in function declarations, and C++ just declares that to be not allowed.
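A small illustration of two of those points (hypothetical types): taking the address of a temporary via a compound literal, and separating allocation from initialization - both valid C99, both rejected or impossible in C++.

#include <stdlib.h>

struct point { int x, y; };

static void draw(const struct point *p) { (void)p; /* stub */ }

void example(void)
{
    /* Compound literals are lvalues in C, so taking their address is fine;
       C++ has no equivalent (its temporaries are rvalues). */
    draw(&(struct point){ .x = 1, .y = 2 });

    /* Allocation and initialization as separate, explicit steps - no
       constructor runs behind your back. */
    struct point *pts = malloc(16 * sizeof *pts);
    if (!pts) return;
    for (int i = 0; i < 16; i++)
        pts[i] = (struct point){ .x = i, .y = 0 };
    free(pts);
}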
> The language shows you the machine, a machine which is not forgiving to mistakes.
Assembly does that; C not really. It is a myth that it does.
True, it doesn't give you the bare machine. What it gives you is the thinnest of machine abstraction with the possibility of linking to your own assembly if you have the demand for it.
Yet another myth, plenty of languages since JOVIAL in 1958 offer similar capabilities.
I am curious, what was it I said that you consider to be a myth? If I have some misunderstanding I would like to know. I looked at JOVIAL on Wikipedia quickly, but I can't see exactly how it would be thinner than C, or whether its compiler would output something vastly different from a C compiler's. Or did you mean it's as thin as C but it came out earlier?
Both, the properties that UNIX crowd assigns to C aren't unique.
Most think that way because they never learnt anything other than C and C++.
I see, you thought I meant that C was the only language with this property. No, there are plenty of others; I was fully aware of that. I, on the other hand, thought you meant that JOVIAL was even thinner, or more tuned to the underlying architecture in some way that made it thinner than C.
Ok, if you insist on an ultra-precise description - "C is the lowest-level language among those widely used".
Not even that, because C compilers nowadays are written in C++.
It is unrelated to the point.
I'm being pedantic, but on modern hardware, the ISA is an abstraction over microarchitecture and microcode. It's no longer a 1-to-1 representation of hardware execution. But, as programmers, it's as low as we can go, so the distinction is academic.
Still one layer below C, and with plenty of features not available on C source code.
Compiler intrinsics do give you C/C++ api access to relevant ISA subsets as platform-specific extensions.
I wound up going back to C in a big way about five years ago when I embarked on Scheme for Max, an extension to Max/MSP that lets you use s7 Scheme in the Max environment. Max has a C SDK, and s7 is written in 100% ANSI C. (Max also has a C++ SDK, but it is far less comprehensive, with far fewer examples and docs.)
Coming from being mostly a high-level language coder, I was surprised at how much I like working in this combo.
Low level stuff -> raw C. High level stuff -> Scheme, but written such that I can drop into C or move functions into C very easily. (The s7 FFI is dead simple).
It's just really nice in ways that are hard to articulate. They are both so minimal that I know what's going on all the time. I now use the combo in other places too (ie WASM). It really forces one to think about architecture in what I think is a good way. YMMV of course!
Nice.
Reminds me of optimizations done in the early days of Erlang and BEAM using C for performance reasons - https://www.erlang.org/blog/beam-compiler-history/
First off, I want to congratulate you on reaching this milestone. I think this is the state where the most seasoned programmers end up. They know how to write code that works and they don't need a language to "help" or "guide" them.
Enjoy!
If software development taught me anything it is that everything that can go wrong will go wrong, the impossible will happen. As a result I prefer having less things that can go wrong in the first place.
Since I acknowledge my own fallibility and remote possibilities of bad things happening I have come to prefer reliability above everything else. I don't want a bucket that leaks from a thousand holes. I want the leaks to be visible and in places I am aware of and where I can find and fix them easily. I am unable to write C code to that standard in an economical fashion, which is why I avoid C as much as possible.
This is, perhaps surprisingly, what I consider the strength of C. It doesn't hide the issues behind some language abstraction, you are in full control of what the machine does. The bug is right there in front of you if you are able to spot it (given it's not hiding away in some 3rd party library of course) which of course takes many years of practice but once you have your own best practices nailed down this doesn't happen as often as you might expect.
Also, code doesn't need to be bulletproof. When you design your program you also define a scope, saying this program will only work given these conditions. Programs that misbehave outside of your scope are actually totally fine.
Empirically speaking, programmers as a whole are quite bad at avoiding such bugs. Humans are fallible, which is why I personally think it's good to have tools to catch when we make mistakes. One man's "this takes control away from the programmer" is another man's "friend that looks at my work to make sure it makes sense".
How is one in full control of SIMD and CPU/OS scheduling in NUMA architectures in C?
Linux has libnuma (https://man7.org/linux/man-pages/man3/numa.3.html) while Windows has its own NUMA api (https://learn.microsoft.com/en-us/windows/win32/procthread/n...)
For CPU/OS scheduling, use pthreads/OpenMP apis to set processor affinity for threads.
For SIMD, use compiler intrinsics.
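As a hedged sketch of what that looks like on Linux/x86 - the affinity call is a GNU extension and the intrinsics are compiler extensions, so none of it is pure ISO C, but it is all reachable from ordinary C code:

#define _GNU_SOURCE          /* for pthread_setaffinity_np (GNU extension) */
#include <pthread.h>
#include <sched.h>
#include <immintrin.h>       /* x86 SIMD intrinsics (compiler extension) */

/* Pin the calling thread to one CPU: OS-specific, not ISO C. */
static void pin_to_cpu(int cpu)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    pthread_setaffinity_np(pthread_self(), sizeof set, &set);
}

/* Add two 4-float vectors with SSE intrinsics, which the compiler maps to
   specific instructions. */
static void add4(const float *a, const float *b, float *out)
{
    __m128 va = _mm_loadu_ps(a);
    __m128 vb = _mm_loadu_ps(b);
    _mm_storeu_ps(out, _mm_add_ps(va, vb));
}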
None of that is written in pure C, as per the ISO C standard.
Rather, they rely on a mix of C compiler language extensions and inline or external assembly helper functions, which any other compiled language also has available once you step outside the standard.
I think you are being nitpicky here.
When most people say "I write in C", they don't mean abstract ISO C standard, with the possibility of CHAR_BIT=9. They mean "C for my machine" - so C with compiler extensions, assumptions about memory model, and yes, occasional inline assembly.
I am, because people keep making C out to be something special that it isn't.
Other languages share the same features.
That is not an argument. ANSI/ISO C standardizes the hardware-independent parts of the language, but at some point you have to meet the hardware. The concept of an "implementation platform" (i.e. CPU arch + OS + ABI) is well known for all language runtimes.
All apps using the above-mentioned are written in standard ANSI/ISO C. The implementations themselves are "system-level" code and hence have language/HW/OS-specific extensions, which is standard practice when interfacing with low-level code.
> any language compiled language also has available
In theory yes, but in practice never to the ease nor flexibility with which you can use C for the job. This is what people mean when they say "C is close to the metal" or "C is a high-level assembly language".
It is, because C is nothing special, those features are available in other languages.
Proven before C was even a dream at AT&T, and by all other OS vendors outside Bell Labs using other systems languages.
Then people get to argue that C can do X - yeah, provided it is compiler XYZ's C dialect.
Not quite.
C took off because system programmers could not do with other languages what they wanted, with the ease and flexibility that C offered.
Having a feature in a language is not the same as how easy it is to span hardware, OS and application in the same language and runtime.
I'm perfectly happy writing C With Classes. There isn't a problem in my domain that can't be solved with C, and frothing at the mouth about "safety" simply isn't relevant. I program machines and I need a language that doesn't do everything possible to hide that fact.
C is a fine language. Sure, it's got sharp edges to poke your eyes out and big gears that will rip your arm off, but guess what? So does the machine. Programming a machine is an inherently unsafe activity and you have to act like a responsible adult, not some cargo-culting lunatic who wants to purge the world of all code written before 2010.
I'm going back to statically allocating my 2KB of SRAM now. Humbug, etc
> In C, you can see what the machine is doing. Allocations don’t hide behind constructors, and destructors don’t quietly run during stack unwinding. You can profile at the machine-code level without feeling like you’re peeling an onion, appropriately shedding tears the whole time.
This is why explicit control flow is an important design goal for a systems programming language. It is basically 2/3 of the core design principles in Zig.
Like setjmp()/longjmp() and signal(), very explicit. /s
The control flow is explicit; there is no language "magic" here. Non-local gotos in the former case and asynchronous callbacks from the OS in the latter case are pretty well known.
Except knowing where the jump lands, very explicit.
These are low-level api and hence if you know the caveats to follow, then using them correctly is not difficult; Eg. keep the context of the function calling setjmp active, don't use jmp_bufs allocated on a stack etc.
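For reference, the usual error-recovery shape that respects those caveats looks roughly like this - a sketch with a hypothetical parse function, the jmp_buf in static storage, and setjmp's frame kept alive for as long as longjmp may fire:

#include <setjmp.h>
#include <stdio.h>

static jmp_buf on_error;        /* not on a soon-to-vanish stack frame */

static void parse(const char *s)
{
    if (s[0] == '\0')
        longjmp(on_error, 1);   /* jumps back into run(), which is still active */
    /* ... */
}

static int run(const char *input)
{
    if (setjmp(on_error) != 0) {
        fprintf(stderr, "parse failed\n");
        return -1;
    }
    parse(input);               /* setjmp's caller remains live during this call */
    return 0;
}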
Not knowing how to do something is the fault of the programmer and not the language/tool.
I did a lot of c++ in the mid-90s, often on teams with experienced C programmers new to C++.
They had little appetite for C++, it was 90% mgmt saying ‘use the shiny new thing we read about’. I was the FNG who ‘helped’ them get thru it by showing them the tools & lingo that would satisfy mgmt.
OOP is non-scientific and the snake-oil hype made it cancerous. C++ has ballooned into an absurd caricature. It obfuscates business logic with crypto-like strength, it doesn’t clarify anything. I feel like a war criminal. Replacing C++ is one thing but ridding the world of the OOP rot is a far deeper infection.
I later spent years doing my Typhoid Mary bit in the Java community before abandoning the heresy. Repent and sin no more, that’s all one can do.
> OOP is non-scientific and the snake-oil hype ... ridding the world of the OOP rot is a far deeper infection.
You are spewing nonsense.
Read Bertrand Meyer's Object-Oriented Software Construction, Barbara Liskov's Program Development in Java: Abstraction, Specification, and Object-Oriented Design and Brad Cox's Object-Oriented Programming: An Evolutionary Approach for edification on OOD/OOP.
I program in C, and many of the reasons they mention here are things I like about C programming. I use C89 (and sometimes C99), although I do use some of the GNU extensions (which, as far as I know, both GCC and Clang support).
Returning to C after a decade away, I think the bottom line is that the reason why C has stuck around for so long is straight up path dependence. C is the foundation of Unix, and Unix-like operating systems took over the world, and C is a decent enough systems programming language that people started using it to write other OSes as well.
It bothers me that there’s some kind of mysticism around C. I keep seeing weird superstitions like, “modern processors are designed to run C code fast”. Or that there’s some inherent property of C that makes it “closer to the hardware”. The reality is just that C has been around for so long that there are millions of lines of optimizations handcrafted into all the compilers, and people continue to improve the compilers because there are billions of lines of C that will benefit from it.
FORTRAN is also crazy fast, but people don’t worship it the same way. SBCL and even some BASIC compilers approach the speed of C. And C is a high level language, despite what many people who have never touched assembler may assert.
C is not a bad language, and once you get your head around it you can write anything in C, but it’s absolutely full of landmines (sorry, “undefined behaviors”).
The author makes some really great points about the standard library. A lot of C’s pain points stem from memory management, string handling, etc. which stem from quirks in the standard library. And yet it’s possible to completely ignore the standard library, especially if you’re on an embedded system or bare metal. Personally I feel that a large standard library is a liability, and a much stronger property of a language is that the base language is small enough to keep the entirety of it in your head and still have the ability to implement any algorithm you need in a minimal number of lines without resorting to mental gymnastics, and still be able to read the code afterwards. I think this is why Lisp is so revered. I feel like Lua also falls into this bucket.
We need to stop starting flame wars about which language is best, cargo culting the newest shiny language, and instead just learn the strengths and weaknesses of our tools and pick the right tool for the job. You can machine almost anything on a lathe, but sometimes a drill press or a mill are much better suited for the task.
C's memory model made (some) sense when computers were very slow and mostly isolated but it is a complete disaster for code connected to the internet.
Good Article. The author has touched upon all the points that make C still attractive today.
A few more points;
C allows you to program everything from dinky little MCUs all the way up to honking big servers and everything in between. It also allows you to span all levels of programming, from bare-metal and system-level (OS/system utilities etc.) to any type of application.
There has also been a lot of work done, and available, on formal verification of C programs, e.g. Frama-C, CBMC etc.
Finally, today all LLM agents are well trained on the massive publicly available C codebases, making their output far more reliable.
PS: See also Fluent C: Principles, Practices, and Patterns by Christopher Preschern for further study.
I don't think most of the recent essays we're seeing defending C are coming from experienced C veterans, but instead from much younger programmers who were introduced to "The C Way" of doing things by Handmade Hero.
It's surprising the number of people for whom that series appears to have completely rewritten their understanding of programming. It's almost like when someone reads Karl Marx or Richard Dawkins for the first time and finds themselves questioning everything they thought they knew about the world. It's such an outsized impact for such a seemingly straightforward tutorial series.
C's memory model requires shared invisible invariants that can't be encoded in the type system or signatures. This makes C code incredibly fragile.