> ~/.cargo/
This reminds me...
STOP putting your shitty dot-directories full of a mix of config, cache and data in my god damned home directory.
Concerned that not doing this will break things? Just check if you've dumped your crap in there already and if not then put it in the right places.
Worried about confusing existing users migrating to other machines? Document the change.
Still worried? How about this: I'll put a file called opt-into-xdg-base-directory-specification inside ${XDG_CONFIG_HOME:-$HOME/.config} and if you find that file you can rest assured I won't be confused.
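To make the "check first, then do the right thing" idea concrete, here's a minimal sketch in Rust using only std - the config_dir helper and the app name are made up for illustration, not anything any real tool ships:

    use std::env;
    use std::path::PathBuf;

    // Hypothetical helper: prefer an already-existing legacy ~/.app directory,
    // otherwise fall back to ${XDG_CONFIG_HOME:-$HOME/.config}/app.
    fn config_dir(app: &str) -> Option<PathBuf> {
        let home = PathBuf::from(env::var_os("HOME")?);

        // Keep using the legacy dot-directory if it's already there,
        // so existing users aren't broken.
        let legacy = home.join(format!(".{app}"));
        if legacy.is_dir() {
            return Some(legacy);
        }

        // Otherwise use the XDG base directory, honouring $XDG_CONFIG_HOME.
        let xdg = env::var_os("XDG_CONFIG_HOME")
            .map(PathBuf::from)
            .unwrap_or_else(|| home.join(".config"));
        Some(xdg.join(app))
    }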
Thanks in advance!
I hate XDG. I prefer app/kinds-of-data. If home directory clutter is the problem, I would prefer having a ~/crap folder. So it would be ~/crap/.cargo
I am all for a "Please put everything in random subdirectories of /" config option for the installer of your Linux distribution of choice, where / is your home directory, PATH, configuration directory and everything else. But I think you're in the minority.
Stop giving me a mix of locations that aren't even used correctly and just put the dot file/directory in my home directory.
Thanks in advance.
If you have to blame the developer, blame the OS. Shit drops down from above. Maybe it's just old and crappy.
Blame the OS? Why? And what good would that do? The developers could have used XDG_CONFIG_HOME at the very least.
As a non-Linux expert I always wonder where to put crap - most places don't seem fit for purpose.
There is a standard which you should follow. Here you go https://specifications.freedesktop.org/basedir-spec/latest/
Yep, likely an example of downstream problems that should not exist.
So like ~/.ssh/ is bad? Or does it get grandfathered?
SSH has the argument that it's older than the specification, and a lot of applications that use SSH have hardcoded .ssh. Cargo? Nah, Cargo doesn't have that excuse.
I thought my comment addressed this specifically.
Yes, it's bad. If you predate the specification then see options 2 (check), 3 (document) or 4 (come up with a new spec for opting in).
I feel like a healthy mix is fine. Just like I wouldn't want my repo to be 90% cache, having 1-2 folders is fine.
I would upvote at least 10 times if I could.
This!
>Debug information tends to be large and linking it slows down linking quite considerably. If you’re like many developers and you generally use println for debugging and rarely or never use an actual debugger, then this is wasted time.
Interesting. Is this true? In my work (java/kotlin, primarily in app code on a server, occasional postgres or frontend js/react stuff), I'm almost always reaching for a debugger as an enormously more powerful tool than println debugging. My tests are essentially the println, and if they fail for any interesting reason I'll want the debugger.
In my experience Java debuggers are exceptionally powerful, much more so than what I've seen from C/C++/Rust debuggers.
If I'm debugging some complicated TomEE application that might take 2 minutes to start up, then I'm absolutely reaching for an IntelliJ debugger as one of my first tools.
If I'm debugging some small command line application in Rust that will take 100ms to exhibit the failure mode, there's a very good chance that adding a println debug statement is what I'll try first.
CLion adds the power of IntelliJ debuggers to Rust. It works exceptionally well.
Do you have more information about this?
Last time I debugged Rust with CLion/RustRover, the debugger was the same as VSCode uses.
Sure. It’s got breakpoints, and conditional breakpoints, using the same engine as IntelliJ. It’s got evaluate, it’s got expression and count conditionals, it’s got rewind. It has the standard locals view.
Rust support has improved in 2024 pretty strongly (before this year it just shelled out to lldb); the expr parser and more importantly the variable viewer are greatly improved since January.
Does it support evaluating code, in context, while debugging?
Yes*, mostly
It can do any single expression, and the results are better than lldb, but it can’t do multiple statements and not everything in Rust can be one expression; you can’t use {} here
> much more so than what I've seen from C/C++/Rust debuggers.
...have you ever used the Visual Studio integrated debugger for C/C++ instead of 'raw' gdb/lldb without a UI frontend?
I mean, even then C/C++ will optimize out stuff and you won't get as nice a one-to-one mapping as you do with e.g. Java.
That's why the debug build config one uses during development only does little to no optimizations.
My point is that even that little can sometimes lead to less pleasant experiences, like stepping half a function's body ahead.
Hmm, I've only seen that in some 'new-ish' languages sometimes (like currently Zig), which I think is a compiler bug when generating debug info. In C/C++ I see such random stepping only when trying to debug an optimized program.
Depends on the developer. In The Practice of Programming https://en.m.wikipedia.org/wiki/The_Practice_of_Programming, Brian W. Kernighan and Rob Pike say they use debuggers only to get a stack trace from a core dump and use printf for everything else. You can disagree, but those are famously good programmers.
But what source code debuggers did they have available?
Other than gdb, I can't name any Unix C source code debuggers. I believe they were working on Unix before GDB was created (wikipedia says gdb was created in 1986 - https://en.wikipedia.org/wiki/Gdb).
Plan 9 has acid, but from the man page and English manual, the debugger is closer to a CLI/TUI than a GUI.
see https://9fans.github.io/plan9port/man/man1/acid.html and https://plan9.io/sys/doc/acid.html
Looks like there was an adb in 1979 which is supposedly the successor to db.
What if the program doesn't crash, but just silently produces incorrect data somewhere inside the black box? I can find that error infinitely faster with a debugger.
Printf debugging excels in environments where there's already good logging. There I just need to pinpoint where in my logs things have already gone wrong and work my way backwards a bit.
You could do the same with a debugger setting up a breakpoint, but the logs can better surface key application-level decisions made in the process to get to the current bad state.
With a debugger I'd need to wind back through all the functions, which can get awful because some of them are probably-correct library calls that you have to skip over when going back in time, and those make up a huge portion of the functions called before the breakpoint. I don't think it's impossible to do with a debugger, but logging sort of bypasses the process of telling the debugger what's relevant so it can hide the rest, and it might already be in your codebase, whereas there are no equivalent annotations already in the code to help the debugger understand what's important.
To me, printf helps surface the relevant application-level process that led to a broken state, and debuggers help with hairy situations where things have gone wrong at a lower level, say missing fields or memory corruption, but these days with safer languages lower-level issues should be way less frequent.
---
On a side-note, it doesn't help debuggers that going back in time was really hard with variable-length instructions. I might be wrong here, but it took a while until `rr` came out.
I do think that complexities like that resulted in spending too much time dealing with hairy details instead of improving the UI for debugging.
I appreciate your candor.
I really value debuggers. I have spent probably half my career solving problems that weren’t possible to solve with a debugger. When I can fall back on it, it helps me personally quite a bit.
I find them amazing, it's just that printf is unreasonably good given how cheap it is.
If I had the symbols, metadata, a powerful debugger engine and a polished UI, I'd take that over printf every day, but in the average situation printf is just too strong when fighting in mud.
Why not both? For any sufficiently complex app you will have some form of logging either way, and then you can further pinpoint the issue with a debugger.
Also, debuggers can do live evaluation of expressions, or do stuff like conditional breakpoints, or they can just simply add additional logs themselves. They are a very powerful utility.
Your existing logs will tell you roughly where and you just insert some more log lines to check the state of the data.
It depends on how fast your build/run cycle is and how many different processes/threads are involved whether a debugger will be faster/easier, but a lot of it just comes down to preference. Most time spent debugging, for me at least, is spent thinking about the probable cause and then choosing what state to look at.
Logs are gold. Parsing logs can be very exhausting.
They use printf. Which they claim is faster.
Faster than pressing F9 (to set a breakpoint on the current line) and then F5 (to launch the program under the debugger)?
Printf-debugging has its uses, but they are very niche (for instance when you don't have access to a properly integrated debugger). Logging on the other hand is useful, but logs are only one small piece of the puzzle in the overall debugging workflow - usually only for debugging problems that slipped into production and when your code runs on a server (as opposed to a user machine).
It’s very interesting. I’ve tried to observe myself. It seems that if I can see a breakpoint somewhere and then examine state and then see what the problem is, a debugger is great.
If, however, it’s something where I need to examine state at multiple times in the execution, I lose track in my mind of the state I’ve seen before. This is where print debugging shines: I can see how state evolved over time and spot trends.
I'm not against printf at all, my lifetime commit history is evidence of that. Do you also think that in the case of a coredump not existing, that printf is faster? Sincere question. I'm having an internal argument with myself about it at the moment and some outside perspective would be most welcome.
Most of my time with printf debugging is spent trying to reason about the code not compiling.
though you should note that I'm repeating their claims. What I think is hidden.
printf isn't faster if you want to single step through code to find math precision errors.
I've had to do that on an embedded system that didn't support debugging. It was hell.
I’ve always wondered why embedded devs make less than “JavaScript-FOTM” devs.
The only time I use a debugger with Rust is when unsafe code in some library crate messes up. My own code has no "unsafe". I have debug symbols on and a panic catcher that displays a backtrace in a popup window. That covers most cases.
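Roughly this shape, if anyone's curious - a minimal sketch of such a panic catcher using only std (the popup window part is elided; eprintln! stands in for it):

    use std::backtrace::Backtrace;
    use std::panic;

    fn main() {
        // Install a panic hook that grabs a backtrace at the panic site.
        // A real app would show this in a popup window instead of printing it.
        panic::set_hook(Box::new(|info| {
            let bt = Backtrace::force_capture();
            eprintln!("panic: {info}\n{bt}");
        }));

        // ... rest of the application ...
    }

With debug symbols enabled the backtrace is symbolicated, which is most of what I need from a debugger in practice.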
Rust development is mostly fixing compile errors, anyway. Once it compiles, it often works the first time. What matters is compile time for error compiles, which is pretty good.
Incremental compile time for my metaverse client is 1 minute 8 seconds in release mode. That's OK. Takes longer to test a new version.
Debuggers are not only useful for actual debugging as in 'finding and fixing bugs', they are basically interactive program state explorers. Also "once it compiles, it works" is true for every programming language unless you're a complete newbie. The interesting bugs usually only manifest after your code is hammered by actual users and/or realworld data.
You're obviously not a Rust programmer.
Rust only protects from a very small subset of bugs (memory corruption issues and data races) but not from logic bugs which are far more common.
I use the debugger fairly regularly, though for me I'm on a stack where friction is minimal. In Go w/ VS Code, you can just write a test, set your breakpoints, hit "debug test", and you're in there in probably less than 20 seconds.
I am like you though, I don't typically resort to it immediately if I think I can figure out the problem with a quick log. And the times where I've not had access to a debugger with good UX, this tipping point can get pushed quite far out.
Debugging in Rust is substantially less common for me (and probably not only for me) because it is less often needed and more difficult - many things that are accessible in interpreted world don't exist in native binary.
I do care about usable tracebacks in error reports though.
The main challenge with debuggers in Rust is mapping the data correctly onto the complex type system. For this reason I rarely use debuggers, because dbg! is superior in that sense.
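For anyone who hasn't used it: dbg! prints the source location, the expression text and the value's Debug representation, and passes the value through, so you can drop it into the middle of an expression. A tiny example:

    fn half(x: i32) -> i32 {
        // Prints something like: [src/main.rs:3:5] x = 42
        dbg!(x) / 2
    }

    fn main() {
        // Prints something like: [src/main.rs:8:13] half(42) = 21
        let y = dbg!(half(42));
        assert_eq!(y, 21);
    }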
println debugging is where everyone starts. Some people never graduate to knowing how to use a debugger.
Debugging through log data still has a place, of course. However, trying to do all of your debugging through println is so much harder, even though it feels easier than learning to use a debugger.
I am comfortable using a debugger, but println debugging is easy, fast, and disproportionately effective for most of my debugging in practice.
I reach for a “real” debugger when necessary, but that’s less than 5% of the time.
I wonder, do you use a separate debugger, or a debugger that's integrated into your IDE? "Reaching for a debugger" is just pressing F5 in an IDE.
E.g. I keep wondering whether the split between people who can't live without debuggers vs people who rarely use debuggers is actually people who use IDEs versus people who don't.
Data point: I develop in Java and I use IntelliJ. I run everything in debug mode. So it’s really easy for me to enter the debugger.
But I find that if I have to step around more than a handful of times to find the issue then I forget what happened five steps ago. So I reach for print debugging quite often.
I use VS Code, and there's an extension that provides a debugger for the languages I use.
To be fair, if your code is multithreaded and sensitive to pauses, it becomes harder to debug with a debugger.
Ultimately, if you have a good logging setup and kinda know where the issue is a quick log message could be faster than debugging if all you want to do is look a variable value.
That is where OS tracing like DTrace and ETW come into play, which can then be loaded into a debugging session.
Logging can change timing issues though. There are too many cases where an added log statement "fixed" a race condition, simply by altering the timing/adding some form of synchronization inherent in the logging library.
That’s true but boy howdy does pausing the program at a breakpoint change timing!
printf/println debugging works if you wrote the code or have a good idea of where to go.
I frequently find myself debugging large unfamiliar code bases, and typically it’s much easier to stick a breakpoint in and start following where it goes rather than blindly start instrumenting with print statements and hoping that you picked the right code path.
I also don't get it, debuggers as integral part of the programming workflow are a productivity multiplier. It does seem to be a fairly popular opinion in some programmer circles that step-debugging is useless, but I guess they never really used a properly integrated debugger to begin with (not a surprise tbh if all they know is gdb in the terminal).
That is why I found it so great that Carmack's opinion on debuggers is similar to ours; at least there is some hope of educating the crowds that worship Carmack's achievements.
Is that crowd getting bigger or smaller though? When he worked for id Software, he was pretty popular in my circle of friends, because we were playing ioquake3 forks that we kept making mods for and so forth.
I would say among the folks that care about game development and graphics programming, people still listen to him with attention.
Outside that circle, maybe not.
In C++, for debugging a mid-sized app, gdb will sometimes take up to 5 min to start (assuming no remote symbol cache is used). On fairly powerful hardware - i7 13000-something, 64 GB of RAM. I have the time to do 15 compile-edit-run cycles adding prints in that time span before I have even reached main() in it. (And I really tried every optimisation: gdb-index, caches, split DWARF, etc. It just is absolutely mind-bogglingly slow and sometimes will even crash when reaching a breakpoint. Same for lldb. Those are just not reliable tools.) And I'm not even talking about the MSVS debugger, which I once timed to take 18 minutes from "start debugging" to actually showing a window, with all the symbol server stuff.
The VS story does not match my experience on AAA game projects. First, VS has always had the option to start debugging without loading any symbols at all. Second, you can load each module on demand. Third, the local file cache for symbol servers can be very much warm (i.e. have most of the needed symbols in RAM). Fourth, if your project is stuck on an old VS version, you can still debug with the latest version of the debugger in many cases; i.e. for us there is no limit on how many versions of VS a dev has on their PC. That might only be available if your org has volume deals with MS, though.
Downloading symbols for the first time from a network symbol server is slow, but it's not part of the debugging cycle, at least after the first run.
I trained in the cout school of debugging. I can use a debugger, and sometimes do, but it's really hard to use a debugger effectively when you're also dealing with concurrency and network clients. Maybe one day, I'll learn how to use one of the time traveling debuggers and then I can record the problem and then step through it to debug it.
I work in data engineering. I tend to do println debugging because the production data sets are not available from my machine. I tend to prefer REPL or notebook driven development from a computer that is connected to the production environment.
I came here to write exactly this .. if I was drinking something I would have spit it everywhere laughing when I read it.
I guess 'many developers' here probably refers to web developers who don't use the debugger, cause it's mostly useless/perpetually broken in JS land ..? I rely heavily on the debugger; can't imagine how people work without one.
Ironically the JS debugger is the only one I ever use because it's the only one that "just works".
The debuggers integrated into web browsers are actually really good, about the same level as most IDE-integrated debuggers.
when you write async JS code the debugger essentially adds no value over printing
Not in my experience, async JS/TS code is perfectly fine debuggable (at least with setting a breakpoint here and there).
> If you’re like many developers and you generally use println for debugging and rarely or never use an actual debugger
You lost me here.
Using a debugger to step through the code is a huge timesaver.
Inserting println statements, compiling, running, inserting more println, and repeating is very inefficient.
If you learn to use a debugger, set breakpoints, and step through code examining values while you go then the 8 seconds spent compiling isn’t an issue.
Engh... I used to rely on a debugger a lot 25 years ago when I was learning to program, and was extremely good at making it work and using its quirks; but, I was building overly-simplistic software, and it feels so rare of a thing to be useful anymore. The serious code I now find myself actually ever needing to debug, with multiple threads written in multiple languages all linked together--code which is often performance sensitive and even often involves networked services / inter-process communication--just isn't compatible with interactive debugging.
So like, sure: if I have some trivial one-off analysis tool I'm building and I run into some issue, I could figure out how to debug it. But even then I'm going to have to figure out yet another debugging environment for yet another language, and of course surmount the hassle of ensuring that sufficient debugging information is available and that I'm running a build that is unoptimized enough to be debuggable and yet also not so slow that I'm gouging my eyes out. I could use a debugger, but I'd rather sit and stare at the code longer than start arguing with one.
I wouldn't even start a new project before figuring out a proper debugging strategy for it. This also includes not using languages or language features that are not debuggable (stepping from one compiled language into another usually isn't a problem btw, debug information formats like DWARF or PDB are language-agnostic).
Your serious code isn't performance sensitive if it's a distributed monolith like you're describing.
It's just another wannabe big-scale dumpster fire some big-brain architect thought up, thinking he's implementing a platform that's actively being used by millions of users simultaneously, like Google or a few social media sites.
Trace debugging can be inefficient, but it's also highly effective, portable between all languages, and requires no additional tooling. Hard to beat that combo.
> Inserting println statements, compiling, running, inserting more println, and repeating is very inefficient.
That all depends on how long it takes to compile and run.
> If you learn to use a debugger, set breakpoints, and step through code examining values while you go then the 8 seconds spent compiling isn’t an issue.
Meh, it's fine.
You still have to set new ones and re-run your code to trigger the reproduction iteratively if you don't know where anything is wrong.
Clicking "add logging break point here" and adding a print statement is really not very different. In my experience the hard part is know where you want to look. Stepping through all your code line by line and looking at all values every step is not a quick way to do anything. You have to divide and conquer your debugging smartly, like a binary search all over your call sites in the huge tree-walk that is your program.
Stepping through code and adding breakpoints is a spectacular way to figure out where to put log statements. Modern code is so abstract that it’s nigh impossible to figure out what the fuck function even gets called just from looking at code.
I guess it depends on whether you are working with your own code or someone else's. If you work with your own code, you should have a pretty good idea already.
Debuggers are pretty good when you want to understand someone else's code.
Just ignore that part then? You don't have to stop reading the rest of the (very good) article lol.
Is it really saving time or are you not thinking enough about what is wrong until you stumble on an answer? I can't answer for you but I find that the forced wait for build also forces me to think and so I find the problem faster. It feels slower though but the clock is the real measure.
It's one click to set the breakpoint in the ide or one line if you're using gdb from the command line. I'm not sure how printf debugging could be quicker even if you didn't have to rebuild. Having done both, I'd take the debugger any day.
The important time is thinking time, and debuggers don't help. Often they hurt, because it is so seductive to set those breakpoints instead of stopping to think about why.
Sometimes "just thinking harder" works, but often not. A debugger helps you understand what your code is actually doing, while your brain is flawed and makes flawed assumptions. Regardless of who you are, it's unlikely you will be manually evaluating code in your head as accurately as gdb (or whatever debugger you use).
I think a lot of linux/mac folks tend to printf debug, while windows folks tend to use a debugger, and I suspect it is a culture based choice that is justified post hoc.
However, few things have been better for my code than stepping through anything complex at least once before I move on (I used to almost exclusively use printf debugging).
Step through debuggers and tracers are two different dimensions of debugging, and not directly comparable.
> Using a debugger to step through the code is a huge timesaver.
This is too slow and manual a process to be a huge timesaver; you'd need a "time-travel" debugging capability that eliminates these steps to actually save time.
There are a lot of different views on debugging with a debugger vs print statements and on what works better. This often seems to be based on user preference and familiarity. One thing that I haven't seen mentioned is issues with dependencies. Setting up a path dep in Rust, or fetching the code in whatever language you're using for your project, usually takes more time than simply adding some breakpoints in the library code.
Not linking debug info must be some kind of sick joke. What is the point of a debug build without symbols?
Enable them when you need to debug. This is for speeding up "edit-build-run" workflows.
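Concretely, something like this in Cargo.toml (a sketch - the "dev-debug" profile name is arbitrary): turn debug info off for everyday edit-build-run cycles, and keep a custom profile that switches it back on when you actually want a debugger:

    # Cargo.toml -- sketch; the "dev-debug" profile name is made up
    [profile.dev]
    debug = false            # skip generating/linking debug info in everyday builds

    [profile.dev-debug]
    inherits = "dev"
    debug = true             # full debug info when you want to attach a debugger

Then `cargo build --profile dev-debug` (or cargo run --profile dev-debug) when you need symbols.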
It's a lot faster to add a log point in a debugger than to add a print statement and recompile. Especially with cargo check, I really don't see the point of non-debuggable builds (outside of embedded, but the size of debuginfo already makes that a non-starter).
Yes, but we're talking about time spent with "add a print statement and recompile" vs time saved by not including debuginfo on every other build. You have to do that comparison yourself.
1. Skipping some optimizations to build faster.
2. Conditionally compiling some code like logging (not sure if matters for typical Rust projects, but for embedded C projects it's typical).
3. Conditionally compiling assertions to catch more bugs.
I'm using logs, because debugger breaks hardware. Very rarely do I need to reach debugger. Even when hard exception occurs, usually enough info is logged to find out the root cause of the bug.
> because debugger breaks hardware
What? Seems like you’re talking about embedded but I’ve done a lot of embedded projects in my time and I’ve never had a debugger that breaks the HW.
On embedded, debuggers almost never work until you get to really expensive ones.
In addition, debuggers tend to obscure the failure because they turn on all the hardware which tends to make bugs go away if they are related to power modes.
One of my "best" debuggers for embedded was putting an interactive interpreter over a serial interface on an interrupt so I could query the state of things when a device woke up even if it was hung--effectively a run time injectable "printf".
Crude, but it could trace down obscure bugs that were rare because the device would stay in the failure mode.
The biggest problem was maintaining the database of code so that we knew exactly what build was on a device. We had to hash and track the universe.
I’ve worked with many M3s and M4s and some Cypress microchips and the JTAG debuggers always worked fine as far as I recall. There were some vendors that liked to force you to buy really expensive ancillary HW but a) there was plenty of OSS that worked fairly well b) you could pick which vendor you went with.
All of those chips you mentioned will turn on all the units at full power when you connect a debugger to them.
And the OSS stuff never works correctly. I wind up debugging the OSS stuff more than my own hardware. And I've used a LOT of OSS (to the point that I wrote software to use a Beaglebone as my SWD debugger to work around all the idiocies--both commercial and OSS).
Depends what you're working on. Stopped in an unfortunate place? That one element didn't get turned off and burned out. Or the motor didn't stop. Or the crucial interrupts got missed and your state is now reset. Or...
Are there debugging tools specifically for situations like that? Do you just write code to test manually? How do you ensure dev builds don't break stuff like that even without considering debugging?
The most useful tool is a full tracing system (basically a stream of run instructions you can use to trace the execution of the code without interrupting it), but unfortunately they're quite expensive and proprietary, and require extra connections to the systems that do support them, so they're not particularly commonly used. Most people just use some kind of home-grown logging/tracing system that tracks the particular state they're interested in, possibly logged into a ringbuffer which can be dumped when triggered by some event.
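The core of such a home-grown tracer is tiny - a minimal sketch in plain Rust (a real embedded version would be no_std, interrupt-safe and timestamped, so treat this as the idea only):

    // Fixed-size trace ringbuffer: record() is cheap enough to sprinkle through
    // hot paths, dump() is called from whatever trigger you care about
    // (fault handler, watchdog, wake-up, ...).
    const CAPACITY: usize = 64;

    struct TraceBuf {
        events: [u32; CAPACITY], // encoded event IDs / state words
        next: usize,             // write index, wraps around
        wrapped: bool,
    }

    impl TraceBuf {
        const fn new() -> Self {
            TraceBuf { events: [0; CAPACITY], next: 0, wrapped: false }
        }

        fn record(&mut self, event: u32) {
            self.events[self.next] = event;
            self.next = (self.next + 1) % CAPACITY;
            if self.next == 0 {
                self.wrapped = true;
            }
        }

        // Dump oldest-to-newest.
        fn dump(&self) {
            let start = if self.wrapped { self.next } else { 0 };
            let len = if self.wrapped { CAPACITY } else { self.next };
            for i in 0..len {
                let e = self.events[(start + i) % CAPACITY];
                println!("trace[{i}] = {e:#010x}");
            }
        }
    }

    fn main() {
        let mut trace = TraceBuf::new();
        for ev in [0x1000_0001u32, 0x1000_0002, 0x2000_0003] {
            trace.record(ev);
        }
        trace.dump();
    }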
You ensure dev builds don't break stuff like that with realtime programming techniques. Dev tools exist and they're usually some combination of platform specific, expensive, buggy, and fragile.
printf and friends are fantastic when applicable. Sometimes the cost to even do an async print or even building in any mode except stripped release is impossible though, which usually leads to !fun!.
I mean that stopping execution will often break the software logic, for example a BLE connection will time out, an SPI chip will stop communicating because of the lack of commands, the watchdog will reboot the chip, etc. And then the whole program will not work as expected, so debugging it will not make further sense. Sorry for the miscommunication, I did not mean that the hardware physically breaks. It might be possible to solve some of those issues, but generally printing is enough, at least that was my experience.
Doesn't Rust have overflow checks in debug and skip them in release?
Rust has compile-time overflow checks enabled by default in any profile. Runtime overflow checks are disabled by default in the release profile.
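If you do want the runtime checks in release builds too, it's a single profile key (sketch):

    # Cargo.toml
    [profile.release]
    overflow-checks = true   # keep runtime integer overflow panics in release builds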
split-debuginfo = "unpacked" (-gsplit-dwarf for C/C++) is the way. Your tools (debuggers, profilers, etc) probably support it by now.
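In Cargo that's a profile setting, e.g. (sketch for the dev profile; "unpacked" keeps the debug info in per-compilation-unit files so the linker doesn't have to merge it into the binary):

    # Cargo.toml
    [profile.dev]
    split-debuginfo = "unpacked"   # like -gsplit-dwarf: debug info stays out of the link step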
It's funny that two different posts today are having the same "tabs vs spaces" argument about "printf vs debugger": https://news.ycombinator.com/item?id=42146338 (Seer: A GUI front end to GDB for Linux)
Surely it would be better to use a debugger and avoid recompiling your code for the added print statements than to strip out the debug information to decrease build times.
I build wasm as a target and this is sadly not an option; I have to rebuild for each little bit. I will be trying a different linker and seeing how it goes though!
This post is from February, any interesting changes/improvements since then? Progress on “wild”?
Looks like he's been continuing to work on it.
GitHub repo: https://github.com/davidlattimore/wild
List of his blog posts, a couple of which are about wild: https://davidlattimore.github.io/
Just my lack of experience here, but I'm trying to verify that I'm successfully using sold. The mold linker claims it leaves metadata in the .comment section, but does that exist on Mach-O? Is there a similar verification command using objdump, like the readelf command for mold?
Last time I tested mold/sold on Mac, there was no measurable time difference.
Good set of tips. Thank you.
It was my pleasure.
thanks
You're welcome
I get downvoted for thanking you. stupid HN folks :D