On a general note, I would recommend that any new (and experienced!) programmer master the debugging tools of their ecosystem. I've seen countless experienced developers use printf-based debugging and waste hours debugging something that could've been easily figured out by setting a breakpoint and stepping through the code. This is also a good way to understand code you're unfamiliar with.
This is one area where I believe a GUI tool is so much better: I can hover over variable names to view their values, expand and collapse parts of a nested structure, edit values easily, and follow execution in the same environment I write my code in.
Sure, it doesn't help much for some scenarios (one I've heard people mention is multithreaded code, where logs are better?), but for most people it's not that far from a superpower.
Interesting.
My experience is the opposite: I see developers waste hours stepping through their code a line at a time when a few judiciously placed logs (printf()s are fine, but we can do better) would have told them exactly what they needed in a jiffy.
If you have a fairly shallow bug, that is a single point in your code that always behaves incorrectly, then I find debuggers reasonably effective.
But most of the bugs that I see aren't that shallow, with code misbehaving when the context is just so and perfectly fine otherwise. In those cases, I need to see lots of different invocations and their context. The debugger is like trying to drink the information ocean I need through a straw. A mostly plugged straw.
I wonder what makes our experiences so different? Do you unit test a lot? Particularly with TDD? I am guessing that this practice means I just don't get to see a lot of the bugs that a debugger would help me with.
(And it doesn't mean I never fire up the debugger. But it is fairly rare).
I have more or less the same experience as you. Logging is a very resilient and adaptable technique: I can use it on my laptop or on remote HPC clusters, almost regardless of programming language (except maybe Haskell); it works fine on parallelized code, and so on, with very little configuration needed. It’s also important to me that it can be done “async”, since some of my larger codes can only be run on HPC clusters by putting a job in a process queue and waiting.
I’ve tried debuggers and see the appeal but I find it less useful than print debugging / logging.
I also rely heavily on unit tests when writing new code, so that also reduces the surface that I need to look for bugs based on the log. Moreover, most of my projects have 1-3 programmers and can largely “fit in my head” (<10,000 lines of code), so it’s probably different if you work at a FAANG company or something.
I think you have a great point here. Debugging tools make you dependent on a particular environment; print-based debugging works pretty much everywhere. If you master printf debugging, you can solve any debugging task.
Yes, portability and simplicity are the best parts of printf.
> If you master printf
The skill ceiling is low. Printf only does so much.
You could rope in environmental optimization to the skill discussion -- the ability to isolate areas of functionality, replicate problems, reason about unknown state, and do the legwork so that you can quickly spin through the increased amount of iteration required by a simpler debugging tool -- but by then you have thoroughly sacrificed both simplicity and portability and are far past the skill floor of a debugger.
If we assess this by looking for problems created by overcommitting to one approach or another, overcommitting to a debugger looks like burning time trying to get tooling to work on a problem that doesn't really need it while overcommitting to printf looks like spending way too much time iterating on tiny steps that could have been jumped over given better visibility. I've seen both, of course, but I tend to see more of the latter and more denial about the latter. When you're burning time fighting tools it's obvious. When you're burning time because you don't know how a tool could have saved you time, it's less obvious.
YMMV.
> the ability to isolate areas of functionality
This is the key. You need to be able to narrow down where the bug is.
Not the OP but...
> programmer master the debugging tools of their ecosystem. I've seen countless experienced developers use printf-based debugging and waste hours debugging something that could've been easily figured out by setting a breakpoint and stepping through the code.
If you're wasting hours with printf-based debugging, I don't think you've 'mastered the debugging tools of the ecosystem'.
There are multiple ways to debug - step-debugger tools, printf, logging to a file, etc. Each has its place.
If you're spending hours on any one approach, and perhaps that's the only approach you know, that's a red flag.
If you've spent hours going through printf, logging and step debugging and STILL don't have a good answer... bring in external eyes.
I've found/fixed bugs in a few minutes by adding some log stuff first, because in those cases it's the easiest approach. In other cases, running a debugger and setting a couple of breakpoints is indeed the easier approach to start with, and I've done that.
Sometimes you find it with the first approach, sometimes you need to try the next approach.
I would guess longer compile times would encourage breakpoints over printf, and this would be programming-language specific.
Being able to change breakpoints at runtime helps a lot when tracking down something more complex. Visual Studio breakpoints are great, and they’ve added conditional breakpoints, which are even better. Previously I would approximate this by having code specifically branch to hit a breakpoint: ‘if (X) { breakHere(); }’
I write a fair amount of native C++ code but only call it from either Python or dotnet, so when I make a mistake it’s usually a segfault / memory-access issue which kills the process. There might be a way to debug the C++ from dotnet or Python, but logging to stdout helps me isolate the location of the issue, which is sufficient. It’s not a big enough problem, and I doubt that either writing tests in C++ or learning a native debugger would pay off in time saved.
A good debugger can provide more than just stepping through code.
In IntelliJ with Java, you can set conditional breakpoints with complex evaluations; you can set filters (only hit a breakpoint depending on where it is called from); you can use exception breakpoints that trigger on certain exceptions instead of at a specific line of code; and you can use logging breakpoints, which act like printf debugging without scattering print statements all over your code.
You can group, add descriptions, disable, enable, and add temporary breakpoints; they are pretty powerful! I just wish IntelliJ had a time-travel debugger like Visual Studio Pro.
https://www.jetbrains.com/help/idea/2024.3/using-breakpoints...
Inserting a breakpoint is just as easy as a printf, and as long as you're still using a debugging build, you don't have to recompile. With the printf you might not have considered all the variables you need, so you have to go back, insert, and recompile. With a breakpoint you can inspect the contents of anything at that scope, and even see what the code flow is with that given state. You can even save a core dump to go back to later.
You can also script breakpoints to output the info you want and continue, giving you your information ocean.
Basically, a debugger is a more efficient and powerful tool. In the one situation where you're not yet skilled with a particular debugger feature, a printf can be quicker than having to learn it, but it's objectively worse.
You can insert and remove breakpoints while running. You can inspect variables the instant you realize they might be relevant.
During my long career, I’ve always been told “You should know your code well enough that a few well-placed printfs is the most you’ll need to understand a bug”.
But, most of my career has been spent debugging large volumes of code written by other people. Code I’ve never seen before and usually will never see again.
A debugger making a 10X productivity difference for me is no joke.
> With the printf you might not have considered all the variables you need, so you have to go back, insert, and recompile
In some languages, such as Python, it's fairly easy to write a debug-print function that prints all the local variables (as well as the function name and line number it was called from).
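A minimal sketch of such a helper (the name dump_locals is made up; it relies only on the standard inspect module):

    import inspect

    def dump_locals():
        # The caller's frame is one level up from this helper.
        frame = inspect.currentframe().f_back
        info = inspect.getframeinfo(frame)
        print(f"{info.function}() at {info.filename}:{info.lineno}")
        for name, value in frame.f_locals.items():
            print(f"  {name} = {value!r}")

Drop dump_locals() anywhere in a function and it reports the enclosing function, its location, and every local with its repr.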
That misses the mark. You can’t really compare a hackish ”print the world as a string“ function against a debugger's ability to stop time, walk around, pick things up, slice them open, put them somewhere else, and start time again.
That’s not just a different league; it’s playing a whole different game.
I agree with both of you. Printf is not enough; breakpoints are not enough. The solution lies in between: the ability to rapidly gather relevant information and converge on wrong states.
ps: I wish I could work on a porcelain layer to manage breakpoints in a more logical manner. Given a problem, you'd create different sets of breakpoints, run various tests, and gather the results to review, with the ability to add or remove layers rapidly. It's probably not too hard to do.
I have found combining these things useful: breakpoints that print stuff and auto-resume the program. That lets you attach trace points at will without requiring a recompile or losing state.
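In GDB that looks something like this (a sketch; the file, line, and variable names are made up):

    (gdb) break parser.c:120
    (gdb) commands
    > silent
    > printf "token=%d state=%d\n", token, state
    > continue
    > end

Every pass over parser.c:120 now logs the two values and resumes on its own; delete the breakpoint and the "instrumentation" is gone, no rebuild involved.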
Yes losing state is a killer
Yes, exactly. Very generally, I usually use breakpoints when I am in the exploration stage of a significant state bug, and I usually use logging when I generally know where the bug should be but need to pinpoint the exact place.
Another thing to consider, and one that is important to me: logging objects and state isn't always so simple. It can often be easier to open the debugger and look at the state of an object which cannot easily be printed.
Out of interest - what sort of objects are hard to print in this way but easy to view in a debugger?
An object with many fields (in a language with no conveniences for it). An object tree with multiple levels of nesting. A list or dictionary of such objects.
In general, print-based debugging requires a greater degree of specificity. If you know exactly what you're looking for it's great.
If you are performing a more exploratory sort of debugging, a decent graphical debugger will save you a ton of time.
In my case it's basically everything, since I work in Java; Jackson's ObjectMapper can easily get stuck or deserialise something incorrectly if the class hasn't been annotated correctly. So it's simpler for me to pull up the debugger, where I can see the "actual" data that makes up my object, and it also lets me run "queries" against anything of interest (e.g. computed fields).
The default toString method I've found to be useless almost every time I wanted to inspect an object in our codebase, since it just prints the type + "id" of the object.
> If you have a fairly shallow bug ... But most of the bugs that I see aren't that shallow
Oh come off it, debuggers shine the brightest when there are lots of unknown unknowns. With printf debugging you can peel back exactly one layer at a time (oops, need to log one more thing) whereas with a debugger you can slice through the Gordian knot.
I agree with this comment but you really didn't need the first 4 words.
Try using a debugger to debug a globally distributed system that humans aren't given access to for security reasons, in a post-mortem, and then "come off it" yourself
Is that the only kind of difficulty you can think of? Lol.
I've also used both GUI-based debuggers and printf, and I prefer printf. But the most important thing is to write your code so there aren't many bugs, and when there are, they are easy to find. I do this using modular code, unit tests and regression tests.
I use profiling tools more than step-debugging, and printf()/var_dump()/IO.inspect/System.out.println/&c. much more than both, because most of the time I just need to see what the data looks like in a few locations to have a solution.
Sometimes the problem doesn't show up immediately in the data, and the code is too complex or uses a lot of wormhole techniques (like particular forms of exception abuse); that's when I might fire up the debugger and browse frames instead.
It depends on the code as much as anything. I wrote a regex engine in Zig, and the instant I get a bug report I set breakpoints on a failing test and step through.
On the other hand, I'm working on an interactive application, and when I see a problem with it, I add more logging statements until I figure out what the problem is. Any time the logs have excessive detail as a consequence, I gate them behind an 'extra' flag on a per-unit basis, only removing the ones which amount to "got here".
If I had to pick one technique, it would be logs. I naturally think in terms of a trace through the execution pathway, rather than a step-by-step examination of the state of a small window into the code. It clearly works the other way around for some people.
One thing that makes this approach better for me is that debug logging is literally free: Zig uses a lazy compilation model, so logging code which doesn't apply to a release compilation isn't even analyzed, let alone compiled, let alone included. In a language which doesn't work that way, there's motive to use printf-only debugging and clean up after yourself afterwards, and that's extra work compared to firing up a debugger. So it shifts the balance.
My grief with debuggers is due to C++ and template code (usually STL) and optimizations zeroing out values. I wish it had better training wheels to say "nope that's STL library code, you probably don't want to step any deeper". That's largely a criticism of C++ itself, though. But yes for this reason I prefer printfs despite 20 years in the game.
Visual Studio has natstepfilter
https://github.com/ocornut/imgui/blob/master/misc/debuggers/...
gdb has a skip command, and you can exclude STL headers and functions named in special ways, so that reduces the noise quite a bit.
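For example (the glob and regexp here are just illustrative):

    (gdb) skip -gfile /usr/include/c++/*
    (gdb) skip -rfunction ^std::

The first skips everything defined under the system C++ headers while stepping; the second skips any function whose name starts with std::.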
But also, experienced programmers should never forget their printf debugging roots.
I was debugging something earlier this week that was hit like a hundred times in a tight loop. After the first dozen or so times I told gdb to continue, I realized, wait, this will be faster if I just fprintf some relevant information to a file. Sure enough the file pointed me in the right direction, and I was able to go back and get fancy with "disp" and "cond" and hit that breakpoint only when I needed to.
You could also use GDB's Dynamic Printf (https://sourceware.org/gdb/current/onlinedocs/gdb.html/Dynam...) to do the logging directly from GDB.
Essentially you set it like a breakpoint (attaching a printf style string to a code location) and then just "continue" until you've gathered what you want.
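For example (location, format string, and variables are made up):

    (gdb) dprintf parse.c:101, "i=%d value=%s\n", i, value
    (gdb) continue

Each time execution reaches parse.c:101, GDB prints the formatted line and keeps running, with no recompile and no source changes.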
Oh sweet. I didn't know about that. I will be adding that to my toolbox.
In JetBrains IDEs you can also set a conditional breakpoint to activate only if i == 100 or something.
Well yeah, but I didn't know I wanted i = 100 until I had examined the fprintf output.
You need a time travelling debugger!
https://learn.microsoft.com/en-us/windows-hardware/drivers/d...
What happens if the loop executes some non-idempotent calls? I guess printf debug still has some value :)
Sometimes I have had much better results with adding logs, especially if an issue doesn't always occur and I am not so sure about the steps either, because breakpoints take a lot of time as well. Also, in some cases (esp. mobile UI) breakpoints might actually break the flow and you might not get a proper repro. But yeah, mastering the debugger is indeed a must, and a GUI debugger is better than a CLI debugger. It's just that, at least for me personally, logging is the first line of debugging :|
> Sure, it doesn't help much for some scenarios (one I've heard people mention is multithreaded code, where logs are better?), but for most people it's not that far from a superpower.
Debuggers can be great for understanding multithreaded code - and you can potentially freeze threads and continue others in order to provoke a particular race condition.
However they're potentially quite weak at stepping through a concurrency bug - stopping after each line to understand the sequence of events has a good chance of making your bug go away.
I'd say you want Time Travel Debugging if you need to capture and step through a rare event: you get to record the bug happening (without interrupting it) and then step through the recording.
On Linux, Undo.io (disclaimer: where I work) and rr (open source) are good at this.
On Windows, you have Microsoft's own Time Travel Debug solution: https://learn.microsoft.com/en-us/windows-hardware/drivers/d...
(nb. there's also GDB's built-in process record technology but I'd recommend against that for any non-trivial software as the overheads are very high)
I've had logging slow down a concurrency issue enough to cause the race condition to never appear when logged, but setting a breakpoint (and stopping all threads) prevailed.
Idk. I feel like the second you need to use a debugger, your code and design have become too complicated and need to be rethought.
In general, anything you would want to debug should probably be exposed as a unit test, and the area of concern should have test cases that trigger the behavior you are concerned about.
The process of debugging is essentially the same work you would do to create a unit test. But while it is faster, the result is lost once you're done, making the entire process one-shot.
I would add that in most scenarios where people think debugging doesn't help or won't work, it can.
Running inside Docker, multithreaded, multiprocessed: all can be debugged with a little effort, most often with much less effort than repeated printf debugging.
And a good debug log can help more than either of those.
I am not sure I understand your point. Can you be more explicit?
People who think that case X cannot be debugged without printf often don't know the features of their debugger. E.g., look at several of the comments here, which seem to miss that you can:
- Remote debug.
- Use conditional breakpoints (see the sketch after this list).
- Use breakpoints to trigger commands (e.g. log values, enable other breakpoints) instead of stopping execution.
- Debug multi-threaded code.
- Disassemble a fragment.
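For instance, conditional breakpoints in GDB, as a sketch (the location and conditions are made up):

    (gdb) break order.c:57 if total < 0
    (gdb) condition 2 total < 0 && retries > 3

break ... if attaches the condition when the breakpoint is created; condition N replaces it later on breakpoint number N (here assuming it got number 2), all without touching the code.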
Just yesterday I gave a talk at MeetingC++ in Berlin on debugging multithreaded code. It's amazing how few developers know anything beyond the very basics of their debugger. If all you know is print, break, continue, and next, and you then dismiss the debugger as "not very useful", then you've made a judgement based not on information but on initial reaction.
I have a bunch of (36 if you're counting :) short videos and blog posts introducing the advanced features of GDB: https://undo.io/resources/gdb-watchpoint/
Many people believe that a debugger won't work in their specific scenario, but often they are wrong; debuggers can connect across network boundaries or into processes that weren't launched by the IDE
I use debug logs extensively. I log a LOT. I can put the logs and code next to each other and trace through the code. So much better than a debugger. With the logs, I don't have to worry about timers or concurrency or any of that. I can take my time and read the code and reason about what's going on.
Edit: Logging helps me look at what is going on in prod as well. I can trace messages/transactions completely through the path and if there's an issue, I'll see it.
At least in embedded, often the tools suck.
Some ancient version of NetBeans leaking RAM like a sieve until it brings down the machine, or a decade-old version of Eclipse that can't pull in a newer CDT, running on a fork of OpenOCD with nothing customized for the CPU architecture, running dog slow.
Sadly, it can be faster to reserve a GPIO, bitbang a TX-only UART, and get on with it.
Rider shows the value of assignments in your code, on the line they are assigned when you are debugging. This has saved me time when I notice that a value I didn't even think about looking into was wrong.
I honestly believe it's a cultural thing. I don't think there's any rational reason not to use a debugger, but I've met many people who swear it's useless. I think people in web development tend not to use debuggers, and a lot of Linux people as well. Every game developer I've met and most Windows programmers use them
I made heavy use of interactive debuggers earlier in my career. After several years working in environments where debuggers were broken, unhelpful, or not available, I completely lost the habit. It was not so much a cultural thing as simple practicality: logging always works. I'd rather maximize my limited brainpower by focusing on the software I'm building and thinking as little as possible about the tools I'm using.
It may be somewhat cultural in that influence from functional programming changed the way I think about state and state transitions, leading me to design my code differently, reducing the amount of debugging I have to do and making it easier to do via logging.
Even in multithreaded code, it's absolutely amazing to be able to pause a running program and look at the list of running threads and the values in scope and see where deadlocks might be sitting.
It's immediately obvious you're deadlocked, which is actually kind of tricky to suss out with log-style debugging.
Modern debuggers can do so much, being able to lay down conditions to only break when certain values are set, etc. etc. Some can even "rewind" programs. I'd say most people (including myself) are using only 25% of their debugger's capabilities.
Aside: one of the reasons I despise working with async Rust code is the mess it makes of working with a debugger.
In case you are on Windows, and connected to Linux and/or using WSL, you can also use WinDbg/Visual Studio to remotely debug Linux processes!
Oh nice - reminds me of DDD(1). DDD was like magic the first time I saw it. Oh wow - DDD is still maintained?? :-D
DDD was taught to me at university, 20 years ago, and it already felt clunky. My views are now much more moderate, but Motif still feels like an eyesore.
Conversations over the years have shown me that DDD was a great inverse marketing tool, ironically pushing developers towards the embedded debugger UI in their favorite IDEs... despite DDD itself being indeed very powerful. But even "usefulness over aesthetics" has its limits!
There's one DDD feature that I haven't found elsewhere: its graphical representation of a struct and its contents. You can double-click on a pointer field and then it draws whatever that field pointed to, with a nice arrow connecting the two.
I've found it a very powerful yet compact way to visualize the state of a program when debugging.
yes! this was so great in college to learn pointers and visualize linked lists
Current DDD under the updated OpenMotif with TTF fonts can look much better than it did in the '90s and '00s, miles ahead of LessTif and the former proprietary Motif. It blends perfectly with EMWM, where I have Liberation Sans/Mono for almost everything.
Motif does not work well with high-resolution displays, sadly
Motif today supports TTF, and for the high-DPI issues you can set the DPI option for X11 in ~/.Xdefaults
Yeah, I remember DDD being an incredible tool back in the day, but it was clunky even when it was new (1995).
I love that DDD has a variety of graphical visualisations built in. I always thought the ability to visualize data structures was particularly cool.
A while ago there was a project to port it to GTK3 but I think that went away. I'm glad the mainline project is still going.
yeah - the 'data display' part was the real killer feature :-)
> DDD is still maintained?
Absolutely. I wrote about its features here https://begriffs.com/posts/2022-07-17-debugging-gdb-ddd.html
Since the article was written, the maintainers fixed the issues I pointed out. No need for many of those workarounds now. Versions 3.4.0 and 3.4.1 are substantial.
DDD is great. I still use it, but I am a fossil. I sought out DDD when I was looking for something similar to dbxtool, which I used on the early Sun Microsystems machines. Folks today are spoiled with things such as Source Level Debugging.
I always liked the concept with DDD, but I could never keep it from crashing more than the program I was trying to fix.
It is much better now.
And it still uses Motif! Awesome!
I built it and tried it out a bit with Godot on Linux. It seems OK (the UI is a bit on the "- how many widgets do you want? - yes" side), but also a bit janky. Trying to change the font for the editor didn't work. Hovering over a variable to see its value either does nothing (though there is a sub-second cursor change indicating something is supposed to happen) or shows an error from GDB about trying to use an expression with a type or keyword (so there was an intent to show a value in a tooltip; it is just broken). Double-clicking on a variable does add it to a panel with its current value and a timestamp, so the functionality for reading values/expressions from the UI is there too, just not wired up the same way as the tooltips.
If polished a bit it could be useful, though of all the frontends I've tried, the one I disliked the least (none are great) is Gede[0] (which I just noticed had a new release a few hours ago), as it has a very simple and straightforward UI, and while it doesn't expose much functionality, what it exposes seems to work fine without bugs.
> it has a very simple and straightforward UI, and while it doesn't expose much functionality, what it exposes seems to work fine without bugs
Nice one, I will add it to my notes and use it next time I need to debug. The last thing I want when looking for a bug in my own code is to have to deal with bugs in the debugging tools.
After trying many frontends for gdb, I find that the TUI is the best. You just need to know about Ctrl+L to redraw when your program prints to the terminal, because the interface gets garbled.
I just put:

    layout src
    set confirm off

in my $XDG_CONFIG_HOME/gdb/gdbinit.

I've used "gdb-dashboard" a lot and would recommend it. It's similar to the TUI (though I haven't used the TUI much), but you can pick and choose from a large variety of information to display, and the colors make the output much easier to read.
You can also make the dashboard display on another terminal, or across multiple terminals, letting you create a much nicer window layout. I've scripted this up with tmux before to have it automatically create the terminal layout and connect the terminals to gdb; you can create really nice layouts that way (though it can be a lot of effort).
I like a colored prompt with

    set prompt \001\033[01;36m\002(gdb)\001\033[0m\002

and I save history with

    set history save on
    set history size 500000
    set history filename ~/.cache/gdb/history
I like to do that as well, I just want to keep it short :) Also, I use vim mode for bash, and since it's set in .inputrc I also get vim mode in gdb, which I like a lot even though it's not as good as zsh's.
GDB also has a built-in text user interface (TUI) that is surprisingly easy to use[1]. It even supports mouse interaction.
[1] https://sourceware.org/gdb/current/onlinedocs/gdb.html/TUI.h...
Only works if GDB has been built with TUI support, sadly :(
Is it not usually? I've never had to compile gdb myself to get TUI
… If you are debugging C code, surely you’re able to compile a debugger with whatever options you want?
This is a Qt UI for GDB.
There's also gdbgui that I know of, a web-based UI for GDB.
Always good to see more movement in the debug tooling
To add on to the pile of more GDB GUIs, here's one I've made: https://github.com/dzaima/grr. Though it's still missing a fair number of features that may be essential for some uses (my usage is largely assembly-level, which doesn't need too many fancy features).
Speaking of web-based debuggers, I recently created a similar project, but focused on x86-64 assembly debugging: https://github.com/robalb/x86-64-playground
VS Code also has an OK gdb frontend; very nice when you are debugging embedded microcontrollers
I sometimes need to use gdb to investigate bugs in C or Ada, but it is not my main activity. As a result I will not invest days in setting up a debugging environment that I will not remember how to use 6 months later. My solution: I use Emacs and keep a short note with the instructions:

    M-x gdb, then: gdb -i=mi exe_full_name -p 29123
    M-x gdb-many-windows
    set follow-fork-mode child
A mediumish discussion 2 years ago https://news.ycombinator.com/item?id=33044885
When I used to program in C++ on Linux 10+ years ago, I used Qt Creator which has a built-in debugger (GDB frontend). It worked great and I don't see a reason to use anything else for C++ [and Qt].
For the Emacs users in the crowd, GUD is a pretty great GDB integration.
Ever since the advent of LSP, Emacs has felt superior to everything else. I have no reason to leave it. Especially once they made it faster with native comp.
Like why should I keep trying this month's new editor with a couple new gimmicky features, when I can just pop a plugin onto Emacs that adds that exact feature set, while maintaining everything else how I like it.
I first really got into coding when Atom was a thing, and then that died off and became VS Code and I was pretty sad about it, because while VS Code is good, it doesn't follow the same philosophy as Atom. But then I took the time to learn Emacs ~4 years ago, and nothing new ever comes close to convincing me it's outdated tech that I need to move on from.
That was a random rant, but I just really appreciate Emacs, and I'm glad it's stuck around.
Even before the dawn of the LSP era, Emacs was pretty great with ctags.
Yeah I first gave it a try before LSP and I don't think I was ready to be redpilled yet, because it didn't stick. So I can't comment on the state of it before then. I kinda joined once the LSP stuff got fairly smoothed out.
I prefer the GDB Graphical Interface in Emacs[1] (M-x gdb), rather than the more basic integration via GUD[2] (M-x gud-gdb). I’ve had to switch to GUD to run lldb recently, and I miss having dedicated windows that show breakpoints, threads, the current stack, etc.
The one nice thing about GUD is that the interface is consistent across debuggers, so I don’t need to refresh myself on the keyboard shortcuts when switching between debugging Python with pdb and C++ with lldb.
[1] https://www.gnu.org/software/emacs/manual/html_node/emacs/GD...
[2] https://www.gnu.org/software/emacs/manual/html_node/emacs/St...
lsp-mode + dap-mode also work well, but require some manual fiddling with launch.json files
Does anyone have experience with this and can compare it to kdbg?
Debugger vs printf:
Has anyone found a reliable way to use a debugger when you have a) multi-process b) multi-threaded c) async d) timeouts? I would love to use a debugger but printf and logs “just work”
eclipse-cdt includes a GDB integrated debugger UI for C and C++ since forever. What's new here?
Eclipse CDT is a whole IDE; it is rather complex, one might even say convoluted; and it is Java-based. Seer is none of these things.
Of course - it's not like a GDB GUI is a novelty in itself; there are quite a few. But a GDB-GUI-only utility is a meaningful and important niche to consider.
Qt Creator itself can use gdb/lldb and display a fair amount of data structures from the standard library.
Note that gdb is also scriptable with Python, so you can easily register your own printers.
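A minimal custom printer, as a sketch (the Vec3 struct is hypothetical; the calls are the standard GDB Python API):

    import gdb

    class Vec3Printer:
        def __init__(self, val):
            self.val = val

        def to_string(self):
            # gdb.Value supports field access by name for struct types.
            return "Vec3(%g, %g, %g)" % tuple(
                float(self.val[f]) for f in ("x", "y", "z"))

    def lookup(val):
        if str(val.type.strip_typedefs()) == "Vec3":
            return Vec3Printer(val)
        return None

    gdb.pretty_printers.append(lookup)

Source it from your gdbinit and "print my_vec" shows the friendly form instead of a raw member dump.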
See also https://github.com/nakst/gf
Slightly off topic, but I think this is a good place to ask: one of the few things from Windows that I miss when using Linux is the debugging experience with Visual Studio (not Code). When debugging a medium-sized C++ project on Windows, the launch of the debug build is pretty fast and stepping over lines is almost instantaneous. On Linux, launching the executable under gdb takes like 10 seconds loading modules, and stepping over each line takes like half a second, which I think is intolerable (lldb is even worse). Yet I don't see people complaining about this online very much. Am I missing something? E.g. is there a compiler flag that speeds up debug launch time and step speed that I am not using?
Ensure you're building with DWARF5, and enable accelerator tables like .debug_names, which will allow debuggers to receive a pre-prepared index of the symbol names in the program (and thus it doesn't have to parse all the DWARF on startup).
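Roughly like this (a sketch; exact flag support varies by compiler and GDB version, and gdb-add-index is a script that ships with GDB):

    g++ -g -gdwarf-5 -O0 -o app main.cpp
    gdb-add-index -dwarf-5 app

gdb-add-index bakes a name index into the binary so GDB can skip the full symbol scan at startup; without -dwarf-5 it emits the older .gdb_index section instead of .debug_names.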
Slow stepping is a surprise; there's no OS reason for that to be slower. Possibly if your types are really large and complicated, the debugger has to fetch a lot of data to refresh its view of state each time?
Yea, our software stack pulls in many dependencies (our software probably totals over 200k LOC). We depend on Qt, OpenCASCADE, and a few other heavy C++ libraries. Stepping over a single line of code in GDB using the TUI can take 3-5 seconds in the worst case. I’ve been meaning to investigate or profile it further when I get the time, but it functionally means I avoid the debugger except as a last resort, or only use it to catch segfaults and unhandled exceptions.
It’s very odd. It’s like it doesn’t cache something and ends up doing some strange expensive symbol search every time it hits a breakpoint or something.
Curious if anyone has a good solution to this also
A long time ago, "set use-deprecated-index-sections on" improved some things; don't know if any modern compilers need that anymore. I also have a "disable pretty-printer global builtin" in my ~/.gdbinit, though I don't recall what that was for, or if I even determined that to improve speed. Currently it seems "set style sources off" reduces some startup time. Don't think I've debugged anything your scale though, and I doubt any of my suggestions will actually help.
> I don't see people complaining about this online very much
I complain about gdb all the time; speed is just one aspect. Step-by-step debugging is just terrible on Linux. Maybe that's actually the reason few people complain about it: they just don't use gdb, instead relying on other tools, especially printf(). I am not in the video game industry, but they seem to be way, way ahead of everyone else, especially of Linux (non-game) developers. Maybe some collaboration is in order.
As for your specific problem, I don't know. Do you have optimization turned on when debugging? gcc/gdb and the LLVM equivalents let you debug optimized builds, but it is not ideal as knowing which instruction corresponds to which line is complicated, and maybe gdb is working extra hard for it. The "-Og" flag is supposed to only do "debugger friendly" optimizations, also "-ggdb" or "-ggdb3" is supposed to be better than plain "-g" for use with gdb.
Haven't observed anything like that. Even with a remote gdbserver on a low-power embedded device, stepping has always been instant for me. Could be a C++ vs C thing.
The only thing which takes time is debuginfod downloads.
How large is your code base? I am talking about 100K+ LOC with many complex dependencies (mainly Qt modules).
2 million LOC, no problem here.
Sounds like maybe you have reverse-debugging enabled? I mean target record or target record-full?
Anyway, I seldom do step-by-step. I typically work with dprintf.
Nope, with reverse-debugging enabled it will be insanely slow, not something that you can miss.
I am curious: does your project have large external dependencies, or is it self-contained?
Are you talking heavily templated C++ code as well? I’ve got my suspicions about how much this affects things…
This is the opposite of the typical experience, so you must have some weird setup. Gdb is instant and VS's debugger takes forever. It's why companies like Epic Games have their own debuggers
Have you actually compared the debug performance of a large cross-platform application on Windows vs Linux?
I am not saying that you haven't, just trying to make sure that your argument is backed by data rather than hearsay.