> When was the last time you had to debug an ancient codebase without documentation or help from a team?
All the time. 300-400k SLOC in C++. Legacy in the sense that there were no tests of any kind. Little-to-no documentation. Solo developer at the tiny company. Fix bugs and add features while keeping the system available to the tens of thousands of users.
A more recent example: here’s a patch for a critical feature we need. It was written over a year ago. The original author isn’t available anymore. You can write the code from scratch or try to resurrect the patch against master.
Being able to jump into a project and lead people towards some goal is definitely a skill for senior developer positions. Yes, you generally have a team you can lean on and have the ability to do research and all that. But how do you show that you can do all that in an interview?
Agree with the conclusion that a good thing to test for is problem-solving.
The tech side depends a lot on what you're doing, although it gets ridiculous and organizations get lazy with this part. You don't need to be whiteboarding graph algorithms for a junior web developer role. If your application is a social network and you're interviewing a senior developer or architect? Definitely. They're going to be teaching this stuff and need to understand it at a deep level.
+1 for "all the time". Today I have been debugging a critical piece of the system which is written in Python (none of the rest of the system is) and largely hasn't been updated since 2020 and, you'll not be surprised, has no comments, no documentation, and a fucked up deployment system which makes me cry every time I have to think about it.
Last week I was debugging some similarly uncommented, undocumented, Go code from 2020 written by lunatics that is also a critical piece of the system.
It hurts.
+1 here too. The situation always comes up after some kind of fucked up acquisition. Nothing like inheriting an entire code base built on someone else's stack when all the original developers decided to quit or were laid off. Ancient versions of Python and Java, deployment done by pull and pray. Multiple versions of the same service in different places because they were half way through some modernization when it all came to an end. Fun stuff. Getting your bearings fast is a real skill.
Yeah from the title I thought this was going to be about leetcode problems, but this is truly something that comes up regularly.
Where are these dev jobs where you _don't_ have to figure out some mysterious issue in a barely maintained GitHub repo semi-regularly?
> Where are these dev jobs where you _don't_ have to figure out some mysterious issue in a barely maintained GitHub repo semi-regularly?
Oh, my job is one of those.
Our code is in Perforce.
This is the most Software Engineer answer. Emphasizing only the distinctions that make no difference :D
Yeah, I feel like jobs that don't require you to reverse engineer a bunch of stuff are the exception, not the rule.
Hell, I do greenfield embedded programming. Most of the code I touch is completely new; the entire codebase has been replaced and rewritten by me over the last three years. Not one other developer has contributed any significant amount of code. Even in this scenario, I'm still reverse engineering some arcane undocumented code at least once a week. Nobody ever documents their libraries properly, so I have to rip everything apart to see how it works. Or I need to adapt some vendor example code or a GitHub repo into a slightly different form. Even just existing in the ESP-IDF environment I have to dig into the core code to figure out what the hell it's doing and where these error messages come from.
If you can't read someone else's Gordian knot of undocumented code, I'd argue you are probably not very good at writing code. Reading bad code is a vital part of learning to write good code, and is generally just a fundamental core skill that every programmer needs to have.
How are you defining bad code?
Imo unreadable and undecipherable code is one subset of bad code.
I would argue that in the vast majority of cases, the readability of code is the single most important metric.
If I encounter third party code that is terminally broken, I don’t really care how readable it is, except insofar as I can probably tell it’s terminally broken faster if it’s more readable.
A repo would be nice. I was recently given a 500 MB .zip of "backend stuff" and asked to figure out what's going on. No repository, no history, the README is useless, the APIs look like they were built by rabid animals...
My favorite description of this sort of setup is "coded by vandals".
> When was the last time you had to debug an ancient codebase without documentation or help from a team?
Heck, I have to debug the stuff that some idiot (me) wrote six months ago and it might as well have been someone else who wrote it for all I remember about how to debug it
So many times I'm reviewing code and really wish to have some unpleasant words with the developer that created this horrible code. Problem is, people think I'm weird when I talk to myself like that.
Edit: apparently this wasn't written clearly enough that I was the original dev that I'm having words with???
"What complete knucklehead thought that... git blame Oh, oops."
When was the last time you were trying to get help from another team, only to realize that the engineer you were talking to can't write code, even with AI help?
Me but a day ago.
haha, then you find code that loves performance and hates the reader.
Real. Engineers who don't think they have this problem, are engineers who see their dependencies and the lower layers of their stack as ossified black boxes they "can't" touch, rather than something they can reach into and fix (or even add features to!) when necessary.
IMHO the willingness to "dig your way down" to solve a problem at the correct layer (rather than working around a bug or missing feature in a lower layer by adding post-hoc ameliorations in "your" code) is one of the major "soft skills" that distinguishes more senior engineers. This is one of those things that senior engineers see as "the natural thing to do", without thinking about it; and fixes that take this approach work subtly to ensure that the overall engineered system is robust to both changes up and down the stack, and to unanticipated future use-cases.
And contrariwise, fixes tending not to take this approach even when it'd be a very good idea, is one of the central ways that a codebase starts to "rot" when all the senior engineers quit or are let go/replaced with more-junior talent. When, for example, every input-parsing edge-case is "fixed" by adding one more naive validation regex in front of the "opaque, scary" parser — rather than updating the parser grammar itself to enforce those validation rules — your codebase is actively evolving into a Ball of Mud.
Of course, the tradeoff for solving problems at the "correct" layer, is that you/your company often ends up having to maintain long-lived, trivial, private hotfix-branch forks of various ecosystem software you use in your stack. More often than not, the upstreams of the software you use don't see your problem as a problem, and so don't want to take your fix. So you've just got to keep it to yourself.
(Funny enough, you could probably trivially measure an engineer's seniority through a tool that auths against their github profile and calculates the number of such long-lived trivial private hotfix branches they've ever created or maintained. Much simpler than a coding challenge!)
That's if the company you're at will let you take the time to "do it right". Usually that's not the case. They want features yesterday. You try to fix something properly by going down to the correct layer? "You shouldn't be doing that."
Well, that's what this thread is fundamentally about / the point that the GGP poster was making.
Coding challenges should measure your ability to quickly and efficiently dive into a big unknown codebase to fix something — because that's the prerequisite skill that develops with engineering seniority, that makes the use of this approach practical.
When you're a fluent speaker of the language of "arbitrary huge foreign codebases", it's usually actually much faster to fix the problem on the "correct" layer. Outside-the-encapsulation-boundary "compensating" solutions have an inherent overhead to their design and implementation, that just solving the problem at the correct layer doesn't.
To reuse the same example from earlier: the parser library is already a parser, while your own business-layer code is not already a parser. (If it was, why would it need to be invoking a parsing library?)
The parsing library is the "correct place" to fix parsing edge-cases, not only because that keeps all the parsing logic in one place, but because, as a parser, it already has all the relevant "tools" available to rapidly express the concept that your bugfix represents.
Whereas, when you attempt to fix the same bug outside the parser, you don't have all those tools. You might have some analogous primitives available to you (like regexes); but usually just the stateless ones. You almost certainly are missing any kind of "where am I in the larger state machine of what's going on, and what is the intermediate analysis state derived from the other checks that have been done so far?" information that would be available to you in e.g. a transform-callback function in a .yacc file.
In other words, a wrong-layer fix is one that requires you to derive non-exposed internal information through heuristics from external side-channel signals. You're essentially doing Signals Intelligence on the library whose behavior you're wrapping — and SIGINT is expensive! And not something a pure SWEng is likely to be very good at!
Approaching things on the wrong layer, inevitably leads to one of three outcomes — either:
1. you end up not really fixing the bug (if you run an overly-weak pre-validation regex that can't hit every case);
2. or you introduce new bugs by making some valid inputs invalid (if you run an overly-strong pre-validation regex that hits cases it shouldn't);
3. or you Greenspun half the implementation of a parser (usually a much-less-efficient recursive-descent parser, which now gives you perf problems) so that you can "do SIGINT" — i.e. mirror the state transitions the parser would be doing, deriving "just enough mirror state" that your validation check can be invoked only on the parser states + lexemes it should actually apply to.
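To make that concrete, here's a minimal sketch (toy key=value grammar, hypothetical names, Python purely for brevity; not any particular real library): the pre-validation regex both rejects inputs the real parser accepts and can't express a stateful rule like "no duplicate keys", whereas adding the rule inside the parser is a couple of lines because the state it needs is already there.

    import re

    # Stand-in for the "opaque, scary" third-party parser: parses "key=value,key=value" strings.
    def parse_pairs(text: str) -> dict:
        result = {}
        for field in text.split(","):
            key, _, value = field.partition("=")
            if not key or not value:
                raise ValueError(f"malformed field: {field!r}")
            result[key.strip()] = value.strip()
        return result

    # Wrong-layer "fix": a pre-validation regex bolted on in business code.
    # It only approximates the parser's notion of a valid field, so it drifts:
    # it rejects "name = foo" (spaces), which the parser accepts (outcome 2),
    # and it can't express the stateful "no duplicate keys" rule at all (outcome 1).
    PRE_VALIDATE = re.compile(r"^\w+=\w+(,\w+=\w+)*$")

    def load_config_wrong(text: str) -> dict:
        if not PRE_VALIDATE.match(text):
            raise ValueError("invalid config")
        return parse_pairs(text)

    # Correct-layer fix: express the new rule inside the parser, where the
    # per-field state it needs already exists.
    def parse_pairs_fixed(text: str) -> dict:
        result = {}
        for field in text.split(","):
            key, _, value = field.partition("=")
            if not key or not value:
                raise ValueError(f"malformed field: {field!r}")
            key = key.strip()
            if key in result:  # the bugfix, stated in the parser's own terms
                raise ValueError(f"duplicate key: {key}")
            result[key] = value.strip()
        return result

    if __name__ == "__main__":
        print(parse_pairs_fixed("name=foo,count=3"))   # {'name': 'foo', 'count': '3'}
        try:
            load_config_wrong("name = foo")            # valid input, rejected by the wrong-layer fix
        except ValueError as exc:
            print("wrong-layer fix rejected a valid input:", exc)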
---
Here is, then, a flowchart of what will end up happening in practice, depending on the team dynamics in play:
1. In a "blind leading the blind" situation with only junior programmers, you might see options 1 or 2, where nobody realizes the bug isn't actually fixed, and the system is actually getting less robust with every change.
2. When a junior programmer is working for senior programmers, and the senior programmers aren't doing a very good job of guiding the junior, but are pointing out in code review that the junior hasn't truly solved the problem, then you almost always eventually get option 3 (after much iteration and wasted time.)
3. When a junior programmer is working for senior programmers, and the senior programmers do guide the junior, then what happens likely depends on time pressure and concurrent workload.
3.a. If there's enough capacity, then they'll likely nudge the junior into trying to fix the problem on the "correct" layer — even though the junior doesn't have this skill yet, and so will take longer to do this than they'd take even for option 3 above. The senior programmers want to give the junior the opportunity to learn this skill!
3.b. If there isn't enough capacity among the junior programmers, but one of the senior programmers has strong principles about code quality and at least a few off-hours to dedicate, then the senior programmer will likely steal the bug from the junior and fix the problem themselves, quickly, on the correct layer. (This might only happen after the junior has already flailed around for quite a while; the fact that this looks like "wasted time" to PMs is something senior programmers are very conscious of, and so will often actively defend the competence of the juniors they steal these bugs from — usually explaining that they couldn't have been expected to solve this problem, and it should have been assigned to someone more senior in the first place.)
3.c. If there isn't enough capacity — and that includes overworked senior programmers or senior programmers all pulling double-duty as PMs or such, leaving them no mindspace for implementation-level problems — then they'll probably just tell the junior to solve things through option 3 (because they know it'll be faster) and make a mental note of this as an intentional temporary introduction of technical debt to be repaid later (by moving the solution to the correct place) when they have more time.
4. And if a senior programmer gets directly assigned the problem... then they'll just go directly to fixing the problem at the correct layer.
(These are all from my personal experiences over my career as a junior engineer, senior engineer, and now CTO.)
+1
Jumping into an unknown codebase (which may be a library you depend on) and being able to quickly investigate, debug, and root cause an issue is an invaluable skill, in my experience.
Acting as if this isn’t useful in the real world won’t help. The real world is messy, documentation is often missing or unreliable, and the person/team who wrote the original code might not be around anymore.
> Solo developer at the tiny company.
> Fix bugs and add features while keeping the system available to the tens of thousands of users.
Don't tens of thousands of users warrant more developers? Or having enough of a budget to work on tests, or other things to improve the developer experience and be able to work without lots of stress? That's unfortunate.
I think I managed about 10k users with my mac shareware games in 2009/10, and that didn't get me ramen profitability let alone modern pay scales.
So not necessarily any budget for more than they did.
Single dev codebases have a really hard time growing to 400kloc though
Hmm. I've found that it's much easier and faster to write a lot of code than to take the time and be thoughtful about what you end up shipping.
> I didn't have time to write a short letter, so I wrote a long one instead. Mark Twain [0]
[0] https://www.goodreads.com/quotes/21422-i-didn-t-have-time-to...
That's true to an extent, but you need to work to make that much code.
Not necessarily — I've had the misfortune of working with someone who repeatedly, rather than subclass, duplicated the entire file.
Including the comments I'd added saying "TODO: de-duplicate this method".
That only got the project up to about 120 kloc before I'd had enough of that nonsense and left.
My understanding is that the line count was mostly down to that one dev, as he not only ignored what I had to say when I was there, he also ignored the occasional contractor.
The point is that one cannot be confident that 10k users maps to "enough money to engineer this properly".
Oh, certainly, the 10k users can’t. That’s why I’m not talking about that. I’m saying that instead of looking at the number of users, just the fact they have a 400k lines codebase suggests that they must have enough money to hire a team that builds that (or had, at some point in the past).
Maybe so, but not necessarily. I have a couple of 300+kloc projects that I wrote all by my lonesome. It's far from an impossibility. On the other hand, there's a reason I only have a couple of those.
> Instead, they put developers in situations they’d never face in the workplace, where collaboration and support are standard.
I thought the page was satire at first.
Collaboration and support being standard seem very much like situations you'd never face in the workplace. No satire there.
> > When was the last time you had to debug an ancient codebase without documentation or help from a team?
> All the time.
I guess the minor difference was that it was part of your paid job instead of a huge time sink to get hired.
This is the biggest issue I have with all these tasks: they take precious time that could be utilized better. In my case: working on real open-source software instead of throwaway made-up problems. (Source: I once had to spend 3 days writing a toy transaction engine, and then I didn't get the job for another reason.)
+1. Jumping into an unfamiliar code base with no docs and no access to the original developers offers a lucrative bottomless pit of work for a skilled freelancer. I made this my specialty a decade ago.
That’s funny; it’s literally my career description.
> All the time.
Not just all the time — right now in the other browser tabs I'm ignoring in favor of this one lol
To reinforce this: a task I had when interviewing was to solve an issue with a webapp written in a language I had never used, on a webserver I had never used.
It wasn't that hard if you know the fundamentals (http, ssl). It was sorta fun and I got the job.
Don't understand one thing: I can read the code of any legacy code base, sure. What I cannot do is assess whether the code does what it's supposed to do (i.e., the business logic behind it). Is the code I'm reading supposed to calculate the pro-rated salary according to the "law"? Just by reading the code, you cannot know that. You need help, either from other developers who perhaps know the codebase or from product experts that know the business logic. Or documentation, sure, but that is usually a luxury one doesn't have.
So, definitely, I always need help debugging any kind of code (unless it's rather code that doesn't deal with product features).
> What I cannot do is assess whether the code does what it's supposed to do
You can, to a degree. If you can get the system under test you can make an assertion about how you think it works and see if it holds. If you know what the "law" is you can test whether the system calculates it according to that specification. You will learn something from making those kinds of assertions.
Working Effectively with Legacy Code by Michael Feathers goes into detail about exactly this process: getting untested, undocumented code people rely on into a state that it can be reliably and safely maintained and extended.
Depending on the situation I often recommend going further and using model checking. A language and toolbox like TLA+ or Alloy is really useful for getting from a high-level "what should the system do?" specification down to "what does the system actually do?" The results are sometimes surprising.
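For a sense of what a characterization test looks like in practice, here's a minimal sketch (the function and numbers are made up; in a real codebase you'd import the legacy routine instead of defining it): you call the mystery code with inputs you care about, run the test, and paste the observed output into the assertion, so today's behavior, right or wrong, is pinned down before you change anything.

    import unittest

    # Stand-in for an undocumented legacy routine; in reality you'd import it
    # from the codebase rather than define it here.
    def prorated_salary(annual_salary, days_worked, days_in_period):
        # ...opaque legacy logic you don't yet understand...
        return round(annual_salary / 260 * days_worked, 2)

    class ProratedSalaryCharacterization(unittest.TestCase):
        """Pins down what the code actually does today, right or wrong."""

        def test_full_period(self):
            # Written by running the code and recording the output, not by
            # reading a spec: 52000 / 260 working days * 20 days = 4000.0.
            self.assertEqual(prorated_salary(52000, 20, 20), 4000.0)

        def test_partial_period(self):
            # Note: days_in_period is ignored by the current implementation.
            # That may well be the bug, but the test documents today's behavior.
            self.assertEqual(prorated_salary(52000, 7, 20), 1400.0)

    if __name__ == "__main__":
        unittest.main()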
You're right that at some level you do need to work with someone who does understand what the system should do.
But you can figure out what the system actually does. And that is de facto what the business actually does... as opposed to what they think it does, logic errors and all.
A good senior programmer, in my opinion, thinks above the code like this.
Update: interviewing for this kind of stuff is hard. Take-home problems can be useful for this kind of thing in the right context. What would you do differently?
There's also the likely chance that your business doesn't have good documentation on how features should work, there are no tests, and the thing you're testing is so custom-tailored to your business that you need someone with knowledge of the product and its history to tell you "oh yes, that's supposed to happen that way."
Sure if there is a law written down or past written regulations your business follows, that's easy. You've got docs.
Yeah I thought that was a weird thing to highlight. The "debug this!" kind of interview is pretty much my favorite, because it's by far the closest to what I do for like 90% of my time at work (when I'm not doing communication tasks...).
another +1 from me -- my job, with some exceptions, is literally "plop this guy into an unfamiliar project and implement some user need, be it a bugfix, feature, or (silently) a refactor."
language-agnostic programming fluency/software practices, including being comfortable debugging unfamiliar things or navigating uncharted waters without many domain experts to guide me, is the number one skill required in my role.
bonus points if i end up becoming an active maintainer of the project, or someone who can help other devs with it, the more times i dip into it.
I've twice gotten a job because all, or all but one of the previous team members left. Not a lot of help to be had.
Surprisingly, both jobs were fine too.
> All the time
If you didn't solve your problem in 45 minutes with zero assistance from tools or Google or colleagues, were you immediately fired?
+1 as well: 2M LOC codebase with little to no tests and a lot of lost knowledge due to disbanded teams, people moving on etc. In my experience very common state of affairs honestly.
When was the last time you needed to do it in 4 hours? :)
More recently than never, unfortunately.
Yep all the time. 30+ years old C/C++ large scale software used by companies you have heard about around the world.
Yeah, I think OP's overall argument is right, but this particular example isn't needed and has plenty of counter-examples. It's more like: when was the last time you had to solve [insert leetcode problem here] without any resources, under time constraints, with someone judging you and staring over your shoulder at all times, at work? I find that under these conditions I have maybe 40% of my usual brainpower.
"> When was the last time you had to debug an ancient codebase without documentation or help from a team?"
I mean, even if you have a team, you are hired to do specifically this kind of work. I'm confused what OP thinks a software job is if not EXACTLY this.
Single dev supporting mission critical 400k LOC C++ project is a direct path to a spectacular failure for the company.
You probably missed the whole point: the coding test might just be an effective way to filter out old engineers.
> > When was the last time you had to debug an ancient codebase without documentation or help from a team?
> All the time.
So you were forced to do it in a silo? No access to any type of documentation, whatsoever? Not able to refer to any C++ manuals or any other type of documentation? Not permitted to access the internet to look things up? Barred from using online forums to seek help?
You've read the post, right?
The OP says: "A “4-hour” assignment". This is take-home, so candidates are free to use any C++ manuals, documentation, AI, online forums, whatever...
4 hours!
Yes, I'll do it, that'll be $(4 hours x hourly rate - 1 cent), please
A small anecdote.
A partner of a friend quit their job earlier this year. They then took 4-6 weeks to prepare for each interview with Big Tech companies (4-6 weeks for Meta, 4-6 weeks for Stripe, etc.). Along the way, they also took random interviews just to practice and build muscle memory. They would grind leetcode several hours a day after researching which questions were likely to be encountered at each Big Tech.
This paid off and they accepted an offer for L6/staff at a MAANG.
Talked to them this week (they haven't even started the new role) and they've already forgotten the details of most of what was practiced. They said that the hardest part was studying for the system design portion because they did not have experience with system design... but they now made staff eng. at a MAANG. IRL, this individual is a good but not exceptional engineer, based on having worked with them on a small project.
Wild; absolutely wild, and I feel like it explains a lot of the boom-and-bust hiring cycles. When I watch some of the system design interview prep videos, it's just a script. You'll go into the call and all you need to do is largely follow the script. It doesn't matter if you've actually designed similar or more complex systems; the point of the system design interview is apparently "do you know the script"?
Watch these two back to back at 2x speed and marvel at how much of this is executed like a script:
Back when I had a job hiring people, we created problems we could walk candidates through and see what they figured out on the fly / what they knew but didn't know they knew. That was what I was taught whiteboard problems were, not this lame leetcode. But I grew up with both parents in 1980s/90s Santa Cruz tech. The current scene adopted the practice but made it exclusive when it was intended to be inclusive (because there wasn't a huge talent pool back then, so they were looking for people they could grow into developers).
One of my hires was a physicist who didn't know any of the jargon and had a look of terror when she first started on our whiteboard problems. Once we led her to a comfortable spot and she got into it she started talking about tools she made to help with her physics work with zero dev knowledge (she was shocked when we called her tools software, she was like, but it wasn't REAL software, we were like hate to break it to you, but that was not only real software you wrote, but crazy impressive). The white board problems were a tool to highlight potential.
I get that these big companies just want drop-in widgets, not people, and that's what they're searching for. I just think it's funny they're using something that was created for the exact opposite purpose.
Part of me is kind of glad I'm too old to be one of the cool kids and have no hope to land a job that does these sorts of code challenges.
Well it shows that you are ready to invest a lot of your own free time into being recruited there - i.e. you'll be a loyal and hard worker. And that you're smart enough to learn it all. So I would say it has a character + IQ component to it.
Of course, 95%+ of it will usually be useless in your real-life work. To solve most of those problems in the time given, you need to already know the problem and its optimal solution, so it's basically memorization with little thinking.
Yep. I get why some companies would want to use it to select for that. It's just funny because it's so much the opposite of the original purpose of white board problems/old school tech scene culture.
I'm from the Bay but I've never heard of "Santa Cruz tech" before. It's always been a sleepy beach/uni town to me. Have any interesting anecdotes?
Not to share here. I was at boring places my parents had the interesting experiences but those aren't my stories to share.
It had lots of small industry specific solutions, lots of hardware/software tied together solutions (so a large 'tech' community where 'tech' was the title for hardware technicians that hand built/designed/produced the hardware devices/circuit boards/etc). I'm surprised you never heard of the Santa Cruz software scene. Some examples:
Victor Technologies (specifically Victor 9000 Computer)
Texas Instruments had a fab/facility there.
Digital Research had a presence (CP/M, PDP)
Seagate
SCO
Borland
EMU Systems (hardware music samplers) (lots of cool musicians used to visit their facilities)
(current) Antares (Auto tune/music tools)
(current) Plugin Alliance (music tools)
(current?) Plantronics
But mainly it was smaller obscure companies. For example Parallel Computers working on parallel computing early on. IDX working on radio tags in the 80s. Triton doing sonar imaging with SGI boxes.
It was very much its own sub-scene, with people who picked it for the lifestyle, so a bit different mindset (more hippie, schedules around the tides so people could surf, everyone going to the Wednesday night sailing races together). I was just a kid but it seemed cool. And it was open and friendly (I always had summer jobs as a kid, starting with duplicating floppies for CP/M or PDP software as a 10 year old). Plus just a cool vibe. I remember going next door to where my mom worked and talking to Lorenzo Ponza (inventor of the pitching machine) when I got bored of playing Trek or Rogue on the VAX at her office (she was a workaholic, always working weekends and dragging me along), or skating the ramp (for the stoner tech(nician) crowd) in the parking lot.
Assuming that's a reference to the Santa Cruz Operation (SCO).
If you talk to hiring at some of these companies, it is intentionally designed to be this way so that it is fairly meritocratic.
In other words, anybody, regardless of what university they went to or what courses they took or what advantages or disadvantages they had, can learn this stuff in a couple of months if they have what it takes for the role. And because the skills are so standardized, the process is fairly objective.
It's not expected that they'll actually use the specific skills in the job. But it shows they can learn skills of that type and then perform them at a high level in a stressful interview situation. Which is a great signal for whether they can learn the skills needed for the specific project they wind up on and perform in a high stakes deadline scenario.
I'm not defending the system, but I am saying there's a clear logic behind it.
The thing I think is funny in all this is that hiring a new manager is fraught with risk (more so than hiring an engineer), but managers don't face nearly the same level of hurdles. Does the company interview people who worked for the manager before? Did the applicant have alcohol or drug issues, or a history of weird sexual behavior toward direct reports or others? Or, instead, would you enjoy his presence on a golf outing?
I know one manager who had issues with all three and got hired at Google. So. Think of the poor HR person that will have to clean up that mess.
The idea of it being positioned as meritocratic is hilarious to me. As if having a process outside of the leetcode question bank will lead to any more bias. I'm generally of the belief that an interview process should be standardized in an attempt to reduce bias, but I disagree that it needs to be something people can study for to the extent that leetcode questions can be studied.
I've interviewed people with leetcode questions where my co-interviewers made the candidate's confidence in presenting the correct answer the tipping point for hiring (with women mostly being targeted in this category). Bias can happen in any process, and with "culture" frequently being a consideration it's hard to tell what should be justified. Meritocracy in the tech industry is mostly a joke outside the overachieving outliers.
> IRL, this individual is a good but not exceptional engineer having worked with them on a small project.
Have you worked with them since they went through this regimen? Doing a DS&A coding problem regimen + system design prep will change you as an engineer. You might be surprised how good they've become.
Also they say they've forgotten. But if they were able to get that position, they probably could do medium level leetcode problems. So, I'd doubt they've forgotten all of it. I'm pretty sure they'd still be able to solve easy level problems, which most people can't solve off the cuff. They also probably still know the complexity of a bunch of essential backend operations (search, sort, array and hash lookups, tree & graph traversals, etc).
No home study grind on puzzle problems is a substitute for years of practical experience, which is what most teams actually hope for in their higher-level engineers.
The point is not that the friend didn't pick up some implicit knowledge or become a sharper engineer than they were before grinding, it's that by exploiting the screening strategy, they got placed into a job they're not truly qualified for.
Are they bright enough to fake it until they make it? Maybe, but that's not going to be the case for many of the countless placements that were made like this, and hints at why both product and software quality is in bad shape these days.
Yeah, in some ways I like "system design" type questions, because at least it's pretty creative and more like what I do at work than writing some little algorithm with a custom data structure.
But on the other hand, it's also not a very strong signal for my years of experience working on and designing portions of systems. I've never actually done "design a system from scratch", because every system that is big and complex evolved from some smaller and less complex system.
So the kinds of questions that come up in these system design interviews are best answered in real life with "I dunno, I'd pick something simple and refactor it later if it becomes a bottleneck".
"What kind of database would you choose for this?" "Whatever we're already using in production for other things." "How would you model this data?" "As simply as possible, so that we can get it out fast and start learning things."
But these are not interesting answers in these interviews. I have usually found myself saying "well, in real life, I would advocate for simplicity and iteration, but for the purposes of this interview, I'm happy to discuss potential trade-offs in technology selection, architecture, and data modeling".
But I think it's easier to study up on the "right" answers to the architecture and modeling questions, without spending a lot of time learning how to start simple and when to evolve toward complexity, etc.
I think the context is important in the question. I’d argue that a system design question is optimally answered differently depending on the size of the company and the scope of the internal tech stack.
Doing more of the design up front is important in a big tech environment almost all the time because:
a) many different pieces of infrastructure (e.g. databases) are already at your fingertips, so the operational cost of using a new one is a lot lower. Also, cross-organisation standardisation is usually more important than this marginal extra local operational cost.
b) scale makes a big difference. Usually good system design questions are explicit about the scale. It is a bit different to systems where the growth might be more organic.
c) iteration is probably way more expensive for production facing systems than in a smaller company, because everything is behind gradual rollouts (both manual and automated) and migration can be pretty involved. E.g. changing anything at all will take 1 week minimum and usually more. That’s not to say there is no iteration, but it is usually a result of an external stimulus (scale change, new feature, unmodelled dependency etc), rather than iteration towards an MVP.
Now, a lot of this is pretty implicit and it is hard to understand the environment unless you’ve worked in a different company of the same scale. But just wanted to point out that there is a reason that it is the way it is.
Source: SRE at Google
> No home study grind on puzzle problems is a substitute for years of practical experience, which is what most teams actually hope for in their higher-level engineers.
Perhaps.
I had 9 years of practical experience myself before going through that grind (leetcode + system design) some years ago. It is no substitute, but you don't come out of it the same engineer. You read, write and understand code differently.
Also, I have to admit that with the new perspective, there's a reason why I'd probably hire the newbie who's done this over the experienced engineer who hasn't (and maybe continues to dismiss it). The two gaps are not equivalent. In my experience the newbie will probably cover many of the necessary bases on the job fairly quickly with mentorship. I can also appreciate that 9 years of practical experience didn't help me cover DS&A or system design. I doubt that any kind of mentorship would've helped me either. Learning this stuff is a grind, plain and simple.
A newbie will also make mistakes that I can understand. I can see them pick a heap, or a linked list, because it's the algorithmically sound choice for the problem. Then I can point out that for the specific use case, an array might not be as efficient, but it's a good trade-off to simplify the code. They will understand this. But an engineer that indiscriminately reaches for arrays and hashes in any situation is a different story. Exponential inner loops for big searches, huge recursive calls. Those things show up in code written by experienced coders IRL. You click on a button and wait 5 seconds for the response.
I'm not sure who's faking it until they make it, but based on my own experience, I'd say that perhaps I successfully faked it for 9 years.
In this example, we're talking about an L6/staff role.
If the role is correctly staffed, nobody should have to be looking over that person's shoulder questioning data structure choices and algorithm efficiency regardless of what the person's background is.
It's absolutely the case that there are people with 9 years of experience who are not prepared to meet the needs of that role. There are people with 19 and 29 and 49 years of experience that fit into that bucket. Practice alone doesn't get you there.
But as you note, as a rule, you expect your bright, leetcode-grinding "newbie" to need mentoring and oversight to compensate for their propensity to rely on shallow theory where they don't know better from experience. That's not someone a team wants in a L6/staff role either!
For high level roles, you need bright, curious people who are experienced but not rigid. Admittedly, there's only so many of them, but when you start having to fill those roles with people that don't meet each of those criteria, you can't be surprised when the team's morale and output suffer. Which is exactly what we see as an outcome of the recent boom (and each prior one), because you end up having the blind lead the blind through the minefield of engineering.. and that's going to be messy no matter how clever or book-studied those folk happen to be.
That’s a false dichotomy. I’ve never felt the need to do the DS&A grind for one of these roles and I would never dream of writing the ungodly inefficient code you just imputed to my peer group.
> Exponential inner loops for big searches, huge recursive calls. Those things show up in code written by experienced coders IRL.
The thing is, those things are easily fixable after the fact. A bad architecture is not, and without practical experience designing systems in the real world, you don't have those instincts to guide you towards the correct choices because you haven't made any mistakes to learn from.
And thanks to modern CPU design, they are often actually faster than more complicated data structures.
I know in interviews they are looking for the solution that is the most efficient by big O standards so that’s what I use
But big O is often just not a reflection of actual performance on modern computers
> I'd say that perhaps I successfully faked it for 9 years.
Sounds like it. IME >90% of everything you encounter is typically covered in university DS&A introductory courses, and the grind would be part of the studies. The good news is it's never too late to start again.
Did you study CS before you went into the workforce?
How is one supposed to get a job that requires years of practical experience if all jobs require years of practical experience?
You may remember these details from the GP comment and my own:
> L6/staff
> higher-level engineers
Traditionally, one would work up to senior roles over the course of one's career, often by pursuing internal opportunities to learn and exercise new skills within one's current role before applying for new roles that rely on those now-practiced skills.
Placing into senior roles because you did well at Stanford, spent a couple years writing CRUD apps, and then grinded puzzles for a couple months is inevitably something that happens when the industry needs to fill more senior roles than there are engineers with suitable experience, but it's not a healthy phenomenon and definitely shouldn't be treated as the norm.
> Have you worked with them since they went through this regiment? Doing a DS&A coding problem regiment + system design will change you as an engineer. You might be surprised how good they've become.
Having done this prep, I can't tell if this comment is sarcasm. Building real systems makes you a good engineer. Maintaining systems over a long period of time makes you a good engineer. Working with other experienced engineers makes you a good engineer.
Doing this prep, you do learn a few things along the way, but it's contrived. You already know what you need to learn, which is often the hard part on the job. It works as a filter, since the ability to learn concepts is important, but its usefulness continues to trend downwards as it becomes more standardized.
> ...its usefulness continues to trend downwards as it becomes more standardized
I think this is a key point that a lot of commenters in this thread gloss over: as the questions become more "standardized" and "predictable" (leetcode being a prime example), they still test for something, but that something is not necessarily what makes a "good engineer". The system design questions are probably even worse because the expected responses are so templatized -- even more than leetcode.
> It doesn't matter if you've actually designed similar or more complex systems
You know you've designed more complex systems. The interview has no way of knowing if that's true, or you're an actor following the "I've designed complex systems" script.
And acting talent is arguably more common among the population than programming talent.
I think there are ways to tell by simply asking deep questions about the system someone claims to have built, critiquing the design decisions, diving into why this technology over that technology.
When I've hired or been a part of a hiring process, I always place emphasis on a candidate's past projects and ask deep technical questions around those. I also always review their GitHub repos if one is provided in the profile and I will ask them deep questions about their repos, why they chose this tech or that, interesting pieces of code, design tradeoffs they made, etc.
That type of interview allows all kinds of unconscious bias to creep in which is why a lot of tech companies moved away from it
Sounds like the system worked exactly as intended then. A seemingly smart person got a good job. What's the problem with this story exactly?
A moderately smart person was selected for a good job perhaps over many many better possible hires simply because that person had the leisure to learn the game. Inefficient. But nice for that individual, naturally.
But is there a good way to find the "better possible hires" which doesn't have other significant disadvantages? If you have a convincing method of doing that, many companies would be interested in your ideas.
Starfighter doesn’t seem to have gotten anywhere
interview followed by paid internship/probation. watch them work on your real system. Keep them if they're good.
In the US this doesn't work well outside of college internships. Most tech workers don't want to shift to a new employer with a probation period. We already live under an "at will" employment relationship, so employers can let you go anytime and workers can leave anytime. For a probation period to have real value to workers, you'd have to guarantee it's harder to fire someone once the probation period is over.
This will strongly favor people who believe that they cannot get a good job, because if a candidate has multiple offers, why would they choose probation over a regular job?
(Note I am saying people who "believe" they cannot get a good job. This would be people who worry a lot, people with unusual experiences that other companies avoid, and under-performers who got fired. I am sure there will be some great hires in those groups, but likely less than during regular hiring)
That's what companies already do.
Are you suggesting that any coding interviews and challenges are simply removed from the existing processes? That just means you end up with more candidates to choose from, which doesn't sound helpful at all if your goal is to end up with better hires as the comment above was suggesting.
leetcode doesn't help either. it's just cargo culting at this point.
Even with probation the cost difference between saying no at interview and firing them during their probation is enormous.
Yes, give interview tasks which are realistic depictions of the actual job tasks.
No, the hiring managers that are into cargoogle culting are not actually that interested in how to do interviewing properly. Not unless, say, google does it and they can copy it.
For them the important thing is that leetcode is a safe, defensible choice because "everyone else does it that way".
One of the main questions I ask in interviews is basically "we have a data pipeline with goal X and constraints A, B, and C. How would you design it?" Depending on how they do, we'll discuss various tradeoffs, other possible goals/constraints, and so on.
This is based on a real system I designed and have been maintaining for ~5 years, and is also very similar to other systems I've run at previous jobs.
About half the candidates complain that it's not a realistic question.
As you climb the engineering IC ladder your responsibilities involve larger and larger tasks and timescales. It's not possible to measure whether someone can make the right decisions about the architecture of a service as requirements change over years in a 1-day interview. Anything that isn't "be the engineer maintaining this service for 1 year" is going to be a proxy for that, which can be learned and gamed.
What would be a "proper" interview in your opinion?
> Yes, give interview tasks which are realistic depictions of the actual job tasks.
I think this is exactly what a lot of companies try to do when interviewing. Depending on how much time they want candidates and interviewers to spend on the task, this ranges from leetcode-style problems to bigger coding challenges (possibly with some debugging or collaboration involved).
What would you suggest concretely?
Since when is studying, practicing, and preparing gaming the system?
Is reading a book on software engineering to become a better programmer also not allowed?
I feel like people want these jobs to be distributed "fairly" based on "natural" ability/talent.
But it has never and will never work that way.
> Since when is studying, practicing, and preparing gaming the system?
Fair question. The problem today is the emphasis on studying and practicing irrelevant things like memorizing algorithms. That's become the paved shortcut to well-paying jobs, so naturally people do it. To the point that even the people doing the hiring have forgotten what it means to be actually qualified, not just a leetcode memorizer.
If you need to hire a musician for your band, do you pick the person who has spent six months practicing a handful of chords to perfection, but possibly doesn't know anything about composing songs or jamming with the band? Or do you pick someone who has been composing and playing live shows for 10+ years?
The first one is just academic memorization that has some value, but very little. The second one is real-life experience that's worth a lot.
I have zero musical skills but even I have managed to learn to play a couple songs on the piano by sheer memorization of which buttons to press in what sequence. If you ask me to play one of those songs it might seem like I know what I'm doing even though I'm completely incompetent in music. That's the equivalent of hiring for software roles based on leetcode memorization.
I don't think it's common for people to try to "memorize leetcode."
Most interview loops at places that do algorithm interviews are 2-3 rounds, and each round will have up to 2 questions. It's very, very unlikely the interviewee will only encounter questions they have memorized.
More likely, the interviewee encounters questions similar to ones they have solved; they know the pattern for solving them and are able to apply their learned skill to the new problem.
Similar to your music analogy: you can absolutely be a strong guitar player in a band if you just memorize a few different chord shapes and can apply them up and down the fretboard to different keys (lookup the "CAGED system").
I don't like the "state of the art" of tech hiring, but I think what you're saying could be said about any of us and our jobs, and that's not fair.
We all start from the same place (understand me: from the standpoint of the hirer; I know we all have different life stories). They open the position to all of us and want to be impressed. The one that makes it gets the job.
Is it the best way? Absolutely not. Is it unfair? No.
I don't suppose you got a good job because you had the leisure to spend 4 years in college? Or 4 years in high school?
Recall this anecdote the next time someone mentions "meritocracy" in the context of tech. At best, only a limited population gets to participate.
It’s like the bar exam for lawyers. It bears absolutely no relationship whatsoever to the actual job and the work you will be doing. It’s a pointless ritual. And the point of the above story is that the person performed the ritual and then promptly forgot all the words once they succeeded. It illustrates the pointlessness.
It's not a pointless ritual. It tests for determination, grit, and willingness to grind on difficult, frustrating and ultimately value-free tasks.
All crucial skills in the modern technology or legal workplace.
I was a lawyer. I passed the bar exam (first try, no less). I practiced. The bar exam does none of what you claimed for it. Now, I offered it here as an analogy, but I am not a software developer. It just sounded akin to what I do know from my own experience and from talking to literally hundreds of other lawyers, law school professors, judges and their clerks. So maybe it’s not a good analogy to software development. I don’t know.
At least you only have to pass the bar once.
Unless you move.
> It's not a pointless ritual. It tests for determination, grit, and willingness to grind on difficult, frustrating and ultimately value-free tasks.
This is what people say whenever one criticizes the methodology of any test. They say that the real test was actually the friends we made along the way. The real test was the degree to which you are willing to humiliate yourself, or to accomplish things you don't understand the purpose of, or to accomplish things that you do understand the purpose of but disagree with logically or morally.
We filter for the worst, most damaged, and most desperate people.
> We filter for the worst, most damaged, and most desperate people.
These are probably the best candidates, from the perspective of a manager who is concerned with stability and not rocking the boat, at the cost of results and workplace quality.
And by that argument, difficult big tech system design interviews are interchangeable with bar examinations.
I actually think we should keep the system design interviews because they are closer to what I've actually experienced and researched in service of my work.
The leetcodes feel completely unrelated.
I feel like you forgot the "/sarcasm" at the end.
> It’s like the bar exam for lawyers.
At least lawyers don't have to take the bar every other year for the rest of their careers.
It took forever. Literally tens of thousands of dollars of opportunity cost, with a lot of risk of having nothing to show for it.
The bummer is that you're right, it actually is worth even this much investment, because these companies do pay extremely well. But it's still horrendously inefficient, because the companies are getting a very small improvement to their signal to noise ratio, at this great cost (which, notably, they don't bear).
Yeah that's how education works.
Someone working at a billion-dollar VC-funded outfit should understand the case for making smart but speculative bets.
Yes, and they should also be able to understand when the system that creates those smart but speculative bets is creating deadweight loss, and identify that as a problem.
BTW, I have personally made this bet in the past (though I didn't put 3 to 4 months into it...) and it was indeed an excellent bet to make. But benefitting from a system does not preclude identifying problems with it.
"Seemingly smart person getting a good job" is a criteria for filling headcount during a boom, but regresses the industry's maturity by placing inexperienced people into critical roles.
The problem with this story is that people need to work with and rely on this person for responsibilities they're not yet qualified to meet. In some cases, a "smart person" will be able to grow into the role, but the road there is long and messy, which becomes a frustration for colleagues, supervisors, clients, users etc.
Because of the prolonged boom we just went through, the industry -- especially at FAANG's -- is now saturated with smart, naive people trying to fake it until they make it, leading to a gross decline in quality and consistency compared to where we have been and might otherwise be.
Apparently they expect people to work for free for more than a month to learn material they then won't use.
In my mind that's a rather nasty practice.
Not that I'm complaining. I'm happy to pick up people that are good at computers but wouldn't be able to pass that hurdle, and probably wouldn't hire anyone that has.
> Not that I'm complaining. I'm happy to pick up people that are good at computers but wouldn't be able to pass that hurdle, and probably wouldn't hire anyone that has.
You're looking for people who feel like they can't be bothered to learn some of the science and fundamentals of their craft? And you'd actively oppose hiring people who have the determination to go the extra mile? You'd actually discriminate against people who know how to use basic data structures and algorithms, and have that enriched landscape of knowledge to apply to problems that might come up?
Fortunately for you, there are lots of these people available to hire who felt like they are too good to put in the work.
I'm not saying that the data structures and algorithms knowledge is needed to do most software engineering jobs on a daily basis. But for lots of jobs, hiring people with a demonstrated willingness to dive in deep and learn things that aren't necessarily easy or fun actually can be a very good thing, because a lot of engineering problems require a similarly difficult dive into some aspect of specialized domain knowledge.
You're speaking from a position of great privilege where putting in a month or two of free labour doesn't impact your life much, maybe because you have money, or because you don't have kids.
Whether someone demonstrates a willingness to dive in deep and learn things is something I catch with an entirely different technique, like putting them in contact with a new programming language.
Filtering out people with experience from "FAANG" will likely get rid of some people that think they are "too good to put in the work", because they have that line on their resumes. And those I've met were absolutely insufferable and incompetent.
Actually I put in that work myself, while working, and while raising kids. It took me a lot longer than a month or two. It took a ridiculous, embarrassing amount of time.
> And those I've met were absolutely insufferable and incompetent.
I understand this last part, and annoying and insufferable people absolutely do exist. But consider that you may be generalizing and discriminating against a pretty big pool of people using a small sample size, and give some of those people a chance-- as long as they're not being insufferable and are treating you the way you want to be treated. Those people exist in FAANGs too.
No way. If someone has worked in one of those companies and tells me about it, then they're tainted by it, similar to how they would be from having worked at Raytheon or whatever.
If they have the sense to try and hide their "FAANG" background I might consider them.
It's very mild compared to postgraduate education. Or getting a degree in general. That's years!
If you're getting a PhD solely for a job in marginally related part of industry, that's on you. The experience is useful if you're doing it for right reasons, instead of as a proxy ritual to boost your CV. There are more efficient ways to do the latter.
Sure. Why is it a worthwhile comparison?
although it's a different paradigm than I'm used to, I agree with this general concept
if you provide utility to the market, you shouldn’t need credentials or a corporate ladder to climb to prove it, it should be immediate compensation no matter how disparate
so despite how coveted tech compensation packages are at this tier of company, a seemingly smart person getting a good job after doing a contrived aptitude test does meet that criteria. other people outside the field (and within) have difficulty passing it and don't have the cognitive ability to study for it, or the financial stability to prioritize studying for it
I also agree that a less contrived aptitude test would be better
They passed over all the people who didn't study random word problems hard enough, then practice those types of problems sufficiently that they could hammer out a solution in under 20 minutes. Multiple aspects of the "leetcode interview" require practice outside of work experience for many people, despite them being great problem solvers.
I took a similar approach -- took 6 weeks off doing interview prep, then concurrently applied to three FAANG employers. Got offers from all three, the worst of which was a 36% raise over my former non-FAANG job.
> They said that the hardest part was studying for the system design portion because they did not have experience with system design
Your friend is not alone, and I don't understand what separates us. Even when I was early career, system design was my favorite.
My biggest issue is performance anxiety when there's a concrete problem to solve that I've never seen before, so coding sessions are absolute hell. Inability to focus, forgetting approaches I've known forever, etc.
Ask me to do system design though, and it's generally freeform, I can talk about different approaches, deep dive on the fly, and draw a nice diagram or many nice diagrams, and interviewers love me and think I'm smart. I've had interviewers who were suspicious after my bad coding performance try to nail me on esoteric implementation details but because I have actual, hard-earned experience I can pretty much explain any approach in my niche and then tell you all the tradeoffs. I can also quickly code up examples, etc. Never had a bad feedback on this portion.
Something about the complete detachment of the coding problem from real life is such a huge mental block and I am continuing to fight it 15 years into my career, after dozens of similar interviews.
You're absolutely right about the "system design" interview, but people act like it's an amazing type of interview.
People generally don't even want you to go deep on anything, but you can't stay too shallow either.
It's just BSing. Yes, you're absolutely right: it feels like memorizing a script and saying the right words at the right time.
Interviewers would be better off focusing on data fundamentals: data modeling, validation, schema design, indexing, sharding/distribution strategies. When you screw this stuff up, it can be very, very hard to fix. The "problems" cascade and build up. Garbage in, garbage out. I've seen databases, in production, with no primary keys or indexes. It is absolutely bonkers.
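To make that concrete, here is a minimal, hypothetical sketch of how little the basics cost up front (table and column names are invented; Python's built-in sqlite3 is just standing in for whatever database is actually in play):

```python
# Hypothetical illustration: the gap between "no primary keys or indexes"
# and the basics done right is a couple of lines. Names are made up;
# sqlite3 is a stand-in for a real production database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (
        id          INTEGER PRIMARY KEY,   -- every table gets a key
        customer_id INTEGER NOT NULL,
        created_at  TEXT    NOT NULL
    );
    -- index the columns you actually filter and join on
    CREATE INDEX idx_orders_customer ON orders (customer_id);
""")
conn.close()
```

Declaring this on day one is nearly free; retrofitting it onto a table with millions of rows of duplicate data is the hard, cascading part.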
The thing with the system design script is that it seems so Rube Goldberg. Why not just ask the important bits?
If the interview is testing for the ability to learn new difficult things quickly, then it would seem to be successful.
Is "cramming" the same as "learning"? Is knowing the chemical reaction between an acid and baking soda the same as being a world class patissier? Many of these exercises are like "demonstrate that you can make this dough rise" but not actually considering "can you make an artful and delicious pastry"; the two are totally different.
The most surprising thing, when they were asked about their experience, was that most of the details had already been lost before they even started the role.
You can only get so far as a pastry chef knowing the recipes. Understanding the chemistry does take your craft to a higher level.
I’m never really surprised that there are so many security vulnerabilities out there. Tons of software is completely half-assed: slow, error-prone and inefficient. The market supports it because VCs keep pumping out companies that throw spaghetti at the wall until something sticks. And it has had an influence on the kinds of skills we optimize for at the hiring level.
The problem I see isn’t that we’re trying to hire better developers. It’s that the tests we use are misaligned. Getting into a MAANG through all these hazing rituals, only to get stuck resizing the corners on buttons or editing XML configuration for some ancient system, is a waste of time.
But so is hiring programmers that only understand frameworks.
At some inflection point, though, cramming becomes learning. This is in a different place for each individual, but it's true regardless.
Nobody hires anybody other than artists based on whether they can "create an artful and _____ ____". People are hired based on what they have done, what they claim to be able to do, or whom they know.
Depends on the situation and job of course, but cramming isn't necessarily always a negative thing. Sometimes you may only need to learn a technology/skill/etc. to accomplish something quickly, but then not need it again.
But yeah in general I agree with you – I just don't think it's necessarily always so clearly a waste of time.
The problem is not everyone has the chance to work on super-scale systems. So what are you going to do? Weed out anyone with the potential to become great at that just because they've never had the chance to try?
Demonstrating the ability to learn, even if it's just scripted, should be good enough. Granted, they might get passed over by someone with actual experience, but the lack of real experience shouldn't be a deal breaker. It's fine as long as the employer has realistic expectations of the employee.
Systems Design is IMO the most useless part of those interviews.
Interviewers are just trying to find someone who would design a system the same way they did. They used a queue? They want you to add one. They used serverless? Better say the word "Lambda" somewhere. They hate serverless? Ooops better stick to regular servers.
In the end yeah, it's about following a script, but also reading the room.
I think you missed the part where you describe their current experience.
If he was even considered for L6 staff, then his on-resume credentials already justified him at L5 Senior or L6 Staff.
The interview performance is what pushed him over the edge into L6.
It seems like you think they don't deserve a staff level position?
> If he was even considered for L6 staff, then his on-resume credentials already justified him at L5 Senior or L6 Staff.
I don't want to put anyone down here, but working experience with this individual says "probably not". Did you also post this in another comment recently (yesterday)?
I recently ran an interview process for a relatively senior eng role at a tiny startup. Because I believe different interview methods work better for different people, I offered everyone a choice:
1. Do a takehome test, targeted to take about 4 hours but with no actual time limit. This was a non-algorithmic project that was just a stripped-down version of what I'd spent the last month on in actual work.
2. Do an onsite pairing exercise in 2 hours. This would be a version of #1, but more of "see how far we get in 2 hours."
3. Submit a code sample of pre-existing work.
Based on the ire I've seen takehome tests get, I figured we'd get a good spread between all three, but amazingly, ~90-95% of candidates chose the takehome test. That matches my preference as a candidate as well.
I don't know if this generalizes beyond this company/role, but it was an interesting datapoint - I was very surprised to find that most people preferred it!
Interesting. I think I slightly prefer take-homes, mostly because they're not as hectic. The problem ends up being that I don't have any guarantee that the other side will spend any time on it. At least with an in-person, I know the company had to incur a significant cost, so my time is less likely to be wasted. There's already a growing asymmetry in job searches, and this is one more trend that accelerates it. I still have a non-technical friend who's annoyed with me for not completing a take-home years ago because, while the main ask was obviously trivial, there was enough slop in the directions that I could kill hours trying to decide if something was good enough to submit. On top of that, it expected a fairly narrow set of techs that, while not difficult to pick up, were not something I was interested in learning enough about for a chance to maybe help some company that "couldn't find any technical talent".
> The problem ends up being that I don't have any guarantee that the other side will spend any time on it.
This is a key thing for me. I consider it a moral obligation to give material feedback to anyone who does a takehome test for me - to invest at least some serious time evaluating it. Likewise, I would never give a takehome test until the candidate has at least had a phone screen with someone at the company and there's some level of investment on both sides.
On the other hand, I know a few junior devs right now who are submitting resumes and getting sent takehome tests off-the-bat. And, of course, after spending hours on those challenges, they get only form-letter rejections. I understand why companies do that - there's a glut of junior devs right now and any early-career role gets flooded with resumes that you need to somehow pare down - but I still consider it unconscionable.
I understand why you might not want to risk dealing with that.
I'm your average skilled introverted engineer, but after more than a dozen years of experience and problem solving, I'd go with #2. I feel I'd be able to explain myself much more easily, have to do much less work, and probably have many more ways to impress the interviewers face to face. I have also been on the receiving side of take-home tests and I know how hard it is to impress someone with those.
For me the problem word here is "impress".
When I hire, I don't need someone to impress me; they have to present themselves as able to do the job.
I think a lot of devs believe they have to hire someone that impresses them, and we end up with insane tests and adversarial interviews.
Yeah, of course there will be people who blow through the task much faster than others, with better-than-average quality, and they will impress me and get the offer first - but oftentimes they already have multiple offers, and after negotiation they will pick some higher offer, and we have to get someone we can afford to do the job.
> ~90-95% of candidates chose the takehome test
I was expecting "3. submit a code sample" to be the overwhelming winner here - did anybody choose that? Seems like a no-brainer, since it's already done...
The code I've written in personal projects has been pretty messy -- whereas code I've written professionally tends to be better (not perfect), but of course I can't share that code.
On the employer side, I would prefer the take home assignment over existing code, unless the existing code was super high quality or highly relevant to the company.
Work samples are a trap. Real problems are always way more hairy than contrived ones, which is not appreciated by people who were just introduced to the problem 30s ago. Since you wrote it while trying to actually accomplish something rather than polish up a perfect little nugget to impress someone, you probably spent way less time per line.
I only had one code sample.
As others have pointed out, there are a lot of problems:
- Many engineers don't write code outside of work and so don't have much code they can show off without breaching their employer's trust
- Those that do often don't have recent code.
- Those that have recent, personal code to show off have often written it to accomplish some goal and the code isn't necessarily a great example they feel like showing off
Really, the only people I've seen have good code samples are those who do extensive open-source work. And for those people, I'm probably already aware of that work when I checked their resume + opened their github.
99.9% of the code I’ve written is proprietary.
Did you ask why they chose the take home?
I'd favor 1 and 2 over 3, because I feel that you understanding the problem I'm solving gives a better context to appreciate my approach.
If I take it home and hand it in later, you can compare my solution to yours (or others) and better grasp my coding style, even if it's different from yours, since we worked on essentially the same task.
If we're working as a pair, you can observe my thought process. I can better express my reasoning for picking certain idioms.
With past code, there's almost no context. It's just code that was written for some other project. In my opinion, this can be very useful in preliminary steps, when I'm sending out my resume and you need to validate that I can indeed write code. In later stages, I think it can be impressive if I have a working app and can walk you down the code for a particular feature. But who has working software laying around? We all have code that works with a few hacks, sorta...
I'd favor 1 over 2, because it reduces the probability of blundering. Between spending 2 hours on-site with a higher risk of messing up (due to pressure, Murphy's law, etc.) and taking the assignment home to work on it for an extra 2 hours, with ample time to freely research whatever I don't understand, it's a no-brainer for me.
Interesting! I like the idea of choice, but as a hiring manager it makes my problem harder. How do you compare the results from different choices equitably? I find trying to compare candidates fairly to be quite difficult, even when they have the exact same interview.
Last time I did a coding interview for real, I had the choice of any programming language, and could choose between 3 different problems to solve. I liked that quite a bit, and was offered the job. Being able to choose Python, instead of, say, C++ in a time-bound interview almost feels like cheating.
> as a hiring manager it makes my problem harder. How do you compare the results from different choices equitably?
That makes sense, and it's the perspective that's being drilled into a lot of us. Implicit bias and all that. But in my experience, the comparison problem has never been that big of an issue in practice. I guess it depends on the hiring climate, but I'm much more familiar with spending a lot of time going through mediocre candidates and looking for someone who's "good enough". And this is when the recruiter is doing their job, and I understand why each candidate made it to the interview stage. Sure, sometimes it's a hot position (or rather, hot group to work for) and we get multiple good candidates, but then the decisionmaking process is more about "do we want A, who has deep and solid experience in this exact area; or B, who blew us away with their breadth of knowledge, flexibility, and creativity?" than something like "do we want A who did great on the take-home test but was unimpressive in person, or B whose solution to the take-home test was so-so but was clearly very knowledgeable and experienced in person?" The latter is in a hypothetical case where everyone did the same stuff, just to make it easier to compare, but even in that setup that's an uncommon and uninteresting choice. You're comparing test results and trying to infer Truth from it. Test results don't give a lot of signal or predictive power in the first place, so if you're trying to be objective by relying only on normalized scores, then you're not working with much of value.
Take home tests or whiteboard tests or whatever are ok to use as a filter, but people aren't points along a one-dimensional "quality" line. Use whatever you have to in order to find people that have a decent probability of being good enough, then stop thinking about how they might fail and start thinking about what it would look like if they succeed. They'll have different strengths and advantages. Standardizing your tests isn't going to help you explore those.
Highly agree with all of that, perhaps save the conclusion. I still try to standardize the interview process, but we have enough different kinds of interview phases to capture different strengths and weaknesses of candidates. I still want the interview to be fair even when people respond very differently. You’re right that it doesn’t often come anywhere close to a tie. But sometimes candidates aren’t vocal and don’t vouch for themselves strongly but they are great coders, and sometimes people are talkative and sell themselves very well but when it comes down to technical ability they aren’t amazing. The choice of coding interview could obscure some of that, so mainly I include other parts to the interview, though I’m still interested in how to compare someone who does take home coding to someone who does the live pair programming, for example. I kinda want to see how candidates handle both. ;)
Oops, sorry, I think I mangled my conclusion a bit. I'm not against standardizing tests. If it doesn't get in the way of other attributes, standardization is very valuable. It's just that those results aren't enough by themselves.
> I still try to standardize the interview process, but we have enough different kinds of interview phases to capture different strengths and weaknesses of candidates.
100% agree with this approach.
> Take home tests or whiteboard tests or whatever are ok to use as a filter, but people aren't points along a one-dimensional "quality" line.
They are unless you're going to hire everyone who ever applies. At some point, you're choosing between two candidates, and the only way you can make that choice is by projecting them onto a one-dimensional line. The approach you describe:
> stop thinking about how they might fail and start thinking about what it would look like if they succeed. They'll have different strengths and advantages.
is one method of doing that, but not an especially effective one.
in one interview for a python job i was allowed to submit solutions to tests in pike, which i had more experience with but none of the devs at the company had ever seen. for python they had an automated testsuite, but i remember my interviewer telling me that the devs were poring over my test results to verify them manually while we continued other parts of the interview. i got the job.
Yeah, that was the explicit tradeoff: I'm losing the ability to compare apples-to-apples. I decided it was worth the risks associated with that - that I'd wind up with two candidates who picked different options and I couldn't decide between them because I got different signal between them. As it turned out, it didn't really come up - nearly everyone chose the takehome. On a larger scale, though, you'd definitely have to grapple with that.
Yeah, I saw that coming a mile away. Don't you remember uni? Students always breathed a sigh of relief when the final exam was a take home project.
Now add this to the fact that a vast majority of your applicants are going to be feeding your assignment directly into an assistive LLM.
As an interviewer, you really can't win.
- If you have a take-home test, people will complain that it's too involved and time-consuming.
- If you do whiteboarding, people will claim that it's not representative of the actual job.
- If you do paired programming, people will claim it's unfair to those who don't test well under pressure.
That's because these three methodologies test different things. Take-home is the most representative, but it's heavily skewed by the prize. Whiteboarding only tests for algorithms knowledge and a bit of problem-solving. Paired programming relies more on communication. And the last two add time pressure that rarely exists in real-world scenarios.
The fact is, it all depends on the person you need and the team they will be slotted into. But it seems that the ones who know the criteria are rarely involved in these matters.
That's why I offered the choice: different people do well under different circumstances. I figured it'd allow me to hire a great dev who hates timed challenges or a great dev who hates take-home busywork. In practice, though, everyone liked the take-homes, so ¯\_(ツ)_/¯
As for feeding it into an LLM, I go back and forth on that. With their current capabilities, the project is slightly too complex for that to work. You could get a lot of help from an AI, but you still have to put the pieces together yourself. And I'm fine with that! If AI tooling makes you a faster/better dev on a test, I'd expect it to do the same in real work.
Longer-term, though, I worry that LLMs still won't be able to "write the whole thing" for real work, but will be able to "write the whole thing" for takehome tests. And as such, I'll have to figure out how to handle that.
Why would you even do any of that for a senior role? I wouldn't waste my time with it, and it shows you don't know how to interview/evaluate for a senior position.
> Why would you even do any of that for a senior role?
I used to think like that ... then we hired a batch of 3 "senior" devs who could not do simple, everyday coding tasks, even in their ostensible languages of preference. All had come with exceptional resumes and personal recommendations.
So now, no one at any level gets hired without demonstrating that they can at least write a fizzbuzz (roughly the sketch below) and commit it to a repository.
Now, I doubt that senior engineers in other fields have to do this sort of thing. But, most other engineering fields have licensing/accreditation with accompanying post-graduate academic curricula. We don't (but probably should).
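For anyone who hasn't seen it, the fizzbuzz bar really is that low; a passing answer looks roughly like the Python sketch below (any language the candidate prefers is fine):

```python
# FizzBuzz, the entire exercise: print 1..100, substituting "Fizz" for
# multiples of 3, "Buzz" for multiples of 5, and "FizzBuzz" for both.
for i in range(1, 101):
    if i % 15 == 0:
        print("FizzBuzz")
    elif i % 3 == 0:
        print("Fizz")
    elif i % 5 == 0:
        print("Buzz")
    else:
        print(i)
```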
I've come around to the idea that all people who will be expected to write code as part of their job need to be evaluated for their ability to write code as part of the interview process. I've seen too many people write convincing and impressive resumes without a single ounce of technical ability to their name.
You're hiring for a senior role, not a junior or mid.
Your candidates aren't fresh out of school or just starting their careers. They already have plenty of work experience. They already know how to code and have been doing it for years. A conversation will tell you much more about their approaches to problems solving, team work, and mentoring than a coding challenge will. Looking at their GitHub, their work experience and references will tell you the rest.
It's like asking a doctor to go over basic human anatomy.
Have you ever tried to hire developers? I have, many times. It's shocking how incapable most people with a history of working as senior developers for 10+ years are at basic programming questions. I'm talking about stuff at the level of Fizz Buzz, or barely more difficult, being solved in a language of their choice with ample time.
Through all of the technical interviews I've run on behalf of multiple companies, I've become convinced that our industry is full of imposters. A terrifying number of people who are employed as developers cannot code, beyond making tiny edits to existing code, or slinging StackOverflow answers and AI-generated content around. They cannot think for themselves. They rely heavily on their coworkers to make any substantial progress. If anything, modern AIs have made these people more difficult to spot. I've found these imposters within the companies I've worked for too, and if it's not possible to move them to a non-technical role where they can be productive, then it's best to fire them. If that isn't possible, then keep them out of the way of any critical work, but know that they'll continue to be a drag on their team, and will decrease the morale of those on their team who have to keep covering for them.
Once I even worked on a team (as a junior) where only 3 members out of ~16 even knew the language that was exclusively used for the job. The remaining team members accomplished nothing every day. Those stand-up meetings were painful. Drawn out to 45+ minutes of standing in a circle while the unproductive team members found new ways to explain why they haven't made progress. Then I'd see them return to their desks, and not even open their IDEs for weeks at a time. They seemed to keep their jobs because their boss didn't want to fire them, and it cost him nothing personally to keep them around, offering them a sort of corporate welfare. Perhaps having such a large team even helped him politically, but it's unclear if he cared about that since he also spent almost all of his time slacking off, playing games in the break room, while the 3 productive team members carried the rest of the team.
>They already know how to code and have been doing it for years.
I'm telling you from my own experience that this is observably not true. Twice now I've been burned by internal hires who objectively had years of experience (easy to verify when they have been working within my division) and yet couldn't code their way out of a paper bag.
And tangentially, with some of the health care experiences I've had I very well might try to administer a basic anatomy quiz to potential doctors if I thought I could get away with it.
> A conversation will tell you much more about their approaches to problems solving, team work, and mentoring than a coding challenge will.
Having hired many developers over multiple decades, let me assure you that you should definitely make sure they can actually code. Do definitely have those conversations, too, but trust me when I say that a technical conversation, a degree and several listed years of experience as a software developer on a resume are not enough to guarantee that they can actually code.
Why wouldn’t you? How should one evaluate a senior position, in your opinion? Are you suggesting a senior dev shouldn’t need to demonstrate any coding skill? (If not, why not?) Or are you suggesting that there are other faster ways to show coding skills?
I responded to a different comment but it's also relevant here.
You're hiring for a senior role, not a junior or mid.
Your candidates aren't fresh out of school or just starting their careers. They already have plenty of work experience. They already know how to code and have been doing it for years. A conversation will tell you much more about their approaches to problems solving, team work, and mentoring than a coding challenge will. Looking at their GitHub, their work experience and references will tell you the rest.
It's like asking a doctor to go over basic human anatomy.
You’re underestimating the variance in senior devs, and you’re failing to look at this from the interviewer’s perspective. Sometimes doctors should be quizzed on anatomy, btw.
They need to be able to compare candidates, and senior devs will be expected to code. Their goal is to pick the best candidate. Asking them to demonstrate their coding skill isn’t just fair game, it’s what half the interviewees actually want. A lot of people prefer being evaluated on code than on soft skills (just read the threads here for tons of evidence.)
It’s useful to know whether a candidate is going to be on the principal engineer path or management path. It’s useful to know whether they are actually good at coding, and how good exactly. And it’s also useful to know if they are willing to do what needs to be done once hired. Someone interviewing for a senior coding position who refuses to code during the interview is about as big of a red flag as you can have, and I’ve been part of hiring teams that will politely excuse someone for that, and I agree with the reasoning.
This might not make sense until you’re on the hiring side of things, but having a resume does not entitle one to being hired for a higher title and more money.
You'd think so, but during the interviews I ran, there were plenty of candidates for senior positions who just cannot code.
They talk the great talk about problem solving and team work and mentoring, but once it's time to walk the walk, they just can't seem to write any code - and we are not talking advanced algorithms, we are talking Fizz-Buzz level.
Maybe it's different for doctors, but as long as such people exist and apply to jobs, we need to ask programming questions.
This also means that if _you_ are applying for a senior position on a larger team and they did not ask you any coding questions, they are either:
- really quick to fire people for low performance, or
- carrying people in "senior" positions who just give advice and nasty code reviews, but don't actually write any code.
Either way, it's a red flag for the company.
What would you waste your time with?
I exclusively give pairing interviews, usually just 1 hr. The variance between good and bad candidates is amazing, and you wouldn't guess at all from resumes or casual conversations. Most candidates have given me very positive feedback. Why wouldn't you want to be interviewed like this?
I'm a senior developer. Of course I know how to code. Do you know how many code challenges I've done over the course of my career? 99% of them were terrible at evaluating my ability to code. They are a waste of time at the senior level. You are much better off with a conversation, looking at past work and experience and references, than you are doing code challenges.
After interviewing ~50 people using this technique - I'm sorry but there's no "of course". I cannot tell by looking at your past work experience or talking to you whether you know how to code.
I know this, because I used to look at resumes and have long conversations before I ran my pairing interview. And I found the results almost uncorrelated. Now I save the fun conversations for people who make it through the screening. But by then I already know if I want to hire them.
My pairing interview (it's a pairing session, not a code challenge) covers aspects of engineering that I specifically care about. It's not terribly hard. But at the end I'll have a pretty good idea if I want to turn this person loose in my codebase.
I second all of that heartily. Of course the senior eng with 10 years hands-on experience at a household name tech company knows how to code... except that they don't. At all, apparently. That seems so ludicrously unlikely unless you've actually interviewed a lot of people, then it's just the sad fact.
To others reading along, I'm not talking about someone not being able to write a multitasking OS on their own. I mean, I'm not sure how these people possibly graduated college not knowing how to write a trivial program in the language of their choosing. Turns out you can get surprisingly far into a software engineering degree without ever writing code. I wouldn't have believed it. Evidence proves it though.
Based on my interviewing, there's a ~25% likelihood that you're flat out lying, and a ~75% likelihood that you're overstating your skills.
The number of people with impressive looking resumes, and good command of the jargon, but who still can't successfully code hello world is shocking.
I don't believe you (general you, not just the parent) can even code until you demonstrate that to me, or until the industry comes up with a reputable way to tell me that you can code. Until then I'll be making sure you can at least write code that compiles and finishes a brain-dead simple task.
Because all the evidence[0] I've found has shown that work-sample tests and repeatable, structured assessments are the best indicators of job performance. I understand that a lot of devs think they can get a feel for a candidate by just having a good ol' conversation with them, but the body of study on the subject says that that's just wrong. And so I have people do a task that's as close to the actual job as humanly possible and evaluate them on that.
https://www.researchgate.net/publication/283803351_The_Valid...
Because for a senior role you need to hire a person with senior role skills, I suppose. Just N years of experience at X company doesn't provide that. I'd rather look at someone's github repo than talk to their reference tbh.
I've never worked at a company with a public repo. I never understood this. I have a life outside of work
Neither did I. But I'd rather just "smuggle" out some past work and "rewrite it" to make it inconspicuous, if I didn't have my own personal work to show. The company I work for isn't well known so saying "I worked there in a senior role for 10 years" isn't going to get me the next job. If I didn't have anything to show for that I either made for fun or could display from a past job, I'd not apply for another until I had tbh.
People say "No I don't have any past hobby work, never contributed to OSS, and all my professional work is verboten to show" that's not unusual, but it's also not an encouraging sign to me when interviewing.
The elitism is rampant. As an embedded sector guy everyone thinks every other RTOS isn't a real RTOS and the knowledge and wisdom don't translate.
Every time I find some open source I would contribute to, what I wanted to implement is already there or is already in the works. I'm not going to change shit just to change it. I'll tip them the price of a coffee and leave.
If I'm interviewing for people to work in some specific tech stack or business, it's just as good to see completely different things they can show. Could be a php site for their book club or whatever. But the key is to have some code to discuss that is their code. Because I want to see them describe and reason about code they know, and it works 10x better if it's code they know, rather than some code I present and say "here, let's reason about this code".
Why not discuss a problem you have? You're obviously hiring for someone to solve your problem. Looking to check if they can code (in a specific language) seems like the XY problem.
Because while they might not be familiar with or enthusiastic about code I give them, they can be about code they have written. It's the classic "describe one time you did something poorly/well" question, but with code. I want to see them critique some code, or explain what's so great, or why it ended up the way it is.
Reasoning about a problem i have (even showing some code) is also a good part of an interview. But it's my side of the field. I just found it's much better to move the interview to the interviewee's side of the turf, because they are more comfortable there.
I would like to just be asked. I can't post my code on github but I can talk about my projects at length.
For all of this thread, the solution is simple: a small take-home assignment where you write the code to discuss during the interview.
I know people are vocal about not wanting to do take-homes. But if the take-home is reasonable and used as a talking piece, it checks all the boxes for a good tool.
I think in reality I had a single person who outright refused, and of course a bunch of people who didn't bother to deliver - great candidates delivered it the same day, busy great candidates delivered it over the weekend.
Wouldn't that be a red flag if they don't have any public code at all as a senior developer? They aren't fresh out of school or just starting their careers. They should have something.
I'm a senior developer and my github is half guitar tabs. I'm not interested in peacocking. Maybe it's because Hackaday refused to put my name on the article with my senior design project years ago and I just don't want to play the game.
That's a great signal (that you have a hobby playing guitar). If the other half is also interesting, it sounds like a great portfolio.
I appreciate it. After thinking about this I think I just need to get over my hangups about duplicating things already done or things that aren't really important.
I just take tabs and rearrange them for personal use. Learn from my mistake and just post whatever to your GitHub; if you fret over its usefulness or purpose, you might never grow your portfolio at all. That was my mistake. You don't have to have some GNU front-page exploratory project.
If you want to shred guitar, look up Troy Grady to grok efficient mechanics that won't break your wrist (we type a lot for work as well).
And that might be hurting you in the long run. ¯\_(ツ)_/¯
All of the meaningful code I write is private for the company I work for. This is how most code is stored.
Mine too. Or 99.99% of it at least. But if I were to apply for a job, I'd focus more on having at least a little repo of something to show, than on polishing my CV or doing leetcode excercises. That 0.01% code I wrote which is my github is my CV. That 99.99% I wrote for my last employer won't be seen by my next employer.
Yes exactly. The code IS the cv of a developer.
> These high-stress, solo coding assignments don’t reflect the actual job. Instead, they put developers in situations they’d never face in the workplace, where collaboration and support are standard. When was the last time you had to debug an ancient codebase without documentation or help from a team?
I have had to do this, on numerous occasions. Sometimes there is a system where no one who built it is left, the documentation doesn't exist or can't be found, and that's it. This is not a great example. Being able to debug a program, follow requests around and see what's happening is a good skill.
What this example is really bad at, though, is that you pigeonhole yourself into one kind of developer. If you have a legacy PHP application, you are weeding out the Java, Python, Ruby, Go, etc. developers that do have those skills but don't know PHP and will be slower at it. Despite them possibly being stronger developers given a couple months of hands-on PHP.
> A “4-hour” assignment can quickly turn into 8, 10, or even more, just to ensure it’s in great shape.
This I agree with. Suggested time can easily be gamed, so someone who has more time can look like a better candidate.
When I interviewed at Stripe, they had a "debug this!" question ready to go in a number of languages. (I think the list was something like: ruby, java, python, javascript, go, maybe one or two more.) I thought this was pretty great, but also seemed pretty labor intensive.
That is the way to do it, good on Stripe. It's definitely labor intensive, but I think that you either need to outsource it or be big enough that you can source the talent to build a half-dozen-plus idiomatic and kind-of-big applications to make it a good test.
Yeah. In general I was impressed with Stripe's interview process. It was still the usual miserable day-long gauntlet, but with some thoughtful decisions mixed in, IMO. (This was 2022, FWIW.)
I'm pretty sure the thing they had me debug was a real bug they had hit in an open source dependency. It seemed like they had hit the bug, taken a snapshot of the code at that version, and written a test to exercise it. (And presumably then they fixed it, separately.) So it felt a lot like bugs I run into in the wild, but with some of the hard debugging work out of the way (getting it to the point of an easy repro in a unit test) in order to time-box it. I really thrived in this interview. It may be the only interview I've ever done where I felt like I was just using my actual skills like I would during the work day.
> This is like asking a Ruby developer to debug PHP as a test of flexibility.
Sounds like an OK test to me. Great (senior) developers should be able to do that kind of thing. Categorizing yourself exclusively as "a Ruby developer" is a career trap.
> Categorizing yourself exclusively as "a Ruby developer" is a career trap.
And a lucrative one at that.
In the short term. And then the entire industry experiences some kind of a technological shift, as it will for many more cycles, and you're jobless as early as the first wave.
It's ridiculous how developers mindlessly accept that you should constantly be learning to keep yourself relevant, but keep it shallow by just jumping from one tool to another, instead of encouraging deeper knowledge of generalizable patterns that stay relevant across waves of technological disruption.
Exactly. I managed to ride a nice maybe 8 year wave as an "OpenGL expert" but the industry moved on and I didn't go more general with 3D graphics to climb out of my niche. I haven't been in that industry since the OpenGL waterline crested and went back out to sea. Learned that lesson.
Be a generalist. Yes, deep specialists exist and yes, some of them have successful careers based on their deep specialty, but betting on specialization is like a high school kid planning to be an MLB baseball player.
that's what i am trying to explain in every job application. but almost every job description expects multiple years of experience with a very specific tech stack. so far, being a generalist with senior-level experience has not yielded a single positive response, let alone an offer.
So frustrating. Keep trying! You’ll find someone who values your background.
Embedded C and C++ should be good for at least another 20 years or so (except AI will take over by then).
Ruby is so god-awful, yet so many successful companies use it as their bedrock language, that there are always going to be jobs maintaining the piles of crap it leads to.
Yep. Craftspeople take their craft seriously, sweat the details, learn the patterns, go deep.
And that’s complementary with marketing oneself as a Ruby/Rails dev or whatever else the market needs at the time.
It’s lucrative if your bar is 6 figures.
No engineer that makes 7 figures calls themselves a ruby developer with the exception of DHH.
Nominated for most HN comment of the week.
Come on! Everyone posting on HN makes $500K a year, has a vacation home in Tahoe and drives a brand new Ferrari. They think "L6 at Google in the Bay Area" is the median job level for the entire software industry.
If this article and the comments here are to be believed, all you need to do to make L6 is grind on leetcode.
Sounds like a good deal to me.
Where’s n-gate.com when you need them, eh? :)
Man I miss n-gate. Sometimes I go back and read the archives just for fun.
Isn't the average for software developers in the US ~100k (before taxes)?
Assuming high earners are skewing that toward the higher end, most people aren't making 6 figures, and the bar isn't which language they're programming in.
One of the few areas of reliable statistics about US software developer pay comes from the US Government Bureau of Labor Statistics. The median wage as of May 2023:
$132,270
This means half of all full time employed devs are higher, and half are lower. The mean is more skewed by higher earners but is similar:
$138,110
It also varies quite widely by geographic location, from a mean high of $173,780 in California to only $125,890 in Texas, from $199,800 in San Jose to $132,500 in Austin to $98,960 in rural Kansas (where I have actually developed software before!)
The short of it is, the vast majority of software developers do not make the top salaries. Even L6 is rare within the top tier of tech. There is a lot of delusion in this field around pay, so it's important to be well informed. As a field we are still very well paid compared to most other jobs especially considering our safe working conditions and lack of needed credentials and education. Compared to most of the work on this planet, it's still a goldmine.
100k is on the lower end of junior salary
https://www.bls.gov/ooh/computer-and-information-technology/...
Junior salaries go down much lower than $100k.
That doesn't match my experience on both ends of the hiring table. And forgive me if forwarding the BLS statistics to candidates doesn't get them to accept offers, because I know it wouldn't help me when I can get paid a lot more elsewhere.
> That doesn't match my experience on both ends of the hiring table.
Congratulations, your experience is limited. The BLS stats represent the actual US salary data, not just your limited experience. If you want to make a claim about salaries in the US then look at data across the US and not just whatever is true within your limited bubble.
> And forgive me if forwarding the BLS statistics to candidates doesn't get them to accept offers
Did I ever even suggest such a thing?
I take the BLS numbers with a huge grain of salt, they are useful for identifying trends and not absolute facts on the ground.
> Did I ever even suggest such a thing?
My point is that the BLS doesn't set market rates or report on them.
Is this some covert attempt to push tech salaries higher?
...in tech hubs. That's a pretty great living in most other places.
Most places don't have qualified tech workers in spades.
There's no need to be shooting for 7 figures. Get a life
Working 40 hours and making 7 figures, gives you plenty of time to have a life.
The key here is not to categorize as a “language developer”.
Is 7 figures even realistic? I was under the impression that even a senior FAANG job was still just under $300k typically. Does anyone outside of a CTO/CIO make $1mil+?
I know half a dozen Senior Staff at FAANGs and they make, I would say, around 600k - 700k, more of course if stocks appreciate, but that's hard to control.
Consistent total compensation offers of $1mil+ are probably reserved for roles above these, sometimes called Principal or Distinguished. I would say the rate of having one of these in an org is like 1 per ~150 engineers.
How many developers are out there making $1,000,000/year? Not in the hopes of hitting it big someday off of equity, but actual annual payoff?
How many companies are out there paying $1,000,000/year for devs?
How many devs who can command that kind salary are going to put up with bullshit coding challenges?
Many thousands of Sr. Staff engineers+ and Sr. managers+ across FAANG.
Google alone has 4000 directors all making a million at minimum.
I have only worked at FAANG across my nearly decade long career. So, it’s a biased but very large sample.
If you sample only from the 10,000 or so current Olympic athletes, you will draw similarly incorrect conclusions about the 8 billion planetary general population's athletic ability.
Okay, and let's say there's another 4000 Sr Staff+ engineers at Google making that much. That's 4% of the company.
That's not unheard of, but it's certainly rare. $1 million+ is not a "typical salary", even at Google.
Bro this comment is so out of touch it's ridiculous. A 6 figure salary is lucrative. 7 figures is a crazy pipe dream that basically nobody will ever experience.
That's either a fun or terrible exercise depending on who is administering and how. However, if you said it's a Ruby role and the candidate is good enough to be picky, you may scare them off when they think your description of the role was a lie.
I agree. I've had this happen often enough on the job that it's not a totally made up example. And usually you'll be one of two or three people in the whole company who is able and willing.
Debugging old vendor-specific DSLs, or code so old, using frameworks and standards long out of fashion and support, that it is halfway to being a different language.
Adding support for some back-ported features, or patching security holes in an old client or legacy stacks.
Or at a big company we had some escrow code from a much smaller partner that we ended up becoming responsible for.
Often getting the environment setup for proper debugging is more work than anything.
But yes, it's a good test for a senior+.
Career over sanity is a life trap, though.
Most of my development experience is in C/C++ and Python.
Know what I'd do if the interviewer asked me to debug PHP? Pretty much return the question:
"I've never used PHP. Are there logging macros/functions defined somewhere? Where do I see the output? syslog maybe? Is there a debugger of some sort I can use? How do I run each `piece` of code in isolation?"
(I am assuming the job listing did not explicitly mention PHP experience. If it did, both the recruiter and I would absolutely deserve for me to fail this interview.)
I once got hired after passing a JavaScript test, never having written a single line of it. The challenge was more conversational, like:
Interviewer: Now reverse this array.
Me: OK, in Python that would be array.reverse(), or reversed(array). I bet JS has one of those, probably the .reverse method.
Interviewer: Great guess!
That was genuinely fun. I came out of it feeling like I'd learned a few things, and the other person got to see how I'd reason about a new problem.
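For anyone curious, both guesses in that exchange land on real APIs; a quick Python illustration:

```python
# Both Python spellings from the interview exist:
nums = [3, 1, 2]
nums.reverse()                 # list.reverse() reverses in place -> [2, 1, 3]
copy = list(reversed(nums))    # reversed() returns a new reversed iterator
# And the guess about JS was right too: arrays have an in-place .reverse() method.
```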
I interviewed for a Scala role one time despite never having written it professionally. I suppose it's obscure enough that they couldn't afford to be too picky.
It was a pair programming exercise and so with some help from the interviewer and the IDE I was able to fumble through to a working result. I agree it was fun and educational.
Being able to do that without significant time constraints isn't that bad IMHO, but inside of an interview?! That's borderline laughable to me, as you're only going to be filtering people based on whether they have any previous PHP exposure or not.
The more people online complain about coding interviews, the more confident I am that they are the absolute best way to filter candidates for a software development job. Across the industry there are way too many talkers/pretenders/meeting schedulers and not enough people who can roll up their sleeves, jump into the code and actually get stuff done. And this problem becomes worse at higher levels. You can bitch about it all you want, but you aren't owed that cushy $500K/yr FAANG job. If you can't get yourself to brush up on basic programming and write some for loops then companies will simply move on to someone who will.
I am good at passing these interviews and am at a faang (will be moving to another one this month). These interviews are useless and provide a false signal on problem solving skills and people's abilities to learn things. The interviews specifically don't test whether or not somebody can roll up their sleeves and jump into code, because if it did why do I as a new hire have to explain so much about software engineering and debugging practices to people who have been here so long?
If you've actually worked at a large company, you'd know that 90% of the real work is done by like 5% of people (maybe even less). If the interviews worked this ratio would be so much better.
What do you think would be a better test?
For new grads, just let them in. Just make sure they know something so a simple coding exercise is maybe good there. For more senior people, actually talk about their resumes and try to understand what they did, why they did it, and importantly, if they actually did it. I never got asked anything about my previous experience when I got into my first faang beyond some superficial things like "what tools did you use".
I've seen interviews where you actually have to present a thing you've done and then explain the decision making process behind it, tradeoffs, outcomes, etc. those are more common outside of CS and I'd like to see more of that in CS. Or hell, have them write an essay describing the tradeoffs about some theoretical engineering decision. That'll give me actual info on how a person thinks.
For context, I used to work in self driving cars and had to interview many people who claimed to work in that field and claim to have worked on huge projects themselves. Then you dig deeper and it turns out they were part of a huge group, they never heard of this problem, that problem, never heard of this industry standard, etc. it's like, forget coding, this person doesn't even know the domain as much as they claim and not being upfront about that disqualifies you immediately in my books.
I have mentioned this before. The best interview I had as a candidate was: they gave me access to their codebase, explained an actual, relatively small bug, asked me to fix it, and left me alone (I was seated among their devs; they were minding their business, I was minding my own). I think it took me half an hour or so (can't remember the exact time, it was a few years ago). I fixed it, they asked me to explain how I found and fixed the bug, and they made an offer.
No talking (other than me showing them how I fixed it). No bullshit questions like "where do you see yourself in 5 years" or "talk to us about your strengths" etc.
Second best - they asked me to design and write pseudo code for a simple system. Don't worry about syntax, but make sure to follow good design practices, within reason. Gave me a pen and notepad and left me alone for an hour. I wrote it, explained my thought process, they made an offer.
Then I had shitty interviews - one very large, very famous insurance company - they had 5 rounds of interviews back to back, for a normal developer role, lol. Asked me about some obscure options for grep etc. It was an exercise in them showing off their skills (more like their memory of Linux commands) than learning about my skillset. I couldn't wait to get the hell out of that building.
Of course interviews can't be light when you are hiring for highly technical, high critical positions (like security, for example). But for most software dev positions, the formats above are very efficient. Most software devs are writing "glue" code, not rewriting some mission critical real time OS.
I sometimes ask obscure questions but I don't hold it against people if they can't get it. It just tells me a bit about how the person works, who they are, where they might fit ... I've certainly recommended people for hire that score an actual 0% on those questions.
Yea, I happen to hate coding challenges, but not because they're hard--because they bias towards "people who have time to do coding challenges". That said you're absolutely right about how many phonies are in the industry and coding tests are probably the best we have to weed them out.
The problem I see in a lot of cases is putting too much emphasis on solving the problem in the interview, instead of working through it. Many interviewers claim that they care more about the latter, but in practice failing at the former ends up disqualifying you.
> instead of working through it
Loonnnng time ago (25+ years) I remember appearing for an entrance exam of a very famous, hard statistical course. It was a 3 hour exam, with only 7 or 8 questions. The exam explicitly stated that they do not care much for the actual answer, but the path taken to get to the answer (all questions were math problems). As a kid this sounded weird to me, but now it makes sense. Someone with good problem solving instincts will get to the right solution (even if it takes a couple of attempts to get there) vs someone who got lucky the first time (or brute forced their way)
This goes beyond software interviews and is just how the world works.
"I want to work for your company"
"Ok, prove yourself"
"Sorry, I don't have time"
How do you expect that conversation to go from there? If you don't have time then make time. It isn't anyone's problem but your own.
That's not it though.
Pretend I was hiring a restaurant chef and the industry standard test was "run a mile in 4 minutes", a test that 98% of all candidates fail!
So professional chefs practice by jogging daily so they can prove themselves during the famous "run a 4 minute mile" weeding test and get hired to prepare meals for people.
That's what we're talking about here. I've had to write out the mathematical proof of various algo complexities of B* trees exactly 0 times when writing SDKs for some CRUD application but that's what's expected during the interview.
It's about as correlated with the ability to do the job as running a 4-minute mile, in that younger, more desperate, easier-to-control, and thus cheaper people can do it better. That's what the actual filter is for - to find candidates that are easier to abuse and take advantage of - smart enough to do the work and foolish enough to take the job.
I've hired large teams at multiple companies where such tests were used and that was exactly why we did it. We didn't want the experienced 45 year old wanting to work 40 hours with vacation, we wanted the foolish 25 year old that gave us their weekend.
Actually saying that is illegal, but using a stupid test that selects for it is not.
Hiring professional chefs is actually a great parallel. "Chop an onion" or "make a french omelette" are basic tests that even the most experienced chefs being hired for top positions will have to go through.
"But this isn't relevant to my job, a subordinate will always do this kind of stuff."
"We are a fine dining dinner restaurant, I'll never need to make an omelette."
Doesn't matter, make a damn omelette. If you can't, you aren't suited to be a head chef, or any other kind of chef.
Knowing basic data structures and algorithms and having problem solving skills is absolutely critical to a software engineering job. In fact it is the entire job.
And even that isn't really true. People rarely have to work with anything but the most basic, usually language-builtin, data structures. Usually people glue together a few libraries and build some CRUD.
I'm sure I can do great in any coding challenge that actually tests for problem solving and not esoteric algorithms knowledge. And when you stumble on the latter, it's hard to muster any motivation for the place. It's like applying for a driver position and they ask to verify how quickly you can change a tire (below 5 minutes is a must).
Maybe there were in the past, but currently there is an entire industry out there to help you game the system.
- Cracking the coding interview.
- Elements of programming interviews in Java|Python|whatever...
- leetcode & other sites with paid premium subscriptions...
- mock interview bootcamps...
It's no longer about skill, it's only about gaming the system.
There are physics textbooks and YouTube videos everywhere and yet we aren't all physics experts. Existence of knowledge and accessibility of information does not guarantee everyone can learn to do something and it especially doesn't guarantee that everyone can learn to do something well.
LLMs are another great example
Typical table stakes prep work for a high paying job with a lot of competition.
I’m serious. It’s not even guaranteed you’ll get the job if you do the above, but everyone who passed the big tech interviews will acknowledge they fucking studied for it. What do you expect here?
But who is talking about big tech? Aren't we talking about most companies here? I have had some interviews, and every single one of them had a broken hiring process: silly reviewers who get hung up on things like you not using their favorite code autoformatter for code that is one screen long, or the ghost-job thing, where they apparently did not actually want to hire anyone.
The ease with which interviews allow the unscrupulous to inflate their abilities has been well known for long enough for it to be metagamed and exploited for profit, and now the effects have become abundantly apparent. Imposters who have institutionally infested industries in this manner have reached a critical threshold where their fundamental lack of competency can no longer remain hidden.
Have you got the high paying big tech job then?
If you do the above prep work and pass the multiple interview loops you’re pretty smart honestly. The vast majority couldn’t do it even with all the study possible and even the smartest couldn't do it without any study at all. The bar is pretty high.
What do you mean “gaming the system”?
The companies ask that candidates learn how to solve algorithm problems and the candidates do it.
I would call it “everyone playing a game they agreed to play by the rules they agreed to play with.”
And I don’t know what you mean by “it’s no longer about skill.” It still takes a lot of skill to be able to solve hard algorithm problems, even if you took a course on how to solve them and practiced solving them for 6 months.
When you audition for an orchestra they give you the sheet music ahead of time. It doesn’t mean you have no skill if you practice first instead of just showing up to sight read the music.
Tech interview preparation mostly boils down to rote memorization not really what I would call “developing a skill”. You just cram enough until you can pattern match any kind of DSA or system design problem and apply the solution you memorized. Once you finally land the job, you’re free to forget everything. Then you begin to develop “real skill” on the job.
This
You: “Coding interviews are the absolute best way to filter job candidates for a software development job.”
TFA: Coding interviews are a standard way of hiring.
Also you: “Too many people in the software development industry don’t know what they’re doing.”
The lack of cognitive dissonance is remarkable.
This is not cognitive dissonance.
GP is of the opinion that coding interviews filter out people who can't code.
And they also think the industry is full of people who can’t code. Despite the industry relying on a technique that purports to weed those people out. One might conclude, then, that maybe the technique isn’t as effective after all.
The current system just allows people who understand the system and prep to beat the exam to succeed instead. Yet you still find the phonies get through, so is it really solving the original problem?
Even outside that geography, where the salaries are an order of magnitude less, we still go through the pain because companies copy each other's approach.
Even if we buy that leetcode is good in general, what about for the increasingly more dominant variant, the ML/AI engineer?
How can we even ban them using LLMs in their "ML-leetcode" problem when the problem is on LLM tokenization or something?
Like, if I lose a candidate because they're experts at using cursor + claude to do their coding (i.e. they solve it by writing 3 prompts and filling in 2 errors which are easily spotted in a total of 10 minutes), despite them having NeurIPS caliber publications and open source code, did I filter out a fake grifter, or did I filter a top tier candidate?
I just don't buy that leetcode style problems make any sense for anyone who's doing anything involving LLMs, and like it or not, an increasingly large share of the cushy 500K/yr FAANG+ jobs will involve LLMs going forward.
If it's easier to do with AI help, then you'd expect the supply of qualified devs to go up and salaries to plummet.
It is not only about FAANG though. Nowadays every tiny fart of a company feels entitled to have a hiring process like that and not pay FAANG salaries.
Agreed. And a lot of people will say that you can just memorize a bunch of leetcode problems, but I've always created custom coding problems when I've acted as an interviewer so good luck finding the solutions online.
And frankly, if they're able to adapt a random leetcode problem to be able to solve the coding test that I give them, then that's exactly the kind of adaptability that I'm looking for in a prospective software engineer.
> When was the last time you had to debug an ancient codebase without documentation or help from a team?
Literally every day this week.
Documentation? The code was last updated in June, the documentation was last updated in January - of 2022.
The team? The half that didn't get laid off two years ago all quit for greener pastures. Sure, there's a team that owns this project - along with three others, all of which they've contributed maybe a dozen lines to over the past year, and 11 of those were dependency updates.
I have no issue with doing this on a job, but I have an issue with it in interviews. Not everyone is able to be comfortable in a strange environment with strange people. Give me a damn take-home test and stop talking amongst yourselves while I'm at the whiteboard.
I had a very bad experience at a company that did not conduct any coding tests; there were just 2 or 3 interviews, and that was it.
The issues began when we started working together in a Data Engineering team. Most of our stack was based on Hadoop, Kubernetes, Ruby, Python, and several other technologies that required a basic understanding of cloud computing.
However, since we had such a wide range of work experiences and backgrounds, I often found myself not doing the work I was hired for but instead covering for others. I ended up doing a lot of "glue work" to compensate for the fact that some colleagues couldn’t even handle basic tasks, such as Python CLI packaging using Docker or inspecting jobs in Hadoop. After some time, I decided to leave—not because I thought I was special, but because I was fed up with spending 80% of my time providing support for people who were hired at the same level or even higher than me.
I recognize that the company had very poor hiring practices, but some form of basic technical testing is necessary.
When you say they "couldn't even handle basic tasks" do you mean they were completely unfit to complete their assigned tasks, or that they were getting hung up on simple tasks, while otherwise being independent and capable? The other key question is whether you had to explain the same things to the same people again and again. Being ignorant is excusable, but being unwilling to learn isn't.
> they were completely unfit to complete their assigned tasks, or that they were getting hung up on simple tasks
They were unfit to complete the tasks. But again, it was not their fault; I think it was the hiring that failed.
One concrete example: our pipelines were quite straightforward for data engineers: packaged Ruby and Python CLIs that run commands. The runtime was k8s. One of the biggest issues was that when something broke in production, none of these people could go to the container, check the logs, and understand the failure.
There’s one situation where I think it makes sense to hire without a test: if the company has the resources and money to provide leveling training for all new hires, with no exceptions. I do not know how practical that is.
> When was the last time you had to debug an ancient codebase without documentation or help from a team?
> This is like asking a Ruby developer to debug PHP as a test of flexibility
> If the job requires specific tech skills, test those skills
Sounds like someone failed a coding challenge.
In all seriousness, you need to understand that companies rarely implement their hiring practices for the sake of it. It is usually the best their collective minds could come up with. Telling them to get rid of it without attempting to understand what problem they're trying to solve and without offering alternatives, is, to say the least, not a productive use of anyone's time.
I find in software development we try to reinvent the wheel all too often instead of borrowing practices from other professions. Let's have a look at what civil engineers have to go through to get hired at half the salary of a software dev, and let's incorporate that into our practice. That will make everyone a happy trooper !
> In all seriousness, you need to understand that companies rarely implement their hiring practices for the sake of it. It is usually the best their collective minds could come up with.
Unlikely. It's usually what the most senior person either read about or experienced in the past. In fact I'd say that it is highly unlikely that they did any introspection at all as to what they are actually trying to accomplish.
They just did it like everyone else because they believe exactly what you do -- that someone somewhere created the process with intention.
I feel like we've so lost the plot with tech hiring that we settled on what appears at best to be a local maximum.
Folks on this forum mostly won't remember but about 20 years ago the in-vogue way of interviewing software engineers was to ask "puzzle questions" (e.g. "I have 100 ball bearings and two are a different weight than the rest....") and "lateral thinking" questions ("why are manhole covers round?") because that's what they did at Microsoft and everyone copied Microsoft.
I'm told this is still the common style of interview for mechanical engineers, which says something about what it's like to work in that industry too.
> In fact I'd say that it is highly unlikely that they did any introspection at all as to what they are actually trying to accomplish.
Based on the observations I've made of hiring managers I've worked with, what they're trying to accomplish is to provide the appearance to HR that they at least attempted an unbiased hiring process. The result often resembles a "literacy test" and I suspect one has to be very intentional about avoiding that to practically avoid it.
Might make for better software, though…
> A “4-hour” assignment can quickly turn into 8, 10, or even more, just to ensure it’s in great shape.
The fun part of take-home assignments is that there's a moral hazard incentive to spend extra time on an assignment even if they say "only spend X amount of hours on it," even lying about the amount of time spent if the interviewing company asks.
More than once I have completed a take-home assignment with an emphasis on sticking to the timeframe they recommend while still solving the problem adequately, and then I get chewed out in the follow-up interview: "why didn't you implement X?"
Relatedly, a massively open-ended take-home assignment is a bad take-home assignment since it favors those who have more free time to be thorough. (shoutout to the take-home data science assignments which ask for the final deliverable to be a PowerPoint presentation)
This exactly. And then some person who never met you or talked to you, sitting somewhere in the company, complains about too many comments. Or about you not using their favorite code formatter. Or optional type checking. Dude! If you want a full-blown project setup guide, pay for that!
Remember that in other fields like medicine, finance, academia, and law, getting in involves 5+ years of hoop jumping and commitment signaling that have nothing to do with the final job.
We are blessed.
Genuinely curious, knowing nothing about this field: does, for example, a neurosurgeon with 15 years of experience, when looking for a new place to work, have to pass surgery "challenges", like doing a lumbar puncture in 15 minutes on a fake body to prove his experience?
No, but the medical profession has a ridiculous amount of gatekeeping, so you can't become a neurosurgeon with 15 years of experience in the first place (there are maybe a few thousand worldwide). On the other hand anyone with a computer and a curious mind is a software engineer.
Does the gatekeeping reliably ensure quality, or just filter down the quantity of doctors?
Let me answer your unknowable question with another one:
Which doctor would you prefer do surgery on your brain - the one that jumped through all the hoops and went to 15yrs of med school - or the one that learned brain surgery through a YouTube tutorial?
I think you know the answer. The problem is not gatekeeping in general it’s the detail and nuance of how gates are being kept and how to find the right amount of gatekeeping.
> Which doctor would you prefer do surgery on your brain - the one that jumped through all the hoops and went to 15yrs of med school - or the one that learned brain surgery through a YouTube tutorial?
My answer is that I don't have access to what I'd really like to base that judgment upon: their success rate, adjusted for the difficulty of the procedure.
Simpler, less-risky procedures would therefore have a lower skill/experience bar.
Basically, I'd want someone with decades of experiences to remove a tumor deep inside my brain, but I'd be much more willing to use someone who learned via YouTube this time last year if I'd suffered a TBI and was experiencing swelling and intra-cranial pressure.
It would be great if there were certain specialized mini-doctor programs that focused on simple things like treating the least risky problems, maybe even with AI to help filter which problems need a “real doctor” vs a mini specialist.
We could theoretically have the best of both worlds, right?
No unnecessary gatekeeping, but maximum necessary quality standards (which I distinguish from gatekeeping).
Theoretically? Maybe.
Practically? No one has figured out how. The best we have are coding challenges.
Yeah, what is hard is actually implementing this.
Both
No, but they'll ask for (and actually follow up on) references from their fellow surgeons, and check that they've passed their board exams, and that they're licensed in the state, and that they haven't had huge malpractice claims, and otherwise verify that the person is, in fact, actually a neurosurgeon who does neurosurgeon things in a hospital and is acceptably good at it.
It's not a perfect system, but it's a lot harder to fake being an experienced doctor than it is an experienced developer.
No, because the fact that a neurosurgeon with 15 years of experience is not in jail means they know what they're doing. Not the case for devs unfortunately.
I have a couple family members who are doctors. Medical school and residency were nightmares, but after that they are golden. One of them works one week a month and makes 400k. Others have pretty cushy 9-5s and are taking home ludicrous salaries. They are also pretty recession proof and their salaries are fairly immune to economic pressures. I don't think mass doctor layoffs happened in 2008 or 2022.
With software you never have to stop proving yourself, and your skillset is always a few years away from being outdated. A doctor with 20 years of experience would be welcomed anywhere, but an engineer with 20 years experience is viewed with trepidation. The next "big thing" could roll out at anytime and suddenly crypto engineers are getting 800k job offers so everyone furiously tries to learn crypto stuff. A few years later that all dries up and now you have 100k engineers who are out of work and learned tech that no one cares about anymore. All the LLM engineers might be in the same position next year.
> but after that they are golden.
Of course. They paid an enormous cost and beat other competitors for that privilege which is carefully guarded by regulation and certification boards.
No doubt software has more instability, but the trade off is that you don’t have to do that multi year grind. Easier fire, easier hire. Similar pay.
> skillset is always a few years away from being outdated.
I strongly disagree with this. In your example you use “crypto” and “llm”. If this is your skill set you are fad chasing and needlessly increasing your exposure to changing markets.
Engineers solve problems by applying math and science expertise. This is a timeless skill.
Just because other people are worse, it doesn't mean we shouldn't try to do better.
In the case of medicine and law, that's because the government restricts who is legally allowed to work in those professions
Dunno, all my "normal" engineering job interviews just revolved around my previous work, going through big picture design problems, and specific things related to the job.
Didn't have to solve PDE's on the whiteboard, or regurgitate integral transformations.
Normal engineering isn’t a vehicle to the upper middle class. Supply of engineering degree holders (challenging) is much closer to demand.
True, but someone with 10+ years in those fields doesn't face an interview asking them to prescribe treatment for a cold or explain the difference between civil and criminal law.
10 years in is exactly when you have to go do oral boards again in medicine, actually
Yes, and the medical licensing board is in charge of that, not the employer, and every doctor is evaluated to the same standards. Also, doctors are only one of the professions mentioned.
Somehow in programming every interview starts from the assumption that the candidate must prove competence to the employer, and the employer decides what "competent" means.
Physicians generally have to recertify every 10 years, so yes they kind of do.
Yes but not for a specific job, given by the employer, and certainly not for every job. And this is only one of the professional fields mentioned.
And you can't easily move country either.
Remember to watch out for people who remind you that not being insane is a blessing, where the sole pseudo-argument is that things could be worse.
It’s not insane for companies to filter candidates. More people want to be highly paid software engineers than positions available.
The insane thing would be to expect someone to take you at your word for 150k*, when hiring managers know that 50-70% of people with your resume fail to write a nested for loop.
I make less than that. So don’t worry about me.
As someone who interviews, I support these tests, though my reasoning is very different:
1. Applicants getting hired because the manager and the candidate are from the same institute, or they like the same team or band.
2. Bias based on age or race. I try to remove my own subconscious bias as best as I can, but most don't.
3. HR thinking you are not good enough and rejecting because you did java 18 and not java 19. Or you can write impressive React but haven't done Vue.
It takes the managers and HR a little out of the equation. They just rubber stamp your decision.
Take-home exercises are something we tried, but the incidence of cheating was so high and good code is so subjective that we gave up.
I think the reality is that it's too costly to put together a process that filters better. This has been deemed "good enough" and they can fire you if the filter didn't work properly.
I hear it all the time ‘coding interviews are useless’, ‘peer review in scientific journals is broken’ and so on and so on.
I’d say yes and no.
Yes, these are the problems that cannot be solved perfectly.
No, because in such areas any ‘reasonable’ filter is better than nothing. People say that these assignments don’t have anything to do with reality, but, well, we don’t have months to try to work with each other, we only have 3x1h.
I worked as an individual contributor for years, but I have also had a chance to try a hiring manager role for the past 3 years. We do a standard leetcode-style interview (nothing hardcore) plus system design. And I always treat both as a starting point, a way to see how the candidate behaves and talks, whether they ask questions to clarify something and how. And I always try to help if I see that the candidate is stuck. By the end of all interviews you will have some signal, not a comprehensive personality profile. Do we make mistakes? I’m pretty sure, yes. But I think it just works statistically.
Companies aren't incentivized to eliminate false negatives in hiring (talented people getting overlooked), they just need to make sure there are no false positives (bad hires). I'm guessing it's probably rare for companies to think about the bigger picture, like the long term health of the economy and nation, where it's actually really bad if there are a bunch of really talented people that are being underutilized.
If I were the interviewer, I'd probably do a coding assignment that builds on an already existing mini codebase, a quarter- or half-day assignment (2-4 hours), with a strict time limit to turn it in by the end of the allotted time. It would test someone's ability to read unfamiliar codebases, with the codebase being reasonably small for the little time that's available, and the ability to build features on top of it, to write clean, readable code compatible with the codebase, etc. Plus a 30-60 minute, more open-ended, informal knowledge interview to gauge how knowledgeable the candidate is and where their strengths and weaknesses are. And previous work would probably be a good indicator, like open source work and/or previous job experience.
I think the analogy by Steve Yegge still holds: if you claim you're a juggler, you should be able to juggle in front of people at any time. So, yeah, coding challenges is a good filter, at least.
The problem is not the coding challenge per se, but that people can now cram it on sites like leetcode. See, before we had leetcode, only two types of people could solve a large number of algorithmic problems organically: those who were naturally talented, and those who were so geeky that they devoured the works of Martin Gardner, Knuth, and the like. Excelling at those challenges showed raw talent in the old days.
And companies like the young Microsoft and young Google absolutely loved such talent, and such talent did shine in those young companies.
You know, some people do love devouring works from Knuth and such and still can't pass those tests. Yegge just assumed that such tests work for any type of individual who is willing to study hard, which is simply not true. The frustrating bit is that nobody told me when I embarked on this career path how much I'd have to struggle and put myself through the same traumatising experience over and over just so I can get a job.
No argument about it. I was just explaining from the perspective of the companies. Per another thread, the companies usually do not care about false negatives and they recognize that there will be false positives. In the meantime, they get so many resumes so they can afford having false negatives. Case in point, even IBM got tens of thousands of resumes every week in the 2000s.
Not that I can pass the interviews, by the way, and I definitely don't want to prepare for such coding challenges. I'm just trying to explain the phenomena as an observer.
Sure, and all I'm saying is that it's possible to have a software engineer career without putting yourself through such tests. Yes, it's harder to find decent jobs, but it pains me so much when I see people giving up after years of effort just because they're not aware they can get hired and practice what they enjoy doing without having to pass such tests.
> When was the last time you had to debug an ancient codebase without documentation or help from a team?
I've had to do this at some level in every job.
Somebody left. The code was left to rot, whether it was a separate codebase or it was an entirely separate project. I got assigned to fix it. Sure, I had a team, but nobody knew more than I did, so it was just other devs who might have another perspective.
Collaboration and support might be standard, but mostly in the form of other smart people, not always documentation, up to date software, or people who know anything about the code.
I personally prefer these kinds of challenges as they mean I don't need to maintain two skillsets.
> build a mini-app from scratch in just a few hours
It depends on what kind of functionality we're talking about, but this kind of task is exactly what people at my current startup have been assigned at times. It is absolutely possible to build a CRUD web app with reactive UI using modern tools in a few hours.
> This is like asking a Ruby developer to debug PHP as a test of flexibility
Again it depends on what the debugging task is. At every startup I've worked at, it's expected that an engineer is able to jump into a task that they know very little about. Granted it becomes less reasonable the more niche the task, but PHP and Ruby are not particularly far apart in skillsets in the grand scheme of things. I would expect any web engineer to be able to do this.
> Hiring processes should focus on problem-solving, collaboration, and growth in relevant areas
I agree with this. And, hiring should also focus on technical ability which does include working through difficult and unknown problems by oneself.
> It is absolutely possible to build a CRUD web app with reactive UI using modern tools in a few hours.
Yes, but it also leads to tons of bikeshedding. "Should I implement a linter? Should I write unit tests? Integration tests? Should I mock HTTP calls or use a mock server? How should I deploy things? How do I structure my CI?" All these choices are extremely subjective and context dependent and then somebody's gonna pass on you because they don't like that you used Cucumber.
Setting up a project is not something you do often, and if you do, you probably have templates and some team standards.
It's much better to give somebody a project structure and ask them to implement some basic task in it.
> build a mini-app from scratch in just a few hours
But how often do devs normally set up a project from scratch? We have like 3 new apps in a few years where I work now, in addition to the long-running ones. So on average, one in like a hundred devs here has been part of creating something from scratch.
Sure, when you know what you're doing it's quick. But the first time can be slow, no matter how good you are. So for a coding task that should take a few hours, you might spend all the time just getting up and running, and then actually start the task on overtime.
Code challenges are great if well made, and horrible if not. They have to be short enough not to be unpaid labor (few will go through the trouble of paying their victims to work on interview tasks, and if I have an existing job I don't want to plow even 8 hours into getting any new job).
But having a simple coding task does some great things in that it will make the interviewee have to fix problems. They need to ask when unsure. You'll know who stops and asks and who plows through and solves the wrong problem. You'll know who throws their hands up when instructions are unclear and just says "I can't do this" and gives up. It's one of those extremely valuable and very effective filters that you just shouldn't be without.
The actual assignment can be however simple, but if they manage to install the tools, clone your repo, read the instructions, build the thing etc. then you already filtered out half the applicants or more. And that's without even starting to code.
> When was the last time you had to debug an ancient codebase without documentation or help from a team?
Yesterday! Working on a large project means there’s always some issue with a dark corner no one has looked at recently and because there’s no team to support, I get to go and bug hunt.
Many of us grew up when that was the norm, n00bs weren't given features. Coding jobs were like apprenticeships. You were mentored and free to explore. You learned what did and didn't work and how to leave the codebase a better, more maintainable place.
Today, privilege means starting from no real job experience to writing enshitified_prod_tokenizer() and laughing at the poor sap having to fix it in a few years.
The take-home challenges at my last jobs have been close enough to the work to be useful, and much more relevant than anything else I've seen in any interview.
That’s how we do it too. A take-home test that’s actually based on what we do. We even time-cap it like our day-to-day tasks are.
> Hiring processes should focus on problem-solving, collaboration, and growth in relevant areas. Unrealistic expectations don’t attract the best talent – they just exhaust and discourage it. If companies want adaptable developers, they should focus on the long-term ability to learn, not how fast someone can tackle an arbitrary test. Dropping these absurd assignments and focusing on what really counts could foster a better, more inclusive tech culture.
Sure, how? There are no perfect processes, only drawbacks that you can live with.
I also find the idea that interviewing is somehow exploitative of the interviewee ("feel like working unpaid shifts for a job they don’t even have yet") to be somewhat bewildering. It's not like your coding challenge is useful work for company that you are doing for free, in most cases interviewing is also a huge time sink for the company as well. Not every interaction should be quid pro quo...
At most places I know, a coding challenge is not "the way" to get accepted, it is a way to get rejected.
The hiring manager might have really liked a candidate and conclude that a poor coding challenge result just shows that the new hire would need extra time to ramp up.
Yet, ignoring money, you could excel in the coding part and the hiring manager might not like you because you lack "X" (e.g. "communication skills", "team culture", etc.), which in reality is a way of saying they just didn't like you, or they were not really looking for someone to deliver; for example, they were looking for someone to play politics or someone who might benefit the hiring manager more in the long term.
The more technically capable candidate does not always get the job.
> When was the last time you had to debug an ancient codebase without documentation or help from a team?
This is pretty much my day job. OP is spoiled if they think this is an uncommon situation.
Whiteboard coding interviews are underrated. Sure, they suck to learn and master, but once you gain the experience it's a passport to a wide range of top-shelf jobs.
Can people game them? Are they flawed? Absolutely. But that's true of literally any interview practice.
The idea behind whiteboard interviews is if they're challenging enough, the only people who can fully exploit them are either very bright, very motivated, or some combination of both. That's a better signal than asking someone to lie about their "passion," which any slob can do.
> When was the last time you had to debug an ancient codebase without documentation or help from a team?
Pretty much all the time. I also found that debugging things without asking for too much help was a great way to learn the systems involved, which eventually made me a better and more useful programmer, with more domain knowledge.
Granted not everyone has the luxury to spend time doing this, but I don't think it's that rare. In many cases it's even what companies want you to do, when that documentation or help is simply unavailable.
I see lots of complaints about coding challenges, but what are the suggested alternatives?
Here the article suggests "Hiring processes should focus on problem-solving, collaboration, and growth in relevant areas". Ok, but how? Problem solving is typically tested using puzzles of some kind, like... coding challenges, but apparently, it isn't the right way, so what is the right way? And how do you test collaboration before hiring? The hiring process is competitive by nature. As for growth, again, hard to test in advance.
Suggestions I have seen are:
- Interviews: great in theory, but some people are great at bullshitting, others are competent but just don't do well in interviews
- Open source contributions and past projects: great if you worked for companies doing open source before, not great if your past work is confidential
- Assignments: Hopefully paid, but in any case, very time consuming
- Puzzles, IQ tests, school-like knowledge tests, ...: like coding challenges, but worse
- Probation period: hiring someone just to fire him a week later is not great for morale
- Dice rolls: maybe worth considering
Coding challenges have a number of advantages. They are unbiased with regard to age, race, gender, religion, etc... They test a skill that is at least related to the job. They can be done remotely and don't have to take that long. Whatever replaces coding challenges should meet these criteria too.
The part I always feel like is missing from the coding interview discussion is:
"Are you testing the candidate on what the job will actually require, and ONLY that?"
With an onsite whiteboarding challenge, you're most likely testing if they're really good at thinking and talking in front of a crowd when extremely nervous, not writing code.
Same with a live coding challenge over zoom (CoderPad style) - this tends to be a test of nerves more than coding ability. But, if you're planning on a lot of pair programming, and want them to be able to dive into that on day one, maybe it's the right way to interview.
I find take-homes often the most accurate way to interview, as it most closely reflects the reality of most coding jobs. You're on your own, you don't have to deal with the extremely odd (and for some of us terrifying) feeling of someone watching you type. Hopefully, the take home questions are carefully designed to reflect what the job requirements will be. As almost every other comment has pointed out, debugging an ancient code base without documentation may be a perfect question, or it may be completely irrelevant and unnecessarily eliminate candidates.
But please, make the interview match the requirements you're actually hiring for, otherwise, everyone's time is being wasted.
> Are you testing the candidate on what the job will actually require, and ONLY that?
As a hiring manager, I strive to evaluate for potential and adaptability, not a highly specific list of skills. I think many candidates prefer that, too, that they will be allowed and encouraged to learn how to do this job. I do not want to hire people that don’t want to learn or can’t do things they haven’t done before. And I may or may not even know exactly what the job will require. Do not assume your interviewer can tell you exactly what will happen once hired — they can’t! One thing I do know, for sure, is that the job will require communication with others, adaptability to changing requirements, and a willingness to learn the company’s workflow. I need to make sure those boxes are checked, sometimes more than I need to see how well someone knows jQuery or CUDA or sorting algorithms.
I have literally 10 years of experience in Android development (starting with Java and transitioning to Kotlin around seven years ago). There are many topics in Java and Kotlin that are irrelevant to Android development, but I focus only on what I need. I quickly learn and build on new topics when required.
I’m speaking here as a developer who has done more than just bug fixes or UI tweaks. I’ve built apps that have served millions of users. I even have personal projects that currently serve 20–30k users daily, with over 1 million downloads and a 4.9-star rating from 41k reviews. Beyond Android, I have experience managing servers with Nginx, setting up SSL certificates, using Cloudflare, AWS, and Hetzner.
I’ve designed an app with users, posts, and likes, handling both the front-end and back-end using cloud functions and Firebase. From writing Firebase rules to encrypting everything, I understood and implemented it all within a few months. I’ve also written numerous web scrapers in Python for personal projects, which are still in use. While some backend concepts might seem basic, I believe knowing how the backend works is a huge advantage for an Android developer.
Despite all this experience, I recently failed to solve a "number of islands" problem in an interview and got rejected:
Given an m x n 2D binary grid representing a map of '1's (land) and '0's (water), return the number of islands. This left me feeling useless. It made me question whether I’m fit to be a software developer. I studied DSA in college, but now I struggle to recall or prepare for it. Does this mean I’m a poor developer? Does someone who grinds LeetCode necessarily have better skills in general? Is this what companies want? Are they okay with spending money and time training someone who grinds LeetCode purely for interviews? Is someone who does this just for interviews likely to stay at the same company?
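For context, the standard expected answer is a flood fill: scan the grid, and every time you hit an unvisited '1', count an island and sink everything connected to it with a DFS. A rough sketch in Python (and not what I managed to produce under interview pressure):

```python
# Rough sketch of the flood-fill / DFS solution to "number of islands".
def num_islands(grid: list[list[str]]) -> int:
    if not grid:
        return 0
    rows, cols = len(grid), len(grid[0])

    def sink(r: int, c: int) -> None:
        # Stop at the grid edges or at water / already-visited cells.
        if r < 0 or r >= rows or c < 0 or c >= cols or grid[r][c] != "1":
            return
        grid[r][c] = "0"   # mark as visited
        sink(r + 1, c)     # recurse into the four neighbours
        sink(r - 1, c)
        sink(r, c + 1)
        sink(r, c - 1)

    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == "1":
                count += 1   # a new, previously unvisited island
                sink(r, c)
    return count
```

Maybe 20 lines once you remember the trick, which is exactly the frustration: it tests recall under pressure far more than anything I do building Android apps.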
I believe there must be a better way to evaluate candidates. It makes sense to test DSA in campus interviews, where students are fresh and studying it. But for experienced developers, especially for roles like Android development, DSA is rarely relevant to the job requirements.
I’m not saying I can’t practice and do DSA—I can. But it has never been a job requirement for me, and I doubt it ever will be for an Android developer or a similar role. If it’s ever truly needed, I’m confident that most experienced developers, including myself, can excel through research and applied knowledge.
#Kill-DSA
You’re not alone with this story, other people have shared similar stories. There are even a few very famous cases, like when the Homebrew maintainer complained about being rejected by Google https://news.ycombinator.com/item?id=15713801.
That said, you are making some assumptions. They might be right, but don’t forget to consider the possibility that they’re not, and then think about how to validate your assumptions, and about what it means and how to respond or adjust if your assumptions aren’t right. Also don’t forget to think impersonally about whether the job is the right fit for you. The job with the islands question might be a job for which it will be important to know and write similar algorithms, and based on the experience you describe, it might not be the right job for you. You sound like a full-stack dev, or even maybe a devops or generalist kind of person, and any given job might be looking for a more specific algorithm developer. When the job isn’t right for you, it is not a reflection of your existing talents or abilities, it could mean nothing more than they’re looking for someone with different experience or interest than you have. Being rejected might be best for you, before you know it.
> DSA is rarely relevant to the job requirements
I don’t know what this means. Generally speaking, a basic working knowledge of data structures and algorithms applies to damn near everything in computer programming jobs. I’d agree that most jobs don’t require knowing the subtle differences between parallel mergesort and parallel shell sort, for example. But most jobs that involve programming do require knowing some data structures and some algorithms.
Furthermore, companies are usually looking for well-rounded candidates. If I can hire someone who does full-stack Android and knows a lot of data structures and algorithms, that is better for me than hiring someone who’s great at full-stack stuff but gets stumped if they have to fix a connected components algorithm. Companies want devs who are good at as many things as possible, even when they say they’re looking for something specific.
Keep in mind that you’re competing with others, not being evaluated by yourself on your own merits. Also keep in mind that a single job rejection is nothing in the big picture, many things can go wrong in an interview, so look at the big picture and not individual interview questions.
I feel your pain. I've got similar amount of experience in my own area, I've been part of several successful software projects, and I have several popular open-source projects that I'm maintaining, but it all seemed to not matter a whole lot as I slogged month after month through interviews with rejection after rejection.
All I can say is that perhaps all those places that reject you are not actually the places where you really would like to work.
At one point I managed to get a job at a company which from the outside looked like a really cool place to work in. However all these interviews painted a very different picture of the work that I was actually assigned to do. I felt like I was being paid a senior engineer salary to do the job of a junior engineer. I was utterly bored and I complained that it's too boring and way below my skill level. This ended up getting me fired. Strangely, I've never had so many work-mates expressing to me how sad they are that I'm leaving. Several of them left the company not long after me.
Currently I'm in a happier place. Not getting as big of a salary but having an opportunity to work on a way more interesting software. And how did I land this job... after a single short interview they told me: would you like to start tomorrow?
I kind of agree with you that in practice very few jobs actually require any implementation of textbook algorithms.
The weird thing is that while people keep complaining about the focus on algorithms in the interview context, it seems like nobody objects when it comes to the recommended CS education.
I think there is a deep dissonance when people strongly recommend learning data structures and algorithms as part of CS and software engineering curriculum, but then when employers actually ask about these in interviews there is strong pushback.
I personally tend to think the right balance is in the middle. At least, whatever is taught in a typical software engineering curriculum should be fair game for interview questions.
The “number of islands” problem could be something that comes up in a DSA assignment in college.
I don’t know whether not knowing how to code it up in an interview setting makes you a bad engineer, but those who still remember their college classes got ahead of you apparently.
I often argue and get downvoted in the discussions when I claim algorithms and advanced math isn’t really needed as a prerequisite in the software industry. It’s only when people get gatekept for forgetting things taught in their college classes that they complain. Imagine that.
Some problems I see with take-home assignments:
* Some companies design take-home assignments that mirror challenges their team has spent years refining, setting unrealistic expectations for candidates to match or exceed that work. Talent is sometimes dismissed over minor differences, like testing styles or even coverage percent.
* Working with people is about compromise. When passionate people collaborate on a creative endeavor, there's always some back-and-forth of ideas. But in these assignments, often only the unilateral opinion of the evaluator matters. I guess being rejected from a monoculture early in the process may actually be a blessing in disguise.
* Many companies fail to provide specific, actionable feedback after candidates spend hours on assignments.
I don't mind take-home assignments, but I avoid investing excessive time in them, as it’s not sustainable and often has low ROI.
> * Many companies fail to provide specific, actionable feedback after candidates spend hours on assignments.
AthenaHealth in Watertown MA, this is you, you miserable pricks. Didn't even send me a "No thanks" email.
I think we need to come to terms with the reality of coding challenges in the interview process. I know I hate them personally, and dread having to interview again because I'll need to open leetcode and remember how to do stupid shit like DFS on a graph, or manipulating linked lists. At the same time, a job opening for a SWE is coming up soon in our company and we'll have to somehow filter people, and the job market is such that we'll get MANY applicants, most of them probably wrong for the job. I will probably end up giving them coding challenges (not necessarily leetcode, but some coding challenge for sure), because I need a way to grasp their problem solving and coding skills. I don't know a better way to do it in the condensed time frame of a 1 hour zoom call.
We have always just talked through applicants' portfolios.
Had the weirdest interview of my life on Tuesday, more on that at the end.
Relevant part: the interviewer/CEO's philosophy is not to give coding challenges, because no one properly vetted for "problem-solving, collaboration, and growth in relevant areas" (tfa) would ever accept a job they were going to fail at. That will always stick with me.
Now, here's the weirdest interview of my life: An acquaintance who runs a software business interviewed me by bringing me along to an NBA game he has season tickets for. The interview in the dining area before the game was normal, the game included some breaks for more interview discussion, but the home team had the best shooting night of their entire season so the crowd was very rowdy and we each had 2 beers.
I apologise for the shameless plug but I’m working on something to solve this.[1] The problem is that most hiring can be classified into 2 types - the company needs candidates that can work with their tech stack OR they want candidates who are good, regardless of tech stack. What we do today is conduct interviews that make no sense and that have vague signals when we could be assessing them based on real-world skills. Candidates shouldn’t have to practice interviewing in any case!
If you are a hiring manager and would like to experiment with something like this, please get in touch!
I feel like the real problem is that most people are poor evaluators of talent. The pervasive bad interview processes in industry reflect this truth (as well as cowardice that allows people to hide behind process to avoid accountability as well as accusations of bias). The only way to escape this trap is to find the right connections or start your own org. Otherwise you will spend your entire career suffering the consequences of broken interview processes (both in terms of your experience getting hired as well as being forced to work with bad hires who game the system).
I'm fine with tests, but only if companies pay a standard fee (say, $100) for a dev's time. If a company doesn't respect your time during the interview process, it probably won't while you're on the job.
I think you missed a 0 there.
$100 is insultingly low for a 4 hour assignment.
Agree, IMO even interviews should be compensated. Most software companies want to schedule 3-4 interviews, each 2 hours long, so they should minimally compensate one day's pay.
If we assume 15 days off that leaves roughly 245 workdays per year and with a salary of $200k that would be close to $800.
A fair amount of compensation to spend half a weekend on a project would be $400.
This is such an out-of-touch and ridiculous thing to say
And why is that? Do you not agree that time is our most valuable asset?
Interviewing is a time investment for both sides. Should the company charge you to evaluate your skills?
If someone gives me a take-home assignment that I spend 4 hours on, there is no commitment of time on their side. They may decide to not even look at the results. They may spend five minutes on it.
That's exactly my problem with take-home code challenges. If I go to an interview, and they waste my time, they have to waste their own. But with a take-home assignment, they can waste my time without wasting their own. That may lead them to be more wasteful of my time.
If that’s the case - sure. But that’s not inherent to code challenges. I evaluate results of challenges at work, they require 2-3 hours of candidate time and it rarely takes less than half of that time to fully evaluate and form a recommendation.
When the interviewee is asked to complete an assignment, they have to invest substantially more time than their counterpart.
Is that not literally LeetCode Premium?
I once ended a job interview at a company because of their coding challenges at the end of a 2-3 hour interview. It was a job for a PHP developer working in a DDD environment. They were asking people to do algorithms, the last one just took the cake because it was completely unnecessary.
The question made sense: how do you add two very large numbers together? I assumed the key knowledge they wanted was that the numbers had to be represented as strings and you should use bc_math to compute them. So I answered that, and they said, "Yeah, great. Now write a function to do that." I sat there for a few moments thinking about it and realized it was just nuts. So I asked if I could just end the interview process. They were clearly shocked; I doubt many people end an interview process on the last question. I said that this wasn't realistic and that if I did that for real it would get code-reviewed to hell. One of them kept saying it was to see how you would do it if bc_math wasn't installed. I simply stated "I would have them install it", to which one of the interviewers instinctively nodded since that's the real-world answer.
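For the record, the digit-by-digit fallback they were presumably fishing for is a short exercise; a rough sketch in Python rather than PHP, assuming non-negative decimal strings:

```python
# Add two non-negative integers given as decimal strings, digit by digit,
# i.e. what you'd hand-roll if an arbitrary-precision library weren't available.
def add_strings(a: str, b: str) -> str:
    i, j = len(a) - 1, len(b) - 1
    carry = 0
    digits = []
    while i >= 0 or j >= 0 or carry:
        total = carry
        if i >= 0:
            total += ord(a[i]) - ord("0")
            i -= 1
        if j >= 0:
            total += ord(b[j]) - ord("0")
            j -= 1
        carry, digit = divmod(total, 10)
        digits.append(str(digit))
    return "".join(reversed(digits))
```

My objection wasn't that it's hard; it's that it's not the job.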
It seemed like it would have been a good place to work up to that point. I didn't like my current job at the time, it paid considerably more, had more public holidays due to weird German reasons, etc.
Ugh, I got that question in an interview. Only they wanted me to write up a basic arbitrary precision math library, in C, on paper. (I may have chosen to do it on paper.) It was harder than I expected. I didn't do that well and didn't manage to finish in the time allotted -- halfway through, I switched from encoding things as ASCII numerals to base-256 binary (or the reverse?) -- but it was enough that the interviewer said it was clear I understood what was required. I got the job.
I thought it was a hard but fair question. I still feel bad about not doing well. It wasn't all that close to what I'd be doing in the position, but it wasn't unrelated. Perhaps if it were a PHP role I'd be more bothered? But even there, it seems like one way to check if someone understands what's going on under the hood, so as to not naively do expensive stuff without realizing it. The fact that one should not write their own library, and that if you tried you should get code reviewed to hell, isn't relevant if that's the purpose.
I think the reason that bothered me was that this was after a take-home tech test, a long interview, and 2 other questions that were similar.
But really, the fact you should never write one should be very basic and fundamental knowledge. Quite simply it should be an automatic no if someone didn't know they shouldn't write that themselves. They were literally testing people on things they don't do. Which means everyone I would be working with wasn't tested on their ability to do their job.
Yeah, that's fair, and I'm guessing your decision made a lot of sense in that context.
Though in an interview situation, it's reasonable to not point out that you shouldn't ever do the thing that you're being asked to do. Some amount of artificiality is to be expected. I do agree that stating that one should never do it is a purely good sign. It's just that not stating it isn't necessarily a bad sign.
I have a very different perspective on coding challenges. In my last role building a startup, I WAS the employer. The results of coding challenges and the conversation that followed were the most telling about a candidate's technical and reasoning skills. I would genuinely never hire without them.
If you're not willing to do coding challenges, your ability to write code and reason about your decisions probably suffers as well. It's self-selection.
We told candidates to not take more than 2 hours on our challenge and we meant it. I tested the challenges myself from scratch to verify that it was possible in 2 hours. I still only consider myself a quasi-engineer, so if I can get it done in that timeframe, a full-time engineer using AI should surely be able to.
There's a few considerations that I believe make coding challenges as fair as possible:
- Make sure the challenge 100% relates to a real task they would do on the job. As an example, for our engineers who would be writing open-source Python ETL integrations, our test was to build a Python ETL integration to pull episode data from an IMDB API (see the sketch after this list for roughly what that looks like).
- Tell the candidate that it's ok to not clean or refactor the code. Meet the requirements of the task first. Your job in the interview process is to ask them what could be improved or why specific decisions were made.
- Don't have anyone perform a technical test until you've first invested a decent amount of time into them as the employer.
- Ask the candidate how long it took them. If a candidate takes longer than the allotted time, they're showing you that they'll overwork themselves and not follow instructions.
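To give a sense of scale, the kind of task I mean looks roughly like the sketch below; the endpoint, field names, and SQLite destination are made up for illustration and are not our actual challenge:

```python
# Minimal extract-transform-load sketch: pull episode data from an HTTP API,
# reshape it, and write it to a local SQLite table.
# The endpoint and field names below are hypothetical placeholders.
import sqlite3
import requests

API_URL = "https://api.example.com/shows/{show_id}/episodes"

def extract(show_id: str) -> list[dict]:
    response = requests.get(API_URL.format(show_id=show_id), timeout=10)
    response.raise_for_status()
    return response.json()["episodes"]

def transform(episodes: list[dict]) -> list[tuple]:
    # Keep only the fields we care about, with light normalization.
    return [
        (ep["id"], ep["title"].strip(), int(ep["season"]), ep.get("air_date"))
        for ep in episodes
    ]

def load(rows: list[tuple], db_path: str = "episodes.db") -> None:
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS episodes "
            "(id TEXT PRIMARY KEY, title TEXT, season INTEGER, air_date TEXT)"
        )
        conn.executemany(
            "INSERT OR REPLACE INTO episodes VALUES (?, ?, ?, ?)", rows
        )

if __name__ == "__main__":
    load(transform(extract("tt0903747")))
```

The point is that each stage stays small enough that a working version fits within the two-hour cap, and the conversation afterwards is where we ask what they would improve.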
> Hiring processes should focus on problem-solving, collaboration, and growth in relevant areas.
Agreed. When I think about the best engineers I've worked with-- and the worst-- I have identified certain characteristics that I want to look for in a candidate, and certain characteristics that I must avoid.
In particular, there are certain people who just can't entertain any thought or design idea that they themselves did not have. This makes them very hard to collaborate with, except in the cases where what we need to do just happens to coincide with what they are thinking. Related, there are certain people who can only think about exactly one way to solve a problem, and there is not room in their brain to honestly consider any alternative approaches. These people tend to be really bad team players. A good engineer will understand that there's no one right way to do certain kinds of things, that everything comes with tradeoffs.
During interviews I present coding problems where there is more than one way to attack the problem. Whatever approach the candidate does, I propose an alternative approach and ask them to talk with me about the pros and cons of the alternative approach. I also ask for them to extend upon the alternative approach and make it better. For a significant fraction of candidates, they just can't entertain any other way of doing the thing, there just isn't room in their brain to consider alternative approaches.
Ability to really listen and give an honest consideration for other people's ideas is a vital part of the kind of team culture we need to succeed, so I look for that in interviews. Long ago I abandoned worrying about leetcode in interviews, and nobody in more senior management has made me stop yet, and this is at a big tech company.
> When was the last time you had to debug an ancient codebase without documentation or help from a team?
I wonder why the author thinks this is something unthinkable and unrealistic. Small teams with strict ownership and skilled engineers. If you join a company like that you will inevitably find yourself in that situation.
I don't mind the coding challenges necessarily, but more how they're implemented. I find them to be far too broad relative to the day-to-day, and it only showcases how quickly someone can write unthoughtful code.
For example, it's not uncommon for a challenge in finance to be "build an exchange matching engine that takes orders, modifications and cancels from different threads in two hours." That's totally doable if you just fly through whatever comes to the top of your head, and maybe have an hour left over to test, debug and modify it. But that project (an exchange matching engine) is something that the CME has 10 full time programmers working all the time on. A more realistic 2 hour task is something like "look at edge cases for the order cancel type, and figure out a way to handle malformed inputs." Then you can actually write production quality code for them to judge.
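For instance, "figure out a way to handle malformed cancels" might boil down to something like the sketch below; the message format is hypothetical, and it's in Python rather than the C++ a real matching engine would use:

```python
# Hypothetical sketch: validate an incoming order-cancel message and reject
# malformed inputs explicitly instead of letting them take down the engine.
from dataclasses import dataclass

class MalformedCancel(ValueError):
    """Raised when a cancel message fails validation."""

@dataclass(frozen=True)
class CancelOrder:
    order_id: str
    client_id: str

REQUIRED_FIELDS = {"order_id", "client_id"}

def parse_cancel(msg: dict) -> CancelOrder:
    missing = REQUIRED_FIELDS - msg.keys()
    if missing:
        raise MalformedCancel(f"missing fields: {sorted(missing)}")
    order_id, client_id = msg["order_id"], msg["client_id"]
    if not isinstance(order_id, str) or not order_id.strip():
        raise MalformedCancel(f"invalid order_id: {order_id!r}")
    if not isinstance(client_id, str) or not client_id.strip():
        raise MalformedCancel(f"invalid client_id: {client_id!r}")
    return CancelOrder(order_id=order_id, client_id=client_id)
```

Something that narrow is small enough that you can actually reason about edge cases and write tests in two hours, and the reviewer gets code that resembles what you'd ship.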
We recently went through a hiring process using a recruiter. Our assumption was that the people they vetted were "legit" and we didn't need to do any additional work to verify that they had the skills they said they had. If the recruiter wastes our time it's a surefire way for them to never get business again.
Anyway, in lieu of a coding interview, I opened up a couple of tickets (one a long term change request, another a real life problem we encountered just the day before). I provided them logs and answered questions and watched the interviewees move through both scenarios.
We capped the time at an hour. The focus was more on the interaction between them and myself—it actually used the "real stuff" they'd encounter.
I was amazed at the diversity of the responses and how differently people approached a problem.
> What companies often ignore is the extra time candidates invest beyond the “suggested time” for these tests.
This is a feature not a bug. Companies are testing if you can focus and complete a hard uncomfortable challenging task, because at your job you’re expected to do things you don’t want to do, but will be rewarded for doing.
Companies are backhandedly selecting for those who can invest the most time beyond the suggested time (read: single, childless people in their early 20s who live at home) and those who are willing to work insane hours. It's a pledge of fealty before you are graced with an interview. 15 pieces of flair is the minimum, but it's up to you whether or not you want to just do the bare minimum.
I knew one person that used sick leave (for mental health) to take time off to prep for job interviews due to the toxic work environment.
That is an option if you have higher personal obligations and are struggling mentally.
Requiring a huge time investment like that will filter out a lot of the non-desperate folks…probably exactly the folks they don’t want to filter out!
Also do they really want to set the tone that “I expect you to intuit what I want and I won’t tell you directly”? Sounds like an awful place to work, right out of the gate.
> probably exactly the folks they don’t want to filter out!
Maybe? non-desperate folks might waste the company's time. Usually companies can only give out one offer at a time. If the non-desperate person takes a while to accept, someone else in their pipeline may get an offer and flake.
We had a candidate agree to a start date and then cancel 2 weeks before because they decided to stay at their job.
> “I expect you to intuit what I want and I won’t tell you directly”?
They do tell you directly: "Do this hard, uncomfortable task," where the task is studying leetcode or completing a coding project.
> Also do they really want to set the tone that “I expect you to intuit what I want and I won’t tell you directly”? Sounds like an awful place to work, right out of the gate.
I have only once had this be a problem and in a way, it was my fault for not noting the ambiguity in the submission. Every other time I have dropped a comment saying "you could have intended A, but also could have intended B so this is when I go back to product and request more information."
That's a perfectly fair and reasonable tone to set because that's how approximately 99% of non-junior jobs work. My boss says he has a problem he'd like me to solve. He might not have all the details so it's on me to investigate the options, identify the gotchas, and clarify the ambiguities.
I never deliberately snuck ambiguities into coding challenges. I've used it in oral sessions for senior and above devs though. "I'm a product manager who wants to add a new feature. I don't really know what's involved but I want you to implement it. Feel free to ask me a million questions and we'll talk it through!"
This happens when companies have too many applicants. It isn't the best way to evaluate candidates, but it is a cheap and easy way to do it and resumes aren't enough.
It's common for candidates who have an internal referral to skip all these steps, because the referral establishes a base line of credibility.
> Dropping these absurd assignments and focusing on what really counts ...
These assignments are so much closer to a simulation of doing the actual job than the industry-standard LC interview. The only thing closer is a work trial, which has the significant problem of requiring candidates not to currently have a job.
I've recently had a couple technical interviews where they were structured much more like a pair programming session. Those were a great experience on my end, and for the interview I expect it gives a much more realistic view of what it would be like to work with someone.
Those are my favorite. When I’m interviewing someone, I’m literally considering working with them - what better way to find out about that than to do it?
I was asked to code a sudoku solver from scratch during an hour long tech screen. I got maybe 60% there. A few days later they called me up and said they liked my work but wanted someone faster. At the time I was bummed, but a decade later I’m glad I didn’t get that job.
I was ready for this to be another complaining post about having to write code during an interview, but found myself mostly agreeing with it.
Having to spend time to do some work before an interview to prep (like a presentation about your experience or something) seems reasonable, but a company should be able to test technical skills adequately in the 5+ hours they spend interviewing candidates.
I feel like the take home assignment trend is part of the backlash against doing coding problems live in interviews and I agree with the article that it's just worse. It takes more of your time and also opens things up to candidates cheating even more than live coding over zoom does.
Just stick with the live coding and ask good questions.
I find that take-home assignments are a very one-sided form of interview:
- The candidate needs to do lots of work while the company gets away with little to none.
- The candidate never knows what exactly the other side will be looking for, so the only way to hedge is to put in more work. Even when the assignment didn't ask for tests, they might still be expecting them.
- The candidate usually gets no feedback about the work he did. More importantly, he usually doesn't get a chance to defend his solution. It's either great or garbage.
- While the company can learn something about the candidate, the candidate will learn nothing about the company by doing the assignment.
This blog post lists problems but offers no solutions.
Yes, complain all you want about what companies SHOULDN'T do. But then what do you feel is a better approach?
From my experience, you ask 4 software engineers what a proper interview should look like and you'll get 4 different answers.
I’m a product manager, not a developer, although I used to be a developer a long time ago in a language far far away.
A few months before my last interview I had taken up Python by solving about fifty projecteuler.net problems. I’m not fluent, but capable, and since no one is hiring me to write code anyway, I put it on my resume.
At my interview with a CTO he said, I see you have Python on your resume? “Yep,” I answered, and said pretty much the above.
“Do you mind if I open a shared session to code in?”
“Sure” (I haven’t coded in front of someone…ever?)
“Do you know what Fibonacci numbers are?”
“Yes”
“Can you write me a function to return the Nth Fibonacci?”
That’s clearly not a very hard task, and fortunately I was up to it and got the job.
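For reference, the kind of answer that question is after can be as simple as this iterative sketch (avoiding the naive exponential recursion):

def fib(n):
    # Nth Fibonacci number, with fib(0) = 0 and fib(1) = 1
    if n < 0:
        raise ValueError("n must be non-negative")
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib(10))  # 55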
I've turned down a number of jobs in the last few years because of unrealistic interview requirements.
One asked for a 20+ page personality assessment (perhaps completing that _was_ the test?). The other asked for an interesting technical challenge which I was happy to do -- for fun -- and then followed up with a request to write a basic web service which was... unexpectedly simplistic for the level of role I was applying for.
I pointed out a number of my public, solely-maintained repos on GitHub illustrating exactly the sort of thing they were looking for and turned down the follow-up assignment, but we decided it wasn't a good mutual fit.
Frame shift: coding challenges are work. (I don’t care that the assignment is useless for the company, that’s not my problem!) The issue for me is that it is unpaid.
If you really want me as a developer, pay me. As an industry, we might even set a standard rate for these homework assignments. I suggest a rate which is enough for me to say it wasn’t a waste of my time.
As an anecdote, I was interviewing at a startup that sent me one of these homework assignments. I told them I would do it for $1000 now, or whenever I got around to it otherwise. I never got around to it.
Nobody is going to pay candidates for their time during the hiring process. And honestly nor should they. You are not doing work for the company when you do some test to demonstrate your skill.
I'm not in favor of take home "go write this thing" projects, but what you're suggesting isn't a viable answer at all.
Whether they should or should not pay candidates is a matter of negotiation, not a law of nature. You don’t need to argue the company’s perspective for them, they will do it.
From the company’s perspective, they are indeed getting value from the homework assignment: an opportunity to evaluate a potential hire. If that seems weird, consider why any company would pay to post a job opening. The price they are willing to pay has nothing to do with the marginal cost of displaying the ad.
> Nobody is going to pay candidates for their time during the hiring process.
Not if they don't ask. If you asked, you'd find many who are willing.
> You are not doing work for the company when you do some test to demonstrate your skill.
You literally are; it's just not work from which they are likely to make a profit. That is not the candidate's problem. This doesn't make it unreasonable for the candidate to ask for compensation.
An FB interviewer gave me "two hard Leetcode problems within 35min", where 25min went to background and the 1st Leetcode problem was "changed" (adding ambiguity to the point of having no solution - I later could not find any full solution, and I stated the relevant factors and disclaimers to the interviewer).
Agree, such unrealistic interviews are a loss for both sides, plus they are plain STUPID. I've performed ~400 interviews in my career, most of which included coding, with a consistent and standardized report (so I can compare apples to apples).
Unpaid solo coding assessments should disappear. Many disrespectful companies send them automatically to all applicants and then don't even look at the results.
Every time I get one of those, I ask to do a live coding assessment instead; if they are not willing to spend even an hour of their time, they were never serious about you.
If you decide your process really requires a solo assignment, you can offer a gift card to show you appreciate the candidate (and don't be stingy; you are already saving a lot of money by not having one of your engineers conduct the interview).
I’ll act as a counter-balance on this topic. I have conducted hundreds of technical interviews across varying levels of experience - from interns and juniors all the way to PhD grads and those with lots of years under their belt. I have extensive hands-on experience in multiple fields and passable knowledge in others. If a candidate has worked on something in the past, a good position for me to be in is to know more about what they’ve done than they do. A passable position is one where they can walk me through what they’ve done and I pick up as I go along, piecing it together in my head with what I already know. I very rarely have pre-canned questions; instead, I opt to come up with examples on the fly based on how the conversation (and it is a conversation) is unfolding. If the candidate is gold, I will usually have a first offer prepared for the candidate before the interview is over, and I do not cheap out on the offers. Candidates have historically reached out to express they enjoyed the interview even if an offer was not presented. I believe in respecting the candidate, respecting that people can be nervous and shy one day and social the next, that people interview differently, and that more interview rounds do not necessarily yield a better result. Prerequisite for this is you have to enjoy talking to people, put your bullshit ego aside, show your hand where you think it will help the candidate, and know what you’re talking about and what you don’t know. People can learn, people can adapt. Every so often, I accept candidates that would otherwise be rejected on the understanding that no “system” is ever perfect, certainly not one that involves people. It saddens me to see coding challenges, especially automated ones, because it shows laziness on the part of the interviewer. However, it is a skill that must be attained and practiced like any other. And no, I haven’t used the return key since the 8th grade. AMA.
Agreed.
I can't claim to always be good at it, but this is what I strive for as well. I like talking to a candidate about something they are experienced or even expert at, and asking questions until I find the boundary of their ability. It can be very illuminating what happens when they hit it -- some get defensive, some get enthusiastic.
Defensiveness might just mean they're being interviewed and I failed to adequately put them at ease. But usually when we're neck-deep in some specific topic, both of us will kind of forget that it's an interview -- especially since the topic is more in their area of strength than mine.
Otherwise, defensiveness often means they never actually wanted to understand the problem they were working on, they just wanted to use it to get a degree or ship something by throwing it over the wall and forgetting about it. Even someone who is heartily sick of their PhD topic (as in, everyone who is over 1/3 of the way to getting one) will be relieved to discuss the ideas behind it, the reasons why it caught their interest in the first place, when they don't have to do the work of coming up with rigid results and writing them up.
Enthusiasm is usually a good sign, though even there I have to watch out for excessive enthusiasm where they care more about the problem itself than the benefits of solving the problem, and are likely to waste resources in unnecessary pursuits of perfection.
Oh, and finding the limits of someone's knowledge doesn't require a genius, which is fortunate since I am decidedly not one. 3-year olds can do it just by endlessly asking "why?" You'll probably need to be a little more sophisticated than that, which is good since you'll be able to evaluate their ability to explain things to you in the process.
I do like the return key, though.
That is a hard to read wall of text. Would you mind breaking it down into paragraphs?
No, I don't think I will. Thanks for the suggestion though.
I had one coding challenge a month ago that said: this should take you no longer than 2 hours. It didn't have guard rails on it, so you could technically spend more time on it. It ended up taking me 2 days on and off because it was basically "build a frontend and a backend." And then later I realized the instructions for submission were contradictory too: "DON'T make a PR" in one place, and "make a PR" later on.
> > When was the last time you had to debug an ancient codebase without documentation or help from a team?
I would rather do this than some tired ass LeetCode problem.
I refuse to do algorithmic and / or timed coding challenges during interviews and I'm glad to see such articles pop up more frequently. These days, I get a kick out of pointing out to recruiters and managers that both my LinkedIn and GitHub profiles state clearly that I won't accept such interviews and they need to stop wasting my time if this is part of their process.
Tech jobs should be more like interviews instead, rather than making interviews more like tech jobs. I'd rather solve algorithmic puzzles all day.
Can we create an abstraction on top of modern day CRUD work and turn it into the solving of algorithmic puzzles?
One of the biggest problems with coding challenges, for me, is the conflict between providing a good solution and making a strong impression.
Do you show them a quick-and-dirty solution that ignores edge cases but shows you're pragmatic and not going to overcomplicate things?
OR
Do you show something fancy that you'd never actually do in a real codebase but that shows off your depth of knowledge and where your ceiling is?
Unfortunately, you often have to ask the interviewer (which is tough when it's a coding challenge and you have no way to ask for clarification on the rules). I know interviewers who will say "You didn't cover edge case A, B, and C, your solution is incomplete!" and other interviewers who will say "You are overthinking this, I'm just looking for an MVP."
That's part of what I like about whiteboarding interviews -- you can get feedback and discuss the pros/cons of your solution in real time.
That's assuming they don't pre-bikeshed before even looking at what you did.
Candidate: writes code matching current style as much as possible to show that you can adapt
Reviewer: Rejected. Didn't even use autoformatter, which takes literally zero effort, so clearly does not follow best practices.
Candidate: autoformats
Reviewer: Rejected. Changed more code than was really needed (separate commit or not), clearly showing they didn't try to keep the change as small as possible. If this were a real PR, they would be making life more difficult for the reviewers by making it harder to spot the parts that were actually modified.
Candidate: keeps existing code as-is, but writes own changes following the currently accepted conventions
Reviewer: Rejected. They clearly can't follow an existing style and prefer to be a snowflake and "leave their mark" in the codebase.
Candidate: does any of the options above, and adds comment explaining the reasoning, including mentioning the other possible options that they thought about and their tradeoffs
Reviewer: Rejected. Candidate thinks too hard about trivial stuff, showing lack of focus on what really matters.
---
Granted, being rejected for those reasons above could mean that you dodged a bullet.
But yeah, like you mentioned in a different response, a live interview (coding or not) helps reduce these kinds of uncertainties to some extent, but of course these live interviews have other trade-offs compared to take-home tests.
Pretty common pattern: take-home challenges are becoming more widespread now that the labor pool is oversupplied with tech talent looking for jobs. Companies need new ways to filter for candidates who are willing to do whatever it takes and work the hardest. The easiest way to do so is to give them a "three hour" take-home assignment and see if they are willing to do it.
Doesn't this approach self-select for people who:
- value their time less,
- don't have other things to do (family, hobbies, side-projects),
- are unemployed?
It's one thing where companies are applying this to 1000 applicants for an entry level position, but they also use this tactic when they reach out to you and tell you how excited they are about your background and resume.
In reality they didn't evaluate your background or resume and are just cold-calling you to fill some portion of their "funnel", then ignore all their excitement and reasons they reached out to you and have you jump through random hoops.
I agreed with most of the things in the article. However, the debugging test is very reasonable, provided they give you a heads-up on the environment/language you're meant to debug. Depending on the language, setting up a debugger can be a test in and of itself.
Debugging (and setting up the debugger) is a core skill every developer should have.
> When was the last time you had to debug an ancient codebase without documentation or help from a team?
Literally my first internship. Had to migrate a small system from C to (modern) C++ in the span of 6 months. Most files (and I'm going to say in the ballpark of 85%) had last been touched in 2003.
I started my internship in 2020.
If one does want to get into FAANG-like companies or whatever those acronyms are, I'd suggest a few practical tips.
- Use probability to your advantage. Put another way: perseverance works. Recognize that interviewing is a hit-and-miss game. Say that, with your preparation, your chance of passing at any one of your top 10 companies on a given attempt is 30% -- a pretty low number, just to steelman the argument. Let's say you can try each company a few times a year (most companies have a cool-down period per department rather than per company), so over 5 years you can easily make 30 attempts. The probability of failing all of them is then 0.7^30 ≈ 2×10^-5 -- practically zero (see the short calculation after this list). Of course, this assumes each attempt is independent of the others, and you get closer to that assumption by learning the lessons of each failure. By the way, this is also why you see so many so-so engineers at big companies.
- Do not cram all of the leetcode questions. That's insane. Remember the quote from The Incredibles: "when everyone is super, no one will be"? It applies to cramming too. Instead, use the time to study fundamentals. Pick up Kleinberg's Algorithm Design, Levitin's Introduction to the Design and Analysis of Algorithms, or Skiena's The Algorithm Design Manual. Note these books are not that academic. They all focus on the process of designing an algorithm from constraints and basic principles.
- Do not worry about Leetcode Hard. Yes, an interviewer will ask one from time to time, but see my suggestion #1. If you really want to practice Leetcode Hard, at least skip the ones that require deep ad-hoc analysis or an aha moment. They are talent filters, memory filters, or obedience filters, but they won't improve your engineering expertise. So why bother? And trust me, it is actually rare to get a Leetcode Hard question.
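To make the back-of-the-envelope math in the first point explicit (assuming, as above, a 30% success rate per attempt and independent attempts):

p_success = 0.30   # assumed chance of passing any single interview loop
attempts = 30      # rough number of attempts over 5 years, per the estimate above
p_fail_all = (1 - p_success) ** attempts
print(f"P(fail every attempt) = {p_fail_all:.2e}")  # ~2.25e-05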
This will only stop when we collectively refuse to participate in such theater plays; until then, welcome to what is ultimately a programming casting show, with the internal audience sitting in their theater chairs for the audition.
Using coding tests to hire developers is like asking a CPA to complete an arithmetic quiz.
Whatever happened to looking at a resume and realizing this person can code?
What are references for anyway...
When I was hiring, resume quality was often a negative signal. I could sometimes tell when someone used a resume preparation service. They were good at what they did, the resumes really were much better, and the candidates correspondingly worse.
The problem is that there is financial pressure to game the system, which means it's worth spending time and money on getting through the hoops but not on improving your work after that. Resume preparation, leet code grinding, even references are tainted when the reference is aware of the financial pressure (this person wasn't that great, but do I want to impoverish their family? or they'll scratch your back in exchange for you scratching theirs).
They're just encrypted IQ tests, working around the Supreme Court's Griggs v. Duke Power ban on IQ testing employees.
You're being tested on your general intelligence, not because anybody cares about binary trees or reversing linked lists or whatever.
> You're being tested on your general intelligence, not because anybody cares about binary trees or reversing linked lists or whatever.
A lot of interviewers missed that memo. They are in fact very focused on a specific algorithm or design approach. It's quite tiresome.
I've never been as deflated as after leaving an interview I'd bombed and then seeing the question they asked me on the front page of leetcode.
I wholeheartedly disagree with this take. There's nuance, obviously, and take-home coding challenges can be done horribly, but when done right and assessed properly they are a wonderful tool for hiring.
People who say not to do something as an interview stage should propose an alternative.
Otherwise it's "well, yes, it's not great, but it's better than not doing it."
Applicants underestimate the cost of a bad hire.
> When was the last time you had to debug an ancient codebase without documentation or help from a team?
All the time now that WFH is normal and collaborating with a team member is 10x harder and 10x slower.
The beatings will continue until morale (or the job market) improves.
In all seriousness, we just need to get together and refuse to do these types of problems (starting with hiring managers and strong candidates).
> "Hiring processes should focus on problem-solving, collaboration, and growth"
How is a coding challenge not problem solving?
Collaboration can be part of coding challenges.
How in the heck do you plan to measure growth in a hiring process?
>When was the last time you had to debug an ancient codebase without documentation or help from a team?
Uh... just about every company I've worked at in the last 20 years has had little to no documentation, and the guy who last touched the code has already left.
A recent related post with proposed solutions: https://news.ycombinator.com/item?id=41948474
Many of them require high effort from the company doing the hiring, which is an optimisation factor people forget.
> When was the last time you had to debug an ancient codebase without documentation or help from a team?
This has literally been my job for a year. That is an actually important skill to have.
Sounds like a skill issue
> When was the last time you had to debug an ancient codebase without documentation or help from a team?
Lol, I'm pretty sure that's been my job description for the past 20 years.
Take home test under 10 hours > getting stabbed in the eye by a blunt fork > live coding challenge for 5 minutes
we administer a coding challenge designed to ensure people can do some basic coding - we don't even have them compile it. But we have caught people who start talking about using a B tree to solve fizzbuzz and we know they're not a fit. It's useful as an initial screener.
> please stop the coding challenges
Sounds like someone didn't pass the coding challenge.
There's a lot of talk from engineers about how this process is broken, but never a solution.
What gives?
My employer's coding challenge uses replit and takes 30 minutes in which we're watching them do it (and interacting, asking them questions or prompting them with hints). It's not a hard challenge at all--one candidate who'd been grinding leetcode did it twice in the 30 minutes, with different solutions. We do the same with a system design question.
Most finish it, some don't and still get hired. It's a workalong in which we see how they approach the problem, how they discuss it with me, and what they do if they get stuck. We tell them to go ahead and google things, just tell us what they're looking for.
Over 10 years, we've got a pretty good record of hires that are smart, professional, and effective. And without specifically trying to hire for diversity, we've had a surprisingly diverse range of hires. We don't care about "culture fit", it's more like "work fit".
So I agree that the sort of code challenge where you send them away and see what they come up with is both unfair and a poor indicator. The real hiring test in an interview should be "can I work alongside this person?"
No leetcode.
No coding challenges.
No take-home assignments.
What is the alternative?
I believe the current best practice in the Art of Hiring, is to hire based on vibes.
Obviously you can't do that publicly because it would affect the company's image, so you still need the leetcode/coding/take-home steps to keep up appearances, even if they don't contribute to the final decision.
Hiring by sortition
> When was the last time you had to debug an ancient codebase without documentation or help from a team?
Is this a joke?
That was my reaction, too.
My answer to that question: I'm doing it right now, and have had to do it at least occasionally in almost every job I've ever had.
As a consultant this is my bread and butter, so it's kind of funny.
and i enjoy doing it to boot.
Agreed - I've never had a job where I didn't have to do this.
Yeah. And never mind debugging that codebase. How about being on call for that codebase in production.
I did hundreds of interviews for a former employer (10k+ employees) and developed a fondness for in-person coding interviews. In particular I would tell the candidate up front that the goal of the exercise is to see how they collaborate on a problem, how they communicate about their thinking, etc. The problem itself was considered too easy by many of my colleagues, but I still found it extremely useful for gauging the seniority and capability of engineers.
A later employer was in the hiring-tech space, enabling this kind of take-home exercise. Lots of time was invested into getting signals as the developer was doing it, because everyone knew that's just as valuable, if not more so, than the final solution. But many companies using these take-home exercises do it just to filter out candidates. To them, it's a proof-of-work system that helps regulate the intake of their hiring funnels.
Eh, coding challenges in and of themselves are fair enough. What companies really need to understand is that:
1. Said challenges should reflect the work done on a day to day basis, not something completely disconnected from the company as a whole
Google's interview challenges should be very different from say, a local web development agency's interview challenges. I've seen too many companies where either the challenge was way too difficult/complex for the job on offer (build a complex modern app from scratch for a junior/graduate level role, or solve a bunch of leetcode tests and HackerRank puzzles for a WordPress agency job), or where it was far too simple (I remember at least one software engineering interview for a React focused role where the challenge could be solved without writing a single line of JavaScript).
2. The requirements and environment should match a realistic working day
When was the last time you had no access to the internet, no access to an offline IDE, no access to premade software, no one to help, a boss standing over your shoulder screaming at you, and a camera tracking your every eye movement in case you're 'cheating'?
Probably never right? Unless your company environment is a dystopian hellhole, this is probably not your typical set of working conditions. Don't expect interviewees to work under them, they're not kids in school doing an exam.
3. And that said projects should be somewhat realistic in terms of scope
Most people aren't coding a new website/app from the ground up every day/week. Expecting someone to create that in a few hours feels incredibly unrealistic, and an incredibly poor method of judging someone's skills in the role.
If you can't stand the heat then stay out of the kitchen.
This is a friendly reminder that you are welcome to withdraw a job application as soon as you are offered a coding challenge. If this is an exchange with a recruiter over email or linkedin messages, you can simply reply with something like this:
"Thank you for sending that along. I would like to withdraw my application for this position."
It might be just me, but seeing such a message would mean that my company just dodged a potential bullet.
We tell candidates they will likely never have to do work that is similar to what we give them in coding challenges.
Do it anyway.
And if they don’t want to do it, we don’t want them.
While we're at it: enough with the brain teasers and weird analogies. These, too, have nothing to do with actual day-to-day work and are really just a way to exercise power over candidates.
Aside from all the comments saying that coding challenges aren't as unrealistic as you think, there's also the fact that there seems to be an oversupply of good-enough coders for some roles, so companies need to filter beyond that. Yeah, inclusive environments etc. are nice to have, but if you can get a top-level coder who can solve problems quickly and not crumble under time pressure, even better. Why settle for good enough if you can get really good instead?
The really good rockstars aren't even in this interview process.
> When was the last time you had to debug an ancient codebase without documentation or help from a team?
Right the fuck now. That's what happens when you get saddled with whatever monstrosity the last coder(s) who quit committed, or had to maintain themselves.
And then you get the joys of trying to Frankenstein some modern JS "component", developed as if only React exists, onto some vanilla/jQuery frontend. Also, a personal rant for those modern JS script kiddies: form action attributes exist and work.
> they put developers in situations they’d never face in the workplace, where collaboration and support are standard
If you're the one doing the supporting of your collaborators, it's pretty much just like a coding challenge.
"Your coworker needs to invert a binary tree but doesn't know how, can you help him? ... also the Internet just returns interview-worthy crap when you try to search."
As interviewees, we need to stop entertaining this nonsense. A company tried to give me a "homework" assignment and I just told them I'm not doing it and moved on.
Yet another rant like "stop doing what you do" but not actually suggesting how to test "actually required skills" in practice.
Nope. I gave dozens of coding challenges at my last job. It was as directly job relevant, and as low-stress, as we could make it. We were a Python+Flask shop, and we had candidates who claimed both those skills flesh out some API endpoints in a stripped-down sample repo. They could use their own computer, and their own editor, and collaborate with me, their potential new coworker. And I enjoyed giving the challenge, and liked making the candidate feel comfortable and excited to work on some of our code. We gave instructions ahead of time like "you're going to clone a Git repo, edit the Python, run the tests until they pass, then commit the results". You know, the absolute bare minimum you'd need to be successful at the job.
Some candidates with many years of listed experience had never used Git. That was a little surprising, but maybe they'd worked at a Perforce shop or something. Nevertheless, they knew in advance they'd be using it that day, and it's not exactly some obscure job skill we were asking them to learn and then forget. OK: "I didn't use Git at my last job but I read up on it, and I might have some questions." Not OK: "What's a clone?"
Others didn't have an editor or IDE set up. Just how? I get not taking work home with you, but you've never once written a little program to balance your checkbook or make anagrams or something else random? And again, we'd told them ahead of time they'd need it today.
It's OK not to be a Python expert, but if we give you:
def add(a, b):
    return 4
and ask you to implement it, and you say you have more than zero days of experience, I do expect you to figure it out. And I'll say it: we had plenty of people fail at the fizzbuzz step, often in astonishing ways. One memorable person couldn't grok the concept of factoring it out into a function like:
for i in range(1, 101):
    print(fizzbuzz(i))
so that you could arbitrarily test fizzbuzz(1_000_000_000_000). They had the novel idea of capturing stdout and comparing it to a test fixture stringwise. Which, OK, it would work, but... I asked them how they'd check whether a test value in the trillions was successful, and they mentioned using CLI tools like tail to check just the end. I did verify that they weren't just playing around for conversation's sake, like coming up with an absurd idea and exploring it for the fun of thinking through the problem. They were convinced that was the right way to write real unit tests in large codebases. So no, I'm not giving up coding challenges, not until I see a higher passing rate among people who list years of experience on their resume. If you claim you've been writing Node apps for the last 10 years, by gosh, I want to see that you can at least write a working function in JavaScript.
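For what it's worth, here's a minimal sketch of the kind of factored-out version being described; the point is simply that a pure fizzbuzz(i) can be tested directly on any value, no stdout capture required:

def fizzbuzz(i):
    # Return the fizzbuzz word for a single value, independent of any printing
    if i % 15 == 0:
        return "FizzBuzz"
    if i % 3 == 0:
        return "Fizz"
    if i % 5 == 0:
        return "Buzz"
    return str(i)

def test_fizzbuzz_huge_value():
    # Directly assert on an arbitrarily large input; 10**12 is divisible by 5 but not 3
    assert fizzbuzz(1_000_000_000_000) == "Buzz"

if __name__ == "__main__":
    for i in range(1, 101):
        print(fizzbuzz(i))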
[dead]
> please stop the coding challenges
Sounds like someone didn't pass their coding challenge.
So we should stop the coding challenges, stop LeetCode, stop whiteboarding, stop profiling candidates, stop asking for their GitHub page. How is hiring supposed to work then? Just post the contract online and the first one to mail it back gets the job?
I like live coding challenges, something like a ~2 hour pair programming session, ideally modifying an existing project. I invest as much time as each candidate, while we are both exploring whether we want to work together.
I personally hire people I’ve worked with before and I know what they can do.
If I can’t do that, I ask to see their prior work.
Good programmers have usually written a lot of code. Having them walk through and explain it usually gives a good idea of what they can do and how they think.
Sometimes you meet a programmer who only works on proprietary code and can’t share it.
In that case I ask them to explain the design of something similar, and write a modestly sized example of a component of that system. Watching them in their own dev environment for 30 minutes usually tells you all you need to know.
You could also, you know, talk to people, like we did before Google and the Silicon Valley bro-wannabes decided that coding interviews were the only solution.
I've hired plenty of people without coding challenges or any form of live coding, and I've been happy with all of those hires. A former co-worker used take-home coding tasks, which they'd then talk to the candidate about during the interview. I feel like those hires were worse in many ways; they certainly didn't stay around as long. That may very well be completely unrelated, obviously.
For years people have been complaining that exams aren't realistic and that some talented people just don't do well in an exam situation, and we've mostly come to the consensus that this is correct and that exams mistakenly filter out highly talented people. So why wouldn't we apply the same logic to hiring?
If you're hiring for a specialist position some coding exercise can absolutely be in order, but I can't think of any reason to have them for a junior position. So I wouldn't recommend dropping coding tasks completely, but I'd apply them more selectively, otherwise you risk missing a number of really good hires.
Part of it may also be that so many companies and interviewers are absolutely terrible at doing coding challenges, but do them anyway, because Google and Facebook do them.
I admit I've been hired once off just a friendly chit-chat, but I can't see how this could be reliable. There are so many bullshitters in the industry who, during character creation, put all their skill points into Charisma. They are smooth-talking, good-looking, and have all those charming Ivy League mannerisms of an Investment Banker or Enterprise Sales person. And while they might not be technically deep enough to fool an engineer interviewer, they will absolutely fool the director or VP who does the final screen and has the final say. These kinds of people flock to companies that don't do robust technical screens.
I can see your point about the bullshitters being able to fool a VP, but in that case they wouldn't be doing the coding interview anyway.
On the other side we're also seeing bullshitters that can ace any leetcode challenge, but not actually design anything of value or work well with others.
There's probably not a one-size fits all, but if you're going to do coding interview do them well and understand many don't do well in these types of interviews because they don't see the value, or feel that their other skills are being ignored.
I've also had good experiences with live coding, both as an interviewer and as a candidate. You not only see a candidate's skill, but also their way of communicating; and as a candidate, if you're lucky enough to be interviewed by a future teammate or superior, you see how they will treat you when you need help working on something.
I had this approach work even in China, where people tend to be very submissive and afraid to speak up. I expected them to have trouble in the interview, but most candidates told me they felt very at ease, even though they had never worked with a foreigner before.
What is left is networking and referrals, which I suspect people also object to.
Yes, this is seriously flawed for other reasons but is the best way when you can.
No, there are other ways, especially as AI coding assistants become more capable and developers can be more productive if they are able to leverage them.
One approach that I've encountered (with a YC company) is that the first interview was actually a code review. One of the founders asked me to review some SQL DDL and some backend API endpoints. The DDL was missing some indices that were needed for the queries, and it was using an integer ID field. The API endpoints were missing input validation, error handling, etc.
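As an illustration of the sort of thing that review format surfaces, here's a made-up Flask endpoint (not the actual exercise) written with the kind of validation and error handling a reviewer would expect a candidate to flag when it's missing:

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/users/<int:user_id>/orders", methods=["POST"])
def create_order(user_id):
    # Reject missing or malformed request bodies explicitly instead of letting them 500
    data = request.get_json(silent=True)
    if data is None or "item_id" not in data or "quantity" not in data:
        return jsonify(error="item_id and quantity are required"), 400
    try:
        quantity = int(data["quantity"])
    except (TypeError, ValueError):
        return jsonify(error="quantity must be an integer"), 400
    if quantity <= 0:
        return jsonify(error="quantity must be positive"), 400
    # ... persist the order here; the backing table would also need the indices
    # mentioned above if lookups filter by user_id
    return jsonify(user_id=user_id, item_id=data["item_id"], quantity=quantity), 201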
I thought this was a GREAT way to start an interview that tested for depth of experience and platform/language knowledge.
This actually inspired me to build https://coderev.app, because the tooling for this felt like it would be clumsy for the interviewer, and it certainly was for me as the interviewee.
But a lot of times, seeing a candidate's portfolio -- if they have one -- is probably even more insightful than any coding exercise. When I've been on the hiring side, one of my favorite things to do is to look through a candidate's GH and ask them questions about projects they've done, why they chose specific technologies, etc.