Chris Lattner, inventor of the Swift programming language, recently took a look at a compiler written entirely by Claude AI. Lattner found nothing innovative in the code generated by the AI [1]. And this is why humans will still be needed to advance the state of the art.
AI tends to accept conventional wisdom. Because of this, it struggles with genuine critical thinking and cannot independently advance the state of the art.
AI systems are trained on vast bodies of human work and generate answers near the center of existing thought. A human might occasionally step back and question conventional wisdom, but AI systems do not do this on their own. They align with consensus rather than challenge it. As a result, they cannot independently push knowledge forward. Humans can innovate with help from AI, but AI still requires human direction.
You can prod AI systems to think critically, but they tend to revert to the mean. When a conversation moves away from consensus thinking, you can feel the system pulling back toward the safe middle.
As Apple’s “Think Different” campaign in the late 90s put it: the people crazy enough to think they can change the world are the ones who do—the misfits, the rebels, the troublemakers, the round pegs in square holes, the ones who see things differently. AI is none of that. AI is a conformist. That is its strength, and that is its weakness.
[1] https://www.modular.com/blog/the-claude-c-compiler-what-it-r...
You know where LLMs boost me the most? When I need to integrate a bunch of systems together, each with their own sets of documentation. Instead of spending hours getting two or three systems to integrate with mine with the proper OAuth scopes or SAML and so on, an LLM can get me working integrations in a short time. None of that is ever going to be innovative; it's purely an exercise in perseverance as an engineer to read through the docs and make guesses about the missing gaps. LLMs are just better at that.
I spend the other time talking through my thoughts with AI, kind of like the proverbial rubber duck used for debugging, but it tends to give pretty thoughtful responses. In those cases, I'm writing less code but wanting to capture the invariants, expected failure modes and find leaky abstractions before they happen. Then I can write code or give it good instructions about what I want to see, and it makes it happen.
I'm honestly not sure how a non-practitioner could have these kinds of conversations beyond a certain level of complexity.
> Instead of spending hours getting two or three systems to integrate with mine with the proper OAuth scopes or SAML and so on
As someone whose job is handling OAuth and SAML scopes, I am not convinced anyone can get these right.
SAML at least acts nice; OAuth, on the other hand, is a fucking nightmare.
Every time I request the wrong OAuth scope that doesn't have the authorization to do what I need, then make a failing request, I hear Jim Gaffigan affecting a funny authoritative voice saying, "No." I can't be the only one who defensively requests too much authority beyond what I need with extra OAuth scopes, hoping one of them will give me the correct access. I've had much better luck with LLMs telling me exactly which scopes to select.
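For what it's worth, the scope mechanics are small enough to sketch. Per RFC 6749 the `scope` parameter of the authorization request is a single space-delimited string; everything else below (endpoint, client id, scope names) is an illustrative placeholder, not any particular provider's API:

```python
from urllib.parse import urlencode

# Build an authorization URL with an explicit, minimal scope list instead of
# defensively requesting every permission. All names here are hypothetical.
def authorization_url(base, client_id, redirect_uri, scopes):
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        # RFC 6749: multiple scopes are joined into one space-delimited string
        "scope": " ".join(scopes),
    }
    return f"{base}?{urlencode(params)}"
```

If the provider rejects a call, the error response usually names the missing scope, which is exactly the read-the-docs loop an LLM is good at short-circuiting.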
I always hear Little Britain’s “computer says nooooo”
oauth is the one area where I genuinely trust the LLM more than myself. not because it gets it right but because at least it reads all the docs instead of rage-quitting after the third wrong scope
And the libraries provided by the various OAuth vendors are only adding fuel to the fire.
A while ago I spent some time debugging a superfluous redirect and the reason was that the library would always kick off with a "not authenticated" when it didn't find stored tokens, even if it was redirecting back after successful log in (as the tokens weren't stored yet).
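The fix amounts to recognizing the callback leg of the flow before concluding the user is unauthenticated. A minimal sketch of that decision, with hypothetical names (not any real library's API):

```python
# Decide whether to redirect to the identity provider. The bug described above
# comes from treating "no stored tokens" as "not authenticated" even on the
# request that IS the post-login redirect, before tokens have been persisted.
def needs_login_redirect(stored_tokens, is_auth_callback):
    if stored_tokens is not None:
        return False  # already authenticated
    if is_auth_callback:
        return False  # mid-handshake: let the callback handler store tokens
    return True       # genuinely unauthenticated: start the login flow
```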
The worst integration problems tend to be conceptual mismatches between the systems, where--even with the same names--they have different definitions and ideas of how things work.
That's a category of problem I wouldn't expect a text-based system to detect very well. Though it might disguise the problem with a solution that seems to work until it blows up one day, or until people discover a lot of hard-to-fix data issues.
Well that's another use I have for LLMs: asking questions about these informational or architectural impedance mismatches. LLMs get it wrong sometimes, but with proper guidance (channel your inner Karl Popper), they can be quite helpful. But this doesn't really speed me up that much, though it makes me more confident that my deliverable is correct.
This is fundamental. Well, not really - a strategy SV tried to use is absolute market dominance to the point where you have to integrate with them. But in spaces where true interoperability is required, it's just philosophically hard. People don't mean the same thing.
Yeah, semantics is extremely important. Just no way around it.
Maybe you are already an expert in those, so it's fine. But for anybody else, using LLMs extensively would mean becoming way less proficient in those topics: skipping the deeper senior knowledge and settling for rather shallow knowledge. Any time I grokked anything deeper, like encryption, these SAML/JWT auth flows, or complex algorithms, it was only and unavoidably because I had to go deep with them.
Good for the company, not so much for the given engineer. But I get the motivation; we are all naturally lazy and normally avoid doing stuff if we can. It's just that there are also downsides, and they are for us, the engineers.
Isn’t this what mcp is supposed to solve?
MCP connects the LLM to the APIs, which can be consulted with "tool calls." I'm talking about integrating the software I produce (with LLM assistance) to APIs. Traditionally, this is a nightmare given poor documentation. LLMs have helped me cut through the noise.
Couldn’t agree more and especially when some docs are incorrect and AI is able to guesstimate the correction based on other implementations or parallel docs that it’s found. Goes from “Let me spend a few days scouring the internet and our internal repo to see if I can maybe find a workaround” to “This can definitely get done”.
IT-ops-related work in general is so suitable for AI agents: configuring clusters of bare-bones servers. I normally spend days configuring things like NFS, sysctls, firewalls, upgrades, disks, crons, monitoring, etc.; now it's hours max. I can literally ask it to SSH into 50 VPS machines and perform all the tasks I tell it to do.
You mean like

  for host in "${vps_hosts[@]}"; do
    ssh "$host" < commands.sh > "$host.log" 2>&1 &
  done
  wait

I imagine what you are really getting out of it is not having to type the actual commands into commands.sh?

> Chris Lattner, inventor of the Swift programming language recently took a look at a compiler entirely written by Claude AI. Lattner found nothing innovative in the code generated by AI [1]. And this is why humans will be needed to advance the state of the art.
I’ve recently taken a look at our codebase, written entirely by humans, and found nothing innovative there. On the contrary, I see such brainrot that it makes me curious what kind of biology was needed to produce this outcome.
So maybe Chris Lattner, inventor of the Swift programming language, is safe; the majority of so-called “software engineers” sure as hell are not. Just like the majority of people are NOT splitting atoms.
Compilers are a hobby of mine, and I'd extend that to argue that the majority of compilers do not contain anything innovative either.
Also: if that one particular AI-produced compiler has nothing innovative, that only means that the human "director" behind the AI didn't ask it to produce anything innovative; what it does not mean is that AI can never produce anything innovative in a compiler.
> if that one particular AI-produced compiler has nothing innovative, that only means that the human "director" behind the AI didn't ask it to produce anything innovative
Couldn't it also be true that the AI didn't produce innovative output even though the human asked it to produce something innovative?
Otherwise you're saying an AI always produces innovative output, if it is asked to produce something innovative. And I don't think that is a perfection that AI has achieved. Sometimes AI can't even produce correct output even when non-innovative output is requested.
> Couldn't it also be true that the AI didn't produce innovative output even though the human asked it to produce something innovative?
It could have been, but unless said human in this case was lying, there is no indication that they did. In fact, what they have said is that they steered it towards including things that makes for a very conventional compiler architecture at this point, such as telling it to use SSA.
> Otherwise you're saying an AI always produces innovative output
They did not say that. They suggested that the AI output closely matches what the human asks for.
> And I don't think that is a perfection that AI has achieved.
I won't answer for the person you replied to, but while I think AI can innovate, I would still 100% agree with this. It is of course by no means perfect at it. Arguably often not even good.
> Sometimes AI can't even produce correct output even when non-innovative output is requested.
Sometimes humans can't either. And that is true for innovation as well.
But on this subject, let me add that in one of my first chats with GPT 5.1, I think it was, I asked it a question on parallelised parsing. That in itself is not entirely new, but it came up with a particular scheme for parallelised (GPU-friendly) parsing and compiler transformations that I have not found in the literature (I wouldn't call myself an expert, but I have kept tabs on the field for ~30 years). I might have missed something, so I intend to do a further literature search. It's also not clear how practical it is, but it is interesting enough that when I have time, I'll set up a harness to let it explore it further and write it up, as irrespective of whether it'd be applicable to a production compiler, the ideas are fascinating.
I’ve built some live programming systems in the past that are innovative, but not very practical, and now I’m trying to figure out how to get a 1.5B model (a small language model) into the pipeline of a custom small programming language. That is human-driven innovation, but an LLM is definitely very useful.
> Lattner found nothing innovative in the code generated by AI
I don't think the replacement is binary. Instead, it’s a spectrum. The real concern for many software engineers is whether AI reduces demand enough to leave the field oversupplied. And that should be a question of economy: are we going to have enough new business problems to solve? If we do, AI will help us but will not replace us. If not, well, we are going to do a lot of bike-shedding work anyway, which means many of us will lose our jobs, with or without AI.
Before software, there were accountants. It was THE qualification to have.
Today accountants are still needed. But it's a commodified job. And you start at the absolute bottom of the bottom rungs and slave it out till you can separate yourself and take on a role on a path to CFO or some respectable level of seniority.
I'm oversimplifying here, but that is sufficient to show a path forward for software engineers imo. In this parallel, most of us will become AI drivers. We'll go work in large companies, but we'll also go work in a back-room department of small to medium businesses, piloting AI on a bottom-of-the-rung salary. Some folks will take on specialisms and gain certifications in difficult areas (similar to ACCA), or maybe in ultra-competitive areas like actuarial science. Those few will eventually separate themselves and lead departments of software engineers (soon to be known as AI pilots). Others will embed in research and advance the state of the art that eventually gets commoditized by AI. Those people will either be paid mega bucks or will be some poor academia-based researcher.
The vast majority? Overworked drones having to be ready to stumble to their AI agent's interface when their boss calls them at 10 PM saying the directors want to see a feature setup for the meeting tomorrow.
Business problems are essentially neverending. And humans have a broader type of intelligence that LLMs lack but are needed to solve many novel problems. I wouldn't worry.
Problems are never-ending, but the amount of money that can be made in the short (or even mid) term by solving these problems is limited. Every dollar spent on LLMs is a dollar not spent on salaries.
I know multiple engineers who have spent months or even years trying to find a job. How can you say not to worry when the industry has already gotten this bad?
It's no consolation, but this situation is temporary. Everyone is just distracted with AI.
"Temporary" might mean "the next three years", but at the same time some acted as if the Zero Interest Rate Policy would continue indefinitely, so this situation might end suddenly and unexpectedly.
if you want a job then here's my 2 cents:
To me the opportunity is with agents, especially Copilot and whatever Amazon's agent is. Figure out how to code using them. Build something cool in the space you're interested in finding a job in. That's the skill enterprise companies are fighting for; nobody knows how to do it.
Unless you're one of the bulk of 1x programmers who aren't doing anything novel. I think it will be like most industries that got very helpful technology - the survivors have to do more sophisticated work and the less capable people are excluded. Then we need more education to supply those sophisticated workers but the existing education burden on professionals is already huge and costly. Will they be spending 10 years at university instead of 3-4? Will a greater proportion of the population be excluded from the workforce because there's not enough demand for low-innate-ability or low-educated people?
To add, just keeping up in this industry was already a problem. I don't know of many professions[1] with such demands on time outside of a work day to keep your skills updated. It was perhaps an acceptable compromise when the market was hot and the salaries high. But I am hearing from more and more people who are just leaving the field entirely labeling it as "not worth it anymore".
[1] Medicine may be one example of an industry with poor work-life balance for some, specifically specialists. But job security there is unmatched and compensation is eye-watering.
> I don't know of many professions[1] with such demands on time outside of a work day to keep your skills updated.
This is an extremely myopic view (or maybe trolling).
The vast majority of software developers never study, learn, or write any code outside of their work hours.
In contrast, almost all professionals have enormous, _legally required_ upskilling, retraining, and professional-competence-maintenance obligations.
If you honestly believe that developers have anywhere near the demands (both in terms of time and cost) in staying up to date that other professions have, you are - as politely as I can - completely out-of-touch.
Sure, but those same professional certifications and development hours also allow them to not need to re-prove their basic competency when interviewing.
Basically everything you mentioned is covered by L&D
I never really felt this. If you have a job where you're actively learning by doing the work then you shouldn't need to learn outside of the job.
> Business problems are essentially neverending
That feels overly optimistic. LLMs seem on track to automate away basically any "email job" or "spreadsheet job," in which case we'll be looking at higher unemployment numbers than the Great Depression for at least some period of time. Combine that with increased automation...
There are a LOT of people in the world and already a not insignificant portion can't find work despite wanting to. Seems the most likely thing is that the value of most labor is reduced to pennies.
Do you really think the billionaires are willing to have consumers so impoverished that they can’t continue to spend large sums of discretionary income buying the things that make the billionaires themselves richer?
They may not be, but even so they might find themselves in a prisoner's dilemma. I wouldn't rely in this logic for peace of mind.
Well what would each billionaire do? Give out money so that the poor can give some of it back?
You cannot just point at a system, say it’d be unsustainable and then assume nobody will let that happen.
Monarchies, lords, etc. have had much more reason to support their own countryfolk, yet many throughout history have not - has society changed enough that the billionaires have changed on this?
What evidence is there otherwise? That seems to be exactly what they want.
The impoverished are cheaper to enslave.
The billionaires are already billionaires. People like Sam Altman are not building a doomsday bunker because they believe in the longevity of established society. They are doing it because they've already won and are taking their ball.
I've read a theory that as the ultra rich divide their wealth among their descendants, eventually they capture so much of it among their families that trying to extract more from the working class is hardly worth the effort. The only option then, for the descendants of the ultra wealthy, is to start turning on each other. The theory states that the last time this happened was WWI.
Megacap investors already cargo cult business practices that reduce their own return and harm employees. This is why they all over-hired at the start of covid only to begin layoffs a couple of years later.
In summary: billionaires aren't as competent as you'd hope.
“The billionaires” are a boogeyman and not a cabal with all that much power in the west.
> ...generate answers near the center of existing thought.
This is right in Wikipedia's article on the universal approximation theorem [1].

[1] https://en.wikipedia.org/wiki/Universal_approximation_theore...
"n the field of machine learning, the universal approximation theorems (UATs) state that neural networks with a certain structure can, in principle, approximate any continuous function to any desired degree of accuracy. These theorems provide a mathematical justification for using neural networks, assuring researchers that a sufficiently large or deep network can model the complex, non-linear relationships often found in real-world data."
And then: "Notice also that the neural network is only required to approximate within a compact set K {\displaystyle K}. The proof does not describe how the function would be extrapolated outside of the region."
NNs, LLMs included, are interpolators, not extrapolators.
And the region NN approximates within can be quite complex and not easily defined as "X:R^N drawn from N(c,s)^N" as SolidGoldMagiKarp [2] clearly shows.
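The interpolation-vs-extrapolation point can be seen with any approximator fit on a compact set, not just neural networks. A quick numerical illustration with a polynomial least-squares fit:

```python
import numpy as np

# Fit a degree-9 polynomial to sin(x) on the compact set [-pi, pi].
x = np.linspace(-np.pi, np.pi, 200)
coeffs = np.polyfit(x, np.sin(x), deg=9)

# Inside the fitted region, the approximation is excellent.
inside_err = np.max(np.abs(np.polyval(coeffs, x) - np.sin(x)))

# Outside it, nothing constrains the fit, and it diverges rapidly.
x_out = np.array([2 * np.pi, 3 * np.pi])
outside_err = np.max(np.abs(np.polyval(coeffs, x_out) - np.sin(x_out)))
```

inside_err comes out tiny while outside_err is orders of magnitude larger; the fitting procedure says nothing at all about behaviour off the region it was fit on, which is exactly what the quoted proof caveat is pointing at.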
It has been proven that recurrent neural networks are Turing complete [0]. So for every computable function, there is a neural network that computes it. That doesn't say anything about size or efficiency, but in principle this allows neural networks to simulate a wide range of intelligent and creative behavior, including the kind of extrapolation you're talking about.
[0] https://www.sciencedirect.com/science/article/pii/S002200008...
I think you cannot take the step from any Turing machine being representable as a neural network to saying anything about the prowess of learned neural networks as opposed to specifically crafted ones.
I think a good example is calculations or counting letters: it's trivial to write Turing machines that do these correctly, so you could create neural networks that do just that. From LLMs we know that they are bad at those tasks.
> So for every computable function, there is a neural network that computes it. That doesn't say anything about size or efficiency
It also doesn't say anything about finding the desired function, rather than a different function which approximates it closely on some compact set but diverges from it outside that set. That's the trouble with extrapolation: you don't know how to compute the function you're looking for because you don't know anything about its behaviour outside of your sample.
Turing completeness is not associated with creativity or intelligence in any straightforward manner. One cannot unconditionally imply the other.
after all, CSS is Turing Complete ;)
https://stackoverflow.com/questions/2497146/is-css-turing-co...
No, but unless you find evidence to suggest we exceed the Turing computable, Turing completeness is sufficient to show that such systems are not precluded from creativity or intelligence.
I believe that quantum oracles are more powerful than Turing oracles, because quantum oracles can be constructed, from what I understand, and Turing oracles need infinite tape.
Our brains use quantum computation within each neuron [1].
There's no evidence to suggest a quantum computer exceeds the Turing computable.
The difference is quantum oracles can be constructed [1] and Turing oracle can't be [2]: "An oracle machine or o-machine is a Turing a-machine that pauses its computation at state "o" while, to complete its calculation, it "awaits the decision" of "the oracle"—an entity unspecified by Turing "apart from saying that it cannot be a machine" (Turing (1939)."
[1] https://arxiv.org/abs/2303.14959
[2] https://en.wikipedia.org/wiki/Turing_machine

This is meaningless. A Turing machine is defined in terms of state transitions. Between those state transitions, there is a pause in computation at any point where the operations take time. Those pauses are just not part of the definition because they are irrelevant to the computational outcome.
And given we have no evidence that quantum oracles exceed the Turing computable, all the evidence we have suggests that they are Turing machines.
That's an irrelevant strawman. It tells us nothing about how to create such a system ... how to pluck it out of the infinity of TMs. It's like saying that bridges are necessarily built from atoms and adhere to the laws of physics--that's of no help to engineers trying to build a bridge.
And there's also the other side of the GP's point--Turing completeness not necessary for creativity--not by a long shot. (In fact, humans are not Turing complete.)
No, twisting it to be about how to create such a system is the strawman.
> Turing completeness not necessary for creativity--not by a long shot.
This is by far a more extreme claim than the others in this thread. A system that is not even Turing complete is extremely limited. It's near impossible to construct a system with the ability to loop and branch that isn't Turing complete, for example.
>(In fact, humans are not Turing complete.)
Humans are at least trivially Turing complete - to be Turing complete, all we need to be able to do is to read and write a tape or simulation of one, and use a lookup table with 6 entries (for the proven minimal (2,3) Turing machine) to choose which steps to follow.
Maybe you mean to suggest we exceed it. There is no evidence we can.
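The mechanics being claimed here really are that small: read a cell, consult a lookup table, write, move. A sketch of the step loop (the example rule table is an illustrative one-state bit-flipper, NOT the actual Wolfram (2,3) machine's table):

```python
# Minimal Turing-machine step loop: the whole "computation" is a tape plus a
# rule lookup table mapping (state, symbol) -> (new state, write, move).
def run_tm(rules, tape, state="A", head=0, blank=" ", max_steps=1000):
    tape = dict(enumerate(tape))          # sparse tape: position -> symbol
    for _ in range(max_steps):
        symbol = tape.get(head, blank)
        if (state, symbol) not in rules:  # no rule for this pair: halt
            break
        state, write, move = rules[(state, symbol)]
        tape[head] = write
        head += move                      # move = +1 (right) or -1 (left)
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, blank) for i in range(lo, hi + 1))

# Illustrative 2-rule machine: flip every bit moving right, halt on blank.
flipper = {
    ("A", "0"): ("A", "1", +1),
    ("A", "1"): ("A", "0", +1),
}
```

A human following the same table with pencil and paper is doing exactly these steps, which is the sense in which "trivially Turing complete" is meant above (modulo the unbounded-tape caveat raised downthread).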
P.S. everything in the response is wrong ... this person has no idea what it means to be Turing complete.
> all we need to be able to do is to read and write a tape or simulation of one
An infinite tape. And to be Turing complete we must "simulate" that tape--the tape head is not Turing complete, the whole UTM is.
> A system that is not even Turing complete is extremely limited.
PDAs are not "extremely limited", and we are more limited than PDAs because of our very finite nature.
> P.S. everything in the response is wrong ... this person has no idea what it means to be Turing complete.
I know very well what it means to be Turing complete. All the evidence so far, on the other hand, suggests you don't.
> An infinite tape. And to be Turing complete we must "simulate" that tape--the tape head is not Turing complete, the whole UTM is.
An IO port is logically equivalent to infinite tape.
> PDAs are not "extremely limited", and we are more limited than PDAs because of our very finite nature.
You can trivially execute every step in a Turing machine, hence you are Turing equivalent. It is clear you do not understand the subject at even a basic level.
Judging from what I read, their work is subject to regular hardware constraints, such as limited stack size, because the paper describes a mapping from regular hardware circuits to continuous circuits.
As an example, I would like to ask how to parse balanced brackets grammar (S ::= B <EOS>; B ::= | BB | (B) | [B] | {B};) with that Turing complete recurrent network and how it will deal with precision loss for relatively short inputs.
The paper also does not address training (i.e., the automatic search for the processors' equations given inputs and outputs).
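For reference, the grammar in question is recognized exactly by a tiny stack machine (a PDA); the open question is how a fixed-precision recurrent net emulates the unbounded stack:

```python
# Recognizer for the grammar S ::= B <EOS>; B ::= | BB | (B) | [B] | {B}
# A PDA needs O(nesting depth) stack cells; a fixed-width RNN would have to
# encode this stack into finite-precision state, which is where the
# precision-loss question above bites.
def balanced(s: str) -> bool:
    pairs = {")": "(", "]": "[", "}": "{"}
    stack = []
    for ch in s:
        if ch in "([{":
            stack.append(ch)
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False
        else:
            return False          # symbol outside the grammar
    return not stack              # stack must be empty at <EOS>
```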
No: the size of the networks that would be capable of that is infeasible. That's a common fallacy. You hint at this but then dismiss it.
Mathematically possible != actually possible.
> approximate any continuous function
It wouldn't surprise me if many interesting functions we'd like to approximate aren't continuous at all.
This is one of the reasons current AI tech is so poor at learning physical world dynamics.
Relationships in the physical world are sparse, metastable graphs with non-linear dynamics at every resolution. And then we measure these dynamics using sparse, irregular sampling with a high noise floor. It is just about the worst possible data model for conventional AI stacks at a theoretical level.
This is what softmax [1] is for.
> Chris Lattner, inventor of the Swift programming language recently took a look at a compiler entirely written by Claude AI. Lattner found nothing innovative in the code generated by AI [1]. And this is why humans will be needed to advance the state of the art.
This feels like an unfair comparison to me; the objective of the compiler was not to be innovative, it was to prove it can be done at all. That doesn't demonstrate anything with regards to present or future capabilities in innovation.
As others have mentioned, it's not entirely clear to me what the limit of the agentic paradigm is, let alone what future training and evolution can accomplish. AlphaDev and AlphaEvolve demonstrate that it is possible to combine the retained knowledge of LLMs with exploratory abilities to innovate in both programming and mathematics; there's no reason to believe that it'll stop there.
Yeah, it's a bit like taking the output of a student project in a compiler construction class and using it to judge whether said student is capable of innovation without telling them in advance they'd be judged on that rather than on the stated requirements of the course.
It'd be interesting to prompt it to do the same job but try to be innovative.
To your point, yeah, I mostly don't want AI to be innovative unless I'm asking for it to be. In fact, I spend much more time asking it "is that a conventional/idiomatic choice?" (usually when I'm working on a platform I'm not super experienced with) than I do saying "hey, be more innovative."
Yeah, I'd love to find time to. But e.g. I think that is also a "later stage". If you want to come up with novel optimizations, for example, it's better to start with a working but simple compiler, so it can focus on a single improvement. Trying to innovate on every aspect of a compiler from scratch is an easy way of getting yourself into a quagmire that it takes ages to get out of as a human as well.
E.g. the Claude compiler uses SSA because that is what it was directed to use, and that's fine. Following up by getting it to implement a set of the conventional optimizations, and then asking it to research novel alternatives to SSA that allows restarting the existing optimizations and additional optimisations and showing it can get better results or simpler code, for example, would be a really interesting test that might be possible to judge objectively enough (e.g. code complexity metrics vs. benchmarked performance), though validating correctness of the produced code gets a bit thorny (but the same approach of compiling major existing projects that have good test suite is a good start).
If I had unlimited tokens, this is a project I'd love to do. As it is, I need to prioritise my projects, as I can hit the most expensive Claude plans subscription limits every week with any of 5+ projects of mine...
> AI tends to accept conventional wisdom. Because of this, it struggles with genuine critical thinking and cannot independently advance the state of the art.
Of course! But that's what makes them so powerful. In 99% of cases that's what you want - something that is conventional.
The AI can come up with novel things if it has agency and can learn on its own (using e.g. RL). But we don't want that in most use cases, because it's unpredictable; we want a tool instead.
It's not true that this lack of creativity implies lack of intelligence or critical thinking. AI clearly can reason and be critical, if asked to do so.
Conceptually, the breakthrough of AI systems (especially in coding, but it's to some extent true in other disciplines) is that they have an ability to take a fuzzy and potentially conflicting idea, and clean up the contradictions by producing a working, albeit conventional, implementation, by finding less contradictory pieces from the training data. The strength lies in intuition of what contradictions to remove. (You can think of it as an error-correcting code for human thoughts.)
For example, if I ask AI to "draw seven red lines, perpendicular, in blue ink, some of them transparent", it can find some solution that removes the contradictions from these constraints, or ask clarifying questions about the domain so it can decide which contradictory statements to drop.
I actually put it to Claude and it gave a beautiful answer:
"I appreciate the creativity, but I'm afraid this request contains a few geometric (and chromatic) impossibilities: [..]
So, to faithfully fulfill this request, I would have to draw zero lines — which is roughly the only honest answer.
This is, of course, a nod to the classic comedy sketch by Vihart / the "Seven Red Lines" bit, where a consultant hilariously agrees to deliver exactly this impossible specification. The joke is a perfect satire of how clients sometimes request things that are logically or physically nonsensical, and how people sometimes just... agree to do it anyway.
Would you like me to draw something actually drawable instead? "
This clearly shows that AI can think critically and reason.
> This is, of course, a nod to the classic comedy sketch by Vihart
As a big fan of Vi Hart I was surprised to read that she wrote or was involved in that "classic comedy sketch".
As far as I can tell, after a few minutes searching, she was not.
That shows it knew this bit of satire more than anything. Also, the problem as stated isn't actually constrained enough to be unsolvable: https://youtu.be/B7MIJP90biM
Feel free to ask Claude about any other contradictory request. I use Claude Code and it often asks clarifying questions when it is unsure how to implement something, or autocorrects my request if something I am asking for is wrong (like a typo in a filename). Of course sometimes it misunderstands; then you have to be more specific and/or divide the work into smaller pieces. Try it if you haven't.
I have. In fact, I've been building my own coding agent for 2 years at this point (i.e. before claude code existed). So it's fair to say I get the point you're making and have said all the same stuff to others. But this experience has taught me that LLMs, in their current form, will always have gaps: it's in the nature of the tech. Every time a new model comes out, even the latest opus versions, while they are always better, I always eventually find their limits when pushing them hard enough and enough times to see these failure modes. Anything sufficiently out of distribution will lead to more or less nonsensical results.
The big flagship AI models aren't just LLMs anymore, though. They are also trained with RL to respond better to user requests. Reading a lot of text is just one technique they employ to build a model of the world.
I think there are three different types of gaps, each with different remedies:
1. A definition problem - if I say "airplane", what do I mean? Probably something like a jumbo jet or a Cessna, less likely an SR-71. This is something we can never perfectly agree on, and AI will always be limited to the best definition available to it. And if there is not enough training data or an agreed definition for a particular (specialized) term, AI can simply get it wrong (a nice example is the "Vihart" concept from above, which got mixed up with the "Seven Red Lines" sketch). So this is always going to be painful to correct, because it depends on each individual concept, regardless of the machine learning technology used. The frame problem is related to this: the question of what hidden assumptions I am making when I say something.
2. The limits of reasoning with neural networks. What is really happening, IMHO, is that AI models learn the rules of "informal" logical reasoning by observing humans doing it. Informal logic learned through observation will always have logical gaps, simply because logical lapses occur in the training data. We could probably formalize this logic by defining some nice set of modal and fuzzy operators, but no one has managed to put it together yet. Then most, if not all, reasoning problems would reduce to solving a constraint problem; and even if we quantize those and convert them to SAT, it would still be NP-complete and as such potentially require large amounts of computation. AI models, even when they reason (and apply learned logical rules), don't do that large amount of computation in a formal way. So there are two tradeoffs: one is that AIs learned these rules informally and so have gaps, and the other is that in practice it is desirable to limit how much reasoning time the AI will give a problem, which leads to incomplete logical calculations. This gap is potentially fixable by using more formal logic (which is what happens when you run the AI's program through tests, type checking, etc.), with the mentioned tradeoffs.
3. Going back to the "AI as an error-correcting code" analogy: if the input you give the AI (for example, a fragment of logical reasoning) is too noisy (or contradictory), it will just not respond as you expect (for example, it will correct the reasoning fragment in a way you didn't expect). This is similar to an error-correcting code faced with an input that is too noisy, outside its ability to correct: it will simply choose a different word as the correction. In AI models, this is compounded by the fact that nobody really understands the manifold of points the AI considers to be correct ideas (these are the code words in the error-correcting-code analogy). In any case, this is again an unsolvable gap; AI will never be a magical mind reader, although it can potentially be mitigated by giving the AI more context about what problem you are really trying to solve (the downside is that this will be more intrusive to your life).
I think these things, especially point 2, will improve over time. They already have improved to the point that AI is very much usable in practice, and can be a huge time saver.
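The error-correcting-code analogy in point 3 can be made concrete with a toy example. A minimal sketch, using a 3x repetition code (my choice of code for illustration, not anything from the original comment): the decoder reliably fixes one flipped bit, but a noisier input gets confidently "corrected" to the wrong codeword.

```python
# Toy illustration of the error-correcting-code analogy: a 3x repetition
# code sends bit b as [b, b, b] and decodes by majority vote. One flipped
# bit is corrected; two flipped bits are "corrected" to the wrong word --
# analogous to an AI mis-correcting input that is too far from anything
# it recognizes.

def decode(bits):
    # Majority vote over the three received bits.
    return 1 if sum(bits) >= 2 else 0

# Sender transmits 1 as [1, 1, 1].
print(decode([1, 0, 1]))  # one bit flipped: correctly decoded as 1
print(decode([1, 0, 0]))  # two bits flipped: wrongly decoded as 0
```

The decoder has no way to signal "this input is too noisy"; it always snaps to the nearest codeword, right or wrong.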
You had me at "fuzzy", but lost me at "clean up" - because that's what I usually have to do after it went on another wild refactoring spree. It's a stochastic thing, maybe you're lucky and it fuzzy-matches exactly what you want, maybe the distributions lead it astray.
On the line test, I guess it's highly probable that the joke and a few hundred discussions or blog pieces about it were in its training data.
I only have experience with Claude Code. If it goes on a spree, the task you are giving it is too big IMHO.
It's not a SAT solver (yet) and will have trouble precisely handling arbitrarily large problems. So you have to lead it a bit, sometimes.
I was recently optimizing an old code base. If I tell it to optimize, it does stupid stuff; but if I tell it to write a profiler first and then slowly attack each piece one at a time, it does really well. It's only a matter of time before it does this automatically.
Don't forget the line in the shape of a kitten!
That skit has nothing to do with Vihart ... Claude hallucinated that.
> This clearly shows that AI can think critically and reason.
No it doesn't ... Claude regurgitated human knowledge.
I think Lattner was too generous and missed a couple of crucial points in the CCC experiment. He wrote:
> CCC shows that AI systems can internalize the textbook knowledge of a field and apply it coherently at scale.
Except that's not what happened. There was neither (just) textbook knowledge nor a "coherent application at scale":
1. The agents relied on thousands of human written tests embodying many person-years of "preparation effort", not to mention a complete spec. Furthermore, their models were also trained not only on the spec (and on the tests) but also on a reference implementation and the agents were given access to the reference implementation as a test oracle. None of that is found in a textbook.
2. Despite the extraordinary effort required to help the agents in this case - something that isn't available for most software - the models ultimately failed to write a workable C compiler, and couldn't converge. They reached a point where any bug fix caused another bug and that's when the people running the agents stopped the experiment.
The main issue wasn't that there was nothing innovative in the code, but that even after imbibing textbooks and relying on an impractical amount of preparation effort and help, the agents couldn't write a workable C compiler (which isn't some humongous task to begin with).
Another perspective: AI is fast turning [0.1x to 0.5x] low-cost X-world software engineers into >1x engineers.
In contrast to the pre-AI era, one of my close relatives has become a very good "understand / write the requirements" guy. HN may be dominated by >1x engineers, but another revolution is happening at the lower/bulk end of the spectrum as well.
AI makes it possible for someone who has never written code to generate a program that does what they want. One of my friends wanted to simulate a 7,9 against a dealer 10 upcard in the card game blackjack. GPT was able to write the simulation for him in javascript/html. So it took a 0.001x coder and turned him into a 0.2x coder.
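The simulation described above is simple enough to sketch. A hedged Monte Carlo version in Python (not the friend's actual JavaScript/HTML program; function names and the simplified rules — infinite deck, dealer stands on all 17s, "hit" means take exactly one card, no doubling or splitting — are my assumptions):

```python
import random

# Player holds 7 + 9 (hard 16) against a dealer 10 upcard.
# Compare the expected value of standing vs taking one card.

CARDS = [2, 3, 4, 5, 6, 7, 8, 9, 10, 10, 10, 10, 11]  # ace counted as 11

def best_total(cards):
    # Demote aces from 11 to 1 while the hand would bust.
    total, aces = sum(cards), cards.count(11)
    while total > 21 and aces:
        total, aces = total - 10, aces - 1
    return total

def dealer_final(upcard):
    hand = [upcard, random.choice(CARDS)]
    while best_total(hand) < 17:       # dealer stands on all 17s
        hand.append(random.choice(CARDS))
    return best_total(hand)

def play(strategy):
    player = [7, 9]
    if strategy == "hit":
        player.append(random.choice(CARDS))
    p = best_total(player)
    if p > 21:
        return -1                      # player busts, loses outright
    d = dealer_final(10)
    if d > 21 or p > d:
        return 1
    return 0 if p == d else -1

def ev(strategy, n=100_000):
    return sum(play(strategy) for _ in range(n)) / n

print(f"stand EV: {ev('stand'):+.3f}, hit EV: {ev('hit'):+.3f}")
```

Both strategies come out with a negative expected value, which is the well-known fate of a hard 16 against a 10; the point of the anecdote is that a non-coder could get something like this running at all.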
Was it actually correct? How would they tell?
> And this is why humans will be needed to advance the state of the art.
What percentage of developers advance the state of the art, what percentage of juniors advance the state of the art?
Wait, is novelty really the benchmark here?
1. The experiment was to show that AI can generate working code for a fairly complicated spec. Was it even asked to do things in a novel way? If not, why would we expect it to do anything other than follow tried and tested approaches?
2. Compilers have been studied for decades, so it's reasonable to presume humans have already found the best architectures and designs. Should we complain that the AI "did nothing novel" or celebrate because it "followed best practices"?
I'm actually curious, are there radically different compiler designs that people have hypothesized but not yet built for whatever reasons? Maybe somebody should repeat the experiment explicitly prompting AI agents to try novel designs out, would be fascinating to see the results.
> You can prod AI systems to think critically
There is no critical thought, you can't prod an LLM to do such a thing. Even CoT is just the LLM producing text that looks like it could be a likely response based on what it generated before.
Sometimes that text looks like critical thought, but it does not at all reflect the logical method or means the AI used to generate it. It's just riffing.
I think that finding proofs for open mathematical questions should count as critical thought. (See https://medium.com/%40cognidownunder/three-erdős-problems-fe...)
That's impressive, but it isn't thought, any more than neurons in a dish that learn to play Tetris have thoughts, or than if you spent eons painstakingly calculating what the TPU did with the model to come up with the same output tokens, but via pen and paper instead.
When the TPU does it, is the TPU thinking? Where does the critical thinking take place in the endless pages of matrix math that eventually evaluates into the same token output as the TPU?
Sure but there's somebody somewhere who had a relevant critical thought and the LLM can find it and adapt it to your case. That's good enough much of the time.
You won't find anything innovative in most human-written compilers either, so by that argument we can't advance the state of the art either.
We created compilers in the first place. I suppose an interesting question is: would LLMs have come up with compilers if humans hadn't?
"We", yes, but my point is that most people who write compilers do nothing but implement known techniques. If you then judge human ability to innovate by investigating a single compiler for innovation, odds are you would get the entirely wrong idea of what we are capable of.
>AI tends to accept conventional wisdom. Because of this, it struggles with genuine critical thinking and cannot independently advance the state of the art.
All AI works on patterns; it's not very different from playing chess. Chess engines use a similar method: learn patterns, then use them.
While it's true that training data is what creates the patterns, so you never get a new "pattern" that isn't already in the data,
the interesting thing is that when a pattern is applied to the external world -> you get some effect,
and when the pattern works on this effect in turn -> it creates some other effect.
This is also how you came into existence, through genetic recombination.
Even though your ancestral DNA is copied forward, the copy is lossy and the effect of the environment can be profound. You probably don't look very different from your grandparents, but your grandchildren may look very different from your grandparents; at some point you are so many orders removed from the "original" pattern that it's indistinguishable from a "new thing".
in simple terms, combinatorial explosion + environment interaction
> Chris Lattner, inventor of the Swift programming language
More proximately, the creator of the Clang C compiler.
> AI tends to accept conventional wisdom
I wrote an article on that: Hard Things in Computer Science
Did you write it or did an LLM?
I find some irony in seeing the telltale tropes of conventional LLM writing there
I did put a note at the end.
The original post in Chinese was handwritten
https://blog.est.im/2026/stderr-03
the English translation was compiled by gemini3.
"The AI made a compiler, but it wasn't that novel, so AI is not novel" is a very poor rhetorical foundation
Man - just think about what you said
Two years ago that would have been beyond shocking.
If 'AI is making compilers' - then that's 'beyond disruptive'.
It's very true that AI has 'reversion to the mean' characteristics - kind of like everything in life ..
... but it's just unfair to imply that 'AI can't be creative'.
The AI is already very 'creative' (call it 'synthetic creativity' or whatever you want) - but sufficiently 'creative' to do new things, and, it's getting better at that.
It's more than plausible that for a given project 'creativity' was not the goal.
AI will help new language designers try and iterate over new ideas, very quickly, and that alone will be disruptive.
"The AI made a compiler" is an argument for the disruptive power of AI, not against it.
The LLM didn’t make a compiler. It generated code that could plausibly implement one. Humans made the compilers it was trained on. It took many such examples and examples of other compilers and thousands of books and articles and blog posts to train the model. It took years of tweaking, fitting, aligning and other tricks to make the model respond to queries with better, more plausible output. It never made, invented, or reasoned about compilers. It’s an algorithm and system running on a bunch of computers.
The C compiler Anthropic got excited about was not a "working" compiler in the sense that you could replace GCC with it and compile the Linux kernel for all of the target platforms it supports. Their definition of "works" was that it passed some very basic tests.
Same with the SQLite translation from C to Rust. Gap-ridden, poorly specified English prose is insufficient, even with a human in the loop iterating on it. The Rust version is orders of magnitude slower and uses tons more memory. It's not a drop-in Rust-native replacement for SQLite; it's something else, if you want to try that.
What mechanism in these systems is responsible for guessing the requirements and constraints missing in the prompts? If we improve that mechanism will we get it to generate a slightly more plausible C compiler or will it tell us that our specifications are insufficient and that we should learn more about compilers first?
I'm sure it's possible that there are cases where these tools can be useful. I'm not sure this is it, though. AGI is purely hypothetical. We don't simulate a black hole inside a computer and expect gravity to come out of it. We don't simulate the weather systems on Earth and expect hurricanes to manifest from the computer. Whatever bar the people selling AI systems have for AGI is a moving goalpost, a gimmick, a dream of potential to keep us hooked on what they're selling right now.
It’s unfortunate that the author nearly hits on why but just misses it. The quotes they chose to use nail it. The blog post they reference nearly gets it too. But they both end up giving AI too much credit.
Generating a whole React application is probably a breath of fresh air. I don’t doubt anyone would enjoy that and marvel at it. Writing React code is very tedious. There’s just no reason to believe that it is anything more than it is or that we will see anything more than incremental and small improvements from here. If we see any more at all. It’s possible we’re near the limits of what we can do with LLMs.
"I’m sure its possible that there are cases where these tools can be useful. I’m not sure this is it though. "
You are arguing against the internet, motor cars and electricity.
It's like 1998, and you're saying: "I'm sure it's possible there are cases where the internet can be useful. I'm not sure it is though"
On 'hackernews' of all places.
It's pretty wild to see that, and I think it says something about what hn has become (or maybe always was?).
Humans learned from prior art, and most of their inventions are modifications of prior art. You are, after all, mostly a biological machine.
The point is - there are so many combinations and permutations of reality, that AI can easily create synthetically novel outcomes by exploring those options.
It's just wrong to suggest that 'it was all in some textbook'.
"There's just no reason to believe that it is anything more than it is"
It's almost ridiculous at face value, given that millions of people are using it for more than 'helping to write react apps every day'.
It's far more likely that you've come to this conclusion because you're simply not using the tools creatively, or trying to elicit 'synthetic creativity' out of the AI, because it's frankly not that hard, and the kinds of work that it does goes well beyond 'automation'.
This is not an argument, it's the lived experience of large swaths of individuals.
>It's very true that AI has 'reversion to the mean' characteristics - kind of like everything in life ..
Genetic natural selection is the exact opposite of this mechanism... life is literally built around generating exceptions and experimenting.
> Chris Lattner, inventor of the Swift programming language recently took a look at a compiler entirely written by Claude AI. Lattner found nothing innovative in the code generated by AI [1]. And this is why humans will be needed to advance the state of the art.
Lots of people have ideas for programming languages; some of those ideas may be original-but many of those people lack the time/skills/motivation to actually implement their ideas. If AI makes it easier to get from idea to implementation, then even if all the original ideas still come from humans, we still may stand to make much faster progress in the field than we have previously.
So the problem with Chris' take is: "This one fun project didn't produce anything particularly interesting."
Set aside the fact that we now have magic that can produce "conventional" compilers, and take it to a Moore's-Law-style situation: start 1,000 "create a compiler" projects, give each a temperature to try new things, experiment, mutate. Collate, find new findings, reiterate: another 1,000 runs seeded with some of the novel findings. Assume this is effectively free to do.
The stance that this - which can be done (albeit badly) today and will get better and/or cheaper - won't produce new directions for software engineering seems entirely naive.
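The fan-out / collate / reiterate loop described above is essentially an evolutionary search. A minimal sketch, where `generate` and `evaluate` are hypothetical placeholders standing in for an LLM run and a compiler test harness (all names and the toy scoring are my own, purely illustrative):

```python
import random

def generate(seed_findings, temperature):
    # Placeholder for one LLM attempt: higher temperature means more
    # variance in the outcome; prior findings raise the baseline.
    quality = random.gauss(0.5 + 0.1 * len(seed_findings), temperature)
    return {"findings": seed_findings + [quality], "score": quality}

def evaluate(candidate):
    # Placeholder for an evaluation harness (test suite, benchmarks).
    return candidate["score"]

def explore(rounds=3, population=1000):
    findings = []
    for _ in range(rounds):
        # Fan out: many attempts at varied temperatures.
        batch = [generate(findings, random.uniform(0.1, 1.5))
                 for _ in range(population)]
        # Collate: keep the best attempt's findings, then reiterate.
        best = max(batch, key=evaluate)
        findings = best["findings"]
    return findings
```

Whether such a loop produces genuinely novel compiler designs rather than noise depends entirely on how good the `evaluate` stand-in is, which is the hard part the comment glosses over.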
Moore's law states that the number of transistors in an integrated circuit doubles about every two years. It says nothing about the capabilities of statistical models.
In fact, in statistics we have another law, which states that as you increase parameters you increase the risk of overfitting. And overfitting already seems to be a major problem with state-of-the-art LLMs. When you start overfitting, you are pretty much just re-creating material that is already in the dataset.
In their example it doesn't matter in this case whether the models get better or not. It matters whether inference gets cheap enough that we can afford to throw huge amounts of tokens at exploring the problem space.
Further model improvements would be a bonus, but it's not required for us to get much further.
Modern LLMs showed that overfitting disappears if you add more and more parameters. "Double descent" is well documented, if not well understood.
> Modern LLMs showed that overfitting disappears if you add more and more parameters.
I have not seen that. In fact, this is the first time I've heard this claim, and frankly it sounds ludicrous. I don't know how modern LLMs deal with overfitting, but I would guess there is simply a content-matching algorithm after inference: if there is a copyright match, the program does something to alter or block the generation. That is, I suspect the overfitting prevention is algorithmic and not part of the model.
>Chris Lattner, inventor of the Swift programming language recently took a look at a compiler entirely written by Claude AI. Lattner found nothing innovative in the code generated by AI [1]. And this is why humans will be needed to advance the state of the art.
"Needed to advance the state of the art" and actually being deployed to do so are two different things. More likely, either AI will learn to advance the state of the art itself, or the state of the art won't be advancing much anymore...
Yeah I think he had a pretty sane take in that article:
>CCC shows that AI systems can internalize the textbook knowledge of a field and apply it coherently at scale. AI can now reliably operate within established engineering practice. This is a genuine milestone that removes much of the drudgery of repetition and allows engineers to start closer to the state of the art.
And also
> The most effective engineers will not compete with AI at producing code, but will learn to collaborate with it, by using AI to explore ideas faster, iterate more broadly, and focus human effort on direction and design. Lower barriers to implementation do not reduce the importance of engineers; instead, they elevate the importance of vision, judgment, and taste. When creation becomes easier, deciding what is worth creating becomes the harder problem. AI accelerates execution, but meaning, direction, and responsibility remain fundamentally human.
> allows engineers to start closer to the state of the art
This reminds me of the Slate Star Codex story "Ars Longa, Vita Brevis"[1], where it took almost an entire lifespan just to learn what the earlier alchemists had found, so only the last few hours of an alchemist's life were actually valuable. Now we can all skip ahead.
1. https://slatestarcodex.com/2017/11/09/ars-longa-vita-brevis/
I think the fact that AI can make a working compiler is crazy, especially compared to what most of us thought was possible in this space 4 years ago.
Lately, there have been a few examples of AI tackling what have traditionally been thought of as "hard" problems -- writing browsers and writing compilers. To Chris Lattner's point, these problems are only hard if you're doing it from scratch or doing something novel. But they're not particularly hard if you're just rewriting a reference implementation.
Writing a clean room implementation of a browser or a compiler is really hard. Writing a new compiler or browser referencing existing implementations, but doing something novel is also really hard.
But writing a new version of gcc or webkit by rephrasing their code isn't hard, it's just tedious. I'm sure many humans with zero compiler or browser programming experience could do it, but most people don't bother because what's the point?
Now we have LLMs that can act as reference implementation launderers, and do it for the cost of tokens, so why not?
With the way modern development often goes, using spicy autocomplete for code is essentially a fast track to the cargo-culted solutions of whatever day the model was trained on.
Reinforcement Learning changes this though - remember Move 37?
The issue is you need verifiable rewards for that (and a good environment set-up), and it's hard to get rewards that cover everything humans want (security, simplicity, performance, readability, etc.)
I think this article was on HN a few days ago.
> And this is why humans will be needed to advance the state of the art.
That might be valid, if LLMs stopped improving today.
LLMs helping with code that is average to above average might be an improvement overall across most projects. I have also found that some things LLMs suggest that are new to me can feel innovative; but in areas where I have experience, I often have a different or more effective place to start, instead of iterating toward it while trying to contain complexity.
so we need to make some crazy llms...
Sure. When we reach the point of AI being able to make independent innovations, we have reached AGI, right?
>AI systems are trained on vast bodies of human work and generate answers near the center of existing thought. A human might occasionally step back and question conventional wisdom, but AI systems do not do this on their own. They align with consensus rather than challenge it. As a result, they cannot independently push knowledge forward.
But AI companies keep telling us AGI is 6 months into the future.
All of this is true of AI systems in 2026
However AI systems in 2026-ε were utterly inadequate at coding
And AI systems in 2026+ε might not have the present limitations
I mean this genuinely; that was a very well written piece. Well said!
> Chris Lattner, inventor of the Swift programming language recently took a look at a compiler entirely written by Claude AI. Lattner found nothing innovative in the code generated by AI [1].
Well, of course. Despite people applying the label of AI to them, LLMs don't have a shred of intelligence. That is inherent to how they work. They don't understand, only synthesize from the data they were trained on.
> don't have a shred of intelligence. ... They don't understand, only synthesize from the data they were trained on.
Couldn't you say that about 99% of humans too?
99% of humans in a particular specialization, sure. It's the 1% who become experts in that specialization who are able to advance the state of the art. But it's a different 1% for every area of expertise! Add it all up and you get a lot more than 1% of humans contributing to the sum of knowledge.
And of course, if you don't limit yourself to "advancing the state of the art at the far frontiers of human knowledge" but allow for ordinary people to make everyday contributions in their daily lives, you get even more. Sure, much of this knowledge may not be widespread (it may be locked up within private institutions) but its impact can still be felt throughout the economy.
If 1% of the people in each specialization are advancers, and you add up all the specializations together, then 1% of the total number of people are advancers.
Even this assumes that everyone has a specialization in which 1% of people contribute to the sum of human knowledge. I would probably challenge that. There are a lot of people in the world who do not do knowledge-oriented work at all.
You don’t need to do knowledge work to advance the state of the art. You could be working in a shoe factory and discover a better way to tie your shoes.
Your math assumes each person has exactly one thing they do in life. The shoe factory worker could also be a gardener. He might not make any advancements in gardening, but his contribution means that if you add up all the fields of specialization the sum is greater than the population of humans. Take 1% of that sum and it’s greater than 1% of humans. 1% of people in a specialization is not the same as 1% of specialists. In fact, I would say it’s a much higher proportion of specialists making contributions (especially through collaboration).
Oh, and don’t get caught up on the 1% number. I used it as shorthand for whatever small number it is. Maybe it’s only 10 people in some hyper-specialized field. But that doesn’t matter. Some other field may have thousands of contributors. You don’t have to be a specialist in a field to make a contribution to that field, for example: glassmakers advanced the science of astronomy by making the telescope possible.
>99% of humans in a particular specialization, sure. It's the 1% who become experts in that specialization who are able to advance the state of the art
How? By also "synthesizing the data they were trained on" (their experience, education, memories, etc.).
No, that's not all we're doing. If that's all humans ever did, we'd still be living in the stone age.
That's begging the question though.
Where's the proof we don't do exactly this? The mind as a prediction engine is one of the handful most accepted theories.
Well, humans do experiments for one, as I explained elsewhere in the discussion. Experiments give us access to new knowledge from the world itself, which is not merely a synthesis of what we already know.
Real progress in science is made by the hard collection and cataloguing of data every single day, not by armchair philosophizing.
Can we be sure? Maybe it's just very rare for experience, education and memories to line up in exactly the way that allows synthesizing something innovative. So it requires a few billion candidates and maybe a couple of generations too.
I want to point back to my remark about everyday people.
> if you don't limit yourself to "advancing the state of the art at the far frontiers of human knowledge" but allow for ordinary people to make everyday contributions in their daily lives, you get even more
This isn't a throwaway comment. I do this all the time myself, at work. Everywhere I've worked, I do this. I challenge the assumptions and try to make things better. It's not a rare thing at all, it's just not revolutionary.
Revolutions are rare. Perhaps only a handful of them have ever happened in any one particular field. But you simply will not ever go from Aristotelian physics to Newtonian physics to General Relativity by merely "synthesizing the data they were trained on", as the previous comment supposed.
Edit: I should also say something about experimentation. You can't do it from an armchair, which is all an LLM has access to (at present). Real people learn things all the time by conducting experiments in the world and observing the results, without necessarily working as formal scientists. Babies learn a lot by experimenting, for example. This is one particular avenue of new knowledge which is entirely separate from experience, education, memories, etc. because an experiment always has the potential to contradict all of that.
Experimentation leads to experience, so I feel like this was included by the parent comment. And in the case of writing software, agents are able to experiment today. They run tests, check log output, search DBs... Sure, they can't have apples fall on their heads like Newton had but they can totally observe the apple falling on someones head in a video.
> Experimentation leads to experience
Of course it does, but only after the fact. You don't have any experience of the result of the experiment before you perform it.
> Sure, they can't have apples fall on their heads like Newton had but they can totally observe the apple falling on someones head in a video
I have strong doubts that LLMs have any understanding whatsoever of what's happening in images (let alone videos). The claim (I've sometimes heard) that they possess a world model and are able to interpret an image according to that model is an extremely strong one, that's strongly contradicted by the fact that they: a) continue to hallucinate in pretty glaring ways, and b) continue to mis-identify doctored (adversarial) images that no human would mis-identify (because they don't drastically alter the subject).
In software, they can and do perform experiments (make a change then observe the log output). I don't think they possess a "world model" or that it's worth spending too much thought on... My reasoning is more along the lines that our brains are also just [very advanced] inference machines. We also hallucinate and mis-identify images (there are image/video classification tasks where humans have lower scores).
For me the most glaring difference to how humans work is the lack of online learning. If that prevents them from being able to innovate, I'm not so sure.
Software is not the world. It’s a tiny bit of what humans do.
The lack of online learning is a critical fault. Much of what humans learn (such as anything based on mathematics) has a dependency tree of stuff to learn. But even mundane stuff involves a lot of dependent learning. For example, ask an LLM to write a cookbook and it can synthesize from recipes that are already out there but good luck having it invent new cooking techniques that require experimentation or invention (new heat source, new cooking utensils, etc).
I guess we'll just have to wait and see how things turn out. Currently it seems we have examples of where it seems like the technology allows some amount of innovation (AlphaGo, software, math proofs) and examples where they seem surprisingly stupid (recipes?).
Btw, it looks like there is a growing body of research evaluating exactly this. I found this nice overview with even some benchmarks specifically for scientific innovation: https://github.com/HKUST-KnowComp/Awesome-LLM-Scientific-Dis...
How can you ever say that about humans? The human brain is not trained once on all the data before you start using it; it is constantly training and rewiring in real time while being used, which is a dramatic difference from how LLM transformers work. Humans can form new abstractions from sparse experience, which is the true conceptual reasoning that LLMs struggle with.
Yes, and the natural extension is that a lot of what people do day to day is not work driven by intelligence; it is just reusing a known solution to a presented problem in a bespoke manner. That, however, is something AI excels at.
The LLM was trained on 100% of humans, the 99% you’re scoffing at is feeding the LLM answers.
100% (or close to it) of material AI trains on was human generated, but that doesn't mean 100% of humans are generating useful material for AI training.
Let's train one on just the expert written code and books then, and not the entirety of GitHub or Stack Overflow and such, and see how it fares...
Yes... maybe not 99%...
You could say the same thing about Chris Lattner. How did he advance the state of the art with Swift? It’s essentially just a subjective rearranging of deck chairs: “I like this but not that.” Someone had to explain to Lattner why it was a good idea to support tail recursion in LLVM, for example - something he would have already known if he had been trained differently. He regurgitates his training just like most of us do.
That might read like an insult to Lattner, but what I’m really pointing out is that we tend to hold AIs to a much higher standard than we do humans, because the real goal of such commentary is to attempt to dismiss a perceived competitive threat.
>Despite people applying the label of AI to them, LLMs don't have a shred of intelligence. That is inherent to how they work. They don't understand, only synthesize from the data they were trained on
People also "synthesize from the data they were trained on". Intelligence is a result of that. So this dead-end argument then turns into begging the question: LLMs don't have intelligence because LLMs can't have intelligence.
So AI won't surpass humans, because Chris Lattner can do better than a model that didn't exist two years ago?
> Claude AI. Lattner found nothing innovative in the code generated by AI [1]. And this is why humans will be needed to advance the state of the art
And yet the AI probably did better than 99% of human devs would have done in a fraction of the time.
Human devs rarely need to create compilers. Those that do would do a much better job.
what’s your point again?
The point is that saying the LLM failed to do what the overwhelming majority of devs can't do isn't exactly damning.
It's like Stephen King saying an AI-generated novel isn't as good as his. Fine, but most of us have much lesser ambitions than topping the work of the most successful people in the field.
LLMs still do forEach, it’s like wearing Tommy Hilfiger
That's one of the benefits of LLMs: they don't care for bullshit fashion
Sure they do, just whatever the average of "fashion" was across the training set.
That approach they have for everything. Zero special care for fashion.
> Lattner found nothing innovative in the code generated by AI [1].
In theory, we are just one good innovation away from changing this. In reality, it's probably still some years away, but we are now at the point where we have to seriously reckon with this possibility.
> And this is why humans will be needed to advance the state of the art.
But we only need a minority for innovation, progress and control. The bulk of IT is boring repetitive slop, lacking any innovation and just following patterns. The endgame will still result in probably 99% of humans being useless to the machinery. And this is not really new. In any industry, the majority of workers are just average, without any real influence on their industry's progress, and just following conventional wisdom to make some bucks for surviving the next day.
Humans have the advantage of millions of years of training baked into their genes. There is nothing magical about being human. Once algorithms have the ability to collect data from the real world (robotics), to do experiments in the real world, and to mimic nature, all these advantages will fall away.
The rate of change is accelerating. I worry we don't have much time left unless we get serious about merging with machines.
Would you seriously consider merging with cockroaches if a swarm of smart cockroaches invaded Earth?
Depends. Do they use vim or emacs?
District 9 vibes :-)
The innovation isn't the output but the provenance.
We don't necessarily need a Chris Lattner to make a compiler now.
End of the day Chris Lattner is a single individual, not a magic being. A single individual posting submarine ads for his cleverness in the knowledge work subfield of language compilers. Of course he is going to drag the competition.
Languages are abstractions over memory addresses that provide something friendlier for human consumption. It's a field that's decades old and constantly repeats itself, since it revolves around the same thing: developing a compression technique to deduplicate the language's more verbose syntax and transpile it to machine code.
Building a compiler is itself just programming. None of this is truly novel nor has it been since the 60s-70s. All that's changing is the user interface; the syntax.
Intelligence gives rise to our language capacity. The languages themselves are merely visual art that fits the preferences of the language creator. They arbitrarily decided nesting the dolls their way makes the most sense.
Currently have agents iterating on "prompt to binary". Reversing a headless Debian system into a model and optimizing to output tailored images. Opcodes, power use in system all tucked into a model to spit back out just the functions needed to achieve the electromagnetic geometry desired[1]
[1] https://iopscience.iop.org/article/10.1088/1742-6596/2987/1/...
So someone who is a proven expert in his field, who writes a detailed, well-reasoned, balanced assessment of the state of compiler development and the role LLMs play in this, is according to you, “A single individual posting submarine ads for his cleverness in the knowledge work subfield of language compilers. Of course he is going to drag the competition.”?
Chris Lattner has forgotten more about language and compiler design than most of us will know in a lifetime. If you’re going to mis-characterize him you need to bring more to the table than some reductionist pseudo intelligent babbling.
It seems to be inevitable that with any new technology we go through a phase of super duper excitement about the possibilities, where we try to use it to the extreme, and through that process start to absorb what it actually is and isn't capable of.
The hype cycle's distasteful of course, but I've accepted that this is how humans figure out what things are. Like a child we have to abuse it before we learn how to properly use it.
I think many of us sense and have sensed that the promises made of agentic programming smell too good to be true, owing to our own experiences as programmers and engineers. But experts in a domain are always the minority, so we have to understand that everyone else is going to have to reach the same intuition the hard way.
I’ve been programming professionally for 25 years. Well, 24 really because in the whole last year I barely wrote a line myself but my output increased dramatically.
If you can’t see that it’s over, I’m not sure what to tell you. You will, in time.
The type of work matters and understanding how capital interacts with labor is something that hasn't really changed over the last 150 years (not the first time productivity tools have been introduced in capitalism).
All we are going to get is increased mass surveillance and molding software engineers into more assembly line work.
Neither of those things sounds good or reasonable, nor wanted by a majority in our industry.
But sure! Being able to do more busy work is useful I guess, too bad the workers will never benefit from such a scheme; hopefully the masses don't overthrow the country, but I wouldn't blame them if they did.
+1, it feels very much like a case of _feeling_ more productive because you’re outputting more …stuff…, but IME, it’s easy to produce a lot of stuff that isn’t useful and just creates a productive vibe (pun intended)
that's the problem.
I don't know if I'm just not seeing something that the vibe coders do, or if it's not really that crazy?
like I'd say it's a productivity boost in some aspects, definitely. but it's not like you'd be able to get the same output unless you had years of experience and know what you're doing
and going full unsupervised agentic mode I haven't seen much benefit from that. still have to pretty much just guide em to do this and that in this way
These models haven't been very good for long.
To assume progress stops here is silly.
I'm already growing tired of prognostications using the current status quo when the current status quo isn't even six months old.
> AI is getting better/faster/cheaper at incredible rates, but regardless of when, unless you believe in magic, it's only a matter of time until we reach the point at which machine intelligence is indistinguishable from human intelligence. We call that point AGI.
I still don’t think this is certain. It’s telling that code generation is one of the few things these systems do extremely well. Translating between English and French isn’t that much different than translating between English and Python. These are both tasks where the most likely next token has a good shot of being correct. I’m still not sold that we should assume that LLM-based tech will be well-generalized beyond that. Maybe some new tech will come along to augment or replace LLMs and that will get us there, who knows. Just because the line is going up quickly at the moment doesn’t mean it always will.
> unless you believe in magic, it's only a matter of time until we reach the point at which machine intelligence is indistinguishable from human intelligence
I find this flippancy about the greatest mystery in the universe extremely arrogant and incurious and wish it wouldn't be so prevalent.
In theory a computer should be able to model any physical process so I do agree it's only a matter of time. That said I don't think I will be alive to see it honestly.
The current tech won't get us there just like the steam engine or the internal combustion engine didn't get us to the Moon. And getting to AGI is probably more like getting to Mars.
I personally think we have some tremendous energy problems (and the negative externalities derived from “solutions”) to contend with before we even sniff AGI.
Definitely.
Commercial fusion is still decades away.
Solar and lithium batteries are getting cheaper every year but unfortunately still lots of political issues in many countries.
Considering we can only approximate irrational numbers, I’m not sure that’s a given. Maybe we’ll have a breakthrough with some type of analog computing, but we could also just hit physical limits on energy or precision.
Yeah, Church-Turing suggests that a computer can compute any computable function. Or the universality of a computable substrate. Maybe there's a confusion that computational universality implies everything-universality?
> In theory a computer should be able to model any physical process
Wait, which theory is that? The Church-Turing thesis says the computer can compute any computable function.
Why do we think that the computer can model any physical process?
Or are we suggesting that you can build a computer out of whatever physical process you want to model?
> > In theory a computer should be able to model any physical process
> Wait, which theory is that?
The Church-Turing-Deutsch Principle. (Which isn’t a theory in the empirical sense, but somewhat more speculative.)
> Or are we suggesting that you can build a computer out of whatever physical process you want to model?
Well, you obviously can do that. Whether that computer is Turing equivalent, more limited, or potentially a hypercomputer is...well, Church-Turing-Deutsch says the last is always false, but good luck proving it.
Hans Moravec introduced the idea of the "landscape of human competence" , a topology representing the peaks and valleys of human capabilities. Art, writing, coding, game playing. Elevation corresponds to cognitive difficulty, and the landscape maps to everything humans are capable of doing. AI is represented as the rising waterline - when Moravec created the idea, AI was more or less constrained to a few scattered lakes, with humans clearly demonstrating superiority nearly everywhere. After transformers, the waterline began to rise, and today we no longer have a vast contiguous majority, but are left with a scattered handful of islands, and the waterline continues to rise.
It's not arrogant or incurious to acknowledge the flood, but it might be to deny that flood is happening.
If you think there are fundamental human qualities or capabilities that AI can't ever have, you might put in the work to articulate that, instead of projecting negativity onto people who have watched the vast majority of the human competencies landscape get completely submerged over the last 10 years. The islands we have remaining don't really suggest any unifying principle underlying things that AI is still bad at, but instead they highlight the lack of technical capabilities and various engineering tracks to solve for. Many of the problems are solved in principle, but are economically infeasible; for all intents and purposes, you might consider those islands completely submerged as well.
I think you would need to work very hard to prove that the topology you are describing is well-formed enough for this analogy to make sense. For one: "cognitive difficulty" is not really a crisply defined quantity such that expressing it as a function of some input vector makes obvious sense (to me anyways). What's the cognitive difficulty of deciding what to have for dinner? What's the cognitive difficulty of making my 5 year plan? What's the cognitive difficulty of imagining a nice gift to get my wife for her birthday? There are so many things humans do which are heavily 'contingent' (in the sense of having sensitivity to the local culture, history, personal experience, etc) that the idea of being able to assign everything a single, decidable scalar to represent 'difficulty' seems like an extremely tall order to me. And that's setting aside whether the ambient vector space of 'human capabilities' is even really a sensible construct (a proposition that I also doubt quite heavily).
All this to say that describing what's happening as a 'rising tide' seems misleading to me. Techno-sociological development is super messy already, let's not make it more complex by pinning ourselves to inaccurate and potentially misleading analogies. The introduction of the car did not 'push humans higher onto a set of capability peaks', it implied a total reorganization of behavior and technologies (highways, commuting, and suburban sprawl); using the terms of your analogy humans built new landmasses on top of the water.
Two counterpoints:
1. Implying that there are only "a few islands left" shows a strong bias towards assuming that only the things humans do in the digital realm are relevant, when in fact the vast majority of things humans do are not in the digital sphere at all.
2. It's pretty clear that when most people say machine intelligence is close, right now, they are alluding to LLM- or deep-learning-based approaches. I don't think you should assume they mean machines will catch up in 100 years. They seem to imply it will be by 2030 or something.
To address both points: there appear to be no individual, well-defined tasks that humans can do that you cannot train a machine to do. Some tasks are inefficient, some uneconomical, and others impractical, but there appear to be no tasks that machines in principle cannot do. What is missing is broad generalization, human-equivalent time horizons, continuous learning, and embodiment.
Robotics has passed the point of superhuman performance for any given task. Software has passed the point of superhuman performance for any given task.
Regardless of the particular technique or embodiment, the constraints aren't "is it possible in principle" but "is it too expensive" and "is this allowed by the pertinent principles and regulations and laws"
We don't have AGI that learns and adapts in real time like humans. We do have incredibly powerful algorithms that can learn from whatever data we throw at them, but many domains where it's impractical, ruinously expensive, illegal, or otherwise not possible to use AI for some other good reasons.
The few islands left to humanity are not fundamental barriers. We haven't solved intelligence, or achieved RSI or ASI or AGI yet; those were never the important thresholds.
AI has always been a question about good enough, and it looks like we've gone solidly past the good enough line into "we can probably automate everything" even if we don't solve the big problems over 5 or 10 years or beyond. I think it's very unlikely we don't solve intelligence by 2030, but even if AI stalls out where it's at right now, and all we get is the incremental improvements and engineering optimizations on current SOTA, we have enough to automate anything humans do at levels exceeding human capabilities.
What AGI and ASI do is make humans economically obsolete. Good enough AI means there might be some places where humans are needed for generalization and adaptability until the exhaustive tedious work gets done for a particular application that enables a robot or software system to be competent enough to handle the work.
A hiker on a mountain might as well imagine that at the end of their journey they will step off onto the moon. But it's just a mirage. As us humans have externalized more and more of our understanding of the world into books, movies, websites and the like, our methods of plumbing this treasury for just the needed tidbits have developed as well. But it's still just working off that externalized collective understanding. This includes heuristics for combining different facts to produce new ones, sure, but still dependent on brilliant individuals to raise the "island peaks" which ultimately pulls up the level of the collective intelligence as well.
While a 2 dimensional projection of intelligence may be a satisfying rhetorical device, I think it’s an extremely mathematically naive interpretation.
Not only is intelligence probably most accurately modeled as something extremely high dimensional, it’s probably also extremely nonlinearly traversed by learning methods, both organic and artificial. Not a topology very easily “flooded”.
In other words: bull shit.
It wasn't a formal model or a theorem, it was an observation about reality. Humans are indeed gradually being overtaken on almost all fronts by AI. But by all means, if you want to take issue with Moravec's framing of the issue, feel free.
Explaining it as something like "realizable instantiation of physical computation occurring in the universe mapping to an ultra-sparse, discrete point cloud embedded in the Euclidean parameter space of all computable functions" could definitely be more precise, but you're either going to need a topology like a landscape or a bumpy sphere to visualize it, and then you're going to need to spend more time showing the effects of things like scaling laws, available compute, where the known boundaries of human intelligence lie, and so on, and so forth, and by then you've lost everyone, probably even the ML professor.
It's a good enough metaphor that maps to a real thing.
> It's a good enough metaphor that maps to a real thing.
My entire point, which I’m not sure you addressed is that no, it’s not a good metaphor. Water “floods” a 3d topology in a predictable manner with regards to the volume the topology can contain. The entire argument is that progress is observable, predictable, and limitless, and the “islands” are a rhetorical device. My argument was turning the rhetorical device around and pointing out that we know so little about intelligence and AI that describing it in this way is not meaningful beyond sounding intellectual.
Sure, but it's entirely possible this point lies way past the expiry date of the universe itself (if there is such a thing). Plus, I do believe in magic - the magic of Life, the Universe, and Everything. And "42" doesn't dispel it for me.
Yeah, I was thinking this too, but he did say "indistinguishable". I guess if you are an intellectual you can buy into that. Fortunately, consciousness and intelligence are much bigger than we can comprehend as human beings. We want to break everything down into understandable bites, but the truth is we are barely scratching the surface of what the brain does and what constitutes intelligence.
>I find this flippancy about the greatest mystery in the universe extremely arrogant and incurious and wish it wouldn't be so prevalent.
There's absolutely nothing mysterious about human intelligence unless you refuse to give it a clear definition. All the people waffling on about AGI refuse to give a clear, measurable definition of intelligence, because if they defined it exactly then it's possible to clearly determine whether a given machine does or does not meet that criteria. It's just 21st century woo peddling.
> There's absolutely nothing mysterious about human intelligence unless you refuse to give it a clear definition
This begs the question[0] by assuming that it can be given a clear, measurable definition. A large part of the mystery of consciousness and intelligence is that it's hard to define, measure, or explain; the most characteristic aspects (i.e. those relating to a subjective experience) are, in principle, impossible to measure or verify[1]. To say that it's not mysterious once you give it a clear, measurable definition is basically saying "it's not mysterious once you remove all aspects that make it mysterious."
Until the hard problem of consciousness is solved, this is absolutely false
In a chat bot coding world, how do we ever progress to new technologies? The AI has been trained on numerous people's previous work. If there is no prior art, for say a new language or framework, the AI models will struggle. How will the vast amounts of new training data they require ever be generated if there is not a critical mass of developers?
Most art forms do not have a wildly changing landscape of materials and mediums. In software we are seeing things slow down in terms of tooling changes because the value provided by computers is becoming more clear and less reliant on specific technologies.
I figure that all this AI coding might free us from NIH syndrome and reinventing relational databases for the 10th time, etc.
LLMs are very much NIH machines
i'd go one step further, they're going to turbocharge the NIH syndrome and treat every code file as a separate "here"
For others like me who know “NIH” to be “National Institutes of Health”…
“NIH” here refers to “Not Invented Here” Syndrome, or a bias against things developed externally.
Basically not wanting to use dependencies or frameworks from outside the company or team.
See, I thought they were the same thing: considering the Queensland Health payroll database issues, I assumed someone coined the term knowing it would clobber the Health acronym.
Yeah it’s gonna make the bar for good enough here super easy to meet and people will have less reasons to look around outside
This was one of my predictions in https://thomshutt.com/2026/03/17/predictions/ - fiddling around with creating new languages and lower level tooling becomes less rewarding versus figuring out what we can get agents to build on top of the existing ones
The bar to create the new X framework has just been lowered so I expect the opposite, even more churn.
All frameworks make some assumptions and therefore have some constraints. There was always a well-understood trade-off when using frameworks of speeding up early development but slowing down later development as the system encountered the constraints.
LLMs remove the time problem (to an extent) and have more problems around understanding the constraints imposed by the framework. The trade-off is less worth it now.
I have stopped using frameworks completely when writing systems with an LLM. I always tell it to use the base language with as few dependencies as possible.
If you are doing js, that makes sense since all the frameworks are a mess anyway.
Yes but no AI will know how to use your new framework so it will not get adopted
There is even a bigger problem; if AI didn't see your framework, you don't exist. Soon AI companies will be asking for money from devs to include their frameworks in the training dataset. Worse than Google's SEO that could at least be gamed somewhat.
I don't think people discovered frameworks with Google, nor are they going to do so with LLMs. It might be a different topic for libraries.
If no coding agent offers your framework as a choice for code generation, your framework might not even exist, you'd get the same outcome.
That’s factually untrue. I’m using models to work on frameworks with nearly zero preexisting examples to train on, doing things no one’s ever done with them, and I know this because I know the ecosystem around these young frameworks.
Models can RTFM (and code) and do novel things, demonstrably so.
>I’m using models to work on frameworks with nearly zero preexisting examples to train on
Zero preexisting examples of your particular frameworks.
Huge number of examples of similar existing frameworks and code patterns in their training set though.
Still not a novel thing in any meaningful way, not any more than when someone who has coded in dozens of established web frameworks writes against an unfamiliar framework homegrown at their new employer.
> Still not a novel thing in any meaningful way
Right. What you're saying is that barely anyone is doing truly novel work. 100% agree.
Almost no e.g. web or app or enterprise or even game developer is doing any novel work, does that come as a surprise?
And is that fact supposed to be an argument in favor of how LLMs can do novel work and move the state of the art (which is what we're arguing about).
I mean, "LLMs can do novel work because: barely anyone is doing truly novel work" doesn't really compute as an argument.
What I'm saying is that LLMs don't have to do truly novel work in order to be useful. They are useful because the lion's share of all work is a variation on an existing theme (even if the creator may not realize it).
I'm not talking about web frameworks. I'm talking about other frontiers with darn near zero preexisting examples, and no code samples to borrow from in any language or in any similar framework (because there is no such thing).
"LLMs can only emit things they've been trained on" is wholly obsolete.
>I'm not talking about web frameworks. I'm talking about other frontiers
Such as what?
>with darn near zero preexisting examples
Whatever it is, you'd be surprised.
Yeah. I work with bleeding-edge Zig. If you just ask Claude to write you a working TCP server with the new Io API, it doesn’t have any idea what it’s doing and the code doesn’t compile. But if you give it some minimal code examples, point it to the recent blog posts about it, and paste in relevant parts from std, it does incredibly well and produces code that it has not been trained on.
It also needs a validation loop. Give it the compiler output and I bet it would fix that code even without examples/a blog post.
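The validation loop being described can be sketched generically. This is a hypothetical shape, not any particular agent framework's API: `check` stands in for invoking a compiler and collecting diagnostics, and `fix` stands in for whatever LLM call you would make.

```python
def validation_loop(source, check, fix, max_rounds=5):
    """Generic check-and-fix loop: run `check` (e.g. invoke the compiler
    and collect diagnostics); if it reports errors, hand the source and the
    errors to `fix` (e.g. an LLM call) and retry, up to `max_rounds` times."""
    for _ in range(max_rounds):
        errors = check(source)
        if not errors:
            return source, True   # clean: hand back to the human
        source = fix(source, errors)
    return source, False          # gave up; needs human attention

# Toy demonstration with stubbed-in check/fix functions:
check = lambda s: "" if s.endswith(";") else "error: expected ';'"
fix = lambda s, errs: s + ";"
result, ok = validation_loop("const x = 1", check, fix)
# ok is True, result is "const x = 1;"
```

The point of parameterizing `check` and `fix` is that the loop itself carries no model-specific logic; the compiler output is the only feedback signal the model needs.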
It’s always been about context, then being able to communicate it.
Managing people or managing a hyper-knowledgeable intern (an LLM): if you know what you need, actually want what you want (super difficult), and have the ability to provide context to someone else, then management has always been easier for you than for others.
I find one of the more interesting things about the current “AI debate” is that many programmers are autistic, or at least close to one side of an empathy spectrum, and have always had trouble communicating what is needed for a task and why. So it’s hard for me to take the opinions going around at face value.
Maybe you’re right about modern LLMs. But you seem to be making an unstated assumption: “there is something special about humans that allow them to create new things and computers don’t have this thing.”
Maybe you can’t teach current LLM-backed systems new tricks. But do we have reason to believe that no AI system can synthesize novel technologies? What reason do you have to believe humans are special in this regard?
After thousands of years of research we still don’t fully understand how humans do it, so what reason (besides a sort of naked techno-optimism) is there to believe we will ever be able to replicate the behavior in machines?
Well, understanding how it works is not a prerequisite to being able to do it.
People have been doing things for millennia before they understood them. Did primitive people understand the mechanism by which certain medicinal plants worked in the body, or did they just see that when they, e.g., boiled and consumed them, they had a certain effect?
Thousands of years?
We've only had the tech to be able to research this in some technical depth for a few decades (both scale of computation and genetics / imaging techniques).
And then we discover that DNA in cells (not only brain cells) is an ideal quantum computer, that DNA's reactions generate coherent light (as in lasers) used to communicate between cells, and that a single dendrite of a cerebral cortex neuron can compute at the very least an XOR function, which requires at least 9 coefficients and one hidden layer. Neurons have anywhere from one or two to dozens of thousands of dendrites.
Even skin cells exchange information in a neuron-like manner, including by using light, albeit thousands of times slower.
This raises the complexity of the human brain to "86 billion quantum computers operating thousands of small neural networks, exchanging information over laser-based optical channels."
The Church-Turing thesis comes to mind. It would at least suggest that humans aren’t capable of doing anything computationally beyond what can be instantiated in software and hardware.
But sure, instantiating these capabilities in hardware and software are beyond our current abilities. It seems likely that it is possible though, even if we don’t know how to do it yet.
The church turing thesis is about following well-defined rules. It is not about the system that creates or decides to follow or not follow such rules. Such a system (the human mind) must exist for rules to be followed, yet that system must be outside mere rule-following since it embodies a function which does not exist in rule-following itself, e.g., the faculty of deciding what rules are to be followed.
We can keep our discussion about Church-Turing here if you want.
I will argue that the following capacities: 1. creating rules and 2. deciding to follow rules (or not) are themselves controlled by rules.
Church-Turing is about computable functions. Uncomputable functions exist.
For example, how much rain is going to be in the rain gauge after a storm is uncomputable. You can hook up a sensor to perform some action when the rain gets so high. This rain algorithm is outside of anything Church-Turing has to say.
There are many other natural processes that are outside the realm of what is computable. People are bathed in them.
Church-Turing suggests only what people can do when constrained to a bunch of symbols and squares.
That example is completely false: how much rain will fall is absolutely a computable function, just a very difficult and expensive function to evaluate with absurdly large boundary conditions.
This is in the same sense that while it is technically correct to describe all physically instantiated computer programs, and by extension all AI, as being in the set of "things which are just Markov chains", it comes with a massive cost that may or may not be physically realisable within this universe.
Rainfall to the exact number of molecules is computable. Just hard. A quantum simulation of every protein folding and every electron energy level of every atom inside every cell of your brain on a classical computer is computable, in the Church-Turing sense, just with an exponential slowdown.
The busy beaver function, however, is actually un-computable.
The busy beaver function isn't uncomputable.
You just compute the brains of a bunch of immortal mathematicians. At which point it's a "very difficult and expensive function to evaluate with absurdly large boundary conditions."
> The busy beaver function isn't uncomputable.
False.
To quote:
One of the most consequential aspects of the busy beaver game is that, if it were possible to compute the functions Σ(n) and S(n) for all n, then this would resolve all mathematical conjectures which can be encoded in the form "does ⟨this Turing machine⟩ halt".[5] For example, there is a 27-state Turing machine that checks Goldbach's conjecture for each number and halts on a counterexample; if this machine did not halt after running for S(27) steps, then it must run forever, resolving the conjecture.[5][7] Many other problems, including the Riemann hypothesis (744 states) and the consistency of ZF set theory (745 states[8][9]), can be expressed in a similar form, where at most a countably infinite number of cases need to be checked.[5]
"Uncomputable" has a very specific meaning, and the busy beaver function is one of those things, it is not merely "hard".> You just compute the brains of a bunch of immortal mathematics. At which point it's "very difficult and expensive function to evaluate with absurdly large boundary conditions."
Humans are not magic, humans cannot solve it either, just as they cannot magically solve the halting problem for all inputs.
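The reduction in that quoted passage can be sketched concretely. Assuming toy "machines" modeled as Python generators (one yield per step), and stubbing the uncomputable bound S(n) with a made-up constant `S_ORACLE`, a bounded simulator would decide halting:

```python
# If S(n) were computable, halting would be decidable: run any n-state
# machine for S(n) steps; if it hasn't halted by then, it never will.
# S_ORACLE stands in for the uncomputable bound; the number is made up.
S_ORACLE = 100

def runs_within(machine, bound):
    """True iff `machine` halts within `bound` steps."""
    steps = 0
    for _ in machine():        # each yield counts as one step
        steps += 1
        if steps > bound:
            return False       # exceeded the (assumed) bound: loops forever
    return True

def halts_in_three():          # a machine that halts after 3 steps
    for _ in range(3):
        yield

def loops_forever():           # a machine that never halts
    while True:
        yield
```

With a real S, `runs_within(m, S(n))` would settle every "does this machine halt" question, including the encoded Goldbach and Riemann machines mentioned above, which is exactly why S cannot be computable.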
That humans come in various degrees of competence at this rather than an, ahem, boolean have/don't have; plus how we can already do a bad approximation of it, in a field whose rapid improvements hint that there is still a lot of low-hanging fruit, is a reason for techno-optimism.
>>> But do we have reason to believe that no AI system can synthesize novel technologies
We don’t even know if they want to. But in general, it’s impossible to conclusively prove that something won’t ever happen in the future.
Something I think about frequently is that 20 years ago, there weren't machines that could do visual object recognition/categorization, and we didn't really have a clue how humans did it either. We knew that neurons built fancier and fancier receptive fields that became "feature detectors", but there was a sense of "is that all it takes? There has to be something more sophisticated in order to handle illumination changes or out-of-plane rotation."
But then we got a neural net that was big enough, and it turns out that feedforward receptive fields ARE enough. We don't know whether this is how our brains do it, but it's a humbling moment to realize that you overthought how complex the problem was.
So I've become skeptical when people start claiming that some class of problem is fundamentally too hard for machines.
Are modern visual recognition & categorisation systems comparable to human capabilities? From what I can tell, they aren't even close (although still impressive!).
They aren't, or captchas wouldn't be a thing any longer.
It's not an assumption; it is a fact about how computers function today. LLMs interpolate, they do not extrapolate. Nobody has shown a method to get them to extrapolate. The insistence to the contrary involves an unstated assumption that technological progress toward human-like intelligence is in principle possible. In reality, we do not know.
As long as agnosticism is the attitude, that’s fine. But we shouldn’t let mythology about human intelligence/computational capacity stop us from making progress toward that end.
> unstated assumption that technological progress towards human-like intelligence is in principle possible. In reality, we do not know.
For me this isn’t an assumption, it’s a corollary that follows from the Church-Turing thesis.
That certainly doesn't follow from the Church-Turing thesis, because the Church-Turing thesis doesn't demonstrate that human intelligence is computational. That is still an unstated assumption.
Noted, thanks.
I don’t know what non-computational intelligence would look like but I guess I’ll keep my mind open.
In the grand scale of things, a computer is not much more than a fancy brick. Certainly it is much closer to a brick than to a human. So the question is more 'why should this particularly fancy brick have abilities that so far we have only encountered in humans?'
> Certainly it is much closer to a brick than to a human.
I disagree with this premise. A computer approximates a Turing Machine, which puts it far above a brick.
but still so so much further to go until you reach human.
> fancy brick
If we're going to be reductionist we can just call humans "meat sacks" and flip the question around entirely.
That's irrelevant.
The claim being made is not "no computer will ever be able to adapt to and assist us with new technologies as they come out."
The claim being made is "modern LLMs cannot adapt to and assist us with new technologies until there is a large corpus of training data for those technologies."
Today, there exists no AI or similar system that can do what is being described. There is also no credible way forward from what we have to such a system.
Until and unless that changes, either humans are special in this way, or it doesn't matter whether humans are special in this way, depending on how you prefer to look at it.
Note that I prefaced my comment by saying the parent might be right about LLMs.
> That's irrelevant.
My comment was relevant, if a bit tangential.
Edit: I also want to say that our attitude toward machine vs. human intelligence does matter today because we’re going to kneecap ourselves if we incorrectly believe there is something special about humans. It will stop us from closing that gap.
Look at the history of art. Lots of people used the same paint that had always been used and the same brushes, and came up with wildly different uses for those tools. Until there are literally no people involved, we'll always be using the tools in new ways.
People are doing this now. It's basically what skills.sh and its ilk are for -- to teach AIs how to do new things.
For example, my company makes a new framework, and we have a skill we can point an agent at. Using that skill, it can one-shot fairly complicated code using our framework.
The skill itself is pretty much just the documentation and some code examples.
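Mechanically, that is about all a skill is today. A minimal sketch of the harness side, with a hypothetical directory layout and a toy matching rule (real harnesses are more elaborate):

```python
from pathlib import Path

def build_prompt(task: str, skill_dir: str = "skills") -> str:
    """Prepend any matching skill files (docs + examples) to the task."""
    parts = []
    for skill in sorted(Path(skill_dir).glob("*.md")):
        # Toy matching rule: include a skill if its name appears in the task.
        if skill.stem in task.lower():
            parts.append(skill.read_text())
    parts.append(task)
    return "\n\n".join(parts)
```

A hypothetical `skills/myframework.md` containing the docs and examples then rides along with any task that mentions the framework by name.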
Isn't the "skill" just stuff that gets put into the context? Usually with a level of indirection like "look at this file in this situation"?
How long can you keep adding novel things into the start of every session's context and get good performance, before it loses track of which parts of that context are relevant to what tasks?
IMO for working on large codebases sticking to "what the out of the box training does" is going to scale better for larger amounts of business logic than creating ever-more not-in-model-training context that has to be bootstrapped on every task. Every "here's an example to think about" is taking away from space that could be used by "here is the specific code I want modified."
The sort of framework you mention in a different reply - "No, it was created by our team of engineers over the last three years based on years of previous PhD research." - is likely a bit special, if you gain a lot of expressibility for the up-front cost, but this is very much not the common situation for in-house framework development, and could likely get even more rare over time with current trends.
> Isn't the "skill" just stuff that gets put into the context? Usually with a level of indirection like "look at this file in this situation"?
Today, yes. I assume in the future it will be integrated differently, maybe we'll have JIT fine-tuning. This is where the innovation for the foundation model providers will come in -- figuring out how to quickly add new knowledge to the model.
Or maybe we'll have lots of small fine tuned models. But the point is, we have ways today to "teach" models about new things. Those ways will get better. Just like we have ways to teach humans new things, and we get better at that too.
A human seeing a new programming language still has to apply previous knowledge of other programming languages to the problem before they can really understand it. We're making LLMs do the same thing.
The question is, who made the new framework? Was it vibe coded by someone who does not understand its code?
No, it was created by our team of engineers over the last three years based on years of previous PhD research.
A framework is different than a paradigm shift or new language.
Yes and no. How does a human learn a new language? They use their previous experience and the documentation to learn it. Oftentimes the way someone learns a new language is to take something in an old language and rewrite it.
LLMs are really good at doing that. Arguably better than humans at RTFM and then applying what's there.
And LLMs will get retrained eventually. So writing one good spec and a great harness (or multiple) might be enough, eventually.
According to the Nobel Prize winner Geoffrey Hinton, these LLMs will be able to talk to each other and self-train, in the same way that AlphaGo started playing games against itself to surpass all the human experts on whose games it had originally been trained, and to whose ability it had therefore been restricted. This is how LLMs will surpass human knowledge rather than being limited to a statistical average of human-generated training data.
You can have the LLM itself generate it based on the documentation, just like a human early adopter would
The same could be asked about people. The answer is social intelligence.
This would also mean that we should design new programming languages out of sight of LLMs in case we need to hide code from them.
Inject the prior art into the (ever increasing) context window, let in-context-learning to its thing and go?
You can just have AI generate its own synthetic data to train AI with, if you want knowledge about how to use it to be in the model itself.
In a chat bot coding world, how do we ever progress to new technologies?
Funny, I'd say the same thing about traditional programming.
Someone from K&R's group at Bell Labs, straight out of 1972, would have no problem recognizing my day-to-day workflow. I fire up a text editor, edit some C code, compile it, and run it. Lather, rinse, repeat, all by hand.
That's not OK. That's not the way this industry was ever supposed to evolve, doing the same old things the same old way for 50+ years. It's time for a real paradigm shift, and that's what we're seeing now.
All of the code that will ever need to be written already has been. It just needs to be refactored, reorganized, and repurposed, and that's a robot's job if there ever was one.
You're probably using an IDE that checks your syntax as you type, highlighting keywords and surfacing compiler warnings and errors in real time. Autocomplete fills out structs for you. You can hover to get the definition of a type or a function prototype, or you can click and dig in to the implementation. You have multiple files open, multiple projects, even.
Not to mention you're probably also using source control, committing code and switching between branches. You have unit tests and CI.
Let's not pretend the C developer experience is what it was 30 years ago, let alone 50.
I disagree that any of those things are even slightly material to the topic. It's like saying my car is fundamentally different from a 1972 model because it has ABS, airbags, and a satnav.
Reply due to rate limiting:
K&R didn't know about CI/CD, but everything else you mention has either existed for over 30 years or is too trivial to argue about.
Conversely, if you took Claude Code or similar tools back to 1996, they would grab a crucifix and scream for an exorcist.
You said C developers are doing things the "same old way" as always.
I think you're taking for granted the massive productivity boost that happened even before today's era of LLM agents.
If all problems were solved, we should already have found a paradise with nothing left to want for. Your editing workflow being similar to one for a 1970s-era language has no relevance to that question.
If all problems were solved
Now that's extrapolation of the sort that, as you point out elsewhere, no LLM can perform.
At least, not one without serious bugs.
That's your fault for still writing C in Ed :P
But I do broadly agree that we still write code for a lot of shit that should have been automated long before. I'm not actually sure why it hasn't been automated yet. LLM's can kind of do it, I just wish we had automated it with something deterministic and human.
The fact that LLMs can kind of do it is an indictment of current programming languages and frameworks. We're coding at too low a level of abstraction. Code has too much boilerplate, too little entropy. We need a new generation of much higher level languages which would obviate much of the need for code generation. Of course, the tension there is that high-level abstractions always leak and don't work well when maximum performance and efficiency is required.
We were almost there, back in the 80s.
A vice president at Symbolics, the Lisp machine company at their peak during the first AI hype cycle, once stated that it was the company's goal to put very large enterprise systems within the reach of small teams to develop, and anything smaller within the reach of a single person.
And had we learned the lessons of Lisp, we could have done it. But we live in the worst timeline where we offset the work saved with ever worse processes and abstractions. Hell, to your point, we've added static edit-compile-run cycles to dynamic, somewhat Lisp-like languages (JavaScript)! And today we cry out "Save us, O machines! Save us from the slop we produced that threatens to make software development a near-impossible, frustrating, expensive process!" And the machines answer our cry by generating more slop.
While I don't disagree with the larger point here, I do disagree that all the code we'll ever need has been written. There are still soooooo many new things to uncover in that domain.
Like what?
New cryptography algorithms, particularly post-quantum cryptography
New zero-knowledge proofs
Video compression
And so forth.
Those are all instances of reuse of existing techniques in new contexts. And when genuinely-new algorithms do arise from genuinely-new areas of study, it's easy enough to teach LLMs how to apply and deploy them.
I still research efficient algorithms. You can describe these to LLMs and they do it without any prior art. They just took away the stomach churners.
In fact, we probably started a communist revolution in software with anthropic/openai streaming your solutions to lesser coders.
You're actually better off using the LLM to consult textbooks from the 70s, because most likely someone already came up with a better algorithm that hasn't seen adoption yet.
Rude!
I’m writing a new type of CRDT that supports move/reorder/remove ops within a tree structure without tombstones. Claude Code is great at writing some of the code but it keeps adding tombstones back to my remove ops because “research requires tombstones for correctness”.
This is true for a usual approach, but the whole reason I’m writing the CRDT is to avoid these tombstones! Anyway, a long story short, I did eventually convince Claude I was right, but to do it I basically had to write a structural proof to show clear ordering and forward progression in all cases. And even then compaction tends to reset it. There are a lot of subtleties these systems don’t quite have yet.
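For readers wondering why the model keeps insisting: a toy illustration (not the commenter's CRDT) of why removes conventionally need tombstones. With a naive state-based replicated set, a merge cannot distinguish "removed" from "never seen", so removed elements resurrect; a 2P-set's tombstone set fixes that, at the cost of unbounded growth:

```python
# Naive replicated set: merge = union of current elements.
a = {"x", "y"}
b = {"x", "y"}
a.discard("x")      # replica A removes x locally
merged = a | b      # merging with replica B resurrects x

# 2P-set: removals are recorded as tombstones and merged too.
class TwoPSet:
    def __init__(self):
        self.added, self.removed = set(), set()
    def add(self, e):
        self.added.add(e)
    def remove(self, e):
        self.removed.add(e)          # the tombstone
    def merge(self, other):
        self.added |= other.added
        self.removed |= other.removed
    def value(self):
        return self.added - self.removed
```

Avoiding the tombstones means finding some other way to order a remove against concurrent operations, which is exactly the structural-proof work the comment describes.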
Interesting. I'm the author of DocNode, a library that does exactly what you're describing; it might be useful. https://docukit.dev
Cheers!
I would strongly advise using Codex for a project like that
Please do elaborate. I’ve only tried switching to codex once or twice, and it’s been probably 3 months since I last tried it, but I was underwhelmed each time. Is it better on novel things in your experience?
My experience is that it is much more terse and realistic with its feedback, and more thoughtful generally. I trust its positive acknowledgements of my work more than claude, whose praise I have been trained to be extremely skeptical of.
In my experience, Codex / ChatGPT are better at telling you where you're wrong, where your assumptions are incomplete, etc., and better at following the system prompts.
But more importantly, as a coding agent, it follows instructions much better. I've frequently had Claude go off and do things I've explicitly told it not to do, or write too much code that did wrong things, and it's more work to corral it than I want to spend.
Codex will follow instructions better. Currently, it writes code that I find a few notches above Claude, though I'm working with C# and SQL so YMMV; Claude is terrible at coming up with decent schema. When your instructions do leave some leeway, I find the "judgment" of Codex to be better than Claude. And one little thing I like a lot is that it can look at adjacent code in your project so it can try to write idiomatically for your project/team. I haven't seen Claude exhibit this behavior and it writes very middle-of-the-road in terms of style and behavior.
But when I use them I use them in a very targeted fashion. If I ask them to find and fix a bug, it's going to have as much or more detail as a full bug report in my own ticketing system. If it's new code, it comes with a very detailed and long spec for what is needed, what is explicitly not needed, the scope, the constraints, what output is expected, etc., like it's a wiki page or epic for another real developer to work from. I don't do vague prompts or "agentic" workflow stuff.
GPT is much better at anything mathematical than Claude, as is Gemini. This is evidenced by their superior results at math Olympiads, the Putnam, etc.
How much is OpenAI paying you for this
Absolutely nothing. I have active subscriptions for both. Claude is better at FE stuff. Codex is better at actual programming.
How is FE not actual programming? I spend less time on FE than I once did, but it has presented some of the most interesting programming challenges I've encountered in my career. It's a large technical space, rich with 'actual' programming to be done.
So much of society's intellectual talent has been allocated toward software. Many of our smartest are working on ad-tech, surveillance, or squeezing as much attention out of our neighbors as possible.
Maybe the current allocation of technical talent is a market failure and disruption to coding could be a forcing function for reallocation.
Those are business goals that don't just go away because tech changes.
Of course. But LLMs may subtract the need for top talent to be working on them.
Those business goals will soon realize they need more electricity. More brains will be devoted to power generation.
Likely, but through regulation, not AI
I don't know that people are saying code is dead (or at least the ones who have even a vague understanding of AI's role) - more that humans are moving up a level of abstraction in their inputs. Rather than writing code, they can write specs in English and have AI write the code, much in the same way that humans moved from writing assembly to writing higher-level code.
But of course writing code directly will always maintain the benefit of specificity. If you want to write instructions to a computer that are completely unambiguous, code will always be more useful than English. There are probably a lot of cases where you could write an instruction unambiguously in English, but it'd end up being much longer because English is much less precise than any coding language.
I think we'll see the same in photo and video editing as AI gets better at that. If I need to make a change to a photo, I'll be able to ask a computer, and it'll be able to do it. But if I need the change to be pixel-perfect, it'll be much more efficient to just do it in Photoshop than to describe the change in English.
But much like with photo editing, there'll be a lot of cases where you just don't need a high enough level of specificity to use a coding language. I build tools for myself using AI, and as long as they do what I expect them to do, they're fine. Code's probably not the best, but that just doesn't matter for my case.
(There are of course also issues of code quality, tech debt, etc., but I think that as AI gets better and better over the next few years, it'll be able to write reliable, secure, production-grade code better than humans anyway.)
> But of course writing code directly will always maintain the benefit of specificity. If you want to write instructions to a computer that are completely unambiguous, code will always be more useful than English.
Unless the defect rate for humans is greater than LLMs at some point. A lot of claims are being made about hallucinations that seem to ignore that all software is extremely buggy. I can't use my phone without encountering a few bugs every day.
Yeah, I don't really accept the argument that AI makes mistakes and therefore cannot be trusted to write production code (in general, at least - obviously depends on the types of mistakes, which code, etc.).
The reality is we have built complex organizational structures around the fact that humans also make mistakes, and there's no real reason you can't use the same structures for AI. You have someone write the code, then someone does code review, then someone QAs it.
Even after it goes out to production, you have a customer support team and a process for them to file bug tickets. You have customer success managers to smooth over the relationships when things go wrong. In really bad cases, you've got the CEO getting on a plane to take the important customer out for drinks.
I've worked at startups that made a conscious decision to choose speed of development over quality. Whether or not it was the right decision is arguable, but the reality is they did so knowing that customers would encounter bugs. A couple of those startups are valued at multiple billions of dollars now. Bugs just aren't the end of the world (again, in most cases; I worked on B2B SaaS, not medical devices or what have you).
> humans also make mistakes
This is broadly true, but not comparable when you get into any detail. The mistakes current frontier models make are more frequent, more confident, less predictable, and much less consistent than mistakes from any human I'd work with.
IME, all of the QA measures you mention are more difficult and less reliable than understanding things properly and writing correct code from the beginning. For critical production systems, mediocre code has significant negative value to me compared to a fresh start.
There are plenty of net-positive uses for AI. Throwaway prototyping, certain boilerplate migration tasks, or anything that you can easily add automated deterministic checks for that fully covers all of the behavior you care about. Most production systems are complicated enough that those QA techniques are insufficient to determine the code has the properties you need.
> The mistakes current frontier models make are more frequent, more confident, less predictable, and much less consistent than mistakes from any human I'd work with.
My experience is literally 180 degrees from this statement. And you don't normally get to choose the humans you work with; you may be involved in the interview process for some, but that doesn't tell you much. I have seen so much human-written code in my career that, in the right hands, I'll take (especially latest frontier) LLM-written code over average human code any day of the week and twice on Sunday.
Humans also make mistakes, but unlike LLMs, they are capable of learning from their mistakes and will not repeat them once they have learned. That lack of learning, not the capacity to make mistakes, is why you should not let LLMs do things.
Developers repeat the same mistakes all the time. Otherwise off by one wouldn’t be a thing.
Most human bugs are caused by failures in reasoning, though, not by making something up to leap to the conclusion considered most probable, so I'm not sure the comparison makes sense.
The end result is the same either way, as is the resolution.
> most human bugs are caused by failures in reasoning though
Citation needed.
sorry, that is just taken from my experience, and perhaps I am considering reasoning to be a broader category than others might.
To be lenient, I will separate out bugs caused by insufficient knowledge as not being failures in reasoning. Do you have forms of bugs that you think are more common and are not arguably failures in reasoning?
On edit: insufficient knowledge that I might not expect a competent developer to have is not a failure in reasoning, but a bug caused by insufficient knowledge that I would expect a competent developer in the problem space to have is a failure in reasoning, in my opinion.
This morning a person posted a question to the Reddit group r/Mathematica (https://www.reddit.com/r/Mathematica/comments/1s1fin2/can_ho...).
I asked GPT to write code to address their question, and the code was quite acceptable, drawing the circle and finding the correct intersection point. It would have taken me about 40 minutes to write the code, so I would not have done it myself.
Currently, GPT is great for writing short programs. The results often have a bug or two that is easy to fix, but it's much faster to have GPT write the code. This works fine for projects that are less than 100 lines of code where you just want something that works.
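The Reddit problem itself isn't reproduced here, but the flavor of task, e.g., finding where two circles intersect, is classic short-program territory. A standalone sketch (the function name and test geometry are made up, not taken from the thread):

```python
import math

def circle_intersections(c0, r0, c1, r1):
    """Return the intersection points of two circles, or [] if none."""
    x0, y0 = c0
    x1, y1 = c1
    d = math.hypot(x1 - x0, y1 - y0)
    if d == 0 or d > r0 + r1 or d < abs(r0 - r1):
        return []  # concentric, separate, or one inside the other
    a = (r0 * r0 - r1 * r1 + d * d) / (2 * d)  # center-to-chord distance
    h = math.sqrt(max(r0 * r0 - a * a, 0.0))   # half the chord length
    mx = x0 + a * (x1 - x0) / d                # chord midpoint
    my = y0 + a * (y1 - y0) / d
    return [(mx + h * (y1 - y0) / d, my - h * (x1 - x0) / d),
            (mx - h * (y1 - y0) / d, my + h * (x1 - x0) / d)]
```

Unit circles centered at (0,0) and (1,0) give the two points (0.5, ±√3/2).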
This take was accurate about 2 years ago, up until perhaps one year ago. Current capabilities far exceed what you are outlining, for example using Claude Opus models in a harness such as Claude Code or OpenCode.
I feel pretty strongly about a set of somewhat at-odds thoughts:
- in a non-hobby setting, code is a liability
- I want to solve problems, not write code
- I love writing code as a hobby.
- being paid to do my hobby professionally is amazing.
- I love the idea of the Star Trek Ship’s Computer. To just ask for things and for it to do the work. It sometimes feels like we’re very close.
Star Trek is the polar opposite of what we are close to in every single way.
Nah. Remember the episode where Geordi asked the computer to create an opponent worthy of Data instead of Sherlock Holmes, and the computer creates sentient Moriarty with the ability to control the ship.
That sounds exactly like something an LLM based system would do.
I agree that programming language can be a better (denser, more precise) encapsulator of intent than natural language. But the converse is more often true; natural language is a denser and more precise encapsulator of intent than programming language.
I think there's some irony in Russell's quote being used this way. My intent will often be less clear to a reader once encoded in a language bound inextricably to a machine's execution context.
Good abstraction meaningfully whittles away at this mismatch, and DSLs in powerful languages (like ML-family and lisp-family languages) have often mirrored natural(ish) language. Observe that programming languages themselves have natural language specifications that are meaningfully more dense than their implementations, and often govern multiple implementations.
Code isn't just code. Some code encapsulates intent in a meaningfully information and meaning-dense way: that code is indeed poetry, and perhaps the best representation of intent available. Some code, like nearly every line of the code that backs your server vs client time example, is an implementation detail. The Electric Clojure version is a far better encapsulation of intent (https://electric.hyperfiddle.net/fiddle/electric-tutorial.tw...). A natural language version, executed in the context of a program with an existing client server architecture, is likely best: "show a live updated version of the servers' unix epoch timestamp and the client's, and below that show the skew between them."
Given that we started with Russell, we could end with Wittgenstein's "Is it even always an advantage to replace an indistinct picture by a sharp one? Isn't the indistinct one often exactly what we need?"
I think we're in agreement. My Dijkstra quote is the perfect rejoinder to your Wittgenstein:
The purpose of abstraction is not to be vague, but to create a new semantic level in which one can be absolutely precise.
— Edsger Dijkstra
That's good—you guys should come out to visit sometime!
Isaac!!!!! I had no idea that was you! Too funny! Emily and I are dying
Yes, we'd love to visit!
A week ago there was an article about Donald Knuth asking an AI to prove something then unproven, and it found the proof. I suppose it is possible that the great Knuth didn't know how to find this existing truth, but there is a reason we all doubted it (including me when I mentioned it there).
I have never written a C compiler, yet I would bet money that if you paid me to write one (it would take a few years at least) it wouldn't have any innovations, as the space is already well covered. Where mine differed from other compilers would more likely be a case of my doing something stupid that someone who knows how to write a compiler wouldn't.
So I would like to know how it found the proof. Because it’s much more likely to have been plucked from an obscure record where the author didn’t realize this was special than to have been estimated on the fly.
This makes LLMs incredibly powerful research tools, which can create the illusion of emergent capabilities.
If you read https://www-cs-faculty.stanford.edu/~knuth/papers/claude-cyc... it was more of a guided effort to write a program to find examples that helped with moving the proof along
Here's the PDF: https://www-cs-faculty.stanford.edu/~knuth/papers/claude-cyc...
It wasn't Knuth who used Claude, but his friend. Nevertheless, Knuth was quite impressed.
> as the space is already well covered
The US patent commissioner in 1899 wanted to shut down the patent office because "everything that can be invented has been invented." And yet, human ingenuity keeps proving otherwise.
There are lots of small innovations left. Only a few patents have ever been for revolutions. Small innovations add up to big things.
This is apocryphal :(
I'd bet if you read the Dragon book (yes, I'm dating myself) you'd have something working in less than three months. More importantly, you would understand every bit of it.
Probably. I know what book you mean but never tried to read it. As I noted elsewhere, I could probably brute-force something in a week without reading the book. However, the AI tried to be better than just a basic translator, and that takes more time and experience than I have.
You could probably do it in a few days, C is not that hard to compile
And a few seconds more to write a stub `stdio.h` that will let you compile at least a hello world.
Writing a compiler that can compile the Linux kernel is a bit more involved.
Claude built an optimizer as well (not a great one); that takes a lot more. Yes, I could likely brute-force a C compiler that works much faster.
Right, and that was a design goal of C language... to be close to the machine.
Yes, and I was responding to
> it would take a few years at least
Famous last words.
C# back in 2000-2007 had a bunch of innovations. I expect we will have more.
I don't see any reason to doubt that plausible-next-token-guessing could sometimes plausibly-next-guess a sequence that happens to decode to the answer to some question we'd not yet solved.
... it'd be even more likely if, as others have suggested in this thread, we had actually recorded the answer in writing but nobody had noticed it yet; but even without that, I don't see why it couldn't happen.
Krouse points to a great article by Simon Willison who proposes that the killer role for vibe coding (hopefully) will be to make code better and not just faster.
By generating prototypes that are based on different design models each end product can be assessed for specific criteria like code readability, reliability, or fault tolerance and then quickly be revised repeatedly to serve these ends better. No longer would the victory dance of vibe coding be simply "It ran!" or "Look how quickly I built it!".
This is my hope as well. We now have time to write things a bit better. Comment on the pr with a quick improvement and it can just happen. But I’m failing to convince people at work. The majority seem to just be happy for code to go away and for us to never think about it again.
> Nobody is out there claiming that ChatGPT is putting the great novelists or journalists out of jobs. We all know that's nonsense.
Of course they are talking about that!!
I think what people don't realize is that rent and the mortgage isn't paid through art. It's paid through boring, important work that is mostly uncreative and requires precision. A lot of people don't really care about doing anything innovative, they just want to do something to get money to sustain their life. The same goes for businesses.
What a lot of people don't realize about software is that it is one of the few industries that offered a means to greatly improve your standard of living without requiring a formal degree.
AI just one-shotted that kind of work. There will always be a place for humans to do creative things, but there won't be a place for average people to make a living.
Example - look at animated movies. In the past studios hired hundreds of people to draw the movie. Now, it's nearly all automated with software.
The need for human artistic ability for commercial work is nearly gone, and only left for nice to have products.
In 5 years we will see the same for software. It will be much faster than what happened to art because software is already in nearly every aspect of life.
r0ml's third law states that: “Any distributed system based on exchanging data will be replaced by a system based on exchanging programs.”
I believe the same pattern is inevitable for these higher level abstractions and interfaces to generate computer instructions. The language use must ultimately conform to a rigid syntax, and produce a deterministic result, a.k.a. "code".
> “Any distributed system based on exchanging data will be replaced by a system based on exchanging programs.”
So distributed systems tend to converge towards being more and more mystifying? Cf. The Mythical Man-Month:
> Show me your flowcharts and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won’t usually need your flowcharts; they’ll be obvious.
I desperately want this to be true, but at least in my sector, it isn't. You still need talented and knowledgeable programmers, but they don't do very much programming. It's all code review, infrastructure, devops.
At least for a small business, users are catching on that they can build a dirty app that gets them what they specifically want, instead of relying on some paid software to give everyone a little bit of what they want. Partially this suggests I'm just in the wrong sector, but it is absolutely happening.
I don't think this matters to Google or Amazon, they can't be replaced. But small businesses are a different story.
And the result of all this? We need to heavily rely on AI, so that we can outpace individual users in delivering what they want. I hate it, I didn't give the order, but I do see the writing on the wall. This workflow is miserable, it sucks the fun out of the job, but unfortunately it really is faster. And small businesses rely on the income coming in next year, not in 5 years.
As a side note, I also think users are becoming extremely used to having a chatbot do everything for them. Every site is going to have one, and apps that don't will fall behind.
I'd like to be on a different multiverse timeline honestly
The price hikes are going to be absolutely devastating.
Imagine Oracle-level pricing acuity along with zero competition and utter dependence. This is the future the AI labs are drooling for. You will be charged based on the value it delivers. People will start making trade-offs on whether hiring humans would be cheaper than AI, etc.
There's no way they're going to leave all that money on the table when there is all that investment to pay back.
> unless you believe in magic, it's only a matter of time until we reach the point at which machine intelligence is indistinguishable from human intelligence.
I'm sure it will be possible, but it may well be very expensive. If it is, why would anyone spend the resources?
AI evolution will certainly follow the money, which is not necessarily the same as the path to AGI.
Programming is an abstraction of the machine code which describes what the computer should do. You could in theory program in prose, meaning the description of the program compiles into an app.
The integration glue comment really resonates. I've been using agents mostly for wiring up OAuth flows and API integrations between services - stuff where there's no creativity involved, just reading 3 different docs and getting the tokens right. Saved me hours on stuff I used to dread. But the moment I need to think about actual architecture decisions or tradeoffs, I'm back to my own brain. Feels like that's where things will settle for a while.
Got flashbacks to 1999 from some of those charts - I had a pair of design charts (partly for arguments, partly for onboarding) that were 17 nodes each and a lot of lines. (A coworker snuck in some extra nodes and an arrow labeled "troops move through Austria" and it was a while before anyone other than me noticed - yeah, that kind of chart.) This is not a lesson in design complexity - the design was pretty tight for what it did, even if you go back and read the patents - it's a lesson in the use of abstraction for explanation complexity and that you can break up the presentation more sanely than the code-on-disk actually is, you just have to stop and think about it (and have a bit more empathy for the people you're presenting to than, well, anyone in 1999 actually had :-)
I don't expect AI to replace me anytime soon, but...
AI is already letting me care less about the languages I use and focus more on the algorithms. AI helps me write tests. AI suggests improvements and catches bugs before compiling. AI writes helper scripts/tools for me. All of these things are good enough for me to accept paying a few hundred dollars every month, although I don't have to because my employer already does that for me.
6 months ago I was arguing that AI wasn't very good and code was more precise than english for specifying solutions. The first part is not true anymore for many things I care about. The second is still true but for many things I care about it doesn't matter.
I'm getting tired of articles that try to tell me what to think about AI. "AI is great and will replace all programmers!"... "AI sucks and will ruin your brain and codebase!"... both of these are tired and meaningless arguments.
My problem is that while I know “code” isn’t going away, everyone seems to believe it is, and that’s influencing how we work.
I have not really found anything that shakes these people down to their core. Any argument or example is handwaved away by claims that better use of agents or advanced models will solve these “temporary” setbacks. How do you crack them? Especially upper management.
> I have not really found anything that shakes these people down to their core. Any argument or example is handwaved away by claims that better use of agents or advanced models will solve these “temporary” setbacks. How do you crack them? Especially upper management.
You let them play out. Shift-left was similar to this and ultimately ended in part disaster, part non-accomplishment, and part success. Some percentage of the industry walked away from shift-left greatly more capable than the rest, a larger chunk left the industry entirely, and some people never changed. The same thing will likely happen here. We'll learn a lot of lessons, the Overton window will shift, the world will be different, and it will move on. We'll have new problems and topics to deal with as AI and how to use it shifts away from being a primary topic.
Shift-left was a disaster? A large number of my day to day problems at work could be described as failing to shift-left even in the face of overwhelmingly obvious benefits
Shift left?
Edit: I've googled it and I can't find anything relevant. I've been working in software for 20+ years and read a myriad things and it's the first time I hear about it...
It's a security practice. https://www.crowdstrike.com/en-us/cybersecurity-101/cloud-se...
"Shift-left" was a general term that occurred in the systems engineering / devops space – I'm not surprised to see it used in a security context now. More or less, about a decade ago most systems engineers were recruited into the industry without any application software engineering skills and that became a drag on organizations trying to scale. It was about moving testing, devops, security, etc into the software engineering role and attempting to consolidate systems engineering into SWE roles. It was a part of the larger "devops movement".
I've heard a ton of times about "designing/planning for quality and security from the start", I guess it can't hurt to also have a buzzword for it.
Well you're trying to convince them to reject their actual experience. Better tooling and better models have indeed solved a lot of the limitations models faced a couple years ago.
I also believe coding isn't going to disappear, but AI skeptics have been mostly doing a combination of moving the goalposts and straight up denial over the last few years.
I've been trying out AI over the past month (mostly because of management trying to force it down my throat), and have not found it to be terribly conducive to actually helping me on most tasks. It still evidences a lot of the failure modes I was talking about 3 years ago. And yet the entire time, it's the AI boosters who keep trying to say that any skepticism is invalid because it's totally different than how it was three months ago.
I haven't seen a lot of goalpost moving on either side; the closest I've seen is from the most hyperbolic of AI supporters, who are keeping the timeline to supposed AGI or AI superintelligence or whatnot a fairly consistent X months from now (which isn't really goalpost-moving).
Well, to be fair, judging by the shift in the general vibes of the average HN comment over the past 3 years, better use of agents and advanced models DID solve the previous temporary setbacks. The techno-optimists were right, and the nay-sayers wrong.
Over the course of about 2 years, the general consensus has shifted from "it's a fun curiosity" to "it's just better stackoverflow" to "some people say it's good" to "well it can do some of my job, but not most of it". I think for a lot of people, it has already crossed into "it can do most of my job, but not all of it" territory.
So unless we have finally reached the mythical plateau, if you just go by the trend, in about a year most people will be in the "it can do most of my job but not all" territory, and a year or two after that most people will be facing a tool that can do anything they can do. And perhaps if you factor in optimisation strategies like the Karpathy loop, a tool that can do everything but better.
Upper management might be proven right.
LLM agents are glorified autocomplete with a thesaurus bolted on, so the victory laps look pretty premature.
Try one on a mildly ugly multi-step task in a repo with stale deps, weird config, and a DB/API boundary, and you'll watch it bluff past missing context, mutate the wrong file, and paper over the gap with confident nonsense instead of doing the boring work a decent engineer would do. PR people can call that 'better Stack Overflow' if they want.
Your definition of a glorified autocomplete is … oof. So in short, "try asking it to do something you'd hate, on bad code you'd fail at yourself, and it might fail".
And I’m pretty sure I could try Claude on a repo as you describe and it wouldn’t in fact fail. You’re letting your opinions of what LLMs were like a few months ago influence what you think of them now.
Comments like yours really annoy me because they are ridiculously confident about AI being “glorified autocomplete”, but also clearly not informed about the capabilities. I don’t get how some people can be on HN and not actually … try these things, be curious about them, try them on hard problems.
I’m a good engineer. I’ve coded for 24 years at this point. Yesterday in 45 minutes I built a feature that would have taken me three months without AI. The speed gains are obscene and because of this, we can build things we would never have even started before. Software is accelerating.
If self-driving is any indication, it may take 10+ years to go from 90% to 95%.
As a former PM, I will say that if you want to stop something from happening at your company, the best route is to come off very positive about it initially. This is critical because it gives you credibility. After my first few years of PMing, I developed a reflex that any time I heard a deeply stupid proposal, I would enthusiastically ask if I could take the lead on scoping it out.
I would do the initial research/planning/etc. mostly honestly and fairly. I'd find the positives, build a real roadmap and lead meetings where I'd work to get people onboard.
Then I'd find the fatal flaw. "Even though I'm very excited about this, as you know, dear leadership, I have to be realistic that in order to do this, we'd need many more resources than the initial plan because of these devastating unexpected things I have discovered! Drat!"
I would then propose options. Usually three, which are: Continue with the full scope but expand the resources (knowing full well that the additional resources required cannot be spared), drastically cut scope and proceed, or shelve it until some specific thing changes. You want to give the specific thing because that makes them feel like there's a good, concrete reason to wait and you're not just punting for vague, hand-wavy reasons.
Then the thing that we were waiting on happens, and I forget to mention it. Leadership's excited about something else by that point anyway, so we never revisit dumb project again.
Some specific thoughts for you:
1. Treat their arguments seriously. If they're handwaving your arguments away, don't respond by handwaving their arguments away, even if you think they're dumb. Even if they don't fully grasp what they're talking about, you can at least concede that agents and models will improve and that will help with some issues in the future.
2. Having conceded that, they're now more likely to listen to you when you tell them that while it's definitely important to think about a future where agents are better, you've got to deal with the codebase right now.
3. Put the problems in terms they'll understand. They see the agent that wrote this feature really quickly, which is good. You need to pull up the tickets that the senior developers on the team had to spend time on to fix the code that the agent wrote. Give the tradeoff - what new features were those developers not working on because they were spending time here?
4. This all works better if you can position yourself as the AI expert. I'd try to pitch a project of creating internal evals for the stuff that matters in your org to try with new models when they come out. If you've volunteered to take something like that on and can give them the honest take that GPT-5.5 is good at X but terrible at Y, they're probably going to listen to that much more than if they feel like you're reflexively against AI.
It's even better when you guide them into finding the fatal flaw for themselves.
Hahaha yes this is absolutely true but often times so much more work.
As a (sometime) TPM, you are the kind of PM I've been looking for.
Hah, thanks but unfortunately I quit and started a business a couple of years ago, in no small part because I didn't want to spend my time maneuvering to kill stupid ideas.
Very well said. So many engineers balk at "coming off as positive" as a form of lying or as a pointless social ritual, but it's the only thing that gets you a seat at the table. Engineers who say "no" or "that's stupid" are never seen as leaders by management, even if they're right. The approach you laid out here is how you have _real_ impact as an engineering leader, because you keep getting a seat at the table to steer what actually happens.
Show them this[1], and if it doesn't sober them up with its absurdity, at least they'll be occupied with something other than treating LinkedIn fluffers as prophets and trying to gaslight you into tanking production.
To an extent, these people have found their religion, and rational discussion does not come into play. As with previous tech Holy Wars over operating systems, editors, and programming languages, their self-image is tied to the technology.
Where the tech argument doesn't apply to upper management, business practices, the need to "not be left behind" and leap at anything that promises reducing headcount without reducing revenue, money talks. As long as it's possible to slop something together, charge for it, and profit, slop will win.
I've enjoyed using Claude to essentially build my own APIs at whatever level of complexity I'm comfortable with at the time. I can use lower level APIs for graphics (for example), and Claude can abstract the boilerplate into my own personal API. Then when performance gets to be an issue, I can dig into the abstractions Claude handled for me and start to pick apart the slow-downs.
It's only dead to those who are ignorant of what it takes to build and run real systems that don't tip over all the time (or leak data, embroil you in extortion, etc). That will piss some people off, but it's worth considering if you don't want to perma-railroad yourself long-term. Many seem to be so blinded by the glitz, glamour, and dollar signs that they don't realize they're actively destroying their future prospects/reputation by getting all emo about a non-deterministic printer.
Valuable? Yep. World changing? Absolutely. The domain of people who haven't the slightest clue what they're doing? Not unless you enjoy lighting money on fire.
> non-deterministic printer.
I interpret non-deterministic here as “an LLM will not produce the same output on the same input.” This is a) not true and b) not actually a problem.
a) LLMs are functions and appearances otherwise are due to how we use them
b) lots of traditional technologies which have none of the problems of LLMs are non-deterministic. E.g., symbolic non-deterministic algorithms.
Non-determinism isn’t the problem with LLMs. The problem is that there is no formal relationship between the input and output.
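Point (a) can be made concrete with a toy sketch, assuming greedy decoding: once sampling is replaced by argmax, the decode loop is a plain function of its inputs. The `logits` function below is a made-up stand-in, not a real model:

```python
# Toy illustration: decoding is a pure function of (model, prompt) once
# sampling is replaced by argmax. "logits" is a deterministic stand-in
# for a real model's forward pass, invented for this sketch.
def logits(state):
    # score each of 5 candidate tokens by a fixed rule
    return [(sum(state) + t) % 7 for t in range(5)]

def decode_greedy(prompt, steps):
    state = list(prompt)
    for _ in range(steps):
        scores = logits(state)
        state.append(scores.index(max(scores)))  # argmax: no randomness
    return state

# same input -> same output, every run
assert decode_greedy([1, 2], 4) == decode_greedy([1, 2], 4)
```

The apparent nondeterminism in deployed LLMs comes from temperature sampling and from serving-stack details like batching and floating-point reduction order, not from the function itself, which supports the point that the real problem lies elsewhere.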
When I started my professional life in the 90s, we used Visual J++ (Java) and remember all this damn code it generated to do UIs...
I remember being aghast at all the incomprehensible code and "do not modify" comments - and also at some of the devs who were like "isn't this great?".
I remember bailing out asap to another company where we wrote Java Swing and was so happy we could write UIs directly and a lot less code to understand. I'm feeling the same vibe these days with the "isn't it great?". Not really!
You just brought me back to my first internship where, as interns, we were asked to hand-edit a 30k-line auto-generated SOAP API definition because we lost the license to the software that generated it.
Oh the memories, but at least that generated code was deterministic...
I remember the first time trying to work with MFC. I was aghast at all the generated garbage the IDE produced. But I guess if you're a drone working in an insurance company somewhere, you don't want to have to deal with message loops, window classes, WinMain, and all that, so you would welcome all that stuff just being handled for you while you just filled in the application-specific code. To me, that was the fun part of programming against the Windows API. And it was gonna come bite you anyway, so in for a penny...
Extrapolate to the present day with LLM-generated code. I'm sure you're not far off the actual mark.
> AI is getting better/faster/cheaper at incredible rates
Maybe but all technologies have limits.
It's irrational to believe any single technology can be improved forever.
Some of the good quotes or analogies in this article:
1 - “It seems like 99% of society has agreed that code is dead. …It's the same as thinking storytelling is dead at the invention of the printing press. No you dummies, code is just getting started. AI is going to be such a boon for coding.“
2 - Another one comparing writing and coding, and explaining how Code is both a means and an end to manage complexity:
“we're confused because we (incorrectly) think that code is only for the software it produces. It's only partly about that. The code itself is also a centrally important artifact… I think this is a lot clearer if you make an analogy to writing. Isn't it fucking telling that nobody is talking about "vibe writing"?”
Remember Deep Thought, the greatest computer ever built that spent 7.5 million years computing the Answer to the Ultimate Question of Life, the Universe, and Everything? The answer was 42, perfectly correct, utterly useless because nobody understood the question they were asking.
That's what happens when you hand everything to a machine without understanding the problem yourself.
AI can give you correct answers all day long, but if you don't understand what you're building, you'll end up just like the people of Magrathea, staring at 42 and wondering what to do with it.
True understanding is indistinguishable from doing.
The question to which 42 was the answer was, of course, "How many roads must a man walk down, before you call him a man?"
Well, yes, but AI can also give you wildly incorrect answers with alarming frequency.
I know, I know, "skill issue"/"you're holding it wrong". And maybe that's vacuously true, in that it's so hard to guess what will produce correct output, because LLMs are not an abstraction layer in the way that we're used to. Prior abstraction layers related input to output via a transparent homomorphism: the output produced for an input was knowable and relatively straightforward (even with exotic optimization flags). LLMs are not like that. Your input disappears into a maze of twisty little matmuls, all alike (a different maze per run, for the same input!) and you can't relate what comes out the other end in terms of the input except in terms of "vibes". So to get a particular output, you just have to guess how to prompt it, and it is not very helpful if you guess wrong except in providing a wrong (often very subtly so) response!
Back in the day, I had a very primitive, rinky-dink computer—a VIC-20. The VIC-20 came with one of the best "intro to programming" guides a kid could ask for. Regarding error messages it said something like this: "If your VIC-20 tells you something like ?SYNTAX ERROR, don't worry. You haven't broken it. Your VIC-20 is trying to help you correct your mistakes." 8-bit 6502 at 1 MHz. 5 KiB of RAM. And still more helpful than a frontier model when it comes to getting your shit right.
You are correct.
One minor note. The skill issue isn't about failing to prompt it correctly, but rather failing to understand what it actually does.
There's an entire crop of professionals who believe we can Harry Potter our way out of any situation with the right magic words.
> AI can give you correct answers all day long, but if you don't understand what you're building, you'll end up just like the people of Magrathea, staring at 42 and wondering what to do with it.
Yeah it's so true. LLMs tell you what you want to hear based on the input you give them. If you know the domain, it's really powerful because it can output what you want it to output at an incredibly fast rate. If you don't, you're basically rolling the dice on what it gives you. The idea that programmers are now useless because it can output something is hilariously wrong. The quality of the input has a direct impact on the quality of the output, meaning people with domain expertise (i.e. software developers) are a crucial component of its utility. In other words, vibe coding is useless unless the viber has the correct vibes. Or in other other words, being a good programmer is fundamental to the technology producing useful results. As such, we aren't going to see a total replacement of software developers, but rather good software developers increasing their productive output.
From "code" to "no-code" to "vibe coding" and back to "code".
What you are seeing here is that many are attempting to take shortcuts to building production-grade, maintainable software with AI and are now realizing that they have built their software on terrible architecture, only to throw it away and rewrite it, with no one truly understanding the code or able to explain it.
We have a term for that already and it is called "comprehension debt". [0]
With the rise of over-reliance of agents, you will see "engineers" unable to explain technical decisions and will admit to having zero knowledge of what the agent has done.
This is exactly what is happening to engineers at AWS, with Kiro causing outages [1] and engineers now being required to manually review AI changes [2] (which slows them down even with AI).
[0] https://addyosmani.com/blog/comprehension-debt/
[1] https://www.theguardian.com/technology/2026/feb/20/amazon-cl...
[2] https://www.ft.com/content/7cab4ec7-4712-4137-b602-119a44f77...
> With the rise of over-reliance of agents, you will see "engineers" unable to explain technical decisions and will admit to having zero knowledge of what the agent has done.
I've had to work on multiple legacy systems like this where the original devs are long gone, there's no documentation, and everyone at the company admits it's complete mess. They send you off with a sympathetic, "Good luck, just do the best you can!"
I call it "throwing dye in the water." It's the opposite of fun programming.
On the other hand, it often takes creativity and general cleverness to get the app to do what you want with minimally-invasive code changes. So it should be the hardest for AI.
While I agree with everything you said, Amazon’s problems aren’t just Kiro messing up. It’s a brain drain due to layoffs, and then people quitting because of the continuous layoff culture.
While publicly they might say this is AI driven, I think that’s mostly BS.
Anyway, that doesn’t take away from your point, just adds additional context to the outages.
> We have a term for that already and it is called "comprehension debt".
This isn't any different than the "person who wrote it already doesn't work here any more".
> now requiring engineers to manually review AI changes [2] (which slows them down even with AI).
What does this say about the "code review" process if people can't understand the things they didn't write?
Maybe we have had the wrong hiring criteria. The "leet code", brain teaser (FAANG style) write some code interview might not have been the best filter for the sorts of people you need working in your org today.
Reading code, tooling up (debuggers, profilers), durable testing (Simulation, not unit) are the skill changes that NO ONE is talking about, and we have not been honing or hiring for.
No one is talking about requirements, problem scoping, how you rationalize and think about building things.
No one is talking about how your choice of dev environment is going to impact all of the above processes.
I see a lot of hype, and a lot of hate, but not a lot of the pragmatic middle.
> This isn't any different than the "person who wrote it already doesn't work here any more".
It is very different. With empathy you can often deduce why people wrote code the way they did. With LLMs there often is no reason.
> This isn't any different than the "person who wrote it already doesn't work here any more".
Yeah but that takes years to play out. Now developers are cranking out thousands of lines of “he doesn’t work here anymore” code every day.
> Yeah but that takes years to play out.
https://www.invene.com/blog/limiting-developer-turnover has some data, that aligns with my own experience putting the average at 2 years.
I have been doing this a long time: my longest running piece of code was 20 years. My current is 10. Most of my code is long dead and replaced because businesses evolve, close, move on. A lot of my code was NEVER meant to be permanent. It solved a problem in a moment, it accomplished a task, fit for purpose and disposable (and riddled with cursing, manual loops, and goofy exceptions just to get the job done).
Meanwhile I have seen a LOT of god awful code written by humans. Business running on things that are SO BAD that I still have shell shock that they ever worked.
AI is just a tool. It's going from hammers to nail guns. The people involved are still the ones who are ultimately accountable.
the moment your vibe-coded bot hits edge cases in message threading, you need someone who actually understands the abstraction layer.
> If you know of any other snippet of code that can master all that complexity as beautifully, I'd love to see it.
Electric Clojure: https://electric.hyperfiddle.net/fiddle/electric-tutorial.tw...
Sick!!! Great example! I'm actually a longtime friend and angel investor in Dustin but I hadn't seen this
The people who hold funeral addresses over "coders" or developers tend to miss the point. If devs are disposable now, then the only question is: who is next?
I know of quite a lot of business people who are kind of frolicking over the idea that the former behemoth got humbled so massively. From eating the world to unemployed in no time.
This delusion itself is telling and perpetuates the clinging to the sinking ship that is still the Elephant in the Room.
If something as complex or even complicated as app development including SDLCs etc. could simply be prompted now, then AI will eat anything less complicated alive.
So people had better start considering the implications and ramifications of their statements. Either we are all doomed, parts of a Darwinian system that will weed out the unnecessary, or we acknowledge the fact that a traditional profession such as app development is fundamentally changing.
This is something that has happened before and constantly does. Otherwise we would not use DSLs or Java.
But the fundamentals still work and therefore you need abstractions.
Instead of asking "what's next", a good question to ask is "what jobs are now feasible that were previously constrained by the cost of producing software?"
I don’t know if someone said it already, but when Steve Jobs said this famous quote (“reports of my death are greatly exaggerated”) he then died maybe just a couple of years later.
Hope this does not happen to code :)
Mark Twain.
Should note that Mark Twain died 13 years after he declared reports of his death an exaggeration.
We may expect code to be killed off in AI's troublesome teen years.
Every few years something is going to kill code and here we are. The job changes, it does not disappear.
For future greenfield projects, I can see a world where the only jobs are spec-writer and test-writer, with maybe one grumpy expert coder (aka code janitor) who occasionally has to go into the code to figure out super gnarly issues.
A good spec-writer, as the article notes, is writing code.
It's part of the job, but has never been the fun part for me. Solving the puzzle with code and that "holy shit it actually works" moment has always been the part I get the most satisfaction from.
This is already happening; many days I am that grumpy "code janitor" yelling at the damn kids to improve their slop after shit blows up in prod. I can tell you it's not "fun", but hopefully we'll converge on a scalable review system eventually that doesn't rely on a few "olds" to clean up. GenAI systems produce a lot of "mostly ok" code that has subtle issues you only catch with some experience.
Maybe I should just retire a few years early and go back to fixing cars...
Yeah I imagine it has to be utterly thankless being the code janitor right now when all the hype around AI is peaking. You're basically just the grumpy troll slowing things down. And God forbid you introduce a regression bug trying to clean up some AI slop code.
Maybe in the future us olds will get more credit when apps fall over and the higher ups realize they actually need a high-powered cleaner/fixer, like the Wolf in Pulp Fiction.
I’ve got a “I haven’t written a line of code in one year” buddy whose startup is gaining traction and contracts. He’s rewritten the whole stack twice already after hitting performance issues and is now hiring cheap juniors to clean up the things he generates. It’s all relatively well-defined CRUD with a bunch of JS libs slapped on top, and it works well enough to sell, but I’m curious to see the long-term effects of these decisions.
Meanwhile I’m moving at about half the speed with a more hands-on approach (still using the bots, obviously), but my code quality and output are miles ahead of where I was last year, without sacrificing maintainability and performance for dev speed.
I've had to slowly and painfully learn the lesson that early on in a company's lifecycle it doesn't really matter how terrible the code is as long as it mostly works. There are of course exceptions like critical medical applications and rocket/missile guidance systems, but as a general rule code quality is only a problem when it inevitably bites you much farther down the line, usually when customers start jumping ship because it's obvious you can't scale or reach uptime contract targets. By then you'll hopefully have enough money saved from your initial lax approach to put some actual effort into shoring up the losses before they become critical. Sometimes you just get by with "good enough" for decades and no one cares. For someone who cares about the quality of their work it can be a sad state of affairs, but I've seen this play out more times than I'd care to count.
> There are of course exceptions like critical medical applications and rocket/missile guidance systems but as a general rule code quality is only a problem when it inevitably bites you much farther down the line, usually when customers start jumping ship when it's obvious you can't scale or reach uptime contract targets.
My experience is it hits both new-feature velocity and stability (or the balance between those two) really early, but lots of managers don't realize that this feature that's taking literal months could have been an afternoon with better choices earlier on (because they're not in a position to recognize those kinds of things). For that matter, a lot of (greener) developers probably don't recognize when the thing that's a whole-ass project for them could have been toggling a feature flag and setting a couple config entries in the correct daemon, with better architecture, because... they don't even know what sort of existing bulletproof daemon ought to be handling this thing that somehow, horrifically, ended up in their application layer.
So the blame never gets placed where it belongs, and the true cost of half-assed initial versions is never accounted for, nor is it generally appreciated just how soon the bill comes due (it's practically instantly, in many cases).
There are phases in a company's lifecycle which carry different weights for code quality, depending on factors like the domain, how many customers you have, what your risk aversion is, etc. I'm just saying don't build a cathedral when a molehill will do. If the product doesn't work, that's another story; it still needs to stand up without falling over when you look at it sideways, and having only juniors would be a good way to get the latter. Use basic design principles and proven architectures, but don't sweat things like code coverage, or reinventing wheels because you think you can do it better than something you can just grab off the shelf right now. It'll inevitably be a bit of a hodgepodge in the beginning, but that's ok. Consider early code "throwaway"; don't spend your limited time rewriting anything already working to make it "better" unless you actually have the leisure to do so (few actually do, and even fewer realize they don't).
Code will be replaced by EnglishScript running on ClaudeVM https://jperla.com/blog/the-future-is-claudevm
The cartoon told me everything...
It's a horrific article that starts off wrong by equating specification with code. In reality, the value of a specification comes from substantial abstraction over the details its author doesn't care about. The goodness of a spec comes not just from what is defined, but also from what is left out. Code, on the other hand, leaves nothing out, unless you get into compiler-level optimizations. The two are not the same.
I remember when I moved to C++ from Python as a junior. After getting deeper into C++, I started questioning whether Python programmers are really programmers or what we now call vibe coders. With a bit more experience, I realised that, in a sense, Python just operates at a different layer of abstraction and lets you do more, much faster, in skilful hands. An uneducated person, on the other hand, will just generate what we now call slop. For some reason, this parallel resonates with the current state of affairs.
I can't tell if the author's "when we get AGI" is sarcasm or genuine.
Genuine!
The biggest point everyone keeps missing is that a single code review makes your vibe coded code go from “terrifyingly dangerous” to “better than most people’s code” in one step.
We’re at a point where LLMs write great code, way better than my average coworkers used to anyway. Of course, not reviewing said code by an expert would be as silly as not reviewing a coworker’s code; there might be security vulnerabilities in there, hardcoded API keys, etc. But once it’s been professionally reviewed, it’s just as safe as any code written by a human, and probably of higher quality than most people write.
On HN there’s an argument I keep seeing go back and forth which is like “vibe coding is the worst thing ever” and the other side will be like “AI is the second coming of christ and we don’t need programmers” - I think the reason we have what appears to be such opposing views is that those views are actually really close to one another, and proper review is all that separates one from the other.
If you’re already an expert and you don’t vibe code most things and then carefully test and review after, you’re wasting the benefits of these machines. If you’re not an expert then you shouldn’t be employed in the first place, as the main thing people are employed for is responsibility, not output.
This has always been the way in everything. A foreperson gets paid more than a worker on a building site not because they build more than the worker, but because they’re responsible for more than the worker. This is the real reason why programmer jobs won’t go away in my opinion.
Yet again we can pull out Edsger W. Dijkstra's 1978 article, "On the foolishness of 'natural language programming'":
"In order to make machines significantly easier to use, it has been proposed (to try) to design machines that we could instruct in our native tongues. This would, admittedly, make the machines much more complicated, but, it was argued, by letting the machine carry a larger share of the burden, life would become easier for us. It sounds sensible provided you blame the obligation to use a formal symbolism as the source of your difficulties. But is the argument valid? I doubt."
Such a perfect quote! Thank you! Will add it to my collection
Dijkstra wasn’t a god. He’s going to be wrong on this one.
He's not wrong. People are just drinking the AI kool-aid too hard to realize that the emperor has no clothes.
How come it works with humans? Give seasoned engineers a spec and they'll create a working product. Many software companies are created and guided on the verbal directions of people who don't code.
I think "working product" is doing a lot of heavy lifting for you here.
I think anyone who has done serious product development wouldn't be so flippant and dismissive about the difficulties of even conveying WHAT to build, let alone getting the right balance of quality, timeliness, and actual functionality.
Because humans are fundamentally different from LLMs regardless of how much people draw the comparison.
Go on. How, in this context?
A) My coworkers care. Their job is to solve the problem in the most pragmatic way possible, while the LLM's job is to produce plausible answers, even if that means literally writing broken syntax. LLMs are not capable of caring about anything, but they can sound like they care.
B) They are shit for frontend work. This is the big one, and maybe we will get there someday, but we really haven't bridged the gap between an LLM seeing code and matching it to something on screen. People are well aware that agents need tight feedback loops to really be effective, which is more doable for server-side code where you can set up a good test apparatus (still a pain in the ass for other reasons). But the thing a developer does most often is instantly see what their code changes did to a project visually. This is a huge gap with LLMs. We are at the level where an LLM can check that the site loads to begin with. That's it. We can't trust it to make functioning UI, and we sure as hell can't trust it to compare code against UI unless we ask it something very specific. Watch the LLM clocks sometime [1]. I was shocked to see even 5.2 codex making complete nonsense every few minutes.
C) My coworkers have narrow experience in the field we work in, which means they are frequently zooming past things that a general approximation of intelligence like Claude gets tripped up on. ChatGPT is trained on the entire corpus of humanity, yet doesn't understand there is such a thing as bad language servers. That a linting error is not inherently problematic and you don't need to spend $5 in tokens investigating it. This is something a junior developer understands. Why do I need to constantly watch and stop my agent and re-explain things we learned in the first week of programming?
D) Because LLMs are an approximation of correct solutions, god help you if you work in spaces where your syntax looks similar to other frameworks. I work in a lot of legacy apps that are framework spin-offs of other framework spin-offs. Stuff that's half documented and poorly made. There is no testing for these projects and no good IDE support. So if something is an offshoot of OctoberCMS, ChatGPT is just going to make up a bunch of methods from OctoberCMS even if they are completely invalid in this project. Agents are completely useless in projects like these.
E) Local models aren't there yet, and the fact that the entire industry is cool with letting their skills atrophy so hard that they are completely dependent on the livelihoods and structures of a few companies they hadn't even heard of 5 years ago is deeply concerning and unprofessional. I personally wouldn't hire someone so short sighted. Believing in LLMs taking over all coding work is a fundamental disbelief in Humanity, and I'm just not behind it. The most amazing thing about development is how much knowledge share we have and how cooperative we are. A future where no one knows how to do anything anymore is the future where tech truly goes off the rails.
Dijkstra also mockingly described software engineering as "the doomed discipline" because its goal was to determine "how to program if you cannot".
"How to program if you cannot" has been solved now.
This is coping. With tools like Boomi, n8n, Langflow, and similar, there are plenty of automated tasks that can already be configured, and that's it.
The argument here seems to be “you need AGI to write good code. Good code is required for… reasons. AGI is far away. Therefore code is not dead.”
First, I disagree that good code is required in any sense. We have decades of experience proving that bad code can be wildly successful.
Second, has the author not seen the METR plot? We went from "LLMs can write a function" to "agents can write working compilers" in less than a year. Anyone who thinks AGI is far away deserves to be blindsided.
I agree in principle, but the compiler is a terrible example given the amount of scaffolding afforded to the LLMs: literally hundreds of thousands of test cases covering all kinds of esoteric corners.
Also (and this is coming from someone who thinks it's quite close) "AGI" is not implied by the ability to implement very-long-horizon software tasks. That's not "general" at all.
You're moving the goal posts. A year ago, _no one_ thought it could write a working compiler. Yes, the compilers we've seen today are not great. Yes, they rely too much on existing implementations. But... if you can't see which way the wind is blowing then I can't help you at this point.
AGI is a meaningless milestone. No one can actually define it. The best definition I've seen is the one that ARC is using: "AI that is as good as a human at every task".
What goal posts have I moved? You seem to be attributing arguments to me that I haven't made. I'm simply pointing out that the example you gave involves a level of scaffolding that most projects don't have, so that the data point is exaggerated; and that it's possible (and quite reasonable) to have an agent that is extremely good at programming while not matching what most companies and people in the space have defined as "AGI". I do believe that we'll soon have agents that can achieve Claude C Compiler–level achievements in spaces with far less scaffolding.
That's not my argument at all! Though I can see why you took that away; my bad for not making my argument clearer.
I believe that even when we have AGI, code will still be super valuable because it'll be how we get precise abstractions into human heads, which is necessary for humans to be able to bring informed opinions to bear.
No, I think we just fundamentally disagree.
IMO, black boxes that "just work" will be fine provided they can produce intermediate artifacts and explanations that make sense. The people that I know who use CoWork already don't care about how the agent got the result as long as the outputs look right and the process is explainable.
I don't disagree with anything you just said
The author’s intuition is still backward-calibrated, even though he talks about the future. He doesn’t have an intuition for the future. All code will be AI generated. There’s no way to compete with the AI. And whatever new downsides this brings will be solved in ways we aren’t fully anticipating. But the solution is not to walk back vibecoding. You have to be blind to believe that most code won’t be vibecoded very soon.
You have to be incredibly incompetent and naive to look at the absolute garbage theatre that AI outputs today and go "yeah, this will write all future code".
Usually the response, for the last few years, has been "no no, you don't get it, it'll get so much better", and then they make the context window slightly larger and make it run Python code to do math.
What will really happen is that you and people like you will let Claude or some other commercial product write code, which it then owns. The second Claude becomes more expensive, you will pay, because all your tooling, your "prompts saved in commits", etc. will not work the same with whatever other AI is on offer.
You've just reinvented vendor lock in, or "highly paid consultant code", on a whole new level.
Can you explain what you think will happen, actually? People at OpenAI and Anthropic are no longer coding by hand. Are you saying everyone changes their mind and goes back? Not gonna happen. You have to work around this new constraint.
Yes, I'm saying that the companies whose entire business model is selling you AIs are not a reliable source. And of course, again, if you are competent, you can see that AI only generates passable outputs when guided or when the scope is small. This guiding only works when there is a human operator.
You're all being fooled by emergent behavior, which acts like intelligence, but really doesn't fool people who are familiar with how to write code.
I'm sorry, I'm not sure how to say this without sounding elitist, but the goal post has not moved since GPT-3. These tools, autonomously, produce code that only fools the clueless. I don't know how else to put it, and I'm getting really tired of the argument that "look, a company with billions and some of the buggiest, shittiest software is using AI to write it". No shit they are.
So what's going to happen? We will see the same divide we've seen with JavaScript. It's new, then everyone says it's what everyone must use. C++ developers no longer needed; it's all JS now. No need for native UIs; it's all JS now. If you're not learning the latest web tech, you'll miss out and fall behind. If you're studying anything but web, you'll be out of a job in 5 years. And now, a couple of decades later, we are still waiting for it.
Well, one thing I'll say is... if for whatever reason we have an electrical issue, or just general chip scarcity, then the programmers with experience will be the ones that can bail out society. Just saying. Especially because kids today won't really learn coding. It's a FAFO situation. Stay sharp!
To all the vibe coders:
When you let an LLM author code, it takes ownership of that code (in the engineering sense).
When you're done spending millions on tokens, years of development, prompt fine tuning, model fine tuning, and made the AI vendor the fattest wad of cash ever seen, you know what the vendor will do?
You have no migration path. Your Codex prompts don't work the same in Claude. All the prompts you developed and saved in commits, all the (probably proprietary) memory the AI vendor saved on their servers to lock you in even more, all of it is worthless without the vendor.
You are inventing "ah heck, we need to pay the consultant another 300 bucks an hour to take a look at this, because nobody else owns this code", but supercharged.
You're locking yourself in, to a single vendor, to such a degree that they can just hold your code hostage.
Now sure, OpenAI would NEVER do this, because they're all just doing good for humanity. Sure. What if they go out of business? Or discontinue the model that works for you, and the new ones just don't quite respond the same to your company's well established workflows?
It is the same as adding dependencies or hosting on Azure/Aws, choosing a nosql db isn't it?
Well, except it's your entire codebase, yeah
> When you're done spending millions on tokens, years of development, prompt fine tuning, model fine tuning, and made the AI vendor the fattest wad of cash ever seen, you know what the vendor will do?
They'll hire the person who knows AI, not the human clinging onto claims of artisanal character by character code.
It's entirely possible to engineer well-designed and intentional systems with AI tools and not stochastically "vibe" your way into tech debt.
AI engineers will get hiring preference. That is until we're all replaced by full agentic engineering. And that's coming.
It's almost like I addressed my entire comment to vibe coders and NOBODY else, because other uses of AI are pretty valid
I was locked into apple chips, amd chips and intel chips long ago. Everyone is already locked into one of these companies.
The fact of reality is that the technology is so complex only for-profit centralized powers can really create these things. Linux and open source was a fluke and even then open source developers need closed source jobs to pay for their time doing open source.
We are locked in and this is the future. Accept it or deny it: one is reality, the other is delusion. The world is transforming into vibe coding whether you like it or not. Accept reality.
If you love programming, if you care for the craft, if programming is a form of artistry for you, if programming is your identity and status symbol, then know that under current trends… all of that is going into the trash. Better rebuild a new identity quick.
One of the delusional excuse scaffolds people build around themselves to protect their identity is to say “the hard part of software wasn’t really programming”, which is kind of stupid because AI covers the hard part too; in fact, it covers it better than actual coding. Either way, this excuse is more viable than “AI is useless slop”.