> It turns out vibe-coding an Electron app is still preferable to vibe-coding on multiple platforms and delivering a tailored experience for each.
Off-topic, but people have some weird obsession with native apps. What does "delivering a tailored experience for each platform" even look like?
Blender is probably the most successful non-Electron, open-source, multi-platform app we've ever had. It completely ignores each platform's native UI. VSCode is the most-used editor among programmers [0], and it's literally Electron-based.
Is there even one (1) app that
1. is as successful as Blender or VSCode
2. delivers a tailored experience for each platform, or at the very least uses the platform's native UI?
[0]: https://survey.stackoverflow.co/2025/ and it's not even close.
Back to the topic:
> Video games stand out as one market where consumers have pushed back effectively
No, that's simply untrue. Players only object to AI art assets, and only when they're painfully obvious. No one cares how the code is written.
If you actually read the wording of Steam's AI survey, you'll see Steam has completely caved on AI-generated code as well.
What the author and many others find hard to digest is that LLMs are surfacing the reality that most of our work is a small bit of novelty on top of redundant boilerplate code.
Most of what we do in programming is some small novel idea at a high level and repeatable boilerplate at a low level. A fair question: why hasn't the boilerplate been automated away as libraries or other abstractions? LLMs are especially good at fuzzily abstracting repeatable code, and it's simply not possible to get the same result from other, manual methods.
I empathise, because it is distressing to realise that most of the value we provide is not in those lines of code but in that small innovation at the higher layer. No developer wants to hear that; they would like to think every line is a creation from their soul.
Abstraction isn't free... even if you had the correct abstraction and the tools to strip out the parts you don't need for deployment, there is still the cost of understanding and compiling it.
There is also the cost reason: somebody trying to sell an abstraction will try to monetize it, which means not everyone will want to (or be able to) use it. Or it will take forever / stay unfinished if it's open and free.
There's also the platform lockin/competition aspect...
Time to learn design, how to talk to customers, and how to discover unsolved problems. Used right, LLMs should improve your software quality. Make stuff that matters, stuff you can be proud of.
> This stands in stark contrast to code, which generally doesn't suffer from re-use at all ...
This is an absolute chef-kiss double-entendre.
> It's not a co-pilot, it's just on auto-pilot.
Love it. Calling it "Copilot" in itself is a lie. Marketing speak to sell you an idea that doesn't exist. The idea is that you are still in control.
Well, initially it was a lot less capable. Someone might have described it as auto-complete on steroids.
Someone might still call LLMs that today, except they've stepped up a bit from steroids.
And MS is conveniently keeping the old name.
Acko.net remains the best website on the internet.
>If you ask me, no court should have ever rendered a judgement on whether AI output as a category is legal or copyrightable, because none of it is sourced. The judgement simply cannot be made, and AI output should be treated like a forgery unless and until proven otherwise.
Guilty until proven innocent will satisfy the author's LLM-specific point of contention, but it is hardly a good principle.
You are missing the author's point. He literally said no court should have rendered a judgement; that's the exact opposite of guilty until proven innocent. Guilty means a court has made a judgement.
He is proposing not to make a judgement at all. If the AI company CLAIMS something, they have to prove it, like they do in science or something. Any claim is treated as such: a claim. The trick is to not claim anything at all, and let the users come to the conclusion on their own that it's magic.

And it's true that LLMs by design cannot cite sources. Thus they cannot, by design, tell you whether they made something up with no regard for it making sense or working, whether they just copy-pasted something that either works or is crap, or whether they somehow created something new that is fantastic.
All we ever see are the success stories: the success after the n-th try, the tweaking of the prompt, the process of handling your agents just right. The hidden cost is out there, barely hidden.
This ambiguity benefits the AI companies, and they are exploiting it to the maximum: going as far as illegally obtaining pirated intellectual property from an entity that is banned in many countries at one end of their pipeline, and selling it as the biggest thing ever at the other end. And yes, all the doomsday stories of AI taking over the world are part of the marketing hype.
> Whether something is a forgery is innate in the object and the methods used to produce it. It doesn't matter if nobody else ever sees the forged painting, or if it only hangs in a private home. It's a forgery because it's not authentic.
On a philosophical level I don't get the discussions about paintings. I love a painting for what it is, not for being the first or the only one. An artist who paints something that I can't distinguish from a Van Gogh is a very skillful artist, and the painting is very beautiful. Me labeling it "authentic" or not should not affect its artistic value.
For a piece of code you might care about many things: correctness, maintainability, efficiency, etc. I don't care if someone wrote bad (or good) code by hand or with an LLM; it is still bad (or good) code. Someone has to make the decision whether the code fits the requirements, LLM or software developer, and that will not go away.
> but also a specific geographic origin. There's a good reason for this.
Yes, but the "good reason" is more probably people's desire to hold monopolies and resist change. Same as with the paintings: if the cheese is 99% the same, I don't care whether it was made in a particular region or not. Of course the region is happy, because it means more revenue for them, but I'm not sure it is good.
> To stop the machines from lying, they have to cite their sources properly.
I would be curious how this could be applied to a human. Should we also cite all the courses and articles we have read on a topic when we write code?
> An artist that paints something that I can't distinguish from a Van Gogh is a very skillful artist and the painting is very beautiful.
There are a lot of artists who can do that, after having seen Van Gogh's paintings. Only Van Gogh (as far as we know) painted those without having seen anything like them before; in other words, he had a new idea.
I wouldn't call that forgery, but commission.
btw you can make git commits with the AI as author and you as committer, which makes git blame easier.
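For anyone who hasn't tried it, here's a minimal sketch of that workflow; the names and the temp repo are hypothetical, and the committer identity just comes from your normal git config:

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.name "Jane Dev"            # committer identity (you)
git config user.email "jane@example.com"
echo "hello" > file.txt
git add file.txt
# --author overrides only the author field; the committer stays you.
git commit -q --author="Claude <noreply@anthropic.com>" -m "add file"
git log -1 --format='author: %an, committer: %cn'
# prints: author: Claude, committer: Jane Dev
```

git blame then attributes the lines to the AI author, while the commit history still records who actually pushed the change.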
This rules. What a good, sensible, sober post.
I instantly recognized the page header; I probably last visited this site 10 years ago or something.
> This sort of protectionism is also seen in e.g. controlled-appelation foods like artisanal cheese or cured ham. These require not just traditional manufacturing methods and high-quality ingredients from farm to table, but also a specific geographic origin.
Maybe "Artisanal Coding" will be a thing in the future?
This geographic protection is extremely bogus in many cases, if not most, which imo undermines his argument.
What a wonderful read.
A pointless opinion-piece of low information density, perfect for an echo chamber of equally minded people.
And "lazy".
Claude makes me mad: even when I ask for small code snippets to be improved, it increasingly comments on "what I could improve" in the code instead of generating the embarrassingly easy code with the improvement itself.
If I point that out with something like "include that yourself", it does a decent job.
That's so _L_azy.
LLMs are cheaters because their goal isn't to produce good code but to please the human.
That's a problem with any self-improving tools, not just LLMs. Successful self-improvement leads to efficiency, which is just another name for laziness.
LLMs are pretty cool technology and are useful for programming.
If you check the code afterwards. You do check the code yourself, don't you?
Hello, I am a single dev using an agent (Claude Code) on a solo project.
I have accepted that reading 100% of the generated code is not possible.
I am attempting to find methods that allow clean code to be generated nonetheless.
I am using extremely strict DDD architecture. Yes, it is totally overkill for a one-man project.
Now I only have to be intimate with two parts of the code:
* the public facade of the modules, which also happens to be the place where authorization is checked.
* the orchestrators, where multiple modules are tied together.
If the innards of a module are a little sloppy (code duplication and the like), it is not really an issue, as they do not have an effect at a distance on the rest of the code.
I have to be on the lookout, though. It happens that the agent tries to break the boundaries between modules, cheating its way through with things like direct SQL queries.
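The layering described above can be sketched roughly like this; this is my reading of the setup, and all module and permission names are made up for illustration. Each module exposes a single public facade where authorization is checked, and the orchestrator only ever talks to facades:

```python
class AuthError(Exception):
    pass

# --- billing module: internals are private; the facade is the only entry point ---
def _compute_invoice_total(items):
    # Internal detail; may be sloppy or duplicated, but stays contained.
    return sum(price * qty for price, qty in items)

def billing_facade(user, items):
    """Public entry point: the only place authorization is checked."""
    if "billing:read" not in user["perms"]:
        raise AuthError("not allowed")
    return _compute_invoice_total(items)

# --- shipping module ---
def _estimate_days(country):
    return {"US": 2, "DE": 5}.get(country, 10)

def shipping_facade(user, country):
    if "shipping:read" not in user["perms"]:
        raise AuthError("not allowed")
    return _estimate_days(country)

# --- orchestrator: ties modules together via their facades only ---
def checkout_summary(user, items, country):
    total = billing_facade(user, items)
    days = shipping_facade(user, country)
    return {"total": total, "eta_days": days}

user = {"perms": {"billing:read", "shipping:read"}}
print(checkout_summary(user, [(10.0, 2), (5.0, 1)], "DE"))
# {'total': 25.0, 'eta_days': 5}
```

The "direct SQL query" cheat would be the agent bypassing a facade and reaching into another module's internals; that's the boundary violation to watch for in review.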
eyeroll
Lying implies knowing what’s true
Oh sorry, my mistake! You're right, I don't know what's true.
Incredible website
More like Lunatic.
It can be both. There are two L's to pick from.
Lovely lizard machine.
That's a lie.