• q3k 6 hours ago

    You're holding it wrong. I just spent 14 hours (high on coke) working with Claude to generate an agent orchestration framework that has already increased my output to 20x over just using Copilot. Adapt or you'll be left behind and forever part of the permanent underclass.

    • chasd00 6 hours ago

      That’s nothing, I used Claude Code to put together a totally new agent harness model architecture that can cook 30min brownies in only 20 minutes!

      • bogzz 6 hours ago

        CDDOL is undoubtedly the future, it is just sad seeing all these negative comments. It's like those people don't even know they've been made redundant already.

        It's not too late to jump on the Cocaine-Driven Development Orchestrated by LLMs train.

        • nananana9 6 hours ago

          Tomorrow you'll write 20 agent orchestration frameworks in 14 hours!

          • q3k 6 hours ago

            Amen! I'm pissing blood faster than I can increase my credit card limit for token use, but we'll make it. The 200x (10x from LLM + 20x from orchestration) means that by the end of 2026 we'll all be building $1MM ARR side projects daily.

            • esseph 5 hours ago

              I would love to subscribe to your newsletter to hear more about this topic.

          • ladyprestor 6 hours ago

            You didn't mention the $1M ARR!

            • neya 6 hours ago

              I built a windmill with Claude. I created a skills.md and followed everything by the book. But now, I have to supply power to keep the windmill running. What am I doing wrong?

              • xcubic 6 hours ago

                Can you share details about this? Do you have a repo?

                • gherkinnn 6 hours ago

                  Doesn't coke come with mania?

                  Either way, OP is holding it wrong and vague hypebro comments like yours don't help either. Be specific.

                  Here's an example: I told Claude 4.5 Opus to go through our DB migration files and the ORM model definitions and point out any DB indexes we might be missing based on how the data is being accessed. It did so, ingested all the controllers as well and a short while later presented me with a list of missing indexes, ordered by importance and listing why each index would speed up reads and how to test the gains.
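
                  To make that concrete, a suggestion of the shape "reads filter orders by customer_id all over the controllers, but there's no index on it" maps to a migration roughly like this (purely illustrative, Knex-style syntax, hypothetical table and columns, not our actual schema):

                    // Illustrative only: add a missing composite index flagged by that kind of analysis
                    exports.up = (knex) =>
                      knex.schema.alterTable('orders', (table) => {
                        table.index(['customer_id', 'created_at']);
                      });

                    exports.down = (knex) =>
                      knex.schema.alterTable('orders', (table) => {
                        table.dropIndex(['customer_id', 'created_at']);
                      });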

                  Now, I have no way of knowing how exhaustive the analysis was, but the suggestions it gave were helpful: Claude did not recommend over-indexing, and it considered read vs. write performance.

                  The equivalent work would have taken me a day; Claude gave me something helpful in a matter of minutes.

                  Now, I for one could not handle the information stream of 20 such analyses coming in. I can't even handle 2 large feature PRs in parallel. This is where I ask for more specifics.

                  • dmbche 6 hours ago

                    Parent comment seems sarcastic

                  • weakfish 6 hours ago

                    Parent comment is a joke I think, but there’s something ironic (Poe’s law?) about it being possibly _not_ a joke

                    • SJMG 6 hours ago

                      There's a parenthetical offset about being high on coke for 14 hours. It's obviously a joke.

                      • bogzz 6 hours ago

                        Sniped.

                        • beepbooptheory 6 hours ago

                          Why go through all migration files if you're looking for missing indices in the present? That doesn't seem to make sense when you could just look at the schema as it stands? Either way, why would this take you a day? How many tables do you have?

                        • undefined 6 hours ago
                          [deleted]
                          • zdw 6 hours ago

                            RFK Jr. is that you?

                            • netdevphoenix 6 hours ago

                              For the oblivious: /s

                              • snarf21 6 hours ago

                                This one is a lot harder to tell because there are some AI bros who claim similar things but are completely serious. Even look at Show HN now: There used to be ~20-40 posts per day but now there are 20 per HOUR.

                                (Please oh please can we have a Show HN AI. I'm not interested in people's weekend vibe-coded app to replace X popular tool. I want to check out cool projects where people invested their passion and time.)

                              • defraudbah 6 hours ago

                                that's a pretty long time to be on someones cok

                                • re-thc 6 hours ago

                                  It’s Claude Coke

                                  • Insanity 6 hours ago

                                    Well, it’ll definitely make you hallucinate!

                                  • bdangubic 6 hours ago

                                    the time you are on coke = the time there is coke around to be had :)

                                • simonw 6 hours ago

                                  How did you have it testing its code changes? Did you tell it to use Playwright or agent-browser or anything like that?

                                  If coding agents can't test the code as they're editing it they're no different from pasting your entire codebase into ChatGPT and crossing your fingers.

                                  At one point you mention it hadn't run "npm test" - did it run that once you directly told it to?

                                  I start every one of my coding agent sessions with "run uv run pytest" purely to confirm that it can run the tests and seed the idea with it that tests exist and matter to me.

                                  Your post ends with a screenshot showing you debating a C# syntax thing with the bot. I recommend telling it "write code that demonstrates if this works or not" in cases like that.

                                  • aurareturn 6 hours ago

                                      If coding agents can't test the code as they're editing it they're no different from pasting your entire codebase into ChatGPT and crossing your fingers.
                                    
                                    Out of curiosity, how do you get Claude Code or Codex to actually do this? I asked this question here before:

                                    https://news.ycombinator.com/item?id=46792066

                                    • simonw 6 hours ago

                                      I don't use CLAUDE.md; instead I use simple, token-efficient conventions.

                                      Most importantly all of my Python projects use a pyproject.toml file with this pattern:

                                        [dependency-groups]
                                        dev = ["pytest"]
                                      
                                      Which means I can tell the agent:

                                        Run "uv run pytest"
                                      
                                      And it will run the tests - without first needing to set up a virtual environment or install dependencies or anything like that. I wrote more about that pattern here: https://til.simonwillison.net/uv/dependency-groups

                                      For more complex test suites I'll give it more detailed instructions.

                                      For testing web apps I used to tell it "use playwright" or "use playwright Python".

                                      I'm currently experimenting with my own simple CLI browser automation tool. This means I can tell it:

                                        Run "uvx rodney --help" and then use 
                                        rodney to test this change
                                      
                                      The --help output tells it everything it needs to use the tool - here's that document in the repo: https://github.com/simonw/rodney/blob/10b2a6c81f9f3fb36ce4d1...

                                      I've recently started having the bots "manually" test changes with a new tool I built called Showboat. It's less than a week old but it's so far been working really well: https://simonwillison.net/2026/Feb/10/showboat-and-rodney/

                                      • SJMG 6 hours ago

                                        Instruct it to test as it goes along. Add whatever base testing command you use to your list of trusted tools.
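
                                        In Claude Code that list lives in the project's .claude/settings.json permissions block; roughly like this (a sketch, check the docs for the exact rule syntax):

                                          {
                                            "permissions": {
                                              "allow": [
                                                "Bash(npm test:*)",
                                                "Bash(npm run test:*)"
                                              ]
                                            }
                                          }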

                                    • aadarshkumaredu 35 minutes ago

                                      Removing jQuery isn’t a mechanical find-and-replace task.

                                      jQuery is often deeply intertwined with:

                                      - Event delegation patterns
                                      - Implicit DOM readiness assumptions
                                      - Legacy plugin ecosystems
                                      - Cross-browser workarounds

                                      An LLM can translate $(selector).addClass() to element.classList.add(). But it struggles when behavior depends on subtle timing, plugin side effects, or undocumented coupling.
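
                                      A minimal, hypothetical example of where the one-to-one translation breaks: the jQuery version delegates from document, so it keeps working for rows added later, while the "equivalent" rewrite only binds the rows that exist when it runs.

                                        // jQuery: delegated handler, also fires for .row elements added later
                                        $(document).on('click', '.row', function () {
                                          $(this).toggleClass('selected');
                                        });

                                        // Naive rewrite: binds only the .row elements present right now,
                                        // and silently does nothing for rows injected afterwards
                                        document.querySelectorAll('.row').forEach((el) => {
                                          el.addEventListener('click', () => el.classList.toggle('selected'));
                                        });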

                                      The hard part isn’t syntax replacement. It’s preserving behavioral invariants across the app.

                                      AI is decent at scaffolding migrations, but for legacy front-end codebases, you still need test coverage and incremental refactors. Otherwise it’s easy to “remove jQuery” and silently break UX flows.

                                      • littlecranky67 6 hours ago

                                        Not surprised. The amount of jQuery pasta code from the 2010s the models are trained on probably makes it look like all jQuery-specific stuff is plain JavaScript. Plus, in my experience (and lucky for me as a mostly FE dev), AIs suck at all things frontend (relative to other scenarios). They just never got trained on the real, rendered output in the browser, so they can't "see" and complete the feedback loop during training. Most tests in JavaScript projects generate <div>-soup - so the AI gets trained on that output as feedback, vs. the actual browser-rendered image.

                                        • coldcode 6 hours ago

                                          For any AI post, there always seems to be that one person for whom it worked great, and a whole lot of people for whom it didn't. Your mileage may vary...

                                          Some things AI does well, for many it may not be worth the effort entailed, and at some it downright sucks and may even be harmful. The question is whether the curve will ever shift to where it is useful most of the time.

                                          • mingus88 5 hours ago

                                            Like any tool, you get better at using it. YMMV indeed.

                                            The author of this article could probably have, for example, written most of this into the project's Claude.md, and the AI would have learned what not to do.

                                            Instead they wrote it up as a blog post which is unsurprisingly not going to net quality software.

                                            Having some way for Claude to test what it wrote is critical as well. It will learn on its own very fast if it can see the error messages and iterate on them like any other developer would.

                                            Sounds like the author had tests that Claude never ran. Sounds misconfigured to me. Again, did the author learn how to use the tool?

                                          • Arubis 6 hours ago

                                            That sounds like a realistic outcome for a real engineer, too.

                                            • tommy_axle 5 hours ago

                                              If doing it directly fails (not surprising), wouldn't the next thing (maybe the first thing) to do be to have the AI write a codemod that does what needs to be done, then apply the codemod? Then all you need to do is get the codemod right and apply it to as many files as you need. Seems much more predictable and context-efficient.
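
                                              Sketch of what that could look like as a jscodeshift transform, for one easy pattern only (assumes jscodeshift; a real codemod needs far more cases and still has to preserve jQuery's chaining and implicit-iteration semantics):

                                                // Rewrite $(sel).addClass(cls) into querySelectorAll + classList.add
                                                // Run with: npx jscodeshift -t drop-addclass.js src/
                                                module.exports = function transformer(file, api) {
                                                  const j = api.jscodeshift;
                                                  return j(file.source)
                                                    .find(j.CallExpression, {
                                                      callee: {
                                                        type: 'MemberExpression',
                                                        property: { name: 'addClass' },
                                                        object: { type: 'CallExpression', callee: { name: '$' } },
                                                      },
                                                    })
                                                    .replaceWith((path) => {
                                                      const [selector] = path.node.callee.object.arguments;
                                                      const [cls] = path.node.arguments;
                                                      // document.querySelectorAll(selector).forEach(el => el.classList.add(cls))
                                                      return j.callExpression(
                                                        j.memberExpression(
                                                          j.callExpression(
                                                            j.memberExpression(j.identifier('document'), j.identifier('querySelectorAll')),
                                                            [selector]
                                                          ),
                                                          j.identifier('forEach')
                                                        ),
                                                        [j.arrowFunctionExpression(
                                                          [j.identifier('el')],
                                                          j.callExpression(
                                                            j.memberExpression(
                                                              j.memberExpression(j.identifier('el'), j.identifier('classList')),
                                                              j.identifier('add')
                                                            ),
                                                            [cls]
                                                          )
                                                        )]
                                                      );
                                                    })
                                                    .toSource();
                                                };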

                                              • simonw 5 hours ago

                                                This should work really well, but you still need to first ensure the agent is able to test the code (both through automated tests and "manually" poking at it) so it can verify the changes made actually work.

                                              • simonw 6 hours ago

                                                Were you using --dangerously-skip-permissions or were you approving every edit and every tool use?

                                                Which tools did it use?

                                                • rado 6 hours ago

                                                  Refactoring jQuery to vanilla JS was one of my first AI dev experiences a couple of years ago and it was great.

                                                  • re-thc 6 hours ago

                                                    You don't remove jQuery. EVER. You'll lose all the $.

                                                    • SAI_Peregrinus 4 hours ago

                                                      You can use any POSIX shell to get lots of $ back in your code.

                                                      • re-thc 3 hours ago

                                                        That's not Webscale.

                                                    • cbg0 6 hours ago

                                                      Seeing some of the pictures where OP says "MOTHERFUCKER" in the prompts and how simplistic some of the questions provided are gives me a feeling that CC is being used incorrectly.

                                                      My experience with 4.6 has been that it gobbles up tokens like crazy, but it's pretty smart otherwise. Even the latest LLMs need a lot of context to know what they're working on and which versions to target, plus access to some MCP like Context7 to get up-to-date documentation (especially for JS/TS).

                                                      My non-tech friends have a tendency to talk to AI like a person and then complain about the quality of the answers and I always tell them: ask your question, with one or two follow-ups max then start a new conversation. Also, provide as much relevant context as possible to get the best answer, even if it seems obvious. I'd expect a SWE to already be aware of this stuff.

                                                      I've been able to find obscure edge cases thanks to Claude and I've also had it produce code that does the opposite of what I asked even with a clear prompt, but that's the nature of LLMs.

                                                      • padjo 6 hours ago

                                                        This sounds like something I would have done with sed

                                                        • Anon1096 6 hours ago

                                                          > Also, why not run "npm run test" at some point? We have tons of tests. I even have an integration test that crawls the entire fucking app recusrively link-by-link in a headless browser and reports on JS errors. CLAUDE.md has all the info.

                                                          I'm a little baffled by this post. The author claims to have "Wrote a comprehensive CLAUDE.md with detailed instructions" and yet didn't have "run the tests" anywhere? I realize this post is going to be a playground for bashing on AI, but I just wish the prompt was published, or even better, if it's open source, let other people try. Seems like the perfect case to throw Claude Code at in a wiggum loop overnight.

                                                          • mingus88 5 hours ago

                                                            Exactly, if Claude is making these types of mistakes, write a better claude.md instead of a blog post.

                                                            My company uses an obscure DSL with a name shared with a popular OSS project. Claude was worthless because it kept suggesting code in that other language.

                                                            Well, we wrote an MCP so Claude could run and test its code and reference the language docs. It's amazing now. It makes mistakes like the ones in this post and then just fixes them and tests again.

                                                            • suddenlybananas 3 hours ago

                                                              It's super intelligent but it can't be bothered to run tests unless specifically told to?

                                                              • simonw 3 hours ago

                                                                Personally I prefer my agents not to run random commands on my machine without me telling them to first.

                                                                Imagine you just cloned some random project from GitHub and fired up Claude Code in that folder, but it turned out to be malicious and running 'npm test' stole all your files.

                                                            • lenerdenator 6 hours ago

                                                              > Why AI is so bad at vanilla JS and HTML, when there's no React/Vue in a project?

                                                              Because we're still paying for Brendan Eich's mistakes 30 years later (though Brendan isn't, apparently), and even an LLM trained on an unfathomably-large corpus of code by experts at hundreds of millions of dollars of expense can't unscrew it. What, like, even is a language's standard library, man?

                                                              > The moment you point it at a real, existing codebase - even a small one - everything falls apart

                                                              That's not been my experience with running Claude to create production code. Plan mode is absolutely your friend, as is tuning your memory files and prompts. You'll need to do code reviews as before, and when it makes changes that you don't like (like patching in unit tests), you need to correct it.

                                                              Also, we use hexagonal architecture, so there are clean patterns for it to gather context from. FWIW, I work in Python, not JS, so when Claude was trained on it, there weren't twenty wildly different flavor-of-the-week-fifteen-years-ago frameworks and libraries to confuse it.

                                                              If JS sucks to write as a human, it will suck even more to write as a LLM.

                                                              • kittikitti 6 hours ago

                                                                Removing jQuery is a great task and one I hope to implement in some of my JavaScript code bases. Thank you for this post. I don't know exactly why but I've found these agents to be less useful when it's counterintuitive from popular coding methods. Although there are many reasons why replacing jQuery is a great idea, coding agents may fail on this because so much of their training data requires jQuery. For example, many top comments on StackOverflow utilize jQuery, perhaps to address the same logic you are trying to replace.

                                                                • lenerdenator 6 hours ago

                                                                  jQuery simply turned the tables and executed a `$( ".Claude_Code" ).remove();`. Now Anthropic's services are down across several regions and emergency meetings are being held with stakeholders.

                                                                  jQuery: It's Going Absolutely Nowhere™

                                                                  • dana321 6 hours ago

                                                                      It's a slot machine, you need to revert the changes and try again!

                                                                    • josefritzishere 6 hours ago

                                                                        Surprise factor zero.

                                                                      • aurareturn 6 hours ago

                                                                          The moment you point it at a real, existing codebase - even a small one - everything falls apart.
                                                                        
                                                                        Not my experience. It excels in existing codebases too.

                                                                        I often ask it "I have this bug. Why?" And it almost always figures it out and fixes it. Huge code base.

                                                                        Codex user, not Claude Code.

                                                                        • netdevphoenix 6 hours ago

                                                                          > Not my experience. It excels in existing codebases too.

                                                                          Why don't you prove it?

                                                                          1. Find an old large codebase in codeberg (avoiding the octopus for obvious reasons)

                                                                          2. Video stream the session and make the LLM convo public

                                                                          3. Ask your LLM to remove jQuery from the db and submit regular commits to a public remote branch

                                                                          Then we will be able to judge if the evidence stands

                                                                          • aurareturn 6 hours ago

                                                                            I don't have to prove it. I do it every single day at work in a real production codebase that my business relies on.

                                                                            And I don't remove jQuery every day. Maybe the OP is right that Opus 4.6 sucks at removing jQuery. I don't know. I've never asked an AI to do it.

                                                                                The moment you point it at a real, existing codebase - even a small one - everything falls apart.
                                                                            
                                                                              This statement is absolutely not true based on my experience. Codex has been amazing for me at existing code bases.

                                                                            • netdevphoenix 6 hours ago

                                                                              Extraordinary claims require extraordinary evidence. "Works on my machine" ain't it.

                                                                              • aurareturn 6 hours ago

                                                                                Is it an extraordinary claim that Opus 4.6 or GPT 5.3 works amazing on existing code bases in my experience?

                                                                                That's funny. I feel like it's the opposite. Claiming that Opus 4.6 or GPT 5.3 fails as soon as you point them to an existing code base, big or small, is a much more extraordinary claim.

                                                                            • simonw 6 hours ago

                                                                              What are the obvious reasons?

                                                                            • bsaul 6 hours ago

                                                                                  Not my experience either, and I'm on Claude Code. I'd be really curious to see what went wrong in OP's case. Maybe too much instruction? Could it be that it used a fast model instead of the deep ones?

                                                                              • aurareturn 6 hours ago

                                                                                No, OP said he used the Max Opus 4.6.

                                                                                    Anyways, I think one area where Codex and Claude Code fall short is that they do not test the changes they make by actually using the app.

                                                                                    In this case, the LLM should ideally render the page in a real browser and actually click the buttons to verify. Best if the LLM tests before the changes and then after, to confirm the behavior is the same. Maybe it should take a screenshot before the change, then take a screenshot after, and compare them.
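
                                                                                    A rough sketch of what I mean, using Playwright from Node (hypothetical URL, selector and file names; the agent would run it before and after its edit and then pixel-diff the two images, which is roughly what Playwright's own screenshot assertions do):

                                                                                      // Capture a screenshot the agent can compare before/after a change
                                                                                      const { chromium } = require('playwright');

                                                                                      (async () => {
                                                                                        const browser = await chromium.launch();
                                                                                        const page = await browser.newPage();
                                                                                        await page.goto('http://localhost:3000/');  // hypothetical dev server
                                                                                        await page.click('#submit-button');         // exercise the UI, not just load it
                                                                                        await page.screenshot({ path: process.argv[2] || 'before.png' });
                                                                                        await browser.close();
                                                                                      })();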

                                                                                I asked why Codex and Claude don't do this here: https://news.ycombinator.com/item?id=46792066

                                                                                • threetonesun 6 hours ago

                                                                                      Yeah, if you have these tools in place to validate its changes, you can quickly iterate with it to the right results. But think through how it's making UI changes and it quickly becomes obvious why it can make absolutely wrong and terrible guesses about the implementation details: it can't _see_ what it's doing, or interact with it, it's just pattern-matching other implementations it's seen.

                                                                                  • aurareturn 6 hours ago

                                                                                    Yea, the next breakthrough for Codex or Claude Code would be to actually use/test the app like a real human would during the development process.

                                                                                • mwigdahl 6 hours ago

                                                                                  You can easily do this, at least with Claude Code. Ask it to install and use Playwright to confirm rendering and flow. You're correct that it is a failing to not do this. When you do, it definitely helps cut down on bugs.

                                                                                  EDIT: Sorry, just noticed you said "real browser". Haven't tried this but Playwright gets you a long way down the road.

                                                                                  • aurareturn 6 hours ago

                                                                                    Will check it out. Looks like there is also chrome-devtools-mcp for Codex.

                                                                                  • throwup238 6 hours ago

                                                                                    See the /chrome command in Claude code.

                                                                                    • lenerdenator 6 hours ago

                                                                                      FWIW, I've found Playwright tests to be a decent way of getting Claude to do what you're talking about.

                                                                                    • undefined 6 hours ago
                                                                                      [deleted]
                                                                                      • n4r9 6 hours ago

                                                                                        They say explicitly what model they're using.

                                                                                      • uludag 6 hours ago

                                                                                            There could be a whole spectrum of types of repositories where these tools excel or fail. I can imagine that a large repository, poorly documented, with confusing, inconsistent usages/patterns, in a dynamic language, with poor tests, will almost always lead to failure.

                                                                                        I honestly think that size and age alone are sufficient to lead these tools into failure cases.

                                                                                        • aurareturn 6 hours ago

                                                                                          It could be. I mainly use LLMs with Typescript and Go, both typed languages.

                                                                                        • netdevphoenix 6 hours ago

                                                                                          > I often ask it "I have this bug. Why?" And it almost always figures it out and fixes it. Huge code base.

                                                                                          Is your AI PR publicly available in github?

                                                                                          • aurareturn 6 hours ago

                                                                                            No. I don't do any open source work. I work for a private company.

                                                                                            • whiplash451 6 hours ago

                                                                                              These two things are not mutually exclusive.