• nine_k 2 days ago

    All these stories about vibe coding going well or badly remind me of an old joke.

    A man visits his friend's house. There is a dog in the house. The friend says that the dog can play poker. The man is incredulous, but they sit at a table and have a game of poker; the dog actually can play!

    The man says: "Wow! Your dog is incredibly, fantastically smart!"

    The friend answers: "Oh, well, no, he's a naïve fool. Every time he gets a good hand, he starts wagging his tail."

    Whether you see LLMs as impressively smart or annoyingly foolish depends on your expectations. Currently they are very smart talking dogs.

    • tliltocatl 2 days ago

      Nobody says LLMs aren't impressive. But there is a subtle difference between an impressive trick and something worth throwing 1% of GDP at. Proponents say that if we keep throwing more money at it, it will improve, but this is far from certain.

      • glimshe 2 days ago

        We've thrown fantastical amounts of money at some very uncertain things, like the moon landing. I think the willingness to bet big on a potentially transformative technology such as AI is a good thing, and a bit of a return to the old days when humanity still engaged in big infrastructure bets. Yes, it may fail, but that's intrinsic to any ambitious project.

        • mhh__ 2 days ago

          Some people definitely think they aren't impressive.

          > 1% of GDP

          LLMs are basically the only genuinely new thing in decades, one that has someone excited in basically every department in the entire world. Why is it so bad that we spend money on them? The alternative is going back to shovelling web3 crap.

          There's definitely a new generation of bullshit merchants to go with LLMs, but I think they (the models) target a very different part of the brain than normal tech does, so in some ways they're much more resilient to the usual fad archetypes. (This is also why some people who are a bit jittery socially hate them.)

          • b33j0r a day ago

            I get the sense that people might mean that the transformer paradigm might not scale. But I do not understand the argument that AI in general is hype, and that investing in it is cult-like.

            It’s just a technology; one that will improve, sometimes stagnate, sometimes accelerate. Like anything else, right? I don’t see a time when we’ll just stop using AI because it “feels so trite.”

            • mrits a day ago

              The alternative is to not throw money at AI. The amount we spend could be justified under a national security budget alone, without even counting increased GDP.

              • smohare a day ago

                [dead]

              • kqr 2 days ago

                Somehow this also reminds me of http://raisingtalentthebook.com/wp-content/uploads/2014/04/t...

                "I taught my dog to whistle!"

                "Really? I don't hear him whistling."

                "I said I taught him, not that he learnt it."

                • pyman 2 days ago

                  A person is flying a hot air balloon and realises he’s lost. He lowers the balloon and spots a man down below. He shouts:

                  “Excuse me! Can you help me? I promised a friend I’d meet him, but I have no idea where I am.”

                  The man replies, “You’re in a hot air balloon, hovering 30 feet above the ground, somewhere between 40 and 41 degrees north latitude and between 59 and 60 degrees west longitude.”

                  “You must be a Prompt Engineer,” says the balloonist.

                  “I am,” replies the man. “How did you know?”

                  “Well,” says the balloonist, “everything you told me is technically correct, but it’s of no use to me and I still have no idea where I am.”

                  The man below replies, “You must be a Vibe Coder.”

                  “I am,” says the balloonist. “How did you know?”

                  "Because you don’t know where you are or where you’re going. You made a promise you can’t keep, and now you expect me to solve your problem. The fact is, you’re in the same position you were in before we met, but now it’s somehow my fault!"

                • markus_zhang 2 days ago

                  From my experience (we have to vibe code as it has become the norm in the company), vibe coding is most effective when the developer feeds detailed context to the agent beforehand and gives it very specific commands for each task. It still speeds up development quite a bit once everything is going in the right direction.

                  • 6LLvveMx2koXfwn 2 days ago

                    I vibe code in domains I am unfamiliar with - getting Claude to configure the right choice of AWS service for a specific use-case can take a very long time. But whether that is still quicker than me alone with the docs is hard to tell.

                  • undefined 2 days ago
                    [deleted]
                    • Marazan 2 days ago

                      The variation I've seen on this applied to AIs is:

                      Fred insists to his friend that he has a hyper intelligent dog that can talk. Sceptical, the friend enquires of the dog "What's 2+2?"

                      "Five" says the dog

                      "Holy shit a talking dog!" says the friend "This is the most incredible thing that I've ever seen in my life".

                      "What's 3+3?"

                      "Eight" says the dog.

                      "What is this bullshit you're trying to sell me Fred?"

                      • codeflo 2 days ago

                        This joke is a closer analogy to reality with a small addition. After the friend is suitably impressed:

                        > "Holy shit a talking dog!" says the friend "This is the most incredible thing that I've ever seen in my life".

                        this happens:

                        "Yes," says Fred. "As you can see, it's already at PhD level now, constantly improving, and is on track to replace 50% of the economy in twelve months or sooner."

                        Confused, the friend asks:

                        > "What's 3+3?"

                        > "Eight" says the dog.

                        > "What is this bullshit you're trying to sell me Fred?"

                      • totetsu 2 days ago

                        Try playing an adversarial word game with ChatGPT. Like this: one player asks questions, and the other is not allowed to say "yes" or "no", not allowed to reuse the same wording, and not allowed to evade the question. You'll see its tail wagging pretty quickly.

                        • ragequittah a day ago

                          You could very likely train an AI (Llama?) to do this easily, but trying to get a general LLM to play a game like this doesn't make sense. Best way to get around it? Have it create a Python program that will play the game correctly instead, as in the sketch below.
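
                          A minimal sketch of what that referee program might look like (hypothetical, covering only the mechanical rules; judging evasion would still need a human or an LLM judge):

                              import re

                              class Referee:
                                  """Enforces the mechanical rules of the answering game."""
                                  def __init__(self):
                                      self.seen = set()

                                  def check(self, answer: str) -> str:
                                      words = re.findall(r"[a-z']+", answer.lower())
                                      if "yes" in words or "no" in words:
                                          return "lose: said yes/no"
                                      if answer.lower() in self.seen:
                                          return "lose: repeated wording"
                                      self.seen.add(answer.lower())
                                      return "ok"  # evasion check not modeled here

                              ref = Referee()
                              print(ref.check("I would say that is accurate."))  # ok
                              print(ref.check("I would say that is accurate."))  # lose: repeated wording
                              print(ref.check("Certainly, yes."))                # lose: said yes/no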

                        • ak_111 2 days ago

                          I asked ChatGPT to write a short fable about the phenomenon of vibe coding in the style of Aesop:

                          The Owl and the Fireflies

                          One twilight, deep in the woods where logic rarely reached, an Owl began building a nest from strands of moonlight and whispers of wind.

                          "Why measure twigs," she mused, "when I can feel which ones belong?"

                          She called it vibe nesting, and declared it the future.

                          Soon, Fireflies gathered, drawn to her radiant nonsense. They, too, began to build — nests of smoke and echoes, stitched with instinct and pulse. "Structure is a cage," they chirped. "Flow is freedom."

                          But when the storm came, as storms do, their nests dissolved like riddles in the rain.

                          Only the Ants, who had stacked leaves with reason and braced walls with pattern, slept dry that night. They watched the Owl flutter in soaked confusion, a nestless prophet in a world that demanded substance.

                          Moral: A good feeling may guide your flight, but only structure will hold your sky.

                          • suddenlybananas 2 days ago

                            This is incoherent

                        • recipe19 2 days ago

                          I work on niche platforms where the amount of example code on GitHub is minimal, and this definitely aligns with my observations. The error rate is way too high to make "vibe coding" possible.

                          I think it's a good reality check for the claims of impending AGI. The models still depend heavily on being able to transform other people's work.

                          • winrid 2 days ago

                            Even with TypeScript, Claude will happily break basic business logic to make tests pass.

                            • motorest 2 days ago

                               > Even with TypeScript, Claude will happily break basic business logic to make tests pass.

                               It's my understanding that LLMs change the code to meet a goal, and if you prompt them with vague instructions such as "make tests pass" or "fix tests", LLMs in general apply the minimum changes that allow their goal to be met. If you don't explicitly instruct them, they can't and won't tell project code apart from test code. So they will change your project code to make the tests work.

                               This is not a bug. Changing project code to make tests pass is a fundamental approach to refactoring, and the whole basis of TDD. If that's not what you want, you need to prompt them accordingly: something like "make the tests pass; treat the test files as the spec and do not modify them" removes the ambiguity.

                              • bapak 2 days ago

                                Speaking of TypeScript, every time I feed a hard type problem to LLMs, they just can't do it. Sometimes I find out it's a TS limitation or just not implemented yet, but that won't stop us from wasting 40 minutes together.

                                • rs186 a day ago

                                  When I vibe coded with GitHub Copilot in TypeScript, it kept using "any" even though those variables had clear interfaces already defined somewhere in the code. This drove me crazy, as I had to go in and manually fix all those things. The only thing that helped a bit was me screaming "DO NOT EVER USE 'any' TYPE". I can't understand why it does this.

                                  • CalRobert 2 days ago

                                    That seems like the tests don’t work?

                                  • pygy_ 2 days ago

                                     I've had a similar problem with WebGPU and WGSL. LLMs create buffers with the wrong flags (and other API usage errors), don't clean up resources, mix up GLSL and WGSL, and write semi-less WGSL (in template strings) if you ask them to write semi-less [0] JS...

                                    It's a big mess.

                                    0. https://github.com/isaacs/semicolons/blob/main/semicolons.js

                                    • poniko a day ago

                                      Yes, and if you work with a platform that has been around for a long time, like .NET, you will most definitely get a mix of really outdated, deprecated code and the latest features.

                                      • remich a day ago

                                        I recommend the context7 MCP tool for this exact purpose. I've been trying to really push agents lately at work to see where they fall down and whether better context can fix it.

                                        As a test recently I instructed an agent using Claude to create a new MCP server in Elixir based on some code I provided that was written in Python. I know that, relatively speaking, Python is over-represented in training data and Elixir is under-represented. So, when I asked the agent to begin by creating its plan, I told it to reference current Elixir/Phoenix/etc documentation using context7 and to search the web using Kagi Search MCP for best practices on implementing MCP servers in Elixir.

                                        It was very interesting to watch how the initially generated plan evolved after using these tools and how after using the tools the model identified an SDK I wasn't even aware of that perfectly fit the purpose (Hermes-mcp).

                                        • ragequittah a day ago

                                          This is easily solved by feeding the LLM the correct documentation. I was having problems with Tailwind because of this, right up until I had ChatGPT deep research come up with a spec sheet on how to use the latest version of it. I fed it into the various AIs I've been using (worked for ChatGPT, Claude, and Cursor) and have had no problems since.

                                        • gompertz 2 days ago

                                           Yep, I program in some niche languages like Pike, Snobol4, Unicon. Vibe coding is out of the question for these languages. Forced to use my brain!

                                          • johnisgood 2 days ago

                                             You could always feed it some documentation and example programs. I did that with a niche language around 8 months ago, and it worked out really well with Claude.

                                          • vineyardmike 2 days ago

                                             Completely agree. I’m a professional engineer, but I like to get some ~vibe~ help on personal projects after work when I’m tired and just want my personal project to go faster. I’ve had a ton of success with Go, JavaScript, Python, etc. I had mixed success writing idiomatic Elixir roughly a year ago, but I’d largely assumed that would be resolved by today, since every model maker has started aggressively filling training data with code now that we’ve found the PMF of LLM code assistance.

                                             Last night I tried to build a super basic “barely above hello world” project in Zig (a language where I don’t know the syntax), and it took me trying a few different LLMs to find one that could actually write anything that would compile (Gemini w/ search enabled). I really wasn’t expecting that, considering how good my experience has been with mainstream languages.

                                            Also, I think OP did rather well considering BASIC is hardly used anymore.

                                            • andsoitis 2 days ago

                                              > The models

                                              The models don’t have a model of the world. Hence they cannot reason about the world.

                                              • pygy_ 2 days ago

                                                I tried vibe coding WebGPU/WGSL, which is thoroughly documented but has little actual code around, and LLMs are pretty bad at it right now.

                                                They don't need a formal model, they need examples from which they can pilfer.

                                                • bawana a day ago

                                                  The theory is that language is an abstraction built on top of the world and therefore encompasses all human experience of the world. The problem arises, however, when the world (a.k.a. nature) acts in an unexpected way, outside human experience.

                                                  • hammyhavoc 2 days ago

                                                    "reason" is doing some heavy-lifting in the context of LLMs.

                                                  • jjmarr 2 days ago

                                                    I've noticed the error rate doesn't matter if you have good tooling feeding into the context. The AI hallucinates, sees the bug, and fixes it for you.

                                                    • empressplay 2 days ago

                                                      I don't know if you're working with modern models. Grok 4 doesn't really know much about assembly language on the Apple II but I gave it all of the architectural information it needed in the first prompt of a conversation and it built compilable and executable code. Most of the issues I encountered were due to me asking for too much in a prompt. But it built a complete, albeit simple, assembly language game in a few hours of back and forth with it. Obviously I know enough about the Apple II to steer it when it goes awry, but it's definitely able to write 'original' code in a language / platform it doesn't inherently comprehend.

                                                      • timschmidt 2 days ago

                                                        This matches my experience as well. Poor performance usually means I haven't provided enough context or have asked for too much in a single prompt. Modifying the prompt accordingly and iterating usually results in satisfactory output within the next few tries.

                                                      • cmrdporcupine a day ago

                                                        I find for these kinds of systems, if I pre-seed Claude Code with a read of the language manual (even the BNF etc) and a TLDR of what it is, results are far better. Just part of the initial prompt: read this summary page, read this grammar, and look at this example code.

                                                        I have had it writing LambdaMOO code, with my own custom extensions (https://github.com/rdaum/moor) and it's ... not bad considering.

                                                      • manca 2 days ago

                                                        I literally had the same experience when I asked the top code LLMs (Claude Code, GPT-4o) to rewrite code from an Erlang/Elixir codebase in Java. They got some things right, but most things wrong, and it required a lot of debugging to figure out what went wrong.

                                                        It's the absolute proof that they are still dumb prediction machines, fully relying on the type of content they've been trained on. They can't generalize (yet) and if you want to use them for novel things, they'll fail miserably.

                                                        • abrookewood 2 days ago

                                                          Clearly the issue is that you are going from Erlang/Elixir to Java, rather than the other way around :)

                                                          Jokes aside, they are pretty different languages. I imagine you'd have much better luck going from .NET to Java.

                                                          • tsimionescu 2 days ago

                                                            Sure, it's easier to solve an easier problem, news at eleven. In particular, translating from C# to Java could probably be automated with some 90% accuracy using a decent-sized bash script.

                                                            • nine_k 2 days ago

                                                              This mostly means that LLMs are good at simpler forms of pattern matching, and have a much harder time actually reasoning at any significant depth. (It's not easy even for human intellect, the finest we currently have.)

                                                            • nerdsniper 2 days ago

                                                              Claude Code / 4o struggle with this for me, but I had Claude Opus 4 rewrite a 2,500-line PowerShell script for embedded automation into Python, and it did a pretty solid job. A few bugs, but cheaper models were able to clean those up. I still haven't found a great solution for general refactoring -- like, I'd love to split it out into multiple Python modules, but I rarely like how it decides to do that without me telling it specifically how to structure the modules.

                                                              • conception 2 days ago

                                                                I’m curious what your process was. If you just said “rewrite this in Java” I’d expect that to fail. If you treated the LLM like a junior developer on an official project: worked with it to document the codebase, come up with a plan, define tasks for each part of the codebase, and set up a solid workflow prompt, I would expect it to succeed.

                                                                • 4hg4ufxhy a day ago

                                                                  There is a reason to go the extra mile for juniors. They eventually learn and become seniors. With AI I'd rather just do it myself and be done with it.

                                                                  • Marazan 2 days ago

                                                                    Yes, if you do all the difficult, time-consuming bits, I bet it would work.

                                                                  • h4ck_th3_pl4n3t 2 days ago

                                                                    I just wish the LLM providers would realize this and instead provide specialized LLMs for each programming language. The results would likely be better.

                                                                    • chuckadams 2 days ago

                                                                      The local models JetBrains IDEs use for completion are specialized per-language. For more general problems, I’m not sure over-fitting to a single language is any better for an LLM than it is for a human.

                                                                    • credit_guy a day ago

                                                                      If you try to ride a bicycle, do you expect to succeed on the first try? Getting AI code assistants to help you write high quality code takes time. Little by little you start having a feel for which prompts work and which don't, which types of tasks the LLMs are likely to perform well, and which ones are likely to result in hallucinations. It's a learning curve. A lot of people try once or twice, get bad results, and conclude that LLMs are useless. But few people conclude that bicycles are useless if they can't ride them after trying once or twice.

                                                                      • hammyhavoc 2 days ago

                                                                        They'll never be fit for purpose. They're a technological dead-end for anything like what people are usually throwing them at, IMO.

                                                                        • zer00eyz 2 days ago

                                                                          I will give you an example of where you are dead wrong, and one where the article is spot on (without diving into historic artifacts).

                                                                          I run HomeAssistant, and I don't get to play with/use it every day. Here, LLMs excel at filling in the legion of blanks in both the manual and end-user devices. There is a large body of work for them to summarize and work against.

                                                                          I also play with SBCs. Many of these are "fringe" at best. Here, LLMs are, as you say, "not fit for purpose".

                                                                          What kind of development you are using LLMs for will determine your experience with them. The tool may or may not live up to the hype, depending on how "common", well documented, and "frequent" your issue is. Once you start hitting these "walls", you realize that no, real reasoning, leaps of inference, and intelligence are still far away.

                                                                          • motorest 2 days ago

                                                                            > They'll never be fit for purpose. They're a technological dead-end for anything like what people are usually throwing them at, IMO.

                                                                            This comment is detached from reality. LLMs in general have been proven effective at creating complete, fully working, fully featured projects from scratch. You need to provide the necessary context and use popular technologies with a large enough corpus for the LLM to know what to do. If one-shot approaches fail, a few iterations are all it takes to bridge the gap. I know that to be a fact because I do it on a daily basis.

                                                                        • edent 2 days ago

                                                                          Vibe Coding seems to work best when you are already an experienced programmer.

                                                                          For example "Prompt: Write me an Atari BASIC program that draws a blue circle in graphics mode 7."

                                                                          You need to know that there are various graphics modes and that mode 7 is the best for your use-case. Without that preexisting knowledge, you get stuck very quickly.

                                                                          • baxtr 2 days ago

                                                                            This is a description of a “tool”. Anyone can use a hammer and chisel to carve out wood, but only an artist with extensive experience will create something truly remarkable.

                                                                            I believe many in this debate are confusing tools with magic wands.

                                                                            • tonyhart7 2 days ago

                                                                              The marketing and social media buzz claiming that AI (artificial intelligence) will replace human intelligence, or everyone's jobs, doesn't help either.

                                                                              Sure, maybe someday, but not today. Though there are jobs that are already being replaced, for example in the writing industry.

                                                                              • Sharlin a day ago

                                                                                > I believe many in this debate are confusing tools with magic wands.

                                                                                Unfortunately, it's usually the ones who control the money.

                                                                              • JdeBP 2 days ago

                                                                                Previous generations would have simply read something like the circle drawing writeup by Jeffrey S. McArthur in chapter 4 of COMPUTE!'s Third Book of Atari, which as a matter of fact is available in scrapable text. (-:

                                                                                * https://archive.org/details/ataribooks-computes-third-book-o...

                                                                                * https://atariarchives.org/c3ba/page153.php

                                                                                Fun fact: Orson Scott Card can be found in chapter 1.

                                                                                • ack_complete a day ago

                                                                                  Even then, I've seen LLMs generate code with subtle bugs that even experienced programmers would trip on. For the Atari specifically, I've seen:

                                                                                  - Attempting to use BBC BASIC features in Atari BASIC, in ways that parsed but didn't work
                                                                                  - Corrupting OS memory due to using addresses only valid on an Apple II
                                                                                  - Using the ORG address for the C64, such that it corrupts memory if loaded from Atari DOS
                                                                                  - Assembly that subtly doesn't work because it uses 65C02 instructions that execute as a NOP on a 6502
                                                                                  - Interrupt handlers that occasionally corrupt registers
                                                                                  - Hardcoding internal OS addresses only valid for the OS ROM on one particular computer model

                                                                                  The POKE 77,0 in the article is another good example. ChatGPT labeled that as hiding the cursor, but that's wrong -- location 77 is the attract timer counter in the Atari OS. Clearing it to 0 periodically resets the timer that controls the OS's primitive screensaver. But in order for this to work, it has to be done periodically -- doing it at the start will just reset the timer once, after which attract mode will start in 9 minutes. So effectively, this is an easter egg that got snuck into the program, and even if the unrequested behavior were desirable, it doesn't work.

                                                                                  • throwawaylaptop 2 days ago

                                                                                    Exactly this. I'm a self-taught PHP/jQuery guy who learned it well enough to make an entire SaaS that enough companies pay for that it's a decent little lifestyle business.

                                                                                    I started another project recently, basically vibe coding in PHP. Instead of a single-page app like I made before, it's just page-by-page loading. Which means the AI only needs to keep a few functions and the database in its head, not constantly work on some crazy UI management framework (whatever that's called).

                                                                                    It's made in a few days what would have taken me weeks as an amateur. Yet I know enough to catch a few 'mistakes' and remind it to do it better.

                                                                                    I'm happy enough.

                                                                                    • johnisgood 2 days ago

                                                                                      Exactly. I would not like to be called a vibe coder for using an LLM for tedious tasks, though; is it not a pejorative term? I used LLMs for a few projects and they did well, because I knew what I wanted and how I wanted it. So yeah, you do have to be an experienced programmer to excel with LLMs.

                                                                                      That said, you can learn a lot using LLMs, which is nice. I have a friend who wants to learn Python, and I have given him actual resources, but I have also told him to use LLMs.

                                                                                      • j4coh 2 days ago

                                                                                        In this case I asked ChatGPT without the part specifying mode 7 and it replied with a working program using mode 7, with a comment at the top that mode 7 would be the best choice.

                                                                                        • motorest 2 days ago

                                                                                          > Vibe Coding seems to work best when you are already an experienced programmer.

                                                                                          I think that is a very vague and ambiguous way of putting it.

                                                                                          I would frame it a tad more specifically: vibe coding seems to work best when users know what they want and are able to set requirements and plan ahead.

                                                                                          Vibe coding doesn't work at all, or produces an unmaintainable, god-awful mess, if users don't do software engineering and instead hack stuff together hoping it works.

                                                                                          Garbage in, garbage out.

                                                                                          • kqr 2 days ago

                                                                                            Not only is asking for mode 7 a useful constraint, but making sure the context contains domain-expert terminology puts the LLM in a better spot in the sampling space.

                                                                                            • cfn 2 days ago

                                                                                              Just for fun I asked ChatGPT "How would you ask an LLM to write a drawing program for the ATARI?" and it asked back for a bunch of details, to which I answered "I have no idea, just go with the simplest option". It chose the correct graphics mode and BASIC and created the program (which I didn't test).

                                                                                              I still agree with you for large applications but for these simple examples anyone with a basic understanding of vibe coding could wing it.

                                                                                              • forinti 2 days ago

                                                                                                Exactly! If you can't properly assess the output of the AI, you are really only shooting into the dark.

                                                                                              • Earw0rm 2 days ago

                                                                                                Has anyone tried it on x87 assembly language?

                                                                                                 For those that don't know: x87 was the FPU for 32-bit x86 architectures. It's not terribly complicated, but it uses stack-based register addressing with a fixed-size (eight-entry) stack.

                                                                                                 All operations involve the top-of-stack register and at most one other register operand, with the result written to one of the two (optionally popping the old top of stack when done).

                                                                                                 It's hard but not horribly so for humans to write... more a case of being annoyingly slow and having to be methodical, because you have to reason about the state of the stack at every step.

                                                                                                I'd be very curious as to whether a token-prediction machine can get anywhere with this kind of task, as it requires a strong mental model of what's actually happening, or at least the ability to consistently simulate one as intermediate tokens/words.
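
                                                                                                 As a toy illustration of that bookkeeping, here is a minimal Python sketch (hypothetical, nowhere near full x87 semantics) of how a single fld renumbers every register:

                                                                                                     # Toy model of the x87 register stack: index 0 is st(0), the top.
                                                                                                     # Hypothetical sketch only -- real x87 adds tag words, exceptions, etc.
                                                                                                     stack = []

                                                                                                     def fld(value):   # push: every st(i) becomes st(i+1)
                                                                                                         stack.insert(0, value)

                                                                                                     def faddp():      # st(1) := st(0) + st(1), then pop
                                                                                                         stack[1] = stack[0] + stack[1]
                                                                                                         del stack[0]

                                                                                                     fld(1.0)      # st(0)=1.0
                                                                                                     fld(2.0)      # st(0)=2.0, st(1)=1.0  <- the 1.0 just got renumbered
                                                                                                     fld(3.0)      # st(0)=3.0, st(1)=2.0, st(2)=1.0
                                                                                                     faddp()       # st(0)=5.0, st(1)=1.0
                                                                                                     print(stack)  # [5.0, 1.0]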

                                                                                                • userbinator 2 days ago

                                                                                                  If you are familiar with HP's calculators, x87 Asm isn't that difficult. Also noteworthy is that its density makes it a common choice for tiny demoscene productions.

                                                                                                  • Earw0rm 2 days ago

                                                                                                    Not too bad to write, kind of horrible to read.

                                                                                                  • silisili 2 days ago

                                                                                                    I'm going to doubt that. I was pushing GPT a couple of weeks ago to test its limits. It's 100% unable to write compilable Go ASM syntax. In fairness, it's slightly oddball, but enough exists that it's not esoteric.

                                                                                                    In the error feedback cycle, it kept blaming Go, not itself. A bit eye-opening.

                                                                                                    • messe 2 days ago

                                                                                                      I'm comfortable writing asm for quite a few architectures, but Go's assembler...

                                                                                                      When I struggle to write Go ASM, I also blame Go and not myself.

                                                                                                      • Earw0rm 2 days ago

                                                                                                        The thing with x87 is that it's easy to write compilable, correct-looking code, and much harder to write correct compilable code, even for trivial sequences of a dozen or so operations.

                                                                                                        Whereas in most asm dialects register AX is always register AX (word-length aliasing aside), that's not the case for x87: the object/value at ST3 in one operation may be ST1 or ST5 a couple of instructions later.

                                                                                                      • FeepingCreature 2 days ago

                                                                                                        Prediction: it can do it, so long as you tell it to explicitly keep track of the FPU stack in comments on every FPU instr.

                                                                                                      • ofrzeta 2 days ago

                                                                                                        It didn't go well? I think it went quite well. It even produced an almost working drawing program.

                                                                                                        • abrookewood 2 days ago

                                                                                                          Yep, thought the same thing. I guess people have very different expectations.

                                                                                                        • ofrzeta 2 days ago

                                                                                                          Even though I was impressed with the original article (so, kind of contrary to the author), in the meantime I tried the same thing with Claude Sonnet 4 (because some people here criticized the approach for not using a proper "coding model") and got no better results. I tried about a dozen iterations, but it did not manage to create a "BASIC program for the Atari 800XL that makes use of display list interrupts to draw rainbow-like colored horizontal stripes", although this is like a "hello world" for that technique and there should be plenty of samples on the Internet. I am curious to see if anyone can make that work with an LLM.

                                                                                                          • JKCalhoun 2 days ago

                                                                                                            Are there really plenty of examples on the internet?

                                                                                                            My first thought reading the article was that Atari BASIC is kind of specialized. If BASIC is an under-represented language in general on the internet (compared to JavaScript, for example), then Atari BASIC has to be a white whale.

                                                                                                        • ilaksh 2 days ago

                                                                                                          I think it's a fair article.

                                                                                                          However, I will just mention a few things. When you write an article like this, please take note of the particular language model used, and acknowledge that they aren't all the same.

                                                                                                          Also, realize that the context window is pretty large, and you can help the model by giving it information from manuals etc., so it doesn't need to rely entirely on its intrinsic knowledge.

                                                                                                          If they had used o3 or o3 Pro and given it a few sections of the manual, it might have gotten farther. Also, if someone finds a way to connect an agent to a retro computer, like an Atari BASIC MCP that can enter text and take screenshots, "vibe coding" can work better, as an agent that can see errors can self-correct.

                                                                                                          • xiphias2 2 days ago

                                                                                                            4o is not even a coding model, and it is very far from the best coding models OpenAI has. I seriously don't understand why these articles are upvoted so much.

                                                                                                            • throw101010 2 days ago

                                                                                                              > I seriously don't understand why these articles are upvoted so much

                                                                                                              It confirms a bias for some; it triggers others who might hold the opposite position (and maybe have a bias too, on the other end).

                                                                                                              Perfect combo for successful social media posts... literally all about "attention" from start to finish.

                                                                                                              • layer8 a day ago

                                                                                                                A coding model doesn't seem to produce better results: https://news.ycombinator.com/item?id=44624222

                                                                                                                • undefined 2 days ago
                                                                                                                  [deleted]
                                                                                                                  • timeon 2 days ago

                                                                                                                    A short while ago it was like "you are still using <X>? Why not the new 4o?!"

                                                                                                                  • Mikhail_Edoshin 2 days ago

                                                                                                                     I don't know if anyone notices this, but LLMs are very much like what you see in dreams when you reflect on them while awake. When you're asleep, a dream feels very coherent; but when you're awake, you see wild gaps and jumps, and the overall impression of many dreams' plots is that they are pure nonsense.

                                                                                                                     I once heard advice to try re-reading the text you see in dreams. And I did that once. It was a phrase where one of the words referred to a city. The first time I read it, the city was "London". I remembered the advice and re-read the phrase, and the word changed to the name of an old Russian city, "Rostov". Yet the phrase was the "same", that is, it felt the same in the dream, even though the city was different.

                                                                                                                     LLMs are like that dream mechanics. Something else is reflected in what we know (e.g. an image of a city is rendered as the name of a city, just any city). So, on one hand, we do have a similar mechanism in our minds. But on the other hand, our normal reasoning is very much unlike that. It would be a very wild stretch to believe that reasoning somehow stems from dreaming. I'd say reasoning is the opposite of dreaming. If we amplify our dreaming mechanics we won't get a genius; more likely we'll get a schizophrenic.

                                                                                                                    • neom a day ago

                                                                                                                      I think that is a cool way of looking at it. To build on what you said: in both instances (dreaming and LLMs), it may be related to what they are trying to do (presuming dreams have a purpose), plus the resources they have to do it with, plus the context they have to work in to get the point across, plus something related to the abilities of the user. Let's say, for fun, there is a subsystem that understands and runs dreaming: you only have so much time, it's a weird modality, and you're trying to do something, so maybe it's fine to just serve up the story no matter how muddled it is. What you might be describing in LLMs is a similar thing. Dreams have a ticking clock where the brain chemistry is literally changing and the opportunity for that type of processing is about to disappear, and eventually the human will awaken; LLMs have a context window size. Fun thinking, anyway.

                                                                                                                      • Fade_Dance a day ago

                                                                                                                        Well, it's neither schizophrenic nor using human reasoning. It's a machine using transformers; there's no reason to overly anthropomorphize it, but sure, the parallels do exist to some degree.

                                                                                                                        These tools are obviously not reasoning tools in the classic sense, as they're not building off core axiomatic truths like classic logic. Those truths may be largely embedded in the probabilistic output, since the input universe we feed them is based on human reason, but it's certainly not a native power of these tools. That said, we are of course tacking on more and more of this ability (the ability to check databases of facts, iterative "reasoning" models, etc.), so they are becoming "adequate" in these respects.

                                                                                                                        The dreaming comparison seems quite apt, though. I entirely get what you mean by rereading the dream word and seeing it suddenly transformed to another city name, yet somehow also "fitting". For some reason I'm keenly aware of these sorts of relationships when I think about my dreams. I will think of a situation and immediately be able to identify the input memories that "built" the dream scenario. Usually they involve overlapping themes and concepts, as well as some human-specific common targets like "unresolved, emotionally charged, danger, etc." (presumably running through these types of brain neurons provides some sort of advantage for mammals, which makes sense to me).

                                                                                                                        What an LLM does is essentially create a huge interconnected conceptual web of the universe it is fed, and then use probabilistic models to travel through these chains, much like how dreaming does a trance-like dance through these conceptual connections. In the former case, though, we have optimized the traversal to be as close to a "mock awake human" as possible. If the dream poem is dreary in nature, and Rostov sounds dreary, and you were hearing about the dreary London rain earlier in the day, and you have a childhood memory of reading through a dreary poem that made you very sad, that's the perfect sort of overlapping set of memory synapses for a dream to light up. And when looking back, all you'll see is a strange, phantasmic, and (usually, not always) frustratingly inaccessible conglomeration of these inputs.

                                                                                                                        This sort of traversal isn't just used in dreaming, though. To some degree we're doing similar things when we do creative thinking. The difference is, and this is especially so in "modern" life, that we're strongly filtering our thoughts through language first and foremost (and indeed there's a lot of philosophical and scientific work on how extremely important language is to humanness), but also through basic logic.

                                                                                                                        LLMs inherit some of the power/magic of language, in that they deconstruct the relationships between the concepts behind language that carry so much embedded meaning. But they aren't filtering through logic like we do. Well, reasoning models do to some degree, but it is obviously quite rudimentary.

                                                                                                                        I think it's a good analogy.

                                                                                                                      • danjc 2 days ago

                                                                                                                        What's missing here is tool use.

                                                                                                                         For example, if the LLM had a compile tool, it would likely have been able to correct syntax errors.

                                                                                                                        Similarly, visual errors may also have been caught if it were able to run the program and capture screens.
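
                                                                                                                         A minimal sketch of such a loop (hypothetical: ask_llm() stands in for whatever model API is used, and "atbasic-check" for a real syntax checker or emulator CLI):

                                                                                                                             import subprocess

                                                                                                                             def ask_llm(prompt: str) -> str:
                                                                                                                                 """Hypothetical stand-in for a call to whatever model API you use."""
                                                                                                                                 raise NotImplementedError

                                                                                                                             # One compile-and-retry loop: feed the tool's errors back into the prompt.
                                                                                                                             code = ask_llm("Write an Atari BASIC program that draws a blue circle.")
                                                                                                                             for attempt in range(5):
                                                                                                                                 with open("prog.bas", "w") as f:
                                                                                                                                     f.write(code)
                                                                                                                                 # "atbasic-check" is a hypothetical syntax checker / emulator CLI.
                                                                                                                                 result = subprocess.run(["atbasic-check", "prog.bas"],
                                                                                                                                                         capture_output=True, text=True)
                                                                                                                                 if result.returncode == 0:
                                                                                                                                     break  # compiles cleanly; stop iterating
                                                                                                                                 code = ask_llm("This failed to compile:\n" + result.stderr
                                                                                                                                                + "\nFix the program:\n" + code)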

                                                                                                                        • wbolt 2 days ago

                                                                                                                           Exactly! The way the LLM is used here is very, very basic and outdated. This experiment should be redone in a proper "agentic" setup where there is a feedback loop between the model and the runtime, plus access to documentation and the internet. The goal now is not to encapsulate all the knowledge inside a single LLM; that is too problematic and costly. An LLM is a language model, not a knowledge database. It lets you interpret and interact with knowledge and text data from multiple sources.

                                                                                                                        • Yokolos 2 days ago

                                                                                                                          When I start getting nonsense in back-and-forth prompting, I've found it best to just start a new chat/context with the latest working version and then try again with a slightly more detailed prompt that tries to avoid the issues encountered in the previous chat. It usually helps. AI generally gets itself lost quickly, which can be annoying.

                                                                                                                          • rzz3 a day ago

                                                                                                                            > With the rise of LLM systems (or “AI” as they are annoyingly called),

                                                                                                                            I made it this far and realized the rest wasn’t worth reading. Language evolves, words change, and AI means what it means now. It turns out it’s actually really useful to have an abstraction above the concept of LLMs to talk about the broader set of these types of technologies, and generally speaking I find that these very pedantic types of people don’t bring me useful new perspectives.

                                                                                                                            • andriamanitra a day ago

                                                                                                                              > Language evolves, words change, and AI means what it means now. It turns out it’s actually really useful to have an abstraction above the concept of LLMs to talk about the broader set of these types of technologies

                                                                                                                              I agree that "AI" can be useful as an umbrella term, but using it when referring specifically to the "LLM" subset of AI technologies is not useful. A ton of information about the capabilities and limitations of the system is lost when making that substitution. I understand why marketing departments are pushing everything as "AI" to sell a product but as consumers we should be fighting against that.

                                                                                                                              • rzz3 2 hours ago

                                                                                                                                People who aren’t software engineers need to be able to converse about these technologies, and no one outside of our world cares about the difference between LLMs and other types of AI.

                                                                                                                            • bethekidyouwant a day ago

                                                                                                              It literally worked fine, but then he called it a bust for no reason. Also, using the free version of ChatGPT is a bold choice.

                                                                                                                              • Radle 2 days ago

                                                                                                                I had way better results. I'd assume the same would have happened for the author if he had provided the LLM with full documentation on what Atari BASIC is and some example programs.

                                                                                                                Especially for the drawing program and the game, the author would probably have received working code if he had supplied the AI with documentation for the graphics functions and sprite rendering in Atari BASIC.

                                                                                                                                • docandrew 2 days ago

                                                                                                                                  Maybe other folks’ vibe coding experiences are a lot richer than mine have been, but I read the article and reached the opposite conclusion of the author.

                                                                                                                                  I was actually pretty impressed that it did as well as it did in a largely forgotten language and outdated platform. Looks like a vibe coding win to me.

                                                                                                                                  • grumpyprole 2 days ago

                                                                                                                                     Sure, it did OK with examples that are easily found in a textbook, like drawing a circle.

                                                                                                                                    • sixothree 2 days ago

                                                                                                                                      Here's an example of a recent experience.

                                                                                                                                       I have a web site that is sort of a CMS. I wanted users to be able to add a list of external links to their items. When a user adds a link to an entry, the web site should go out and fetch a cached copy of the site. If there are errors, it should retry a few times. It should also capture an MHTML single-file archive as well as a full-page screenshot. The user should be able to refresh the cache, and the site should keep all past versions. The cached copy should be viewable in a modal. The task also involves creating database entities, DTOs, CQRS handlers, etc.

                                                                                                                                      I asked Claude to implement the feature, went and took a shower, and when I came out it was done.
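
                                                                                                                                       If you want a picture of the fetch-and-capture piece, here is a minimal sketch in Python with Playwright on Chromium. The names and the Playwright choice are hypothetical illustrations; the actual implementation lives in the CQRS handlers mentioned above.

                                                                                                                                       from playwright.sync_api import sync_playwright

                                                                                                                                       MAX_ATTEMPTS = 3  # retry a few times on errors, per the spec above

                                                                                                                                       def capture_link(url: str, out_prefix: str) -> None:
                                                                                                                                           with sync_playwright() as p:
                                                                                                                                               browser = p.chromium.launch()
                                                                                                                                               page = browser.new_page()
                                                                                                                                               try:
                                                                                                                                                   for attempt in range(1, MAX_ATTEMPTS + 1):
                                                                                                                                                       try:
                                                                                                                                                           page.goto(url, wait_until="networkidle")
                                                                                                                                                           # full-page screenshot of the rendered site
                                                                                                                                                           page.screenshot(path=f"{out_prefix}.png", full_page=True)
                                                                                                                                                           # MHTML single-file archive via the DevTools protocol
                                                                                                                                                           cdp = page.context.new_cdp_session(page)
                                                                                                                                                           snap = cdp.send("Page.captureSnapshot", {"format": "mhtml"})
                                                                                                                                                           with open(f"{out_prefix}.mhtml", "w", encoding="utf-8") as f:
                                                                                                                                                               f.write(snap["data"])
                                                                                                                                                           break  # success, stop retrying
                                                                                                                                                       except Exception:
                                                                                                                                                           if attempt == MAX_ATTEMPTS:
                                                                                                                                                               raise  # exhausted retries, surface the error
                                                                                                                                               finally:
                                                                                                                                                   browser.close()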

                                                                                                                                      • nico 2 days ago

                                                                                                                                         I'm pretty new to CC (Claude Code); I've been using it in a very interactive way.

                                                                                                                                        What settings are you using to get it to just do all of that without your feedback or approval?

                                                                                                                                         Are you also running it inside a container, setting some sort of command restrictions, or just YOLOing it in a regular shell?

                                                                                                                                        • hammyhavoc 2 days ago

                                                                                                                                           Let us know how the security audit of the output by human beings goes.

                                                                                                                                      • benbristow a day ago

                                                                                                                                         Annoying website. Cookie-policy banner at the start with the usual 'essential' or 'all' choice; then, about a fifth of the way down the article, the page fades out, making the article unreadable, with a box asking me to subscribe.

                                                                                                                                        • fcatalan 2 days ago

                                                                                                                                          I had more luck with a little experiment a few days ago: I took phone pics of one of the shorter BASIC listings from Tim Hartnell's "Giant Book of Computer Games" (I learned to program out of those back in the early 80s, so I treasure my copy) and asked Gemini to translate it to plain C. It compiled and played just fine on the first go.

                                                                                                                                          • ghuntley 2 days ago

                                                                                                                                            It's not really 'vibe coding' if you're copying and pasting from ChatGPT by hand...

                                                                                                                                            • serf 2 days ago

                                                                                                                                               Please just include the prompts rather than saying "So I said X...".

                                                                                                                                              There is a lot of nuance in how X is said.

                                                                                                                                              • heisenbit a day ago

                                                                                                                                                 Maybe the BASIC code out there bears too high a similarity to spaghetti, making it hard to abstract in latent space. OO and functional patterns are more localized.

                                                                                                                                                • firesteelrain 2 days ago

                                                                                                                                                   Not surprised; there were so many variations of BASIC, and unless you train ChatGPT on a bunch of code examples and contexts, it can only get so close.

                                                                                                                                                   Try a local LLM, then train it.

                                                                                                                                                  • ofrzeta 2 days ago

                                                                                                                                                    > ... unless you train ChatGPT on a bunch of code examples and contexts then it can only get so close.

                                                                                                                                                    How do you do this?

                                                                                                                                                    • Paradigma11 2 days ago

                                                                                                                                                       Gemini Pro 2.5 has a context window of 1 million tokens and plans to raise that to 2 million tokens soon. One token is approximately 0.75 words, so 1 million tokens is in the ballpark of 3,000 pages of code.

                                                                                                                                                       You can add some tutorials/language docs as context without any problem; the bigger your project gets, the more context it gets from there. You can also convert APIs/documentation into a RAG index and expose it to the LLM as an MCP tool.
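
                                                                                                                                                       As a rough sanity check on that arithmetic, you can count a project's tokens before deciding what goes in the window. A sketch assuming the tiktoken library, whose cl100k_base encoding is only a stand-in (Gemini's actual tokenizer differs):

                                                                                                                                                       import pathlib
                                                                                                                                                       import tiktoken  # assumption: OpenAI's tokenizer as a rough stand-in

                                                                                                                                                       enc = tiktoken.get_encoding("cl100k_base")
                                                                                                                                                       total = 0
                                                                                                                                                       for path in pathlib.Path("src").rglob("*.py"):  # hypothetical project tree
                                                                                                                                                           total += len(enc.encode(path.read_text(errors="ignore")))
                                                                                                                                                       print(f"{total} tokens; fits in a 1M-token window: {total < 1_000_000}")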

                                                                                                                                                      • firesteelrain 2 days ago

                                                                                                                                                         The gist is:

                                                                                                                                                        1. Gather training data

                                                                                                                                                        2. Format it into JSONL or Hugging Face Dataset format

                                                                                                                                                        3. Use Axolotl or Hugging Face peft to fine-tune

                                                                                                                                                        4. Export model to GGUF or HF format

                                                                                                                                                        5. Serve via Ollama

                                                                                                                                                        https://adithyask.medium.com/axolotl-is-all-you-need-331d5de...

                                                                                                                                                        https://www.philschmid.de/fine-tune-llms-in-2025

                                                                                                                                                        https://blog.devgenius.io/complete-guide-to-model-fine-tunin...
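
                                                                                                                                                         A minimal sketch of steps 2 and 3 using Hugging Face peft with a LoRA adapter; the model name, file names, and hyperparameters below are placeholders, not recommendations:

                                                                                                                                                         from datasets import load_dataset
                                                                                                                                                         from peft import LoraConfig, get_peft_model
                                                                                                                                                         from transformers import (AutoModelForCausalLM, AutoTokenizer,
                                                                                                                                                                                   DataCollatorForLanguageModeling, Trainer,
                                                                                                                                                                                   TrainingArguments)

                                                                                                                                                         base = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # placeholder small model
                                                                                                                                                         tok = AutoTokenizer.from_pretrained(base)
                                                                                                                                                         tok.pad_token = tok.eos_token

                                                                                                                                                         # Step 2: one {"text": "..."} JSON object per line of the JSONL file
                                                                                                                                                         data = load_dataset("json", data_files="basic_examples.jsonl")["train"]
                                                                                                                                                         data = data.map(lambda r: tok(r["text"], truncation=True, max_length=512),
                                                                                                                                                                         remove_columns=data.column_names)

                                                                                                                                                         # Step 3: wrap the base model with a LoRA adapter and fine-tune
                                                                                                                                                         model = get_peft_model(AutoModelForCausalLM.from_pretrained(base),
                                                                                                                                                                                LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))
                                                                                                                                                         Trainer(model=model,
                                                                                                                                                                 args=TrainingArguments("out", per_device_train_batch_size=1,
                                                                                                                                                                                        num_train_epochs=1),
                                                                                                                                                                 train_dataset=data,
                                                                                                                                                                 data_collator=DataCollatorForLanguageModeling(tok, mlm=False)).train()
                                                                                                                                                         model.save_pretrained("out/adapter")  # steps 4-5 convert this for GGUF/Ollama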

                                                                                                                                                        • oharapj 2 days ago

                                                                                                                                                          If you're OpenAI you scrape StackOverflow and GitHub and spend billions of dollars on training. If you're a user, you don't

                                                                                                                                                          • sixothree 2 days ago

                                                                                                                                                            RAG maybe?

                                                                                                                                                        • dogman1050 2 days ago

                                                                                                                                                           It never did draw a blue circle; the circle is orange or similar, but that's never mentioned in the article.

                                                                                                                                                          • clambaker117 2 days ago

                                                                                                                                                            Wouldn’t it have been better to use Claude 4?

                                                                                                                                                            • sixothree 2 days ago

                                                                                                                                                               I'm thinking Gemini CLI because of the context window. He could add some information about the programming language itself to the project; I think that would help immensely.

                                                                                                                                                              • 4b11b4 2 days ago

                                                                                                                                                                 Even though the max token limit is higher, it's more complicated than that.

                                                                                                                                                                 As the context length increases, undesirable things happen: recall and accuracy tend to degrade well before the hard limit.

                                                                                                                                                            • register 2 days ago

                                                                                                                                                               I'll tell you a secret: the same thing also happens with mainstream languages.

                                                                                                                                                              • calvinmorrison 2 days ago

                                                                                                                                                                 This is great. I am currently vibe-coding a replacement connector for some old EDI software written in a business-BASIC kind of language called ProvideX (a fork of it, actually, one with undocumented behaviour).

                                                                                                                                                                 It uses some built-in FTP tooling that's terrible and barely works anymore, even internally.

                                                                                                                                                                 We are replacing it with a WinSCP implementation, since WinSCP can talk over a COM object.

                                                                                                                                                                 Unsurprisingly, the COM object works great from BASIC; the problem is that I have no idea what I am doing. I spent hours writing something like

                                                                                                                                                                WINSCP_SESSION'OPEN(WINSCP_SESSION_OPTIONS)

                                                                                                                                                                 when I needed

                                                                                                                                                                WINSCP_SESSION'OPEN(*WINSCP_SESSION_OPTIONS)

                                                                                                                                                                 It was obvious afterwards, because it's a pointer kind of setup, but I didn't find it until I was pages and pages deep into old PDF manuals.

                                                                                                                                                                 And while none of the agents' vibe coding understood the system's syntax, it did help me analyse the old code, format it, and at least throw some stuff at the wall.

                                                                                                                                                                 I finished it up Friday; hopefully I deploy Monday.

                                                                                                                                                                • empressplay 2 days ago

                                                                                                                                                                  We have Claude Code writing Applesoft BASIC fine. It wrote a text adventure (complete with puzzles) and a PONG clone, among other things. Obviously it didn't do it 100% right straight out of the gate, but the hand-holding wasn't extensive.

                                                                                                                                                                   I've been using Grok 4 to write 6502 assembly language, and it's been a bit of a slog, but honestly the issues I've encountered are due mostly to my naivety. If I'm disciplined, make sure it has all of the relevant information, and work (very) incrementally, I've had some success writing game logic. You can't just tell it to build an entire game in one prompt, but if you're gradual about it you can go places with it.

                                                                                                                                                                   Like any tool, if you understand its idiosyncrasies you can cater for them and be productive with it. If you don't, then yeah, it's not going to go well.

                                                                                                                                                                  • undefined 2 days ago
                                                                                                                                                                    [deleted]
                                                                                                                                                                    • hammyhavoc 2 days ago

                                                                                                                                                                       Ah yes, truly impressive: Pong. A game that countless textbooks and tutorials have recreated numerous times. There's a mountain of training data for something so unoriginal.

                                                                                                                                                                    • j45 a day ago

                                                                                                                                                                       It can only code to the average level of the content it's trained on.

                                                                                                                                                                      • undefined 2 days ago
                                                                                                                                                                        [deleted]
                                                                                                                                                                        • alfiedotwtf 2 days ago

                                                                                                                                                                           I’ve been using my AirPods to talk to ChatGPT while I drive in to work... not coding talk, though; I'm using that time to pick up particle physics. So far, nothing looks hallucinated, though I’m only touching the surface and haven’t looked at the equations yet.

                                                                                                                                                                           Either way, I’m super happy that it has kept my drives to work very interesting!

                                                                                                                                                                          • 486sx33 2 days ago

                                                                                                                                                                            [dead]

                                                                                                                                                                            • pavelstoev 2 days ago

                                                                                                                                                                              I vibe coded a site about vibe 2 code projects. https://builtwithvibe.com/

                                                                                                                                                                              • esafak 2 days ago

                                                                                                                                                                                The "Yo dawg, I heard..." memes are writing themselves today.

                                                                                                                                                                              • CMay 2 days ago

                                                                                                                                                                                This does kind of make me wonder.

                                                                                                                                                                                 It's believable that we might see an increase in the number of new programming languages, since making new languages is becoming more accessible; or we could see fewer new languages, as the problems of the existing ones are worked around more reliably with LLMs.

                                                                                                                                                                                 Yet what happens to adoption? Getting people to adopt new languages may become harder as generations come to expect LLM support. Would you almost need to use LLMs to synthesize tons of code examples translated into the new language, just to prime the training data?

                                                                                                                                                                                 Once conversational intelligence machines reach a sort of godlike generality, maybe they could adapt to new languages very quickly from far fewer examples. That still might not help much with the gotchas of the tooling or other quirks.

                                                                                                                                                                                 So maybe we'll all snap to a new LLM super-language in 20 years, or we could be cementing ourselves into the most popular languages of today for the next 50.

                                                                                                                                                                                • hammyhavoc 2 days ago

                                                                                                                                                                                  Fantasy.

                                                                                                                                                                                  • CMay a day ago

                                                                                                                                                                                    Can you elaborate?