• adamzwasserman 12 minutes ago

    Job security for those of us who think like this.

    Two layers vibe coding can't touch: architecture decisions (where the constraints live) and cleanup when the junior-dev-quality code accumulates enough debt. Someone has to hold the mental model.

    • raw_anon_1111 16 hours ago

      When I first started coding, I knew how my code worked down to the assembly language, because that was the only way to get anything to run at a sufficient speed on a 1 MHz computer. I then graduated to C and C++ with some VB, and then C#, JavaScript, and Python.

      Back in 2000 I knew every server and network switch in our office, and eventually our self-hosted server room with a SAN and a whopping 3TB of RAM before I left. Now I just submit a YAML file to AWS.

      Code is becoming no different. I treat Claude/Codex as junior developers: I specify my architecture carefully, verify it after it’s written, and test the code that the AI writes for functionality and scalability against the requirements. But I haven’t looked at the actual code for the project I’m working on.

      I’ve had code that I wrote a year ago where I forgot what I did, and I just asked Codex questions about it.

      • mikaelaast an hour ago

        How do you verify the code without actually looking at it?

        • adamzwasserman 20 minutes ago

          Although I write very little code myself anymore, I don't trust AI code at all. My default assumption: every line is the most mid possible implementation, every important architecture constraint violated wantonly. Your typical junior programmer.

          So I run specialized compliance agents regularly. I watch the AI code and interrupt frequently to put it back on track. I occasionally write snippets as few-shot examples. Verification without reading every line, but not "vibe checking" either.
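
          As a concrete illustration of the few-shot approach: a short "golden" snippet in the repo's conventions doc gives the agent an exact shape to imitate instead of improvising. A minimal sketch, with hypothetical names (UserRecord and fetch_user are illustrative, not from any real project):

          ```
          # Hypothetical "golden" snippet for a conventions doc. It shows
          # the agent the preferred shape: typed records and explicit
          # errors, no silent None returns.
          from dataclasses import dataclass

          @dataclass(frozen=True)
          class UserRecord:
              user_id: int
              email: str

          def fetch_user(repo, user_id: int) -> UserRecord:
              row = repo.get(user_id)
              if row is None:
                  raise KeyError(f"no user with id {user_id}")
              return UserRecord(user_id=row["id"], email=row["email"])
          ```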

          • mikaelaast 16 minutes ago

            I like this. The few-shot example snippet method is something I’d like to incorporate in my workflow, to better align generated code with my preferences.

          • raw_anon_1111 an hour ago

            How do you verify the compiler without looking at the assembled code? How do you verify code that links against binary libraries?

            You run it and check for your desired behavior.

            • mikaelaast an hour ago

              (Those are hardly analogous comparisons to LLM generated code, are they?)

              So you do a vibe check?

              • raw_anon_1111 32 minutes ago

                What’s “vibe checking”?

                I input x, expect y behavior, and check for corner cases - just as I have checked for correctness for 40 years. Why do I care how the code was generated, as long as it has the correct behavior?

                Of course, multithreaded code is the exception, unless the LLM is putting a bunch of rnd() calls in the code to make it behave differently from run to run.
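
                A minimal sketch of that input-x, expect-y style, assuming a hypothetical agent-written normalize_phone(); you never read its body, you just pin down the behavior and the corner cases:

                ```
                # Black-box behavior check; names are hypothetical.
                import pytest
                from mymodule import normalize_phone  # module under test

                @pytest.mark.parametrize("raw, expected", [
                    ("(555) 123-4567", "+15551234567"),
                    ("555.123.4567", "+15551234567"),
                    ("+1 555 123 4567", "+15551234567"),
                ])
                def test_normalizes_us_numbers(raw, expected):
                    assert normalize_phone(raw) == expected

                def test_rejects_garbage():
                    # Corner case: the edges are where bugs hide.
                    with pytest.raises(ValueError):
                        normalize_phone("not a number")
                ```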

        • cyrusradfar 20 hours ago

          The metaphor I'd use is: can you understand a story if you don't read it in the original language? Code is a language that describes the function.

          I want to say, I've lived (briefly) through the time when folks felt that if you didn't understand the memory-management or even assembly-level ops of your code, you weren't going to be able to make it great.

          High-level languages, obviously, are a counter-argument demonstrating that you don't necessarily need to understand all the details to deliver a differentiated experience.

          Personally, I can get pretty far with a high-level mental model and deeper model of key high-throughput areas in the system. Most individuals aren't optimizing a system, they're building on top of a core innovation.

          At the core you need to understand the system.

          Code is A language that describes it, but there are others, and arguably, in a lot of cases, a nice visual language goes much further for our minds to operate on.

          • mikaelaast 20 hours ago

            Yes, and I like the points you are making. I feel like the mental models we make are exercises in a purer form of knowledge building than the code artifacts we produce. A kind of understanding that is liberated from the confines of languages.

          • pigon1002 16 hours ago

            ```
            - code I don’t need to model in my head (low risk, follows established conventions, predictable, easy to verify), and

            - code I can’t help modelling in my head (business-critical, novel, experimental, or introduces new patterns)
            ```

            I feel like there’s actually one or two more shades in between.

            Sometimes I think something belongs in the second category, but then it turns out it’s really more like the first. And sometimes something is second-category, but for the sake of getting things done, it makes more sense to treat it like the first.

            If vibe coding keeps evolving, this is probably the path it needs to explore. I just wonder what we’ll end up discovering along the way.

            • mikaelaast an hour ago

              If it’s in the second category, I struggle not to mentally model it. How do you stop yourself? And should you?

            • sinenomine 20 hours ago

              If the AI provides 0-1 nines of reliability and you refuse to provide the rest of the nines required by the customer, then who will provide them, and what is your role and claim to margin here?

              • mikaelaast 20 hours ago

                Creating work for the clean-up crew and leaving good money on the table for them (because it ain't gonna be cheap).

              • nacozarina 19 hours ago

                Have CC users been raving about rock-solid stability improvements, more insightful spending analytics, and overall quantum improvements in customer experience?

                No, most of the chatter I’ve heard here has been the opposite. Changes have been poorly communicated, surprising, and expensive.

                If he’s been vibe-coding all this and feeling impressed with himself, he’s smelling his own farts. The performance thus far has been ascientific, tone-deaf and piss-poor.

                Maybe vibe-coding is not for him.

                • dapangzi 17 hours ago

                  If you don't understand code, you're asking for a whole heap of trouble.

                  Why? You can't validate the LLM outputs properly, so you end up committing bugs and maybe even blatantly non-functional code.

                  My company is pressuring juniors to use LLMs when coding, and I'm finding none of them fully understand the LLM outputs, because they don't have enough engineering experience to find code smells, bugs, regressions, and antipatterns.

                  In particular, none of them have developed strong unit-testing skills, and they let the LLM mock everything because they don't know any better, when they should generally only mock API dependencies (see the sketch below). Sometimes the LLM will even mock integration tests, which to me is rarely a good idea.

                  So the tests that are supposed to validate the code are completely worthless.

                  It has led to multiple customer-impacting issues, and as tenured engineers we spend more time mopping up the slop than we do engineering.
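
                  A minimal sketch of the difference, with hypothetical names (a billing module with a tax_api dependency). The first test mocks away the very logic under test, so it can only ever pass; the second stubs only the external API boundary:

                  ```
                  from unittest.mock import patch
                  import pytest
                  import billing  # hypothetical module under test

                  def test_total_overmocked():
                      # Anti-pattern: the function being verified is
                      # itself mocked away, so this proves nothing.
                      with patch("billing.compute_invoice_total",
                                 return_value=100.0):
                          assert billing.compute_invoice_total([]) == 100.0

                  def test_total_mocks_only_the_api_boundary():
                      # Better: stub only the external tax-rate API and
                      # let the real business logic run on real inputs.
                      with patch("billing.tax_api.lookup_rate",
                                 return_value=0.10):
                          total = billing.compute_invoice_total(
                              [("widget", 50.0), ("gadget", 40.0)])
                          assert total == pytest.approx(99.0)  # 90 * 1.10
                  ```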

                  • tjr 20 hours ago

                    The "good riddance" attitude surprises me also. On one hand, it can be unpleasant to sort through obscure syntactical gobbledegook, like tracing around multiple levels of pointer indirection, but then again, I have found a certain enjoyable satisfaction in such things. It can be tough, but a good tough.

                    It does seem to me that the people who consistently get the best results from AI coding aren't that far away from the code. Maybe they aren't literally writing code any more, but still communicating with the LLM in terms that come from software development experience.

                    I think there will still be value in learning how to code, not unlike learning arithmetic and trigonometry, even if you ultimately use a calculator in real life.

                    But I think there will also still be value in being able to actually code in real life. If you have to fix a bug in a software product, you might be able to fix it with more precise focus than an LLM would, if you know where to look and what to do, potentially resulting in less re-testing.

                    Personally, I balk at the idea of taking responsibility for shipping a real software product that I (or, in a team environment, other humans on my team) don't understand. Perhaps that is my aerospace software background speaking -- and I realize most software is not safety-critical -- but I would be so much more confident shipping something if I understood how it worked.

                    I don't know. Maybe in time that notion will fade. As some are quick to point out, well, do you understand the compiled/assembled machine code? I do not. But I also trust the compilation process more than I trust LLMs. In aerospace, we even formally qualify tools like compilers to establish that they function as expected. LLM output, especially well-guided by good prompts and well-tested, may well be high quality, but I still lack trust in it.

                    • dapperdrake 19 hours ago

                      Many irrelevant differences between programming languages are now exposed for what they are.

                      Thinking clearly is just as relevant or encumbering as it always was.

                      • austin-cheney 9 hours ago

                        Don’t buy into self promotion bullshit. AI can be helpful. It’s another form of automation. It is not creative and will not make you a better programmer. The only thing that will make you a better programmer is time spent programming, just like with anything else.

                        • chrisjj 20 hours ago

                          Great question, but not specific to LLMs. Same applies to importing a C library.

                          Answer: no. Just harder.

                          • bediger4000 21 hours ago

                            That seems like exactly the wrong lesson to learn from LLM "AI". Under no circumstances does such an "AI" understand anything, much less important semantics, so human understanding becomes that much more important.

                            I realize that director-level managers may not get this, because they've always lived and worked in the domain of "vibes", but that doesn't mean it's not true.