• SubiculumCode 6 hours ago

    I definitely would be okay if we hit an AI winter; our culture and world cannot adapt fast enough for the change we are experiencing. In the meantime, the current level of AI is just good enough to make us more productive, but not so good as to make us irrelevant.

    • kookamamie 2 hours ago

      I hope this will happen, too. I think it might, as soon as investors realize that LLMs will not become the AGI they were sold on.

      • bitmasher9 3 hours ago

        I think negative feedback loops of AIs trained on AI generated data might lead to a position where AI quality peaks and slides backwards.

        • Paradigma11 2 minutes ago

          We are just at the beginning of integrating external tools into the process and developing complex cognitive structures. The LLM is just one part of it. Until now it was cheaper and easier to improve that part, especially since other work would have been rendered obsolete by LLM improvements.

          • energy123 an hour ago

            I would not bet against synthetic data. AlphaZero is trained only on synthetic data and it's better than any human, and keeps getting better with more training compute. There is no negative feedback loop in the narrow cases we have tried previously. There may be trade-offs but on net we are going forward.
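
            A toy sketch of the mechanism (my own illustration, nothing like AlphaZero's actual MCTS + network; just crude Monte Carlo self-play on 1-pile Nim): the game rules act as an oracle that labels every synthetic game with a true outcome, so the training signal can't drift the way AI-generated text can.

                import random
                from collections import defaultdict

                def self_play(policy, start=10, eps=0.2):
                    # one game of 1-pile Nim (take 1-3, taking the last stone wins),
                    # both sides following the current policy with some exploration
                    pile, player, history = start, 0, []
                    while pile > 0:
                        moves = [m for m in (1, 2, 3) if m <= pile]
                        move = policy.get(pile) if random.random() > eps else None
                        if move not in moves:
                            move = random.choice(moves)
                        history.append((player, pile, move))
                        pile -= move
                        player = 1 - player
                    return 1 - player, history  # whoever took the last stone won

                def train(games=5000):
                    score, policy = defaultdict(lambda: defaultdict(float)), {}
                    for _ in range(games):
                        winner, history = self_play(policy)  # purely synthetic data
                        for player, pile, move in history:
                            # the rules, not another model, provide the label
                            score[pile][move] += 1 if player == winner else -1
                        policy = {p: max(s, key=s.get) for p, s in score.items()}
                    return policy

                print(train())  # moves that tend to win get reinforced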

            • csande17 24 minutes ago

              There's a pretty big difference between AlphaZero and a "generative AI" program: AlphaZero has access to an oracle that can tell it whether it's making valid moves and winning games.

              By comparison, getting accurate feedback on whether facts are correct in a piece of text (for example) is much more difficult and expensive. At least, presumably that's why AI companies publish staged demo videos where the AI still makes factual errors half the time.

            • sgt101 3 hours ago

              Thank goodness we have version control systems then.

              • phoe-krk 2 hours ago

                "Version control systems", in case of AI, mean that their knowledge will stay frozen in time, and so their usefulness will diminish. You need fresh data to train AI systems on, and since contemporary data is contaminated with generative AI, it will inevitably lead to inbreeding and eventual model collapse.

              • adventured 2 hours ago

                AI will radically leap forward in specialized function gain over the next decade. That's what everybody should be focusing on. It'll rapidly splinter and acquire dominance over the vast minutiae. The intricacy of the endeavor will be led by the AI itself, as it'll fly-wheel itself on becoming an expert at every little thing far faster than we can. We're just seeding that possibility now. Not only will it not slide backwards, it'll leap a great distance forward from where it's at now.

                Mainframes -> desktop computers -> a computer in every hand

                Obese LLMs you visit -> agents riding with you wherever you are, integrated into your life and things -> everything everywhere, max specialization and distribution into every crevice, dominance over most tasks whether you're there or active or not

                They haven't even really started working together yet. They're still largely living in sandboxes. We're barely out of the first inning. Pick a field you can name, e.g. aircraft/flight, and it's likely hardly even at the first pitch.

                In hindsight people will (jokingly?) wonder whether AI self-selected software development as one of its first conquests, as the ultimate foot in the door so it could pursue dominion over everything else (of course it had to happen in that progression; it'll prompt some chicken or the egg debates 30-50 years out).

            • tim333 an hour ago

              I usually disagree with Gary Marcus, but his basic point seems fair enough, if not surprising: Large Language Models model language about the world, not the world itself. For a human-like understanding of the world you need some understanding of concepts like space, time, emotion, other creatures' thoughts and so on, all things we pick up as kids.

              I don't see much reason why future AI couldn't do that rather than just focusing on language though.

              • code51 31 minutes ago

                The underlying assumption is that language and symbols are enough to represent phenomena. Maybe we are falling for this one in our own heads as well.

                Understanding may not be a static symbolic representation. Contexts of the world are infinite and continuously redefined. We believed we could represent all contexts tied to information, but that's a tough call.

                Yes, we can approximate. No, we can't completely say we can represent every essential context at all times.

                Some things might not be representable at all by their very chaotic nature.

                • tim333 15 minutes ago

                  I did think that human mental modeling of the world is also quite rough and often inaccurate. I don't see why AI can't become human-like in its abilities, but accurately modeling all the relativistic quarks in an atom is a bit beyond anything just now.

              • extr 6 hours ago

                I find Gary's arguments increasingly semantic and unconvincing. He lists several examples of how LLMs "fail to build a world model", but his definition of "world model" is an informal hand-wave ("a computational framework that a system (a machine, or a person or other animal) uses to track what is happening in the world"). His examples are lifted from a variety of unclear or obsolete models - what is his opinion of O3? Why doesn't he create or propose a benchmark that researchers could use to measure progress of "world model creation"?

                What's more, his actual point is unclear. Even if you simply grant, "okay, even SOTA LLMs don't have world models", why do I as a user of these models care? Because the models could be wrong? Yes, I'm aware. Nevertheless, I'm still deriving substantial personal and professional value from the models as they stand today.

                • squirrel 5 hours ago

                  He cites o3 and o4-mini as examples of LLMs that play illegal chess moves.

                  • Lerc 5 hours ago

                    I don't understand the reasoning behind concluding that if something fails a task that requires reasoning, then that thing cannot reason.

                    To use chess as an example: humans sometimes play illegal moves. That does not mean humans cannot reason. It is an instance of failing to show proof of reasoning, not a proof of the inability to reason.

                    • voidhorse 5 hours ago

                      I don't think that's a fair representation of the argument.

                      The argument is not "here's one failure case, therefore they don't reason". The argument is that, systematically, if you give LLMs problem instances outside their training sets in domains with clear structural rules, they will fail to solve them. The argument then goes that they must not have an actual model or understanding of the rules, as they seem to only be capable of solving problems in the training set. That is, they have failed to figure out how to solve novel problem instances of general problem structures using logical reasoning.

                      Their strict dependence on having seen the exact or extremely similar concrete instances suggests that they don't actually generalize—they just compute a probability based on known instances—which everyone knew already. The problem is we just have a lot of people claiming they are capable of more than this because they want to make a quick buck in an insane market.

                      • Lerc 4 hours ago

                        That still seems unfalsifiable. If it fails an instance, the claim is that the failure is representative of things outside the training set. If it succeeds, the claim is that the instance was in the training set. Without a definitive way to say something is not in the training set (a likely impossible task), the measure of success or failure is the only indicator of the purported reason for that success or failure.

                        Given that models can get things wrong even when the training data contains the answer, failure cannot show absence.

                        • voidhorse 4 hours ago

                          I do think there are cases in which, in controlled environments, there is some degree of knowledge as to what is in the training set. I also don't think it's as impossible as you assume.

                          If you really wanted to ensure this with certainty, just use the natural numbers to parameterize an aspect of a general problem. Assume there are N foo problems in the training set; then there is always an (N+1)th parameterization not in the training set, and you can use it as an indicative case. Go ahead and generate an insane number of these, and eventually the probability that a given instance is not in the set is effectively 1.

                          Edit: Of course, it would not be perfect certainty, but it is probabilistically effectively certain. The number of problem instances in the set is necessarily finite, so if you go large enough you get what you need. Sure, you wouldn't be able to say that a specific problem instance is not in the set, but the aggregate results would evidence whether or not the LLM deals with all cases or (on assumption) just known ones.
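
                          A minimal sketch of what I mean, with ask_model() as a placeholder for whatever LLM you're probing and many-digit addition as the parameterized problem family:

                              import random

                              def ask_model(prompt: str) -> str:
                                  raise NotImplementedError("call the model under test here")

                              def addition_probe(digits=40, trials=100):
                                  # almost none of these instances can plausibly be in any
                                  # training set, so only the aggregate score matters
                                  correct = 0
                                  for _ in range(trials):
                                      a = random.randrange(10**(digits - 1), 10**digits)
                                      b = random.randrange(10**(digits - 1), 10**digits)
                                      q = f"What is {a} + {b}? Reply with only the number."
                                      correct += ask_model(q).strip() == str(a + b)
                                  return correct / trials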

                          • Lerc 4 hours ago

                            Well, there are models that can sum two many-digit numbers. They certainly have not been trained on every pair of integers up to that level. That either makes the claim that they can't do things they haven't seen trivially false, or else the criteria for counting something as being in the training data include a degree of inference.

                            What happens when someone claims they have gotten a model to do something not in the training data, and another person claims it must be encoded in the training data in some form? It seems like an impasse.

                        • energy123 4 hours ago

                          The lack of rigor and evidence behind the argument is the problem.

                        • imtringued 2 hours ago

                          Anthropomorphic fallacy.

                          Human fails at task due to not knowing the rules in perfect detail.

                          AI fails at task even though it knows the rules and could easily reproduce them for chess and dozens of chess variants.

                          "Look! The fallibility of humans rubbed off onto the AI, proving that they are more human and AGI than we give them credit to!"

                        • seanhunter 2 hours ago

                          But really, so what? We already have specialised chess engines (Stockfish, Leela, AlphaZero, etc.) that are far, far stronger than humans will ever be, so insofar as that’s an interesting goal, we achieved it with Deep Blue and have gone way, way beyond it since. The fact that a large language model isn’t able to discern legal chess moves seems to me to be neither here nor there. Most humans can’t do that either. I don’t see it as evidence of a lack of a world model either (because most people with a real chess board in front of them and a mental model of the world can’t reliably play legal chess moves).

                          I find it astonishing that people pay any attention to Gary Marcus and doubly so here. Whether or not you are an “AI optimist”, he clearly is just a bloviator.

                          • undefined 2 hours ago
                            [deleted]
                        • voidhorse 6 hours ago

                          I think the point is that category errors or misinterpreting what a tool does can be dangerous.

                          Both statistical data generators and actual reasoning are useful in many circumstances, but there are also circumstances in which thinking that you are doing the latter when you are only doing the former can have severe consequences (example: building a bridge).

                          If nothing else, his perspective is a counterbalance to what is clearly an extreme hype machine that is doing its utmost to force adoption through overpromising, false advertising, etc. These are bad things even if the tech does actually have some useful applications.

                          As for benchmarks, if you fundamentally don't believe that stochastic data generation leads to reason as an emergent property, developing a benchmark is pointless. Also, not everyone has to be on the same side. It's clear that Marcus is not a fan of the current wave. Asking him to produce a substantive contribution that would help them continue to achieve their goals is preposterous. This game is highly political too. If you think the people pushing this stuff are less than estimable or morally sound, you wouldn't really want to empower them or give them more ideas.

                          • NitpickLawyer 5 hours ago

                            > If nothing else, his perspective is a counterbalance to what is clearly an extreme hype machine that is doing its utmost to force adoption through overpromising, false advertising, etc. These are bad things even if the tech does actually have some useful applications.

                            In other words, overhyped in the short term, underhyped in the long term. Where short and long term are extremely volatile.

                            Take programming as an example. 2.5 years ago, gpt3.5 was seen as "cute" in the programming world. Oh, look, it does poems and e-mails, and the code looks like Python but it's wrong 9 times out of 10. But now a 24B model can handle end-to-end SWE tasks 0-shot a lot of the time.

                            • nmadden 4 hours ago

                              The improvements in programming are largely due to the adoption of “agentic” architectures. This is really a hybrid neural-symbolic approach, with the symbolic part being the interpreter/compiler. Effectively the LLM still produces an almost-correct-but-wrong program, the compiler “fact-checks” it, and the LLM basically local-searches its way from there to something that passes the compiler. (If you want to be disabused of the idea that LLMs on their own are good at programming, just review the “reasoning” log of one trying to fix a simple string | undefined error in TypeScript).

                              It seems clear to me, therefore, that further improvements in programming ability will not come from better LLMs (which have not really improved much), but from better integration of more advanced compilers. That is, the more types of errors that can be caught by the compiler, the better the chance of the AI fuzzing its way to a good overall solution. Interestingly, I hear anecdotally that current LLMs are not great at writing Rust, which does have an advanced type system able to capture more types of errors. That’s where I’d focus if I were working on this. But we should be clear that the improvements are already largely coming via symbolic means, not better LLMs.
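
                              A rough sketch of the loop I mean, with llm() as a placeholder for whatever model/API you use and tsc standing in for the symbolic side:

                                  import pathlib, subprocess, tempfile

                                  def llm(prompt: str) -> str:
                                      raise NotImplementedError("call your model of choice here")

                                  def generate_until_it_compiles(task: str, rounds: int = 5) -> str:
                                      code = llm(task)
                                      for _ in range(rounds):
                                          src = pathlib.Path(tempfile.mkdtemp()) / "main.ts"
                                          src.write_text(code)
                                          # the symbolic "fact-checker": a compiler pass
                                          check = subprocess.run(["tsc", "--noEmit", str(src)],
                                                                 capture_output=True, text=True)
                                          if check.returncode == 0:
                                              return code
                                          # feed the errors back and let the LLM local-search from here
                                          code = llm(f"{task}\n\nFix these compiler errors:\n"
                                                     f"{check.stdout}{check.stderr}\n\n{code}")
                                      return code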

                              I wrote some notes about a year ago about the irony of LLMs being considered a refutation of GOFAI when they are actually now firmly recapitulating that paradigm: https://neilmadden.blog/2024/06/30/machine-learning-and-the-...

                              • NitpickLawyer 3 hours ago

                                > The improvements in programming are largely due to the adoption of “agentic” architectures.

                                Yes, I agree. But it's not just the cradles, it's cradles + training on traces produced with those cradles. You can test this very easily by running old models w/ new cradles. They don't perform well at all. (One of the first things I did when guidance, a guided generation framework, launched ~2 years ago was to test code - compile - edit loops. There were signs of it working, but nothing compared to what we see today. That had to be trained into the models.)

                                > will not come from better LLM models (which have not really improved much), but from better integration of more advanced compilers.

                                Strong disagree. They have to work together. This is basically why RL is gaining a lot of traction in this space.

                                Also disagree on LLMs not improving much. Whatever they did with gemini 2.5 feels like the gpt-3 to gpt-4 jump to me. The context updates are huge. This is the first model that can take 100k tokens and still work after that. They're doing something right to be able to support such large contexts with such good performance. I'd be surprised if gemini 2.5 is just gemini 1 + more data. Extremely surprised. There have to be architecture changes and improvements somewhere in there.

                        • energy123 7 hours ago

                          Why was Anthropic's interpretability work not discussed? Inconvenient for the conclusion?

                          https://www.anthropic.com/news/tracing-thoughts-language-mod...

                          • Animats 4 hours ago

                            Note that this is the same problem engineers have talking to managers. The manager may lack a mental model of the task, but tries to direct it anyway.

                            • vunderba 7 hours ago

                              Speaking of chess, a fun experiment is setting up a few positions on, say, Lichess, taking a screenshot, and asking a state-of-the-art VLM to count the number of pieces on the board. In my experience, it had a much higher error rate in less likely or impossible board situations (three kings on the board, etc.).
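
                              If you want the ground-truth side of that scripted, a minimal sketch (assuming the python-chess package; the FEN below is a made-up position with two white kings):

                                  import chess  # pip install python-chess

                                  # parses fine even though the position is illegal
                                  board = chess.Board("4k3/8/8/8/3KK3/8/2n5/8 w - - 0 1")

                                  print(board)                               # render to set up / compare
                                  print("pieces:", len(board.piece_map()))   # ground truth for the VLM's count
                                  print("legal position?", board.is_valid())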

                              • undefined 6 hours ago
                                [deleted]
                                • sdenton4 7 hours ago

                                  "A wandering ant, for example, tracks where it is through the process of dead reckoning. An ant uses variables (in the algebraic/computer science sense) to maintain a readout of its location, even as as it wanders, constantly updated, so that it can directly return to its home."

                                  Hm.

                                  Dead reckoning is a terrible way to navigate, and famously led to lots of ships wrecking on the shores of France before good clocks allowed tracking longitude accurately.

                                  Ants lay down pheromone trails and use smell to find their way home... There's likely some additional tracking going on, but I would be surprised if it looked anything like symbolic GOFAI.
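
                                  For what it's worth, the dead-reckoning part is tiny to state in code; a toy sketch of path integration (my own illustration, not the article's):

                                      import numpy as np

                                      # keep a running sum of displacements; the home
                                      # vector is just the negation of that sum
                                      rng = np.random.default_rng(0)
                                      position = np.zeros(2)
                                      for _ in range(100):       # random foraging walk
                                          position += rng.normal(size=2)
                                      home_vector = -position    # bearing straight back to the nest
                                      print(home_vector)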

                                  • deadbabe 7 hours ago

                                    Even if you find a pheromone trail, it doesn’t tell you what direction is home, or what path to take at branching paths. You need dead reckoning. The trail just helps you reduce the complexity of what you have to remember.

                                    • viraptor 5 hours ago

                                      The lack of information in ant trails (beyond "it exists here") leads to death spirals https://en.m.wikipedia.org/wiki/Ant_mill

                                      • fmbb 2 hours ago

                                        The very first sentence of the article you linked says this happens because they lose the pheromone track.

                                    • cma 7 hours ago

                                        The trail also leads the other ants to food; it's hard for them to use your own dead reckoning.

                                      • undefined 7 hours ago
                                        [deleted]
                                    • Animats 4 hours ago

                                      That LLMs are a black box and that LLMs lack an underlying model are both true, but orthogonal. It's possible to have a black box system which has an underlying model. That's true of many statistical prediction methods. Early attempts at machine learning were a white box with no underlying model. This is true of most curve-fitting. The AI version was where you're trying to divide a high-dimensional space with a cutting plane to create a classifier. You can tell where the separating plane is, but not why.
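
                                      (A toy cutting-plane classifier to make that concrete, purely illustrative: you can read the plane (w, b) straight off the model, but w itself doesn't explain why the classes separate there.)

                                          import numpy as np

                                          rng = np.random.default_rng(0)
                                          X = rng.normal(size=(200, 5))
                                          y = (X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) > 0).astype(int)

                                          w, b = np.zeros(5), 0.0
                                          for _ in range(50):               # classic perceptron updates
                                              for xi, yi in zip(X, y):
                                                  err = yi - int(xi @ w + b > 0)
                                                  w, b = w + err * xi, b + err

                                          print("separating plane:", w, b)  # inspectable, not an explanation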

                                      The lack of a world model is a very real limitation in some problem spaces, starting with arithmetic. But this argument is unconvincing.

                                      • seanhunter 2 hours ago

                                        “LLMs lack an underlying model” is very obviously incorrect. LLMs have an underlying model of semantics as tokens embedded into a high-dimensional vector space.

                                        The question is not whether or not they have any model at all, the question is whether the model they indisputably have (which is a model of language in terms of linear algebra) maps onto a model of the external universe (a “world model”) that emerges during training.

                                        This is pretty much an unfalsifiable question as far as I can see. There has been research that aims to show this one way or another and it doesn’t settle the question of what a “world model” even means if you permit a “world model” to mean anything other than “thinks like we do”.

                                        For example, LLMs have been shown to produce code that can make graphics somewhat in the style of famous modern artists (eg Kandinsky and Mondrian) but fail at object-stacking problems (“take a book, four wine glasses, a tennis ball, a laptop and a bottle and stack them in a stable arrangement”). Depending on the objects you choose the LLM either succeeds or fails (generally in a baffling way). So what does this mean? Clearly the model doesn’t “know” the shape of various 3-D objects (unless the problem is in their training set which it sometimes seems to be) but on the other hand seems to have shown some ability to pastiche certain visual styles. How is any of this conclusive? A baby doesn’t understand the 3-D world either. A toddler will try and fail to stack things in various ways. Are they showing the presence or lack of a world model? How do you tell?

                                        • comp_throw7 4 hours ago

                                          > LLMs lack an underlying model

                                          Obviously false for any useful sense by which you might operationalize "world model". But agree re: being a black box and having a world model being orthogonal.

                                        • voidhorse 6 hours ago

                                          The whole thing is silly. Look, we know that LLMs are just really good word predictors. Any argument that they are thinking is essentially predicated on marketing materials that embrace anthropomorphic metaphors to an extreme degree.

                                          Is it possible that reason could emerge as the byproduct of being really good at predicting words? Maybe, but this depends on the antecedent claim that much if not all of reason is strictly representational and strictly linguistic. It's not obvious to me that this is the case. Many people think in images as direct sense data, and it's not clear that a digital representation of this is equivalent to the thing in itself.

                                          To use an example another HN'er suggested, We don't claim that submarines are swimming. Why are we so quick to claim that LLMs are "reasoning"?

                                          • Velorivox 6 hours ago

                                            > Is it possible that reason could emerge as the byproduct of being really good at predicting words?

                                            Imagine we had such marketing behind wheels — they move, so they must be like legs on the inside. Then we run around imagining what the blood vessels and bones must look like inside the wheel. Nevermind that neither the structure nor the procedure has anything to do with legs whatsoever.

                                            Sadly, whoever named it artificial intelligence and neural networks likely knew exactly what they were doing.

                                            • SubiculumCode 5 hours ago

                                              I was having a discussion with Gemini. It claimed that because Gemini, as a large language model, cannot experience emotion, that the output of Gemini is less likely to be emotionally motivated. I countered that the experience of emotion is irrelevant. Gemini was trained on data written by humans who do experience emotion, who often wrote to express that emotion, and thus Gemini's output can be emotionally motivated, by proxy.

                                              • rented_mule 5 hours ago

                                                > this depends on the antecedent claim that much if not all of reason is strictly representational and strictly linguistic. It's not obvious to me that this is the case

                                              I'm with you on this. Software engineers talk about being in the flow when they are at their most productive. For me, the telltale sign of being in the flow is that I'm no longer thinking in English, but I'm somehow navigating the problem / solution space more intuitively. The same thing happens in many other domains. We learn to walk long before we have the language for all the cognitive processes required. I don't think we deeply understand what's going on in these situations, so how are we going to build something to emulate it? I certainly don't consciously predict the next token, especially when I'm in the flow.

                                                And why would we try to emulate how we do it? I'd much rather have technology that complements. I want different failure modes and different abilities so that we can achieve more with these tools than we could by just adding subservient humans. The good news is that everything we've built so far is succeeding at this!

                                                We'll know that society is finally starting to understand these technologies and how to apply them when we are able to get away from using science fiction tropes to talk about them. The people I know who develop LLMs for a living, and the others I know that are creating the most interesting applications of them, already talk about them as tools without any need to anthropomorphize. It's sad to watch their frustration as they are slowed down every time a person in power shows up with a vision based on assumptions of human-like qualities rather than a vision informed by the actual qualities of the technology.

                                                Maybe I'm being too harsh or impatient? I suppose we had to slowly come to understand the unique qualities of a "car" before we could stop limiting our thinking by referring to it as a "horseless carriage".

                                                • voidhorse 5 hours ago

                                                  Couldn't agree more. I look forward to the other side of this current craze where we actually have reasonable language around what these machines are best for.

                                                  On a more general level, I also never understood this urge to build machines that are "just like us". Like you, I want machines that, arguably, are best characterized by the ways in which they are not like us—more reliable, more precise, serving a specific function. It's telling that critiques of the failures of LLMs are often met with "humans have the same problems"—why are humans the bar? We have plenty of humans. We don't need more humans. If we're investing so much time and energy, shouldn't the bar be better than humans? And if it isn't, why isn't it? Oh, right, it's because human error is actually good enough, and the real benefit of these tools is that they are humans that can work without a break, don't have autonomy, and that you don't need to listen to or pay. The main beneficiaries of this path are capital owners who just want free labor. That's literally all this is. People who actually want to build stuff want precision machines tailored for the task at hand, not some grab bag of sort-of-works-sometimes stochastic doohickeys.

                                                • cageface 4 hours ago

                                                  > but this depends on the antecedent claim that much if not all of reason is strictly representational and strictly linguistic.

                                                  Most of these newer models are multi-modal, so tokens aren't necessarily linguistic.

                                                  • comp_throw7 3 hours ago

                                                    What use of the word "reasoning" are you trying to claim that current language models knowably fail to qualify for, except that it wasn't done by a human?

                                                    • sgt101 13 minutes ago

                                                      Well - all of them.

                                                      The mechanism by which they work prohibits reasoning.

                                                      This is easy to see if you look at a transformer architecture and think through what each step is doing.

                                                      The amazing thing is that they produce coherent speech, but they literally can't reason.

                                                    • etaioinshrdlu 5 hours ago

                                                      I don't think it's accurate anymore to say LLMs are just really good word predictors. Especially in the last year, they are trained with reinforcement learning to solve specific problems. They are functions that predict next tokens, but the function they are trained to approximate doesn't have to be just plain internet text.

                                                      • voidhorse 5 hours ago

                                                      Yeah, that's fair. It's probably more accurate to call them sequence predictors or general data predictors than to limit it to words (unless we mean words in the broad, mathematical sense, in which case they are free monoid emulators).

                                                        • antonvs 2 hours ago

                                                          And what are humans?

                                                          • sgt101 11 minutes ago

                                                            Humans are humans - to deny that we are thinking, reasoning, living beings is a strange thing to do.

                                                            You can taste a beer, laugh so much it hurts, come to know how something works.

                                                    • dist-epoch 3 hours ago

                                                    The article links to a tweet about jailbreaking Claude to provide a recipe for Sarin gas production: https://x.com/argleave/status/1926138376509440433

                                                    But some words are redacted. So I uploaded the picture to Gemini and asked it what the redacted words are, and it told me. Not sure if they are correct, and some are way too long to fit in the redacted black boxes, but it didn't refuse the request.

                                                      • UltraSane 5 hours ago

                                                        This paper argues the opposite

                                                        https://arxiv.org/abs/2506.01622

                                                      Are world models a necessary ingredient for flexible, goal-directed behaviour, or is model-free learning sufficient? We provide a formal answer to this question, showing that any agent capable of generalizing to multi-step goal-directed tasks must have learned a predictive model of its environment. We show that this model can be extracted from the agent's policy, and that increasing the agent's performance or the complexity of the goals it can achieve requires learning increasingly accurate world models. This has a number of consequences: from developing safe and general agents, to bounding agent capabilities in complex environments, and providing new algorithms for eliciting world models from agents.

                                                        • voidhorse 4 hours ago

                                                          I only skimmed it so far, but this seems to only argue against the functional import of the OP, not its philosophical import.

                                                          On my reading, the philosophical claim is that these models do not develop an actual logical, internal representation of domains.

                                                        The functional import is whether or not they are able to realize specific behaviors within a domain. The paper argues that a Markov process can realize the functional equivalent of the initial goal-oriented picture of its domain (that is, it can solve goals within an error bound), but not that it develops an actual representation of the domain.

                                                          Lack of an actual representation prevents such a machine from doing other things. For example, iiuc, it would be unable to solve problems in domains that are homomorphic to the original, while an explicit representation does enable this.