• LeroyRaz 6 hours ago

    The article is misleading and badly written. None of the mentioned works seem to have used language- or knowledge-based models.

    It looks like all the results were driven by optimization algorithms, and yet the writing describes AI 'using' concepts and 'tricks'. This type of language is entirely inappropriate and misleading when describing these more classical (if advanced) optimization algorithms.

    Looking at the paper in the first example, they used an advanced gradient-descent-based optimization algorithm, yet the article describes "that the AI was probably using some esoteric theoretical principles that Russian physicists had identified decades ago to reduce quantum mechanical noise."

    Ridiculous, and highly misleading. There is no conceptual manipulation or intuition being used by the AI algorithm! It's an optimization algorithm searching a human-coded space using a human-coded simulator.

    • HarHarVeryFunny 5 hours ago

      Right - I hate that "AI" is just being used as (at best) a replacement term for ML. It's very misleading for the public, who are being encouraged to believe that some general-purpose AGI-like capability is behind things like this.

      The article is so dumbed down that it's not clear if there is even any ML involved or if this is just an evaluation of combinatorial experimental setups.

      > The outputs that the thing was giving us were really not comprehensible by people,

      > Adhikari’s team realized that the AI was probably using some esoteric theoretical principles that Russian physicists had identified decades ago to reduce quantum mechanical noise.

      I'll chalk this one up to the Russians, not "AI".

      • agentcoops 4 hours ago

        I agree with your point, but I think it's worth noting that there's a real problem of language today, in both popular and scientific communication. On the one hand, in popular understanding, there's the importance of clearly separating the era of "machine learning", in the sense of, say, Netflix recommendations, from the qualitative leap of modern AI, most obviously LLMs. This article clearly draws on the latter association and really leads to confusion, most glaringly in the remark you note, that the AI probably took up some forgotten Russian text.

        However, scientifically, I think there's a real challenge in clearly delineating, from the standpoint of 2025, what all should fall under the concept of AI -- we really lose something if "AI" comes to mean only LLMs. Everyone can agree that numeric methods in general should not be classed as AI, but it's also true that the scientific-intellectual lineage that leads to modern AI is for many decades indistinguishable from what would appear to be simply optimization problems or the history of statistics (see especially the early work of Paul Werbos, where backpropagation is developed almost directly from Bellman's Dynamic Programming [1]). The classical definition would be that AI pursues goals under uncertainty with at least some learned or search-based policy (paradigmatically, but not exclusively, gradient descent on a loss function), which is correct but perhaps fails to register the qualitative leap achieved in recent years.

        Regardless -- and while still affirming that the OP itself makes serious errors -- I think it's hard to find a definition of AI that is not simply "LLMs" under which the methods of the actual paper cited [2] would not fall.

        [1] His dissertation was re-published as The Roots of Backpropagation. Especially in the Soviet Union, important not least for Kolmogorov and Vapnik, AI was indistinguishable from an approach to optimization problems. It was only in the West that "AI" was taken to be a question of symbolic reasoning etc., which turned out to have been an unsuccessful research trajectory (cf. the "AI winter").

        [2] https://arxiv.org/pdf/2312.04258

        • bubblyworld 5 hours ago

          I think it's pretty clear that they suspect the mechanism underlying the model's output is the same as the mechanism underlying said theoretical principles, not that the AI was literally manipulating the concepts in some abstract sense.

          I don't really get your rabid dismissal. Why does it matter that they are using optimisation models and not LLMs? Nobody in the article is claiming to have used LLMs. In fact the only mention of it is lower down where someone says they hope it will lead to advances in automatic hypothesis generation. Like, fair enough?

          • gammarator 2 hours ago

            This is the obvious consequence of funding agencies pouring money into “AI.”

          • JimDabell 13 hours ago

            > Initially, the AI’s designs seemed outlandish. “The outputs that the thing was giving us were really not comprehensible by people,” Adhikari said. “They were too complicated, and they looked like alien things or AI things. Just nothing that a human being would make, because it had no sense of symmetry, beauty, anything. It was just a mess.”

            This description reminds me of NASA’s evolved antennae from a couple of decades ago. It was created by genetic algorithms:

            https://en.wikipedia.org/wiki/Evolved_antenna
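
              For anyone who hasn't seen the technique: a genetic algorithm fits in a couple dozen lines. A minimal sketch in Python, assuming a made-up fitness function in place of the antenna-gain simulator NASA actually scored designs against (the six-number genome, standing in for bend angles, is likewise just illustrative):

                  import random

                  def fitness(genome):
                      # Hypothetical stand-in for NASA's antenna-gain simulation.
                      return -sum((g - 0.5) ** 2 for g in genome)

                  def mutate(genome, rate=0.2):
                      # Randomly perturb some of the genes.
                      return [g + random.gauss(0, 0.1) if random.random() < rate else g
                              for g in genome]

                  # Each genome encodes, say, six bend angles of a wire antenna.
                  population = [[random.random() for _ in range(6)] for _ in range(50)]
                  for generation in range(200):
                      population.sort(key=fitness, reverse=True)
                      parents = population[:10]                    # keep the fittest designs
                      population = parents + [mutate(random.choice(parents))
                                              for _ in range(40)]  # breed mutated offspring
                  best = max(population, key=fitness)

              Nothing in the loop knows any electromagnetics; selection pressure against the simulator does all the work, which is part of why the surviving designs look so alien.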

            • ElFitz 8 hours ago

              There was something similar about using evolutionary algorithms to produce the design for a mechanical piece used to link two cables or anchor a bridge’s cable, optimizing for weight and strength.

              The design seemed alien and somewhat organic, but I can’t seem to find it now.

              • eleveriven 9 hours ago

                That evolved antenna looked like something cobbled together by a drunk spider

                • kriro 6 hours ago

                  Reminds me a bit of chess engines that crush the best humans with ease but play moves that human players can identify as "engine moves". In chess the environment is fixed by the rules so I'd assume this deeper understanding of underlying patterns is only amplified in more open environments.

                  • wickedsight 8 hours ago

                    That reminds me of this article:

                    https://www.damninteresting.com/on-the-origin-of-circuits/

                    They used genetic algorithms to evolve digital circuits directly on FPGAs. The resulting design exploited things like electromagnetic interference to end up with a circuit much more efficient than a human could've created.

                      In my mind this brings some interesting consequences for 'AI apocalypse' theories. If the AI understands everything, even an air gap might not be enough to contain it, since it might be able to repurpose some of its hardware for wireless communication in ways that we can't even imagine.

                    • johnisgood 5 hours ago

                      I remember reading about this, fun times.

                      • PicassoCTs 7 hours ago

                          The bias is a handicap: the looking for beauty, symmetry, an explanation, a story. It's all goggles upon goggles of warping lenses and funhouse mirrors, hiding and preventing the perception of truth.

                        • carabiner 11 hours ago

                          [mandatory GA antenna post requirement satisfied]

                          • esperent 9 hours ago

                            That evolved antenna is a piece of wire with exactly 6 bends. It's extremely simple, the exact opposite of a hard to understand mess.

                          • sampo 5 hours ago

                            Not-so-many years ago, this kind of work developing optimization algorithms would have been called optimization algorithms, not AI.

                            > We develop Urania, a highly parallelized hybrid local-global optimization algorithm, sketched in Fig. 2(a). It starts from a pool of thousands of initial conditions of the UIFO, which are either entirely random initializations or augmented with solutions from different frequency ranges. Urania starts 1000 parallel local optimizations that minimize the objective function using an adapted version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm. BFGS is a highly efficient gradient-descent optimizer that approximates the inverse Hessian matrix. For each local optimization, Urania chooses a target from the pool according to a Boltzmann distribution, which weights better-performing setups in the pool higher and adds a small noise to escape local minima.

                            https://journals.aps.org/prx/abstract/10.1103/PhysRevX.15.02...
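
                                Stripped of the physics, that hybrid local-global loop is quite compact. A minimal sketch in Python, with a toy objective standing in for the interferometer simulator (names and parameters here are illustrative, not taken from the paper's code):

                                    import numpy as np
                                    from scipy.optimize import minimize

                                    def objective(x):
                                        # Toy stand-in for the detector-noise figure Urania minimizes.
                                        return np.sum((x - 1.0) ** 2) + 0.1 * np.sum(np.sin(5.0 * x))

                                    rng = np.random.default_rng(0)
                                    pool = [rng.normal(size=8) for _ in range(100)]   # random initial setups
                                    scores = np.array([objective(x) for x in pool])

                                    for _ in range(1000):
                                        # Boltzmann-weighted choice: better-performing setups are picked more often.
                                        weights = np.exp(-(scores - scores.min()) / (scores.std() + 1e-12))
                                        i = rng.choice(len(pool), p=weights / weights.sum())
                                        # Small noise on the starting point helps escape local minima.
                                        start = pool[i] + rng.normal(scale=0.01, size=pool[i].shape)
                                        result = minimize(objective, start, method="BFGS")  # local optimization
                                        if result.fun < scores[i]:                # keep only superior solutions
                                            pool[i], scores[i] = result.x, result.fun

                                    best = pool[int(np.argmin(scores))]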

                            • pavon an hour ago

                              It does look like the gradient descent is paired with a type of genetic algorithm:

                              > For each local optimization, Urania chooses a target from the pool according to a Boltzmann distribution, which weights better-performing setups in the pool higher and adds a small noise to escape local minima. These choices add a global character to the exploration. When one of the local optimizations of Urania finds a better parameter setting for a setup in the pool, it replaces the old solution with the superior one. Upon convergence, Urania repeats and chooses a new target from the pool. In parallel, Urania simplifies solutions from the pool by probabilistically removing elements whose removal does not impact the overall sensitivity.
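
                                  That last simplification pass is also easy to picture. A sketch under the same caveat, with setup and sensitivity as hypothetical stand-ins for the paper's detector description and simulator (higher sensitivity assumed better):

                                      import random

                                      def simplify(setup, sensitivity, tol=1e-6):
                                          # Try removals in random order; keep one that doesn't hurt sensitivity.
                                          for i in random.sample(range(len(setup)), k=len(setup)):
                                              candidate = setup[:i] + setup[i + 1:]
                                              if sensitivity(candidate) >= sensitivity(setup) - tol:
                                                  return candidate   # accept the simpler design; caller repeats
                                          return setup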

                              • MITSardine 3 hours ago

                                    This irks me to no end. If one doesn't want to use specific terms, why not just call them applied-mathematics algorithms rather than AI? Is grep AI? Is your web browser AI?

                              • luketaylor 13 hours ago

                                Referring to this type of optimization program just as “AI” in an age where nearly everyone will misinterpret that to mean “transformer-based language model” seems really sloppy

                                • saithound 10 hours ago

                                  Referring to this type of optimization as AI in the age where nearly everybody is looking to fund transformer-based language models and nobody is looking to fund this kind of optimization is just common sense though.

                                  • benterix 9 hours ago

                                    I think it's actually this repo:

                                    https://github.com/artificial-scientist-lab/GWDetectorZoo/

                                    Nothing remotely LLM-ish, but I'm glad they used the term AI here.

                                    • bee_rider 11 hours ago

                                      How can one article be expected to fix the problem of people sloppily using “AI” when they mean LLM or something like that?

                                      • advael 9 hours ago

                                              This exact kind of sloppy equivocation does seem to be one of the major PR strategies used to justify the massive investment in, and sloppy rollout of, transformer-based language models, now that large swaths of the public have turned against them (probably even more than is actually warranted).

                                        • victorbjorklund 8 hours ago

                                          Yea, I can tolerate it when random business people do it. But scientists/tech people should know better.

                                          • layer8 8 hours ago

                                                  While it's misleading as a title nowadays, I found it refreshing to see the term used in its traditional sense.

                                            • tomrod 12 hours ago

                                              I know, but can we blame the masses for misunderstanding AI when they are deliberately misinformed that transformers are the universe of AI? I think not!

                                              • andai 13 hours ago

                                                That's how I feel about Web 3.0...

                                                • fragmede 10 hours ago

                                                  Thinking "nearly everyone" has that precise definition of AI seems way more sloppy. Most people haven't even heard of OpenAI and ChatGPT still, but among people who have, they've probably heard stories about AI in science fiction. My definition of AI is any advanced computer processing, generative or otherwise, that's happened since we got enough computing power and RAM to do something about it, aka lately.

                                                  • pharrington 11 hours ago

                                                    I'll bet that almost everyone who reads Quanta Magazine knows what they mean by AI.

                                                    • zeofig 12 hours ago

                                                      Absolutely agree.

                                                    • markasoftware 14 hours ago

                                                      not an LLM, in case you're wondering. From the PyTheus paper:

                                                      > Starting from a dense or fully connected graph, PyTheus uses gradient descent combined with topological optimization to find minimal graphs corresponding to some target quantum experiment
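
                                                              Roughly, that combination can be pictured as: keep a continuous weight per edge of the graph, run gradient descent on the weights, and periodically delete edges whose weights shrink toward zero. A toy Python sketch (the loss is a stand-in for the target-state fidelity PyTheus actually scores, and the pruning rule is simplified):

                                                                  import numpy as np

                                                                  n = 6                                  # vertices of the starting dense graph
                                                                  rng = np.random.default_rng(1)
                                                                  w = rng.normal(size=(n, n))            # one continuous weight per edge

                                                                  def loss(w):
                                                                      # Toy stand-in for the quantum-experiment objective.
                                                                      return np.sum((w @ w.T - np.eye(n)) ** 2)

                                                                  def grad(w, eps=1e-6):
                                                                      # Finite differences keep the sketch dependency-free.
                                                                      g = np.zeros_like(w)
                                                                      for idx in np.ndindex(w.shape):
                                                                          d = np.zeros_like(w)
                                                                          d[idx] = eps
                                                                          g[idx] = (loss(w + d) - loss(w - d)) / (2 * eps)
                                                                      return g

                                                                  for step in range(500):
                                                                      w -= 0.01 * grad(w)                # gradient descent on the edge weights
                                                                      if step % 100 == 99:
                                                                          w[np.abs(w) < 0.05] = 0.0      # topological step: prune weak edges

                                                              Whatever edges survive the pruning define the minimal graph.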

                                                      • viraptor 13 hours ago

                                                        This sounds similar to evolved antennas https://en.wikipedia.org/wiki/Evolved_antenna

                                                                There are a few things like that where we can throw AI at a problem and it generates something better, even if we don't know exactly why it's better yet.

                                                        • topspin 11 hours ago

                                                          "It added an additional three-kilometer-long ring between the main interferometer and the detector to circulate the light before it exited the interferometer’s arms."

                                                          Isn't that a delay line? The benefit being that when the undelayed and delayed signals are mixed, the phase shift you're looking for is amplified.

                                                          • heisenbit 9 hours ago

                                                            Sounds like ring lasers. Not really an unusual concept to increase sensitivity.

                                                          • WhyNotHugo 8 hours ago

                                                                      The article mentions that if students presented these designs, they'd be dismissed as ridiculous. But when an AI presents them, they're taken seriously.

                                                                      I wonder how many times such designs were dismissed because humans who think too far outside the box are dismissed. It seems that students are encouraged NOT to do so, severely limiting how far out they can explore.

                                                            • wongarsu 7 hours ago

                                                                        Across basically all fields you have to first show that you can think inside the box before you are allowed to bring out-of-the-box ideas. Once you have shown that you have mastered the craft and understood the rules, you can get creative, but before that, creativity is rarely valued. It doesn't matter if you are an academic or an artist; the same rules apply.

                                                                        I'm guessing AI gets the benefit of the doubt here because its ideas will be interesting and publishable no matter the outcome.

                                                              • Arkhaine_kupo 7 hours ago

                                                                          It's a cost-risk analysis. We have tried letting students do whatever, and most of the time it went nowhere, so we ended up with a more rational system (with many caveats) where experiments are proposed, and people with good insight and a sense of whether something might even work approve them before they are run.

                                                                          AI is going through the wild phase where people are letting it try things; as soon as its limits are understood, the framework of limitations and the rational system built around it will inevitably follow.

                                                              • matt3210 9 hours ago

                                                                The "AI" here is not the same "AI" as claude, Grok or OpenAI. It's just an optimization algorithm that tries different things in parallel until it finds a better solution to inform the next round.

                                                                • jeroenhd 8 hours ago

                                                                  > It's just an optimization algorithm that tries different things in parallel until it finds a better solution to inform the next round.

                                                                              ... which is AI. AI existed long before GPTs were invented, back when neural networks were left unexplored because the necessary compute power wasn't there.

                                                                • anonym00se1 14 hours ago

                                                                  Feels like we're going to see a lot of headlines like this in the future.

                                                                  "AI comes up with bizarre ___________________, but it works!"

                                                                  • viraptor 13 hours ago

                                                                                  We've seen this for a while, just not as often: antennas, ICs, FPGA designs, small mechanical things, ...

                                                                    • sandspar 9 hours ago

                                                                      "AI comes up with a bizarre short-form generative video genre that addicts user in seconds - but it works!" I'm guessing we're only a year or two away.

                                                                      • amelius 9 hours ago

                                                                        ... sometimes.

                                                                        • ninetyninenine 14 hours ago

                                                                          That’s how we become numb to the progress. Like think of this in the context of a decade ago. The news would’ve been amazing.

                                                                                        Imagine these headlines mutating slowly into "all software engineering performed by AI at a certain company" and we will just dismiss it as generic, because being employed and programming with keyboards will be old-fashioned. Give it twenty years and I bet this is the future.

                                                                        • aeternum 14 hours ago

                                                                          More hype than substance unfortunately.

                                                                                          The AI rediscovered an interferometer technique the Russians found decades ago, optimized a graph in an unusual way, and came up with a formula that better fits a dark matter plot.

                                                                          • irjustin 13 hours ago

                                                                            Ehhhhh, I'll say it's substantive and not just pure hype.

                                                                                            Yes, the AI "resurfaced" the work, but it also incorporated the Russians' theory into the practical design. At least enough to say "hey, make sure you look at this" - meaning the system produced a workable something with X% improvement, or some other benefit, such that the researchers took it seriously and investigated. Evidently, that yielded an actual design with a 10-15% improvement and a "wish we had this earlier" statement.

                                                                            No one was paying attention to the work before.

                                                                            • rlt 13 hours ago

                                                                                              The discovery itself doesn't seem like the interesting part. If the discovery wasn't in the training data, then it's a sign AI can produce novel scientific research / experiments.

                                                                            • codeaether 7 hours ago

                                                                              These days, it feels like “AI” basically just means neural network-based models—especially large autoregressive ones. Even convolutional neural networks probably don’t count as “real AI” anymore in most people’s eyes. Funny how things change. Not long ago, search algorithms like A* were considered the cutting edge of AI.

                                                                              • eleveriven 9 hours ago

                                                                                Feels like we're entering a new kind of scientific method. Not sure if that's thrilling or terrifying, but definitely fascinating

                                                                                • smj-edison 10 hours ago

                                                                                                    Am I understanding the article correctly that they created a quantum playground, and then set their algorithm to work optimizing the design within the playground's constraints? That's pretty cool, especially for doing graph optimization. I'd be curious to know how compute-intensive it was.

                                                                                  • k__ 7 hours ago

                                                                                    "it had no sense of symmetry, beauty, anything. It was just a mess."

                                                                                                      Reminds me of the square packing problem, with the absurd-looking solution for packing 17 squares.

                                                                                                      It also reminds me of edge cases in software engineering. When I let an LLM write code, I'm often confused by how it starts out, thinking I would have done it more elegantly. However, I quickly notice that the AI handled a few edge cases I would only have caught in testing.

                                                                                                      Guess we should take a hint!

                                                                                    • Huxley1 9 hours ago

                                                                                      This AI-designed experiment is pretty cool. It seemed kind of weird at first, but since it actually works, it’s worth paying attention to. AI feels more like a powerful tool that helps us think outside the box and come up with fresh ideas. Is AI more of a helper or a creator when it comes to research?

                                                                                      • theteapot 9 hours ago

                                                                                        AFAICT "The AI" (which is never actually described in the article) is a CSOP solver.

                                                                                      • kristjank 9 hours ago

                                                                                        Impressive results, I remember reading about AI-generated microstrip RF filters not too long ago, and someone already mentioned evolved antenna systems. We are suffering from a severe case of calling gradient descent AI at the moment, but if it gets more money into actual research instead of LLM slop, I'm all for it.

                                                                                        • IanCal 9 hours ago

                                                                                          > We are suffering from a severe case of calling gradient descent AI at the moment,

                                                                                          We’ve been doing that for decades, it’s just more recently that it’s come with so much more funding.

                                                                                          • carabiner 8 hours ago

                                                                                            I still call computers "adding machines." Total fad devices.

                                                                                          • qz_kb 12 hours ago

                                                                                            This is not "AI", it's non-linear optimization...

                                                                                            • tomrod 12 hours ago

                                                                                              We all do math down here.

                                                                                              • rurban 8 hours ago

                                                                                                                  Non-linear optimization is classic AI, i.e. searching through logic and symbolic computation.

                                                                                                "Modern" AI is just fuzzy logic, connecting massive probabilities to find patterns.

                                                                                              • IAmGraydon 14 hours ago

                                                                                                This is the kind of thing I like to see AI being used for. That said, as is noted in the article, this has not yet led to new physics or any indication of new physics.