• mcv 5 hours ago

    This seems to confirm my feeling when using AI too much. It's easy to get started, but I can feel my brain engaging less with the problem than I'm used to. It can form a barrier to real understanding, and keeps me out of my flow.

    I recently worked on something very complex I don't think I would have been able to tackle as quickly without AI; a hierarchical graph layout algorithm based on the Sugiyama framework, using Brandes-Köpf for node positioning. I had no prior experience with it (and I went in clearly underestimating how complex it was), and AI was a tremendous help in getting a basic understanding of the algorithm, its many steps and sub-algorithms, the subtle interactions and unspoken assumptions in it. But letting it write the actual code was a mistake. That's what kept me from understanding the intricacies, from truly engaging with the problem, which led me to keep relying on the AI to fix issues, but at that point the AI clearly also had no real idea what it was doing, and just made things worse.
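    To give a sense of the moving parts, here's a toy sketch of two of the framework's phases (longest-path layering, and one barycenter sweep for crossing reduction). The names and simplifications are mine; the real thing has many more steps:

    ```python
    from collections import defaultdict

    def assign_layers(nodes, edges):
        # Longest-path layering: each node sits one layer below its deepest
        # predecessor (assumes the graph is already acyclic).
        preds = defaultdict(list)
        for u, v in edges:
            preds[v].append(u)
        layer = {}
        def depth(n):
            if n not in layer:
                layer[n] = 1 + max((depth(p) for p in preds[n]), default=-1)
            return layer[n]
        for n in nodes:
            depth(n)
        return layer

    def barycenter_sweep(fixed_order, free_layer, edges):
        # Reorder one layer by the mean position of each node's neighbours
        # in the adjacent, already-fixed layer.
        pos = {n: i for i, n in enumerate(fixed_order)}
        def barycenter(n):
            nbrs = [pos[u] for u, v in edges if v == n and u in pos]
            return sum(nbrs) / len(nbrs) if nbrs else 0.0
        return sorted(free_layer, key=barycenter)
    ```

    The real work, as I found out, is in the interactions between such phases: dummy nodes for long edges, tie-breaking, and the Brandes-Köpf alignment pass that replaces the naive positioning above.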

    So instead of letting the AI see the real code, I switched from the Copilot IDE plugin to the standalone Copilot app, where it could explain the principles behind every step while I debugged and fixed the code myself, developing an actual understanding of what was going on. And I finally got back into that coding flow again.

    So don't let the AI take over your actual job, but use it as an interactive encyclopedia. That works much better for this kind of complex problem.

    • vidarh 2 hours ago

      My "actual job" isn't to write code, but to solve problems.

      Writing code has just typically been how I've needed to solve those problems.

      That has increasingly shifted to "just" reviewing code and focusing on the architecture and domain models.

      I get to spend more time on my actual job.

      • Kamq an hour ago

        > My "actual job" isn't to write code, but to solve problems.

        Yes, and there's often a benefit to having a human who understands the concrete details of the system when you're trying to solve problems.

        > That has increasingly shifted to "just" reviewing code

        It takes longer to read code than to write code if you're trying to get the same level of understanding. You're gaining time by building up an understanding deficit. That works for a while, but at some point you have to go burn the time to understand it.

        • laurentiurad 18 minutes ago

          It's like any other muscle, if you don't exercise it, you will lose it.

          It's important that when you solve problems by writing code, you go through all the use cases of your solution. In my experience, just reading code given by someone else (whether a human or a machine) is not enough; you end up evaluating perhaps the main use cases and the style. Most of the time you will find the gaps only while writing the code yourself.

        • thefaux an hour ago

          This feels like it conflates problem solving with the production of artifacts. It seems highly possible to me that the explosion of ai generated code is ultimately creating more problems than it is solving and that the friction of manual coding may ultimately prove to be a great virtue.

          • Difwif an hour ago

            This statement feels like a farmer making a case for using their hands to tend the land instead of a tractor because it produces too many crops. Modern farming requires you to have an ecosystem of supporting tools to handle the scale and you need to learn new skills like being a diesel mechanic.

            How we work changes and the extra complexity buys us productivity. The vast majority of software will be AI generated, tools will exist to continuously test/refine it, and hand written code will be for artists, hobbyists, and an ever shrinking set of hard problems where a human still wins.

            • Kbelicius 13 minutes ago

              > This statement feels like a farmer making a case for using their hands to tend the land instead of a tractor because it produces too many crops. Modern farming requires you to have an ecosystem of supporting tools to handle the scale and you need to learn new skills like being a diesel mechanic.

              This to me looks like an analogy that would support what GP is saying. With modern farming practices you get problems like increased topsoil loss and decreased nutritional value of produce. It also leads to a loss of knowledge for those who practice these short-term paths of least resistance.

              This is not me saying big farming bad or something like that, just that your analogy, to me, seems perfectly in sync with what the GP is saying.

              • hluska 33 minutes ago

                I’ll be honest with you pal - this statement sounds like you’ve bought the hype. The truth is likely between the poles - at least that’s where it’s been for the last 35 years that I’ve been obsessed with this field.

                • paulcole 29 minutes ago

                  They may be early but they’re not wrong.

                  • lazide 16 minutes ago

                    That could be said about hover cars too.

          • Archer6621 4 hours ago

            That's a nice anecdote, and I agree with the sentiment: skill development comes from practice. It's tempting to see using AI as a free lunch, but it comes with a cost in the form of skill atrophy. I reckon this is even the case when using it as an interactive encyclopedia, where you may lose some skill in searching and aggregating information. But for many people the overall trade-off in terms of time and energy savings is worth it, giving them room to do more or other things.

            • scyzoryk_xyz an hour ago

              If the computer was the bicycle for the mind, then perhaps AI is the electric scooter for the mind? Gets you there, but doesn't necessarily help build the best healthy habits.

              Trade-offs around "room to do more or other things" are an interesting and recurring theme in these conversations. Like two ends of a spectrum: on one end the ideal process-oriented artisan taking the long way to mastery, on the other the trailblazer moving fast and discovering entirely new things.

              Comparing to the encyclopedia example: I'm already seeing my own skillset for researching online atrophy and become less relevant, both because searching isn't as helpful anymore and because my muscle memory is shifting toward reaching for the chat window.

              • coole-wurst an hour ago

                Maybe it was always about where you are going and how fast you can get there? And AI might be a few mph faster than a bicycle, and still accelerating.

              • chairmansteve 3 hours ago

                "I reckon this is even the case when using it as an interactive encyclopedia".

                Yes, that is my experience. I have done some C# projects recently, in a language I am not familiar with. I used the interactive encyclopedia method and "wrote" a decent amount of code myself, but several thousand lines of production code later, I don't know C# any better than when I started.

                OTOH, it seems that LLMs are very good at compiling pseudocode into C#. And I have always been good at reading code, even in unfamiliar languages, so it all works pretty well.

                I think I have always worked in pseudocode inside my head. So with LLMs, I don't need to know any programming languages!

              • jstummbillig 39 minutes ago

                > But letting it write the actual code was a mistake

                I think you not asking questions about the code is the problem (insofar as it still is a problem). But it certainly has gotten easy not to ask.

                • isolli 5 hours ago

                  This mirrors my experience exactly. We have to learn how to tame the beast.

                  • sothatsit 4 hours ago

                    I think we all just need to avoid the trap of using AI to circumvent understanding. I think that’s where most problems with AI lie.

                    If I understand a problem and AI is just helping me write or refactor code, that’s all good. If I don’t understand a problem and I’m using AI to help me investigate the codebase or help me debug, that’s okay too. But if I ever just let the AI do its thing without understanding what it’s doing and then I just accept the results, that’s where things go wrong.

                    But if we’re serious about avoiding the trap of AI letting us write working code we don’t understand, then AI can be very useful. Unfortunately the trap is very alluring.

                    A lot of vibe coding falls into the trap. You can get away with it for small stuff, but not for serious work.

                    • orenp an hour ago

                      I'd say the new problem is knowing when understanding is important and where it's okay to delegate.

                      It's similar to other abstractions in this way, but on a larger scale, because LLMs have so many potential applications. And of course, because of the non-determinism.

                  • exodust 36 minutes ago

                    Similarly, I leave Cursor's AI in "ask" mode. It puts code there and leaves me to grab what I need and integrate it myself. This forces me to look closely at the code and prevents the "runaway" feeling where the AI does too much and you're left behind in your own damn project. It's not AI chat causing cognitive debt, it's agents!

                    • foxes 2 hours ago

                      Is this a copilot ad?

                      • PatronBernard 2 hours ago

                        > a hierarchical graph layout algorithm based on the Sugiyama framework, using Brandes-Köpf for node positioning.

                        I am sorry for being direct but you could have just kept it to the first part of that sentence. Everything after that just sounds like pretentious name dropping and adds nothing to your point.

                        But I fully agree, for complex problems that require insight, LLMs can waste your time with their sycophancy.

                        • TheColorYellow an hour ago

                          This is a technical forum, isn't pretentious name dropping kind of what we do?

                          Seriously though, I appreciated it, because my curiosity got the better of me and I went down a quick rabbit hole into Sugiyama, comparative graph algorithms, and node positioning as a particular dimension of graph theory. Sure, nothing groundbreaking, but it added a shallow amount to my broad knowledge base of theory, which continues to prove useful in our business (often knowing what you don't know is the best impetus for learning). So yeah man, let's keep name-dropping pretentious technical details, because that's half the reason I surf this site.

                          And yes, I did use ChatGPT to familiarize myself with these concepts briefly.

                          • fatherwavelet 30 minutes ago

                            I think many are not doing anything like this, so to the person who is not interested in learning anything, technical details like this sound like pretentious name dropping, because that is how they relate to the world.

                            Everything to them is a social media post for likes.

                            I have explored all kinds of graph layouts in various network science contexts via LLMs, and guess what? I don't know much about graph theory beyond G = (V,E). I am not really interested, either. I am interested in what I can do with, and learn from, G. On everything to the right of the equals sign, Gemini is already beyond my ability. I am just not that smart.

                            The standard narrative on this board seems to be something akin to having to master all volumes of Knuth before you can even think to write a React CRUD app. Ironic since I imagine so many learned programming by just programming.

                            I know I don't think as hard when using an LLM. Maybe that is a problem for people with 25 more IQ points than me. If I had 25 more IQ points maybe I could figure out stuff without the LLM. That was not the hand I was dealt though.

                            I get the feeling there is immense intellectual hubris on this forum, such that when something like this comes up, it is a dog whistle for these delusional Erdős-in-their-own-mind people to come out of the woodwork to tell you how LLMs can't help you with graph theory.

                            If that weren't the case, there would be vastly more interesting discussion on this forum, instead of ad nauseam discussion of how bad LLMs are.

                            I learn new things everyday from Gemini and basically nothing reading this forum.

                          • hluska 30 minutes ago

                            I’ve been forced down that path and based on that experience it added a whole lot. Maybe you just don’t understand the problem?

                        • sdoering 4 hours ago

                          This reminds me of the recurring pattern with every new medium: Socrates worried writing would destroy memory, Gutenberg's critics feared for contemplation, novels were "brain softening," TV was the "idiot box." That said, I'm not sure "they've always been wrong before" proves they're wrong now.

                          Where I'm skeptical of this study:

                          - 54 participants, only 18 in the critical 4th session

                          - 4 months is barely enough time to adapt to a fundamentally new tool

                          - "Reduced brain connectivity" is framed as bad - but couldn't efficient resource allocation also be a feature, not a bug?

                          - Essay writing is one specific task; extrapolating to "cognition in general" seems like a stretch

                          Where the study might have a point:

                          Previous tools outsourced partial processes - calculators do arithmetic, Google stores facts. LLMs can potentially take over the entire cognitive process from thinking to formulating. That's qualitatively different.

                          So am I ideologically inclined to dismiss this? Maybe. But I also think the honest answer is: we don't know yet. The historical pattern suggests cognitive abilities shift rather than disappear. Whether this shift is net positive or negative - ask me again in 20 years.

                          [Edit]: Formatting

                          • wisty 2 hours ago

                            Soapbox time.

                            They were arguably right. Pre-literate people could memorise vast texts (Homer's work, Australian Aboriginal songlines). Pre-Gutenberg, memorising reasonably large texts was common. See, e.g., the book Memory Craft.

                            We're becoming increasingly like the Wall-E people, too lazy and stupid to do anything without our machines doing it for us, as we offload ever more onto them.

                            And it's not even that machines are always better, they only have to be barely competent. People will risk their life in a horribly janky self driving car if it means they can swipe on social media instead of watching the road - acceptance doesn't mean it's good.

                            We have about 30 years of the internet being widely adopted, which I think is roughly similar to AI in many ways (both give you access to data very quickly). Economists suggest we are in many ways no more productive now than when Homer Simpson could buy a house and raise a family on a single income - https://en.wikipedia.org/wiki/Productivity_paradox

                            Yes, it's too early to be sure, but the internet, Google and Wikipedia arguably haven't made the world any better (overall).

                            • CuriouslyC an hour ago

                              Brains are adaptive. We're not getting dumber, we're just adapting to a new environment. Just because they're less fit for other environments doesn't make them worse.

                              As for the productivity paradox, this discounts the reality that we wouldn't even be able to scale the institutions we're scaling without the tech. Whether that scaling is a good thing is debatable.

                              • discreteevent an hour ago

                                > Brains are adaptive.

                                They are, but you go on to assume that they will adapt in a good way.

                                Bodies are adaptive too. That didn't work out well for a lot of people when their environment changed to be sedentary.

                                • doublerabbit 43 minutes ago

                                  Brains are adaptive, and as we adapt we are becoming more cognitively unbalanced. We're absorbing potentially biased information at a faster rate. GPT can give you information on X in seconds. But have you thought about it? Is that information correct? Information can easily be made to sound real while masking the real as false.

                                  Launching a search engine and searching may also spew incorrectness, but it made you exercise judgement, made you think. You could see two different opinions, one beneath the other; you saw both sides of the coin.

                                  We are no longer thinking critically. We are taking information at face value, marking it as correct and not questioning it afterwards.

                                  The ability to evaluate critically and rationally is what's decaying. Who opens a physical encyclopedia nowadays? That itself requires resources, effort and time. Life's level of complexity doesn't help: it makes it easier to assume that the first piece of information given to us is true. The Wall-E view isn't wrong.

                                  • CuriouslyC 27 minutes ago

                                    I see a lot of people grinding and hustling in a way that would have crushed people 75 years ago. I don't think our lack of desire to crack an encyclopedia for a fact rather than rely on AI to serve up a probably right answer is down to laziness, we just have bigger fish to fry.

                                    • doublerabbit 23 minutes ago

                                      Valid point, amended my viewpoint to cater to that, thanks.

                                • UltraSane an hour ago

                                  Instead of memorizing vast amounts of text, modern people memorize the plots of vast numbers of books, movies, TV shows, video games and pop culture.

                                  Computers are much better at remembering text.

                                • mschild 4 hours ago

                                  Needs more research. Fully agree on that.

                                  That said:

                                  TV very much is the idiot box. Not necessarily because of the TV itself, but rather because of what's being viewed. An actually engaging and interesting show/movie is good, but last time I checked, it was mostly low-quality trash and constant news bombardment.

                                  Calculators do do arithmetic, and if you asked me to do the kind of calculations I had to do in high school by hand today, I wouldn't be able to. Simple calculations I do in my head, but my ability to do more complex ones has diminished. That's down to me not doing them as often, yes, but also because for complex ones I simply whip out my phone.

                                  • richrichardsson 3 hours ago

                                    > Calculators do do arithmetic and if you ask me to do the kind of calculations I had to do in high school by hand today I wouldnt be able to

                                    I got scared by how badly my junior (middle? 5-11) school mathematics had slipped when helping my 9-year-old boy with his homework yesterday.

                                    I literally couldn't remember how to carry the 1 when doing subtractions of 3-digit numbers! I felt idiotic having to ask an LLM for help. :(

                                    • wiz21c 3 hours ago

                                      For my part, I don't use that carry method at all. When I have to subtract, I subtract in chunks that my brain can easily handle. For example, 1233 - 718: I'll do 1233 - 700 = 533, then 533 - 20 = 513, then 513 + 2 = 515. It's completely instinctive (and thus I can't explain it to my children :-) )
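                                      The same instinct can be written down mechanically. A toy sketch of the chunking (my own framing, not a standard algorithm):

                                      ```python
                                      def chunk_subtract(a, b):
                                          # Peel off the hundreds, then overshoot to a round ten and
                                          # correct afterwards: 1233 - 718 -> 1233 - 700 - 20 + 2 = 515.
                                          hundreds = (b // 100) * 100
                                          rest = b - hundreds          # e.g. 18
                                          tens = -(-rest // 10) * 10   # round up to the next ten, e.g. 20
                                          correction = tens - rest     # the overshoot, e.g. +2
                                          return a - hundreds - tens + correction
                                      ```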

                                      What I have asked my children to do very often is back-of-the-envelope multiplications and other computations. That really helped them to get a sense of the magnitude of things.

                                      • n4r9 an hour ago

                                        I have a two year old and often worry that I'll teach him some intuitive arithmetic technique, then school will later force a different method and mark him down despite getting the right answer. What if it ends up making him hate school, maths, or both?

                                        • __s 11 minutes ago

                                          I experienced this. Only made me hate school, but maybe because I had game programming at home to appreciate math with

                                          Just expose them to everyday math so they aren't one of those people who think math has no practical uses. My father isn't great with math, but would raise questions like how wide a river was (solvable from one side with trig, using 30 degree angles for easy math). Napkin math makes things much more fun than strict classroom math with one right answer
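                                          The river question is a neat example of that napkin math: stand opposite a landmark on the far bank, walk a measured distance along your bank, and sight the angle back to the landmark. A sketch (my framing of the setup, not the parent's exact method):

                                          ```python
                                          import math

                                          def river_width(walked, angle_deg):
                                              # The landmark, your start point and your end point form a
                                              # right triangle, so width = distance walked * tan(angle).
                                              return walked * math.tan(math.radians(angle_deg))
                                          ```

                                          At 45 degrees the width equals the distance walked; at 30 degrees it's the walked distance over sqrt(3), which is where the easy 30-60-90 ratios come in.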

                                  • kace91 3 hours ago

                                    I think novels and tv are bad examples, as they are not substituting a process. The writing one is better.

                                    Here’s the key difference for me: AI does not currently replace full expertise. In contrast, there is not a “higher level of storage” that books can’t handle and only a human memory can.

                                    I need a senior to handle AI with assurances. I get seniors by having juniors execute supervised lower risk, more mechanical tasks for years. In a world where AI does that, I get no seniors.

                                    • duskdozer 4 hours ago

                                      Not sure "they've always been wrong before" applies to TV being the idiot box and everything after

                                      • boesboes 3 hours ago

                                        I think that is a VERY false comparison. As you say, LLMs try to take over entire cognitive and creative processes, and that is a bigger problem than outsourcing arithmetic.

                                        • cimi_ 4 hours ago

                                          > The historical pattern suggests cognitive abilities shift rather than disappear.

                                          Shift to what? This? https://steve-yegge.medium.com/welcome-to-gas-town-4f25ee16d...

                                          • darkwater 2 hours ago

                                            What the hell have I just read (or at least skimmed)?? I cannot tell whether the author is:

                                            a) serious, but we live on different planets

                                            b) serious with the idea, tongue-in-cheek in the style and using a lot of self-irony

                                            c) an ironic piece with some real idea

                                            d) mocking AI maximalists

                                            • cimi_ an hour ago

                                              There was discussion about this here a couple of weeks ago: https://news.ycombinator.com/item?id=46458936

                                              Steve Yegge is a famous developer; this is not a joke :) You could say he is an AI maximalist. From your options I'd go with (b): serious with the idea, tongue-in-cheek in the style, and using a lot of self-irony.

                                              It is exaggerated, but this is how he sees things ending up eventually. This is real software.

                                              If things do end up as glorified kanban boards, what does it mean for us? That we can work less and use the spare time for reading and doing yoga? Or that we'll work the same hours with our attention even more fragmented and with no control over the outputs of these things (=> stress)?

                                              I really wish that people who think this is good for us, and who are pushing for this future, did a bit better than:

                                              1. More AI 2. ??? 3. Profit

                                              • cap11235 2 hours ago

                                                Just ignore the rambling crypto shill.

                                            • ben_w 4 hours ago

                                              > 4 months is barely enough time to adapt to a fundamentally new tool

                                              Yes, but there's also the extra wrinkle that this whole thing is moving so fast that four months old is borderline obsolete. The same goes for the future: any study starting now, based on the state of the art on 22/01/2026, will involve models and potentially workflows that are already obsolete by 22/05/2026.

                                              We probably can't ever adapt fully when the entire landscape is changing like that.

                                              > Previous tools outsourced partial processes - calculators do arithmetic, Google stores facts. LLMs can potentially take over the entire cognitive process from thinking to formulating. That's qualitatively different.

                                              Yes, but also consider that this is true of any team: All managers hire people to outsource some entire cognitive process, letting themselves focus on their own personal comparative advantage.

                                              The book "The Last Man Who Knew Everything" is about Thomas Young, who died in 1829; since then, the sum of recorded knowledge has broadened too much for any single person to learn it all, so we need specialists, including specialists in managing other specialists.

                                              AI is a complement to our own minds with both sides of this: Unlike us, AI can "learn it all", just not very well compared to humans. If any of us had a sci-fi/fantasy time loop/pause that let us survive long enough to read the entire internet, we'd be much more competent than any of these models, but we don't, and the AI runs on hardware which allows it to.

                                              For the moment, it's still useful to have management skills (and to know about and use Popperian falsification rather than verification) so that we can discover and compensate for the weaknesses of the AI.

                                              • wartywhoa23 3 hours ago

                                                > TV was the "idiot box."

                                                TV is the uber idiot box, the overlord of the army of portable smart idiot boxes.

                                                • chairmansteve 3 hours ago

                                                  "Socrates worried writing would destroy memory".

                                                  He may have been right... Maybe our minds work in a different way now.

                                                  • vladms 4 hours ago

                                                    > That said, I'm not sure "they've always been wrong before" proves they're wrong now.

                                                    I think a better framing would be "abusing (using it too much or for everything) any new tool/medium can lead to negative effects". It is hard to clearly define what is abuse, so further research is required, but I think it is a healthy approach to accept there are downsides in certain cases (that applies for everything probably).

                                                    • lr4444lr 2 hours ago

                                                      Were any of the prior fears totally wrong?

                                                      • BlackFly 3 hours ago

                                                        If you realize that what we remember are the extremized strawman versions of the complaints, then you can see that they were not wrong.

                                                        Writing did eliminate the need for memorization. How many people could quote a poem today? When oral history was predominant, it was necessary for someone in each tribe to learn the stories. We have much less of that today. Writing preserves accuracy much better (up to conquerors burning down libraries, whereas before it would have taken genocide), but to hear a person stand up and quote Desiderata from memory is a touching experience of the human condition.

                                                        Scribes took over that act of memorization. Copying something lends itself to memorization. If you have ever volunteered extensively for Project Gutenberg you can witness a similar experience: reading for typos solidifies the story in your mind in a way that casual reading doesn't. In losing scribes we lost the prioritization of texts and a class of person with intimate knowledge of important historical works. With the addition of copyright we have even lost some texts. We gained the higher availability of works and lower marginal costs. The lower marginal costs led to...

                                                        Pulp fiction. I think very few people (but I would be disappointed if it were no one) would argue that Dan Brown's Da Vinci Code is on the same level as War and Peace. From there came magazines on even cheaper paper; rags, some would call them (or reserve that word for tabloids). Of course this also enabled newspapers to flourish. People started to read things for entertainment, and text lost its solemnity. The importance of the written word diminished on average as the words being printed became more banal.

                                                        TV and the internet led to the destruction of printed news, and so on. This is already a wall of text so I won't continue, but you can see how it goes:

                                                        Technology is a double edged sword, we may gain something but we also can and did lose some things. Whether it was progress or not is generally a normative question that often a majority agrees with in one sense or another but there are generational differences in those norms.

                                                        In the same way that overuse of a calculator leads to atrophy of arithmetic skills, overuse of a car leads to atrophy of walking muscles, why wouldn't overuse of a tool to write essays for you lead to atrophy of your ability to write an essay? The real reason to doubt the study is because its conclusion seems so obvious that it may be too easy for some to believe and hide poor statistical power or p-hacking.

                                                        • darkwater 2 hours ago

                                                          I think your take is almost irrefutable, unless you frame human history as the only possible way to achieve current humanity status and (unevenly distributed) quality of life.

                                                          I also find exhausting the Socrates reference that's ALWAYS brought up in these discussions. It is not the same. Losing the collective ability to recite a 10,000-word poem by heart because of books is not the same thing as stopping thinking because an AI is doing the thinking for you.

                                                          We keep adding automation layers on top of the previous ones. The end goal would be _thinking_ of something and having it materialize in computer and physical form. That would be the extreme. Would people keep comparing it to Socrates?

                                                        • direwolf20 3 hours ago

                                                          How do we know they were wrong before?

                                                          • piyuv 3 hours ago

                                                            None of the examples you provided were being sold as “intelligence”

                                                            • bowsamic 4 hours ago

                                                              > they've always been wrong before

                                                              Were they? It seems that often the fears came true, even Socrates’

                                                              • TheOtherHobbes 3 hours ago

                                                                Writing didn't destroy memory, it externalised it and made it stable and shareable. That was absolutely transformative, and far more useful than being able to re-improvise a once-upon-a-time heroic poem from memory.

                                                                It hugely enhanced synthetic and contextual memory, which was a huge development.

                                                                AI has the potential to do something similar for cognition. It's not very good at it yet, but externalised cognition has the potential to be transformative in ways we can't imagine - in the same way Socrates couldn't imagine Hacker News.

                                                                Of course we identify with cognition in a way we didn't do with rote memory. But we should possibly identify more with synthetic and creative cognition - in the sense of exploring interesting problem spaces of all kinds - than with "I need code to..."

                                                                • Akronymus an hour ago

                                                                  > AI has the potential to do something similar for cognition. It's not very good at it yet, but externalised cognition has the potential to be transformative in ways we can't imagine - in the same way Socrates couldn't imagine Hacker News.

                                                                  Wouldnt the endgame of externalized cognition be that humans essentially become cogs in the machine?

                                                                  • latexr an hour ago

                                                                    > in the same way Socrates couldn't imagine Hacker News.

                                                                    Perhaps he could. If there’s an argument to be made against writing, social media (including HN) is a valid one.

                                                                • raincole 2 hours ago

                                                                  > "they've always been wrong before"

                                                                  In my opinion, they've almost always been right.

                                                                  In the past two decades, we've seen the less-tech-savvy middle managers who devalued anything done on a computer. They seemed to believe that doing graphic design or digital painting was just pressing a few buttons on the keyboard and the computer would do the job for you. These people were constantly mocked in online communities.

                                                                  In the programmers' world, you have seen people who said "how hard could it be? It's just adding a new button/changing the font/whatever..."

                                                                  And strangely, in the end those tech muggles were the insightful ones.

                                                                • carterschonwald 6 hours ago

                                                                  idk, if anything I'm thinking more. There's the idea that I might be able to build everything I've ever planned out. At least the way I'm using them, it's like the perfect assistive device for my flavor of ADHD: I get an interactive notebook I can talk through crazy stuff with. No panacea for sure, but I'm so much higher functioning it's surreal. I'm not even using them in the volume many folks claim, more like pair programming with a somewhat mentally ill junior colleague. Much faster than I'd otherwise be.

                                                                  This actually does include a crazy amount of long-form LaTeX expositions on a bunch of projects I'm having a blast iterating on. I must be experiencing what it's almost like not having ADHD.

                                                                  • ensocode 5 hours ago

                                                                    Maybe it’s not that we’re getting stupid because we don’t use our brains anymore. It’s more like having a reliable way to make fire — so we stop obsessing over sparks and start focusing on building something more important.

                                                                    • jack_pp 3 hours ago

                                                                      Instead of being the architect, engineer, plumber, electrician, carpenter you can (most of the time) just be the architect/planner. You for sure need to know how everything works in case LLMs mess the low level stuff up but it sure is nice not needing to lay bricks and dig ditches anymore and just build houses.

                                                                      • discreteevent 3 hours ago

                                                                        It won't turn most people into architects. It will turn them into PMs. The function of PMs is important but without engineers you are not going to build a sustainable system. And an LLM is not an engineer.

                                                                        • jack_pp an hour ago

                                                                          If you already are an engineer it frees you up to be an architect.

                                                                          If you aren't, then sure you'll be a PM with a lackluster team of engineers.

                                                                          LLMs can engineer small well defined functions / scripts rather well in my experience. Of course it helps to be able to understand what it outputs and prod it to engineer it just the way you want it. Still faster than me writing it from scratch, most of the time. And even if it's the same time as me doing it from scratch it feels easier so I can do more without getting tired.

                                                                      • discreteevent 3 hours ago

                                                                        > Maybe it’s not that we’re getting stupid because we don’t use our brains anymore.

                                                                        The study shows that the brain is not getting used. We will get stupid in the same way that people with office jobs get unhealthy if they don't deliberately exercise.

                                                                      • kminehart 5 hours ago

                                                                        I can definitely relate to the abstract at least. While I am more productive now, and I am way more excited about working on longer term projects (especially by myself), I have found that the minutia is way more strenuous than it was before. I think that inhibits my ability to review what the LLM is producing.

                                                                        I haven't been diagnosed with ADHD or anything but i also haven't been tested for it. It's something I have considered but I think it's pretty underdiagnosed in Spain.

                                                                        • isolli 5 hours ago

                                                                          Indeed, I feel like AI makes it less lonely to work, and for me, it's a net positive. It still has downsides for my focus, but that can be improved...

                                                                          • skrebbel 6 hours ago

                                                                            Can you elaborate on how you use AI for this? Do you do it for coding or for “everything?”

                                                                            • notrealyme123 6 hours ago

                                                                              I am currently writing a paper and I am thinking exactly the same.

                                                                              That must be how normal people feel.

                                                                              • ensocode 5 hours ago

                                                                                I feel the same. Do you think this is because the ADHD brain has so many ideas or is it the same for neuro-normal people?

                                                                              • blackqueeriroh 7 hours ago

                                                                                I encourage folks to listen to brilliant psychologist for software teams Cat Hicks [1] and her wife, teaching neuroscientist Ashley Juavinett [2] on their excellent podcast, Change, Technically discussing the myriad problems with this study: https://www.buzzsprout.com/2396236/episodes/17378968

                                                                                1: https://www.catharsisinsight.com 2: https://ashleyjuavinett.com

                                                                                • probably_wrong 4 hours ago

                                                                                  I'm not a fan of "TL;DR" but I think 52 minutes would qualify. I jumped to a random point of the transcript and found just platitudes, which didn't quite hook me into listening to all of it.

                                                                                  How about some more info on what their main conclusions are?

                                                                                  • albumen an hour ago

                                                                                    They view the framing of the MIT paper not just as bad science, but as a dangerous social tool that uses brain data to "consign people" to being less worthy or "stupid" for using cognitive aids. It flags the paper's alarmist findings as "pseudoscience" designed to provoke fear rather than provide rigorous insight. They highlight several "red flags" in the study's design: lack of a coherent scientific framework, methodological errors like typos, and reliance on invented, undefined terms such as "cognitive debt". They challenge the interpretation of EEG results, explaining that while the paper frames a 55% reduction in connectivity as evidence that a user's "brain sucks," such data could instead indicate increased neural efficiency, an alternative explanation the authors ignore. (EEG measures broad, noisy signals from outside the skull and is better understood as a rough index of brain state than as a precise window into specific thoughts or “intelligence.”)

                                                                                    The hosts condemn the study’s "bafflingly weak" logic and ableist rhetoric, and advise skepticism toward "science communicators" who might profit from selling hardware or supplements related to their findings: one of the paper's lead authors, Nataliya Kosmyna, is associated with the MIT Media Lab and the development of AttentivU, a pair of glasses designed to monitor brain activity and engagement. By framing LLM use as creating a "cognitive debt," the researchers create a market for their own solution: hardware that monitors and alerts the user when they are "under-engaged". The AttentivU system can provide haptic or audio feedback when attention drops, essentially acting as the "scaffold" for the very cognitive deficits the paper warns against. The research is part of the "Fluid Interfaces" group at MIT, which frequently develops Brain-Computer Interface (BCI) systems like "Brain Switch" and "AVP-EEG". This context supports the hosts' suspicion that the paper’s "cognitive debt" theory may be designed to justify a need for these monitoring tools.

                                                                                    • internet_points 2 hours ago

                                                                                      It's a podcast, it goes back and forth between high and low density content. I tried listening to it while working and sometimes had to pause it because it got deep into e.g. explaining EEG, and then it's back to laughing at random stuff.

                                                                                      • woof an hour ago

                                                                                        Summary using Claude 3.7 Sonnet:

                                                                                        "Your Brain On Chat GPT" Paper Analysis

                                                                                        In this transcript, neuroscientist Ashley and psychologist Cat critically analyze a controversial paper titled "Your Brain On Chat GPT" that claims to show negative brain effects from using large language models (LLMs).

                                                                                        Key Issues With the Paper:

                                                                                        Misleading EEG Analysis:

                                                                                        - The paper uses EEG (electroencephalography) to claim it measures "brain connectivity" but misuses the technical methods
                                                                                        - EEG is a blunt instrument that measures thousands of neurons simultaneously, not direct neural connections
                                                                                        - The paper confuses correlation of brain activity with actual physical connectivity

                                                                                        Poor Research Design:

                                                                                        - Small sample size (54 participants, with many dropouts)
                                                                                        - Unclear time intervals between sessions
                                                                                        - Vague instructions to participants
                                                                                        - Controlled conditions don't represent real-world LLM use

                                                                                        Overstated Claims:

                                                                                        - Invents terms like "cognitive debt" without defining them
                                                                                        - Makes alarmist conclusions not supported by the data
                                                                                        - Jumps from limited lab findings to broad claims about learning and cognition

                                                                                        Methodological Problems:

                                                                                        - Methods section includes unnecessary equations but lacks crucial details
                                                                                        - Contains basic errors like incorrect filter settings
                                                                                        - Fails to cite relevant established research on memory and learning
                                                                                        - No clear research questions or framework

                                                                                        The Experts' Conclusion:

                                                                                        "These are questions worth asking... I do really want to know whether LLMs change the way my students think about problems. I do want to know if the offloading of cognitive tasks changes my own brain and my own cognition... We need to know these things as a society, but to pretend like this paper answers those questions is just completely wrong."

                                                                                        The experts emphasize that the paper appears designed to generate headlines rather than provide sound scientific insights, with potential conflicts of interest among authors who are associated with competing products.

                                                                                    • Elizer0x0309 35 minutes ago

                                                                                      There's a skill of problem solving that will differentiate winners from losers.

                                                                                      I'm so grateful for AI and always use it to help get stuff done while also documenting the rationale it takes to go from point A to point B.

                                                                                      Although it has failed many times, I've had ZERO problems backtracking, debugging its thinking, understanding what it has done and where it has failed.

                                                                                      We definitely need to bring back courses on "theory of knowledge", the "art of problem solving", etc.

                                                                                      • softwaredoug 13 hours ago

                                                                                        Druids used to decry that literacy caused people to lose their ability to memorize sacred teachings. And they’re right! But literacy still happened and we’re all either dumber or smarter for it.

                                                                                        • alt187 13 hours ago

                                                                                          It's more complex than that. The three pillars of learning are theory (finding out about the thing), practice (doing the thing) and metacognition (being right, or more importantly, being wrong and correcting yourself). Each of those steps reinforces neural pathways. They're all essential in some form or another.

                                                                                          Literacy, books, saving your knowledge somewhere else: these remove the burden of remembering everything in your head, but they don't cut into any of those three processes. So it's an immensely bad metaphor. A more apt one is the GPS, which leaves you with only the practice.

                                                                                          That's where LLMs come in, and they obliterate every single one of those pillars for any mental skill. You never have to learn a thing deeply, because they do the knowing for you. You never have to practice, because the LLM does all the writing for you. And of course, when it's wrong, you're not the one who's wrong. So you learn nothing.

                                                                                          There are ways to exploit LLMs that make your brain grow instead of shrink. You could make them into personalized teachers, catering to each student at their own rhythm. Make them give you problems, instead of ready-made solutions. Only employ them for tasks you already know how to do perfectly. Don't depend on them.

                                                                                          But this isn't the future OpenAI or Anthropic are gonna gift us. Not today, and not in a hundred years, because it's always gonna be more profitable to run a sycophant.

                                                                                          If we want LLMs to be the "better" instead of the "worse", we'll have to fight for it.

                                                                                          • svara 6 hours ago

                                                                                            > Make them give you problems, instead of ready-made solutions

                                                                                            Yes, this is one of my favorite prompting styles.

                                                                                            If you're stuck on a problem, don't ask for a solution, ask for a framework for addressing problems of that type, and then work through it yourself.

                                                                                            Can help a lot with getting unstuck, and the thoughts are still your own. Oftentimes you end up not actually following the framework, but it helps get the ball rolling.

                                                                                            • smileeeee 7 hours ago

                                                                                              Right, nobody gains much of anything by memorizing logarithm tables. But letting the machine tell you what you can even do with a logarithm takes away from your set of abilities, without other learning to make up for it.

                                                                                            • giancarlostoro 6 hours ago

                                                                                              Smartphones, I think, did the most damage. It used to be that you had to memorize people's phone numbers. I'm sure other things, like remembering how to get from your house to someone else's, also involve less cognition when the GPS just tells you every time, instead of you busting out a map and thinking about your route. I've often found that if I preview a route I'm supposed to take, and use Google Street View to see key/unfamiliar parts of my route, I am drastically less likely to get lost, because "oh, this looks familiar! I turn right here!"

                                                                                              My wife had a similar experience. She had some college group project where they had to drive up and down some roads and write about it. She bought a map, and noticed that after reading the map she was more knowledgeable about the area than her sister, who also grew up in the same area.

                                                                                              I think AI is a great opportunity for learning more about your subjects in question from books, and maybe even the AI themselves by asking for sources, always validate your intel from more authoritative sources. The AI just saved you 10 minutes? You can spend those 10 minutes reading the source material.

                                                                                              • zelphirkalt 4 hours ago

                                                                                                About the phone numbers thing: I am now 35 years old. Do I still remember the phone number of one of my best friends from primary school back then? Hell yeah, I do! These days, though, I struggle a bit with phone numbers, mostly because I don't even try. If the number is important, I will save it somewhere. Memorizing it? Nahhh... But sometimes my number brain still does that, and it sees some weird pattern in the number. Stuff like

                                                                                                "+4 and then -2 and then +6 and then -3. Aha! All makes sense! Cannot repeat the digit differences, and need to be whole numbers, so going to the next higher even number, which is 6, which is 3 when halved!"

                                                                                                And then I am kinda proud my brain still works, even if the found "pattern" is hilariously arbitrary.
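                                                                                                That style of mnemonic amounts to remembering the deltas between successive digits rather than the digits themselves. A sketch, with a made-up number (not anyone's real one) whose deltas happen to match the sequence above:

```python
# Hypothetical number chosen so the deltas match: +4, -2, +6, -3
digits = [int(d) for d in "15396"]
deltas = [b - a for a, b in zip(digits, digits[1:])]
print(deltas)  # [4, -2, 6, -3]
```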

                                                                                                • voidnap 4 hours ago

                                                                                                  The worst part about smartphones is their browser/social media. Technically, even dumb phones like the Nokia 3310 had contact lists, so you didn't have to memorize phone numbers. And landlines had speed dial. And my family used a phonebook with a rotary dial telephone. It's not like people had memorized as many numbers as they now have stored in their telephones.

                                                                                                • otikik 2 hours ago

                                                                                                  The ability is still there. My son dutifully memorizes all the lyrics of his favorite band’s songs.

                                                                                                  What the druids/priests were really decrying was that people spent less time and attention on them. Religion was the first attention economy.

                                                                                                  • timeon 4 hours ago

                                                                                                    This comment sounds like a distraction from the topic. The analogy is plausible but it is not the real thing.

                                                                                                    • EGreg 13 hours ago

                                                                                                      Druids? Socrates was famously against books far earlier.

                                                                                                      Funny enough, the reason he gave against books has now finally been addressed by LLMs.

                                                                                                      • firstthrowaway 5 hours ago

                                                                                                        Or, irony was being employed and Socrates wasn't against books, but was instead noting that it's the powerful who are against them, for their facilitating the sharing of ideas across time and space more powerfully than the spoken word ever could. The books are why we even know his name, let alone the things he said.

                                                                                                    • culi 4 hours ago

                                                                                                      My friend works with people in their 20s. She recently brought up her struggle to do the math in her head for when to clock back in from lunch (30 minutes after an arbitrary time). A young coworker's response was "Oh, I just put it into ChatGPT."

                                                                                                      The kids are using ChatGPT for simple maths...
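                                                                                                      For contrast, the computation being outsourced is a one-liner with any standard time library. A sketch, with a hypothetical clock-in time:

```python
from datetime import datetime, timedelta

clock_in = datetime.strptime("11:47", "%H:%M")   # hypothetical lunch start
clock_back = clock_in + timedelta(minutes=30)    # "30 minutes after"
print(clock_back.strftime("%H:%M"))  # 12:17
```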

                                                                                                      • imzadi 5 minutes ago

                                                                                                        Eh. I have a math degree. Aced all the advanced maths. Was the only one to get an A in Diff Eq. I love math. I've never been able to do simple math in my head. I can't even remember the times tables half the time. Simple math isn't really problem solving.

                                                                                                        • Quothling 2 hours ago

                                                                                                          That'll lead to interesting results. I used a couple of LLMs for my Blood Bowl statistics, and they get rather simple math wrong. Which makes sense; they aren't built for math, after all. It's wild how wrong they can get results, though. I'd give the same prompt to 6 different AIs and they'd all get it wrong in 6 different ways.

                                                                                                          On a side note, the most hilarious part of it was when I asked Gemini to do something for me in Google Sheets and it kept referring to it as Excel. Even after I corrected it.

                                                                                                          • volemo an hour ago

                                                                                                            Are you sure the coworker wasn't joking? Because if somebody confessed to me they struggle to add half an hour to a time point, my first reaction would definitely be to laugh it off.

                                                                                                            • booleandilemma 4 hours ago

                                                                                                              It's ok this is just the next level of human evolution. We haven't needed to know how to do basic math since the calculator. Nowadays our AIs can read and write for us too. More obsolete skills. We can focus on higher level things now. No more focusing on sparks, we can focus on building something important. We don't have an attention span over 5 seconds anyway thanks to social media. If you don't get where I'm coming from you probably don't have ADHD but that's fine.

                                                                                                              • forsakenharmony 3 hours ago

                                                                                                                You need a spark to start a fire. If you offload everything to the LLM, you won't understand the higher-level things.

                                                                                                                • booleandilemma 2 hours ago

                                                                                                                  Completely agree fwiw. My comment sarcastically paraphrased a few other AI slop lovers I've seen in this comment section.

                                                                                                            • HPsquared 12 minutes ago

                                                                                                              Full title is clearer: "when using an AI assistant for Essay Writing Task"

                                                                                                              • windowpains 8 minutes ago

                                                                                                                I wonder if a similar thing makes managers dumb. As a manager, you have people doing work you oversee, a very similar dynamic to using an AI assistant. Sometimes the AI/subordinate makes a mistake, so you have to watch for that, but for the most part they can be trusted.

                                                                                                                If that’s true, then maybe we could leverage what we know about good management of human subordinates and apply it to AI interaction, and vice versa.

                                                                                                                • netsharc 14 hours ago

                                                                                                                  An obvious comparison is probably the habitual usage of GPS navigation. Some people blindly follow them and some seemingly don't even remember routes they routinely take.

                                                                                                                  • nerdsniper 14 hours ago

                                                                                                                    I found a great fix for this was to lock my screen maps to North-Up. That teaches me the shape of the city and greatly enhances location/route/direction awareness.

                                                                                                                    It’s cheap, easy, and quite effective to passively learn the maps over the course of time.

                                                                                                                    My similar ‘hack’ for LLMs has been to try to “race” the AI. I’ll type out a detailed prompt, then go dive into solving the same problem myself while it chews through thinking tokens. The competitive nature of it keeps me focused, and it’s rewarding when I win with a faster or better solution.

                                                                                                                    • layman51 13 hours ago

                                                                                                                      That's a great tip, but I know some people hate it because of the extra cognitive load: if they rely more on visuals, they have to think harder about which way to turn or face when they first start a route, or when making turns on unfamiliar routes.

                                                                                                                      I also wanted to mention that just spending some time looking at the maps and comparing differences in each services' suggested routes can be helpful for developing direction awareness of a place. I think this is analogous to not locking yourself into a particular LLM.

                                                                                                                      Lastly, I know that some apps might have an option to give you only alerts (traffic, weather, hazards) during your usual commute so that you're not relying on turn-by-turn instructions. I think this is interesting because I had heard that many years ago, Microsoft was making something called "Microsoft Soundscape" to help visually impaired users develop directional awareness.

                                                                                                                      • imp0cat 6 hours ago

                                                                                                                        > some cognitive load

                                                                                                                        That's the entire point of it though, to make you more aware of where you are and which way you should go.

                                                                                                                        • LtWorf 5 hours ago

                                                                                                                          Extra cognitive load while driving isn't the smartest idea probably.

                                                                                                                          • imp0cat 5 hours ago

                                                                                                                            That's debatable.

                                                                                                                            It is hard to gain location awareness and get better at navigating without extra cognitive load. You have to actively train your brain to get better; there is no easy way that I know of.

                                                                                                                      • iib 3 hours ago

                                                                                                                        This is explained in more detail in the book "Human Being: reclaim 12 vital skills we’re losing to technology", which I think I found on HN a few months ago.

                                                                                                                        The first chapter goes into human navigation and it gives this exact suggestion, locking the North up, as a way to regain some of the lost navigational skills.

                                                                                                                        • themk 4 hours ago

                                                                                                                          I actually noticed this as a kid. One of the early GTA games had a north-locked minimap, and I knew the city well. Later ones did not, and I was always more confused.

                                                                                                                          I've pretty much always had GPS nav locked to North-Up because of this experience.

                                                                                                                          • hombre_fatal 13 hours ago

                                                                                                                            I try using north-up for that reason, but it loses the smart-zooming feature you get with the POV camera, like zooming in when you need to perform an action, and zooming back out when you're on the highway.

                                                                                                                            I was shocked into using it when I realized that when using the POV GPS cam, I couldn't even tell you which quadrant of the city I just navigated to.

                                                                                                                            I wish the north-up UX were more polished.

                                                                                                                            • simulator5g 12 hours ago

                                                                                                                              Unpolished north-up mode is a feature, the stakeholders want addicted users.

                                                                                                                            • Liftyee 13 hours ago

                                                                                                                              I haven't tried this technique yet, sounds interesting.

                                                                                                                              Living in a city where phone-snatching thieves are widely reported on built my habit of quickly memorising the next couple of steps (e.g. 2nd street on the left, then right by the station), then looking out for them without the map. North-Up helps anyway because you don't have to separately figure out which erratic direction the magnetic compass has picked this time (maybe it's to do with the magnetic stuff I EDC.)

                                                                                                                              • netsharc 13 hours ago

                                                                                                                                Yeah, I'm a North-Up cult member too, after seeing a behind the scenes video of Jeremy Clarkson from Top Gear suggesting it, claiming "never get lost again".

                                                                                                                              • zelphirkalt 4 hours ago

                                                                                                                                I think a big part of not knowing regularly taken routes is just over-reliance on GPS and the self-doubt that comes with it. When I am in a foreign city, I check the map for how to walk somewhere. I can easily remember some sequence of left and right turns. But in reality I still look at the map and my position again, to "make sure" I am still on the right track. Sometimes I check so often that I become annoyed at myself for all the phone-checking, and then I intentionally try not to look for a while. It is stressful to follow the OCD or whatever and check at every turn. If I don't have to check at every turn to sync my understanding of where I am with my position on the map, then I have more awareness of the surroundings, might enjoy them more, and might even feel free to choose another, more interesting-looking path.

                                                                                                                                Given this experience, I am not sure whether people really don't know their regularly taken routes, or whether they just lack confidence in their familiarity with them.

                                                                                                                                • jchw 13 hours ago

                                                                                                                                  I recall reading that over-reliance on GPS navigation is legitimately bad for your brain health.

                                                                                                                                  https://www.nature.com/articles/s41598-020-62877-0

                                                                                                                                  This is rather scary. Obviously, it makes me think of my own personal over-reliance on GPS, but I am really worried about a young relative of mine, whose car will remain stationary for as long as it takes to get a GPS lock... indefinitely.

                                                                                                                                  • raincole 2 hours ago

                                                                                                                                    Yes. My father never uses GPS at all. He memorized all the main roads in our city.

                                                                                                                                    It's amazing to see how he navigates the city. But however amazing it is, he's only right perhaps 95 times out of 100, and that number will only go down as he gets older. Meanwhile, the 99.99%-correct answer is right there on the dashboard.

                                                                                                                                    • codazoda 13 hours ago

                                                                                                                                      I have ALWAYS had this problem. It's like my brain thinks places I frequent are unimportant details and ejects them to make room for other things.

                                                                                                                                      I have to visit a place several times and with regularity to remember it. Otherwise, out it goes. GPS has made this a non-issue; I use it frequently.

                                                                                                                                      For me, however, GPS didn't cause the problem. I was driving for 5 or 6 years before it became ubiquitous.

                                                                                                                                      • stephen_g 13 hours ago

                                                                                                                                        This is one I've never found really affects me. I think it's because I always plan that by the third or fourth time I go somewhere I won't use the navigation, so I'm in the mindset of needing to remember the turns, which lane I should be in, etc.

                                                                                                                                        I'm not sure how that maps onto LLM use. I have avoided it almost completely because I've seen colleagues fall into really bad habits (like spending days adjusting prompts to try to get them to generate code that fixes an issue we could have worked through together in about two hours). I can't see an equivalent way to avoid just starting to outsource your thinking...

                                                                                                                                        • yndoendo 13 hours ago

                                                                                                                                          Some people have the ability to navigate with land markers quickly and some people don't.

                                                                                                                                          I saw this first hand with coworkers. We would have to navigate large buildings. I could easily find my way around, while others didn't know whether to take a left or right turn off the elevators.

                                                                                                                                          That ability has nothing to do with GPS. Some people need more time for their navigation skills to kick in. Just like some people need to spend more time on Math, Reading, Writing, ... to be competent compared to others.

                                                                                                                                          • iammjm 4 hours ago

                                                                                                                                            I think it has much to do with the GPS. Having a GPS allows you to turn off your brain: you just go on autopilot. Without a GPS you actually have to create and update a mental model of where you are and where you are going to: maybe preplan your route, count the doors, list a sequence of left-right turns, observe for characteristic landmarks and commit them to memory. Sure, it is a skill, but it is sure to not be developed if there's no need for it. I suspect it's similar with AI-assisted coding or essay writing.

                                                                                                                                        • k8sToGo 6 hours ago

                                                                                                                                          The title is missing an important part "... for Essay Writing Task"

                                                                                                                                          • 0dayz 3 hours ago

                                                                                                                                            It's a bit tiring seeing these extreme positions on AI sticking out time and time again. AI is not some cure-all for code stagnation or creating products, nor is it destroying productivity.

                                                                                                                                            It's a tool, and this study at most indicates that we use less brain power for the specific task of coding. But do they look into, for instance, the maintenance or management of code?

                                                                                                                                            As that is what you'll be relegated to when vibe coding.

                                                                                                                                            • yomismoaqui 3 hours ago

                                                                                                                                              Lukewarm opinions on the Internet? Where do you think we are...? We only deal in absolutes here.

                                                                                                                                            • misswaterfairy 15 hours ago

                                                                                                                                              It seems this study has been discussed on HN before, though it was revised in very late December 2025.

                                                                                                                                              https://arxiv.org/abs/2506.08872

                                                                                                                                            • captain_coffee 14 hours ago

                                                                                                                                              Curious what the long-term effects of the current LLM-based "AI" systems, embedded in virtually everything and pushed aggressively, will be in, say, 10 years. Any strong opinions or predictions on this topic?

                                                                                                                                              • m4rtink 13 hours ago

                                                                                                                                                Like with asbestos and lead paint, we are building surprises today for the people of tomorrow!

                                                                                                                                                And asbestos and lead paint were actually useful.

                                                                                                                                                • yesco 14 hours ago

                                                                                                                                                  If we focus only on the impact on linguistics, I predict things will go something like this:

                                                                                                                                                  As LLM use normalizes for essay writing (email, documentation, social media, etc), a pattern emerges where everyone uses an LLM as an editor. People only create rough drafts and then have their "editor" make it coherent.

                                                                                                                                                  Interestingly, people might start using said editor prompts to express themselves, causing an increased range in distinct writing styles. Despite this, vocabulary and semantics as a whole become more uniform. Spelling errors and typos become increasingly rare.

                                                                                                                                                  In parallel, people start using LLMs to summarize content in a style they prefer.

                                                                                                                                                  Both sides of this gradually converge. Content gets explicitly written in a way that is optimized for consumption by an LLM, perhaps a return to something like the semantic web. Authors write content in a way that encourages a summarizing LLM to summarize as the author intends for certain explicit areas.

                                                                                                                                                  Human languages start to evolve in a direction that could be considered more coherent than before, and perhaps less ambiguous. Language is the primary interface an LLM uses with humans, so even if LLM use becomes baseline for many things, if information is not being communicated effectively then an LLM would be failing at its job. I'm personifying LLMs a bit here but I just mean it in a game theory / incentive structure way.

                                                                                                                                                  • Peritract 6 hours ago

                                                                                                                                                    > people might start using said editor prompts to express themselves, causing an increased range in distinct writing styles

                                                                                                                                                    We're already seeing people use AI to express themselves in several contexts, but it doesn't lead to an increased range of styles. It leads to one style, the now-ubiquitous upbeat LinkedIn tone.

                                                                                                                                                    Theoretically we could see diversification here, with different tools prompting towards different voices, but at the moment the trend is the opposite.

                                                                                                                                                    • cluckindan 13 hours ago

                                                                                                                                                      >Human languages start to evolve in a direction that could be considered more coherent than before

                                                                                                                                                      Guttural vocalizations accompanied by frantic gesturing towards a mobile device, or just silence and showing of LLM output to others?

                                                                                                                                                      • yesco 10 hours ago

                                                                                                                                                        I was primarily discussing written language in my post, as that's easier to speculate on.

                                                                                                                                                        That said, if most people turn into hermits and start living in pods around this period, then I think you would be in the right direction.

                                                                                                                                                      • basch 11 hours ago

                                                                                                                                                        >People only create rough drafts and then have their "editor" make it coherent.

                                                                                                                                                        While sometimes I do dump a bunch of scratch work and ask for it to be transformed into organized thought, more often I find that I use LLM output the opposite way.

                                                                                                                                                        Give a prompt. Save the text. Reroll. Save the text. Change the prompt, reroll. Then go through the heap of vomit to find the diamonds. It's sort of a modern version of "write drunk, edit sober", with the LLM being the alcohol in the drunk half of me. It can work as a brainstorming step to turn fragments of thought into a bunch of drafts, to be edited down into elegant thought. Asking the LLM to synthesize its own drafts usually discards the best nuggets for lesser variants.

                                                                                                                                                      • netsharc 14 hours ago

                                                                                                                                                        Hopefully the brainrot will mean older developers, who know how to code the old-fashioned way, don't get replaced so quickly..

                                                                                                                                                        • nly 4 hours ago

                                                                                                                                                          Or they'll be fired for not working fast enough, which already happens

                                                                                                                                                        • SecretDreams 14 hours ago

                                                                                                                                                          It'll be a lot like giving children all the answers without teaching them how to get the answers for themselves.

                                                                                                                                                          • binary132 13 hours ago

                                                                                                                                                            Most people will continue to become dumber. Some people will try to embrace and adapt. They will become the power-stupids. Others will develop a sort of immune reaction to AI and develop into a separate evolutionary family.

                                                                                                                                                          • MarkusWandel an hour ago

                                                                                                                                                            Junk food and sedentary lifestyle for your brain. What could possibly go wrong.

                                                                                                                                                            • treenode 3 hours ago

                                                                                                                                                              I don't see why this is unexpected. 'Using your brain actively vs evaluating AI' is neurally equivalent to 'active recall vs reading notes'.

                                                                                                                                                              • HPsquared an hour ago

                                                                                                                                                                It's a specific case of the general symptoms of "your brain on lazy shortcuts"

                                                                                                                                                                • canxerian 4 hours ago

                                                                                                                                                                  My use case for ChatGPT is to delegate mental effort on certain tasks, so that I can pour my mental energy into things I truly care about, like family, certain hobbies, and relationships.

                                                                                                                                                                  If you are feeling over-reliant on these tools, then a quick fix that's worked for me is to have real conversations with real people. Organise a coffee date if you must.

                                                                                                                                                                  • coopykins 5 hours ago

                                                                                                                                                                    When I have to put together a quick fix, I reach for Claude Code these days. I know I can give it the specifics and, in my recent experience, it will find the issue and propose a fix. Now I have two options: I can trust it, or I can dig in and understand why it's happening myself. I sacrifice gaining knowledge for time. I often choose the latter, and put my time into areas I think are more important than this, but I'm aware of it.

                                                                                                                                                                    If you give up your hands-on interaction with a system, you will lose your insight about it.

                                                                                                                                                                    When you build an application yourself, you know every part of it. When you vibe code, trying to debug something in there is a black box of code you've never seen before.

                                                                                                                                                                    That is one of the concerns I have when people suggest that LLMs are great for learning. I think the opposite: they're great for skipping 'learning' and just getting the results. Learning comes from doing the grunt work.

                                                                                                                                                                    I use LLMs to find stuff often, when I'm researching or I need to write an ADR, but I do the writing myself, because otherwise it's easy to fall into the trap of thinking that you know what the 'LLM' is talking about, when in fact you are clueless about it. I find it harder to write about something I'm not familiar with, and then I know I have to look more into it.

                                                                                                                                                                    • wtetzner 5 hours ago

                                                                                                                                                                      I think LLMs can be great for learning, but not if you're using them to do work for you. I find them most valuable for explaining concepts I've been trying to learn, but have gotten stuck and am struggling to find good resources for.

                                                                                                                                                                      • ensocode 5 hours ago

                                                                                                                                                                        > I think the opposite, they're great for skipping 'learning' and just getting the results.

                                                                                                                                                                        yes, and cars skip the hours of walking, planes skip weeks of swimming, calculators skip the calculating ...

                                                                                                                                                                      • potatoman22 13 hours ago

                                                                                                                                                                        I've definitely noticed an association between how much I vibe code something and how good my internal model of the system is. That bit about LLM users not being able to quote their essay resonates too: "oh we have that unit test?"

                                                                                                                                                                        • samthebaam 5 hours ago

                                                                                                                                                                          This has been the same argument since the invention of pen and paper. Yes, these tools reduce engagement, immediate recall, and memory, but they also free up energy to focus on more and larger problems.

                                                                                                                                                                          The study seems to focus only on the first part and not on the other end of it.

                                                                                                                                                                          • wesleywt 4 hours ago

                                                                                                                                                                            Without engagement with the material you are studying, you will not have the context to know, and therefore focus on, the larger problem. Deep immersion in the material allows you to make the connections. With AI spoon-feeding you, you will not have that immersion.

                                                                                                                                                                          • yndoendo 13 hours ago

                                                                                                                                                                            How can you validate ML content when you don't have educated people?

                                                                                                                                                                            Believing everything ML produces is just short-circuiting the brain.

                                                                                                                                                                            I see the AI wars as creating coherent stories. Company X starts using ML and believes what it produced is valid and can grow their stock. The reality is that Company Y poisoned the ML, and the product or solution will fail, not right away but over time.

                                                                                                                                                                            • foota 14 hours ago

                                                                                                                                                                              Imo programming with AI is fairly different between vibes-based, where you don't look at the output at all, and using AI to complete tasks. I still feel engaged when I'm more actively "working with" the AI, as opposed to a more hands-off "do X for me".

                                                                                                                                                                              I don't know that the same makes as much sense to evaluate in an essay context, because it's not really the same. I guess the equivalent would be having an existing essay (maybe written by yourself, maybe not) and using AI to make small edits to it like "instead of arguing X, argue Y then X" or something.

                                                                                                                                                                              Interestingly I find myself doing a mix of both "vibing" and more careful work, like the other day I used it to update some code that I cared about and wanted to understand better that I was more engaged in, but also simultaneously to make a dashboard that I used to look at the output from the code that I didn't care about at all so long as it worked.

                                                                                                                                                                              I suspect that the vibe coding would be more like drafting an essay from the mental engagement POV.

                                                                                                                                                                              • uriegas 13 hours ago

                                                                                                                                                                                I find it very useful for code comprehension. For writing code it still struggles (at least codex), and sometimes I feel I could have written the code myself faster rather than correcting it every time it does something wrong.

                                                                                                                                                                                Jeremy Howard argues that we should use LLMs to help us learn; once you let them reason for you, things go bad and you start accruing cognitive debt. I agree with this.

                                                                                                                                                                                • falloutx 14 hours ago

                                                                                                                                                                                  AI is not a great partner to code with. For me, I just use it for boilerplate and to fill in the tedious gaps. Even for translations it's bad if you know both languages. The biggest issue is that AI constantly tries to steer you wrong; it's so subtle in programming that you only realize it a week later, when you're stuck in a vibe-coding quagmire.

                                                                                                                                                                                  • foota 13 hours ago

                                                                                                                                                                                    shrug YMMV. I was definitely a bit of a luddite for a while, and I still definitely don't consider myself an "AI person", but I've found them useful. I can have them do legitimately useful things, with varying degrees of supervision.

                                                                                                                                                                                    I wouldn't ask Cursor to go off and write software from scratch that I need to take ownership of, but I'm reasonably comfortable at this point having it make small changes under direction and with guidance.

                                                                                                                                                                                    The project I mentioned above was adding otel tracing to something, and it wrote a trace viewing UI that has all the features I need and works well, without me having to spend hours getting it set up.

                                                                                                                                                                                • jchw 13 hours ago

                                                                                                                                                                                  I try my best to make meta-comments sparingly, but, it's worth noting the abstract linked here isn't really that long. Gloating that you didn't bother to read it before commenting, on a brief abstract for a paper about "cognitive debt" due to avoiding the use of cognitive skills, has a certain sad irony to it.

                                                                                                                                                                                  The study seems interesting, and my confirmation bias also does support it, though the sample size seems quite small. It definitely is a little worrisome, though framing it as being a step further than search engine use makes it at least a little less concerning.

                                                                                                                                                                                  We probably need more studies like this, across more topics and with larger samples, but if we're all forced to use LLMs at work, I'm not sure how much good it will do in the end.

                                                                                                                                                                                  • curl-up 4 hours ago

                                                                                                                                                                                    The prompt they use in `Figure 28` is a complete mess, all the way from opening with "Your are an expert" to the highly overlapping categories to the poorly specified JSON with no clear direction on how to fill in those fields.

                                                                                                                                                                                    A similar mess can be found in `Figure 34`, with the added bonus of "DO NOT MAKE MISTAKES!" and "If you make a mistake you'll be fined $100".

                                                                                                                                                                                    Also, why are all of these research papers always using such weak LLMs to do anything? All of this makes their results very questionable, even if they mostly agree with "common intuition".

                                                                                                                                                                                    • pfannkuchen 13 hours ago

                                                                                                                                                                                      Talking to LLMs reminds me of arguing with a certain flavor of Russian. When you clarify based on a misunderstanding of theirs, they act like your clarification is a fresh claim which avoids them ever having to backpedal. It strikes me as intellectually dishonest in a way I find very grating. I do find it interesting though as the incentives that produce the behavior in both cases may be similar.

                                                                                                                                                                                      • boomlinde 4 hours ago

                                                                                                                                                                                        "What you said just now isn't true at all and you should reconsider the premise"

                                                                                                                                                                                        "Exactly!"

                                                                                                                                                                                      • spongebobstoes 12 hours ago

                                                                                                                                                                                        the article suggests that the LLM group had better essays as graded by both human and AI reviewers, but they used less brain power

                                                                                                                                                                                        this doesn't seem like a clear problem. perhaps people can accomplish more difficult tasks with LLM assistance, and in those more difficult tasks still see full brain engagement?

                                                                                                                                                                                        using less brain power for a better result might instead reveal shortcomings in our education system, since these were SAT-style questions. I'm sure calculator users experience the same effect vs mental arithmetic

                                                                                                                                                                                        • kachapopopow 3 hours ago

                                                                                                                                                                                          I mean, I think this is okay. I can't do math in my head at all, and it hasn't stopped me from solving mathematical problems. You might not be able to write code, but you are still the primary problem solver (for now).

                                                                                                                                                                                          I have actually been improving in other areas instead, like design, general cleanliness of the code, future extensibility, and bug prediction.

                                                                                                                                                                                          My brain is not 'normal' either so your mileage might vary.

                                                                                                                                                                                          • nothrowaways 14 hours ago

                                                                                                                                                                                            > Cognitive activity scaled down in relation to external tool use

                                                                                                                                                                                            • moron4hire an hour ago

                                                                                                                                                                                              ChatGPT got me over my imposter syndrome.

                                                                                                                                                                                              Back when it came out, it was all the rage at my company and we were all trying it for different things. After a while, I realized, if people were willing to accept the bullshit that LLMs put out, then I had been worrying about nothing all along.

                                                                                                                                                                                              That, plus the fact that getting an LLM to write anything with meaning requires putting the meaning in the prompt, pushed me to finally stop agonizing over emails and just write the damn things as simply and concisely as possible. I don't need a bullshit engine inflating my own words to say what I already know, just to have someone on the other end use the same bullshit engine to strip all that extra fluff back out into a summary. I can just make the point straight away and send it immediately.

                                                                                                                                                                                              You can literally just say anything in an email and nobody is going to say it's right or wrong, because they themselves don't know. Hell, they probably aren't even reading it. Most of the time I'm replying just to let someone know I read their email so they don't have to come to my office later and ask me if I read the email.

                                                                                                                                                                                              Every time someone says the latest release is a "game changer", I check back out of morbid curiosity. Still don't see what games have changed.

                                                                                                                                                                                              • mrvmochi 13 hours ago

                                                                                                                                                                                                I wonder what would happen if we used RL to minimize the user's cognitive debt. Could this lead to the creation of an effective tutor model?

                                                                                                                                                                                                • alt187 13 hours ago

                                                                                                                                                                                                  Definitely, but it won't be created by any of the known AI companies anytime soon. I have a hard time seeing how this would be profitable.

                                                                                                                                                                                                  It also goes against the main ethos of the AI sect, which is to "stress-test" the AI against everything and everyone, so there's that.

                                                                                                                                                                                                • falloutx 14 hours ago

                                                                                                                                                                                                  I think a lot more people, especially at the higher end of the pay scale, are in some kind of AI psychosis. I have heard people at work talk about how they use ChatGPT for quick health advice, some ask it for gym advice, and others just dump entire research reports into it and read the summary.

                                                                                                                                                                                                  • tuckwat 13 hours ago

                                                                                                                                                                                                    What does using a chat agent have to do with psychosis? I assume this was also the case when people googled their health results, googled their gym advice and googled for research paper summaries?

                                                                                                                                                                                                    As long as you're vetting your results just like you would any other piece of information on the internet then it's an evolution of data retrieval.

                                                                                                                                                                                                      • falloutx 5 hours ago

                                                                                                                                                                                                        > As long as you're vetting your results

                                                                                                                                                                                                        this is just what AI companies say so they are not held responsible for any legal issues. if a person is searching for a summary of a paper, surely they don't have time to vet the paper.

                                                                                                                                                                                                      • DocTomoe 13 hours ago

                                                                                                                                                                                                        Pathologising those who disagree with a current viewpoint follows a long and proud tradition. "Possessed by demons" of yesteryear, today it's "AI psychosis".

                                                                                                                                                                                                      • mettlerse 14 hours ago

                                                                                                                                                                                                        Article seems long, need to run it through an LLM.

                                                                                                                                                                                                        • lapetitejort 14 hours ago

                                                                                                                                                                                                          Doesn't look like anything to me

                                                                                                                                                                                                          • fhd2 14 hours ago

                                                                                                                                                                                                            Perfection.

                                                                                                                                                                                                          • SecretDreams 14 hours ago

                                                                                                                                                                                                            When you're done, let us know so we can aggregate your summarized comment with the rest of the thread's comments to back out key, human-informed findings.

                                                                                                                                                                                                            • observationist 14 hours ago

                                                                                                                                                                                                              Grug no need think big, Grug brain happy. Magic Rock good!

                                                                                                                                                                                                              • jacquesm 13 hours ago

                                                                                                                                                                                                                That was still one of the best finds on HN in a long time.

                                                                                                                                                                                                                https://grugbrain.dev/

                                                                                                                                                                                                                Carson Gross sure knows how to stay in character.

                                                                                                                                                                                                          • ReptileMan 3 hours ago

                                                                                                                                                                                                            I have a whole phonebook of numbers I know by heart, all from before my first mobile phone; not a single one memorized afterwards. There's a lot of stuff I remembered back when there was no Google; afterwards, I just remembered how to find it using Google. And so on.

                                                                                                                                                                                                            • somewhatrandom9 14 hours ago

                                                                                                                                                                                                              "Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning."

                                                                                                                                                                                                              • bethekidyouwant 13 hours ago

                                                                                                                                                                                                                I'm gonna make a new study: in one arm I give the participants really shitty tools, in the other I give them good tools to build something, and we see which one takes more brain power.

                                                                                                                                                                                                                • fabdav 5 hours ago

                                                                                                                                                                                                                  Agreed. "Reduced muscle development in farmers using a tractor mounted plow: Over four months, mechanical plow users consistently underperformed at lifting weights with respect to the control group who had been using spades. These results raise concerns about the long-term implications of tractor mounted plow reliance and underscore the need for deeper inquiry into tractor mounted plow role in farming."

                                                                                                                                                                                                                • xenophonf 14 hours ago

                                                                                                                                                                                                                  I'm very impressed. This isn't a paper so much as a monograph. And I'm very inclined to agree with the results of this study, which makes me suspicious. To what journal was this submitted? Where's the peer review? Has anyone gone through the paper (https://arxiv.org/pdf/2506.08872) and picked it apart?

                                                                                                                                                                                                                  • DocTomoe 13 hours ago

                                                                                                                                                                                                                    I love the parts where they point out that human evaluators gave wildly different evaluations as compared to an AI evaluator, and openly admitted they dislike a more introverted way of writing (fewer flourishes, less speculation, fewer random typos, more to the point, more facts) and prefer texts with a little spunk in it (= content doesn't ultimately matter, just don't bore us.)

                                                                                                                                                                                                                  • newswasboring 4 hours ago

                                                                                                                                                                                                                    "For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem [275b] to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise."

                                                                                                                                                                                                                    - Socrates on writing, in Plato's Phaedrus.

                                                                                                                                                                                                                    • bethekidyouwant 14 hours ago

                                                                                                                                                                                                                      “LLM users also struggled to accurately quote their own work” - why are these studies always so laughably bad?

                                                                                                                                                                                                                      The last one I saw was about smartphone users who do a test and then quit their phone for a month and do the test again and surprisingly do better the second time. Can anyone tell me why they might have paid more attention, been more invested, and done better on the test the second time round right after a month of quitting their phone?

                                                                                                                                                                                                                      • orliesaurus 14 hours ago

                                                                                                                                                                                                                        I think I can guess this article without reading it: I've never been on major drugs, even medically speaking, yet using AI makes me feel like I'm on some potent drug that's eating my brain. What's state management? What's this hook? Who cares, send it to Claude or whatever.

                                                                                                                                                                                                                        • kilpikaarna 6 hours ago

                                                                                                                                                                                                                          > what's state management? what's this hook? who cares

                                                                                                                                                                                                                          Incidentally how I feel about React regardless of LLMs. Putting Claude on top is just one more incomprehensible abstraction.

                                                                                                                                                                                                                          • tuckwat 13 hours ago

                                                                                                                                                                                                                            It's just a different way of writing code. Today you at least need to understand best practices to help steer towards a good architecture. In the near future there will be no developers needed at all for the majority of apps.

                                                                                                                                                                                                                            • cluckindan 13 hours ago

                                                                                                                                                                                                                              That just means the majority of apps don’t actually serve much of a purpose

                                                                                                                                                                                                                              • noman-land 11 hours ago

                                                                                                                                                                                                                                What if the future of apps is serving a few dozen instead of a few billion?

                                                                                                                                                                                                                              • alt187 13 hours ago

                                                                                                                                                                                                                                Becoming a moron is a different way of writing code?

                                                                                                                                                                                                                                • georgemcbay 13 hours ago

                                                                                                                                                                                                                                  > In the near future there will be no developers needed at all for the majority of apps.

                                                                                                                                                                                                                                    Software CEOs think about this and rub their hands together at all the labor costs they will save creating apps, without thinking one step further and realizing that once you don't need developers to build the majority of apps, your would-be customers don't need the majority of apps at all.

                                                                                                                                                                                                                                  They can have an LLM build their own customized app (if they need to do something repeatedly, or just have the LLM one-off everything if not).

                                                                                                                                                                                                                                  Or use the free app that someone else built with an LLM as most app categories race to the moatless bottom.

                                                                                                                                                                                                                                  • akomtu 13 hours ago

                                                                                                                                                                                                                                      You may be right, but for a different reason: the majority of apps on the Apple and Google app stores will be 100% AI-generated crapware.

                                                                                                                                                                                                                                    • joseangel_sc 13 hours ago

                                                                                                                                                                                                                                      this comment will age badly

                                                                                                                                                                                                                                  • usrbinbash 4 hours ago

                                                                                                                                                                                                                                    No shit? When I outsource thinking to a chatbot, my brain gets less good at thinking? What a complete and utter surprise.

                                                                                                                                                                                                                                    /s

                                                                                                                                                                                                                                    • lacoolj 14 hours ago

                                                                                                                                                                                                                                        Don't even need to read the article if you've been using them. You already know just as well as I do how bad it gets.

                                                                                                                                                                                                                                        A door has been opened that can't be closed and will trap those who stay too long. Good luck!

                                                                                                                                                                                                                                      • ragle 13 hours ago

                                                                                                                                                                                                                                        I hate it, but I'm actually counting on this and how it affects my future earning potential as part of my early(ish) retirement plan!

                                                                                                                                                                                                                                        I do use them, and I also still do some personal projects and such by hand to stay sharp.

                                                                                                                                                                                                                                        Just: they can't mint any more "pre-AI" computer scientists.

                                                                                                                                                                                                                                        A few outliers might get it and bang their head on problems the old way (which is what, IMO, yields the problem-solving skills that actually matter) but between:

                                                                                                                                                                                                                                        * Not being able to mint any more "pre-AI" junior hires

                                                                                                                                                                                                                                        And, even if we could:

                                                                                                                                                                                                                                        * Great migration / Covid era overhiring and the corrective layoffs -> hiring freezes and few open junior reqs

                                                                                                                                                                                                                                        * Either AI or executives' misunderstandings of it and/or use of it as cover for "optimization" - combined with the Nth wave of offshoring we're in at the moment -> US hiring freezes and few open junior reqs

                                                                                                                                                                                                                                        * Jobs and tasks junior hires used to cut their teeth on to learn systems, processes, etc. being automated by AI / RPA -> "don't need junior engineers"

                                                                                                                                                                                                                                        The upstream "junior" source for talent our industry needs has been crippled both quantitatively and qualitatively.

                                                                                                                                                                                                                                        We're a few years away from a _massive_ talent crunch IMO. My bank account can't wait!

                                                                                                                                                                                                                                          Yes, yes. It's analogous to our wizardly greybeard ancestors prophesying that youngsters' inability to write ASM and compile it in their heads would bring the end of days, or insert your similar story from the 90s or 2000s here (or the printing press, or whatever).

                                                                                                                                                                                                                                          The order of magnitude of this "dumbing down" effect feels completely different, though, in a space that one way or another always eventually demands the sort of functional intelligence that only rigorous, hard work on hard problems can yield?

                                                                                                                                                                                                                                        Just my $0.02, I could be wrong.

                                                                                                                                                                                                                                        • risyachka 14 hours ago

                                                                                                                                                                                                                                          Yup. This.

                                                                                                                                                                                                                                        • DocTomoe 14 hours ago

                                                                                                                                                                                                                                            TL;DR: We had one group not do some things, and later found out that they did not learn anything by not doing the things.

                                                                                                                                                                                                                                          This is a non-study.

                                                                                                                                                                                                                                          • keithnz 14 hours ago

                                                                                                                                                                                                                                              No, that isn't accurate. One of the key points is that those previously relying on the LLM still showed reduced cognitive engagement after switching back to unaided writing.

                                                                                                                                                                                                                                            • Miraste 13 hours ago

                                                                                                                                                                                                                                              No, it isn't.

                                                                                                                                                                                                                                              The fourth session, where they tested switching back, was about recall and re-engagement with topics from the previous sessions, not fresh unaided writing. They found that the LLM users improved slightly over baseline, but much less than the non-LLM users.

                                                                                                                                                                                                                                              "While these LLM-to-Brain participants demonstrated substantial improvements over 'initial' performance (Session 1) of Brain-only group, achieving significantly higher connectivity across frequency bands, they consistently underperformed relative to Session 2 of Brain-only group, and failed to develop the consolidation networks present in Session 3 of Brain-only group."

                                                                                                                                                                                                                                                The study also found that the LLM group was largely copy-pasting LLM output wholesale.

                                                                                                                                                                                                                                                The original poster is right: the LLM group didn't write any essays, and later proved not to know much about them. Not exactly groundbreaking. Still worth showing empirically, though.

                                                                                                                                                                                                                                              • DocTomoe 13 hours ago

                                                                                                                                                                                                                                                And how exactly is that surprising?

                                                                                                                                                                                                                                                If you wrote two essays, you have more 'cognitive engagement' on the clock as compared to the guy who wrote one essay.

                                                                                                                                                                                                                                                In other news: If you've been lifting in the gym for a week, you have more physical engagement than the guy who just came in and lifted for the first time.

                                                                                                                                                                                                                                                • greggoB 13 hours ago

                                                                                                                                                                                                                                                  > And how exactly is that surprising?

                                                                                                                                                                                                                                                  Isn't the point of a lot of science to empirically demonstrate results which we'd otherwise take for granted as intuitive/obvious? Maybe in AI-literature-land everything published is supposed to be novel/surprising, but that doesn't encompass all of research, last I checked.

                                                                                                                                                                                                                                                  • DocTomoe 13 hours ago

                                                                                                                                                                                                                                                      If the title of your study both makes a neurotoxin reference ("This is your brain on drugs": egg, pan, plus pearl-clutching) AND introduces a concept stolen and abused from IT and economics ("cognitive debt" implies repayment and refactoring, which is not what they mean, though) ... then I expect a bit more than "we tested this very obvious common-sense thing, and lo and behold, it is just as a five-year-old would have predicted."

                                                                                                                                                                                                                                                    • greggoB an hour ago

                                                                                                                                                                                                                                                      I struggle to see how you're linking your complaint about the wording of the title to your issue with the obviousness of the result - these seem like two completely independent thought processes.

                                                                                                                                                                                                                                                        Also, re cognitive debt being stolen: I'm pretty sure this is actually a modification of sleep debt, which would be a medical/biological term. [0]

                                                                                                                                                                                                                                                      [0] https://en.wikipedia.org/wiki/Sleep_debt

                                                                                                                                                                                                                                                      • Miraste 13 hours ago

                                                                                                                                                                                                                                                        You are right about the content, but it's still worth publishing the study. Right now, there's an immense amount of money behind selling AI services to schools, which is founded on the exact opposite narrative.

                                                                                                                                                                                                                                              • Der_Einzige 13 hours ago

                                                                                                                                                                                                                                                Good. Humans don’t need to waste their mental energy on tasks that other systems can do well.

                                                                                                                                                                                                                                                I want a life of leisure. I don’t want to do hard things anymore.

                                                                                                                                                                                                                                                Cognitive atrophy of people using these systems is very good as it makes it easier to beat them in the market, and it’s easier to convince them that whatever slop work you submitted after 0.1 seconds of effort “isn’t bad, it’s certainly great at delving into the topic!”

                                                                                                                                                                                                                                                Also, monkey see, monkey speak: https://arxiv.org/abs/2409.01754

                                                                                                                                                                                                                                                • latexr 13 hours ago

                                                                                                                                                                                                                                                  > Cognitive atrophy of people using these systems is very good as it makes it easier to beat them in the market

                                                                                                                                                                                                                                                    I hope you’re being facetious, as otherwise that’s a selfish view which will come back to bite you. If you live in a society, what others do and how they behave affects you too.

                                                                                                                                                                                                                                                  A John Green quote on public education feels appropriate:

                                                                                                                                                                                                                                                  > Let me explain why I like to pay taxes for schools even though I personally don’t have a kid in school. It’s because I don’t like living in a country with a bunch of stupid people.

                                                                                                                                                                                                                                                  • Der_Einzige 12 hours ago

                                                                                                                                                                                                                                                    You could maybe give this book a read to understand why calling me "selfish" is a compliment.

                                                                                                                                                                                                                                                    https://en.wikipedia.org/wiki/The_Ego_and_Its_Own

                                                                                                                                                                                                                                                    • latexr 3 hours ago

                                                                                                                                                                                                                                                      It was neither a compliment nor an insult, only a descriptor. I didn’t call you selfish (I don’t know you), but one particular view you described. For all I know, you may be the most altruistic person in other areas of your life, but that particular view is unambiguously selfish. And the least defensible kind of selfish, too, because it only benefits you in the short term but harms you in the long run.

                                                                                                                                                                                                                                                        Either way, that’s not how compliments or insults work. The intent is what matters, not the word.

                                                                                                                                                                                                                                                      For example, amongst finance bros, calling each other a “ruthless motherfucker” can be a compliment. But if your employee calls you that after a round of layoffs, it’s an insult.

                                                                                                                                                                                                                                                • trees101 13 hours ago

                                                                                                                                                                                                                                                  Skill issue. I'm far more interactive when reading with LLMs. I try things out instead of passively reading. I fact check actively. I ask dumb questions that I'd be embarrassed to ask otherwise.

                                                                                                                                                                                                                                                  There's a famous satirical study that "proved" parachutes don't work by having people jump from grounded planes. This study proves AI rots your brain by measuring people using it the dumbest way possible.

                                                                                                                                                                                                                                                  • knitef an hour ago

                                                                                                                                                                                                                                                      Please take this to the top.