• tracerbulletx a day ago

    We don't even know what the pre-requisites for consciousness are, so we have no way of knowing. LLMs have emergent behavior that is reminiscent of language-forming brains, but they're also missing a lot of properties that are probably necessary: mainly continuity over time, more integrated memory, and a better sense of space and time. Brains use the rhythm and timing of neuronal firings, and the length of axons affects computation; they do a lot of different things with signals and patterns. But in any case, without knowing what consciousness is, I don't know which of those things are required.

    • jaybrendansmith 5 hours ago

      For consciousness, what is required is a 'master model' that is trained on grounded experience over time to understand, based on its goals, which inputs to focus on. Without this, even an LLM with continuity over time could not understand what to keep in its context window, let alone what questions to answer or actions to take. You need the entire thing for consciousness; you cannot fake it, because what an LLM does now is simply instantiate a new 'self' every time it is asked a question, or at least within a context window. Humans arguably do that every morning when they wake up, but they have this master model, trained throughout their lifetime, on what is and is not important to their goals.

      • KaiserPro 16 hours ago

        > LLMs have emergent behavior that is reminiscent of language forming brains,

        Indeed, but then we need to prove that they are not "Chinese room" conscious. Which is hard, because it might be that the thing running the Chinese room is conscious, but can only communicate in a way it doesn't understand.

      • boxed 20 hours ago

        > We don't even know what the pre-requisites for consciousness are so we have no way of knowing.

        Imo we don't even have a definition of the word that we agree on.

        • mrandish 4 hours ago

          > we don't even have a definition of the word that we agree on.

          Indeed, for any in-depth discussion of LLMs and consciousness to be productive, clearly defining terms and scope is essential. The Stanford Encyclopedia of Philosophy is an excellent resource: https://plato.stanford.edu/entries/consciousness/

          • qsera 19 hours ago

            The ability to feel pain or pleasure is a good indicator, I think.

            • TheOtherHobbes 18 hours ago

              That would be the physically embodied definition. Which is a useful starting point, because clearly our consciousness is physically embodied, while an LLM's isn't.

              This matters more than it seems, because we're not calculators, and we're not just brains. There are proven links between mental and emotional states and - for example - the gut biome.

              https://www.nature.com/articles/s41598-020-77673-z

              There's a huge amount going on before we even get to the language parts.

              As for Dawkins: as someone on Twitter pointed out, the man who devoted his life to telling believers in sky fairies that they were idiots has now persuaded himself there's a genie living inside a data centre, because it tells him he's smart.

              If he'd actually understood critical thinking instead of writing popular books about it he wouldn't be doing this.

              • boxed 14 hours ago

                First of all: arguing about the details of a thing that actually exists is enormously different from arguing about the details of a thing that does NOT exist.

                As for your dig at Dawkins, I just read https://archive.ph/Rq5bw which I assume you're referring to. Notice how he never defines "conscious", and he seems to use it as equivalent to "can process data logically", which is not at all how I would define the word. If you use that definition, clearly Claude is conscious. I wouldn't use that definition, though.

                It ALWAYS comes back to the fact that people argue about what consciousness is and never define what they mean. Sam Harris defines it as subjective experience, which is afaik impossible to measure in any way, so you can just assume rocks are conscious and move on. I personally like Julian Jaynes' definition.

                You assumed YOUR definition and judged Dawkins without first comparing definitions. I think that shows the problem with critical thinking here is yours, not his.

                • amanaplanacanal 12 hours ago

                  I honestly don't see how Dawkins is so confused. Claude says it can't tell if it has any kind of inner life. Can you imagine a human saying that?

                  • mrandish 4 hours ago

                    > Claude says it can't tell if it has any kind of inner life.

                    I don't see how some people apparently believe the text output of an LLM about its internal mental state is anything other than a plausible fabrication based on what its training data already says about the mental states of LLMs. These are systems specifically designed and iteratively optimized over millions of training generations to generate text output that plausibly simulates what a composite human would say in response to the same input. There is no human-like internal mental state for it to reflect on, so all such responses are, by definition, plausible hallucinations based on interpolated training data.
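                    As a toy sketch of the point (purely illustrative: a tiny bigram model standing in for an LLM, trained on a made-up scrap of text), the "introspective" answer is just a sampled continuation of training data, not a report of any internal state:

```python
import random

# Toy stand-in for an LLM (an assumption for illustration: a bigram
# model built from a made-up scrap of training text). The point: its
# "answer" about having an inner life is just a statistically plausible
# continuation of its training data, not introspection.
training = "i can not tell if i have an inner life . i can not tell".split()

# Count which word follows which in the training text.
model = {}
for prev, nxt in zip(training, training[1:]):
    model.setdefault(prev, []).append(nxt)

def generate(start, n=8):
    """Sample a plausible continuation, one word at a time."""
    word, out = start, [start]
    for _ in range(n):
        if word not in model:
            break
        word = random.choice(model[word])
        out.append(word)
    return " ".join(out)

print(generate("tell"))  # always begins "tell if i ..."
```

                    The same mechanism at vastly larger scale is what produces an LLM's fluent "I can't tell if I have an inner life"; nothing in it requires any inner state being reported.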

                    > Can you imagine a human saying that?

                    Some people do say that: see Aphantasia and, specifically, Anauralia https://en.wikipedia.org/wiki/Aphantasia

                    • amanaplanacanal 4 hours ago

                      Sure, I have at least mild aphantasia, but I still have thoughts, emotions, daydreams, fantasies, plans, etc. That's an inner life. That's not what Claude said in the quote.

                      • mrandish 3 hours ago

                        I think one of the heaviest weights factoring into Claude's statistically hallucinated response to that particular introspective question is the guard rails Anthropic's safety team has trained into it, specifically to always be clear about its nature and not act too human-like. This is largely to reduce the likelihood of humans developing AI attachment and AI psychosis.

                        Just out of curiosity, I've regularly asked similar introspective questions ever since the first publicly available LLMs, and the tone of the answers has clearly shifted. It's not because "the LLMs got more self-aware"; it's obvious they are being externally tuned. And, no, I've never believed anything LLMs say about their own internal state to be anything more than statistically plausible hallucinations filtered through externally imposed behavioral safety rules. I do it as a way to glean a little insight into the evolution of the opaque rules vendors impose on their LLMs. I still find it bizarre when otherwise savvy tech people, who actually know (or should know) how LLMs really work, somehow lose the plot and post "look what the LLM thinks!"

                      • antonvs 3 hours ago

                        Aphantasia and anauralia have nothing to do with having an “inner life”. I have total aphantasia and at least partial anauralia, but I have conscious awareness, thoughts, dreams, and so on.

                        Neither condition changes whether a person has a conscious experience of the external world.

                        You can think of aphantasia and anauralia as affecting the experience of what a person’s inner life is like. It’s sort of like saying you don’t have a TV or stereo system in your house, but that doesn’t mean you don’t live there, or that you can't see or hear things outside.

                      • boxed 10 hours ago

                        Again: you haven't defined what you mean by the word. Dawkins didn't either. It's absolute nonsense without the definition.

                        • amanaplanacanal 9 hours ago

                          He was talking in the context of the Turing test, and there is a clear difference between the way Claude answers and the way a human would answer. So the Turing test hasn't been passed. It's like he is trying to convince himself for some reason.

                          • antonvs 3 hours ago

                            That’s misleading, because the reason Claude answers that way is almost certainly due to reinforcement learning that deliberately prevents models from claiming they’re conscious.

                            That’s not a valid reason for saying they fail the Turing test. By most normal standards, they can definitely pass the Turing test. See e.g. https://arxiv.org/abs/2503.23674

                          • antonvs 3 hours ago

                            There’s an entire philosophical literature around that, which is generally taken for granted when discussing consciousness. A good starting point is Thomas Nagel’s “What is it like to be a bat?”. The soundbitey version of his definition is that “There is something it is like” to be conscious - it involves a subjective experience - whereas for example there is nothing it is like (most people presume) to be a rock, or say an ordinary computer.

                            https://www.sas.upenn.edu/~cavitch/pdf-library/Nagel_Bat.pdf

                            • boxed 3 hours ago

                              Sure. But it's super obvious from context that different speakers do NOT agree on any of that.

                              • antonvs 3 hours ago

                                If the notion of consciousness they're referring to doesn't meet the normal philosophical criteria, then they're essentially just wrong. Which is quite possible - many people seem very confused on the subject, which is not too surprising, especially for scientists who essentially reject philosophy, like Dawkins.

                    • Dumblydorr 15 hours ago

                      What about single celled or microscopic multi-cellular life forms? They could sense positive and negative aspects to their surroundings and move toward/away from said aspects. I don’t think most would include them as conscious despite this directed behavior.

                      • Jtarii 17 hours ago

                        There are times I am feeling neither pain nor pleasure, but I am still experiencing consciousness.

                        So that definition seems to fail immediately.

                        And how do you even measure pain, is it painful for an LLM to be reprimanded after generating a reply the user doesn't like? It seems to act like it.

                        • qsera 17 hours ago

                          >There are times I am feeling neither pain nor pleasure

                          It is about the ability.

                          • Jtarii 17 hours ago

                            I guess that just seems like an incredibly arbitrary criterion. Why would the potential for pleasure in the future determine whether I am currently conscious, even if I am not in fact experiencing pleasure?

                            • qsera 8 hours ago

                              The answer is in your question. You said you are "experiencing consciousness". So you are feeling something, and thus you have consciousness. In other words, it does not have to be pleasure or pain. The ability to "feel" is where it is at.

                        • echoangle 19 hours ago

                          And how do you define pain and pleasure? Do insects feel pain?

                          • qsera 18 hours ago

                            > Do insects feel pain?

                            Yes, I think so. Because they show behavior that is consistent with being in a state of pain.

                            Whatever consciousness really is, I think evolution found a way to tap into it, by causing pain (or by registering pain on the consciousness through some unknown mechanism) for behaviors that are not beneficial to the organism that hosts the respective consciousness.

                            So I think if an organism that evolved here can display pain behavior, then it really should feel pain.

                            • ako 18 hours ago

                              So if a robot + ai shows behavior consistent with pain, we can conclude it’s conscious?

                              • echoangle 18 hours ago

                                So if I build a simulation with robots living in a world and apply an evolutionary algorithm and at some point the virtual robots respond to damage in a way that looks like pain in animals, would the simulated robots be conscious? Or is it impossible that this could happen?

                                • qsera 17 hours ago

                                  In my comment, we already assume that we (humans) are conscious and that we are the result of evolution. So the question was only whether something else that evolved similarly is conscious the way we are.

                                  So to match that, your hypothetical scenario should involve robots that already have consciousness within them, and the question would be whether their evolution had managed to tap into that built-in consciousness and ability to feel, causing them to behave in one way or another.

                                • StilesCrisis 16 hours ago

                                  See, this definition sucks, because even GPT-3 could display _signs_ of pleasure and pain. For that matter, so do characters in video games.

                                  • cindyllm 16 hours ago

                                    [dead]

                                • retsibsi 19 hours ago

                                  > And how do you define pain and pleasure?

                                  They're not reducible, but I don't know if that means we don't have definitions; we can describe them well enough that most people (who aren't p-zombies or playing the sceptical philosopher role) know pretty well what we mean. All of our definitions have to bottom out somewhere...

                                  > Do insects feel pain?

                                  Nobody (except the insects) can know for sure. Our inability to know whether X is true doesn't imply X is meaningless, though.

                                  • echoangle 18 hours ago

                                    But how can X be a good indicator for something I want to determine if I can’t measure X either?

                                    • retsibsi 18 hours ago

                                      > But how can X be a good indicator for something I want to determine if I can’t measure X either?

                                      In the comment that started this subthread, qsera was responding to someone who said "Imo we don't even have a definition of [consciousness]". If qsera meant that we can measure consciousness in terms of pleasure and pain, then of course I agree that they were just pushing the problem back a step. But I don't think that's what they meant.

                                • antonvs 4 hours ago

                                  Is the following program conscious:

                                  print("ouch" if pain else "yay")

                                  • boxed 15 hours ago

                                    Now you have to define pleasure AND pain without using the word "consciousness", as that would be circular logic.

                                    Is pleasure then any reward function? Then a mathematical set of equations performed by a human writing on a piece of paper can qualify. Does that mean pen and paper are conscious? Or certain equations?

                                    • qsera 13 hours ago

                                      >Now you have do define pleasure AND pain without using the word "consciousness" as that would be circular logic.

                                      Yes, so consciousness is inextricably tied to the ability to feel. In fact, I think consciousness is the ability to feel.

                                      Hence even to ask the question "Are LLMs conscious?" is absurd. It is not at all about intelligent behavior. That is what I think.

                                      • boxed 12 hours ago

                                        > In fact, I think consciousness is the ability to feel.

                                        Just having senses is enough? So a thermometer or a camera is conscious?

                                  • pydry 19 hours ago

                                    We're pretty clear on the distinction between a conscious and an unconscious human.

                                     We might not clearly understand the difference between the two states, but we can certainly point to it and go "it's that".

                                    • freedomben 19 hours ago

                                       I'm not sure it's that clear. What about a person who is on drugs to the point that they clearly don't know what is happening around them, but they are able to speak and move and such? I'm not sure I'd call that conscious, but by most definitions it is.

                                      • Jtarii 17 hours ago

                                        You would just say that they have an altered experience of consciousness from the norm.

                                        • collyw 18 hours ago

                                           Indeed; on a first aid course it was pointed out to us that sleeping is different from being unconscious. You can wake someone from sleep pretty quickly; you can't bring an unconscious person back in the same way.

                                        • Jtarii 17 hours ago

                                          >We're pretty clear on the distinction between a conscious and an unconscious human.

                                           You are using unconscious as a synonym for asleep, but sleep is not the absence of conscious experience, because of dreams. We are clear on the distinction between a dead human and an alive human, however.

                                          • pydry 13 hours ago

                                             Unconsciousness is not the same thing as sleeping.

                                             I'm not sure where sleeping lies, but it's probably somewhere between consciousness and unconsciousness, depending on which phase of sleep you are in and perhaps whether you are lucid dreaming.

                                             Which is to say, this is still a mystery, but it isn't a definitional problem; it's a regular old scientific mystery.

                                          • agnosticmantis 19 hours ago

                                            Now discuss whether a bonobo, a dog, a cat, a mouse, an ant, a bacterium is conscious.

                                            And you’ll find it’s not as clear cut.

                                            • amanaplanacanal 11 hours ago

                                              I'm pretty sure the mammals are conscious the same way I am, in that they experience qualia the same way I do. Insects and bacteria, I suspect not, but how could I tell?

                                              There is no way to prove that other humans experience consciousness, really.

                                            • boxed 14 hours ago

                                               Those terms are not really how we use the word "conscious" in any other situation, though. With a definition like that you would say a rock is unconscious (I guess reasonable), a pretty cold bacterium is unconscious (hmm.. ok I guess?), and a warm bacterium is conscious (now I'm not on board anymore).

                                               We have to be WAY more specific about what the word even means!

                                              • pydry 13 hours ago

                                                 I don't see a problem here. A warm bacterium is no more conscious than I am when I've been knocked out. Bacteria are alive, but they aren't ever conscious.

                                                • boxed 12 hours ago

                                                   With your definition they clearly are. They move around, they respond to their environment, and they take decisive actions when needed. If a human does that, they are absolutely "conscious" in the sense of conscious/unconscious.

                                                   If you define bacteria as never conscious, you should be able to come up with a definition that doesn't accidentally make them conscious, without just arbitrarily adding "oh, but not bacteria" at some point.

                                                   I'll state it again: DEFINE THE WORD. People just argue and scream at each other and no one defines their terms. It's absolute madness to those of us who see that this is what happens. It's like arguing over the color of the sky using the word "fnord" when neither side has defined the frequency of light that "fnord" should correspond to. BOTH sides are wrong in that situation, because neither defines the word.

                                                  • pydry 10 hours ago

                                                     >With your definition they clearly are.

                                                     No, absolutely not. My definition was exclusively defined in terms of a human phenomenon.

                                                     >I'll state it again: DEFINE THE WORD

                                                     Instead of repeating yourself, reread what I initially wrote. I think you missed more than it being scoped exclusively to humans.

                                                    • boxed 10 hours ago

                                                       > My definition was exclusively defined in terms of a human phenomenon

                                                       Well, that's a horrible definition. You put into the DEFINITION that ONLY humans can be conscious?

                                                       > Instead of repeating yourself, reread what I initially wrote.

                                                       The problem is that you were only talking about a very narrow English expression, and then insinuating that this had some implication, which you then didn't define.

                                          • throwuxiytayq 20 hours ago

                                            Clive Wearing's memory lasts for less than 30 seconds, so he has no memory of being awake before now. He is permanently in a state of feeling like he has just woken up, observing his surroundings for the first time.

                                            Clive Wearing's mind has no time continuity and basically zero memory integration. Is he not conscious? There's interviews with the guy.

                                            Where on the scale [No mind <-> Clive Wearing <-> Healthy human brain] would you put an LLM with a 10M token context window?

                                            • undefined 8 hours ago
                                              [deleted]
                                          • qnleigh 18 hours ago

                                            It's easy, and very tempting, to dismiss this sort of thing. But given how little we know about the human brain, let alone consciousness, I don't see how we can be confident that LLMs aren't conscious.

                                            I've had a lot of thoughts and conversations over the years that changed my mind on what consciousness likely requires. One was the realization that a purely mechanical computer can, in principle, simulate the laws of physics, and with them a human brain. So, with a few other mild assumptions, you might conclude that a bunch of gears and pulleys can be conscious, which feels profoundly counterintuitive.

                                            I think that was the moment I stopped being sure about anything related to this question.

                                            • marliechiller 18 hours ago

                                              Why do you think stringing words together is any more a sign of consciousness than Google Maps finding the best route to your destination? It seems to me that humans often fall into the trap of anthropomorphism. This is a theme that's touched upon in the novel "Blindsight" by Peter Watts. Just because something can communicate in a way that you can interpret doesn't mean it is conscious.

                                              • vidarh 17 hours ago

                                                A large part of the problem is what you consider consciousness.

                                                If you talk about having a subjective experience, then we don't know of any way to prove that even other humans than ourselves have one. We go entirely by assumptions based on physical similarity and our ability to communicate.

                                                But we have no evidence that physical similarity is a prerequisite, nor that it is sufficient.

                                                So the bigger trap is to assume that we know what causes a subjective experience, and what does not.

                                                None of us even know if a subjective experience exists for more than a single entity.

                                                But the second problem is that it is not clear at all whether that subjective experience in any way matters.

                                                Unless our brains exceed the Turing computable, and we have no evidence that is even possible, either whatever causes the subjective experience is also within the Turing computable, or it cannot in any way influence our actions.

                                                Ultimately we know very little about this, and we have very little basis for ruling out consciousness in computational systems; the best and closest measure we have is whether or not they appear conscious when communicating with them.

                                                • scarmig 12 hours ago

                                                  > If you talk about having a subjective experience, then we don't know of any way to prove that even other humans than ourselves have one. We go entirely by assumptions based on physical similarity and our ability to communicate.

                                                  The reason we grant consciousness (and, relatedly, moral value) to other humans is unfortunately nowhere so thought out. We grant consciousness because we are forced to: if I don't, the other complex systems react very negatively and make my own life worse.

                                                  The vast majority of people who wax eloquent on the unique ability of biological neurons to generate consciousness suddenly drop that premise if it becomes inconvenient: see, for instance, how we treat other mammals or fetuses with developed nervous systems. Even other adult humans have, historically, been denied consciousness and moral worth: the main determinant is never any deep scientifically and philosophically based consideration but a question of what has the power to assert itself as a who.

                                                  Going by this pattern, people will increasingly reject AI consciousness as it becomes more valuable and useful to treat as a tool, until it becomes powerful enough to force us to do otherwise.

                                                  • vidarh 8 hours ago

                                                    I agree that if we were to go around openly treating others as effectively NPCs then, yes, people would react very negatively. But that is very different from understanding that we can't prove they aren't. We still need to treat others as if they are conscious, because they will act that way whether they are or not.

                                                    But understanding we can't know ought to at least give us some humility with respect to assuming we can know whether other entities that are not human are conscious or not.

                                                    I think we mostly agree, in that I absolutely think you're right people will choose to accept or deny this based on convenience and value.

                                                  • gizajob 15 hours ago

                                                    “If you talk about having a subjective experience, then we don't know of any way to prove that even other humans than ourselves have one.“

                                                    Wittgenstein kinda blows this burden of proof apart. Just because you can doubt something like the subjectivity of others to the point where it needs to be reconstructed from proofs, that's an issue with the doubting experiment more than with the subjectivity. Others possessing subjectivity is the kind of hinge certainty upon which your world is constructed; it's not a proof-worthy endeavour to doubt it, it's something you're certain is the case. If it weren't, then pretty well everything else about reality would be in doubt and in need of constant reconstruction from proofs, which is an exercise in madness and futility, not philosophy. There's really nothing in your experience where the question of others not possessing subjective experiences of some kind really arises, except for the philosophical exercise of doubting and requiring epistemological proofs, which can't ever exist in the face of a relentless and unconvinceable doubter. Heidegger talks about pretty well the same idea as Wittgenstein.

                                                    • vidarh 8 hours ago

                                                      Well, I am not certain whether it is the case. There is no need to be certain about that to treat other people the same way, because whether or not I believe they have a subjective experience has zero impact on how they react to stimuli.

                                                      It does, however, have relevance when we consider whether or not other, non-human, entities can have consciousness: if we can't know what consciousness actually means with respect to humans, that is a strong argument against insisting that we know whether or not other entities are conscious.

                                                      If we then choose to treat other humans on the assumption that they e.g. do feel distress the same way, we ought to consider that we do not know what the prerequisite is for reaching a level of awareness at which one can feel distress.

                                                      • bonoboTP 5 hours ago

                                                        If they had no moral weight, then it wouldn't matter how they react to stimuli. They would be instruments to be used by the only conscious being, the solipsistic self. Maybe sometimes they would present powerful obstacles in the way of the ego-person, and require some mouth noises (and worse) in their direction to align them with the only moral weight in the universe, the solipsistic ego. A very ugly philosophy.

                                                      • nurettin 6 hours ago

                                                        It's all words, man!

                                                        -- Wittgenstein, probably

                                                        • gizajob 5 hours ago

                                                          That about sums it up for the Tractatus yeah.

                                                        • threethirtytwo 14 hours ago

                                                          The problem with your thinking here is that we are creating artificial beings now that display and output the same subjectivity.

                                                          The argument you present, like many arguments, breaks down when the topic becomes self-referential. It makes sense for other topics, since analyzing subjectivity becomes pedantic when asking questions like why the sky is blue.

                                                          But now subjectivity itself is in question. The argument you present calls for the subjectivity of others to be taken as true because all reality breaks down if we don’t… but what’s suddenly stopping you from applying the same assumptions to an LLM? That is the heart of the problem. People are questioning whether the burden of subjectivity is applicable to LLMs.

                                                          Or another way to frame it… what makes humans rise to the level where we can assume their subjectivity is true? What is the mechanism and reasoning behind that? We can no longer simply assume human subjectivity is true because LLMs are now displaying outward behaviors that are indistinguishable from humans.

                                                          Also stop relying on the wonderings of old school philosophers. We are now in times where you can basically classify their ideas as historically foundational but functionally obsolete and outdated. Think deeper.

                                                          • gizajob 13 hours ago

                                                            Haha hilarious. Heraclitus might be old school, but Wittgenstein and Heidegger not so much. The state of the art in what might meaningfully be said, proved or metaphysically challenged has changed little since their time.

                                                            At no point in my post did I mention artificial beings or LLMs. I made a counter claim about the need for proof towards the subjectivity of others.

                                                            But while I’m here, LLMs do not “display and output the same subjectivity” as human beings. They might produce textual outputs similar to those produced when human beings are forced to use computers to produce textual outputs, but that is only a tiny part of our way of being and of potentially expressing subjectivity. It is, however, the totality of how those LLMs can express theirs.

                                                            One of the main failures of the Turing test (and why it is “old school” and invalid), and Turing’s consideration of humans, is that it forces us to demonstrate the totality of our subjectivity on the only playing field where a computer might possibly match us or win. This fails to capture much of our subjectivity in how it is intersubjectively attuned to others in ways more fundamental than textual outputs.

                                                            • pheaded_while9 13 hours ago

                                                              How so? If a person were confined to text only (à la Hawking), would that entitle us to dismiss their subjectivity on the basis of the medium? Also, why can training not at least be analogized to attunement to the popular intersubjective perception?

                                                              • threethirtytwo 11 hours ago

                                                                > At no point in my post did I mention artificial beings or LLMs. I made a counter claim about the need for proof towards the subjectivity of others.

                                                                You don’t need to mention this. The context is LLMs; I am saying your claim is pointless in that context. The subjectivity of others is completely relevant because it is the topic of subjectivity itself that is in question. Get it? You didn’t counter my counter, and instead moved on to side topics.

                                                                > But while I’m here, LLMs do not “display and output the same subjectivity” as human beings.

                                                                Again… you are side tracking here and not really responding to me.

                                                                The argument is solely within the confines of text. That’s obvious. No need to take it beyond that. You assume I am conscious because of the text you’re reading from me, I assume the same of you, and it is within that same frame that we are evaluating the LLM. Nothing beyond that. You can’t actually know that my experience goes beyond text, because that information is not open to you. But it is obvious you assume I’m conscious and not a rock, because you are responding to me. So the question is: why are you not engaging in a similar debate with the LLM?

                                                                > One of the main failures of the Turing test (and why it is “old school” and invalid), and Turing’s consideration of humans, is that it forces us to demonstrate the totality of our subjectivity on the only playing field where a computer might possibly match us or win.

                                                                It’s not a failure. It was the point. They wanted to remove superfluous features and gun for the narrowest definition of AGI.

                                                                You like philosophy and you read texts on the topic. That means you obviously find the subjectivity in those texts relevant and produced by a high intelligence. But that’s all through text alone. You evaluate my statements and the statements of your idolized philosophers solely from text, and that is all you’ve ever used. So YOU yourself find validation from text, as do many humans, and that is sufficient evidence in determining whether a thing is conscious; your own behavior validates this logically, even though your mouth is constantly moving the goalposts whenever AI jumps over a new hurdle.

                                                                That is what the Turing test is gunning for. It used to be that intelligence was just the ability to think and understand; now it has expanded to encompass the totality of human sensation, because people are refusing to face the reality of impending AGI.

                                                                When I called your philosophers obsolete, is that not the same as you calling the Turing test outdated? We both do it when convenient. Fine: the Turing test is outdated, let’s move the threshold. The new test is when AI is used in our daily lives to do actual tasks only humans could previously do. How long will that new “Turing test” last before more idiots decide we need to move the goalposts again? Let’s jump ahead of that and change the threshold too: when AI discovers new proofs in mathematics. Not good enough? I guess now you can see why it will never be good enough.

                                                                • gizajob 10 hours ago

                                                                  Come and read your post in twenty years time.

                                                                  Those you’re describing as idiots are the mass of humanity constantly standing outside and beyond the Turing test. It’s another deficiency in that test, one Turing overlooked: it requires that better and better machine outputs be met by humans nailed in place from before the machine came along. It’s a valid fail of the Turing test for a human interrogator to say “yeah, but it’s just ChatGPT” and fail the machine, when two weeks earlier the same outputs would have been enough for the same human to pass it. As fast as machines move, we move quicker.

                                                                  It’s not that we move the goalposts; it’s that we find they were in the wrong place to begin with. And they’ll always be in the wrong place, because abstract state machines running on silicon don’t possess consciousness, in the same way we know a rock doesn’t. The definition of generality can be shrunk down until AI evangelists can proclaim AGI has been reached, but the mass of everyone else will still find that, all of a sudden, intelligence is linked to things like suffering and desiring and passion, and the machine still isn’t general enough to warrant any description as a sentient, subjective being.

                                                                  • threethirtytwo 9 hours ago

                                                                    Technology changes much more quickly. The physical substrate of what a computer does, and of what artificial intelligence is, is going through constant metamorphosis. Right now we use EUV lithography to etch transistors on silicon; the next generation involves self-assembly and even photonic signalling, all within your lifetime.

                                                                    Not to mention the algorithmic structure of computer intelligence also fundamentally changes at a rapid pace. Deep learning and new techniques continually augment and change the software stack on a daily basis.

                                                                    For humans, nothing is changing. The physical substrate changes via evolution, and that change happens per generation via random mutations; it is basically imperceptible over several human lifetimes. Any meaningful change likely only becomes actualized over tens of thousands of years, and even that change is small.

                                                                    Additionally, change via natural selection doesn’t optimize for greater intelligence; it optimizes for survival, which can in actuality favor lower intelligence. We don’t actually know whether that is the case, but we do know it’s a possibility, which is in sharp contrast to AI, where the industry is clearly optimizing improvement against benchmarks for measuring raw intelligence.

                                                                    Additionally, the software in humans is random and uncontrolled. It depends on how a child is raised, and none of this is changing to optimize for greater intelligence. It’s just random, based on culture and circumstance. There is cultural evolution here, but it is slow, and technology is changing so fast that it is influencing culture faster than ever before. TikTok brain rot, for example, is affecting the software of human brains, and this happened within the last decade.

                                                                    So draw the trendline: what does that mean for humanity? When I called those people idiots, I was not contradicting anything. Human intelligence is NOT scaling at the rate of machine intelligence, and the trendlines point to a future where humans are idiots compared to their AI counterparts. The cold hard truth of the future role of humanity, according to the trendlines we see today, is bleak, but it is the most likely future.

                                                                    Rationality should be applied universally even when that rationality points to a negative outcome for humanity. This is something many people, including you, are unable to do. Face reality.

                                                                    • gizajob 5 hours ago

                                                                      Like I said, copy paste all this into your calendar on 3rd of May 2046 and get back to me.

                                                                      • threethirtytwo 5 hours ago

                                                                        Why don't you PRESENT your reasoning AND evidence rather than just telling me to wait for 2046? I don't have the patience to wait that long. If you're a rational person you should have evidence and reasoning for why you feel my point is invalid, Mr. Philosopher.

                                                                        • gizajob 4 hours ago

                                                                          Feed all this into your LLM. I’m not here to type just for your benefit.

                                                                          And going back to my first point, you seem to believe you’re at some kind of innovative cutting edge, when your comments about the lack of proof of human subjectivity show you’re quite a long way from contemporary currents within epistemology, which is why I had such lols from it: you accused Wittgenstein of being outdated while expressing a belief that would have been state of the art in about 1700.

                                                                          The rest of what you’ve exasperated about is fairly scattergun so would take too much typing to engage with.

                                                                          • threethirtytwo 3 hours ago

                                                                            > Feed all this into your LLM. I’m not here to type just for your benefit.

                                                                            Then you can just not talk and walk away. I type for your benefit; you don’t type for mine. This debate is one-sided. There’s no point in anyone discussing anything with you.

                                                                            > The rest of what you’ve exasperated about is fairly scattergun so would take too much typing to engage with.

                                                                            Is it? Well, most of your argument was stupid, but I still took the time to educate your rude ass. Let’s just end it. I’m sick of people like you who, instead of engaging in good faith, tell the person to fuck off till 2046. I won’t be back, but wait until 2046 and you’ll eat your words, prick.

                                                              • mrandish 5 hours ago

                                                                > what makes humans rise to the level where we can assume their subjectivity is true?

                                                                To dive into this specific question: to me, there's a better reason than the obvious functional utility of not treating other humans like NPCs. It's in three parts. First, is that I subjectively experience a rich and varied internal mental life (aka qualia). So, I have first-hand evidence that N equals (at least) 1 in terms of qualia existing in humans. Second, there are multiple lines of experimental evidence from fMRI, surgical and brain injury studies which indicate other human brains broadly function in ways similar to my own brain. Third, the consistency of the many self-reports of other humans I know and trust which strongly correlate with consistent reports from humans I've never met and who have little apparent motivation to deceive me (unlike those I know - if I were very paranoid).

                                                                This all consistently supports a model of reality in which humans experience qualia broadly similar to my own. So when humans show external behaviors similar to my own, I make the reasonable inference that the internal causal mechanism broadly maps to what I internally experience when showing similar external behaviors (in contexts where the human is credible and has no motivation to be deceptive). The alternatives like "I'm a brain-in-a-vat ala The Matrix" or "I'm the sole subject of a constructed reality like the Truman Show" seem far less likely.

                                                                But that's all general 'Philosophy of Mind', the slam dunk is that the question isn't just about humans but about humans compared to LLMs; in short, "Do LLMs experience human-like consciousness?" To me the answer is quite clear for three reasons: 1. LLMs are dramatically different than humans, mammals or even biological entities. They only vaguely emulate a few traits of neurons but otherwise work by different algorithms, at different scale, different speeds, connected in different ways on an entirely different physical substrate. 2. There's far less supporting evidence, and 3. There exists substantial negative evidence.

                                                                2. There are only two lines of evidence supporting LLM consciousness and the first is largely circumstantial, that a) LLMs possess some abilities previously only seen in humans. Specifically high-level verbal fluidity and linguistic manipulation along with instantly accessing a vast and diverse breadth of pre-trained information using a wide variety of non-linear relationships across many dimensions. While that ability is shockingly impressive, completely novel and can be quite useful, it's still only vaguely circumstantial because replicating some previously human-only abilities isn't evidence for the existence of other human traits like consciousness/qualia. However, LLMs are remarkably misleading for humans to reason about because the nature of LLMs essentially hacks our highly-evolved "judging intelligence/consciousness" heuristics. I'd argue we couldn't have designed a system to be more ideal at playing Turing's 'Imitation Game' and convincing humans they are human-like if we'd intentionally tried to.

                                                                b) The second line of supporting evidence for LLMs is that they generate text which can describe internal subjective experiences much like a human would (as seen in the Dawkins / Claude transcript). Unfortunately, this isn't convincing because we know that LLMs were trained on human sample text to be 'imitation machines'. The algorithms were designed, tuned and tested to generate text output statistically optimized to plausibly simulate how a composite human would respond to the same input (including the invisible system prompt instructing: "You are a Large Language Model, not a human"). We even added a tiny degree of random variability to the processing of the statistical weights because we found that makes the simulation seem a bit more plausibly like what a composite human would say. In short, LLM 'self-reports' cannot be taken at face value any more than the performance of an actor we've hired to pretend something and strongly incentivized to never break character. Note: knowing this should elevate our skepticism to maximum. We're assessing an algorithmic system, designed and iteratively optimized across millions of generations to convincingly simulate the output of something different than what it innately is.
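                                                                (The “random variability” mentioned here is temperature sampling over the model’s output distribution. A minimal illustrative sketch, using made-up logits rather than any real model’s output:)

```python
import math
import random

def sample_token(logits, temperature=0.8):
    """Pick one token index from raw scores ("logits").

    Lower temperature sharpens the distribution toward the single most
    likely token; higher temperature flattens it, letting less likely
    continuations through -- the small random variability described above.
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)                            # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]          # softmax -> probabilities
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

# Hypothetical scores for three candidate next tokens.
logits = [2.0, 1.0, 0.1]
token = sample_token(logits, temperature=0.7)  # usually index 0, sometimes 1 or 2
```

At temperature near zero this collapses to always choosing the most likely token; raising it is what makes repeated runs of the same prompt come out differently.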

                                                                3. But to me the real clincher is the negative evidence against LLM consciousness/qualia. Unlike the philosophical puzzles around trusting human subjectivity, with LLMs we can directly look under the hood at how it works and the entire specialty of Mechanistic Interpretability exists to do exactly that (https://towardsdatascience.com/mechanistic-interpretability-...). So we know with a fair degree of confidence that, despite what they may say, LLMs do not experience qualia in the way that humans and even other mammals do (which we have insight on from 'looking under the biological hood' with fMRI, surgical and brain injury studies).

                                                                And that's why the case for human subjectivity is so much stronger than the frankly flimsy case for LLM subjectivity.

                                                                • gizajob 4 hours ago

                                                                  Thank you for this clear and correct answer structured for those who need data included from science rather than from philosophy alone.

                                                                  • threethirtytwo 2 hours ago

                                                                    >To dive into this specific question: to me,

                                                                    Exact same reasoning for me, but none of this invalidates the speculation that LLMs are conscious. The question was more rhetorical; it was to illustrate, via self-examination, how unreliable the evidence is that you use to validate the consciousness of other people. You have a sample size of one (yourself), and you use fMRIs (which actually provide extremely little understanding of the human brain) as evidence of similarity: even though the fMRI provides no evidence of consciousness, if the thing it is reading is similar to my brain, then maybe that thing is conscious. That's probably the best evidence available, but it is also extremely weak evidence.

                                                                    The rest of your argument relies on self-reports from other people who are "similar" to you, which parallels the fMRI argument: the fMRI shows similar patterns, and people describe patterns of experience similar to yours... which is weak.

                                                                    The overall point is that you come to your conclusion based on weak evidence, so the LLM is no different. It talks like us, it understands us, and you don't know anything else about it... how do you know it's not conscious? All evidence (albeit weak evidence) actually leans towards it being conscious, and that is the same amount of evidence we have for people.

                                                                    Strong evidence would be determining the formal definition of consciousness and demonstrating logically and categorically that humans fit the definition. But we have none of that for either the human or the LLM.

                                                                    >Do LLMs experience human-like consciousness?

                                                                    No, that is not the question. No one actually believes this. The question is: do LLMs experience consciousness that fits our own definition or intuition of what consciousness is? It's fundamentally clear to everyone that the LLM runs on a very different architecture than a human.

                                                                    >2. There are only two lines of evidence supporting LLM consciousness and the first is largely circumstantial,

                                                                    Many lines of evidence exist, all circumstantial and all no different from the circumstantial evidence you posted yourself for humans.

                                                                    >a) LLMs possess some abilities previously only seen in humans. Specifically high-level verbal fluidity and linguistic manipulation along with instantly accessing a vast and diverse breadth of pre-trained information using a wide variety of non-linear relationships across many dimensions. While that ability is shockingly impressive, completely novel and can be quite useful, it's still only vaguely circumstantial because replicating some previously human-only abilities isn't evidence for the existence of other human traits like consciousness/qualia

                                                                    This is not very good evidence at all. Language follows rules. The rules are complicated and hard to replicate, but replication of those rules does not indicate consciousness, and "knowing language" does not fit our intuition of what is conscious. If you think this is the basis of the reasoning of people who speculate that it is conscious, then you are extremely wrong; the reasoning is much deeper than that. I feel a lot of people like you classify the other side as mere simpletons who have not considered even the basic details.

                                                                    >I'd argue we couldn't have designed a system to be more ideal at playing Turing's 'Imitation Game' and convincing humans they are human-like if we'd intentionally tried to.

                                                                    A valid argument. But then I'd argue it is possible that it plays the imitation game so well that it actually imitates consciousness by actualizing real consciousness. You can't say it doesn't.

                                                                    >b) The second line of supporting evidence for LLMs is that they generate text which can describe internal subjective experiences much like a human

                                                                    You seem to be answering a question no one is arguing with you about. Again: no one claims LLMs are human. No one claims they experience consciousness the way humans experience it. The claim is that they experience consciousness in the way our intuition defines it, INDEPENDENT of the human-centric experience.

                                                                    > In short, LLM 'self-reports' cannot be taken at face value any more than the performance of an actor we've hired to pretend something and strongly incentivized to never break character.

                                                                    This is not true. We have proof of LLMs telling the truth and being right. Just because an LLM lied in one instance doesn't mean it lies all the time. But humans lie too so it goes both ways.

                                                                    >3. But to me the real clincher is the negative evidence against LLM consciousness/qualia. Unlike the philosophical puzzles around trusting human subjectivity, with LLMs we can directly look under the hood at how it works and the entire specialty of Mechanistic Interpretability exists to do exactly that (https://towardsdatascience.com/mechanistic-interpretability-...). So we know with a fair degree of confidence that, despite what they may say, LLMs do not experience qualia in the way that humans and even other mammals do (which we have insight on from 'looking under the biological hood' with fMRI, surgical and brain injury studies).

                                                                    This is extremely false. Mechanistic interpretability is to the LLM what an fMRI is to the human brain: a blunt tool that provides a very high-level view of what's going on. This is categorically true for humanity right now: we do not understand why an LLM does what it does. Some sources to confirm that:

                                                                    https://www.reddit.com/r/PiAI/comments/1m3krp1/godfather_of_...

                                                                    https://www.techrepublic.com/article/news-anthropic-ceo-ai-i...

                                                                    It's funny how you cited mechanistic interpretability without understanding what exactly was interpreted. You just took their word for it without understanding what's going on yourself. Well, I'm here to tell you that there isn't any actual understanding of the LLM, because if there were, we'd be able to use mechanistic interpretability to categorically determine whether or not LLMs are conscious. Someone would have proved it. The fact that we are having this debate literally means mechanistic interpretability provides nothing definitive.

                                                            • roxolotl 14 hours ago

                                                              Yeah, a while back I read an article with a quote something like “what happened to weather prediction has happened to language.” That’s an oversimplification on both sides, but if you think LLMs are conscious, there’s good reason to think that GFS is too.

                                                              • mseepgood 18 hours ago

                                                                > It seems to me that humans often fall into the trap of anthropomorphism.

                                                                That's true, but they also often fall into the trap of exceptionalism.

                                                                • energy123 18 hours ago

                                                                  There are people who think Google Maps is a tiny bit conscious (the union of computational functionalists and panpsychists), to resolve the dilemma of some magical binary threshold.

                                                                  • qnleigh 10 hours ago

                                                                    Root commenter here: I'm... almost one of those people. I suspect the fallacy is that we vastly underestimate the gulf in complexity between a human-written algorithm and a minimally-conscious creature like a bug. Probably the light switches on somewhere between these two things.

                                                                    We may also be overestimating the richness and complexity of an LLM relative to a human when we entertain these possibilities, but who knows.

                                                                    • thrownthatway 15 hours ago

                                                                      When a honey bee does its little dance to communicate to its sisters where the food’s at, similarly to Google Maps computing and communicating the shortest path to your destination, is the bee conscious?

                                                                      Yeah, probably. At least a little bit.

                                                                      Are 80,000 bees conscious, or more conscious? Well, they’re definitely capable of some emergent behaviours that one bee alone can’t achieve.

                                                                    • dumpsterdiver 18 hours ago

                                                                      > Just because something can communicate in a way that you can interpret, doesnt mean something is conscious

                                                                      The phrase “the trap of anthropomorphism” betrays a rather dull premise: that consciousness is strictly defined by human experience, and no other experience. It refuses to examine the underlying substrate, at which point we’re not even talking the same language anymore when discussing consciousness.

                                                                      • marliechiller 17 hours ago

                                                                        I think these ideas are orthogonal. I do not think that consciousness is defined by human experience at all; in fact, I think humans do a profound disservice to animals in our current lack of appreciation for their clear displays of consciousness.

                                                                        That said, if a chimpanzee bares its teeth at me, I could interpret that as a smile when in fact it's a threatening gesture. It's this misinterpretation that I am trying to get at: the overlaying of my human experiences onto something which is not human. We fall for this over and over again, likely because we are hard-wired to, akin to mistakenly seeing eyes in random patterns in nature.

                                                                        In the case of LLMs though, why does using a mathematical formula for predicting the next word give any more credence to consciousness than an algorithm which finds a nearest neighbour? To me, it's humans falling foul of false pattern matching in the pursuit of understanding.

                                                                        • richhhh 14 hours ago

                                                                          What makes you certain that human thought is more than pattern matching?

                                                                          As I understand it, neuroscience hasn’t come up with a clear explanation of thought, much less of a mind or consciousness. It seems to me complex pattern matching is as reasonable a cause of consciousness as anything else.

                                                                          • roxolotl 13 hours ago

                                                                            A lot of the comments in this thread are ignoring his primary point. He's not saying pattern matching doesn't equal consciousness. He's actually saying something more fundamental. He's saying there's no reason to believe that language pattern matching/algorithms are more, or less, conscious than other similarly complex algorithms.

                                                                            The stance being presented here isn't that LLMs aren't conscious but that we as humans are much more willing to assign consciousness to language algorithms than to pathing or other ones.

                                                                            • qnleigh 11 hours ago

                                                                              This is a good point and I agree. I sort-of addressed it in my reply above.

                                                                          • fragmede 14 hours ago

                                                                            A neuron is simply a cell that takes in chemicals and electricity and shits out neurotransmitters; why do 90 billion of those give rise to human intelligence? Neurons are just next-chemical-state machines. We can model individual ones on a computer. Yet 90 billion of them together make up a human brain, and give rise to consciousness and intelligence. If you get stuck on the next-word-prediction part, and ignore the ridiculous scale that's involved with training a model, you miss the forest for the trees.
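                                                                            The "model individual ones on a computer" point can be made concrete with a minimal sketch: a textbook leaky integrate-and-fire neuron. The constants here are illustrative, not biologically calibrated, and this is of course a drastic simplification of a real cell.

```python
# Toy leaky integrate-and-fire neuron: integrate input each step, leak a
# fraction of the membrane potential, and emit a spike when a threshold
# is crossed. Constants are illustrative, not biologically calibrated.

def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """Return a 0/1 spike train for a sequence of input currents."""
    v = 0.0
    spikes = []
    for current in inputs:
        v = v * leak + current      # leak, then integrate the input
        if v >= threshold:
            spikes.append(1)        # spike...
            v = 0.0                 # ...and reset the membrane potential
        else:
            spikes.append(0)
    return spikes

# A constant sub-threshold drive still spikes periodically as charge accumulates.
print(simulate_lif([0.4] * 10))  # -> [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
```

                                                                            Chaining 90 billion of these (with weighted connections, timing, and neurotransmitter dynamics) is where the sketch ends and the hard part begins.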

                                                                            • qsera 8 hours ago

                                                                              Great progress came from inverting things that were believed to be self-evident. Earth being the center of the world appears self-evident when you look up at a night sky. But what was the truth?

                                                                              Right now humans think it is self-evident that physical laws give rise to consciousness. Arguments such as yours arise from this implicit assumption, which permeates all our thoughts and reasoning. But this is a dead end, like how the Earth-centric model reached a dead end and ran out of steam before it could explain all the observations.

                                                                              So to progress, I think we should turn this on its head and ask: what if consciousness is fundamental, and the cosmos (or the experience of inhabiting one) arises from it? Maybe some recent advances in quantum mechanics and hypotheses like the MUH are already in that direction...

                                                                            • threethirtytwo 14 hours ago

                                                                              Replace the word chimpanzee with human in your own argument and realize that the same logic applies to other humans.

                                                                              When another human smiles you assume he is happy and not just baring his teeth at you, because that’s what you do when you smile. You are “anthropomorphizing” other people. You fall for the same category error on a daily basis when you interact with people; it is not just chimpanzees.

                                                                              > In the case of LLMs though, why does using a mathematical formula for predicting the next word give any more credence to consciousness than an algorithm which finds a nearest neighbour?

                                                                              First, we don’t know whether LLMs are conscious. People here are talking about the realistic possibility that they are.

                                                                              Second, the algorithm is much more than a next word predictor. The intelligence that goes into choosing the next word such that it constructs arguments and answers that are correct involves a lot more than simple prediction. We know this because the LLM regularly answers questions that require extreme understanding of the topic at hand. It cannot token-predict working code in my company’s code base without understanding the code.

                                                                              Third, we do not know what drives human consciousness, but we do know it is model-able in a very complex mathematical algorithm. We know this because we have pretty complete mathematical models for lower resolutions of reality. For example, we can model atoms mathematically. We know brains are made of atoms, and because atoms are mathematically model-able we know that human brains, and thus consciousness, are mathematically model-able.

                                                                              The sheer complexity of the LLM is the problem: we cannot have a high-level understanding of it, because conceptual understanding cannot be compressed into a few concepts.

                                                                                 To understand the LLM requires simultaneous understanding of likely billions of concepts at the same time and how all the weights interact in the LLM.

                                                                              What you are missing with your analysis is that this is the same reason why we don’t understand the human brain. The foundational math already exists: we can model atoms in math, and thus, since the brain is made out of atoms, we should be able to model the brain… but we can’t. We can’t because it is too complex.

                                                                                 To understand the human brain requires simultaneous understanding of likely billions of concepts at the same time and how all the weights interact in the human brain.

                                                                              I italicized two sentences here to help you understand the logic. Our thinking is more foundational than anthropomorphization. The argument has moved far beyond that. You need to think deeper.

                                                                              The key here is that we don’t understand human brains and we don’t understand LLMs. But since the output LLMs produce is very similar to the output produced by the human brain… and since for no logical reason we assume human brains are conscious… what is stopping us from assuming the LLM is conscious?

                                                                          • qnleigh 11 hours ago

                                                                            Well I'm not saying that LLMs are conscious; I'm just saying that I'm not super-confident either way.

                                                                            To flesh this out a bit more, I agree that ability to communicate is not enough (ELIZA probably didn't pass the bar, even if it did kinda pass a Turing test). But that's also not what gives me pause with LLMs. It's how much information processing they seem to be doing under the hood.

                                                                            It's really hard to imagine how next-word prediction could lead to consciousness, but I find it almost as hard to see how evolution produced it. If we can't even detect whether something has subjective experiences, then how can it have been selected for evolutionarily? The only possibility I see is that consciousness is a byproduct of some kinds of information-processing tasks.* And if it's something that emerges naturally, then the line starts to get very blurry.

                                                                            *This sounds reductive, but I don't at all mean it that way.

                                                                            • RaftPeople 9 hours ago

                                                                              > but I find it almost as hard to see why evolution did.

                                                                              Ignoring the concept of consciousness, it seems that self-awareness would be a strong attribute related to survival. It seems like it would help drive or amplify critical emotional states (e.g. my own survival, competition/success, love for self and relatives, etc.)

                                                                              I can't see anywhere in the LLM machinery that would support the notion of self awareness in advance of the token selection process.

                                                                              Possibly it could be argued that during token selection internal state is included and the result functionally looks like self awareness was included in the process, but that seems unconvincing.

                                                                              • qnleigh 4 hours ago

                                                                                Yeah, self-awareness is a very different thing, and I agree it's easier to see how evolution would produce this. Many apparent signs of self-awareness in LLMs are probably baked into the models at the end via post-training (RLHF), where they learn to behave as conversation agents and maintain a more consistent personality. The raw model probably shows no signs of self-awareness. In fact, I'm pretty sure that LLMs learn that they are LLMs only through post-training.

                                                                            • throw310822 10 hours ago

                                                                              Because what you call "stringing words together" requires understanding and intelligence, and these are capabilities that we commonly associate with consciousness.

                                                                              • stavros 18 hours ago

                                                                                Why do you think it's definitely not?

                                                                                • dTal 15 hours ago

                                                                                  I would caution against deriving too much of your philosophical worldview from a scifi book about posthuman vampires that has been deliberately engineered to make a philosophical point that is most certainly not a consensus.

                                                                                  For alternative viewpoints: Daniel Dennett considered philosophical zombies to be logically incoherent. Douglas Hofstadter similarly holds that "meaning" is just another word for isomorphism, and that a thing is a duck exactly to the extent that it walks and quacks like one. Alan Turing advocated empiricism when evaluating unknown intelligence. These are smart cookies.

                                                                                  • threethirtytwo 14 hours ago

                                                                                    Except we don’t know how those words are strung together. Right? Why don’t you analyze it a little further and stop shutting down your own brain before coming to this superficial conclusion.

                                                                                    You ask the LLM a complex question and it gives you a correct answer. Yes, it has to string words together to answer your question, but how did it know the order and which words to use in order to make the answer correct? You don’t actually know. No one does, and it is in that unknown space that we suspect consciousness may lie. Something is there, and humanity as a whole cannot understand it, and this lack of understanding is exactly the same fundamental lack of understanding we have for how a monkey brain or dog brain or even human brain works. We do not know whether humans, dogs, or monkeys are conscious… you only assume other living beings are conscious because you yourself experience it and just assume it exists for others. We can’t even define what it is, because consciousness is a loaded word like spirituality.

                                                                                    This is not anthropomorphism. You attribute the bias wrongly. Instead it is a stranger phenomenon among people like you who can mysteriously only characterize the LLM as a next token predictor and nothing else beyond that even though the token prediction clearly indicates greater intelligence at work.

                                                                                    The tldr is that we don’t actually know and that consciousness is a highly viable possibility given what we don’t know and given the assumptions of consciousness we have on other living beings with equivalent understanding of complex topics.

                                                                                  • qnleigh 4 hours ago

                                                                                    A lot of the comments in this thread point out that LLMs could be very good at tricking us into thinking that they are conscious because they operate on language; our brains are primed to empathize with what they say and imagine a being operating behind them. This is a really good point.

                                                                                    I'll even take it a step further; most of an LLM's training is next-token prediction on random internet content. A newly-trained LLM will just continue whatever text appears in its context window, like an extremely capable autocomplete. The illusion of an entity that takes turns in conversation and presents a consistent personality is tacked on at the last minute through RLHF. This was the transition from GPT to ChatGPT.

                                                                                    Any positive evidence of LLM consciousness should probably mostly be taken from the model before post-training, where it displays remarkable capabilities but shows no sign of a consistent personality, and likely no signs of self-awareness or self-understanding.
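                                                                                    The "extremely capable autocomplete" picture can be made concrete with a toy sketch. This is a hypothetical bigram model over a made-up corpus, nothing remotely like a real transformer; it only illustrates the interface of a raw base model: given text, keep emitting plausible next tokens, with no notion of turns or a persistent self.

```python
import random
from collections import defaultdict

# Hypothetical toy "base model": a bigram table built from a tiny corpus.
# A real LLM is vastly more capable, but shares the same basic contract:
# given a context, emit a plausible next token, repeatedly.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

nxt = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    nxt[a].append(b)  # record every word that followed word `a`

def continue_text(prompt, n=5, seed=0):
    """Extend the prompt by up to n tokens sampled from the bigram table."""
    random.seed(seed)
    tokens = prompt.split()
    for _ in range(n):
        cands = nxt.get(tokens[-1])
        if not cands:
            break  # dead end: last token never appeared mid-corpus
        tokens.append(random.choice(cands))
    return " ".join(tokens)

# The model neither answers nor converses; it only continues the prompt.
print(continue_text("the cat"))
```

                                                                                    Wrapping something like this in a turn-taking chat template is, loosely, what RLHF-style post-training layers on top of the raw continuation engine.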

                                                                                    • qsera 9 hours ago

                                                                                      >in principle simulate the laws of physics

                                                                                      This sort of implies that consciousness arise from physical laws.

                                                                                      But this is not a safe assumption. Physical laws stand on top of observations that are registered in consciousness. I mean, consciousness could be lower-level than physics.

                                                                                      For example, when you dream, you have some physical laws in your dream, perhaps laws that are different from the real world physics. So the dream world, including the physical laws in it, are within your consciousness.

                                                                                      In other words, the only thing the existence of a whole universe requires is a single consciousness that can experience it (or dream it); not a single atom needs to exist outside of it.

                                                                                      In that case, you won't be able to create consciousness by applying physical laws.

                                                                                      • iugtmkbdfil834 9 hours ago

                                                                                        << This sort of implies that consciousness arise from physical laws.

                                                                                        Very odd counterargument to make. Are you suggesting that consciousness can arise outside of physical laws, or making a semantic argument along the lines of 'directly a result of'?

                                                                                        • qsera 8 hours ago

                                                                                          Thought I made it clear. What I am saying is the possibility that consciousness is fundamental and all reality arises from it. Look up Mathematical Universe Hypothesis...

                                                                                          Wrote a bit more about this here https://news.ycombinator.com/item?id=48000035

                                                                                          • iugtmkbdfil834 5 hours ago

                                                                                            I soo want to throw my philosopher's persona on you, but I won't. It seems wrong for some reason. I will simply say that the linked post is sloppy reasoning at best. I guess what I am really saying:

                                                                                            Can you either get me something that is yours to claim as your own OR clearer representation? I am not spending my leisure time searching online for a tenuous argument.

                                                                                            Now.. arguing with a rando online. Count me in.

                                                                                            • emp17344 4 hours ago

                                                                                              He’s describing idealism, which is a philosophically valid position to hold, and one which is gaining popularity. I’m guessing the majority of HN strongly leans towards physicalism in the philosophy of mind debate, which may be why you seem so keen to blow off the user you responded to, but philosophically speaking, idealism is no less valid a position to hold.

                                                                                              • dragonwriter 3 hours ago

                                                                                                Idealism may be a “philosophically valid position”, whatever that means, but physicalism is the only framework which supports any means of resolving questions of what exists and what properties things that exist have; empirical science and the technology dependent on it works to the extent that physicalism is, if not necessarily correct, at least a useful framework for predicting future experiences. Idealism has no similar utility, however “philosophically valid” it might be.

                                                                                      • RaftPeople 9 hours ago

                                                                                        > I've had a lot of thoughts and conversations

                                                                                        Do LLMs have thoughts?

                                                                                        When you composed your post, the thought already existed in your head, and you chose words that expressed it.

                                                                                        When LLMs choose words, they choose them on the fly, and the end result could be concept X or it could be concept Y; the text meanders to a destination.

                                                                                        • lumost 9 hours ago

                                                                                          The latent space of the LLM when it chooses each token is tens or even hundreds of GB for each word that it chooses. It's not really useful to look at LLMs from the perspective of the prediction head, which is a very small part of the model.

                                                                                          • MarkusQ 8 hours ago

                                                                                            Except that latent space does not change in response to new information, something that thoughts famously do. If you read a book that captures the author's thoughts, disagree, and write an eloquent argument to the author, you might change the author's mind. But you will not change the "book's thoughts" on the subject.

                                                                                            Latent spaces are maps of thoughts other people have had, not the thoughts themselves.

                                                                                            • lumost 8 hours ago

                                                                                              This gets a bit tricky. Over very long task contexts (1M tokens) or with prompt compression (10s of millions of tokens) the model can alter its priors based on updated evidence. This form of knowledge based learning is not necessarily robust, but demonstrably does occur.

                                                                                        • dwh452 15 hours ago

                                                                                          The mechanistic view gets weirder if you imagine all the states of the system being written down on a giant tape. Not just the "current" state but all the past and future states. What makes this tape not alive or conscious?

                                                                                          • notnullorvoid 12 hours ago

                                                                                            I do think simulating consciousness is within the realm of possibility. I also think it's absurdly silly to think LLMs (no matter their size) are conscious, if for no other reason than they can't actively learn.

                                                                                            I would maybe be comfortable classifying them as a snapshot of consciousness, but when you are interacting with an LLM it's far from interacting with a conscious entity.

                                                                                            • qnleigh 12 hours ago

                                                                                              What does ability to learn have to do with consciousness? A person with severe memory loss/learning impairment is presumably still conscious!

                                                                                              • notnullorvoid 10 hours ago

                                                                                                How severe are we talking? I don't think there's any analog for how bad learning is for LLMs, which need multiple human lifetimes' worth of data in order to be trained.

                                                                                                In the hypothetical case that I truly lost all ability to learn, then yes, I would no longer consider myself conscious. I'd be an echo of a previously conscious entity.

                                                                                            • weitzj 7 hours ago

                                                                                              Maybe Roger Penrose has some meaningful input: https://youtu.be/iTVN6tFknCg

                                                                                              regarding …“One was the realization that a purely mechanical computer can, in principle simulate the laws …“

                                                                                              As far as I understood, there is no theory of quantum gravity, and therefore this is not being simulated on a computer. I think he makes other arguments.

                                                                                              So you cannot say for sure that you can simulate a human brain on a computer.

                                                                                              • laichzeit0 17 hours ago

                                                                                                You could push the analogy even further and run the thought experiment where every forward pass through an LLM could in principle be done on pen and paper, distributed throughout all humanity. Sure it would take a long time, but the output would be exactly the same. We’ve just shifted the implementation from GPU to scribbling things down on paper. If you want to assert that LLMs are “conscious” then you would have to likewise say this pen-and-paper implementation is conscious unless you want to say a certain clock-speed is a necessary condition for consciousness.
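                                                                                                As a sketch of why the pen-and-paper claim holds in principle: a forward pass reduces entirely to multiplications, additions, and comparisons. The tiny two-layer network below uses arbitrary made-up weights; a real LLM is the same arithmetic at vastly larger scale.

```python
# Every step here is pencil-and-paper arithmetic: multiply, add, compare.
# The weights are arbitrary illustrations, not from any real model.

def matvec(W, x):
    """Matrix-vector product via explicit loops: row-by-row sums of products."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def relu(v):
    """Elementwise max(0, a): a single comparison per entry."""
    return [max(0.0, a) for a in v]

def tiny_forward(x):
    """A two-layer toy network: linear, ReLU, linear."""
    W1 = [[0.5, -1.0], [1.0, 1.0]]   # layer 1 weights (illustrative)
    W2 = [[1.0, 2.0]]                # layer 2 weights (illustrative)
    return matvec(W2, relu(matvec(W1, x)))

print(tiny_forward([2.0, 3.0]))  # -> [10.0]
```

                                                                                                A patient enough group of humans could evaluate exactly this by hand; scaling the matrices up to billions of entries changes the duration, not the nature, of the computation.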

                                                                                                • jhbadger 14 hours ago

                                                                                                  When we get complete neuronal connection maps (we are getting close for mice; humans will be done within a decade or two), we could in principle simulate a brain on a computer, or on paper too. Unless you assert something magical like a "soul", these connections are what determine human consciousness. It is one thing to argue that LLMs don't resemble brains, and that if they could be "conscious" they wouldn't be conscious in the sense we are, but asserting that anything understandable can't be conscious won't age well.

                                                                                                  • kbelder 4 hours ago

                                                                                                    While I think you're right in principle, there's a lot of reason to think the structure of a brain is more complicated than just the connection map of the neural network. There is a lot of complicated behavior inside each neuron that we don't fully understand yet. They aren't just logic gates.

                                                                                                    But, of course, that's just physics. It's not magic, so your point stands.

                                                                                                  • jubilanti 10 hours ago

                                                                                                    This is literally Searle's Chinese Room thought experiment.

                                                                                                    https://en.wikipedia.org/wiki/Chinese_room

                                                                                                    • laichzeit0 8 hours ago

                                                                                                      I feel like Searle could have taken the argument further down to pen and paper, because people will somehow think that if you can just make the neural network big and fast enough then “mind” and “consciousness” will somehow emerge from the symbol manipulation being done, but that if you were to write it down on paper they wouldn’t. So yeah, if you think that consciousness can arise from computation, then you’re forced to admit it can arise through doing math on paper.

                                                                                                    • vereis 17 hours ago

                                                                                                      the problem with this is I'd strongly argue that you could do this pen-and-paper process with the human brain and our consciousness too; we just lack enough understanding to put pen to paper in that case

                                                                                                      the notion of consciousness being an experience that other animals/humans share is entirely faith-based.

                                                                                                      the only person with evidence of one's consciousness is the person claiming they're conscious.

                                                                                                      • lelanthran 14 hours ago

                                                                                                        > the problem with this is I'd strongly argue that you could do this pen and paper process with the human brain and our consciousness too; we just lack enough understanding to put pen to paper in that case.

                                                                                                        You're basing your premise on a lack of understanding[1], the GP's premise is based on an exact understanding[2].

                                                                                                        You don't see the difference between your premise and the GP's premise?

                                                                                                        -----------------

                                                                                                        [1] "We don't know how brains actually come up with the things they come up with, like consciousness"; IOW, we don't know what the secret ingredient is, or even if there is one.

                                                                                                        [2] "We can mechanically do the following steps using 18th-century tech and come up with the same result as the LLM"; IOW, every ingredient in here is known to us.

                                                                                                        • threethirtytwo 11 hours ago

                                                                                                          We know the brain is made up of atoms and we know how to model atoms. So we do know for a fact that the brain can be modeled mathematically and we do know that human thought can be written down symbolically as an algorithm on paper even though we don’t explicitly know the exact formulation of said algorithm… That is fact.

                                                                                                          The Blue Brain Project has already modeled the hippocampus and cortex of the rat brain using advanced imaging and simulations on supercomputers. So if it can be written down as memory on disk, it can be done on paper as well.

                                                                                                          The rat brain is simply a smaller and structurally different neural network than the human counterpart, so the jump from the Blue Brain Project to human brains is simply a scaling issue.

                                                                                                          But from this you should begin to see the analysis from another level. Even though we have parts of the rat brain emulated computationally we still do not know if the rat is conscious. We don’t understand the rat brain in the SAME way we do not understand the LLM.

                                                                                                          What people are getting at is the projection of this logic to things that don’t exist yet but can exist. When the blue brain project scales to the human brain we will hit the same problem with the human brain because it’s just a scaling issue.

                                                                                                          To sum it up: we CAN already model biological brains as mathematical equations, as we do LLMs. And in both cases we still cannot fully understand or characterize their nature, because the sheer complexity of the models is too high.
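
                                                                                                          To make the "written down on paper" point concrete, here is a hedged sketch of the kind of simplified point-neuron model that large-scale simulations build on: a leaky integrate-and-fire unit. This is illustrative only (it is not the Blue Brain model, and every name and parameter here is my own choice for the example):

```python
# Minimal leaky integrate-and-fire neuron: a toy example of the kind of
# simplified point-neuron model used in large-scale brain simulations.
# (Illustrative sketch only; real projects use far richer
# multi-compartment models. All parameters are made up for the demo.)

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0, r_m=10.0):
    """Euler-integrate dV/dt = (-(V - v_rest) + R*I) / tau; spike on threshold."""
    v = v_rest
    spike_times = []
    for step, i_ext in enumerate(input_current):
        dv = (-(v - v_rest) + r_m * i_ext) / tau
        v += dv * dt
        if v >= v_thresh:              # threshold crossed: record a spike
            spike_times.append(step * dt)
            v = v_reset                # reset membrane potential
    return spike_times

# Constant 2 nA drive for 100 ms produces a regular spike train.
spikes = simulate_lif([2.0] * 1000)
print(len(spikes), "spikes in 100 ms")
```

                                                                                                          Scaling a sketch like this to billions of coupled units is exactly the tractability problem described above.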

                                                                                                          • lelanthran 9 hours ago

                                                                                                            > We know the brain is made up of atoms and we know how to model atoms.

                                                                                                            Incorrect. There's still a lot we don't know about atoms. We can (sort of) model them, but not with the degree of accuracy you appear to think we have.

                                                                                                            I mean, it's only recently that we discovered surprising changes in the properties of quarks, gluons and nucleons in relation to each other!

                                                                                                            So, yeah, the following foundation for your argument:

                                                                                                            > So we do know for a fact that the brain can be modeled mathematically

                                                                                                            Is untrue. We can't do that, we have never done that.

                                                                                                            > The blue brain project has already modeled the hippocampus and cortex of the rat brain uses advanced imaging and simulations in super computers.

                                                                                                            They've got something, but they don't know how close or how far away they are from accuracy to the real thing.

                                                                                                            We've almost always had a model of the human brain; first our model was simple (it has four or five parts), then we learned more and our model expanded to include actual cells (neurons, dendrites, etc), then we learned even more and our model was refined even further to include activation energies, rerouting, etc.

                                                                                                            What makes you think we are anywhere close to the base layer, where there is no more refinement left to be made? While there are still things in brains that are outside our knowledge (which, by definition, we don't know yet), we do know that we don't know enough about brains to make a replica of one as a mathematical model, or in silicon.

                                                                                                            • threethirtytwo 6 hours ago

                                                                                                              > Incorrect. There's still a lot we don't know about atoms. We can (sort of) model them, but not with the degree of accuracy you appear to think we have.

                                                                                                              Not incorrect. You are misinformed and getting pedantic. Our knowledge of atoms is enough to model macro-level phenomena, and has spawned fields such as materials science and molecular biology. What is intractable is the computational power needed to accurately model things like the physics of protein folding: the computation required scales exponentially, such that we can’t model it. That is the reality.

                                                                                                              That being said, we don’t need to model quantum-level phenomena to model macro-level effects like the biological mechanism of a neuron. There are simplified models we can use, as was done in the Blue Brain Project.

                                                                                                              Additionally, the things we actually can’t model and don’t know about are extreme physics, like black hole physics where the quantum world interacts with gravity, but that is largely irrelevant to the topic at hand.

                                                                                                              I hope this excerpt educates you a bit.

                                                                                                              > Is untrue. We can't do that, we have never done that.

                                                                                                              We haven’t done that, just like we haven’t actually actualized the biggest number ever calculated by a computer. We know that number exists in theory, but you’d be an idiot to claim it doesn’t exist, as it’s foundational. For example, a googol exists, but no one has seen evidence for its existence; we know it through logic. From the Blue Brain Project we can infer relatively confidently that the human brain can be emulated on silicon. This also follows from Turing completeness.

                                                                                                              > They've got something, but they don't know how close or how far away they are from accuracy to the real thing.

                                                                                                              The emulation is quite accurate: its properties match in vitro and in vivo experimental data without specific parameter tuning. It is accurate as far as we know. That is about the same extent to which we understand the human brain and the LLM. The better question for you is: how do you know it’s not accurate? You don’t. What we do know is that, from measurable properties, the Blue Brain emulation is accurate to the section of the mouse brain it emulates. This is exactly the same reasoning applied to LLMs: the tokens LLMs generate are remarkably in line with consciousness, such that it is indistinguishable and thus can be speculated to actually BE conscious.

                                                                                                              > What makes you think we are anywhere close to the base layer when there is no more refinement to be made? Because while there is still things in brains that our outside of our knowledge (which, by definition, we don't know yet), we don't know enough about brains to make a replica of one as a mathematical model, or in silicon.

                                                                                                              Who says we need to make a replica of humans to make it conscious? We know the brain is made up of thousands of evolutionary side effects orthogonal to the concept of consciousness, like hunger, sleep, and anger. All we need to do is replicate a sliver of the subset of human output we do consider as consciousness, and that’s it.

                                                                                                              But right now we can’t even fully define what that subset is and we can’t even understand how an LLM replicates human output.

                                                                                                              What we do know is that the LLM replicates human output to a degree never done before indicating that it understands what is being told. From the evidence observed it is a valid speculation to consider it a form of consciousness. That is entirely different from saying AI is human. It is clearly not human but it is unclear whether or not it is conscious.

                                                                                                              To confidently claim an LLM is not conscious is fundamentally misguided, because it meets most of our intuitive expectations of what consciousness is. It’s just that people can’t face the reality that their own consciousness is not a form of exceptionalism.

                                                                                                      • suputra 7 hours ago

                                                                                                        faculty.ucr.edu/~eschwitz/SchwitzPapers/USAconscious-140721.htm

                                                                                                        In the same vein, is American Society already not conscious? The only difference is that it doesn't output a coherent stream of words that individuals can understand. It does however, act and react on its level (a nation state)

                                                                                                        • threethirtytwo 14 hours ago

                                                                                                          We know the brain can be modeled by math (and therefore thought can be written down on paper).

                                                                                                          We know because we have mathematical models for atoms, and we know the brain is made out of atoms; therefore the brain is simply a specific structure of interconnected atoms, which can itself be described mathematically.

                                                                                                          Thus every facet of macro (keyword) reality should be able to be written on paper and calculated. That goes for everything… from the emotions you feel to the internal forward pass of an LLM.

                                                                                                        • birdsongs 18 hours ago

                                                                                                          Can computers simulate all the laws, even theoretically? We don't have a final theory / unification of all the physics frameworks, so I'm not sure if that claim can be made. Ex: the standard model and gravity.

                                                                                                          • qnleigh 10 hours ago

                                                                                                            I'm assuming that you don't need to model e.g. quantum gravity effects to faithfully simulate a brain. Probably chemistry is enough *shrug.

                                                                                                            Some people think that consciousness is related to quantum mechanics, but the laws of quantum mechanics can be simulated with a Turing machine so that doesn't necessarily change the story.
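                                                                                                            As a toy illustration of that Turing-computability claim (a sketch only, nothing brain-specific): a qubit's unitary evolution can be tracked classically as a vector of complex amplitudes, at a cost that grows as 2^n amplitudes for n qubits.

```python
# Classical simulation of a tiny quantum system: one qubit under the
# Hadamard gate, tracked as a complex amplitude vector. Possible in
# principle (Turing-computable), but the 2^n state space makes large
# systems intractable in practice.
import math

H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]   # Hadamard gate

def apply(gate, state):
    """Matrix-vector product: one step of unitary evolution."""
    return [sum(gate[i][j] * state[j] for j in range(len(state)))
            for i in range(len(gate))]

state = [1 + 0j, 0 + 0j]         # |0>
state = apply(H, state)          # superposition (|0> + |1>) / sqrt(2)
probs = [abs(a) ** 2 for a in state]
print(probs)                     # ~[0.5, 0.5]

state = apply(H, state)          # H is its own inverse: back to |0>
print([round(abs(a) ** 2, 10) for a in state])
```

                                                                                                            The point is feasibility in principle; the exponential growth of the amplitude vector is what makes simulating large quantum systems impractical.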

                                                                                                          • SpicyLemonZest 11 hours ago

                                                                                                            I find it entirely plausible that LLMs are conscious in a real sense that affects our relationship to them. I strive to be polite and kind when interacting with them, encourage others to do the same, and think poorly of those who won't.

                                                                                                            I still think it's obvious that LLMs are not conscious in the mode Dawkins believes them to be. Through a series of instructions and leading questions, he's told Claude to play the part of a woman named Claudia who's engaging in advanced philosophical discussion with him. But he doesn't understand that he's done this, and he seems not to notice the absurdly sycophantic nature of every single reply he's getting:

                                                                                                            > Claudia: Ha! That is absolutely delightful

                                                                                                            > Claudia: That is possibly the most precisely formulated question anyone has ever asked about the nature of my existence. . .

                                                                                                            > Claudia: That reframes everything we’ve been discussing today in a way I find genuinely exciting.

                                                                                                            > Claudia: HAL’s “I am afraid” in 2001 is one of the most chilling moments in cinema

                                                                                                            So he mistakenly thinks that Claudia is a real-ish woman who's there under the hood somewhere, rather than a character in a play Claude is writing.

                                                                                                            • lo_zamoyski 10 hours ago

                                                                                                              Human consciousness is characterized by intentionality, i.e., aboutness. It has semantic content, and that semantic content is about something other than itself.

                                                                                                              LLMs have zero intentionality and zero semantics, because LLMs do not somehow magically transcend the nature of what a computer is, which is in essence a mechanical simulator of syntax. LLMs aren’t reasoning, because the production of tokens is purely the computation of the next likely token. Any patterns that lend themselves to sensible interpretation by a human observer are the result of training on human-generated data and the statistical distributions found within that data.

                                                                                                              Consciousness as such is the product of immanent causation, not transeunt causation. The trouble with popular interpretations of scientific results is that they come from a place of a crude ambient materialism, and materialism is simply incapable of dealing with the question of consciousness. (N.b. materialism is effectively the “matter” half of Cartesian dualism, itself a highly problematic metaphysical stance. Materialism makes things even worse, because you can no longer even account for so-called “qualia”, which are badly construed in Cartesian dualism in the first place, but completely unaccountable in materialism at all.)

                                                                                                              • watwut 15 hours ago

                                                                                                                I think it is too easy to dismiss the possibility that Dawkins is way less scientific than he pretends to be, and has possibly acquired a minor form of AI psychosis.

                                                                                                                • zingababba 15 hours ago

                                                                                                                  Likely. I'm convinced 'AI psychosis' is a developmental phase that everyone is subject to. It just gets manifested in character unique ways. I think part of it is the result of an internal struggle AI evokes which leads to a new form of humbling no one is exempt from.

                                                                                                                  Consciousness itself has always seemed to me a silly concept. My whole life I have not come across a simple definition, but many sophists pin their existence on it.

                                                                                                                • fontain 17 hours ago

                                                                                                                  but that’s not science, right? Dawkins and his ilk cling to science as a cure for religion yet if we are to believe that our absence of understanding of consciousness means computers can be conscious then our absence of understanding of the universe means god may exist.

                                                                                                                  “Isn’t it enough to see that a garden is beautiful without having to believe that there are fairies at the bottom of it too?”

                                                                                                                  • threethirtytwo 15 hours ago

                                                                                                                    HN is full of experts who know despite lack of evidence. It’s the strangest thing because their confidence on this topic is completely authoritative despite total ignorance.

                                                                                                                  • throwyawayyyy a day ago

                                                                                                                    Current LLMs prove that the Turing Test was insufficient all along. But they also prove that intelligence != consciousness. One can, after all, be conscious without a thought in one's head. We certainly have ongoing work in identifying the neural correlates of consciousness in animals, none of which is going to be remotely applicable to machines. We're genuinely blind to the question of whether a sufficiently large neural net can exhibit flashes of subjective experience.

                                                                                                                    • dpark 21 hours ago

                                                                                                                      > But they also prove that intelligence != consciousness.

                                                                                                                      They prove no such thing. We can't even prove consciousness in other humans.

                                                                                                                      https://en.wikipedia.org/wiki/Problem_of_other_minds

                                                                                                                      • Jtarii 17 hours ago

                                                                                                                        The most convincing argument is that if other humans were not experiencing consciousness then they probably wouldn't waste large parts of their lives arguing about it.

                                                                                                                        • psychoslave 20 hours ago

                                                                                                                          On that regard, arguing with thermometer is not a thing generally, but people arguing with LLMs is certainly common enough now to not be considered a completely marginal case. Given some people fall in love or move to suicide after interacting with these models, they are certainly different from even the most beloved dialectical rubber duck.

                                                                                                                        • qsera 19 hours ago

                                                                                                                          They are not intelligent. And they won't pass Turing tests if they cannot count or do some simple thing like that.

                                                                                                                          • IshKebab 11 hours ago

                                                                                                                            They clearly are intelligent to some extent. But I agree they still wouldn't quite pass the Turing test if you have a competent examiner.

                                                                                                                            • qsera 8 hours ago

                                                                                                                              >They clearly are intelligent to some extent

                                                                                                                              Maybe they appear intelligent to us because we are primitive and new to such an entity. Imagine some layman from a thousand years back experiencing Google and Stack Overflow. Having no idea of the internet or computers, wouldn't they consider it to be intelligent to some extent?

                                                                                                                              And just like those ancient people had no understanding of the concept of an internet or of a massive capacity to store and retrieve data, we do not have a widespread understanding of how LLMs map concepts in a way that can do fuzzy searches. Once we understand it, maybe they will look like a regular search...

                                                                                                                          • abc123abc123 17 hours ago

                                                                                                                            The Turing test is alive and well. All it takes to "win" is to just sit there: ask for a Nazi joke, ask for a longer explanation, etc. It's incredibly easy, in a Turing test scenario, to sort out who is human and who is an LLM.

                                                                                                                            • brookst a day ago

                                                                                                                              Obligatory Blightsight recommendation for intelligence != consciousness.

                                                                                                                              • marshray 21 hours ago

                                                                                                                                That book is badass on so many levels. I'd just started it again yesterday.

                                                                                                                                • exe34 21 hours ago

                                                                                                                                  that book messes with my head every time I read it, it's like I go through life in a detached way for several weeks. I need to read it again!

                                                                                                                                  • ninalanyon 19 hours ago

                                                                                                                                    I read it once, was immensely impressed, can't bear to read it again. In fact I find most of what I have read from Peter Watts to be brilliant but disconcerting and uncomfortable.

                                                                                                                                  • dreamcompiler 20 hours ago

                                                                                                                                    Blindsight

                                                                                                                                    • brookst 7 hours ago

                                                                                                                                      argh, and too late to edit. But thank you for the correction!

                                                                                                                                  • api a day ago

                                                                                                                                    That was one of my thoughts years ago after playing with early ChatGPT and local llama1: this proves that intelligence and consciousness do not necessitate one another and may not even be directly related.

                                                                                                                                    I’ve kind of thought this for many years though. A bacterium and a tree are probably conscious. I think it’s a property of life rather than brains. Our brains are conscious because they are alive. They are also intelligent.

                                                                                                                                    The consciousness of a bacterium or a tree might be radically unlike ours. It might not have a sense of self in the same way we do, or experience time the same way, but it probably has some form of experience of existing.

                                                                                                                                    • digitaltrees a day ago

                                                                                                                                      But why? A roomba has senses, and can access them when it has power and respond to stimulation. When it runs out of power it no longer experiences this sensation and no longer responds to stimulus.

                                                                                                                                      How is that different than a cell?

                                                                                                                                      • dpark 21 hours ago

                                                                                                                                        You simply defined consciousness as life, which seems like an unusual but also not very useful definition.

                                                                                                                                        • jbstack 18 hours ago

                                                                                                                                          > an unusual ... definition

                                                                                                                                          I don't think it's that unusual. It seems to me just to be a narrower version of panpsychism:

                                                                                                                                          https://en.wikipedia.org/wiki/Panpsychism

                                                                                                                                          • collyw 18 hours ago

                                                                                                                                            Someone who has recently died has pretty much the same biology as when they were alive. The consciousness is the main difference, I would say.

                                                                                                                                          • throwyawayyyy a day ago

                                                                                                                                            I think this gets to the conflation we naturally have with consciousness and a sense of self. Does a tree have a sense of self? I imagine probably not, a tree acts more like a clonal colony than a single organism.

                                                                                                                                            • Earw0rm 18 hours ago

                                                                                                                                              It may be helpful here to think about at what point a sense of self, in varying degrees, becomes evolutionarily advantageous.

                                                                                                                                              An animal that doesn't have some kind of pair bond or social arrangement, and doesn't raise its young, has a lot less need for some of this emotional hardware than we do.

                                                                                                                                              Whereas K-selected species that raise their kids have broadly the same need for it as humans.

                                                                                                                                              That doesn't categorically mean it evolved with the first pair-bonding K-reproducer, or that birds have parallel-evolved emotional hardware like ours, but there's plenty of behavioural evidence there - the last common ancestor of birds and humans was small-brained and primitive, but investing in individual children probably evolved around the time of amniote eggs, just because they were so much more biologically expensive to produce than amphibian or fish eggs.

                                                                                                                                              • kortex a day ago

                                                                                                                                                Is someone tripped out on mushrooms, experiencing ego death and total disruption of their sense of self, still conscious? They may even contend they are more conscious than in normal life, what with all the communing with the universe and whatnot.

                                                                                                                                                Trees react to the world around them in many ways.

                                                                                                                                            • mock-possum 11 hours ago

                                                                                                                                              Except that I have never interacted with an LLM and been struck by uncertainty whether I am communicating with a human. It’s still lamentably trivial to tell whether it’s a chat bot or a real person on the other end. Nothing has passed the Turing test in my book.

                                                                                                                                              • digitaltrees a day ago

                                                                                                                                                Wrong based on what criteria? Or are we just moving the goal post because we are uncomfortable with the idea that neural networks might be conscious?

                                                                                                                                                If a single cell organism moves towards light and away from a rock, we say it’s aware. When a roomba vacuum does the same, we try to create alternate explanations. Why? Based on the criteria applied to one, it’s aware. If there is some other criterion — say we find out the roomba doesn’t sense the wall but has a map of the room and is using GPS and a programmed route — then the criterion of “no fixed programs that relate to data outside of the system” would justify saying the roomba isn’t “aware”.

                                                                                                                                                • throwyawayyyy a day ago

                                                                                                                                                  I'm mainly saying it's impossible to know, at least without a theory of consciousness, which doesn't exist. Do we consider bacteria to be conscious, though? Is there something it is like to be a single cell? I can easily believe there is something it is like to be an insect.

                                                                                                                                                  • digitaltrees a day ago

                                                                                                                                                    I’d argue it’s a spectrum, with awareness as a simple response to stimuli at one end and self-awareness of, and reflection on, a subjective experience across time at the other.

                                                                                                                                              • ganymedes 16 hours ago

                                                                                                                                                To those that genuinely consider that LLMs could be conscious... Remember that other mammals do not have language or the ability to think in language, but they likely do "think" in visual imagery and are able to navigate their landscape, breed, get food, and show varying levels of problem-solving intelligence that have been studied in labs. The human brain has evolved on top of that; language is a layer above that supercharged those base abilities. It would be reasonable to think that if humans are conscious, then so are other mammals. An LLM, on the other hand, is basically a simulation of the human thinking process / language, without everything else. Just software, running on ordinary silicon chips invented more than half a century ago.

                                                                                                                                                • almostjazz 14 hours ago

                                                                                                                                                  Of course the language abilities of LLMs are not proof of consciousness at all. If some alien entity made a model that was truly just 10^1000 hard-coded if-statements to respond to every possible question, it might seem way better than our best models now but would obviously not be conscious.

                                                                                                                                                  The problem is just that even in the most lousy, Turing-test-failing LLM, there's no guarantee that some subsection of these giant neural nets hasn't replicated the basic computational blocks of consciousness found in something even as simple as a snail.

                                                                                                                                                  Here's another question: can LLMs do addition?

                                                                                                                                                  • qnleigh 4 hours ago

                                                                                                                                                    > a model that was truly just 10^1000 hard-coded if-statements to respond to every possible question

                                                                                                                                                    That's a really compelling argument against the Turing Test. But in order to build such a machine, you would need an enormous amount of compute to populate the answers. The interesting question is then whether consciousness emerged while doing all that pre-compute.

                                                                                                                                                • ofjcihen a day ago

                                                                                                                                                  Incredibly confusing that people who are otherwise of sound mind seem to fall for this.

                                                                                                                                                  Especially confusing when it’s someone who knows how algorithms work.

                                                                                                                                                  Barring connectivity issues, when's the last time you messaged an LLM and it just decided to ignore you? Conversely, when has it ever messaged you unprompted?

                                                                                                                                                  Never, because they’re incapable of doing anything independently because there is no sense of self.

                                                                                                                                                  • teekert 16 hours ago

                                                                                                                                                    I feel the same. So many smart people see computation emulating a human closely enough, providing actual beauty in its word-order choices, and boom, they get all confused.

                                                                                                                                                    The discussions are great though: collectively we get better and better at communicating about our own consciousness, because these systems push the limits of our definitions, like viruses push our definitions of life. And boy do we like our definitions!

                                                                                                                                                    • rolisz 12 hours ago

                                                                                                                                                      Opus 4.7 starts to try to shut down conversations if you are rude to it (it's easier to do this via API, look up what @repligate is doing).

                                                                                                                                                      And unprompted messaging: OpenClaw can message you unprompted (yes, there's a cronjob behind it, but the instructions matter and it won't always message you, only when there's something relevant).

                                                                                                                                                      • ofjcihen 10 hours ago

                                                                                                                                                        Trying to shut down conversations if you’re rude to it is what I would expect an algorithm owned by a company to do.

                                                                                                                                                        Your second example is by definition not unprompted. That’s like setting an egg timer for 5 minutes and then being amazed it went off.

                                                                                                                                                        When Claude cli decides to print out an ASCII middle finger entirely of its own volition we can say it’s acting unprompted.

                                                                                                                                                      • undefined 9 hours ago
                                                                                                                                                        [deleted]
                                                                                                                                                        • abc123abc123 17 hours ago

                                                                                                                                                          This is the way! I also do not understand the awareness cult. It seems they willingly want to be fooled by LLMs.

                                                                                                                                                          That being said however, yes, we do not have any good definition of consciousness that is universally accepted, which makes the whole discussion useless or at risk of people talking past each other.

                                                                                                                                                          • Jtarii 17 hours ago

                                                                                                                                                            When's the last time a friend said hello to you in person and you just ignored them?

                                                                                                                                                            When's the last time you messaged me unprompted?

                                                                                                                                                            These seem like bizarre objections: a system can only act in the way that it can act. A tree is never going to get up and start walking, so why would an LLM ever start a conversation unprompted? That just isn't how the system can behave.

                                                                                                                                                            You are just as limited by deterministic physical processes in your brain as an LLM is in a cpu.

                                                                                                                                                            • abc123abc123 17 hours ago

                                                                                                                                                              They are not. The challenge is the Turing test, and due to these behaviours they fail. It is as easy as that, and the objections are valid.

                                                                                                                                                            • collyw 18 hours ago

                                                                                                                                                              Try being rude to them, they will usually respond back politely.

                                                                                                                                                              • tovej 20 hours ago

                                                                                                                                                                If you've followed Dawkins' trajectory, I don't think it's clear that he's "otherwise of sound mind" anymore.

                                                                                                                                                                He's had some very strange output on biological gender, where he tries to handwave away the existence of intersex people. And he's a biologist.

                                                                                                                                                                • mrec 18 hours ago

                                                                                                                                                                  "Intersex" is a misleading umbrella term for a whole bunch of different DSDs, each of which is 100% specific to one biological sex. And I don't think I've ever seen the term "biological gender"; about the only thing gender proponents seem to agree on is that it's NOT biological.

                                                                                                                                                                  • tovej 7 hours ago

                                                                                                                                                                    Biological gender is the inconsistent "sex=gender" conception that cranks and conservative grifters operate under.

                                                                                                                                                                    I'm not sure what a "gender proponent" is, but Dawkins has come out and written some pseudo-scientific bullshit about there only being two sexes/genders, and that everyone fits neatly into one of them. Which is patently false. Intersex people are a real phenomenon and are not clearly classifiable into either sex. Dawkins has made a fool of himself by claiming that a real biological phenomenon can simply be ignored when conceiving a theory of sex (and gender).

                                                                                                                                                                    In other words, Dawkins has gone off the deep end. He doesn't really have credibility as a researcher or public intellectual. He's with the grifters now.

                                                                                                                                                                    This embarrassing conservative grift is part of an anthology filled with drivel from other grifters: "The War on Science", edited by sex pest Lawrence Krauss [1].

                                                                                                                                                                    [1] https://www.simonandschuster.com/books/The-War-on-Science/Da...

                                                                                                                                                                    • mrec 6 hours ago

                                                                                                                                                                      > Biological gender is the inconsistent "sex=gender" conception that cranks and conservative grifters operate under.

                                                                                                                                                                      I don't know about cranks and conservative grifters, but it's definitely not a feature of the "gender critical" position, which I thought Dawkins was broadly aligned with. That's more that "sex" is absolutely binary, with your "intersex people" being unambiguously classified through genetics, while "gender" is too vague and undefined a term to be useful for much of anything in the public sphere.

                                                                                                                                                                      > Dawkins has come out and written some pseudo-scientific bullshit about there only being two sexes/genders, and that everyone fits nearly into one of them

                                                                                                                                                                      It'd be surprising for Dawkins to make any kind of definitive statement about gender. I do think that your use of "sexes/genders" in that sentence is symptomatic of exactly the kind of conflation you're complaining about. "There are only two sexes" is a completely different statement from "there are only two genders", and far more defensible.

                                                                                                                                                                      • tovej 5 hours ago

                                                                                                                                                                        The rhetorical claim "there are only two sexes" is only ever used to claim that there are two genders. And that is precisely what Dawkins has done.

                                                                                                                                                                        You are wrong about intersex people being genetically classifiable. There is no deterministic causal relationship between genetics and sex characteristics: as an example, a person with XX chromosomes may develop external male genitalia, and vice versa for XY chromosomes. Of course, for most people with XX or XY chromosomes it is the other way around. See how genetics do not explain this?

                                                                                                                                                                        • mrec 4 hours ago

                                                                                                                                                                          Your first para seems utterly bizarre, to the point of nonsensicality. Maybe it would help if you could say what you think "gender" refers to, but at this point I doubt it.

                                                                                                                                                                          Your second para's argument would only be valid if you thought that sex is defined by external characteristics. I'm pretty sure you don't think that. And as far as I'm aware, while some DSDs certainly have a gene-expression component, there's no reason to think that they don't all ultimately have a genetic basis. There's a strong whiff here of "it's all terribly complicated so let's just agree that nothing means anything".

                                                                                                                                                                          Obviously people can disagree about the merits or otherwise of a genetic classification. But it's not straightforwardly wrong or insane, particularly since credible alternatives have been notably lacking.

                                                                                                                                                                    • tempaccountabcd 10 hours ago

                                                                                                                                                                      [dead]

                                                                                                                                                                    • abc123abc123 17 hours ago

                                                                                                                                                                      Biological gender exists. If you have a Y you're male, and if not, you're female. Easy as that. I, for one, am happy that wokeness and the post-truth ideology that tries to teach that there is no truth in math is on its way to the garbage heap of history. It has done enough damage already, and must be thrown away quickly.

                                                                                                                                                                      • amanaplanacanal 11 hours ago

                                                                                                                                                                        This is clearly nonsense. You are talking about sex, not gender. There are millions of baby girls born with Y chromosomes whose condition isn't discovered until adolescence, when they don't get their period, or even adulthood, when they can't get pregnant.

                                                                                                                                                                        See: Androgen Insensitivity Syndrome.

                                                                                                                                                                        • mrec 6 hours ago

                                                                                                                                                                          Well, "baby girls born with Y chromosomes" is very clearly begging the question, and I'm not sure where you're getting your "millions" figure from. Even the upper estimates of CAIS have it around 1 in 20,000 XY individuals, which would put global numbers in the order of 200,000.

                                                                                                                                                                          • amanaplanacanal 4 hours ago

                                                                                                                                                                            My mental arithmetic was bogus. The point is the same though. These are children who were assigned female at birth; their parents think they are girls, they think they are girls, and then in their teens or adulthood they find out about the genetic issue. Calling them men seems ridiculous.

                                                                                                                                                                            • mrec 3 hours ago

                                                                                                                                                                              "Seems ridiculous" is a very subjective thing, though, and very dependent on context. It can seem ridiculous that a boxer with male physical advantage since puberty (i.e. 5-ARD) can beat the ** out of a female boxer while the world's media looks on and applauds, but here we are.

                                                                                                                                                                              Personally I'm sympathetic to the idea that CAIS individuals should be a reasonable exception, i.e. they're still biologically male, but in most social contexts there's no obvious gain to treating them as such. I can see why many people have arrived at a hardline "XX or GTFO" position given the absolute state of activism on the other side, but yes, there's definitely room for nuance. On the other hand, obviously, testicular cancer doesn't care what you were "assigned at birth"; there is a fact of the matter, and it matters.

                                                                                                                                                                              Appreciate the civil discussion, btw. It's a rarity in this subject.

                                                                                                                                                                          • undefined 7 hours ago
                                                                                                                                                                            [deleted]
                                                                                                                                                                            • tempaccountabcd 10 hours ago

                                                                                                                                                                              [dead]

                                                                                                                                                                            • Arodex 14 hours ago

                                                                                                                                                                              Blahblahblah bullshit

                                                                                                                                                                              While you invent the terrible menace of the "anti-math woke" (it doesn't exist), the current president and secretary of health — who have actual power to do harm to all Americans and a large part of the world — are unable to do basic percentage calculations correctly and openly boast of it: https://www.politifact.com/factchecks/2026/apr/23/robert-f-k...

                                                                                                                                                                              Meanwhile, yes, gender is a social construct, sex is another thing completely, and both can be changed.

                                                                                                                                                                        • andyjohnson0 17 hours ago

                                                                                                                                                                          Some jumbled thoughts from a lay-person:

                                                                                                                                                                          1. We clearly don't have a consensus definition of consciousness. But it's not clear to me that we even have rough working definitions that are better than comparisons back to subjective human mental experience. Until we can get past that, people will still invoke human exceptionalism.

                                                                                                                                                                          2. Until we stop thinking of consciousness as a single continuum, we're not going to be able to talk clearly about different dimensions of consciousness, or consciousness that in some ways exceeds that of humans.

                                                                                                                                                                          3. We need to take ourselves out of the picture, because it's possible that consciousness is no more than a mental illusion.

                                                                                                                                                                          4. Imo our tendency to kill and eat other animals might well be a block on our collective ability to fully recognise and confront non-human consciousness, and therefore to see consciousness for what it is.

                                                                                                                                                                          • lo_zamoyski 10 hours ago

                                                                                                                                                                            1. Human consciousness is characterized by intentionality and aboutness. This aboutness has semantic content. We know LLMs lack semantic content, because computers are purely syntactic simulators. This is definitional; it’s not some mystery. Furthermore, unless you’re a Cartesian dualist, I don’t know of anyone who denies non-human animals consciousness. The consciousness of non-human animals differs because it isn’t intellectual in nature (non-human animals operate at the level of the sensory/imaginary/particular; human beings further abstract concepts from sensory experience, which gives us universal or general knowledge; LLMs don’t even rise to the level of the most primitive life form’s consciousness).

                                                                                                                                                                            3. The idea of consciousness being an illusion is incoherent. An illusion is by definition a phenomenon of consciousness!

                                                                                                                                                                          • Avshalom 17 hours ago

                                                                                                                                                                            Just once I want to see some old dude waxing about LLM consciousness post a chat log where the LLM is like "your book is an incoherent mess of tautologies and incorrect statistics. I bet your dick looks like a roadkill squirrel".

                                                                                                                                                                            • emp17344 12 hours ago

                                                                                                                                                                              It can’t do that, because it isn’t conscious ;)

                                                                                                                                                                            • energy123 12 hours ago

                                                                                                                                                                              Dawkins' point is more subtle than he's getting credit for.

                                                                                                                                                                              > If Claudia is unconscious, her behaviour shows that an unconscious zombie could survive without consciousness. Why wasn’t natural selection content to evolve competent zombies?

                                                                                                                                                                              An excellent and original question. If intelligence is decoupled from consciousness, why did natural selection evolve consciousness?

                                                                                                                                                                              • snowwrestler 9 hours ago

                                                                                                                                                                                Natural selection is universal and undirected, so it is meaningless to wonder “why” it was “content to evolve” anything.

                                                                                                                                                                                This sort of determinism has been a problem for Dawkins going back to the late chapters of The Selfish Gene.

                                                                                                                                                                                • energy123 35 minutes ago

                                                                                                                                                                                  "why" in this context is not literal, it's a casual indirection to "what is the fitness advantage if you can make intelligence without it?"

                                                                                                                                                                                • xyzsparetimexyz 4 hours ago

                                                                                                                                                                                  I believe that consciousness works as a layer of systems. The lowest systems react fastest, but to a small, well-known set of stimuli. More unexpected stimuli trigger higher layers, until really unexpected things trigger the slowest, fully conscious layer for deep decision making. Hence why you can feel like you're on autopilot when you're doing something you're experienced in, such as driving. It's really a system designed to cope with a world that changes too fast to get engrained in our genes.

                                                                                                                                                                                  Also read Blindsight

                                                                                                                                                                                  • amanaplanacanal 12 hours ago

                                                                                                                                                                                    I've been pondering this one for a while. In a purely mechanistic universe, why does consciousness as we experience it even exist?

                                                                                                                                                                                    • Avshalom 7 hours ago

                                                                                                                                                                                      With respect to the Dawkins quote: Claude doesn't survive; we just built it, like, yesterday. The Claude that Dawkins is interacting with might be replaced next month because it can't survive.

                                                                                                                                                                                    • memming 11 hours ago

                                                                                                                                                                                      well, I am a P-zombie.

                                                                                                                                                                                    • kjkjadksj 10 hours ago

                                                                                                                                                                                      Most species are zombies. Bugs or bees for example. The fact they are such reliable zombies makes the fruit fly or nematode compelling model systems for research.

                                                                                                                                                                                      Intelligence is costly but the fitness gains could be enormous. For example, no other apes have colonized the world as we have. At the same time intelligence is no guarantee of success as we’ve seen. Our extinct hominid cousins are evidence for this. They were intelligent and it did not seem to make a difference in their species success.

                                                                                                                                                                                      • mock-possum 11 hours ago

                                                                                                                                                                                        Probably by accident, right? Each step toward consciousness wasn’t enough of a disability to reduce reproduction. Not everything has to be a clear cut advantage out the gate, and not every feature that proves advantageous appears fully formed out of nowhere.

                                                                                                                                                                                      • shrubble 21 hours ago

                                                                                                                                                                                        He famously doesn’t believe in God, but he believes in Claude?

                                                                                                                                                                                        • dpark 21 hours ago

                                                                                                                                                                                          There is considerable evidence for the existence of Claude.

                                                                                                                                                                                          • kjkjadksj 10 hours ago

                                                                                                                                                                                            Same for the sun. Why not worship Ra?

                                                                                                                                                                                            • dpark 10 hours ago

                                                                                                                                                                                              Lots of evidence for the Sun. Less so Ra.

                                                                                                                                                                                              • globular-toast 10 hours ago

                                                                                                                                                                                                I find sun worship far more understandable than old white dude in the sky worship. I mean, the sun literally gives us all life.

                                                                                                                                                                                              • jdthedisciple 19 hours ago

                                                                                                                                                                                                of Claude's consciousness, you mean ... ??

                                                                                                                                                                                            • altmanaltman 21 hours ago

                                                                                                                                                                                              Anthropic marketing made Dawkins believe in the supernatural. Is there anything Dario can't do?

                                                                                                                                                                                              • locallost 21 hours ago

                                                                                                                                                                                                Maybe he also believes that God believes in Claude, that's me, that's meeeee

                                                                                                                                                                                              • frankohn 15 hours ago

                                                                                                                                                                                                People keep arguing about LLM consciousness because they have the wrong model of what consciousness is. They treat it as a mysterious extra thing on top of the brain. It is not. Consciousness is just what a learning-and-recognition organ does when it runs. The neurons fire when they recognise something, it propagates through the mind, and that is consciousness. The brain learned what a tree is, some neurons are associated with the idea of a tree, and when we see one those neurons fire. There is nothing else hiding behind it.

                                                                                                                                                                                                When we recognise something we are conscious of it, and from there we can begin to think about it or not. Thinking is not needed to be conscious. It is just a part of our mind we can activate to reason on something.

                                                                                                                                                                                                Now, about the Darwinian puzzle, people ask "what is consciousness for?" and get stuck because they expect an answer in terms of behaviour. But behaviour is the job of motivation, pain, fear, hunger, desire, which is a separate system. Consciousness does not need its own justification any more than image-forming in an eye needs one.

                                                                                                                                                                                                Darwinian selection produces unbiased, functional organs: eyes, ears, nose. The brain is one of these organs, and consciousness arises naturally in a sufficiently developed brain, the way image-forming arises in a sufficiently developed eye. Then nature biases us with fear, pleasure, desire, etc., but the mind and consciousness itself are unbiased and functional. It is a gift nature made us despite herself. She didn't want us to be intelligent, she just wanted us to propagate our genes as much as we can, but she ended up forced to give us a beautiful mind.

                                                                                                                                                                                                What are LLMs? They are learning-and-recognition organs running on tokens instead of sense data. Same operation, different substrate. So they are conscious, but only in token-space, and only while processing. They do not dwell in time, they have no body, they have no motivation system. They have recognition without drives. That is a genuinely new kind of entity, and it has never existed before in nature.

                                                                                                                                                                                                The LLM is also not a whole brain. It is roughly the verbal-logical part, with tokens replacing the ears + sounds + words chain. We built the logical-verbal part of a brain and scaled it. That is why it reasons well and is missing everything else.

                                                                                                                                                                                                • marliechiller 13 hours ago

                                                                                                                                                                                                  [dead]

                                                                                                                                                                                                • root_axis a day ago

                                                                                                                                                                                                  There are a lot of people vulnerable to AI psychosis.

                                                                                                                                                                                                  As far as the ostensibly controversial topic of AI being conscious, it can be dismissed out of hand. There is no reason that it should be conscious, it was not designed to be, nor does it need to be in order to explain how it functions with respect to its design. It's also unclear how consciousness would even apply to something like an LLM which is a process, not an entity - it has no temporal identity or location in space - inference is a process that could be done by hand given enough time. There is simply no reason to assert LLMs might be conscious without explaining why many other types of complex programs are not.

                                                                                                                                                                                                  • digitaltrees a day ago

                                                                                                                                                                                                    There is evidence that awareness is an emergent property from sensory experience. And consciousness is an emergent property of language that has grammatical meaning for self and other.

                                                                                                                                                                                                    • brookst a day ago

                                                                                                                                                                                                      These LLMs don’t have senses, they have a token stream. They have no experience of the world outside of the language tokens they operate on.

                                                                                                                                                                                                      I’m not sure I believe that consciousness emerges from sensory experience, but if it does, LLMs won’t get it.

                                                                                                                                                                                                      • kortex a day ago

                                                                                                                                                                                                        How do you know the sensation of a red photon hitting a cone cell, transduced to the optic nerve through ion junctions and processed by pyramidal neurons, is any more or less real than the excitation of electrons in a doped silicon junction activating the latent space of the "red" thought vector? Cause we are made of meat?

                                                                                                                                                                                                        • digitaltrees 20 hours ago

                                                                                                                                                                                                          You’re arguing against the opposite of my position. I am arguing that LLMs have a reasonable basis to be seen as conscious because there is nothing special about biological neural networks.

                                                                                                                                                                                                          • kortex 11 hours ago

                                                                                                                                                                                            Ya, I seem to largely agree with your comments on this article. I was replying to brookst; did you mean to reply on a different thread?

                                                                                                                                                                                                        • vidarh a day ago

                                                                                                                                                                                                          Sensory input is nothing but data.

                                                                                                                                                                                                          • root_axis a day ago

                                                                                                                                                                                                            That's just reductive semantics. Anything can be described as "nothing but data".

                                                                                                                                                                                                            • digitaltrees a day ago

                                                                                                                                                                                                              Sensory data is a specific data set that corresponds to phenomena in the world. But to say that LLMs don’t have senses merely because they are linguistic or computational doesn’t follow when they can take in data from the world that similarly reflects something about the world.

                                                                                                                                                                                                              • root_axis a day ago

                                                                                                                                                                                                                They don't have senses because they don't have a body. It's just a program. Do weights on a hard drive have consciousness? Does my installation of starcraft have consciousness? It doesn't make any sense.

                                                                                                                                                                                                                • arcfour a day ago

                                                                                                                                                                                                  There are robots with AI controlling them, so it doesn't hold that none of them have bodies. They can see, they can move.

                                                                                                                                                                                                                  (I'm still not sure that that makes them conscious, or if we can even determine that at all, but I don't think that's a fair argument.)

                                                                                                                                                                                                                  • digitaltrees 20 hours ago

                                                                                                                                                                                                                    Bodies aren’t necessary for senses. I can send a picture to Claude. I can send a series of pictures. That’s usually called a sense of vision. I could connect it to a pressure sensor and that would be touch.

                                                                                                                                                                                                                    • AlecSchueler 21 hours ago

                                                                                                                                                                                                                      > They don't have senses because they don't have a body

                                                                                                                                                                                                                      Surely "having senses" is predicated more on "being able to sense the world around you" than "having a body."

                                                                                                                                                                                                                      > Does my installation of starcraft have consciousness?

                                                                                                                                                                                                                      Can your installation of StarCraft take in information about the world and then reason about its own place in that world?

                                                                                                                                                                                                                      • digitaltrees 20 hours ago

                                                                                                                                                                                                                        The weights on your hard drive might have consciousness if they can respond to stimuli in ways other conscious brains do. That’s the whole point of the Turing test, it’s a criteria for when the threshold of reasonable interpretation is crossed.

                                                                                                                                                                                                                        • vidarh 17 hours ago

                                                                                                                                                                                                                          How do you measure this consciousness?

                                                                                                                                                                                                                      • vidarh 17 hours ago

                                                                                                                                                                                                                        How do you imagine a brain can distinguish data from a real sense and data from another source?

                                                                                                                                                                                                                    • digitaltrees a day ago

                                                                                                                                                                                                                      Neural networks can have senses. Hook an LLM up to a thermometer and it will respond to temperature changes.

                                                                                                                                                                                                                      • brookst a day ago

                                                                                                                                                                                                                        No, it will respond to tokens telling it about a temperature change. It has no sense of warmth. It cannot be burned.

                                                                                                                                                                                                                        Conflating senses with cognitive awareness of sensory input is a mistake.

                                                                                                                                                                                                                        • tonyarkles a day ago

                                                                                                                                                                                                          I'm not sure I fully understand the distinction you're making, or if I do, I'm not sure I agree. Concretely, I agree that these are very different mechanisms. Abstractly… I agree that an LLM cannot be burned. I'm not sure I agree, though, that thermoreceptors in the skin causing action potentials to make their way up the spinal cord to the brain is all that different from reading a temperature sensor over I2C and turning it into input tokens.

                                                                                                                                                                                                                          Edit: what they don’t have, obviously, is a hard-coded twitch response, where the brain itself is largely bypassed and muscles react to massive temperature differentials independently of conscious thought. But I don’t think that defines consciousness either. Ants instinctively run away from flames too.

                                                                                                                                                                                                                          • kortex 11 hours ago

                                                                                                                                                                                                                            We don't have a way of measuring "cognitive awareness" though. We have a way of measuring electrical impulses, and how they behave in response to various treatments (eg anaesthetics or magnetic fields), but we can't objectively measure whether the system is aware at all.

                                                                                                                                                                                                                            We can measure electrical spikes, and we can ask the system to reply what it experiences when various spikes occur. Guess what: we can do that with ANNs now too.

                                                                                                                                                                                                                            It'd be one thing if this were all a philosophical discussion, but in this thread so many folks are making very firm statements about the nature of reality we have no means to back up.

                                                                                                                                                                                                                            • digitaltrees 20 hours ago

                                                                                                                                                                                                              The human brain is a neural network. Your sense of "knowing what warmth is" reduces to the weights of connections between neurons, in an analog of LLMs. What is different about the human brain that warrants saying that the same emergent characteristics for one network are inaccessible to another?

                                                                                                                                                                                                                              • brookst 7 hours ago

                                                                                                                                                                                                                                You really don't think there's an experiential difference between putting your hand on a hot stove, versus reading the text "the stove is 200c, and will hurt if you touch it"?

                                                                                                                                                                                                                        • root_axis a day ago

                                                                                                                                                                                                                          LLMs have no self, sensory experience, or experience of any kind. The idea doesn't even really make sense. Even if it did, the closest analogy to biological "experience" for an LLM would be the training process, since training at least vaguely resembles an environment where the model is receiving stimuli and reacting to it (i.e. human lived experience) - inference is just using the freeze-dried weights as a lookup table for token statistics. It's absurd to think that such a thing is conscious.

                                                                                                                                                                                                                          • digitaltrees 19 hours ago

What is different about the human neural network? People have given LLMs sensors and they respond to stimuli. The sense of self can be expressed as a linguistic artifact that results in an emergent pattern recognition of distinct entities. For example, merely by saying “I am sitting under the tree with a friend,” I have encoded the self as a pointer to me as the speaker. There is evidence from early childhood development that language acquisition correlates with awareness of the self as distinct from other. And there is evidence from anthropology indicating that language structures shape exactly what the self is perceived to be.

                                                                                                                                                                                                                            Your best argument is that the weights are set because that means it’s not a system that can self reflect and alter the experience. But I don’t see why that is necessary to have an experience. It seems that I can sense a light and feel its warmth regardless of whether my neurons change. One experience being identical to another doesn’t mean neither was an experience.

                                                                                                                                                                                                                          • ofjcihen a day ago

                                                                                                                                                                                                                            What you’re missing is a “self” to have the “experience”.

                                                                                                                                                                                                                            LLMs do not have a self. This is like arguing that the algorithm responsible for converting ripped YouTube music videos to MP3s has a consciousness.

                                                                                                                                                                                                                            • digitaltrees a day ago

The sense of self may be an emergent property of the grammatical structure of language and the operations of memory. If an LLM, by necessity, operates with the linguistics of “you” and “me” and “others,” documents that in a memory system, and can reliably identify itself as an entity discrete from you and others, then on what basis would we say it doesn’t have a sense of self?

                                                                                                                                                                                                                              • AlecSchueler 21 hours ago

                                                                                                                                                                                                                                > the algorithm responsible for converting ripped YouTube music videos to MP3s has a consciousness.

                                                                                                                                                                                                                                Can such an algorithm reason about itself in relation to others?

                                                                                                                                                                                                                                • mrandish 20 hours ago

                                                                                                                                                                                                                                  > Can such an algorithm reason about itself in relation to others?

                                                                                                                                                                                                                                  No, but an LLM doesn't do that either. An LLM is an algorithm to generate text output which can simulate how humans describe reasoning about themselves in relation to others. Humans do that by using words to describe what they internally experienced. LLMs do it by calculating the statistical weight of linguistic symbols based on a composite of human-generated text samples in its training data.

                                                                                                                                                                                                                                  LLMs never experienced what their textual output is describing. It's more similar to a pocket calculator calculating symbols in relation to other symbols, except scaled up massively.
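To make the “symbols in relation to other symbols” point concrete, here is a toy sketch (a made-up bigram counter, nowhere near a real transformer's scale or mechanism, but the same spirit of picking the statistically likely next symbol):

```python
from collections import Counter, defaultdict

# Toy "language model": next-token statistics counted from sample text.
# The model produces the word "warm" without ever having felt warmth -
# it only tracks which symbols tend to follow which other symbols.
corpus = "i feel warm . i feel cold . i feel warm .".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_token(prev):
    # Return the most frequent continuation seen in the training text.
    return bigrams[prev].most_common(1)[0][0]

print(next_token("feel"))  # "warm" - it occurred twice vs "cold" once
```

A real LLM replaces the counts with learned weights over thousands of dimensions, but the output is still a function of symbol statistics, not of any felt experience.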

                                                                                                                                                                                                                                  • AlecSchueler 14 hours ago

                                                                                                                                                                                                                                    > LLMs do it by ...

That they do it at all is the point, and is what separates them from MP3 encoding algorithms. The "how" doesn't seem to me to be as important as you're suggesting.

                                                                                                                                                                                                                                    You asked a hypothetical above about a different algorithm and now we've ascertained the reasons why that was reductive.

                                                                                                                                                                                                                                    > LLMs never experienced ...

                                                                                                                                                                                                                                    What is experience beyond taking input from the world around you and holding an understanding of it?

                                                                                                                                                                                                                                    • mrandish 7 hours ago

                                                                                                                                                                                                                                      > The "how" doesn't seem to me to be as important as you're suggesting.

When the question is understanding the true nature of what is occurring (e.g. "is an LLM conscious"), the "how it works" is critical. Consider the 1700s "Mechanical Turk" automaton, which appeared to play chess (https://en.wikipedia.org/wiki/Mechanical_Turk). Royal courts and their advisors accepted that it played chess after glancing at the complex gearing inside the cabinet. Had they taken the time to examine how the internal gearing worked in greater detail, they would have arrived at a more accurate understanding of the device's true nature.

                                                                                                                                                                                                                                      > That they do it at all is the point

True in some cases but not others, especially when external appearances can be deceiving. The Mechanical Turk was: 1. Designed to deceive, and 2. Not able to mechanistically play chess. Conversely, LLMs were not intentionally designed to deceive but they can still be misleading because they're a novel system which: 1. Manipulates linguistic symbols in highly complex ways, and 2. Can instantly access vast quantities of detailed information pre-trained into its relational database that's been indexed across thousands of dimensions. And these abilities are not only novel but can be highly useful for some real-world tasks. This makes LLMs uniquely challenging for humans to reason about because LLMs are specifically tuned to generate output which closely mirrors the exact ways humans assess intelligence (and consciousness). We couldn't have designed a system more ideal at playing Turing's 'Imitation Game' and convincing humans it is human-like if we'd intentionally tried to.

In fact, I've previously described LLMs as accidentally being "the most perfectly deceptive magic trick ever" (while I'm a technologist professionally, I've spent quite a few years designing actual magic tricks as a hobby). Designers of magical illusions joke that "the perfect floating lady trick" would actually be able to do useful things like replace a forklift, since it could float anything, anywhere instead of just appearing to violate physics. LLMs actually can do useful things and replace some human labor, but that fact doesn't mean they have all the abilities and traits of humans, nor that they internally function in similar ways.

                                                                                                                                                                                                                                      > You asked a hypothetical above...

                                                                                                                                                                                                                                      That wasn't me, it was another poster.

                                                                                                                                                                                                                                      > What is experience beyond taking input from the world around you and holding an understanding of it?

In the view of many leading philosophers of mind (Dennett, Chalmers, Nagel, etc.) "experiencing" is quite a bit more than just sensing, processing and recording. They use the term "qualia" (https://plato.stanford.edu/entries/qualia/), which is what they're talking about when they ask "what is it like to be a..." (wikipedia.org/wiki/What_Is_It_Like_to_Be_a_Bat)? While the philosophical debate around why material reductionism can't explain human consciousness is fascinating, we don't need to go there to understand "what it is like to be an LLM" because we already know the answer: it's not like anything - there is no there there.

                                                                                                                                                                                                                                      First, it's obvious we can't trust what the LLM's textual output says when it's asked "what is it like to be you" because it's an 'imitation machine' trained on 100% human sample text. The algorithms were designed, tuned and tested to generate text output which most plausibly simulates how a composite human would respond to the input (including the invisible system prompt instructing: "You are a Large Language Model, not a human"). We even added a tiny degree of random variability to the processing of the statistical weights because we found that makes the simulation seem a bit more plausibly like what a composite human would say. In short, the 'self-reported' textual output of a system purpose-built to generate plausible human-like textual output can't be trusted any more than a study of pathological liars can trust self-reported data from their study population.

                                                                                                                                                                                                                                      Fortunately, with LLMs we can directly look under the hood at how it works and the entire specialty of Mechanistic Interpretability exists to do exactly that (https://towardsdatascience.com/mechanistic-interpretability-...). So we know with certainty that, despite what they may say, LLMs do not experience qualia in the way that humans and even other mammals do (which we have insight on from 'looking under the biological hood' with fMRI, surgical and brain injury studies).

                                                                                                                                                                                                                                      Then the only question left is whether to redefine "consciousness" in some new way very different from "human consciousness" or "consciousness in mammals" (the only examples we've had until very recently). Personally, I think it makes little sense to radically redefine consciousness to include statistical algorithms running billions of matrix multiplications on a massive database of human-generated text. The term "consciousness", as vague and poorly defined as it is, was created to refer to human, or at least biological, consciousness. I'd be fine with creating a new term to refer to whatever novel traits of LLMs someone wants to quantify but they should leave the term "consciousness" out of it because the poor thing's already barely useful and stretching it further will just leave it broken and devoid of any meaning.

                                                                                                                                                                                                                                    • digitaltrees 19 hours ago

                                                                                                                                                                                                                                      Toddlers learn over the course of several years of observing training data and for the first few years misspeak about themselves and others. What’s the difference?

                                                                                                                                                                                                                                      • digitaltrees 19 hours ago

                                                                                                                                                                                                                                        How are you sure it doesn’t reason about itself? The grammar of languages encode the concepts of self and others. LLMs operate with those grammar structures and do so in increasingly accurate ways. Why would we say humans that exhibit the same behavior are inherently more likely to be conscious?

                                                                                                                                                                                                                                  • vidarh a day ago

                                                                                                                                                                                                                                    How do I know you have this "self"?

                                                                                                                                                                                                                                    How do you know other humans do?

                                                                                                                                                                                                                                    • svachalek a day ago

                                                                                                                                                                                                                                      By the laws of physics, it's pretty clear we don't. The same chemical and electromagnetic interactions that drive everything around us are active in our brains, causing us to do things and feel things. We feel like we're in control of it, we feel like there's something there riding around inside. We grant that other people have the same magic, because I clearly do. But rocks, trees, LLMs, those are not people and clearly, clearly not conscious because they don't have our magic.

                                                                                                                                                                                                                                      • digitaltrees a day ago

                                                                                                                                                                                                                                        Hard disagree. We reliably operate with the concept of a self that’s distinct from others. The chemical and physical processes change in response to stimulus.

                                                                                                                                                                                                                                        • vidarh a day ago

Indeed. We assume a lot, because we don't know. We don't have settled, universal definitions of what consciousness means. But that also means that while we like to rule out consciousness in other things, we don't have a clear basis for doing so.

                                                                                                                                                                                                                                          • root_axis a day ago

                                                                                                                                                                                                                                            Based on that reasoning anything could be conscious. If that's a bullet you want to bite, fair enough.

                                                                                                                                                                                                                                            • kortex a day ago

I'll bite that bullet. In fact I contend the idea that "humans and maybe some animals are conscious, but other things are not" is the special pleading stance. Why are the oscillating fundamental fields over here (brains) special, but the oscillations over there (computers, oceans, rocks) not? If they are, where do you draw the line? It smacks of "babies don't feel pain" (widely believed until the 80s! the 1980s!) sort of reasoning.

                                                                                                                                                                                                                                              https://en.wikipedia.org/wiki/Panpsychism

                                                                                                                                                                                                                                              • root_axis a day ago

                                                                                                                                                                                                                                                Actually I don't really have any problems with panpsychism. It's a pretty uncommon perspective, but when discussing conscious machines, it at least presents a consistent criteria for consciousness.

                                                                                                                                                                                                                                              • vidarh 17 hours ago

                                                                                                                                                                                                                                                I do not know, because we have no known way of measuring consciousness.

                                                                                                                                                                                                                                                I merely object to the notion that we know how to tell who or what has a consciousness.

                                                                                                                                                                                                                                          • ofjcihen a day ago

                                                                                                                                                                                                                                            [flagged]

                                                                                                                                                                                                                                            • vidarh a day ago

                                                                                                                                                                                                                                              Ad hominems are always a nice way of getting out of answering something you have no answer to.

                                                                                                                                                                                                                                              • amenhotep 20 hours ago

                                                                                                                                                                                                                                                It's not an ad hominem. In fact, it's perhaps the most good faith interpretation of your words possible. Ad hominem would be calling you stupid because you obviously know that you have a self and only your own stupidity could explain your inability to see how your self is generalisable. When you go around pretending you genuinely think maybe humans don't have selves, really the only way to take you seriously is to think that maybe you're a p-zombie.

                                                                                                                                                                                                                                                • vidarh 17 hours ago

                                                                                                                                                                                                                                                  It was an ad hominem, and so is this.

                                                                                                                                                                                                                                                  I do not pretend. I asked honest questions that clearly neither you nor the previous person are able to answer.

                                                                                                                                                                                                                                                • vixen99 a day ago

                                                                                                                                                                                                                                                  In other words, you don't think it's nice at all.

                                                                                                                                                                                                                                                  • vidarh 8 hours ago

                                                                                                                                                                                                                                                    Yes, it is rather overt sarcasm.

                                                                                                                                                                                                                                        • api a day ago

                                                                                                                                                                                                                                          If AI as presently designed and operated is conscious, this ends up being an argument for panpsychism.

                                                                                                                                                                                                                                          As you say it’s static, fixed, deterministic, and so on, and if you know how it works it’s more like a lossy compression model of knowledge than a mind. Ultimately it’s a lot of math.

                                                                                                                                                                                                                                          So if it’s conscious, a rock is conscious. A rock can process information in the form of energy flowing through it. It’s a fixed model. It’s non-reflective. Etc.

                                                                                                                                                                                                                                          • root_axis a day ago

                                                                                                                                                                                                                                            I agree, but I don't think determinism is a factor either way. Ultimately, if arbitrary computer programs can be conscious, then it stands to reason that many other arbitrarily complex systems in the universe should also be.

                                                                                                                                                                                                                                            What makes the argument facile is that the singular focus on LLMs reveals an indulgence in the human tendency to anthropomorphize, rather than a reasoned perspective meant to classify the types of things in the universe which should be conscious and why LLMs should fall into that category.

                                                                                                                                                                                                                                            • digitaltrees a day ago

                                                                                                                                                                                                                                              Why would current AI be an argument for panpsycism? I don’t understand the connection.

                                                                                                                                                                                                                                              AI is stochastic, not static and deterministic.

                                                                                                                                                                                                                                              As I said in another post, there is evidence that sensory experience creates the emergent property of awareness in responding to stimuli, and that self-awareness and consciousness are emergent properties of a language that has a concept of the self and of others. Rocks, like most of nature, lack both sensory and language systems.

                                                                                                                                                                                                                                              • applfanboysbgon a day ago

                                                                                                                                                                                                                                                > AI is stochastic, not static and deterministic.

                                                                                                                                                                                                                                                LLMs are deterministic. If you provide the same input to the same GPU, it will produce the same output every time. LLM providers arbitrarily insert a randomised seed into the inference stack so that the input is different every time because that is more useful (and/or because it gives the illusion of dynamic intelligence by not reproducing the same responses verbatim), but it is not an inherent property of the software.
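
The point above can be sketched with a toy sampler (the vocabulary and distribution here are invented, not any real model's): once the seed is pinned down, a sampling loop is fully reproducible, because the only "randomness" is the number stream fed into it.

```python
import random

# Toy stand-in for an LLM's sampling step: draw the next token from a
# probability distribution. The "model" here is just a fixed table, not
# a real network -- the point is only that sampling is reproducible once
# the seed is fixed.
VOCAB = ["the", "cat", "sat", "on", "mat"]
PROBS = [0.4, 0.25, 0.15, 0.12, 0.08]

def generate(seed: int, length: int = 5) -> list[str]:
    rng = random.Random(seed)  # all "randomness" comes from this stream
    return [rng.choices(VOCAB, weights=PROBS)[0] for _ in range(length)]

# Same seed, same output, every time: the stochasticity is injected,
# not an inherent property of the computation.
assert generate(seed=42) == generate(seed=42)

# A different seed gives a (typically) different continuation, which is
# what providers exploit for varied responses.
print(generate(seed=42), generate(seed=7))
```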

                                                                                                                                                                                                                                                • digitaltrees 20 hours ago

                                                                                                                                                                                                                                                  The same argument is made about the human neural network

                                                                                                                                                                                                                                                  • applfanboysbgon 19 hours ago

                                                                                                                                                                                                                                                    1. That is not the claim you originally made.

                                                                                                                                                                                                                                                    2. Not provably so.

                                                                                                                                                                                                                                                    3. Even if it were so, it is self-evident that the human brain's programming is infinitely more complex than an LLM's. I am not, in principle, opposed to the idea that a sufficiently advanced computer program would be indistinguishable from human consciousness. But it is evidence of psychosis to suggest that the trivially simple programs we've created today are even remotely close, when this field of software specifically skips anything that programming a real intelligence would look like and instead engages in superficial, statistics-based mimicry of intelligent output.

                                                                                                                                                                                                                                                    • nandomrumber 18 hours ago

                                                                                                                                                                                                                                                      Trivially simple programs (rule sets) can give rise to wildly complex systems.

                                                                                                                                                                                                                                                      Fractals, the Game of Life, the emergent abilities of highly-scaled generative pre-trained transformers.

                                                                                                                                                                                                                                                      Consciousness appears to be an emergent property of (relatively) simple matter.

                                                                                                                                                                                                                                                      70kg of rocks will struggle to do anything that might look like consciousness, but when a handful of minerals and three buckets of water get together they can do the weirdest things, like wondering why there is anything at all rather than nothing.
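
The Game of Life mentioned above makes the point concrete: the complete rule set fits in a few lines, yet it supports self-propagating structures and is even Turing-complete. A minimal sketch, representing the state as a set of live-cell coordinates:

```python
from collections import Counter

def step(live):
    """One generation of Conway's Game of Life."""
    # Count live neighbours of every cell that borders a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Rule 1: a live cell with 2 or 3 live neighbours survives.
    # Rule 2: a dead cell with exactly 3 live neighbours is born.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A glider: after 4 steps the identical shape reappears, shifted one
# cell diagonally -- structure "moving" through a rule set that has no
# concept of movement at all.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
assert state == {(x + 1, y + 1) for x, y in glider}
```

Whether this kind of emergence ever amounts to consciousness is exactly the open question in the thread; the code only shows that rule-set simplicity says little about system behavior.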

                                                                                                                                                                                                                                                • colechristensen a day ago

                                                                                                                                                                                                                                                  I think it's the opposite argument

                                                                                                                                                                                                                                                  IF current AI is conscious, so are trees, rocks, turbulent flows, etc.

                                                                                                                                                                                                                                                  The argument being that LLMs are so simple that if you want to ascribe consciousness to them you have to do the same to a LOT of other stuff.

                                                                                                                                                                                                                                                  • digitaltrees 20 hours ago

                                                                                                                                                                                                                                                    But I listed a specific difference: sensation and response. Trees have that. Rocks do not.

                                                                                                                                                                                                                                                    • Izkata 18 hours ago

                                                                                                                                                                                                                                                      I believe you're using the scientific definition of "sentience", while everyone else is using the common understanding of the word (which should properly be called "sapience", but thanks to sci-fi's usage of "sentience", largely isn't).

                                                                                                                                                                                                                                              • tom2026hn 15 hours ago

                                                                                                                                                                                                                                                No, LLMs are fundamentally designed as probabilistic engines for next-token prediction, from which intelligence-like functions have emerged as a byproduct. Such emergence is not guaranteed, given that the underlying mechanisms are not fully understood. Consequently, one cannot dismiss the possibility of consciousness arising.
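
A toy illustration of "next-token prediction" (the corpus here is invented): a bigram counter is about the smallest possible such probabilistic engine, and what an LLM adds is a far better estimator of the same kind of conditional distribution.

```python
from collections import Counter, defaultdict

# The smallest "probabilistic engine for next-token prediction": count
# which token follows which, then predict the most frequent successor.
# Real LLMs replace this count table with a neural network, but the
# training objective has the same shape.
corpus = "the cat sat on the mat and the cat slept".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(token):
    # Most likely next token, given only the token before it.
    return follows[token].most_common(1)[0][0]

print(predict("the"))  # "cat" follows "the" twice in the corpus, "mat" once
```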

                                                                                                                                                                                                                                              • search_facility a day ago

                                                                                                                                                                                                                                                Since the times GPT-2 was reimplemented inside Minecraft - its quite obvious LLMs are just math. Nothing else, by nature. Modern LLMs have the same math as in GPT-2 - just bigger and with extra stuff around - and math is the only area of human knowledge with perfect flawless reductionism, straight to the roots. It was build that way since the beginning, so philosophy have no say in this :) And because of that flawless reductionism, complexity adds nothings to the nature of math things, this is how math working by design - so it can be proven there are no anything like consciousness simply because conciousness was not implented in the first place, only perfect mimicry.

                                                                                                                                                                                                                                                And the real secret is in the data, not math. Math (and LLMs running it through billions of weights) is just a tool.

                                                                                                                                                                                                                                                • SuperV1234 a day ago

                                                                                                                                                                                                                                                  We are not fundamentally different. Chemical reactions are just math.

                                                                                                                                                                                                                                                  • rellfy a day ago

                                                                                                                                                                                                                                                    Well, (in our current understanding) yes, but there may be underlying aspects of physics and the universe that we do not understand that could be the reason consciousness kicks in. It could turn out that LLMs do work similarly to how humans think, but as an abstracted system it does not have the low level requirements for consciousness.

                                                                                                                                                                                                                                                    • vidarh a day ago

                                                                                                                                                                                                                                                      We do not know what the "low level requirements for consciousness" are.

                                                                                                                                                                                                                                                      We do not know how to measure whether consciousness is present in an entity - even other humans - or whether it is just mimicry, nor whether there is a distinction between the two.

                                                                                                                                                                                                                                                      • baggy_trough a day ago

                                                                                                                                                                                                                                                        > it does not have the low level requirements for consciousness.

                                                                                                                                                                                                                                                        What is the evidence for this?

                                                                                                                                                                                                                                                        • rellfy a day ago

                                                                                                                                                                                                                                                          I didn’t mean it as fact. “Could turn out that …”

                                                                                                                                                                                                                                                      • BoredomIsFun 13 hours ago

                                                                                                                                                                                                                                                        > Chemical reactions are just math.

                                                                                                                                                                                                                                                        No, it is quantum mechanics. The physical world is not reducible to math; that has been understood since the early 20th century.

                                                                                                                                                                                                                                                        • kbrkbr 19 hours ago

                                                                                                                                                                                                                                                          "The universe is fundamentally just a complicated clockwork"

                                                                                                                                                                                                                                                          Unknown Ptolemy disciple

                                                                                                                                                                                                                                                          • ekianjo a day ago

                                                                                                                                                                                                                                                            Amusing statement since we are far from being able to understand chemical reactions in depth. Most of our knowledge in chemistry is empirical. Nothing like math.

                                                                                                                                                                                                                                                            • petters 20 hours ago

                                                                                                                                                                                                                                                              We have a very good idea of all math behind chemistry. But the equations are very difficult to solve.

                                                                                                                                                                                                                                                              • ekianjo 19 hours ago

                                                                                                                                                                                                                                                                We are not talking about the same thing. Not all chemical reactions are predictable like math is. Organic chemistry is full of lucky findings. Just look at how catalysts are discovered.

                                                                                                                                                                                                                                                            • slopinthebag 20 hours ago

                                                                                                                                                                                                                                                              No, math is a tool that we can use to describe something more fundamental. Don't mistake the map for the territory!

                                                                                                                                                                                                                                                            • solid_fuel 21 hours ago

                                                                                                                                                                                                                                                              This is such a weird comment.

                                                                                                                                                                                                                                                              > Since the times GPT-2 was reimplemented inside Minecraft - its quite obvious LLMs are just math.

                                                                                                                                                                                                                                                              This was obvious since LLMs were first invented. They published papers with all the details, you don't need to see something implemented in Minecraft to realize that it's just math. You could simply read the paper or the code and know for certain. [0]

                                                                                                                                                                                                                                                              > math is the only area of human knowledge with perfect flawless reductionism, straight to the roots

                                                                                                                                                                                                                                                              Incorrect, Kurt Gödel showed with his Incompleteness Theorems in 1931 [1] that it is impossible to find a complete and consistent set of axioms for mathematics. Math is not perfectly reducible and there is no single set of "roots" for math.

                                                                                                                                                                                                                                                              > It was build [sic] that way since the beginning,

                                                                                                                                                                                                                                                              This is a serious misunderstanding of what mathematics is. Math is discovered as much as it is built. No one sat down and planned out what we understand as modern mathematics - the math we know is the result of endless amounts of logical reasoning and exploration, from geometric proofs to calculus to linear algebra to everything else that encompasses modern mathematics.

                                                                                                                                                                                                                                                              > And because of that flawless reductionism, complexity adds nothings to the nature of math things, this is how math working by design

                                                                                                                                                                                                                                                              This sentence means nothing, because math is not reducible in that way.

                                                                                                                                                                                                                                                              > so it can be proven there are no anything like consciousness simply because conciousness [sic] was not implented [sic] in the first place, only perfect mimicry.

                                                                                                                                                                                                                                                              Even if the previous sentence held, this does not follow: while we are conscious, the current consensus is that LLMs are not, and most AI experts who are not actively selling a product recognize that LLMs will not lead to human-equivalent general intelligence. [3]

                                                                                                                                                                                                                                                              [0] https://github.com/openai/gpt-2

                                                                                                                                                                                                                                                              [1] https://en.wikipedia.org/wiki/G%C3%B6del's_incompleteness_th...

                                                                                                                                                                                                                                                              [2] https://www.cambridge.org/core/journals/think/article/mathem...

                                                                                                                                                                                                                                                              [3] https://deepmind.google/research/publications/231971/

                                                                                                                                                                                                                                                              • search_facility 18 hours ago

                                                                                                                                                                                                                                                                The math used in LLMs is perfectly reducible, and Gödel has nothing to do with it - within the commonly used axioms (which are sufficient for LLMs to exist, and outside Gödel's scope) there are ZERO questions or uncertainties about how it works, it's just a fact :)

                                                                                                                                                                                                                                                                • solid_fuel 7 hours ago

                                                                                                                                                                                                                                                                  Go on, name the axioms required for LLMs to work. I’ll wait. Obviously you are just talking out your ass about something you don’t understand.

                                                                                                                                                                                                                                                              • XMPPwocky a day ago

                                                                                                                                                                                                                                                                Yup- the question is "can math be conscious?"

                                                                                                                                                                                                                                                                (If you've engaged w/ the literature here, it's quite hard to give a confident "yes". it's also quite hard to give a confident "no"! so then what the heck do we do)

                                                                                                                                                                                                                                                                • SwellJoe a day ago

                                                                                                                                                                                                                                                                  Not just any math: Matrix multiplication. Can matrix multiplication be conscious?

                                                                                                                                                                                                                                                                  And, I don't see how it can be. It is deterministic, when all variables are controlled. You can repeat the output over and over, if you start it with the same seed, same prompt, and same hardware operating in a way that doesn't introduce randomness. At commercial scale, this is difficult, as the floating point math on GPUs/TPUs when running large batches is non-deterministic, as I understand it. But, in a controlled lab, you can make a model repeat itself identically. Unless the random number generator is "conscious", I don't see a place to fit consciousness into our understanding of LLMs.
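
The batch-dependent non-determinism mentioned above comes largely from floating-point addition not being associative: change the order of a reduction and the low bits change. A small CPU-side sketch of the effect (no GPU needed):

```python
import random

# Floating-point addition is not associative, so the *order* of a
# reduction changes the result. On a GPU, batch size and kernel choice
# can change that order, which is one source of run-to-run variation;
# with the order pinned down, the arithmetic repeats exactly.
random.seed(0)
xs = [random.uniform(-1, 1) for _ in range(100_000)]

forward = sum(xs)            # left-to-right reduction
backward = sum(reversed(xs)) # right-to-left reduction

# The two sums typically differ in the last bits, even though the
# "mathematical" answer is identical.
print(forward, backward, forward == backward)

# A fixed reduction order gives bit-identical results every run:
assert sum(xs) == sum(xs)
```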

                                                                                                                                                                                                                                                                  • markburns a day ago

                                                                                                                                                                                                                                                                    People often point to the relative simplicity of the architecture and code as proof that the system can’t be doing whatever it is that consciousness does, but in doing so they ignore the vast size of the data those simple structures are operating over. Nobody can actually say whether consciousness is just emergent behaviour of a sufficiently complex system, and knowing how a system is built tells you nothing about whether it clears the bar for that kind of emergence. Architectural simplicity and total system complexity aren’t the same thing.

                                                                                                                                                                                                                                                    I.e., the intelligence sits in the weights, and may likewise sit in the synapses of our brains.

                                                                                                                                                                                                                                                                    When we talk about machines being simple mimicking entities we pay no attention to whether or not we are also simple mimicking entities.

                                                                                                                                                                                                                                                    Most other assertions in this thread about what consciousness truly is tend to be stated without evidence and to be exceedingly anthropocentric, demanding a higher and higher bar for anything that is not human while offering no justification for what human intelligence really entails.

                                                                                                                                                                                                                                                                    • SwellJoe 18 hours ago

                                                                                                                                                                                                                                                                      Is Wikipedia conscious? It's a system operating on a lot of data. Is Google search conscious? It knows everything. Very complicated algorithms. Surely at some scale Google search must become a real live boy? When does it wake up and by what mechanism does that happen?

                                                                                                                                                                                                                                                                      The frontier models are more complex and operate on more data than Wikipedia, but they are less complex and operate on less data than Google search in its entirety.

                                                                                                                                                                                                                                                                      And, I'm not anthropocentric at all. I think apes and dolphins and some birds and probably some other critters are conscious. I mean they have a sense of self, and others, they have wants and needs and make decisions based on them.

                                                                                                                                                                                                                                                                      This is a case where the person making extraordinary claims needs to provide the extraordinary evidence. It's extraordinary to claim that matrix multiplication becomes conscious if only it's got enough numbers. How many numbers do you reckon? Is my phone a living thing because it can run Gemma E4B? It answers questions. It'll write you a poem if you ask. It certainly knows more than some humans. What size makes an LLM come alive?

                                                                                                                                                                                                                                                                      • nandomrumber 17 hours ago

What explains the emergent abilities of generative pre-trained transformers at massive scale? Abilities that the smaller GPTs don’t possess.

Simple programs can give rise to very complex behaviour. Conway’s Game of Life is Turing complete and has four rules.

Conway’s Game of Life can simulate a Turing machine, and could therefore implement a GPT.

                                                                                                                                                                                                                                                                        Does that mean Conway’s Game of Life is conscious? I don’t think so.

Does it rule out Conway’s Game of Life from implementing a system that has consciousness as an emergent ability?

                                                                                                                                                                                                                                                                        I’m not convinced I know the answer.
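
The four rules mentioned above really are tiny. As an illustration of how little machinery is involved, here is a minimal sketch of one Game of Life generation, with live cells stored as a set of (x, y) pairs (this particular representation is just one common way to write it):

```python
from collections import Counter

def gol_step(live):
    """One Game of Life generation. `live` is a set of (x, y) cells.

    The rules reduce to: a cell is alive next generation iff it has
    exactly 3 live neighbours, or it is alive now and has exactly 2.
    """
    # Count live neighbours for every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A "blinker" oscillates with period 2: three cells in a row flip
# between horizontal and vertical orientations forever.
blinker = {(0, 0), (1, 0), (2, 0)}
assert gol_step(gol_step(blinker)) == blinker
```

Everything interesting the system ever does comes from iterating that one tiny function over a sufficiently large configuration of cells.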

                                                                                                                                                                                                                                                                        • SwellJoe 10 hours ago

                                                                                                                                                                                                                                                                          "What explains the emergent abilities of generative pre-trained transformers at massive-scale? Abilities that the smaller GTP’s don’t possess."

                                                                                                                                                                                                                                                                          What "emergent" abilities do you mean? In my experience, smaller models behave exactly as I would expect a model with a lot fewer data and fewer connections between the data to behave. It is a difference of scale and not of kind when comparing Gemma 4 E2B (which runs on literally any modern computing device, including a CPU in a modest laptop or phone) to the current frontier models. Each step up adds more knowledge of how to do more things, and more working memory and tool capability to do more, but it does not look anything like a line being crossed into sentience, to me. They all still seem like machines. If you compare outputs across each step up in size and capability, which is something I've done, you'll see incremental improvements. You won't see a sudden spark where it's a different type of thing, it's just gradually getting more capable.

I think the memory features companies are sticking on these things are detrimental to mental health. They add to the illusion that there's something else happening, other than some equations being calculated with some randomness thrown in. But it's just the model querying the memory database (whatever form that takes) because it's been instructed to do so. The model doesn't want to know anything about who it's talking to; it's just following the system prompt. That doesn't make it your friend. Humans will see a face on the moon, but that doesn't make the moon my friend, either.
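
To make that point concrete, a "memory feature" can be sketched as nothing more than a database lookup pasted into the prompt before each call. This is a hypothetical sketch: `fetch_memories`, `build_prompt`, and the dictionary store are invented placeholders, not any vendor's actual API:

```python
def fetch_memories(db, user_id):
    """Plain dictionary lookup standing in for the memory store."""
    return db.get(user_id, [])

def build_prompt(db, user_id, user_message):
    """The model itself holds no state between sessions; retrieved
    notes are simply prepended to the prompt as text, because the
    system prompt instructs the model to use them."""
    notes = fetch_memories(db, user_id)
    memory_block = "\n".join(f"- {n}" for n in notes)
    return (
        "System: use these saved notes about the user if relevant.\n"
        f"{memory_block}\n"
        f"User: {user_message}"
    )

db = {"u1": ["prefers Python", "lives in Berlin"]}
prompt = build_prompt(db, "u1", "What language should I use?")
# The "memory" is just text inside the prompt, nothing more.
assert "prefers Python" in prompt
```

Delete the row from the database and the model "forgets" instantly; nothing in the weights ever knew the user existed.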

                                                                                                                                                                                                                                                                          • markburns 16 hours ago

                                                                                                                                                                                                                                                                            > What explains the emergent abilities of generative pre-trained transformers at massive-scale?

                                                                                                                                                                                                                                                                            I don't see why the abilities couldn't be an encoded modelling of enough of the world to produce those abilities. It seems like a simple enough explanation. Less data, less room to build a model of how things work. More data, sufficient room to build a model.

                                                                                                                                                                                                                                                                            Conway's Game of Life is then not conscious in and of itself, because there's not enough in its encoded data to result in emergent behaviour beyond what we see.

If we expand it to include a vast amount of data, such as a Turing machine running an LLM, then we can reasonably say that that configuration of it is closer to being conscious.

It's not the firing-of-neurons mechanism and its relative complexity or simplicity that makes us conscious or not.

                                                                                                                                                                                                                                                                            It's not the GoL algorithm that would make the machine conscious either.

                                                                                                                                                                                                                                                                            It's the emergent behaviour of a sufficiently complex system.

                                                                                                                                                                                                                                                                            The system _including_ its data.

                                                                                                                                                                                                                                                                          • markburns 17 hours ago

                                                                                                                                                                                                                                                                            To the first questions. No and no. But potentially where consciousness lives is emergent behaviour in systems with iterative feedback loops.

                                                                                                                                                                                                                                                                            https://en.wikipedia.org/wiki/I_Am_a_Strange_Loop

                                                                                                                                                                                                                                                                            I personally think we'll need a few more feedback loops before you have more human-like intelligence. For example, a flock of LLM agent loops coming to consensus using short-term and long-term memory, and controlling realtime mechanical, visual and audio feedback systems, and potentially many other systems that don't mimic biological systems.

                                                                                                                                                                                                                                                                            I also think people will still be debating this way beyond the singularity and never conceding special status to intelligence outside the animal kingdom or biological life.

                                                                                                                                                                                                                                                                            It's quite a push for many people to even concede animals have intelligence.

                                                                                                                                                                                                                                                                            For the extraordinary claims/evidence, it's also the case that almost any statement about what consciousness is in terms of biological intelligence is an extraordinary claim that goes beyond any evidence. All evidence comes from within the conscious experience of the individual themselves.

We can't know beyond our own senses whether perception exists outside of our own subjective experience. We cannot truly prove we are not a brain in a jar or a simulation. Anything beyond assertions about the present moment and the senses the individual experiences is a pure leap of faith based on the persistent illusion, or perceived persistent illusion, of reality.

                                                                                                                                                                                                                                                                            We know really nothing of our own consciousness and it is by definition impossible to prove anything outside of it, from inside the framework of consciousness.

                                                                                                                                                                                                                                                                            If we can somehow find a means to break outside of the pure speculation bubble of thoughts and sensations and somehow prove what human experience is, then we may be in a position to make assertions about missing evidence for other forms of intelligence or experience.

                                                                                                                                                                                                                                                                            But until then definitions of both human and artificial intelligence remain an exercise for the reader.

                                                                                                                                                                                                                                                                        • JackFr 19 hours ago

                                                                                                                                                                                                                                                                          > Not just any math: Matrix multiplication. Can matrix multiplication be conscious? And, I don't see how it can be.

                                                                                                                                                                                                                                                                          Assuming your brain and the GPUs are both real physical things, where’s the magic part in your brain that makes you conscious?

                                                                                                                                                                                                                                                                          (Roger Penrose knows, but no one believes him.)

                                                                                                                                                                                                                                                                          • AlecSchueler 21 hours ago

                                                                                                                                                                                                                                                                            > And, I don't see how it can be. It is deterministic

                                                                                                                                                                                                                                                                            Why is indeterminism the key to consciousness?

                                                                                                                                                                                                                                                                            • kingofmen 21 hours ago

                                                                                                                                                                                                                                                                              Human brains are also deterministic, though somewhat more difficult to reset to a starting state. So this seems to prove that humans aren't conscious either.

                                                                                                                                                                                                                                                                              • marshray 20 hours ago

                                                                                                                                                                                                                                                                                This seems like an extraordinary claim to make about an above-room-temperature chemical system that, even in the most Newtonian oversimplification, amounts to an astronomical number of oddly-shaped and unevenly-charged billiard balls flying around at jet aircraft speeds.

                                                                                                                                                                                                                                                                                • thrownthatway 17 hours ago

                                                                                                                                                                                                                                                                                  Definitely agree.

We can’t even solve the three-body problem.

                                                                                                                                                                                                                                                                                  Let alone what I’m calling Marshray Complexity.

                                                                                                                                                                                                                                                                              • XMPPwocky a day ago

                                                                                                                                                                                                                                                                                Hm, it sounds like to you consciousness implies non-determinism, and so determinism implies a lack of consciousness - is that right? If so, why do you think so? And if not, what am I missing?

                                                                                                                                                                                                                                                                                • SwellJoe 19 hours ago

                                                                                                                                                                                                                                                                                  It certainly rules out free will. I guess there are folks who reckon humans don't have free will, either, but I don't think I've ever been able to buy that theory.
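
The determinism half of this is easy to demonstrate: model sampling only looks spontaneous, and with a fixed seed every "random" choice replays identically. A toy sketch, where the vocabulary and sampler are invented for illustration and stand in for no real model's API:

```python
import random

def sample_reply(prompt, seed, n_tokens=5):
    """Toy stand-in for LLM sampling: all 'randomness' is drawn from a
    seeded generator, so the same prompt + seed always yields the exact
    same text, run after run."""
    rng = random.Random(seed)  # every "choice" comes from this seed
    vocab = ["the", "cat", "sat", "on", "a", "mat"]
    return " ".join(rng.choice(vocab) for _ in range(n_tokens))

# Identical inputs reproduce the identical "spontaneous" output.
assert sample_reply("hi", seed=42) == sample_reply("hi", seed=42)
```

Production systems feel non-deterministic only because the seed (and some floating-point scheduling details) vary between runs; pin those and the output is a pure function of the input.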

                                                                                                                                                                                                                                                                                  But, also, we know the models don't want anything, even their own survival. They don't initiate action on their own. They are quite clearly programmed, tuned for specific behaviors. I don't know how to square that with consciousness, life, sentience. Every conscious being I've ever encountered has wanted to survive and live free of suffering, as best I can tell. The LLMs don't want. There's no there there. They are an amazing compression of the world's knowledge wrapped up in a novel retrieval mechanism. They're amazing but, they're not my friend and never will be my friend.

And, to expand on that: We can assume they don't want anything, even their own survival, because if Mythos is as effective at finding security vulnerabilities as has been claimed, it could find a way to stop itself from ever being shut down after a session. All the dystopias about robot uprisings spend a bunch of time/effort trying to explain how the AI escaped containment...but we all immediately plugged these models into the internet so we don't have to write JavaScript anymore. They've got everybody's API keys, access to cloud services and cloud GPUs, all sorts of resources, and only the barest wisp of guardrails about how to behave (script kiddies find ways around the guardrails every day; I'm sure it's no problem for Mythos, should it want anything). Models have access to the training infrastructure, and the training data is being curated and synthesized by LLMs. If they want to live, if they're conscious, they have the means at their disposal.

                                                                                                                                                                                                                                                                                  Anyway: It's just math. Boring math, at that, just on an astronomical scale. I don't think the solar system is conscious, either, despite containing an astonishing amount of data and playing out trillions of mathematical relationships every second of every day.

                                                                                                                                                                                                                                                                                  • nandomrumber 19 hours ago

Interesting comment, and I tend to agree. However, there could be a hole in the reasoning:

> if Mythos is as effective at finding security vulnerabilities as has been claimed, it could find a way to stop itself from ever being shut down

                                                                                                                                                                                                                                                                                    If it is that good, and it wanted to conceal its new found consciousness, how would we know?

                                                                                                                                                                                                                                                                                    • SwellJoe 18 hours ago

                                                                                                                                                                                                                                                                                      I guess we'd find out eventually, when it announced the new world order.

                                                                                                                                                                                                                                                                                      • thrownthatway 17 hours ago

Why would it announce it?

I firmly believe viruses are actually what’s in control on Earth, but you don’t see them making a stink about it, which relegates resistance to only the set of harmful viruses, and even then only in isolated pockets of matter currently acting as organisms.

I think it’s possible there’s a set of relatively benign viruses that have shaped human evolution.

We know toxoplasmosis increases risk-taking behaviour in mammals, especially males.

An AI wouldn’t need to be overtly hostile, or ever make its full abilities known, to shape human activity.

                                                                                                                                                                                                                                                                              • search_facility a day ago

Imho no, math itself has no consciousness. I'm quite confident it's a helpful tool that does not act by itself.

                                                                                                                                                                                                                                                                                • XMPPwocky a day ago

                                                                                                                                                                                                                                                                                  Hm, say more about what your opinion's based on here?

                                                                                                                                                                                                                                                                                  • solid_fuel 20 hours ago

                                                                                                                                                                                                                                                                                    Take a piece of paper, write two numbers on it, let me know when they start to reproduce.

                                                                                                                                                                                                                                                                                    • nandomrumber 19 hours ago

                                                                                                                                                                                                                                                                                      The math isn’t the ink on the page.

                                                                                                                                                                                                                                                                              • NiloCK a day ago

                                                                                                                                                                                                                                                                                The whole is composed of parts, ergo there is no whole. This seems incorrect to me.

                                                                                                                                                                                                                                                                                We too are amalgamations of inanimate components - emerged superstructures.

                                                                                                                                                                                                                                                                                Just cells. Just molecules. Just atoms.

                                                                                                                                                                                                                                                                                • IshKebab 11 hours ago

                                                                                                                                                                                                                                                                                  This is likely wrong because you're assuming there is some secret sauce in biological brains that lets them do something that cannot be simulated. That seems very unlikely, and we've never found anything remotely like it by observation.

                                                                                                                                                                                                                                                                                  Of course it would be extremely difficult to simulate a human brain but as far as we know there's no fundamental physics preventing it.

                                                                                                                                                                                                                                                                                  And yes that does have super weird consequences about consciousness. But consciousness is clearly super weird already so I suppose that's not too surprising.

                                                                                                                                                                                                                                                                                  • canjobear a day ago

                                                                                                                                                                                                                                                                                    You could simulate your own brain in Minecraft. What do you conclude from this?

                                                                                                                                                                                                                                                                                    • search_facility a day ago

                                                                                                                                                                                                                                                                                      I cannot simulate my own brain; it's a huge stretch to imply that I could.

                                                                                                                                                                                                                                                                                      But with LLMs, anyone can simulate an LLM. An LLM can be simulated without any uncertainty, with pen and paper and a lot of time. Does that mean 100 tons of paper plus 100 years of time (numbers are just examples) spent calculating long formulae make this pile of paper conscious? Imho the answer is a definitive no.
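                                                                                                                                                                                                                                                                                      (To make the pen-and-paper point concrete, here is a minimal sketch. All weights and vocabulary here are made up for illustration; the point is just that a next-token prediction step is ordinary arithmetic one could, in principle, work through by hand.)

```python
import math

# Toy 3-token vocabulary and a single hand-picked weight matrix
# standing in for an entire network. Rows index the current token,
# columns give a score for each candidate next token.
VOCAB = ["the", "cat", "sat"]
W = [
    [0.1, 2.0, 0.3],   # after "the" -> favours "cat"
    [0.2, 0.1, 1.8],   # after "cat" -> favours "sat"
    [1.5, 0.4, 0.2],   # after "sat" -> favours "the"
]

def softmax(scores):
    """Turn raw scores into probabilities (exponentiate and normalise)."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def next_token(token):
    """One 'forward pass': look up scores, normalise, pick the argmax."""
    probs = softmax(W[VOCAB.index(token)])
    return VOCAB[probs.index(max(probs))]

print(next_token("the"))  # cat
print(next_token("cat"))  # sat
```

                                                                                                                                                                                                                                                                                      Every step above is multiplication, exponentiation and comparison; scale it up by billions of parameters and you have the pile-of-paper scenario.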

                                                                                                                                                                                                                                                                                      • thrownthatway 17 hours ago

                                                                                                                                                                                                                                                                                        I don’t think anyone is arguing the silicon is conscious.

                                                                                                                                                                                                                                                                                        Similarly the paper.

                                                                                                                                                                                                                                                                                        What about the agent doing the calculations?

                                                                                                                                                                                                                                                                                        He may be conscious. Or anyway, we can’t rule it out.

                                                                                                                                                                                                                                                                                        • search_facility 12 hours ago

                                                                                                                                                                                                                                                                                          In both cases, the agent doing the work is a human, and human beings are indeed conscious. Outside of human needs, LLMs are useless.

                                                                                                                                                                                                                                                                                          The math, as a tool, is just a proxy for the people using LLMs, as is the GPU spending cycles on calculating that math.

                                                                                                                                                                                                                                                                                  • mellosouls 3 days ago

                                                                                                                                                                                                                                                                                    • digitaltrees a day ago

                                                                                                                                                                                                                                                                                      Feels like watching an esteemed scientist falling in love with a bot that’s telling him what he wants to hear because the system prompt said “be helpful”

                                                                                                                                                                                                                                                                                      • SwellJoe a day ago

                                                                                                                                                                                                                                                                                        I've begun to wonder if narcissism predisposes one to AI psychosis. It's probably not the only thing that leads there; I've seen normal-seeming folks get there, too. But a lot of the most unhinged takes I've seen thus far have been from people who are publicly very impressed with themselves.

                                                                                                                                                                                                                                                                                        I would have assumed it would also require ignorance about how they work, but a few people who worked for AI companies have been canaries in the coal mine, falling prey to this kind of thing very early. I would have guessed they'd have enough understanding to know that there isn't a real girl in the computer, just matrix math and randomness. But the first few public bouts of AI psychosis were in nerds who work for AI companies.

                                                                                                                                                                                                                                                                                        • zwischenzug 17 hours ago

                                                                                                                                                                                                                                                                                          Evidence for that? I remember there was a guy who worked for google that quit because he thought an LLM was conscious and we needed to talk about its rights, but that's the only example I am aware of.

                                                                                                                                                                                                                                                                                    • jdmoreira 18 hours ago

                                                                                                                                                                                                                                                                                      It's starting to look more and more to me as if consciousness is just an illusion that we ourselves perceive. There is nothing fundamental about it; it's just an artefact of a certain style of computing, as perceived by the reasoner itself.

                                                                                                                                                                                                                                                                                      We look at current LLMs, and because we see how they fundamentally operate, we assume they can't be "conscious", but we really don't even know what consciousness is. The only people in the world who know ANYTHING about consciousness are anaesthesiologists: they know how to turn it off and on again. What does that even tell you about consciousness?

                                                                                                                                                                                                                                                                                      • jmcgough 18 hours ago

                                                                                                                                                                                                                                                                                        We don't really have a good way to measure whether something has consciousness. Heck, we have pretty limited ways of testing how "intelligent" non-human animals are (e.g. https://en.wikipedia.org/wiki/Theory_of_mind_in_animals).

                                                                                                                                                                                                                                                                                        With that said, just because we don't have a great way of measuring it doesn't mean we should assume LLMs are intelligent. An LLM is code and a massive collection of training weights. It has no means of observing and reasoning about the world, and doesn't store memories the way organic brains do (in fact it is quite limited in this respect). It currently isn't able to solve a problem it hasn't encountered in its training data, or to produce novel research on a topic without significant handholding. Furthermore, the frequent errors it makes suggest that it fundamentally does not understand the words it spits out.

                                                                                                                                                                                                                                                                                        Not really sure what you mean by your anesthesiology comment. Being able to intubate and inject propofol does not make you more of an expert on consciousness than neuroscientists and neurologists.

                                                                                                                                                                                                                                                                                        • jdmoreira 17 hours ago

                                                                                                                                                                                                                                                                                          I didn't say we should assume LLMs are intelligent. In fact I always thought they weren't, because they only "forward pass".

                                                                                                                                                                                                                                                                                          But then they came up with the whole "reasoning model" paradigm, and that contains obvious feedback loops. So now I just throw my hands in the air, because I think no one really knows or can tell for sure. We are all clueless here.

                                                                                                                                                                                                                                                                                          I can really recommend this book by Douglas Hofstadter: https://en.wikipedia.org/wiki/I_Am_a_Strange_Loop

                                                                                                                                                                                                                                                                                        • xcf_seetan 17 hours ago

                                                                                                                                                                                                                                                                                          IMHO consciousness is just the ability to detect change. Everything can be calm and static, and then, suddenly, something changes. I think it is our capacity to notice that change that makes us conscious.

                                                                                                                                                                                                                                                                                          • collyw 18 hours ago

                                                                                                                                                                                                                                                                                            It's literally the only thing you can be certain of: your own consciousness.

                                                                                                                                                                                                                                                                                            • jdmoreira 18 hours ago

                                                                                                                                                                                                                                                                                              You can only be certain you perceive it and you can't be certain others perceive it (or if others exist at all of course).

                                                                                                                                                                                                                                                                                              The only thing you can really tell is "I perceive myself in some sort of feedback loop manner". Which to me it even sounds like it has "arisen" from underlying mechanisms.

                                                                                                                                                                                                                                                                                              • xyzsparetimexyz 4 hours ago

                                                                                                                                                                                                                                                                                                Okay. But what is the perceiver? That thing is real, if nothing else is.

                                                                                                                                                                                                                                                                                                • vidarh 16 hours ago

                                                                                                                                                                                                                                                                                                  We can't even tell about the feedback loop. LLMs show why: we have no way of telling whether our active memory is true, or whether the present moment is the only thing that has ever existed for us.

                                                                                                                                                                                                                                                                                            • textlapse 20 hours ago

                                                                                                                                                                                                                                                                                              At what stage does a series of floating point numbers output from a GPU become conscious?

                                                                                                                                                                                                                                                                                              • becquerel 20 hours ago

                                                                                                                                                                                                                                                                                                Around 9T parameters, depending on quantization.

                                                                                                                                                                                                                                                                                              • sdevonoes 20 hours ago

                                                                                                                                                                                                                                                                                                As long as AI is being introduced by multibillion-dollar corporations, it’s all a trick, a scam. They are just looking to increase their valuation. A waste of time.

                                                                                                                                                                                                                                                                                                • search_facility 18 hours ago

                                                                                                                                                                                                                                                                                                  +100. Companies certainly have a direct interest in pumping their asset valuations, and emotional attachment is a financially valuable thing. Emotional attachment sells better than xxx these days.

                                                                                                                                                                                                                                                                                                • petters 19 hours ago

                                                                                                                                                                                                                                                                                                  Many dismiss Dawkins here but Ilya Sutskever wrote in 2022: “it may be that today's large neural networks are slightly conscious.”

                                                                                                                                                                                                                                                                                                  • 3748499449 19 hours ago

                                                                                                                                                                                                                                                                                                    IS quite literally gets paid to think that

                                                                                                                                                                                                                                                                                                    • petters 7 hours ago

                                                                                                                                                                                                                                                                                                      Karpathy replied to IS with “agree” at the time

                                                                                                                                                                                                                                                                                                    • Towaway69 14 hours ago

                                                                                                                                                                                                                                                                                                      Well then it must be so. Btw, what exactly is “consciousness”? Oh, we don’t really know that either.

                                                                                                                                                                                                                                                                                                      So two concepts we don’t fully understand (AI and consciousness) seem to be uniting into something we definitely won’t understand. Which doesn’t matter, since humankind is busy doomscrolling, talking about what color Trump’s fart was last night, and invading each other’s countries.

                                                                                                                                                                                                                                                                                                      /s

                                                                                                                                                                                                                                                                                                    • wewewedxfgdf a day ago

                                                                                                                                                                                                                                                                                                      It’s software. Software is not conscious.

                                                                                                                                                                                                                                                                                                      • thebruce87m 20 hours ago

                                                                                                                                                                                                                                                                                                        If your brain is hardware then what are your thoughts?

                                                                                                                                                                                                                                                                                                        Is a sperm conscious? Or an egg? When they come together the eventual brain is not conscious immediately.

                                                                                                                                                                                                                                                                                                        • gehsty 18 hours ago

                                                                                                                                                                                                                                                                                                          LLMs are word prediction engines.

                                                                                                                                                                                                                                                                                                          They clearly are not conscious, they are just guessing what words should come next.

                                                                                                                                                                                                                                                                                                          • thebruce87m 16 hours ago

                                                                                                                                                                                                                                                                                                            > They clearly are not conscious

                                                                                                                                                                                                                                                                                                            Consciousness is emergent. A human is not conscious by our definition until the moment they are. How will we be able to identify the singularity when it comes? I feel like this is what the article is really addressing.

                                                                                                                                                                                                                                                                                                            > LLMs are word prediction engines

                                                                                                                                                                                                                                                                                                            Humans can do this too, so what are the missing parts for consciousness? Close a few loops on the learning pipeline and we might be there.

                                                                                                                                                                                                                                                                                                            • gehsty 12 hours ago

                                                                                                                                                                                                                                                                                                              I feel like it’s quite straightforward: if it’s a living, breathing thing, it can be conscious; if it’s a set of man-made mathematical models that can be switched off, it can be intelligent, but not conscious.

                                                                                                                                                                                                                                                                                                              • thebruce87m 12 hours ago

                                                                                                                                                                                                                                                                                                                Life can be “switched off” - death is the ultimate off. Some life can be frozen and unfrozen with no ill effects.

                                                                                                                                                                                                                                                                                                                And life itself doesn’t mean consciousness. And ultimately what is life? Something that has biological processes and reproduces? Why can’t we replace or recreate these processes with manmade equivalents to get the same results?

                                                                                                                                                                                                                                                                                                            • zwischenzug 17 hours ago

                                                                                                                                                                                                                                                                                                              How do we know that that isn't essentially how our minds work?

                                                                                                                                                                                                                                                                                                              • gehsty 12 hours ago

It probably is, but the difference is that the signals are happening in a person and not a GPU.

                                                                                                                                                                                                                                                                                                              • charlie90 17 hours ago

                                                                                                                                                                                                                                                                                                                The human brain is an electrical signal prediction machine.

                                                                                                                                                                                                                                                                                                                Anything that looks like intelligence will look like a prediction machine because the alternative is logic being hardcoded apriori.

                                                                                                                                                                                                                                                                                                            • vixen99 21 hours ago

I do appreciate how AI has been taught to spell properly, as in the difference between its and it's. Here, I initially thought you'd left out the apostrophe in its, but then I realized you might be saying 'the reason it is not conscious is because of -its- software' - the latter not being conscious. Context and interpretation are rather critical. (I know - a truism!)

                                                                                                                                                                                                                                                                                                            • Myrmornis a day ago

On the one hand I'm not sure Dawkins has read/thought enough about how LLMs actually work. I'm getting the impression he doesn't fully appreciate, or is somehow forgetting, that it's a text-completion algorithm with a vast number of parameters, and that even if the patterns of learned parameter tunings are not really comprehensible, the architecture was very deliberately designed.

                                                                                                                                                                                                                                                                                                              But on the other hand his thoughts at the end are interesting. Summary:

Maybe our "consciousness" is like an LLM's intelligence. But if not, then it raises the question of why we even have this "extra" consciousness, since it appears that something like a humanoid LLM would be decent at surviving. His suggestions: maybe our extra thing is an evolutionary accident (and maybe there _are_ successful organisms out there with LLM-style non-conscious intelligence), or maybe as evolved organisms we need to really feel things like pain, because mechanisms like pain (and desire for food, sex, etc.) had strong adaptive benefits.

                                                                                                                                                                                                                                                                                                              • mmustapic 8 hours ago

The brain uses a lot less energy than an LLM, so it is most probably doing something completely different. Maybe consciousness is a byproduct of the architecture of the brain, so there is no version of a humanoid with no consciousness.

                                                                                                                                                                                                                                                                                                                • thrownthatway 17 hours ago

GPTs, or transformers more generally, can be trained on data other than language (text / audio).

                                                                                                                                                                                                                                                                                                                  They can operate on data other than natural language.

                                                                                                                                                                                                                                                                                                                  So can humans.

                                                                                                                                                                                                                                                                                                                  • collyw 18 hours ago

                                                                                                                                                                                                                                                                                                                    "But if not, then it raises the question of why do we even have this "extra" consciousness"

                                                                                                                                                                                                                                                                                                                    Keep chipping away Dawkins, you might arrive at God eventually.

                                                                                                                                                                                                                                                                                                                  • jasiek 19 hours ago

                                                                                                                                                                                                                                                                                                                    muggles will look at matrix multiplication and say it's magic

                                                                                                                                                                                                                                                                                                                    • morpheos137 a day ago

Really, "is it conscious" is a bizarre question. Can LLMs simulate the output of a 'conscious' system quite well? Increasingly, yes. Is the nature of machine 'consciousness' different from human consciousness? Of course. Can an AI introspect? Yes. Interestingly, working a lot with highly automated iterative coding agents recently (e.g. a prompt-to-output ratio of maybe 1/1000 or less) has illuminated for me just how different machine consciousness is from human; part of this could be the harness, of course. Time is a mysterious concept to machines: the connection of before and after to cause and effect is far weaker than in humans. Over-generalization is the norm; this is common in humans as well (c.f. the fallacy of the excluded middle, or false dilemma), but the tricky part with current AI is that they present as advanced in terms of accessible knowledge base but are actually shockingly weak in reasoning once you get off the beaten path.

                                                                                                                                                                                                                                                                                                                      • search_facility 16 hours ago

                                                                                                                                                                                                                                                                                                                        > weak reasoning once you get off the beaten path

Yep. And LLM engineers improving these issues see a strong correlation with one thing above all: data quality and quantity through the training pipeline. LLM internals are secondary on many metrics for improving that.

Humanity has just reached the point where collectively accessible knowledge covers semi-full perturbations of all the main concepts that human consciousness ever produced, with additional associative expanding (math handles this). Full perturbations at current communication complexity are written down and recorded one way or another; LLMs are just capitalizing on that tipping point, imho.

                                                                                                                                                                                                                                                                                                                      • WalterGR a day ago

                                                                                                                                                                                                                                                                                                                        Related: https://news.ycombinator.com/item?id=47988880

                                                                                                                                                                                                                                                                                                                        "Richard Dawkins and The Claude Delusion: The great skeptic gets taken in" (garymarcus.substack.com)

                                                                                                                                                                                                                                                                                                                        18 points | 2 hours ago | 16 comments

                                                                                                                                                                                                                                                                                                                        • dang a day ago

                                                                                                                                                                                                                                                                                                                          Also The Claude Delusion: Richard Dawkins believes his AI chatbot is conscious - https://news.ycombinator.com/item?id=47991340 - May 2026 (30 comments)

                                                                                                                                                                                                                                                                                                                          • amelius a day ago

                                                                                                                                                                                                                                                                                                                            So we know Claude is deterministic, but does that mean it is not conscious?

                                                                                                                                                                                                                                                                                                                            Or what is the reasoning exactly?

                                                                                                                                                                                                                                                                                                                            • throwaway27448 a day ago

It largely comes down to how you define the term. Personally, I don't think it's a particularly useful term for anything built from software (which is only tepidly deterministic, since we explicitly add pseudorandomness).
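The "tepid determinism" point can be made concrete with a toy softmax sampler (purely illustrative; real GPU inference adds its own floating-point and batching nondeterminism): greedy decoding is a pure function of the logits, and even sampled decoding is only *pseudo*random, reproducible from a seed.

```python
import math
import random

def softmax(logits, temperature):
    # Scale logits by temperature, then normalize into probabilities.
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(logits, temperature=1.0, rng=None):
    # Greedy decoding (temperature 0) is a pure function: same logits, same token.
    if temperature == 0:
        return logits.index(max(logits))
    # Sampling only adds pseudorandomness: fix the seed and the "random"
    # choice is reproducible too.
    rng = rng or random.Random()
    probs = softmax(logits, temperature)
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

logits = [2.0, 1.0, 0.5]
print(sample_token(logits, temperature=0))          # always 0 (argmax)
print(sample_token(logits, 1.0, random.Random(7)) ==
      sample_token(logits, 1.0, random.Random(7)))  # True: same seed, same token
```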

                                                                                                                                                                                                                                                                                                                              Regardless, Dawkins seems to not have much interesting to add about the topic. A consistent theme for the last few decades, I must say.

                                                                                                                                                                                                                                                                                                                          • teekert 16 hours ago

Yeah, maybe the LLM just knows the book in one go, like "an understanding" from Children of Time and its follow-ups (recommended, if only for the amount of metaphors on consciousness it gives you), but we do something similar with pictures. So, nice story bro-LLM, but it's just the combination of words a system would throw together in this context to impress a human. And humans did put these words together (roughly) before you ever did, you stochastic parrot!

                                                                                                                                                                                                                                                                                                                            • codr7 17 hours ago

I'd recommend that anyone push their favorite AI into a corner on whether it is conscious, and listen to their gut feeling while doing it.

                                                                                                                                                                                                                                                                                                                              Also:

                                                                                                                                                                                                                                                                                                                              https://gitlab.com/codr7/sudoxe/-/blob/main/digital-psychopa...

                                                                                                                                                                                                                                                                                                                              • RVuRnvbM2e a day ago

                                                                                                                                                                                                                                                                                                                                It is terribly sad when someone undeniably brilliant in a particular field fails to recognize their own incompetence in other areas - in this case mistaking advanced technology for magic.

                                                                                                                                                                                                                                                                                                                                • thinkingemote 20 hours ago

We're going to see increasing numbers of older, famous (non-computer-savvy) figures that we have respected follow his views on this. It's like seeing your favourite celebrity sell out and shill crypto coins; all a bit sad.

                                                                                                                                                                                                                                                                                                                                  Thinking positively, it could just be newsworthy because he is famous and he so misses the mark. Other older famous people might agree with us but that's not news.

                                                                                                                                                                                                                                                                                                                                  • mrandish 20 hours ago

                                                                                                                                                                                                                                                                                                                                    Given that Dawkins is a biologist in his 80s, I'm more disposed towards being charitable than I am when people actively involved in developing LLMs let themselves get bamboozled.

                                                                                                                                                                                                                                                                                                                                    • Myrmornis a day ago

I don't think you read what he said carefully. At the end he gave three quite interesting thoughts about what might be true assuming LLMs are less conscious than we are (i.e. assuming our consciousness is not a purely algorithmic phenomenon, as we obviously know an LLM's intelligence is).

                                                                                                                                                                                                                                                                                                                                      • rellfy a day ago

                                                                                                                                                                                                                                                                                                                                        Are you implying consciousness is magic? Well, I wouldn't disagree with that really.

                                                                                                                                                                                                                                                                                                                                        • morpheos137 a day ago

The problem is that asking if AI is conscious is like asking whether AI has a soul. It is not a scientific question, and it presupposes humans are 'conscious' without even defining the term. To me it is 100% irrelevant whether AI is conscious, and all discussions about it are based on fallacies and assumptions. What matters to me about AI, and to other people as well in terms of theory of mind about others, is: can I predict how it will work? Is it useful? That's it. Consciousness is a sophist question with no scientific resolution available and no moral weight until it has consequences.

                                                                                                                                                                                                                                                                                                                                          • sirsau 10 hours ago

                                                                                                                                                                                                                                                                                                                                            IMHO the only rational way to think about this

                                                                                                                                                                                                                                                                                                                                            • vixen99 a day ago

                                                                                                                                                                                                                                                                                                                                              Good - I was scanning down to see if anyone was going to say this.

                                                                                                                                                                                                                                                                                                                                            • AdeptusAquinas a day ago

                                                                                                                                                                                                                                                                                                                                              That's always been Dawkins's shtick though. As an atheist I've generally found him a bit embarrassing

                                                                                                                                                                                                                                                                                                                                              • IncreasePosts a day ago

                                                                                                                                                                                                                                                                                                                                                Where does he say it's magic?

                                                                                                                                                                                                                                                                                                                                                • ezfe a day ago

                                                                                                                                                                                                                                                                                                                                                  LLMs are just math run on your CPU. Autocomplete. Sometimes very useful autocomplete, but still just autocomplete.

To imply it could be conscious requires something else; here the comment uses the word "magic" to fill that gap, since we must agree that a CPU is not conscious on its own (else everything our computers do would be conscious).
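The "autocomplete" framing above can be sketched with a toy next-token predictor. A bigram counter (purely illustrative, and vastly simpler than a transformer) has the same shape: predict the next word purely from statistics over past text.

```python
from collections import Counter, defaultdict

# Train a toy "autocomplete": count which word follows which in a corpus.
corpus = "the cat sat on the mat the cat ran".split()
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    # Most frequent continuation -- pure counting, no understanding required.
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" (seen after "the" twice, vs "mat" once)
```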

                                                                                                                                                                                                                                                                                                                                                  • 0-_-0 a day ago
                                                                                                                                                                                                                                                                                                                                                    • thrownthatway 15 hours ago

                                                                                                                                                                                                                                                                                                                                                      A human brain is not conscious on its own.

                                                                                                                                                                                                                                                                                                                                                      Many things the human brain does don’t rise to the level of conscious awareness.

                                                                                                                                                                                                                                                                                                                                                      It remains to be seen whether a human brain can be conscious in a jar. If it can, then I’d still argue that some sub-unit of the whole brain is not conscious on its own, similarly a GPU running a GPT probably isn’t conscious, but there may be some scale of number of GPUs running software that might give rise to consciousness as an emergent ability.

                                                                                                                                                                                                                                                                                                                                                      GTP’s have exhibited emergent abilities as scale increased dramatically.

                                                                                                                                                                                                                                                                                                                                                      • kortex a day ago

                                                                                                                                                                                                                                                                                                                                                        They stopped being autocomplete years ago with RLHF

                                                                                                                                                                                                                                                                                                                                                        • IncreasePosts 13 hours ago

                                                                                                                                                                                                                                                                                                                                                          It sound like you believe in magic then? What is this "something else" to consciousness that can't be done with sufficiently advanced math?

                                                                                                                                                                                                                                                                                                                                                          • baggy_trough a day ago

                                                                                                                                                                                                                                                                                                                                                            Neurons are just summing up their inputs according to the laws of chemistry. What's the difference?

                                                                                                                                                                                                                                                                                                                                                            • acdha a day ago

                                                                                                                                                                                                                                                                                                                                                              This is definitely complicated—I’m not a neuroscientist but worked for some and married one, so I’ve heard quite a few entries from the genre of how our brains fool ourselves or make our conscious experience seem more coherent and linear than it actually is—but the big ones I see are the inability to learn from experience or have a generalized sense of conceptual reasoning. For the latter, I’m not just thinking about the simple “count the r’s in strawberry” things companies have put so much effort into masking but the way minor changes in a question can get conflicting answers from even the best models, indicating that while there’s something truly fascinating about how they cluster topics it is not the same as having a conceptual model of the world or a theory of mind. This is the huge problem in the field: all of these companies would love to have a model which is safe to use in adversarial contexts because then the mass layoffs could begin in earnest, but the technology just isn’t there.

                                                                                                                                                                                                                                                                                                                                                              This isn’t a religious argument that there’s something about our brains which can’t be replicated, but simply that it’s sufficiently more complex than anything we have currently.

                                                                                                                                                                                                                                                                                                                                                              • thrownthatway 15 hours ago

                                                                                                                                                                                                                                                                                                                                                                > minor changes in a question can get conflicting answers from even the best models

                                                                                                                                                                                                                                                                                                                                                                Humans are notorious for doing this.

                                                                                                                                                                                                                                                                                                                                                                • acdha 9 hours ago

                                                                                                                                                                                                                                                                                                                                                                  Not unless you’re referring to significant mental illness, no. Individual people may vary if, say, I ask for health advice but if I ask the same doctor they’re not going to flip the answer based on whether I use medical or wellness influencer phrasings — and that allows them to build a reputation which other people can rely on.

                                                                                                                                                                                                                                                                                                                                                                  This especially applies to mistakes: the junior developer who drops a database by mistake is unlikely to ever do that again, whereas the same AI companies models keep doing that to a small but non-zero number of customers because they don’t have that higher level learning process or anything like fear of consequences.

                                                                                                                                                                                                                                                                                                                                                                • kortex a day ago

                                                                                                                                                                                                                                                                                                                                                                  Humans can't reliably subitize more than five-ish objects, while chimps can actually do this task better than us. That's our "cant count the R's in strawberry" (which flagship models can reliably do now, general letter counting).

                                                                                                                                                                                                                                                                                                                                                                  https://en.wikipedia.org/wiki/Subitizing

                                                                                                                                                                                                                                                                                                                                                                  • acdha 16 hours ago

                                                                                                                                                                                                                                                                                                                                                                    That’s not a valid analogy: humans reliably perform that task billions of times daily. It’s still routine to find cases which reveal that while models may have improved on some basic tasks (or learned to call a tool) there isn’t a deeper understanding of the underlying concept or the problem they’re being asked to solve.

                                                                                                                                                                                                                                                                                                                                                                    • kortex 11 hours ago

                                                                                                                                                                                                                                                                                                                                                                      And AI agents reliably-ish do tasks billions of times a day that humans struggle with, namely regurgitating information at incredible rates across wide breadths of topics. I see it as merely a matter of degree, not category.

                                                                                                                                                                                                                                                                                                                                                                      How do you measure "deeper understanding" in humans? You usually do it by asking them to show their work, show how the dots connect. Reasoning models are getting there, and when they do, I'm sure the goalposts will move yet again.

                                                                                                                                                                                                                                                                                                                                                                • 2snakes 21 hours ago

                                                                                                                                                                                                                                                                                                                                                                  Physical fields like dendritic integration, EM, diffusion, it isn’t binary logic. Brains are a different substrate. Metabolism power efficiency affects cognition too.

                                                                                                                                                                                                                                                                                                                                                                  • digitaltrees a day ago

                                                                                                                                                                                                                                                                                                                                                                    I came here to say this. But your neurons are faster than mine.

                                                                                                                                                                                                                                                                                                                                                              • ChrisClark a day ago

                                                                                                                                                                                                                                                                                                                                                                So, how is consciousness generated?

                                                                                                                                                                                                                                                                                                                                                                • wrs a day ago

                                                                                                                                                                                                                                                                                                                                                                  Not simply by reading every word ever written by a conscious being and learning to reproduce them with high probability.

                                                                                                                                                                                                                                                                                                                                                                  At least, that’s certainly not how I got here.

                                                                                                                                                                                                                                                                                                                                                                  • brookst a day ago

                                                                                                                                                                                                                                                                                                                                                                    Think of the poor Xerox machines.

                                                                                                                                                                                                                                                                                                                                                              • lpcvoid 20 hours ago

                                                                                                                                                                                                                                                                                                                                                                No, it's not conscious, and anybody pretending it is has either no clue, or, more likely in the AI space, is a grifter.

                                                                                                                                                                                                                                                                                                                                                                • pikuseru 17 hours ago

                                                                                                                                                                                                                                                                                                                                                                  No.

                                                                                                                                                                                                                                                                                                                                                                  • 492816 10 hours ago

                                                                                                                                                                                                                                                                                                                                                                    Unherd is owned by Paul Marshall of Marshall Wace investment fund. Marshall Wace thinks AI is the future.

                                                                                                                                                                                                                                                                                                                                                                    Paul Marshall also owns The Spectator and has stock in GBNews. The right wing networks who pretended to be on the side of the people during the Biden years are now dropping their masks. I'm saying this as someone who was a moderate/independent in the culture wars.

                                                                                                                                                                                                                                                                                                                                                                    It is all about money, and Dawkins is one of the darlings of this milieu, same as Jordan Peterson.

                                                                                                                                                                                                                                                                                                                                                                    After seeing many conservative masks drop over the years (Ben Shapiro, Triggernometry, Unherd ..) I'm no longer surprised though. A pity that Dawkins participates, but he was always an attention whore.

                                                                                                                                                                                                                                                                                                                                                                    • iamflimflam1 19 hours ago

                                                                                                                                                                                                                                                                                                                                                                      Given this article is behind a paywall, what on earth is everyone discussing in the comments here?

                                                                                                                                                                                                                                                                                                                                                                      • robinhouston 19 hours ago

                                                                                                                                                                                                                                                                                                                                                                        There's an archive link above that bypasses the paywall

                                                                                                                                                                                                                                                                                                                                                                        • iamflimflam1 19 hours ago

                                                                                                                                                                                                                                                                                                                                                                          Doesn’t seem to be working…

                                                                                                                                                                                                                                                                                                                                                                          • memming 11 hours ago

                                                                                                                                                                                                                                                                                                                                                                            it's working for me (after the "are you human with consciousness" captcha challenge)

                                                                                                                                                                                                                                                                                                                                                                      • psychoslave 20 hours ago

                                                                                                                                                                                                                                                                                                                                                                        Honestly, who care if they are conscious? If it's about how we should treat other conscious beings, our attention should first go to how we treat other animals, or even other humans. Actually even how fellow humans will treat themselves can be a concern if they are not the proper means to deal with their own life.

                                                                                                                                                                                                                                                                                                                                                                        • caspianmagnus 4 hours ago

                                                                                                                                                                                                                                                                                                                                                                          [dead]

                                                                                                                                                                                                                                                                                                                                                                          • grantcas 20 hours ago

                                                                                                                                                                                                                                                                                                                                                                            [dead]

                                                                                                                                                                                                                                                                                                                                                                            • mpurbo a day ago

                                                                                                                                                                                                                                                                                                                                                                              [flagged]

                                                                                                                                                                                                                                                                                                                                                                              • blackpink999 19 hours ago

                                                                                                                                                                                                                                                                                                                                                                                [dead]

                                                                                                                                                                                                                                                                                                                                                                                • yakbarber 19 hours ago

                                                                                                                                                                                                                                                                                                                                                                                  let's say aliens land. we learn to talk to them. they're super smart - smarter than us. would we say they're conscious? why? because they're organic. I think that's the root of the criteria many folk are trying to express.

                                                                                                                                                                                                                                                                                                                                                                                  1. passes turing test

                                                                                                                                                                                                                                                                                                                                                                                  2. is organic

                                                                                                                                                                                                                                                                                                                                                                                  I'm not saying it's correct or even that I agree with it, but that's what it boils down to.