• quaunaut 10 hours ago

    While interesting, I get a few questions from this:

    - As another commenter said, this is a known disadvantage of averages. I'm curious whether it's possible to get a median result from per-individual averages; I'm not familiar enough with how this research is done to know whether that would work.

    - Was any effort made to re-test individuals in a second/third/etc. session, showing consistent patterns in the brain activity? I know it was consistent within a session, but I'm curious whether it might change week over week.

    • hliyan 12 hours ago

      The specific counterintuitive result is mentioned toward the end of the article, and I'm having some trouble understanding it:

      > when analyzing average trends in groups of children, slower reaction times to the “Go” signal were linked to increased activity in many brain regions, including the default mode network

      > However, when an individual had a slower reaction time to the “Go” signal, activity decreased in the default mode network — the opposite of the group-level pattern.

      • avdelazeri 10 hours ago

        Now, I don't know anything about neuroscience or brain development, but hopefully I can explain the statistics in a way useful to you.

        Imagine there are two groups, A and B. Group A has slower reactions on average and higher average activity. Group B has faster reactions and lower activity than Group A. Yet inside both groups the general trend is that if someone is slower than their group's average reaction, they're also below their group's average activity.

        If we look at the overall means without distinguishing groups, slower reaction is positively correlated with higher activity: kids from Group A have higher activity and slower reactions in general, which pushes the correlation upwards, and as long as the relationship in Group B isn't too strong, the upward trend from Group A can easily dominate the overall correlation. But inside each group the trend is actually the opposite.

        This applies pretty much every time you're comparing samples. If I understood your quote correctly, they're studying a child's reaction time vs. activity level by comparing the same kid at different times. The same logic applies: a person can exhibit the opposite trend to the population average via the same mechanism described above. This can be even more dramatic, because once you start looking at averages you start losing time-dependency information.

        More broadly (and more formally), the total covariance splits into within-group and between-group terms, so if the signs of the terms differ, the magnitude of one can dominate the overall sum and flip the sign.
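        A toy simulation makes the flip concrete. The numbers below are invented purely to illustrate the mechanism (they are not from the paper): both groups have a negative reaction-activity trend internally, yet pooling them yields a positive correlation.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 500
# Group A: slow reactions, high activity; within-group trend is negative.
rt_a = rng.normal(600, 30, n)                          # reaction times (ms)
act_a = 80 - 0.1 * (rt_a - 600) + rng.normal(0, 2, n)  # activity (arbitrary units)
# Group B: fast reactions, low activity; same negative within-group trend.
rt_b = rng.normal(400, 30, n)
act_b = 40 - 0.1 * (rt_b - 400) + rng.normal(0, 2, n)

rt = np.concatenate([rt_a, rt_b])
act = np.concatenate([act_a, act_b])

def corr(x, y):
    return np.corrcoef(x, y)[0, 1]

print(corr(rt_a, act_a))  # negative: within group A, slower -> less active
print(corr(rt_b, act_b))  # negative: within group B too
print(corr(rt, act))      # positive: pooling the groups flips the sign
```

        The between-group offsets (600 vs. 400 ms, 80 vs. 40 units) dominate the pooled covariance, so the pooled correlation follows them rather than the within-group trend.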

        • kqr 4 hours ago

          This is a very good explanation of Simpson's paradox, which is the name for this thing.

          It can go arbitrarily deep and the trend can flip sign for each added controlled variable.

        • mapontosevenths 11 hours ago

          Acting on your first impulse is fast (default mode).

          Denying that first impulse, thinking about it, and then acting is slow.

          • derbOac 12 hours ago

            One way to think of it (I didn't read the article in depth, so this is just an example) is in terms of overall individual differences in speed and activity level. You could then have slower persons showing increased activity relative to faster persons, while it is still true that when a slower person reacted even more slowly to a signal, their activity went down, and when a faster person reacted more slowly, their activity went down as well.
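            A minimal sketch of that hypothetical (toy numbers, not from the study): pool everyone's trials and the between-person differences dominate the correlation, but subtract each person's own means first and the opposite, within-person trend shows up.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 20 "persons", 30 trials each. Each person has a baseline speed
# and activity level (slower persons -> higher baseline activity), but
# trial-to-trial, slowing down lowers that person's activity.
n_persons, n_trials = 20, 30
baseline_rt = rng.normal(500, 80, n_persons)
baseline_act = 60 + 0.1 * (baseline_rt - 500)        # between-person: positive

rt = baseline_rt[:, None] + rng.normal(0, 30, (n_persons, n_trials))
act = (baseline_act[:, None]
       - 0.05 * (rt - baseline_rt[:, None])          # within-person: negative
       + rng.normal(0, 1, (n_persons, n_trials)))

# Pooled correlation (ignores who is who): dominated by between-person trend.
pooled = np.corrcoef(rt.ravel(), act.ravel())[0, 1]

# Person-centered correlation: subtract each person's own means first.
rt_c = rt - rt.mean(axis=1, keepdims=True)
act_c = act - act.mean(axis=1, keepdims=True)
within = np.corrcoef(rt_c.ravel(), act_c.ravel())[0, 1]

print(pooled, within)   # pooled is positive, within is negative
```

            Person-centering is the standard way to separate the two levels; done on the raw pooled data, the within-person signal is simply invisible.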

            It's a classic psychological phenomenon, where individual differences are obscuring time course patterns and vice versa.

            Of course, this sidesteps the question of why (in the hypothetical example) the overall individual differences exist. Assuming those general individual differences are reliable and "real", you still have to explain why they are there, and if they predict significant outcomes, why they do, and so forth.

            The message of the paper is good, although I think the press release (not surprisingly) overstates the significance of the paper. I think these kinds of issues have received a lot more attention in the literature in the last decade or so in neuroscience. It also sort of sidesteps a lot of the more thorny questions about truly person-specific patterns and how to determine when they're meaningful.

            • pinkmuffinere 10 hours ago

              I think the plot here explains it well

              https://en.wikipedia.org/wiki/Simpson%27s_paradox

              • LeCompteSftware 9 hours ago

                Hmm I think all these replies are overcomplicating things.

                At a group level, some kids are slower at this Stop/Go task than others. The group difference appears to be this increased broad-scale brain activity: the slow group is overall more prone to distraction and daydreaming.

                However, at an individual level, slowing down on the task means increasing your focus (and decreasing brain activity in irrelevant regions), regardless of whether you were in the slow group or the fast group. So the group-level difference is not necessarily as profound as it might appear, and applying "slow group" with too broad a brush means you're going to sweep up some kids who are naturally cautious and focused.

              • quantum_state 12 hours ago

                Is this common sense and by definition of what an average is?

                • pacbard 8 hours ago

                  It seems to be a textbook example of the Simpson's paradox.

                  • tgv 12 hours ago

                    In a sense, but it is a bit more devious. It basically invalidates all past fMRI studies. Not that anyone should have taken those seriously, but it looks like another nail in the coffin. fMRI analysis is (was?) basically: squeeze each brain scan into a standard box, then average the BOLD responses (that's roughly oxygen usage between 3s and 9s after activity). This abstract says that --at least in some cases-- those averages are wrong. Not just hiding information through aggregation, but flat-out lying.

                    Just from reading the link, I do see an objection: they studied repetitions, which are known to be different from the initial response, so this may not be the fMRI's eulogy.

                    • quaunaut 10 hours ago

                      What cases was it saying it was lying? An average and a median can be drastically different without the average being false, right?

                      • tgv 8 hours ago

                        The averages a "standard" fMRI analysis produces can highlight brain areas that may not even have been involved in the majority of subjects, because the pattern is so widespread. That is in contrast with your usual average or median over, e.g., height. It's a bit like averaging the squares played on a chess board and concluding that the opening is played in the middle two columns.
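                        The chessboard analogy can be made concrete with toy numbers (purely illustrative, not real fMRI data): averaging sharp, subject-specific peaks yields a weak smeared map, and the mean peak location falls where no subject activated at all.

```python
import numpy as np

# Hypothetical 1-D "brains": each subject has a sharp activation peak,
# but at a different location.
subjects = np.zeros((4, 10))
subjects[0, 1] = 1.0
subjects[1, 2] = 1.0
subjects[2, 7] = 1.0
subjects[3, 8] = 1.0

# The group-average map is a weak smear over all four sites...
mean_map = subjects.mean(axis=0)
print(mean_map.max())            # 0.25: no strong peak anywhere

# ...and the average peak *location* lands in the middle, at positions
# where no subject activated at all.
peak_locations = subjects.argmax(axis=1)
print(peak_locations.mean())     # 4.5
print(subjects[:, 4].sum(), subjects[:, 5].sum())  # 0.0 0.0
```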

                        • moi2388 6 hours ago

                          I mean.. it kind of is, and even more about control of these two columns..

                  • giantg2 13 hours ago

                    "This approach was also able to identify subgroups of children with different levels of cognitive control and performance monitoring, or the ability to modify one’s strategy after making an error."

                    This should surprise no one. You took a large population and found subpopulations within it. If you want to look at a population average, then use the population data. If you want to look at kids with specific attention needs (guessing ADHD since medical related) then design a study to select for children fitting that criteria, including subtypes.

                    This seems like the type of thing for which a study on study design should have been done long ago, which they could have followed to help structure their own population selection.

                    • salynchnew 9 hours ago

                      Often reminded of this passage from Hitchhiker's:

                      The Maximegalon Institute of Slowly and Painfully Working Out the Surprisingly Obvious (MISPWOSO) is a fictional research institution from Douglas Adams' The Hitchhiker's Guide to the Galaxy series.

                      • bbor 10 hours ago

                        The point isn’t that they found subgroups, the point is the method they used to find them — namely, analyzing individual brain scans rather than averaging them out first.

                        • giantg2 5 hours ago

                          Same sort of thing - how would you expect to find subgroups by assigning the aggregate to individuals? That would be bad design. It's like they're surprised that they used a good design.

                        • bongripper 12 hours ago

                          [flagged]

                          • Eddy_Viscosity2 10 hours ago

                            Medical professionals have a history of not necessarily understanding the maths they use in their work. A classic example is the nutritionist who 'invented' the trapezoid rule for calculating area under a curve and named it after herself, after which many, many other medical people unironically used said method and cited her.

                            https://en.wikipedia.org/wiki/Tai%27s_model
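                            For reference, the trapezoid rule itself is a one-liner: each interval contributes the average of its two endpoint values times its width. A sketch with made-up glucose readings (hypothetical numbers, just to show the arithmetic):

```python
import numpy as np

# Made-up glucose readings (mmol/L) at half-hour time points (hours).
t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
y = np.array([5.0, 7.5, 9.0, 7.0, 5.5])

# Trapezoid rule: sum over intervals of (mean of endpoint heights) * width.
auc = np.sum((y[:-1] + y[1:]) / 2 * np.diff(t))
print(auc)   # 14.375
```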

                            • mmooss 5 hours ago

                              > Medical professionals have a history of not necessarily having complete understanding of the maths

                              HN commenters have a far greater history of that.

                              Also, the researchers trained for years and invested a long time in this work; the HN commenter probably invested a minute or two.

                              • Eddy_Viscosity2 an hour ago

                                > HN commenters have a far greater history of that.

                                That may very well be true, but it is also true that HN commenters can be quite good at spotting math errors. The broader point is that people sometimes make mistakes, or simply don't have the same knowledge and information as others.

                                > researchers trained for years and invested a long time on this work; the HN commenter probably invested a minuted or two

                                Or it can also be postulated that the researchers were highly focused on the aspect of the work they were trained and interested in and may have lacked focus on the auxiliary fields. And that the HN commenter, while only spending a few minutes on this, was knowledgeable about the auxiliary field and immediately spotted a potential issue.

                                This is actually great, and it's why it's good to get papers published and read by a wide audience: nobody can be good at everything.

                                • mmooss 20 minutes ago

                                  > it is also true that HN commenters can be quite good at spotting math errors

                                  They can be quite good at making that claim, which is cheap. And others like to rally around it - just like the comments rejecting most OPs that make their way to the top.

                                  I'd need to see evidence that the claims are actually valid. Most similar HN take-downs, for fields I have knowledge of, are pretty poor.

                              • bbor 10 hours ago

                                A) these aren’t “medical people”, they’re neuroscientists and psychologists. Comparing them to a nutritionist seems especially cruel!

                                B) “some people have been wrong before” is not a reason to think you know better than the authors of an upcoming Nature article based on a few layperson-targeted paragraphs summarizing the paper from a very high level.

                                • zipy124 6 hours ago

                                  Nature Communications, not Nature. There is quite a large difference between them (and neither is necessarily a sign of quality so much as of the ability to market well to an editor).

                                  For the record I have published in Nature Communications (and not Nature) and therefore know a little bit about what it takes to publish papers there.

                                  • JadeNB 10 hours ago

                                    > “some people have been wrong before” is not a reason to think you know better than the authors of an upcoming Nature article based on a few layperson-targeted paragraphs summarizing the paper from a very high level.

                                    Nor is "this paper is going to appear in Nature" a reason not to wonder whether there might be something that the authors don't know. The whole point of science is that anyone can make an informed critique and self-evaluation of it, with no necessity of depending on a priesthood to interpret it. You can point out the flaws in giantg2's argument https://news.ycombinator.com/item?id=47995899, but neither the venue of the paper, nor the fact that the argument is directed at laypeople in a forum frequented by laypeople, seems to me inherently to indicate such flaws.

                                    • mmooss 4 hours ago

                                      > The whole point of science is that anyone can make an informed critique and self-evaluation of it, with no necessity of depending on a priesthood to interpret it.

                                      That's a misinterpretation:

                                      > anyone can

                                      (Of course nothing stops them, but I don't think that's your point.)

                                      > anyone can make an informed critique and self-evaluation of it, with no necessity of depending on a priesthood to interpret it.

                                      Science is specifically not the wisdom of the crowd - that is pre-scientific. It is the wisdom of empirical facts, which are usually so complex and voluminous that it takes great expertise to understand and interpret them. Science is not democratic - your opinion is worthless and does not deserve consideration unless you can demonstrate otherwise.

                                      You don't have to be in the priesthood, but it's tough to have the expertise otherwise, and then tough to stay outside the priesthood.

                                      "'In matters of science,' Galileo wrote, 'the authority of thousands is not worth the humble reasoning of one single person.'" ("In questioni di scienza l'autorità di mille non vale l'umile ragionare di un singolo." The source was not able to verify its provenance, however.)

                                      HN is democratic, however.

                                • kelipso 6 hours ago

                                  A lot of people posting here are researchers too, and some have reviewed and published Nature articles too.

                                  • JadeNB 10 hours ago

                                    That seems awfully like an appeal to authority. Your parent comment doesn't just vaguely snipe, but points out reasons this should have been expected. Those reasons could potentially not be valid, or not present the whole picture, but "the researchers are from Stanford" doesn't rebut them.

                                • bbor 10 hours ago

                                  Yup, to no one’s surprise (least of all the investigators), doing neuroscience by correlating cortex regions with cognitive activities is extremely clunky at best. Very robust finding confirming this tho, thanks for sharing!

                                  Now that we’re finally moving to the next stage of neuroscience via inscrutable latent systems (aka LLMs), I can’t help but feel some nostalgia. It’s all fun and games until someone makes a lie-detection helmet that actually works…

                                  • amarshall 12 hours ago

                                    Seems like a case of Simpson’s Paradox https://en.wikipedia.org/wiki/Simpson%27s_paradox

                                    • fenazego 2 hours ago

                                      This is not a case of Simpson's Paradox, at least the analogy about accuracy vs. speed from the article isn't. You're not comparing global vs. subgroup correlations. On the one hand you're measuring how speed and accuracy are correlated across the population when you ask subjects to solve a problem. On the other hand (rather than measuring subgroup correlations), you're measuring how accuracy is affected when you ask an individual to speed up or slow down.

                                      • mday27 5 hours ago

                                        never heard of this before, very cool

                                        • QuantumNomad_ 11 hours ago

                                          Not to be confused with Flanderization.

                                          https://tvtropes.org/pmwiki/pmwiki.php/Main/Flanderization

                                        • mmooss 5 hours ago

                                          The paper is here, no paywall:

                                          https://www.nature.com/articles/s41467-026-71404-0

                                          Mistry, P.K., Branigan, N.K., Gao, Z. et al. Nonergodicity and Simpson’s paradox in neurocognitive dynamics of cognitive control. Nat Commun 17, 3494 (2026). https://doi.org/10.1038/s41467-026-71404-0