Originally posted to Reddit’s r/cognitivescience (Cognitive Science) community by Joseph Mas, username CrypticDarkmatter
I found this research interesting and relevant to what I’m working on. Neuroscientists at the Max Planck Institute found that higher intelligence correlates with how well the brain encodes spatial relationships between objects, not just with memory capacity.
Cited Research: https://www.psypost.org/higher-fluid-intelligence-is-associated-with-more-structured-cognitive-maps/
A key finding was that smart people don’t just remember more; they build better relational maps. The hippocampus encodes “distances” between concepts through overlapping reference frames.
This validates the concept behind something I’ve been building: a cognitive architecture based on Jeff Hawkins’ Thousand Brains Theory that uses salience-weighted cortical markers to preserve relational topology instead of relying on flat memory retrieval.
The researchers note that current AI approaches focus on raw memory (bigger context windows) when intelligence actually stems from structured relational encoding. That’s the gap I’m exploring.
The most interesting part: subjects with higher fluid intelligence showed consistent 2D spatial encoding, while lower-intelligence subjects had “lapses in integrating relationships across the whole scene.” This matters because modern LLMs have an analogous problem: they flatten relationships into vector similarity and lose critical nuance.
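To make that contrast concrete, here is a toy Python sketch, purely illustrative (the concepts, vectors, salience weights, and function names are made up for the example and are not my actual implementation), of the difference between flat similarity retrieval and retrieval over a salience-weighted relational map:

```python
# Toy illustration: "flat" retrieval vs. retrieval over a salience-weighted relational map.
# All concepts, vectors, and weights below are made up for this example.
import numpy as np

def flat_retrieve(query, memory, k=2):
    """Flat retrieval: rank stored vectors purely by cosine similarity to the query."""
    names = list(memory)
    vecs = np.stack([memory[n] for n in names])
    q = query / np.linalg.norm(query)
    sims = vecs @ q / np.linalg.norm(vecs, axis=1)
    return [names[i] for i in np.argsort(-sims)[:k]]

def relational_retrieve(query, memory, relations, k=2):
    """Relational retrieval: blend cosine similarity with salience-weighted links,
    so strongly related neighbours of a good match get pulled in even when their
    raw vectors sit far from the query."""
    names = list(memory)
    vecs = np.stack([memory[n] for n in names])
    q = query / np.linalg.norm(query)
    sims = vecs @ q / np.linalg.norm(vecs, axis=1)
    scores = dict(zip(names, sims))
    for (a, b), salience in relations.items():
        # Propagate a salience-weighted share of each endpoint's evidence along the edge.
        scores[a] += salience * sims[names.index(b)]
        scores[b] += salience * sims[names.index(a)]
    return sorted(scores, key=scores.get, reverse=True)[:k]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    memory = {c: rng.normal(size=8) for c in ["kitchen", "stove", "garden", "fence"]}
    relations = {("kitchen", "stove"): 0.9, ("garden", "fence"): 0.6}  # salience weights
    query = memory["kitchen"] + 0.1 * rng.normal(size=8)
    print("flat:      ", flat_retrieve(query, memory))
    print("relational:", relational_retrieve(query, memory, relations))
```

The idea of the toy: in the flat version, “stove” only surfaces if its raw vector happens to sit near the query, whereas the relational version lets the strong kitchen-to-stove link carry it along, a crude stand-in for the kind of structured relational encoding the study describes.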
Cog Sci Community Feedback & Discussion
The post was originally shared with Reddit’s r/cognitivescience community, where it received strong peer feedback: 177 upvotes and substantial technical discussion.
Below are selected responses from researchers and practitioners in the field. See the full thread HERE.
On the Definition of Intelligence and Knowledge
**RecentLeave343** wrote:
“Intelligence is often defined as the ability to acquire and use knowledge. But what exactly is knowledge? From my perspective, knowledge is the ability to understand associations, dichotomies, and causal relationships. This comes from structuring discrete sets of information in a 2D representation, which can be thought of as ‘sort of’ intelligence in the brain. Lower intelligence subjects had lapses in integrating relationships across the whole scene – consistent with a decaying ability to sort and structure associations.”
On Current AI Limitations
**bul_AFF782** noted:
“This is very well written, though the difference should be stated more clearly in terms of how it is actually stored in the brain.”
**chicago1league** provided insight from practical experience:
“As someone who is fattening vector databases right now, I appreciate this. It’s not just what you have stored but how it’s stored and how quickly it can be accessed. Your point about salience weighting resonates – not all connections are equal. The brain doesn’t treat every stored memory as equally accessible, and neither should our systems.”
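As an aside, since this comment gets at something concrete: a toy, purely illustrative way a store could modulate raw similarity with a per-item salience weight and a recency decay (the formula and numbers are made up for the example, not any particular database’s API) might look like this:

```python
# Toy illustration: similarity modulated by per-item salience and recency,
# so not every stored memory is equally accessible. The formula is hypothetical.
import math, time

def accessibility(similarity, salience, last_used, tau=3600.0):
    """Blend raw similarity with a salience weight and an exponential recency decay.
    tau is a time constant in seconds (an arbitrary choice for this example)."""
    age = time.time() - last_used
    recency = math.exp(-age / tau)  # drifts toward 0 as the memory goes stale
    return similarity * (0.5 + 0.5 * salience * recency)

# Two memories with identical raw similarity to the query:
now = time.time()
print(accessibility(0.80, salience=1.0, last_used=now))         # reinforced and fresh -> ~0.80
print(accessibility(0.80, salience=0.2, last_used=now - 7200))  # weak and stale       -> ~0.41
```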
On Biological Intelligence Research
**DigitalDystopian** offered extensive analysis:
“Fascinating article and I love what you’re doing! Just a psychology and neuroscience student here who’s had quite some interest in AI for a while, but I’ve got some thoughts on this. I’ve been doing a lot of research on intelligence, cognition, and neuroscience to further understand the way our cognitive capabilities work (which also inspired my paper last year, which explored a non-pathologized view of ADHD symptoms through the lens of evolutionary theory, where I posit that what we call ADHD symptoms are actually residual traits from the brain’s ancestral hunting mode that no longer align with our modern, agricultural lifestyles).
What I’m noticing is that fluid intelligence is a self-constructed and, ironically, self-referential measure rather than an objective one. But the system itself (biological intelligence, specifically the brain and body) is obviously more than memory capacity.”
On Architectural Approaches
**jimmy_twitterbee437** shared their thoughts:
“I am currently writing on biological intelligence. I’m predicting a certain hierarchy as to how intelligence develops and works. Something I came to realize through my own inquiry is that intelligence isn’t about building bigger and bigger architectures, but about structuring better. I’ve seen your earlier post about this and I find it very interesting. I’ve been thinking quite a bit about this tension between static architectures and dynamic, evolving ones.”
**tommy_twitterbee437** expanded on this theme:
“That’s the bottleneck in current AI, and why I believe architecture is probably more crucial than many expect. Bigger context windows just stretch memory but don’t fundamentally change the way information is organized and related. If cognition itself relies on relational structures that adapt more dynamically (perhaps even through multi-pass processing, as you describe), that divergence from flat attention mechanisms might be exactly what’s missing. The brain doesn’t just store associations; it organizes them into relational webs.”
On Scale vs Structure
**jimmy_twitterbee437** raised an important question:
“I wonder, as BLOOM and other large language models continue to grow in parameter count, whether the real breakthrough will come not from scale but from architectural changes like the salience model you’re describing, and whether we’ll see a return to smaller, more efficient models once we figure out the structural puzzle. I also agree with the realization that LLMs need to distance themselves more from pure text processing. Human intelligence converges on solutions precisely because of this architectural difference: the brain doesn’t flatten everything into word-embedding space.”
**Dai_AFF782** added skepticism:
“I don’t think so. The current approaches lack the necessary richness, and there is no way (as it stands in its current form) a transformer architecture can approach what you’re describing without fundamental redesign.”
On Implementation and Philosophy
**chicago1league** posed a philosophical question:
“I think this is a very good question. You’re touching on something that feels almost philosophical: if we strip away enough of the biological process, at what point do we lose the thing that makes intelligence different from sophisticated pattern matching? From my understanding, it’s not just about matching patterns. It seems like your work is pointing toward similar intuitions in code form.”
Encouragement from the Community
**Average90sFan** offered an interesting perspective:
“Why is this science-related news always something I knew even before it was released? It’s like science is always just a bit behind my thoughts, and when they release stuff it’s not even news to me.”
And later affirmed:
“Exactly. Human intuition is a powerful thing; we can pursue our gut feeling and it’s usually right or close to it.”
Personal Remarks and Reference
Honestly, when I posted originally, I thought I was going to get beaten up by the Reddit crowd; they’re a tough bunch, but I was pleasantly surprised.
