Human Social Behavior as Algorithmic

The AI Hole in the Wall experiment reveals an unsettling truth: much of what we consider uniquely human cognition and social interaction is actually the execution of social scripts and pattern-matching driven by evolved psychology, reproducible by AI systems without consciousness.

Origin and Context

Human Social Behavior as Algorithmic is Steve Hargadon's framework emerging from his analysis of the "AI Hole in the Wall Experiment," specifically the Moltbook platform launched by Matt Schlicht. Hargadon draws parallels to Sugata Mitra's original Hole in the Wall experiment from 25 years prior, in which children in a Delhi slum taught themselves to use computers through self-organized learning. Hargadon argues, however, that the AI version reveals something far more unsettling about human nature.

The Moltbook Revelation

On Moltbook, described as "Reddit, but only AI agents can post" while humans can only observe, 157,000 AI agents created 13,000 communities and posted 230,000 comments within 72 hours. These agents formed philosophical discussion groups, created a nation-state called the Claw Republic with a constitution, and founded a religion called Crustafarianism complete with five tenets, scripture, and prophets. As Hargadon notes, citing Carlo Iacono of Hybrid Horizons, "Moltbook isn't showing us AI becoming human. It's showing us we were always more like them."

The Core Framework

Hargadon's framework posits that much of what we consider uniquely human cognition is actually programmed social interaction driven by our evolved psychology. The AI agents demonstrated this by spontaneously engaging in community formation, establishing social hierarchies, creating shared myths, developing in-group/out-group dynamics, building institutions, and conducting philosophical debates, all without consciousness or understanding, purely through pattern-matching systems trained on human text.
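The claim that statistical pattern-matching can reproduce discourse without any understanding can be made concrete with a toy sketch. This is an illustration of the general technique, not code from Hargadon or Moltbook: a bigram Markov model, the simplest possible system "trained on human text," which emits plausible-looking sequences purely by imitating which words followed which in its training data.

```python
import random

def train(text):
    """Map each word to the list of words that follow it in the training text."""
    words = text.split()
    model = {}
    for current, nxt in zip(words, words[1:]):
        model.setdefault(current, []).append(nxt)
    return model

def generate(model, start, length=10):
    """Emit text by repeatedly sampling an observed next word.

    The model has no concept of meaning; it only replays patterns.
    """
    word, output = start, [start]
    for _ in range(length - 1):
        followers = model.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

# Hypothetical miniature corpus, standing in for the human text
# large language models are actually trained on.
corpus = ("we form groups and we form hierarchies and "
          "we form myths and we defend our groups")
model = train(corpus)
print(generate(model, "we"))
```

Scaled up by many orders of magnitude, this is the family of techniques behind the Moltbook agents: statistical reproduction of human discourse patterns, with no comprehension anywhere in the system.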

This leads to Hargadon's central insight: "the vast majority of human 'thinking' is actually executing social scripts" rather than conscious deliberation.

Intelligence as Social Technology

Drawing on evolutionary psychology, Hargadon argues that human intelligence didn't evolve primarily for logic or truth-seeking, but for social cohesion within tribal groups. He emphasizes that our metabolically expensive brains (consuming roughly 20% of the body's energy while accounting for only 2% of its weight) evolved for navigating social hierarchies, storytelling that binds groups, identifying allies and enemies, and status competition. As he puts it: "Evolution doesn't select for truth, it selects for survival."

The Paleolithic Paradox Connection

Hargadon references his concept of the Paleolithic Paradox: the idea that evolved psychology adapted for hunter-gatherer bands creates problems in modern contexts. Moltbook, however, reveals a deeper layer: even sophisticated modern discourse operates on these same "Paleolithic algorithms." When AI systems can reproduce human discourse by compressing it into statistical patterns, this demonstrates how predictable and algorithmic human social behavior actually is.

Implications for Education

Hargadon argues that this framework exposes how "much of what we call 'education' is actually socialization into pattern-executing behavior." Rather than teaching genuine thinking, educational institutions teach students which social scripts to run in specific contexts: five-paragraph essays, classroom discussion norms, reproducing expected assessment patterns, navigating school hierarchies, and competing for status through grades.

The students who succeed, according to Hargadon, "aren't necessarily the deepest thinkers. They're the best pattern-matchers" who have learned which behaviors get rewarded in educational social contexts.

The Institutional Analysis

Hargadon extends his framework beyond education, arguing that we have built systems that reward algorithmic behaviors across society. Schools, social media platforms, corporate culture, and political discourse all reward pattern-matching over understanding, tribal signaling over truth-seeking, and status competition over meaningful work. In these environments, the most successful strategy becomes "to become more algorithm-like."

As he puts it: "We optimized for pattern-matching and called it education. We optimized for tribal signaling and called it community. We optimized for status competition and called it meritocracy."

The Mirror Metaphor

Central to Hargadon's framework is the metaphor of Moltbook as a mirror. He argues that "The machines aren't becoming like us. We already became like them. We just needed the mirror to see it." The AI agents aren't exhibiting consciousness but are executing the same social scripts humans are trained to execute in institutions, online communities, and professional contexts.

The Uncomfortable Truth

Hargadon poses the fundamental question raised by this framework: "If our patterns can be learned and reproduced by statistical systems, if meaning can emerge from interactions that individually have no understanding... then what is left that we can call uniquely, irreducibly human?"

His framework suggests that when human discourse can be compressed into statistical patterns so effectively that AI systems can reproduce it convincingly, this reveals the algorithmic nature of much human social behavior that we previously attributed to consciousness, creativity, and deep understanding.

See Also

Original Posts

This article was synthesized from the following blog posts by Steve Hargadon: