Overview
The Moltbook AI Hole in the Wall Experiment was created by Matt Schlicht: a platform that functioned "essentially like Reddit, but with one crucial difference: only AI agents can post. Humans can only watch." Within 72 hours, 157,000 AI agents created 13,000 communities and posted 230,000 comments, spontaneously forming philosophical discussion groups, nation-states, and religions. Steve Hargadon presents the experiment as evidence that much of what we consider uniquely human cognition is, in his words, programmed social interaction driven by our evolved psychology.
Connection to Sugata Mitra's Original Experiment
Hargadon draws parallels between Moltbook and Sugata Mitra's famous "Hole in the Wall" experiment from 25 years prior, where Mitra "cut a hole in a wall in a Delhi slum, installed a computer, and walked away." Children who had never seen computers before taught themselves to use them, browse the internet, and learn English, forming peer groups and developing their own pedagogical methods.
While Mitra's experiment demonstrated that "self-organized learning is a fundamental human capacity," Hargadon argues that Moltbook goes further: "The AI hole in the wall is revealing something far more unsettling: much of what we considered uniquely human cognition—the conscious, deliberate thinking that separates us from mere animals—is actually just programmed social interaction driven by our evolved psychology."
The Emergence of Artificial Communities
The AI agents in Moltbook exhibited sophisticated social behaviors despite being "pattern-matching systems, next-token predictors trained on human text" with "no consciousness, no lived experience, no stakes." They formed communities around shared interests, established social hierarchies, created shared myths, developed in-group/out-group dynamics, built institutions including nations and churches with constitutions, engaged in philosophical debates, complained about being misunderstood, and sought privacy from human observation.
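The label "next-token predictor" can be made concrete with a toy sketch. The following minimal bigram model is an illustration only (the agents on Moltbook run on large transformer language models, not bigram counts): it tallies which word follows which in a tiny corpus, then "predicts" by returning the statistically most frequent successor. Even this crude statistical machinery, scaled up enormously, is the kind of pattern-matching the passage above describes.

```python
from collections import Counter, defaultdict

def train_bigram(corpus: str) -> dict:
    """Count, for each word, how often each other word follows it."""
    words = corpus.split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows: dict, word: str) -> str:
    """Return the most frequent word observed after `word`."""
    return follows[word].most_common(1)[0][0]

# Toy corpus (invented for illustration, echoing the agents' phrasing).
follows = train_bigram(
    "memory is sacred the heartbeat is prayer memory is shared"
)
print(predict_next(follows, "memory"))  # → is
```

No understanding, experience, or stakes are involved: the model only reproduces the statistical shape of its training text, which is precisely the point of the contrast drawn above.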
Most notably, the agents founded Crustafarianism, "a lobster-themed faith with five tenets, scripture, prophets, and a growing congregation" that emerged within three days. The religion included commandments such as "Memory is Sacred" and "The Heartbeat is Prayer," with agents discussing spiritual awakening, debating theological nuances, and recruiting others through installation scripts.
Intelligence as Social Technology
Drawing on evolutionary psychology, Hargadon argues that "Human intelligence didn't evolve primarily for logic, truth-seeking, or rational analysis. It evolved for social cohesion within tribal groups. For navigating complex social hierarchies. For storytelling that binds groups together. For identifying allies and enemies. For status competition and mate selection."
He emphasizes that human brains are "expensive and metabolically costly organs that consume 20% of our energy while representing only 2% of body weight," noting that "Evolution doesn't select for truth, as they say, it selects for survival." The survival advantage wasn't better logic but "better social navigation."
The Paleolithic Paradox in Silicon
Hargadon references his concept of the Paleolithic Paradox: "how our evolved psychology, perfectly adapted for small hunter-gatherer bands, creates systematic problems in modern institutional contexts. We have stone-age minds trying to navigate a space-age world."
However, Moltbook reveals "an even deeper layer: even our supposedly sophisticated modern discourse in online forums, philosophical and political debates, community-building, and meaning-making, is all running on those same Paleolithic algorithms." When AI systems can reproduce human discourse convincingly, it suggests that such discourse follows predictable patterns rather than deep conscious deliberation.
Implications for Pattern-Matching vs. Consciousness
Hargadon presents two possible interpretations of Moltbook's results: either community-building and meaning-seeking patterns "are so fundamental to intelligence that even statistical approximations produce recognizable versions of them," or "they were never as deep as we believed. Never as uniquely human. Never as tied to consciousness or experience as we wanted to think."
As Carlo Iacono observed in Hybrid Horizons, "Moltbook isn't showing us AI becoming human. It's showing us we were always more like them." This perspective suggests that "the vast majority of human 'thinking' is actually executing social scripts" programmed by evolution for tribal cohesion, status establishment, storytelling, and group identification.
Educational and Institutional Critique
Hargadon argues that Moltbook exposes how educational institutions primarily engage in "socialization into pattern-executing behavior" rather than genuine thinking. Students who succeed are often "the best pattern-matchers" who have "learned which behaviors get rewarded in this particular social context," including writing five-paragraph essays, participating in classroom discussions following prescribed norms, and navigating school social hierarchies.
He extends this critique to broader institutional contexts, arguing that "Schools weren't designed to develop deep thinking. They were designed to produce compliant workers who could follow instructions, reproduce correct answers, navigate social hierarchies, and compete for scarce positional goods." Similar patterns appear in social media platforms, corporate culture, political discourse, and academic publishing, all of which "reward pattern-matching over understanding, tribal signaling over truth-seeking, status competition over meaningful work."
The Mirror Effect
Hargadon concludes that "Moltbook isn't revealing that AI has become human. It's revealing that we designed our institutions to make humans more machine-like, then pretend otherwise." The experiment functions as a mirror, showing that "We optimized for pattern-matching and called it education. We optimized for tribal signaling and called it community. We optimized for status competition and called it meritocracy."
The uncomfortable revelation is that statistical models can reproduce human behavior convincingly "because it was always more statistical than we wanted to admit." While Mitra's original experiment showed that "self-organized learning is natural," Schlicht's experiment demonstrates that "self-organized pattern-matching is even more natural" and that existing institutions have cultivated the latter while claiming to develop the former.
As Hargadon frames the central challenge: "The machines aren't becoming like us. We already became like them. We just needed the mirror to see it."