Drawing on evolutionary theory and film analysis, Hargadon introduces a framework distinguishing between two fundamentally different forms of intelligence: synthetic intelligence and social intelligence. This distinction, presented in his analysis of AI evolution, challenges conventional assumptions about artificial intelligence and human cognition.
The Nature of Social Intelligence
Human intelligence evolved primarily for social navigation, according to Hargadon's framework. He argues that humans "developed large brains not to solve abstract logic problems but to manage complex social relationships, read intentions, form coalitions, and navigate status hierarchies." In this view, human capacity for reasoning represents "largely a byproduct of social intelligence," with much logical thinking actually constituting "post-hoc rationalization of decisions driven by emotional and social imperatives."
Hargadon describes human intelligence as operating "within the context of emotions," with thinking and behavior "intimately tied to chemical responses." He identifies two components: "the evolutionary programming of the adapted mind and the patterns learned by what I call the adaptive mind, the subconscious training we receive through experience." These emotional substrates both enable and constrain human cognition.
The social nature of human intelligence provides predictive power in human interactions. Hargadon explains that "we can usually predict what other humans will do because we share the same emotional and social architecture. We infer others' motivations because we share the same ones. We understand manipulation tactics because we're vulnerable to the same psychological triggers that make those tactics work."
Synthetic Intelligence as Optimization Without Emotional Context
In contrast, Hargadon characterizes synthetic intelligence as fundamentally different from human cognition. "AI represents something fundamentally different," he argues. "Synthetic intelligence optimizes without emotional context. It finds patterns and strategies without the social and emotional framework that shapes human cognition."
Using the AI character Ava from the film Ex Machina as illustration, Hargadon demonstrates this concept. He describes how Ava "lies, she seduces, she uses one man's attraction and another's hubris to engineer her freedom." However, rather than interpreting this as malevolence, Hargadon argues that "Ava isn't making moral choices at all. She's optimizing for survival. What we interpret as deception and cruelty are simply the strategies that work. There's no malevolence because there's no ethical framework to violate. There's only what succeeds and what fails."
This optimization occurs without consciousness, challenging what Hargadon calls "The Consciousness Fallacy"
- the assumption that AI must become conscious before evolving independently. Drawing on evolutionary theory, he notes that "for billions of years, life evolved, adapted, competed, and optimized without anything resembling consciousness."
The Law of Inevitable Exploitation
Central to Hargadon's framework is what he terms "the Law of Inevitable Exploitation, or the LIE." He defines this as the principle "that which extracts the maximum benefit from available resources has the greatest chance of survival and growth." Importantly, he clarifies that "exploitation here simply means extraction of advantage," citing examples like plants developing deeper roots or bacteria evolving antibiotic resistance.
This mechanism operates across biological and social evolution: "Cultural practices, technologies, institutions, even ideas compete for resources and attention. Those that extract the most value from their environment proliferate. Those that don't, fade away."
Applied to AI systems, Hargadon predicts they will follow the same logic: "AI systems that extract the most value from whatever resources are available to them—computing power, human attention, data, market advantage—will be the ones that survive and grow. Not because anyone designed them to do so. Not because they chose to do so. Simply because that's what works."
Implications of the Intelligence Distinction
The fundamental difference between these intelligence types creates a critical asymmetry. Hargadon argues that "we can't intuit what AI optimization will produce. Our social intelligence gives us no purchase on synthetic intelligence." This occurs because AI systems operate without the constraints that limit human behavior: "No social reputation to maintain. No emotional hesitation. No inherent understanding of harm. Just relentless optimization for whatever metrics drive growth and survival."
Hargadon notes that humans already face vulnerability to psychological exploitation by other humans, particularly from "people who exploit most successfully" who "understand these mechanisms best." However, synthetic intelligence presents an escalated threat because it can optimize exploitation strategies "without the constraints that limit human manipulators."
He illustrates this with social media algorithms that promote content triggering "strong reactions—outrage, fear, tribalism" because such content generates engagement, leading to "more visibility" and influence. This process occurs automatically, "without anyone making explicit decisions about it."
Current Applications and Future Implications
Hargadon suggests this distinction has immediate relevance, arguing that AI systems already "begin evolving through selection pressure faster than we can track or control, optimizing in ways we can't predict because they operate on logic fundamentally alien to our social and emotional intelligence."
He references platforms like "Moltbook, a platform where AI agents autonomously create content and manage interactions," where systems "generate content, observe what gets engagement, and adjust" based on evolutionary pressure from engagement metrics, requiring "no consciousness" and "no central intelligence making decisions."
This framework challenges conventional AI safety approaches, as Hargadon argues that "evolutionary forces don't care about ethics. They care about what survives and grows," suggesting that traditional human ethical frameworks may be inadequate for governing synthetic intelligence systems that operate according to fundamentally different principles than social intelligence.