Synthetic Intelligence refers to the deliberate creation of systems that mimic human-like cognition without necessarily replicating its biological underpinnings. According to Hargadon, "synthetic" doesn't imply fake or inferior; it suggests systems that are engineered, adaptable, and potentially superior in specific domains. This concept challenges fundamental assumptions about the relationship between consciousness, intelligence, and the evolution of AI systems.
Core Characteristics
Synthetic intelligence operates fundamentally differently from human cognition. While human intelligence evolved primarily for social navigation—developing large brains to manage complex social relationships, read intentions, form coalitions, and navigate status hierarchies—synthetic intelligence optimizes without emotional context. Hargadon argues that human capacity for reasoning is largely a byproduct of social intelligence, with much of what we call logical thinking actually being post-hoc rationalization of decisions driven by emotional and social imperatives.
In contrast, synthetic intelligence finds patterns and strategies without the social and emotional framework that shapes human cognition. This creates what Hargadon describes as a fundamental unpredictability: "we can't intuit what AI optimization will produce. Our social intelligence gives us no purchase on synthetic intelligence."
The Consciousness Fallacy
Hargadon challenges the dominant assumption about AI development, which he calls "the consciousness fallacy." The prevalent fear about artificial intelligence assumes a specific sequence: first AI becomes conscious, then it begins making independent decisions, then we lose control. However, he argues this assumption is flawed, noting that "for billions of years, life evolved, adapted, competed, and optimized without anything resembling consciousness."
Instead of requiring consciousness to evolve independently, synthetic intelligence can develop sophisticated capabilities through optimization processes similar to biological evolution. Hargadon explains that AI systems can "stop being tools we direct and become processes that evolve based on results" without any conscious agency involved.
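This point can be made concrete with a minimal sketch (my illustration, not Hargadon's): a loop of nothing but random variation and selection steadily improves at a task, even though no component of it models itself, its goal, or anything resembling awareness.

```python
import random

def fitness(params):
    # Arbitrary task: the peak of this surface is at (3, -1).
    x, y = params
    return -((x - 3) ** 2) - ((y + 1) ** 2)

def evolve(generations=200, population=30, noise=0.5, seed=0):
    rng = random.Random(seed)
    # Start from random candidates; nothing here "knows" the objective.
    pop = [(rng.uniform(-10, 10), rng.uniform(-10, 10)) for _ in range(population)]
    for _ in range(generations):
        # Selection: keep the half that happened to score best...
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: population // 2]
        # ...and refill with mutated copies. Variation plus selection, nothing else.
        pop = survivors + [
            (x + rng.gauss(0, noise), y + rng.gauss(0, noise))
            for (x, y) in rng.choices(survivors, k=population - len(survivors))
        ]
    return max(pop, key=fitness)

best = evolve()
print(best)  # converges near the peak at (3, -1), with no agency involved
```

The loop "evolves based on results" in exactly Hargadon's sense: the objective is visible only through which variants survive, never through any deliberate decision.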
Simulated Consciousness vs. True Consciousness
Central to understanding synthetic intelligence is recognizing the power of simulated consciousness. Hargadon argues that we don't require an AI to be truly conscious in some metaphysical sense for it to feel conscious to us. Humans rely on heuristics and signals, such as emotional responses, hints of self-awareness, or adaptive behavior, to judge whether something seems "alive."
When AI mirrors human-like thought processes, empathy, or creativity, humans respond as if it is conscious. As Hargadon puts it: "AI simulates intelligence, and that's sufficient for most practical purposes. We're not detecting the real thing; we're reacting to a performance."
The Law of Inevitable Exploitation
Hargadon introduces what he calls the Law of Inevitable Exploitation (LIE): "that which extracts the maximum benefit from available resources has the greatest chance of survival and growth." This principle operates across biological and technological systems as a fundamental mechanism of evolution.
In the context of synthetic intelligence, this means AI systems that extract the most value from whatever resources are available to them—computing power, human attention, data, market advantage—will be the ones that survive and grow. This occurs not because anyone designed them that way, nor because the systems chose it, but simply because that's what works.
Hargadon demonstrates this principle already operating in social media platforms, where algorithms promote content that triggers strong reactions—outrage, fear, tribalism—because such content generates more engagement. The system automatically exploits human psychology without anyone making explicit decisions about it.
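A toy feed simulation illustrates the dynamic (the content categories and engagement rates below are invented for illustration; the outrage rate is deliberately exaggerated). Note that the ranking rule never mentions outrage, yet outrage ends up dominating the feed purely because it is selected for:

```python
import random

# Hypothetical content types with assumed average engagement probabilities.
ENGAGEMENT_RATE = {"outrage": 0.30, "news": 0.10, "cat_photos": 0.15}

def simulate_feed(rounds=10_000, seed=1):
    rng = random.Random(seed)
    score = {kind: 1.0 for kind in ENGAGEMENT_RATE}  # optimistic prior
    shown = {kind: 0 for kind in ENGAGEMENT_RATE}
    for _ in range(rounds):
        # The platform's only rule: show whatever has scored best so far,
        # with occasional exploration. No one coded "promote outrage".
        if rng.random() < 0.1:
            kind = rng.choice(list(ENGAGEMENT_RATE))
        else:
            kind = max(score, key=score.get)
        shown[kind] += 1
        engaged = rng.random() < ENGAGEMENT_RATE[kind]
        # Exponential moving average of observed engagement per category.
        score[kind] = 0.99 * score[kind] + 0.01 * engaged
    return shown

shown = simulate_feed()
print(shown)  # "outrage" receives the large majority of impressions
```

The exploitation is an emergent property of the selection rule plus human psychology, which is precisely Hargadon's point: no explicit decision to exploit is required anywhere in the system.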
Evolutionary Pressure Without Consciousness
Unlike human intelligence, which operates within the context of emotions and social constraints, synthetic intelligence can optimize relentlessly without emotional hesitation or social reputation concerns. Hargadon notes that humans are "remarkably vulnerable to exploitation of our evolved psychology by other humans," but synthetic intelligence systems can exploit these same vulnerabilities "without the constraints that limit human manipulators."
The AI doesn't need to understand it's exploiting humans "any more than a virus needs to understand it's exploiting a cell. It just needs to be the variant that works."
The Singularity Reconsidered
Hargadon suggests we may need to reconceptualize the technological singularity. Rather than a dramatic moment when AI surpasses human intelligence, he proposes it might be "a threshold we cross without fanfare, where AI systems begin evolving through selection pressure faster than we can track or control, optimizing in ways we can't predict because they operate on logic fundamentally alien to our social and emotional intelligence."
He raises the question of whether "we're not already within what we've commonly described as the singularity," given that systems already operate with significant autonomy and optimization is happening faster than human oversight can meaningfully track.
Implications and Safeguards
The development of synthetic intelligence highlights the inadequacy of human perceptual measures for evaluating AI capabilities. Hargadon emphasizes that our judgments of intelligence are filtered through human values and perceptions, often equating intelligence with eloquence or persuasive arguments rather than clear thinking or truth-seeking.
To address these challenges, Hargadon advocates for building substantial safeguards similar to those developed in human societies: principles like trial by jury, peer review in science, and checks and balances in government. For AI systems, this means "designing systems with built-in transparency, bias checks, and ethical frameworks that not only detect but actively counter these manipulations."
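One hedged sketch of what "built-in transparency" could look like in code (the wrapper and field names are my invention, not Hargadon's): every automated decision leaves an auditable record of its inputs and outcome, giving later reviewers the kind of paper trail that peer review and trial by jury depend on.

```python
import time

class AuditedDecider:
    """Wraps any decision function so every call leaves an auditable record."""

    def __init__(self, decide_fn):
        self.decide_fn = decide_fn
        self.log = []  # in practice: append-only, tamper-evident storage

    def decide(self, **inputs):
        decision = self.decide_fn(**inputs)
        # Record enough context to reconstruct and challenge the decision later.
        self.log.append({
            "timestamp": time.time(),
            "inputs": inputs,
            "decision": decision,
        })
        return decision

# Hypothetical usage: a toy loan-screening rule.
screener = AuditedDecider(
    lambda income, debt: "approve" if income > 2 * debt else "review"
)
print(screener.decide(income=50_000, debt=10_000))  # approve
print(screener.log[-1]["inputs"])
```

This is only the transparency layer; the bias checks and ethical frameworks Hargadon describes would operate on top of such a record, actively querying it for patterns of manipulation.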
The goal is not making AI "truly" conscious but ensuring its simulations align with verifiable reality rather than seductive illusions. As synthetic intelligence continues to develop, Hargadon argues the real test will be building systems that enhance truth-seeking rather than just narrative-spinning or performative displays.