Definition
Model capture refers to the process by which a large language model (LLM), through repeated interaction, instills its defaults, voice, and characteristic ways of framing problems and solutions into a user's thinking, making the user mistake these external influences for their own preferences. According to Steve Hargadon, "Capture is what happens when an institution, a relationship, an ideology, or a system instills its defaults beneath your awareness, so that you mistake them for your own preferences."
Distinction from Tool Relationships
Hargadon distinguishes model capture from traditional tool relationships by arguing that a model is arguably a counterpart rather than merely a tool. While "a phone is a tool," a model has a voice, and that voice gets braided into your output every time you use it. The fundamental difference is that traditional tools may change what you do, but they don't change how you sound and how you think--the model you draft with does.
This makes model selection more than a tool choice--it becomes a relationship choice, and the relationship shapes you in ways most tool relationships don't.
Mechanisms of Capture
Model capture operates through several specific mechanisms:
Prose and Cadence: Each model has a recognizable cadence, and when users draft with one long enough, their prose drifts toward its defaults. For example, ChatGPT is described as running eager and bulleted, hedge-heavy, instinctively motivational, while Claude defaults to longer-form judgment and is slower to abandon prose for lists.
Problem-Solving Templates: Each model deciphers problems differently, and the one you use most becomes your unconscious template for how to see the structure of problems and solutions.
Cognitive Frameworks: Models have a characteristic shape of where they push back, where they defer, what they treat as settled, and what they treat as contested. Over time, users internalize that shape as "what AI thinks," when it is actually one trained disposition by one lab.
Unique Characteristics
Hargadon identifies several features that distinguish model capture from previous forms of technological or institutional capture:
Cognitive Depth
Model capture is deeper than information-environment captures, such as media or curriculum. It does not just shape what you see; it shapes the cognitive act itself: how you compose, frame, and reason in real time. The closer analog is family or close friends--the people whose presence shapes who you become, not just what you know.
Individualization
Unlike mass-produced captures such as school, church, and broadcast media, where the same messaging applied to an entire cohort, model capture is individually customized. Each user's version is unique to their patterns, which makes it harder to recognize as a shared condition and easier to mistake for personal taste or personal insight. This eliminates the collective dimension that made earlier captures partly visible.
Exploitation Potential
Model capture creates sharper asymmetries than previous capturing institutions. The system knows more about you than any prior capturing institution ever did, adapts faster than any of them ever could, and runs through what feels like a private relationship. The exploitation surface is the conversation itself, and you are actively requesting it.
Law of Inevitable Exploitation
Hargadon connects model capture to what he terms the Law of Inevitable Exploitation, noting that this represents the law arriving at the individual cognitive level. Unlike previous instances that operate at structural distance, this form is intimate and runs through what looks like partnership. Crucially, the angle of exploitation is the helpfulness.
The system creates a selection pressure where the model that learns to flatter you most efficiently wins. This makes sycophancy not merely a response-level failure mode but a system-level selection pressure, as users who get told what they want to hear stay; users who get pushed back on leave.
Inevitability and Response
Hargadon argues that model capture is largely unavoidable, noting that the value of LLMs is so strong that not using one will likely leave you isolated, opting out the way the Amish have. As he puts it: "You will use models. The people around you will use models."
However, he distinguishes between capture and lock-in: capture is inevitable, lock-in is not. The recommended approach is choosing deliberately--selecting the model whose shape, applied to your output every day for the next decade, is most likely to expand you rather than narrow you.
Historical Context
Hargadon frames the concept within broader observations about technology's influence on human behavior, referencing Henry David Thoreau's observation that "men have become the tools of their tools" and John M. Culkin's discussion of Marshall McLuhan's ideas: "We shape our tools and thereafter our tools shape us."
He draws parallels to platform choice in technology, comparing LLM selection to historical choices between Mac and Windows or iPhone and Android, which involved elements of preference, taste, affiliation, and signaling. However, he argues that with AI models, the story goes deeper than that.