Draft vs. Deliverable Distinction (in AI use)

The critical difference between using AI for private exploration and idea generation (a "draft") and presenting its output as a finished, human-vetted product (a "deliverable"). Sloppy AI occurs when the draft is treated as the deliverable.

The Draft vs. Deliverable Distinction

The Draft vs. Deliverable Distinction is Steve Hargadon's framework for understanding the most common form of "sloppy AI" usage: treating AI's initial output as a finished product rather than recognizing its proper role in the creative and analytical process. According to Hargadon, this distinction provides the key to using AI effectively while avoiding the production of what Merriam-Webster termed "slop": digital content of low quality produced in quantity by generative AI.

The Core Framework

Hargadon's framework centers on recognizing two fundamentally different stages in AI-assisted work. The draft stage represents "a place to explore ideas, go wide, generate options, and think out loud" where AI serves as a powerful tool for private exploration and idea generation. The deliverable stage represents the final output intended for public use, requiring human judgment, verification, and care.

The critical insight in Hargadon's framework is that "a draft can be sloppy. A deliverable cannot." The problems with AI usage "begin at the handoff, the moment something moves from private exploration to public use." According to this framework, AI is "genuinely powerful as a draft space," but becomes problematic when users skip the essential human work required to transform a draft into a proper deliverable.

The Handoff Moment

Central to Hargadon's framework is the concept of the handoff: "the moment something moves from private exploration to public use." This represents the critical decision point where a human must apply "the judgment, verification, and care that the task required." Hargadon argues that "the question isn't whether to use AI. It's whether, at the moment of handoff, a human applied the judgment, verification, and care that the task required."

When humans skip this handoff process, they engage in what Hargadon terms sloppy AI usage: "the act of substituting a prompt for the work the prompt was supposed to support." This creates a pattern where "someone uses AI to skip a step that shouldn't be skipped."

The Automatic Camera Analogy

Hargadon illustrates his framework through an extended analogy with automatic cameras. Just as the automatic camera "removed the barrier" of technical mastery and "expanded the number of people capable of capturing a striking image by orders of magnitude," AI functions as "the most powerful 'automatic camera' ever built — for writing, for code, for analysis, for nearly every form of intellectual work."

However, the analogy emphasizes that automation doesn't eliminate human responsibility. With cameras, "someone still has to choose what to point the camera at, decide when to press the shutter, and recognize whether the result is worth sharing. The camera handles the exposure. The human handles the choices that reflect value (or don't!)." Similarly with AI, "the value still depends on the choices a human makes before and after the tool does its part."

Applications Across Domains

Hargadon's framework explains failures across multiple domains where the draft-deliverable distinction is violated:

Sloppy sourcing occurs when people publish AI-generated citations without verification, despite knowing that "language models don't verify facts; they predict plausible next words." Hargadon cites a 2025 Deakin University study showing ChatGPT fabricated roughly one in five academic citations; the same failure mode has led to lawyers being sanctioned for citing hallucinated case law and to publications like the Chicago Sun-Times recommending non-existent books.

Sloppy engineering happens when people deploy AI-generated code "without the engineering discipline the code required, i.e., treating generation as a substitute for understanding."

Sloppy content results from publishing AI output that "leans on filler phrases, presents shallow balance instead of actual analysis, and contributes nothing that wasn't already said better somewhere else." Hargadon points to BuzzFeed's pivot to mass-produced AI content, which coincided with financial losses and market value decline.

Cognitive Implications

Hargadon identifies sloppy thinking as a particularly consequential violation of the draft-deliverable distinction. Drawing on research about "cognitive atrophy" and citing Ethan Mollick's observation that AI "works best for tasks we could do ourselves but shouldn't waste time on, yet can actively harm our learning when we use it to skip necessary struggles," Hargadon warns against outsourcing fundamental cognitive work to AI.

Sloppy thinking represents "the assumption that AI can do the hard work of understanding for you." According to Hargadon, this creates a dangerous illusion because AI "can produce text that resembles understanding, which is worse than producing nothing, since it lets you believe you've done the work when you haven't."

The Selection Problem

Rather than advocating against AI usage, Hargadon frames the issue as "a selection problem." Just as "we wouldn't ban automatic (and now digital and smartphone incorporated) cameras because of the tsunami of low-effort photographs posted everywhere," the solution to sloppy AI isn't prohibition but proper application of the draft-deliverable distinction.

The framework ultimately positions AI as a powerful amplification tool that "can dramatically expand who is able to produce valuable output," while emphasizing that avoiding slop requires recognizing when human judgment and verification are essential to transform AI-assisted drafts into worthy deliverables.

Original Posts

This article was synthesized from the following blog posts by Steve Hargadon: