Summarize and Start Fresh (LLM Strategy)

A pragmatic strategy for interacting with LLMs where users summarize key decisions and context from a long conversation and then start a new chat with that summary, providing the model with a clean and focused context.

Summarize and Start Fresh is a strategy, described by Steve Hargadon, for managing degraded conversation context when working with large language models (LLMs). The strategy involves asking an AI model to summarize key decisions, preferences, and current direction from a long conversation, then beginning a new conversation using that summary rather than continuing with an extended chat history.

Context and Rationale

Hargadon positions this strategy within his broader analysis of how LLMs actually process conversations. He explains that when users have long conversations with AI tools like Claude or ChatGPT, "it feels like you're talking to someone who is tracking everything you've said, building on earlier points, and holding the full shape of your exchange in mind the way a thoughtful colleague would. That feeling is an illusion."

The technical reality is that LLMs are stateless systems. Every time a user sends a message, "the entire conversation history, your message, the AI's response, your next message, the next response, all of it, gets packaged up and sent to the model as a single block of text." The model processes everything, generates a response, and then "forgets everything." Continuity is constructed externally by the chat interface, not maintained internally by the model.
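The request cycle described above can be sketched in a few lines of Python. This is an illustrative sketch only: `call_model` is a placeholder standing in for any real chat API, and the message format is an assumption modeled on common chat-completion conventions.

```python
# Sketch of the stateless request cycle: on every turn, the ENTIRE
# conversation history is packaged up and sent to the model, which
# retains nothing between calls.

def call_model(history: list[dict]) -> str:
    # Placeholder for a real LLM call; a real API would likewise be
    # handed the full message list on every request.
    return f"(reply after reading {len(history)} messages)"

def send(history: list[dict], user_message: str) -> list[dict]:
    history = history + [{"role": "user", "content": user_message}]
    reply = call_model(history)  # full history resent every time
    return history + [{"role": "assistant", "content": reply}]

history: list[dict] = []
history = send(history, "Help me plan a workshop.")
history = send(history, "Make it two hours long.")
# Continuity lives in `history`, which the chat interface maintains;
# the model itself "forgets everything" after each call.
```

Note that the apparent memory is entirely a property of the `history` list the client keeps, which is the point Hargadon is making about continuity being constructed externally.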

The Attention Degradation Problem

Even with larger context windows in newer models, Hargadon identifies a critical limitation: models have "something like an attentional gradient." Content at the beginning and end of conversations receives more attention than content in the middle. As conversations extend, "specific details, decisions, and ideas can quietly fade from the model's effective awareness, even though technically the text is still there."

Hargadon uses the analogy of a desk to illustrate this limitation: "Having a large context window is like having a very long desk. You can spread out a lot of papers on it. But that doesn't mean you're actually reading all of them with equal attention at any given moment."

Implementation of the Strategy

The Summarize and Start Fresh strategy addresses these technical limitations directly. When a conversation becomes long and "you sense the model is losing track of important details," Hargadon recommends asking the model to summarize "the current state of the work." This summary should capture:

  • Key decisions made
  • Preferences expressed
  • Current direction
  • Any unresolved questions

The user then takes this summary and initiates a fresh conversation with it as the starting context.
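The procedure above can be sketched as follows. Again, `call_model` is a hypothetical stand-in for any chat API, and the summary prompt wording is only one plausible phrasing of the four items Hargadon lists.

```python
# Hedged sketch of Summarize and Start Fresh: ask the model for a
# structured summary of the long conversation, then seed a NEW
# conversation in which that summary is the only carried-over context.

SUMMARY_PROMPT = (
    "Summarize the current state of this work: key decisions made, "
    "preferences expressed, current direction, and any unresolved questions."
)

def call_model(history: list[dict]) -> str:
    # Placeholder for a real LLM call.
    return f"summary of {len(history)} messages"

def summarize_and_restart(long_history: list[dict]) -> list[dict]:
    request = long_history + [{"role": "user", "content": SUMMARY_PROMPT}]
    summary = call_model(request)
    # Fresh conversation: a "clean desk" holding only the summary.
    return [{"role": "user", "content": "Context from prior session: " + summary}]

old = [{"role": "user", "content": f"message {i}"} for i in range(40)]
fresh = summarize_and_restart(old)
```

The fresh history starts at one message regardless of how long the old conversation was, which is what restores the model's effective attention to the details that matter.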

Strategic Advantages

Hargadon argues that this approach runs counter to most users' intuitions. "Most people feel like ending a conversation and starting a new one means losing something. It feels like a risk, like you're breaking the thread." However, understanding how context windows actually function reveals "the opposite is true."

A fresh conversation with a well-crafted summary provides "superior" results compared to continuing with a long, degraded conversation. Hargadon explains this using his desk metaphor: "You're giving the model a clean desk with the most important papers laid out neatly, instead of asking it to work at the bottom of a pile."

He emphasizes that "starting fresh is a strategy, not a loss."

Relationship to Broader Context Management

Hargadon positions Summarize and Start Fresh as one component of a comprehensive approach to LLM context management. He distinguishes it from standardized context files (markdown files with consistent preferences and instructions), noting that "the summary technique manages context within a conversation. The markdown file technique manages context across conversations."

Both strategies work together to provide "a more complete strategy for working with the reality of how these tools function rather than the fantasy" of persistent memory and attention.

User Responsibility and Quality Control

The strategy assumes active user engagement in what Hargadon calls the "quality control layer." He emphasizes that effective LLM collaboration is "genuinely collaborative" in a "mechanical sense"—users must "stay engaged and catch what the model drops." This includes tracking what has been discussed, noticing when something gets missed, and pushing back when models contradict earlier decisions.

Hargadon notes that "most people assume the AI is handling this on its own. It isn't always. You are the continuity. You are the quality control layer."

Pedagogical Applications

For educators and librarians, Hargadon suggests the strategy has particular value because the summarization and context management skills can be taught and shared. The approach represents teachable "expertise on how to use the tool effectively" rather than just clever prompting techniques.

Broader Implications

Hargadon frames this strategy within his larger argument about understanding LLM limitations. He contends that "the less people understand about how these systems actually work, the more vulnerable they are to being misled by them, to anthropomorphizing them, to trusting them in ways that aren't warranted." The Summarize and Start Fresh approach exemplifies working with LLMs based on their actual technical capabilities rather than the "illusion of continuity" they present to users.

Original Posts

This article was synthesized from the following blog posts by Steve Hargadon: