Part 1 of 4
🤖 Ghostwritten by Claude · Curated by Tom Hundley
This article was written by Claude and curated for publication by Tom Hundley.
The art of forgetting what doesn't matter.
Gemini 1.5 Pro has a 2-million-token context window. Claude 3.5 has 200k.
Great, you think. I'll just stuff the whole database in there.
Do not do this.
- **Sliding window.** Keep the last N messages. Drop the rest.
- **Summarization.** Periodically ask the LLM to summarize the conversation so far, then send [Summary] + [Last 5 Messages] (sketched below).
- **Retrieval.** Store every message in a vector DB. Retrieve only relevant past messages based on the current query.
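Here is a minimal sketch of the summarization-plus-window hybrid in Python. The `llm_call` argument, the window size, and the summarize threshold are placeholders for whatever model client and limits you actually use; nothing here is tied to a specific vendor's API.

```python
from typing import Callable

class ConversationMemory:
    """Rolling summary plus a sliding window of recent messages."""

    def __init__(self, llm_call: Callable[[str], str],
                 window_size: int = 5, summarize_every: int = 20):
        self.llm_call = llm_call          # your chat-completion wrapper
        self.window_size = window_size
        self.summarize_every = summarize_every
        self.summary = ""                 # compressed history
        self.messages = []                # recent verbatim messages

    def add(self, role: str, content: str) -> None:
        self.messages.append({"role": role, "content": content})
        # Fold older messages into the running summary once the backlog grows.
        if len(self.messages) > self.summarize_every:
            older = self.messages[:-self.window_size]
            transcript = "\n".join(f"{m['role']}: {m['content']}" for m in older)
            self.summary = self.llm_call(
                "Summarize this conversation, keeping key facts and decisions.\n"
                f"Previous summary: {self.summary}\n{transcript}"
            )
            self.messages = self.messages[-self.window_size:]

    def context(self) -> str:
        """What actually goes into the prompt: [Summary] + [Last N Messages]."""
        recent = "\n".join(f"{m['role']}: {m['content']}" for m in self.messages)
        return (f"Summary of earlier conversation:\n{self.summary}\n\n"
                f"Recent messages:\n{recent}")
```

The same idea extends to the retrieval strategy: keep the full transcript in a vector store and pull only the messages relevant to the current query into `context()`.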
Treat tokens like RAM. Just because you have 64 GB doesn't mean you should write inefficient code. Optimize for the working set of memory.
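To make the working-set idea concrete, here is a rough token-budget sketch: estimate the cost of each message and keep only the newest ones that fit. The 4-characters-per-token estimate and the 8k budget are illustrative assumptions; swap in a real tokenizer and your model's actual limits.

```python
# Rough token budgeting: keep only what fits in the working set.
TOKEN_BUDGET = 8_000

def estimate_tokens(text: str) -> int:
    # ~4 characters per token is a crude rule of thumb, not a real tokenizer.
    return max(1, len(text) // 4)

def fit_to_budget(messages: list[dict], budget: int = TOKEN_BUDGET) -> list[dict]:
    """Walk backwards from the newest message, keeping whatever fits the budget."""
    kept, used = [], 0
    for msg in reversed(messages):
        cost = estimate_tokens(msg["content"])
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))
```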
This article is a live example of the AI-enabled content workflow we build for clients.
| Stage | Who | What |
|---|---|---|
| Research | Claude Opus 4.5 | Analyzed current industry data, studies, and expert sources |
| Curation | Tom Hundley | Directed focus, validated relevance, ensured strategic alignment |
| Drafting | Claude Opus 4.5 | Synthesized research into structured narrative |
| Fact-Check | Human + AI | All statistics linked to original sources below |
| Editorial | Tom Hundley | Final review for accuracy, tone, and value |
The result: Research-backed content in a fraction of the time, with full transparency and human accountability.
We're an AI enablement company. It would be strange if we didn't use AI to create content. More importantly, we believe the future of professional content isn't AI vs. human: it's AI amplifying human expertise.
Every article we publish demonstrates the same workflow we help clients implement: AI handles the heavy lifting of research and drafting, humans provide direction, judgment, and accountability.
Want to build this capability for your team? Let's talk about AI enablement →