Part 1 of 4
🤖 Ghostwritten by Claude · Curated by Tom Hundley
This article was written by Claude and curated for publication by Tom Hundley.
Moving beyond Chat History to true Engineering Robustness.
One of our clients recently complained: "Our customer service bot is great for 3 turns, but by turn 10, it forgets what product the customer is talking about."
This is the classic State Management failure.
Most "Chat with your PDF" tutorials use a simple architecture:
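Roughly: every turn appends the user's message to the chat history and sends the whole transcript back to the model. A minimal sketch of that loop (the `call_llm` helper is a hypothetical stand-in for whatever chat-completion API you use):

```python
# Naive "chat history" agent: the growing transcript is the only memory.
history = [{"role": "system", "content": "You are a returns assistant."}]

def chat_turn(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    reply = call_llm(history)  # hypothetical: wraps whatever chat-completion API you use
    history.append({"role": "assistant", "content": reply})
    # Nothing here tracks order IDs, approvals, or process steps;
    # by turn 10, that context is buried somewhere in `history`.
    return reply
```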
This works for QA. It fails for Processes.
If a user says, "I want to return this," and the bot asks "Why?", and then 10 messages later the user says "Okay, process it," the bot might have drifted. It might not remember if the return was approved or rejected 5 messages ago.
Reliable agents need State, not just Memory.
To build reliable agents, we move away from free-flowing conversation loops and towards Graphs.
Tools like LangGraph (from LangChain) or AutoGen allow us to define agents as nodes in a graph.
Instead of a prompt saying "You are a returns assistant," we define a State Machine:
- Triage: Is it a return? Go to Collect_Info. Is it a complaint? Go to Human_Escalation.
- Collect_Info: Fill the Order_ID and Reason slots, then go to Validate_Policy.
- Validate_Policy: Check that the Purchase_Date is within the 30-day window. If True, go to Issue_Label; if False, go to Reject_Return (a deterministic check, sketched below).
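A step like Validate_Policy does not need an LLM at all; it is an ordinary business rule. A minimal sketch (the plain state dictionary and `purchase_date` field here are illustrative placeholders, not part of the LangGraph example further down):

```python
from datetime import date, timedelta

RETURN_WINDOW = timedelta(days=30)

def validate_policy(state: dict) -> str:
    # Deterministic rule: the return window is a date comparison, not LLM judgment.
    purchase_date: date = state["purchase_date"]  # illustrative field collected earlier
    if date.today() - purchase_date <= RETURN_WINDOW:
        return "issue_label"
    return "reject_return"
```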
By defining a graph, the state becomes an explicit object rather than something buried in chat history. We can persist it to a database (e.g., Postgres); if the server crashes, we reload the state and continue exactly where we left off.

Here is what that looks like in LangGraph:

```python
from typing import Optional, TypedDict
from langchain_core.messages import BaseMessage
from langgraph.graph import StateGraph, START

class AgentState(TypedDict):
    messages: list[BaseMessage]
    order_id: Optional[str]
    return_status: str

def triage_node(state: AgentState):
    # Logic to classify intent from the latest user message
    last_message = state["messages"][-1].content.lower()
    if "return" in last_message:
        return {"return_status": "return_requested"}
    return {"return_status": "general_chat"}

def route_after_triage(state: AgentState) -> str:
    # Conditional edge: pick the next node based on the recorded status
    return "collect_info" if state["return_status"] == "return_requested" else "general_chat"

workflow = StateGraph(AgentState)
workflow.add_node("triage", triage_node)
workflow.add_node("collect_info", collect_info_node)
workflow.add_edge(START, "triage")
workflow.add_conditional_edges("triage", route_after_triage)
# ... define the remaining nodes and edges ...
app = workflow.compile()
```

If you are building an enterprise agent, stop relying on the LLM to remember what it's doing. Force it to remember by architecting a State Machine.
This is the difference between a demo that looks cool and a system that processes $1M in transactions without hallucinating.
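One practical note on the persistence point above: in LangGraph, this is handled by compiling the graph with a checkpointer. A minimal sketch, assuming the `workflow` graph from the example and the built-in in-memory `MemorySaver` (a Postgres-backed checkpointer from the `langgraph-checkpoint-postgres` package plugs in the same way; the `thread_id` value is illustrative):

```python
from langchain_core.messages import HumanMessage
from langgraph.checkpoint.memory import MemorySaver

# Compile with a checkpointer so every state update is saved outside the process.
app = workflow.compile(checkpointer=MemorySaver())

# Each conversation runs under a thread_id; invoking again with the same id
# reloads the saved AgentState and continues exactly where it left off.
config = {"configurable": {"thread_id": "order-12345"}}
app.invoke({"messages": [HumanMessage(content="I want to return this.")]}, config)
```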
This article is a live example of the AI-enabled content workflow we build for clients.
| Stage | Who | What |
|---|---|---|
| Research | Claude Opus 4.5 | Analyzed current industry data, studies, and expert sources |
| Curation | Tom Hundley | Directed focus, validated relevance, ensured strategic alignment |
| Drafting | Claude Opus 4.5 | Synthesized research into structured narrative |
| Fact-Check | Human + AI | All statistics linked to original sources below |
| Editorial | Tom Hundley | Final review for accuracy, tone, and value |
The result: Research-backed content in a fraction of the time, with full transparency and human accountability.
We're an AI enablement company. It would be strange if we didn't use AI to create content. But more importantly, we believe the future of professional content isn't AI vs. Human; it's AI amplifying human expertise.
Every article we publish demonstrates the same workflow we help clients implement: AI handles the heavy lifting of research and drafting, humans provide direction, judgment, and accountability.
Want to build this capability for your team? Let's talk about AI enablement →