TkDD: Ticket-Driven Development and the Knowledge We’re Throwing Away
The value of the things we don't keep.
TL;DR
Agentic coding sessions generate substantial contextual information—research, decisions, alternatives—that vanishes when the session ends
TDD captures behavior expectations; SDD captures requirements; neither captures the evolution of thinking as you figure things out
Ticket-Driven Development (TkDD) treats tickets as persistent knowledge containers for human-AI collaboration, not just task assignments
The workflow: Claude.AI builds specs → Linear captures them via MCP → coding agents pull work and write findings back via mcp-ticketer → knowledge accumulates instead of evaporating
TkDD is the opposite of vibe coding: structured context that compounds over time
I’ve been thinking about everything we throw away.
Last week I spent four hours with Claude Code researching authentication approaches for a SmartThings integration. Evaluated OAuth flows, considered token refresh strategies, dug into the API documentation, tested three different implementation patterns. The session produced maybe 200 lines of actual code. But the research—the reasoning about why I chose approach A over approach B, the edge cases I discovered, the documentation inconsistencies I noted—that took ten times longer to develop than the code itself.
And it’s gone. Buried somewhere in a chat history I’ll never scroll back through. Two days later, a colleague asked why I didn’t use the SmartThings webhook approach. I couldn’t remember. I’d evaluated it—I was 90% sure I had a good reason for rejecting it—but the rationale had evaporated. Ended up spending another hour re-researching something I’d already figured out.
That keeps happening to me. And I suspect it happens to you too.
The Knowledge Hemorrhage Problem
Every agentic coding session bleeds information. You meta-prompt, the agent refines the prompt then researches, you discuss, it proposes, you refine, it implements. Along the way you’re building context—understanding the problem space, eliminating dead ends, discovering constraints. That context is often more useful than the code itself.
But where does it go?
The code lands in a commit. Maybe you write a comment. The rest? Scattered across chat windows, lost in context limits, forgotten by tomorrow. Eleanor Berger calls this the shift from “interactive AI” to “asynchronous agents”—but even she focuses on the task delegation pattern, not the knowledge loss.
The irony gets me. We have these incredibly capable reasoning systems generating insights, and we’re treating their output like scratch paper. Use it once, toss it.
Even within a single project this gets painful. Three weeks into mcp-smarterthings, I needed to revisit the rate limiting approach. Had I already evaluated exponential backoff versus fixed delays? What were the SmartThings API’s actual limits versus what their docs claimed? I’d done that research. Somewhere. In some chat window. On some day. I ended up re-deriving half of it from scratch because finding the original conversation would’ve taken longer than just figuring it out again.
The Problem With TDD and SDD
I don’t use Test-Driven Development. Conceptually elegant, sure—write the test first, watch it fail, make it pass. But TDD assumes you know what you’re building before you build it. When you’re working to a spec, great. When you’re figuring things out as you go? Too restrictive. You end up writing tests for behavior you’ll change three times before lunch.
Same problem with Spec-Driven Development. You can do the research and write the spec. But as they say, a plan is only good until you get punched in the face. The spec captures your initial understanding. It doesn’t capture how that understanding evolved when you hit the first unexpected constraint. Or the second. Or the fifth.
What both paradigms miss: the thought process and changes to it.
That’s what I actually need when I come back to a project. Not the final answer—the path to it. The dead ends explored. The assumptions challenged. The “wait, that won’t work because...” moments. The pivots.
Paradigm | What It Captures | What It Loses
TDD | Behavior expectations via tests | Research, decisions, context, evolution of thinking
SDD | Initial requirements and architecture | How understanding changed during implementation
Vibe Coding | Nothing structured | Everything—just vibes and prayers
TkDD | Work units + context + decisions + evolution of thinking | Still figuring this out
Tests document what the code should do. Specs document what you planned to build. Neither documents how you figured out what to build—which is exactly what you need when you come back in six months and can’t remember why you chose approach B over approach A.
TkDD: Tickets as Knowledge Containers
What I’ve been experimenting with lately: treating tickets as structured knowledge artifacts for human-AI collaboration, not just task assignments.
A ticket can hold:
The problem statement (not just “implement auth” but why and what constraints)
Research conducted (links, findings, dead ends identified)
Alternatives considered (and why they were rejected)
Decision made (with rationale)
How thinking evolved (initial approach → why it didn’t work → final approach)
Implementation notes (gotchas, edge cases discovered)
Links to related work (other tickets, PRs, documentation)
Tickets persist. They’re searchable. They have natural hierarchy—epic → story → task maps cleanly to context → decision → implementation. They survive sessions, agents, team members.
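To make that concrete, here's a rough sketch of the knowledge-container idea expressed as a type. The field names are mine, not a schema from Linear or mcp-ticketer; it's just the shape I'm aiming for:
// Sketch only: a ticket as a knowledge container, not any tool's real schema
interface KnowledgeTicket {
  id: string;                                   // Linear issue, GitHub Issue, etc.
  level: 'epic' | 'story' | 'task';             // maps to context -> decision -> implementation
  problem: string;                              // why, and under what constraints
  research: string[];                           // links, findings, dead ends identified
  alternatives: { option: string; rejectedBecause: string }[];
  decision: { choice: string; rationale: string };
  evolution: string[];                          // "started with X, hit Y, pivoted to Z"
  implementationNotes: string[];                // gotchas, edge cases discovered
  related: string[];                            // other tickets, PRs, documentation
}
Nothing exotic. The point is that decision, alternatives, and evolution are first-class fields, not something you hope survives in a chat transcript.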
The tooling is catching up to this idea. GitHub Copilot’s coding agent now accepts GitHub Issues as input—you assign an issue to @copilot and it works autonomously. Devin integrates directly with Linear, triggering work when you add a label. Port.io documented an entire workflow for routing Jira tickets through GitHub Issues to Copilot. deepsense.ai built what they call an “AI Teammate” that reads Jira tickets and produces PRs.
The pattern is emerging. But most implementations focus on the task execution side—ticket goes in, PR comes out. They’re not capturing the knowledge generated along the way.
That’s the gap I’m trying to fill.
From aitrackdown to mcp-ticketer: The Human-AI Collaboration Insight
I built aitrackdown as an AI-first ticketing system. The idea was straightforward: design a ticket structure specifically for AI agents to consume—structured fields, clear acceptance criteria, machine-readable context. And it worked. To a degree.
But here’s what I got wrong: tickets aren’t just for AI. They’re for human-AI interaction.
The tooling that lets humans read and respond to tickets matters just as much as the tooling that lets agents process them. A ticket perfectly structured for Claude Code but unreadable by your PM is a failure. A ticket that captures agent findings but buries them in JSON blobs nobody will ever review? Also a failure.
That insight flipped my approach. I stopped trying to build for AI and started building for the collaboration. That’s when mcp-ticketer happened.
The mcp-ticketer Approach
mcp-ticketer works with multiple ticketing systems. Not because I couldn’t pick one, but because that’s where the work actually lives.
I use GitHub Issues to track reported problems—that’s where users file bugs, that’s where they should stay. Linear handles my personal projects because I love the interface and the keyboard shortcuts don’t make me want to throw my laptop. Client work? Some clients use Linear, others are Jira shops. You meet people where they are.
aitrackdown still exists in the stack. I rarely use it these days. The AI-first structure turned out to matter less than the human-AI collaboration layer on top.
The critical capability I built into mcp-ticketer: agents can write to tickets, not just read from them.
This isn’t standard behavior in most integrations. The typical pattern is ticket-in, PR-out. mcp-ticketer lets an agent update the ticket as it works. When a coding agent hits a decision point, it can record what it learned. When it discovers an edge case, that goes into the ticket. When it rejects an approach, the reasoning gets captured. The ticket becomes a living document of the work—not just the assignment, but the execution.
More importantly: when your thinking changes, the ticket captures that evolution. “Started with approach X, but discovered Y constraint, pivoted to Z.” That’s the knowledge that disappears in every other workflow.
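Here's a minimal sketch of what that write-back can look like from the agent's side. The updateTicket function is a stand-in for whatever tool call the agent actually makes; this is not mcp-ticketer's real interface, just the shape of the update:
// Hypothetical write-back; updateTicket stands in for the real tool call.
type TicketUpdate = { comment: string; labels?: string[] };
declare function updateTicket(ticketId: string, update: TicketUpdate): Promise<void>;
async function recordPivot(ticketId: string): Promise<void> {
  await updateTicket(ticketId, {
    comment: [
      'Started with: poll device status on a fixed interval.',
      'Discovered: rate limits are tighter in practice than the docs suggest.',
      'Pivoted to: event subscriptions with a slow polling fallback.',
    ].join('\n'),
    labels: ['decision-recorded'],
  });
}
The content is the point: the pivot and the reason for it, recorded at the moment it happens, somewhere a human can read it later.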
The Workflow: Thinking and Doing, Separated
Here’s how the pieces fit together in my current setup:
I start in Claude.AI—the web interface, not Claude Code. This is deliberate. Claude.AI is for thinking. Researching approaches, discussing tradeoffs, building specifications. The Linear MCP connector lets me create tickets directly from the conversation.
A session might go like this:
“Let’s figure out how to handle SmartThings device state synchronization”
[Research, discussion, alternatives considered]
“Create a Linear ticket capturing this approach”
[Ticket created with full context, not just a one-liner]
The specification lives in the ticket. The research lives in the ticket. The decision rationale lives in the ticket.
Then the coding agent takes over. Claude Code pulls work from tickets via mcp-ticketer. The ticket provides context—not just “implement sync” but the full specification, the constraints identified, the approach selected.
The agent works. When it hits decisions, it updates the ticket. When it discovers undocumented API behavior, that goes in the ticket. When the original approach doesn’t work and thinking evolves—that gets captured too. When it completes, the implementation notes go in the ticket.
The result: knowledge that compounds. Next time I need to work on this codebase—or a similar one—the tickets are there. Searchable. Structured. I’m not starting from zero. I’m not re-researching things I already figured out.
The claude-code-skills repository shows what this can look like at scale—29 production skills implementing full Agile automation with Linear, including Epic → Story → Task hierarchy management. That’s the direction: tickets as the coordination layer for AI-augmented development.
mcp-smarterthings: Knowledge Capture in Action
The mcp-smarterthings project became my testing ground for TkDD. SmartThings integration has enough complexity—OAuth, device capabilities, real-time events, state synchronization—that I knew I’d lose critical decisions if I didn’t capture them somewhere.
Here’s what ticket-captured knowledge actually looks like. During the implementation, the agent documented complete code samples for the SmartThings API integration patterns:
// Example: Device capability handler pattern
// (assumes the @smartthings/core-sdk client, authenticated with a personal access token)
import { SmartThingsClient, BearerTokenAuthenticator } from '@smartthings/core-sdk';
const smartthings = new SmartThingsClient(new BearerTokenAuthenticator(process.env.SMARTTHINGS_TOKEN ?? ''));
const handleCapability = async (deviceId: string, capability: string) => {
  const device = await smartthings.devices.get(deviceId);
  const status = await smartthings.devices.getCapabilityStatus(
    deviceId,
    'main', // component ID; most devices expose capabilities on the 'main' component
    capability
  );
  return { device, status };
};
This kind of artifact was originally designed for the classic PM-to-engineer handoff: "Here's what we need, here's roughly how it should work, go build it." In a TkDD workflow it serves a different purpose: persistent knowledge, available for any future human or agent to review.
Six weeks from now when I need to add a new capability handler? I don’t have to re-derive the pattern. The ticket has it. When a different agent picks up related work? Context is already there. When I’m explaining the architecture to a collaborator? I can point them to the ticket instead of recreating the explanation from memory.
The tickets in that Linear project contain:
Initial research on SmartThings API versions and deprecation timelines
Decision rationale for choosing the new API over legacy endpoints
Code samples for common patterns (auth, device commands, event subscriptions)
Edge cases discovered during implementation
Links between related tickets showing how the architecture evolved
That last point matters. The tickets aren’t isolated—they reference each other. You can trace how “implement basic device control” led to “handle rate limiting” led to “add request queuing” led to “implement webhook fallback.” The evolution of understanding is visible.
Building Context, Not Burning It
LLMs need context to be effective. That’s not news. But where does context come from?
Right now, mostly from re-explaining things every session. “This is a Next.js project, we’re using TypeScript, here’s the authentication pattern, here’s why we chose this approach...” Over and over.
TkDD builds a structured context base over time. The tickets contain the decisions. The tickets contain the rationale. The tickets contain the evolution of thinking—how you got from “I think we should do X” to “actually Y works better because...”
When you start a new session, you’re not starting from scratch—you’re starting with accumulated knowledge.
Pull in the relevant tickets. The agent has context. Not just “what to do” but “why we’re doing it this way” and “what we already tried” and “what constraints matter” and “how our understanding changed.”
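Mechanically, that can be as simple as a search over past tickets, reusing the KnowledgeTicket shape sketched earlier. The searchTickets function here is illustrative; any searchable ticket backend will do:
// Hypothetical context assembly at the start of a session.
declare function searchTickets(query: string): Promise<KnowledgeTicket[]>;
async function buildContext(component: string): Promise<string> {
  const tickets = await searchTickets(component);
  // Hand the agent decisions and how they evolved, not just task titles.
  return tickets
    .map(t => [`${t.id}: ${t.decision.rationale}`, ...t.evolution].join('\n'))
    .join('\n\n');
}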
Cross-project learning becomes possible too. Authentication patterns you figured out on project A? The tickets document the research—including the dead ends. When project B needs similar auth, you’re not re-deriving first principles. You’re not re-exploring the same dead ends.
The Paradigm Claim
Test-Driven Development: Tests define expected behavior. Assumes you know the behavior upfront.
Spec-Driven Development: Specifications define requirements. Assumes requirements survive contact with reality.
Ticket-Driven Development: Tickets define work units AND capture how understanding evolves while doing the work. The ticket is both the input and the output. Built for human-AI collaboration, not just AI consumption.
TDD asks: “What should this code do?” (Assumes you know.) SDD asks: “What are we trying to build?” (Assumes the plan survives.) TkDD asks: “What do we know, what are we learning, and how is our thinking changing?”
Vibe coding treats every session as a fresh start. TkDD treats every session as a contribution to an accumulating knowledge base—one that captures not just conclusions, but the reasoning that got you there.
I’m still working out the edges of this. The tooling is imperfect—mcp-ticketer exists because nothing else handled the multi-system reality of how I actually work. The workflow requires discipline that pure vibe coding doesn’t demand.
But the knowledge loss problem is real. I’ve wasted hours re-researching things I’d already figured out. I’ve made decisions twice because I couldn’t find where I’d made them the first time. I’ve watched context evaporate at the end of every session.
We can do better than that.
I’m Bob Matsuoka, writing about agentic coding and AI-powered development at HyperDev. For more on multi-agent workflows, see my analysis of Claude Code’s orchestration capabilities or my deep dive into the knowledge management problem in AI development.