The Canuts were Lyon’s master silk weavers. Legendary craftspeople. Their identity was wrapped up in thread manipulation—every pass of the shuttle, every tension adjustment, every pattern emerging from their hands.
Then Jacquard showed up with his programmable loom. Punch cards replaced manual thread selection. The Canuts rioted. Some adapted. Many didn’t.
The ones who survived weren’t the fastest weavers. They were the ones who realized the job had changed. Post-Jacquard operators didn’t weave. They designed patterns, translated those patterns to punch cards, loaded the cards, supervised execution, and quality-checked output. The skill shifted from “manipulate threads” to “program the loom.”
Sound familiar?
The Numbers Don’t Lie
Here’s what happened to the Canuts:
Year | Master Weavers | Wages | What Changed
1789 | 5,575 | Baseline | Peak of craft era
1804 | ~5,000 | Declining | Jacquard loom introduced
1812 | ~4,500 | -20% | 11,000 Jacquard looms in France
1830 | 3,000-4,000 | -50% | Wages half of 1810 levels
1831 | ~3,500 | Crisis | First Canut revolt, 600+ dead (estimates vary)
1834 | ~3,000 | Crisis | Second revolt, ~10,000 imprisoned/deported
Sources: Encyclopedia.com, Marxist.com analysis, Wikipedia
Total silk workers stayed around 30,000. The looms didn’t eliminate jobs—they compressed the master craftsman class while creating lower-wage operator roles. By 1831, 308 silk merchants controlled pricing for 5,575 master weavers, who in turn managed 20,000+ workers laboring 14-18 hour days in cramped workshops.
The Jacquard loom didn’t kill weaving. It commoditized the skill. Pattern selection—the highest-value cognitive work—moved to punch cards. What remained was loading, monitoring, and maintenance. The same number of people worked. Fewer could call themselves craftsmen.
The cautionary tale isn’t mass unemployment. It’s wage collapse and status compression for those who kept doing the same job while the job’s value eroded beneath them.
The IDE isn’t the issue. The mindset is.
If you see yourself as “person who writes lines of code,” you’re a Canut. Nothing wrong with that—master craftspeople, all of them. But when the loom becomes programmable, thread manipulation skills matter less than pattern design skills.
My own numbers back this up. Since Opus 4.5 dropped November 24, I’ve pushed 77 PRs, 3,167 commits, and 2.9 million lines across 27 repositories. All through Claude Code and claude-mpm. About 54% is Markdown—specs, design docs, research notes. The punch cards. Roughly 1.3 million lines of actual Python, TypeScript, and Svelte came out the other end. These are throughput metrics, not quality proxies—but throughput matters when your constraint is “how much can I ship this month.” Right now I’m running 5 MPM instances across ports 8765-8769, each orchestrating its own Claude Code session. The claude-mpm ecosystem accounts for 55% of my commit activity. Production infrastructure, not experimental. I haven’t opened VS Code for anything except quick file diffs in six weeks.
I’m not weaving anymore. I’m programming looms.
Don’t believe me? Here’s what the engineers actually building these tools are saying:
Boris Cherny (Claude Code Creator, Anthropic)
According to interviews circulated in late 2025, Cherny didn’t open an IDE for the entire month of December. His reported December output: 259 PRs, 497 commits, 40,000 lines added—all AI-written through the tool he created.
But here’s the part that matters: he reportedly runs 5-15 Claude instances simultaneously. Not one agent typing code. A swarm. He orchestrates rather than implements.
His interview with Developing.dev captures the psychological adjustment: “Software engineering is radically changing, and the hardest part even for early adopters and practitioners like us is to continue to re-adjust our expectations.”
Per Coder’s analysis, Anthropic’s internal estimates show engineering output jumped 70% per engineer even as headcount tripled. Cherny’s estimate: a project that would’ve required 20-30 engineers working 2 years at Meta now takes 5 engineers and 6 months. Trending toward a single engineer.
Andrej Karpathy (OpenAI Co-Founder)
Karpathy coined “vibe coding” in February 2025—the practice of giving in to AI suggestions without fully understanding the code. Collins Dictionary named it Word of the Year. That’s how fast this entered the lexicon.
But his position evolved. By April 2025, he acknowledged vibe coding becomes a “painful slog” for deployed production apps. By October, he released nanochat—8,000 lines, entirely hand-coded—calling AI tools “net unhelpful” for novel work that sits outside training data.
His December 2025 take is the most honest assessment I’ve seen: “I’ve never felt this much behind as a programmer. The profession is being dramatically refactored.”
The guy who named the movement now feels behind. Let that sink in.
Google Engineers
Jaana Dogan, Principal Engineer on the Gemini API team, dropped a bomb on Twitter: Claude Code replicated in one hour what her team spent a year building. A toy version, she clarified. But directionally significant.
Her broader observation: “In 2025, they can create and restructure entire codebases. Quality and efficiency gains beyond what anyone could have imagined.”
Sundar Pichai confirmed that over 25% of new Google code is now AI-generated. A quarter of all new code at one of the world’s largest engineering organizations. That’s not a pilot program.
Coinbase
Rob Witoff, Head of Platform at Coinbase, reported 90% speedups for code restructuring and test writing. Specific task categories, but when your fintech platform sees 90% acceleration on restructuring work, you stop asking whether AI coding tools are useful.
Steve Yegge (Amazon/Google Veteran, Sourcegraph)
Yegge’s been in this industry longer than most. His take borders on inflammatory: “If you’re still using an IDE to develop code by January 1st, 2025, you’re a bad engineer.”
He claims 12,000 lines of production code per day while spending $300 daily on AI tokens. He runs 3-4 agents simultaneously and co-authored Vibe Coding with Gene Kim.
He also built an entire issue tracker called “Beads” through pure vibe coding, as a proof of concept that the approach actually ships production software, not just demos.
The Register’s review captures the thesis: trust the AI. Let go of the illusion that you need to understand every line.
Open Source Maintainers
The shift isn’t limited to well-funded companies. Open source maintainers report the same pattern.
Simon Willison (Django creator) coined “vibe engineering” to distinguish production-quality AI work from casual demos. Vibe coding can be sloppy. Vibe engineering requires you to understand enough to guide the AI toward robust solutions.
Marc Nuri (Red Hat, Kubernetes MCP Server) went from 10-15 contributions per day to 25+. He migrated an entire frontend—193 files—in minutes using Claude Code.
Indragie Karunaratne (Mac developer since 2008) shipped a 20,000-line macOS app and wrote fewer than 1,000 lines by hand. 95%+ generated.
Meta
Zuckerberg predicted AI would replace “mid-level engineers” by 2025. Whether that’s happened is debatable. That he said it publicly isn’t. Meta is building internal AI coding agents with a stated goal: automate the implementation work that currently occupies the engineering org.
The Emerging Workflow Patterns
Across all these examples, certain patterns repeat:
Multiple parallel agents. Cherny runs 5-15. Yegge runs 3-4. I run 5. The single-agent model where you have one AI helping you code is already dated. Orchestration is the new skill.
Context engineering becomes critical. CLAUDE.md files in repositories. Specification documents before code. The upfront investment in context pays compound returns when agents can reference shared understanding.
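To make the context-engineering idea concrete, here is what a minimal CLAUDE.md might look like. The specific contents below are hypothetical, invented for illustration; the point is that every agent session starts from the same shared project context instead of rediscovering it.

```
# CLAUDE.md (example; contents are hypothetical)

## Project context
Monorepo for a ticketing service. Python backend, Svelte frontend.

## Conventions
- Run the test suite before proposing any commit.
- New endpoints need a written spec in docs/specs/ first.

## Boundaries
- Never modify database migrations without explicit approval.
```

A file like this is the modern punch card: the upfront specification that the loom executes against.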
CLI over IDE. Google previewed Antigravity (AI-first IDE) in November 2025. But the momentum favors terminal-based tools. Claude Code. Cursor. Command-line interfaces that agents understand natively.
Human-on-the-loop, not human-in-the-loop. Supervision, not co-creation. You review and redirect rather than collaborate keystroke-by-keystroke.
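The fan-out-then-review shape of these workflows can be sketched in a few lines of Python. Everything here is a toy model under stated assumptions: `run_agent` is a hypothetical stand-in for launching one agent session per task (in practice, a CLI session), not a real API.

```python
from concurrent.futures import ThreadPoolExecutor

def run_agent(task: str) -> str:
    """Stand-in for one agent session. A real setup would launch a
    CLI coding agent per task and capture its output for review."""
    return f"[draft] {task}"

def orchestrate(tasks: list[str], max_agents: int = 5) -> dict[str, str]:
    """Fan tasks out to parallel agent sessions and collect drafts.
    The human stays ON the loop: review happens after the fan-out,
    not keystroke-by-keystroke during it."""
    with ThreadPoolExecutor(max_workers=max_agents) as pool:
        drafts = list(pool.map(run_agent, tasks))
    # Supervision step: the orchestrator inspects each draft here,
    # accepting some and redirecting others back into the queue.
    return dict(zip(tasks, drafts))

results = orchestrate(["migrate auth module", "write API tests", "update docs"])
```

The design choice worth noticing: the unit of work is a task description, not a line of code. That is the thread-to-pattern shift in miniature.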
The Engineers Who Improve the Looms
Here’s what the Canut analogy misses: Jacquard’s 1804 loom wasn’t production-ready. The original had problems. Punch cards wore out. The mechanism jammed. Complex patterns exceeded the machine’s capacity. The silk coming out wasn’t consistent enough for commercial production.
It took decades of engineering to fix this. Card durability improved through material science. Tension systems got recalibrated. Contemporary estimates suggest 11,000 Jacquard looms were operating in France by 1812, up from essentially zero eight years earlier. That scaling didn’t happen because pattern masters got better at loading cards. It happened because engineers made the looms themselves more reliable.
Charles Babbage visited Lyon in 1840, obsessed not with silk but with the punch card concept. His Analytical Engine borrowed directly from Jacquard’s mechanism. He wasn’t weaving patterns. He was abstracting the loom’s logic into something more general.
The parallel to agentic coding infrastructure maps cleanly.
Pattern masters use Claude Code and orchestration frameworks to ship software. That’s valuable work. But someone has to build the MCP servers that give agents access to external tools. Someone has to write the orchestration layers that coordinate multiple agents without context collision. Someone has to optimize the vector databases that make retrieval-augmented generation actually work at scale.
That’s what I spend half my time on now. The claude-mpm ecosystem—the orchestration framework, the MCP vector search, the ticketing integration—that’s loom improvement, not pattern design. Making the infrastructure more reliable so pattern masters can trust it.
The Canuts who survived didn’t all become pattern masters. Some became loom mechanics. Some designed improvements to the punch card system. Some figured out how to chain looms together for industrial-scale production.
Agentic coding has the same split. You can master the patterns, or you can improve the looms. Both roles survive the transition. Thread manipulation shrinks in leverage.
The Caveats Worth Noting
Karpathy’s hand-coded nanochat provides the clearest counterexample. For novel work—concepts that sit outside training data—current AI struggles. He called AI tools “net unhelpful” for that specific project.
This tracks with the Jacquard parallel. The loom couldn’t weave patterns that hadn’t been programmed. Novel patterns still required human design. Punch cards automated execution, not invention.
Quality concerns persist for deployed production apps. The demos look impressive. The maintenance burden on generated code remains less studied.
What This Means
According to YC partners speaking at Demo Day, 25% of Winter 2025 batch companies had 95% AI-generated codebases. Startups are shipping with almost no hand-written code. They skipped the Canut phase entirely.
The productivity metrics cluster around consistent ranges:
70% improvement at Anthropic
90% speedups for specific tasks at Coinbase
2.5x daily contribution increase for Marc Nuri
3,167 commits in 44 days for my own work (72 per day average)
These aren’t marginal gains. They’re what happens when you stop manipulating threads and start programming looms.
The Mindset Shift
The Canuts who survived the Jacquard revolution weren’t the fastest weavers. They were the ones who recognized that “weaver” was becoming “loom operator” and then “pattern designer.” Some took a different path—becoming loom mechanics, improving the card systems, figuring out how to scale production.
The engineers in this article made the same recognition. Cherny running 15 agents isn’t “faster typing.” He’s programming looms, not working threads. Yegge spending $300/day on tokens isn’t “expensive autocomplete.” He’s investing in loom capacity.
The IDE is an artifact of the implementer mindset. If you see yourself fixing lines of code, you’re a Canut. Masterful. Skilled. And increasingly misaligned with how production software actually gets built.
Two paths forward survive the transition. Design patterns that looms execute—writing specs, orchestrating agents, supervising output. Or improve the looms themselves—build the infrastructure that makes agentic coding reliable at scale.
Thread manipulation shrinks in economic value.
The transition is underway. The question is whether you’ve noticed.
I’m Bob Matsuoka, writing about agentic coding and AI-powered development at HyperDev. For more on multi-agent orchestration, read my analysis of claude-mpm or my deep dive into the economics of AI token consumption.
Further Reading on the Canuts and Jacquard Loom: If you want to understand this history properly, start with La Maison des Canuts in Lyon’s Croix-Rousse district—the old weaving quarter. It’s part museum, part working workshop. I visited years ago and they still operate 19th-century Jacquard looms, still produce silk for clients worldwide. The fabric samples alone are worth the trip. The Encyclopedia.com entry on the Silk Workers’ Revolts covers the political and economic context. For the technical evolution of the loom itself, James Essinger’s Jacquard’s Web traces the line from punch cards to Babbage to modern computing.