Shumer’s Right About the Tsunami. His Advice Points at the Wrong Shore
The viral AI displacement post gets the diagnosis right and the prescription backward
Matt Shumer’s “Something Big Is Happening” went viral this week. If you are one of the few who haven’t read it, the argument runs like this: AI has crossed a capability threshold. GPT-5.3 Codex and Claude Opus 4.6 can complete complex projects autonomously. The displacement timeline is 1-5 years, not decades. Prepare accordingly.
He’s not wrong about the diagnosis. I’ve been writing about this transformation for nine months now, tracking my own productivity metrics as AI tools evolved from “fancy autocomplete” to something genuinely different. The capability leap is real. So is the timeline.
Where Shumer loses me is the prescription.
His advice: use premium AI tools, build financial reserves, pursue genuine interests, spend an hour daily experimenting. Seems sensible enough. But it gets what I believe the transformation actually requires completely backward.
The Diagnosis We Agree On
Credit where due: Shumer captures something most commentary misses.
The METR measurements he cites—AI task completion capacity doubling every seven months, now accelerating to four—match what I’ve observed in practice. Claude Code didn’t just get incrementally better between Opus 4.0 and 4.6. It crossed a threshold where orchestration became viable. Not “AI helps me code faster” but “AI completes projects while I supervise.”
My own numbers tell the story: 77 completed code changes across 27 different projects in six weeks. I run multiple AI assistants simultaneously, each working on its own task while I review the results. I haven’t opened my traditional coding software for actual development in months.
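For the mechanically curious, here is roughly what that looks like: a minimal Python sketch of the orchestration pattern, where several agent tasks run in parallel and the human contribution is the review at the end. The agent-cli command and its --task flag are placeholders, not a real tool; the actual tools and flags vary.

```python
import asyncio

# Hypothetical sketch of the workflow described above: several agent tasks run
# in parallel, and the human's job is the review once they finish.
# "agent-cli" and "--task" are placeholders, not a real command.

TASKS = [
    "Add retry logic to the webhook handler",
    "Write unit tests for the pricing module",
    "Update the README for the new config format",
]

async def run_agent(task: str) -> str:
    """Launch one agent on one task and capture its output for review."""
    proc = await asyncio.create_subprocess_exec(
        "agent-cli", "--task", task,  # placeholder command and flag
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.STDOUT,
    )
    out, _ = await proc.communicate()
    return out.decode()

async def main() -> None:
    # All tasks run concurrently; review happens when they come back.
    results = await asyncio.gather(*(run_agent(t) for t in TASKS))
    for task, result in zip(TASKS, results):
        print(f"=== {task} ===\n{result}\n")  # review point: accept or redirect

if __name__ == "__main__":
    asyncio.run(main())
```

The parallelism is the cheap part. The review loop at the bottom is where the actual work now lives.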
Shumer’s right that this changes things. Where he’s wrong is assuming the response is survival preparation.
The Problem with Survival Tips
“Build financial reserves.” “Pursue genuine interests rather than traditional career paths.” “Spend one hour daily experimenting.”
This is advice for people who expect to be displaced. It’s the response you’d give someone watching a wave approach—find high ground, protect what you can, hope you make it through.
But that framing assumes the wave destroys rather than transforms. History suggests otherwise.
I wrote recently about the Jacquard loom lesson. The Canuts were Lyon’s master silk weavers—legendary craftspeople whose identity was wrapped up in thread manipulation. When Jacquard’s programmable loom arrived in 1804, they rioted. Some adapted. Many didn’t.
Here’s what the numbers actually show: total silk workers stayed around 30,000 through the transition. The looms didn’t eliminate jobs—they compressed the master craftsman class while creating lower-wage operator roles. By 1831, 308 silk merchants controlled pricing for 5,575 master weavers managing 20,000+ workers.
The cautionary tale isn’t mass unemployment. It’s wage collapse and status compression for those who kept doing the same job while the job’s value eroded beneath them.
The Canuts who survived weren’t the fastest weavers. They were the ones who recognized that “weaver” was becoming “pattern designer” and “loom operator” and “machine mechanic.” The skill didn’t disappear. It changed shape.
What Shumer’s Advice Misses
“Spend one hour daily experimenting with AI tools.”
This is advice for a Canut. Practice with the new loom. Get comfortable with the interface. Learn the commands.
It completely misses what actually becomes valuable.
The Faros AI Productivity Paradox Report analyzed data across thousands of developers and found something telling: “Adoption skews toward less tenured engineers. Usage is highest among engineers who are newer to the company.”
Why? Because junior engineers face different constraints. Their bottleneck is navigating unfamiliar code, accelerating early contributions, learning system patterns. AI helps enormously with that.
Senior engineers showed lower adoption not because they’re Luddites, but because their constraints aren’t code-writing speed. Their bottleneck is “deep system knowledge and organizational context” that AI can’t access. Generating code faster doesn’t help when the constraint is understanding why the system works the way it does.
A University of Chicago Booth working paper found experienced developers were 5-6% more likely to successfully use AI agents for every standard deviation of work experience. Not because they typed better prompts, but because they used “plan-first” approaches, laying out objectives and steps before invoking AI.
Expertise improves the ability to delegate. That’s not something you learn from an hour of daily experimentation.
What’s Irreducibly Human
During a recent knowledge base project—120 commits over 9 days, roughly 90% Claude-assisted—I tracked where my time actually went.
The 12 human-only commits weren’t about implementation. They were:
Configuration tweaks requiring domain knowledge (model selection for specific use cases)
Debug logging when something felt wrong
Release management
One research document on architecture options
The human contributions were about judgment. Choosing the right model for email writing versus general queries. Knowing when the AI’s suggestion would create problems downstream. Understanding the client’s actual workflows well enough to structure the system appropriately.
What surprised me: the time savings didn’t come from faster typing. They came from eliminating iteration cycles between “write code” and “realize it doesn’t fit requirements.” Specifying clearly upfront meant fewer rewrites—but that specification work was irreducibly human.
The Qodo 2025 State of AI Coding survey found 65% of developers cite missing context as the primary barrier to shipping AI code without review. That “missing context” is exactly what Shumer’s advice doesn’t address:
Business model specifics: How supplier relationships actually work. Which data matters for a specific service model. Why certain integrations take priority.
Organizational constraints: Budget limitations. Timeline pressures. The technical capabilities of staff who’ll maintain the system.
Historical context: Why previous approaches to similar problems failed. What the client tried and rejected. Political dynamics around adoption.
None of this lives on the public web. It exists in Jira tickets, PowerPoint decks, Slack conversations, and institutional memory. You don’t acquire it through an hour of daily experimentation.
Pattern Masters, Not Refugees
Shumer frames AI as something happening TO workers. The response he offers is defensive: prepare for impact, build reserves, hope the wave passes.
The Jacquard lesson suggests a different frame. AI is changing WHAT the work is. The response isn’t preparation for displacement—it’s understanding what becomes valuable when implementation gets automated.
Two paths survived the Canut transition:
Pattern designers who translated vision into punch cards the loom could execute. Not thread manipulators—system architects who understood what patterns were possible and how to specify them precisely.
Loom improvers who made the infrastructure more reliable. Not operators—engineers who fixed the tension systems, improved card durability, figured out how to chain looms for industrial-scale production.
The agentic coding transition has the same split.
You can master the patterns—writing specs, orchestrating agents, supervising output. Or you can improve the looms—build the MCP servers, write the orchestration layers, optimize the vector databases that make retrieval-augmented generation work at scale.
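To make the “loom improver” side concrete: the heart of retrieval-augmented generation is a retrieval step that finds the few snippets of private context worth putting in front of the model. Here is a toy sketch of that step, with a bag-of-words stand-in where a real system would use learned embeddings and a vector database, so it runs without any external service.

```python
import math
from collections import Counter

# Toy illustration of the retrieval step behind RAG. Real systems use learned
# embeddings and a vector database; the bag-of-words vectors here are a
# stand-in so the sketch is self-contained.

DOCS = [
    "Supplier contracts renew every March; pricing is locked 60 days prior.",
    "The reporting pipeline writes nightly snapshots to the warehouse.",
    "Integration with the billing system failed in 2023 due to rate limits.",
]

def embed(text: str) -> Counter:
    """Stand-in embedding: a lowercase bag of words."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

# The retrieved snippets are what gets placed in front of the model,
# instead of dumping the whole corpus into the prompt.
print(retrieve("why did the billing integration fail?"))
```

Improving the loom means making that retrieval fast, accurate, and trustworthy at the scale of a real organization’s private context.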
Both roles survive. Thread manipulation shrinks in economic value.
Methodology Beats Stockpiling
The people seeing productivity gains from AI tools aren’t spending an hour daily experimenting. They’re developing methodology.
Microsoft Research field experiments across nearly 5,000 developers found 26% productivity gains with AI coding assistants—with less experienced developers showing higher adoption and greater improvements. But those gains came from structured workflows, integrated tooling, and verification processes—not casual usage.
The developers in the METR randomized controlled trial who were 19% slower with AI assistance? They were using AI the Shumer way—open a chat, ask a question, accept the output, repeat. No structured context. No optimized prompts. No verification layer. They felt faster while actually slowing down.
I’ve written about three golden rules that structure my own workflow: let AI write your prompts (research shows 17-50% improvement), make context searchable rather than merely present (the “Lost in the Middle” effect kills accuracy), and build verification into every workflow.
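What “build verification into every workflow” means in practice is small and unglamorous: a gate the output has to pass before you accept it. A minimal sketch, assuming pytest as the gate and a placeholder generate_patch() standing in for whatever agent produces the change:

```python
import subprocess

# Minimal sketch of the third rule: no agent output gets accepted without a
# verification gate. Here the gate is simply the project's test suite;
# generate_patch() is a placeholder for your coding agent of choice.

def generate_patch(task: str) -> None:
    """Placeholder: hand the task to your coding agent."""
    raise NotImplementedError("wire this to your agent")

def tests_pass() -> bool:
    """Run the test suite; only a green run counts as done."""
    result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    return result.returncode == 0

def attempt(task: str, max_tries: int = 3) -> bool:
    for _ in range(max_tries):
        generate_patch(task)   # the agent does the implementation
        if tests_pass():       # a human-defined gate decides acceptance
            return True
    return False               # escalate to a human instead of merging
```

The gate itself is a design decision: tests, type checks, a linter, a second model reviewing the first. The point is that acceptance is never the default.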
This isn’t what you learn from an hour of daily experimentation. It’s what you develop through deliberate methodology applied to real work with real stakes.
The Question Shumer Should Have Asked
The question isn’t “will I have a job in five years?”
It’s “what does my job become when implementation gets automated?”
For senior engineers, the answer is increasingly clear: subject matter expert plus systems architect. The person who translates ambiguous requirements into precise specifications. The person who identifies when agent outputs miss critical organizational context. The person who maintains system coherence across automated development workflows.
Jue Wang at Bain told MIT Technology Review that developers already spend only 20-40% of their time coding. The rest goes to analyzing problems, customer feedback, product strategy, administrative tasks.
AI doesn’t change what senior engineering is. It reveals what it always was.
The implementation layer was never the irreducible core. It was infrastructure—important, but increasingly invisible. What emerges when that layer automates is something both familiar and different. The same judgment work senior engineers always did, now concentrated and visible.
The Year of Software
Here’s what Shumer’s displacement framing completely misses: the long tail.
My friend Matt Rosenberg has zero experience building software. He’s a marketer who also manages a vacation rental property on Cape Cod. Over the past few weeks he built himself a revenue optimization tool—a proper one, with dynamic pricing recommendations based on local events, seasonal patterns, and competitor analysis.
He didn’t hire a developer. He didn’t buy enterprise software designed for property management chains. He built exactly what he needed for his specific situation.
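I haven’t seen Matt’s code, so the sketch below is purely hypothetical (invented numbers, invented rules), but it shows the general shape: domain knowledge about seasons, local events, and nearby competition turned into executable pricing logic.

```python
from datetime import date

# Hypothetical sketch of the kind of logic a tool like Matt's might encode.
# None of these numbers or rules come from his actual system; they illustrate
# how domain knowledge becomes executable pricing rules.

BASE_NIGHTLY = 250          # assumed off-season baseline, in dollars

SEASON_MULTIPLIER = {       # month -> demand multiplier (illustrative)
    6: 1.4, 7: 1.8, 8: 1.8, 9: 1.3,
}

LOCAL_EVENTS = {            # date -> event premium (illustrative)
    date(2025, 7, 4): 0.25,
}

def recommend_rate(night: date, competitor_median: float | None = None) -> float:
    rate = BASE_NIGHTLY * SEASON_MULTIPLIER.get(night.month, 1.0)
    rate *= 1.0 + LOCAL_EVENTS.get(night, 0.0)
    if competitor_median is not None:
        # Domain judgment encoded as a rule: never drift more than 15%
        # above the local median, whatever the seasonal model says.
        rate = min(rate, competitor_median * 1.15)
    return round(rate, 2)

print(recommend_rate(date(2025, 7, 4), competitor_median=310))
```

The code is trivial. The knowledge of which multipliers, which events, and which guardrails matter is not, and that is exactly what Matt brought to it.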
Here’s what matters about how he got there: Matt spent hours over several weeks doing a deep dive into the tools. Not casual experimentation—serious investment in understanding what AI coding assistants could and couldn’t do. His transformation from marketer to builder wasn’t magic. It was built on two things he already had: deep knowledge of UX from his marketing career, and years of accumulated expertise in the vacation rental space.
That combination—domain expertise plus serious tool investment—is a model for career transformation. Not everyone will replicate Matt’s results. But the path he took isn’t “spend an hour a day experimenting.” It’s “leverage what you already know deeply, and invest real time in learning to express it through new tools.”
Here are the economics that matter: Matt has a reasonable chance to recoup his effort by sharing this with other Cape Cod hosts—a small audience with the exact same problem. A smaller but real chance someone picks it up for broader distribution. Maybe it stays a side project. Maybe it becomes a micro-business serving vacation rental owners in seasonal markets. A larger concern would never take on a project with such a small TAM.
None of those paths existed before.
In the old model, Matt’s revenue optimizer would never exist. No developer would build it for one vacation rental property. No SaaS company would target Cape Cod vacation rentals as a market segment. The problem was real, Matt’s domain expertise was real, but the economics of software creation didn’t work.
Now they do. And so do the economics of software distribution. The same tools that let Matt build also let him iterate based on feedback from ten other hosts, add features they need, package it for sharing.
This is the year of software. Not because developers are being displaced—because software is finally reaching the long tail of problems that were never economical to solve. The domain expert who understands Cape Cod rental patterns better than any enterprise vendor can encode that knowledge into a working system and find the small audience that needs exactly that.
Shumer sees AI automating existing jobs. He misses AI creating new economic paths for people who were never developers in the first place.
The Canuts didn’t just become pattern masters and loom mechanics. Some became textile entrepreneurs who could suddenly afford custom patterns for small-batch production. The technology didn’t only change who did the work—it changed what work was possible.
This is the new normal if you learn to use the tools.
The Bottom Line
Shumer’s right that something big is happening. The capability threshold is real. The timeline is compressed. The transformation will affect every knowledge worker who touches a computer.
He’s wrong about what to do.
The response isn’t defensive preparation for displacement. It’s understanding what becomes valuable when AI handles implementation. It’s developing methodology for specification and orchestration. It’s acquiring the domain expertise and organizational context that AI can’t access.
And for the Matt Rosenbergs of the world—the domain experts who never learned to code—the response is recognizing that this is their year. The problems they understand better than anyone can finally become software.
Don’t stockpile. Don’t experiment an hour a day. Don’t prepare to be a refugee.
Become a pattern master. Or become a loom improver. Or become the domain expert who finally builds the tool that only you could specify.
The Canuts who survived didn’t out-weave the machines. They recognized that the job had changed shape and positioned themselves for what actually remained valuable.
The transformation is underway. The question isn’t whether you’ll make it through. It’s whether you’ve recognized what the job is becoming—and what new jobs are becoming possible.
I’m Bob Matsuoka, writing about agentic coding and AI-powered development at HyperDev. For more on the pattern master thesis, read my analysis of the Jacquard loom lesson or my deep dive into what remains irreducibly human.
Related reading:
Don’t Be a Canut—Be a Pattern Master - The Jacquard history with data
The Irreducibles: What a Pattern Master Does - Where human value actually sits
HyperDev’s Three Golden Rules - Methodology for professional AI work