Let’s start with what’s actually changed.
A year ago, bootstrapping a product meant setting up a Next.js repo, fiddling with Tailwind configs, wiring up auth flows, stitching in Supabase or Firebase, figuring out where to deploy, and hoping your initial routing didn’t collapse under tech debt two weeks later. Today? That same journey—scaffolding to semi-functional—is something AI agents can complete in under an hour.
That’s not incremental. It’s foundational. We’ve entered the first 70% era: where AI gets us most of the way, fast—and humans step in to finish the job.
The New Baseline
Agentic AI systems can now:
Scaffold fully typed TypeScript monorepos
Generate reusable UI with modern component libraries (Shadcn, Radix, Tailwind)
Spin up database schemas and create API routes (a sketch of one such route follows this list)
Bootstrap workflows with tools like Temporal or LangChain
Deploy to Vercel or Netlify without manual config
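To ground what that scaffolding looks like in practice, here’s a minimal sketch of the kind of typed API route an agent might generate on a fresh Next.js project. The route path and the `CreateUser` shape are illustrative assumptions, not the output of any particular tool:

```typescript
// app/api/users/route.ts: a Next.js App Router route handler.
// Zod validates input at the boundary; the inferred type keeps
// everything downstream fully typed with no duplicated definitions.
import { NextResponse } from "next/server";
import { z } from "zod";

// Hypothetical payload shape, for illustration only.
const CreateUser = z.object({
  email: z.string().email(),
  name: z.string().min(1),
});
type CreateUser = z.infer<typeof CreateUser>;

export async function POST(req: Request) {
  const parsed = CreateUser.safeParse(await req.json());
  if (!parsed.success) {
    // Malformed input gets a structured 400 instead of an unhandled throw.
    return NextResponse.json({ errors: parsed.error.flatten() }, { status: 400 });
  }
  const user: CreateUser = parsed.data;
  // Persistence stubbed out; a real scaffold would call the generated DB layer here.
  return NextResponse.json({ created: user }, { status: 201 });
}
```

None of this is hard code. The point is that an agent can produce dozens of routes like it, consistently typed and validated, in minutes.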
The result: your baseline dev loop is compressed. First commits come sooner. Hello worlds are full apps. The cognitive cost of “starting” is trending toward zero.
Most projects now begin with AI scaffolding—not from scratch.
But Here’s the Catch: The Last 30% Still Belongs to Humans
I believe this—contrary to the techno-optimists. And I have the battle scars to prove it.
Here’s where these systems fall short:
Integrating with legacy systems and less modern stacks (hello, Java)
Performance tuning across systems
System-level refactors that involve tough trade-offs
Architecture decisions spanning front end, back end, and infrastructure
Complex bug fixing, especially with async logic, ambiguous stack traces, or tricky edge cases (see the sketch after this list)
Polish loops, such as accessibility, nuanced design, or user testing feedback
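To make the async point concrete, here’s a contrived but representative example of the kind of bug that survives AI generation: `Array.prototype.forEach` ignores the promises its callback returns, so the function finishes before any work completes. The `charge` helper is a made-up stand-in for a real payment call.

```typescript
// A classic async pitfall: Array.prototype.forEach does not await its
// callback, so every charge() is still in flight when the function returns.
async function chargeAll(amounts: number[]): Promise<number> {
  let total = 0;
  amounts.forEach(async (amount) => {
    await charge(amount); // fire-and-forget: nothing awaits this promise
    total += amount;      // runs after chargeAll has already returned
  });
  return total;           // almost always returns 0
}

// The fix requires understanding the execution model, not the syntax:
async function chargeAllFixed(amounts: number[]): Promise<number> {
  let total = 0;
  for (const amount of amounts) {
    await charge(amount); // sequential, each call actually awaited
    total += amount;
  }
  return total;
}

// Stub so the sketch is self-contained; imagine a real payment call here.
async function charge(_amount: number): Promise<void> {
  await new Promise((resolve) => setTimeout(resolve, 10));
}
```

An agent will happily produce either version. Knowing which one is wrong, and why the stack trace points nowhere useful, is the human part.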
That last 30% isn’t just work—it’s judgment. And judgment is still our domain.
As the New York Times recently pointed out in a cautionary piece on AI's lack of reasoning, these models don’t have mental models. One telling example was an AI-generated map of Manhattan: it looked convincing at a glance but misplaced Central Park and botched the street grid—because the system had no real understanding of what a city is or how it functions. These models don’t have taste or intuition. What they have is brute force—and that still fails for entire classes of problems.
AI doesn’t handle ambiguity well—yet. It won’t refactor a mental model. It won’t navigate architectural tension. It won’t intuit the side effects of an edge-case bug across microservices.
You still need people who understand how systems behave in production. You need technical leads who can resolve uncertainty, not just resolve merge conflicts.
Where We’re Going: Shrinking the Gap
The 30% gap won’t stay that wide for long. The pace of progress is blistering: what was experimental six months ago is now baseline, and capabilities that seemed unthinkable a quarter ago land every month.
Debugging assistance is improving—tools like Claude Code, Augment, and OpenDevin are evolving toward test generation, auto-fixes, and context-aware repairs.
Opinionated workflows are starting to encode best practices (e.g., Zod schemas for validation, structured project instructions; see the sketch after this list).
Context and memory continue to expand, meaning agents will soon hold broader architectural understanding and business intent.
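To make the Zod point concrete, here’s a sketch of one such encoded best practice: validating configuration once at startup so everything downstream is typed. The `Env` fields are illustrative assumptions:

```typescript
// Encoding a best practice as code: validate configuration once, at
// startup, so every downstream module receives a typed, trusted object.
import { z } from "zod";

const Env = z.object({
  DATABASE_URL: z.string().url(),
  PORT: z.coerce.number().int().default(3000),
  NODE_ENV: z.enum(["development", "test", "production"]),
});

// Fails fast with a readable error if the environment is misconfigured.
export const env = Env.parse(process.env);

// Elsewhere in the codebase: env.PORT is a number, env.NODE_ENV is a
// string union, and an agent extending the config has a schema to follow.
```

The schema does double duty: a runtime guardrail for the app, and explicit intent for any agent extending it.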
Even if AI handles 90% next year, the remaining 10% will still shape the product: edge cases, trade-offs, judgment calls.
That’s where real engineering happens.
The Staffing Shift: Fewer Hands, Higher Leverage
If an AI-powered engineer can ship a full-stack MVP over a weekend, what happens to the traditional software team?
We’re already seeing strong signals:
Satya Nadella says AI now writes 20–30% of code in some Microsoft projects.
Sundar Pichai reports that 30% of new code at Google is AI-generated—up from 25% just months ago.
Mark Zuckerberg expects AI to handle half of Meta’s Llama model development within a year.
Kevin Scott, Microsoft CTO, projects that 95% of code could be AI-written by 2030—but humans will still define what that code should do.
Dario Amodei, CEO of Anthropic, predicts AI could write 90% of all code in 3 to 6 months—and potentially all of it within a year.
These aren’t theoretical forecasts. They’re operational roadmaps from the companies building the future.
But after three months of intensive, production-grade work—millions of lines of code reviewed, generated, and integrated—I’m more measured.
AI is fast, and getting faster. It’s great at boilerplate, CRUD flows, and structured APIs. But it still breaks on ambiguity. It stumbles on edge cases. It can’t resolve architecture-level tension or synthesize design trade-offs.
This isn’t about typing speed. It’s about knowing what to supervise, refine, or accelerate—because AI still needs guidance.
Small teams get smaller. One senior engineer with product instincts and AI fluency might replace a team of three to five.
Juniors get squeezed. The traditional apprenticeship model—where juniors level up by handling low-risk tasks—starts to break.
Mid-levels must evolve. Either they step up into integrators and orchestrators—or risk getting bypassed by more fluent peers and faster tools.
Team structure flattens. Fewer layers. More cross-functional builders. AI-native PM/engineer hybrids who can think in systems and ship across the stack.
This doesn’t mean teams disappear. It means every role shifts—from "what can I build?" to "what can I oversee that AI can’t?"
The best teams will look more like special forces: small, high-trust, multi-skilled. Fast in, fast out. No handoffs. High leverage.
How to Operate in the First 70% Era
✅ Default to AI for Scaffolding
If it takes more than 15 minutes from idea to running code, your loop is broken.
🧠 Treat Instructions Like Contracts
Clear instructions are the new source code. Define naming, structure, and target—once.
⚡ Build for Dual Speeds
Fast loops for AI, deep loops for humans. Don’t expect the same rhythms across both.
💡 Invest in Debugging Fluency
It’s the one thing AI still struggles with—and where engineering judgment still shines.
✍️ Write the Spec, Not the Script
Let AI handle implementation. Your job is defining the right outcome.
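In TypeScript terms, “writing the spec” can be as simple as pinning down a signature and a few executable expectations, then letting AI fill in the body. A minimal sketch, with `slugify` as a made-up task:

```typescript
// The spec: a precise signature plus executable expectations. The body is
// deliberately simple; the types and assertions define "done" before any
// implementation exists.
export function slugify(title: string): string {
  return title
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9]+/g, "-") // collapse runs of non-alphanumerics
    .replace(/^-+|-+$/g, "");    // strip leading and trailing dashes
}

// Executable spec: the contract an AI-generated implementation must satisfy.
console.assert(slugify("Hello, World!") === "hello-world");
console.assert(slugify("  First 70%  ") === "first-70");
console.assert(slugify("---") === "");
```

The assertions, not the regexes, are the deliverable: they say what “done” means before a single line of implementation exists.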
The Real Takeaway
This isn’t about AI replacing developers. It’s about developers redistributing their time.
The first 70% is increasingly automated, standardized, and fast. The last 30%—the part that shapes the outcome—is still deeply human.
Yes, AI will keep improving. But we’ll still be the ones solving the real problems.
And that’s the work worth doing.