I'm taking a break from my usual technical deep-dives to give you some light Friday reading and highlight a gem of a term coined by Josue Gandarilla, a QA developer working on Recess—the parent activity marketplace I'm leading tech for. According to Josue, a "pregnant bug" is a bug that gives birth to other bugs. Picture this: you think you've squashed one issue cleanly, only to watch as baby bugs scuttle out into the darkness, multiplying your problems exponentially.
While everyone should have a Josue keeping a watchful eye on their features (seriously, find yourself a meticulous QA person), the real point of this post isn't about human bug hunting per se. It's about reinforcing why agentic development absolutely shines when you're re-platforming.
The Re-platforming Advantage
Unlike new feature work—where you're often building into ambiguity, iterating on uncertain requirements, and navigating the fog of "what should this actually do?"—re-platforming gives you something invaluable: a rock-solid specification to work from. You have existing APIs with well-defined contracts, established type schemas, proven frontend components, and battle-tested user flows. These are precisely the elements that AI excels at modeling and implementing.
Using this natural advantage, we re-platformed Recess in a single August sprint—work that by any normal estimate should have taken months. The existing system served as our perfect blueprint: every endpoint documented through usage, every data transformation proven in production, every edge case already discovered and handled.
Optimizing for Velocity
We're not stopping there. We're currently optimizing our stack by moving to Neon for its superior developer experience and feature-aligned branching capabilities. When this migration is complete, we expect a much more agile platform that will significantly increase our development velocity. And keeping up with that acceleration will require more than just Josue standing guard—though he'll certainly be busy.
The Testing Multiplier Effect
Here's where agentic coding becomes truly powerful: testing. Unit tests, regression tests, end-to-end tests, smoke tests, canary deployments—these are the tools that let you accelerate development safely while avoiding those pregnant bugs altogether.
One of the first things I look for when evaluating codebases for founders is test coverage. Perhaps not surprisingly, it's one of the things development teams NOT led by an internal engineering leader consistently ignore. And it's not just contractors—the original dev team for Recess had zero test coverage when I arrived.
Let me be clear: I'm not a TDD zealot. There are very good reasons why you might add test coverage AFTER building, especially if you're developing in an agile fashion where requirements are fluid. This is very different from writing to external requirements where the spec is locked down. But once you've nailed down functionality and data models—even while still in flight—write the tests. It's non-negotiable.
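To make that concrete, here's a minimal sketch of what "writing the tests after the fact" looks like once behavior has settled: a characterization test that pins down a function's contract. The `calculate_fee` function and its rules are hypothetical stand-ins, not Recess code.

```python
# Hypothetical example: once a pricing rule has stabilized, pin its
# behavior with characterization tests so later changes can't silently
# break it. All names and rules here are illustrative.

def calculate_fee(amount_cents: int, rate: float = 0.05) -> int:
    """Marketplace fee in cents, rounded to the nearest cent (illustrative)."""
    if amount_cents < 0:
        raise ValueError("amount_cents must be non-negative")
    return round(amount_cents * rate)

def test_fee_happy_path():
    # 5% of $100.00 is $5.00
    assert calculate_fee(10_000) == 500

def test_fee_zero_amount():
    assert calculate_fee(0) == 0

def test_fee_rejects_negative():
    try:
        calculate_fee(-1)
    except ValueError:
        pass  # expected
    else:
        raise AssertionError("expected ValueError for negative amount")
```

The point isn't the fee math—it's that once the contract is stable, a test like this costs minutes to write and permanently converts tribal knowledge into an executable check.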
This becomes particularly crucial with agentic development, where LLMs absolutely love to write code (sometimes too much code). Test coverage isn't just nice to have—it's your safety net when AI is generating implementations at superhuman speed.
When AI can generate comprehensive test suites based on your existing codebase and API contracts, you're not just moving faster; you're moving more safely. We're targeting 90%+ test coverage as we ramp up our velocity, because with great speed comes great responsibility to not break things.
The beauty is that tests are perfect AI work: they're highly structured, follow predictable patterns, and have clear success criteria. Feed an AI your API schema and a few examples, and it can generate exhaustive test cases covering happy paths, edge cases, and error conditions you might not have even considered.
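Here's a sketch of the kind of table-driven suite an agent can derive from a schema—a hypothetical signup validator with its happy path, boundary, and error cases enumerated as data. The `validate_signup` function and its rules are invented for illustration, not taken from any real codebase.

```python
# Illustrative sketch: table-driven tests of the sort an AI can generate
# from an API schema. The validator and its rules are hypothetical.

def validate_signup(payload: dict) -> list[str]:
    """Return a list of validation errors (empty list means valid)."""
    errors = []
    if "@" not in payload.get("email", ""):
        errors.append("invalid email")
    age = payload.get("age")
    if not isinstance(age, int) or age < 13:
        errors.append("age must be an integer >= 13")
    return errors

CASES = [
    ({"email": "a@b.com", "age": 30}, []),                       # happy path
    ({"email": "a@b.com", "age": 13}, []),                       # boundary value
    ({"email": "not-an-email", "age": 30}, ["invalid email"]),   # bad email
    ({"email": "a@b.com", "age": 12},
     ["age must be an integer >= 13"]),                          # underage
    ({}, ["invalid email", "age must be an integer >= 13"]),     # empty payload
]

def test_signup_cases():
    for payload, expected in CASES:
        assert validate_signup(payload) == expected, payload
```

The structure is what matters: each row is one schema-derived scenario, and adding coverage is just adding rows—exactly the repetitive, pattern-following work AI handles well.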
The Call to Action
If you're facing any re-platforming work, I encourage you to fire up Claude Code, Augment Code, or Cursor and experience this for yourself. Don't approach it like you're building something net-new. Instead, treat your existing system as the detailed specification it actually is, and let AI do what it does best: faithfully implement well-defined requirements at superhuman speed.
Your future self—and your QA team—will thank you for the robust, well-tested platform you'll end up with. Plus, you'll have far fewer pregnant bugs scuttling around in the shadows.
Originally published on hyperdev.matsuoka.com