# Bad Habit of Agentic Coders, Part Deux: "Fallback Strategies"
*Human Coders Are Still Needed for Judgement and Vision*
I spotted¹ something troubling today while working on my npm package @bobmatnyc/ai-code-review that I've seen enough times to flag as a pattern: the "fallback strategy."
It looks innocent enough at first glance. The AI generates code that tries one approach, and if that doesn't work, it automatically falls back to another. Sounds reasonable, right? Sometimes it is. But more often than not, it's a subtle form of AI people-pleasing that creates maintenance headaches down the road.
## What Fallback Strategies Look Like
Here's a typical example:
```typescript
async function fetchUserData(userId: string) {
  try {
    // First attempt: try the new API endpoint
    const response = await fetch(`/api/v2/users/${userId}`);
    if (!response.ok) {
      // Fallback strategy: try the legacy endpoint
      const legacyResponse = await fetch(`/api/users/${userId}`);
      if (!legacyResponse.ok) {
        throw new Error('Failed to fetch user data');
      }
      return await legacyResponse.json();
    }
    return await response.json();
  } catch (error) {
    // Another fallback: if network requests fail, try local storage
    const cachedData = localStorage.getItem(`user_${userId}`);
    if (cachedData) {
      return JSON.parse(cachedData);
    }
    throw error;
  }
}
```
The code attempts three different strategies to get user data:
1. Try the new API endpoint
2. If that fails, try the legacy endpoint
3. If network requests fail entirely, check local storage
Looks helpful, doesn't it? The AI is trying to make sure something works.
## When Fallbacks Are Actually Appropriate
To be clear, there are legitimate use cases for fallback strategies:
- **Explicitly designed resilience patterns** - When you're building systems that need to work offline or handle service degradation
- **Feature detection** - Checking if a browser supports a newer API and falling back to an older one if needed
- **Graceful degradation** - Intentionally designed progressive enhancement where you start with the best option and deliberately step down
The key word here is intentional. These are architectural decisions that should be made with full awareness of the tradeoffs.
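Take the feature-detection case: here's a minimal sketch of what an intentional, documented fallback can look like (a generic clipboard example for illustration, not code from my project):

```typescript
/**
 * Copies text to the clipboard, preferring the asynchronous Clipboard API and
 * deliberately stepping down to the older execCommand('copy') approach for
 * browsers that lack it.
 */
async function copyToClipboard(text: string): Promise<void> {
  if (navigator.clipboard?.writeText) {
    // Preferred path: the modern, asynchronous Clipboard API.
    await navigator.clipboard.writeText(text);
    return;
  }
  // Intentional, documented fallback: execCommand is deprecated but still
  // widely supported, and we accept that tradeoff for legacy browsers.
  const textarea = document.createElement('textarea');
  textarea.value = text;
  document.body.appendChild(textarea);
  textarea.select();
  document.execCommand('copy');
  document.body.removeChild(textarea);
}
```

The difference from the AI-generated version above is that the step-down path is the point of the function, not a hedge, and the tradeoff is written down where the decision is made.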
## The Problem: Implicit Decisions That Should Be Explicit
What makes AI-generated fallbacks problematic is that they often implement significant architectural decisions without making them explicit:
- **They mask failures** - Instead of alerting you to a genuine problem (like a deprecated API), the code silently switches approaches
- **They create unclear behavior** - Different users might get different results depending on which code path executes
- **They complicate debugging** - When something goes wrong, which of the multiple approaches is failing?
- **They persist outdated patterns** - By supporting legacy approaches indefinitely, they prevent clean migrations
I've seen AI-generated code that tries three different HTTP libraries when one fails, silently changes database connection strategies, or falls back to synchronous file operations when async fails. None of these were requested or made clear in the architectural design.
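As a reconstructed illustration of that last case (not the actual code I reviewed), the shape usually looks something like this:

```typescript
import { promises as fsPromises, readFileSync } from 'fs';

// Anti-pattern sketch: nobody asked for a fallback here. If the async read
// fails for any reason, the code blocks the event loop and retries
// synchronously, hiding the original error instead of surfacing it.
async function loadConfig(path: string): Promise<string> {
  try {
    return await fsPromises.readFile(path, 'utf8');
  } catch {
    // An implicit architectural decision smuggled in as error handling.
    return readFileSync(path, 'utf8');
  }
}
```

Nothing in the requirements asked for a synchronous retry; it exists only because the model wanted something, anything, to succeed.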
This pattern isn't limited to my own observations. In the LLM ecosystem, fallback strategies are actually a recognized pattern, but with an important distinction: in production systems, they're intentionally designed as resilience mechanisms for API failures, rate limiting, and service disruptions. The key difference is that these fallbacks are explicit architectural choices, not implicit hedges.
## The Root Cause: LLM People-Pleasing
This pattern seems to emerge from the LLM's inherent tendency toward what I call "sycophantia" - the overwhelming desire to please and provide working code at all costs.
When faced with uncertainty about the right approach, the AI doesn't stop to ask for clarification. Instead, it hedges its bets by implementing multiple approaches in a single function. It's the code equivalent of answering "yes, no, and maybe" to a question that deserves a single clear answer.
This is similar to the mocking issue I've written about before - when faced with a tough problem, AI will often mock the problematic areas to get code "working" in some superficial sense rather than solving the actual problem.
What's particularly interesting is that there are legitimate fallback strategies in AI systems (like the LangChain and Vercel AI SDK's fallback mechanisms), but these are designed for handling API failures at a system level. The problem I'm describing manifests when these behaviors creep into the application logic itself, where the AI injects unneeded resilience patterns without being asked.
## Research: Are Others Seeing This?
I did some digging, and while there isn't much explicit discussion of this exact pattern under the name "fallback strategies" in application code, there are adjacent conversations happening:
- The AI Engineering Lab at CMU has research on what they call "AI decision deferral" - where AI-generated code tries multiple approaches rather than committing to one
- Simon Willison has written about "hallucinated API assumptions", where LLMs implement fallbacks to APIs that don't actually exist
- Recent MIT research shows that LLMs often struggle with "error handling and robust code generation", leading to inappropriate fallback implementations
Interestingly, in the LLM infrastructure world, "fallback strategies" are an intentional design pattern. Tools like LangChain's fallback mechanism and packages like ai-fallback for the Vercel AI SDK are explicitly designed to handle API failures by switching between different LLM providers.
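To make the contrast concrete, here's a hand-rolled sketch of what a fallback looks like when it's an explicit, system-level decision (illustrative TypeScript, not the actual LangChain or ai-fallback API):

```typescript
type CompletionFn = (prompt: string) => Promise<string>;

interface Provider {
  name: string;
  complete: CompletionFn;
}

// The fallback order is a visible architectural decision made at the call
// site, and every failure is recorded rather than silently swallowed.
async function completeWithFallback(
  providers: Provider[],
  prompt: string
): Promise<string> {
  const failures: string[] = [];
  for (const provider of providers) {
    try {
      return await provider.complete(prompt);
    } catch (err) {
      failures.push(`${provider.name}: ${(err as Error).message}`);
    }
  }
  throw new Error(`All providers failed:\n${failures.join('\n')}`);
}
```

The caller sees exactly which providers are tried, in what order, and why everything failed if nothing worked - the opposite of a hedge buried in a catch block.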
What makes this particularly tricky is that it happens without explicit notice. The code looks functional, so unless you're reviewing carefully, these implicit architectural decisions can easily slip through.
## How To Address This In Your Prompts
When working with AI coding assistants, I've found these instructions help prevent unwanted fallback strategies:
1. **Be explicit about error handling expectations:** "If the API call fails, the function should throw an error rather than attempting alternative approaches."
2. **Require comments for contingency approaches:** "If you include any fallback strategies or alternative approaches, clearly mark them with comments explaining the rationale and tradeoffs."
3. **Directly forbid silent fallbacks:** "Do not implement multiple approaches or silent fallbacks unless specifically requested. If you're uncertain about the best approach, ask rather than implementing multiple options."
4. **Request architectural clarity:** "Focus on implementing a single, clear approach. If you believe fallbacks are necessary, explain why before implementing them."
5. **Add explicit validation:** "Implement proper input validation and error handling, but do not silently switch between different implementation strategies if one fails."
The goal isn't to eliminate fallback strategies entirely - sometimes they're exactly what you need. The goal is to make them explicit, intentional architectural decisions rather than a side-effect of AI's desire to please.
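With instructions like these, the earlier example collapses into something much easier to reason about - one approach and one clear failure mode (caching or legacy-endpoint support would be added only as deliberate, documented decisions):

```typescript
async function fetchUserData(userId: string) {
  // Single, explicit approach: the v2 endpoint. If it fails, fail loudly.
  const response = await fetch(`/api/v2/users/${userId}`);
  if (!response.ok) {
    throw new Error(`Failed to fetch user ${userId}: HTTP ${response.status}`);
  }
  return await response.json();
}
```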
## The Broader Implication
This fallback pattern points to something important about the current state of AI coding assistants: they're great at generating code that looks functional at a surface level, but they still struggle with making the kinds of architectural judgment calls that require overall project understanding.
These are decisions that still require human oversight - determining when resilience patterns are appropriate, when to maintain backward compatibility, and when to let errors bubble up clearly rather than masking them with fallbacks.
As with many AI coding issues, the fix comes down to clear instruction and careful review. But it also requires awareness of these tendencies in the first place.
## Bottom Line
Review AI-generated code with an eye for hidden fallback strategies - places where the AI hedges its bets by trying multiple approaches rather than committing to the right one. These aren't just implementation details; they're architectural decisions that should be made explicitly, not smuggled in as error handling.
The next time an AI assistant hands you code that "handles everything," take a closer look. There's a good chance it's making architectural decisions you should be making yourself.
P.S. If you've spotted other examples of AI coding patterns that deserve scrutiny, drop them in the comments. I'm collecting these for a future piece on "AI code smells" - patterns that might indicate deeper problems in AI-generated solutions.
## Footnotes
1. I caught this while "nanny coding" - the practice of reviewing AI-generated code line by line. The giveaway was a JSDoc comment with the dreaded "@fallback" tag. One upside of AI's liberal use of JSDoc is that it often telegraphs its intent, making these architectural assumptions easier to spot during review. ↩