For nearly a year, I've been using "trained" AI assistants for various tasks—from coding to writing to research. What do I mean by trained? Not off-the-shelf models that produce generic responses. I'm talking about assistants customized with persistent prompts that capture my specific voice, preferences, and working style.
My approach has been to build comprehensive prompts distilled from an analysis of five years of my writing, interviews, emails, and Slack messages. The result is an assistant that works like me, one that understands my preferences, communication style, and technical context better than a generic model does.
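To be clear, "training" here means prompt engineering, not fine-tuning. A minimal sketch of the idea, with made-up file paths and prompt wording rather than my actual setup, looks something like this:

```python
import pathlib

# Hypothetical folder of exported writing samples (blog posts, emails,
# interview transcripts saved as .txt). Paths and wording are placeholders.
SAMPLE_DIR = pathlib.Path("writing_samples")

STYLE_HEADER = """You are my writing and coding assistant.
Match the voice, tone, and technical depth of the samples below.
Prefer concrete examples over generalities and keep my usual structure:
short intro, clear headings, practical takeaways."""

def build_system_prompt(max_chars: int = 60_000) -> str:
    """Concatenate curated samples into one persistent system prompt,
    stopping before it grows unreasonably large."""
    parts = [STYLE_HEADER, "\n--- WRITING SAMPLES ---"]
    used = sum(len(p) for p in parts)
    for path in sorted(SAMPLE_DIR.glob("*.txt")):
        text = path.read_text(encoding="utf-8")
        if used + len(text) > max_chars:
            break
        parts.append(f"\n# {path.stem}\n{text}")
        used += len(text)
    return "\n".join(parts)

if __name__ == "__main__":
    prompt = build_system_prompt()
    print(f"System prompt: {len(prompt):,} characters")
```

The real version is longer and more opinionated, but the mechanics are the same: curate the samples, cap the size, and reuse the result as a persistent instruction set.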
My AI Assistant Workflow
When I started down this path, there weren't many options for creating persistent, customized assistants. OpenAI was first to market with their "Teams" model that allows custom instruction sets to be saved and reused across sessions. They also pioneered the Canvas editor, which lets you directly edit documents in a full-screen view, with AI assistance when needed.
My typical workflow involves:
Creating initial structures for projects or documents with key points
Uploading relevant context—code, documentation, previous work, and reference materials
Working with the assistant to expand ideas, refine arguments, and solve problems
Direct editing and iterative refinement
This approach has cut my time across various tasks by roughly 75% while maintaining the technical accuracy and working style that my projects require.
Why Claude Has Gotten Better
While OpenAI had the early lead, I've recently switched most of my workflow to Claude. Several key improvements drove this decision:
1. Context Management
Claude's updated platform gives you a clear indication of how much context space you have available for uploading files. This transparency makes a huge difference when working with technical documentation, code samples, or research materials—I can see exactly how much room I have for additional context.
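If you want a rough sense of how much room a pile of reference files will take up before uploading, a back-of-the-envelope check helps. The sketch below assumes roughly four characters per token (a common heuristic, not the model's real tokenizer) and a roughly 200K-token window:

```python
import pathlib
import sys

# Rough heuristic only: ~4 characters per token. The real tokenizer will
# differ, so treat this as an estimate, not an exact budget.
CHARS_PER_TOKEN = 4
CONTEXT_WINDOW_TOKENS = 200_000  # approximate size of Claude's window

def estimate_tokens(path: pathlib.Path) -> int:
    text = path.read_text(encoding="utf-8", errors="ignore")
    return len(text) // CHARS_PER_TOKEN

# Pass the reference files you plan to upload on the command line.
files = [pathlib.Path(p) for p in sys.argv[1:]]
total = sum(estimate_tokens(f) for f in files)

print(f"Estimated {total:,} tokens of ~{CONTEXT_WINDOW_TOKENS:,} available")
print(f"Roughly {total / CONTEXT_WINDOW_TOKENS:.0%} of the window used")
```

Having the platform do this math for you, right in the UI, is exactly the transparency I'm talking about.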
OpenAI's context management has been frustrating by comparison. Their limit of 20 documents regardless of size is arbitrary and restrictive, especially when working with a mix of large and small files. Even more annoying is how frequently the session "resets," losing all uploaded context without warning.
2. Stability and Reliability
Claude simply feels more stable. When I'm working on complex technical projects, the last thing I need is to re-upload all my context because the session crashed. Claude maintains context more consistently, which means less repetition and more productive work time.
3. Document Size Handling
OpenAI's document size limitations became a real bottleneck for comprehensive technical work. Claude's editor supports larger documents, allowing me to work on extensive comparisons or complex projects without arbitrary splitting.
4. Superior Output Handling
The quality of Claude's output handling is noticeably better than OpenAI's. Copy-pasting from Claude preserves formatting more reliably, which is crucial when I'm working with code blocks, structured content, or technical documentation. The artifacts produced by Claude require less cleanup before they're ready for implementation or publication.
This might seem like a small detail, but when you're working on technical projects regularly, these formatting inconsistencies add up to significant time spent on post-production fixes. With Claude, what you see is much closer to what you get when you implement.
5. Multi-Context Potential and MCP Agents
The Model Context Protocol (MCP) shows huge potential for letting Claude interact with other tools. At the moment, it's primarily about reading from various sources, including my Gmail, but I expect that to change as MCP evolves.
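For anyone wondering what sits on the other end of those connections: an MCP server is a small program that exposes typed tools the model can call. Here's a minimal sketch using the official Python SDK (pip install "mcp[cli]"); the notes folder and search tool are made up for illustration, but real Gmail or Drive servers follow the same shape.

```python
import pathlib

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("local-notes")
NOTES_DIR = pathlib.Path.home() / "notes"  # hypothetical notes folder

@mcp.tool()
def search_notes(query: str) -> str:
    """Return the names of notes whose contents mention the query string."""
    hits = [
        p.name
        for p in sorted(NOTES_DIR.glob("*.md"))
        if query.lower() in p.read_text(encoding="utf-8", errors="ignore").lower()
    ]
    return "\n".join(hits) or "No matching notes found."

if __name__ == "__main__":
    # Runs over stdio so a desktop client like Claude can launch it locally.
    mcp.run()
```

Once a client is pointed at a server like this, "search my notes for X" becomes just another thing you can ask mid-conversation.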
What really sets Claude's Pro and Max plans apart are the MCP agents for web, email, calendar, and Drive search. This adds a huge amount of value to the platform—I can now ask Claude to check my calendar for availability, search my emails for specific conversations, or find documents in my Drive without leaving the conversation. These integrations transform Claude from a simple chat interface into a comprehensive workspace assistant.
Even more interesting, I'm seeing evidence of a new multi-agent model emerging for AI-specific tasks. For example, there's a "Primary Researcher" that builds a detailed research plan for each request, breaking the task down and coordinating different types of information gathering. This suggests Claude is moving beyond single-agent responses toward more sophisticated, specialized workflows.
Tell me this doesn’t make you drool…and yes, the results were useful.
The AI-First Working Approach
Switching to an AI-first process has fundamentally changed how I approach various tasks:
Instead of starting with a blank slate, I now begin with a conversation about the scope and structure. This dialogue shapes the initial approach more organically than traditional planning methods.
Reference materials get incorporated early in the process rather than consulted ad hoc, which ensures consistency and accuracy from the start.
My refinement process has become more iterative and targeted—focusing on strategic improvements rather than starting from scratch.
The biggest efficiency gain has been in handling complex comparisons and research. What used to require multiple browser windows and constant tab-switching now happens within a single context where the assistant maintains awareness of all reference materials.
What Claude Still Needs to Improve
Despite my switch to Claude, there are still areas where the platform falls short:
1. Canvas Editor Limitations
OpenAI still has a clear advantage with its Canvas editor, which allows both direct editing and prompt-driven editing in the same interface, creating a more flexible working environment.
Claude's editor currently lacks this hybrid approach. When I want to make a quick formatting change or minor edit, I have to describe the change rather than simply making it directly. For technical content with code blocks or complex formatting, this becomes inefficient.
2. Collaboration and Sharing
Even on the Max plan, Claude's sharing options are frustratingly limited. I'd like to be able to share projects and data with colleagues more easily, but the current system makes collaboration difficult. There's no straightforward way to share a conversation, context, or working project with team members.
This is particularly limiting for technical work where I might want to share my analysis with a colleague or get input on a complex problem. OpenAI's sharing features, while not perfect, are more developed in this area.
The ideal solution would combine Claude's context management with OpenAI's editing flexibility and much better collaboration features.
Bottom Line
For complex technical work with extensive reference materials and code examples, Claude has become my go-to assistant. The stability, transparent context management, and larger document support outweigh the editing limitations for most of my workflow.
That said, I still use OpenAI's Canvas for specific tasks where direct editing capabilities are more valuable than robust context management.
What's clear is that we're still in the early days of AI assistant tools. Both platforms have strengths that could be combined into a truly comprehensive solution. For now, I'm using Claude for about 70% of my work, with OpenAI filling specific gaps in the workflow.
The real test will be which platform evolves faster to address their limitations. Claude needs better direct editing, while OpenAI needs more reliable context management. Whichever solves their issues first will likely capture the technical work market completely.