Someone left an upvote, so there seems to be one listener. Dear listener, let me continue...
When someone is having a hard time getting great results, you are going to hear the "No true Scotsman" fallacy, the appeal to purity.
The pundits will say that if I just had better tests, or better Product Requirements Documents (PRDs), or used a better brand of agentic tool, or the other labs' models, or just learnt to prompt-engineer better, I would have got better results.
Okay, well, on my last project, I used all of these US-based tools on a single client project:
- Claude Code CLI
- OpenAI Codex CLI
- GitHub Copilot CLI
- Cursor Agent CLI
- OpenCode CLI
- Copilot IDE
- OpenCode Desktop
- Codex Desktop
- Aider Chat
I like working with multiple models and some features of each tool. I have patched Codex to add in both Claude and Gemini. I then added back the 'Ask' read-only mode, as I find it a good emergency feature when a model has gone a bit rogue.
You might think, "Gee, if the guy had just stuck to one tool and learnt to use it properly, maybe he could have got it to work!" At one point, I would burn through a $200 Max sub in the first week of the month. The new 5-hour token limits mean that to work a full day, I need two Max subs. That is why I needed all these tools: enough subsidised Max subs to get through the month.
I now avoid the least reliable tool, Claude Code, until I have hit the weekly rate limits of the other tools. Yes, you read that correctly. I would rather use any of the other tools before Claude Code. Once again, not because I am unfamiliar with it, but because I have used it the most. I know from personal experience that I get better results when mixing Claude with other models across different tools. If you are not using Claude alongside GPT and others in something like Cursor Agent CLI or OpenCode, then you are missing out.
Surely you cannot prefer OpenAI Codex, I can hear you cry. Well, as I said, I have patched OpenAI Codex CLI, which is Apache-2.0 open-source, to run Claude Opus 4.6 (not 4.7!) and Gemini 3 Pro next to GPT-5.4 (not 5.5!). Why? Because you can spawn agents like tmux sessions, and go talk to them! I can have the Main agent pass work to named agents. Yes, Claude Code has subagents, as does OpenCode. Yet neither lets you switch to a subagent or talk to it. In my build, you can talk to a named agent directly, then go back to asking the Main agent to delegate work to it. Remember, I patched Claude, Gemini, and GPT into my build, so I switch between them during an agent session to get them to code-review or pair-program.
So, once again, nope. You cannot say that I am simply not experienced enough to get the tech to work. On the contrary, I have not heard of anyone who has used as many tools as aggressively as I have, trying to push things to peak performance. I am not sure the people whose videos I watch even know about all the tools and models. Only two influencers I follow have mentioned that they dislike "Claude-isms" and get great results with ChatGPT 5.x.
Also, if you have not used ChatGPT-5.3-Codex-Spark at 1,000 tokens per second, well, you have not lived! There is nothing more beautiful than watching 5.3-Spark slam through a task in 15 minutes that would have taken any other model an hour. There is nothing more frustrating than seeing it not solve the problem, run up a $10 bill in a few minutes, and get into a compression cycle, leaving you with zero.
On the custom-kit front, I use speech2type for voice dictation. I am a fast typist, yet going flat out with voice dictation is still a boost. However, in order to get a grip on the situation, you must slow right down.
More on that in the next post. If there is one.
End.