FOMO, FOMAT, FOMUT, and the Bottleneck That Turned Out to Be Me
🇺🇸 United States · May 11, 2026


Originally published by Dev.to

First FOMO. Then FOMAT. Now FOMUT.

I have hit all three. And the biggest bottleneck in my AI workflow is no longer the agents. It is me.

I did not expect to write that sentence. A year ago, the bottleneck was clearly the agents. They were slow, they asked too many questions, they needed constant hand-holding. I spent my days answering their questions so they could keep working.

Now the agents run autonomously for ten hours straight. They do not ask me anything mid-run. Pull requests are waiting when I wake up.

And somehow, I feel more pressure than before.

There is a term for this: FOMAT (Fear Of Missing Agent Time). The feeling when you close your laptop and know the agents stop. The guilt when you take a break and realize idle agents are burning nothing but potential. The low-grade anxiety of being the human bottleneck in a system that could run faster without you.

I had the feeling for six months before I had the word.

The interruption loop

A year ago, my workflow looked like this:

  • I would start an agent on a task.
  • It would run for a few minutes.
  • Then it would stop and ask a question.
  • I would answer.
  • It would continue.
  • Three minutes later, another question.
  • I would answer again.

This sounds manageable. It was not.

The agent waited a few minutes per question. I spent a few minutes context-switching per answer. But across a day, the agent's few minutes of waiting cost me hours of fragmented attention.

Every interruption pulled me out of whatever I was doing. Reviewing a PR. Writing a spec. Thinking through architecture. Each time, I dropped context to give the agent what it needed, then tried to reload what I had been doing. Flow state became a memory.

I tried the obvious fixes. Answer faster. Keep the laptop open at all times. Set up notifications so I would not miss a question. All of these optimized my day around the agent's idle time.

That was exactly the trap. I had become air traffic control for AI agents instead of doing deep work.

The problem was not that the agents asked too much. The problem was that I had not given them enough decision-making authority.

That was the turning point.

Rebuilding the system, not the agent

I rebuilt the workflow from the ground up. Not the agents. The system around them. Autonomy does not come from the model. It comes from the system around the model.

Move questions upfront

The core change was moving questions from inline to upfront. Instead of agents asking me things as they hit unknowns during execution, I now have a 1-2 hour alignment session with a dedicated orchestrator agent. It is a Claude Code session whose only job is to surface unknowns before work begins.

During that session, the orchestrator asks me dozens of questions. Scope. Priorities. Edge cases. Constraints. Trade-offs. "Should this feature support offline mode, or is network-required acceptable?" "What should happen if the API returns partial data?" "Is a 200ms response time a hard requirement or a nice-to-have?"

Every question the agent might have asked at 2am on a Sunday night gets asked at 10am on Saturday morning instead. The orchestrator tells me when alignment is sufficient. I do not guess when to stop.

The volume of questions did not go down. Their location in time changed.

The alignment session produces the inputs. The spec is what we (me + Claude) write afterward. I specify the why and the what, plus acceptance criteria. Not the how. Something like: "A user can complete the full onboarding flow without encountering an error state, and the result is persisted across sessions." The agents figure out implementation.
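To make that concrete, a spec in this style might look like the following. This is an invented example, not the author's actual format: it states the why and the what plus acceptance criteria, and deliberately leaves the how open.

```markdown
## Feature: onboarding persistence

Why: new users drop off when setup does not survive an app restart.

What: a user can complete the full onboarding flow without hitting an
error state, and the result is persisted across sessions.

Acceptance criteria:
- Completing onboarding and relaunching the app lands the user on the
  home screen, not back in onboarding.
- Partial onboarding progress survives a restart.
- No constraint on *how* persistence is implemented.
```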

Give agents a decision framework

The second change was giving agents a decision framework for situations the spec does not cover. Three states instead of two:

  1. If I know the answer, I act.
  2. If I do not know but have enough context, I choose the simpler option, document why, and continue.
  3. If I am genuinely blocked, I ask. Last resort.

That second state is the one that matters most. If the spec does not specify error handling for a network timeout, the agent picks a simple retry with exponential backoff, notes the decision in the PR description, and moves on. No midnight ping. No blocked pipeline.
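That kind of default is easy to make concrete. Here is a minimal sketch of retry with exponential backoff, the sort of simple choice an agent can pick and document on its own (hypothetical names, not code from the author's pipeline):

```python
import random
import time

def fetch_with_retry(fetch, max_attempts=5, base_delay=0.5):
    """Call fetch(), retrying on TimeoutError with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return fetch()
        except TimeoutError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error
            # Exponential backoff with jitter: roughly 0.5s, 1s, 2s, ...
            time.sleep(base_delay * (2 ** attempt) * (1 + random.random()))
```

The point is not the backoff itself. It is that the choice is unremarkable enough that a PR note ("went with retry + exponential backoff, 5 attempts") is a better outcome than a midnight question.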

That single change eliminated most mid-run interruptions. Agents stopped asking me about font sizes and error message wording. They made reasonable choices and moved on.

Trust the pipeline, not the run

The third piece was trusting the system, not the individual run. I wrote earlier about why quality in AI-assisted development is a pipeline, not a checkpoint. That same principle is what makes autonomous runs possible. A separate validator agent reviews every PR independently. It never saw the implementation agent's reasoning, so it cannot rationalize the same mistakes. Quality comes from layers: validators, multi-agent review, CI/CD gates, human review. The protection is structural, not surveillance.
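A toy sketch of that structure, with entirely hypothetical names: the PR object carries no trace of the implementation agent's reasoning, and shipping requires every independent layer to pass.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class PullRequest:
    diff: str
    description: str
    # Deliberately no field for the implementation agent's chain of
    # thought: validators judge the artifact, so they cannot inherit
    # and rationalize the same mistakes.

# Each gate is an independent layer: a validator agent, a second
# model's review, a CI run, a human. Here they are plain predicates.
Gate = Callable[[PullRequest], bool]

def run_pipeline(pr: PullRequest, gates: List[Gate]) -> bool:
    """A PR ships only if every independent layer passes."""
    return all(gate(pr) for gate in gates)

# Illustrative stand-ins for real gates:
def has_description(pr: PullRequest) -> bool:
    return bool(pr.description.strip())

def has_tests(pr: PullRequest) -> bool:
    return "test" in pr.diff

gates = [has_description, has_tests]
```

The protection is structural: no single gate has to be perfect, because a mistake has to slip past all of them.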

The result: I cannot remember the last time an agent stopped mid-run to ask me something. 8+ hour autonomous sessions. I write specs during the day, launch agents in the evening, and review pull requests in the morning.

Things still go wrong. But the failure mode shifted. Now it is the agent doing the wrong thing, but doing it correctly. The spec was unclear, the direction was off, the edge case was not covered, and the agent executed exactly what was asked. The problem surfaces in review, not mid-run. Which means the problem is mine.

From FOMAT to FOMUT

Here is what I expected to happen: the pressure would disappear.

Here is what actually happened: the pressure transformed.

Before, the stress was reactive. An agent is waiting for my answer right now. Like being on call. I felt it as a pull. Something needed me, and every minute I did not respond was a minute wasted.

Now, the stress is proactive. I did not write enough specs today, so overnight capacity goes unused. Like a factory floor standing empty, not because something broke, but because I did not load it.

Same pressure. Completely different cause.

I started thinking of it differently after I had the word for it. Because what I have now is not really FOMAT anymore. FOMAT is the fear that agents are idle because they are waiting for you. What I have is closer to a term I have seen floating around as a joke: FOMUT (Fear Of Missing Unused Tokens). It is funny because it is a little too true. The fear that agents are idle because you did not prepare enough work for them.

The name comes from the flat-fee era, where idle agents felt like wasted subscription budget. With GitHub Copilot moving to usage-based billing in June 2026 and Cursor already there, the joke is losing its literal meaning. An unused token now costs nothing. But the feeling underneath was never really about tokens. It is about capacity that could have produced output and did not.

I went from one to the other.

The old pressure was: "An agent needs me right now and I am not responding." The new pressure is: "I have twenty hours of agent capacity available tonight and I only prepared six hours of work." The constraint shifted from my response time to my planning throughput.

This changes what a productive day looks like. I used to measure my day by hours spent coding. Now I measure it by how much agent capacity I utilized overnight. A good day is not one where I worked ten hours. A good day is one where agents ran at full capacity while I slept.

When the bottleneck moved

Three years ago, ten hours of careful coding produced a certain amount of output. Now I achieve the same output in about two hours of specifying and reviewing. But I did not pocket the eight hours. I set targets ten times higher.

I used to think about how much I had time to do. Now I think about how much capacity went unused.

The bottleneck moved from execution to direction-setting. The system can produce more than one human can specify. Product thinking (what to build, why, in what order) becomes the constraint. Not coding skill. Not agent capability. The ability to define good work fast enough to keep the line running. I recognized this pattern from manufacturing: when you add production capacity but the design team cannot produce enough blueprints, the factory floor sits idle. Not because it is broken, but because it is starving for instructions.

The developer role shifts. Less time writing code. More time as a product thinker, architect, and reviewer. I spend more of my day now thinking about what to build than how to build it. The value I add is in the alignment session and the spec, not in the execution.

A better problem to have

Despite everything I just described, the current state is a thousand times better.

Not because the pressure is gone. It is not. But the nature of the pressure changed entirely.

Reactive firefighting became proactive planning. Babysitting became reviewing. Constant interruption became focused alignment. I traded a problem I could not solve (being fast enough to keep up with agent questions all day) for a problem I can solve, which is getting better at specifying work upfront.

The biggest bottleneck in my AI workflow is me. And that turns out to be a much better problem than the one I started with.
