
## Recap
In the previous article on Atomic Transactions, we looked at how transactions can guarantee state consistency.
In this article, we will go one level deeper into the design of the Scheduler. We will discuss the trade-offs behind different scheduling strategies, and map our implementation back to the corresponding paths in the scheduling flowchart.
## The Core Concept of a Scheduler
A Scheduler is the “invisible commander” inside a reactivity system.
- It decides when update tasks should run.
- It directly affects both performance and the developer mental model.
Simply put, the Scheduler is not about whether something should be computed.
It is about when it should be computed.
## Categories of Scheduling Strategies
Scheduling strategies can roughly be divided into the following categories.
### Synchronous Scheduling
- Characteristic: After data is updated, the corresponding side effects run immediately.
- Pros: The logic is straightforward. State changes are reflected instantly.
- Cons: It can easily lead to excessive computation, especially when multiple updates happen within a short period of time.
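The trade-off shows up in a minimal sketch (all names here are illustrative, not the article's implementation):

```typescript
// Minimal sketch of synchronous scheduling: every write immediately
// re-runs the subscribed effect, with no buffering in between.
type Effect = () => void;

function createSignal<T>(value: T) {
  const subscribers = new Set<Effect>();
  return {
    get: () => value,
    set(next: T) {
      value = next;
      // Synchronous strategy: notify on every single write.
      subscribers.forEach((fn) => fn());
    },
    subscribe(fn: Effect) {
      subscribers.add(fn);
    },
  };
}

const count = createSignal(0);
let effectRuns = 0;
count.subscribe(() => { effectRuns++; });

// Three writes in the same tick → three effect runs.
count.set(1);
count.set(2);
count.set(3);
// effectRuns === 3: the cost of immediacy is duplicate work.
```

If the effect triggers a re-render, three writes in one event handler mean three renders, which is exactly the duplicate computation described above.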
### Batch Scheduling
- Characteristic: Updates within the same tick are merged, and side effects run only once at the end.
- Pros: Reduces duplicate computation and improves performance.
- Cons: Requires an additional “task buffering” mechanism, which increases mental complexity.
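The "task buffering" mechanism can be sketched with a deduplicating queue and a microtask flush (names are illustrative):

```typescript
// Minimal sketch of batch scheduling: writes enqueue a job, the queue
// dedupes, and one microtask flush runs the effect at the end of the tick.
type Job = () => void;

const queue = new Set<Job>();
let scheduled = false;

function scheduleJob(job: Job) {
  queue.add(job); // Set dedupes: N writes → one queued job
  if (!scheduled) {
    scheduled = true;
    queueMicrotask(flush);
  }
}

function flush() {
  scheduled = false;
  const jobs = [...queue];
  queue.clear();
  jobs.forEach((job) => job());
}

let effectRuns = 0;
const render = () => { effectRuns++; };

// Three writes in the same tick...
scheduleJob(render);
scheduleJob(render);
scheduleJob(render);
// ...but the effect runs only once, after the microtask flush.
```

The mental-model cost is visible here too: immediately after the three calls, `effectRuns` is still `0`, so state and side effects are briefly out of sync.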
### Priority Scheduling
- Characteristic: Tasks are ordered by priority. For example:
  - Updates related to user input → high priority
  - Background data synchronization → low priority
- Pros: Prevents low-value updates from blocking high-value interactions.
- Cons: The implementation is much more complex because the system needs to track both deadlines and priorities.
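A rough sketch of the idea, with two priority levels and a deadline check standing in for a real time-slicing scheduler (illustrative only):

```typescript
// Illustrative priority scheduling: tasks carry a priority, and the
// flush drains high-priority work first, stopping at a deadline.
type Priority = "high" | "low";
interface Task { run: () => void; priority: Priority; }

const tasks: Task[] = [];

function enqueue(run: () => void, priority: Priority) {
  tasks.push({ run, priority });
}

function flush(deadlineMs = Infinity) {
  // Stable sort: high-priority tasks first, insertion order preserved.
  tasks.sort((a, b) =>
    a.priority === b.priority ? 0 : a.priority === "high" ? -1 : 1
  );
  const start = Date.now();
  while (tasks.length > 0 && Date.now() - start < deadlineMs) {
    tasks.shift()!.run();
  }
  // Whatever is left waits for the next flush — which is exactly
  // where the starvation risk for low-priority tasks comes from.
}

const order: string[] = [];
enqueue(() => order.push("sync data"), "low");     // background work
enqueue(() => order.push("handle input"), "high"); // user interaction
flush();
// order: ["handle input", "sync data"]
```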
### Lazy vs. Eager Scheduling
- Lazy: Work is performed only when it is actually needed, such as in pull-based reactivity.
- Eager: Once an update happens, downstream computation is scheduled or invalidated immediately, such as in push-based reactivity.
This difference affects the overall strategy of a system:
- Lazy scheduling is suitable for write-heavy, read-light scenarios: invalidation on write is cheap, and computation is deferred until a value is actually read.
- Eager scheduling is suitable for write-light, read-heavy scenarios.
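The lazy (pull) side can be sketched as a dirty flag that defers computation to the next read (illustrative code, not a specific framework's API):

```typescript
// Illustrative lazy computed: a write only marks the value dirty;
// the actual recomputation happens on the next read.
function createLazyComputed<T>(compute: () => T) {
  let cached!: T;
  let dirty = true;
  return {
    markDirty() { dirty = true; },   // cheap invalidation on write
    get() {
      if (dirty) {                   // lazy part: pay compute cost on read
        cached = compute();
        dirty = false;
      }
      return cached;                 // subsequent reads hit the cache
    },
  };
}

let base = 1;
let computeCount = 0;
const doubled = createLazyComputed(() => { computeCount++; return base * 2; });

base = 5; doubled.markDirty();
base = 7; doubled.markDirty();   // two writes, still zero computations
const value = doubled.get();     // first read computes: 7 * 2 === 14
doubled.get();                   // cached: no recomputation
// computeCount === 1
```

An eager variant would instead recompute (or schedule recomputation) inside `markDirty()`, trading write cost for consistently fast reads.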
## Scheduler Strategy Comparison Flowchart
This diagram shows the different strategy paths from data update to UI update.
## Strategy Comparison Table
| Strategy | Trigger Timing | Core Data Flow | Pros | Cost / Risk | Suitable Scenarios | Common Examples |
|---|---|---|---|---|---|---|
| Synchronous Scheduling (Sync) | Runs immediately after `set()` | write → compute → effect | Intuitive mental model, easy to debug | Duplicate computation, interaction jitter | Minimal pages, low-frequency updates | Small frameworks, teaching PoCs |
| Batch Scheduling (Batch) | Flushes once at the end of the same tick/microtask | write × N → queue → flush | Reduces re-rendering, good performance | Deferred consistency, requires a queue | Form interactions, outer animation layers | React batched updates, Vue job queue |
| Priority Scheduling (Priority) | Slices work by deadline / importance | enqueue(prio) → run until deadline | Does not block high-value interactions | High complexity, possible starvation | Long lists, concurrent rendering | React Concurrent, `startTransition` |
| Lazy | Computes only when read | write → mark-dirty → compute on read | Low write cost | Delay on first read | Write-heavy, read-light scenarios | Signals `memo` / `computed` |
| Eager | Marks downstream work immediately on write | write → mark downstream → flush | Faster reads, clear invalidation | Higher write cost | Write-light, read-heavy scenarios; strongly consistent UI | Solid / Preact Signals |
## How Different Frameworks Implement Scheduling
| Framework | Scheduling Mode | Strategy |
|---|---|---|
| React | Batch + Priority | microtask queue + concurrent rendering |
| Vue 3 | Batch | job queue |
| Solid | Microtask Batch (Eager) | Each update marks dependencies and flushes in batches |
| Signals (Preact / Angular) | Push + Lazy | Dependencies are recomputed only when read |
As we can see, each framework makes different trade-offs in Scheduler design. These trade-offs directly affect both performance optimization and the developer mental model.
## Mapping Atomic Transactions to the Flowchart
In the Atomic Transaction chapter, we had three important scenarios.
1. Success Path: begin → writes → commit
   - Flowchart path: data update → batch scheduling + eager marking → flush → side effects
   - Strategy: Batch + Eager
2. Failure Path: begin → writes → rollback
   - Flowchart path: data update → not flushed yet → rollback → mark downstream as dirty → recompute only on the next read
   - Strategy: Eager marking + Lazy recomputation
3. Read Consistency
When reading values during a transaction, there are two possible approaches:
- Block cross-transaction reads → only stable values are visible before commit.
- Allow reads of temporary values → snapshot isolation is required to avoid exposing half-finished state.
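The second approach can be sketched with a pending-write slot per signal: in-transaction reads see the snapshot value, while outside readers see only committed state (an illustrative sketch, not the article's implementation):

```typescript
// Illustrative snapshot isolation: writes inside a transaction land in a
// pending slot; the committed value is untouched until commit.
class TxSignal<T> {
  private pending: { value: T } | null = null;
  constructor(private committed: T) {}

  write(next: T) { this.pending = { value: next }; }

  // Read inside the transaction: prefer the pending (snapshot) value.
  readInTx(): T { return this.pending ? this.pending.value : this.committed; }

  // Read from outside: only the committed value is ever visible.
  readOutside(): T { return this.committed; }

  commit() {
    if (this.pending) { this.committed = this.pending.value; this.pending = null; }
  }
  rollback() { this.pending = null; } // discard, committed value untouched
}

const balance = new TxSignal(100);
balance.write(40);
// Mid-transaction: inside sees 40, outside still sees 100.
const inside = balance.readInTx();
const outside = balance.readOutside();
balance.commit();
// After commit, both views agree on 40.
```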
### API Mapping
- `signal.set()` → write
- `markStale(node)` → eager marking
- `track()` / `link()` / `unlink()` → dependency tracking
- `queueMicrotask(flush)` → batch flush
- `flush()` → topological recomputation + `runEffects`
- `rollback()` → restore snapshots + mark values as dirty
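The `flush()` step might look like this in miniature, with a precomputed `depth` standing in for a full topological sort (illustrative, not the article's implementation):

```typescript
// Illustrative flush: recompute dirty nodes in dependency (topological)
// order, then run effects once values have settled.
interface Node { depth: number; dirty: boolean; recompute: () => void; }

const dirtyNodes: Node[] = [];
const effects: Array<() => void> = [];

function flush() {
  // Parents (smaller depth) recompute before children that read them.
  dirtyNodes.sort((a, b) => a.depth - b.depth);
  for (const node of dirtyNodes) {
    if (node.dirty) { node.recompute(); node.dirty = false; }
  }
  dirtyNodes.length = 0;
  effects.forEach((run) => run()); // runEffects: effects see settled values
  effects.length = 0;
}

const order: string[] = [];
dirtyNodes.push({ depth: 1, dirty: true, recompute: () => order.push("child") });
dirtyNodes.push({ depth: 0, dirty: true, recompute: () => order.push("parent") });
effects.push(() => order.push("effect"));
flush();
// order: ["parent", "child", "effect"]
```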
## Implementation Alignment: `atomic()` / `transaction()`
In the previous article, I kept the original transaction() implementation so that it would be easier to compare with earlier examples. However, in practice, the logic of transaction() should actually be replaced by atomic().
From the perspective of the code, the API that has complete transaction semantics is atomic():
- Begin: Create a write log and intercept all writes during the transaction.
- Commit: Discard the write log and batch `flushJobs()` at the end of the same tick. This belongs to Batch + Eager.
- Rollback: Restore signal values based on the write log, and mark downstream computed values as dirty, ensuring that cached values are not incorrectly reused before the next read or next successful transaction.
The current transaction() is only equivalent to a batch operation. It lacks rollback, so its semantics do not fully match the previous article.
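The begin/commit/rollback shape described above can be sketched with a write log of old values (an illustrative sketch, not the article's `atomic()` implementation):

```typescript
// Illustrative rollback via a write log: the first write to a signal
// inside the transaction records its old value; on failure, rollback
// restores those values and marks downstream computeds dirty.
interface Sig<T> { value: T; dirtyDownstream: boolean; }

function atomicSketch<T>(
  fn: (write: <V>(sig: Sig<V>, next: V) => void) => T
): T {
  const writeLog = new Map<Sig<any>, any>();
  const write = <V>(sig: Sig<V>, next: V) => {
    if (!writeLog.has(sig)) writeLog.set(sig, sig.value); // log old value once
    sig.value = next;
  };
  try {
    const result = fn(write);
    writeLog.clear(); // commit: log no longer needed; batch flush would run here
    return result;
  } catch (err) {
    // rollback: restore old values and invalidate downstream caches
    for (const [sig, oldValue] of writeLog) {
      sig.value = oldValue;
      sig.dirtyDownstream = true; // recompute only on the next read
    }
    throw err;
  }
}

const count: Sig<number> = { value: 1, dirtyDownstream: false };
try {
  atomicSketch((write) => {
    write(count, 99);
    throw new Error("boom"); // trigger rollback
  });
} catch {}
// count.value restored to 1; downstream marked dirty for lazy recompute
```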
### Suggested Adjustments
- To make `transaction` consistent with the previous semantics, I recommend turning it directly into an alias of `atomic()`:

```typescript
export function transaction<T>(fn: () => T): T;
export function transaction<T>(fn: () => Promise<T>): Promise<T>;
export function transaction<T>(fn: () => T | Promise<T>): T | Promise<T> {
  return atomic(fn);
}
```
- Add a `muted` check to `scheduleJob()` to avoid enqueueing new tasks during rollback:

```typescript
export function scheduleJob(job: Schedulable) {
  if (job.disposed) return;
  if (muted > 0) return; // Do not enqueue jobs during rollback / muted phase
  queue.add(job);
  if (!scheduled && batchDepth === 0) {
    scheduled = true;
    queueMicrotask(flushJobs);
  }
}
```
No other parts need to change. This adjustment simply aligns the implementation behavior with the scheduling strategy described above.
## Scheduler and the Developer Mental Model
An interesting question is:
Do developers need to know that the Scheduler exists?
In React, developers often need to consider that “state updates are not immediately reflected,” so the mental model must include the concept of batching.
In Solid or Signals-based systems, because computation is fine-grained, developers almost do not feel the scheduling mechanism. They can usually focus only on the data itself.
Therefore, we can summarize it this way:
A Scheduler is not only a performance optimization tool. It is also a design decision for developer experience.
## Looking Ahead
More advanced Schedulers may also include:
- Idle Scheduling: Use idle time to process non-urgent tasks.
- Work Stealing: Distribute tasks across threads or CPU cores in multi-threaded / multi-core environments.
- Adaptive Scheduling: Dynamically adjust scheduling behavior based on device performance or user interaction patterns.
These design concepts are very similar to the schedulers in operating systems. The difference is that here, they are applied in the context of frontend reactivity.
## Conclusion
In a reactivity system, the Scheduler plays the role of an “invisible commander.”
It not only determines the upper bound of performance, but also shapes the developer mental model.
- React chooses priority + batching.
- Vue chooses a batched job queue.
- Solid chooses fine-grained eager updates.
Behind the trade-offs of different frameworks, they are all answering the same question:
At what moment should we pay the cost of computation in exchange for immediate UI consistency?
In the next article, we will discuss the memory and graph-management challenges that a Scheduler needs to deal with.
