Ruby has had a concurrency story for years. For most of that time, the story was "threads exist but the GIL means they don't give you parallelism for CPU-bound work, and Fibers are for cooperative scheduling if you want to manage it yourself." Ruby 3 changed the first half of that sentence.
Ractors (introduced in Ruby 3.0, still experimental as of 3.3) give you genuine parallelism: multiple Ractors run on multiple OS threads without the GIL constraining them. Fibers (supercharged by the Fiber::Scheduler interface, introduced in Ruby 3.0 and refined in 3.1) give you async I/O concurrency without threads.
These are different tools for different problems. This article puts them side by side with real code so you can reason about which one belongs in your next design.
The Core Distinction, Precisely
Before the code: get the mental model right.
Fibers are cooperative. A Fiber runs until it explicitly yields. The scheduler decides what runs next. There is still only one OS thread (by default). You get concurrency (multiple things making progress) but not parallelism (multiple things running simultaneously on different CPUs). The payoff is I/O-bound work: while one Fiber waits for a socket, another runs.
Ractors are parallel. Each Ractor runs on its own OS thread, free of the GIL. The payoff is CPU-bound work: genuine multi-core computation. The cost is strict isolation: Ractors cannot share mutable objects, and the rules Ruby enforces to guarantee this are strict enough to break a lot of idioms you're used to.
Fibers:  [F1]--yield--[F2]--yield--[F1]   (one thread, interleaved)
Ractors: [R1]                             (thread 1, core 1)
         [R2]                             (thread 2, core 2, simultaneously)
Neither replaces threads. Both are alternatives with sharper tradeoffs.
Part 1: Fibers
The Basics: Fibers as Resumable Closures
A Fiber is a chunk of code with a suspension point. Fiber.yield suspends it; fiber.resume picks it back up. State is preserved across yields.
fib = Fiber.new do
  a, b = 0, 1
  loop do
    Fiber.yield a
    a, b = b, a + b
  end
end

10.times { print "#{fib.resume} " }
# => 0 1 1 2 3 5 8 13 21 34
This is an infinite Fibonacci generator that produces one value at a time. No array allocated, no recursion limit. The Fiber's stack frame persists between resumes: a and b are just sitting there between calls.
Fibers as Enumerators
Ruby's Enumerator is built on Fibers under the hood. Understanding this makes lazy enumerators click:
# Enumerator.new takes a yielder: the same pattern as Fiber.yield
counter = Enumerator.new do |yielder|
  n = 0
  loop { yielder << n; n += 1 }
end

counter.take(5) # => [0, 1, 2, 3, 4]

counter.lazy
  .select(&:odd?)
  .first(3) # => [1, 3, 5], only computes what's needed
# You can compose lazy pipelines over Fibers manually too
def integers_from(n)
  Fiber.new do
    loop { Fiber.yield n; n += 1 }
  end
end

def fiber_map(fiber, &block)
  Fiber.new do
    loop { Fiber.yield block.call(fiber.resume) }
  end
end

def fiber_select(fiber, &predicate)
  Fiber.new do
    loop do
      val = fiber.resume
      Fiber.yield val if predicate.call(val)
    end
  end
end

source  = integers_from(1)
squares = fiber_map(source) { |n| n * n }
evens   = fiber_select(squares, &:even?)

5.times { print "#{evens.resume} " }
# => 4 16 36 64 100
Each step pulls exactly one value. No intermediate arrays. This is why lazy pipelines are memory-efficient at scale.
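The same pull-one-value-at-a-time behavior shows up in lazy enumerator pipelines, without writing any Fiber code yourself. A quick sketch (the range size is arbitrary):

```ruby
# Eager: materializes two intermediate million-element arrays
# before taking three values
eager = (1..1_000_000).map { |n| n * n }.select(&:even?).first(3)

# Lazy: each stage pulls one value at a time; no intermediate arrays
lazy = (1..1_000_000).lazy.map { |n| n * n }.select(&:even?).first(3)

p eager # => [4, 16, 36]
p lazy  # => [4, 16, 36]
```

Same answer, but the lazy version stops pulling as soon as `first(3)` is satisfied.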
Fiber::Scheduler: The Real Power
Ruby 3.0 introduced the Fiber::Scheduler interface (refined in 3.1): you plug in a custom scheduler that intercepts blocking operations and runs other Fibers while one is waiting. The standard library doesn't ship a scheduler; you implement the interface yourself, or use a gem like async or evt.
Here's a minimal scheduler that makes the interface concrete:
# A stripped-down scheduler; production schedulers sit on epoll/kqueue or io_uring
class MinimalScheduler
  def initialize
    @readable = {} # io => fiber
    @writable = {}
    @timers   = [] # [[fire_at, fiber], ...]
  end

  # Called by Fiber.schedule { }: creates a non-blocking fiber and starts it
  def fiber(&block)
    fiber = Fiber.new(blocking: false, &block)
    fiber.resume
    fiber
  end

  # Intercepts Kernel#sleep inside a scheduled Fiber
  def kernel_sleep(duration)
    @timers << [Process.clock_gettime(Process::CLOCK_MONOTONIC) + duration,
                Fiber.current]
    Fiber.yield # suspend the current fiber; run resumes it later
  end

  # Intercepts IO waits inside a scheduled Fiber
  def io_wait(io, events, timeout)
    @readable[io] = Fiber.current if events & IO::READABLE != 0
    @writable[io] = Fiber.current if events & IO::WRITABLE != 0
    Fiber.yield
    events
  end

  # Ruby calls #close when the thread exits; drain pending work there
  # (a full scheduler also implements block/unblock, io_read, etc.)
  def close
    run
  end

  # The event loop
  def run
    until @readable.empty? && @writable.empty? && @timers.empty?
      now = Process.clock_gettime(Process::CLOCK_MONOTONIC)

      # Fire elapsed timers; collect first, because resuming a fiber
      # can register new timers while we iterate
      due, @timers = @timers.partition { |fire_at, _| fire_at <= now }
      due.each { |_, fiber| fiber.resume if fiber.alive? }

      # Wait for I/O, waking up in time for the earliest timer
      timeout = @timers.map(&:first).min&.-(now) || 0.1
      timeout = [timeout, 0].max
      if @readable.empty? && @writable.empty?
        sleep(timeout) if @timers.any? # only timers pending: plain sleep
      else
        readable, writable = IO.select(
          @readable.keys, @writable.keys, [], timeout
        ) || [[], []]
        readable.each { |io| @readable.delete(io)&.resume }
        writable.each { |io| @writable.delete(io)&.resume }
      end
    end
  end
end
Now wire it up:
scheduler = MinimalScheduler.new
Fiber.set_scheduler(scheduler)

# These three Fibers run concurrently on ONE thread
Fiber.schedule do
  puts "[A] start"
  sleep 0.1 # intercepted: this fiber suspends, others run
  puts "[A] after sleep"
end

Fiber.schedule do
  puts "[B] start"
  sleep 0.05
  puts "[B] after sleep"
end

Fiber.schedule do
  puts "[C] instant"
end

# Ruby drains the scheduler (via #close) when the main thread exits.
# Output:
# [A] start
# [B] start
# [C] instant
# [B] after sleep   <- B's timer fires first (0.05s)
# [A] after sleep   <- A's timer fires second (0.1s)
One thread. Three Fibers making progress "simultaneously." The scheduler is the traffic cop.
Fibers for Concurrent HTTP Requests (Real Example)
Using the async gem, which ships a production-grade scheduler:
# gem 'async'
require 'async'
require 'async/http/internet'
require 'json'

urls = %w[
  https://api.github.com/users/rails
  https://api.github.com/users/matz
  https://api.github.com/users/tenderlove
]

results = {}

Async do |task|
  internet = Async::HTTP::Internet.new

  tasks = urls.map do |url|
    task.async do
      response = internet.get(url)
      body = response.read
      results[url] = JSON.parse(body)["public_repos"]
    end
  end

  tasks.each(&:wait)
  internet.close
end

results.each { |url, repos| puts "#{url.split('/').last}: #{repos} repos" }
All three HTTP requests fire concurrently. The Fiber scheduler switches between them as each one waits on I/O. Total wall-clock time is roughly the slowest single request, not the sum of all three.
When Fibers win:
- Many concurrent I/O operations (HTTP, DB, Redis, file)
- You want async without threads (no locking, no race conditions on shared state)
- Streaming pipelines, generators, lazy evaluation
When Fibers don't help:
- CPU-bound computation: one thread means one core
- Work that blocks without yielding (C extensions that don't release the GIL)
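To make the CPU-bound caveat concrete: two Fibers doing pure computation on one thread take about as long as the same work run back to back, because resume just hands the single thread over. A minimal sketch (the loop size is arbitrary):

```ruby
require 'benchmark'

work = -> { 2_000_000.times { |i| i * i } }

# Back-to-back baseline on one thread
sequential = Benchmark.realtime { 2.times { work.call } }

# Two Fibers: still one thread, so the CPU work is merely serialized
f1 = Fiber.new { work.call }
f2 = Fiber.new { work.call }
fibered = Benchmark.realtime do
  f1.resume
  f2.resume
end

puts format("sequential: %.2fs, fibers: %.2fs", sequential, fibered)
# The two numbers come out roughly equal: no parallelism, no speedup
```

If the work were I/O waits instead of arithmetic, a scheduler could overlap them; arithmetic has nothing to overlap.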
Part 2: Ractors
The Basics: Isolated Parallel Workers
A Ractor is an actor-model execution unit. It has its own heap, runs on its own thread, and communicates via message passing. The GIL does not apply between Ractors.
# Simple parallel computation
r = Ractor.new do
  sum = 0
  1_000_000.times { |i| sum += i }
  sum
end

puts r.take # => 499999500000
# Runs on a separate thread while main continues
Multiple Ractors running in parallel:
workers = 4.times.map do |i|
  Ractor.new(i) do |id|
    # Each Ractor computes its slice
    start  = id * 250_000
    finish = start + 250_000
    sum = 0
    (start...finish).each { |n| sum += n }
    sum
  end
end

total = workers.sum(&:take)
puts total # => 499999500000 (same answer, ~4x faster on 4 cores)
The Isolation Rules: Where Most Code Breaks
Ractors cannot share mutable objects. This is enforced at runtime, not compile time. The rules:
# ✅ Immutable objects can be shared freely
CONSTANT = "hello".freeze
r = Ractor.new { puts CONSTANT } # fine: a frozen string is shareable

# ✅ Value types are always shareable
r = Ractor.new { puts 42 + Math::PI } # integers, floats, symbols: fine

# ❌ The block cannot capture outer mutable objects
shared_hash = { count: 0 }
r = Ractor.new do
  shared_hash[:count] += 1 # Ractor::IsolationError at Ractor.new time!
end

# ✅ Pass the data as an argument instead: it is deep-copied, so each
# side has its own private object
data = { count: 0 }
r = Ractor.new(data) do |h|
  h[:count] += 1 # mutates the Ractor's copy only
  h
end
result = r.take
puts result[:count] # => 1
puts data[:count]   # => 0: the main Ractor's original is untouched
Move semantics are the hardest part. Arguments to Ractor.new are deep-copied; for a large object you can skip the copy by sending it with move: true, which transfers ownership to the receiving Ractor:
payload = [1, 2, 3, 4, 5]

r = Ractor.new do
  arr = Ractor.receive
  arr.map { |n| n ** 2 }
end

r.send(payload, move: true) # transfer ownership; no copy is made
puts r.take.inspect # => [1, 4, 9, 16, 25]

# payload is now inaccessible from the main Ractor
begin
  puts payload.inspect # => Ractor::MovedError
rescue Ractor::MovedError => e
  puts "As expected: #{e.message}"
end
This forces you to think about ownership, which is uncomfortable if you're used to Ruby's free-for-all sharing. It's also what makes Ractor-based code genuinely thread-safe without locks.
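A third option, when the other Ractor only needs to read the data, is Ractor.make_shareable: it deep-freezes the object graph and marks it shareable, so every Ractor can reference it directly with no copy and no move. A small sketch (the CONFIG name is mine):

```ruby
# Deep-freeze a config hash so any Ractor can read it by reference
CONFIG = Ractor.make_shareable({ retries: 3, hosts: ["a.example", "b.example"] })

puts Ractor.shareable?(CONFIG)    # => true
puts CONFIG[:hosts].first.frozen? # => true, nested strings were frozen too

# Shareable constants are readable from any Ractor
r = Ractor.new { CONFIG[:hosts].first }
puts r.take # => "a.example"
```

Copy, move, or deep-freeze: those are the three ways data crosses a Ractor boundary.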
Message Passing Patterns
Ractors communicate via send/receive (push-based) or yield/take (pull-based):
# Push model: the supervisor sends work to workers
workers = 3.times.map do
  Ractor.new do
    loop do
      job = Ractor.receive # blocks until a message arrives
      break if job == :stop
      result = job * job
      Ractor.yield result # makes result available to anyone who calls .take
    end
  end
end

# Distribute work
jobs = [10, 20, 30, 40, 50, 60, 70, 80, 90]
jobs.each_with_index do |job, i|
  workers[i % workers.size].send(job)
end
workers.each { |w| w.send(:stop) }

# Collect results (order not guaranteed)
results = workers.flat_map do |w|
  collected = []
  begin
    # Drain the worker's yielded values; the final take returns the
    # block's return value (nil after the break), which we drop
    loop { collected << w.take }
  rescue Ractor::ClosedError
  end
  collected.compact
end

puts results.sort.inspect
# => [100, 400, 900, 1600, 2500, 3600, 4900, 6400, 8100]
A Worker Pool with a Shared Queue
The most common Ractor pattern: a pool of workers pulling from a central pipeline.
# A pipeline Ractor: buffers incoming jobs and re-yields them to
# whichever worker pulls next
PIPELINE = Ractor.new do
  loop { Ractor.yield Ractor.receive }
end

N_WORKERS = 4

workers = N_WORKERS.times.map do
  Ractor.new(PIPELINE) do |pipeline|
    loop do
      job = pipeline.take # pull the next job from the pipeline
      break if job == :done
      # CPU-intensive work: genuinely parallel across workers
      result = {
        input: job,
        output: job.chars.sort.join, # simulate work
        worker: Ractor.current.inspect,
      }
      Ractor.yield result
    end
  end
end

# Feed jobs
words = %w[banana apple cherry date elderberry fig grape]
words.each { |w| PIPELINE.send(w) }
N_WORKERS.times { PIPELINE.send(:done) }

# Collect in arrival order; drop each worker's final nil return value
results = []
begin
  loop do
    _ractor, result = Ractor.select(*workers)
    results << result if result
  end
rescue Ractor::ClosedError
  # all workers have terminated and been fully drained
end

results.each { |r| puts "#{r[:input]} → #{r[:output]} (#{r[:worker]})" }
Ractor.select is the key β it blocks until any of the given Ractors has a value ready, then returns whichever fires first. This is how you avoid blocking on a slow worker when faster ones have results ready.
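Stripped of the pool machinery, Ractor.select's behavior fits in a few lines; a minimal sketch:

```ruby
fast = Ractor.new { :fast_result }
slow = Ractor.new { sleep 0.2; :slow_result }

# Blocks until ANY of the given Ractors has a value; the fast one wins
_ractor, value = Ractor.select(fast, slow)
puts value # => :fast_result

# A second select picks up the remaining Ractor
_ractor, value = Ractor.select(slow)
puts value # => :slow_result
```

It returns a `[ractor, value]` pair, so you always know which worker produced the result.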
Ractors and Classes: Where Things Get Thorny
Most Ruby classes are not Ractor-safe because they hold mutable class-level state. This is the thing that will bite you most in practice:
# ❌ A class with mutable state breaks in Ractors
class Counter
  @@count = 0 # class variable: mutable, shared
  def self.increment = @@count += 1
  def self.value = @@count
end

# The Ractor::IsolationError is raised inside the Ractor; taking the
# result re-raises it wrapped in a Ractor::RemoteError
r = Ractor.new { Counter.increment }

# ✅ Approach 1: Pass data explicitly, return results
r = Ractor.new(0) do |count|
  count + 1
end
puts r.take # => 1

# ✅ Approach 2: Freeze the class's shareable parts
class Config
  DEFAULTS = {
    timeout: 30,
    retries: 3,
  }.freeze # a frozen Hash of frozen values is shareable

  def self.defaults = DEFAULTS
end

r = Ractor.new { Config.defaults[:timeout] }
puts r.take # => 30: works because DEFAULTS is deeply frozen

# ✅ Approach 3: Ractor-local state
# Each Ractor has its own heap, so objects created inside the Ractor
# are private to it by definition
r = Ractor.new do
  @local_cache = {} # this lives only in this Ractor's heap
  @local_cache[:key] = "value"
  @local_cache[:key]
end
puts r.take # => "value"
Practical Ractor: Parallel File Processing
require 'json'

# Process a directory of JSON files in parallel
files = Dir.glob("data/*.json")

workers = files.map do |path|
  # Arguments are deep-copied into the Ractor; freezing the path first
  # would make it shareable and skip the copy, but isn't required
  Ractor.new(path) do |file_path|
    raw  = File.read(file_path)
    data = JSON.parse(raw)
    # Simulate per-file processing
    {
      file: file_path,
      count: data.length,
      sample: data.first,
    }
  end
end

# Collect as workers finish (whichever finishes first)
until workers.empty?
  done, result = Ractor.select(*workers)
  workers.delete(done)
  puts "#{result[:file]}: #{result[:count]} records"
end
Without Ractors, Dir.glob + processing would be sequential. With 4 Ractors on a 4-core machine, you're parsing 4 files simultaneously.
Head-to-Head Benchmarks
Let's measure both approaches on their native problem types.
I/O-Bound: 50 Concurrent HTTP Requests
require 'benchmark'
require 'net/http'
require 'async'
require 'async/http/internet'

URLS = Array.new(50) { "https://httpbin.org/delay/0.1" }

# Approach 1: Sequential (baseline)
sequential_time = Benchmark.realtime do
  URLS.each do |url|
    Net::HTTP.get(URI(url))
  end
end

# Approach 2: Threads
threads_time = Benchmark.realtime do
  threads = URLS.map do |url|
    Thread.new { Net::HTTP.get(URI(url)) }
  end
  threads.each(&:join)
end

# Approach 3: Fibers with the Async scheduler
fibers_time = Benchmark.realtime do
  Async do |task|
    internet = Async::HTTP::Internet.new
    tasks = URLS.map { |url| task.async { internet.get(url).read } }
    tasks.each(&:wait)
    internet.close
  end
end

puts "Sequential: #{sequential_time.round(2)}s"
puts "Threads:    #{threads_time.round(2)}s"
puts "Fibers:     #{fibers_time.round(2)}s"

# Typical output (50 requests × 100ms each):
# Sequential: 5.21s
# Threads:    0.38s  <- per-thread overhead
# Fibers:     0.12s  <- near the theoretical minimum (one connection pool)
Fibers beat threads for high-concurrency I/O because they have lower overhead per "concurrent unit" and can share a single connection pool efficiently.
CPU-Bound: Parallel Prime Sieve
require 'benchmark'

def primes_in_range(start, finish)
  sieve = Array.new(finish + 1, true)
  sieve[0] = sieve[1] = false
  (2..Math.sqrt(finish)).each do |i|
    if sieve[i]
      (i * i..finish).step(i) { |j| sieve[j] = false }
    end
  end
  (start..finish).select { |n| sieve[n] }
end

RANGES = [
  [2, 500_000],
  [500_001, 1_000_000],
  [1_000_001, 1_500_000],
  [1_500_001, 2_000_000],
]

# Approach 1: Sequential
sequential_time = Benchmark.realtime do
  results = RANGES.map { |s, e| primes_in_range(s, e) }
  puts "Sequential primes found: #{results.sum(&:length)}"
end

# Approach 2: Threads (GIL-constrained: won't parallelize Ruby bytecode)
threads_time = Benchmark.realtime do
  threads = RANGES.map do |s, e|
    Thread.new { primes_in_range(s, e) }
  end
  results = threads.map(&:value)
  puts "Threads primes found: #{results.sum(&:length)}"
end

# Approach 3: Ractors (genuine parallelism)
ractors_time = Benchmark.realtime do
  workers = RANGES.map do |s, e|
    Ractor.new(s, e) { |start, finish| primes_in_range(start, finish) }
  end
  results = workers.map(&:take)
  puts "Ractors primes found: #{results.sum(&:length)}"
end

puts "\nSequential: #{sequential_time.round(3)}s"
puts "Threads:    #{threads_time.round(3)}s (should be about the same as sequential)"
puts "Ractors:    #{ractors_time.round(3)}s (should be ~1/4 on 4 cores)"

# Typical output on a 4-core machine:
# Sequential primes found: 148933
# Threads primes found: 148933
# Ractors primes found: 148933
#
# Sequential: 1.847s
# Threads:    1.831s  <- GIL: no speedup
# Ractors:    0.512s  <- ~3.6x speedup (real parallelism)
This is the number that matters. Threads give you zero speedup on CPU-bound Ruby work because the GIL serializes bytecode execution. Ractors give you near-linear scaling with cores.
The Comparison Table
| | Fibers | Ractors |
|---|---|---|
| Parallelism | No (cooperative, 1 thread) | Yes (preemptive, N threads) |
| Use case | I/O-bound concurrency | CPU-bound parallelism |
| Shared state | Freely shared (same heap) | Forbidden (isolated heaps) |
| Communication | Direct (same memory) | Message passing only |
| Error handling | Exceptions propagate normally | Ractor errors are isolated |
| Maturity | Stable | Experimental (Ruby 3.x) |
| Ecosystem | async, evt gems | Limited; most gems not Ractor-safe |
| Overhead | Very low (KB per fiber) | Higher (full OS thread) |
| Debugging | Standard tools work | Limited tooling |
| When it breaks | Blocking C extensions | Any mutable shared state |
Combining Both: Parallel Workers, Each with Async I/O
The really interesting architecture is Ractors for parallelism at the top level, with each Ractor using a Fiber scheduler internally for its I/O work. This gives you both:
# Each Ractor handles a batch of URLs concurrently via Fibers;
# multiple Ractors run in parallel on multiple cores
require 'async'
require 'async/http/internet'
require 'json'

URL_BATCHES = [
  %w[https://api.github.com/users/rails https://api.github.com/users/matz],
  %w[https://api.github.com/users/tenderlove https://api.github.com/users/dhh],
].map { |batch| batch.freeze }.freeze

workers = URL_BATCHES.map do |batch|
  Ractor.new(batch) do |urls|
    # Inside this Ractor: the Async scheduler multiplexes the I/O
    results = {}
    Async do |task|
      internet = Async::HTTP::Internet.new

      fetches = urls.map do |url|
        task.async do
          response = internet.get(url)
          body = JSON.parse(response.read)
          [url, body["public_repos"]]
        end
      end

      fetches.each do |f|
        url, repos = f.wait
        results[url] = repos
      end

      internet.close
    end
    results
  end
end

all_results = workers.map(&:take).reduce(:merge)
all_results.each do |url, repos|
  puts "#{url.split('/').last}: #{repos} repos"
end
Each Ractor runs on its own core. Within each Ractor, Fibers handle concurrent HTTP. You get horizontal scaling (Ractors) and I/O multiplexing (Fibers) simultaneously.
Note the freeze calls. Arguments to Ractor.new are deep-copied either way; deep-frozen objects are shareable, which lets Ruby pass them by reference and skip the copy (and shareability is mandatory when a Ractor reads a constant). The freeze above only freezes the arrays, not the strings inside them; the stdlib's Ractor.make_shareable deep-freezes the whole graph. In practice you'll build helpers in the same spirit:
module RactorSafe
  # A hand-rolled deep freeze; Ractor.make_shareable does the same job
  def self.freeze_deep(obj)
    case obj
    when Hash
      obj.transform_values { |v| freeze_deep(v) }.freeze
    when Array
      obj.map { |v| freeze_deep(v) }.freeze
    when String
      obj.frozen? ? obj : obj.dup.freeze
    else
      obj.frozen? ? obj : obj.freeze
    end
  end
end

payload = RactorSafe.freeze_deep({ urls: ["http://example.com"], timeout: 30 })
r = Ractor.new(payload) { |p| p[:timeout] }
puts r.take # => 30
What's Still Broken with Ractors (Honest Assessment)
Ractors are marked experimental for a reason. As of Ruby 3.3:
# Most stdlib classes are not Ractor-safe
require 'date'
r = Ractor.new { Date.today } # Ractor::IsolationError in some versions
# Logger is not Ractor-safe
require 'logger'
logger = Logger.new($stdout)
r = Ractor.new(logger) { |l| l.info("hello") } # IsolationError
# ERB is not Ractor-safe
# OpenSSL contexts are not Ractor-safe
# Most ActiveRecord is emphatically not Ractor-safe
The practical upshot: Ractors work well for pure computation with simple data types. They break badly when you reach for anything that has mutable class-level state, which is most of the Ruby ecosystem.
The path forward is gems marking themselves as Ractor-safe and Ruby's standard library getting audited. It's happening, slowly. Ractors today are best suited to isolated computation pipelines where you control the code, not general-purpose Rails-style applications.
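Ractor.shareable? is the audit tool for this: before handing anything to a Ractor by reference, check it. A quick sketch of what passes and what doesn't:

```ruby
# Deeply frozen object graphs are shareable; anything mutable is not
puts Ractor.shareable?(42)                 # => true (value types always pass)
puts Ractor.shareable?(:sym)               # => true
puts Ractor.shareable?(+"text")            # => false (mutable string)
puts Ractor.shareable?("text".freeze)      # => true
puts Ractor.shareable?({ k: +"v" }.freeze) # => false, the value is still mutable
puts Ractor.shareable?(Ractor.make_shareable({ k: "v" })) # => true
```

The frozen-hash-with-mutable-value case is the one that surprises people: shareability is a property of the whole object graph, not the top-level object.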
Decision Guide
Use Fibers when:
- You're doing many concurrent network calls, DB queries, or file operations
- You want async without the complexity of thread synchronization
- You're building a streaming pipeline or lazy generator
- Your codebase uses gems that aren't Ractor-safe (which is most gems)
Use Ractors when:
- You have genuinely CPU-bound work (parsing, compression, cryptography, simulation)
- You control the full call stack and can audit it for Ractor-safety
- You need to saturate all available cores, not just improve I/O throughput
- You're building a new system, not retrofitting an existing Rails app
Use Threads when:
- You need Ractor-like parallelism but can't freeze your data (legacy code)
- You're wrapping a C extension that releases the GIL (SQLite, some crypto libs)
- The ergonomics of Ractors aren't worth it for your team yet
The honest answer for most production Ruby applications in 2024: Fibers via async for I/O, threads for the rare parallelism case, and Ractors for isolated computation services where you own the entire stack. Ractors are the future β they're just not universally the present yet.