You're 15 minutes into an Amazon coding round. You've identified the pattern, your solution handles the examples, you're about to submit. Then the interviewer changes a constraint. Sorted input is now unsorted. The complexity ceiling drops from O(n log n) to O(n). Your original solution collapses, and the rest of the round is now about whether you can reconstruct the approach from the invariant rather than from memory.
That moment, the constraint shift after the working solution, is where the Bar Raiser's evaluation lives. Most prep treats it as a bonus question. The Bar Raiser treats it as the round.
TL;DR: Amazon's Bar Raiser is an independent interviewer with veto power who asks the same kind of problems as a regular round but probes the follow up much harder. Two things make Amazon prep different from Google or Meta prep: the pattern coverage is broader (eleven plus pattern families vs. six to eight), and the round's evaluation centres on whether you can adapt your solution after a constraint changes, not on whether you can produce the optimal solution first.
What the Bar Raiser is, in one paragraph
Amazon's Bar Raiser is an interviewer pulled from outside the hiring team. Their job is to evaluate the candidate against Amazon's company wide hiring bar, independent of how badly the team wants to fill the seat. They can veto a hire that everyone else approved. They can also push a borderline candidate over the line if they see genuine depth.
In practice, you don't know which of your interviewers is the Bar Raiser. Amazon keeps that ambiguous through the loop. What you do know is that at least one round will be a Bar Raiser round, and that round usually includes a coding problem.
The problem itself looks the same as any other Amazon technical screen. The difference is what happens after you produce a working answer. A standard interviewer often moves on once you've solved it. A Bar Raiser keeps going. They'll change a constraint, ask you to optimise, ask why your solution is correct (not just whether it works), or walk you through the failure case you didn't test.
The initial solution gets you to the conversation. The follow up is the conversation.
Why Amazon's pattern coverage is wider
Every FAANG company has favourite pattern categories. Google leans heavily on predicate search (binary search on the answer space). Meta concentrates on sliding window and design. Amazon doesn't have a tight central focus the way those two do. Its problem set spans counting, fixed and variable sliding window, prefix sum, LRU style design, randomised set design, binary search and its 2D / staircase variants, queue design, and backtracking.
That's eleven plus pattern families versus roughly seven or eight for Google and six or seven for Meta.
What this means for prep is concrete: the strategy of going deep on two or three pattern families and hoping the round lands in your zone has a lower hit rate at Amazon. A Google candidate who deeply understands predicate search, counting, and graphs covers a meaningful fraction of what they'll see. An Amazon candidate who goes equally deep on three families has blind spots across the other eight.
The Bar Raiser amplifies this. Because they're not bound to the hiring team's domain, they can pull a problem from any of those families. If the round happens to land on a category you skipped, the safety net of "the team usually asks X" doesn't catch you.
The broader your pattern coverage, the smaller the chance any single Bar Raiser pick puts you in unfamiliar territory. Going deep matters too, but at Amazon the breadth axis carries more weight than at the narrower companies.
What the follow up is testing
The interviewer isn't asking the follow up to be cruel. They're testing whether you understood the mechanism of your solution or just the solution itself. If your initial answer was a recalled template for the problem, the constraint change exposes that. If your initial answer was constructed from an invariant, the constraint change is something you can adapt to.
Strong candidates optimise from understanding. Weak candidates optimise from memory.
A worked example makes this concrete. LRU Cache is one of Amazon's most reliably tested problems. The standard solution composes a hash map (O(1) key lookup) with a doubly linked list (O(1) insert and removal at any node when you have the node reference). The hash map maps key to list node. Operations look like:
```python
class ListNode:
    def __init__(self, key: int = 0, value: int = 0):
        self.key = key
        self.value = value
        self.prev = None
        self.next = None


class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.map = {}            # key -> ListNode
        self.head = ListNode()   # sentinel: most recently used end
        self.tail = ListNode()   # sentinel: least recently used end
        self.head.next = self.tail
        self.tail.prev = self.head

    def get(self, key: int) -> int:
        if key not in self.map:
            return -1
        node = self.map[key]
        self._remove(node)
        self._add_to_front(node)
        return node.value

    def put(self, key: int, value: int) -> None:
        if key in self.map:
            self._remove(self.map[key])
        node = ListNode(key, value)
        self._add_to_front(node)
        self.map[key] = node
        if len(self.map) > self.capacity:
            lru = self.tail.prev   # the real LRU node sits just before the tail sentinel
            self._remove(lru)
            del self.map[lru.key]

    def _remove(self, node: ListNode) -> None:
        # Unlink in O(1); possible because we hold the node reference.
        node.prev.next = node.next
        node.next.prev = node.prev

    def _add_to_front(self, node: ListNode) -> None:
        # Splice in right after the head sentinel (most recently used).
        node.prev = self.head
        node.next = self.head.next
        self.head.next.prev = node
        self.head.next = node
```
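A quick behavioural check, with example values of my own rather than from any interview prompt:

```python
cache = LRUCache(2)
cache.put(1, 1)
cache.put(2, 2)
assert cache.get(1) == 1    # touching key 1 makes it most recently used
cache.put(3, 3)             # over capacity: evicts key 2, the LRU entry
assert cache.get(2) == -1   # key 2 is gone
assert cache.get(3) == 3
```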
That's the answer the Bar Raiser expects you to land. Now the follow up: each entry has a TTL. Reads on an expired entry must return -1. The cache must still operate at expected O(1) per call.
The candidate who memorised LRU starts redesigning from zero. The candidate who built it from the invariant asks one question first: which invariant did the doubly linked list maintain, and does that invariant still hold under TTL?
The list maintained an ordering invariant: nodes near the head were used more recently than nodes near the tail. Eviction removed the tail because tail was the least recently used. With TTL, the eviction criterion changes from "least recently used" to "expired or least recently used." The list ordering by recency is still useful for the second clause, but the first clause needs a different signal. Two choices land:
- Lazy expiry: on `get`, if the node's expiry timestamp has passed, treat it as a miss, remove the node, return -1. On `put`, when the cache is full, evict the tail as before. Simple, and keeps the data structure unchanged (sketched below). Worst case has expired entries lingering until they're touched.
- Eager expiry: maintain a second ordered structure keyed by expiry time (a min heap or a second linked list ordered by expiry). On every `get` or `put`, sweep expired entries off the front of the expiry structure. More memory, tighter behaviour under memory pressure.
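Here's a minimal sketch of the lazy expiry option, building on the `LRUCache` above. The `ttl_seconds` parameter and the use of `time.monotonic()` are my illustration choices; the follow up only says entries carry a TTL:

```python
import time

class TTLLRUCache(LRUCache):
    """Lazy expiry: the recency list is untouched; expiry is checked on access."""

    def put(self, key: int, value: int, ttl_seconds: float) -> None:
        super().put(key, value)
        # Stamp the node with its expiry time. The recency invariant is unchanged.
        self.map[key].expires_at = time.monotonic() + ttl_seconds

    def get(self, key: int) -> int:
        if key not in self.map:
            return -1
        node = self.map[key]
        if time.monotonic() >= node.expires_at:
            # An expired entry reads as a miss; reclaim the slot on the spot.
            self._remove(node)
            del self.map[key]
            return -1
        # Still live: same recency bookkeeping as before.
        return super().get(key)
```

The diff against the original is one timestamp and one predicate, which is the shape of answer the follow up is fishing for: the recency ordering survived, and only the eviction criterion gained a clause.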
The Bar Raiser doesn't necessarily care which one you pick. They care whether you can articulate which invariant you're preserving and which one you're modifying. The candidate who answers "the recency ordering still holds, I'm adding an eviction predicate that checks expiry first" has demonstrated something about reasoning that solving twenty more problems wouldn't have.
The three signals the round is graded on
Three threads run through what separates a "hire" rating from a "no hire" in a Bar Raiser coding round, and they correspond to the three places the round lives.
Clean reasoning under loose constraints. The problem statement is often deliberately under-specified. Inputs might or might not be sorted. Sizes might be zero. Negative numbers might appear. The candidate who immediately asks clarifying questions, writes the assumptions down, and only then starts coding signals the kind of structured thinking Amazon values. Jumping into code without defining the problem space reads as someone operating from pattern memory rather than from understanding.
Adaptation across the follow up. This is the LRU + TTL example above. The follow up isn't a new problem. It's the same problem with a constraint flipped. The candidate who can isolate which part of the previous solution depended on the changed constraint passes. The candidate who scraps and restarts shows that the previous solution was lookup, not construction.
The follow up is usually testing whether your first solution was constructed or recalled.
Articulation of why. Bar Raisers listen for the explanation that goes past "I'm using a hash map here." They want to hear what invariant the hash map maintains, what would break with an array, what tradeoff you accepted. A candidate who can talk through the load bearing reasoning is a candidate who would raise the team's average performance, which is the literal evaluation Amazon writes down.
The four mistakes the round punishes
These are not coding mistakes. They're prep mistakes that compound under the format.
- Solving silently. You finish in fifteen minutes and present the answer. The Bar Raiser has nothing to evaluate beyond the output. Your reasoning left no observable trace. Narrate the structural choices as you go: why a hash map and not a sorted array, why an `O(n)` first pass instead of trying for `O(log n)`, what edge case you'll come back to. The round is graded on observable reasoning, not just output.
- Chasing the optimal solution before the working one. Some candidates spend twenty minutes thinking before writing a line, hunting for the optimal solution out of the gate. Then the follow up arrives at minute thirty five and there's no time for it. Get a correct `O(n^2)` or `O(n log n)` version working first, confirm edge cases, then improve when the Bar Raiser asks. The follow up is part of the round's grade. Running out of time on it is a soft fail you didn't have to take.
- Treating the follow up as a new problem. When the constraint changes, the instinct is to discard the previous solution and start fresh. That's almost always wrong. The follow up is testing whether you understand the structural relationship between the constraint and your approach. Ask which part of your solution depended on the constraint that just changed, then modify only that part. If the original used binary search because the input was sorted and the follow up removes the sort guarantee, you change the search mechanism, not the algorithm (see the sketch after this list).
- Skipping edge cases until prompted. Empty input, single element, duplicate values, integer overflow boundaries. Walking through two of these unprompted before the interviewer asks signals thoroughness and pre-empts the follow up that targets the case you skipped. Reactive edge case handling reads as a candidate who optimises for the example and not for the input space.
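To make the binary search example in the third item concrete, here's a sketch using a hypothetical pair-sum problem (my stand-in, not a problem the article names). The algorithm, scan and look up the complement, survives the constraint change; only the lookup mechanism is swapped:

```python
import bisect
from typing import List, Optional, Tuple

# Constraint: nums is sorted -> binary search the complement, O(n log n).
def pair_with_sum_sorted(nums: List[int], target: int) -> Optional[Tuple[int, int]]:
    for i, x in enumerate(nums):
        j = bisect.bisect_left(nums, target - x, i + 1)
        if j < len(nums) and nums[j] == target - x:
            return (x, nums[j])
    return None

# Follow up: sort guarantee removed -> same scan, hash set lookup, O(n).
def pair_with_sum_unsorted(nums: List[int], target: int) -> Optional[Tuple[int, int]]:
    seen = set()
    for x in nums:
        if target - x in seen:
            return (target - x, x)
        seen.add(x)
    return None
```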
What this changes about how to prep
The standard FAANG prep approach is to go deep on two or three pattern families and hope the interview lands in their zone. That works at companies with narrower coverage. For Amazon's breadth it leaves too many uncovered surfaces, and for the Bar Raiser in particular it leaves you with no recovery path if the chosen pattern isn't one you went deep on.
Two adjustments help.
The first is interleaving over blocking. Block practice is a week on sliding window, a week on dynamic programming, a week on graphs. Interleaved practice rotates between pattern families inside the same study session. Cognitive science research on interleaved practice consistently shows that mixing problem types during study improves your ability to identify which pattern applies to a new problem. Mixed practice is harder while you're doing it (you can't lean on the priming of "this week is sliding window, every problem is sliding window") and that's the point. The harder identification work during practice is the same identification work the Bar Raiser's follow up will require.
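If you want to mechanise the rotation, a small script is enough. The pattern list below is a hypothetical subset of Amazon's families, not a prescribed syllabus:

```python
import random

PATTERNS = [
    "sliding window", "prefix sum", "counting", "binary search",
    "LRU design", "queue design", "backtracking",
]

def interleaved_session(num_problems: int, seed: int = None) -> list:
    """Pick a practice order that never repeats a family back to back,
    so each problem forces fresh pattern identification."""
    rng = random.Random(seed)
    order = []
    for _ in range(num_problems):
        candidates = [p for p in PATTERNS if not order or p != order[-1]]
        order.append(rng.choice(candidates))
    return order

print(interleaved_session(6, seed=7))
```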
The second is practising the follow up explicitly. After every problem you solve, ask: what's the constraint change that would invalidate this solution most? Implement the modified version. The constraint change is the rep that builds the adaptation skill, and adaptation is what the Bar Raiser scores. If LRU Cache is the practice problem, "add TTL" is the rep. If sliding window with at most K distinct characters is the problem, "K varies based on the substring contents" is the rep.
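For reference, here's the base version of that second rep, at most K distinct characters, in a standard sliding window sketch; the "K varies" variant is the exercise and deliberately not shown:

```python
from collections import defaultdict

def longest_with_at_most_k_distinct(s: str, k: int) -> int:
    """Variable-size window: extend the right edge, shrink from the left
    whenever the window holds more than k distinct characters."""
    counts = defaultdict(int)
    left = best = 0
    for right, ch in enumerate(s):
        counts[ch] += 1
        while len(counts) > k:
            counts[s[left]] -= 1
            if counts[s[left]] == 0:
                del counts[s[left]]
            left += 1
        best = max(best, right - left + 1)
    return best
```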
You can do both of these without any platform: a pattern list, a notebook of constraint flips per problem, and a habit of running the second pass before moving on. The reason most prep skips it is that there's no immediate signal it's working. The signal arrives in the round when the Bar Raiser shifts a constraint and you reach for the invariant instead of the template.
If you're preparing for Amazon and your practice still stops after finding the optimal solution, you're missing the part the Bar Raiser actually evaluates.
I wrote a longer breakdown covering:
- Amazon's highest frequency pattern families
- how the Bar Raiser evaluates coding rounds
- and the follow up drills that build adaptation under constraint shifts
The day of the loop
A few logistical things candidates often get wrong, worth noting because they're outside your technical preparation but inside the Bar Raiser's evaluation surface.
The onsite loop is typically four or five rounds across a single day or consecutive virtual sessions. One is system design for senior candidates. Two or three are coding rounds. One is behavioural, focused on the Leadership Principles. The Bar Raiser participates in at least one of these, and you won't know which until after the process ends.
Coding rounds are about forty five minutes. Five to ten minutes go to introductions and the problem statement. Twenty five to thirty minutes go to the initial solution. The last ten to fifteen are where the follow up happens. If your first solution takes thirty five minutes because you were reaching for the optimal answer from the start, you've already lost the follow up window.
Each round is evaluated independently. A weak round one doesn't doom you if rounds two and three are strong, and the Bar Raiser considers performance across the loop. Don't spend the breaks replaying the round that just ended. Reset and walk into the next one.
The debrief happens without you. All interviewers, including the Bar Raiser, meet afterward. If the hiring manager wants to extend an offer and the Bar Raiser objects, the Bar Raiser's veto stands.
Three months from now
The Bar Raiser round isn't a fundamentally different format. It's the same problem solving test with a higher evaluation standard and broader pattern coverage. Two things move the needle for it: pattern breadth across families that gets exercised by interleaved practice, and the ability to adapt your solution when the constraint shifts in the follow up.
I wrote a longer version with the full pattern breakdown across Amazon's tested families on my own blog, including how the loop's other rounds factor into the Bar Raiser's overall judgement.
What's a follow up question from a real interview that changed how you think about the problem itself, not just the solution?