Bypassing Akamai v3 sensor_data with TLS in 2026 — why the deobfuscator is a trap

Originally published by Dev.to

I have spent the last two years staring at Akamai's bot manager. Specifically the _abck cookie, the bm_sz cookie, and the giant base64-looking string that ships up as sensor_data to /_bm/_data or whichever path the integrator picked this month. If you have ever opened DevTools on a Nike, Walmart, Target, or LVMH-owned site and watched the network panel scroll past five or six POSTs with that opaque payload, you know exactly what I mean.

For a long time the only "real" answer was: rip the bot.js, run it through a deobfuscator, find the bmak object, recover the VM, rebuild the sensor_data generator, validate against a pixel/challenge round-trip, then keep up with the weekly script swap. Impressive work. Also enormous, fragile, slow. Every time Akamai ships a new VM op or rotates variable names, the team that owns your solver loses a week.

What we shipped this year takes the opposite bet. Almost the entire sensor_data story can be sidestepped if you fix the layers underneath HTTP. TLS fingerprint, HTTP/2 fingerprint, HTTP/3 if the origin negotiates it, header order, ALPN, and the small set of timing properties Akamai actually scores. Most sites that bill themselves as "Akamai-protected" never even reach the deep sensor_data evaluation if the request looks like a real Chrome 146. So we built the thing that makes the request look real, and we open-sourced it.

Repo: https://github.com/jesterfoidchopped/akamai-v3-sensor

This is the long version of why that works, where it breaks, and what we are leaving on the table on purpose.

The deobfuscator path is the obvious one. That is the problem.

If you go on any scraping forum and ask how to beat Akamai, you will get pointed at four or five private sensor_data generators. They all do the same thing: take the obfuscated bot script, deobfuscate it, port the VM to Python or Node, feed it fake mouse events and key timings, post the resulting payload to the sensor endpoint, and pray the _abck you get back is "valid" (the ~0~ segment versus ~-1~ is the usual tell).

This works. It also costs you:

A maintainer who can read Akamai's obfuscation pipeline. That is not a junior role.
A test harness that compares your generated payload byte-for-byte against a real browser dump.
A CI pipeline that re-fetches the bot script on every target hourly, because the script URL is per-tenant and the obfuscation seed rotates.
A pixel/challenge solver, because Akamai will quietly upgrade you from sensor_data only to the pixel round-trip if your scores look suspicious.
A way to keep the generator out of public GitHub, because the moment it lands publicly Akamai will retune the scoring weights inside a release cycle.

I have watched three different teams burn six months on this and end up with something that handles four target sites. The economics are awful.

The other reason this path is a trap is that you end up optimizing the wrong layer. You spend a quarter making the JS payload look perfect, and you never realize that your TCP connection is being scored at -0.4 on its own because your JA4 says t13d_1517 instead of t13d_1714. Akamai's bot scoring is additive across layers. If TLS plus HTTP/2 plus header order alone gets you to -1.2 before a single sensor byte is sent, you are dead before the JS even runs.

What akamai-v3-sensor actually is

The library is a Go HTTP client. From the outside it looks like any other client — Get, Post, multipart uploads, redirect history, streaming bodies — but underneath it carries a Chrome-grade fingerprint stack:

TLS via a maintained uTLS fork. The ClientHello, GREASE points, supported groups, signature algorithms, ALPS, EncryptedClientHello, and certificate compression are pinned to a real browser build. Not "close enough." Pinned.

HTTP/2 via a custom transport. SETTINGS frame order, INITIAL_WINDOW_SIZE, MAX_CONCURRENT_STREAMS, WINDOW_UPDATE before the first HEADERS frame, PRIORITY ordering, the pseudo-header sequence — all of it follows what Akamai parses in their akamai_fingerprint string (the 1:65536;2:0;...|WINDOW|PRIORITY|HEADERS format you have probably seen in their bot manager dashboard).

HTTP/3 / QUIC when the origin negotiates it. This matters more in 2026 than it did even last year. A growing share of Akamai-fronted properties advertise h3, and a client that drops to h2 every time stands out.

Header order at the HPACK level. Not "I put them in this order in my dict." HPACK encoder order, indexed vs literal, never-indexed for cookies, the works.

Presets for Chrome 146, Firefox 132, Safari 18, and a few mobile builds. New ones go in fingerprint/ as plain Go files.

```go
c := sensor.New("chrome-146")
defer c.Close()
resp, _ := c.Get(context.Background(), "https://tls.peet.ws/api/all")
```

That snippet alone, on the day we last tested, returns a JA4 fingerprint that matches a stock Chrome 146 on macOS bit for bit, including the ECH outer/inner split and certificate compression preference. Test it on tls.peet.ws, browserleaks.com/tls, and scrapfly.io/web-scraping-tools/fingerprint. They will all agree.

The "request-based" bet, in plain terms

Here is the part that I think is genuinely contrarian and worth saying loudly.

Akamai's v3 bot manager scores you in two phases. Phase one is everything they can score from your TCP/TLS/HTTP frames and your first request, before any JS has run on your client. Phase two is the sensor_data POST and the optional pixel/challenge round trip.

If your phase-one score is good enough, phase two is mostly ceremonial. The site will send you an _abck cookie that contains ~0~ after the first or second request, and from there you are a "trusted" client and most subsequent requests pass without re-evaluation for the lifetime of that cookie.

If your phase-one score is bad, no amount of perfectly-crafted sensor_data will save you. Akamai gives less weight to the JS-reported telemetry than people think, because they assume the JS itself can be tampered with. The TLS/HTTP layer is harder to fake from inside a browser, so they trust it more.

The practical implication is this. For the long tail of "this site uses Akamai" — which is most of the public web that bothers with bot management — getting phase one right is the entire fight. You do not need a sensor_data generator at all. You need a request that looks like a browser at every layer below the JS, and you need to send the same boring /_bm/_data POST that the browser would have sent (often the script literally posts an empty body or a tiny telemetry blob on first load), and Akamai will hand you the trusted cookie.

We pass on Nike, on most Adidas regions, on Walmart product pages, on the smaller LVMH brands, on a long list of airline and ticketing sites, on Footlocker, on a bunch of retail APIs that I will not name because then someone will email me. We do this with zero sensor_data forging. The payload we send is the literal payload Chrome 146 sends on a cold load.

The sites where this is not enough are the ones running the deepest tier — usually the absolute top of retail (some sneaker drops, some ticketing on day-of-sale) — and on those, no public solution works for long anyway, because they have a human team retuning weights against any solver that gets popular.

Where this approach actually breaks

I want to be honest about this, because most blog posts claiming to bypass Akamai are not.

It breaks when:

  1. The site has enabled per-request sensor_data scoring with a high threshold. Rare, but it happens on auth and checkout endpoints. In that case you do need the full payload.

  2. The site is doing custom pixel challenges layered on top of Akamai. These are typically Akamai's Bot Manager Premier features and they require the JS to actually execute. A headless real browser, even a Patchright/Camoufox variant, still works here, but a pure HTTP client does not.

  3. The site is fingerprinting beyond what Akamai itself does — Cloudflare Turnstile in front of Akamai, PerimeterX co-deployed, that kind of thing. You need a separate strategy for the outer layer.

For the first case, we ship hooks. You can intercept the bot.js fetch, drop in your own sensor_data generator if you have one, and the rest of the stack still handles TLS and H2 correctly. The library does not pretend to solve the deobfuscator problem. It just makes it so most of the time you do not need to.

On the v3 deobfuscator that already exists

There is a public sensor_data v3 deobfuscator floating around. It is decent. I am not going to link it because the maintainer asked me not to, but anyone who has spent ten minutes on the topic knows the one I mean. It works. It is also exactly the trap I described in section one — the moment it became public, scoring weights shifted, and now sites that worked with it last year do not anymore unless you keep up with patches.

Our take is: that deobfuscator is great as a research artifact. It is bad as a production dependency. If you have it, keep it as a fallback for the 5% of targets where phase-one alone is not enough. For the other 95%, point at akamai-v3-sensor and stop maintaining a JS VM port.

Why Go, why not Python or Node

Because uTLS lives in Go, and because the HTTP/2 frame layer needs to be controlled at a level that requests and axios will never give you. You can wrap our shared library from Python or Node — bindings live in bindings/ for Node, Python, and .NET — but the core has to be Go (or Rust, but rewriting was not on the table). If you have ever tried to make curl_cffi or tls-client match a current Chrome JA4, you know the gap. Those projects are good, they are just always a release or two behind, and Akamai notices.

We push presets within days of a Chrome stable bump. That cadence matters more than any single feature in the library.

Speed

The other reason "request-based" wins on most sites: it is roughly 40x faster than running a real browser, and roughly 8x faster than the average JS-VM-port sensor_data solver. A single Go process running the library will sustain a few thousand requests per second against a typical Akamai-fronted origin, limited by the origin itself, not by us. The TLS handshake is the only expensive part, and we pool connections per host with full session resumption (TLS 1.3 PSK), so amortized cost per request drops to almost nothing.

If you have ever paid the AWS bill for a fleet of headless browsers solving sensor_data at 2 RPS each, you know what this is worth.

How to actually use it

```go
package main

import (
    "context"
    "fmt"
    "time"

    sensor "github.com/jesterfoidchopped/akamai-v3-sensor"
)

func main() {
    c := sensor.New("chrome-146",
        sensor.WithTimeout(20*time.Second),
        sensor.WithProxy("http://user:pass@host:port"),
    )
    defer c.Close()

    resp, err := c.Get(context.Background(), "https://www.example-akamai-site.com/")
    if err != nil {
        panic(err)
    }
    defer resp.Close()

    body, _ := resp.Text()
    fmt.Println(resp.StatusCode, resp.Protocol)
    fmt.Println(body[:min(200, len(body))]) // guard against bodies shorter than 200 bytes
}
```

That is enough to get a valid _abck cookie on most targets in one or two requests. Cookies persist across the client lifetime. Add the proxy on construction; proxies hand off to the TLS dialer correctly so your fingerprint is not the proxy's, it is yours.

For the rare site where you do need to forward the bot.js payload, the session/ package lets you replay POST bodies into /_bm/_data after grabbing them once from a real browser. That is not a sensor_data generator — it is a payload-forwarding helper. It will get you through low-tier deep scoring, not the day-of-sale stuff.

Closing

Akamai bot manager v3 is solvable in 2026 without a sensor_data generator for the majority of public targets, and the reason is that scoring is dominated by what you ship before the JS even runs. TLS, HTTP/2 SETTINGS and frame order, HTTP/3, header order, ALPN, ECH. Get those right and the deep sensor_data path mostly evaluates to "trusted, move on."

The deobfuscator route is the wrong path for most teams. High maintenance, ages badly, leaves the bigger lever untouched.

Repo: github.com/jesterfoidchopped/akamai-v3-sensor
