

Anti-Bot Landscape 2026: Cloudflare, DataDome, Akamai, PerimeterX, Queue-it, Kasada — A Field Guide

A field guide to the 6 anti-bot vendors that actually matter in 2026 — what each one detects, where it's deployed, and what proxy strategy customers tend to land on.

Hell World Team · #anti-bot #cloudflare #datadome #akamai #perimeterx #queue-it #kasada

We resell all 14 proxy brands listed on hellworld.io. We make a margin on every one of them. So when we write about which proxy works against which anti-bot vendor, the honest framing is this: we don’t have a horse in the race. The customer who buys Lumi from us pays the same retail price they’d pay buying direct, and so does the customer who buys F-Oxylab. Our incentive is just that you find the right tool fast, stop churning, and stay subscribed. That’s the bias to keep in mind while reading.

This is the second piece in our 2026 series. The first was 14 Brands Tested, which compared the proxy networks themselves. This one looks at the other side of the fence: the anti-bot stacks those proxies are trying to get past. Future posts will dive into each vendor one at a time. Consider this the map.

Why “anti-bot” is a more useful frame than “WAF” in 2026

A lot of buyer guides still talk about WAFs. WAF means web application firewall. It’s the layer that looks at the HTTP payload and asks: is this a SQL injection? An XSS string? A path traversal? Useful, but mostly orthogonal to the problem proxy customers actually have.

Anti-bot is a different layer. It doesn’t care that your request body is benign. It cares about who you are. Specifically, three things: the reputation of the IP you came from, the fingerprint of the browser or client you’re using, and the behavioral signals between your requests. WAFs predate the bot economy. Anti-bot stacks were built because WAFs couldn’t tell a scraper from a buyer.

For proxy customers in 2026, the WAF rarely matters. Almost every “my proxy doesn’t work” support ticket we see is anti-bot, not WAF. The HTTP request itself is fine. The IP got scored low, or the TLS fingerprint didn’t match the user-agent, or the cookie chain didn’t include the challenge token the site expected. None of that is firewall behavior in the classic sense.

So when you’re picking a proxy, the WAF on the target site is mostly noise. The anti-bot vendor in front of it is the variable that matters. That’s what this guide is about.

The 6 vendors that matter

There are dozens of anti-bot products out there. Six of them cover the overwhelming majority of high-protection traffic our customers actually run into. Here’s the lineup:

| Vendor | Where you see it | Detection style | Proxy strategy (rough) |
| --- | --- | --- | --- |
| Cloudflare Bot Management + Turnstile | Everywhere — biggest CDN footprint | IP reputation + JS challenge + TLS | Residential preferred; datacenter often fails |
| DataDome | E-commerce, media, classifieds | Multi-layer score: IP + fingerprint + behavior | Mobile most stable; budget pools often fail |
| Akamai Bot Manager (BMP) | Sneakers, airlines, banks, big retail | Sensor data payload + cookie chain | Mobile / ISP for sneakers; residential mixed |
| PerimeterX (HUMAN) | Supreme, StubHub, ticketing | _px cookie family + JS + behavior | Mobile strong; residential rotation often weak |
| Queue-it | Ticketmaster, drops, gov services | Virtual waiting room (not anti-bot) | Sticky ISP / sticky mobile; rotation hurts |
| Kasada | ANZ banks, Twitch, gov | WASM client-side challenge | Less IP-sensitive; solver matters more |

Two warnings about this table. First, “proxy strategy” is the rough pattern we see in support tickets, not a guarantee. Each vendor tunes their config per customer. Nike’s Akamai is not Delta’s Akamai. Second, the line between vendors blurs. Sites stack two systems all the time — Cloudflare in front, DataDome behind, or Akamai for the storefront and PerimeterX for the checkout. The right answer for “what proxy do I use” depends on which layer is actually doing the blocking, and that changes site to site.

The next sections take each vendor in turn.

Cloudflare Bot Management & Turnstile

Cloudflare is the elephant. They sit in front of a huge slice of the modern web, and their Bot Management product is the thing most proxy customers run into first.

Public mechanics first. Cloudflare assigns every request a BotScore from 1 to 99. Cloudflare’s own documentation describes 1 as definitely a bot, 99 as definitely human, and the middle band as uncertain. Site operators write rules on top of this score: block under 30, challenge under 60, allow above 80, that kind of thing. The site’s WAF and rate limiter consume the score; Cloudflare itself doesn’t decide what to block, the customer does.
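The operator-side logic can be sketched in a few lines. A minimal sketch, where the thresholds are the illustrative ones above, not Cloudflare defaults; every customer writes their own rules on top of the score:

```python
# Sketch of operator-side rules consuming a Cloudflare-style BotScore.
# The 30 / 60 cutoffs are this article's illustrative examples, not defaults.

def route_request(bot_score: int) -> str:
    """Map a BotScore (1-99) to a site operator's action."""
    if bot_score < 30:
        return "block"
    if bot_score < 60:
        return "challenge"   # e.g. serve a Turnstile interstitial
    return "allow"
```

The point of the sketch is that the score is just an input: two sites behind the same Cloudflare product can treat the same score completely differently.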

The score is built from a stack of signals. The IP’s ASN reputation is one input. JA3 / JA4 TLS fingerprint is another. HTTP/2 frame ordering. Browser-side JS challenges that report back. Behavioral patterns over a session. None of these are individually decisive — Cloudflare combines them.

Turnstile is the visible challenge surface. It’s the widget that replaced reCAPTCHA on a lot of sites, launched as v1 in 2022 and updated to v2 in 2025. When Turnstile passes, the browser gets a cf_clearance cookie that vouches for the session. That cookie is short-lived and tied to a fingerprint, so reusing a cf_clearance from a different browser fails validation.

Now the proxy angle. Cloudflare publicly states that datacenter ASNs carry higher bot risk by default — that’s in their own marketing, not something we’re inventing. So a request from a known datacenter IP starts with a score handicap before any other signal lands. Residential and mobile IPs start higher. ISP proxies, which are technically datacenter-hosted but on residential ASNs, fall in between and behave differently per provider.

In our customer support tickets, Cloudflare-protected targets are the single most common “why is my proxy not working” question we see. The pattern repeats. Customer buys a budget residential pool to scrape a Cloudflare-fronted site, runs into the challenge, switches to a premium pool, gets through. Or they were on premium residential already and the issue was actually their TLS fingerprint, not the IP. Cloudflare punishes both.

Our usual support flow looks like this. If a customer is failing on Cloudflare, we first ask whether their HTTP client emits a real browser TLS fingerprint. A python requests script with default urllib3 will fingerprint as automated regardless of how clean the IP is. If the fingerprint side is handled, then the IP layer matters, and we’d lean toward Lumi or F-Oxylab residential pools, which are where customers most often land after a failed first attempt. Both are linked from our proxy comparison page.
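You can see why a stock Python client fingerprints distinctly without any network traffic. JA3/JA4 hash properties of the TLS ClientHello, including which cipher suites are offered and in what order, and CPython's default `ssl` context advertises a different list than a real Chrome does. A minimal stdlib-only illustration:

```python
import ssl

# A stock Python TLS context. The cipher suites it offers (and their order)
# go into the ClientHello, which is what JA3/JA4-style fingerprints hash.
# This list does not match a real browser's, so the fingerprint differs
# no matter how clean the proxy IP is.
ctx = ssl.create_default_context()
ciphers = [c["name"] for c in ctx.get_ciphers()]
print(f"{len(ciphers)} cipher suites offered, starting with: {ciphers[:3]}")
```

Fixing this means using a client that impersonates a browser's ClientHello (for example curl_cffi's browser impersonation mode, or driving a real headless browser), not just swapping the User-Agent header.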

Turnstile specifically is harder than the BotScore alone. It runs JS in the browser, collects entropy, and ships a token. Skipping the challenge means either rendering it in a real browser, using a third-party solver, or routing through a session that already has a valid cf_clearance cookie. None of these are a proxy problem in the strict sense — they’re a client-side problem. The proxy gets you the IP reputation; the client gets you the challenge token. We see customers conflate the two and blame the proxy when the issue is their headless setup.

The deep dive on Cloudflare bypass strategy gets its own post later in this series.

DataDome

DataDome is the second-most-common vendor we see in support tickets, especially from customers scraping e-commerce, classifieds, and media.

Their public documentation describes a multi-layer scoring approach. The layers, as DataDome describes them in their own materials: IP reputation, browser fingerprint, behavioral signals, and machine learning on top. Each layer feeds a score. The site sees a verdict — allow, challenge, block — without seeing the internals.

The visible cookie is datadome. It’s set after a successful pass and persists until the score drops. Sessions that mix clean and dirty signals tend to lose the cookie mid-flow, which surfaces as “I was working a minute ago and now I’m not.”

DataDome’s challenge is a JS interstitial. It runs, collects, and either grants the cookie or shows a CAPTCHA. The CAPTCHA itself is a downstream symptom — by the time you see it, the score already dropped you below the allow threshold.

On the proxy side, the rough industry consensus is that mobile carriers tend to score better than residential against DataDome, and residential tends to score better than datacenter. That’s not because DataDome explicitly whitelists carriers — it’s because carrier IPs share with millions of legitimate users, and shared scoring tilts in their favor. We’re not going to put a number on this. The pattern shows up across vendor blogs, scraping forums, and our own customer base.

Budget residential pools are the consistent failure mode we see. Customers sign up for a cheap pool, point it at a DataDome target, and the requests fail almost immediately. The cheap pool’s IPs are recycled fast, often shared across many customers running aggressive scrapers, and the IP reputation layer flags them. The same customer moves to a premium tier and the success rate jumps. Whether that jump is worth the price difference depends on volume — for one-off scrapes, sometimes the cheap pool plus retries is fine; for production data pipelines, the cheap pool burns more in failed requests than the premium would have cost.
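That volume tradeoff is simple arithmetic: the cost per *successful* request is the cost per attempt divided by the success rate. A quick sketch, with entirely hypothetical prices and success rates for illustration:

```python
# Cost per successful request = cost per attempt / success rate.
# All numbers below are made up for illustration; real rates vary per target.

def cost_per_success(price_per_gb: float, mb_per_request: float, success_rate: float) -> float:
    price_per_attempt = price_per_gb * mb_per_request / 1024  # GB -> MB pricing
    return price_per_attempt / success_rate

cheap   = cost_per_success(price_per_gb=3.0, mb_per_request=1.0, success_rate=0.25)
premium = cost_per_success(price_per_gb=8.0, mb_per_request=1.0, success_rate=0.90)
# At these made-up numbers the cheap pool costs more per delivered page,
# because three out of four attempts burn bandwidth and return nothing.
```

Run your own numbers before deciding; at low volumes or high success rates the cheap pool can still win.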

For long-running DataDome work we tend to point customers at mobile pools or premium residential. Both are listed on our residential and mobile product pages. The choice between them is mostly about request volume and cost ceiling.

Akamai Bot Manager (BMP)

Akamai is the heavyweight in big retail, airlines, and banks. If you’re scraping or buying on Nike, Adidas, Footlocker, Delta, United, or most major banks, Akamai is what you’re up against.

The technical surface is well-documented in public reverse engineering writeups. Akamai BMP collects a large blob of browser fingerprint data via JS — mouse movements, screen size, plugin enumeration, timing, pointer events, the works. That blob gets serialized, base64-encoded, and posted back to the server as “sensor data.” The server scores it and sets the _abck cookie.

The _abck cookie has a structure that’s been picked apart by the sneaker community for years. The sec~ field inside the cookie value is the visible verdict. sec~-1~ means the sensor failed validation. sec~7~ (and other positive integers in that range) means the sensor passed. There are other fields, and Akamai rotates the meanings, but the -1 vs positive distinction is stable enough that solver developers key off it.
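The -1 vs positive check that solver developers key off can be expressed as a toy parser. The sample cookie shape below is a fabricated placeholder; real _abck values are long, opaque, and rotate constantly:

```python
import re
from typing import Optional

# Toy check keyed off the sec~ field described above. Placeholder cookie
# shape only; real _abck values are long and their fields rotate.

def abck_sensor_passed(abck_value: str) -> Optional[bool]:
    """True if sec~ holds a positive integer (sensor accepted),
    False if it holds -1 (sensor rejected), None if no sec~ field."""
    m = re.search(r"sec~(-?\d+)~", abck_value)
    if m is None:
        return None
    return int(m.group(1)) > 0
```

So `abck_sensor_passed("xxxx~sec~-1~yyyy")` reports a failed sensor and `abck_sensor_passed("xxxx~sec~7~yyyy")` a passed one; anything without the field returns None.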

The other signal is the sec-cpt header chain. After certain challenge points, Akamai expects subsequent requests to carry a header proving a compute proof-of-work was completed. Skipping the cpt chain means failing later checkpoints even if the initial sensor passed.

Proxy strategy for Akamai is heavily target-dependent. The sneaker community has settled on mobile and ISP proxies as the default for Akamai-protected drops — the consensus there is years deep and we won’t argue with it. Our sneaker proxy page reflects that, with mobile and ISP brands featured first.

For non-sneaker Akamai targets — airlines, banks, general retail — the picture is fuzzier. Mobile still works. Residential works on some targets and not others. Datacenter fails almost everywhere. ISP is a middle ground that works well for sites where Akamai’s IP layer is moderately weighted but not dominant.

The thing to keep in mind: Akamai BMP’s IP layer is one input among many. A perfect IP with a broken sensor payload still fails. We see customers spend on premium ISP and still get blocked because their headless browser isn’t generating valid sensor data. The proxy is necessary; the proxy is not sufficient.

There’s a deeper Akamai post coming in this series. We’ll go into the sensor structure, the cpt chain, and what the typical solver toolchain looks like.

PerimeterX (HUMAN)

PerimeterX merged with HUMAN Security a few years back. The product still ships as PerimeterX in most contexts, the cookies are still _px*, but the parent company branding has shifted. Internal details have shifted with it, and we’ll keep this section conservative because the post-merger product cycle is still in motion.

Public mechanics. The cookie family is _px, _pxhd, _pxvid, with _px being the short-lived session token, _pxhd a longer-lived header value, and _pxvid a visitor ID. The challenge is JS-based, similar to DataDome’s pattern. Behavioral signals weigh heavier than they do in some other vendors — mouse movement and timing patterns are explicitly part of the model.

Where you see PerimeterX: Supreme, StubHub, ticketing platforms, and a long tail of e-commerce. Sneaker drops historically used PerimeterX too, though some have migrated.

Proxy strategy from our ticket history. Mobile pools tend to hold up well against PerimeterX. Residential rotation — meaning aggressive IP-per-request style scraping — tends to drop scores fast, because PerimeterX’s behavioral layer flags the lack of session continuity. Sticky residential or sticky mobile sessions, where the IP holds for several minutes, score better. This matches the broader industry pattern of “rotate less, behave more like a human session” against vendors that weight behavior.
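In practice, stickiness with gateway-style providers is usually configured through the proxy username. A minimal sketch; the `-session-<id>-time-<minutes>` format here is hypothetical, since every provider has its own syntax, so check your provider's docs:

```python
import uuid

# Hypothetical sticky-session username format; real providers each have
# their own syntax for pinning an IP for N minutes.

def sticky_proxy_url(user: str, password: str, host: str, port: int,
                     minutes: int = 10) -> str:
    session_id = uuid.uuid4().hex[:8]  # one id per logical browsing session
    username = f"{user}-session-{session_id}-time-{minutes}"
    return f"http://{username}:{password}@{host}:{port}"
```

The design point: reuse one session id for the whole flow you want PerimeterX to see as continuous, and mint a new one only when you deliberately start a fresh identity.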

On specific PerimeterX internals — exact cookie field meanings, exact challenge bypass details — we’re staying vague on purpose. The post-HUMAN merger has changed enough internals that any specific claim has a short shelf life. The stable advice: mobile sticky, residential sticky, slow your rotation, and care about session continuity.

Queue-it

Queue-it is the odd one in this list because it isn’t anti-bot in the same sense as the others. It’s a virtual waiting room. The distinction matters and a lot of customers get it wrong.

Anti-bot tries to identify and block automated traffic. Queue-it doesn’t really care if you’re a bot. It cares about throughput. Its job is to sit in front of a site that can’t handle peak load — Ticketmaster on a major drop, a government services portal during enrollment season, a limited-edition product launch — and queue everyone, human or bot, into a waiting room. You wait. You get a token. The token lets you through to the real site for a window of time. Then it expires.

Public mechanics. Queue-it sets a cookie chain on entry that includes a position token. Your position in the queue is tied to that token plus your IP. The site polls Queue-it’s API on a timer to check if your position has come up. When it does, you get a redirect token that proves you waited, and the upstream site accepts you.
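The wait loop can be sketched like this. The `check_position` callable is a stand-in for polling Queue-it's status endpoint; the real endpoint, payload, and cadence are site-specific, and the names here are illustrative:

```python
import time
from typing import Callable

# Sketch of the waiting-room flow. check_position stands in for polling the
# queue status endpoint. The one rule that matters: this entire loop must run
# over the SAME sticky session/IP, because queue position is tied to it.

def wait_for_turn(check_position: Callable[[], dict],
                  poll_seconds: float = 1.0, max_polls: int = 100) -> str:
    for _ in range(max_polls):
        status = check_position()  # e.g. {"granted": False} or
                                   #      {"granted": True, "redirect_token": "..."}
        if status.get("granted"):
            return status["redirect_token"]  # proves you waited; the upstream site accepts it
        time.sleep(poll_seconds)
    raise TimeoutError("queue position never came up")
```

Notice there is no rotation anywhere in the loop; swapping the IP mid-wait would restart the position, which is the next section's point.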

Because the queue position is tied to IP, the proxy strategy here is the opposite of what works against most anti-bot vendors. Aggressive rotation — the default mode for residential scraping — actively hurts you. Every new IP starts a new queue position. You go to the back of the line. Customers who come from a Cloudflare or DataDome mindset and try to brute-force Queue-it with high rotation end up in a worse position than if they’d just queued like a human.

The pattern that works is sticky sessions. Long sticky sessions on ISP or mobile, holding the same IP for the full queue duration, are what we see customers settle on for ticketing and drops. Mobile is good because carrier IPs aren’t strongly penalized; ISP is good because the IPs are stable and quiet. Mobile sticky sessions come up in support tickets for ticketing customers more often than any other pool type.

There’s a layered pattern too. Some sites stack Queue-it in front of Akamai or Cloudflare. You get queued first, then once through the queue, the anti-bot layer kicks in. That stack means you need a proxy that survives both — sticky enough to hold queue position, clean enough to pass the bot scoring after. Mobile tends to be the answer because it’s strong on both axes.

The Queue-it deep-dive post will go into the token chain, the polling cadence, and the common mistakes we see customers make trying to skip the queue.

Kasada

Kasada is the smallest vendor in this list by deployment count, but they’re growing, and they’re meaningfully different from the others.

The product centers on a client-side WebAssembly challenge. When you hit a Kasada-protected site, the response includes a small WASM payload that runs in the browser. The WASM does a compute task — proof of work, fingerprinting, environment checks — and produces a token. The token is sent back, validated server-side, and admits the request.
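To make the proof-of-work idea concrete, here is a toy version: find a nonce whose hash of the challenge plus nonce starts with N zero digits. This is emphatically not Kasada's actual scheme (theirs is obfuscated WASM and rotates); it just shows why solving costs real compute while verifying costs one hash:

```python
import hashlib

# Toy proof-of-work, for illustration only. Solving requires brute-force
# hashing; verifying is a single hash. Kasada's real challenge is an
# obfuscated, rotating WASM payload, not this.

def solve(challenge: str, difficulty: int = 3) -> int:
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

def verify(challenge: str, nonce: int, difficulty: int = 3) -> bool:
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)
```

Each extra zero of difficulty multiplies the expected solve time by 16, which is the lever that makes faking this at scale expensive.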

The big architectural difference: Kasada weights client-side proof more than IP reputation. Their public material has been explicit about this. The bet is that compute-based challenges are harder to fake at scale than fingerprint-based ones, because faking fingerprint is cheap and faking compute means actually running the WASM correctly.

Where you see Kasada: ANZ banks (Westpac, ANZ, others have used them), Twitch, some government services. The footprint is heavy in Australia and growing in the US enterprise space.

Proxy strategy is unusual here. Because Kasada is less IP-sensitive than Cloudflare or DataDome, budget residential pools — which fail hard against IP-heavy vendors — can actually work against Kasada targets if the WASM challenge is being solved correctly. The bottleneck moves from the proxy to the solver.

The solver side is the harder part. The WASM is dynamically obfuscated and rotates. There are commercial solver services that handle Kasada tokens, and there are open-source attempts that work for a window before Kasada updates the obfuscation. We don’t sell solvers; we sell proxies. But we’ll mention this because the customers we see who fail against Kasada usually have the wrong mental model — they spend on premium residential expecting Cloudflare-style results, when the money should have gone to solver capacity.

The deep dive will cover Kasada’s challenge structure, the typical solver landscape, and how to tell from response codes whether the failure is IP, fingerprint, or token.

What this means for your proxy choice

The mistake people make is to look for a one-to-one mapping. “Cloudflare uses residential. Kasada uses datacenter. Akamai uses mobile.” It doesn’t work that way.

A more useful frame is two axes: how much weight does the vendor put on IP reputation, and how much on client fingerprint and challenge-solving?

IP-heavy vendors: Cloudflare Bot Management, DataDome, Akamai BMP (mostly), PerimeterX. Against these, the brand of proxy you buy genuinely matters. A budget pool will fail where a premium pool succeeds, holding everything else constant. The IP reputation difference is real. This is where our comparison page earns its keep — the price-per-quality ratio across the 14 brands does shift the win rate against IP-heavy vendors.

Fingerprint-heavy vendors: Kasada, parts of Akamai’s compute-proof layer, parts of Cloudflare’s Turnstile. Against these, the proxy brand matters less, and the client side — TLS fingerprint, browser stealth, challenge solver — matters more. Customers who buy expensive residential here are paying for the wrong layer.

Queue-it is its own category. It’s not anti-bot at all. The right answer is sticky sessions, regardless of brand. Mobile sticky and ISP sticky are both fine. Premium pricing here doesn’t buy you much.
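The whole two-axes frame fits in a lookup table. A rough sketch; the weights are this article's qualitative characterizations, not measured values:

```python
# The two-axes frame as a rough lookup. "ip" / "client" weights are this
# article's qualitative characterizations, not measurements; real configs
# vary per site.

PROFILE = {
    "cloudflare": {"ip": "high",   "client": "high"},
    "datadome":   {"ip": "high",   "client": "high"},
    "akamai":     {"ip": "high",   "client": "high"},
    "perimeterx": {"ip": "high",   "client": "high"},
    "kasada":     {"ip": "low",    "client": "high"},
    "queue-it":   {"ip": "sticky", "client": "low"},
}

def where_to_invest(vendor: str) -> str:
    p = PROFILE[vendor.lower()]
    if p["ip"] == "sticky":
        return "sticky sessions (ISP or mobile); premium pricing buys little"
    if p["ip"] == "high":
        return "proxy quality first, then client fingerprint"
    return "solver / client stealth first; proxy brand matters less"
```

Treat it as a starting point for triage, not a verdict: a stacked deployment (Queue-it in front of Akamai, say) needs the union of both answers.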

For the geo angle — when the target site cares about which country the IP is from, on top of bot detection — the 14 Brands post covered the practical geo-locking limits brand by brand. Against IP-heavy anti-bot vendors that also care about geo (almost all of them, for region-locked content), both layers compound, and that post is the right place to start.

For AI agent traffic and headless-browser-driven flows, the same logic applies. The agent’s TLS fingerprint and behavior matter as much as the IP. Throwing premium proxies at an agent that fingerprints as automated will not save it.

For the ISP product line, the use case is sticky middle-ground: cleaner than rotating residential, stickier than mobile, cheaper per-GB than premium residential. Good against vendors that mildly care about IP quality but won’t punish a stable session.

Next in this series

This pillar covers the six vendors at the level needed to pick a proxy. Each vendor gets its own deep-dive post in the coming weeks. The current plan, in order:

  • C1: Cloudflare bypass — BotScore mechanics, Turnstile internals, what cf_clearance actually checks
  • C2: DataDome — the multi-layer score, behavioral signals, what triggers the JS challenge
  • C3: Akamai BMP — sensor data structure, _abck cookie internals, sec-cpt chain
  • C4: PerimeterX (HUMAN) — current cookie chain, behavioral fingerprinting, post-merger changes
  • C5: Queue-it — token mechanics, polling cadence, sticky session strategy
  • C6: Kasada — WASM challenge structure, solver landscape, when proxy choice stops mattering

We’ll publish in roughly that order, but we’ll bump priorities based on what readers ask for. If there’s a vendor you’d rather see first, drop into our Discord and tell us. We read every channel and the order isn’t fixed.