Scraping

Bot Detection Triggered by Fingerprint Mismatches (UA vs UA-CH)

Learn why UA spoofing can increase bot blocks: sites correlate UA, UA-CH, JS, and rendering signals. Audit inconsistencies to reduce flags.

Ibuki Yamamoto
February 5, 2026 · 4 min read
If you spoofed your User-Agent to reduce browser fingerprinting, but suddenly got flagged as a bot more often—or started seeing more step-up verification—there’s a common culprit: inconsistent fingerprint signals. Modern bot-detection stacks don’t rely on a single value. They check whether multiple observations still make sense as “the same device and the same browser.” This article explains why inconsistencies are such a strong detection signal, and how real-world consistency checks are typically designed.
What You’ll Learn
  • How fingerprint inconsistencies become detection signals, and the most common patterns
  • How to think about consistency checks across HTTP, JavaScript, rendering, and environment signals
  • Common scraping pitfalls that trigger inconsistency flags—and practical ways to reduce them

Why fingerprint inconsistencies are easy to use for detection

The core reason is simple: inconsistencies tend to be a low-false-positive signal for defenders. Spoofing a single field (for example, User-Agent) is relatively easy. But producing a fully coherent “story” across HTTP headers, JavaScript APIs, rendering outputs, and OS-derived characteristics is harder—and more expensive.

Inconsistencies also hint at what you tampered with. If you only change UA, you’ll often introduce a mismatch between UA and UA-CH (Client Hints) and/or JavaScript-exposed values. Detection systems can reliably pick up these “misalignments that normal browsers rarely produce.”

Key takeaway
Bot detection isn’t only about guessing “who you are.”
In practice, a very effective strategy is to check whether your client still looks like a valid, internally consistent browser implementation—and block anything that doesn’t.

The big picture: what consistency checks cover

Consistency checks typically correlate signals across multiple layers:

| Layer | Typical signals | Example inconsistency |
| --- | --- | --- |
| HTTP | User-Agent / UA-CH / Accept-Language | UA claims iPhone, but Sec-CH-UA-Platform says "Windows" |
| JavaScript | navigator / Intl / screen / WebRTC | Timezone points to the US, but language and OS-related signals look implausible |
| Rendering / performance | Canvas / WebGL / Audio / FPS | GPU looks like a virtualized/software renderer, but the rest of the profile claims a high-end device |
| Behavior | Input events / scrolling / focus | Human-like events are missing, or are too perfectly regular |

This is where things get real. In recent years, UA-CH (User-Agent Client Hints) has become more important than the classic UA string, and it’s increasingly common to check whether UA, UA-CH, and JS (via navigator.userAgentData) all agree. Low-entropy client hints like Sec-CH-UA, Sec-CH-UA-Mobile, and Sec-CH-UA-Platform may be sent by default (unless restricted), which makes mismatches even easier to detect.

UA vs UA-CH mismatches

Sec-CH-UA

A representative UA-CH header is Sec-CH-UA.
MDN describes it as a low-entropy hint that provides user-agent information such as browser brand (for example, Chrome) and major version. Another practical point: unless it's blocked by a permissions policy, it is sent by default, which means sites often receive it even when they didn't explicitly opt in.

Example: request headers sent by Chrome
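The snippet below is illustrative, not copied from a live capture: exact brand strings and versions vary by Chrome build, and the Sec-CH-UA brand list includes an intentionally randomized GREASE entry, so real values will differ.

```http
GET / HTTP/1.1
Host: example.com
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/132.0.0.0 Safari/537.36
Sec-CH-UA: "Not A(Brand";v="8", "Chromium";v="132", "Google Chrome";v="132"
Sec-CH-UA-Mobile: ?0
Sec-CH-UA-Platform: "Windows"
Accept-Language: en-US,en;q=0.9
```

Note how the UA string, Sec-CH-UA brands, and Sec-CH-UA-Platform all tell the same story: Chrome 132 on desktop Windows.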

Warning
If you only swap the User-Agent header, you’ll often break consistency with UA-CH headers (Sec-CH-UA, Sec-CH-UA-Platform, Sec-CH-UA-Mobile, etc.).
That kind of mismatch can directly trigger bot detection.
A classic red flag is a “Safari-like UA string” paired with Sec-CH-UA still identifying as "Chromium"—an implausible combination for real user traffic.

navigator.userAgentData

In Chromium-based browsers, UA-CH isn’t only visible via HTTP headers. It’s also exposed to JavaScript via navigator.userAgentData.
Detection systems correlate what they see in HTTP with what they see in JS to decide whether it looks like a single, coherent client. In other words: even if you spoof one surface, any “honest” surface you didn’t fully control can become the inconsistency that gives you away.
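A minimal sketch of such a cross-surface comparison, assuming you've already captured both the HTTP Sec-CH-UA-Platform header and the JS navigator.userAgentData.platform value (the function name is our own, not part of any library):

```python
def surfaces_agree(http_platform_header: str, js_platform: str) -> bool:
    """Does the HTTP client hint agree with the JS-exposed platform?"""
    # The HTTP value is a structured-header string, e.g. '"Windows"';
    # the JS value is unquoted, e.g. "Windows".
    return http_platform_header.strip().strip('"') == js_platform
```

A defender running this check sees `surfaces_agree('"macOS"', "Windows")` fail immediately, no matter how plausible either value looks in isolation.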

Common inconsistency patterns

Below are inconsistency patterns that show up frequently in production. This isn’t a promise that “matching these will always pass.” It’s a pragmatic list of signals that tend to fail hard when they don’t line up.

OS and input devices

  • Mobile UA but Sec-CH-UA-Mobile is ?0 (or the reverse)
  • Profile claims iOS, but touch/pointer characteristics look like a desktop

Language and locale

  • Accept-Language diverges from navigator.languages
  • Intl.DateTimeFormat().resolvedOptions().timeZone is wildly implausible compared to IP-derived location
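The language check above can be sketched as a comparison over captured values (helper names are illustrative; inputs are assumed to come from your own logging):

```python
def primary_accept_language(header: str) -> str:
    """Extract the first language tag: 'en-US,en;q=0.9' -> 'en-US'."""
    return header.split(",")[0].split(";")[0].strip()

def locale_consistent(accept_language: str, navigator_languages: list[str]) -> bool:
    """A real browser's first navigator.languages entry normally matches
    the top Accept-Language preference."""
    return bool(navigator_languages) and (
        primary_accept_language(accept_language).lower()
        == navigator_languages[0].lower()
    )
```

For example, `Accept-Language: en-US,...` paired with `navigator.languages` starting with `fr-FR` is exactly the kind of divergence this flags.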

Screen and window sizing

  • The relationship between screen.width/height and window.innerWidth/innerHeight looks unnatural (UI chrome, zoom, and device pixel ratio often break)
  • You hardcode viewport sizes in the tab while OS scaling and DPR signals point elsewhere
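A rough plausibility check over geometry might look like this. This is a sketch only: real checks must account for zoom, OS scaling, and browser chrome, so the slack value here is an arbitrary illustration.

```python
def viewport_plausible(screen_w: int, screen_h: int,
                       inner_w: int, inner_h: int, slack: int = 16) -> bool:
    """At 100% zoom, the viewport shouldn't exceed the screen in CSS pixels."""
    return inner_w <= screen_w + slack and inner_h <= screen_h + slack
```

A hardcoded 2560×1440 viewport on a session whose `screen` reports 1920×1080 is the kind of contradiction this catches.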

WebGL and GPU

  • WebGL vendor/renderer looks like a virtualized or software GPU, while UA and performance imply a premium device
  • Rendered outputs (Canvas/WebGL) conflict with the font environment

What matters in the field
Individual values matter less than whether the relationships between values look normal. For example, a slightly unusual resolution can still pass if DPR, zoom behavior, UA-CH, and OS signals tell a consistent story. But one standout contradiction can make the whole session look suspicious.

How defenders evaluate consistency

In practice, there are two common approaches.

Rule-based checks

This approach blocks combinations that “shouldn’t exist.” For example: UA says Android but Sec-CH-UA-Platform says macOS. These checks are cheap to run and tend to have relatively low false positives, so they’re often deployed first.
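A rule of this kind can be sketched in a few lines. The function and the rule table are invented for illustration; production rule sets are far larger and tuned against real traffic.

```python
def platform_inconsistencies(user_agent: str, sec_ch_ua_platform: str) -> list[str]:
    """Return rule violations (empty list = consistent)."""
    flags = []
    ua = user_agent.lower()
    platform = sec_ch_ua_platform.strip('"')  # structured header, e.g. '"Windows"'
    rules = [  # (UA substring, expected Sec-CH-UA-Platform value)
        ("android", "Android"),
        ("iphone", "iOS"),
        ("windows nt", "Windows"),
        ("macintosh", "macOS"),
    ]
    for token, expected in rules:
        if token in ua and platform != expected:
            flags.append(f"UA token {token!r} but platform is {platform!r}")
    return flags
```

An Android UA paired with `Sec-CH-UA-Platform: "macOS"` trips a rule instantly, which is why these checks are deployed first: they're cheap and rarely wrong.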

Scoring models

Many inconsistencies are weak on their own. Scoring models treat them as additive risk. For example, small mismatches across language, timezone, fonts, and WebGL might be summed into a risk score; once it crosses a threshold, the site routes the session to CAPTCHA or step-up verification.
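A toy version of such an additive model (the weights, signal names, and threshold are invented for illustration; real systems tune these empirically):

```python
WEIGHTS = {
    "lang_mismatch": 0.2,             # Accept-Language vs navigator.languages
    "tz_mismatch": 0.3,               # timezone vs IP-derived location
    "webgl_software_renderer": 0.4,   # software GPU on a "premium" profile
    "font_anomaly": 0.2,
}

def risk_score(signals: set[str]) -> float:
    """Sum the weights of all observed weak signals."""
    return sum(WEIGHTS.get(s, 0.0) for s in signals)

def route(signals: set[str], threshold: float = 0.6) -> str:
    """Above the threshold, send the session to CAPTCHA / step-up verification."""
    return "step_up" if risk_score(signals) >= threshold else "allow"
```

One weak signal passes; two or three together cross the threshold, which matches the "additive risk" behavior described above.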

W3C guidance on fingerprinting mitigations makes a closely related point:

“Fingerprints can be formed by combining multiple observable characteristics. When designing specifications, it’s important to evaluate which information may contribute to identification (the fingerprinting surface) and assess its impact.” (paraphrased from the original)

Note: W3C (the World Wide Web Consortium) is the international nonprofit that develops and maintains key web standards (HTML, CSS, DOM, HTTP, and more).

Mistakes scrapers commonly make

Only changing the UA string

This is the most common failure mode. You might successfully rewrite the UA string, but if UA-CH headers and/or navigator.userAgentData remain unchanged (or can’t be changed in your setup), you’re likely to fail consistency checks.

Too many (or too few) headers

Some UA-CH values are only sent when the server requests them via Accept-CH (high-entropy hints). If you force-send a fixed set on every request, you can create new “this doesn’t look like real traffic for this site” anomalies—missing headers, unusual ordering, or unexpected presence.
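As a sketch of that negotiation (the hint names are real high-entropy UA-CH hints, but which ones a given site requests varies): the server opts in via an Accept-CH response header, and the browser only attaches those hints on subsequent requests to that origin.

```http
HTTP/1.1 200 OK
Accept-CH: Sec-CH-UA-Full-Version-List, Sec-CH-UA-Arch, Sec-CH-UA-Model
```

Sending Sec-CH-UA-Model on the very first request, before any Accept-CH was received, is itself an anomaly a defender can key on.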

WAF compatibility issues

Chromium’s UA-CH documentation notes a real-world compatibility issue: certain characters included in headers like sec-ch-ua (such as double quotes) may be treated as suspicious by some WAFs or legacy intermediaries. That means you can get blocked before bot detection even runs.

A practical process for building consistency in scraping

This section doesn’t provide a guaranteed “bypass recipe.” Instead, it lays out a defensible process for consistency design—useful for both operational hardening and testing.

Inventory every observable surface

  1. HTTP: UA / UA-CH / Accept-Language, etc.
  2. JS: navigator / Intl / screen / permissions, etc.
  3. Rendering: Canvas / WebGL / fonts
  4. Behavior: input events, navigation flows, waits

Pick a baseline profile

Start with one “positive profile,” such as “Chrome on Windows 11 (x64, desktop).” The goal isn’t “values that feel plausible.” It’s the full set of values you’d naturally observe in that real environment.

Minimize diffs

The more you customize, the more contradictions you introduce. Counterintuitively, fewer changes often produce a more consistent profile. Prioritize alignment across UA / UA-CH / JS first, because they expose the same underlying facts (OS, mobile/desktop classification) via different routes.

Confirm what your requests actually send

To validate consistency, log what is truly observable in both HTTP and JS. For example:

# Example: consolidate “what the site can see” during Playwright validation (Python, sync API)
# Use this for test-oriented consistency checks, not as a bypass recipe.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    captured_request_headers = {}

    def capture_headers(request):
        if request.is_navigation_request():  # top-level document request only
            captured_request_headers.update(request.headers)

    page.on("request", capture_headers)
    page.goto("https://example.com")

    result = {
        "http_headers": captured_request_headers,
        "js": {
            "userAgent": page.evaluate("() => navigator.userAgent"),
            # userAgentData doesn't serialize directly; read its fields explicitly
            "uaData": page.evaluate(
                "() => navigator.userAgentData && {brands: navigator.userAgentData.brands, mobile: navigator.userAgentData.mobile, platform: navigator.userAgentData.platform}"),
            "languages": page.evaluate("() => navigator.languages"),
            "timeZone": page.evaluate("() => Intl.DateTimeFormat().resolvedOptions().timeZone"),
            "screen": page.evaluate("() => ({w: screen.width, h: screen.height, dpr: devicePixelRatio})"),
        },
    }
    print(result)
    browser.close()

Warning
Trying to “spoof values to get through” can violate website terms, contracts, internal security policies, and/or applicable law. If you do this for work, confirm you have permission from the target site and that your approach is compliant.

Getting blocked after UA spoofing?

If your scraper started failing after changing headers, the real issue is often cross-signal inconsistencies (UA vs UA-CH vs JS vs rendering). If you want help auditing what a site can observe and stabilizing a compliant scraping setup, reach out.

Contact Us
Feel free to reach out for scraping consultations and quotes.
Get in Touch

Summary

  • Fingerprint “inconsistencies” are harder to fake than single values, making them strong bot-detection signals
  • Mismatches between UA / UA-CH / navigator.userAgentData are especially visible
  • Real-world detection commonly combines rule-based blocks with score-based risk models
  • A practical workflow is: inventory surfaces → choose a baseline profile → minimize diffs

About the Author

Ibuki Yamamoto
Ibuki Yamamoto

Web scraping engineer with over 10 years of practical experience, having worked on numerous large-scale data collection projects. Specializes in Python and JavaScript, sharing practical scraping techniques in technical blogs.

Leave It to the Data Collection Professionals

Our professional team with over 100 million data collection records annually solves all challenges including large-scale scraping and anti-bot measures.
