
Google vs. SerpApi: Legal Risk in SERP Scraping

Google’s SerpApi lawsuit highlights where SERP scraping turns risky—DMCA 1201 circumvention, ToS breaches, resale, and high-scale automation.

Ibuki Yamamoto
February 1, 2026 · 4 min read

On December 19, 2025, Google publicly announced it had filed a lawsuit against SERP scraping provider SerpApi. The core issue isn’t “whether web scraping is legal” in the abstract. It’s how courts should treat a more specific pattern: circumventing technical access controls to reach search results that may include copyrighted and licensed content, and then repackaging and reselling that data at scale.

This article summarizes what the case is about and—more importantly—where the practical legal risk lines tend to appear for teams building or buying SERP collection.

Bottom line

The biggest takeaway from Google v. SerpApi is that the legal risk usually doesn’t hinge on “fetching a SERP” by itself. Risk spikes when multiple factors stack up:

① Circumventing access controls (DMCA § 1201)
② Breaching contract terms (Terms of Service)
③ Redistribution/resale that creates rights and licensing problems
④ Large-scale, organized automated access

Also note: “It’s public data, so it’s always legal” is a risky assumption. Even if a page is publicly viewable, a project can still trigger real civil—and sometimes criminal—exposure depending on contract terms, anti-circumvention rules, copyright/unfair competition theories, or interference claims.

Case overview

When and where it happened

Google stated on December 19, 2025 that it had taken legal action against SERP scraping company SerpApi. Google’s public framing is that SerpApi circumvents security measures designed to protect third-party copyrighted and licensed content that appears within Google Search results. (blog.google)

Based on publicly available docket information, the case is listed as Google LLC v. SerpApi, LLC, filed on December 19, 2025 in the U.S. District Court for the Northern District of California (N.D. Cal.). The docket also shows early case management deadlines and an initial case management conference scheduled for March 20, 2026 (dates may change). (dockets.justia.com)

Google’s core allegations (as publicly described)

In Google’s official post, Google argues that SerpApi is (a) bypassing security and traffic-management defenses, (b) ignoring publisher and rightsholder choices about access, and (c) capturing and reselling results that include licensed content (such as certain images and real-time data features). (blog.google)

Google says it sued to stop SerpApi from bypassing protections designed to safeguard copyrighted content appearing in search results and to halt what it describes as unlawful scraping.

SerpApi’s response (based on public statements)

In a public blog post, SerpApi presents the dispute as an attempt to threaten access to publicly visible search results. It emphasizes that it does not bypass authentication or access private accounts, and frames Google’s approach as stretching anti-circumvention law beyond its intended scope (this is SerpApi’s public position; formal litigation arguments may differ). (serpapi.com)


DMCA § 1201: the anti-circumvention center of gravity

One of the most closely watched angles in this dispute is the U.S. DMCA’s anti-circumvention provision (17 U.S.C. § 1201). DMCA § 1201 can create liability for circumventing a technological measure that “effectively controls access” to a copyrighted work, even before you get to traditional copyright infringement questions. (blog.google)

As a practical matter, “circumvention” can include designing systems to avoid, bypass, disable, or otherwise defeat technical controls. That’s why the discussion is not limited to classic “paywall/login” scenarios—if access restrictions are implemented technically and your scraper is built to route around them, the risk analysis changes.

The “it’s public, so it’s fine” misconception

Even when content is publicly viewable, risk rises quickly when you combine factors like these:

  • The target is a search results page where the rendered page can contain third-party copyrighted works (for example, images) and licensed datasets
  • Your tooling is explicitly designed to get around challenges, detection, blocking, or other access controls
  • You redistribute the collected output (resale, public API, downstream licensing) where rights clearance is hard
  • You operate at scale in ways that increase the target’s security and infrastructure costs

In other words: the real inflection points are often circumvention, redistribution, and scale—not just “scraping vs. not scraping.”

Terms of Service violations: the very real contract problem

Even for publicly viewable pages, if the site’s Terms of Service prohibit automated access, a project can end up framed as breach of contract (and sometimes related tort claims). In practice, companies often pursue a multi-theory approach—contract, anti-circumvention, unfair competition, interference—rather than relying on a single “anti-hacking” theory.

CFAA and publicly accessible sites

In the U.S., teams frequently ask whether scraping a public site triggers the CFAA (Computer Fraud and Abuse Act) under a “without authorization” theory. In hiQ v. LinkedIn (Ninth Circuit, April 18, 2022), the court treated access to publicly available profiles as generally outside the CFAA’s “without authorization” framing (in that context). That said, hiQ involved LinkedIn profiles, not Google SERPs, and it doesn’t erase other legal vectors like contract and DMCA § 1201. (law.justia.com)

Important: Even if you believe your use case is “public data scraping” and therefore less likely to fit a CFAA theory, anti-circumvention (DMCA § 1201) and contract (ToS) can still carry meaningful risk. Google’s public messaging in this dispute is explicitly focused on circumvention and licensed/copyrighted content appearing in results. (blog.google)

Where the risk line usually is

To make this easier to apply in real projects, here’s a rough “low to high” risk spectrum (general guidance only—consult qualified counsel for decisions).

Activity | Expected risk | Why it’s treated that way
Manual, low-volume checking of search results | Low | No automation, no redistribution, and no technical bypass behavior
Low-volume automated collection for internal research | Medium | ToS violations and operational impact can still become issues
Bypass tactics (fingerprint spoofing, IP rotation, etc.) | High | Looks like “circumvention by design,” which can pull in DMCA § 1201-style arguments
Reselling SERP outputs via an API | High | Redistribution increases licensing/copyright exposure and makes injunctive relief more likely

robots.txt is a crawler-control convention, but complying with it does not automatically eliminate legal risk under contract, copyright, DMCA anti-circumvention, or other theories. Separately, Google documents how it interprets and applies the robots.txt standard for its own crawling behavior. (blog.google)
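As a baseline courtesy check, Python’s standard library can evaluate robots.txt rules before any fetch. This is a minimal offline sketch using a made-up user agent and rules; remember that passing this check is crawler etiquette, not legal clearance.

```python
from urllib import robotparser

# Hypothetical example: evaluate robots.txt rules for a made-up bot.
# parse() accepts the robots.txt body as a list of lines, so this
# sketch needs no network access; in practice you would fetch
# https://example.com/robots.txt first.
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /search",
])

# Search pages are disallowed for all agents; other paths are allowed.
print(rp.can_fetch("example-bot", "https://example.com/search?q=test"))  # False
print(rp.can_fetch("example-bot", "https://example.com/about"))          # True
```

Running this check at the start of every crawl is cheap; the harder (and legally more significant) questions in this article start after the fetch is technically permitted.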


Safer design principles

“Zero legal risk” isn’t a realistic engineering requirement. A more practical goal is: avoid system designs that predictably become the core dispute. If you’re considering SERP collection, treat these as baseline guardrails.

Don’t build for bypass

  • Don’t assume you’ll defeat challenges, evade detection, spoof browsers, or rotate fingerprints as an operating norm
  • If you get blocked, stop—don’t build “keep going no matter what” workflows
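The “stop, don’t escalate” rule above can be made explicit in code. This is a hypothetical sketch (the function names and the fake HTTP client are invented for illustration): block signals such as 403 or 429 end the run instead of triggering evasion like fingerprint changes or IP rotation.

```python
# Hypothetical "stop, don't escalate" policy: a block signal aborts the
# whole run rather than starting an evasion loop.
BLOCK_STATUSES = {403, 429}

class CollectionStopped(Exception):
    """Raised when the target signals it does not want automated access."""

def guarded_fetch(fetch, url):
    """fetch(url) -> (status_code, body); any block signal ends the run."""
    status, body = fetch(url)
    if status in BLOCK_STATUSES:
        raise CollectionStopped(f"{url} returned {status}; stopping run")
    return body

# Stand-in for a real HTTP client, so the sketch runs offline.
def fake_fetch(url):
    return (429, "") if "blocked" in url else (200, "<html>ok</html>")

print(guarded_fetch(fake_fetch, "https://example.com/ok"))
try:
    guarded_fetch(fake_fetch, "https://example.com/blocked")
except CollectionStopped as exc:
    print(exc)
```

The design choice here is that the exception is terminal: nothing in the pipeline catches it to retry with a different identity, which is exactly the behavior that starts to look like “circumvention by design.”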

Don’t design around redistribution

  • Avoid reselling raw scraped outputs or offering a public SERP API by default
  • Store only the minimum you actually need (for example, rank position and destination URLs) and avoid retaining page-rendered assets like images
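Data minimization can be enforced at the storage boundary. A minimal sketch, assuming some upstream parser hands you result dicts (the field names here are invented): only rank and destination URL survive, and rendered assets like snippets and thumbnails are never persisted.

```python
# Hypothetical data-minimization step: keep only the fields the
# analysis needs and drop rendered page assets.
def minimize(parsed_results):
    """parsed_results: list of dicts from some upstream parser."""
    return [
        {"rank": i + 1, "url": r["url"]}
        for i, r in enumerate(parsed_results)
    ]

raw = [
    {"url": "https://example.com/a", "snippet": "...", "thumbnail": b"\x89PNG"},
    {"url": "https://example.org/b", "snippet": "...", "thumbnail": b"\x89PNG"},
]
print(minimize(raw))
# Only rank and url are stored; embedded copyrighted assets are discarded.
```

Doing this before anything touches disk is deliberate: it keeps potentially copyrighted material (images, snippet text) out of your retention and redistribution surface entirely, rather than relying on later cleanup.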

Look for official alternatives first

If your goal is search performance analysis or website operations improvements, search platforms often provide official tools (dashboards and APIs) for parts of that workflow. Treat SERP scraping as the last resort—then re-check whether the requirements really demand it.

Keep logs and be ready to explain your choices

If your team (or a vendor) runs automated collection, document what you collect, how often, your stop conditions, what you store, and why you need it. Operational logs matter because, in a dispute, you’ll need to explain “what happened” clearly and credibly.
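The documentation habit above can be automated as an append-only audit log. This is a sketch under assumed requirements (the field names and JSON Lines format are my choices, not a standard): one record per collection run capturing what was collected, how often, why, and how the run ended.

```python
import datetime
import json

# Hypothetical operational audit log: one JSON line per collection run.
def log_run(path, *, targets, frequency, purpose, stored_fields, stop_reason):
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "targets": targets,          # what you collect
        "frequency": frequency,      # how often
        "purpose": purpose,          # why you need it
        "stored_fields": stored_fields,  # what you actually retain
        "stop_reason": stop_reason,  # how/why the run ended
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

rec = log_run(
    "collection_audit.jsonl",
    targets=["example.com ranking pages"],
    frequency="daily",
    purpose="internal rank tracking",
    stored_fields=["rank", "url"],
    stop_reason="completed",
)
print(rec["stop_reason"])
```

An append-only log of this shape is cheap to produce and, in a dispute, gives you a contemporaneous record of scope and stop conditions rather than a reconstruction after the fact.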

FAQ

Is saving SERP HTML risky?

It depends on your scale and what you do with it, but risk rises sharply when you combine long-term storage, redistribution, retaining embedded copyrighted assets (like images), or any bypass of access controls. This dispute is notable because it puts anti-circumvention and licensed/rightsholder content inside SERPs front and center. (blog.google)

Publicly viewable does not mean “free to copy and redistribute.” SERPs can contain third-party copyrighted works and licensed content, which means rights clearance and tight scoping still matter.

Does this matter for companies outside the U.S. (including Japan)?

If you target U.S.-based platforms (like Google), U.S. legal theories (including DMCA § 1201) can become part of your risk surface—along with practical enforcement like injunction requests, account action, or IP blocking. Separately, if you operate from Japan (or elsewhere), local laws may also apply (for example, unauthorized access, copyright, unfair competition, or contract issues), depending on the facts.


Need a safer SERP data plan?

If you rely on SERP data, the hard part is rarely the scraper—it’s avoiding bypass-by-design, limiting redistribution risk, and documenting operations. We can help you map requirements to a lower-risk collection approach.

Contact us for scraping consultations and quotes.

Summary

Google v. SerpApi (filed December 19, 2025) is a strong reminder that SERP scraping is not automatically “safe because it’s public.” The dispute spotlights how legal exposure can escalate when technical bypass (DMCA § 1201), Terms of Service violations, redistribution/resale, and high-volume automation overlap. (blog.google)

If your business needs SERP data, avoid bypass-oriented designs, consider official alternatives first, minimize what you store and share, and build operational logging and accountability into the system from day one.

About the Author

Ibuki Yamamoto

Web scraping engineer with over 10 years of practical experience, having worked on numerous large-scale data collection projects. Specializes in Python and JavaScript, sharing practical scraping techniques in technical blogs.

Leave It to the Data Collection Professionals

Our professional team with over 100 million data collection records annually solves all challenges including large-scale scraping and anti-bot measures.
