On December 19, 2025, Google publicly announced it had filed a lawsuit against SERP scraping provider SerpApi. The core issue isn't "whether web scraping is legal" in the abstract. It's how courts should treat a more specific pattern: circumventing technical access controls to reach search results that may include copyrighted and licensed content, and then repackaging and reselling that data at scale.
This article summarizes what the case is about and, more importantly, where the practical legal risk lines tend to appear for teams building or buying SERP collection.
Bottom line
The biggest takeaway from Google v. SerpApi is that the legal risk usually doesn't hinge on "fetching a SERP" by itself. Risk spikes when multiple factors stack up:
1. Circumventing access controls (DMCA § 1201)
2. Breaching contract terms (Terms of Service)
3. Redistribution/resale that creates rights and licensing problems
4. Large-scale, organized automated access
Also note: "It's public data, so it's always legal" is a risky assumption. Even if a page is publicly viewable, a project can still trigger real civil (and sometimes criminal) exposure depending on contract terms, anti-circumvention rules, copyright/unfair competition theories, or interference claims.
Case overview
When and where it happened
Google stated on December 19, 2025 that it had taken legal action against SERP scraping company SerpApi. Google's public framing is that SerpApi circumvents security measures designed to protect third-party copyrighted and licensed content that appears within Google Search results. blog.google
Based on publicly available docket information, the case is listed as Google LLC v. SerpApi, LLC, filed on December 19, 2025 in the U.S. District Court for the Northern District of California (N.D. Cal.). The docket also shows early case management deadlines and an initial case management conference scheduled for March 20, 2026 (dates may change). dockets.justia.com
Google's core allegations (as publicly described)
In its official post, Google argues that SerpApi is (a) bypassing security and traffic-management defenses, (b) ignoring publisher and rightsholder choices about access, and (c) capturing and reselling results that include licensed content (such as certain images and real-time data features). blog.google
Google says it sued to stop SerpApi from bypassing protections designed to safeguard copyrighted content appearing in search results and to halt what it describes as unlawful scraping.
SerpApi's response (based on public statements)
In a public blog post, SerpApi presents the dispute as an attempt to threaten access to publicly visible search results. It emphasizes that it does not bypass authentication or access private accounts, and frames Google's approach as stretching anti-circumvention law beyond its intended scope (this is SerpApi's public position; formal litigation arguments may differ). serpapi.com
Key legal risk themes
DMCA § 1201: the anti-circumvention center of gravity
One of the most closely watched angles in this dispute is the U.S. DMCA's anti-circumvention provision (17 U.S.C. § 1201). DMCA § 1201 can create liability for circumventing a technological measure that "effectively controls access" to a copyrighted work, even before you get to traditional copyright infringement questions. blog.google
As a practical matter, "circumvention" can include designing systems to avoid, bypass, disable, or otherwise defeat technical controls. That's why the discussion is not limited to classic "paywall/login" scenarios: if access restrictions are implemented technically and your scraper is built to route around them, the risk analysis changes.
The "it's public, so it's fine" misconception
Even when content is publicly viewable, risk rises quickly when you combine factors like these:
- The target is a search results page where the rendered page can contain third-party copyrighted works (for example, images) and licensed datasets
- Your tooling is explicitly designed to get around challenges, detection, blocking, or other access controls
- You redistribute the collected output (resale, public API, downstream licensing) where rights clearance is hard
- You operate at scale in ways that increase the targetâs security and infrastructure costs
In other words: the real inflection points are often circumvention, redistribution, and scale, not just "scraping vs. not scraping."
Terms of Service violations: the very real contract problem
Even for publicly viewable pages, if the site's Terms of Service prohibit automated access, a project can end up framed as breach of contract (and sometimes related tort claims). In practice, companies often pursue a multi-theory approach (contract, anti-circumvention, unfair competition, interference) rather than relying on a single "anti-hacking" theory.
CFAA and publicly accessible sites
In the U.S., teams frequently ask whether scraping a public site triggers the CFAA (Computer Fraud and Abuse Act) under a "without authorization" theory. In hiQ v. LinkedIn (Ninth Circuit, April 18, 2022), the court treated access to publicly available profiles as generally outside the CFAA's "without authorization" framing (in that context). That said, hiQ involved LinkedIn profiles, not Google SERPs, and it doesn't erase other legal vectors like contract and DMCA § 1201. law.justia.com
Important: Even if you believe your use case is "public data scraping" and therefore less likely to fit a CFAA theory, anti-circumvention (DMCA § 1201) and contract (ToS) can still carry meaningful risk. Google's public messaging in this dispute is explicitly focused on circumvention and licensed/copyrighted content appearing in results. blog.google
Where the risk line usually is
To make this easier to apply in real projects, here's a rough "low to high" risk spectrum (general guidance only; consult qualified counsel for decisions).
| Activity | Expected risk | Why it's treated that way |
|---|---|---|
| Manual, low-volume checking of search results | Low | No automation, no redistribution, and no technical bypass behavior |
| Low-volume automated collection for internal research | Medium | ToS violations and operational impact can still become issues |
| Bypass tactics (fingerprint spoofing, IP rotation, etc.) | High | Looks like "circumvention by design," which can pull in DMCA § 1201-style arguments |
| Reselling SERP outputs via an API | High | Redistribution increases licensing/copyright exposure and makes injunctive relief more likely |
"We follow robots.txt" is not a legal shield
robots.txt is a crawler-control convention, but complying with it does not automatically eliminate legal risk under contract, copyright, DMCA anti-circumvention, or other theories. Separately, Google documents how it interprets and applies the robots.txt standard for its own crawling behavior. blog.google
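As a concrete compliance step (again, a courtesy, not a legal shield), Python's standard library can evaluate robots.txt rules before any fetch. A minimal sketch; the user-agent string and rules shown are illustrative assumptions, not any site's actual policy:

```python
from urllib.robotparser import RobotFileParser

def allowed_by_robots(robots_txt: str, user_agent: str, url: str) -> bool:
    """Evaluate robots.txt rules for one URL.

    A True result only means the crawler-control convention permits the
    fetch; it does not resolve contract (ToS), copyright, or
    anti-circumvention questions.
    """
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())  # parse rules from text
    return parser.can_fetch(user_agent, url)

# Hypothetical rules: disallow the /search path for all agents.
rules = "User-agent: *\nDisallow: /search"
allowed_by_robots(rules, "example-bot", "https://example.com/search?q=x")  # False
allowed_by_robots(rules, "example-bot", "https://example.com/about")       # True
```

In production you would fetch `https://<host>/robots.txt` and re-check it periodically, since sites change their rules.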
Safer design principles
"Zero legal risk" isn't a realistic engineering requirement. A more practical goal is: avoid system designs that predictably become the core dispute. If you're considering SERP collection, treat these as baseline guardrails.
Don't build for bypass
- Don't assume you'll defeat challenges, evade detection, spoof browsers, or rotate fingerprints as an operating norm
- If you get blocked, stop; don't build "keep going no matter what" workflows
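One way to encode "stop, don't route around blocks" is to make the post-response decision an explicit policy function with no evasion branch. A minimal sketch; the status sets and action names are assumptions for illustration, not a standard:

```python
# Statuses treated as a hard stop signal rather than an obstacle to defeat
# (401/403 denial, 429 rate limiting, 451 legal restriction).
STOP_STATUSES = {401, 403, 429, 451}

def next_action(status_code: int) -> str:
    """Decide what a collection loop should do after a response.

    Returns one of 'continue', 'retry-later', or 'stop'. There is
    deliberately no 'retry with a different fingerprint or IP' branch:
    a block ends the run and gets recorded, not worked around.
    """
    if status_code in STOP_STATUSES:
        return "stop"
    if 500 <= status_code < 600:
        return "retry-later"  # transient server error: back off, try once later
    if 200 <= status_code < 300:
        return "continue"
    return "stop"  # anything unexpected: fail safe and stop
```

A fetch loop that calls `next_action` after every response, and exits the whole run on `"stop"`, makes the no-bypass policy auditable in code review rather than a verbal guideline.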
Don't design around redistribution
- Avoid reselling raw scraped outputs or offering a public SERP API by default
- Store only the minimum you actually need (for example, rank position and destination URLs) and avoid retaining page-rendered assets like images
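Data minimization can be enforced at the type level by defining a record that simply has no fields for snippets, images, or raw HTML. A hypothetical sketch; the `RankObservation` name and fields are illustrative, matched to the "rank position and destination URLs" example above:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RankObservation:
    """The minimum retained per result: no snippets, no images, no raw HTML."""
    query: str
    rank: int              # 1-based position on the results page
    destination_url: str   # the linked page, not the SERP markup
    observed_at: str       # ISO-8601 timestamp of the observation

def to_observations(query: str, urls: list[str], observed_at: str) -> list[RankObservation]:
    """Reduce a parsed result list to rank + destination URL only."""
    return [
        RankObservation(query, position, url, observed_at)
        for position, url in enumerate(urls, start=1)
    ]
```

Because the record type cannot hold page-rendered assets, "we only store rank and URL" becomes a property of the schema rather than a promise in a policy document.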
Look for official alternatives first
If your goal is search performance analysis or website operations improvements, search platforms often provide official tools (dashboards and APIs) for parts of that workflow. Treat SERP scraping as the last resort, then re-check whether the requirements really demand it.
Keep logs and be ready to explain your choices
If your team (or a vendor) runs automated collection, document what you collect, how often, your stop conditions, what you store, and why you need it. Operational logs matter because, in a dispute, you'll need to explain "what happened" clearly and credibly.
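One lightweight way to keep such records is an append-only JSON Lines log with one entry per collection run, using fields that mirror the questions above. A sketch under assumed field names; `log_collection_run` is a hypothetical helper, not part of any library:

```python
import json
import time

def log_collection_run(log_path, *, targets, frequency, stop_condition,
                       retained_fields, purpose):
    """Append one structured record per collection run (JSON Lines).

    The fields mirror what you would need to explain later: what was
    collected, how often, when you stop, what you keep, and why.
    """
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "targets": targets,                  # what you collect
        "frequency": frequency,              # how often
        "stop_condition": stop_condition,    # when you stop
        "retained_fields": retained_fields,  # what you store
        "purpose": purpose,                  # why you need it
    }
    with open(log_path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")
    return record
```

An append-only, timestamped format is deliberate: it is easy to ship to cold storage and hard to quietly rewrite, which is what "clearly and credibly" requires.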
FAQ
Is saving SERP HTML risky?
It depends on your scale and what you do with it, but risk rises sharply when you combine long-term storage, redistribution, retaining embedded copyrighted assets (like images), or any bypass of access controls. This dispute is notable because it puts anti-circumvention and licensed/rightsholder content inside SERPs front and center. blog.google
If the content is public, does copyright not apply?
Publicly viewable does not mean "free to copy and redistribute." SERPs can contain third-party copyrighted works and licensed content, which means rights clearance and tight scoping still matter.
Does this matter for companies outside the U.S. (including Japan)?
If you target U.S.-based platforms (like Google), U.S. legal theories (including DMCA § 1201) can become part of your risk surface, along with practical enforcement like injunction requests, account action, or IP blocking. Separately, if you operate from Japan (or elsewhere), local laws may also apply (for example, unauthorized access, copyright, unfair competition, or contract issues), depending on the facts.
Need a safer SERP data plan?
If you rely on SERP data, the hard part is rarely the scraper; it's avoiding bypass-by-design, limiting redistribution risk, and documenting operations. We can help you map requirements to a lower-risk collection approach.
Summary
Google v. SerpApi (filed December 19, 2025) is a strong reminder that SERP scraping is not automatically "safe because it's public." The dispute spotlights how legal exposure can escalate when technical bypass (DMCA § 1201), Terms of Service violations, redistribution/resale, and high-volume automation overlap. blog.google
If your business needs SERP data, avoid bypass-oriented designs, consider official alternatives first, minimize what you store and share, and build operational logging and accountability into the system from day one.