The ZenRows® Universal Scraper API is a versatile tool designed to simplify and enhance the process of extracting data from websites. Whether you’re dealing with static or dynamic content, our API provides a range of features to meet your scraping needs efficiently.
With Premium Proxies, ZenRows gives you access to over 55 million residential IPs from 190+ countries, ensuring 99.9% uptime and highly reliable scraping sessions. Our system also handles advanced fingerprinting, header rotation, and IP management, enabling you to scrape even the most protected sites without needing to manually configure these elements.
ZenRows makes it easy to bypass complex anti-bot measures, handle JavaScript-heavy sites, and interact with web elements dynamically — all with the right features enabled.
Render JavaScript on web pages using a headless browser to scrape dynamic content that traditional methods might miss.
When to use: Use this feature when targeting modern websites built with JavaScript frameworks (React, Vue, Angular), single-page applications (SPAs), or any site that loads content dynamically after the initial page load.
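As a minimal sketch using only the Python standard library (the API key and target URL are placeholders, and the `https://api.zenrows.com/v1/` endpoint is assumed), enabling JavaScript rendering is a single extra parameter:

```python
from urllib.parse import urlencode
from urllib.request import urlopen

# Placeholders: substitute your own API key and target URL
params = {
    "apikey": "YOUR_API_KEY",
    "url": "https://www.example.com",
    "js_render": "true",  # render the page in a headless browser
}
request_url = "https://api.zenrows.com/v1/?" + urlencode(params)
# html = urlopen(request_url).read()  # requires a valid key and network access
```

Any HTTP client works the same way; only the query parameters change.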
Leverage a vast network of residential IP addresses across 190+ countries, ensuring a 99.9% uptime for uninterrupted scraping.
When to use: Essential for accessing websites with sophisticated anti-bot systems, geo-restricted content, or when you consistently encounter blocks with standard datacenter IPs.
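A sketch of a Premium Proxies request pinned to a specific country (placeholder key and URL; `proxy_country` requires Premium Proxies to be enabled):

```python
from urllib.parse import urlencode

params = {
    "apikey": "YOUR_API_KEY",          # placeholder
    "url": "https://www.example.com",  # placeholder target
    "premium_proxy": "true",           # route through residential IPs
    "proxy_country": "us",             # optional: pin the IP's country
}
request_url = "https://api.zenrows.com/v1/?" + urlencode(params)
# Fetch with any HTTP client, e.g. urllib.request.urlopen(request_url)
```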
Add custom HTTP headers to your requests for more control over how your requests appear to target websites.
When to use: When you need to mimic specific browser behavior, set cookies, or send a specific referer.
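A sketch with placeholder values: with `custom_headers=true`, the headers you send alongside the request are assumed to be forwarded to the target site:

```python
from urllib.parse import urlencode
from urllib.request import Request

params = {
    "apikey": "YOUR_API_KEY",
    "url": "https://www.example.com",
    "custom_headers": "true",  # forward the headers below to the target
}
req = Request(
    "https://api.zenrows.com/v1/?" + urlencode(params),
    headers={
        "Referer": "https://www.google.com/",  # example referer
        "Cookie": "session=abc123",            # example cookie
    },
)
# urllib.request.urlopen(req) sends the request (needs a valid key)
```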
Use a session ID to maintain the same IP address across multiple requests for up to 10 minutes.
When to use: When scraping multi-page flows or processes that require maintaining the same IP across multiple requests.
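A sketch of reusing one `session_id` across a multi-page flow (placeholder values; any integer works as the ID, and reusing it keeps the same IP for up to 10 minutes):

```python
from urllib.parse import urlencode

session_id = 12345  # reuse the same integer ID across related requests
pages = [
    "https://www.example.com/step1",
    "https://www.example.com/step2",
]
request_urls = [
    "https://api.zenrows.com/v1/?" + urlencode({
        "apikey": "YOUR_API_KEY",
        "url": page,
        "session_id": session_id,  # same ID -> same IP
    })
    for page in pages
]
```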
Extract only the data you need with CSS selectors or automatic parsing.
When to use: When you need specific information from pages and want to reduce bandwidth usage or simplify post-processing.
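A sketch passing a `css_extractor` mapping (the selector names and keys are illustrative; the parameter takes a JSON object mapping output keys to CSS selectors):

```python
import json
from urllib.parse import urlencode

# Map output keys to CSS selectors; the value must be JSON-encoded
extractor = json.dumps({"title": "h1", "prices": ".price"})
params = {
    "apikey": "YOUR_API_KEY",
    "url": "https://www.example.com",
    "css_extractor": extractor,
}
request_url = "https://api.zenrows.com/v1/?" + urlencode(params)
# The response body is then JSON keyed by "title" and "prices"
```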
While Python examples are provided, the API works with any programming language that can make HTTP requests.
Customize your scraping requests using the following parameters:
PARAMETER | TYPE | DEFAULT | DESCRIPTION |
---|---|---|---|
`apikey` (required) | string | — | Your unique API key for authentication |
`url` (required) | string | — | The URL of the page you want to scrape |
`js_render` | boolean | `false` | Enable JavaScript rendering with a headless browser. Essential for modern web apps, SPAs, and sites with dynamic content. |
`js_instructions` | string | — | Execute custom JavaScript on the page to interact with elements, scroll, click buttons, or manipulate content. Use when you need to perform actions before the content is returned. |
`custom_headers` | boolean | `false` | Add custom HTTP headers to your request, such as cookies or a referer, to better simulate real browser traffic or provide site-specific information. |
`premium_proxy` | boolean | `false` | Use residential IPs to bypass anti-bot protection. Essential for accessing protected sites. |
`proxy_country` | string | — | Set the country of the IP used for the request (requires Premium Proxies). Use for accessing geo-restricted or region-specific content. |
`session_id` | integer | — | Maintain the same IP for multiple requests for up to 10 minutes. Essential for multi-step processes. |
`original_status` | boolean | `false` | Return the original HTTP status code from the target page. Useful for debugging errors. |
`allowed_status_codes` | string | — | Return the content even if the target page fails with one of the specified status codes. Useful for debugging or when you need content from error pages. |
`wait_for` | string | — | Wait for a specific CSS selector to appear in the DOM before returning content. Essential for elements that load asynchronously. |
`wait` | integer | `0` | Wait a fixed number of milliseconds after page load. Use for sites that load content in stages or have delayed rendering. |
`block_resources` | string | — | Block specific resources (images, fonts, etc.) from loading to speed up scraping and reduce bandwidth usage. Enabled by default; be careful when changing it. |
`json_response` | boolean | `false` | Capture network requests in JSON format, including XHR or Fetch data. Ideal for intercepting API calls made by the web page. |
`css_extractor` | string (JSON) | — | Extract specific elements using CSS selectors. Perfect for targeting only the data you need from complex pages. |
`autoparse` | boolean | `false` | Automatically extract structured data from HTML. Great for quick extraction without specifying selectors. |
`response_type` | string | — | Convert HTML to other formats (Markdown, plaintext, PDF). Useful for content readability, storage, or training AI models. |
`screenshot` | boolean | `false` | Capture an above-the-fold screenshot of the page. Helpful for visual verification or debugging. |
`screenshot_fullpage` | boolean | `false` | Capture a full-page screenshot. Useful for content that extends below the fold. |
`screenshot_selector` | string | — | Capture a screenshot of a specific element using a CSS selector. Perfect for capturing specific components. |
`screenshot_format` | string | `png` | Choose between `png` (default) and `jpeg` formats for screenshots. |
`screenshot_quality` | integer | — | For JPEG format, set quality from 1 to 100. Lower values reduce file size but decrease quality. |
`outputs` | string | — | Specify which data types to extract from the scraped HTML. |
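As a sketch of combining several of these parameters, the request below waits for a selector and captures a full-page JPEG screenshot (placeholder key, URL, and selector; screenshots are assumed to require `js_render=true`):

```python
from urllib.parse import urlencode

params = {
    "apikey": "YOUR_API_KEY",
    "url": "https://www.example.com",
    "js_render": "true",            # screenshots need a headless browser
    "wait_for": ".product-list",    # wait until this selector appears
    "screenshot_fullpage": "true",  # capture the entire page
    "screenshot_format": "jpeg",
    "screenshot_quality": 80,       # smaller file, slightly lower quality
}
request_url = "https://api.zenrows.com/v1/?" + urlencode(params)
# Save the body as an image, e.g.:
# open("page.jpg", "wb").write(urllib.request.urlopen(request_url).read())
```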
ZenRows® provides flexible plans tailored to different web scraping needs, starting from $69 per month. This entry-level plan allows you to scrape up to 250,000 URLs using basic requests. For more demanding needs, our Enterprise plans scale up to 38 million URLs or more.
For complex or highly protected websites, enabling advanced features like JavaScript rendering (`js_render`) and Premium Proxies unlocks ZenRows' full potential, ensuring the best success rate possible.
The pricing depends on the complexity of the request — you only pay for the scraping tech you need.
Concurrency determines how many requests can run simultaneously:
Plan | Concurrency Limit | Response Size Limit |
---|---|---|
Developer | 5 | 5 MB |
Startup | 20 | 10 MB |
Business | 50 | 10 MB |
Business 500 | 100 | 20 MB |
Business 1K | 150 | 20 MB |
Business 2K | 200 | 50 MB |
Business 3K | 250 | 50 MB |
Enterprise | Custom | 50 MB |
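A common client-side pattern for staying within these limits is exponential backoff on `429` responses. This helper is illustrative, not part of the API:

```python
import time

def fetch_with_backoff(do_request, max_retries=3, base_delay=1.0):
    """Call do_request(); on HTTP 429, wait exponentially longer and retry.

    do_request is any callable returning an object with a .status_code,
    e.g. lambda: requests.get(api_url, params=params).
    """
    for attempt in range(max_retries + 1):
        response = do_request()
        if response.status_code != 429:
            return response
        time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    return response  # still 429 after all retries
```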
Important notes about concurrency:
- Requests beyond your plan's concurrency limit are rejected with a `429 Too Many Requests` error

If the response size limit is exceeded:
- The request fails with a `413 Content Too Large` error

Strategies for handling large pages:
- Extract only the data you need with the `css_extractor` parameter
- Use `response_type`: convert to Markdown or plaintext to reduce size
- Be careful with the `screenshot` features; these can significantly increase response size

ZenRows provides useful information through response headers:
Header | Description | Example Value | Usage |
---|---|---|---|
Concurrency-Limit | Maximum concurrent requests allowed by your plan | 20 | Monitor your plan’s capacity |
Concurrency-Remaining | Available concurrent request slots | 17 | Adjust request rate dynamically |
X-Request-Cost | Cost of this request | 0.001 | Track balance consumption |
X-Request-Id | Unique identifier for this request | 67fa4e35647515d8ad61bb3ee041e1bb | Include when contacting support |
Zr-Final-Url | The final URL after any redirects occurred during the request | https://example.com/page?id=123 | Track redirects |
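For example, a client could throttle itself based on `Concurrency-Remaining`. The helper and threshold below are illustrative:

```python
def should_pause(headers, min_slots=2):
    """Return True when few concurrent request slots remain.

    `headers` stands in for the response-headers mapping of any HTTP client.
    """
    remaining = int(headers.get("Concurrency-Remaining", 0))
    return remaining < min_slots

# Example header values from the table above
headers = {"Concurrency-Limit": "20", "Concurrency-Remaining": "17"}
print(should_pause(headers))  # False: 17 slots are still free
```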
Why these headers matter:
- Include `X-Request-Id` when contacting support for faster troubleshooting
- `X-Request-Cost` helps you monitor your usage per request
- `Zr-Final-Url` shows where you ended up after any redirects

Beyond the core features and limits, these additional aspects are important to consider when using the Universal Scraper API:
When you cancel a request on the client side:
- The request may still be processed and count toward your concurrency limit, which can lead to `429 Too Many Requests` errors

To keep your ZenRows integration secure:
- Keep your API key private; for example, store it in an environment variable rather than in source code

To optimize performance based on target website location:
- Use the `proxy_country` parameter to select IPs close to the target site

The ZenRows API supports response compression to optimize bandwidth usage and improve performance.
ZenRows supports the following compression encodings: `gzip`, `deflate`, and `br`.

To use compression, include the appropriate `Accept-Encoding` header in your requests. Most HTTP clients send this header and decompress responses automatically, but you can also set it explicitly:
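A sketch using only the standard library (the request URL is a placeholder; note that `urllib` does not decompress automatically, while clients like `requests` handle gzip/deflate for you):

```python
import gzip
from urllib.request import Request, urlopen

req = Request(
    "https://api.zenrows.com/v1/?apikey=YOUR_API_KEY&url=https%3A%2F%2Fwww.example.com",
    headers={"Accept-Encoding": "gzip"},  # advertise gzip support
)
# body = urlopen(req).read()       # compressed bytes (needs a valid key)
# html = gzip.decompress(body)     # manual decompression with urllib
```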