# API Error Codes

Source: https://docs.zenrows.com/api-error-codes

Understanding the ZenRows® Universal Scraper API error codes is crucial for troubleshooting and optimizing your interactions with ZenRows. Below are common errors you may encounter, along with explanations and recommended actions. Each error code corresponds to specific conditions encountered during Universal Scraper API usage, from authentication problems to request handling and server responses.

## 400 Bad Request

### REQS001 Requests To This Domain Are Forbidden

**Problem:** Requests to this URL are forbidden.

**Solution:**

1. Check our [target sites' access restrictions and user behavior guidelines](/forbidden-sites) to see if the domain is explicitly blocked
2. Try using a different domain that provides similar data
3. If you need to scrape this specific domain for a legitimate business purpose, contact our support team to discuss potential options

### REQS002 Request Requirements Unsatisfied

**Problem:** The requested URL domain needs JavaScript rendering and/or Premium Proxies due to its high-level security defenses.

**Solution:**

1. Read the error message to understand the specific requirements for the domain
2. Add `js_render=true` and/or `premium_proxy=true` parameters to your request, depending on the domain's requirements

### REQS004 Invalid Params Provided

**Problem:** Some parameters, or even the URL itself, are invalid or not properly encoded.

**Solution:**

1. Read the error message to understand the specific issue
2. Ensure your URL is properly URL-encoded (check our [guide on encoding URLs](/universal-scraper-api/faq#how-to-encode-urls)) or that the HTTP client you use encodes it for you
3. Validate that all parameter values match the expected types (boolean, string, etc.)
4. Remove any invalid or unsupported parameters from your request
5. Check that you are not sending query parameters for the target page as parameters for the API call

### RESP004 CSS Extractor Parameter Is Not Valid

**Problem:** The `css_extractor` parameter sent in your request is not valid.

**Solution:**

1. Check your CSS selector syntax for errors (like missing brackets or quotes)
2. Simplify complex selectors that might be causing issues
3. Test your CSS selector on the actual page using browser developer tools first (`document.querySelector('.your-selector')`)
4. If extracting multiple elements, ensure your JSON structure follows the correct format:

```js theme={null}
css_extractor={
  "title": "h1.product-title",
  "price": "span.product-price"
}
```

### REQS006 Invalid CAPTCHA Solver Key

**Problem:** The configured CAPTCHA solver integration API key is invalid.

**Solution:**

1. Verify your CAPTCHA solver API key on your [account integrations](https://app.zenrows.com/account/integrations) page
2. Check that your CAPTCHA solver account has sufficient funds
3. Confirm the CAPTCHA solver service is operational through their status page
4. Try re-generating a new API key from your CAPTCHA solver service

### RESP008 Non-Retryable Error not related to ZenRows

**Problem:** The target site returned an error caused by its own configuration or certificate, not an error ZenRows can solve or bypass.

**Solution:**

1. Check if the target website is accessible from your browser
2. If you're using geolocation, try a different country, as the site might only be available in specific regions
3. Wait a few minutes and retry, as the issue might be temporary
4. For SSL certificate issues, try accessing the HTTP version of the site if available
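To tie these fixes together, here is a minimal sketch in Python (the target URL is a placeholder) that avoids most REQS004 encoding issues by letting the HTTP client encode the target URL, and enables the features a protected domain may require per REQS002:

```python
# pip install requests
import requests

params = {
    'apikey': 'YOUR_ZENROWS_API_KEY',
    # Pass the raw target URL; requests URL-encodes it for you
    'url': 'https://www.example.com/search?q=running+shoes',
    'js_render': 'true',      # add only if the domain requires JavaScript rendering
    'premium_proxy': 'true',  # add only if the domain requires Premium Proxies
}
response = requests.get('https://api.zenrows.com/v1/', params=params)
print(response.status_code)
```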
## 401 Unauthorized

### AUTH001 API Key Missing

**Problem:** No `apikey` information was sent in your request.

**Solution:**

1. Add the `apikey` parameter to your request
2. Check your code to ensure the apikey is being included in every request
3. For API clients like Axios or Requests, set up default parameters to include your API key automatically
4. Verify the API endpoint structure: the key should be sent as a query parameter to the API endpoint

### AUTH002 Invalid API Key

**Problem:** The `apikey` sent does not match the expected format.

**Solution:**

1. Copy your API key directly from the [ZenRows Builder](https://app.zenrows.com/builder)
2. Ensure no extra spaces or characters were accidentally included
3. Check for any string formatting or concatenation issues in your code
4. If using environment variables, verify they're loading correctly

### AUTH003 API Key Not Found

**Problem:** The `apikey` sent in your request is not valid.

**Solution:**

1. Verify you're using the correct API key from your [ZenRows Builder](https://app.zenrows.com/builder)
2. Check if your API key has been revoked or regenerated recently
3. Ensure your account is active and working; you can verify this by performing a request from the Builder itself
4. For team accounts, confirm with your administrator that the key is still valid

## 402 Payment Required

### AUTH004 Usage Exceeded

**Problem:** This `apikey` has no more usage remaining.

**Solution:**

1. Top up your account on the [ZenRows Billing page](https://app.zenrows.com/billing)
2. Upgrade your plan on the [ZenRows Plans page](https://app.zenrows.com/plans)
3. Check your current usage to see if you're approaching limits before they're reached
4. For temporary needs or one-off issues, contact our support team

### AUTH005 API Key Is No Longer Valid

**Problem:** This `apikey` has reached the end of its validity period.

**Solution:**

1. Check your subscription on the [ZenRows Billing page](https://app.zenrows.com/billing)
2. Check that your account is active and working; you can verify this by performing a request from the [Builder](https://app.zenrows.com/builder) itself
3. Contact support if you believe there is an issue with your account

### AUTH010 Feature Is Not Included In Plan

**Problem:** The requested feature is not included in your subscription plan.

**Solution:**

1. Review the [features included in each plan](https://www.zenrows.com/pricing)
2. Upgrade to a plan that includes the feature you need
3. Modify your code to avoid using premium features if you choose not to upgrade
4. Contact our support team if you believe there is an issue with your account

### AUTH011 No Subscription Found

**Problem:** This account does not have an active subscription.

**Solution:**

1. Purchase a subscription plan from the [ZenRows plans page](https://app.zenrows.com/plans)
2. Check for any failed payment attempts in your account history
3. Verify your payment method details are correct
4. Contact our support team if you believe there is an issue with your account

### AUTH012 Subscription Does Not Allow To Use The Product

**Problem:** This account's subscription does not allow use of this service.

**Solution:**

1. Check which ZenRows products your subscription includes
2. Upgrade to a plan that includes the product you're trying to use
3. Ensure you're using the correct API endpoint for your subscription
4. Contact our support team if you believe there is an issue with your account
## 403 Forbidden

### AUTH009 User Is Not Verified

**Problem:** This `apikey` belongs to a user that has not verified their email account.

**Solution:**

1. Check your email inbox (including the spam folder) for a verification email
2. Request a new verification email
3. Ensure your email address is entered correctly in your profile
4. Contact support if you continue to have verification issues

### BLK0001 IP Address Blocked

**Problem:** Your IP address has been blocked for exceeding the maximum error rate allowed.

**Solution:**

1. Wait a few minutes before retrying requests
2. Implement error handling in your code to prevent excessive failed requests
3. Use exponential backoff when retrying failed requests
4. If using the service in a high-traffic environment, consider implementing a queue system to manage request rates

Visit our [troubleshooting guide](/universal-scraper-api/troubleshooting/ip-address-blocked) for step-by-step instructions, common causes, and best practices to quickly restore access.

## 404 Not Found

### RESP002 Page Not Found

**Problem:** The requested URL returned a 404 HTTP status code.

**Solution:**

1. Verify the URL exists by opening it in a browser
2. Check for typos or encoding issues in the URL
3. If the page was recently available, it might have been moved or deleted
4. For dynamic sites, try adding `js_render=true`, as some 404 pages are generated via JavaScript
5. Note that these requests are billed

### RESP007 Site Not Found

**Problem:** The requested target domain could not be resolved, or there is no DNS record associated with it.

**Solution:**

1. Verify the domain exists by checking it in your browser
2. Check for typos in the domain name
3. If using an IP address, ensure it's correctly formatted
4. If the domain is new or rarely accessed, DNS propagation might still be in progress
5. Try using premium proxies and geolocation, since the domain might be available only in certain countries
6. Note that these requests are billed

## 405 Method Not Allowed

### REQS005 Method Not Allowed

**Problem:** The HTTP verb used to access this page is not allowed.

**Solution:**

1. Change your request method to one of the allowed methods: GET, POST, or PUT
2. For complex requests, consider breaking them down into multiple simpler requests
3. Check if the endpoint you're trying to access has specific method requirements

## 407 Proxy Authentication Required

### AUTH007 Invalid Proxy-Authorization Header

**Problem:** The Proxy-Authorization header sent does not match the expected format.

**Solution:**

1. Ensure the Proxy-Authorization header is a base64-encoded string of your credentials separated by `:`
2. Check your base64 encoding function for errors or incorrect character handling
3. Try API calls instead of proxy mode to confirm that the API key and account are working properly

## 413 Content Too Large

### RESP005 Response Size Exceeded The Limit

**Problem:** The response data size is bigger than the maximum allowed download size.

**Solution:**

1. Request specific parts of the page using CSS selectors instead of the entire page
2. Split large pages into multiple smaller requests by targeting specific sections
3. Use pagination parameters if available on the target site
4. If using JSON Response, the full response counts toward the size limit; try removing it and see if it works

Looking for ways to handle large responses? Check out our [troubleshooting guide](/universal-scraper-api/troubleshooting/response-too-large) for practical strategies, examples, and tips to work within response size limits.
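As a reference for RESP005's first suggestion, here is a hedged sketch of requesting only specific parts of a page, reusing the `css_extractor` format shown earlier (the target URL and selectors are placeholders):

```python
# pip install requests
import json
import requests

params = {
    'apikey': 'YOUR_ZENROWS_API_KEY',
    'url': 'https://www.example.com/product',  # placeholder target
    # Extract only the fields you need instead of downloading the entire page
    'css_extractor': json.dumps({
        'title': 'h1.product-title',
        'price': 'span.product-price',
    }),
}
response = requests.get('https://api.zenrows.com/v1/', params=params)
print(response.text)
```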
## 422 Unprocessable Entity

### RESP001 Could Not Get Content

**Problem:** The service couldn't get the content.

**Solution:**

1. Add `js_render=true`, as the site might require JavaScript to load content
2. Enable `premium_proxy=true` if the site has anti-bot measures
3. Try adding geolocation to the request, set to a country where the site is available
4. Increase wait time with `wait=5000` or higher if content loads slowly
5. Check the error's body for more specific details about the failure
6. Try using custom headers with a referer to mimic a real browser:

```js theme={null}
params={
  // ...
  "custom_headers": true,
}
headers={
  "Referer": "https://www.google.com"
}
```

## 424 Failed Dependency

### RESP006 Failed To Solve CAPTCHA

**Problem:** The CAPTCHA solver provider was unable to solve the CAPTCHA detected on the page.

**Solution:**

1. Check your CAPTCHA solver service account for sufficient balance
2. Try adding premium proxies to the request
3. Implement retry logic with increasing wait times between attempts

## 429 Too Many Requests

### AUTH006 Concurrency Exceeded

**Problem:** The concurrency limit was reached.

**Solution:**

1. Implement a queue system in your code to limit concurrent requests
2. Monitor the `Concurrency-Remaining` header to adjust your request rate dynamically
3. Increase wait times between batches of requests
4. For high-volume scraping needs, upgrade to a plan with higher concurrency limits
5. Learn more about [how ZenRows concurrency works](/universal-scraper-api/features/concurrency) and implement the provided code examples
6. Note that requests canceled on the client side will not release concurrency until processing finishes on the server side; we recommend not setting a timeout below 3 minutes

### AUTH008 Rate Limit Exceeded

**Problem:** The rate limit was reached.

**Solution:**

1. Implement exponential backoff between requests
2. Distribute requests evenly over time rather than sending them in bursts
3. Set up a queue system with a configurable delay between requests
4. For time-sensitive projects, consider upgrading to a plan with higher rate limits
5. Monitor usage patterns to identify and optimize peak request periods

## 500 Internal Server Error

### CTX0001 Context Cancelled

**Problem:** The request was canceled from the client's side.

**Solution:**

1. Check your client's timeout settings; we recommend not setting a timeout below 3 minutes
2. If you're canceling requests manually, review your cancellation logic
3. Ensure your network connection is stable during the entire request

### ERR0001 Unknown Error

**Problem:** An internal error occurred.

**Solution:**

1. Retry the request after a short delay (10-30 seconds)
2. Check the [ZenRows status page](https://status.zenrows.com/) for any ongoing issues
3. Implement error logging to capture the full error response for troubleshooting
4. If the error persists, contact support with details of your request

### ERR0000 Unknown Error

**Problem:** An unexpected internal error occurred.

**Solution:**

1. Retry the request after a short delay (10-30 seconds)
2. Check the [ZenRows status page](https://status.zenrows.com/) for any ongoing issues
3. Implement error logging to capture the full error response for troubleshooting
4. If the error persists, contact support with details of your request
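A minimal retry sketch for the transient errors above (`ERR0000`/`ERR0001`, `429`, and similar); the delays and retry count are illustrative, not a definitive implementation:

```python
# pip install requests
import time
import requests

def fetch_with_backoff(params, max_retries=4):
    """Retry transient errors with an increasing delay between attempts."""
    delay = 10  # start within the suggested 10-30 second range
    response = None
    for _ in range(max_retries):
        # Keep the client timeout at 3 minutes or more to avoid premature cancellations
        response = requests.get('https://api.zenrows.com/v1/', params=params, timeout=180)
        if response.status_code not in (429, 500, 502, 504):
            return response
        time.sleep(delay)
        delay *= 2  # exponential backoff
    return response
```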
## 502 Bad Gateway

### RESP003 Could Not Parse Content

**Problem:** The request failed because the content of the URL could not be automatically parsed.

**Solution:**

1. Remove the `autoparse` parameter and process the raw HTML response
2. Contact support with details of your request if the issue persists

## 504 Gateway Timeout

### CTX0002 Operation Timeout Exceeded

**Problem:** The request exceeded the maximum allowed time and was aborted.

**Solution:**

1. Check if the target website is slow or unresponsive in a browser
2. Reduce the complexity of your request if using JS Instructions
3. Try adding premium proxies to the request, or geolocation set to a country where the site is available
4. Break complex scraping tasks into smaller, more focused requests
5. If the issue persists, contact support with details of your request and use case

# Product updates and announcements

Source: https://docs.zenrows.com/changelog

Hey everyone! We're excited to share a significant update to help you stay on top of your account activity:

🔔 **Usage Notifications & Alerts Now Available for All Plans**

All plans now include access to usage notifications and alerts to keep you informed about your account activity and resource consumption.

✅ **Default Notifications Enabled for Active Customers**

To make things even easier, we've enabled the following default notifications for all active customers:

* Invoice Paid: Get notified when your invoices are successfully processed.
* Weekly Usage Summary: A snapshot of your account usage delivered straight to your inbox.
* Universal Consumption Threshold: Alerts when you approach key usage limits.

🎛️ **Customize Your Notifications**

You can personalize your notification preferences anytime by visiting your [notification settings](https://app.zenrows.com/account/notifications).

This update ensures you have the tools you need to manage your account effortlessly. Let us know if you have any feedback—we're always here to improve your experience!

Hey everyone! We've made a round of upgrades across our APIs based on customer feedback:

🏡 **Idealista API**

* Added location fields (neighborhood → region) to reduce API calls
* Picture tags now included for better media organization
* Unpublished listings now return data instead of a 404
* New fields: construction year & listing reference

🛒 **Walmart Reviews API**

* Fixed sorting via URL
* Now supports syndicated reviews

🏠 **Zillow Property API**

* Contact info now included in responses

Stop wasting time parsing HTML! ZenRows Scraper APIs deliver clean, structured JSON instantly—no setup, no maintenance, no headaches.

🔑 **Why ZenRows Scraper APIs?**

✅ Structured JSON Responses – Get ready-to-use data, no HTML parsing needed.
✅ Effortless Scraping – No CAPTCHAs, no blockers—just seamless data access to popular sites like Amazon, Walmart, Google, Zillow, and more.
✅ Zero Setup & Maintenance – Works out of the box, no tuning required.

🎯 **Get Data, Not Problems**

Forget the hassle of scrapers breaking. With ZenRows Scraper APIs, you get reliable, structured data—so you can focus on insights, not maintenance.

🚀 Try the new Scraper API beta today!

# Explore ZenRows Academy

Source: https://docs.zenrows.com/first-steps/academy-tab

# Frequently Asked Questions

Source: https://docs.zenrows.com/first-steps/faq

No, monthly plans are reset each month and usage doesn't roll over.
ZenRows does not offer a browser extension; our products are designed to work via API requests, proxies, and automated browsers, not through browser extensions.

## Why Doesn't ZenRows Offer a Browser Extension?

Browser extensions have significant limitations when it comes to web scraping:

1. **Restricted Execution** – Extensions run in a browser's sandboxed environment, limiting their ability to bypass antibot protections.
2. **Scalability Issues** – A browser extension would require manual interaction, making it impractical for large-scale or automated scraping.
3. **Limited Customization** – Unlike our API and Scraping Browser, extensions lack the flexibility to integrate advanced scraping techniques, such as headless browsing, fingerprint evasion, and CAPTCHA handling.

## How to Scrape Without a Browser Extension

Instead of using an extension, we recommend using one of our API-based solutions, which are optimized for web scraping at scale:

### Universal Scraper API

* Designed for flexible, automated scraping with built-in antibot bypassing.
* Supports JavaScript rendering, Premium Proxies, and CAPTCHA handling.
* Ideal for scraping protected or dynamic pages.

### Scraper APIs

* Ready-to-use APIs that return structured data from specific websites.
* Handles antibot measures and complex scraping challenges automatically.

### Scraping Browser

* A headless browsing solution that allows full control over web automation.
* Ideal for cases where manual browser behavior replication is needed.

### Residential Proxies

* Provides rotating IPs to increase anonymity.
* Best for handling IP-based blocks, but requires custom browser automation for full scraping functionality.

If you're looking for automated and scalable web scraping, our API solutions are the best fit. Let us know if you need help choosing the right approach! 🚀

If you need to capture dynamic content loaded via AJAX requests, ZenRows offers different approaches depending on the product you're using. Some products provide built-in JSON responses, while others require custom configurations to extract network requests.

More and more websites load content dynamically, meaning data is fetched via XHR, AJAX, or Fetch requests instead of being included in the initial HTML. Besides waiting for the content to load, you might want to capture these network requests, similar to how they appear in the Network tab in DevTools.

## How Each ZenRows Product Handles XHR / AJAX / Fetch Requests

### Universal Scraper API

* The JSON Response feature (`json_response=true`) captures AJAX requests automatically.
* Returns a structured JSON object containing two fields:
  * HTML – The final rendered page source.
  * XHR – An array of network requests (XHR, AJAX, Fetch), including URL, body, headers, and more.
* This feature is exclusive to the Universal Scraper API and is ideal for analyzing background requests.

Learn more: [JSON Response](/universal-scraper-api/features/json-response)

### Scraper APIs

* The default response format is JSON, but it does not include network requests by default.
* Instead of capturing XHR calls, it extracts and structures the final page content into a JSON format.
* This means you'll get structured data rather than raw network request details.

### Scraping Browser

* Does not capture network requests automatically.
* You'll need to write custom JavaScript code to intercept and extract XHR/AJAX calls manually in Puppeteer or Playwright.

For a step-by-step guide on capturing network requests in Playwright, check out our comprehensive [Playwright guide](https://www.zenrows.com/blog/playwright-scraping#request-and-response-interception).

### Residential Proxies

* Acts as a proxy layer without modifying responses.
* To capture XHR/AJAX requests, you must configure custom request logging in your own setup (e.g., Puppeteer, Playwright, or Selenium).

By choosing the right ZenRows product and configuration, you can effectively capture network requests and analyze the data that websites load dynamically. Let us know if you need guidance on a specific use case! 🚀
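A sketch of the Universal Scraper API's JSON Response flow described above. The lowercase `html`/`xhr` keys and the per-request `url` field mirror the fields the feature documents, but treat the exact response shape, and the assumption that rendering must be enabled, as illustrative:

```python
# pip install requests
import requests

params = {
    'apikey': 'YOUR_ZENROWS_API_KEY',
    'url': 'https://www.example.com',  # placeholder target
    'js_render': 'true',       # render the page so background requests actually fire
    'json_response': 'true',   # return HTML plus captured XHR/AJAX/Fetch traffic
}
data = requests.get('https://api.zenrows.com/v1/', params=params).json()

print(data['html'][:200])   # final rendered page source
for call in data['xhr']:    # captured network requests
    print(call['url'])
```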
ZenRows is designed to bypass most modern antibot solutions out of the box. We continuously test and optimize our systems to ensure a smooth scraping experience. However, antibot defenses vary by website, and different ZenRows products serve different purposes. Below is an overview of how each product handles antibot measures and what to expect when using them.

## Universal Scraper API

The Universal Scraper API, when combined with Premium Proxies and JS Render, effectively handles most antibot measures. This setup mimics real user behavior, helping bypass bot detection mechanisms.

However, not all pages are protected equally. Many websites enforce stricter protections on internal APIs or login-restricted content. If you're targeting such endpoints, additional configuration might be needed.

If you're experiencing blocks despite using Premium Proxies and JS Render, refer to this guide: [Using Premium + JS Render and still blocked](/universal-scraper-api/faq#using-premium-js-render-and-still-blocked)

## Scraper APIs

Our Scraper APIs are designed for ease of use. Simply send a request to our API, and we handle all antibot measures in the background, delivering the structured content you need. This is the best option for users who want a hassle-free experience without worrying about configuration.

## Residential Proxies

Residential Proxies prioritize anonymity rather than antibot bypassing. They provide IP rotation and geographic targeting but do not include built-in antibot or CAPTCHA-solving capabilities. For heavily protected websites, additional techniques may be required.

## Scraping Browser

The Scraping Browser is highly effective against antibot and anticaptcha solutions, using the same advanced algorithms as the Universal Scraper API. However, if a website enforces a CAPTCHA challenge, we do not automatically bypass it. Solving CAPTCHAs currently requires implementing custom handling, such as integrating third-party CAPTCHA-solving services.

By choosing the right combination of ZenRows tools, you can optimize your web scraping strategy to handle even the most complex antibot defenses. If you need further assistance, feel free to reach out to our support team.

ZenRows supports a variety of **no-code platforms** to help you scrape data from websites without writing a single line of code. These integrations let you connect your scraping workflows with thousands of apps like **Google Sheets**, **Airtable**, **Notion**, **Amazon S3**, and more.

These no-code integrations are ideal for marketers, analysts, product managers, and anyone looking to automate data collection without needing technical skills.
## When to Use No-Code Integrations

Use ZenRows' no-code options when you:

* Want to scrape data into a spreadsheet without writing code
* Need to automate recurring data collection tasks
* Prefer visual workflow builders over API requests
* Are integrating web data into tools like CRMs, email platforms, or dashboards

**Best Practice:** Start with pre-built ZenRows templates in platforms like Zapier or Make to set up your workflow in minutes.

## Next Steps

Visit our [Integrations Page](/integrations/overview) to explore tutorials and real-world examples that walk you through setting up your first workflow.

Suppose you need to scrape data from a website and automatically process it using a third-party tool. We offer various ways to integrate ZenRows with external software and tools. Currently, you can [integrate a captcha solver](/universal-scraper-api/features/other#captcha-solver) or [a no-code tool like Zapier/Make/Clay](/first-steps/faq#does-zenrows-have-no-code-options). Additionally, you can build your own integrations using the ZenRows output, whether HTML or JSON. A good use case for this is the autoparse feature, which returns structured data from a page.

Yes! Our custom plans are available for high-volume cases. We cannot customize public plans, as they are standardized for all our clients.

Optimizing your requests can significantly improve performance and reduce response times. Below are general best practices, followed by specific recommendations for each ZenRows product.

## Factors Affecting Request Speed

1. **Concurrency**: Sending multiple requests simultaneously can increase throughput
2. **Resource Usage**: Features like JavaScript rendering or waiting for specific conditions can impact speed
3. **Response Size**: Larger pages and pages with dynamic content naturally take longer to load. Consider targeting only the necessary data or using output filters to minimize the payload.
4. **Success Rate**: The rate of successful requests. If the success rate is low, you may need to increase the number of requests or the concurrency.

While Residential Proxies have no concurrency restrictions, other products have plan-specific limits. Monitor your performance when adjusting these settings.

### Monitoring Concurrency Usage

Each response includes headers that help you manage and optimize your concurrency:

```
Concurrency-Limit: 200
Concurrency-Remaining: 199
```

These headers help you:

* Monitor how many concurrent requests your plan allows
* Track how many slots are currently available
* Adjust request volume dynamically to avoid hitting limits that may delay or throttle requests

The Concurrency-Remaining header reflects the real-time state of your concurrency usage and is the primary value our system uses to enforce limits. If it reaches zero and more requests are sent, you may receive a `429 Too Many Requests` error, and your IP can be temporarily blocked for 5 minutes.

If you receive a `BLK0001` error (IP Address Blocked), it means your IP has exceeded the allowed error rate. The block will last for 5 minutes and will impact your ability to send new requests during that time, affecting your overall scraping speed. For more details, see our [API Error Codes documentation](/api-error-codes#BLK0001).
Use these headers to adjust your request flow in real time, scaling up when possible and backing off before hitting limits.

## Product-Specific Recommendations

### Universal Scraper API

1. **Optimize JavaScript Rendering**:
   * Disable `js_render=true` for static content to improve speed
   * Only enable it when necessary for dynamic content or accessing protected content
   * Consider the impact on response times
2. **Minimize Wait Times**:
   * Use `wait` and `wait_for` only when required
   * Set the minimum necessary wait duration
   * Longer waits mean slower requests
3. **Use Premium Proxies**:
   * Enable `premium_proxy=true` for faster, more consistent responses
   * Particularly useful for sites with anti-bot measures
   * Can reduce retries and improve overall speed

### Scraper APIs

1. **Concurrency Management**:
   * Start with moderate concurrency and monitor response times
   * Increase gradually while maintaining acceptable speed
   * Implement backoff strategies when requests slow down
2. **Parameter Optimization**:
   * Remove unnecessary parameters that might slow down requests
   * Only use parameters essential for your use case
   * Monitor the impact of each parameter on response times

### Residential Proxies

1. **Request Rate Optimization**:
   * Monitor response times at different request rates
   * Adjust based on target site performance
   * Implement backoff when responses slow down

### Scraping Browser

1. **Resource Management**:
   * Disable unnecessary JavaScript execution
   * Block non-essential resources (images, media, etc.)
   * Optimize browser settings for faster loading
2. **CAPTCHA Handling**:
   * Implement manual CAPTCHA solving to avoid automated delays
   * Consider the impact on overall request speed

## Speed Optimization Best Practices

1. **Start with a Baseline**: Begin with standard settings and measure response times
2. **Monitor Performance**: Use response headers and timing metrics to track speed
3. **Gradual Optimization**: Make incremental changes and measure their impact
4. **Smart Retries**: Use exponential backoff for failed requests to maintain speed
5. **Target-Specific Tuning**: Adjust settings based on the specific website's performance

While these optimizations can improve request speed, some features (like JavaScript rendering) might be necessary for your specific use case. If you need help optimizing for speed while maintaining functionality, our support team is here to assist.
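Putting the header guidance above into practice, a rough sketch that backs off when free concurrency slots run low (the 10% threshold and 5-second pause are arbitrary choices, not recommendations from the API):

```python
# pip install requests
import time
import requests

params = {'apikey': 'YOUR_ZENROWS_API_KEY', 'url': 'https://www.example.com'}  # placeholder target
response = requests.get('https://api.zenrows.com/v1/', params=params, timeout=180)

# Read the concurrency headers and slow down before hitting the limit
limit = int(response.headers.get('Concurrency-Limit', 0))
remaining = int(response.headers.get('Concurrency-Remaining', 0))
if limit and remaining < limit * 0.1:  # fewer than 10% of slots free
    time.sleep(5)  # pause before dispatching the next batch
```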
API usage counts are managed based on your subscription plan:

## Monthly Plans

* Usage is tracked and reset each month. Any remaining usage does not roll over to the next month.

## 3-Month and 6-Month Plans

* Usage is tracked and reset every 3 or 6 months, depending on your subscription. Any remaining usage does not roll over at the end of each period.

## Yearly Plans

* Usage is tracked and reset annually. Any remaining usage does not roll over to the next year.

ZenRows is designed to bypass most modern antibot solutions out of the box. We continuously test our service to ensure optimal performance. Handling CAPTCHAs depends on the type of CAPTCHA and the ZenRows product you're using.

## Handling CAPTCHAs on Forms

CAPTCHAs on forms are not solved automatically. If you need to submit forms that trigger a CAPTCHA, we offer an integration with a CAPTCHA solver that might work for your use case. Learn more about it here: [Using JavaScript Instructions to Solve CAPTCHAs](/universal-scraper-api/features/js-instructions#captcha-solving)

## CAPTCHA Handling by Product

Each ZenRows product has its own approach to handling CAPTCHAs, depending on the level of antibot protection in place. While some products automatically bypass CAPTCHAs in most cases, others may require additional configuration or external solvers. Below, we outline how each product deals with CAPTCHAs and what you can do to improve your success rate.

### Universal Scraper API

* Uses Premium Proxies and JS Render to bypass most antibot measures.
* If a CAPTCHA appears, you can use JavaScript Instructions to interact with the page and solve it manually or through an external CAPTCHA-solving service.

### Scraper APIs

* Fully managed solution—our API automatically handles antibot protections, including CAPTCHA challenges.

### Residential Proxies

* Residential Proxies provide anonymity but do not bypass CAPTCHAs automatically.
* If CAPTCHA protection is strict, you'll need custom handling or an external solver.

### Scraping Browser

* Uses the same bypassing techniques as the Universal Scraper API.
* Can handle most antibot solutions, but does not solve CAPTCHAs by default.
* If a CAPTCHA is encountered, it requires custom handling, such as integrating a CAPTCHA solver.

By choosing the right ZenRows product and implementing the appropriate CAPTCHA-handling techniques, you can minimize interruptions and improve your scraping success rate. If you need assistance with a specific case, feel free to contact our support team.

Concurrency refers to the number of ongoing requests at any given time. Whether through threads, asynchronous calls, or multiple processes, your code can call the API in parallel and wait for results while other requests are still running. You can use concurrency with any ZenRows plan; check out [pricing](https://www.zenrows.com/pricing) for more details.

For implementation details in Python and JavaScript, check our [how-to guide on concurrency](/universal-scraper-api/features/concurrency#using-concurrency).

## Important Behavior to Understand

### Canceled Requests Still Count Against Your Concurrency

One crucial thing to understand is that **canceling requests on the client side does NOT immediately free up concurrency slots**. When you cancel a request:

1. Your client stops waiting for a response
2. But the ZenRows server continues processing the request
3. The concurrency slot remains occupied until the server-side processing completes
4. Only then is the concurrency slot released

This can lead to unexpected `429` errors if you're canceling requests and immediately trying to make new ones, as your concurrency limit might still be reached.
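Since canceled requests keep occupying slots, a generous client timeout is safer than canceling and retrying; a one-line sketch of the recommended setting in Python:

```python
# pip install requests
import requests

params = {'apikey': 'YOUR_ZENROWS_API_KEY', 'url': 'https://www.example.com'}  # placeholder target
# 180 seconds honors the recommended 3-minute minimum and avoids orphaned server-side work
response = requests.get('https://api.zenrows.com/v1/', params=params, timeout=180)
```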
### Security System for Failing Requests

ZenRows implements a security system that may temporarily ban your API key if you send too many failing requests in a short period. Types of failing requests that can trigger this include:

* `429 Too Many Requests` errors due to exceeding concurrency limits
* Invalid API key errors
* `400 Bad Request` errors due to invalid parameters
* Other repeated API errors

If your API key gets temporarily banned, you'll receive an error from the API. If the requests continue, the IP address might get banned for a few minutes, and the requests will not even connect with the server.

## Monitoring Your Concurrency Usage

You can monitor your concurrency usage through response headers:

```
Concurrency-Limit: 200
Concurrency-Remaining: 199
X-Request-Cost: 0.001
X-Request-Id: 67fa4e35647515d8ad61bb3ee041e1bb
Zr-Final-Url: https://httpbin.io/anything
```

These headers provide valuable information about your request:

* **Concurrency-Limit**: Your maximum concurrent request limit based on your plan
* **Concurrency-Remaining**: Number of additional concurrent requests you can make
* **X-Request-Cost**: The cost of this request (varies based on enabled features)
* **X-Request-Id**: Unique identifier for this request - essential when reporting issues to support
* **Zr-Final-Url**: The final URL after any redirects that occurred during the request

## Related questions

### How many concurrent requests are included in my plan?

The concurrency limit varies by subscription plan:

* **Trial plan:** 5 concurrent requests
* **Developer plan:** 20 concurrent requests
* **Startup plan:** 50 concurrent requests
* **Business plan:** 100 concurrent requests
* **Business 500:** 150 concurrent requests
* **Business 1K:** 200 concurrent requests
* **Business 2K:** 300 concurrent requests
* **Business 3K:** 400 concurrent requests

Enterprise plans can include custom concurrency limits to fit your needs. Contact us for tailor-made Enterprise solutions.

### If I get a "429 Too Many Requests" error, do I lose that request or is it queued?

You'll receive an error, and that request won't be queued or retried automatically. You'll need to manage retries on your end, ensuring you don't exceed your concurrency limit.

### Can I increase my concurrency limit?

Absolutely! We offer [various plans](https://www.zenrows.com/pricing) with different concurrency limits to suit your needs. If you find yourself frequently hitting the concurrency limit, consider upgrading.

### How can I monitor my concurrency usage?

When using the Universal Scraper API, each response includes these helpful headers:

* `Concurrency-Limit`: Shows your maximum concurrent request limit
* `Concurrency-Remaining`: Shows how many free concurrency slots you have available

### I've been blocked by repeatedly exceeding my concurrency limit. Why?

Whenever you exceed your plan's concurrency limit, you'll start receiving "429 Too Many Requests" errors. If you keep sending more and more requests exceeding your plan's concurrency limit in a short period of time, the service may temporarily block your IP address to prevent API misuse. The IP address ban will last only a few minutes, but repeatedly being blocked might end in a long-lasting block.

Check the [concurrency optimization section](/universal-scraper-api/features/concurrency#using-concurrency-headers-for-optimization) for more information on how to limit concurrent requests to prevent being blocked.
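One simple way to stay under your plan's limit is a fixed-size worker pool; a sketch assuming the Developer plan's 20-request limit, deliberately capped below it for headroom (URLs are placeholders):

```python
# pip install requests
import requests
from concurrent.futures import ThreadPoolExecutor

APIKEY = 'YOUR_ZENROWS_API_KEY'
urls = ['https://www.example.com/page1', 'https://www.example.com/page2']  # placeholders

def scrape(url):
    params = {'apikey': APIKEY, 'url': url}
    return requests.get('https://api.zenrows.com/v1/', params=params, timeout=180)

# A pool size below the plan's concurrency limit leaves headroom for retries
with ThreadPoolExecutor(max_workers=15) as pool:
    for response in pool.map(scrape, urls):
        print(response.status_code, response.headers.get('Concurrency-Remaining'))
```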
### Troubleshooting with Support

When contacting ZenRows support for any issues with your requests, always include:

1. The **X-Request-Id** from the response headers
2. The exact error message you received
3. The timestamp of when the error occurred
4. Details about the parameters you used in the request

This information, especially the Request ID, allows our support team to quickly locate your specific request in our logs and provide more effective assistance.

Many websites tailor their content based on the visitor's location. For example, Amazon displays different products and prices on its UK (`.co.uk`) and French (`.fr`) sites. If you're scraping data from these sites, using a regional IP ensures that you receive the correct localized content. Also, some websites restrict access to their content based on geographic location.

To avoid discrepancies caused by regional variations, such as different products being displayed on a retailer's website, you can send a specific country code with your request. This ensures that your request is localized to the desired country, allowing you to obtain consistent and replicable results.

ZenRows supports proxies from numerous countries around the world. You can use any country's ISO code to configure your proxies. Here is the comprehensive list of premium proxy countries supported by ZenRows:

```json theme={null}
af => Afghanistan
al => Albania
dz => Algeria
ad => Andorra
ao => Angola
ai => Anguilla
ag => Antigua and Barbuda
ar => Argentina
am => Armenia
au => Australia
at => Austria
az => Azerbaijan
bs => Bahamas
bh => Bahrain
bd => Bangladesh
bb => Barbados
by => Belarus
be => Belgium
bz => Belize
bj => Benin
bm => Bermuda
bt => Bhutan
bo => Bolivia, Plurinational State of
ba => Bosnia and Herzegovina
bw => Botswana
br => Brazil
bn => Brunei Darussalam
bg => Bulgaria
bf => Burkina Faso
bi => Burundi
cv => Cabo Verde
kh => Cambodia
cm => Cameroon
ca => Canada
ky => Cayman Islands
cf => Central African Republic
td => Chad
cl => Chile
cn => China
co => Colombia
km => Comoros
cg => Congo
cd => Congo, The Democratic Republic of the
cr => Costa Rica
ci => Cote D'ivoire
hr => Croatia
cu => Cuba
cy => Cyprus
cz => Czech Republic
dk => Denmark
dj => Djibouti
do => Dominican Republic
ec => Ecuador
eg => Egypt
sv => El Salvador
gq => Equatorial Guinea
er => Eritrea
ee => Estonia
et => Ethiopia
fo => Faroe Islands
fj => Fiji
fi => Finland
fr => France
gf => French Guiana
pf => French Polynesia
ga => Gabon
gm => Gambia
ge => Georgia
de => Germany
gh => Ghana
gi => Gibraltar
gr => Greece
gl => Greenland
gp => Guadeloupe
gu => Guam
gt => Guatemala
gn => Guinea
gw => Guinea-Bissau
gy => Guyana
ht => Haiti
hn => Honduras
hk => Hong Kong
hu => Hungary
is => Iceland
in => India
id => Indonesia
ir => Iran, Islamic Republic of
iq => Iraq
ie => Ireland
im => Isle of Man
il => Israel
it => Italy
jm => Jamaica
jp => Japan
jo => Jordan
kz => Kazakhstan
ke => Kenya
kr => Korea, Republic of
kw => Kuwait
kg => Kyrgyzstan
la => Lao People's Democratic Republic
lv => Latvia
lb => Lebanon
ls => Lesotho
lr => Liberia
ly => Libya
lt => Lithuania
lu => Luxembourg
mo => Macao
mk => Macedonia, The Former Yugoslav Republic of
mg => Madagascar
mw => Malawi
my => Malaysia
mv => Maldives
ml => Mali
mt => Malta
mq => Martinique
mr => Mauritania
mu => Mauritius
mx => Mexico
md => Moldova, Republic of
mn => Mongolia
me => Montenegro
ma => Morocco
mz => Mozambique
mm => Myanmar
na => Namibia
np => Nepal
nl => Netherlands
nc => New Caledonia
nz => New Zealand
ni => Nicaragua
ne => Niger
ng => Nigeria
no => Norway
om => Oman
pk => Pakistan
ps => Palestine, State of
pa => Panama
pg => Papua New Guinea
py => Paraguay
pe => Peru
ph => Philippines
pl => Poland
pt => Portugal
pr => Puerto Rico
qa => Qatar
re => Reunion
ro => Romania
ru => Russia
rw => Rwanda
lc => Saint Lucia
mf => Saint Martin (French Part)
ws => Samoa
sa => Saudi Arabia
sn => Senegal
rs => Serbia
sc => Seychelles
sl => Sierra Leone
sg => Singapore
sx => Sint Maarten (Dutch Part)
sk => Slovakia
si => Slovenia
sb => Solomon Islands
so => Somalia
za => South Africa
ss => South Sudan
es => Spain
lk => Sri Lanka
sd => Sudan
sr => Suriname
sz => Swaziland
se => Sweden
ch => Switzerland
sy => Syrian Arab Republic
tw => Taiwan, Province of China
tj => Tajikistan
tz => Tanzania, United Republic of
th => Thailand
tl => Timor-Leste
tg => Togo
to => Tonga
tt => Trinidad and Tobago
tn => Tunisia
tr => Turkey
tm => Turkmenistan
ug => Uganda
ua => Ukraine
ae => United Arab Emirates
gb => United Kingdom
us => United States
uy => Uruguay
uz => Uzbekistan
vu => Vanuatu
ve => Venezuela, Bolivarian Republic of
vn => Viet Nam
ye => Yemen
zm => Zambia
zw => Zimbabwe
```

## How to use it with each product:

### Universal Scraper API

Incorporate the selected ISO code into your scraping script to route your requests through the chosen proxy.

```python scraper.py theme={null}
# pip install requests
import requests

# Example of setting a proxy for Canada
params = {
    'url': 'https://httpbin.io/anything',  # target URL
    'apikey': 'YOUR_ZENROWS_API_KEY',
    'premium_proxy': 'true',
    'proxy_country': 'ca',
}
response = requests.get('https://api.zenrows.com/v1/', params=params)
print(response.text)
```

### Scraper APIs

If the API offers a country option, you can add it similarly to the Universal Scraper API.

```python theme={null}
# pip install requests
import requests

query_id_url = "example"
api_endpoint = f"https://.api.zenrows.com/v1/targets///{query_id_url}"

params = {
    "apikey": "YOUR_ZENROWS_API_KEY",
    "country": "us"  # Optional: Target specific country
}

response = requests.get(api_endpoint, params=params)
print(response.text)
```

### Residential Proxies

ZenRows supports IPs from a wide variety of countries, allowing you to access geo-restricted data with ease. You can specify a country by using `country` followed by the country code in your proxy URL. Example for Spain:

```bash theme={null}
http://:_country-es@superproxy.zenrows.com:1337
```

### Scraping Browser

There are two ways to set a country geolocation on the Scraping Browser, depending on whether you're using the SDK.

Without the SDK, select a specific country by adding the `proxy_country` parameter to the WebSocket URL:

```bash theme={null}
wss://browser.zenrows.com?apikey=YOUR_ZENROWS_API_KEY&proxy_country=es
```

In SDK mode, specify the country when generating the connection URL:

```js theme={null}
const connectionURL = scrapingBrowser.getConnectURL({
  proxy: { location: ProxyCountry.ES },
});
```

By using the right proxy, you can ensure more reliable and geographically relevant data scraping while maintaining compliance with website policies. For further assistance or more detailed configuration, refer to the ZenRows documentation or contact our support team. Happy scraping!

Not at all. We only charge for successful requests :) Note that `404 Not Found` and `410 Gone` responses are charged.

You can extract data from as many websites as you want. Throw us 1M requests or 50M; we can handle it perfectly!
Not at all. Our platform and infrastructure are cloud-based, making our language-agnostic API easy and seamless to use.

# Quick Setup and First Request

Source: https://docs.zenrows.com/first-steps/getting-started-guide

Start scraping any website in under 5 minutes. ZenRows handles anti-bot measures, JavaScript rendering, and proxy management automatically, so you can focus on collecting data instead of managing infrastructure.

## Quick Setup

Visit the [Registration Page](https://app.zenrows.com/register) and sign up using Google, GitHub, or your email address. Account creation is free and provides immediate access to your dashboard.

*(Screenshot: ZenRows signup page)*

Select options that match your use case during the setup process. This helps ZenRows recommend the most suitable features for your scraping needs.

Get your API key from the dashboard and start your first request immediately. This key authenticates all your requests and tracks usage against your plan limits. Keep this key secure and never share it publicly.

## Your First Request

Test your ZenRows setup with this simple request:

```python Python theme={null}
# pip install requests
import requests

url = 'https://httpbin.io/anything'
apikey = 'YOUR_ZENROWS_API_KEY'
params = {
    'url': url,
    'apikey': apikey,
}
response = requests.get('https://api.zenrows.com/v1/', params=params)
print(response.text)
```

```javascript Node.js theme={null}
// npm install axios
const axios = require('axios');

const url = 'https://httpbin.io/anything';
const apikey = 'YOUR_ZENROWS_API_KEY';

axios({
    url: 'https://api.zenrows.com/v1/',
    method: 'GET',
    params: {
        'url': url,
        'apikey': apikey,
    },
})
    .then(response => console.log(response.data))
    .catch(error => console.log(error));
```

```java Java theme={null}
import org.apache.hc.client5.http.fluent.Request;

public class APIRequest {
    public static void main(final String... args) throws Exception {
        String apiUrl = "https://api.zenrows.com/v1/?apikey=YOUR_ZENROWS_API_KEY&url=https%3A%2F%2Fhttpbin.io%2Fanything";
        String response = Request.get(apiUrl)
                .execute().returnContent().asString();
        System.out.println(response);
    }
}
```

```php PHP theme={null}
<?php
// Requires the cURL extension; same GET request as the other examples
$apiUrl = "https://api.zenrows.com/v1/?apikey=YOUR_ZENROWS_API_KEY&url=https%3A%2F%2Fhttpbin.io%2Fanything";
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $apiUrl);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
curl_close($ch);
echo $response;
```

```go Go theme={null}
package main

import (
	"io"
	"log"
	"net/http"
)

func main() {
	client := &http.Client{}
	req, err := http.NewRequest("GET", "https://api.zenrows.com/v1/?apikey=YOUR_ZENROWS_API_KEY&url=https%3A%2F%2Fhttpbin.io%2Fanything", nil)
	if err != nil {
		log.Fatalln(err)
	}
	resp, err := client.Do(req)
	if err != nil {
		log.Fatalln(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatalln(err)
	}
	log.Println(string(body))
}
```

```ruby Ruby theme={null}
# gem install faraday
require 'faraday'

url = URI.parse('https://api.zenrows.com/v1/?apikey=YOUR_ZENROWS_API_KEY&url=https%3A%2F%2Fhttpbin.io%2Fanything')
conn = Faraday.new()
conn.options.timeout = 180
res = conn.get(url, nil, nil)
print(res.body)
```

```bash cURL theme={null}
curl "https://api.zenrows.com/v1/?apikey=YOUR_ZENROWS_API_KEY&url=https%3A%2F%2Fhttpbin.io%2Fanything"
```

Replace `YOUR_ZENROWS_API_KEY` with your actual API key from the dashboard. A successful response confirms your setup is working correctly.
## Next Steps

### Choose your ZenRows solution

Select the approach that best fits your scraping requirements and technical expertise:

* [Universal Scraper API](https://docs.zenrows.com/universal-scraper-api/api-reference) - **Best for**: General web scraping with automatic optimization
* [Scraping Browser](/scraping-browser/introduction) - **Best for**: Complex automation requiring complete browser control
* [Residential Proxies](/residential-proxies/introduction) - **Best for**: High-quality proxies with existing infrastructure
* [Scraper APIs](/scraper-apis/introduction) - **Best for**: Structured data from specific popular websites
* [Integrations](/integrations/overview) - **Best for**: No-code solutions and workflow automation

Check our [Product Documentation](/first-steps/our-products#best-product-for-your-use-case) to compare features and find the best fit for your use case, or review our [Pricing Guide](/first-steps/pricing#choosing-the-right-plan-for-your-needs) to select the right plan.

### Learn web scraping fundamentals

Build your knowledge with comprehensive guides and best practices:

* [Web Scraping Best Practices](https://www.zenrows.com/blog/web-scraping-best-practices) - Essential techniques for reliable data collection, rate limiting strategies, and ethical scraping guidelines.
* [Complete Python Web Scraping Tutorial](https://www.zenrows.com/blog/web-scraping-python) - Step-by-step implementation guide covering BeautifulSoup, requests, and data processing.
* [Web Scraping Use Cases Guide](https://www.zenrows.com/blog/7-use-cases-for-website-scraping) - Real-world applications including price monitoring, lead generation, market research, and competitive analysis.
* [Handling JavaScript-Heavy Websites](https://www.zenrows.com/blog/scraping-javascript-rendered-web-pages) - Advanced techniques for single-page applications and dynamic content.
* [E-commerce Data Collection](https://www.zenrows.com/blog/job-board-scraping) - Strategies for product information, pricing, and inventory tracking.

### Scale your operations

As your scraping needs grow, explore advanced features and optimization strategies:

* **Concurrent requests**: Optimize performance by running multiple requests simultaneously
* **Data storage**: Choose appropriate databases and storage solutions for your collected data
* **Automation**: Set up scheduled scraping jobs for regular data collection
* **Data processing**: Implement pipelines for cleaning, validating, and analyzing scraped data
* **Monitoring**: Track success rates, identify patterns, and optimize your scraping strategy

### Get help when you need it

Access support resources and community knowledge:

* **FAQs** - Find answers to common questions about API limits, billing, and technical issues
* **Troubleshooting Guides** - Resolve specific problems with blocked requests, parsing errors, and configuration issues
* **Support** - Contact our team for personalized assistance with complex scraping challenges
* **Community** - Join our Discord server to connect with other developers and share scraping strategies
* **Documentation** - Explore detailed guides for advanced features and specific use cases

## Why Choose ZenRows?

ZenRows solves the most challenging aspects of web scraping through automated solutions:

**Anti-bot bypass & proxy rotation**: Detects and bypasses Cloudflare, reCAPTCHA, Akamai, Datadome, and other blocking mechanisms. Automatically switches between residential IPs across 190+ countries while rotating browser fingerprints to maintain access.
**JavaScript rendering**: Uses real browser engines to handle dynamic content, infinite scroll, and lazy-loaded elements that traditional HTTP requests cannot capture.

**Adaptive infrastructure**: Includes automatic retry mechanisms when requests fail, provides real-time analytics for monitoring performance, and maintains 99.9% uptime.

**Auto-parsing features**: Adapts automatically to structural changes on supported websites, reducing maintenance overhead.

You can start scraping immediately without technical expertise in proxy management or anti-detection techniques.

## Common Scraping Challenges

If you're new to web scraping, understanding these common challenges helps explain why ZenRows provides value and how it solves problems you might encounter with traditional scraping approaches:

*(Infographic: common scraping issues)*

### Anti-Bot Measures

Websites deploy various blocking mechanisms to prevent scraping, such as:

* Web application firewalls (e.g., **Cloudflare**, **Datadome**, **Akamai**).
* CAPTCHA services that require solving puzzles to verify human activity.
* Rate limiting that restricts the number of requests from a single IP address within a given time.

*(Screenshot: sample Cloudflare blocked page)*

Overcoming these measures often requires advanced techniques like CAPTCHA-solving services, IP rotation, and adapting to specific firewall rules.

### JavaScript Rendering

Many modern websites dynamically render content using JavaScript, which prevents data from loading immediately when the page is opened. For example:

* Dynamic content that appears only after JavaScript execution completes.
* Infinite-scrolling websites that display only initial content, loading more as users scroll.
* "Load more" buttons that reveal additional content through user interaction.

Traditional HTTP requests cannot capture this dynamically generated content.

*(Screenshot: infinite scrolling demo page)*

### HTML Structural Changes

Websites frequently update page designs, layouts, and HTML structures to introduce new features or improve user experience. These changes often include:

* Modifications to **CSS selectors** or **HTML attribute names**.
* Adjustments to the overall structure of the page.

Such updates can render previously working scrapers ineffective, requiring constant monitoring and updates to your scraping logic.

### Legal Considerations

Scraping activities must comply with legal and ethical guidelines:

* Website **terms of service** may prohibit scraping certain types of data, especially sensitive or restricted information.
* **Data behind login walls** or private details is often protected by law and requires explicit authorization to access.
* Adhere to the rules specified in a site's `robots.txt` file, which defines areas where bots are not allowed.

## Frequently Asked Questions (FAQ)

No technical expertise in proxy management or anti-detection techniques is required. ZenRows handles all the complex infrastructure automatically. If you can make HTTP requests, you can use ZenRows.

Yes, every new account includes a free trial period to test our services. You can make requests immediately after signing up to evaluate ZenRows with your target websites.

Enable JavaScript rendering (`js_render=true`) when scraping websites that load content dynamically, have infinite scroll, use "Load More" buttons, or are single-page applications (SPAs) built with React, Vue, or Angular.

Your API key is unique to your account and should be kept secure. Never share it publicly or commit it to version control. Use environment variables in your code and regenerate your key if you suspect it's been compromised.
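For instance, a minimal sketch of the environment-variable approach (the `ZENROWS_API_KEY` variable name is just a convention, not something the API requires):

```python
# pip install requests
import os
import requests

# Set in your shell or a .env loader, never in source control
apikey = os.environ['ZENROWS_API_KEY']
params = {'apikey': apikey, 'url': 'https://httpbin.io/anything'}
print(requests.get('https://api.zenrows.com/v1/', params=params).text)
```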
Each ZenRows product has its own endpoint and documentation. You can use multiple products with the same API key. Simply change the endpoint URL and parameters based on which product you want to use for each request.

ZenRows works with any programming language that can make HTTP requests. We provide code examples for Python, JavaScript/Node.js, Java, PHP, Go, Ruby, and cURL. The API is language-agnostic and follows standard REST principles.

Log into your ZenRows dashboard to view real-time usage statistics, remaining credits, success rates, and request history. You can also set up usage alerts to notify you when approaching your plan limits.

# Managing Your Notifications and Usage Alerts

Source: https://docs.zenrows.com/first-steps/manage-notifications

Stay in control of your account and data usage with our customizable notification system. Notifications help you track your subscription, monitor API usage, and avoid service disruptions without needing to log in daily. This guide outlines the types of notifications available, their default statuses, and how to manage them.

[Manage your notifications](https://app.zenrows.com/account/notifications).

You can add multiple email recipients in the notification settings by separating them with commas. If no email is specified, notifications will be sent to the account's primary email address by default.

## Billing

### Invoice Paid

**Default: `Enabled`**

Receive a confirmation email with a receipt each time a payment is successfully processed. This is ideal for keeping your accounting records up to date and sharing invoices with your finance team.

## Alerts

### Subscription Utilization Threshold

**Default: `Enabled`**

Set a custom percentage threshold (e.g., 80%) to receive a warning when your usage approaches the plan's limit. This is useful for proactively managing high-volume workflows and preventing unexpected overages. Click **Manage** to configure your preferred threshold level.

### Subscription Limit Reached

**Default: `Enabled`**

Get notified the moment your monthly quota is fully consumed. This allows you to take immediate action—such as upgrading your plan, purchasing extra credits (Top-Ups), or optimizing your requests to maintain uninterrupted service.

## Activity

### Daily Usage Summary

**Default: `Disabled`**

Receive a quick daily snapshot of your API activity, showing the number of requests made and data consumed by product. This is ideal for users who want to keep a close eye on consumption or need to report usage daily.

### Weekly Usage Summary

**Default: `Enabled`**

Receive a consolidated report summarizing your usage for the past 7 days. This is great for identifying trends, evaluating API needs, and planning for scale. Recommended for teams and managers who track consumption over time.

### Universal Scraper API Daily Consumption Threshold

**Default: `Enabled`**

Stay informed when your daily Universal Scraper API usage exceeds a preset threshold. This is useful for preventing excessive daily usage or monitoring anomalies. Thresholds can be customized by clicking **Manage** in your notification settings.

## Managing Your Notification Settings

You can manage all notifications from the [Notifications Settings page](https://app.zenrows.com/account/notifications). To unsubscribe or pause alerts:

1. Visit your [notification settings](https://app.zenrows.com/account/notifications).
2. Locate the notification type you want to adjust.
3. Use the toggle switch to enable or disable it.
4. If applicable, click **Manage** to configure thresholds or add recipients.
3. Use the toggle switch to enable or disable it.
4. If applicable, click **Manage** to configure thresholds or add recipients.

These settings apply instantly, giving you real-time control over how and when you're informed. We recommend keeping critical alerts such as **Subscription Limit Reached** and **Invoice Paid** enabled at all times to avoid missed billing or service interruptions.

# ZenRows' List of Products
Source: https://docs.zenrows.com/first-steps/our-products

## Universal Scraper API

ZenRows' Universal Scraper API is a powerful tool for easily extracting data from websites. It handles everything from dynamic content and JavaScript rendering to IP rotation and fingerprint management, offering a hassle-free solution for web scraping. Its extensive customization options, residential proxy support, and Cloudflare bypass capabilities make it perfect for handling complex scraping tasks efficiently.

**Key Benefits:**

* Handles dynamic content and JavaScript rendering
* Extensive customization options
* Anti-bot bypass capabilities

## Scraping Browser

The Scraping Browser integrates seamlessly with Puppeteer and Playwright, making it the ideal solution for users already working with those tools. It leverages ZenRows' residential proxy network and browser simulation to scrape dynamic websites, handle user interactions, and avoid IP blocks — **all with just one line of code**.

**Key Benefits:**

* Seamless integration with Puppeteer and Playwright
* Leverages residential proxy network
* Handles user interactions and avoids IP blocks

## Residential Proxies

ZenRows' Residential Proxies provide access to a global network of over 55 million IPs across 190+ countries, ensuring reliable and anonymous connections. With features like IP auto-rotation and geo-targeting, they are perfect for scraping geo-restricted content and maintaining high performance while staying undetected.

**Key Benefits:**

* Access to a global network of residential IPs
* IP auto-rotation and geo-targeting
* Reliable and anonymous connections

## Scraper APIs

Scraper APIs designed by ZenRows are built to extract structured data from industries like eCommerce, Search Engine Results Pages (SERPs), and Real Estate, with more to come. Each API is optimized for its target industry, handling JavaScript-heavy pages, pagination, complex navigation, and anti-bot protections. Whether you're tracking prices, monitoring search rankings, or analyzing property listings, ZenRows ensures seamless data collection with minimal effort.

**Key Benefits:**

* Industry-specific optimizations
* Handles complex navigation and anti-bot protections
* Seamless data collection with minimal effort

## Best Product for Your Use Case

### Compare Based on Technical Needs

Understanding your technical requirements helps narrow down the right product quickly:

* **Facing anti-bot restrictions?**
The **Universal Scraper API** handles all bypass mechanisms automatically for sites with defenses like Cloudflare, DataDome, or similar anti-bot protection. It's your best choice for heavily protected sites.
* **Need complex browser interactions?**
Choose the **Scraping Browser** if your project requires extensive actions like clicking buttons, scrolling, or form filling. It integrates with your existing Puppeteer or Playwright code, adding proxy management and scalability. Consider using custom code to solve CAPTCHAs if the page forces a challenge, or switch to the Universal Scraper API. However, remember that the Universal Scraper API has limitations when executing complex [JavaScript instructions](/universal-scraper-api/features/js-instructions).
* **Want plug-and-play solutions?**
Our **Scraper APIs** provide industry-specific data extraction with minimal configuration. Perfect for eCommerce, SERPs, or Real Estate data collection.
* **Just need reliable IPs?**
**Residential Proxies** give you access to over 55 million IPs worldwide with geo-targeting capabilities when you already have your own scraping setup but need undetectable proxies.

### Match the Product to Your Workflow

Understanding when to choose each product will help you make the right decision for your specific use case.

#### Universal Scraper API

Best for general-purpose scraping. It handles everything, including JavaScript rendering, anti-bot bypass, proxy rotation, fingerprinting, and more, making it perfect when you need a comprehensive scraping solution without building the infrastructure yourself.

#### Scraper APIs

Ideal for vertical-specific needs with structured output. Whether you're extracting data from eCommerce platforms, monitoring search engine results, or analyzing real estate listings, these APIs provide structured results with minimal setup.

#### Scraping Browser

Ideal if you already use Playwright or Puppeteer and need to add proxy management and anti-bot bypass capabilities. It provides fine-grained control over browser interactions such as clicking buttons, filling forms, scrolling, and more. The key advantage is maintaining your existing scraping code while leveraging ZenRows' infrastructure for scalability.

#### Residential Proxies

Choose this when you have custom scraping logic but need reliable, geo-targeted IPs.

### Evaluate Cost and Scale

When selecting the right product, evaluating cost and scalability is critical to ensure your solution meets your budget and growth requirements. Understand your usage patterns by estimating your monthly data needs: consider how many requests or pages you need to scrape and the complexity of those requests. For instance, scraping heavily protected sites or performing complex browser interactions may require more resources, impacting costs.

You can find detailed information about pricing for each product in our [Pricing Documentation](/first-steps/pricing).

## Frequently Asked Questions (FAQ)

**Universal Scraper API**: A complete web scraping solution that handles dynamic content, proxies, anti-bot measures, and JavaScript rendering automatically.\
**Scraping Browser**: Ideal for developers using Puppeteer or Playwright, it adds proxy integration, browser simulation, and anti-bot bypass with minimal setup.\
**Residential Proxies**: A standalone proxy service offering over 55 million real IPs with geo-targeting and rotation for custom-built scraping solutions.\
**Scraper APIs**: Industry-specific APIs optimized for structured data extraction from verticals like eCommerce, SERPs, and Real Estate, requiring minimal configuration and no coding.

You can interact with the page using [JavaScript instructions](/universal-scraper-api/features/js-instructions) in the **Universal Scraper API**. These instructions are designed for simple interactions, such as clicking buttons, filling forms, and basic navigation. However, they have limitations and are not suited for complex or heavy interactions. For more advanced interactions, such as deep website manipulation, handling dynamic user flows, executing intricate browser actions, or multi-step navigation, the **Scraping Browser** is the best choice. It integrates seamlessly with Puppeteer or Playwright, giving you full control over the browser environment. This allows for precise interactions, such as simulating user behavior, handling JavaScript-heavy pages, and navigating complex workflows.

Yes, ZenRows offers no-code options for web scraping.
You can integrate ZenRows with various platforms to connect thousands of apps. [Click here](/integrations/clay) for more information on our No-Code documentation pieces under the Integration tag.

Yes, each new request automatically uses a different IP to ensure anonymity and avoid detection. However, if you need to maintain the same IP for a specific period, this is also possible. Please refer to each product's documentation for detailed information on how to configure it.

Yes, our products support geo-targeting, allowing you to choose IPs from specific countries or regions. This is especially useful for accessing localized content. Currently, city- or state-level geolocation is not available.

Absolutely. Residential Proxies are designed to integrate seamlessly with any custom scraping setup, regardless of the tools or frameworks you use.

No, Residential Proxies are only available to paying accounts.

No, our fingerprints are not returned.

No, we do not return either the page's IP or our own IP.

No, you cannot access the page headers directly. However, you can access the response headers.

No, CAPTCHA tokens are not returned after bypassing.

No, we provide the tools for scraping, but you need to develop your own solutions. Our easiest product to integrate is **Scraper APIs**, which require less coding — just run them to get the page contents. We also offer [No-Code options](/first-steps/faq#does-zenrows-have-no-code-options) for those who prefer to avoid coding.

# ZenRows Pricing
Source: https://docs.zenrows.com/first-steps/pricing

ZenRows® offers transparent, flexible pricing designed for different technical levels and scraping volumes. Every plan includes access to all ZenRows products — the **Universal Scraper API**, **Scraping Browser**, **Residential Proxies**, and **Scraper APIs (Beta)**.

## How Pricing Works

### Shared Balance System

Each plan gives you a **shared balance** that you can spend across all ZenRows products. This means you have the flexibility to use any combination of our tools without worrying about separate limits for each product.

### Pay Only for Success

You're charged **only for successful requests**. Failed or retried requests don't consume your balance. HTTP 404 and 410 responses count as successful because the request completed correctly and returned valid data.

### What Affects Your Costs

Your costs depend on four main factors:

* **Page complexity** — Basic public pages cost less than protected pages behind anti-bot systems (e.g., Cloudflare, DataDome, Akamai)
* **JavaScript rendering** — Enabling browser-based rendering increases costs because it requires more computational resources
* **Premium proxies** — These residential IP addresses improve success rates on protected or geo-targeted sites but cost more
* **Browser usage** — When using the **Scraping Browser**, you pay based on data transferred (GB) and active session time

### Cost Multipliers for Universal Scraper API

Different features multiply your base cost depending on the infrastructure required:

| Feature                        | Cost multiplier vs Basic |
| ------------------------------ | ------------------------ |
| JavaScript Rendering           | ×5                       |
| Premium Proxies                | ×10                      |
| JS Rendering + Premium Proxies | ×25                      |

**Example costs per 1,000 requests (CPM):**

* Basic pages: \$0.28
* With JavaScript: \$1.40
* With Premium Proxies: \$2.80
* With both features: \$7.00

These multipliers reflect the additional infrastructure needed for browser rendering and residential IP addresses.
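The arithmetic behind these figures is simple: each feature multiplier applies to your plan's basic CPM. A minimal sketch, using the \$0.28 basic CPM from the example above:

```python theme={null}
# Universal Scraper API cost per 1,000 requests (CPM) by feature combination
BASIC_CPM = 0.28  # basic-page rate from the example above, in dollars

MULTIPLIERS = {
    "basic": 1,
    "js_render": 5,
    "premium_proxy": 10,
    "js_render + premium_proxy": 25,
}

for features, multiplier in MULTIPLIERS.items():
    print(f"{features}: ${BASIC_CPM * multiplier:.2f} per 1,000 requests")
# basic: $0.28, js_render: $1.40, premium_proxy: $2.80, both: $7.00
```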
## Basic vs Protected Pages

Understanding the difference between these page types helps you estimate costs accurately:

* **Basic pages** — Standard public websites like news articles, product listings, and blogs. These usually work without proxies or JavaScript rendering.
* **Protected pages** — Sites using anti-bot protection (e.g., Cloudflare, DataDome, Akamai). These need JavaScript rendering and Premium Proxies for reliable data extraction.

If you're unsure about your page mix, use our **[Pricing Calculator](https://www.zenrows.com/pricing-calculator)** to estimate how different page types affect your total cost.

## Cost Calculation Examples

### Universal Scraper API

* **Available requests** = (Plan Balance ÷ Cost per 1,000 requests) × 1,000
* **Total cost** = (Number of requests ÷ 1,000) × Cost per 1,000 requests

### Scraping Browser

* **Total cost** = (GB used × price per GB) + (Session hours × \$0.09)
* Sessions are billed in 30-second increments

### Residential Proxies

* **Total cost** = GB used × price per GB

> **Remember:** You pay only for successful requests across all products.

## Choose Your Plan in 60 Seconds

1. **Estimate your monthly volume** — Count how many Basic and Protected pages you'll scrape
2. **Add browser usage** — If you'll use the Scraping Browser, estimate GB and session hours needed
3. **Get a recommendation** — The calculator suggests a plan with buffer room for growth

→ **[Open the Pricing Calculator](https://www.zenrows.com/pricing-calculator)**

## Plan Comparison

### Free Trial

**A free trial period to test your use case.**

**Price:** Free

**Universal Scraper API**

* 1,000 basic results
* 200 results using only the JS Rendering feature
* 100 results using only the Premium Proxy feature
* 40 protected results

**Scraper APIs `BETA`**

* 1,000 results

**Scraping Browser** or **Residential Proxies**

* 100 MB

**Concurrency:** 5 parallel requests

**Maximum Download Size:** 10 MB

**Features:**

* Everything from the Business Plan

### Developer

**Ideal for personal or small projects.**

**Price:** \$69.99/month

**Universal Scraper API**

* 250K basic results (\$0.28 CPM)
* 50K results using only the JS Rendering feature (\$1.40 CPM)
* 25K results using only the Premium Proxy feature (\$2.80 CPM)
* 10K protected results (\$7 CPM)

**Scraper APIs `BETA`**

* 66.7K results (\$1.05 CPM)

**Scraping Browser** or **Residential Proxies**

* 12.73 GB (\$5.5/GB)

**Concurrency:** 20 parallel requests

**Maximum Download Size:** 5 MB

**Features:**

* AI support and documentation
* Standard analytics
* Alerts & notifications

### Startup

**Great for early-stage startups needing more volume.**

**Price:** \$129.99/month

**Universal Scraper API**

* 1M basic results (\$0.13 CPM)
* 200K results using only the JS Rendering feature (\$0.65 CPM)
* 100K results using only the Premium Proxy feature (\$1.30 CPM)
* 40K protected results (\$3.25 CPM)

**Scraper APIs `BETA`**

* 130K results (\$1.00 CPM)

**Scraping Browser** or **Residential Proxies**

* 24.76 GB (\$5.25/GB)

**Concurrency:** 50 parallel requests

**Maximum Download Size:** 10 MB

**Features:**

* Everything from the Developer Plan
* Human chat support

### Business

**Suitable for scaling operations.**

**Price:** \$299.99/month

**Universal Scraper API**

* 3M basic results (\$0.10 CPM)
* 600K results using only the JS Rendering feature (\$0.50 CPM)
* 300K results using only the Premium Proxy feature (\$1.00 CPM)
* 120K protected results (\$2.50 CPM)

**Scraper APIs `BETA`**

* 315.8K results (\$0.95 CPM)

**Scraping Browser** or **Residential Proxies**

* 60 GB (\$5/GB)

**Concurrency:** 100
parallel requests

**Maximum Download Size:** 10 MB

**Features:**

* Everything from the Startup Plan
* Advanced analytics

### Business 500

**Built for high-traffic applications.**

**Price:** \$499.99/month

**Universal Scraper API**

* 6.2M basic results (\$0.08 CPM)
* 1.2M results using only the JS Rendering feature (\$0.40 CPM)
* 620K results using only the Premium Proxy feature (\$0.80 CPM)
* 240K protected results (\$2.08 CPM)

**Scraper APIs `BETA`**

* 555.5K results (\$0.90 CPM)

**Scraping Browser** or **Residential Proxies**

* 111.11 GB (\$4.5/GB)

**Concurrency:** 150 parallel requests

**Maximum Download Size:** 20 MB

**Features:**

* Everything from the Business Plan
* Priority support

### Business 1k

**High-performance plan for large teams.**

**Price:** \$999.99/month

**Universal Scraper API**

* 12.5M basic results (\$0.08 CPM)
* 2.5M results using only the JS Rendering feature (\$0.40 CPM)
* 1.2M results using only the Premium Proxy feature (\$0.80 CPM)
* 480K protected results (\$2.08 CPM)

**Scraper APIs `BETA`**

* 1.2M results (\$0.83 CPM)

**Scraping Browser** or **Residential Proxies**

* 285.71 GB (\$3.5/GB)

**Concurrency:** 200 parallel requests

**Maximum Download Size:** 20 MB

**Features:**

* Everything from the Business 500 Plan
* Dedicated Account Manager

### Business 2k

**Built for data-intensive operations.**

**Price:** \$1,999.99/month

**Universal Scraper API**

* 25M basic results (\$0.08 CPM)
* 5M results using only the JS Rendering feature (\$0.40 CPM)
* 2.5M results using only the Premium Proxy feature (\$0.80 CPM)
* 1M protected results (\$2.00 CPM)

**Scraper APIs `BETA`**

* 2.5M results (\$0.80 CPM)

**Scraping Browser** or **Residential Proxies**

* 634.92 GB (\$3.15/GB)

**Concurrency:** 300 parallel requests

**Maximum Download Size:** 50 MB

**Features:**

* Everything from the Business 1k Plan

### Business 3k

**Enterprise-grade scale and reliability.**

**Price:** \$2,999.99/month

**Universal Scraper API**

* 37.5M basic results (\$0.08 CPM)
* 7.5M results using only the JS Rendering feature (\$0.40 CPM)
* 3.75M results using only the Premium Proxy feature (\$0.80 CPM)
* 1.5M protected results (\$2.00 CPM)

**Scraper APIs `BETA`**

* 4M results (\$0.75 CPM)

**Scraping Browser** or **Residential Proxies**

* 1.07 TB (\$2.73/GB)

**Concurrency:** 400 parallel requests

**Maximum Download Size:** 100 MB

**Features:**

* Everything from the Business 2k Plan
* SLA Guarantee
* Onboarding & technical support

### Enterprise

**Custom solutions for large-scale businesses.**

**Price:** Custom (above \$2,999/month)

**Universal Scraper API**

* Custom results

**Scraper APIs `BETA`**

* Custom results

**Scraping Browser** or **Residential Proxies**

* Custom GB

**Concurrency:** Custom parallel requests

**Maximum Download Size:** Custom MB

**Features:**

* Everything from the Business 3k Plan
* Dedicated Key Account Manager

**Basic results** = Public websites without anti-bot protection\
**Protected results** = Sites behind firewalls or with heavy JavaScript\
**Concurrent requests** = How many pages you can scrape simultaneously. Higher concurrency enables faster data extraction.

Use the **[Pricing Calculator](https://www.zenrows.com/pricing-calculator)** to test different scenarios and find your ideal plan.
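To sanity-check the plan figures, here's a minimal sketch applying the formulas from the Cost Calculation Examples above. The balance and rates are taken from the Business plan in the comparison; the usage values are illustrative:

```python theme={null}
def available_requests(plan_balance: float, cpm: float) -> int:
    # Universal Scraper API: how many requests a balance buys at a given CPM
    return int(plan_balance / cpm * 1000)

def scraping_browser_cost(gb_used: float, price_per_gb: float, session_hours: float) -> float:
    # Scraping Browser: data transferred plus session time at $0.09/hour
    return gb_used * price_per_gb + session_hours * 0.09

# Business plan balance spent entirely on basic pages at $0.10 CPM
print(available_requests(299.99, 0.10))              # 2,999,900 ≈ 3M basic results

# 10 GB at the Business rate of $5/GB plus 2 hours of session time
print(f"${scraping_browser_cost(10, 5.00, 2):.2f}")  # $50.18
```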
## Product-Specific Pricing Details

### Universal Scraper API

* **Billing method:** Cost per 1,000 successful requests (CPM)
* **Cost multipliers:** Apply when you enable JavaScript rendering or Premium Proxies
* **Best for:** Mixed scraping where some pages need protection bypass

### Scraping Browser

* **Billing method:** Data usage (per GB) + session time (\$0.09 per hour)
* **Best for:** Interactive sessions or long-running browser automation

### Residential Proxies

* **Billing method:** Data usage only (per GB)
* **Best for:** Geo-targeted scraping or proxy-only workflows

### Scraper APIs `Beta`

* **Billing method:** Cost per 1,000 successful requests with volume discounts
* **Best for:** Structured data from predictable endpoints

Enterprise plans offer additional CPM and GB pricing discounts. [Contact our sales team](https://www.zenrows.com/contact-sales) if your usage exceeds our highest standard plan.

## Managing Your Subscription

Access all subscription changes from your Plans or Billing pages.

### Top-Ups

You'll receive usage alerts at 80% and 95% of your limit, if enabled from your [Notifications settings](https://app.zenrows.com/account/notifications). Top-Ups let you extend usage without changing plans:

* **Cost:** 15% of your plan price for 15% more usage
* **Limit:** Up to 4 Top-Ups per billing cycle

Set automatic Top-Ups to trigger at your chosen threshold (like 95%). To avoid surprises, enable invoice notifications in your [Notification Settings](https://app.zenrows.com/account/notifications).

### Upgrading Your Plan

* **When it applies:** Immediately after purchase
* **Important:** Unused balance from your previous plan doesn't carry over and isn't credited toward the new plan

### Downgrading Your Plan

* **When it applies:** At the end of your current billing cycle
* **During transition:** You keep your current limits until renewal

### Cancelling Your Subscription

1. Go to Billing
2. Click **Cancel Subscription**
3. Select your reason for cancelling (your feedback helps us improve!)
4. Confirm to complete the cancellation

After cancelling:

1. You keep access until your billing cycle ends
2. You can use your remaining quota during this time
3. No additional charges will occur

For complete terms and conditions, visit our Terms & Conditions page.

## Frequently Asked Questions (FAQ)

ZenRows uses a subscription model. Each plan gives a monthly balance (e.g., Business = \$299) to spend across all products.\
Every successful request deducts cost according to its CPM or GB rate. You only pay for successful requests — failed or retried ones aren’t billed (404 and 410 count as successful).

No, the cost of requests is included in your plan, and your plan's balance applies to all products. For example, if you're on the Business plan, you pay **\$299 per month** and receive **\$299 worth of usage** to spend across all ZenRows products.

1. Each product has a specific cost per request.
2. The costs for all products are deducted from your shared balance.

For instance: If you spend \$150 with the Universal Scraper API and \$149 with the Scraping Browser, your total usage will be \$299. Keep in mind that your plan's balance is shared across all products, not allocated separately to each product.

We accept credit cards (Visa, MasterCard, American Express, etc.), Apple Pay, Google Pay, and PayPal. Wire transfer is accepted for Enterprise customers. We don't accept crypto or debit cards at the moment.

We offer flexible subscription options with discounts for longer commitments.
Choose from 3-month, 6-month, or yearly plans, each with discounted pricing. The best value comes with our yearly plan, which includes a 10% discount. Enjoy the full range of ZenRows features at the best rates available.

Currently, ZenRows does not offer Pay-As-You-Go plans. However, we are actively working on expanding our offerings and plan to introduce a Pay-As-You-Go option soon.

Currently, we only offer custom plans for users requiring higher volumes than those included in our Business 3k plan (\$2,999/month). We do not offer the option to purchase additional concurrency separately or to subscribe to a plan for just one product at a lower price. When you subscribe to any of our plans, you gain access to all products, with usage limits determined by your selected plan. If you have high-volume needs beyond our standard offerings, feel free to reach out to discuss a tailored solution.

There is no need to cancel your free trial. It will expire automatically at the end of the trial period. No further action is required on your part.

No, pausing or delaying your subscription is not currently available. You can either keep your subscription active or cancel it at any time. If you choose to cancel, you will continue to have access to all services until the end of your current billing cycle. Please note that any unused days or features do not roll over if you renew or reactivate your subscription later.

You can view and download your subscription invoices on your [Billing page](https://app.zenrows.com/billing).

Advanced Analytics, available on Business and higher plans, provides detailed insights and monitoring tools to help you optimize data extraction and manage costs. Key features include:

* **Timeline View of Scraping Activity:** Visualize requests, costs, and success/failure rates over time.
* **Granular Usage Insights with Date Filtering:** Filter and analyze your usage data by custom date ranges using an interactive datepicker.
* **Domain and Subdomain Analytics:** Track usage and performance by domain and subdomain.
* **Detailed Usage & Performance Metrics:** Monitor spending, request volume, success/failure rates, HTTP status codes, and more.
* **Concurrency Insights:** Monitor concurrent request limits and resource usage.

Advanced Analytics helps you quickly identify trends, optimize your scraping strategy, and maintain complete visibility over your usage and costs. This advanced filtering is only available with Advanced Analytics. Other plans can access the analytics page but do not have advanced filtering.

Currently, it's not possible to update information on past invoices. To ensure your future invoices have the correct details, please keep your profile information up to date in your [Account Settings](https://app.zenrows.com/account/settings) and update your payment methods on the [Billing page](https://app.zenrows.com/billing). Any changes you make will be reflected in your next invoices.

Rest assured, you won't be charged when your free trial ends. We don't ask for any credit card details during sign-up, so there's no way for us to bill you unexpectedly. During your trial, you'll have a \$1 usage limit to explore and experience our service freely. Once your trial ends or you reach the limit, your access will pause—no surprise charges, ever. If you'd like to continue using our service, you're welcome to subscribe at any time. And if you have any questions, our support team is always here to help!

Yes.
Upgrades apply immediately, but any unused balance from your previous plan doesn't carry over. Downgrades take effect at the start of your next billing cycle, and there are no penalties for changing plans.

Yes, for usage beyond our Business 3k plan (\$2,999/month), we offer Enterprise plans with custom concurrency limits, SLA guarantees, dedicated support, and volume discounts. We don't currently offer plans for single products or additional concurrency purchases for standard plans.

**All plans:** Documentation and AI-powered support\
**Startup and above:** Human chat support\
**Business and above:** Priority human support\
**Enterprise:** Dedicated account manager, SLA guarantees, onboarding assistance, and private Slack channel

# Unblocking Your Scraper with Residential Proxies
Source: https://docs.zenrows.com/first-steps/res-proxy-first-req

# Power Up Your Headless Browser Scraper
Source: https://docs.zenrows.com/first-steps/sc-browser-first-req

# Launch Your First Universal Web Scraper
Source: https://docs.zenrows.com/first-steps/uni-sc-api-first-req

# Welcome to ZenRows®
Source: https://docs.zenrows.com/first-steps/welcome

ZenRows® offers tools to help you gather and manage web data easily. Whether you need to extract data from websites, use a browser for dynamic content, or access geo-targeted content with residential proxies, we have the right solutions for you.

## Products

ZenRows provides a variety of tools to meet your data extraction needs. Whether you need industry-specific data, a universal scraping solution, or strong proxy support, ZenRows has you covered.

* [Universal Scraper API](/universal-scraper-api/api-reference): Increase your scraping success rate with advanced AI tools that handle CAPTCHA, browsers, fingerprinting, and more.
* [Scraping Browser](/scraping-browser/introduction): Run your Puppeteer and Playwright scripts on cloud-hosted browsers with rotating residential IPs and 99.9% uptime for scalable web data extraction.
* [Residential Proxies](/residential-proxies/introduction): Access geo-restricted content with our auto-rotating residential proxies, offering over 55 million IPs and 99.9% uptime.
* [Scraper APIs](/scraper-apis/introduction): Get real-time product and property data with fast, reliable APIs for eCommerce, Real Estate, and search engine results pages (SERP).

### 👉 Common Use-Cases

ZenRows is built to handle real-world scraping scenarios. Whether you need to load dynamic single-page applications, keep sessions alive across multiple requests, submit forms, or capture geo-localized data, we've got you covered. See our new [Common Use-Cases & Recipes guide](/universal-scraper-api/common-use-cases) for quick examples.

### 💡 Need Help? You Can Ask Our AI-Powered Search!

You can use the **search bar at the top of the page** to instantly get answers to your questions, find documentation, and explore ZenRows' features. Simply type what you're looking for, and our AI will guide you!

# Forbidden Sites and Activities
Source: https://docs.zenrows.com/forbidden-sites

ZenRows target sites' access restrictions and user behavior guidelines

Are your ZenRows requests failing? You might be trying to scrape restricted websites or engaging in prohibited activities. Read on to understand the guidelines.

ZenRows prevents access to certain websites, including but not limited to:

* Financial institutions
* Governmental websites
* Payment processors
* Visa applications

Additionally, certain behaviors on permissible sites can result in account suspension. Be aware of these to use our service without disruptions.
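If your code targets one of these restricted categories, the API returns an HTTP `400` with error code `REQS001` (the full response body is shown in the next section). A minimal sketch of detecting it and skipping retries, assuming the standard `requests` library and the `https://api.zenrows.com/v1/` endpoint used throughout these docs:

```python theme={null}
import requests

params = {"apikey": "YOUR_ZENROWS_API_KEY", "url": "https://example.com"}
response = requests.get("https://api.zenrows.com/v1/", params=params)

if response.status_code == 400 and response.json().get("code") == "REQS001":
    # Restricted target: do not retry, as repeated attempts can get your
    # account permanently blocked. Switch targets or contact support instead.
    print(response.json()["detail"])
else:
    print(response.text)
```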
### Understanding Access Restrictions

We provide flexible products, but it's essential to understand the limitations set primarily for legal or ethical reasons.

#### Restricted Websites Categories

1. **Banks**: Websites of banking institutions.
2. **Credit Card Processors/Payment Gateways**: This encompasses VISA, MasterCard, PayPal, Stripe, and similar payment platforms.
3. **Visas, eVisas, and Permits Websites**: Platforms related to visa application or issuance.
4. **Governmental Websites**: Mostly, those with `.gov` domain extensions.

We may also block other sites based on legal or ethical concerns. If you believe a site is blocked in error, or there's a legitimate reason for accessing content from a restricted site, please [contact us](mailto:success@zenrows.com). We welcome feedback and will review your appeal.

#### Encountering Restrictions - What to Expect

If you target a restricted site, ZenRows API will return the following error:

* **HTTP Status Code**: `400 Bad Request`
* **Response Body**:

```json theme={null}
{
  "code": "REQS001",
  "detail": "Requests to this URL are forbidden. Contact support if this may be a problem, or try again with a different target URL.",
  "instance": "/v1",
  "status": 400,
  "title": "Requests to this domain are forbidden (REQS001)",
  "type": "https://docs.zenrows.com/api-error-codes#REQS001"
}
```

### User Behavior Guidelines

Certain behaviors can lead to account suspension due to ethical, legal, or operational reasons:

1. **Brute Forcing Login Forms**: Prohibited due to security concerns.
2. **Heavy Traffic Burden**: Overloading sites disrupts their operation. Respect the `robots.txt` file and avoid aggressive scraping.
3. **Scraping Copyrighted Content**: Without permissions, this is legally contentious.
4. **Extracting Private Personal Information**: Against GDPR and other privacy laws.
5. **Misrepresentation**: Tactics like spoofing user-agents or using proxies unethically are discouraged.

#### Recommendations and Best Practices

1. **Handle Errors Gracefully**: Build error-handling into your systems to manage potential restrictions seamlessly. If you get an HTTP 400 status code, do not retry your request to prevent being permanently blocked.
2. **Reach Out**: Our support team is always ready to help, clarify, and listen.

We aim for a balance between powerful web scraping and ethical web behavior. By respecting these guidelines, we can create a sustainable and beneficial web scraping ecosystem. For any queries or clarifications, our support team is here to assist.

## Frequently Asked Questions (FAQ)

This error typically indicates that you've attempted to access a restricted website or IP address. Please review our access restrictions and guidelines to understand the limitations. If you target a restricted site, ZenRows API will return the following error:

* **HTTP Status Code**: `400 Bad Request`
* **Response Body**:

```json theme={null}
{
  "code": "REQS001",
  "detail": "Requests to this URL are forbidden. Contact support if this may be a problem, or try again with a different target URL.",
  "instance": "/v1",
  "status": 400,
  "title": "Requests to this domain are forbidden (REQS001)",
  "type": "https://docs.zenrows.com/api-error-codes#REQS001"
}
```

Apart from accessing restricted sites, specific behaviors, like brute forcing login forms, extracting private personal information against GDPR norms, or heavy traffic burden on websites, can lead to account suspension.
Always refer to our [Terms and Conditions](https://www.zenrows.com/terms-and-conditions) to avoid any inadvertent disruptions.

No, scraping content directly from server IP addresses is not allowed. Only URLs with Fully Qualified Domain Names (FQDNs) can be scraped.

# How to Integrate 2Captcha with ZenRows
Source: https://docs.zenrows.com/integrations/2captcha

Integrating 2Captcha with ZenRows gives you an extra layer of protection to handle interactive CAPTCHAs that appear on web pages. This guide shows you how to set up 2Captcha with both the ZenRows Universal Scraper API and the ZenRows Scraping Browser.

## What Is 2Captcha

2Captcha is a CAPTCHA-solving service that employs humans to solve various types of CAPTCHA challenges. Here's how 2Captcha works:

1. The scraper submits the CAPTCHA challenge as a unique key (e.g., sitekey for reCAPTCHA) via 2Captcha's API.
2. A human solver receives and solves the CAPTCHA challenge manually.
3. The human worker returns the CAPTCHA solution as a hashed token.
4. The scraper supplies the solution token to the site and solves the challenge.

The service is particularly useful for solving in-page CAPTCHAs, ensuring complete CAPTCHA evasion. These include CAPTCHAs attached to form fields or those that appear when you interact with page elements.

You'll learn to integrate 2Captcha with ZenRows in the next sections of this guide. To demonstrate the solution, we'll use the reCAPTCHA demo page as the target website.

## Integrating 2Captcha With the Universal Scraper API

The Universal Scraper API automatically bypasses website protections, but you should combine it with 2Captcha for interactive or form-based CAPTCHAs. This approach requires minimal code changes and helps handle unexpected blocks.

### Step 1: Set up 2Captcha on ZenRows

1. Log in to your 2Captcha dashboard and copy your API key.
2. Go to the [Integrations Page](https://app.zenrows.com/account/integrations) in your ZenRows account and click on `Integrations`.
3. Click `Manage` on the 2Captcha card.
4. Enter your 2Captcha API key and click `Save`.

### Step 2: Integrate 2Captcha in your scraping request

After adding the 2Captcha ZenRows integration, you need to specify its usage in your Universal Scraper API request via the [`js_instructions`](/universal-scraper-api/features/js-instructions) parameter. ZenRows handles all communication between your script and 2Captcha.

The `js_instructions` parameter accepts a `solve_captcha` option that specifies which CAPTCHA type you want to solve. ZenRows' Universal Scraper API supports these CAPTCHA types for 2Captcha integration:

* `recaptcha`: Google reCAPTCHA series, including invisible CAPTCHAs that don't require user interaction.
* `cloudflare_turnstile`: Cloudflare Turnstile CAPTCHAs, typically found on forms and interactive elements.

Here are the supported CAPTCHA types with their configuration options:

```javascript theme={null}
[
  // Solve reCAPTCHA
  {"solve_captcha": {"type": "recaptcha"}},
  // Solve Cloudflare Turnstile
  {"solve_captcha": {"type": "cloudflare_turnstile"}},
  // Solve Invisible reCAPTCHA with inactive option
  {"solve_captcha": {"type": "recaptcha", "options": {"solve_inactive": true}}}
]
```

For more information on the available instructions, check the [JavaScript Instructions Documentation](/universal-scraper-api/features/js-rendering#solve-captchas) page. Now, let's implement the `js_instructions` in your ZenRows request.
In this example, the script waits for 3 seconds, solves the reCAPTCHA challenge using 2Captcha, waits 3 seconds for processing, clicks the submit button, and then waits for the success message to be present:

```python Python theme={null}
# pip3 install requests
import requests

url = "https://2captcha.com/demo/recaptcha-v2"

def scraper(url):
    apikey = "YOUR_ZENROWS_API_KEY"
    params = {
        "url": url,
        "apikey": apikey,
        "js_render": "true",
        "premium_proxy": "true",
        "json_response": "true",
        "js_instructions": """[
            {"wait": 3000},
            {"solve_captcha":{"type":"recaptcha"}},
            {"wait": 3000},
            {"click": "button[type='submit']"},
            {"wait_for": "._successMessage_1ndnh_1"}
        ]""",
        "css_extractor": """{"success":"p._successMessage_1ndnh_1"}""",
    }
    response = requests.get("https://api.zenrows.com/v1/", params=params)
    return response.json()
```

The CSS selectors provided in this example (`._successMessage_1ndnh_1`, `button[type='submit']`) are specific to the page used in this guide. Selectors may vary across websites. For guidance on customizing selectors, refer to the [CSS Extractor documentation](/universal-scraper-api/features/output#css-selectors). If you're having trouble, the [Advanced CSS Selectors Troubleshooting Guide](/universal-scraper-api/troubleshooting/advanced-css-selectors) can help resolve common issues.

### Step 3: Confirm you solved the CAPTCHA

You can confirm that the CAPTCHA was successfully solved by checking the response from both the `js_instructions` and `css_extractor`:

```python theme={null}
# confirm if CAPTCHA is bypassed
html_content = scraper(url)
css_extractor_info = html_content.get("css_extractor")
js_instructions_info = html_content.get("js_instructions_report")

print(
    "CSS extractor info:", css_extractor_info,
    "\n",
    "JS instructions info:", js_instructions_info,
)
```

### Complete Example

Here's the full working code:

```python Python theme={null}
# pip3 install requests
import requests

url = "https://2captcha.com/demo/recaptcha-v2"

def scraper(url):
    apikey = "YOUR_ZENROWS_API_KEY"
    params = {
        "url": url,
        "apikey": apikey,
        "js_render": "true",
        "premium_proxy": "true",
        "json_response": "true",
        "js_instructions": """[
            {"wait": 3000},
            {"solve_captcha":{"type":"recaptcha"}},
            {"wait": 3000},
            {"click": "button[type='submit']"},
            {"wait_for": "._successMessage_1ndnh_1"}
        ]""",
        "css_extractor": """{"success":"p._successMessage_1ndnh_1"}""",
    }
    response = requests.get("https://api.zenrows.com/v1/", params=params)
    return response.json()

# confirm if CAPTCHA is bypassed
html_content = scraper(url)
css_extractor_info = html_content.get("css_extractor")
js_instructions_info = html_content.get("js_instructions_report")

print(
    "CSS extractor info:", css_extractor_info,
    "\n",
    "JS instructions info:", js_instructions_info,
)
```

When successful, you'll see a response like this, confirming the CAPTCHA was solved:

```python Response theme={null}
CSS extractor info: {"data": {"success": "Captcha is passed successfully!"}}
JS instructions info: {
    "instructions": [
        # ...
        {
            "duration": 1456,
            "instruction": "solve_captcha",
            "params": {"options": None, "type": "recaptcha"},
            "result": [
                {
                    # ...
                    "response_element": True,
                    "solved": True,
                    # ...
                }
            ],
            "success": True,
        },
        # ...
    ],
    # ...
}
```

This response is equivalent to manually clicking the "Check" button on the reCAPTCHA demo page to verify the CAPTCHA was solved. The [`json_response`](/universal-scraper-api/features/js-rendering#debug-js-instructions) parameter is used here for debugging purposes.
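For production use, you can parse the `response.text` with a parser like BeautifulSoup, since the request returns the HTML content that was previously locked behind the CAPTCHA. A minimal sketch, assuming `beautifulsoup4` is installed and reusing the same request parameters without `json_response` and `css_extractor`:

```python theme={null}
# pip3 install requests beautifulsoup4
import requests
from bs4 import BeautifulSoup

params = {
    "url": "https://2captcha.com/demo/recaptcha-v2",
    "apikey": "YOUR_ZENROWS_API_KEY",
    "js_render": "true",
    "premium_proxy": "true",
    "js_instructions": """[
        {"wait": 3000},
        {"solve_captcha":{"type":"recaptcha"}},
        {"wait": 3000},
        {"click": "button[type='submit']"},
        {"wait_for": "._successMessage_1ndnh_1"}
    ]""",
}
response = requests.get("https://api.zenrows.com/v1/", params=params)

# without json_response, the API returns the unlocked page HTML directly
soup = BeautifulSoup(response.text, "html.parser")
success = soup.select_one("p._successMessage_1ndnh_1")
print(success.get_text(strip=True) if success else "Success message not found")
```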
## Integrating 2Captcha With the ZenRows Scraping Browser

The ZenRows Scraping Browser doesn't include direct 2Captcha integration, but you can achieve this using the 2Captcha SDK. Here's how to set it up using Playwright with the Scraping Browser in Python.

### Step 1: Install the 2Captcha SDK

First, install the 2Captcha SDK:

```bash theme={null}
pip3 install 2captcha-python
```

### Step 2: Connect to 2Captcha

Import the necessary libraries and create a connection to 2Captcha using your API key:

```python Python theme={null}
# pip3 install 2captcha-python playwright
from playwright.async_api import async_playwright
from twocaptcha import TwoCaptcha

# connect to 2Captcha API
two_captcha_api_key = "2Captcha_API_KEY"
solver = TwoCaptcha(two_captcha_api_key)
```

The solver object handles all communication with the 2Captcha service.

### Step 3: Connect to the Scraping Browser and obtain the CAPTCHA Token

Connect to the Scraping Browser, navigate to your target page, and extract the reCAPTCHA sitekey (a unique identifier for the CAPTCHA):

```python Python theme={null}
# ...

# scraping browser connection URL
connection_url = (
    "wss://browser.zenrows.com?apikey=YOUR_ZENROWS_API_KEY"
)

async with async_playwright() as p:
    # connect to the browser over CDP (Chrome DevTools Protocol)
    browser = await p.chromium.connect_over_cdp(connection_url)
    # create a new page
    page = await browser.new_page()
    # navigate to the target URL
    await page.goto(url)

    # wait for the reCAPTCHA sitekey to load and extract it
    await page.wait_for_selector("#g-recaptcha")
    site_key = await page.evaluate(
        "document.querySelector('#g-recaptcha').getAttribute('data-sitekey')"
    )

    # solve CAPTCHA
    captcha_response = solver.recaptcha(sitekey=site_key, url=url)

    # extract the reCAPTCHA token from the response
    captcha_token = captcha_response["code"]
```

This code extracts the sitekey from the reCAPTCHA element and sends it to 2Captcha for solving.

### Step 4: Solve the CAPTCHA

When a user solves a CAPTCHA, the solution token gets placed in a hidden input field. You need to simulate this process by injecting the token into the appropriate field:

```python Python theme={null}
# ...

async def scraper(url):
    async with async_playwright() as p:
        # ...

        # enter the CAPTCHA token to pass the block
        await page.evaluate(
            """
            (captchaToken) => {
                const responseField = document.getElementById("g-recaptcha-response");
                responseField.style.display = 'block';
                responseField.value = captchaToken;
                responseField.dispatchEvent(new Event('change', { bubbles: true }));
            }
            """,
            captcha_token,
        )
```

This JavaScript code finds the hidden response field, makes it visible, sets the token value, and triggers a change event to notify the page that the CAPTCHA was solved.

### Step 5: Confirm you solved the CAPTCHA

After applying the CAPTCHA solution, you can proceed with your scraping tasks. Let's verify the solution worked by clicking the submit button and checking the response:

```python Python theme={null}
# ...
import asyncio

# ...

async def scraper(url):
    async with async_playwright() as p:
        # ...
        # confirm if the CAPTCHA is passed
        await page.get_by_role("button", name="Check").click()
        await asyncio.sleep(10)

        try:
            success_text = await page.text_content("p._successMessage_1ndnh_1")
            if success_text:
                print(success_text)
            else:
                error_text = await page.text_content("div._alertBody_bl73y_16")
                print(error_text)
        except Exception as e:
            print(e)

        # close the browser
        await browser.close()
```

The CSS selectors provided in this example (`._successMessage_1ndnh_1`, `._alertBody_bl73y_16`) are specific to the page used in this guide. Selectors may vary across websites. For guidance on customizing selectors, refer to the [CSS Extractor documentation](/universal-scraper-api/features/output#css-selectors). If you're having trouble, the [Advanced CSS Selectors Troubleshooting Guide](/universal-scraper-api/troubleshooting/advanced-css-selectors) can help resolve common issues.

Run the code using `asyncio`:

```python Python theme={null}
# ...
import asyncio

# ...

url = "https://2captcha.com/demo/recaptcha-v2"

# run the scraper
asyncio.run(scraper(url))
```

### Complete Example

Here's the full working code:

```python Python theme={null}
# pip3 install 2captcha-python playwright
from playwright.async_api import async_playwright
from twocaptcha import TwoCaptcha
import asyncio

# connect to 2Captcha API
two_captcha_api_key = "2Captcha_API_KEY"
solver = TwoCaptcha(two_captcha_api_key)

# scraping browser connection URL
connection_url = "wss://browser.zenrows.com?apikey=YOUR_ZENROWS_API_KEY"

async def scraper(url):
    async with async_playwright() as p:
        # connect to the browser over CDP (Chrome DevTools Protocol)
        browser = await p.chromium.connect_over_cdp(connection_url)
        # create a new page
        page = await browser.new_page()
        # navigate to the target URL
        await page.goto(url)

        # wait for the reCAPTCHA sitekey to load and extract it
        await page.wait_for_selector("#g-recaptcha")
        site_key = await page.evaluate(
            "document.querySelector('#g-recaptcha').getAttribute('data-sitekey')"
        )

        # solve CAPTCHA
        captcha_response = solver.recaptcha(sitekey=site_key, url=url)

        # extract the reCAPTCHA token from the response
        captcha_token = captcha_response["code"]

        # enter the CAPTCHA token to pass the block
        await page.evaluate(
            """
            (captchaToken) => {
                const responseField = document.getElementById("g-recaptcha-response");
                responseField.style.display = 'block';
                responseField.value = captchaToken;
                responseField.dispatchEvent(new Event('change', { bubbles: true }));
            }
            """,
            captcha_token,  # pass the CAPTCHA token as an argument to the JavaScript function
        )

        # confirm if the CAPTCHA is passed
        await page.get_by_role("button", name="Check").click()
        await asyncio.sleep(10)

        try:
            success_text = await page.text_content("p._successMessage_1ndnh_1")
            if success_text:
                print(success_text)
            else:
                error_text = await page.text_content("div._alertBody_bl73y_16")
                print(error_text)
        except Exception as e:
            print(e)

        # close the browser
        await browser.close()

url = "https://2captcha.com/demo/recaptcha-v2"

# run the scraper
asyncio.run(scraper(url))
```

The above code extracts the following success message from the page:

```bash theme={null}
Captcha is passed successfully!
```

Congratulations! 🎉 You've just integrated 2Captcha with the ZenRows Universal Scraper API and the ZenRows Scraping Browser.

## Conclusion

By integrating 2Captcha with ZenRows, you can reliably handle even the toughest CAPTCHAs and keep your scraping workflows running smoothly.
Whether you use the Universal Scraper API or the Scraping Browser, this setup helps you automate CAPTCHA solving and reduce manual effort.

## Frequently Asked Questions (FAQ)

With a success rate of 99.93%, ZenRows bypasses the CAPTCHAs and other anti-bot measures commonly encountered during web scraping. However, interactive or in-page CAPTCHAs, such as those tied to form fields, may occasionally appear after bypassing the primary CAPTCHA. While this is rare, integrating 2Captcha provides an additional layer of stealth, ensuring even higher reliability and enhancing the already impressive 99.93% success rate.

ZenRows, when integrated with 2Captcha, natively supports solving reCAPTCHA and Cloudflare Turnstile, two of the most common CAPTCHA types.

No, ZenRows does not return CAPTCHA tokens.

# How to Integrate Clay with ZenRows
Source: https://docs.zenrows.com/integrations/clay

ZenRows provides powerful web scraping capabilities that integrate seamlessly with Clay, allowing you to extract data from websites without writing code. The integration enables the automated collection of structured data from any website directly into Clay's spreadsheet interface.

## What is Clay?

Clay is a powerful no-code data platform that functions as an intelligent spreadsheet with built-in automation capabilities. It enables users to collect, transform, and analyze data from various sources without requiring programming knowledge, making complex data operations accessible to everyone. With its extensive integration ecosystem, Clay allows users to connect to hundreds of data sources, including web scraping tools like ZenRows, CRM systems, and marketing platforms.

## Watch the Video Tutorial

Learn how to set up the Clay ↔ ZenRows integration step-by-step by watching the video tutorial.