
    Frequently Asked Questions

    Does unused API usage roll over to the next month?

    No, monthly plans reset each month and unused usage doesn’t roll over.

    Does ZenRows offer a browser extension?

    ZenRows does not offer a browser extension. Our products are designed to work via API requests, proxies, and automated browsers rather than through browser extensions.

    Why Doesn’t ZenRows Offer a Browser Extension?

    Browser extensions have significant limitations when it comes to web scraping:

    1. Restricted Execution – Extensions run in a browser’s sandboxed environment, limiting their ability to bypass antibot protections.
    2. Scalability Issues – A browser extension would require manual interaction, making it impractical for large-scale or automated scraping.
    3. Limited Customization – Unlike our API and Scraping Browser, extensions lack the flexibility to integrate advanced scraping techniques, such as headless browsing, fingerprint evasion, and CAPTCHA handling.

    How to Scrape Without a Browser Extension

    Instead of using an extension, we recommend using one of our API-based solutions, which are optimized for web scraping at scale:

    Universal Scraper API

    • Designed for flexible, automated scraping with built-in antibot bypassing.
    • Supports JavaScript rendering, Premium Proxies, and CAPTCHA handling.
    • Ideal for scraping protected or dynamic pages.

    Scraper APIs

    • Ready-to-use APIs that return structured data from specific websites.
    • Handles antibot measures and complex scraping challenges automatically.

    Scraping Browser

    • A headless browsing solution that allows full control over web automation.
    • Ideal for cases where manual browser behavior replication is needed.

    Residential Proxies

    • Provides rotating IPs to increase anonymity.
    • Best for handling IP-based blocks but requires custom browser automation for full scraping functionality.

    If you’re looking for automated and scalable web scraping, our API solutions are the best fit. Let us know if you need help choosing the right approach! 🚀

    If you need to capture dynamic content loaded via AJAX requests, ZenRows offers different approaches depending on the product you’re using. Some products provide built-in JSON responses, while others require custom configurations to extract network requests.

    More and more websites load content dynamically, meaning data is fetched via XHR, AJAX, or Fetch requests instead of being included in the initial HTML. Besides waiting for the content to load, you might want to capture these network requests—similar to how they appear in the Network tab in DevTools.

    How Each ZenRows Product Handles XHR / AJAX / Fetch Requests

    Universal Scraper API

    • The JSON Response feature (json_response=true) captures AJAX requests automatically.
    • Returns a structured JSON object containing two fields:
      • HTML – The final rendered page source.
      • XHR – An array of network requests (XHR, AJAX, Fetch), including URL, body, headers, and more.
    • This feature is exclusive to the Universal Scraper API and is ideal for analyzing background requests.
    Learn more: JSON Response
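    As a minimal sketch in Python (the target URL and helper names are illustrative, and the lowercase `html`/`xhr` field names are assumptions based on the structure described above):

```python
# pip install requests
import requests

API_URL = "https://api.zenrows.com/v1/"

def build_json_response_params(target_url: str, apikey: str) -> dict:
    # JSON Response works alongside JS rendering to observe network activity
    return {
        "url": target_url,
        "apikey": apikey,
        "js_render": "true",
        "json_response": "true",
    }

def fetch_page_with_xhr(target_url: str, apikey: str) -> dict:
    """Returns the rendered 'html' plus an 'xhr' array of captured
    network requests (URL, body, headers, ...)."""
    response = requests.get(API_URL, params=build_json_response_params(target_url, apikey))
    response.raise_for_status()
    return response.json()

# Usage (with a real API key):
#   data = fetch_page_with_xhr("https://httpbin.io/anything", "YOUR_ZENROWS_API_KEY")
#   for call in data.get("xhr", []):
#       print(call["url"])
```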

    Scraper APIs

    • Default response format is JSON, but it does not include network requests by default.
    • Instead of capturing XHR calls, it extracts and structures the final page content into a JSON format.
    • This means you’ll get structured data rather than raw network request details.

    Scraping Browser

    • Does not capture network requests automatically.
    • You’ll need to configure custom JavaScript code to intercept and extract XHR/AJAX calls manually in Puppeteer or Playwright.
    For a step-by-step guide on capturing network requests in Playwright, check out our comprehensive Playwright guide.
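    As a rough sketch of that approach using Playwright for Python over the Scraping Browser WebSocket endpoint (the helper names are illustrative; replace the placeholder API key with your own):

```python
# pip install playwright

# Scraping Browser WebSocket endpoint (placeholder key)
WS_URL = "wss://browser.zenrows.com?apikey=YOUR_ZENROWS_API_KEY"

def is_network_call(resource_type: str) -> bool:
    """XHR and Fetch are the request types shown in DevTools' Network tab."""
    return resource_type in ("xhr", "fetch")

def capture_xhr(target_url: str) -> list:
    """Open the page through the Scraping Browser and record every
    XHR/Fetch response it triggers."""
    from playwright.sync_api import sync_playwright  # imported lazily

    captured = []
    with sync_playwright() as p:
        browser = p.chromium.connect_over_cdp(WS_URL)
        page = browser.new_page()
        page.on(
            "response",
            lambda r: captured.append({"url": r.url, "status": r.status})
            if is_network_call(r.request.resource_type) else None,
        )
        page.goto(target_url, wait_until="networkidle")
        browser.close()
    return captured
```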

    Residential Proxies

    • Acts as a proxy layer without modifying responses.
    • To capture XHR/AJAX requests, you must configure custom request logging in your own setup (e.g., Puppeteer, Playwright, or Selenium).

    By choosing the right ZenRows product and configuration, you can effectively capture network requests and analyze the data that websites load dynamically. Let us know if you need guidance on a specific use case! 🚀

    ZenRows is designed to bypass most modern antibot solutions out-of-the-box. We continuously test and optimize our systems to ensure a smooth scraping experience. However, antibot defenses vary by website, and different ZenRows products serve different purposes.

    Below is an overview of how each product handles antibot measures and what to expect when using them.

    Universal Scraper API

    The Universal Scraper API, when combined with Premium Proxies and JS Render, effectively handles most antibot measures. This setup mimics real user behavior, helping bypass bot detection mechanisms.

    However, not all pages are protected equally. Many websites enforce stricter protections on internal APIs or login-restricted content. If you’re targeting such endpoints, additional configurations might be needed.

    If you’re experiencing blocks despite using Premium Proxies and JS Render, refer to this guide: Using Premium + JS Render and still blocked
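    As a minimal sketch of that setup in Python (the target URL and helper name are illustrative):

```python
# pip install requests
import requests

API_URL = "https://api.zenrows.com/v1/"

def bypass_params(target_url: str, apikey: str) -> dict:
    """Premium Proxies + JS Render: the combination that handles
    most antibot measures by mimicking real user behavior."""
    return {
        "url": target_url,
        "apikey": apikey,
        "premium_proxy": "true",
        "js_render": "true",
    }

# response = requests.get(API_URL, params=bypass_params("https://example.com", "YOUR_ZENROWS_API_KEY"))
```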

    Scraper APIs

    Our Scraper APIs are designed for ease of use. Simply send a request to our API, and we handle all antibot measures in the background, delivering the structured content you need. This is the best option for users who want a hassle-free experience without worrying about configuration.

    Residential Proxies

    Residential Proxies prioritize anonymity rather than antibot bypassing. They provide IP rotation and geographic targeting but do not include built-in antibot or CAPTCHA-solving capabilities. For heavily protected websites, additional techniques may be required.

    Scraping Browser

    The Scraping Browser is highly effective against antibot and CAPTCHA protections, using the same advanced algorithms as the Universal Scraper API. However, if a website enforces a CAPTCHA challenge, we do not bypass it automatically. Solving CAPTCHAs currently requires custom handling, such as integrating a third-party CAPTCHA-solving service.

    By choosing the right combination of ZenRows tools, you can optimize your web scraping strategy to handle even the most complex antibot defenses. If you need further assistance, feel free to reach out to our support team.

    ZenRows supports a variety of no-code platforms to help you scrape data from websites without writing a single line of code. These integrations let you connect your scraping workflows with thousands of apps like Google Sheets, Airtable, Notion, Amazon S3, and more.

    These no-code integrations are ideal for marketers, analysts, product managers, and anyone looking to automate data collection without needing technical skills.

    When to Use No-Code Integrations

    Use ZenRows’ no-code options when you:

    • Want to scrape data into a spreadsheet without writing code
    • Need to automate recurring data collection tasks
    • Prefer visual workflow builders over API requests
    • Are integrating web data into tools like CRMs, email platforms, or dashboards

    Best Practice: Start with pre-built ZenRows templates in platforms like Zapier or Make to set up your workflow in minutes.

    Next Steps

    Visit our Integrations Page to explore tutorials and real-world examples that walk you through setting up your first workflow.

    Suppose you need to scrape data from a website and automatically process it with a third-party tool. We offer various ways to integrate ZenRows with external software. Currently, you can integrate a CAPTCHA solver or a no-code tool such as Zapier, Make, or Clay.

    Additionally, you can build your own integrations using the ZenRows output, whether HTML or JSON. A good use case is the autoparse feature, which returns structured data from a page.
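    For instance, a minimal Python sketch using the autoparse parameter (the target URL and helper name are illustrative) that could feed structured JSON into a downstream tool:

```python
# pip install requests
import requests

API_URL = "https://api.zenrows.com/v1/"

def build_autoparse_params(target_url: str, apikey: str) -> dict:
    """autoparse returns structured data extracted from the page
    instead of raw HTML, which is easier to pipe into other tools."""
    return {"url": target_url, "apikey": apikey, "autoparse": "true"}

# data = requests.get(API_URL, params=build_autoparse_params("https://example.com", "YOUR_ZENROWS_API_KEY")).json()
```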

    Can I get a custom plan?

    Yes! Custom plans are available for high-volume use cases. We cannot customize public plans, as they are standardized for all our clients.

    Optimizing your requests can significantly improve performance and reduce response times. Below are general best practices, followed by specific recommendations for each ZenRows product.

    Factors Affecting Request Speed

    1. Concurrency: Sending multiple requests simultaneously increases throughput
    2. Resource Usage: Features like JavaScript rendering or waiting for specific conditions add processing time
    3. Response Size: Pages with heavy dynamic content naturally take longer to load. Target only the necessary data or use output filters to minimize the payload.
    4. Success Rate: A low success rate means more retries, which slows your overall throughput.
    While Residential Proxies have no concurrency restrictions, other products have plan-specific limits. Monitor your performance when adjusting these settings.

    Monitoring Concurrency Usage

    Each response includes headers that help you manage and optimize your concurrency:

    Concurrency-Limit: 200
    Concurrency-Remaining: 199

    These headers help you:

    • Monitor how many concurrent requests your plan allows
    • Track how many slots are currently available
    • Adjust request volume dynamically to avoid hitting limits that may delay or throttle requests
    The Concurrency-Remaining header reflects the real-time state of your concurrency usage and is the primary value our system uses to enforce limits. If it reaches zero and more requests are sent, you may receive a 429 Too Many Requests error and your IP can be temporarily blocked for 5 minutes.

    If you receive a BLK0001 error (IP Address Blocked), it means your IP has exceeded the allowed error rate. The block will last for 5 minutes and will impact your ability to send new requests during that time, affecting your overall scraping speed. For more details, see our API Error Codes documentation.

    Use these headers to adjust your request flow in real-time, scaling up when possible and backing off before hitting limits.
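    A minimal sketch of that pattern (helper names are illustrative):

```python
# pip install requests
import time
import requests

API_URL = "https://api.zenrows.com/v1/"

def remaining_slots(headers) -> int:
    """Read the real-time concurrency state from the response headers."""
    return int(headers.get("Concurrency-Remaining", "0"))

def fetch_with_throttle(target_url: str, apikey: str, pause: float = 1.0):
    """Pause when Concurrency-Remaining hits zero instead of pushing
    into 429 errors and the temporary IP block."""
    response = requests.get(API_URL, params={"url": target_url, "apikey": apikey})
    if remaining_slots(response.headers) == 0:
        time.sleep(pause)  # let in-flight requests release their slots
    return response
```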

    Product-Specific Recommendations

    Universal Scraper API

    1. Optimize JavaScript Rendering:
    • Skip js_render=true for static content to improve speed
    • Only enable it for dynamic or protected content
    • Consider the impact on response times
    2. Minimize Wait Times:
    • Use wait and wait_for only when required
    • Set the minimum necessary wait duration
    • Longer waits mean slower requests
    3. Use Premium Proxies:
    • Enable premium_proxy=true for faster, more consistent responses
    • Particularly useful for sites with anti-bot measures
    • Can reduce retries and improve overall speed
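    The recommendations above can be sketched as a small helper (the wait_for selector is a hypothetical example):

```python
# pip install requests
import requests

API_URL = "https://api.zenrows.com/v1/"

def build_optimized_params(target_url: str, apikey: str, dynamic: bool) -> dict:
    """Enable the heavier features only when the page actually needs them."""
    params = {"url": target_url, "apikey": apikey}
    if dynamic:
        # Dynamic/protected pages: render JS and use Premium Proxies
        params["js_render"] = "true"
        params["premium_proxy"] = "true"
        params["wait_for"] = ".content"  # hypothetical selector; keep waits minimal
    return params

# Static page, fastest path:
# requests.get(API_URL, params=build_optimized_params("https://example.com", "YOUR_ZENROWS_API_KEY", dynamic=False))
```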

    Scraper APIs

    1. Concurrency Management:
    • Start with moderate concurrency and monitor response times
    • Increase gradually while maintaining acceptable speed
    • Implement backoff strategies when requests slow down
    2. Parameter Optimization:
    • Remove unnecessary parameters that might slow down requests
    • Only use parameters essential for your use case
    • Monitor the impact of each parameter on response times

    Residential Proxies

    1. Request Rate Optimization:
    • Monitor response times at different request rates
    • Adjust based on target site performance
    • Implement backoff when responses slow down

    Scraping Browser

    1. Resource Management:
    • Disable unnecessary JavaScript execution
    • Block non-essential resources (images, media, etc.)
    • Optimize browser settings for faster loading
    2. CAPTCHA Handling:
    • Implement custom CAPTCHA handling (e.g., a solving service) to avoid stalls when challenges appear
    • Consider the impact on overall request speed

    Speed Optimization Best Practices

    1. Start with Baseline: Begin with standard settings and measure response times
    2. Monitor Performance: Use response headers and timing metrics to track speed
    3. Gradual Optimization: Make incremental changes and measure their impact
    4. Smart Retries: Use exponential backoff for failed requests to maintain speed
    5. Target-Specific Tuning: Adjust settings based on the specific website’s performance
    While these optimizations can improve request speed, some features (like JavaScript rendering) might be necessary for your specific use case. If you need help optimizing for speed while maintaining functionality, our support team is here to assist.
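    The smart-retries practice can be sketched with exponential backoff (helper names are illustrative):

```python
# pip install requests
import time
import requests

API_URL = "https://api.zenrows.com/v1/"

def backoff_schedule(max_retries: int, base: float = 1.0) -> list:
    """Delays that double after each failure: 1s, 2s, 4s, ..."""
    return [base * (2 ** i) for i in range(max_retries)]

def fetch_with_retries(target_url: str, apikey: str, max_retries: int = 4):
    """Retry failed requests, sleeping longer after each attempt."""
    response = None
    for delay in backoff_schedule(max_retries) + [None]:
        response = requests.get(API_URL, params={"url": target_url, "apikey": apikey})
        if response.ok or delay is None:  # success, or retries exhausted
            break
        time.sleep(delay)
    return response
```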

    API usage counts are managed based on your subscription plan:

    Monthly Plans

    • Usage is tracked and reset each month.
      Any remaining usage does not roll over to the next month.

    3-Month and 6-Month Plans

    • Usage is tracked and reset every 3 or 6 months, depending on your subscription.
      Any remaining usage does not roll over at the end of each period.

    Yearly Plans

    • Usage is tracked and reset annually.
      Any remaining usage does not roll over to the next year.

    ZenRows is designed to bypass most modern antibot solutions out-of-the-box. We continuously test our service to ensure optimal performance.

    Handling CAPTCHAs depends on the type of CAPTCHA and the ZenRows product you’re using.

    Handling CAPTCHAs on Forms

    CAPTCHAs on forms are not solved automatically. If you need to submit forms that trigger a CAPTCHA, we offer an integration with a CAPTCHA solver that might work for your use case. Learn more about it here: Using JavaScript Instructions to Solve CAPTCHAs

    CAPTCHA Handling by Product

    Each ZenRows product has its own approach to handling CAPTCHAs, depending on the level of antibot protection in place. While some products automatically bypass CAPTCHAs in most cases, others may require additional configurations or external solvers. Below, we outline how each product deals with CAPTCHAs and what you can do to improve your success rate.

    Universal Scraper API

    • Uses Premium Proxies and JS Render to bypass most antibot measures.
    • If a CAPTCHA appears, you can use JavaScript Instructions to interact with the page and solve it manually or through an external CAPTCHA-solving service.

    Scraper APIs

    • Fully managed solution—our API automatically handles the antibot protections, including CAPTCHA challenges.

    Residential Proxies

    • Residential Proxies provide anonymity but do not bypass CAPTCHAs automatically.
    • If CAPTCHA protection is strict, you’ll need custom handling or an external solver.

    Scraping Browser

    • Uses the same bypassing techniques as the Universal Scraper API.
    • Can handle most antibot solutions, but does not solve CAPTCHAs by default.
    • If a CAPTCHA is encountered, it requires custom handling, such as integrating a CAPTCHA solver.

    By choosing the right ZenRows product and implementing the appropriate CAPTCHA-handling techniques, you can minimize interruptions and improve your scraping success rate. If you need assistance with a specific case, feel free to contact our support team.

    Concurrency refers to the number of requests in flight at any given time. Most languages and HTTP clients can call the API in parallel, waiting for some responses while others are still running. You can use concurrency with any ZenRows plan; check out pricing for more details.

    For more details, check our how-to guide on concurrency to see details about implementation in Python and JavaScript.

    Important Behavior to Understand

    Canceled Requests Still Count Against Your Concurrency

    One crucial thing to understand is that canceling requests on the client side does NOT immediately free up concurrency slots. When you cancel a request:

    1. Your client stops waiting for a response
    2. But the ZenRows server continues processing the request
    3. The concurrency slot remains occupied until the server-side processing completes
    4. Only then is the concurrency slot released

    This can lead to unexpected 429 errors if you’re canceling requests and immediately trying to make new ones, as your concurrency limit might still be reached.
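    One way to avoid that situation is to cap in-flight requests on the client side with a semaphore. A sketch, assuming a plan limit of 5 concurrent requests:

```python
# pip install requests
import threading
import requests

API_URL = "https://api.zenrows.com/v1/"
MAX_CONCURRENCY = 5  # set this to (or below) your plan's limit

slots = threading.BoundedSemaphore(MAX_CONCURRENCY)

def limited_get(target_url: str, apikey: str):
    """Block until a slot is free, so no more than MAX_CONCURRENCY
    requests are ever in flight at once."""
    with slots:
        return requests.get(API_URL, params={"url": target_url, "apikey": apikey})
```

    Because the semaphore is only released when the HTTP call actually returns, canceled-but-still-processing requests keep their slot occupied, mirroring the server-side behavior described above.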

    Security System for Failing Requests

    ZenRows implements a security system that may temporarily ban your API key if you send too many failing requests in a short period. Types of failing requests that can trigger this include:

    • 429 Too Many Requests errors due to exceeding concurrency limits
    • Invalid API key errors
    • 400 Bad Request errors due to invalid parameters
    • Other repeated API errors

    If your API key gets temporarily banned, you’ll receive an error from the API. If failing requests continue, the IP address might be banned for a few minutes, and requests will not even reach the server.

    Monitoring Your Concurrency Usage

    You can monitor your concurrency usage through response headers:

    Concurrency-Limit: 200
    Concurrency-Remaining: 199
    X-Request-Cost: 0.001
    X-Request-Id: 67fa4e35647515d8ad61bb3ee041e1bb
    Zr-Final-Url: https://httpbin.io/anything

    These headers provide valuable information about your request:

    • Concurrency-Limit: Your maximum concurrent request limit based on your plan
    • Concurrency-Remaining: Number of additional concurrent requests you can make
    • X-Request-Cost: The cost of this request (varies based on enabled features)
    • X-Request-Id: Unique identifier for this request - essential when reporting issues to support
    • Zr-Final-Url: The final URL after any redirects that occurred during the request
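    When reporting an issue, those headers can be collected in one place; a small illustrative helper:

```python
def support_report(headers) -> str:
    """Gather the header values support asks for; X-Request-Id is essential."""
    return (
        f"Request-Id: {headers.get('X-Request-Id')}, "
        f"Cost: {headers.get('X-Request-Cost')}, "
        f"Final URL: {headers.get('Zr-Final-Url')}"
    )

# print(support_report(response.headers)) after any ZenRows request
```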

    Related questions

    How many concurrent requests are included in my plan?

    The concurrency limit varies by subscription plan:

    • Trial plan: 5 concurrent requests
    • Developer plan: 20 concurrent requests
    • Startup plan: 50 concurrent requests
    • Business plan: 100 concurrent requests
    • Business 500: 150 concurrent requests
    • Business 1K: 200 concurrent requests
    • Business 2K: 300 concurrent requests
    • Business 3K: 400 concurrent requests

    Enterprise plans can include custom concurrency limits to fit your needs. Contact us for tailor-made Enterprise solutions.

    If I get a “429 Too Many Requests” error, do I lose that request or is it queued?

    You’ll receive an error, and that request won’t be queued or retried automatically. You’ll need to manage retries on your end, ensuring you don’t exceed your concurrency limit.

    Can I increase my concurrency limit?

    Absolutely! We offer various plans with different concurrency limits to suit your needs. If you find yourself frequently hitting the concurrency limit, consider upgrading.

    How can I monitor my concurrency usage?

    When using the Universal Scraper API, each response includes these helpful headers:

    • Concurrency-Limit: Shows your maximum concurrent request limit
    • Concurrency-Remaining: Shows how many free concurrency slots you have available

    I’ve been blocked by repeatedly exceeding my concurrency limit. Why?

    Whenever you exceed your plan’s concurrency limit, you’ll start receiving “429 Too Many Requests” errors. If you keep sending requests over the limit in a short period of time, the service may temporarily block your IP address to prevent API misuse.

    The IP address ban lasts only a few minutes, but being blocked repeatedly may result in a longer-lasting block. Check the concurrency optimization section for more information on how to limit concurrent requests and avoid being blocked.

    Troubleshooting with Support

    When contacting ZenRows support for any issues with your requests, always include:

    1. The X-Request-Id from the response headers
    2. The exact error message you received
    3. The timestamp of when the error occurred
    4. Details about the parameters you used in the request

    This information, especially the Request ID, allows our support team to quickly locate your specific request in our logs and provide more effective assistance.

    Many websites tailor their content based on the visitor’s location. For example, Amazon displays different products and prices on its UK (.co.uk) and French (.fr) sites. If you’re scraping data from these sites, using a regional IP ensures that you receive the correct localized content. Also, some websites restrict access to their content based on geographic location.

    To avoid discrepancies caused by regional variations, such as different products being displayed on a retailer’s website, you can send a specific country code with your request. This ensures that your request is localized to the desired country, allowing you to obtain consistent and replicable results.

    ZenRows supports proxies from numerous countries around the world. You can use any country’s ISO code to configure your proxies.

    Here is the comprehensive list of premium proxy countries supported by ZenRows:

    af => Afghanistan
    al => Albania
    dz => Algeria
    ad => Andorra
    ao => Angola
    ai => Anguilla
    ag => Antigua and Barbuda
    ar => Argentina
    am => Armenia
    au => Australia
    at => Austria
    az => Azerbaijan
    bs => Bahamas
    bh => Bahrain
    bd => Bangladesh
    bb => Barbados
    by => Belarus
    be => Belgium
    bz => Belize
    bj => Benin
    bm => Bermuda
    bt => Bhutan
    bo => Bolivia, Plurinational State of
    ba => Bosnia and Herzegovina
    bw => Botswana
    br => Brazil
    bn => Brunei Darussalam
    bg => Bulgaria
    bf => Burkina Faso
    bi => Burundi
    cv => Cabo Verde
    kh => Cambodia
    cm => Cameroon
    ca => Canada
    ky => Cayman Islands
    cf => Central African Republic
    td => Chad
    cl => Chile
    cn => China
    co => Colombia
    km => Comoros
    cg => Congo
    cd => Congo, The Democratic Republic of the
    cr => Costa Rica
    ci => Cote D'ivoire
    hr => Croatia
    cu => Cuba
    cy => Cyprus
    cz => Czech Republic
    dk => Denmark
    dj => Djibouti
    do => Dominican Republic
    ec => Ecuador
    eg => Egypt
    sv => El Salvador
    gq => Equatorial Guinea
    er => Eritrea
    ee => Estonia
    et => Ethiopia
    fo => Faroe Islands
    fj => Fiji
    fi => Finland
    fr => France
    gf => French Guiana
    pf => French Polynesia
    ga => Gabon
    gm => Gambia
    ge => Georgia
    de => Germany
    gh => Ghana
    gi => Gibraltar
    gr => Greece
    gl => Greenland
    gp => Guadeloupe
    gu => Guam
    gt => Guatemala
    gn => Guinea
    gw => Guinea-Bissau
    gy => Guyana
    ht => Haiti
    hn => Honduras
    hk => Hong Kong
    hu => Hungary
    is => Iceland
    in => India
    id => Indonesia
    ir => Iran, Islamic Republic of
    iq => Iraq
    ie => Ireland
    im => Isle of Man
    il => Israel
    it => Italy
    jm => Jamaica
    jp => Japan
    jo => Jordan
    kz => Kazakhstan
    ke => Kenya
    kr => Korea, Republic of
    kw => Kuwait
    kg => Kyrgyzstan
    la => Lao People's Democratic Republic
    lv => Latvia
    lb => Lebanon
    ls => Lesotho
    lr => Liberia
    ly => Libya
    lt => Lithuania
    lu => Luxembourg
    mo => Macao
    mk => Macedonia, The Former Yugoslav Republic of
    mg => Madagascar
    mw => Malawi
    my => Malaysia
    mv => Maldives
    ml => Mali
    mt => Malta
    mq => Martinique
    mr => Mauritania
    mu => Mauritius
    mx => Mexico
    md => Moldova, Republic of
    mn => Mongolia
    me => Montenegro
    ma => Morocco
    mz => Mozambique
    mm => Myanmar
    na => Namibia
    np => Nepal
    nl => Netherlands
    nc => New Caledonia
    nz => New Zealand
    ni => Nicaragua
    ne => Niger
    ng => Nigeria
    no => Norway
    om => Oman
    pk => Pakistan
    ps => Palestine, State of
    pa => Panama
    pg => Papua New Guinea
    py => Paraguay
    pe => Peru
    ph => Philippines
    pl => Poland
    pt => Portugal
    pr => Puerto Rico
    qa => Qatar
    re => Reunion
    ro => Romania
    ru => Russia
    rw => Rwanda
    lc => Saint Lucia
    mf => Saint Martin (French Part)
    ws => Samoa
    sa => Saudi Arabia
    sn => Senegal
    rs => Serbia
    sc => Seychelles
    sl => Sierra Leone
    sg => Singapore
    sx => Sint Maarten (Dutch Part)
    sk => Slovakia
    si => Slovenia
    sb => Solomon Islands
    so => Somalia
    za => South Africa
    ss => South Sudan
    es => Spain
    lk => Sri Lanka
    sd => Sudan
    sr => Suriname
    sz => Swaziland
    se => Sweden
    ch => Switzerland
    sy => Syrian Arab Republic
    tw => Taiwan, Province of China
    tj => Tajikistan
    tz => Tanzania, United Republic of
    th => Thailand
    tl => Timor-Leste
    tg => Togo
    to => Tonga
    tt => Trinidad and Tobago
    tn => Tunisia
    tr => Turkey
    tm => Turkmenistan
    ug => Uganda
    ua => Ukraine
    ae => United Arab Emirates
    gb => United Kingdom
    us => United States
    uy => Uruguay
    uz => Uzbekistan
    vu => Vanuatu
    ve => Venezuela, Bolivarian Republic of
    vn => Viet Nam
    ye => Yemen
    zm => Zambia
    zw => Zimbabwe

    How to use it with each product:

    Universal Scraper API

    Incorporate the selected ISO code into your scraping script to route your requests through the chosen proxy.

    scraper.py
    # pip install requests
    import requests
    
    # Example: routing the request through a proxy in Canada
    params = {
        'url': 'https://httpbin.io/anything',  # target page
        'apikey': 'YOUR_ZENROWS_API_KEY',
        'premium_proxy': 'true',
        'proxy_country': 'ca',
    }
    
    response = requests.get('https://api.zenrows.com/v1/', params=params)
    print(response.text)

    Scraper APIs

    If the API offers a country option, you can add it similarly to the Universal Scraper API.

    # pip install requests
    import requests
    
    query_id_url = "example"
    api_endpoint = f"https://<INDUSTRY>.api.zenrows.com/v1/targets/<WEBSITE>/<TYPE_OF_REQUEST>/{query_id_url}"
    
    params = {
        "apikey": "YOUR_ZENROWS_API_KEY",
        "country": "us"  # Optional: Target specific country
    }
    
    response = requests.get(api_endpoint, params=params)
    print(response.text)

    Residential Proxies

    ZenRows supports IPs from a wide variety of countries, allowing you to access geo-restricted data with ease. You can specify a country by using country followed by the country code in your proxy URL.

    Example for Spain:

    http://<YOUR_USERNAME>:<YOUR_PASSWORD>_country-es@superproxy.zenrows.com:1337
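    Using that URL format from Python’s requests library (a sketch; the credentials are placeholders and the helper name is illustrative):

```python
# pip install requests
import requests

def build_proxies(username: str, password: str, country: str) -> dict:
    """Append the country code to the credentials, per the URL format above."""
    proxy = (f"http://{username}:{password}_country-{country}"
             "@superproxy.zenrows.com:1337")
    return {"http": proxy, "https": proxy}

# response = requests.get("https://httpbin.io/ip", proxies=build_proxies("USER", "PASS", "es"))
```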

    Scraping Browser

    There are two ways to set a country geolocation on the Scraping Browser, depending on whether you’re using the SDK.

    Without using the SDK, select a specific country by adding the proxy_country parameter to the WebSocket URL:

    wss://browser.zenrows.com?apikey=YOUR_ZENROWS_API_KEY&proxy_country=es
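    A small Python helper to build that URL (a sketch; the helper name is illustrative):

```python
from urllib.parse import urlencode

def browser_ws_url(apikey: str, proxy_country: str = "") -> str:
    """Build the Scraping Browser WebSocket URL, optionally pinned to a country."""
    query = {"apikey": apikey}
    if proxy_country:
        query["proxy_country"] = proxy_country
    return "wss://browser.zenrows.com?" + urlencode(query)
```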

    In SDK mode, specify the country when generating the connection URL:

    // scrapingBrowser: an initialized Scraping Browser SDK client
    const connectionURL = scrapingBrowser.getConnectURL({
        proxy: {
            location: ProxyCountry.ES
        },
    });

    By using the right proxy, you can ensure more reliable and geographically relevant data scraping while maintaining compliance with website policies.

    For further assistance or more detailed configuration, refer to the ZenRows documentation or contact our support team. Happy scraping!

    Will I be charged for requests that fail?

    Not at all. We only charge for successful requests :)

    Note that 404 Not Found and 410 Gone responses are considered successful (the target site responded) and are charged.

    Is there a limit on how many websites or requests I can scrape?

    You can extract data from as many websites as you want. Throw us 1M requests or 50M; we can perfectly handle it!

    Do I need to install any software to use ZenRows?

    Not at all. Our platform and infrastructure are cloud-based, making our language-agnostic API easy and seamless to use.
