This guide covers fundamental troubleshooting techniques for ZenRows. Whether you’re just getting started or encountering unexpected errors, these tips will help you resolve common issues quickly.

Debugging API Responses

Retrieving Original Status Codes

When a target website returns an error, ZenRows handles it and returns a successful response by default. However, you can retrieve the original status code for debugging purposes:

// Include original_status in your request
import axios from 'axios';

const response = await axios.get('https://api.zenrows.com/v1/', {
  params: {
    apikey: YOUR_ZENROWS_API_KEY,
    url: TARGET_URL,
    original_status: true  // Return original status code
  }
});

console.log('Original status code:', response.status);
console.log('Response body:', response.data);

This helps you identify if the target site is returning errors like 403 Forbidden or 429 Too Many Requests.

You can find more information about the original_status parameter in the Original HTTP Code documentation.

Allowing Specific Status Codes

You can allow specific HTTP status codes to pass through as successful responses. This is useful for examining error pages or handling particular status codes in your application.

import axios from 'axios';

const response = await axios.get('https://api.zenrows.com/v1/', {
  params: {
    apikey: YOUR_ZENROWS_API_KEY,
    url: TARGET_URL,
    // Allow 404 and 500 responses to be treated as success
    allowed_status_codes: '404,500'
  }
});

console.log('Status code:', response.status);
console.log('Response body:', response.data);

With this parameter, even if the target returns a 404 Not Found or 500 Server Error, ZenRows will process the response and return it as a 200 OK. This is particularly useful when:

  • You’re debugging a website’s error page content
  • You need to capture and analyze specific error responses
  • You want to handle certain status codes differently in your application logic

The allowed_status_codes parameter accepts a comma-separated list of HTTP status codes that should be processed and returned as successful responses.

You can find more information about the allowed_status_codes parameter in the Return content on error documentation.

CSS Selector Challenges

Multiple Wait-For Selectors

The wait_for parameter is critical when scraping dynamic content, but it can be tricky to use correctly, especially with edge cases like empty results. You can specify multiple CSS selectors using standard CSS syntax:

import axios from 'axios';

const response = await axios.get('https://api.zenrows.com/v1/', {
  params: {
    apikey: YOUR_ZENROWS_API_KEY,
    url: TARGET_URL,
    js_render: true,
    // Wait for either product listings OR a "no results" message
    wait_for: '.product-listing, .no-results-message'
  }
});

This ensures your request completes when the expected content appears OR when a “no results” indicator is displayed, avoiding unnecessary timeouts.

Testing CSS Selectors Before Scraping

Before using CSS selectors in your scraping code, test them directly in the browser console to verify they work across different page states:

  1. Open the target page in your browser
  2. Open Developer Tools (F12, or ⌘+⌥+I on macOS)
  3. In the Console tab, test your selector:
    document.querySelectorAll('.your-selector')
    
  4. Verify it returns the expected elements
  5. Test with variations (empty results, different page layouts)

This simple check can save hours of debugging complex scraping issues.

Common Error Patterns and Solutions

Error 429: Too Many Requests

This status code indicates that you’ve reached your plan’s concurrency limit. Solutions include:

  1. Request queue implementation: Space out requests using a queue
  2. Concurrency monitoring: Check the concurrency-remaining response header
  3. Backoff strategy: Increase the delay between requests when 429 errors occur
  4. Do not cancel requests: Let each request complete rather than adding a client-side timeout. Aborting a request does not release its concurrency slot until the request finishes on the server side.
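The backoff strategy above can be sketched as a small retry wrapper. This is an illustrative helper, not part of the ZenRows SDK; `fetchWithBackoff` and its parameters are names chosen for this example:

```javascript
// Exponential backoff for 429 (concurrency limit) responses.
// fetchWithBackoff is an illustrative helper, not a ZenRows API.
const backoffDelay = (attempt, baseMs) => baseMs * 2 ** attempt;

async function fetchWithBackoff(requestFn, maxRetries = 3, baseMs = 1000) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await requestFn();
    } catch (err) {
      const status = err.response && err.response.status;
      // Only retry on 429, and only up to maxRetries times.
      if (status !== 429 || attempt >= maxRetries) throw err;
      // Delay doubles on each retry: 1s, 2s, 4s, ...
      await new Promise((resolve) => setTimeout(resolve, backoffDelay(attempt, baseMs)));
    }
  }
}
```

You would pass your axios call as `requestFn`, e.g. `fetchWithBackoff(() => axios.get('https://api.zenrows.com/v1/', { params }))`. Combining this with a request queue keeps concurrent requests under your plan's limit.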

Error 413: Content Too Large

This happens when the response exceeds your plan’s size limits. Solutions include:

  1. Using CSS selectors: Extract only the data you need with css_extractor
  2. Disable JSON response: If enabled, the json_response option can increase payload size, especially when internal API calls are captured. Disable it if you don’t need it.
  3. Pagination: For pages with infinite scroll or large data sets, split your scraping into multiple requests using pagination.
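For the first option, the css_extractor parameter takes a JSON-encoded map of field names to CSS selectors, so ZenRows returns only the extracted fields instead of the full page. A minimal sketch (the `buildExtractionParams` helper and the selectors are examples invented for illustration):

```javascript
// Build ZenRows request params that use css_extractor to return only the
// fields you need, keeping the response payload small.
// buildExtractionParams is an illustrative helper, not a ZenRows API.
function buildExtractionParams(apikey, url, selectors) {
  return {
    apikey,
    url,
    // css_extractor expects a JSON-encoded map of field -> CSS selector
    css_extractor: JSON.stringify(selectors)
  };
}

// Usage (placeholders, hypothetical selectors):
// const response = await axios.get('https://api.zenrows.com/v1/', {
//   params: buildExtractionParams(YOUR_ZENROWS_API_KEY, TARGET_URL, {
//     title: 'h1',
//     prices: '.product-price'
//   })
// });
```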

Error 422: Unprocessable Entity

This typically means anti-bot protection is blocking access. Solutions include:

  1. Enable JavaScript rendering: js_render: true to mimic real browser behavior.
  2. Use premium proxies: Set premium_proxy: true to route traffic through residential IPs.
  3. Adjust geolocation: Specify a country with proxy_country: 'us' or another region where the website is accessible.
  4. Wait for content to load: Use the wait_for or wait parameters to ensure dynamic content has fully rendered, especially on slower sites.
  5. Add a referer header: Include a custom header such as referer: 'https://www.google.com/' to mimic typical browser behavior.
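The options above can be combined in a single request configuration. A sketch, not an exhaustive anti-bot recipe; `buildAntiBotParams` is an illustrative helper and the wait_for selector is a placeholder you'd replace with one from your target page:

```javascript
// Combine anti-bot mitigations into one ZenRows params object.
// buildAntiBotParams is an illustrative helper, not a ZenRows API.
function buildAntiBotParams(apikey, url) {
  return {
    apikey,
    url,
    js_render: true,          // render JavaScript like a real browser
    premium_proxy: true,      // route traffic through residential IPs
    proxy_country: 'us',      // a region where the site is accessible
    wait_for: '.main-content' // hypothetical selector for rendered content
  };
}

// Usage: send a referer header alongside the params to mimic a browser.
// const response = await axios.get('https://api.zenrows.com/v1/', {
//   params: buildAntiBotParams(YOUR_ZENROWS_API_KEY, TARGET_URL),
//   headers: { referer: 'https://www.google.com/' }
// });
```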

Getting Support

If you’re still encountering issues after applying the steps above, ZenRows’ support team is ready to help.

  1. Prepare Request Details

    • Target URL
    • All parameters used in the request
    • The full error response
  2. Include Debug Information

    • Request ID from the x-request-id response header
    • Any relevant logs or error messages
    • Expected vs. actual results
  3. Contact Support
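When gathering debug information, the request ID can be read from the x-request-id response header on both successful responses and axios errors. A small illustrative helper (not a ZenRows API):

```javascript
// Pull the x-request-id header from an axios-style response or error so it
// can be included in a support request. Illustrative helper, not a ZenRows API.
function getRequestId(responseOrError) {
  // Axios errors carry the response under err.response; plain responses don't.
  const response = responseOrError.response || responseOrError;
  const headers = response.headers || {};
  return headers['x-request-id'] || null;
}

// Usage:
// try {
//   const response = await axios.get('https://api.zenrows.com/v1/', { params });
//   console.log('Request ID:', getRequestId(response));
// } catch (err) {
//   console.error('Request ID:', getRequestId(err)); // also works on axios errors
// }
```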

The ZenRows team will help you optimize your scraping strategy and resolve any technical challenges you’re facing.