This guide helps you resolve common web scraping issues by organizing solutions around specific use cases. Whether you’re dealing with dynamic content, geo-restrictions, or session management, these targeted troubleshooting techniques will get you back on track quickly.

Dynamic Content & Timing Issues

When scraping JavaScript-heavy sites, SPAs, or pages with delayed content loading, timing becomes critical.

Multiple Wait-For Selectors

The wait_for parameter handles dynamic content, but edge cases like empty results require careful selector planning:
const response = await axios.get('https://api.zenrows.com/v1/', {
  params: {
    apikey: YOUR_ZENROWS_API_KEY,
    url: TARGET_URL,
    js_render: true,
    // Wait for either product listings OR a "no results" message
    wait_for: '.product-listing, .no-results-message'
  }
});
This ensures your request completes when the expected content appears OR when a “no results” indicator is displayed, avoiding unnecessary timeouts.

Testing CSS Selectors Before Scraping

Before using CSS selectors in your scraping code, test them directly in the browser console:
  1. Open the target page in your browser
  2. Open Developer Tools (F12 or ⌘+⌥+I)
  3. In the Console tab, test your selector:
document.querySelectorAll('.your-selector')
  4. Verify it returns the expected elements across different page states

Anti-Bot Protection & Access Issues

When sites block your requests or return unexpected errors, these debugging techniques help identify the root cause.

Error 422: Unprocessable Entity

This typically means anti-bot protection is blocking access. Solutions include:
  1. Enable JavaScript rendering: js_render: true to mimic real browser behavior
  2. Use premium proxies: Set premium_proxy: true to route traffic through residential IPs
  3. Add a referer header: Include referer: 'https://www.google.com/' so the request appears to arrive from a search result rather than a direct visit
  4. Wait for content to load: Use wait_for or wait parameters for slower sites
If your success rate suddenly drops, it’s often due to new anti-bot challenges or region-specific blocks. Enabling js_render, switching proxy regions, or using premium_proxy: true typically resolves this.
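The fixes above can be combined into a single request. The sketch below only assembles the request URL so you can inspect it before sending; the target URL, the `custom_headers` flag gating the referer header, and the `.main-content` selector are illustrative assumptions, so adapt them to your setup:

```javascript
// Sketch: combine the anti-bot fixes from the list above into one request.
// YOUR_ZENROWS_API_KEY stands in for your real key; custom_headers is assumed
// to be the flag that forwards your own headers (here, referer).
const params = {
  apikey: 'YOUR_ZENROWS_API_KEY',
  url: 'https://example.com/protected-page', // placeholder target
  js_render: true,       // render JavaScript like a real browser
  premium_proxy: true,   // route through residential IPs
  custom_headers: true,  // forward the headers object below
  wait_for: '.main-content'
};
const headers = { referer: 'https://www.google.com/' };

// Build the final request URL for inspection; the actual call would be
// axios.get('https://api.zenrows.com/v1/', { params, headers })
const query = new URLSearchParams(
  Object.entries(params).map(([key, value]) => [key, String(value)])
);
const requestUrl = `https://api.zenrows.com/v1/?${query.toString()}`;
console.log(requestUrl);
```

Printing the assembled URL first is a cheap way to confirm every flag made it into the query string before you spend credits on a live request.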

Retrieving Original Status Codes

When a target website returns an error, ZenRows handles it gracefully by default. Use original_status to see what’s actually happening:
const response = await axios.get('https://api.zenrows.com/v1/', {
  params: {
    apikey: YOUR_ZENROWS_API_KEY,
    url: TARGET_URL,
    original_status: true  // Return the target site's original status code
  },
  // Prevent axios from throwing on non-2xx responses so you can inspect them
  validateStatus: () => true
});
console.log('Original status code:', response.status);
This helps you identify if the target site is returning errors like 403 Forbidden or 429 Too Many Requests.

Allowing Specific Status Codes

Examine error pages or handle specific status codes by allowing them to pass as successful responses:
const response = await axios.get('https://api.zenrows.com/v1/', {
  params: {
    apikey: YOUR_ZENROWS_API_KEY,
    url: TARGET_URL,
    // Allow 404 and 500 responses to be treated as success
    allowed_status_codes: '404,500'
  }
});

Region-Restricted Sites

Geographic restrictions require specific approaches to access localized content.

Error 422 with Geo-Restrictions

When content is region-locked, adjust your geographic approach:
  1. Specify target country: Use proxy_country: 'us' or another accessible region
  2. Enable premium proxies: Set premium_proxy: true for residential IPs
  3. Test different regions: Some content may be available in multiple countries
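The three steps above can be sketched as a small region-probing loop. This is not a definitive implementation: `fetchVia` is a placeholder for your actual ZenRows call, the region list and target URL are illustrative, and the assumption that `proxy_country` requires `premium_proxy` should be checked against your plan:

```javascript
// Sketch: try the same page from several proxy regions until one succeeds.
const REGIONS = ['us', 'gb', 'de']; // regions worth testing (illustrative)

function paramsFor(country) {
  return {
    apikey: 'YOUR_ZENROWS_API_KEY',
    url: 'https://example.com/region-locked', // placeholder target
    premium_proxy: true,      // residential IPs (assumed prerequisite for proxy_country)
    proxy_country: country
  };
}

async function firstAccessibleRegion(fetchVia) {
  for (const country of REGIONS) {
    const response = await fetchVia(paramsFor(country));
    if (response.status === 200) return country;
  }
  return null; // content not reachable from any tested region
}
```

Once a working region is found, pin `proxy_country` to it for subsequent requests rather than probing on every call.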

Performance & Resource Management

Optimize your scraping efficiency and manage API limits effectively.

Error 429: Too Many Requests

This indicates you’ve reached your concurrency limit. Solutions include:
  1. Request queue implementation: Space out requests using a queue
  2. Concurrency monitoring: Check the concurrency-remaining response header
  3. Backoff strategy: Increase delays when 429 errors occur
  4. Complete requests fully: Don’t cancel in-flight requests; let them finish so their concurrency slots are released
If you begin to notice a lower-than-usual success rate, monitor your concurrency usage and ensure requests are being spaced out appropriately. Sudden spikes can trigger errors due to the concurrency limit being reached.
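The backoff strategy from the list above can be sketched as follows. `sendRequest` is a placeholder for your ZenRows call; the base delay and attempt count are illustrative defaults, not recommended values:

```javascript
// Sketch: exponential backoff when the API answers 429 Too Many Requests.
function backoffDelays(baseMs, attempts) {
  // e.g. 1000, 2000, 4000, ... double the wait after each 429
  return Array.from({ length: attempts }, (_, i) => baseMs * 2 ** i);
}

async function requestWithBackoff(sendRequest, { baseMs = 1000, maxAttempts = 4 } = {}) {
  const delays = backoffDelays(baseMs, maxAttempts);
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const response = await sendRequest();
    if (response.status !== 429) return response; // success or a non-429 error
    // Optionally inspect the concurrency-remaining response header here
    await new Promise((resolve) => setTimeout(resolve, delays[attempt]));
  }
  throw new Error('Still rate-limited after ' + maxAttempts + ' attempts');
}
```

Because the loop always awaits the response before deciding, each request runs to completion and releases its concurrency slot, in line with point 4 above.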

Error 413: Content Too Large

When responses exceed your plan’s size limits:
  1. Use CSS selectors: Extract only needed data with css_extractor
  2. Disable JSON response: Remove json_response if not needed to reduce payload size
  3. Implement pagination: Split large data sets into multiple requests
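A minimal sketch of point 1, assuming `css_extractor` accepts a JSON string that maps output field names to CSS selectors; the field names, selectors, and target URL below are illustrative:

```javascript
// Sketch: shrink the response by extracting only the fields you need.
const extractionRules = {
  titles: '.product-title',   // illustrative selectors
  prices: '.product-price'
};

const params = {
  apikey: 'YOUR_ZENROWS_API_KEY',
  url: 'https://example.com/catalog', // placeholder target
  css_extractor: JSON.stringify(extractionRules)
  // note: json_response is deliberately omitted to keep the payload small
};
```

Returning two small arrays instead of the full rendered HTML is usually enough to stay under a plan's size limit without paginating.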

Session-Dependent Sites

For multi-step workflows requiring consistent session state.

Session Management Issues

When session state needs to persist across multiple requests:
  1. Use consistent session IDs: Apply the same session_id across related requests
  2. Maintain custom headers: Include necessary cookies and authentication headers
  3. Test session flow: Verify each step maintains the required state
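The steps above can be sketched as a shared-parameter helper. The session ID value, step URLs, and the use of `custom_headers` to carry cookies are illustrative assumptions; check your plan's documentation for the session TTL:

```javascript
// Sketch: reuse one session_id so related requests keep the same proxy IP.
const sessionId = 12345; // same value for every step of the workflow (illustrative)

function stepParams(stepUrl) {
  return {
    apikey: 'YOUR_ZENROWS_API_KEY',
    url: stepUrl,
    session_id: sessionId,
    custom_headers: true // keep sending your cookies/auth headers (assumption)
  };
}

// Each step of the multi-step flow shares the same session
const login = stepParams('https://example.com/login');
const checkout = stepParams('https://example.com/checkout');
```

Centralizing the parameters in one helper makes it hard to accidentally drop the `session_id` on a later step, which is the most common way session flows break.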

Getting Support

If you’re still encountering issues after applying these targeted solutions, ZenRows’ support team is ready to help. Before reaching out, prepare the following request details:
  • Target URL and all parameters used
  • Request ID from the X-Request-Id response header
  • Expected vs. actual results
Contact Support: Use live chat on the ZenRows website or email.