Who is this for?
This guide is for developers who already have a price monitoring system and want to integrate ZenRows to improve scraping reliability and reduce blocking issues.
What you’ll learn
- Replace existing HTTP clients with ZenRows requests
- Migrate from HTML parsing to CSS extraction
- Optimize performance with concurrency controls
- Monitor and control scraping costs
- Scale across multiple regions
Prerequisites
- An existing price monitoring system
- A ZenRows API key (sign up here)
- Basic understanding of web scraping concepts
Integration Approaches
Choose the integration approach that best fits your current system:
Approach 1: Minimal Integration (HTTP Client Replacement)
Replace your current HTTP client with ZenRows while keeping your existing parsing logic.
- Minimal code changes required
- Immediate anti-bot protection
- Keep existing data processing logic
- Easy rollback if needed
Approach 2: Full Integration (CSS Extraction)
Replace both the HTTP client and the HTML parsing with ZenRows CSS extraction for cleaner, more maintainable code.
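The original before/after snippets are missing here, so this is a sketch of the "after" state. The `css_extractor` parameter (a JSON map of field names to CSS selectors) is part of the ZenRows API; the selectors and field names below are placeholders you would adapt to your target site.

```python
import json
import requests

ZENROWS_API = "https://api.zenrows.com/v1/"

# Placeholder selectors -- replace with the ones from your target pages.
PRICE_EXTRACTOR = {
    "name": ".product-title",
    "price": ".price-current",
    "availability": ".stock-status",
}

def fetch_product_data(product_url: str, api_key: str) -> dict:
    """Fetch structured product data; ZenRows applies the selectors
    server-side and returns JSON instead of raw HTML."""
    params = {
        "apikey": api_key,
        "url": product_url,
        "css_extractor": json.dumps(PRICE_EXTRACTOR),
    }
    response = requests.get(ZENROWS_API, params=params, timeout=60)
    response.raise_for_status()
    return response.json()
```

With this approach your BeautifulSoup/lxml dependency and the parsing functions built on it can be removed entirely.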
- Eliminates HTML parsing dependencies
- Cleaner, more maintainable code
- Built-in data extraction
- Reduced code complexity
Step-by-Step Integration
1
Assess Your Current Implementation
Before integrating ZenRows, analyze your current scraping setup to choose the best integration approach.
Identify your current components:
- HTTP client (requests, urllib, etc.)
- HTML parser (BeautifulSoup, lxml, etc.)
- Data extraction logic
- Error handling mechanisms
- Proxy management (if any)
Integration complexity assessment:
- Low complexity: simple requests + BeautifulSoup → use Approach 1
- Medium complexity: custom headers/sessions → use Approach 1 or 2
- High complexity: Selenium/complex proxy logic → use Approach 2
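The rule of thumb above can be expressed as a small helper. This is purely illustrative: the component names and return strings are hypothetical, not part of any ZenRows API.

```python
def assess_complexity(components: set) -> str:
    """Map a set of current stack components (lowercase names) to the
    suggested integration approach from the assessment above."""
    if components & {"selenium", "playwright", "puppeteer", "custom_proxy_pool"}:
        return "high: use Approach 2"
    if components & {"custom_headers", "sessions"}:
        return "medium: use Approach 1 or 2"
    return "low: use Approach 1"

print(assess_complexity({"requests", "beautifulsoup"}))      # low
print(assess_complexity({"requests", "custom_headers"}))     # medium
print(assess_complexity({"selenium", "custom_proxy_pool"}))  # high
```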
2
Replace HTTP Client
Start with the minimal integration approach by replacing your HTTP client with ZenRows.
For requests-based systems, point your existing calls at the ZenRows API endpoint.
For Puppeteer/Playwright-based systems, you can usually remove the browser entirely and enable ZenRows JavaScript rendering instead.
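The original snippets are missing from this copy; as a sketch, here is how a headless-browser fetch can collapse into a single ZenRows call. The `js_render` and `wait_for` parameters are documented ZenRows options; the function names and timeout are assumptions.

```python
import requests

ZENROWS_API = "https://api.zenrows.com/v1/"

def render_params(url: str, api_key: str, wait_for: str = "") -> dict:
    """Parameters for a JavaScript-rendered ZenRows request."""
    params = {"apikey": api_key, "url": url, "js_render": "true"}
    if wait_for:
        params["wait_for"] = wait_for  # wait for a selector rather than a fixed delay
    return params

def fetch_rendered_page(url: str, api_key: str, wait_for: str = "") -> str:
    # This one call replaces launching a browser, opening a page,
    # waiting for content, and reading page.content() in Playwright/Puppeteer.
    response = requests.get(
        ZENROWS_API, params=render_params(url, api_key, wait_for), timeout=90
    )
    response.raise_for_status()
    return response.text
```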
3
Migrate to CSS Extraction (Optional)
For cleaner code and better maintainability, replace HTML parsing with ZenRows CSS extraction. Identify your current selectors, then convert them into a CSS extractor definition.
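The original conversion example is missing here; the sketch below shows the idea. You collect the selectors currently scattered through your parsing code into one mapping and pass it as the `css_extractor` request parameter (a documented ZenRows option). The field names and selectors are illustrative.

```python
import json

# Before: selectors embedded in BeautifulSoup parsing code, e.g.
#   soup.select_one(".product-title").text
#   soup.select_one(".price-current").text
#
# After: the same selectors collected into one mapping that
# ZenRows applies server-side.
CSS_EXTRACTOR = {
    "title": ".product-title",
    "price": ".price-current",
}

# Pass it as a JSON string in the request parameters:
extractor_param = json.dumps(CSS_EXTRACTOR)
print(extractor_param)
```

Keeping the mapping in one place also makes selector updates a one-line change when a target site's markup shifts.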
Migration Checklist
Pre-Migration
- Document current scraping logic and selectors
- Identify proxy and header requirements
- Test ZenRows with sample requests
- Plan rollback strategy
Migration
- Replace HTTP client with ZenRows API calls
- Update error handling for ZenRows responses
- Test with production URLs
- Monitor request costs and concurrency
Post-Migration
- Remove old proxy management code
- Clean up unused HTML parsing dependencies
- Update monitoring and alerting
- Document the new ZenRows integration
Best Practices
Cost Management
- Set daily cost limits to prevent unexpected charges
- Monitor request costs with the `X-Request-Cost` header
- Use the `Concurrency-Limit` and `Concurrency-Remaining` response headers to optimize throughput and avoid IP block errors
- Cache results when appropriate to reduce requests
Reliability
- Implement retry logic with exponential backoff
- Monitor selector stability and update selectors as needed
- Use fallback selectors for critical data points
- Log all requests, responses, and errors for debugging and monitoring
Performance
- Use appropriate concurrency based on your plan limits
- Batch requests when monitoring multiple products
- Leverage geographic targeting for region-specific data
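The retry and monitoring practices above can be sketched together. The `X-Request-Cost` and `Concurrency-Remaining` header names come from this guide; the retryable status set, backoff schedule, and function names are our own assumptions.

```python
import time
import requests

ZENROWS_API = "https://api.zenrows.com/v1/"
RETRYABLE = {429, 500, 502, 503}  # assumed transient statuses worth retrying

def backoff_delays(max_retries: int = 4, base: float = 1.0) -> list:
    """Exponential backoff schedule: 1s, 2s, 4s, 8s by default."""
    return [base * (2 ** attempt) for attempt in range(max_retries)]

def fetch_with_backoff(url: str, api_key: str, max_retries: int = 4):
    for delay in backoff_delays(max_retries):
        response = requests.get(
            ZENROWS_API, params={"apikey": api_key, "url": url}, timeout=60
        )
        # Log cost and concurrency headers for monitoring.
        print("cost:", response.headers.get("X-Request-Cost"),
              "remaining:", response.headers.get("Concurrency-Remaining"))
        if response.status_code in RETRYABLE:
            time.sleep(delay)
            continue
        response.raise_for_status()
        return response
    raise RuntimeError(f"giving up on {url} after {max_retries} attempts")
```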
Troubleshooting
Selector Compatibility
- Test selectors in the ZenRows Request Builder before migration
- Some selectors may behave differently with JavaScript rendering
- Use more specific selectors if extraction returns unexpected data
Cost and Performance
- Disable `js_render` for static content to reduce costs
- Use `wait_for` instead of `wait` when possible
- Monitor the `Concurrency-Limit` and `Concurrency-Remaining` response headers to maximize throughput
Error Handling
- ZenRows returns different error codes; see the API Error Codes documentation for details
- Implement specific handling for rate limits (429) and quota exceeded (402)
- Log ZenRows-specific headers for debugging
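A minimal sketch of the error dispatch described above. The status codes 429 (rate limit) and 402 (quota exceeded) come from this guide; the exception classes and function name are our own.

```python
class RateLimited(Exception):
    """Raised on HTTP 429: back off before retrying."""

class QuotaExceeded(Exception):
    """Raised on HTTP 402: pause scraping or upgrade the plan."""

def check_zenrows_response(status_code: int, headers: dict) -> None:
    """Translate ZenRows error statuses into specific exceptions,
    logging the ZenRows headers that help diagnose the failure."""
    if status_code == 429:
        raise RateLimited(
            f"rate limited; remaining={headers.get('Concurrency-Remaining')}"
        )
    if status_code == 402:
        raise QuotaExceeded("plan quota exceeded")
    if status_code >= 400:
        raise RuntimeError(f"ZenRows error {status_code}: headers={headers}")
```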