Discover how to scrape data from any website using ZenRows’ Scraping Browser with Puppeteer. This comprehensive guide demonstrates how to create your first browser automation request capable of handling JavaScript-heavy sites and bypassing sophisticated anti-bot measures. ZenRows’ Scraping Browser offers cloud-hosted Chrome instances that integrate seamlessly with Puppeteer’s automation framework. From scraping dynamic content to performing complex browser interactions, you can build robust scraping solutions in minutes using Puppeteer’s intuitive API.

1. Set Up Your Project

Set Up Your Development Environment

Ensure you have the necessary development tools and Puppeteer installed before starting. The Scraping Browser supports both Node.js Puppeteer and Python Pyppeteer implementations.
We recommend using the latest stable versions to ensure optimal compatibility and access to the newest features.
Node.js 18+ installed (latest LTS version recommended). Consider using an IDE like Visual Studio Code or WebStorm for enhanced development experience.
    # Install Node.js (if not already installed)
    # Visit https://nodejs.org/ or use package managers:
    
    # macOS (using Homebrew)
    brew install node
    
    # Ubuntu/Debian (using NodeSource)
    curl -fsSL https://deb.nodesource.com/setup_lts.x | sudo -E bash -
    sudo apt-get install -y nodejs
    
    # Windows (using Chocolatey)
    choco install nodejs

    # Install Puppeteer Core
    npm install puppeteer-core
Need help with your setup? Check out our comprehensive Puppeteer web scraping guide.

Get Your API Key and Connection URL

Create a Free Account with ZenRows and retrieve your API key from the Scraping Browser Dashboard. This key authenticates your WebSocket connection to our cloud browsers.
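To avoid committing your API key to version control, you can load it from an environment variable when building the connection URL. This is a minimal sketch; the `ZENROWS_API_KEY` variable name and the `buildConnectionURL` helper are our own conventions, not requirements of ZenRows:

```javascript
// Build the Scraping Browser connection URL from an environment variable
// instead of hardcoding the API key in source code.
const buildConnectionURL = (apiKey = process.env.ZENROWS_API_KEY) => {
    if (!apiKey) {
        throw new Error('Missing API key: set the ZENROWS_API_KEY environment variable');
    }
    // URLSearchParams takes care of query-string encoding
    return `wss://browser.zenrows.com?${new URLSearchParams({ apikey: apiKey })}`;
};
```

You can then pass the result straight to `puppeteer.connect({ browserWSEndpoint: buildConnectionURL() })`.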

2. Make Your First Request

Begin with a basic request to familiarize yourself with how Puppeteer connects to the Scraping Browser. We’ll target the E-commerce Challenge page to demonstrate browser connection and title extraction.
// npm install puppeteer-core
const puppeteer = require('puppeteer-core');
// scraping browser connection URL
const connectionURL = 'wss://browser.zenrows.com?apikey=YOUR_ZENROWS_API_KEY';

const scraper = async () => {
    // connect to the scraping browser
    const browser = await puppeteer.connect({ browserWSEndpoint: connectionURL });
    const page = await browser.newPage();
    
    await page.goto('https://www.scrapingcourse.com/ecommerce/');
    console.log(await page.title());
    
    await browser.close();
};

scraper();
Replace YOUR_ZENROWS_API_KEY with your actual API key and execute the script:
node scraper.js
Expected Output: Your script will display the page title:
ScrapingCourse.com E-commerce Challenge
Excellent! You’ve successfully completed your first web scraping request using ZenRows Scraping Browser with Puppeteer.

3. Build a Real-World Scraping Scenario

Now let’s advance to a comprehensive scraping example by extracting product data from the e-commerce site. We’ll enhance our code to collect product names, prices, and URLs using Puppeteer’s robust element selection and data extraction capabilities.
// npm install puppeteer-core
const puppeteer = require('puppeteer-core');

// scraping browser connection URL
const connectionURL = 'wss://browser.zenrows.com?apikey=YOUR_ZENROWS_API_KEY';

const scraper = async (url) => {
    // connect to the scraping browser
    const browser = await puppeteer.connect({
        browserWSEndpoint: connectionURL,
    });
    const page = await browser.newPage();
    
    try {
        await page.goto(url, { waitUntil: 'networkidle2' });
        await page.waitForSelector('.product');
        
        // extract the desired data
        const data = await page.$$eval('.product', (products) =>
            products.map((product) => ({
                name: product.querySelector('.product-name')?.textContent.trim() || '',
                price: product.querySelector('.price')?.textContent.trim() || '',
                productURL: product.querySelector('.woocommerce-LoopProduct-link')?.href || '',
            }))
        );
        
        return data;
    } finally {
        await page.close();
        await browser.close();
    }
};

// execute the scraper function
(async () => {
    const url = 'https://www.scrapingcourse.com/ecommerce/';
    const products = await scraper(url);
    console.log(products);
})();

Run Your Application

Launch your script to verify the scraping functionality:
node scraper.js
Example Output: Your script will collect and display the product information:
[
    {
        "name": "Abominable Hoodie",
        "price": "$69.00",
        "productURL": "https://www.scrapingcourse.com/ecommerce/product/abominable-hoodie/"
    },
    {
        "name": "Artemis Running Short",
        "price": "$45.00",
        "productURL": "https://www.scrapingcourse.com/ecommerce/product/artemis-running-short/"
    }
    // ... additional products
]
Outstanding! 🎉 You’ve successfully implemented a production-ready scraping solution using Puppeteer and the ZenRows Scraping Browser.

4. Alternative: Using the ZenRows Browser SDK

For enhanced developer experience, consider using the ZenRows Browser SDK rather than manually managing WebSocket URLs. The SDK streamlines connection handling and offers additional development utilities.
The ZenRows Browser SDK is currently only available for JavaScript. For more details, see the GitHub Repository.

Install the SDK

Node.js
npm install @zenrows/browser-sdk

Quick Migration from WebSocket URL

Transitioning from direct WebSocket connections to the SDK requires minimal code changes.

Before (WebSocket URL):
Node.js
const puppeteer = require('puppeteer-core');
const connectionURL = 'wss://browser.zenrows.com?apikey=YOUR_ZENROWS_API_KEY';

const browser = await puppeteer.connect({ browserWSEndpoint: connectionURL });
After (SDK):
Node.js
const puppeteer = require('puppeteer-core');
const { ScrapingBrowser } = require('@zenrows/browser-sdk');

const scrapingBrowser = new ScrapingBrowser({ apiKey: 'YOUR_ZENROWS_API_KEY' });
const connectionURL = scrapingBrowser.getConnectURL();
const browser = await puppeteer.connect({ browserWSEndpoint: connectionURL });

Complete Example with SDK

Node.js
// npm install @zenrows/browser-sdk puppeteer-core
const puppeteer = require('puppeteer-core');
const { ScrapingBrowser } = require('@zenrows/browser-sdk');

const scraper = async () => {
    // Initialize SDK
    const scrapingBrowser = new ScrapingBrowser({ apiKey: 'YOUR_ZENROWS_API_KEY' });
    const connectionURL = scrapingBrowser.getConnectURL();
    
    const browser = await puppeteer.connect({ browserWSEndpoint: connectionURL });
    const page = await browser.newPage();
    
    await page.goto('https://www.scrapingcourse.com/ecommerce/');
    console.log(await page.title());
    
    await browser.close();
};

scraper();

SDK Benefits

  • Streamlined configuration: Eliminates manual WebSocket URL construction
  • Enhanced error handling: Provides detailed error messages and debugging capabilities
  • Future-proof architecture: Automatic updates to connection protocols and endpoints
  • Extended utilities: Access to helper functions and advanced configuration options
The SDK proves especially valuable in production environments where code maintainability and robust error handling are priorities.

How Puppeteer with Scraping Browser Helps

Integrating Puppeteer with ZenRows’ Scraping Browser delivers significant advantages for web automation:

Key Benefits

  • Cloud-hosted browser instances: Execute Puppeteer scripts on remote Chrome browsers, preserving local system resources for other applications.
  • Drop-in replacement: Point existing Puppeteer code at ZenRows by changing only the connection method; no architectural changes are required.
  • Full automation capabilities: Leverage Puppeteer’s complete feature set including form interactions, file handling, network monitoring, and custom JavaScript execution.
  • Automatic anti-detection: Benefit from built-in residential proxy rotation and authentic browser fingerprints without additional configuration.
  • Proven reliability: Cloud infrastructure delivers consistent performance without the complexity of local browser management.
  • Massive scalability: Execute up to 150 concurrent browser instances depending on your subscription plan.
  • Network optimization: Reduced latency and improved success rates through globally distributed infrastructure.

Troubleshooting

Common challenges when integrating Puppeteer with the Scraping Browser and their solutions:
1. Connection Refused

If you encounter Connection Refused errors, verify these potential causes:
  • API Key Validation: Confirm you’re using the correct API key from your dashboard.
  • Network Connectivity: Check your internet connection and firewall configurations.
  • WebSocket Endpoint: Ensure the WebSocket URL (wss://browser.zenrows.com) is properly formatted.
2. Timeout and Loading Issues

  • Use page.waitForSelector() to ensure elements are available before interaction
  • Extend timeout values for slow-loading websites
    Node.js
    await page.goto('https://example.com', { timeout: 60000 });  // 60 seconds
    
  • Validate CSS selectors using browser developer tools
  • Implement waitUntil: 'networkidle2' for dynamic content loading
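For intermittently slow pages, a simple retry wrapper can complement longer timeouts. This is a generic sketch; the `withRetry` helper and its defaults are our own, not part of Puppeteer or ZenRows:

```javascript
// Retry an async operation a few times before giving up, e.g.
// await withRetry(() => page.goto(url, { waitUntil: 'networkidle2' }));
const withRetry = async (operation, { retries = 3, delayMs = 1000 } = {}) => {
    let lastError;
    for (let attempt = 1; attempt <= retries; attempt++) {
        try {
            return await operation();
        } catch (error) {
            lastError = error;
            if (attempt < retries) {
                // wait briefly before the next attempt
                await new Promise((resolve) => setTimeout(resolve, delayMs));
            }
        }
    }
    throw lastError;
};
```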
3. Page Navigation Errors

  • Handle navigation exceptions with proper try-catch blocks
  • Ensure proper browser and page cleanup to prevent memory leaks
  • Use page.waitForNavigation() for multi-step workflows
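The cleanup advice above can be sketched as a small wrapper that always releases the page, even when navigation or extraction throws. The `withPage` helper name is our own:

```javascript
// Run a task against a fresh page and guarantee cleanup,
// even if navigation or extraction throws.
const withPage = async (browser, task) => {
    const page = await browser.newPage();
    try {
        return await task(page);
    } finally {
        // always close the page to avoid leaking browser resources
        await page.close();
    }
};

// Usage: const data = await withPage(browser, (page) => scrapeProducts(page, url));
```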
4. Geographic Restrictions

While ZenRows automatically rotates IP addresses, some websites implement location-based blocking. Consider adjusting regional settings for better access.
Learn more about geographic targeting in our Region Documentation and Country Configuration.
5. Get Help From ZenRows Experts

If challenges persist after implementing these solutions, our technical support team is ready to assist. Access help through the Scraping Browser dashboard or contact our support team for expert guidance.

Next Steps

You’ve established a strong foundation for Puppeteer-based web scraping with ZenRows. Continue your journey with these resources:

Frequently Asked Questions (FAQ)