Discover common automation patterns and real-world scenarios for using ZenRows’ Scraping Browser with Puppeteer and Playwright. These practical examples demonstrate how to leverage browser automation for a range of data extraction and interaction tasks.

The Scraping Browser excels at handling complex scenarios that traditional HTTP-based scraping cannot address. From capturing visual content to executing custom JavaScript, these use cases showcase the full potential of browser-based automation for your scraping projects.

Websites frequently change their structure, update CSS class names, or swap out HTML tags, so the selectors used in these examples (such as .product or .products) may stop working after a redesign. To keep your scraper reliable, review and update your selectors regularly.
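
As an illustration of one defensive approach, the minimal sketch below tries several candidate selectors and uses the first one that matches. The selector list is a hypothetical example, and the helper assumes a Puppeteer page object like the ones used throughout this guide.

const findProducts = async (page) => {
    // Candidate selectors are illustrative placeholders; adjust them to the target site
    const candidateSelectors = ['.product', '.products > li', '[data-product-id]'];
    for (const selector of candidateSelectors) {
        const count = await page.$$eval(selector, nodes => nodes.length);
        if (count > 0) {
            return { selector, count };
        }
    }
    // No candidate matched: surface the breakage instead of failing silently
    console.warn('No product selector matched; the page layout may have changed.');
    return { selector: null, count: 0 };
};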

Navigating and Extracting Page Content

Extract complete page content and metadata by navigating to target websites. This fundamental pattern forms the foundation for most scraping workflows and demonstrates how to retrieve both visible content and underlying HTML structure.

const puppeteer = require('puppeteer-core');
const connectionURL = 'wss://browser.zenrows.com?apikey=YOUR_ZENROWS_API_KEY';

const scraper = async () => {
    const browser = await puppeteer.connect({ browserWSEndpoint: connectionURL });
    const page = await browser.newPage();
    
    try {
        console.log('Navigating to target page...');
        await page.goto('https://www.scrapingcourse.com/ecommerce/', { 
            waitUntil: 'domcontentloaded' 
        });
        
        // Extract page metadata
        const title = await page.title();
        console.log('Page title:', title);
        
        // Get complete HTML content
        console.log('Extracting page content...');
        const html = await page.content();
        
        // Extract specific elements
        const productCount = await page.$$eval('.product', products => products.length);
        console.log(`Found ${productCount} products on the page`);
        
        // Extract text content from specific elements
        const headings = await page.$$eval('h1, h2, h3', elements => 
            elements.map(el => el.textContent.trim())
        );
        console.log('Page headings:', headings);
        
        return {
            title,
            productCount,
            headings,
            htmlLength: html.length
        };
    } finally {
        await browser.close();
    }
};

scraper().then(result => console.log('Extraction complete:', result));
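
The same pattern also works with Playwright, which the introduction mentions alongside Puppeteer. The sketch below is a minimal example that connects through Playwright's connectOverCDP; it assumes the same WebSocket endpoint accepts CDP connections from Playwright, so check the connection documentation for your account before relying on it.

const { chromium } = require('playwright-core');
const connectionURL = 'wss://browser.zenrows.com?apikey=YOUR_ZENROWS_API_KEY';

const playwrightScraper = async () => {
    // Connect to the remote browser over the Chrome DevTools Protocol
    const browser = await chromium.connectOverCDP(connectionURL);
    const page = await browser.newPage();

    try {
        await page.goto('https://www.scrapingcourse.com/ecommerce/', {
            waitUntil: 'domcontentloaded'
        });

        const title = await page.title();
        const productCount = await page.locator('.product').count();

        return { title, productCount };
    } finally {
        await browser.close();
    }
};

playwrightScraper().then(result => console.log('Playwright extraction:', result));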

Key Benefits

  • Complete content access: Retrieve both rendered content and raw HTML source
  • Metadata extraction: Access page titles, descriptions, and other document properties
  • Element counting: Quickly assess page structure and content volume
  • Structured data collection: Extract specific elements using CSS selectors

Taking Screenshots

Capture visual representations of web pages for monitoring, documentation, or visual verification purposes. Screenshots prove invaluable for debugging scraping workflows and creating visual records of dynamic content.

const puppeteer = require('puppeteer-core');
const connectionURL = 'wss://browser.zenrows.com?apikey=YOUR_ZENROWS_API_KEY';

const screenshotScraper = async () => {
    const browser = await puppeteer.connect({ browserWSEndpoint: connectionURL });
    const page = await browser.newPage();
    
    try {
        console.log('Navigating to target page...');
        await page.goto('https://www.scrapingcourse.com/ecommerce/', { 
            waitUntil: 'domcontentloaded' 
        });
        
        console.log('Page loaded:', await page.title());
        
        // Take full page screenshot
        console.log('Capturing full page screenshot...');
        await page.screenshot({ 
            path: 'full-page-screenshot.png',
            fullPage: true 
        });
        
        // Take viewport screenshot (uses default 1920x1080 viewport)
        console.log('Capturing viewport screenshot...');
        await page.screenshot({ 
            path: 'viewport-screenshot.png' 
        });
        
        // Take screenshot of specific element
        console.log('Capturing product grid screenshot...');
        const productGrid = await page.$('.products');
        if (productGrid) {
            await productGrid.screenshot({ 
                path: 'product-grid-screenshot.png' 
            });
        }
        
        // Take screenshot with custom clipping (alternative to viewport resizing)
        console.log('Capturing custom-sized screenshot...');
        await page.screenshot({
            path: 'custom-size-screenshot.png',
            type: 'jpeg',
            quality: 100,
            clip: { x: 0, y: 0, width: 1200, height: 800 }
        });
        
        console.log('All screenshots saved successfully');
        
    } finally {
        await browser.close();
    }
};

screenshotScraper();

Screenshot Options

  • Full page capture: Include content below the fold with fullPage: true
  • Element-specific screenshots: Target individual components or sections
  • Custom clipping: Focus on specific page areas using coordinate-based clipping
  • Format options: PNG (lossless) or JPEG (with quality control from 0-100)
  • Default viewport: Screenshots use the standard 1920x1080 viewport size

Screenshots are captured from the cloud browser and automatically transferred to your local environment. Large full-page screenshots may take additional time to process and download.
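
If you prefer to handle the image bytes yourself, for example to upload them to storage instead of writing them to disk, omit the path option and Puppeteer returns the screenshot data directly. A minimal sketch, assuming it runs inside the screenshotScraper function above where the page object is in scope:

const fs = require('fs/promises');

// Capture the screenshot as in-memory data instead of writing it straight to a file
const imageData = await page.screenshot({ fullPage: true, type: 'png' });

// Persist it locally, or forward it to storage or another service
await fs.writeFile('full-page-screenshot.png', imageData);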

Running Custom JavaScript Code

Execute custom JavaScript within the browser context to manipulate pages, extract computed values, or perform complex data transformations. This powerful capability enables sophisticated automation scenarios beyond standard element selection.

const puppeteer = require('puppeteer-core');
const connectionURL = 'wss://browser.zenrows.com?apikey=YOUR_ZENROWS_API_KEY';

const customJavaScriptScraper = async () => {
    const browser = await puppeteer.connect({ browserWSEndpoint: connectionURL });
    const page = await browser.newPage();
    
    try {
        console.log('Navigating to target page...');
        await page.goto('https://www.scrapingcourse.com/ecommerce/', { 
            waitUntil: 'domcontentloaded' 
        });
        
        // Extract page title using custom JavaScript
        const pageTitle = await page.evaluate(() => {
            return document.title;
        });
        console.log('Page title via JavaScript:', pageTitle);
        
        // Get page statistics
        const pageStats = await page.evaluate(() => {
            return {
                totalLinks: document.querySelectorAll('a').length,
                totalImages: document.querySelectorAll('img').length,
                totalForms: document.querySelectorAll('form').length,
                pageHeight: document.body.scrollHeight,
                viewportHeight: window.innerHeight,
                currentURL: window.location.href,
                userAgent: navigator.userAgent
            };
        });
        console.log('Page statistics:', pageStats);
        
        // Extract product data with custom processing
        const productData = await page.evaluate(() => {
            const products = Array.from(document.querySelectorAll('.product'));
            return products.map((product, index) => {
                const name = product.querySelector('.product-name')?.textContent?.trim();
                const priceText = product.querySelector('.price')?.textContent?.trim();
                
                // Custom price processing
                const priceMatch = priceText?.match(/\$(\d+(?:\.\d{2})?)/);
                const priceNumber = priceMatch ? parseFloat(priceMatch[1]) : null;
                
                return {
                    id: index + 1,
                    name: name || 'Unknown Product',
                    originalPrice: priceText || 'Price not available',
                    numericPrice: priceNumber,
                    isOnSale: product.querySelector('.sale-badge') !== null,
                    position: index + 1
                };
            });
        });
        console.log('Processed product data:', productData);
        
        // Scroll and capture dynamic content
        const scrollResults = await page.evaluate(async () => {
            // Scroll to bottom of page
            window.scrollTo(0, document.body.scrollHeight);
            
            // Wait for any lazy-loaded content
            await new Promise(resolve => setTimeout(resolve, 2000));
            
            return {
                finalScrollPosition: window.pageYOffset,
                totalHeight: document.body.scrollHeight,
                // Images still carrying a data-src attribute have not yet been swapped in
                pendingLazyImages: document.querySelectorAll('img[data-src]').length
            };
        });
        console.log('Scroll results:', scrollResults);
        
        // Inject custom CSS and modify page appearance
        await page.evaluate(() => {
            const style = document.createElement('style');
            style.textContent = `
                .product { border: 2px solid red !important; }
                .price { background-color: yellow !important; }
            `;
            document.head.appendChild(style);
        });
        
        console.log('Custom styles applied');
        
    } finally {
        await browser.close();
    }
};

customJavaScriptScraper();

JavaScript Execution Capabilities

  • Data extraction and processing: Transform raw data within the browser context
  • Page statistics collection: Gather comprehensive page metrics and analytics
  • Dynamic content interaction: Trigger JavaScript events and handle dynamic updates
  • Custom styling injection: Modify page appearance for testing or visual enhancement
  • Scroll automation: Navigate through infinite scroll or lazy-loaded content
  • Complex calculations: Perform mathematical operations on extracted data

Custom JavaScript execution runs within the browser’s security context, providing access to all DOM APIs and browser features available to the target website.
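
page.evaluate also accepts arguments from your Node.js code, as long as they are serializable, which keeps selectors and limits configurable without string concatenation. A minimal sketch, assuming it runs inside the function above where the page object is in scope (the selector and limit values are arbitrary examples):

// Pass serializable arguments from Node.js into the browser context
const selector = '.product';
const limit = 5;

const firstNames = await page.evaluate((sel, max) => {
    return Array.from(document.querySelectorAll(sel))
        .slice(0, max)
        .map(el => el.querySelector('.product-name')?.textContent?.trim() ?? '');
}, selector, limit);

console.log('First product names:', firstNames);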

PDF Generation and Document Export

Generate PDF documents from web pages for archival, reporting, or documentation purposes. This capability proves valuable for creating snapshots of dynamic content or generating reports from scraped data.

const puppeteer = require('puppeteer-core');
const connectionURL = 'wss://browser.zenrows.com?apikey=YOUR_ZENROWS_API_KEY';

const pdfGenerationScraper = async () => {
    const browser = await puppeteer.connect({ browserWSEndpoint: connectionURL });
    const page = await browser.newPage();
    
    try {
        console.log('Navigating to target page...');
        await page.goto('https://www.scrapingcourse.com/ecommerce/', { 
            waitUntil: 'domcontentloaded' 
        });
        
        console.log('Page loaded:', await page.title());
        
        // Generate basic PDF
        console.log('Generating basic PDF...');
        await page.pdf({
            path: 'basic-page.pdf',
            format: 'A4',
            printBackground: true
        });
        
        // Generate custom PDF with options
        console.log('Generating custom PDF...');
        await page.pdf({
            path: 'custom-page.pdf',
            format: 'A4',
            printBackground: true,
            margin: {
                top: '20mm',
                bottom: '20mm',
                left: '20mm',
                right: '20mm'
            },
            displayHeaderFooter: true,
            headerTemplate: '<div style="font-size: 10px; text-align: center; width: 100%;">E-commerce Scraping Report</div>',
            footerTemplate: '<div style="font-size: 10px; text-align: center; width: 100%;">Page <span class="pageNumber"></span> of <span class="totalPages"></span></div>'
        });
        
        // Generate PDF of a specific content area
        // page.pdf() has no clip option, so temporarily isolate the element instead
        console.log('Generating product grid PDF...');
        const productGrid = await page.$('.products');
        if (productGrid) {
            await page.evaluate(grid => {
                document.body.innerHTML = '';
                document.body.appendChild(grid);
            }, productGrid);
            await page.pdf({
                path: 'product-grid.pdf',
                format: 'A4',
                printBackground: true
            });
            // Restore the original page before generating the next PDF
            await page.goto('https://www.scrapingcourse.com/ecommerce/', {
                waitUntil: 'domcontentloaded'
            });
        }
        
        // Generate landscape PDF
        console.log('Generating landscape PDF...');
        await page.pdf({
            path: 'landscape-page.pdf',
            format: 'A4',
            landscape: true,
            printBackground: true,
            scale: 0.8
        });
        
        console.log('All PDFs generated successfully');
        
    } finally {
        await browser.close();
    }
};

pdfGenerationScraper();

PDF Generation Features

  • Multiple format support: Generate A4, Letter, Legal, and custom page sizes
  • Custom headers and footers: Add branding, page numbers, and metadata
  • Background preservation: Include CSS backgrounds and styling in PDFs
  • Margin control: Configure precise spacing and layout
  • Orientation options: Create portrait or landscape documents
  • Scale adjustment: Optimize content size for better readability

PDF generation works seamlessly with the cloud browser, automatically transferring generated files to your local environment while maintaining high quality and formatting.
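
As with screenshots, you can omit the path option to receive the PDF data directly, and you can define a custom page size with width and height instead of a named format. A minimal sketch, assuming it runs inside the function above where the page object is in scope:

const fs = require('fs/promises');

// Generate a PDF with an explicit page size and keep the result in memory
const pdfData = await page.pdf({
    width: '1200px',
    height: '1600px',
    printBackground: true
});

// Write it locally, or pass the data to storage or a reporting service
await fs.writeFile('custom-size-page.pdf', pdfData);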

Conclusion

ZenRows’ Scraping Browser transforms complex web automation challenges into straightforward solutions. These practical use cases demonstrate the platform’s versatility in handling everything from basic content extraction to sophisticated browser automation workflows.

Next Steps for Implementation

Start with the basic navigation and content extraction patterns to establish your foundation, then progressively incorporate advanced features like form interactions and network monitoring as your requirements evolve. The modular nature of these examples allows you to combine techniques for sophisticated automation workflows.

Consider implementing error handling and retry logic around these patterns for production deployments. The Scraping Browser’s consistent cloud environment reduces many common failure points, but robust error handling ensures reliable operation at scale.
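
As a starting point, a simple retry wrapper around any of the scraper functions above covers many transient failures. The sketch below retries a fixed number of times with a short delay between attempts; the attempt count and delay are arbitrary defaults.

// Retry an async task a fixed number of times before giving up
const withRetries = async (task, attempts = 3, delayMs = 2000) => {
    let lastError;
    for (let attempt = 1; attempt <= attempts; attempt++) {
        try {
            return await task();
        } catch (error) {
            lastError = error;
            console.warn(`Attempt ${attempt} failed: ${error.message}`);
            if (attempt < attempts) {
                await new Promise(resolve => setTimeout(resolve, delayMs));
            }
        }
    }
    throw lastError;
};

// Example: retry the basic extraction scraper from the first section
// withRetries(scraper).then(result => console.log('Extraction complete:', result));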

Frequently Asked Questions (FAQ)