Welcome to ZenRows! This guide is for anyone new to web scraping or data extraction. We’ll walk you through everything you need to know, from creating an account to making your first successful API request. No technical experience is required to get started.

What is Web Scraping?

Web scraping is a method for automatically collecting information from websites. Instead of manually copying and pasting data, you can use tools that gather it for you, saving time and reducing human error.
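As a toy illustration of what "extracting data" means in code, here is a tiny, network-free sketch: locating one value inside a page's HTML. The HTML snippet and the price are made up for this example.

```javascript
// A tiny, hard-coded HTML snippet standing in for a downloaded page
const html = '<div class="product"><h2>Mechanical Keyboard</h2><span class="price">$79.99</span></div>';

// "Scraping" here is just locating the value we care about in the markup
const priceMatch = html.match(/<span class="price">([^<]+)<\/span>/);
const price = priceMatch ? priceMatch[1] : null;

console.log(price); // "$79.99"
```

Real pages are messier than this, which is where dedicated tools come in.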

Common uses for web scraping include:

  • Price monitoring across e-commerce platforms
  • Gathering contact information from business directories
  • Collecting data for market research
  • Extracting content for analysis

Why Use ZenRows?

Web scraping can be challenging, especially when dealing with JavaScript-heavy websites, anti-bot protections, or managing rotating proxies. ZenRows handles all of that complexity for you, offering:

  • Simplicity: Extract data with just a few lines of code
  • Reliability: Successfully scrape sites with anti-bot protections
  • Efficiency: Save time and resources on infrastructure management
  • Scalability: Easily scale your scraping operations as your needs grow

Step 1: Create Your ZenRows Account

Let’s start by creating your ZenRows account:

  1. Go to ZenRows’ registration page
  2. Enter your email address
  3. Choose a secure password
  4. Click “Sign up”
  5. Verify your email address

Once you’re verified and onboarded, you’ll be redirected to one of our products, where you can access your personal API key.

The API key is your authentication credential for using ZenRows services.
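A common practice (general hygiene, not a ZenRows requirement) is to keep the key out of your source code and read it from an environment variable instead. The variable name ZENROWS_API_KEY below is just our own choice for this sketch.

```javascript
// Read the API key from the environment rather than hard-coding it.
// ZENROWS_API_KEY is a name we picked for this example.
const API_KEY = process.env.ZENROWS_API_KEY || '<YOUR_ZENROWS_API_KEY>';

if (API_KEY === '<YOUR_ZENROWS_API_KEY>') {
  console.warn('Set ZENROWS_API_KEY before making real requests.');
}
```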

Step 2: Understanding ZenRows Products

ZenRows offers multiple tools depending on your use case:

  1. Universal Scraper API: The easiest way to extract data from any website (recommended for beginners).
  2. Scraper APIs: Pre-built extractors for specific industries (e.g., e-commerce and real estate).
  3. Scraping Browser: Ideal solution for those using Puppeteer or Playwright.
  4. Residential Proxies: Direct access to our proxy network for custom solutions.

To get started, head to the Builder page in your dashboard. Enter the URL you want to scrape, and click “Try it”. You’ll see results immediately, and the Builder will generate code snippets in multiple languages.

Step 3: Making Your First Request

Let’s make your first request using the Universal Scraper API. We’ll start with a simple example that extracts data from a webpage. Ensure you have your API key ready from the dashboard.

First, install the necessary package:

npm install axios

Then create a file named first-scrape.js with the following code:

const axios = require('axios');

async function scrapeWebsite() {
  try {
    // Replace with your actual API key
    const API_KEY = '<YOUR_ZENROWS_API_KEY>';
    
    // The website you want to scrape
    const TARGET_URL = 'https://example.com/';

    // Make request to ZenRows API
    const response = await axios.get('https://api.zenrows.com/v1/', {
      params: {
        apikey: API_KEY,
        url: TARGET_URL,
      }
    });

    console.log('Successful response!');
    console.log('HTTP Status:', response.status);
    console.log('Content length:', response.data.length);
    console.log('First 300 characters of content:');
    console.log(`${response.data.substring(0, 300)}...`);

    return response.data;
  } catch (error) {
    console.error('Error occurred:', error.message);
    if (error.response) {
      console.error('Response status:', error.response.status);
      console.error('Response data:', error.response.data);
    }
  }
}

// Call the function
scrapeWebsite();

Replace '<YOUR_ZENROWS_API_KEY>' with your actual API key from the dashboard, then run the script:

node first-scrape.js

If everything works correctly, you should see the HTML content from the target website in the console output!
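If you prefer a sanity check in code rather than eyeballing the console, a small helper like this (our own sketch, not part of any ZenRows SDK) can flag empty or non-HTML responses:

```javascript
// Rough sanity check on a scrape result: status OK and body looks like HTML.
function looksLikeHtml(status, body) {
  return status === 200 &&
    typeof body === 'string' &&
    body.trim().toLowerCase().startsWith('<');
}

console.log(looksLikeHtml(200, '<!DOCTYPE html><html></html>')); // true
console.log(looksLikeHtml(200, ''));                             // false
```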

Step 4: Handling JavaScript-Rendered Content

Many modern websites use JavaScript to load content dynamically. This means the data you want might not be in the initial HTML. Let’s see how to handle this using ZenRows.

Modify your code to include the js_render parameter:

const TARGET_URL = 'https://httpbin.io/xhr/get';

const response = await axios.get('https://api.zenrows.com/v1/', {
  params: {
    apikey: API_KEY,
    url: TARGET_URL, // JS-rendered content
    js_render: true, // Enable JavaScript rendering
  }
});

The js_render parameter enables ZenRows to render JavaScript, so you can extract content that loads dynamically.

For more information, check our JS Render Documentation.
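One way to keep these options tidy as they accumulate is a small helper that assembles the query parameters. This is just our own sketch around the documented apikey, url, and js_render parameters:

```javascript
// Build the query parameters for a ZenRows request.
// Only apikey and url are required; js_render is opt-in.
function buildParams(apiKey, targetUrl, { jsRender = false } = {}) {
  const params = { apikey: apiKey, url: targetUrl };
  if (jsRender) params.js_render = true;
  return params;
}

const params = buildParams('<YOUR_ZENROWS_API_KEY>', 'https://httpbin.io/xhr/get', { jsRender: true });
console.log(params);
// { apikey: '<YOUR_ZENROWS_API_KEY>', url: 'https://httpbin.io/xhr/get', js_render: true }
```

You can pass the returned object directly as the axios `params` option.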

Step 5: Handling Protected Websites

If the website you’re scraping uses anti-bot techniques (like Cloudflare or Akamai), you can activate residential proxies on your requests. ZenRows excels at bypassing these protections automatically.

Modify your code to include the premium_proxy parameter:

const response = await axios.get('https://api.zenrows.com/v1/', {
  params: {
    apikey: API_KEY,
    url: TARGET_URL,
    js_render: true,     // Enable JavaScript rendering
    premium_proxy: true, // Use residential proxies
  }
});

The premium_proxy parameter enables ZenRows to use residential proxy IPs instead of the default datacenter proxy IPs, which helps avoid being blocked by anti-bot systems and significantly increases your success rate.

For more information, check our Premium Proxies Documentation.
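Since residential proxies consume more credits, a common pattern (our own suggestion, not a ZenRows requirement) is to try a plain request first and only enable premium_proxy on a retry. The escalation decision is pure logic you can adapt:

```javascript
// Decide which options to use for a given attempt:
// attempt 0 -> cheaper defaults, attempt 1+ -> escalate to residential proxies.
function optionsForAttempt(attempt) {
  const opts = { js_render: true };
  if (attempt > 0) opts.premium_proxy = true;
  return opts;
}

console.log(optionsForAttempt(0)); // { js_render: true }
console.log(optionsForAttempt(1)); // { js_render: true, premium_proxy: true }
```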

Step 6: Using CSS Selectors

By default, the Universal Scraper API returns the entire HTML content of the page. However, you can use CSS selectors to extract specific parts of the page.

const TARGET_URL = 'https://example.com/';

const CSS_EXTRACTOR = {
  title: 'h1',
  content: 'p:first-of-type'
};

const response = await axios.get('https://api.zenrows.com/v1/', {
  params: {
    apikey: API_KEY,
    url: TARGET_URL,
    css_extractor: JSON.stringify(CSS_EXTRACTOR),
  }
});

console.log(response.data);

For more information, check our CSS Extractor Documentation.
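With an extractor like the one above, the API returns JSON keyed by your selector names instead of raw HTML. Mocking that shape locally (the exact values below are made up for illustration):

```javascript
// The css_extractor value must be sent as a JSON string, not a raw object.
const CSS_EXTRACTOR = { title: 'h1', content: 'p:first-of-type' };
const serialized = JSON.stringify(CSS_EXTRACTOR);

// A response for a page matching these selectors would resemble this shape:
const mockResponse = { title: 'Example Domain', content: 'A sample first paragraph.' };

console.log(JSON.parse(serialized));    // round-trips to the original extractor object
console.log(Object.keys(mockResponse)); // [ 'title', 'content' ]
```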

The Universal Scraper API also offers other output options, such as plain text, Markdown, and screenshots.

Step 7: Saving Extracted Data

To save your results for future use, write them to a file:

const fs = require('fs');

// Inside your function, after the API call
fs.writeFileSync('content.json', JSON.stringify(response.data, null, 2));
console.log('Data saved to content.json');

This will store the extracted content as a nicely formatted JSON file.

You’re Ready to Scrape!

You now have a complete workflow:

  1. Sign up
  2. Grab your API key
  3. Use ZenRows to scrape static or JavaScript-heavy sites
  4. Extract and save the data you need

ZenRows handles the complexity behind the scenes, so you can focus on what matters: the data.

👉 Check out our full API reference for advanced options, examples, and best practices.