Interested in building a scraper with ZenRows in Node.js? Follow this tutorial to learn how!

The steps:

  1. Retrieve your ZenRows API key.
  2. Install the ZenRows Node.js SDK.
  3. Perform your first request.
  4. Get the HTML content of any page.

Let’s jump into it!

How to Use ZenRows in Node.js

Before following the steps below, make sure you have Node.js 18+ installed on your machine. A JavaScript IDE such as IntelliJ IDEA or Visual Studio Code will also help.

If you’re new to web scraping, you might find our Node.js scraping guide useful.

From now on, we’ll assume you have a scraper.js Node.js script contained in the /scraper npm project folder.
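If you still need to create that project, the following commands set it up (a typical npm initialization, matching the names above):

mkdir scraper
cd scraper
npm init -y

Then add an empty scraper.js file to the folder.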

Step 1: Sign Up for ZenRows

To get your ZenRows API key, sign up. You’ll also receive 1,000 free credits and gain access to a 24/7 support chat with experienced developers.

If you already have an account, log in with your credentials.

After logging in, you’ll get redirected to the Request Builder page. This is where you can find your API key, as shown in the image below:

ZenRows Request Builder Page

Step 2: Install the ZenRows Node.js SDK

The Node.js SDK for ZenRows makes it easier to get started.

However, the service also supports two other connection modes:

  • API: Endpoints you can call with an HTTP client library to retrieve a site’s HTML or data.
  • Proxy: Servers you can route your HTTP requests through to hide your identity.
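This tutorial sticks with the SDK, but for reference, here’s a minimal sketch of the API mode using axios (this assumes the https://api.zenrows.com/v1/ endpoint; check the docs for the exact parameters):

scraper.js
// API mode sketch: call the ZenRows endpoint with any HTTP client
// (install axios first: npm install axios)
const axios = require("axios");

(async () => {
    const { data } = await axios.get("https://api.zenrows.com/v1/", {
        params: {
            apikey: "<YOUR_ZENROWS_API_KEY>",
            url: "https://httpbin.io/get",
        },
    });
    console.log(data);
})();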

To begin, run the following command in the /scraper project folder to install the ZenRows Node.js SDK:

npm install zenrows

Awesome, you just added the zenrows npm package to your project’s dependencies. It’s time to use it.

Step 3: Perform Your First API Request

In your scraper.js, import the ZenRows API client from the zenrows library and use it to make your first request, as in the snippet below. To verify that the API works as expected, target HTTPBin.io:

scraper.js
const { ZenRows } = require("zenrows");

(async () => {
    // initialize the ZenRows client with your API key
    const client = new ZenRows("<YOUR_ZENROWS_API_KEY>");

    // perform a GET request through ZenRows
    const { data } = await client.get("https://httpbin.io/get");

    console.log(data);
})();
Replace <YOUR_ZENROWS_API_KEY> with the API key retrieved in step 1.
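If you’d rather not hardcode the key, a common alternative is to read it from an environment variable, as in this minimal sketch:

scraper.js
// optional: read the API key from an environment variable
// run with: ZENROWS_API_KEY=<your_key> node scraper.js
const client = new ZenRows(process.env.ZENROWS_API_KEY);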

Launch your script:

node scraper.js

The output in the terminal will be similar to this:

{
    "args": {},
    "headers": {
        // omitted for brevity...
        "User-Agent": [
            "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/118.0.0.0 Safari/537.36"
        ]
    },
    "origin": "37.214.8.211:8823",
    "url": "http://httpbin.io/get"
}

The origin field in the response contains the IP your request originates from. As ZenRows automatically rotates the exit IP and User-Agent for you, you’ll see different values every time you execute the script.
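You can see the rotation in action with a quick sketch that fires two back-to-back requests and compares their origin values:

scraper.js
const { ZenRows } = require("zenrows");

(async () => {
    const client = new ZenRows("<YOUR_ZENROWS_API_KEY>");

    // two identical requests should exit through different IPs
    const first = await client.get("https://httpbin.io/get");
    const second = await client.get("https://httpbin.io/get");
    console.log(first.data.origin); // e.g., "37.214.8.211:8823"
    console.log(second.data.origin); // usually a different exit IP
})();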

Well done! You just saw how to get started with the ZenRows API client and use it to make a basic scraping request! Next, you’ll scrape a real website.

Step 4: Scrape Any Web Page

ZenRows’ default capabilities enable you to bypass common blocks, but you’ll need more power to scrape websites with advanced anti-bot systems, such as G2. So, what to do?

We recommend using the following features as a foundation:

  • Premium proxies (parameter: premium_proxy).
  • JavaScript rendering (parameter: js_render).

Enable them by passing a configuration object to the ZenRows client as in this snippet:

scraper.js
const { ZenRows } = require("zenrows");

(async () => {
    const client = new ZenRows("<YOUR_ZENROWS_API_KEY>");
    
    const url = "https://www.g2.com/products/jira/reviews";
    const params = {
      // enable JavaScript rendering in a headless browser
      js_render: "true",
      // enable a premium residential proxy
      premium_proxy: "true",
    };
    const { data } = await client.get(url, params);

    console.log(data);
})();

Run the script again, and it’ll print the HTML source code of the target page:

<!DOCTYPE html>
<head>
    <meta charset="utf-8" />
    <link href="https://www.g2.com/assets/favicon-fdacc4208a68e8ae57a80bf869d155829f2400fa7dd128b9c9e60f07795c4915.ico" rel="shortcut icon" type="image/x-icon" />
    <title>Jira Reviews 2023: Details, Pricing, &amp; Features | G2</title>
    <!-- omitted for brevity ... -->

Bye-bye anti-scraping systems! The ZenRows API request now returns the HTML content of the target page.

Mission complete! You just learned how to bypass tough anti-bot systems and access any site.

Tip: Explore our documentation to find out all the parameters supported by the ZenRows Node.js API client. For example, wait_for is a useful one.
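As a taste, wait_for takes a CSS selector and tells ZenRows to return the HTML only after that element appears (the selector below is a placeholder; adapt it to your target page):

scraper.js
const params = {
    js_render: "true", // wait_for requires JavaScript rendering
    premium_proxy: "true",
    // illustrative selector: wait for review cards to load
    wait_for: ".review-card",
};
const { data } = await client.get(url, params);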

The next step is to pass the HTML to a parsing library to start extracting specific data, as introduced in the What’s Next section.
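As a preview, here’s a minimal sketch with Cheerio (npm install cheerio) that extracts the page title from the data string returned above:

scraper.js
// load the scraped HTML into Cheerio and query it with CSS selectors
const cheerio = require("cheerio");

const $ = cheerio.load(data);
console.log($("title").text()); // e.g., "Jira Reviews 2023: ..."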

Troubleshooting

There are several reasons why an API request can fail. Usually, it’s just a matter of setting the right ZenRows parameters. However, in other cases, the reason behind the failure is something you don’t have control over, like the target server being temporarily down.

Here are some quick solutions you can try to tackle request failures:

A) Try the request again: Enable the retry capabilities offered by the ZenRows client via axios-retry. Specify the number of retries to perform on 429 and 5xx errors in the ZenRows client constructor:

scraper.js
const client = new ZenRows("<YOUR_ZENROWS_API_KEY>", { retries: 5 }); // up to 5 retries

B) Make sure the site is publicly accessible: Some pages are accessible only to logged-in users. Open the destination page in your browser’s incognito mode to verify that. If the page does require a session, you’ll need some extra work, as explained in our guide on how to scrape a site behind a login wall.

C) Verify the target page is available in your region: The premium proxy used by ZenRows might change with every request. If your target site is only accessible from specific countries or regions, set a specific country for the proxy server with the proxy_country parameter:

scraper.js
const params = {
    premium_proxy: "true",
    proxy_country: "us", // <- choose a premium proxy in the US
    // other configs...
};
const { data } = await client.get(url, params);

If none of those quick tips work for you, chat with one of our engineers on the Request Builder page.

ZenRows Request Builder Page

Pricing

Creating a ZenRows account will give you access to 1,000 free API credits. You’ll consume them via API requests only if they return the data you asked for. In other words, you’re charged for successful requests only.

After your trial, ZenRows plans start as low as $49/month. This entry-level plan provides you with 250,000 API credits. Since a basic request with no parameters consumes 1 credit, you can scrape 250,000 URLs successfully!

To maximize effectiveness, use the recommended js_render and premium_proxy parameters. With this setup, each request costs 25 API credits, which translates to 10,000 scraped URLs.

Check out our pricing page for more information.

What’s Next

Before getting into code for your ZenRows Node.js scraper, you may prefer to test a request to verify it works. That’s where the Request Builder comes to the rescue.

Log into ZenRows to reach the Request Builder page:

ZenRows Request Builder Page

Here, you can:

  1. Paste the URL of your target page.
  2. Create your request by enabling the different parameters offered by ZenRows (e.g., wait for an element to be on the page, specify particular JS instructions, or block specific resources).
  3. Select a programming language and specific connection mode. For general use, select “cURL” to get the complete API endpoint you can call with any HTTP client.
  4. Click the “Try It” button to test the request in the browser.
  5. Copy the auto-generated code and paste it into your script.

Keep learning by exploring our resources to discover more ZenRows features and possibilities:

  • Extract data from a page through data parsing: See how to use the css_extractor parameter to get the data of interest via CSS selectors directly from the API calls.
  • Integrate ZenRows with Axios and Cheerio: Retrieve the HTML content of pages by calling the ZenRows API endpoints in Axios (or any other HTTP client library). That’s ideal if you want to take advantage of the API and Proxy connection modes and don’t use the ZenRows SDK. Then, parse the HTML with Cheerio, the most popular Node.js HTML parser.
  • Interact with web pages: Use the js_render and js_instructions parameters to simulate user interactions on the target page via browser automation.
  • Scrape more efficiently with concurrent requests: Make multiple ZenRows API calls simultaneously to scrape data from many URLs at the same time, as in the sketch below.
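For instance, here’s a bare-bones concurrency sketch built on Promise.all (the URLs are placeholders; the client is the one from step 3):

scraper.js
const { ZenRows } = require("zenrows");

(async () => {
    const client = new ZenRows("<YOUR_ZENROWS_API_KEY>");

    // placeholder URLs: replace them with your target pages
    const urls = [
        "https://httpbin.io/get?page=1",
        "https://httpbin.io/get?page=2",
    ];

    // fire all requests in parallel and wait for every response
    const responses = await Promise.all(urls.map((u) => client.get(u)));
    responses.forEach(({ data }) => console.log(data.origin));
})();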