This guide will walk you through the steps to get started with ZenRows in Node.js, from installing the necessary packages to performing your first API request. ZenRows simplifies web scraping by handling anti-bot measures and rendering JavaScript-heavy sites, allowing you to focus on data extraction without worrying about site protection mechanisms. Let’s dive in!

How to Use ZenRows with Node.js

Before starting, ensure you have Node.js 18+ installed on your machine. Using an IDE like Visual Studio Code or IntelliJ IDEA will also enhance your coding experience.

We’ll create a Node.js script named scraper.js inside a /scraper directory. If you need help setting up your environment, check out our Node.js scraping guide for detailed instructions on preparing everything.

Install the Axios Library

To interact with the ZenRows API in a Node.js environment, you can use the popular HTTP client library, Axios. Axios simplifies making HTTP requests and handling responses, making it an ideal choice for integrating with web services such as ZenRows.

To install Axios in your Node.js project, run the following command in your terminal:

npm install axios

Once installed, you can start making API calls to ZenRows from your Node.js application while managing every aspect of the request and response cycle.

Perform Your First API Request

In this step, you will send your first request to ZenRows using the Axios library to scrape content from a simple URL. We will use the HTTPBin.io/get endpoint to demonstrate how ZenRows processes the request and returns the data.

Here’s an example:

scraper.js
// npm install axios
const axios = require('axios');

const url = 'https://httpbin.io/get';
const apikey = 'YOUR_ZENROWS_API_KEY';
axios({
    url: 'https://api.zenrows.com/v1/',
    method: 'GET',
    params: {
        'url': url,
        'apikey': apikey,
    },
})
    .then(response => console.log(response.data))
    .catch(error => console.log(error));

Replace YOUR_ZENROWS_API_KEY with your actual API key and run the script:

node scraper.js
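Under the hood, Axios serializes the params object into a query string appended to the ZenRows endpoint. If you want to preview the exact URL being requested, here is a small sketch using Node's built-in URLSearchParams (no ZenRows call is made; the API key is a placeholder):

```javascript
// Build the same query string Axios generates from the params object.
const params = new URLSearchParams({
    url: 'https://httpbin.io/get',
    apikey: 'YOUR_ZENROWS_API_KEY', // placeholder, not a real key
});

const requestUrl = `https://api.zenrows.com/v1/?${params.toString()}`;
console.log(requestUrl);
// → https://api.zenrows.com/v1/?url=https%3A%2F%2Fhttpbin.io%2Fget&apikey=YOUR_ZENROWS_API_KEY
```

Note that the target URL is percent-encoded so it survives as a single query parameter value.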

The script will print something similar to this:

{
    "args": {},
    "headers": {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/128.0.0.0 Safari/537.36",
        // additional headers omitted for brevity...
    },
    "origin": "38.154.5.224:6693",
    "url": "http://httpbin.io/get"
}

The response includes useful information such as the origin, which shows the IP address from which the request was made. ZenRows automatically rotates your IP address and adjusts the User-Agent for each request, ensuring anonymity and preventing blocks.

Congratulations! You’ve just made your first web scraping request with Node.js and Axios.

Scrape More Complex Web Pages

While scraping simple sites like HTTPBin is straightforward, many websites, especially those with dynamic content or strict anti-scraping measures, require additional features. ZenRows allows you to bypass these defenses by enabling JavaScript Rendering and using Premium Proxies.

For example, if you try to scrape a page like G2’s Jira reviews without any extra configurations, you’ll encounter an error:

{
    "code":"REQS002",
    "detail":"The requested URL domain needs JavaScript rendering and/or Premium Proxies due to its high-level security defenses. Please retry by adding 'js_render' and/or 'premium_proxy' parameters to your request.",
    "instance":"/v1",
    "status":400,
    "title":"Www.g2.com requires javascript rendering and premium proxies enabled (REQS002)",
    "type":"https://docs.zenrows.com/api-error-codes#REQS002"
}

This error happens because G2 employs advanced security measures that block basic scraping attempts.

Here’s how you can modify the request to enable both:

scraper.js
// npm install axios
const axios = require('axios');

const url = 'https://www.g2.com/products/jira/reviews';
const apikey = 'YOUR_ZENROWS_API_KEY';
axios({
    url: 'https://api.zenrows.com/v1/',
    method: 'GET',
    params: {
        'url': url,
        'apikey': apikey,
        'js_render': 'true',
        'premium_proxy': 'true',
    },
})
    .then(response => console.log(response.data))
    .catch(error => console.log(error));

Run the script, and this time, ZenRows will handle the heavy lifting by rendering the page’s JavaScript and routing your request through premium residential proxies. The response will contain the entire HTML content of the page:

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8">
    <title>Jira Reviews 2024: Details, Pricing, & Features | G2</title>
    <!-- omitted for brevity -->
</head>
<body>
    <!-- page content -->
</body>
</html>

This demonstrates how you can scrape more advanced websites that rely on JavaScript or have stringent anti-bot mechanisms in place.

Troubleshooting

Request failures can happen for various reasons. While some issues can be resolved by adjusting ZenRows parameters, others are beyond your control, such as the target server being temporarily down.

Below are some quick troubleshooting steps you can take:

1. Retry the Request

Network issues or temporary failures can cause your request to fail. Implementing retry logic can solve this by automatically repeating the request. Learn how to add retries in our Node.js Axios retry guide.

Example of retry logic using axios-retry:

scraper.js
// Retry interceptor function
import axiosRetry from 'axios-retry';
// Default Axios instance
import axios from 'axios';

// Pass the Axios instance to the retry function and call it
axiosRetry(axios, {
  retries: 3, // Number of retries (defaults to 3)
});

// Make Axios requests below
axios.get('https://scrapingcourse.com/ecommerce/') // The request is retried if it fails
  .then((response) => {
    console.log('Data: ', response.data);
  })
  .catch((error) => {
    console.log('Error: ', error);
  });
2. Verify the Site is Accessible in Your Country

Sometimes, the target site is region-restricted, so only proxies in certain countries can reach it. ZenRows automatically selects the best proxy, but if the site is available only in specific regions, specify a geolocation using proxy_country.

Here’s how to choose a proxy in the US:

params: {
    'premium_proxy': 'true',
    'proxy_country': 'us', // <- choose a premium proxy in the US
    // other configs...
},

If the target site requires access from a specific region, adding the proxy_country parameter will help.

Check out more about it on our Geolocation Documentation Page.
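Putting the fragment above into context, the geotargeting options simply extend the same params object used in the earlier scripts. A sketch of the merged configuration (the API key is a placeholder):

```javascript
// Base ZenRows parameters from the earlier G2 example (placeholder key).
const baseParams = {
    url: 'https://www.g2.com/products/jira/reviews',
    apikey: 'YOUR_ZENROWS_API_KEY',
    js_render: 'true',
};

// Extend them with premium proxies pinned to a US exit location.
const geoParams = {
    ...baseParams,
    premium_proxy: 'true',
    proxy_country: 'us',
};

console.log(geoParams);
```

As in the fragment above, proxy_country is used together with premium_proxy.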
3. Check if the Site is Publicly Accessible

Some websites may require a session, so verifying if the site can be accessed without logging in is a good idea. Open the target page in an incognito browser to check this.

If login credentials are required, you’ll need to handle session management in your requests. You can learn how to scrape a website that requires login in our guide: Web scraping with login in Python.
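If the site does require a login, one common pattern is to forward the session cookie from a logged-in browser along with the request. The sketch below only assembles the request configuration; the custom_headers parameter, the cookie value, and the account URL are assumptions to adapt to your own setup (check the ZenRows documentation for how header forwarding works on your plan):

```javascript
// Hypothetical session cookie copied from a logged-in browser session.
const sessionCookie = 'sessionid=YOUR_SESSION_COOKIE';

// Axios request configuration forwarding the cookie through ZenRows.
// 'custom_headers' is an assumption here; verify it against the ZenRows docs.
const requestConfig = {
    url: 'https://api.zenrows.com/v1/',
    method: 'GET',
    params: {
        url: 'https://example.com/account', // hypothetical page behind a login
        apikey: 'YOUR_ZENROWS_API_KEY',
        custom_headers: 'true',
    },
    headers: {
        Cookie: sessionCookie,
    },
};

console.log(requestConfig.headers.Cookie);
```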

4. Get Help From ZenRows Experts

If the issue persists despite following these tips, our support team is available to assist you. Use the Builder page or contact us via email to get personalized help from ZenRows experts.

Frequently Asked Questions (FAQ)