Extract web data with enterprise-grade performance using Undici’s HTTP/1.1 client and ZenRows’ Universal Scraper API. This guide demonstrates how to leverage Undici’s superior performance with ZenRows’ robust scraping infrastructure to build high-speed web scraping applications.

What Is Undici?

Undici is a fast, reliable HTTP/1.1 client written from scratch for Node.js; it also powers Node’s built-in fetch implementation. It provides significant performance improvements over traditional HTTP clients such as Axios and node-fetch, making it ideal for high-throughput scraping applications. Undici supports advanced features like HTTP/1.1 pipelining, connection pooling, and multiple request APIs (request, fetch, stream, and pipeline).
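
To make the “multiple request APIs” point concrete, here’s a minimal sketch contrasting the low-level request API with the WHATWG-compatible fetch (the target URL is a placeholder):

Node.js
// npm install undici
import { request, fetch } from "undici";

// low-level API: resolves as soon as response headers arrive;
// the body is a stream you consume explicitly
const { statusCode, body } = await request("https://example.com");
console.log(statusCode, (await body.text()).length);

// WHATWG-compatible fetch, also built on Undici's dispatcher
const response = await fetch("https://example.com");
console.log(response.status, (await response.text()).length);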

Key Benefits of Integrating Undici with ZenRows

The undici-zenrows integration brings the following advantages:

  • Superior Performance: Undici delivers up to 3x better performance than traditional HTTP clients, making your scraping operations faster and more efficient.
  • Advanced Connection Management: Built-in connection pooling and HTTP/1.1 pipelining optimize resource usage for high-volume scraping (see the pool sketch after this list).
  • Enterprise-Grade Scraping Reliability: Combine Undici’s robust HTTP handling with ZenRows’ scraping infrastructure.
  • Multiple Request Methods: Choose from request, fetch, stream, or pipeline API methods based on your specific use case requirements.
  • Memory Efficient: Stream-based processing reduces memory overhead for large-scale scraping operations.
  • TypeScript Support: Full TypeScript support for a better development experience and type safety.
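
As a quick illustration of the connection-management point, here’s a minimal sketch that routes requests through a single Undici Pool with pipelining enabled (the pool settings are illustrative values, not tuned recommendations):

Node.js
// npm install undici
import { Pool } from "undici";

// one pool reuses sockets across requests; pipelining lets multiple
// requests share a socket before earlier responses have arrived
const pool = new Pool("https://api.zenrows.com", {
  connections: 10, // illustrative value
  pipelining: 2, // illustrative value
});

const query = new URLSearchParams({
  apikey: "YOUR_ZENROWS_API_KEY",
  url: "https://www.scrapingcourse.com/ecommerce/",
});

const { body } = await pool.request({
  method: "GET",
  path: `/v1/?${query}`,
});
console.log((await body.text()).length);
await pool.close();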

Getting Started: Basic Usage

Let’s start with a simple example that uses Undici to scrape the Antibot Challenge page through ZenRows’ Universal Scraper API.

Step 1: Install Undici

Install the undici package using npm:

npm install undici

Step 2: Set Up Your Project

You’ll need your ZenRows API key, which you can get from the Request Builder dashboard.

You can explore different parameter configurations using the Request Builder dashboard, then apply those settings in your Undici implementation. For a complete list of available parameters and their descriptions, refer to the API Reference documentation.

Create a new file named scraper.js and import the necessary modules.

Node.js
// npm install undici
import { request } from "undici";

// set your ZenRows API key
const ZENROWS_API_KEY = "YOUR_ZENROWS_API_KEY";
const targetUrl = "https://www.scrapingcourse.com/antibot-challenge";

Step 3: Make Your First Request

Use Undici’s request method to call ZenRows’ Universal Scraper API:

Node.js
// npm install undici
import { request } from "undici";

// set your ZenRows API key
const ZENROWS_API_KEY = "YOUR_ZENROWS_API_KEY";
const targetUrl = "https://www.scrapingcourse.com/antibot-challenge";

async function scrapeWithUndici() {
  try {
    const { body } = await request("https://api.zenrows.com/v1/", {
      method: "GET",
      query: {
        url: targetUrl,
        apikey: ZENROWS_API_KEY,
        js_render: "true",
        premium_proxy: "true",
      },
    });

    // get the HTML content
    const htmlContent = await body.text();
    console.log(htmlContent);
  } catch (error) {
    console.error("Scraping failed:", error.message);
  }
}

// execute the scraper
scrapeWithUndici();
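
One caveat worth knowing: unlike Axios, Undici does not throw on HTTP error status codes (4xx/5xx), so the catch block above only fires on network-level failures. A minimal sketch of an explicit status check:

Node.js
// npm install undici
import { request } from "undici";

const ZENROWS_API_KEY = "YOUR_ZENROWS_API_KEY";
const targetUrl = "https://www.scrapingcourse.com/antibot-challenge";

const { statusCode, body } = await request("https://api.zenrows.com/v1/", {
  method: "GET",
  query: {
    url: targetUrl,
    apikey: ZENROWS_API_KEY,
    js_render: "true",
    premium_proxy: "true",
  },
});

// undici resolves on 4xx/5xx instead of rejecting, so check the status yourself
if (statusCode !== 200) {
  throw new Error(`ZenRows returned HTTP ${statusCode}: ${await body.text()}`);
}
console.log(await body.text());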

Understanding the Parameters

In the code above, we’re using two key parameters to handle the antibot challenge:

  • js_render: "true": Activates JavaScript execution to handle dynamic content and render the page completely.
  • premium_proxy: "true": Routes requests through high-quality residential IP addresses to avoid detection.

These parameters work together to bypass sophisticated antibot systems. For a comprehensive list of all available parameters and their usage, check the parameter overview table.

Replace YOUR_ZENROWS_API_KEY with your actual API key and run the script:

node scraper.js

The script will successfully bypass the antibot and return the HTML content:

Output
<html lang="en">
<head>
    <!-- ... -->
    <title>Antibot Challenge - ScrapingCourse.com</title>
    <!-- ... -->
</head>
<body>
    <!-- ... -->
    <h2>
        You bypassed the Antibot challenge! :D
    </h2>
    <!-- other content omitted for brevity -->
</body>
</html>

Perfect! You’ve successfully integrated Undici with ZenRows to bypass antibot protection.

Complete Example: E-commerce Product Scraper

Now let’s build a complete scraper for an e-commerce site to demonstrate practical use cases with Undici and ZenRows.

Step 1: Scrape the E-commerce Site

Let’s start by scraping an e-commerce demo site to see how Undici handles real-world scenarios. We’ll extract the complete HTML content of the page:

Node.js
// npm install undici
import { request } from "undici";

// set your ZenRows API key
const ZENROWS_API_KEY = "YOUR_ZENROWS_API_KEY";
const targetUrl = "https://www.scrapingcourse.com/ecommerce/";

async function scrapeEcommerceSite() {
  try {
    const { body } = await request("https://api.zenrows.com/v1/", {
      method: "GET",
      query: {
        url: targetUrl,
        apikey: ZENROWS_API_KEY,
      },
    });

    // get the HTML content
    const htmlContent = await body.text();
    console.log(htmlContent);
  } catch (error) {
    console.error("Scraping failed:", error.message);
  }
}

// execute the scraper
scrapeEcommerceSite();

This will return the complete HTML content of the e-commerce page.

Output
<!DOCTYPE html>
<html lang="en-US">
<head>
    <!-- ... -->
    <title>Ecommerce Test Site to Learn Web Scraping - ScrapingCourse.com</title>
    <!-- ... -->
</head>
<body class="home archive ...">
    <p class="woocommerce-result-count">Showing 1-16 of 188 results</p>
    <ul class="products columns-4">
        <!-- ... -->
    </ul>
</body>
</html>

Step 2: Parse the Scraped Data

Now that you can successfully scrape the site, let’s extract specific product information using CSS selectors. We’ll use ZenRows’ css_extractor parameter to get structured data:

Node.js
// npm install undici
import { request } from "undici";

// set your ZenRows API key
const ZENROWS_API_KEY = "YOUR_ZENROWS_API_KEY";
const targetUrl = "https://www.scrapingcourse.com/ecommerce/";

async function scrapeEcommerceSite() {
  try {
    const { body } = await request("https://api.zenrows.com/v1/", {
      method: "GET",
      query: {
        url: targetUrl,
        apikey: ZENROWS_API_KEY,
        css_extractor: JSON.stringify({
          "product-names": ".product-name",
        }),
      },
    });

    // get the product data
    const productData = await body.json();
    console.log(productData);
  } catch (error) {
    console.error("Scraping failed:", error.message);
  }
}

// execute the scraper
scrapeEcommerceSite();

The response will be a JSON object containing the extracted product names:

Output
{
  "product-names": [
    "Abominable Hoodie",
    "Adrienne Trek Jacket",
    // ...
    "Ariel Roll Sleeve Sweatshirt",
    "Artemis Running Short"
  ]
}

Step 3: Export Data to CSV

Finally, save the scraped product data to a CSV file for further analysis:

Node.js
// npm install undici
import { request } from "undici";
import { writeFileSync } from "fs";

// set your ZenRows API key
const ZENROWS_API_KEY = "YOUR_ZENROWS_API_KEY";
const targetUrl = "https://www.scrapingcourse.com/ecommerce/";

async function scrapeEcommerceSite() {
  try {
    const { body } = await request("https://api.zenrows.com/v1/", {
      method: "GET",
      query: {
        url: targetUrl,
        apikey: ZENROWS_API_KEY,
        css_extractor: JSON.stringify({
          "product-names": ".product-name",
        }),
      },
    });

    // get the product data
    const productData = await body.json();
    const productNames = productData["product-names"];

    // create CSV content (escape embedded quotes per RFC 4180)
    let csvContent = "Product Name\n";
    productNames.forEach((name) => {
      csvContent += `"${name.replace(/"/g, '""')}"\n`;
    });

    // write to CSV file
    writeFileSync("products.csv", csvContent, "utf8");
    console.log(
      `Successfully exported ${productNames.length} products to products.csv`
    );
  } catch (error) {
    console.error("Scraping failed:", error.message);
  }
}

// execute the scraper
scrapeEcommerceSite();

Run the script and you’ll get a products.csv file containing all the product names. The first few rows will look like this:
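
Output
Product Name
"Abominable Hoodie"
"Adrienne Trek Jacket"
...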

Congratulations! You now have a complete working scraper that can extract product data from an e-commerce site and export it to a CSV file.

Next Steps

You now have a complete foundation for high-performance web scraping with Undici and ZenRows. Here are some recommended next steps to optimize your scraping operations:

  • Undici Documentation: Explore Undici’s advanced features like HTTP/1.1 pipelining, custom dispatchers, and performance optimizations.
  • Complete API Reference: Explore all available ZenRows parameters and advanced configuration options to customize your Undici requests for specific use cases.
  • JavaScript Instructions Guide: Learn how to perform complex page interactions like form submissions, infinite scrolling, and multi-step workflows using ZenRows’ browser automation.
  • Output Formats and Data Extraction: Master advanced data extraction with CSS selectors, convert responses to Markdown or PDF, and capture screenshots using Undici’s streaming capabilities (see the streaming sketch after this list).
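
As a taste of the streaming point above, here’s a minimal sketch that writes a scraped page straight to disk with Undici’s stream API, so the body is never buffered in memory (the output filename is illustrative):

Node.js
// npm install undici
import { stream } from "undici";
import { createWriteStream } from "fs";

const ZENROWS_API_KEY = "YOUR_ZENROWS_API_KEY";
const targetUrl = "https://www.scrapingcourse.com/ecommerce/";

// stream() pipes the response body into the Writable returned by
// the factory instead of buffering it all in memory
await stream(
  "https://api.zenrows.com/v1/",
  {
    method: "GET",
    query: { url: targetUrl, apikey: ZENROWS_API_KEY },
  },
  () => createWriteStream("page.html")
);
console.log("Saved page.html");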

Getting Help

Request failures can happen for various reasons when using Undici with ZenRows. For detailed troubleshooting guidance, visit our comprehensive troubleshooting guide and check Undici’s documentation for HTTP client-specific issues.

If you’re still facing issues despite following the troubleshooting tips, our support team is available to help you. Use the chatbox in the Request Builder dashboard or contact us via email to get personalized help from ZenRows experts.

When contacting support, always include the X-Request-Id from your response headers to help us diagnose issues quickly.
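
Undici returns response headers as a plain object with lowercased keys, so a minimal sketch for grabbing the request ID looks like this:

Node.js
// npm install undici
import { request } from "undici";

const { headers, body } = await request("https://api.zenrows.com/v1/", {
  method: "GET",
  query: {
    url: "https://www.scrapingcourse.com/ecommerce/",
    apikey: "YOUR_ZENROWS_API_KEY",
  },
});

// header names are lowercased, so X-Request-Id arrives as x-request-id
console.log("X-Request-Id:", headers["x-request-id"]);
await body.text(); // drain the body so the connection can be reused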

Frequently Asked Questions (FAQ)