Make Your First Request with ZenRows' Universal Scraper API
Learn how to extract data from any website using ZenRows’ Universal Scraper API. This guide walks you through creating your first scraping request that can handle sites at any scale.
ZenRows’ Universal Scraper API is designed to simplify web scraping. Whether you’re dealing with static content or dynamic JavaScript-heavy sites, you can get started in minutes with any programming language that supports HTTP requests.
1. Set Up Your Project
Set Up Your Development Environment
Before diving in, ensure you have the proper development environment and required HTTP client libraries for your preferred programming language. ZenRows works with any language that can make HTTP requests.
Python 3 is recommended, preferably the latest version. Consider using an IDE like PyCharm or Visual Studio Code with the Python extension.
If you need help setting up your environment, check out our detailed Python web scraping setup guide.
Node.js 18 or higher is recommended, preferably the latest LTS version. Consider using an IDE like Visual Studio Code or IntelliJ IDEA to enhance your coding experience.
If you need help setting up your environment, check out our detailed Node.js scraping guide.
Java 8 or higher is recommended, preferably the latest LTS version, and a build tool like Maven or Gradle. IDEs like IntelliJ IDEA or Eclipse provide excellent Java development support.
For Maven projects, add this dependency to your `pom.xml`:
PHP 7.4 or higher with cURL extension enabled is recommended, preferably the latest stable version. Consider editors like PhpStorm or Visual Studio Code with PHP extensions.
PHP comes with cURL built-in, so no additional packages are needed for basic HTTP requests.
Go 1.16 or higher is recommended, preferably the latest stable version. Visual Studio Code with the Go extension or GoLand provide excellent Go development environments.
Go’s standard library includes the `net/http` package, so no additional dependencies are needed for HTTP requests.
Ruby 2.7 or higher is recommended, preferably the latest stable version. Consider editors like RubyMine or Visual Studio Code with Ruby extensions.
cURL is typically pre-installed on most systems. If not available, install it using your system’s package manager. No additional packages are needed: cURL works directly from the command line.
Get Your API Key
Sign up for a free ZenRows account and get your API key from the Builder dashboard. You’ll need this key to authenticate your requests.
2. Make Your First Request
Start with a simple request to understand how ZenRows works. We’ll use the HTTPBin.io/get endpoint to demonstrate how ZenRows processes requests and returns data.
Replace `YOUR_ZENROWS_API_KEY` with your actual API key and run the script:
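A minimal version of that script might look like the following sketch, using Python’s `requests` library and the query-parameter style of the ZenRows API endpoint (endpoint URL and parameter names assumed from common usage; check the API reference for your account):

```python
import requests

# Assumed request shape for ZenRows' Universal Scraper API.
params = {
    "url": "https://httpbin.io/get",   # the target page to scrape
    "apikey": "YOUR_ZENROWS_API_KEY",  # your key from the Builder dashboard
}

try:
    response = requests.get("https://api.zenrows.com/v1/", params=params, timeout=30)
    print(response.text)
except requests.RequestException as exc:
    print(f"Request failed: {exc}")
```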
Expected Output
The script will print the contents of the website. For HTTPBin.io/get, that’s a JSON document echoing your request’s headers, origin IP, and URL.
Perfect! You’ve just made your first web scraping request with ZenRows.
3. Scrape More Complex Websites
Modern websites often use JavaScript to load content dynamically and employ sophisticated anti-bot protection. ZenRows provides powerful features to handle these challenges automatically.
Use the Request Builder in your ZenRows dashboard to easily configure and test different parameters. Enter the target URL (for this demonstration, the Anti-bot Challenge page) in the URL to Scrape field to get started.
Use Premium Proxies
Premium Proxies provide access to over 55 million residential IP addresses across 190+ countries with 99.9% uptime, letting you bypass sophisticated anti-bot protection.
Enable JavaScript Rendering
JavaScript Rendering uses a real browser to execute JavaScript and capture the fully rendered page. This is essential for modern web applications, single-page applications (SPAs), and sites that load content dynamically.
Combine Features for Maximum Success
For the most protected sites, enable both JavaScript Rendering and Premium Proxies. This provides the highest success rate for challenging targets.
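A hedged sketch of such a request in Python, with both parameters enabled (the Anti-bot Challenge URL below is an assumed demo target; substitute your own):

```python
import requests

params = {
    "url": "https://www.scrapingcourse.com/antibot-challenge",  # assumed demo URL
    "apikey": "YOUR_ZENROWS_API_KEY",
    "js_render": "true",      # execute JavaScript in a real browser
    "premium_proxy": "true",  # route through residential IPs
}

try:
    response = requests.get("https://api.zenrows.com/v1/", params=params, timeout=60)
    print(response.text)
except requests.RequestException as exc:
    print(f"Request failed: {exc}")
```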
This code sends a GET request to the ZenRows API endpoint with your target URL and authentication. The `js_render` parameter enables JavaScript processing, while `premium_proxy` routes your request through residential IP addresses.
Run Your Application
Execute your script to test the scraping functionality and verify that your setup works correctly.
Example Output
Run the script, and ZenRows will handle the heavy lifting by rendering the page’s JavaScript and routing your request through premium residential proxies. The response will contain the entire HTML content of the page:
Congratulations! You now have a ZenRows integration that can scrape websites at any scale while bypassing anti-bot protection. You’re ready to tackle more advanced scenarios and customize the API to fit your scraping needs.
Troubleshooting
Request failures can happen for various reasons. While some issues can be resolved by adjusting ZenRows parameters, others are beyond your control, such as the target server being temporarily down.
Below are some quick troubleshooting steps you can take:
Check the Error Code and Error Message
When faced with an error, first check the error code and message for indications of the cause. The most common error codes are:
- `401 Unauthorized`: Your API key is missing, incorrect, or improperly formatted. Double-check that you are sending the correct API key in your request headers.
- `429 Too Many Requests`: You have exceeded your concurrency limit. Wait for ongoing requests to finish before sending new ones, or consider upgrading your plan for higher limits.
- `413 Content Too Large`: The response size exceeds your plan’s limit. Use CSS selectors to extract only the needed data, reducing the response size.
- `422 Unprocessable Entity`: Your request contains invalid parameter values, or anti-bot protection is blocking access. Review the API documentation to ensure all parameters are correct and supported.
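For the `413` case, a CSS-extraction parameter can trim the response to just the fields you need. A sketch, assuming ZenRows’ `css_extractor` parameter takes a JSON map of output names to selectors (with `@href` pulling an attribute); verify the exact syntax in the API reference:

```python
import json
import requests

params = {
    "url": "https://httpbin.io/html",
    "apikey": "YOUR_ZENROWS_API_KEY",
    # Assumed syntax: output field name -> CSS selector ("@href" extracts an attribute)
    "css_extractor": json.dumps({"title": "h1", "links": "a @href"}),
}

try:
    response = requests.get("https://api.zenrows.com/v1/", params=params, timeout=30)
    print(response.text)  # JSON containing only the extracted fields
except requests.RequestException as exc:
    print(f"Request failed: {exc}")
```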
Check if the Site is Publicly Accessible
Some websites may require a session, so verifying if the site can be accessed without logging in is a good idea. Open the target page in an incognito browser to check this.
You must handle session management in your requests if login credentials are required. You can learn how to scrape a website that requires login in our guide: Web scraping with login in Python.
Verify the Site is Accessible in Your Country
Sometimes, the target site may be region-restricted and only accessible from specific locations. ZenRows automatically selects the best proxy, but if the site is only available in certain regions, specify a geolocation using `proxy_country`.
Here’s how to choose a proxy in the US:
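A sketch in Python (the `proxy_country` parameter is assumed to require Premium Proxies alongside it and to take a two-letter country code):

```python
import requests

params = {
    "url": "https://httpbin.io/get",
    "apikey": "YOUR_ZENROWS_API_KEY",
    "premium_proxy": "true",  # assumed prerequisite for geotargeting
    "proxy_country": "us",    # two-letter country code for the proxy location
}

try:
    response = requests.get("https://api.zenrows.com/v1/", params=params, timeout=30)
    print(response.text)
except requests.RequestException as exc:
    print(f"Request failed: {exc}")
```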
If the target site requires access from a specific region, adding the `proxy_country` parameter will help.
Add Pauses to Your Request
You can also enhance your request by adding options like `wait` or `wait_for` to ensure the page fully loads before extracting data, improving accuracy.
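For example, a fixed pause can be added like this (both wait options are assumed to require `js_render`, with `wait` taking milliseconds and `wait_for` taking a CSS selector):

```python
import requests

params = {
    "url": "https://httpbin.io/get",
    "apikey": "YOUR_ZENROWS_API_KEY",
    "js_render": "true",  # assumed prerequisite for the wait options
    "wait": "3000",       # pause 3000 ms for late-loading content
    # "wait_for": ".content",  # alternatively, wait until this selector appears
}

try:
    response = requests.get("https://api.zenrows.com/v1/", params=params, timeout=60)
    print(response.text)
except requests.RequestException as exc:
    print(f"Request failed: {exc}")
```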
Retry the Request
Network issues or temporary failures can cause your request to fail. Implementing retry logic can solve this by automatically repeating the request.
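A minimal retry helper might look like this sketch, which retries network errors and 5xx responses with exponential backoff (the function name and retry policy are illustrative, not part of the ZenRows SDK):

```python
import time
import requests

def fetch_with_retries(params: dict, retries: int = 3, backoff: float = 2.0):
    """Retry transient failures (network errors, 5xx) with exponential backoff."""
    for attempt in range(1, retries + 1):
        try:
            response = requests.get("https://api.zenrows.com/v1/", params=params, timeout=60)
            if response.status_code < 500:  # success, or a client error not worth retrying
                return response
        except requests.RequestException:
            pass  # transient network failure; fall through to retry
        if attempt < retries:
            time.sleep(backoff ** attempt)  # 2s, 4s, 8s, ...
    raise RuntimeError("All retry attempts failed")
```

Call it with the same parameter dictionary you would pass to a plain request, e.g. `fetch_with_retries({"url": target, "apikey": key})`.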
Get Help From ZenRows Experts
Our support team can assist you if the issue persists despite following these tips. Use the Builder page or contact us via email to get personalized help from ZenRows experts.
For more solutions and detailed troubleshooting steps, see our Troubleshooting Guides.
Next Steps
You now have a solid foundation for web scraping with ZenRows. Here are some recommended next steps to take your scraping to the next level:
- Complete API Reference: Explore all available parameters and advanced configuration options to customize ZenRows for your specific use cases.
- JavaScript Instructions Guide: Learn how to perform complex page interactions like form submissions, infinite scrolling, and multi-step workflows.
- Output Formats and Data Extraction: Learn advanced data extraction with CSS selectors, output formats including Markdown and PDF conversion, and screenshot configurations.
- Pricing and Plans: Understand how request costs are calculated and choose the plan that best fits your scraping volume and requirements.