
    Frequently Asked Questions

    When using ZenRows, knowing the right identifier (ID) or query for a given URL is crucial. Whether you’re extracting product details, searching for listings, or retrieving reviews, each platform has a unique way of structuring URLs. This guide explains how to find the correct ID or query for different APIs so you can make accurate and efficient requests.

    Amazon Product API

    • Example URL: /dp/B07FZ8S74R
    • Correct ID: B07FZ8S74R

    How do I find the correct ID?

    The Amazon Standard Identification Number (ASIN) is the unique product ID, found after /dp/ in the URL.

    Example Request: https://www.amazon.com/dp/B07FZ8S74R

    Amazon Discovery API

    • Example URL: /s?k=Echo+Dot
    • Correct Query: Echo Dot

    How do I find the correct query?

    Amazon search URLs contain the query after ?k=. This query must be URL-encoded when making requests.

    Example Request: https://www.amazon.com/s?k=Echo+Dot

    Walmart Product API

    • Example URL: /ip/5074872077
    • Correct ID: 5074872077

    How do I find the correct ID?

    Walmart product pages include a numeric item ID, which appears right after /ip/ in the URL.

    Example Request: https://www.walmart.com/ip/5074872077

    Walmart Review API

    • Example URL: /reviews/product/5074872077
    • Correct ID: 5074872077

    How do I find the correct ID?

    The Walmart Review API uses the same product ID as the main product page, found after /reviews/product/.

    Example Request: https://www.walmart.com/reviews/product/5074872077

    Walmart Discovery API

    • Example URL: /search?q=Wireless+Headphones
    • Correct Query: Wireless Headphones

    How do I find the correct query?

    The search query appears after ?q= in the URL and should be URL-encoded when used in API requests.

    Example Request: https://www.walmart.com/search?q=Wireless+Headphones

    Idealista Property API

    • Example URL: /inmueble/106605370/
    • Correct ID: 106605370

    How do I find the correct ID?

    Idealista property pages include a numeric identifier found after /inmueble/.

    Example Request: https://www.idealista.com/inmueble/106605370/

    Idealista Discovery API

    • Example URL: /venta-viviendas/madrid-madrid
    • Correct ID: NOT_SUPPORTED

    Why is there no ID?

    Idealista search results do not use a unique ID because they return multiple listings instead of a single property.

    Example Request: https://www.idealista.com/venta-viviendas/madrid-madrid

    Zillow Property API

    • Example URL: /homedetails/112-Treadwell-Ave-Staten-Island-NY-10302/32297624_zpid/
    • Correct ID: 32297624

    How do I find the correct ID?

    Zillow property pages include a zpid, found near the end of the URL before _zpid/.

    Example Request: https://www.zillow.com/homedetails/112-Treadwell-Ave-Staten-Island-NY-10302/32297624_zpid/

    Zillow Discovery API

    • Example URL: /homes/for_sale/San-Francisco-CA/
    • Correct ID: NOT_SUPPORTED

    Why is there no ID?

    Zillow search results do not have a unique identifier because they display multiple property listings instead of a single home.

    Example Request: https://www.zillow.com/homes/for_sale/San-Francisco-CA/

    Google Search API

    • Example URL: /search?q=nintendo&udm=14
    • Correct Query: nintendo

    How do I find the correct query?

    The search query appears after ?q= in the URL and should be URL-encoded when making API requests.

    Example Request: https://www.google.com/search?q=nintendo&udm=14

    Migrating to the Scraper APIs

    The ZenRows® Scraper APIs provide a more efficient and specialized way to extract structured data from popular websites. If you’re currently using the Universal Scraper API, switching to the Scraper APIs can simplify your setup, ensure predictable pricing, and improve performance. This guide walks you through the transition process for a seamless upgrade.

    Why Switch to the Scraper APIs?

    The Scraper APIs are designed to streamline web scraping by offering dedicated endpoints for specific use cases. Unlike the Universal Scraper API, which requires custom configurations for proxies, JavaScript rendering, and anti-bot measures, the Scraper APIs handle these complexities for you. Key benefits include:

    • Predictable Pricing: Scraper APIs offer a competitive fixed price per successful request, ensuring clear and consistent cost management for your scraping needs.

    • No Setup, Parsing, or Maintenance Required: With the Scraper APIs, simply select the API use case, and we handle all the anti-bot configuration. This means you can focus on extracting valuable data while we ensure the API responses are structured and reliable. Enjoy high success rates and continuous data streams with no maintenance on your end.

    How the Scraper APIs Work

    The core of the ZenRows API is the API Endpoint, which is structured based on the industry, target website, type of request, and query parameters. This modular approach allows you to extract data efficiently from various sources.

    https://<INDUSTRY>.api.zenrows.com/v1/targets/<WEBSITE>/<TYPE_OF_REQUEST>/<QUERY_ID_URL>?apikey=<YOUR_ZENROWS_API_KEY>

    Each part of the URL serves a specific purpose:

    • <INDUSTRY>: The industry category (e.g., ecommerce, realestate, serp).
    • <WEBSITE>: The target website (e.g., amazon, idealista, google).
    • <TYPE_OF_REQUEST>: The type of data you want (e.g., products, reviews, search).
    • <QUERY_ID_URL>: The unique identifier for the request, such as a product ID, property ID, or query.
    • <YOUR_ZENROWS_API_KEY>: Your personal API key for authentication and access.

    Here’s an example for Amazon Product Information API:

    https://ecommerce.api.zenrows.com/v1/targets/amazon/products/{asin}

    Breaking it down:

    • Industry: ecommerce
    • Website: amazon
    • Type of Request: products
    • Query ID: {asin} (Amazon Standard Identification Number, used for product lookup)
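Following the scheme above, the endpoint can be assembled from its parts. This is a minimal sketch; `build_endpoint` is a hypothetical helper, not a ZenRows function:

```python
def build_endpoint(industry: str, website: str, request_type: str, query_id: str) -> str:
    """Assemble a Scraper API endpoint URL from its four parts (sketch)."""
    return f"https://{industry}.api.zenrows.com/v1/targets/{website}/{request_type}/{query_id}"

print(build_endpoint("ecommerce", "amazon", "products", "B07FZ8S74R"))
# https://ecommerce.api.zenrows.com/v1/targets/amazon/products/B07FZ8S74R
```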

    Customization with Additional Parameters

    Depending on the website, you may include extra parameters to refine your request:

    • tld: Specify the top-level domain (e.g., .com, .co.uk, .de).
    • country: Set the country code to retrieve localized data.
    • filters: Apply filters to extract specific data.
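As a sketch, these extra parameters can be appended to the endpoint as an ordinary query string. The endpoint and the `country`/`tld` values below are illustrative; check the API documentation for the values each website supports:

```python
from urllib.parse import urlencode

# Hypothetical example values; the endpoint shape follows the scheme in this guide
endpoint = "https://ecommerce.api.zenrows.com/v1/targets/amazon/products/B07FZ8S74R"
params = {"apikey": "YOUR_ZENROWS_API_KEY", "country": "gb", "tld": ".co.uk"}

# urlencode handles the '?key=value&...' formatting and any needed escaping
full_url = f"{endpoint}?{urlencode(params)}"
print(full_url)
```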

    Examples of How to Transition from the Universal Scraper API

    Switching to the Scraper APIs requires minimal code changes. Below is an example of how to transition from the Universal Scraper API to a dedicated Scraper API.

    E-Commerce (e.g., Amazon Product Information API)

    Old Code Using Universal Scraper API:

    Python
    # pip install requests
    import requests
    
    url = 'https://www.amazon.com/dp/B07FZ8S74R'
    apikey = 'YOUR_ZENROWS_API_KEY'
    params = {
        'url': url,
        'apikey': apikey,
        'js_render': 'true',
        'premium_proxy': 'true',
        'proxy_country': 'us',
    }
    response = requests.get('https://api.zenrows.com/v1/', params=params)
    print(response.text)

    New Code Using a Dedicated Scraper API:

    Python
    # pip install requests
    import requests
    
    asin = 'B07FZ8S74R'
    api_endpoint = f'https://ecommerce.api.zenrows.com/v1/targets/amazon/products/{asin}'
    params = {
        'apikey': 'YOUR_ZENROWS_API_KEY',
    }
    
    response = requests.get(api_endpoint, params=params)
    print(response.text)

    Steps to Switch:

    1. Update the API Endpoint

    Replace the existing Universal Scraper API endpoint with the dedicated endpoint for Amazon products. The new endpoint follows this format:

    https://ecommerce.api.zenrows.com/v1/targets/amazon/products/{asin}

    Where {asin} is the unique Amazon product identifier.

    2. Simplify the Parameters

    The Scraper APIs are optimized to handle common scraping tasks, such as JS rendering and proxy management, without needing to specify additional parameters like js_render or premium_proxy. Simply pass the apikey and any other relevant parameters (such as country or tld) and you’re good to go.

    Refer to the API documentation for a complete list of available parameters.

    3. Test the New API Call

    Once you’ve made the necessary changes to your code, run it to test the new API call. The response will be structured data tailored to your scraping use case, such as product name, price, and description.

    Real Estate (e.g., Zillow Property Data API)

    Old Code Using Universal Scraper API:

    Python
    # pip install requests
    import requests
    
    url = 'https://www.zillow.com/homedetails/3839-Bitterroot-Dr-Billings-MT-59105/3194425_zpid/'
    apikey = 'YOUR_ZENROWS_API_KEY'
    params = {
        'url': url,
        'apikey': apikey,
        'js_render': 'true',
        'premium_proxy': 'true',
        'proxy_country': 'us',
    }
    response = requests.get('https://api.zenrows.com/v1/', params=params)
    print(response.text)

    New Code Using a Dedicated Scraper API:

    Python
    # pip install requests
    import requests
    
    zpid = '3194425'
    url = f"https://realestate.api.zenrows.com/v1/targets/zillow/properties/{zpid}"
    params = {
        "apikey": "YOUR_ZENROWS_API_KEY",
    }
    response = requests.get(url, params=params)
    print(response.text)

    Steps to Switch:

    1. Update the API Endpoint

    Replace the existing Universal Scraper API endpoint with the dedicated endpoint for Zillow property data. The new endpoint follows this format:

    https://realestate.api.zenrows.com/v1/targets/zillow/properties/{zpid}

    Where {zpid} is the unique Zillow property identifier.

    2. Simplify the Parameters

    The Scraper APIs are optimized to handle common scraping tasks, such as JS rendering and proxy management, without requiring additional parameters like js_render or premium_proxy. Simply provide the apikey along with any other relevant parameters (such as country or tld), and you’re good to go.

    Refer to the API documentation for a complete list of available parameters.

    3. Test the New API Call

    Once you’ve made the necessary changes to your code, run it to test the new API call. The response will be structured data tailored to your scraping use case, such as property address, price, and description.

    SERP (e.g., Google Search Results API)

    Old Code Using Universal Scraper API:

    Python
    # pip install requests
    import requests
    
    url = 'https://www.google.com/search?q=zenrows'
    apikey = 'YOUR_ZENROWS_API_KEY'
    params = {
        'url': url,
        'apikey': apikey,
        'js_render': 'true',
        'premium_proxy': 'true',
        'proxy_country': 'us',
    }
    response = requests.get('https://api.zenrows.com/v1/', params=params)
    print(response.text)

    New Code Using a Dedicated Scraper API:

    Python
    # pip install requests
    import requests
    from urllib.parse import quote_plus
    
    # URL-encode the search query before placing it in the path (spaces become '+')
    encoded_query = quote_plus('zenrows scraper apis')
    url = f"https://serp.api.zenrows.com/v1/targets/google/search/{encoded_query}"
    params = {
        "apikey": "YOUR_ZENROWS_API_KEY",
    }
    response = requests.get(url, params=params)
    print(response.text)

    Steps to Switch:

    1. Update the API Endpoint

    Replace the existing Universal Scraper API endpoint with the dedicated endpoint for Google search results. The new endpoint follows this format:

    https://serp.api.zenrows.com/v1/targets/google/search/{query}

    Where {query} is the URL-encoded search query.
    2. Simplify the Parameters

    The Scraper APIs are optimized to handle common scraping tasks, such as JS rendering and proxy management, without requiring additional parameters like js_render or premium_proxy. Simply provide the apikey along with any other relevant parameters (such as country or tld), and you’re good to go.

    Refer to the API documentation for a complete list of available parameters.

    3. Test the New API Call

    Once you’ve made the necessary changes to your code, run it to test the new API call. The response will be structured data tailored to your scraping use case, such as ad results, links, and titles.

    Need Help?

    If you have any questions or face issues during the transition, our support team is here to assist you. Switching to the new Scraper APIs is straightforward, offering optimized performance, predictable pricing, and less maintenance. We’re constantly expanding our API catalog and prioritizing new features based on your feedback, so enjoy a more reliable scraping experience today!

    Where can I find my API key?

    You can find your API key in your ZenRows settings under the API Key section.

    What happens if I exceed my API usage limits?

    If you exceed your API usage limits, your requests will return an error. You can either upgrade your plan, wait until your cycle renews, or buy Top-Ups.

    Do I need to configure proxies or anti-bot measures myself?

    No, ZenRows automatically handles proxies, browser fingerprinting, and anti-bot measures, so you don’t need to set up anything manually.

    Can I use ZenRows with my preferred programming language?

    Yes, you can integrate ZenRows with various programming languages, including Python, JavaScript, and more. Code snippets are provided in the API request builder.

    Can I target localized versions of a website?

    Yes, you can specify the top-level domain (.tld) and country parameters to target localized versions of a website.

    Can I filter the data that gets extracted?

    Only available on some APIs. Refer to the API documentation for a complete list of available parameters.

    What should I do if a request fails?

    If your request fails, check the response message for error details. Common reasons include invalid API keys or exceeding usage limits.

    How can I optimize performance?

    To optimize performance, use filtering options, minimize unnecessary requests, and ensure you’re only scraping the required data.

    Can I run requests concurrently to scrape at scale?

    Yes, ZenRows is designed for scalability. You can run multiple concurrent requests to speed up data extraction, and each plan offers a different concurrency limit. If you need higher limits, consider upgrading to a higher-tier plan or contacting support for enterprise solutions.
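As a sketch of capped concurrent fan-out, a thread pool limits how many requests are in flight at once. The `fetch` function and item IDs below are stand-ins for real API calls; set `max_workers` at or below your plan's concurrency limit:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(item_id: str) -> str:
    # Stand-in for a real call, e.g. requests.get(endpoint, params={"apikey": ...})
    return f"result for {item_id}"

# Hypothetical product IDs to look up
item_ids = ["B07FZ8S74R", "B08N5WRWNW", "B07XJ8C8F5"]

# The pool caps in-flight requests at max_workers while preserving input order
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(fetch, item_ids))

print(results)
```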
