Migrating From the Universal Scraper API to the Google Search API
This guide takes you through the step-by-step migration process from the Universal Scraper API to the new dedicated Google Search Results API:
- We’ll first walk through end-to-end code showing how you’ve been doing it with the Universal Scraper API.
- Next, we’ll show you how to **upgrade to the new Google Search API**.
This guide helps you transition to the Search API in just a few steps.
Previous Method via the Universal Scraper API
Previously, you’ve handled requests targeting Google’s search results page using the Universal Scraper API. This approach involves:
- Manually parsing the search result page with an HTML parser like Beautiful Soup
- Exporting the data to a CSV
- Dealing with fragile, complex selectors that change frequently - a common challenge when scraping Google.
First, you’d request the HTML content of the search result page:
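For illustration, here’s a minimal sketch of that first step. It assumes the Universal Scraper API is called with your API key and the target Google URL as query parameters; the endpoint and parameter names shown are placeholders, so check them against your own setup:

```python
# pip install requests
import requests

# Illustrative values; replace with your own API key and target query.
API_KEY = "<YOUR_ZENROWS_API_KEY>"
google_url = "https://www.google.com/search?q=nintendo"

# Assumed endpoint and parameter names for the Universal Scraper API.
response = requests.get(
    "https://api.zenrows.com/v1/",
    params={"apikey": API_KEY, "url": google_url},
)
response.raise_for_status()

html = response.text  # raw HTML of the search results page
print(html[:500])     # preview the response
```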
Parsing Logic and Search Result Data Extraction
The next step involves identifying and extracting titles, links, snippets, and displayed links from deeply nested elements. This is where things get tricky.
Google’s HTML structure is not only dense but also highly dynamic, which makes selector-based scraping brittle. Here’s an example:
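The sketch below shows what that parsing logic might look like with Beautiful Soup, using the kind of class-based selectors Google’s results page exposes today (N54PNb, LC20lb, VwiC3b). Treat these selectors as a snapshot rather than a stable contract:

```python
# pip install beautifulsoup4
from bs4 import BeautifulSoup

def parse_results(html: str) -> list[dict]:
    """Extract title, link, snippet, and displayed link from each organic result."""
    soup = BeautifulSoup(html, "html.parser")
    results = []

    # Each organic result is (currently) wrapped in a div with class N54PNb.
    for item in soup.select("div.N54PNb"):
        title_tag = item.select_one("h3.LC20lb")     # result title
        link_tag = item.select_one("a")              # destination URL
        snippet_tag = item.select_one("div.VwiC3b")  # description snippet
        cite_tag = item.select_one("cite")           # displayed link

        results.append({
            "title": title_tag.get_text(strip=True) if title_tag else None,
            "link": link_tag["href"] if link_tag else None,
            "snippet": snippet_tag.get_text(strip=True) if snippet_tag else None,
            "displayed_link": cite_tag.get_text(strip=True) if cite_tag else None,
        })

    return results

results = parse_results(html)  # html from the previous request
```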
The class names used here (N54PNb, LC20lb, VwiC3b) are not stable. They may change often and break your scraper overnight. Maintaining such selectors is only viable if you’re scraping Google daily and are ready to troubleshoot frequently.
Store the Data
The last step after scraping the result page is to store the data:
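A simple way to do that is with Python’s built-in csv module. The sketch below writes the parsed results to a CSV file; the file name and column names are illustrative:

```python
import csv

def save_to_csv(results: list[dict], filename: str = "google_results.csv") -> None:
    """Write each parsed result as a row in a CSV file."""
    fieldnames = ["title", "link", "snippet", "displayed_link"]
    with open(filename, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(results)

save_to_csv(results)  # results from the parsing step above
```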
Putting Everything Together
Combining all these steps gives the following complete code:
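Here’s one way the pieces could fit together as a single script. The endpoint, parameter names, and selectors remain illustrative and may need adjusting:

```python
# pip install requests beautifulsoup4
import csv
import requests
from bs4 import BeautifulSoup

API_KEY = "<YOUR_ZENROWS_API_KEY>"  # placeholder

def fetch_html(query: str) -> str:
    """Fetch the raw HTML of a Google results page via the Universal Scraper API (assumed endpoint)."""
    response = requests.get(
        "https://api.zenrows.com/v1/",
        params={"apikey": API_KEY, "url": f"https://www.google.com/search?q={query}"},
    )
    response.raise_for_status()
    return response.text

def parse_results(html: str) -> list[dict]:
    """Extract the organic results with class-based selectors (fragile by nature)."""
    soup = BeautifulSoup(html, "html.parser")
    results = []
    for item in soup.select("div.N54PNb"):
        title_tag = item.select_one("h3.LC20lb")
        link_tag = item.select_one("a")
        snippet_tag = item.select_one("div.VwiC3b")
        cite_tag = item.select_one("cite")
        results.append({
            "title": title_tag.get_text(strip=True) if title_tag else None,
            "link": link_tag["href"] if link_tag else None,
            "snippet": snippet_tag.get_text(strip=True) if snippet_tag else None,
            "displayed_link": cite_tag.get_text(strip=True) if cite_tag else None,
        })
    return results

def save_to_csv(results: list[dict], filename: str = "google_results.csv") -> None:
    """Store the parsed results in a CSV file."""
    fieldnames = ["title", "link", "snippet", "displayed_link"]
    with open(filename, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(results)

if __name__ == "__main__":
    html = fetch_html("nintendo")
    save_to_csv(parse_results(html))
```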
Transitioning to the Google Search Results API
The Google Search Results API automatically handles the parsing stage, providing the data you need out of the box, with no selector maintenance required.
To use the Search API, you only need its endpoint and a query term. The API then returns JSON data containing the query results, including pagination details such as the URL of the next search results page.
See the Google Search API endpoint below:
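The endpoint shape below is illustrative (the exact host and path may differ, so confirm them in your dashboard or the API reference). The key idea is that you pass your API key and the search query directly, instead of a Google URL:

```
GET https://serp.api.zenrows.com/v1/targets/google/search/?apikey=<YOUR_API_KEY>&query=<SEARCH_TERM>
```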
The Google Search Results API only requires these three simple steps:
- Send a query (e.g., Nintendo) through the Google Search Results API.
- Retrieve the search result in JSON format.
- Get the organic search results from the JSON data.
Here’s the code to achieve the above steps:
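Below is a minimal sketch of those three steps. It assumes the same illustrative endpoint as above and a top-level organic_results key in the response; adapt the field names to the actual payload:

```python
import requests

API_KEY = "<YOUR_ZENROWS_API_KEY>"  # placeholder

# 1. Send a query (e.g., "nintendo") through the Google Search Results API.
response = requests.get(
    "https://serp.api.zenrows.com/v1/targets/google/search/",  # illustrative endpoint
    params={"apikey": API_KEY, "query": "nintendo"},
)
response.raise_for_status()

# 2. Retrieve the search result in JSON format.
data = response.json()

# 3. Get the organic search results from the JSON data.
organic_results = data.get("organic_results", [])
for result in organic_results:
    print(result.get("title"), "->", result.get("link"))
```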
This would return a JSON object with the following structure:
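The exact fields depend on the API version, but the shape is roughly along these lines (keys and values shown here are illustrative):

```json
{
  "organic_results": [
    {
      "title": "...",
      "link": "...",
      "snippet": "...",
      "displayed_link": "..."
    }
  ],
  "pagination": {
    "next": "..."
  }
}
```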
Store the Data
You can store the organic search results retrieved above in a CSV file or a local or remote database.
The following code exports the scraped data to a CSV file by creating each result in a new row under the relevant columns:
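For example, assuming each organic result carries title, link, snippet, and displayed_link fields as sketched above (adjust to the actual field names):

```python
import csv

def export_to_csv(organic_results: list[dict], filename: str = "google_serp_results.csv") -> None:
    """Write each organic result as a new row under the relevant columns."""
    fieldnames = ["title", "link", "snippet", "displayed_link"]
    with open(filename, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        for result in organic_results:
            # Keep only the columns we care about; ignore any extra fields.
            writer.writerow({key: result.get(key) for key in fieldnames})

export_to_csv(organic_results)  # organic_results from the previous step
```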
Here’s the full code:
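Putting the request, extraction, and export together gives a script roughly like this (the endpoint and field names remain illustrative):

```python
import csv
import requests

API_KEY = "<YOUR_ZENROWS_API_KEY>"  # placeholder

def search_google(query: str) -> list[dict]:
    """Query the Google Search Results API and return the organic results."""
    response = requests.get(
        "https://serp.api.zenrows.com/v1/targets/google/search/",  # illustrative endpoint
        params={"apikey": API_KEY, "query": query},
    )
    response.raise_for_status()
    return response.json().get("organic_results", [])

def export_to_csv(results: list[dict], filename: str = "google_serp_results.csv") -> None:
    """Export the organic results to a CSV file, one row per result."""
    fieldnames = ["title", "link", "snippet", "displayed_link"]
    with open(filename, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        for result in results:
            writer.writerow({key: result.get(key) for key in fieldnames})

if __name__ == "__main__":
    export_to_csv(search_google("nintendo"))
```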
Your result will be similar to the one below:
That was quite an upgrade! Congratulations! 🎉 Your Google search scraper is now simpler, more efficient, and more reliable with the dedicated Google Search Results API.
Scaling Up Your Scraping
Once you have your first results, it is easy to scale up. You can follow subsequent result pages by extracting the next page query from the pagination field in the JSON response.
For example, if the JSON includes a pagination object like this:
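For instance, the pagination object might look something like this (the exact keys depend on the API response, so treat this as a sketch):

```json
{
  "pagination": {
    "next": "https://www.google.com/search?q=nintendo&start=10"
  }
}
```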
You can see that the next page query contains start=10. To continue scraping, simply keep updating the start value: the page after that uses start=20, then start=30, and so on.
By incrementing the start parameter in multiples of 10, you can retrieve as many results as needed while maintaining full control over the pagination.
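As a sketch, paging through the first few result pages could look like this, reusing the illustrative endpoint and assuming the API accepts a start parameter as suggested by the next-page URL:

```python
import requests

API_KEY = "<YOUR_ZENROWS_API_KEY>"  # placeholder

all_results = []
for start in range(0, 50, 10):  # first five pages: start=0, 10, 20, 30, 40
    response = requests.get(
        "https://serp.api.zenrows.com/v1/targets/google/search/",  # illustrative endpoint
        params={"apikey": API_KEY, "query": "nintendo", "start": start},  # assumed parameter
    )
    response.raise_for_status()
    all_results.extend(response.json().get("organic_results", []))

print(f"Collected {len(all_results)} organic results across 5 pages")
```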
Conclusion
By migrating to the Google Search Results API, you’ve eliminated the need to manage fragile selectors and parse complex HTML, making your scraping setup far more reliable and efficient.
Instead of reacting to frequent front-end changes, you now receive clean, structured JSON that saves time, reduces maintenance, and allows you to focus on building valuable tools like SEO dashboards, rank trackers, or competitor monitors. With ZenRows handling the heavy lifting, your team can stay productive and scale confidently.