Concurrency refers to the number of API requests you can have in progress (or running) at the same time. If your plan supports 5 concurrent requests, then you can have up to 5 requests being processed simultaneously. If you send a 6th request while 5 are already processing, you’ll get an error.

Understanding Concurrency

At its core, the concept of concurrency is about how many scraping requests can be in progress simultaneously.

Imagine you have a team of workers and each worker represents a “concurrent request slot”. If you have 5 workers, they can work on 5 tasks simultaneously. If you hand them a 6th task while all of them are busy, you’ll have to wait until at least one of them is free.

In ZenRows, each “task” is an API request, and each “worker” is a concurrent request slot available to you based on your subscription.

Impact of Request Duration on Throughput

How long each request takes plays a critical role in how many requests you can process over a given period. For instance, consider this:

  • If each request lasts 1 second, with 5 concurrent requests, you can process 5 requests every second. Over a 60-second period, that’s 300 requests!
  • However, if each request takes 10 seconds to complete, you can only process 5 requests every 10 seconds. Over the same 60-second period, you’d only manage 30 requests.

This shows that the shorter each request, the more requests you can process within the same timeframe. See the Frequently Asked Questions section for tips on optimizing speed.
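
As a quick sanity check, here is a minimal sketch of that arithmetic (the durations and concurrency value are purely illustrative):

def estimated_requests(concurrency, avg_request_seconds, window_seconds=60):
    # Each slot completes roughly window / duration requests within the window.
    return concurrency * (window_seconds / avg_request_seconds)

print(estimated_requests(5, 1))   # 300.0 requests in 60 seconds
print(estimated_requests(5, 10))  # 30.0 requests in 60 seconds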

Example

Let’s dive into a detailed example to make this clearer. Assume your plan includes 5 concurrent requests.

Scenario:

  • 1st Request: Takes 10 seconds to finish.
  • 2nd Request: Takes 7 seconds to finish.
  • 3rd Request: Takes 8 seconds to finish.
  • 4th Request: Takes 9 seconds to finish.
  • 5th Request: Takes 14 seconds to finish.

You send all these 5 requests at once. Each of them gets one of your 5 available “workers” and starts processing. Now, before any of these 5 finish, if you send a:

  • 6th & 7th Request: These will immediately get “429 Too Many Requests” errors. Why? Because all your “workers” are busy. You’ll have to wait until the quickest request (in this example, the 2nd request, which finishes in 7 seconds) completes (see the sketch below).
Concurrency example
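
Here is a minimal sketch of this scenario, assuming the standard https://api.zenrows.com/v1/ endpoint with apikey and url query parameters, a 5-slot plan, and a placeholder target URL:

import requests
from concurrent.futures import ThreadPoolExecutor

API_KEY = "YOUR_API_KEY"
urls = ["https://httpbin.io/anything"] * 7  # 7 tasks for only 5 concurrency slots

def scrape(url):
    # Each in-flight call occupies one concurrency slot until it completes.
    return requests.get("https://api.zenrows.com/v1/", params={"apikey": API_KEY, "url": url})

with ThreadPoolExecutor(max_workers=7) as pool:  # deliberately oversubscribed
    for response in pool.map(scrape, urls):
        # The 6th and 7th calls are likely to come back as 429 Too Many Requests.
        print(response.status_code)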

Concurrency Headers

To give you greater insight and control over your API usage, every response from our API will include two crucial HTTP headers related to concurrency:

  1. Concurrency-Limit: This header provides the total number of concurrent requests your current plan allows.
  2. Concurrency-Remaining: This header informs you about the number of available concurrency slots at the time this specific request was received by the server.

Imagine you’re on a plan that supports 10 concurrent requests. If you send 3 requests simultaneously and then inspect the headers of one of those responses, you might find:

  • Concurrency-Limit: 10
  • Concurrency-Remaining: 7

This indicates that when the server received this request, there were 7 free slots remaining. The other 3 were taken up by the requests in progress.

Concurrency headers
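
For instance, here is a minimal sketch that reads both headers from a single response (plain requests, with a placeholder API key and target URL):

import requests

response = requests.get(
    "https://api.zenrows.com/v1/",
    params={"apikey": "YOUR_API_KEY", "url": "https://httpbin.io/anything"},
)

# Header values arrive as strings; cast them before doing any arithmetic.
limit = int(response.headers["Concurrency-Limit"])
remaining = int(response.headers["Concurrency-Remaining"])
print(f"{remaining} of {limit} slots were free when the server received this request")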

Using Concurrency Headers for Optimization

These headers are not just informational; they are instrumental tools for you. By reading and understanding them in real time, you can dynamically tune the number of parallel requests your system sends to ensure efficient use of available concurrency slots.

For instance:

  1. Before sending a batch of requests, inspect the Concurrency-Remaining header of the most recent response.
  2. Based on the value of this header, adjust the number of parallel requests you send. For example, if Concurrency-Remaining is 5, avoid sending more than 5 simultaneous requests.

By adapting your request pattern based on these headers, you’ll mitigate the risk of encountering “429 Too Many Requests” errors, ensuring a smoother and more efficient interaction with our API.

Limiting concurrency
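
One possible sketch of this pattern: send a single probe request, read Concurrency-Remaining, and cap the next batch at that value (the endpoint, API key, and URLs are placeholders):

import requests
from concurrent.futures import ThreadPoolExecutor

API_URL = "https://api.zenrows.com/v1/"
API_KEY = "YOUR_API_KEY"

def scrape(url):
    return requests.get(API_URL, params={"apikey": API_KEY, "url": url})

# Probe once and check how many concurrency slots are currently free.
probe = scrape("https://httpbin.io/anything")
remaining = int(probe.headers.get("Concurrency-Remaining", "1"))

pending = ["https://httpbin.io/anything"] * 20
batch, pending = pending[:remaining], pending[remaining:]

# Send at most `remaining` requests in parallel; keep the rest for later batches.
with ThreadPoolExecutor(max_workers=max(remaining, 1)) as pool:
    results = list(pool.map(scrape, batch))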

ZenRows SDK for Python

For the code to work, you will need Python 3 installed; some systems have it pre-installed. After that, install the necessary library by running:

pip install zenrows

The ZenRows Python SDK comes with concurrency and retries already implemented for you. Pass the values in the constructor as shown below, and remember to adjust the concurrency to match your plan. Take into account that each client instance enforces its own limit: two different scripts will not share it, so running them at the same time might cause 429 (Too Many Requests) errors.

asyncio.gather will wait for all the calls to finish and store all the responses in a list. You can loop over it afterward and extract the data you need. As usual, each response will have the status, request, response content, and other values. Remember to run the script with asyncio.run, or it will fail with a “coroutine 'main' was never awaited” error.

from zenrows import ZenRowsClient
import asyncio
from urllib.parse import urlparse, parse_qs

# Adjust `concurrency` to your plan; `retries` controls automatic retries per request.
client = ZenRowsClient("YOUR_API_KEY", concurrency=5, retries=1)

urls = [
	# ...
]

async def main():
	# Launch all requests; the client keeps at most `concurrency` of them in flight.
	responses = await asyncio.gather(*[client.get_async(url) for url in urls])

	for response in responses:
		# The original target is the "url" query parameter of the API request URL.
		original_url = parse_qs(urlparse(response.request.url).query)["url"][0]
		print({
			"response": response,
			"status_code": response.status_code,
			"request_url": original_url,
		})

asyncio.run(main())

Frequently Asked Questions

How many concurrent requests are included in my plan?

The concurrency limit starts at 5 concurrent requests on the free trial, and the maximum concurrency increases with the size of your plan:

  • Trial plan: 5 concurrent requests (see what ZenRows trial includes)
  • Developer plan: 10 concurrent requests
  • Startup plan: 25 concurrent requests
  • Business plan: starting at 50 concurrent requests

Enterprise plans can include custom concurrency limits to fit your needs. Contact us for tailor-made Enterprise plans.

If I get a “429 Too Many Requests” error, do I lose that request or is it queued?

You’ll receive an error, and that request won’t be queued or retried automatically. You’ll need to manage retries on your end, ensuring you don’t exceed your concurrency limit.
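
A minimal retry sketch, assuming you call the API with the plain requests library and simply wait before retrying a 429 (the retry count and delays are arbitrary):

import time
import requests

def scrape_with_retry(url, retries=3, backoff_seconds=2):
    for attempt in range(retries + 1):
        response = requests.get(
            "https://api.zenrows.com/v1/",
            params={"apikey": "YOUR_API_KEY", "url": url},
        )
        if response.status_code != 429:
            return response
        # All slots were busy; wait for in-flight requests to finish before retrying.
        time.sleep(backoff_seconds * (attempt + 1))
    return response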

Can I increase my concurrency limit?

Absolutely! We offer various plans with different concurrency limits to suit your needs. If you find yourself frequently hitting the concurrency limit, consider upgrading.

How can I speed up requests?

Using the js_render=true parameter will make requests slower, as JavaScript rendering is a heavier process. Waiting for specific conditions (with the wait or wait_for parameters) will also slow things down, since the condition must be met before the request can finish and its concurrency slot is released. We recommend using these parameters only when you actually need them.

On the other hand, the premium_proxy parameter can speed up your requests: higher-quality proxies retrieve the content faster, so each request takes less time and releases its concurrency slot sooner.
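
For reference, these parameters are set per request. A hedged sketch with the Python SDK, assuming its get method accepts a params dictionary and using a placeholder target URL:

from zenrows import ZenRowsClient

client = ZenRowsClient("YOUR_API_KEY")

# Faster path: no JavaScript rendering, premium proxies to retrieve the content quickly.
fast = client.get("https://httpbin.io/anything", params={"premium_proxy": "true"})

# Slower path: JS rendering plus an explicit wait keeps the slot busy longer.
slow = client.get(
    "https://httpbin.io/anything",
    params={"js_render": "true", "wait": "2000"},
)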

I’ve been blocked by repeatedly exceeding my concurrency limit. Why?

Whenever you exceed your plan’s concurrency limit, you’ll start receiving “429 Too Many Requests” errors. If you keep sending requests over the limit in a short period of time, the service may temporarily block your IP address to prevent API misuse.

The IP address ban lasts only a few minutes, but being blocked repeatedly might result in a longer-lasting block. Check the concurrency optimization section above for more information on how to limit concurrent requests and avoid being blocked.