Throttling Policies

The Customer API is guarded by several throttling policies. Each time a request violates a throttling policy, it receives a response with HTTP status 429 Too Many Requests. The reason for the rejection is indicated in the response headers that correspond to the violated policy.
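
For illustration, a client can detect a throttled request by checking the status code and inspecting the returned headers. The following minimal Python sketch uses the requests library; the base URL and authorization header are placeholders, not values defined by this documentation.

    import requests

    BASE_URL = "https://api.example.com"                 # placeholder base URL
    HEADERS = {"Authorization": "Bearer <your-token>"}   # placeholder credentials

    response = requests.get(f"{BASE_URL}/jobs", headers=HEADERS)
    if response.status_code == 429:
        # The headers of the 429 response indicate which policy was violated.
        throttle_info = {
            name: value
            for name, value in response.headers.items()
            if name.lower().startswith("x-ratelimit") or name.lower() == "retry-after"
        }
        print("Request throttled:", throttle_info)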

Below you can find descriptions of all throttling policies.

Note: All throttling policies are applied to an individual Customer API user.

Number of requests per time period

Current limit

The Customer API limit is 10 requests per second.

Exceptions

For the following endpoints, the limit is 2 requests per second:

  • POST /jobs/{id}/publication
  • DELETE /jobs/{id}/publication
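
To stay within these limits, a client can pace its own calls instead of waiting for 429 responses. Below is a minimal client-side sketch, assuming the 10 requests per second default and the 2 requests per second exception above; it is illustrative only, not an official client.

    import time

    class RateLimiter:
        """Spaces out calls so a per-second limit is never exceeded."""

        def __init__(self, max_per_second):
            self.min_interval = 1.0 / max_per_second
            self.last_call = 0.0

        def wait(self):
            elapsed = time.monotonic() - self.last_call
            if elapsed < self.min_interval:
                time.sleep(self.min_interval - elapsed)
            self.last_call = time.monotonic()

    general_limiter = RateLimiter(max_per_second=10)      # default Customer API limit
    publication_limiter = RateLimiter(max_per_second=2)   # publication endpoints

    # Before each call, wait on the limiter matching the endpoint, e.g.:
    # publication_limiter.wait()
    # requests.post(f"{BASE_URL}/jobs/{job_id}/publication", headers=HEADERS)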

Response headers

The following headers are present in a response regardless of whether a policy was violated:

  • X-RateLimit-Limit – the request limit for the time period
  • X-RateLimit-Remaining – the number of requests remaining in the current time period
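
These headers allow a client to throttle itself before a request is rejected. The sketch below pauses when the remaining budget reaches zero; it assumes a one-second pause is sufficient because the documented time period is one second, and it reuses placeholder values for the base URL and credentials.

    import time
    import requests

    BASE_URL = "https://api.example.com"                 # placeholder base URL
    HEADERS = {"Authorization": "Bearer <your-token>"}   # placeholder credentials

    def get_with_budget_check(url):
        """Issue a GET and pause when the remaining request budget hits zero."""
        response = requests.get(url, headers=HEADERS)
        if int(response.headers.get("X-RateLimit-Remaining", 1)) == 0:
            # The documented time period is one second, so a one-second pause
            # is enough for the budget to refill.
            time.sleep(1)
        return response

    response = get_with_budget_check(f"{BASE_URL}/jobs")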

Number of concurrent requests

Current limit

The Customer API limit is 8 concurrent requests.

Exceptions

For the following endpoints, only 1 concurrent request is allowed.

Response headers

The following headers are present in a response regardless of whether a policy was violated:

  • X-RateLimit-Concurrent-Limit – the concurrent request limit
  • X-RateLimit-Concurrent-Remaining – the number of additional concurrent requests you can still make
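
A client can keep itself under these limits by bounding its own concurrency, for example with a semaphore sized to the documented limits. The sketch below is illustrative; the base URL, credentials, and job IDs are placeholders.

    import threading
    from concurrent.futures import ThreadPoolExecutor

    import requests

    BASE_URL = "https://api.example.com"                 # placeholder base URL
    HEADERS = {"Authorization": "Bearer <your-token>"}   # placeholder credentials

    # At most 8 requests in flight at once, matching the documented limit.
    # For endpoints covered by the 1-concurrent-request exception, use a
    # separate semaphore of size 1.
    concurrency_limit = threading.Semaphore(8)

    def fetch_job(job_id):
        with concurrency_limit:
            return requests.get(f"{BASE_URL}/jobs/{job_id}", headers=HEADERS)

    with ThreadPoolExecutor(max_workers=16) as pool:
        responses = list(pool.map(fetch_job, ["123", "456", "789"]))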

Response size per time period

Current limit

The response size limit policy is applied only to the Analytics API. For those endpoints, the response size limit is 100,000 bytes per second. The limit is enforced independently on each Analytics API endpoint.

Response headers

The following header is present in a response only when the throttling policy is violated, i.e. on responses with the 429 status code:

  • Retry-After – the number of seconds after which your app will be permitted to make another request to the same resource.
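
Retry-After is a standard HTTP header, so a throttled client can simply wait the indicated number of seconds before retrying. A minimal sketch follows; the base URL, credentials, and endpoint path are placeholders.

    import time
    import requests

    BASE_URL = "https://api.example.com"                 # placeholder base URL
    HEADERS = {"Authorization": "Bearer <your-token>"}   # placeholder credentials

    def get_with_retry(url, max_attempts=5):
        """Retry a throttled request after the delay the server asks for."""
        for _ in range(max_attempts):
            response = requests.get(url, headers=HEADERS)
            if response.status_code != 429:
                return response
            # Wait for the number of seconds indicated by Retry-After.
            time.sleep(int(response.headers.get("Retry-After", 1)))
        return response

    response = get_with_retry(f"{BASE_URL}/analytics/report")  # placeholder path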

You have a great deal of influence over how your service behaves in the shared environment. If you follow the tips listed below, you will get responses from the system faster:

  1. Program your software so that it does not make all of its calls at one specific point in time (e.g. 8 am, 9 am). Instead, build in some randomness so that the calls are distributed more evenly over time (see the sketch after this list).
  2. Ensure that the timeout of your requests is set to at least 128 seconds. You will receive a response from our API servers within this time (either a valid response or an error code). Of course, we will do our best to answer your request as quickly as possible.
  3. Use our Reporting API to retrieve data for analysis. The Reporting API is designed to serve large amounts of data as quickly and efficiently as possible. It streams the whole data set in response to a single call, so there is no need to make multiple calls to iterate through the set. The Reporting API's throttling strategy ensures that you get the full data set at once, so you do not have to wait for subsequent chunks. The only limit is on how soon you can make the next call to the same endpoint.
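
As an illustration of the first two tips, the sketch below adds random jitter before a scheduled run and passes an explicit timeout of at least 128 seconds to each call. The base URL, credentials, and jitter window are placeholders, not recommendations from this documentation.

    import random
    import time

    import requests

    BASE_URL = "https://api.example.com"                 # placeholder base URL
    HEADERS = {"Authorization": "Bearer <your-token>"}   # placeholder credentials

    def run_scheduled_sync():
        # Tip 1: add random jitter instead of starting exactly on the hour.
        time.sleep(random.uniform(0, 300))
        # Tip 2: give the API at least 128 seconds to respond.
        return requests.get(f"{BASE_URL}/jobs", headers=HEADERS, timeout=128)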