
Intro to managing API limits


When building workflows, you are very likely to run into issues caused by the API rate limits that certain service connector operations are subject to.

For example, a particular service connector may have a limit of 100 calls per hour when you are downloading records or running a batch update operation.


Google Sheets API limits

Or you may be trying to add hundreds of rows to a Google Sheet, only to find that your Sheets 'Add rows' step fails with a rate limit ('quota exceeded') error response:

If we look at this page on the rate limits for the Google Sheets API, we see that the rate limit is 100 requests per 100 seconds per user.

This means that if you loop through and add 100 records to a Google Sheet, and your workflow takes less than 100 seconds to do so, you will receive the above error.
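As a back-of-the-envelope check: 100 requests per 100 seconds averages out to one request per second, so spacing calls at least one second apart keeps a long loop under the quota. A minimal sketch of this pacing (the `add_row` callable is a hypothetical stand-in for the Sheets call, not a real connector API):

```python
import time

RATE_LIMIT = 100      # requests allowed...
WINDOW_SECONDS = 100  # ...per rolling window
MIN_INTERVAL = WINDOW_SECONDS / RATE_LIMIT  # 1.0 second between calls

def add_rows_paced(rows, add_row, sleep=time.sleep):
    """Call add_row once per row, spacing calls so the quota is never exceeded."""
    for i, row in enumerate(rows):
        if i:  # no need to wait before the very first call
            sleep(MIN_INTERVAL)
        add_row(row)
```

The `sleep` parameter is injectable only so the pacing can be tested without real waiting; in a workflow the equivalent is a Delay step between iterations.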

Slack API limits

Slack has a rate limiting page which explains its rate limits. The number of API requests you can make depends on your Web API tier.

Slack's rate limiting window is one minute, but small amounts of burst behaviour are tolerated. This means that if you are limited to 100 calls per minute, you might be allowed to send 120 in the first minute, provided the second minute only sees 80 requests and the overall average stays at 100.
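This burst-then-average behaviour is exactly what a token bucket models: the bucket refills at the average rate but can hold slightly more than one window's worth of tokens, permitting short bursts. A sketch, with a capacity of 120 chosen purely to mirror the example above (not a documented Slack figure):

```python
import time

class TokenBucket:
    """Allow bursts up to `capacity` while enforcing an average of `rate` per second."""
    def __init__(self, rate, capacity, now=time.monotonic):
        self.rate = rate          # tokens added back per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity    # start full
        self.now = now
        self.last = now()

    def allow(self):
        t = self.now()
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

The injectable clock exists only to make the behaviour testable; a real limiter would use the default monotonic clock.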

Available solutions

The solution here is to set up a system which controls the rate at which these calls are made. There are several options for doing this:

Using a scheduled trigger

Building a queue

A common scenario is that - on a continual basis - too many requests come in to a webhook-triggered workflow, such that processing them straight into Google Sheets, Salesforce, etc. would exceed your limits.

In this case you can use one of several methods to build a First In First Out (FIFO) queue.

1 - Using data storage

It is possible to use the Data Storage connector to store your records at account level. You can then set up a scheduled workflow to extract these records for processing.

Please see our guide on setting up a data storage queue for guidance on this.
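The pattern here is push-on-arrival, pop-a-batch-on-schedule. A language-agnostic sketch of the same FIFO logic (the actual data storage connector operations differ; this only shows the shape of the queue):

```python
from collections import deque

class FifoQueue:
    """Stand-in for an account-level data storage list used as a FIFO queue."""
    def __init__(self):
        self._items = deque()

    def push(self, record):
        # Webhook-triggered workflow: append each incoming record as it arrives.
        self._items.append(record)

    def pop_batch(self, n):
        # Scheduled workflow: take up to n records for processing, oldest first.
        batch = []
        while self._items and len(batch) < n:
            batch.append(self._items.popleft())
        return batch
```

Because the scheduled workflow controls how many records it pops per run, the downstream API only ever sees a bounded number of calls per interval.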

2 - Using the CSV Editor

Another approach is to use the CSV Editor to store your records at account level. You can then set up a scheduled workflow to extract these records for processing using the Get Rows operation, at which point you can specify exactly how many rows are extracted for processing each time.

See the tutorial in this section for guidance.

The CSV Editor is likely to present problems if it is fed in the region of 20,000 rows or more (you may find duplicates and fragmented data appearing). In that case, use a SQL database or Amazon SQS instead.

3 - Using a SQL database

You can store batches of data in a SQL database instead. Storing data in SQL also means you can perform extra validation, cleaning or enrichment tasks if need be (tutorial coming soon!).
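A minimal sketch of the SQL variant using SQLite (the table and column names are illustrative, not from any tutorial): incoming records are inserted as they arrive, and a scheduled job claims a limited batch and removes claimed rows so each is processed exactly once.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # a real setup would use a persistent database
conn.execute("CREATE TABLE queue (id INTEGER PRIMARY KEY, payload TEXT)")

def enqueue(payload):
    """Webhook side: store each incoming record."""
    conn.execute("INSERT INTO queue (payload) VALUES (?)", (payload,))

def claim_batch(n):
    """Scheduled side: take the n oldest rows and delete them from the queue."""
    rows = conn.execute(
        "SELECT id, payload FROM queue ORDER BY id LIMIT ?", (n,)
    ).fetchall()
    conn.executemany("DELETE FROM queue WHERE id = ?", [(r[0],) for r in rows])
    return [r[1] for r in rows]
```

Keeping the records in SQL is also where the extra validation or enrichment mentioned above would slot in, between the SELECT and the processing step.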

4 - Using an SQS queue

If you want to manage queues of messages and API requests - e.g. you have a high rate of orders / purchases coming through from Shopify - a paid-for solution is to use our Amazon SQS connector in conjunction with a scheduled trigger to control the rate at which you pull and delete messages from the queue. (Note that the CSV method above can be used as an alternative to paying for SQS.)

Pulling data and pagination

When using certain operations, such as 'Find Records', you will find pagination options which allow you to pull results in batches.

This means you can limit the number of API calls you make - e.g. you can retrieve 10 batches of 200 records instead of making 2,000 individual calls, one per record.
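Arithmetic check: 2,000 records at 200 per page is 10 API calls rather than 2,000. A sketch of the batching loop (the `fetch_page` callable is a hypothetical stand-in for a paginated 'Find Records' call, not a real connector signature):

```python
def fetch_all(fetch_page, page_size=200):
    """Pull records page by page until a short (or empty) page signals the end."""
    records, page = [], 1
    while True:
        batch = fetch_page(page=page, page_size=page_size)
        records.extend(batch)
        if len(batch) < page_size:  # a short page means there is nothing left
            return records
        page += 1
```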

Advanced methods

The Delay connector is a good multi-purpose tool for managing the rate at which requests are made to service APIs.

Delay example 1

The following example runs some checks using a method similar to 'last runtime' in conjunction with the Delay connector, to make sure that messages to Slack aren't sent too frequently:
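In pseudocode terms, the check is: look up when the last message was sent; if that was too recent, delay before sending; then record the new timestamp. A sketch with an in-memory stand-in for the stored 'last runtime' value (the names here are illustrative, not connector operations):

```python
import time

MIN_GAP = 60       # minimum seconds between Slack messages (assumed figure)
_last_sent = None  # stand-in for a persisted "last runtime" value

def send_throttled(send, now=time.monotonic, sleep=time.sleep):
    """Delay the send if the previous message went out less than MIN_GAP ago."""
    global _last_sent
    t = now()
    if _last_sent is not None and t - _last_sent < MIN_GAP:
        sleep(MIN_GAP - (t - _last_sent))  # the Delay connector's role
    _last_sent = now()
    send()
```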

Delay example 2

The following example for Jira shows how to make effective use of the delay, and how to use Error Handling to control rates for steps which might regularly return errors.

It sets up a loop which makes up to five attempts to get issue data from Jira:

Key points are:

  1. The 'Retry up to 5 times' loop is set to 'Loop forever'. It is crucial to make sure there are conditions which will stop the loop. In this case, if the operation succeeds, Break loop is used to progress to the next stage of the workflow; if the operation continually errors, the whole workflow is terminated once the loop has run too many times.

  2. For 'Get Issues of Page', manual error handling is turned on. This prevents automatic retries from contributing to your rate limit being exceeded.

  3. The 'Delay or Terminate' step is a boolean which uses $.steps.loop-1.index to check if the loop count has reached 5. If so, it terminates the workflow.

  4. If there are still attempts remaining, a delay of 1 minute is added, to ensure the requests are not made in too quick succession.
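The whole loop reduces to: attempt the call; on success, break out; after the fifth failure, terminate; otherwise wait a minute and try again. A sketch of that control flow (the `get_issues` callable stands in for the Jira step; it is not a real connector API):

```python
import time

MAX_ATTEMPTS = 5
DELAY_SECONDS = 60

def get_with_retries(get_issues, sleep=time.sleep):
    """Retry a flaky call up to MAX_ATTEMPTS times, with a delay between tries."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        try:
            return get_issues()  # success: "break loop", continue the workflow
        except Exception:
            if attempt == MAX_ATTEMPTS:
                raise            # "terminate workflow" after the final failure
            sleep(DELAY_SECONDS) # delay step: don't hammer the rate-limited API
```

Note the delay only happens between failed attempts; a call that succeeds first time incurs no waiting at all.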