Intro to managing API limits

Overview

When building Tray.io workflows, it is very likely that you will run into issues caused by the API rate limits attached to certain operations in service connectors.

For example, a particular service connector may have a limit of 100 calls per hour when you are downloading records or running a batch update operation.

Examples

Google Sheets API limits

Or you may be trying to add hundreds of rows to a Google Sheet, and find that your Sheets 'Add rows' step errors with a rate limit response.

If we look at the Google Sheets API rate limits documentation, we see that the limit is 100 requests per 100 seconds per user.

This means that if your workflow loops through and adds 100 records to a Google Sheet in less than 100 seconds, you will hit this rate limit error.
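
At 100 requests per 100 seconds, the budget works out to one request per second on average. As an illustration of the arithmetic (not Tray.io connector code), here is a minimal Python sketch that throttles a row-adding loop to stay inside that quota, assuming a hypothetical add_row function in place of the Sheets 'Add rows' step:

```python
import time

REQUESTS_PER_WINDOW = 100  # Google Sheets quota: 100 requests...
WINDOW_SECONDS = 100       # ...per 100 seconds per user
MIN_INTERVAL = WINDOW_SECONDS / REQUESTS_PER_WINDOW  # = 1 second per request

def add_rows_throttled(rows, add_row):
    """Add rows one at a time without exceeding the per-user quota.

    `add_row` is a hypothetical callable standing in for the
    Sheets 'Add rows' call; swap in your own client code.
    """
    for row in rows:
        started = time.monotonic()
        add_row(row)
        # Sleep off whatever remains of this call's 1-second budget
        elapsed = time.monotonic() - started
        if elapsed < MIN_INTERVAL:
            time.sleep(MIN_INTERVAL - elapsed)
```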

Slack API limits

Slack has a rate limiting page which explains their rate limits. The number of API requests you can make depends on your Web API Tier:

https://api.slack.com/docs/rate-limits

Slack's rate limiting window is per minute, but small amounts of burst behaviour are tolerated. This means that if you are limited to 100 calls per minute, you might be allowed to send 120 in the first minute, provided the second minute only has 80 requests and the overall average stays at 100.
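
Burst tolerance of this kind is commonly modelled as a token bucket: tokens refill at the average allowed rate, and a bucket slightly larger than the per-minute limit absorbs short bursts. Slack does not publish its exact algorithm, so the sketch below is purely illustrative:

```python
import time

class TokenBucket:
    """Illustrative token bucket: refills at `rate` tokens/second and
    holds at most `capacity` tokens (the burst allowance)."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.updated = time.monotonic()

    def try_acquire(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# 100 calls per minute on average, with headroom for a ~120-call burst
bucket = TokenBucket(rate=100 / 60, capacity=120)
```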

Available solutions

The solution here is to set up a system which controls the rate at which these calls are made. There are several options for doing this:

Using a scheduled trigger

Building a queue

A common scenario is that, on a continual basis, too many requests are coming in to a webhook-triggered workflow, such that if you were to process them straight into Google Sheets, Salesforce, etc. you would exceed your limits.

In this case it is possible to use various methods to build a First In, First Out (FIFO) queue.

1 - Using data storage

It is possible to use the Tray.io Data Storage connector to store your records at account level. You can then set up a scheduled workflow to extract these records for processing.

Please see our guide on setting up a data storage queue for guidance on this.
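
Conceptually, the pattern is a list held in shared storage: the webhook-triggered workflow appends incoming records, and the scheduled workflow takes a bounded batch off the front of the list each run. A minimal sketch of that logic, using an in-memory deque in place of the Data Storage connector and a hypothetical process function for the rate-limited call:

```python
from collections import deque

queue = deque()  # stands in for an account-level Data Storage list

def on_webhook(record):
    """Webhook-triggered workflow: enqueue only, no rate-limited calls here."""
    queue.append(record)

def on_schedule(batch_size=50):
    """Scheduled workflow: drain a bounded batch per run, in FIFO order."""
    for _ in range(min(batch_size, len(queue))):
        record = queue.popleft()
        process(record)  # hypothetical call to the rate-limited service

def process(record):
    print("sending", record)
```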

2 - Using the CSV Editor

Another approach is to use the CSV Editor to store your records at account level. You can then set up a scheduled workflow to extract these records using the Get Rows operation, which lets you specify exactly how many rows are extracted for processing each time.

See the tutorial in this section for guidance.

The CSV Editor is likely to present problems if it is fed in the region of 20,000 rows (you may find duplicates and fragmented data appearing), in which case you can use a SQL database or Amazon SQS instead.
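
The scheduled side of the CSV approach boils down to a bounded read from the front of the stored data, followed by deleting what was processed. A rough sketch of that read-then-delete cycle, with a toy class standing in for the CSV Editor's stored rows (the helper names are invented for the example):

```python
class CsvStore:
    """Toy stand-in for rows held in the CSV Editor."""

    def __init__(self):
        self.rows = []

    def get_rows(self, count):
        return self.rows[:count]          # analogous to 'Get Rows'

    def delete_rows(self, count):
        del self.rows[:count]             # drop processed rows

BATCH_SIZE = 50  # rows per scheduled run, sized to stay under the API limit

def scheduled_run(store, send_to_service):
    batch = store.get_rows(BATCH_SIZE)
    for row in batch:
        send_to_service(row)              # the rate-limited call
    store.delete_rows(len(batch))         # keep FIFO order for the next run
```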

3 - Using a SQL database

You can store batches of data in a SQL database instead. Storing data in SQL also means you can perform extra validation, cleaning or enrichment tasks if need be (tutorial coming soon!)
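
In SQL terms the queue is just inserts on the webhook side, and a bounded SELECT ... ORDER BY ... LIMIT plus DELETE on the scheduled side. A minimal sqlite3 illustration (the table and column names are invented for the example):

```python
import sqlite3

conn = sqlite3.connect("queue.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS queue ("
    "id INTEGER PRIMARY KEY AUTOINCREMENT, payload TEXT NOT NULL)"
)

def enqueue(payload: str):
    # Webhook side: a cheap insert, no rate-limited call here
    conn.execute("INSERT INTO queue (payload) VALUES (?)", (payload,))
    conn.commit()

def drain(batch_size: int = 50):
    # Scheduled side: oldest rows first (FIFO), one bounded batch per run
    rows = conn.execute(
        "SELECT id, payload FROM queue ORDER BY id LIMIT ?", (batch_size,)
    ).fetchall()
    for row_id, payload in rows:
        print("processing", payload)  # stand-in for the rate-limited call
        conn.execute("DELETE FROM queue WHERE id = ?", (row_id,))
    conn.commit()
```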

4 - Using an SQS queue

If you want to manage queues of messages and API requests (e.g. you have a high rate of orders or purchases coming through from Shopify), a paid-for solution is to use our Amazon SQS connector in conjunction with a scheduled trigger to control the rate at which you pull and delete messages from the queue. (Note that the CSV method above can be used as an alternative to paying for SQS.)
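
Outside of Tray.io, the same pull-and-delete pattern looks like this with boto3 (the queue URL is a placeholder, and this would run on a schedule rather than in a tight loop):

```python
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"  # placeholder

def scheduled_pull(batch_size: int = 10):
    # SQS caps a single receive at 10 messages, so pull one bounded batch per run
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=min(batch_size, 10),
        WaitTimeSeconds=5,
    )
    for msg in resp.get("Messages", []):
        handle_order(msg["Body"])  # stand-in for the rate-limited processing
        # Delete only after successful processing so failed messages are retried
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])

def handle_order(body: str):
    print("processing", body)
```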

Pulling data and pagination

When using certain operations such as the 'Find Records' operation, you will find there are pagination options which allow you to pull results in batches.

This means you can limit the number of API calls you make: for example, you can retrieve 10 batches of 200 records instead of making 2,000 individual calls, one per record.
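
The shape of the loop is the same whichever connector you use: request a page, collect the results, and stop when the service reports there are no more. A generic sketch, assuming a hypothetical find_records function standing in for a connector's 'Find Records' operation:

```python
def fetch_all(find_records, page_size=200):
    """Paginated retrieval: 10 calls of 200 records instead of 2,000 calls.

    `find_records` is a hypothetical stand-in for a connector's
    'Find Records' operation; it returns (records, has_more).
    """
    page = 1
    results = []
    while True:
        records, has_more = find_records(page=page, page_size=page_size)
        results.extend(records)
        if not has_more:
            return results
        page += 1
```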

Advanced methods

The Delay connector is a good multi-use tool for managing the rate at which requests are made to service APIs.

Delay example 1

The following example runs some checks, using a method similar to 'last runtime' in conjunction with the Delay connector, to make sure that messages to Slack aren't being sent too frequently.
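
In code terms the check is simple: compare the current time against the stored last-send time, and delay if the minimum gap has not yet elapsed. A sketch of that logic (the 60-second gap is an example value, and send_message is a hypothetical Slack send):

```python
import time

MIN_GAP_SECONDS = 60  # example: at most one Slack message per minute
last_sent = 0.0       # stands in for a stored 'last runtime' value

def send_with_gap(send_message, text):
    global last_sent
    wait = MIN_GAP_SECONDS - (time.monotonic() - last_sent)
    if wait > 0:
        time.sleep(wait)   # the Delay connector step
    send_message(text)     # hypothetical Slack send
    last_sent = time.monotonic()
```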

Delay example 2

The following example for Jira shows how we can make effective use of the Delay connector, and also how to use Error Handling to control rates for steps which might regularly return errors.

It sets up a loop which makes up to five attempts to get issue data from Jira:

Key points are:

  1. Retry up to 5 times is set to 'Loop forever'. It is crucial to note that you must always make sure there are conditions which will stop the loop. In this case, if the operation succeeds, Break loop is used to progress to the next stage of the workflow; if the operation continually errors, the workflow is terminated once the loop has run too many times.

  2. For Get Issues of Page, manual error handling is turned on. This prevents the automatic retries Tray.io makes from contributing to your rate limit being exceeded.

  3. The Delay or Terminate step is a boolean which uses $.steps.loop-1.index to check if the count of the loop has reached 5. If so, it terminates the workflow.

  4. If there are still attempts to be made, a delay of 1 minute is added to ensure the requests are not made too quickly in succession (see the sketch below).
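
The control flow of this example maps directly onto a retry loop. A sketch of the same logic, assuming a hypothetical get_issues_page function that raises on error in place of the Jira 'Get Issues of Page' step:

```python
import time

MAX_ATTEMPTS = 5
DELAY_SECONDS = 60

def get_issues_with_retries(get_issues_page):
    for attempt in range(1, MAX_ATTEMPTS + 1):   # 'Loop forever' + counter check
        try:
            return get_issues_page()             # success: 'Break loop'
        except Exception as err:                 # manual error handling path
            if attempt == MAX_ATTEMPTS:
                # 'Delay or Terminate': the loop index reached 5, so terminate
                raise RuntimeError("giving up after 5 attempts") from err
            time.sleep(DELAY_SECONDS)            # the 1-minute Delay step
```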