Using batch update operations

Introduction

Many connectors have batch methods; in Salesforce and Marketo, for instance, you can batch update records such as leads and contacts. In the case of Salesforce, there is a limit of 25 records that can be updated in a single call. The question is: if we start with a list of more than 25 records, how do we efficiently divide it into batches of 25 that can each be processed in a single call?

Note that this example makes extensive use of the Script connector and the Lodash JavaScript library to perform data transformations.
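
As a rough illustration of the batching idea itself: if you already had the full list of records in memory inside a Script step, Lodash's _.chunk could split it into groups of 25 (the records input variable below is hypothetical and only used for illustration):

// Sketch only: split a hypothetical in-memory list of records into batches of 25
exports.step = (input) => _.chunk(input.records, 25);

In the workflow below, however, we avoid holding the whole list in memory by asking the CSV Reader for 25 rows at a time and paginating with a next-page token.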

Workflow example: Pull data from CSV reader and push to Salesforce

Workflow Summary

In this example, we will be using the CSV Reader to retrieve the data to be used in a batch update.

We will be showing how to use the Salesforce Batch update records operation, pulling data from a CSV structured as follows:

A key point about batch update operations - which is common to all services - is that you will need the ID of the record/object being updated. This is shown above, along with the data (phone and city) that you want to update for each record.

Note that the column names in the above example may not match the field names expected by the API of the service you are sending the updates to (Salesforce in this example).

You can 'map' the batch update data to the correct fields by manually creating an override list using a script step similar to the Set field list in the workflow below, which then gets used in the subsequent Key-pair rows and Format update scripts.

The Set field list step creates the correct field list of object_id, Phone and BillingCity.

The complete workflow looks like:

The first section of the workflow shows the basic process of creating a CSV and pulling results from it in the Check execution loop:

  1. Pull CSV file from a source (such as Google Drive)

  2. Use the CSV Editor Create CSV operation to create a CSV that can be accessed in the workflow

  3. Use the CSV Editor Start query operation to begin the process of extracting rows from the CSV

  4. Use the CSV Editor Get query execution operation to check if the query has finished. Once the query has finished the results can be pulled from the created CSV. So we then set up the Paginate CSV loop.

  5. The Paginate CSV loop uses Get Query Results to pull a number of rows from the CSV connector equal to the max allowed by the service API

  6. It then uses Get next page and Set next page to store and retrieve a token which indicates if there are more rows to be pulled from the CSV

  7. It then uses Set field list to set the correct field names expected by the service API

  8. It then uses Key-pair rows to map the individual rows with the field list

  9. It then uses Format update to put the data for each record in the exact format expected by the API

  10. It then uses Batch update to send the formatted batch of records to the service, using the 'batch update'-type operation

  11. It then uses Has next page? to check if the CSV Reader Get Query results step returned a next token

  12. It then uses Set next page to set the next token for the next loop iteration, if it is found

  13. It then uses Break loop if no token is found

1 - Getting CSV query results and a next page token

Please see our CSV Reader documentation for instructions on setting up the first part of the workflow, up to breaking the loop after the Get query execution has returned a state of 'SUCCEEDED'.

Once the CSV has been exported using the Start Query operation, we can return rows by using the Get query results operation and specifying how many rows to fetch, as well as an offset.

By looking at the output schema for this operation, we can see that the operation will return a property called 'NextToken' if there are more results to return. If there is no next page, the value of this property will be Null:

With the data storage helper, we can use this token to drive the pagination of results, as explained below.
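
For illustration only, a page of Get query results output might look something like this (the shape is assumed from the properties used later in this example - RowData, Data and VarCharValue - and the token value is truncated and hypothetical):

{
  "RowData": [
    {
      "Data": [
        { "VarCharValue": "0014J000007k2moQAA" },
        { "VarCharValue": "869-246-0198" },
        { "VarCharValue": "Seattle" }
      ]
    }
  ],
  "NextToken": "ARfZ..."
}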

2 - Set up the basic query results pagination loop

The start of this construction is a loop step, using the operation Loop Forever, which we use to loop through the CSV:

Get query results (CSV Reader)

Inside this loop, we place a CSV reader step, with the operation set to Get query results. The Query execution ID parameter will be set to reference the 'QueryExecutionId' of a previous CSV reader step set to Start query. The next parameter is the maximum number of results to return. As we are sending batches of 25 to Salesforce in this example, we set this field to the value 25. The last parameter is where we set a Next token to retrieve the next page of results on each subsequent request.
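
As a rough sketch, the first two parameters might be filled in along these lines (the step name in the jsonPath is hypothetical - use the name of your own Start query step):

Query execution ID: $.steps.csv-reader-start-query.QueryExecutionId
Maximum number of results: 25

The Next token parameter is populated from data storage, as described in the next step.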

Get next page (data storage)

In order to set a value for this field, we need some way of referencing the token from the previous run of the loop, for each run after the first. In order to accomplish this, we use data storage to store and retrieve the token. First, we add a data storage step with the operation set to Get Value, and set the Key to something informative such as 'token'. The Default Value of this variable will be (null). We place this step just before the CSV reader step. Then in the CSV reader step, we set the value of Next token to a jsonPath referencing the value from this data storage step:
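
For example, the Next token field of the CSV reader step might be set to a jsonPath along these lines (the step name 'storage-1' is hypothetical, and the exact property to reference comes from the data storage step's output schema):

$.steps.storage-1.value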

Has next page? (boolean)

Next, we need to set the value of this stored token. First, we add a boolean condition step after the CSV reader step, to check whether that step has output a token, implying there is a subsequent page of results to fetch. The value of 1st Value will be a jsonPath to the 'NextToken' property output by the Get query results CSV step. We wish to check that the value of this property is not null:
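
As a sketch, and allowing for the exact field labels in your workflow builder, the condition might be configured along these lines (the step name in the jsonPath is hypothetical):

1st Value: $.steps.csv-reader-get-query-results.NextToken
Comparison type: Not equal to
2nd Value: null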

Set next page (TRUE branch)

Inside the TRUE branch of this boolean condition, we use data storage to store the value of 'NextToken' for the next run. Duplicate the data storage step you have already created, and move this duplicated step into the boolean condition. Then, change the operation of this data storage step to Set Value, and then set the value of Value to a jsonPath referencing the 'NextToken' property from the CSV reader step (the same jsonPath that was used in the boolean condition).

Break loop if no token found (FALSE branch)

Inside the FALSE branch of the boolean condition, we need to stop the loop from continuing when there are no more records to retrieve. We achieve this by using the Break Loop connector, and its Break operation, specifying the name of the loop step to break:

3 - Process results and send to a service

We now have the minimum setup necessary for paging results from the CSV reader. The next stage is to add the script and Salesforce steps to this pagination system. The full loop for this section looks like this:

Set field list (script)

Before we process the rows of results we need to set the field list, as expected by the Salesforce API. We can do this using a simple script such as:

exports.step = function() {
  return {
    "salesforce_fields": ["object_id", "Phone", "BillingCity"]
  };
};

Key-pair rows (script)

The next stage is to process these rows to send to a service, in this case Salesforce. The CSV reader outputs each row as a list of values, rather than an object keyed by column name, and so we need to transform the output before it can be used in Salesforce. Given a list of column headers in the correct order, we can transform the data from the CSV reader into rows of key-value pairs. For instance, in our workflow we have a script which outputs a property called salesforce_fields, which is an ordered list of column names. After the CSV reader step to get query results, we set up a script step as follows:

The csvRows variable comes from the CSV reader step's RowData property, and the columnHeaders variable contains our list of ordered column names. The contents of the Script are as follows:

// Maps all 25 rows
exports.step = (input) => _.map(
  input.csvRows,
  (row) => _.reduce(
    // Within each row, we loop through the list of values and assign each value to an object, with the key name being the column name
    row.Data,
    (acc, data, index) => {
      acc[input.columnHeaders[index]] = data.VarCharValue;
      return acc;
    },
    {}
  )
);

This will then output a list of rows such as:
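
[
  {
    "object_id": "0014J000007k2moQAA",
    "Phone": "869-246-0198",
    "BillingCity": "Seattle"
  },
  {
    "object_id": "0014J000006m8EVQAY",
    "Phone": "996-312-0009",
    "BillingCity": "Chicago"
  },
  {
    "object_id": "0014J000006m89GQAQ",
    "Phone": "135-198-2336",
    "BillingCity": "Belfast"
  }
]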

Format update

Note that the column headers we have used are the schema field names that Salesforce expects (if you were to fill in the input panel for the Salesforce connector, these would be the field names that appear in the Input part of the Debug panel when the workflow is run). Hence we can simply map these rows to the schema structure for the Salesforce Batch update records operation. If we add a Salesforce step using the Batch update records operation as a standalone step, run the workflow, and then look at the Input part of the Debug panel, we should see a payload being sent such as the following:

{
  "result": [
    {
      "object_id": "0014J000007k2moQAA",
      "fields": [
        {
          "key": "Phone",
          "value": "869-246-0198"
        },
        {
          "key": "BillingCity",
          "value": "Seattle"
        }
      ]
    },
    {
      "object_id": "0014J000006m8EVQAY",
      "fields": [
        {
          "key": "Phone",
          "value": "996-312-0009"
        },
        {
          "key": "BillingCity",
          "value": "Chicago"
        }
      ]
    },
    {
      "object_id": "0014J000006m89GQAQ",
      "fields": [
        {
          "key": "Phone",
          "value": "135-198-2336"
        },
        {
          "key": "BillingCity",
          "value": "Belfast"
        }
      ]
    }
  ],
  "console": []
}

As we can see, object_id is a special top-level field, but for all other fields, we can put them inside the fields list, where the key is the field name and value is the value to assign to the field. In order to achieve this structure, we place another script step in the workflow just after the previous script step. We define a variable called 'rows', and set its value to the output of the previous script:

Then we set the contents of Script to:

exports.step = (input) => _.map(
  input.rows,
  (row) => ({
    object_id: row.object_id,
    fields: _.reduce(
      _.omitBy(row, (value, key) => key === 'object_id'), // remove the object_id property which we have placed above
      (acc, fieldValue, fieldKey) => {
        acc.push({
          key: fieldKey,
          value: fieldValue
        });
        return acc;
      },
      []
    )
  })
);

This will output the list we need for the batch_update_list property in our Salesforce step.

Batch update (Salesforce)

Now, we can add a Salesforce step to the workflow just after this script step:

This uses the Batch update records operation. Set the value of Record type to the type of object we are updating, 'Account' in this case. Then the value of Batch update list will be a jsonPath referencing the output of the Format update script step.
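
For example, if the Format update script were the third script step in the workflow, that jsonPath might look something like the following (the step name is hypothetical; 'result' is the property the Script connector wraps its return value in, as seen in the Debug payload above):

$.steps.script-3.result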

Tip: In order to reduce your task usage, you could combine both script steps into one.
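
A minimal sketch of such a combined step, assuming the same csvRows and columnHeaders input variables used above, might look like this:

// Sketch only: combines the Key-pair rows and Format update scripts into one step.
// Assumes the same input variables as before: csvRows (the CSV reader's RowData)
// and columnHeaders (the ordered field list from Set field list).
exports.step = (input) => _.map(input.csvRows, (row) => {
  // Build a key/value object for this row from the ordered column headers
  const keyed = _.reduce(row.Data, (acc, data, index) => {
    acc[input.columnHeaders[index]] = data.VarCharValue;
    return acc;
  }, {});

  // Reshape into the structure expected by the Batch update records operation
  return {
    object_id: keyed.object_id,
    fields: _.map(
      _.omitBy(keyed, (value, key) => key === 'object_id'),
      (value, key) => ({ key: key, value: value })
    )
  };
});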