Google BigQuery

A web service that enables interactive analysis of very large datasets


BigQuery is Google's serverless, highly scalable enterprise data warehouse and is designed to make data analysts more productive.


Note: Before authenticating any Google connector, you will need to whitelist the Google authentication app. Please follow the instructions here before continuing with the authentication process.

Once this is done, you can click on the New Authentication button:


You can then make sure your scopes are set correctly and click 'Add authentication':


You will then be able to select your Google account to authenticate with.

Basic Operations list

  • Add table fields
  • Get job
  • Get table
  • Insert rows
  • Insert rows from CSV file
  • List datasets
  • List jobs
  • List projects
  • List tables
  • Run Query

Note on Operations usage

Run Query

When using the Run Query operation, be sure to leave out backticks (`) around table names in your query.

For example a query should be just SELECT * FROM trainee_trials.trainee_candidates

There is no need to enclose trainee_trials.trainee_candidates in backticks.

Prefixing query statements

Depending on the type of query you run, a prefix should be used to set the SQL dialect: #standardSQL (or, in rare cases, #legacySQL).

If you do not use the #standardSQL prefix, the execution will fail.
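For example, a Standard SQL query entered into the Run Query operation could look like the following (the table and field names reuse the trainee example from later in this article):

```sql
#standardSQL
SELECT Name, Stage
FROM trainee_trials.trainee_candidates
WHERE Stage = 'Trial: Scheduled'
```

Note that the prefix goes on its own line at the very start of the query, and the table name is not wrapped in backticks.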

For more details, please refer to the Switching SQL dialects page.

Example - Insert multiple rows to a table

This example will take you through creating some dummy data to input as multiple rows into a BigQuery table.

It assumes that you have already set up a data table in BigQuery, with Fields configured. To follow this example precisely, you should create a table called 'trainee_candidates' with the following fields:


The complete workflow looks like this:


Step 1 - create the dummy data


To create the dummy data add a Script connector and enter something like the following into the Script box, to create an array of 'candidates':

exports.step = function() {
  return {
    "candidates": [
      {"Name": "P Pitstop", "Id": "0345g45gAAR", "Stage": "Trial: Activated"},
      {"Name": "D Dastardly", "Id": "032828d8AUY", "Stage": "Trial: Scheduled"},
      {"Name": "P Pending", "Id": "031228dwAIW", "Stage": "Trial: Scheduled"}
    ]
  };
};

Step 2 - Loop through the data


Then add a loop connector which uses the $.steps.script-1.result.candidates jsonpath to take each candidate one-by-one.
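Conceptually, the Loop connector iterates over the array returned by the Script step. A minimal sketch of that behaviour in plain JavaScript (here `stepResult` stands in for the output of the Script step, which the jsonpath `$.steps.script-1.result.candidates` points at):

```javascript
// Stand-in for the Script step's output ($.steps.script-1.result).
const stepResult = {
  candidates: [
    { Name: "P Pitstop", Id: "0345g45gAAR", Stage: "Trial: Activated" },
    { Name: "D Dastardly", Id: "032828d8AUY", Stage: "Trial: Scheduled" },
    { Name: "P Pending", Id: "031228dwAIW", Stage: "Trial: Scheduled" }
  ]
};

// The Loop connector takes each candidate one-by-one; inside the loop,
// downstream steps see the current item as $.steps.loop-1.value.
const names = [];
for (const candidate of stepResult.candidates) {
  names.push(candidate.Name);
}
```

Each pass of the loop hands one candidate object to the BigQuery step configured in the next section.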

Step 3 - Enter the rows in BigQuery


To add each row to BigQuery, set the Operation as 'Insert Rows'.

Then choose the correct Project id, Dataset id and Table id.

For Rows you will need to repeatedly Add Property to Data and set each property to exactly match the names of the Fields as they are set for your table in BigQuery.

Each Field can then use a jsonpath such as $.steps.loop-1.value.Name to pick up the correct values.
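The row payload that the Insert Rows operation builds for one loop iteration can be sketched like this (`loopValue` stands in for `$.steps.loop-1.value`; the property names are assumptions matching the example table's Fields):

```javascript
// Stand-in for the current loop item ($.steps.loop-1.value).
const loopValue = { Name: "P Pitstop", Id: "0345g45gAAR", Stage: "Trial: Activated" };

// Each property name must exactly match a Field name configured
// on the BigQuery table.
const row = {
  Name: loopValue.Name,   // from $.steps.loop-1.value.Name
  Id: loopValue.Id,       // from $.steps.loop-1.value.Id
  Stage: loopValue.Stage  // from $.steps.loop-1.value.Stage
};
```

If a property name does not match a Field on the table, BigQuery will reject that value, so it is worth double-checking capitalisation.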

Step 4 - Run the workflow and check the result

Once your workflow is complete, click 'Run Workflow Now'.

You should then be able to see that the multiple rows have been added to BigQuery by running a table query:


Bonus - using the script connector to format data

Please see our Script Connector documentation for a similar tutorial to this, which also includes using the Script Connector to format data if cells in your database have any particular formatting requirements that must be adhered to.

For example your table may not accept null or empty values, in which case you can use the example script to change all empty/null values to something standard like "---" or "not given".
