This article describes how to add a new BigQuery database feed or edit an existing one.
Process Overview
There are four stages that need to be completed when adding or editing a database feed:
- Connection
- Import Configuration
- Destination
- Trigger
To find out how to create new and edit existing data feeds, see Data Sources overview.
Connection
The following BigQuery and Cloud Storage roles must be granted before you attempt to make a connection:
- BigQuery Data Owner
- BigQuery Data Viewer
- BigQuery Job User
- BigQuery User
- Storage Admin
- Storage Object Admin
- Storage Object Creator
- Storage Object Viewer
A Cloud Storage bucket must also be created or available for use in the project. For more information, see the BigQuery Access Controls documentation.
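If you want to confirm these prerequisites outside Peak, the following minimal Python sketch (using the google-cloud-storage client) checks that the bucket exists and that your account can write to it. The project and bucket names are placeholders; substitute your own.

```python
# Illustrative check only; Peak does not require this script.
# "my-gcp-project" and "my-staging-bucket" are placeholder names.
from google.cloud import storage

client = storage.Client(project="my-gcp-project")
bucket = client.lookup_bucket("my-staging-bucket")  # returns None if the bucket is missing or not visible

if bucket is None:
    print("Bucket not found - create it or check your Storage roles.")
else:
    # Writing a small test object exercises the Storage Object Creator role.
    bucket.blob("peak-connection-check.txt").upload_from_string("ok")
    print("Bucket is reachable and writable.")
```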
Setting up a connection
When setting up a connection, you can either use an existing connection or create a new one.
If you are using an existing connection, you can leave the configuration parameters as they are or edit them.
To use an existing connection:
- At the Connection stage, from the Select Connection dropdown, choose the required connection.
The dropdown will be empty if no previous connections have been configured.
- Enter the required connection parameters.
See below for details.
- Click SAVE to save the parameters and move to the next step.
To create a new connection:
- From the Connection screen, click NEW CONNECTION.
The Add Connection box appears.
- Enter the required connection name and click AUTHENTICATE.
- Sign in to the Google account that enables Peak to access Google BigQuery.
An OAuth consent screen appears.
- Click Allow.
Upon authentication, you will be returned to the Connection stage with the connection name filled out. A quick way to confirm this account's BigQuery access outside Peak is sketched after these steps.
- Click NEXT to continue to Import Configuration.
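If you want to verify outside Peak that the authenticated account can run BigQuery jobs (the BigQuery Job User role), a minimal sketch using the google-cloud-bigquery Python client is shown below. The project ID is a placeholder.

```python
# Illustrative check only; Peak performs its own validation during authentication.
# "my-gcp-project" is a placeholder project ID.
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")
rows = client.query("SELECT 1 AS connection_check").result()  # runs a trivial query job
print(next(iter(rows)).connection_check)  # prints 1 if the job ran successfully
```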
Import Configuration
The Import Configuration screen enables you to specify:
- The BigQuery project, bucket, dataset, and table from which the connector will retrieve data.
- The way in which the feed will update data stored by Peak.
A bucket name must be specified for the connector to work.
To complete the BigQuery Import Configuration stage:
- Select the required details from these drop-down menus:
- Select GBQ Project
The top-level container that holds your datasets.
- Bucket
A bucket contains objects, which can be accessed by their own methods.
- Dataset
A dataset represents a collection of tables.
- Table
Tables exist within datasets. Select the table that will be used to fetch the data.
A sketch of browsing this project, dataset, and table hierarchy outside Peak appears after these steps.
Note: Views are not supported.
- Click NEXT
A preview of the required data is generated.
- Select the required details from these drop-down menus and click NEXT:
- Feed Load Type
- Truncate and insert
- Incremental
- Update and insert (upsert)
For more information, see Data feed load types. A brief sketch of what each load type means for the stored data appears after these steps.
- Primary Key
The primary key is only mandatory for an upsert feed.
- Fetch historical data from
Enter the date from which the data needs to be fetched from BigQuery.
- Feed Name
Enter a suitable name for the feed.
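The dropdowns in the first step mirror the BigQuery hierarchy of project, dataset, and table. The sketch below, which assumes the google-cloud-bigquery Python client and a placeholder project ID, shows the same hierarchy being browsed outside Peak; views are filtered out because the connector does not support them.

```python
# Illustrative sketch of the project -> dataset -> table hierarchy.
# "my-gcp-project" is a placeholder; credentials are assumed to be configured.
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")

for dataset in client.list_datasets():                    # datasets in the project
    print("Dataset:", dataset.dataset_id)
    for table in client.list_tables(dataset.dataset_id):  # tables in the dataset
        if table.table_type == "TABLE":                   # views are not supported by the connector
            print("  Table:", table.table_id)
```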
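The pandas sketch below is only an illustration of what each feed load type means for the data stored by Peak; Peak applies the chosen load type internally, and the table and column names here are made up.

```python
# Illustration only - Peak applies the chosen load type for you.
# "order_id" and the sample rows are hypothetical.
import pandas as pd

existing = pd.DataFrame({"order_id": [1, 2], "amount": [10.0, 20.0]})  # data already stored by Peak
incoming = pd.DataFrame({"order_id": [2, 3], "amount": [25.0, 30.0]})  # data fetched from BigQuery

# Truncate and insert: the stored table is replaced by the new extract.
truncate_and_insert = incoming.copy()

# Incremental: new rows are appended; existing rows are left untouched.
incremental = pd.concat([existing, incoming], ignore_index=True)

# Update and insert (upsert): rows matching on the primary key are replaced,
# new rows are added - this is why a primary key is mandatory for upserts.
upsert = pd.concat(
    [existing[~existing["order_id"].isin(incoming["order_id"])], incoming],
    ignore_index=True,
)
```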
Destination
- The Destination stage enables you to choose where your data will be ingested.
- If multiple options are available, you can choose more than one destination: Redshift users can select Redshift and S3 (external table), and Snowflake users can select Snowflake and S3 (external table).
- For more information, see Choosing a destination for a data connector.
- The generated schema can be modified if needed.
Setting a trigger
For a guide to setting triggers for your data feeds, see How to create a trigger for a data feed.