This article describes how to add a new Redshift database feed or edit an existing one.
Process Overview
There are four stages that need to be completed when adding or editing a database feed:
- Connection
- Import Configuration
- Destination
- Trigger
For more information on data feeds, see Data Sources overview.
Connection
When setting up a connection, you can either use one that has been preconfigured or create a new one.
If you are using a preconfigured connection, you can leave the configuration parameters as they are or edit them.
To use a preconfigured connection:
- At the Connection stage, from the Select Connection dropdown, choose the required connection.
  The dropdown will be empty if no connections have been configured previously.
- Enter the required connection parameters. See below for details.
- Click SAVE to save the parameters and move to the next step.
To create a new connection:
- At the Connection stage, click NEW CONNECTION.
- Enter the required connection parameters. See below for details.
- Click SAVE to save the parameters and move to the next step.
Connection Parameters
To make a connection to your chosen Redshift database or edit the details of an existing connection, complete these fields:
- Connection Name
  Enter a connection name.
- Database Host
  Enter the database host name, in the format [name].[id].[region].redshift.amazonaws.com
- Database Port
  This is usually 5439.
- Database Username
  See the Redshift credentials on your AWS Console.
- Database Password
  See the Redshift credentials on your AWS Console.
- Database Name
  See the Redshift credentials on your AWS Console.
Figure: Connection parameters.
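If you want to sanity-check these values outside Peak, you can attempt a direct connection from any SQL client. The sketch below uses Python's psycopg2 driver; every value shown is a placeholder to replace with your own cluster details.

```python
# Minimal sketch: checking Redshift connection parameters with psycopg2.
# All values are placeholders; substitute your own cluster details.
import psycopg2

conn = psycopg2.connect(
    host="examplecluster.abc123xyz789.eu-west-1.redshift.amazonaws.com",  # Database Host
    port=5439,               # Database Port (Redshift default)
    user="awsuser",          # Database Username
    password="my_password",  # Database Password
    dbname="dev",            # Database Name
)
with conn.cursor() as cur:
    cur.execute("SELECT 1;")  # trivial query to confirm the connection works
    print(cur.fetchone())
conn.close()
```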
Finding your Redshift credentials
- From your AWS Console, go to Redshift.
- From Clusters, select your cluster.
The Database Host is the endpoint listed at the top of the page.
The Database Username and Database Name are listed under Cluster Database Properties.
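If you prefer to look these details up programmatically, the AWS SDK exposes the same information (the master password is not retrievable via the API). Below is a minimal sketch using Python's boto3, assuming a placeholder cluster identifier and region.

```python
# Minimal sketch: looking up Redshift connection details with boto3.
# "examplecluster" and the region are placeholders.
import boto3

client = boto3.client("redshift", region_name="eu-west-1")
cluster = client.describe_clusters(ClusterIdentifier="examplecluster")["Clusters"][0]

print(cluster["Endpoint"]["Address"])  # Database Host
print(cluster["Endpoint"]["Port"])     # Database Port (usually 5439)
print(cluster["MasterUsername"])       # Database Username
print(cluster["DBName"])               # Database Name
```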
Testing your connection parameters
- Once you have completed your connection parameters, click the TEST button to make sure they are correct.
  If the test fails, hover over the 'i' icon for details.
- If you need to connect through an SSH tunnel, tick the Connect through SSH box and complete the required details (see below).
- Click SAVE to save the parameters and move to the next step.
Note: Before testing the connection, a set of IPs needs to be whitelisted to allow Peak to access the database.
Figure: Edit Connection.
Connecting through SSH
If required, it is possible to make a connection through an encrypted SSH tunnel.
To do this, tick the Connect through SSH box and complete the required details.
Connection Parameters
- SSH Host or IP
  Enter the SSH host name or IP address.
- SSH Port
  This is usually 22.
- SSH User
  Enter the SSH server username.
- SSH Password (optional)
  If used, enter the SSH server password.
- Public key (optional)
  If a password is not used, a public key must be used instead: copy the public key and add it to the SSH server.
Figure: SSH connection parameters (with password).
Figure: SSH connection parameters (with public key).
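For reference, the same pattern can be reproduced outside Peak by forwarding a local port to the Redshift endpoint and pointing the database connection at that port. Below is a minimal sketch using Python's third-party sshtunnel package together with psycopg2; all hosts and credentials are placeholders.

```python
# Minimal sketch: connecting to Redshift through an SSH tunnel.
# Uses the third-party sshtunnel package; all values are placeholders.
import psycopg2
from sshtunnel import SSHTunnelForwarder

with SSHTunnelForwarder(
    ("ssh.example.com", 22),      # SSH Host or IP, SSH Port
    ssh_username="tunnel_user",   # SSH User
    ssh_password="ssh_password",  # SSH Password (or pass ssh_pkey= for key auth)
    remote_bind_address=(
        "examplecluster.abc123xyz789.eu-west-1.redshift.amazonaws.com",
        5439,
    ),
) as tunnel:
    conn = psycopg2.connect(
        host="127.0.0.1",             # traffic enters the tunnel locally
        port=tunnel.local_bind_port,  # forwarded to the Redshift endpoint
        user="awsuser",
        password="my_password",
        dbname="dev",
    )
    with conn.cursor() as cur:
        cur.execute("SELECT 1;")
        print(cur.fetchone())
    conn.close()
```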
Import Configuration
Once a connection has been established and a table selected, the feed has to be configured so that data is updated and filtered in the most suitable way.
Selecting a table
Once a table is selected, you can:
- Preview the table data.
- Configure the load data type:
- Truncate and insert
- Incremental
- Update and insert (upsert)
For more information, see Data feed load types. A sketch of the upsert pattern follows this list.
- Filter fields by operator and value.
- Add a primary key.
  This is only mandatory for an upsert feed.
- Name the feed.
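To illustrate why a primary key is mandatory for an upsert feed, the sketch below shows the classic Redshift merge pattern: stage the new rows, delete existing rows that match on the key, then insert the staged batch. This is an illustration of the general technique, not Peak's internal implementation, and all table and column names are placeholders.

```python
# Minimal sketch of a Redshift upsert (update and insert) via a staging table.
# Illustrative only; "orders", "orders_staging" and "order_id" are placeholders,
# and this is not Peak's internal implementation.
import psycopg2

conn = psycopg2.connect(host="<host>", port=5439, user="<user>",
                        password="<password>", dbname="<dbname>")
with conn.cursor() as cur:
    # Delete rows whose primary key appears in the staged batch, then insert
    # the batch; running both in one transaction keeps readers consistent.
    cur.execute("DELETE FROM orders USING orders_staging "
                "WHERE orders.order_id = orders_staging.order_id;")
    cur.execute("INSERT INTO orders SELECT * FROM orders_staging;")
conn.commit()
conn.close()
```

Without a primary key there is no way to tell which staged rows replace which existing rows, which is why the truncate and insert and incremental load types do not require one.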
Naming the feed
Follow these guidelines when naming a feed:
- The name should be meaningful.
- Only use alphanumeric characters and underscores.
- It must start with a letter.
- It must not end with an underscore.
- Use up to 50 characters.
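For illustration, these guidelines can be captured in a single regular expression. The helper below is hypothetical, not Peak's actual validator.

```python
# Minimal sketch: validating a feed name against the guidelines above.
# Hypothetical helper, not Peak's actual validation logic.
import re

# Starts with a letter, uses only alphanumerics and underscores,
# does not end with an underscore, and is at most 50 characters long.
FEED_NAME_RE = re.compile(r"^[A-Za-z](?:[A-Za-z0-9_]{0,48}[A-Za-z0-9])?$")

def is_valid_feed_name(name: str) -> bool:
    return bool(FEED_NAME_RE.fullmatch(name))

assert is_valid_feed_name("customer_orders_2024")
assert not is_valid_feed_name("2024_orders")  # must start with a letter
assert not is_valid_feed_name("orders_")      # must not end with an underscore
```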
Figure: Filter fields by operator and value.
Destination
- The Destination stage enables you to choose where your data will be ingested.
- If multiple options are available, you can choose more than one destination: Redshift users can select both Redshift and S3 (external table), and Snowflake users can select both Snowflake and S3 (external table).
- For more information, see Choosing a destination for a data connector.
- The generated schema can be modified if needed.
Triggers
For a guide to setting triggers for your data feeds, see How to create a trigger for a data feed.