Data Bridge makes it possible for Peak to connect directly to data warehouses that you own and manage. This means that your data does not leave your infrastructure while it is being processed and analysed in Peak, and you retain full control over how Peak accesses your data.

This article explains how to connect your Snowflake data warehouse using Peak’s Data Bridge and what to be aware of when using Snowflake with different platform features.


Contents

  • Onboarding process
  • Preparing your Snowflake account to work with Peak
  • Providing Peak with your Snowflake data warehouse details
  • Using Snowflake with Peak

Onboarding process 

Your Peak representative will assist you with this process and will provide you with all of the information you’ll need to get your Snowflake data warehouse working with Peak.

These are the steps that you’ll complete:

  • Prepare your Snowflake account and database so that Peak can have read and write access to specific schemas.
  • Share your Snowflake data warehouse details with the Peak onboarding team.


Preparing your Snowflake account to work with Peak

Your Snowflake account should be configured so that Peak can read raw data from your storage schemas and read/write to a specific schema for staging, transforming and publishing data.

You will need to create a user role for Peak in your Snowflake account that has the following access permissions:

  • Read access to the schemas where your raw data is stored.
  • Read/write access to a schema that will be used by Peak to process your data.


Example configuration
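
The statements below are a minimal sketch of this setup. All object names (PEAK_ROLE, PEAK_USER, PEAK_WH, ANALYTICS_DB, and the RAW and PEAK_STAGE schemas) are hypothetical placeholders; substitute your own names and adjust the privileges to match your security policies.

    -- Create a dedicated role and user for Peak (all names are placeholders)
    CREATE ROLE PEAK_ROLE;
    CREATE USER PEAK_USER
        PASSWORD = '<strong-password>'
        DEFAULT_ROLE = PEAK_ROLE
        DEFAULT_WAREHOUSE = PEAK_WH;
    GRANT ROLE PEAK_ROLE TO USER PEAK_USER;

    -- Allow the role to use a warehouse and the database
    GRANT USAGE ON WAREHOUSE PEAK_WH TO ROLE PEAK_ROLE;
    GRANT USAGE ON DATABASE ANALYTICS_DB TO ROLE PEAK_ROLE;

    -- Read access to the schema holding your raw data
    GRANT USAGE ON SCHEMA ANALYTICS_DB.RAW TO ROLE PEAK_ROLE;
    GRANT SELECT ON ALL TABLES IN SCHEMA ANALYTICS_DB.RAW TO ROLE PEAK_ROLE;
    GRANT SELECT ON FUTURE TABLES IN SCHEMA ANALYTICS_DB.RAW TO ROLE PEAK_ROLE;

    -- Read/write access to the schema Peak uses to stage, transform and publish data
    GRANT ALL PRIVILEGES ON SCHEMA ANALYTICS_DB.PEAK_STAGE TO ROLE PEAK_ROLE;
    GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA ANALYTICS_DB.PEAK_STAGE TO ROLE PEAK_ROLE;
    GRANT ALL PRIVILEGES ON FUTURE TABLES IN SCHEMA ANALYTICS_DB.PEAK_STAGE TO ROLE PEAK_ROLE;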


Providing Peak with your Snowflake data warehouse details

Give your Peak representative the following account details for your Snowflake data warehouse:

  • Account ID 
  • Database name
  • Password
  • Schema name
    Peak users will need to have read/write access on this schema.
    See the Example configuration section above for an example SQL statement.
  • Default user name
  • Warehouse name
  • Default role
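
If you are unsure of any of these values, Snowflake's built-in context functions can help you look most of them up from a worksheet. A quick sketch; run it as the user and role that Peak will use:

    -- Returns the current account, user, role, warehouse, database and schema
    SELECT
        CURRENT_ACCOUNT()   AS account_id,
        CURRENT_USER()      AS user_name,
        CURRENT_ROLE()      AS role_name,
        CURRENT_WAREHOUSE() AS warehouse_name,
        CURRENT_DATABASE()  AS database_name,
        CURRENT_SCHEMA()    AS schema_name;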



Using Snowflake with Peak

Once your Peak organization is fully configured to use your Snowflake data warehouse, you will need to be aware of how different areas of the platform interact with it. 


Feeds

Incremental load type

Snowflake only supports an incremental load type.
This means that data will be appended to the destination table, along with the following audit columns:

  • peakauditcreatedate
  • feed-run-id

For more information, see the article on configuring feeds.
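
Because each feed run appends rows rather than replacing them, downstream queries often need to filter on these audit columns. A minimal sketch, assuming a hypothetical destination table PEAK_STAGE.ORDERS (the column names are double-quoted here in case they were created in lower case):

    -- Select only the rows loaded by the most recent feed run
    SELECT *
    FROM PEAK_STAGE.ORDERS
    WHERE "peakauditcreatedate" = (
        SELECT MAX("peakauditcreatedate") FROM PEAK_STAGE.ORDERS
    );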


SQL Explorer

Snowflake uses standard (case-sensitive) SQL semantics. This means that object names and data in Snowflake schemas are treated as case sensitive.

Snowflake automatically converts unquoted, case-insensitive identifiers to uppercase, so any unquoted object names will appear in uppercase in the schema.

Snowflake applies the following rules for storing identifiers (at creation/definition time) and resolving them (in queries and other SQL statements):

  • When an identifier is unquoted, it is stored and resolved in uppercase.
  • When an identifier is double-quoted, it is stored and resolved exactly as entered, including case.

For more information, see this Snowflake document: Identifier Resolution.
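
These rules are easy to see in practice. A short illustration, using arbitrary example table names:

    -- Unquoted identifiers are stored and resolved in uppercase
    CREATE TABLE customers (id INT);       -- stored as CUSTOMERS
    SELECT id FROM customers;              -- resolves to CUSTOMERS
    SELECT id FROM CUSTOMERS;              -- also resolves to CUSTOMERS

    -- Double-quoted identifiers are stored and resolved exactly as entered
    CREATE TABLE "Customers" (id INT);     -- stored as Customers, distinct from CUSTOMERS
    SELECT id FROM "Customers";            -- resolves to Customers
    SELECT id FROM Customers;              -- unquoted, so this resolves to CUSTOMERS instead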


Image Management

When creating images for your Workspaces, Workflows and Web Apps, you will need to use the relevant Snowflake connectors, drivers and connection parameters.

Connection parameters, such as the username and password, should be added as environment variables and prefixed with SNOWFLAKE_.

For more information, see the following Snowflake documents: Snowflake ODBC Driver and Snowflake Connector for Python.


Dashboards

When you are creating charts, Snowflake has specific requirements for how you write your queries:

  • Column name case
    If column names use lower case, make sure that they are wrapped in double quotes in your queries and that you define an alias for each quoted column. For an example, see the Example query section below.
  • Do not use stars (*) in queries
    If lower-case column names are used, queries that select columns with a star will fail.


Example query
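
A minimal sketch of a chart query following these rules, assuming a hypothetical table PEAK_STAGE.ORDERS with lower-case column names "order_date" and "revenue":

    -- Lower-case column names are double-quoted and given explicit aliases,
    -- and columns are listed individually instead of being selected with a star (*)
    SELECT
        "order_date"   AS order_date,
        SUM("revenue") AS total_revenue
    FROM PEAK_STAGE.ORDERS
    GROUP BY "order_date"
    ORDER BY "order_date";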