The Data Sources function provides a range of ready-to-use Connectors for establishing data feeds between Peak and external sources of data. Once a feed is configured, you can ingest and process data on a regular basis.
This article introduces the functions of Data Sources and what they enable you to do.


Connectors and data feeds

From Data Sources you can create data feeds using a wide range of Connectors and then manage them from the Feeds screen.


What are Connectors?

Connectors enable you to ingest data into Peak from different Data Sources.

To use one of Peak’s Connectors to create a feed, go to Dock > Data Sources and click the ADD FEED button.

The Connectors are categorized into Databases, Applications, and File Storage.

Databases

Peak can get data directly from these types of databases:

  • Redshift

  • Snowflake

  • BigQuery

  • PostgreSQL

  • MSSQL

  • MySQL

  • Oracle
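When you create a feed from any of these databases, the Connector asks for standard connection details (host, port, database name, username, password). As an illustration only, these are the same details that make up an ordinary connection string; the function and values below are hypothetical examples, not part of Peak:

```python
def postgres_dsn(host: str, port: int, database: str, user: str, password: str) -> str:
    # Assemble a standard PostgreSQL connection string from the same
    # details a database Connector asks for during the Connection stage.
    return f"postgresql://{user}:{password}@{host}:{port}/{database}"

# Hypothetical values for illustration:
print(postgres_dsn("db.example.com", 5432, "sales", "peak_reader", "s3cret"))
# prints: postgresql://peak_reader:s3cret@db.example.com:5432/sales
```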


Applications

Data can be pulled directly from online applications; it is queried and then saved to your organization's data lake.


File storage

Data can be pulled from files uploaded to your organization’s data lake.

This is the easiest way to ingest data into Peak and can be done via a simple drag-and-drop, an FTP/SFTP upload, or a signed URL.
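As an illustration of the signed-URL route, the sketch below prepares (but does not send) an HTTP PUT that uploads file bytes to a signed URL. The URL and file contents are placeholders, and the exact request Peak's signed URLs expect may differ:

```python
import urllib.request

def build_signed_url_upload(signed_url: str, data: bytes) -> urllib.request.Request:
    # Prepare an HTTP PUT request that would upload raw file bytes
    # to a pre-signed URL. Sending it would be a separate step:
    # urllib.request.urlopen(req)
    return urllib.request.Request(
        signed_url,
        data=data,
        method="PUT",
        headers={"Content-Type": "application/octet-stream"},
    )

# Hypothetical signed URL and CSV payload for illustration:
req = build_signed_url_upload(
    "https://example.com/upload?signature=abc123",
    b"id,value\n1,42\n",
)
```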


What are data feeds?

A data feed connects to a Data Source, makes a copy of its data, and then ingests the copied data into the data warehouse of your Peak organization. This process can be automated with a Trigger, or run manually by a user.

Once a feed has been set up, you can manage it from the Data Sources screen.


How are data feeds configured?

Once you have chosen a Connector, Peak guides you through a configuration process comprising the following stages:

  • Connection
    This is used to specify exactly where the Data Source is located and any credentials that will be required to access it, for example, hostnames, usernames, and passwords.

  • Import Configuration
    This is used when configuring feeds from a database or application. It enables you to select specific database tables or data that will be ingested by the feed.

  • Destination
    The destinations available depend on the data warehouse you use. For example, Redshift users see Redshift and S3 as options, while Snowflake users see Snowflake and S3.

  • Trigger
    Data feeds can be triggered to ingest data in several different ways:

    • Schedule
      The feed runs at a user-specified interval (for example, every x days or hours).

    • Run once
      The feed runs only when the user presses "Run".

    • Webhook
      The feed is triggered to run from an API call using a Webhook URL.
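As a sketch of the Webhook trigger, the snippet below prepares (but does not send) the POST request an external system might make to a feed's Webhook URL. The URL and empty JSON payload are placeholders; the exact request format Peak expects may differ:

```python
import json
import urllib.request

def build_webhook_trigger(webhook_url: str) -> urllib.request.Request:
    # Prepare a POST request that would trigger a feed run via its
    # Webhook URL. Firing it would be: urllib.request.urlopen(req)
    return urllib.request.Request(
        webhook_url,
        data=json.dumps({}).encode(),
        method="POST",
        headers={"Content-Type": "application/json"},
    )

# Hypothetical Webhook URL for illustration:
req = build_webhook_trigger("https://example.com/hooks/feed-123")
```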

For more information on how to configure each of the different types of Connector, see the related set of articles for that Connector.



Understanding the Data Sources screens

The Feeds screen enables you to view or edit the existing data feeds that have been configured for your organization and add new ones.

The screen shows the time and status of each data feed run, as well as the next scheduled run. You can also run, pause, tag, edit, and delete feeds from here.


Feed functions

Hover over a feed to access the following functions:

  • Run the feed now.

  • Pause the feed schedule.

  • Manage tags for the feed. Only alphanumeric characters are allowed; use the Tab or Enter key to separate values.

  • Edit the feed.

  • Resume the feed schedule after a pause.


Filtering your feeds

If you have a lot of feeds, you can use the filter function to narrow them down and find the one you need.



The following filters are available:

  • Feed Status

    • Active

    • Paused

  • Trigger Type

    • Schedule

    • Webhook

    • Run Once (Manual)

    • Run Once (Schedule)

  • Last Run Status

    • Running

    • Failed

    • Success

    • No new data

  • Tags

    • Shows custom tags that have been applied to your feeds.

Monitoring feed activity

Once you have configured your data feeds, you can monitor how each one is working by going to Dock > Data Sources and clicking on the feed that you want to view.

Logs tab

This tab lets you quickly see how many rows of data have been successfully loaded into your data warehouse.

It also shows detailed logs for each occasion the feed has been run.


Clicking the Browse file icon in the detailed logs opens File Manager at the folder containing the file(s) associated with that particular feed run.



Info tab

The Info tab provides you with basic information about the feed.

Debugging feed run failures


To debug a failed feed run, look at the errors corresponding to that run in the Detailed Log section. Clicking the Detailed Log entry for a particular feed run shows the details of that run.

If the run failed with an error, the error details are displayed.

You can also get details of all the individual failed records by downloading the STL load error files. To do this, click the download icon next to the Error details.