Supply Chain & Retail Solutions user guide
Configuring a Redshift connector for data extraction
A Redshift database connector extracts specific datasets from your Redshift clusters into Peak for detailed analysis. This connector enables targeted data extraction, distinct from connecting your entire Redshift data warehouse through Data Bridge.
For connecting your Redshift data warehouse to Data Bridge, see Connecting a Redshift data warehouse (public connectivity) or Connecting a Redshift data warehouse (private connectivity).
Prerequisites
Before configuring a Redshift connector feed, ensure you meet the following requirements:
- You have access to Peak with permissions to configure Data Sources.
- You have Redshift cluster credentials (host, port, username, password, database name).
- Peak IP addresses have been added to your Redshift cluster's security group inbound rules before testing the connection.
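Before running Peak's connection test, you can confirm that the security group change took effect with a quick TCP reachability check from a machine inside an allowed network. This is an illustrative sketch, not part of Peak; the endpoint below is a placeholder.

```python
import socket

def can_reach(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder endpoint -- substitute your cluster's host and port.
print(can_reach("my-cluster.abc123xyz.eu-west-1.redshift.amazonaws.com", 5439))
```

Note that this checks reachability from the machine running the script, not from Peak's IP addresses, so it can only rule out basic networking problems.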
Configuring the connection
- In Peak, open Manage and select Data Sources.
- Select Add feed and choose the Amazon Redshift connector.
- At the Connection stage, either select an existing connection from the dropdown or select New connection.
- Enter the connection parameters. See Connection parameters.
- Select Test to validate the connection. If the test fails, hover over the info icon for details.
- Select Save and proceed to the next stage.
Connection parameters
| Parameter | Description |
|---|---|
| Connection name | A name for this connection. |
| Database host | The Redshift cluster endpoint. Format: [name].[id].[region].redshift.amazonaws.com |
| Database port | The port your cluster uses (typically 5439). |
| Database username | The username for database access. |
| Database password | The password for database access. |
| Database name | The name of the database. |
To find your credentials, go to Redshift in the AWS Console, open Clusters, and select your cluster. The host endpoint is at the top of the page. The database username and database name are listed under Cluster database properties.
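If you are scripting connection setup, the endpoint shape shown in the table above can be sanity-checked before saving. This is a rough sketch against the documented format, with a hypothetical endpoint for illustration.

```python
import re

# Matches the documented endpoint shape:
# [name].[id].[region].redshift.amazonaws.com
ENDPOINT_RE = re.compile(
    r"^[a-z0-9-]+\.[a-z0-9]+\.[a-z0-9-]+\.redshift\.amazonaws\.com$"
)

def looks_like_redshift_endpoint(host: str) -> bool:
    """Return True if host matches the expected Redshift endpoint format."""
    return ENDPOINT_RE.match(host) is not None

# Hypothetical endpoint for illustration.
print(looks_like_redshift_endpoint(
    "analytics.abc123xyz456.eu-west-1.redshift.amazonaws.com"
))
```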
Connecting through SSH
If your cluster requires an SSH tunnel, select Connect through SSH and enter the following:
| Parameter | Description |
|---|---|
| SSH host or IP | The SSH server host or IP address. |
| SSH port | The SSH port (typically 22). |
| SSH user | The SSH server username. |
| SSH password | Optional. The SSH server password. |
| Public key | Required if no password is used. Copy the public key and add it to the SSH server. |
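To troubleshoot tunnel settings outside Peak, the equivalent tunnel can be reproduced with OpenSSH local port forwarding (`ssh -L`). The sketch below only builds the command; all hosts, ports, and usernames are placeholders.

```python
def ssh_tunnel_command(ssh_user: str, ssh_host: str, ssh_port: int,
                       redshift_host: str, redshift_port: int = 5439,
                       local_port: int = 5439) -> list[str]:
    """Build an OpenSSH command that forwards local_port to the cluster
    through the SSH server (ssh -L local_port:remote_host:remote_port)."""
    return [
        "ssh", "-N",                      # no remote command, forwarding only
        "-p", str(ssh_port),
        "-L", f"{local_port}:{redshift_host}:{redshift_port}",
        f"{ssh_user}@{ssh_host}",
    ]

# Placeholder values for illustration; pass the list to subprocess.run to start it.
print(" ".join(ssh_tunnel_command(
    "tunneluser", "bastion.example.com", 22,
    "my-cluster.abc123xyz.eu-west-1.redshift.amazonaws.com",
)))
```

With the tunnel running, a client on the same machine would connect to `localhost` on the chosen local port instead of the cluster endpoint.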
Configuring import settings
- Select the database table to ingest and preview the data.
- Select a load type: Truncate and insert, Incremental, or Upsert. See Load types.
- Optionally, add field filters by operator and value.
- Add a primary key (required when the load type is Upsert).
- Enter a feed name following these rules:
  - Use only alphanumeric characters and underscores.
  - Must start with a letter.
  - Must not end with an underscore.
  - Maximum 50 characters.
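The three load types differ in how incoming rows are combined with the rows already in the destination. A rough illustration of the semantics, using in-memory rows keyed by a primary key (this is a conceptual sketch, not Peak's implementation):

```python
def truncate_and_insert(existing, incoming):
    """Replace the destination contents with the incoming batch."""
    return list(incoming)

def incremental(existing, incoming):
    """Append the incoming batch; existing rows are left untouched."""
    return existing + list(incoming)

def upsert(existing, incoming, key):
    """Insert new rows and update rows whose primary key already exists."""
    merged = {row[key]: row for row in existing}
    for row in incoming:
        merged[row[key]] = row        # overwrite on primary-key collision
    return list(merged.values())

old = [{"id": 1, "qty": 5}, {"id": 2, "qty": 3}]
new = [{"id": 2, "qty": 9}, {"id": 3, "qty": 1}]

print(upsert(old, new, "id"))
# id 1 is kept, id 2 is updated to qty 9, id 3 is inserted
```

This is why a primary key is mandatory for Upsert: without it there is no way to decide whether an incoming row updates an existing one or inserts a new one.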
Configuring the destination
Select where your data will be ingested. Available options depend on your configured data warehouse. See Destination options.
Configuring the trigger
Set up how and when the feed runs. See Triggers and watchers.
Result
Peak creates the Redshift connector feed and runs it according to the selected trigger. You can monitor feed runs and troubleshoot failures from Manage > Data Sources.