
Supply Chain & Retail Solutions user guide
Connecting a Redshift data warehouse (private connectivity)
Connecting your Redshift cluster via private connectivity uses AWS PrivateLink to provide secure, isolated network access without exposing traffic to the public internet. This method is more secure than public connectivity but requires specific Redshift cluster configuration before you begin.
Prerequisites
Before connecting a Redshift data warehouse via private connectivity, ensure your Redshift cluster meets all of the following requirements:
- RA3 node type — cluster relocation is only available on RA3 nodes.
- Multi-subnet — the cluster must have subnets in at least two different availability zones.
- Cluster relocation enabled — allows the cluster to be moved to another availability zone without data loss.
- Enhanced VPC routing enabled — ensures COPY and UNLOAD traffic does not use the public internet.
- A data lake is already configured in Peak in the same region as your Redshift cluster.
- An S3 gateway endpoint must be created in the same region as the Redshift cluster. See Creating an S3 gateway endpoint.
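The cluster-side prerequisites above can be sanity-checked from the output of `aws redshift describe-clusters`. A minimal sketch, assuming the response field names `NodeType`, `AvailabilityZoneRelocationStatus`, and `EnhancedVpcRouting` (verify these against your AWS CLI/SDK version), with the subnet availability zones gathered separately from the cluster's subnet group:

```python
# Sketch of a prerequisite check for private connectivity. Field names are
# assumptions based on the describe-clusters response shape; confirm before use.

def check_prerequisites(cluster: dict, subnet_azs: list) -> list:
    """Return a list of unmet prerequisites (an empty list means all pass)."""
    problems = []
    if not cluster.get("NodeType", "").startswith("ra3"):
        problems.append("node type is not RA3")
    if cluster.get("AvailabilityZoneRelocationStatus") != "enabled":
        problems.append("cluster relocation is not enabled")
    if not cluster.get("EnhancedVpcRouting"):
        problems.append("enhanced VPC routing is not enabled")
    if len(set(subnet_azs)) < 2:
        problems.append("subnets span fewer than two availability zones")
    return problems

# Example: a cluster that meets all requirements.
cluster = {
    "NodeType": "ra3.xlplus",
    "AvailabilityZoneRelocationStatus": "enabled",
    "EnhancedVpcRouting": True,
}
print(check_prerequisites(cluster, ["eu-west-1a", "eu-west-1b"]))  # → []
```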
Steps
- In Peak, open Manage and select Data Bridge.
- Select Add data warehouse.
- Enter a unique name for the connection and select Amazon Redshift.
  - The name must be unique to your Peak organization.
  - Use only alphanumeric characters and underscores. No spaces or other special characters.
  - Minimum 3 characters, maximum 40 characters.
  - Must start and end with an alphanumeric character.
  - The name cannot be changed after the connection has been set up.
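The naming rules above can be captured in a single regular expression. An illustrative sketch (uniqueness within your organization can only be verified by Peak itself):

```python
import re

# 3-40 characters, alphanumerics and underscores only, and the first and
# last characters must be alphanumeric.
NAME_RE = re.compile(r"^[A-Za-z0-9][A-Za-z0-9_]{1,38}[A-Za-z0-9]$")

def is_valid_connection_name(name: str) -> bool:
    return NAME_RE.fullmatch(name) is not None

print(is_valid_connection_name("redshift_prod_eu"))  # → True
print(is_valid_connection_name("_redshift"))         # → False (starts with underscore)
print(is_valid_connection_name("dw"))                # → False (fewer than 3 characters)
```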
- Select the region where your Redshift cluster is hosted. Ensure the region complies with your data localization requirements.
  Supported regions: Ireland (eu-west-1), London (eu-west-2), Mumbai (ap-south-1), N. California (us-west-1), N. Virginia (us-east-1).
- Select Private connectivity.
- Copy the Peak AWS account ID displayed in the configuration step.
- In AWS, grant Peak access to your Redshift cluster:
  - Open the Redshift console and select your cluster.
  - On the Properties tab, scroll to Granted accounts and select Grant access.
  - Enter the Peak AWS account ID copied in the previous step.
  - Select Grant access to all VPCs and confirm.
- In Peak, enter the cluster identifier (your AWS cluster name) and select the confirmation checkbox.
- Select Next to proceed to the database step.
- Enter the database credentials. See Credential fields.
- In the Data lake step, select the data lake connection to link.
- Set up the Redshift IAM role for data lake access. See Configuring the Redshift IAM role.
- Review the configuration and select Finish.
Note: For private connectivity, the connection test runs automatically after you complete the configuration and submit the setup request. Unlike public connectivity, the Test button is not available during the wizard.
Credential fields
| Field | Description |
|---|---|
| Database name | Name of the Redshift database. |
| Username | The username Peak will use to access the database. |
| Password | The password for the database user. Must be 8–64 characters and include at least one uppercase letter, one lowercase letter, and one number. ASCII characters 33–126 are allowed, except ', ", \, /, and @. |
| Port | The port your Redshift cluster is deployed on (typically 5439). |
| Input schema | The schema used to load data into the warehouse via connectors. |
| Default schema | The schema used for Peak write-back operations. |
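The password policy in the table can be checked locally before submitting the form. A minimal sketch of those rules:

```python
# Password rules from the table above: 8-64 characters, at least one
# uppercase letter, one lowercase letter, and one digit, drawn from
# printable ASCII (codes 33-126) excluding ' " \ / and @.
FORBIDDEN = set("'\"\\/@")

def is_valid_password(password: str) -> bool:
    if not 8 <= len(password) <= 64:
        return False
    if any(not 33 <= ord(c) <= 126 or c in FORBIDDEN for c in password):
        return False
    return (any(c.isupper() for c in password)
            and any(c.islower() for c in password)
            and any(c.isdigit() for c in password))

print(is_valid_password("Str0ng_pass!"))  # → True
print(is_valid_password("weakpass1"))     # → False (no uppercase letter)
print(is_valid_password("P@ssw0rd"))      # → False (@ is not allowed)
```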
Configuring the Redshift IAM role
Peak requires an IAM role attached to your Redshift cluster with permission to access the S3 data lake. Peak provides the required policy and trusted entity JSON on the Data lake step of the setup wizard.
To configure the IAM role:
- In Peak, copy the Redshift IAM role policy from the Data lake step.
- In the AWS IAM console, go to Access management > Policies and create a new policy. Paste the copied JSON on the JSON tab, then save the policy.
- In Access management > Roles, select the role attached to your Redshift cluster.
- Select Add permissions > Attach policies and attach the policy you just created.
- Ensure the AmazonRedshiftAllCommandsFullAccess managed policy is also attached to this role.
- In Peak, copy the Trusted entity policy from the Data lake step.
- In the same IAM role, select Edit trust relationship, paste the copied JSON, and save.
- Copy the IAM role ARN from the role's Summary section.
- Paste the role ARN into the IAM role ARN field in Peak.
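As an illustration of the shape of the two JSON documents involved (the wizard supplies the exact JSON to use; the bucket name below is a placeholder), the S3 access policy typically grants read, write, and list on the data lake bucket:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-peak-datalake-bucket",
        "arn:aws:s3:::example-peak-datalake-bucket/*"
      ]
    }
  ]
}
```

and the trusted entity policy allows the Redshift service to assume the role:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "redshift.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```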
Creating an S3 gateway endpoint
An S3 gateway endpoint is required to enable private connectivity between your Redshift cluster and the S3 data lake. This ensures that COPY and UNLOAD traffic does not route through the public internet when enhanced VPC routing is enabled.
To create an S3 gateway endpoint:
- In the AWS console, go to VPC > Endpoints and select Create endpoint.
- Enter an endpoint name.
- Under Service category, select AWS services.
- In the Service list, search for S3 and select the result with type Gateway.
- Select the VPC where your Redshift cluster is hosted.
- Select the route table associated with the Redshift cluster's subnet.
- Under Policy, select Full access.
- Select Create endpoint.
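Selecting Full access applies AWS's default endpoint policy, which permits any principal to use the endpoint for any S3 action. It typically looks like this:

```json
{
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "*",
      "Resource": "*"
    }
  ]
}
```

Access to the bucket itself is still governed by the IAM role and bucket policies, so a full-access endpoint policy does not by itself widen permissions.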
Result
The Redshift data warehouse connection appears as Active in the Data Bridge list.