Peak Platform Ingestion API
The Peak Platform Ingestion API enables programmatic data ingestion into Snowflake or Redshift. It is intended for teams that need direct control over data fetch, transformation, sync frequency, and error management without relying on platform workarounds.
Only Snowflake and Redshift are supported as ingestion destinations. Data lake (S3) ingestion is not supported.
Authentication
Authenticate your API requests using one of these methods:
- Tenant API key: Obtain the API key from the Peak platform. See API keys.
- OAuth: Request client credentials (client_id and client_secret) from Peak Support. Use these to generate an access token. The audience parameter is optional.
Access tokens are valid for 24 hours.
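As a minimal sketch of the OAuth flow above, the client-credentials request body can be assembled like this. The field names follow the standard OAuth 2.0 client-credentials grant; the token endpoint itself is not documented here, so only the body is shown.

```python
import json

def build_token_request(client_id, client_secret, audience=None):
    """Assemble an OAuth client-credentials grant body; audience is optional."""
    body = {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    }
    if audience is not None:
        # The audience parameter is optional per the docs above.
        body["audience"] = audience
    return body

# Example payload (credentials are placeholders):
payload = build_token_request("my-client-id", "my-client-secret")
print(json.dumps(payload))
```

The returned token is then sent as a bearer credential on subsequent requests and remains valid for 24 hours.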
Managing table schemas
Before ingesting data, you must define a table schema in the default ingestion schema of the target warehouse. You can also reuse an existing table schema that was created using a Feed.
Creating a table schema
Use POST /api/v1/tables to create a table schema. In the request body, provide the table name and define each column with a name and data type.
Supported column types: string, boolean, JSON, integer, numeric, date, timestamp.
Optional per-column settings:
- Date format: for date and timestamp columns
- Precision and scale: for numeric columns
A successful request returns a 201 response. The schema is created in the default ingestion schema of the data warehouse.
Constraints:
- Table schemas cannot be updated after creation.
- Table schemas created via the API cannot be deleted.
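A request body for POST /api/v1/tables might look like the following sketch. The exact field names ("name", "columns", "dataType", "precision", "scale", "dateFormat") are illustrative assumptions, not confirmed by this guide; consult the API reference for the authoritative shape.

```python
import json

# Hypothetical schema-creation body: a table name plus per-column
# definitions, with the optional settings described above.
schema = {
    "name": "daily_sales",
    "columns": [
        {"name": "order_id", "dataType": "string"},
        # Precision and scale apply only to numeric columns.
        {"name": "amount", "dataType": "numeric", "precision": 10, "scale": 2},
        # A date format applies only to date and timestamp columns.
        {"name": "ordered_at", "dataType": "timestamp", "dateFormat": "YYYY-MM-DD HH24:MI:SS"},
    ],
}
print(json.dumps(schema, indent=2))
```

Because schemas cannot be updated or deleted after creation, it is worth validating a body like this carefully before sending it.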
Getting a table schema
Use GET /api/v1/tables/{tableName} to retrieve the schema for a specific table. Pass the table name as a path parameter. The response returns the column definitions for that table.
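Since the table name is a path parameter, it should be URL-encoded when the request URL is built. A small sketch, with a placeholder base URL (the real API host is not stated in this guide):

```python
from urllib.parse import quote

BASE_URL = "https://api.peak.example"  # placeholder host, not the real endpoint

def table_schema_url(table_name):
    """Build the GET URL for one table's schema, encoding the path parameter."""
    return f"{BASE_URL}/api/v1/tables/{quote(table_name, safe='')}"

print(table_schema_url("daily_sales"))
```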
Listing table schemas
Use GET /api/v1/tables to retrieve all table schemas. Results are returned in descending order of creation date.
Use the limit and NextToken parameters to control pagination. The default page size is 20. If more tables exist than the specified limit, a NextToken is returned in the response to retrieve the next page.
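The pagination loop described above can be sketched as follows. Here fetch_page is a local stand-in for the real HTTP call, paging a list in memory so the limit/NextToken logic can be exercised without network access; the response field names are assumptions.

```python
def fetch_page(all_tables, limit, next_token=None):
    """Stand-in for GET /api/v1/tables: returns one page and a NextToken."""
    start = int(next_token) if next_token else 0
    page = all_tables[start:start + limit]
    # A NextToken is returned only if more tables remain beyond this page.
    token = str(start + limit) if start + limit < len(all_tables) else None
    return {"tables": page, "NextToken": token}

def list_all_tables(all_tables, limit=20):
    """Follow NextToken until the final page (no token) is reached."""
    tables, token = [], None
    while True:
        resp = fetch_page(all_tables, limit, token)
        tables.extend(resp["tables"])
        token = resp.get("NextToken")
        if not token:
            break
    return tables

names = [f"table_{i}" for i in range(45)]
assert list_all_tables(names, limit=20) == names  # 3 pages: 20 + 20 + 5
```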
Ingesting data
Use POST /api/v1/tables/{tableName} to ingest data into a table. Pass the table name as a path parameter and provide the data as a JSON array in the request body.
The API validates incoming data against the table schema. If any record fails validation, the entire request is rejected; partial ingestion is not supported.
A successful response includes the total number of records ingested.
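Because each request is capped at 500 records (see the Limits table below) and rejected wholesale if any record fails validation, a client typically splits large datasets into request-sized batches. A sketch of that batching step, with the HTTP send itself omitted:

```python
import json

MAX_RECORDS_PER_REQUEST = 500  # per the Limits table in this guide

def batches(records, size=MAX_RECORDS_PER_REQUEST):
    """Yield successive request-sized slices of the record list."""
    for i in range(0, len(records), size):
        yield records[i:i + size]

# Example: 1200 records split into 3 request bodies (500 + 500 + 200).
records = [{"order_id": str(i), "amount": 9.99} for i in range(1200)]
payloads = [json.dumps(chunk) for chunk in batches(records)]
print(len(payloads))  # → 3
```

Each payload would then be POSTed to /api/v1/tables/{tableName} as a JSON array, keeping concurrent requests within the documented limit of 30.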
Limits
| Constraint | Value |
|---|---|
| Maximum records per request | 500 |
| Maximum request size | 1 MB |
| Maximum concurrent requests | 30 |
| Data lake (S3) ingestion | Not supported |
| Partial ingestion | Not supported |
| Schema updates | Not supported |
| Schema deletion (API-created) | Not supported |