Supply Chain & Retail Solutions user guide
Configuring a REST API connector
A REST API connector links Peak to any REST-compliant API endpoint using a Python script that you write, letting you ingest data from sources not covered by the other connectors.
Prerequisites
Before configuring a REST API connector, ensure you meet the following requirements:
- You have access to Peak with permissions to configure Data Sources.
- You have the REST API endpoint URL and any required authentication credentials.
Configuring the connection
- In Peak, open Manage and select Data Sources.
- Select Add feed and choose the REST API connector.
- At the Connection stage, either select an existing connection from the dropdown or select New connection.
- For a new connection, enter the connection parameters. See Connection parameters.
- Select Save and proceed to the next stage.
Note: The connection name and authentication type cannot be edited after a connection is saved.
Connection parameters
| Parameter | Description |
|---|---|
| Connection name | A name for this connection. Use alphanumeric characters. |
| Authentication type | The authorization method required by the API. See Authentication types. |
Authentication types
| Type | Description |
|---|---|
| No Auth | The API does not require authorization. No additional details required. |
| Basic Auth | The API requires a username and password. |
| Custom Auth | The API requires a custom authorization URL, request headers, and a JSON body. |
Configuring import settings
The import configuration screen is where you write and test the Python script that fetches data from the API.
Installing dependencies
If your script requires Python packages, add them in the Install dependency field before writing your script. To add multiple packages, press Tab or Enter after each one.
Writing the fetch script
Write your fetch logic in the Python script editor. Use the following objects in your script:
- `outputStream.write(obj)`: Writes a record to the feed output. Call this for each record you want to ingest.
- `metadata.save(object)`: Saves feed run details (for example, a timestamp or offset) so they can be used on the next run to fetch data incrementally. `metadata` starts as an empty object `{}`.
Import these objects at the top of your script:
```python
import requests
from lib import outputStream, metadata
```
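Putting these objects together, a minimal fetch script might look like the sketch below. The endpoint URL, the `created_after` query parameter, and the response shape (`results`, `server_time`) are assumptions for illustration only. Simple stand-ins for `outputStream` and `metadata` are defined so the sketch runs outside the platform; in a real feed script they come from `from lib import outputStream, metadata` instead.

```python
import requests

# In Peak, delete these stand-ins and use: from lib import outputStream, metadata
class _Stream:
    """Stand-in for Peak's outputStream (assumption, for illustration only)."""
    def __init__(self):
        self.records = []

    def write(self, obj):
        self.records.append(obj)

class _Metadata(dict):
    """Stand-in for Peak's metadata object; starts as an empty object {}."""
    def save(self, obj):
        self.update(obj)

outputStream = _Stream()
metadata = _Metadata()

API_URL = "https://api.example.com/orders"  # hypothetical endpoint

def fetch(session=requests):
    # Ask only for records created since the last run, if a cursor was saved.
    params = {}
    if metadata.get("last_run"):
        params["created_after"] = metadata["last_run"]

    resp = session.get(API_URL, params=params, timeout=30)
    resp.raise_for_status()
    body = resp.json()

    # Write each record to the feed output.
    for record in body.get("results", []):
        outputStream.write(record)

    # Save a cursor so the next run fetches incrementally.
    if body.get("server_time"):
        metadata.save({"last_run": body["server_time"]})

# fetch()  # in the platform script, call fetch() at the top level
```

Because `metadata` persists between runs, the second and later runs request only records created after the saved `server_time`, which is what makes the feed's Incremental load type work.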
After writing your script, select Test to run it. Logs appear below the editor; select the refresh icon to fetch live logs.
Previewing results
After a successful test, a preview of up to 10 records is available in two formats:
- JSON: The data as received from the API.
- Tabular: The data in a table format.
Load type and feed name
- Load type: Always Incremental.
- Feed name: Use only alphanumeric characters and underscores. Must start with a letter, must not end with an underscore, maximum 50 characters.
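The feed name rules above can be expressed as a small check. This is a sketch; `is_valid_feed_name` is a hypothetical helper for illustration, not part of Peak:

```python
import re

def is_valid_feed_name(name: str) -> bool:
    """Check the feed name rules: alphanumerics and underscores only,
    starts with a letter, does not end with an underscore, at most 50
    characters. (Hypothetical helper, for illustration only.)"""
    if not 1 <= len(name) <= 50:
        return False
    if name.endswith("_"):
        return False
    return re.fullmatch(r"[A-Za-z][A-Za-z0-9_]*", name) is not None
```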
Configuring the destination
Select where your data will be ingested. Available options depend on your configured data warehouse:
- Redshift: Redshift and S3
- Snowflake: Snowflake only
See Destination options.
Configuring the trigger
Set up how and when the feed runs. See Triggers and watchers.
Result
Peak creates the REST API connector feed and runs it according to the selected trigger. You can monitor feed runs and troubleshoot failures from Manage > Data Sources.