
Supply Chain & Retail Solutions user guide

Last updated Apr 16, 2026

Configuring an Amazon S3 connector

An Amazon S3 connector enables you to automatically ingest data from your S3 buckets into Peak. This approach offers several advantages over manual uploads:

  • Accessibility - Centralize data management by keeping files in your existing S3 infrastructure.
  • Codeless configuration - Set up connectors without writing code or complex scripts.
  • Automation - Schedule recurring imports and use webhooks for event-driven data ingestion.
  • Security - Leverage AWS Identity and Access Management (IAM) roles and encryption.
  • Flexibility - Ingest multiple file types (CSV, JSON, NDJSON) with configurable processing options.

Prerequisites

Before configuring an Amazon S3 connector, ensure you meet the following requirements:

  • You have access to Peak with permissions to configure Data Sources.
  • You have an AWS S3 bucket with the files you want to ingest.
  • You have created an AWS IAM role with permissions to access the S3 bucket. The role must include:
    • s3:GetObject - Read objects from the bucket
    • s3:ListBucket - List objects in the bucket
    • s3:GetObjectTagging - (Optional) Read object tags
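A minimal IAM policy granting these permissions could look like the following sketch. The bucket name `my-ingest-bucket` is a placeholder; note that `s3:ListBucket` applies to the bucket ARN while the object actions apply to the objects inside it:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListBucket",
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::my-ingest-bucket"
    },
    {
      "Sid": "ReadObjects",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:GetObjectTagging"],
      "Resource": "arn:aws:s3:::my-ingest-bucket/*"
    }
  ]
}
```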

Steps

Follow these steps to create an Amazon S3 connector in Peak:

Step 1: Create or reuse a connector

  1. In Peak, open Manage and select Data Sources.
  2. Select Add connector and choose Amazon S3.
    • To create a new connector, select Create new connector and enter a name.
    • To use an existing connector, select Use an existing connector and choose it from the list.

Step 2: Configure connector settings

  1. Enter the connector name.
  2. Provide your AWS account details:
    • Role ARN - Enter the Amazon Resource Name (ARN) of the IAM role that has permissions to access your S3 bucket.
    • External ID - (Optional) Enter an external ID if your IAM role requires one for cross-account access.
  3. Select Test connection to verify the connector can access your AWS account.
  4. Select Save.
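IAM role ARNs follow the pattern `arn:aws:iam::<12-digit-account-id>:role/<role-name>`. Before relying on Test connection, you can sanity-check the value you paste with a quick client-side check such as this sketch (the regex and function name are illustrative; Peak's Test connection is authoritative):

```python
import re

# IAM role ARNs have the form arn:aws:iam::<12-digit account id>:role/<name>.
# A path segment (e.g. role/service/MyRole) is also allowed.
ROLE_ARN_RE = re.compile(r"^arn:aws:iam::\d{12}:role/[\w+=,.@/-]+$")

def looks_like_role_arn(arn: str) -> bool:
    """Cheap format check; it does not prove the role exists or is assumable."""
    return ROLE_ARN_RE.fullmatch(arn) is not None

print(looks_like_role_arn("arn:aws:iam::123456789012:role/peak-s3-reader"))  # True
print(looks_like_role_arn("arn:aws:s3:::my-bucket"))                          # False
```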

Step 3: Configure import settings

  1. Specify the S3 bucket path:

    • Bucket name - Enter the name of your S3 bucket.
    • Folder path - (Optional) Enter the folder path within the bucket (e.g., data/uploads). Leave blank to use the bucket root.
  2. Configure file processing options:

    • File type - Select the format of files in your S3 location: CSV, JSON, or NDJSON.
    • Preview - Review a sample of the data to verify correct parsing.
    • Separator - (For CSV only) Specify the delimiter (e.g., comma, pipe, tab).
    • Encoding - Specify the character set: UTF-8 or ASCII.
    • Access Control List (ACL) - Note that files with restricted ACLs may not be readable by the connector.
  3. (Optional) Set a historical date:

    • Historical date - Select a date to import only files modified on or after this date.
  4. Select the load type to determine how Peak processes the data:

    • Truncate and insert - Delete all existing data and load new records.
    • Incremental - Add only new records (requires a unique identifier).
    • Upsert - Update existing records and insert new ones (requires a unique identifier).
    • See Load types for details.
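The three load types differ in how an incoming batch is merged with rows already in the destination. A rough sketch of the semantics, using in-memory rows keyed by a unique identifier (the function names and data are illustrative, not Peak's implementation):

```python
def truncate_and_insert(existing, incoming):
    """Discard everything in the destination, keep only the new batch."""
    return list(incoming)

def incremental(existing, incoming, key="id"):
    """Append only rows whose unique identifier is not already present."""
    seen = {row[key] for row in existing}
    return existing + [row for row in incoming if row[key] not in seen]

def upsert(existing, incoming, key="id"):
    """Update rows with matching identifiers, insert the rest."""
    merged = {row[key]: row for row in existing}
    for row in incoming:
        merged[row[key]] = row  # overwrite existing or add new
    return list(merged.values())

existing = [{"id": 1, "qty": 5}, {"id": 2, "qty": 3}]
incoming = [{"id": 2, "qty": 9}, {"id": 3, "qty": 1}]

print(truncate_and_insert(existing, incoming))  # only the new batch survives
print(incremental(existing, incoming))          # id 2 keeps qty=3; id 3 added
print(upsert(existing, incoming))               # id 2 updated to qty=9; id 3 added
```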

Step 4: Select a destination

  1. Select where to store the imported data:

    • Peak managed data lake - Store data in Peak's managed S3 environment.
    • Customer-managed data warehouse - Send data to your own data warehouse (Snowflake, Redshift, etc.).
    • See Destination options for details.
  2. Map the S3 data fields to your target schema or create a new schema.
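Conceptually, field mapping is a rename from the column names in your S3 files to the column names in the target schema, with unmapped columns passing through. A minimal sketch (the mapping and column names below are made up for illustration):

```python
# Illustrative mapping from S3 file columns to target schema columns.
FIELD_MAP = {
    "order_ref": "order_id",
    "cust": "customer_id",
    "ts": "ordered_at",
}

def map_fields(row: dict, field_map: dict) -> dict:
    """Rename mapped columns; pass unmapped columns through unchanged."""
    return {field_map.get(col, col): val for col, val in row.items()}

row = {"order_ref": "A-1001", "cust": 42, "ts": "2026-04-01", "qty": 2}
print(map_fields(row, FIELD_MAP))
# {'order_id': 'A-1001', 'customer_id': 42, 'ordered_at': '2026-04-01', 'qty': 2}
```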

Step 5: Configure triggers and watchers

  1. Select Schedule to automatically run the import on a recurring schedule:

    • Enter a cron expression (e.g., 0 0 * * * for daily at midnight).
    • See Triggers and watchers for details on trigger types and cron formatting.
  2. (Optional) Set up an S3 event notification to trigger imports when new files are added:

    • Configure S3 event notifications (delivered via SNS or SQS) to send events to Peak's webhook endpoint.
    • Peak can automatically process new files as they arrive.
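S3 event notifications carry the bucket name and object key inside a `Records` array, with keys URL-encoded. Peak's webhook endpoint handles this for you; the sketch below only shows what such a receiver extracts from a trimmed-down `ObjectCreated` event:

```python
import json
from urllib.parse import unquote_plus

# A trimmed-down S3 "ObjectCreated" event, as delivered via SNS/SQS.
event = json.loads("""
{
  "Records": [
    {
      "eventName": "ObjectCreated:Put",
      "s3": {
        "bucket": {"name": "my-ingest-bucket"},
        "object": {"key": "data/uploads/orders+2026.csv"}
      }
    }
  ]
}
""")

def new_objects(event: dict):
    """Yield (bucket, key) for each created object; keys arrive URL-encoded."""
    for record in event.get("Records", []):
        if record.get("eventName", "").startswith("ObjectCreated"):
            s3 = record["s3"]
            yield s3["bucket"]["name"], unquote_plus(s3["object"]["key"])

print(list(new_objects(event)))
# [('my-ingest-bucket', 'data/uploads/orders 2026.csv')]
```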

Step 6: Review and save

  1. Review all settings and select Save.
  2. For new connectors, Peak begins the initial import. Subsequent imports run according to your configured schedule.

Supported file types

The S3 connector supports the following file formats:

  • CSV - Specify UTF-8 or ASCII encoding and field separator (comma, pipe, tab, semicolon, etc.).
  • JSON - Standard JSON files containing an array or object. Specify UTF-8 or ASCII encoding.
  • NDJSON - One JSON object per line. Specify UTF-8 or ASCII encoding.
  • Gzip-compressed variants - CSV.gz, JSON.gz, NDJSON.gz. Preferred for large files to reduce storage and transfer costs.
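The formats above differ mainly in how records are delimited; the following standard-library sketch (file contents are made up for illustration) shows how each is read, including a gzip-compressed variant:

```python
import csv
import gzip
import io
import json

csv_bytes = b"id,qty\n1,5\n2,3\n"
ndjson_bytes = b'{"id": 1}\n{"id": 2}\n'
gz_bytes = gzip.compress(csv_bytes)  # e.g. the contents of a .csv.gz object

def read_csv(data: bytes, sep=",", encoding="utf-8"):
    """Parse delimiter-separated rows into dicts keyed by the header row."""
    return list(csv.DictReader(io.StringIO(data.decode(encoding)), delimiter=sep))

def read_ndjson(data: bytes, encoding="utf-8"):
    """Parse one JSON object per line, skipping blank lines."""
    return [json.loads(line) for line in data.decode(encoding).splitlines() if line]

print(read_csv(csv_bytes))                  # [{'id': '1', 'qty': '5'}, ...]
print(read_ndjson(ndjson_bytes))            # [{'id': 1}, {'id': 2}]
print(read_csv(gzip.decompress(gz_bytes)))  # same rows as the plain CSV
```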

Encryption notes

  • Server-side encryption (SSE-S3) - Enabled by default. Peak can read encrypted objects without additional configuration.
  • Customer-managed KMS - Currently not supported. If your bucket requires customer-managed KMS keys, use an unencrypted location or configure the bucket with SSE-S3.

Result

Peak creates the S3 connector and begins the initial data import based on your configuration. Subsequent imports run according to your trigger schedule, keeping your data synchronized with your S3 bucket.

If the import fails for any records, Peak logs the failures. You can view failed records and retry the import. See Destination options for information on handling failed row thresholds.
