Automation Suite on Linux installation guide
High Availability Add-on configuration
Automation Suite supports the High Availability Add-on (HAA), installed either in the same cluster or on external machines.
You must configure HAA to enable a true HA setup for multi-node deployments. To do that, either provide the HAA license to the installer or install HAA on external machines and pass the HAA connection details to the installer.
In-cluster Redis High Availability Add-on configuration
In a multi-node HA-ready production setup, High Availability (HA) is enabled by default. However, the Redis-based in-memory cache used by cluster services runs on a single node and represents a single point of failure. Therefore, if you have not purchased a High Availability Add-on (HAA) license, a cache node failure or restart will result in downtime for the entire cluster. To prevent such an incident, you can purchase HAA, which enables redundant, multi-node HA-ready production deployment of the cache.
All installations include the HAA software with a single-node license. This license is free of charge; no purchase is required.
To enable HAA across multiple nodes, you must purchase an HAA license. This implements full high availability for the cluster in a multi-node HA-ready production setup.
HAA is based on Redis technology.
To enable HAA, take the following steps:
- Purchase an HAA license. Contact UiPath® for details.
- Update the following fields in the cluster_config.json file:
  - fabric.redis.license - enter the HAA license converted to a single base64 string. You need to encode the entire license key to base64, including the text -----LICENSE START----- and -----LICENSE END-----. In bash, you can do that using the following command:
    echo '-----LICENSE START-----<license_key_here>-----LICENSE END-----' | base64 -w0
  - fabric.redis.ha - use true to enable HAA and make sure to also configure the fabric.redis.license parameter. This enables HAA database replication and increases the number of HAA pods to 3. By default, fabric.redis.ha is set to false.
    Note: If redis.ha is enabled, redis.license needs to be set to a license that supports more than two shards.
    "fabric": {
      "redis": {
        "ha": "true",
        "license": "Base64String" // Replace Base64String with the encoded full license string.
      }
    }
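Optionally, you can verify that the encoded string decodes back to the full license before adding it to cluster_config.json. A minimal check, where <base64_encoded_license> stands for the string produced by the command above:
  # Decode the string; the output should start with -----LICENSE START-----
  # and end with -----LICENSE END-----.
  echo '<base64_encoded_license>' | base64 -d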
- Rerun the fabric installer:
  ./install-uipath.sh -i cluster_config.json -f -o output.json --accept-license-agreement
Updating the Redis license
To update the Redis license, take the following steps:
- Set up kubectl and ArgoCD access:
  - Enable kubectl access on the primary node. See Enabling kubectl for instructions.
  - Enable access to ArgoCD. See Accessing ArgoCD for instructions.
- Check the current license status:
  To check the status of the current license, run the following Shell command:
  kubectl get rec -n redis-system redis-cluster -o jsonpath='{.status.licenseStatus}' | jq
  - Clusters deployed after the expiry date of the license included in the installer show the trial 4-shard license, which expires in 30 days.
  - Clusters that were already running when the license expired show an expired license status.
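For reference, the license status output looks roughly like the following. This is a hypothetical sample; the exact fields depend on the Redis Enterprise operator version:
  {
    "licenseState": "Valid",
    "activationDate": "2024-01-01T00:00:00Z",
    "expirationDate": "2024-01-31T00:00:00Z",
    "shardsLimit": 4
  }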
- Update the existing license:
  - To update the existing license, run the following Shell command:
    kubectl patch application fabric-installer -n argocd \
      --type=json -p '[{"op":"add","path":"/spec/source/helm/parameters/-","value":{"name": "global.redis.license", "value": "<LICENSE_KEY_IN_BASE64>"}}]'
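If you want to confirm that the patch appended the Helm parameter, you can inspect the application spec. A minimal check, assuming the same kubectl access as above:
  # List the Helm parameters of the fabric-installer application;
  # the output should include an entry named global.redis.license.
  kubectl get application fabric-installer -n argocd -o jsonpath='{.spec.source.helm.parameters}' | jq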
  - To see if the change was applied, access ArgoCD. See Accessing ArgoCD for instructions.
  - If the fabric-installer application appears out of sync and the sync process was not triggered automatically, select the Sync button yourself. This may happen if you are using an older Automation Suite version.
    Note: There is a small delay between the moment the ArgoCD UI shows the application as synced and the moment the Redis operator successfully applies the new license.
  - To see logs from the Redis operator when it tries to apply the license, run the following command:
    kubectl logs -n redis-system --since=300s -l name=redis-enterprise-operator -c redis-enterprise-operator --tail=-1 | grep license
  - If you try to apply an expired license, or you run an installer that ships with an expired license, the Redis operator rejects the license and the logs report the failure.
  - To update the Redis license used by older installers before running them, update the fabric.redis.license key in the <installer_folder>/defaults.json file.
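A minimal sketch of that edit using jq, assuming the base64-encoded license is stored in a hypothetical license.b64 file:
  # Hypothetical helper: write the encoded license into defaults.json
  jq --arg lic "$(cat license.b64)" '.fabric.redis.license = $lic' <installer_folder>/defaults.json > defaults.json.tmp \
    && mv defaults.json.tmp <installer_folder>/defaults.json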
- Check that the new license is applied:
  To check if the new license is applied, run the following Shell command:
  kubectl get rec -n redis-system redis-cluster -o jsonpath='{.status.licenseStatus}' | jq
  After a successful update, the status shows that the Redis cluster switched from the trial 30-day license to the new license, for example a single-shard 10-year license.
External High Availability Add-on configuration
A High Availability Add-on hosted on an external cluster is mandatory when you opt for an Active/Active configuration of Automation Suite. In all other scenarios, it is optional.
To configure the High Availability Add-on, you must update the following parameters in the cluster_config.json file:
| Parameter | Description |
|---|---|
| fabric.redis.hostname | Provide the FQDN of the High Availability Add-on (HAA) server. |
| fabric.redis.password | Provide the password to connect to the HAA server. |
| fabric.redis.port | Provide the port for the HAA server. |
| fabric.redis.tls | Enable the TLS protocol. By default, TLS is enabled. If a certificate is required when TLS is enabled, make sure to provide it as described in Certificate configuration. |
"fabric": {
"redis": {
"hostname": "redis_fqdn",
"password": "credential_to_connect_redis",
"port": 6380,
"tls": true,
}
}
"fabric": {
"redis": {
"hostname": "redis_fqdn",
"password": "credential_to_connect_redis",
"port": 6380,
"tls": true,
}
}
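Before rerunning the installer, it can help to verify that the nodes can reach the external HAA server. A minimal sketch using redis-cli, assuming it is installed and that the hostname, port, and password match the values in cluster_config.json:
  # Hypothetical connectivity check against the external HAA server;
  # a healthy server replies with PONG.
  redis-cli -h redis_fqdn -p 6380 --tls -a 'credential_to_connect_redis' ping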