
OUT OF SUPPORT
Automation Suite installation guide
Last updated Feb 24, 2025
Offline Multi-node HA-ready Production Mode
Preparation
- Identify any server (not agent) that meets the disk requirements for an offline installation. This server is referred to as the primary server throughout this document.
If you are using a self-signed certificate, run the following commands:
### Replace /path/to/cert with the location where you want to store the certificates.
sudo ./configureUiPathAS.sh tls-cert get --outpath /path/to/cert
### Copy the ca.crt file generated in the above location to the trust store location.
sudo cp --remove-destination /path/to/cert/ca.crt /etc/pki/ca-trust/source/anchors/
### Update the trust store.
sudo update-ca-trust
- Download the full offline bundle (
sf.tar.gz
) on the selected server. - Download the infra-only offline bundle (
sf-infra.tar.gz
) on all the other nodes. - Download and unzip the new installer (
installer.zip
) on all the nodes. Note: Give proper permissions to the folder by running sudo chmod 755 -R <installer-folder>. - Make the original
cluster_config.json
available on the primary server. - Generate the new
cluster_config.json
file as follows:
- If you have the old cluster_config.json file, use the following command to generate the configuration file from the cluster:
cd /path/to/new-installer
./configureUiPathAS.sh config get -i /path/to/old/cluster_config.json -o /path/to/store/generated/cluster_config.json
- If you do not have the old cluster_config.json file, run the following command:
cd /path/to/new-installer
./configureUiPathAS.sh config get -o /path/to/store/generated/cluster_config.json
Note: See Advanced installation experience to fill in the remaining parameters.
- Copy this
cluster_config.json
to the installer folder on all nodes.
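Copying the configuration file to every node can be scripted. The following is a minimal sketch, not part of the installer: the NODES hostnames and both paths are placeholders you would replace with your own, and the loop only echoes the copy commands as a dry run.

```shell
# Hypothetical helper: distribute the generated cluster_config.json to the
# installer folder on every node. Hostnames and paths are placeholders.
NODES=("server1" "server2" "server3" "agent1")
CONFIG="/path/to/store/generated/cluster_config.json"
DEST="/path/to/new-installer/cluster_config.json"
for node in "${NODES[@]}"; do
  # Dry run: remove the leading 'echo' to actually copy the file.
  echo "scp ${CONFIG} ${node}:${DEST}"
done
```

Dropping the `echo` executes the copies for real, assuming SSH access from the primary server to every node.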
Execution
Maintenance and Backup
- Make sure you have enabled the backup on the cluster. For details, see Backing up and restoring the cluster.
- Connect to one of the server nodes via SSH.
- Verify that all desired volumes have backups in the cluster by running the following command:
/path/to/new-installer/configureUiPathAS.sh verify-volumes-backup
Note: The backup might take some time, so wait for approximately 15-20 minutes, and then verify the volumes backup again. - To verify that Automation Suite is healthy, run:
kubectl get applications -n argocd
- Put the cluster in maintenance mode as follows:
- Execute the following command:
/path/to/new-installer/configureUiPathAS.sh enable-maintenance-mode
- Verify that the cluster is in maintenance mode by running the following command:
/path/to/new-installer/configureUiPathAS.sh is-maintenance-enabled
- Create the SQL database backup.
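The `kubectl get applications -n argocd` health check above lists every application; a small filter can flag anything that is not Synced and Healthy. This is only a sketch: the three-column NAME/SYNC/HEALTH layout is an assumption about the `--no-headers` output, and the sample data here stands in for a live cluster.

```shell
# Sketch: print the name of any ArgoCD application that is not Synced/Healthy.
# On a real cluster you would pipe in live data:
#   kubectl get applications -n argocd --no-headers | check_apps
check_apps() {
  awk '$2 != "Synced" || $3 != "Healthy" { print $1 }'
}

# Captured sample standing in for live kubectl output:
sample='orchestrator Synced Healthy
platform OutOfSync Degraded
fabric-installer Synced Healthy'
printf '%s\n' "$sample" | check_apps
```

An empty result means every listed application reports Synced and Healthy; any printed name needs investigation before continuing.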
Upgrade Infrastructure on Servers
Note: Upgrading the infrastructure on servers and agents simultaneously is not supported and will result in an error. Make sure to carry out these steps sequentially.
- Connect to each server via SSH.
- Become root by running
sudo su -
. - Execute the following command on all server nodes:
/path/to/new-installer/install-uipath.sh --upgrade -k -i /path/to/cluster_config.json --offline-bundle "/path/to/sf-infra.tar.gz" --offline-tmp-folder /uipath/tmp --install-offline-prereqs --accept-license-agreement -o /path/to/output.json
Note: This command also creates a backup of the cluster state and pauses all other scheduled backups.
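Because the infra upgrade must succeed on every server before you move on, it can help to wrap each invocation so a failure stops the sequence immediately. A minimal sketch, where `run_step` is a hypothetical helper and not part of the installer:

```shell
# Echo a command, run it, and surface a non-zero exit code immediately.
run_step() {
  echo ">> $*"
  "$@" || { echo "step failed: $*" >&2; return 1; }
}

# With the real installer this would look like:
#   run_step /path/to/new-installer/install-uipath.sh --upgrade -k ... || exit 1
# A harmless stand-in command demonstrates the flow:
run_step true && echo "upgrade step succeeded"
```

Chaining each node's upgrade through `run_step ... || exit 1` ensures you never start the agents while a server upgrade has silently failed.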
Upgrade Infrastructure on Agents
Note: Upgrading the infrastructure on servers and agents simultaneously is not supported and will result in an error. Make sure to carry out these steps sequentially.
- Connect to each agent node via SSH.
- Become root by running
sudo su -
. - Execute the following command:
/path/to/new-installer/install-uipath.sh --upgrade -k -i /path/to/cluster_config.json --offline-bundle "/path/to/sf-infra.tar.gz" --offline-tmp-folder /uipath/tmp --install-offline-prereqs --accept-license-agreement -o /path/to/output.json
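Since the same command runs on every agent, iterating over the agents from one shell can save round trips. A sketch under stated assumptions: the AGENTS hostnames are placeholders, and the loop echoes the ssh invocations as a dry run instead of executing them.

```shell
# Hypothetical loop: run the infra upgrade on each agent in turn.
AGENTS=("agent1" "agent2")
CMD='/path/to/new-installer/install-uipath.sh --upgrade -k -i /path/to/cluster_config.json --offline-bundle /path/to/sf-infra.tar.gz --offline-tmp-folder /uipath/tmp --install-offline-prereqs --accept-license-agreement -o /path/to/output.json'
for host in "${AGENTS[@]}"; do
  # Dry run: remove the leading 'echo' to actually execute over SSH.
  echo "ssh ${host} sudo ${CMD}"
done
```

Keep the runs sequential: as the note above states, upgrading infrastructure on multiple nodes simultaneously is not supported.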
Execute the Rest of the Upgrade on the Primary Server
- Connect to the primary server via SSH.
- Become root by running
sudo su -
. - Execute the following command:
/path/to/new-installer/install-uipath.sh --upgrade -f -s -i /path/to/cluster_config.json --offline-bundle "/path/to/sf.tar.gz" --offline-tmp-folder /uipath/tmp --install-offline-prereqs --accept-license-agreement -o /path/to/output.json
Note: This command disables the maintenance mode that you enabled before the upgrade, because all services must be up during the upgrade. - After the successful upgrade and verification, resume the backup scheduling on the node by running the following command:
/path/to/new-installer/configureUiPathAS.sh resume-scheduled-backups
Rollback on Error
Preparation
- Create a separate folder to store the old bundles, and perform the following operations inside that folder.
- Download and unzip the installer's older version (
installer.zip
) on all the nodes. Note: Give proper permissions to the folder by running sudo chmod 755 -R <installer-folder>. - Create the restore.json file and copy it to all the nodes. For details, see Backing up and restoring the cluster. - Verify that the etcd backup data is present on the primary server at the following location:
/mnt/backup/backup/<etcdBackupPath>/<node-name>/snapshots
<etcdBackupPath> - the same value as specified in backup.json when enabling the backup;
<node-name> - the hostname of the primary server VM.
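The snapshot check above can be scripted so the rollback aborts early if no etcd backup exists. A minimal sketch following the path layout in this document; `etcd-backup` is a placeholder for your actual etcdBackupPath value from backup.json.

```shell
# Sketch: confirm that at least one etcd snapshot file exists before
# starting the rollback.
check_snapshots() {
  dir="$1"
  if [ -d "$dir" ] && [ -n "$(ls -A "$dir" 2>/dev/null)" ]; then
    echo "snapshots present in $dir"
  else
    echo "no snapshots found in $dir"
  fi
}

# 'etcd-backup' is a placeholder for the etcdBackupPath from backup.json;
# the hostname is that of the primary server VM.
check_snapshots "/mnt/backup/backup/etcd-backup/$(hostname)/snapshots"
```

If the check reports no snapshots, do not proceed with the cluster cleanup below; there would be nothing to restore from.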
Cluster Cleanup
- Copy and run the dedicated script to uninstall everything from that node. Do this for all the nodes. For details, see Troubleshooting.
- Restore all UiPath databases to the older backup that was created before the upgrade.
Restore Infra on Server Nodes
- Connect to the primary server (which is the same as the one chosen during upgrade).
- Restore infra by running the following command:
/path/to/older-installer/install-uipath.sh -i /path/to/restore.json -o /path/to/output.json -r --accept-license-agreement --install-type online
- Connect to the rest of the server nodes one by one via SSH.
- Restore infra on these nodes by running the following command:
/path/to/older-installer/install-uipath.sh -i /path/to/restore.json -o /path/to/output.json -r -j server --accept-license-agreement --install-type online
Note: Run this command on the server nodes one by one. Executing it in parallel is not supported.
Restore Infra on Agent Nodes
- Connect to each agent VM via SSH.
- Restore infra on these nodes by running the following command:
/path/to/older-installer/install-uipath.sh -i /path/to/restore.json -o /path/to/output.json -r -j agent --accept-license-agreement --install-type online
Restore Volumes Data
- Connect to the primary server via SSH.
- Go to the new installer folder. Note: The previous infra restore commands were executed using the older installer, while the following commands are executed using the newer installer bundle.
- Disable the maintenance mode on the cluster by running the following command:
/path/to/new-installer/configureUiPathAS.sh disable-maintenance-mode
- Verify that maintenance mode is disabled by executing the following command:
/path/to/new-installer/configureUiPathAS.sh is-maintenance-enabled
- Copy the restore.json file that was used in the infra restore stage to the new installer bundle folder.
- Restore volumes from the newer installer bundle by executing the following command:
/path/to/new-installer/install-uipath.sh -i /path/to/new-installer/restore.json -o /path/to/new-installer/output.json -r --volume-restore --accept-license-agreement --install-type online
- Once the restore is completed, verify that everything is restored and working properly.
- During the upgrade, scheduled backups were disabled on the primary node. To enable them again, run the following command:
/path/to/new-installer/configureUiPathAS.sh resume-scheduled-backups