
Automation Suite on Linux installation guide

Last updated Sep 24, 2025

How to manually clean up logs

Cleaning up Ceph logs

Moving Ceph out of read-only mode

If you installed AI Center and use Ceph storage, take the following steps to move Ceph out of read-only mode:

  1. Check if Ceph is at full capacity:
    kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph status
    If Ceph is at full capacity, you must adjust the read-only threshold to start up the rgw gateways.
  2. Scale down the ML skills:
    kubectl -n uipath scale deployment <skill> --replicas=0
  3. Put the cluster in write mode:
    kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph osd set-full-ratio 0.95
    # 0.95 is the default value; increase it incrementally (for example to 0.96, then higher if needed).
  4. Run Garbage Collection:
    kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- radosgw-admin gc process --include-all
  5. After the storage usage drops, run the following commands:
    kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph status
    kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph df

    At this point, the storage should be lower, and the cluster should be healthy.
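The incremental full-ratio adjustment in step 3 can be sketched as a small helper. The 0.01 step size and the 0.97 cap are illustrative assumptions, not fixed requirements:

```shell
#!/bin/bash
# Compute the next OSD full-ratio step, capped so the cluster is never
# allowed to fill completely. Step size and cap are illustrative choices.
next_full_ratio() {
  local current="$1"
  awk -v c="$current" 'BEGIN {
    n = c + 0.01                 # raise in small increments
    if (n > 0.97) n = 0.97       # never go beyond 97% here
    printf "%.2f", n
  }'
}

# Example: raise the ratio one step above the 0.95 default, then apply it:
# ratio=$(next_full_ratio 0.95)
# kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph osd set-full-ratio "$ratio"
```

Raising the ratio one small step at a time limits how much extra data the cluster can accept before you finish garbage collection.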

Disabling streaming logs

To ensure everything is in a good state, disable streaming logs by taking the following steps.

  1. Disable auto-sync on UiPath and AI Center.
  2. Disable streaming logs for AI Center.
  3. If you have ML skills that have already been deployed, run the following commands:
    kubectl set env deployment [REPLICASET_NAME] LOGS_STREAMING_ENABLED=false
  4. Find out which buckets use the most space:
    kubectl -n rook-ceph exec deploy/rook-ceph-tools -- radosgw-admin bucket stats | jq -r '["BucketName","NoOfObjects","SizeInKB"], ["--------------------","------","------"], (.[] | [.bucket, .usage."rgw.main"."num_objects", .usage."rgw.main".size_kb_actual]) | @tsv' | column -ts $'\t'
  5. Install s3cmd to prepare for cleaning up the sf-logs:
    pip3 install awscli s3cmd
    export PATH=/usr/local/bin:$PATH
  6. Clean up the sf-logs logs. For details, see How to clean up old logs stored in the sf-logs bundle.
  7. Complete the cleanup operation:
    kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- radosgw-admin gc process --include-all
  8. If the previous steps do not solve the issue, clean up the AI Center data.
  9. Check if the storage was reduced:
    kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph df
  10. Once storage is no longer full, restore the full ratio to its default value:
    kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph osd set-full-ratio 0.95
  11. Check if the ML skills are affected by the multipart upload issue:
    echo $(kubectl -n rook-ceph exec deploy/rook-ceph-tools -- radosgw-admin bucket list --max-entries 10000000 --bucket train-data | jq '[.[] | select (.name | contains("_multipart")) | .meta.size] | add') | numfmt --to=iec-i

    If they are affected by this issue, and the returned value is high, you may need to do a backup and restore.
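The jq filter in step 11 keeps only the leftover multipart fragments and sums their sizes. On a small sample of the JSON shape that `radosgw-admin bucket list` returns (the entries below are invented for illustration), it behaves like this:

```shell
# Sample of the JSON shape returned by `radosgw-admin bucket list`:
# an array of objects, each with a name and a meta.size in bytes.
sample='[
  {"name": "datasets/file1.csv",         "meta": {"size": 1000}},
  {"name": "_multipart_file2.bin.part1", "meta": {"size": 4096}},
  {"name": "_multipart_file2.bin.part2", "meta": {"size": 4096}}
]'

# Same filter as step 11: keep only _multipart entries and add their sizes.
echo "$sample" | jq '[.[] | select(.name | contains("_multipart")) | .meta.size] | add'
# Prints 8192 for this sample: only the two multipart fragments are counted.
```

A high total means abandoned multipart uploads are holding space that normal object deletion will not reclaim.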

Cleaning up s3 logs

If you use an s3-compatible storage provider, take the following steps to clean up your logs:

  1. Get the key for storage access.
  2. Find the large items:
    export PATH=/usr/local/bin:$PATH
    kubectl get secret -n logging logging-secrets -o json | jq -r .data

    # Then base64-decode the "S3_ACCESSKEY" and "S3_SECRETKEY"
  3. Configure the AWS CLI using the credentials decoded in the previous step. To configure AWS, run the following command:
    aws configure

    # Once the AWS CLI is configured, you can run the commands below to check the contents of the sf-logs bucket.
  4. Delete the sf-logs. For more details, see the AWS documentation.
    # You can craft --include and --exclude filters to narrow the deletion; use --dryrun first.
    aws s3 rm --endpoint-url <AWS-ENDPOINT> --no-verify-ssl --recursive s3://sf-logs --include="2022*" --exclude="2022_12_8"
  5. Delete the train-data.
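A cautious way to approach step 5 is to preview the deletion with `--dryrun` before committing. The command below mirrors the sf-logs example above; the `<AWS-ENDPOINT>` placeholder must be replaced with your object-store endpoint, and this is a sketch rather than a prescribed command:

```shell
# Build the dry-run delete command for the train-data bucket; replace
# <AWS-ENDPOINT> with your object-store endpoint before running it.
ENDPOINT="<AWS-ENDPOINT>"
CMD="aws s3 rm --endpoint-url $ENDPOINT --no-verify-ssl --recursive --dryrun s3://train-data"
echo "$CMD"
# Run it with: eval "$CMD"   (drop --dryrun once the preview looks right)
```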
To clean up logs automatically, you can also configure a cleanup policy on your external object store.
Note:

CORS and bucket retention policies vary by object store provider. Refer to your object store provider's documentation for details.

We recommend retention of 15 days for the logs generated by the Automation Suite platform. These log objects are found in the automation-suite-logs folder of the platform bucket.
The following example shows the steps needed for AWS:
  1. Create policy.json with the following content:
    {
        "Rules": [
            {
                "Filter": {
                    "Prefix": "automation-suite-logs/"
                },
                "Status": "Enabled",
                "Expiration": {
                    "Days": 15
                },
                "ID": "DeleteOldLogs"
            }
        ]
    }
  2. To apply policy.json to the bucket, run the following command:
    aws s3api put-bucket-lifecycle-configuration --bucket BUCKET_NAME --lifecycle-configuration file://policy.json
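Before applying the policy, a quick local check with jq can confirm that policy.json parses and carries the intended retention. This is an optional extra step, not part of the official procedure:

```shell
# Write the lifecycle policy and verify it parses with the expected values.
cat > policy.json <<'EOF'
{
    "Rules": [
        {
            "Filter": { "Prefix": "automation-suite-logs/" },
            "Status": "Enabled",
            "Expiration": { "Days": 15 },
            "ID": "DeleteOldLogs"
        }
    ]
}
EOF

# Confirm the rule is enabled and expires objects after 15 days.
jq -r '.Rules[0].Status' policy.json            # Enabled
jq -r '.Rules[0].Expiration.Days' policy.json   # 15
```

A malformed policy file fails `jq` parsing immediately, which is cheaper to catch locally than through a rejected `put-bucket-lifecycle-configuration` call.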
