- Getting started
- About this guide
- About Autopilot™
- Enabling/disabling Autopilot™
- Configuring an LLM for Autopilot
- Unified Pricing licensing
- Flex licensing
- Best practices
- Data privacy
- Autopilot chat
- Generating automations
- Generating tests
- Generating tests
- Quality-check requirements
- Generate tests for requirement
- Import manual test cases
- Find obsolete tests
- Generate tests for SAP transactions
- Generate coded automations
- Generate coded API automation
- Refactor coded automations
- Generate low-code automations
- Generate synthetic test data
- Generate test reports
- Search Test Manager project
- Autopilot for Everyone
- About Autopilot for Everyone
- User types
- Data sources
- Toolset automations
- Localization
- Prerequisites
- Autopilot widget
- The Autopilot for Everyone tenant card
- Prerequisites for installation
- Enabling Anthropic models
- Installing Autopilot for Everyone
- Updating Autopilot for Everyone
- Uninstalling Autopilot for Everyone
- Configuring Autopilot for Everyone
- Disabling the Autopilot welcome screen in Assistant
- Configuring an LLM for Autopilot for Everyone
- Deploying toolset automations
- Prompt-to-response flow
- Launching Autopilot for Everyone
- Autopilot settings for business users
- Using a specialized Autopilot
- Using a starting prompt
- Uploading and analyzing files
- Running automations
- Interacting with Autopilot answers
- Using suggested prompts
- Starting a new chat
- Chat history
- Providing general feedback
- Clipboard AI Enterprise version
- Troubleshooting

Autopilot user guide
The AI Trust Layer card in the Admin section of your organization allows you to configure your own subscription for the models Autopilot supports.
To configure your own LLM, you need a connection to one of the following AI providers, depending on the model you want to incorporate:
- Amazon Bedrock or Amazon Web Services, for Anthropic models
- Google Vertex, for Gemini models
Important: To set up a Gemini model, you need to reach out to your designated UiPath technical account manager.
When you select Autopilot as the product for which you want to configure your own LLM, you must also select the feature that will use that model:
- Go to the Admin > AI Trust Layer section of your organization.
- In the LLM configurations tab, select the tenant where you want to configure your LLM.
- Select Add configuration and provide the following properties:
- Product - select Autopilot
- Feature - select Generation
- Provide the Connections Folder.
- For every default LLM Name, configure a new Connector and a Connection for your own model.
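Conceptually, the steps above build a mapping from a product and feature to your own connections. The sketch below illustrates that mapping only; the folder, model, and connection names are hypothetical and do not correspond to actual Admin UI fields:

```python
# Hypothetical sketch of the Generation configuration described above.
# All folder, model, and connection names are illustrative only.
generation_config = {
    "product": "Autopilot",
    "feature": "Generation",
    "connections_folder": "Shared/LLM-Connections",  # hypothetical folder
    "model_overrides": {
        # default LLM name -> your own connector/connection (hypothetical)
        "default-model-a": "my-bedrock-anthropic-connection",
        "default-model-b": "my-vertex-gemini-connection",
    },
}

# Each default LLM name gets its own connector and connection
for default_name, connection in generation_config["model_overrides"].items():
    print(f"{default_name} -> {connection}")
```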
You can configure a different LLM for Autopilot chat, which reads the context of your current page or project to answer questions, explain automations, or suggest improvements. The model you configure overrides the existing model.
Autopilot chat uses a dual-model architecture. Understanding this architecture helps you properly configure your own LLM for Autopilot chat:
- Primary model - used for complex tasks requiring advanced reasoning and capabilities
- Secondary model - used for simpler tasks to optimize costs and performance
- If you configure different primary and secondary models, a tooltip displays both model names and informs you that the secondary model is used for optimization.
- If you configure only the primary model, a warning message appears indicating that the secondary model configuration is missing. A lightweight, fast primary model is used across all tasks without invoking a secondary model.
For Automation Cloud organizations, not configuring a secondary model defaults to a UiPath-owned subscription.
For Automation Suite organizations, not configuring a secondary model may limit chat features, such as context compacting.
- Go to the Admin > AI Trust Layer section of your organization.
- In the LLM configurations tab, select the tenant where you want to configure your LLM.
- Select Add configuration and provide the following properties:
- Product - select Autopilot
- Feature - select Chat
- Provide the Connections Folder.
- Configure your new model.
For details, refer to Setting up an LLM connection.