Bring your own LLM for Context Grounding
Context Grounding supports Bring Your Own Model (BYOM) and Bring Your Own Subscription (BYOS) through the UiPath AI Trust Layer. Administrators can replace UiPath-managed embedding and inference models with their own subscriptions, or connect models not included in the default set, while retaining the governance and audit capabilities of the AI Trust Layer.
Supported features and models
The following Context Grounding features support custom LLM configuration through any provider whose API follows the OpenAI V1 standard, connected via the OpenAI V1 Compliant LLM connector. Each feature has specific model requirements.
| Feature | Supported models | Recommended model | Requirements | Recommended specs |
|---|---|---|---|---|
| Advanced ingestion | Any OpenAI V1-compatible LLM | gemini-2.5-flash | Forced tool calling; multimodal image support | 16k input tokens; 32k output tokens |
| Batch Transform | Any OpenAI V1-compatible LLM | gemini-2.5-flash (smart), gemini-2.5-flash-lite (fast) | Forced tool calling | 128k input tokens |
| Batch Transform with Web Search | Any Gemini web search model | N/A | Gemini web search tool | N/A |
| DeepRAG | Any OpenAI V1-compatible LLM | gemini-2.5-flash | Forced tool calling; count > 1 support | 1M input tokens; 64k output tokens |
| Embeddings | Any OpenAI V1-compatible Embedding model | gemini-embedding-001 | < 4,096 dimensions; ≥ 8k input tokens | N/A |
For the full model scope across all UiPath products, refer to Configuring LLMs.
Requirements: capabilities the feature depends on. Without them, the feature does not function.
Recommended specs: not enforced, but configurations below them are untested and may perform worse or fail more often.
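To illustrate the forced tool calling capability the table requires, the sketch below builds an OpenAI V1-style chat completion payload in which `tool_choice` names a specific function, so a compliant provider must respond with a call to that function rather than free text. The tool name, prompt, and schema are illustrative, not part of any UiPath API.

```python
import json

# Sketch of an OpenAI V1-compatible request with forced tool calling.
# "record_extraction" is a hypothetical tool used only for illustration.
payload = {
    "model": "gemini-2.5-flash",  # recommended model from the table above
    "messages": [
        {"role": "user", "content": "Extract the invoice total from this text."}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "record_extraction",
                "parameters": {
                    "type": "object",
                    "properties": {"total": {"type": "number"}},
                    "required": ["total"],
                },
            },
        }
    ],
    # Forced tool calling: naming a function here (instead of "auto")
    # obliges the provider to return a call to that function.
    "tool_choice": {"type": "function", "function": {"name": "record_extraction"}},
}

print(json.dumps(payload["tool_choice"]))
```

A provider that ignores `tool_choice` and replies with plain text does not meet the requirement, even if it otherwise accepts the OpenAI V1 request shape.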
Configuration and routing
Custom LLM configuration for Context Grounding is managed through the LLM configurations tab in Admin > AI Trust Layer. For setup steps, select Context Grounding as the product when following the steps in Configuring LLMs.
For details on how AI features route requests to models, including fallback behavior and model governance, refer to AI features and model routing.