Release notes Test Manager 2.2510
Release date: November 11, 2025
What's new
Autopilot
Autopilot™ is a collection of AI-powered digital systems, designed to boost the productivity of testers throughout the entire testing lifecycle.
To work with Autopilot, you must bring and configure at least one supported language model or AI provider subscription within your UiPath organization. For detailed steps, refer to Configuring LLMs.
Autopilot in Test Manager offers the following capabilities:
AI-powered evaluation
Improve your test portfolio by using AI to assess and enhance the quality of your requirements. Use Autopilot™ to evaluate requirements against criteria and implement the suggestions provided to enhance their quality.
Visit AI-powered evaluation for a step-by-step guide, and AI-powered evaluation: Best practices for guidance on evaluating requirements efficiently.
AI-powered generation
With AI-powered generation, which leverages Autopilot, you can simplify the creation of manual tests using two types of test case generation:
- From requirement: Generate manual test cases directly from your requirements.
- From transactions: Generate test cases for SAP transactions using either the Heatmap or the transactions listed in the Change Impact Analysis.
You can guide Autopilot with additional instructions to tailor its focus when generating manual tests. You can also provide supporting documents that offer insights about the requirement or SAP transaction, enabling Autopilot to generate more accurate test cases.
Review the generated test cases, and then either create the desired test cases or refine them.
Visit AI-powered generation and AI-powered generation: Best practices for more information on how to effectively generate manual tests.
AI-powered test reports
Leverage the power of AI, using Autopilot, to gain comprehensive insights into your tests. Autopilot analyzes your testing data and provides intuitive, in-depth reports. Understand patterns, identify potential gaps, and gain actionable insights for enhancing test efficiency.
Visit Generate test report to start using the AI-powered features for your testing portfolio.
Import manual test cases using Autopilot
You can now import manual test cases from Excel files directly into Test Manager. This capability facilitates an accurate and efficient transfer of test case details, including names, descriptions, preconditions, postconditions, and properties. Autopilot also allows you to import from Excel files that contain multiple sheets. During the import process, you can preview the extracted test cases to verify their accuracy before finalizing the import into your Test Manager project.
For more information, visit Importing manual test cases.
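The exact layout of the import file depends on your project, but to illustrate the idea, here is a minimal sketch that builds such a spreadsheet with the openpyxl library. The column names (Name, Description, Precondition, Postcondition, Steps), the sheet name, and the file name are assumptions made for illustration only, not a prescribed template.

```python
# Illustrative sketch only: builds a simple Excel sheet of manual test cases
# that could serve as input for the Autopilot-based import. Column names and
# the sheet name are assumptions; align them with your project's conventions.
from openpyxl import Workbook

test_cases = [
    {
        "Name": "Login with valid credentials",
        "Description": "Verify that a registered user can sign in.",
        "Precondition": "A user account exists.",
        "Postcondition": "The dashboard is displayed.",
        "Steps": "1. Open the login page\n2. Enter credentials\n3. Select Sign in",
    },
]

wb = Workbook()
ws = wb.active
ws.title = "Regression"  # the import also supports workbooks with multiple sheets
headers = list(test_cases[0].keys())
ws.append(headers)
for case in test_cases:
    ws.append([case[header] for header in headers])

wb.save("manual-test-cases.xlsx")
```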
Searching for obsolete test cases
To help you keep your testing projects up to date, you can now identify obsolete test cases linked to requirements. Navigate to a recently updated requirement and use the Find obsolete tests action to check if any associated test cases have become outdated.
For more information, visit Finding obsolete tests based on requirements.
Prompt library for AI-powered capabilities
When using Autopilot to improve your testing processes, you can employ pre-defined prompts to ease the process of evaluating the quality of your requirements or generating test cases. You can edit the pre-defined prompts according to your needs.
Moreover, you can include your project's prompt library in the exported TMH file.
You can find the Prompt library of a project in the Project Settings.
Visit Prompt library to check the available predefined prompts that you can use for your AI-powered testing processes.
Automating tests in Studio Web
You can now automate your Test Manager test cases directly in Studio Web. From the Automation tab of a test case in Test Manager, use the new Automate in Studio Web option to create or link a Studio Web test case and start building automations immediately in the browser. This streamlines testing by enabling end-to-end test creation, automation, and execution in the cloud. For more information on automating tests in Studio Web, refer to Automating test cases in Studio Web.
Unified Pricing
We are excited to announce Unified Pricing, our innovative licensing model that brings new licensing plans, user licenses, and a consolidated consumption unit. Test Manager is now available through Unified Pricing.
For extensive release notes, refer to the Automation Suite release notes and Test Cloud release notes, as well as to the Automation Suite admin guide and Test Cloud admin guide.
For details on the impact of Unified Pricing on Test Manager, visit Unified Pricing Test Manager.
Concurrently executing manual test cases
Multiple users can now execute the same manual test case concurrently to decrease the total runtime of a test execution. This is accessible from the Execution section.
This capability applies specifically to manual test cases. The final result of a manual test case is determined by the most recent user interaction, such as when you trigger an execution, initiate a pending test execution, or re-execute a test case. The ExecutedBy column for your test cases and executions displays the latest updates to the test case results.
To see previous execution results from all users, navigate to the Logs of a test case.
Visit Executing test cases simultaneously for more information.
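As a conceptual illustration of the last-interaction-wins behavior described above, the following sketch models a set of log entries and picks the most recent one as the effective result. The log structure and field names are hypothetical and do not represent a Test Manager API.

```python
# Hypothetical model of concurrent manual executions: the effective result of
# the test case is the one recorded by the most recent user interaction, while
# earlier entries remain available in the test case logs.
from datetime import datetime

logs = [
    {"executed_by": "alice", "result": "Failed",
     "timestamp": datetime(2025, 11, 11, 9, 15)},
    {"executed_by": "bob", "result": "Passed",
     "timestamp": datetime(2025, 11, 11, 9, 42)},
]

latest = max(logs, key=lambda entry: entry["timestamp"])
print(f"ExecutedBy: {latest['executed_by']}, result: {latest['result']}")
```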
Assigning manual executions to users
You can now assign manual test executions to users who are collaborating on the same testing project. For enhanced productivity, you can also set a Due Date for when the manual test execution is planned.
You receive notifications for the following events:
- Test case assigned
- Test case unassigned
Visit Assigning manual executions to users and Scheduling a due date for manual executions for more information.
Postcondition for manual test cases
To define postconditions for manual tests, you can now add a condition that the application should meet at the end of a test case. Manually executing tests now involves specifying whether the configured postcondition was met. Postconditions help by offering a clear way to confirm if a test has passed successfully. For more information, visit Adding manual steps to a test case and Executing manual tests.
You can also import and export postconditions.
New version of the project migration schema
The new version of the project migration schema includes the Postcondition property in the JSON files of your test cases. This allows you to import or export your postconditions. Visit Import project to learn how to retrieve the schema for your testing project.
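As a rough illustration of where the new property fits, the sketch below round-trips a test case object that carries a Postcondition field through JSON. The surrounding field names are assumptions for illustration; retrieve the actual schema for your project as described in Import project.

```python
# Hypothetical test case payload illustrating the Postcondition property; the
# exact schema should be taken from your project's export, not from this sketch.
import json

exported_test_case = {
    "Name": "Checkout with saved card",
    "Description": "Verify checkout using a stored payment method.",
    "Precondition": "The cart contains at least one item.",
    "Postcondition": "An order confirmation is displayed.",
}

payload = json.dumps(exported_test_case, indent=2)   # what an export could look like
restored = json.loads(payload)                       # what an import would read back
print(restored["Postcondition"])
```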
Export test executions to PDF
For a more diverse analysis of your test results, you can now export a test execution to PDF. The PDF file includes the same information as the XLSX format. For example, the XLSX and PDF files include the following details: overview of the execution, a detailed list of assertions, test case logs, and a list of corresponding requirements. For more information about exporting data from Test Manager, visit Export data, and Downloading execution logs.
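If you post-process the exported report yourself, a short pandas sketch like the one below can summarize it. The file name, the sheet name Assertions, and the Result column are assumptions for illustration; check the sheets and columns of your own export before relying on them.

```python
# Hypothetical post-processing of an exported execution report: count how many
# assertions passed. Sheet and column names are assumptions for illustration.
import pandas as pd

sheets = pd.read_excel("test-execution-report.xlsx", sheet_name=None)  # all sheets

assertions = sheets["Assertions"]
passed = (assertions["Result"] == "Passed").sum()
print(f"{passed} of {len(assertions)} assertions passed")
```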
Select automations for test cases
You can now link automations to a test case not only from Studio, but also directly within Test Manager. When you create a test case, you can search for and select an automation from the feeds of any Orchestrator folder you have access to. Additionally, if you want to transfer your test cases between tenants, the linkage between the test case and its corresponding Orchestrator automation is preserved during the test case export.
For more information on selecting automations from Orchestrator directly in Test Manager, visit Selecting automation.
Selecting a robot account for executing test sets
To improve your experience, we have extended the abilities for configuring a test set execution. Now, in addition to selecting test cases from a specific Orchestrator folder and choosing a particular package version, you can also designate a specific robot account to execute the test set. For more information on configuring test set runs, visit Configuring test sets for specific execution folders and robots.
Reporting with Insights
Test Manager can now integrate with Insights to generate reporting dashboards for your testing projects. This integration allows you to generate fully customizable cross-project reporting dashboards in Insights using your test case log data. Insights offers the Test Manager Execution Report predefined dashboard dedicated to test projects. This dashboard helps you analyze important execution metrics such as accumulated daily or weekly test results, automation rates for executed tests, and user or robot details for each test execution.
For more information on enabling the integration, visit Tenant level settings, and for additional details on reporting with Insights, visit Reporting with Insights.
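To make the metrics concrete, the sketch below computes a weekly automation rate and pass count from a small, made-up set of test case log records, roughly the kind of aggregation the dashboard surfaces. The column names (executed_at, is_automated, result) are assumptions for illustration and not the Insights data model.

```python
# Rough, offline illustration of the dashboard metrics: weekly automation rate
# and number of passed results, computed from hypothetical log records.
import pandas as pd

logs = pd.DataFrame({
    "executed_at": pd.to_datetime(["2025-11-10", "2025-11-10", "2025-11-11"]),
    "is_automated": [True, False, True],
    "result": ["Passed", "Failed", "Passed"],
})

logs["week"] = logs["executed_at"].dt.to_period("W")
weekly = logs.groupby("week").agg(
    automation_rate=("is_automated", "mean"),
    passed=("result", lambda results: (results == "Passed").sum()),
)
print(weekly)
```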
Scheduling test executions
Plan your automated test executions effortlessly using dedicated schedules. Create and customize single, recurrent, or advanced schedules tailored to your testing needs. Schedules for automated test executions are now available under the Execution section of your project. For more information on managing schedules, visit Scheduling test executions.
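As an aside, if you want to sanity-check a recurrence before saving it, a cron-style preview such as the sketch below can help. Treat the cron expression as an assumption made here for illustration; it is not a statement about how Test Manager stores advanced schedules. The example uses the third-party croniter package.

```python
# Preview the next few run times of a weekday 06:00 recurrence, assuming a
# cron-style expression purely for illustration purposes.
from datetime import datetime
from croniter import croniter

schedule = croniter("0 6 * * 1-5", datetime(2025, 11, 11, 0, 0))
for _ in range(3):
    print(schedule.get_next(datetime))
```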
Taking screenshots during manual test execution
To improve efficiency and visual documentation during manual test execution, we have enhanced Test Manager with a new screenshot capture capability. While manually executing a test case, you can now visually document the behavior of the application you are testing at any given step.
Access this feature within the manual execution assistant by navigating to More Options > Capture Screen. This allows you to capture screenshots of the entire screen, a specific application, a selected application window, or even just a single browser tab.
To boost usability, the Capture Screen option also provides a preview of your recently captured screenshot. Screenshots are then displayed in the test case logs, as attachments. For more information on taking screenshots during manual test executions, visit Execute test sets manually.
Live streaming and remote controlling Test Manager executions
You can now live stream and remotely control test executions in Test Manager, delivered via Test Cloud. This capability allows you to watch a test run in real time and take remote control if something goes wrong.
With this improvement, you can:
- View test executions live to monitor behavior as it happens.
- Remotely intervene and debug if an error occurs.
- Reduce turnaround time by avoiding multiple manual restarts or additional test runs.
- Minimize interruptions, as you can unblock failed tests on the spot.
For information on how to live stream and remotely control test executions in Test Manager, refer to Live streaming and remote controlling.
Test case parameters
To make testing more efficient, you can now create parameters for your test cases. Parameters help you avoid duplicate test cases by acting as variables that you can reuse across multiple executions of the same test case. For more information on working with test case parameters, visit Parameters.
Overriding test set parameters
For improved flexibility and test coverage, you can now override parameter values within a test set. The parameters that you can change are determined by the parameters of the test cases that are part of the test set. These can include parameters added in Test Manager, or those originating from the selected automation for a test case. For information on overriding test set parameters, visit Overriding test set parameters.
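Conceptually, an override simply takes precedence over the test case default for a matching parameter name. The minimal sketch below models that idea with plain dictionaries; it is not a Test Manager API.

```python
# Conceptual model only: test set overrides win over test case defaults for
# parameters with the same name; unmatched defaults remain in effect.
test_case_defaults = {"username": "default_user", "country": "DE"}
test_set_overrides = {"country": "US"}

effective = {**test_case_defaults, **test_set_overrides}
print(effective)  # {'username': 'default_user', 'country': 'US'}
```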
Enhanced Tricentis qTest integration
The integration of Test Manager with Tricentis qTest, through UiPath Test Manager Connect, now supports the following artifacts: test sets, manual test steps, and test results. For more information on the Test Manager synchronization with other ALM tools, visit UiPath Test Manager Connect.
SAP Cloud ALM native integration
Increase the speed of your SAP testing processes and the quality of your SAP solutions, using the SAP Cloud ALM native integration. This native integration acts as an out-of-the-box connector that allows you to manage and execute automated test cases created in Test Manager, directly from SAP Cloud ALM.
For more information about creating the integration and using it, visit SAP Cloud ALM.
Using Change Impact Analysis with CSV-based Heatmap connection
You can now configure a separate SAP connection for Change Impact Analysis when using a CSV-based Heatmap. This allows you to analyze your transports even if your Heatmap is not connected directly to SAP. For more information, visit Prerequisites.
Web services connection for Heatmap and Change Impact Analysis
When setting up an integration between a Test Manager project and your SAP system for Heatmap or Change Impact Analysis, you have a new connection option. In addition to the existing RFC and CSV upload connections, you can now use a Web service connection. For more information on using the Heatmap and Change Impact Analysis with a web service connection, visit Heatmap prerequisites and Change Impact Analysis prerequisites.
UI, API and Security tabs in Change Impact Analysis
We have enhanced the Change Impact Analysis with new UI, API, and Security tabs. These tabs provide a comprehensive view of how potential system changes could impact your transactions, APIs, and security. For more information, visit Working with Change Impact Analysis.
Orchestrator-to-Test Manager migration tool
We have achieved feature parity between Test Manager and the Orchestrator Testing tab. This advancement allows us to introduce a dedicated migration tool to seamlessly transfer your test artifacts from Orchestrator to Test Manager. The migration tool allows you to import test sets from any Orchestrator folder you can access, along with the following details:
- Name
- Description
- Activity coverage
- Test case assignment (maintained as static assignments)
- Arguments and their default values (converted into test case parameters)
- Execution folder
- Test case versions
- The mapped Robot or User account
- Test results
- Test set schedules
- Attachments
For information on importing your Orchestrator test sets, visit Importing Orchestrator test sets.
Breaking change
We have removed the option to link test sets from Orchestrator to Test Manager. Test Manager test sets now support the same capabilities, making the linkage redundant.
Improvements
- In all Test Manager projects, we have renamed the Test Results section to Execution for a more accurate representation of its functionality. Besides results, the Execution section serves as the starting point for planning and reporting, consolidating everything related to test execution. This change is also mirrored in the breadcrumb navigation within your project when accessing test objects.
- We have improved the Excel report that you can download for a test execution with additional details. The enhanced Excel report now contains four sheets detailing the following aspects:
- Overview: Provides an overview of the test execution.
- Test case logs: Displays information about the executing user, the type of execution, and the source package.
- Assertions: Offers details on each test case along with their successful or unsuccessful assertions.
- For manual test cases: Each test step is a verification and the tester's comment is the message.
- For automated test cases: The name and message from the automation are displayed, along with a hyperlink to a screenshot, if available.
- Requirements: Indicates the number of test cases assigned to a requirement and the results of those test cases.
- You can select any test case result chip in Test Manager and it will take you to the underlying test case log with all the details.
Figure 1. Selecting the test case result chips takes you to the test case log
- In the Overview section of a test case, the latest results are now visible in two formats: Table view and Chart view.
- The Table view presents the data in a grid format.
- The Chart view graphically displays the results using a dot graph.
In the Chart view, hover over the dots to check execution details, or select the dots to navigate to the test case log.
Figure 2. Latest results of a test case displayed in the Chart view
- The character limit of the Description field for test objects such as Requirements, Test Cases, and Test Sets, has been increased to 60,000 characters.
- You can now enter up to 8,000 characters in the Clipboard data field for more efficiency during manual testing.
- To better reflect their functionality, we renamed the following actions available for test sets:
- Assign Test Cases is now Add Test Cases.
- Un-assign Test Cases is now Remove Test Cases.
- When executing a manual test case, you can now view the test description at the top of the Manual Execution Assistant.
- Several UI areas consisting of lists or tables are now displayed as grids. You can:
- Resize the columns
- Sort columns alphabetically
- Customize which columns are shown or hidden, by selecting or clearing the relevant check boxes in the Columns menu
- Refresh the grid to get the latest data
The data exported from the Execution page contains only the columns that are shown (included) in the grid. The grid layout has been implemented on the following pages: Requirements, Test Cases, Test Sets, Execution, Project Settings (Manage Access tab, Custom field definitions tab, Prompt library tab).
- You can now resize various text fields (like the Description field) by using the Ctrl + Up/Down arrow keys. The resize action is available for, but not limited to, fields where long text is usually entered, across the creation screens for projects, requirements, test cases, and test sets.
Continuous support for native connectors
We continue to support the following native connectors:
- Azure DevOps
- Jira Data Center
- Jira Cloud
- XRay
- XRay Cloud
To stay up to date with other deprecations or removals, visit the Deprecation timeline.
Bug fixes
- Fixed a Test Manager search issue for keywords under three characters.
- This release brings security updates to address CVE-2025-55315.
qTest connector deprecation
The creation of new connections for the qTest connector is no longer supported.
Deprecation announcement for the Test Suite guide
To better align with the evolution of our platform strategy, the Test Suite guide has been deprecated.
UiPath’s automation testing capabilities are no longer grouped under the Test Suite umbrella. Instead, they remain as platform-specific components across the Automation Cloud and Automation Suite platforms.
| Content | New location |
|---|---|
| Test Manager user guide | Available as a dedicated guide: Test Manager. |
| Mobile automation documentation | Now part of the UiAutomation Activities guide, under the Mobile Automation section. |
| Overview of automation testing | Moved under the Testing in your organization section of the Automation Cloud and Automation Suite guides. |
The Test Suite guide will no longer be maintained and will redirect you to the new locations of its previous content.
Deprecation timeline
We recommend that you regularly check the deprecation timeline for any updates regarding features that will be deprecated and removed.