Communications Mining user guide
Conversational filtering is an Autopilot for Communications Mining feature that helps you get to the answers you need more quickly.
It turns natural language queries into the set of filters required to answer them. If you are unsure which filters you need to answer a question, or how to apply them correctly, it does the hard work for you. This helps you get the best out of the analytics in Communications Mining, even with minimal experience.
Conversational filters are available to all users who have the Use generative AI features toggle enabled in the dataset settings. The toggle is typically enabled at dataset creation.
To use conversational filters, follow these steps:
- Type in a query, such as Show me transactional messages, and hit Enter.
- Wait for Communications Mining to understand the query, map it to the correct set of filters, and apply them for you.
- Conversational filters output a response. The response confirms how many filters were identified in the message, and how many were successfully applied. This helps you identify whether a query was only partially successful, and allows you to edit the query if needed, or manually apply any remaining filters.
If a request was partially successful, one of the values in the query was probably unidentifiable and may not be present in the dataset.
If you need to refine the query, adjust the wording, then hit Enter again. Conversational filters automatically clear the currently applied filters, and then apply the set of filters identified in the new query.
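The "identified versus applied" counts in the response explain why a query can be partially successful. The following is a hypothetical sketch, not the product's API: the filter names, the `KNOWN_FILTERS` set, and the `apply_filters` helper are invented here purely to illustrate how a filter that is identified in a query might still fail to apply when its value does not exist in the dataset.

```python
from dataclasses import dataclass

# Hypothetical filter types; a real dataset would define its own.
KNOWN_FILTERS = {"time_period", "sender_domain", "label"}

@dataclass
class FilterResult:
    identified: int  # filters recognised in the query
    applied: int     # filters successfully applied to the dataset

def apply_filters(requested: dict) -> FilterResult:
    """Count every requested filter as identified, but apply only those
    whose type is known. The gap between the two counts is what makes a
    query 'partially successful'."""
    applied = sum(1 for name in requested if name in KNOWN_FILTERS)
    return FilterResult(identified=len(requested), applied=applied)

result = apply_filters({
    "time_period": "last 30 days",
    "sender_domain": "example.com",
    "mood": "frustrated",  # not a known filter type, so it is not applied
})
print(result.identified, result.applied)  # prints: 3 2
```

In this sketch, a response of "3 filters identified, 2 applied" would prompt you to either reword the query or apply the remaining filter manually, mirroring the behaviour described above.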
Conversational filters only switch from Message view to Threads view whilst in Reports. Threads view is not available in Explore, as messages are already shown in the context of their thread.
Example queries:
- From a specific time period: Show me messages from [insert time period].
- From a specific sender or sender domain: Show me messages from [insert email / email domain].
- Messages versus threads:
  - While in Reports, you can switch from Messages view to Threads view by adding show me threads or show me conversations to your query.
  - Similarly, to return to the Messages view, add show me messages or show me emails.
- Opportunity discovery:
  - Show me transactional messages – these have short thread lengths (2-4 messages), and can be prime candidates for automation.
  - Show me requests containing documents accepted by Document Understanding – these can be candidates for downstream processing with Document Understanding.
  - Show me messages showing very poor [or very bad] service levels – if you have Quality of Service enabled and configured, this helps identify problematic messages and labels.