v1.6.0 January 31, 2026
This update includes new features and enhancements summarized below.

Enhanced Agent Creation Flow
Agent creation now supports three paths: building agents from scratch, importing pre-built agents from the Marketplace, or adding externally deployed agents for orchestration. Each path provides a tailored setup flow with agent-specific configurations. This streamlines agent onboarding with guided experiences and eliminates the previous two-step enablement process across apps and agent profiles. Learn more →

AI-Assisted Prompt Refinement
The prompt editor now includes AI-assisted refinement, enabling users to improve and optimize prompts directly within the editor. This feature reduces iteration cycles and improves prompt accuracy through clearer, more effective definitions, making prompt writing faster and easier. Note: This feature is in preview and can be enabled upon request.

Enhanced Access Control for Tool Logs
Tool-level role management has been enhanced with separate permissions for tool log visibility, allowing administrators to control access to the tool log list and detailed execution logs independently. These permissions support three access levels (detailed access, view-only, and no access), providing finer control over logs.

Environment Variables for Workflow Tools in Agentic Apps
Workflow Tools created within or scoped to Agentic Apps can now use environment variables defined at the app level. Access is managed through namespaces: when you attach a namespace to a tool, all environment variables within that namespace become available for use. Workflow Tools created outside an Agentic App and not linked to any app cannot access namespaces or app-level environment variables. Learn more →

Vertex AI Model Integration
Agent Platform now offers secure connections to Google Vertex AI-hosted Gemini models (2.5 and 3.0 families).
You can configure connections manually or via cURL import with automated credential extraction for both AI Studio and Vertex AI formats. A guided setup includes built-in validation, connection testing, and error handling. The platform stores all credentials securely using encryption. This integration works across Agentic Apps, Workflow Tools, and Prompts. Learn more →

Expanded Model Support
The Agent Platform now supports additional AI models, giving users greater flexibility in selecting the right model for their use case. New models include:
- Google: gemini-3-pro-preview, gemini-3-pro-image-preview, gemini-3-flash-preview, gemini-2.5-flash-native-audio-preview-12-2025, gemini-2.5-flash-native-audio-preview-09-2025, gemini-2.5-flash-preview-09-2025, gemini-2.5-flash-lite-preview-09-2025, gemini-2.5-flash-lite, and gemini-2.5-flash-image.
- OpenAI: gpt-realtime-mini-2025-10-06, gpt-audio-mini-2025-10-06, gpt-audio-2025-08-28, gpt-realtime-2025-08-28, gpt-4o-audio-preview-2025-06-03, gpt-4o-realtime-preview-2025-06-03, o3-2025-04-16, o4-mini-2025-04-16, gpt-4o-search-preview-2025-03-11, o3-mini-2025-01-31, gpt-4o-realtime-preview-2024-12-17, gpt-4o-mini-audio-preview-2024-12-17, gpt-4o-audio-preview-2024-12-17, o1-2024-12-17, gpt-4o-2024-11-20, gpt-4o-2024-08-06, gpt-4o-mini-2024-07-18, gpt-4o-2024-05-13, gpt-4-turbo-2024-04-09, gpt-4.1-nano, gpt-4.1-mini, gpt-4.1, gpt-4-turbo, gpt-3.5-turbo-0125, gpt-4o-mini-transcribe, gpt-4o-mini-audio-preview, gpt-4o-audio-preview, gpt-4o-realtime-preview, gpt-audio-mini, gpt-audio, gpt-image-1-mini, gpt-image-1, gpt-realtime-mini, gpt-realtime, o4-mini, o3, and o1.
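As a rough mental model of the namespace-scoped environment variables described above for Workflow Tools in Agentic Apps (namespace and variable names are illustrative, not the platform's actual API):

```python
# Illustrative sketch: app-level environment variables are grouped into
# namespaces, and a tool only sees the namespaces attached to it.
APP_ENV = {
    "payments": {"PAYMENTS_API_URL": "https://pay.example.com", "PAYMENTS_KEY": "sk-test"},
    "crm": {"CRM_BASE_URL": "https://crm.example.com"},
}

def resolve_env(attached_namespaces):
    """Merge variables from the namespaces attached to a tool."""
    env = {}
    for ns in attached_namespaces:
        env.update(APP_ENV.get(ns, {}))
    return env

# A tool with only the "payments" namespace attached sees only payment variables.
tool_env = resolve_env(["payments"])
```

A tool outside any Agentic App would simply have no namespaces attached, so `resolve_env([])` yields nothing, mirroring the restriction noted above.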
v1.5.0 January 17, 2026
This update includes new features and enhancements summarized below.

Direct Real-Time Voice Integration for Single-Agent Apps
The Single Agent Orchestration Pattern now supports real-time models, which significantly reduce response latency when your agentic app contains only one agent. The platform now automatically bypasses the supervisor routing layer and connects users directly to the agent, eliminating unnecessary orchestration overhead. This improvement is especially beneficial for voice interactions with real-time models where speed is critical, and it works automatically without requiring any configuration changes. Learn more →

Customizable Waiting Messages
The Waiting Experience feature enhances voice interactions by streaming natural filler messages during processing delays, reducing perceived latency and ensuring smoother conversations. This feature is now publicly available and includes a customizable prompt editor for creating AI-generated dynamic waiting messages. This feature is supported only in ASR/TTS mode (not available for real-time models). Learn more →

Tool Output Artifacts in Response Payload
You can now configure tools to include their outputs as artifacts in the final response payload. This new capability allows you to capture specific tool execution results and make them available under the ‘artifacts’ key in the response, enabling downstream channels and applications to access structured data for custom processing, display logic, or integration workflows. Artifact inclusion is configurable at the individual tool level, giving you precise control over which tool outputs are exposed in the response. Learn more →

PII Handling for Workflow Tools
The Agent Platform extends existing PII handling to Workflow Tools, ensuring sensitive data is securely processed while preventing exposure in logs, traces, or model outputs. Before a Workflow Tool starts execution, input fields are automatically scanned for declared PII patterns.
Inputs identified as PII are masked as configured and passed to the tool in redacted form. If the Workflow Tool is granted access to the original value in the PII configuration:
- The tool can securely unredact and use the PII internally for execution.
- All monitoring, debugging logs, and execution traces continue to display only masked values.
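The mask-then-unredact flow described above can be sketched roughly like this (the pattern set, placeholder format, and access flag are illustrative assumptions, not the platform's actual implementation):

```python
import re

# Hypothetical sketch of the PII flow: inputs are scanned for declared
# patterns, masked before the tool runs, and only tools explicitly granted
# access can recover the original value. Logs only ever see the mask.
PII_PATTERNS = {"email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")}
_vault = {}  # original values keyed by placeholder; never written to logs

def redact(text):
    def _mask(match):
        token = f"<pii:email:{len(_vault)}>"
        _vault[token] = match.group(0)
        return token
    return PII_PATTERNS["email"].sub(_mask, text)

def unredact(text, has_access):
    if not has_access:
        return text  # monitoring and traces keep only masked values
    for token, original in _vault.items():
        text = text.replace(token, original)
    return text

masked = redact("Contact jane@example.com for access")
```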
Context Variable Suggestions in Workflow Tools
Typing {{ in any field that supports context variables now displays a dynamic dropdown of all available variables grouped by node, including environment variables defined at the workflow-tool level. This eliminates the hassle of manually entering the full path. Learn more →
Coming Soon: Support for selecting and referencing agentic app-level environment variables in Workflow Tools is currently in progress and will be available in an upcoming release.
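As an illustration of what a {{...}} context-variable reference resolves to at runtime (the node names, variable paths, and resolution logic here are illustrative assumptions):

```python
import re

# Toy resolver: replace {{node.path.to.value}} references with values from
# a context object grouped by node, mirroring the dropdown described above.
context = {
    "start": {"input": {"order_id": "A-1001"}},
    "env": {"API_TIMEOUT": "30"},
}

def resolve(template, ctx):
    """Substitute {{dotted.path}} references using the context object."""
    def _lookup(match):
        value = ctx
        for part in match.group(1).split("."):
            value = value[part]
        return str(value)
    return re.sub(r"\{\{\s*([\w.]+)\s*\}\}", _lookup, template)

resolved = resolve("Fetching {{start.input.order_id}} with timeout {{env.API_TIMEOUT}}s", context)
```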
Expanded Model Support
The Agent Platform now supports additional AI models, giving users greater flexibility in selecting the right model for their use case.
New models include:
- OpenAI: gpt-5.2-chat-latest, gpt-5.2-2025-12-11, gpt-5.2, gpt-5.1-chat-latest, gpt-5.1-2025-11-13, and gpt-5.1.
- Anthropic: claude-haiku-4-5-20251001, claude-sonnet-4-5-20250929, and claude-opus-4-5-20251101.
- meta-llama/Llama-3.1-8B-Instruct
- meta-llama/Llama-3.2-1B-Instruct
- meta-llama/Llama-3.2-3B-Instruct
- mistralai/Mistral-7B-Instruct-v0.3
- mistralai/Mistral-Nemo-Instruct-2407
- XiaomiMiMo/MiMo-VL-7B-RL
v1.4.0 December 6, 2025
This update includes new features and enhancements summarized below.

Unified Orchestration Management
The platform now supports creating Agentic Apps with a single AI Agent, making it easy to deploy use cases that do not require sophisticated multi-agent orchestration. The Adaptive Network Pattern is now generally available for all users. The new Behavioral Guidelines let you centrally manage safety, branding, and other instructions. Along with these updates, a dedicated Orchestrator section makes it easy to switch between the patterns. Learn more →

Direct Tool Invocation via Agent Protocol
Agent Protocol now supports direct invocation of tools (workflow, code, and MCP tools) while maintaining full access to application context, including memory and environment variables. This enhancement enables developers to execute specific tools programmatically when they know exactly what action is needed, bypassing agent reasoning for faster, more cost-effective, and deterministic execution. Learn more →

Introducing Content Variables
Content Variables provide a centralized way for users to declare data used throughout the application, such as user profiles, customer IDs, and employee IDs. This data becomes automatically accessible to all components, including supervisor prompts, agent definitions, tools, events, and knowledge. It helps streamline context management during execution and eliminates the need for manual configuration. Learn more →

External Model Connection APIs
The platform now provides External Connection APIs that enable programmatic management of model connections in addition to the existing UI. These APIs support viewing existing connections, creating new connections for both easy and custom integrations, and rotating keys for supported providers. Two new API scopes, View connections and Manage connections, have been added, offering more granular permission control for accessing and updating external model connections.
Learn more →

New Model Support
The OpenAI gpt-4o-search-preview model is now supported in Prompts and Workflow tools. The model is optimized for search and RAG workflows, and features multi-chunk document retrieval and grounded response generation. Learn more →

v1.3.1 November 21, 2025
This update includes new features and enhancements summarized below.

Environment Variables & Namespace Support
Environment Variables with namespace support enable secure, reusable, and environment-specific configuration management across the application. Developers can centrally define and manage variables and access them seamlessly in Code Tools. These variables are automatically resolved at runtime, eliminating the need to hardcode sensitive information such as API keys, endpoints, and tokens. This enhancement simplifies multi-environment deployments and ensures consistent, dynamic configuration across all environments. Learn more →

Pre-Processor for Agent Execution
Agent Platform introduces a new Pre-Processor capability that allows developers to transform and validate agent context before each execution. This enhancement enables custom logic to run before each agent invocation, allowing it to process incoming data, enrich context, and adjust agent inputs as required. The Pre-Processor supports user-defined scripts in JavaScript and Python. Note: This feature is currently in preview and can be enabled upon request.

API Key Permissions and Access Control
Agent Platform now supports granular permission controls for API keys, allowing precise definition of each key’s capabilities. Users can create app-level keys scoped to specific permissions to create, manage, or delete sessions, upload or delete files, and execute agent runs. Existing API keys maintain full backward compatibility. Learn more →

Agent Invocation Error Handling
Agent Platform introduces a new error-handling framework that provides configurable timeouts, retry logic, fallback models, and recovery actions for model invocation failures. These enhancements ensure predictable handling of failures and an improved user experience during model outages.
Learn more →

Agentic Evaluation: Introducing Simulations for Agentic Apps
Evaluation Studio now supports Simulations, enabling teams to generate realistic mock interaction sessions before deploying agentic apps to production. The feature helps validate agent behavior early, detect issues quickly, and assess quality across diverse conditions. Simulation sessions can be reviewed directly and imported into Evaluations, just like production data. Key capabilities:
- Reusable Personas: Create personas representing different communication styles and behaviors.
- Test Scenarios: Define scenarios to simulate specific tasks, intents, and edge cases.
- Mock Conversations: Generate realistic conversations (using personas and test scenarios) based on the agentic app’s current configuration.
- Transcript Review: Review transcripts to validate agent behavior and reliability before your agentic app goes live.
v1.3.0 November 3, 2025
This update includes new features and enhancements summarized below.

Waiting Experience
Agent Platform now introduces a new feature that keeps users engaged during voice interactions when the AI Agent needs additional processing time, using configurable filler messages to update them on the current state. These messages can be configured in two ways:
- Static messages: Pre-written responses for consistent communication.
- Dynamic messages: AI-generated responses tailored to the conversation context.
Note:
- This feature is currently in preview and can be enabled upon request.
- This feature only works in ASR/TTS streaming mode.
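The two configuration modes above could be modeled like this (a toy sketch; the static messages and the dynamic branch, which stands in for an AI-generated context-aware filler, are illustrative):

```python
import random

# Sketch of static vs. dynamic waiting messages. A real deployment streams
# these over TTS while the agent is still processing.
STATIC_FILLERS = ["One moment, please.", "Still working on that for you."]

def waiting_message(mode, context_topic=None):
    if mode == "static":
        # Pre-written responses for consistent communication.
        return random.choice(STATIC_FILLERS)
    if mode == "dynamic":
        # Placeholder for a model call that tailors the filler to the conversation.
        return f"I'm checking the latest details on {context_topic} for you."
    raise ValueError(f"unknown mode: {mode}")
```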
v1.2.0 September 27, 2025
This update includes new features and enhancements summarized below.

Introducing Knowledge Base Test Tool
The new test feature at the app level allows users to enter queries directly and receive real-time responses from connected Search AI sources within the knowledge base. This enables quick validation and optimization of knowledge base content before deployment. Learn more →

Enhanced Context Management
The Agent Platform now provides enhanced context handling for conversations with rolling context windows. Configure the number of recent messages to use as conversation context by setting a message count limit. When this limit is reached, the oldest messages are automatically removed to make room for new ones. This prevents context overflow and keeps conversations focused on relevant, up-to-date information. Learn more →

Improved Workflow Tool Testing Experience
The Platform now provides a unified interface for testing workflow tools directly within Agentic Apps. Users can view tool details, enter input parameters, and execute tools within a single, streamlined workflow. The interface includes sample execution capabilities and displays results in a standardized output format. Learn more →

Custom Model Support
The Platform now supports seamless integration of custom models in Agentic Apps and Agents through API endpoints. To ensure compatibility, custom models must support tool calling and adhere to the request and response structures as per the API reference of Anthropic or OpenAI. Custom model integrations with Default model settings are currently not supported in Agentic Apps. The platform provides standardized API integration, performance monitoring, and security controls to ensure consistent and secure usage. Learn more →

Structured Output Support for Open-Source Models
Open-source models now support structured JSON output through the response_format parameter, aligned with OpenAI’s schema style.
This enables schema-based responses across Prompts and Tools.
- Supported on the v2/chat/completions endpoint (default for new deployments).
- Works with most open-source models (see the documentation for the full list of supported models).
- Not supported for fine-tuned models, Hugging Face imports, CT2-optimized models, or locally imported models.
- The schema editor automatically appears in AI nodes when a supported model is selected.
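A request using the OpenAI-style response_format parameter might look like the following sketch (the model name, schema fields, and payload shape are illustrative; consult the platform's API reference for the exact contract):

```python
import json

# Sketch of a request body for schema-constrained output on the
# v2/chat/completions endpoint described above.
request_body = {
    "model": "meta-llama/Llama-3.1-8B-Instruct",
    "messages": [{"role": "user", "content": "Extract the order details."}],
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "order",
            "schema": {
                "type": "object",
                "properties": {
                    "order_id": {"type": "string"},
                    "quantity": {"type": "integer"},
                },
                "required": ["order_id", "quantity"],
            },
        },
    },
}
payload = json.dumps(request_body)  # what would be POSTed to the endpoint
```

The model is then expected to return JSON conforming to the declared schema, which downstream tools can parse without defensive string handling.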
New Model Support
- OpenAI GPT-5 Family: gpt-5-2025-08-07, gpt-5-mini-2025-08-07, gpt-5-nano-2025-08-07, and gpt-5-chat-latest.
- Anthropic: claude-opus-4-1-20250805. Learn more →
v1.0.11 September 9, 2025
This update includes new features and enhancements summarized below.

Introducing Agent Diagnostics
Agent Diagnostics is a comprehensive validation framework that proactively checks your AI application before deployment for configuration errors and operational risks, helping prevent production failures. It provides detailed, actionable reports with direct navigation to problem areas. All diagnostic runs are recorded in Audit Logs for complete traceability. Learn more →

Enhanced Debug Logs with Actionable Insights
Debugging is now more intuitive with enhanced execution logs. The refreshed interface provides detailed, actionable status messages across agents, tools, and supervisors. New features include Guardrails execution logging, auto-expanded current traces, and improved navigation via session and trace IDs for a clearer view of the execution flow. Learn more →

Enhanced Document Management in Playground
The enhanced interface offers a smoother and more intuitive way to manage conversation attachments. The new ‘Manage’ panel features ‘In Context’ and ‘Removed’ tabs, clearly organizing documents by status, with visual indicators showing actively used files. This streamlined approach enhances user control and awareness of document usage, making it easier to track and manage attachments. Learn more →

New Human Node for Workflow Approvals
The new Human review node enables human-in-the-loop workflows by pausing execution to collect user input or approvals, with custom input fields and automatic branching based on responses, timeouts, or failures. This ensures that critical decisions are validated by humans while maintaining workflow continuity and automatically managing exceptions. Key benefits:
- Improve Accuracy: Ensure critical decisions are validated by a person.
- Maintain Flow: Automatically manage interruptions and exceptions without breaking the workflow.
- Gain Control: Design the exact review process your business rules require.
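The automatic branching on responses, timeouts, or failures described for the Human review node can be sketched as a small routing function (branch names are illustrative, not the platform's actual identifiers):

```python
# Toy sketch of Human review node branching: execution pauses for review,
# and the next path depends on the response, a timeout, or a failure.
def route_review(response, timed_out=False, failed=False):
    if failed:
        return "on_failure"
    if timed_out:
        return "on_timeout"
    if response == "approve":
        return "approved_branch"
    return "rejected_branch"
```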
New Model Support
- Text-to-Text task handling and node support for open-source models:
- Meta-llama/Llama-Guard-4-12B
- Meta-llama/Llama-3.2-11B-Vision-Instruct
- Audio-to-Text, Image-to-Text, and Text-to-Text nodes support for external models and Tool calling:
- Gemini 2.5 Pro
- Gemini 2.5 Flash
- Real-time models for voice conversations in Agentic Apps:
- gemini-live-2.5-flash-preview
- gemini-2.0-flash-live-001
- meta-llama/Llama-3.1-8B-Instruct
- meta-llama/Llama-3.2-1B-Instruct
- meta-llama/Llama-3.2-3B-Instruct
- mistralai/Mistral-7B-Instruct-v0.3
- mistralai/Mistral-Nemo-Instruct-2407
v1.0.10 August 13, 2025
These updates focus on providing greater flexibility in model deployment, improved security controls, and enhanced monitoring capabilities across the platform.

Agent Protocol Enhancements
Significant enhancements have been made to the Agent Protocol, with a focus on improving security, optimizing memory management, and streamlining file upload workflows. Key enhancements:
- Authentication Token Support for Async Operations: The platform now supports authentication tokens for callback URLs in asynchronous executions. When the callback URL is invoked, the configured token is included in the request headers, securing communication with external systems. Learn more →
- Document Upload Configuration in Create Session API: The Create Session API now returns upload configuration details, including the maximum file size, the allowed number of files, supported formats, and other relevant constraints. These details enable proactive validation, reducing failed uploads and improving user experience. Learn more →
- Commercial models: Create multiple connections for the same model, each with its own API key/security token, and track usage separately per connection. Learn more →
- Fine-tuned and open-source models: Run multiple deployments of the same model to improve inference control, manage costs, and optimize performance. Learn more →
- Manage multiple connections or deployments for greater flexibility and control.
- Select the right model connection or deployment at runtime for different tasks or environments.
- Agents and Supervisors:
- OpenAI - o3-mini
- Anthropic - claude-sonnet-4-20250514, claude-opus-4-20250514
- Google - gemini-2.5-flash
- Azure OpenAI - GPT-4.1, GPT-4.1-Nano, GPT-4.1-Mini, O1, O1-Mini, O3-Mini
- Text to Image Node (External models):
- OpenAI - DALL·E 2 and DALL·E 3
- Open-source Model:
- XiaomiMiMo/MiMo-VL-7B-RL
- Model Analytics & Traces: Filter and view data by deployment name & version (open-source/fine-tuned) or connection name (external). Learn more →
- Audit Logs: Model Added/Deleted events now display the relevant deployment or connection name. Learn more →
- Billing & Usage: The drill-down view in the Usage page’s Models tab displays the deployment name, type, credits used, last updated date, and status for a model. Totals of all the deployments roll up to show model-level consumption, with deployment-level data reflected in fine-tuning, hosting, and storage metrics. Learn more →
- Enhanced Data Security with Customer-Managed Keys on Azure: For Azure cloud private deployments, the platform now supports Customer-Managed Key (CMK) encryption, allowing you to control your own encryption keys for all transaction data. This enhancement strengthens compliance, enhances data privacy, and provides full ownership of encryption management. The feature can be activated via environment-level configuration and requires MongoDB Atlas (not available for self-managed MongoDB).
- Angular Upgrade: Upgraded Angular from v17 to v20 across the platform to enhance performance, scalability, and long-term maintainability.
- UX Enhancements: UI refinements across the platform to improve the overall user experience, along with bug fixes and performance improvements.
v1.0.9 July 23, 2025
This update includes new features, enhancements, and bug fixes summarized below.

Enhanced Support for Large Documents in Conversations
Document upload limits have been increased to support more detailed and context-rich conversations.
- The file size limit has been increased to a maximum of 25MB.
- The max token limit for document content has been increased to 800,000 tokens.
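A client can pre-check uploads against the two limits above before sending anything; here is a minimal sketch (the word-count token estimate is a crude stand-in for the platform's actual tokenizer):

```python
# Client-side pre-checks against the documented limits: 25 MB per file and
# 800,000 tokens of document content.
MAX_FILE_BYTES = 25 * 1024 * 1024
MAX_DOC_TOKENS = 800_000

def can_upload(file_bytes, document_text):
    """Rough validation; a real check would use the platform's tokenizer."""
    approx_tokens = len(document_text.split())
    return file_bytes <= MAX_FILE_BYTES and approx_tokens <= MAX_DOC_TOKENS
```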
- Flexible Response Routing: Directly stream external agent responses to users for faster interactions, or route through the orchestrator for complex orchestration needs.
- Contextual Metadata Passthrough: Pass structured contextual metadata with agent requests for seamless context continuity, improved personalization, and smoother system integration.
Introducing the Loop Node
- Seamless integration and intuitive design:
- Easily accessible from the bottom tray, assets tray, or plus (+) icon.
- Flexible configuration and smart error handling:
- Supports input arrays via context variables.
- Customize how outputs are collected from each iteration of the loop.
- Choose from built-in error-handling options to control how failures are managed:
- Continue on Error: Skip over the failed iteration and continue looping.
- Remove failed results: Continue processing and exclude failed results from the final output array.
- Terminate execution: Break out of the loop immediately.
- Smart path configuration: Separate “On Success” and “On Error” paths for robust logic.
- The Loop node is currently available within the Tool Builder. Support for displaying related data in other parts of the product, including Monitoring, Analytics, and Import/Export, will be added in upcoming releases.
- In v1.0.7, a restriction on backward connections prevented loops from being created (see the parallel connections section in those release notes). That restriction no longer applies, as you can now create loops natively.
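The three error-handling options listed above might behave like this toy simulation (mode names are paraphrased, and treating "Continue on Error" as keeping a placeholder for the failed iteration is an assumption):

```python
# Simulation of the Loop node's error-handling modes. `step` may raise;
# the mode decides what happens to the loop and the output array.
def run_loop(items, step, mode):
    results = []
    for item in items:
        try:
            results.append(step(item))
        except Exception:
            if mode == "continue_on_error":
                results.append(None)  # keep a placeholder, keep looping
            elif mode == "remove_failed":
                continue              # drop the failed result, keep looping
            elif mode == "terminate":
                break                 # break out of the loop immediately
    return results

def step(x):
    """Example iteration body that fails on negative inputs."""
    if x < 0:
        raise ValueError("negative input")
    return x * 2
```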
- Enable default values for parameters to streamline tool usage and maintain consistency.
- Define restricted value sets using enum parameters for dropdown-style input selection.
- Apply type-specific validation to ensure data accuracy.
- Audit Logging: Full tracking of custom script activities, including Write Code and Custom Function executions via the Function node.
- Import Tool: Automated population of Function node configurations and validation of script deployments.
- Share Tool: Preserves Function node configurations when sharing with other users.
- Tools Monitor Dashboard: Displays complete execution data and logging information for custom scripts.
- Validation Errors: The system now flags missing or undeployed scripts.
- Real-time metrics on users, sessions, messages, tokens, and runs or executions.
- Interactive visualizations with trend analysis and drill-down views.
- List views for detailed component-level insights.
- Filter data by date range for targeted analysis.
.csv file, following the schema and file-naming conventions defined in the Agent Platform. Learn more →
v1.0.8 July 7, 2025
This update includes new features, enhancements, and bug fixes summarized below.

Simplified App Creation Process
The Agent Platform has simplified the app creation process, making it faster and more user-friendly. Users can set up apps with fewer upfront configurations, capturing only the essential information. This enhancement reduces the setup time and improves the overall onboarding experience. Learn more →

Updates to Agent Protocol
The Agent Platform now supports a Universal Session Closure API for consistent and reliable session management. It enables seamless session termination across integrations, addressing issues such as orphaned sessions and incomplete closures and ensuring a unified approach to managing the session lifecycle. Learn more →

Enhanced Document Upload Feature
The Document Upload feature has been enhanced to provide a smoother and more intuitive experience for sharing files in conversations. Users now benefit from more explicit error messages, a visual loading indicator during uploads, and better enforcement of upload restrictions.
- Single File Upload Enforcement: The platform now allows only one file upload at a time, removing multi-file selection to align with one-at-a-time processing logic.
- Improved error handling: Users receive clear, actionable messages in the chat interface when uploads fail due to issues such as the document exceeding the configured token limit or the file type not being supported.
- Azure OpenAI models: Added support for GPT-4.1, GPT-4.1-Mini, and GPT-4.5 preview.
- Gemini 2.0 and 2.5 models: Added support for gemini-2.0-flash, gemini-2.0-flash-lite, and gemini-2.5-flash-preview-05-20.
- Anthropic Claude 4 models: Added support for claude-sonnet-4-20250514 and claude-opus-4-20250514.
v1.0.7 June 20, 2025
This update includes new features, enhancements, and bug fixes summarized below.

Preferred Agent Support
The Agent Platform now supports direct agent invocation through the new Preferred Agent capability in the Agent Protocol. External systems consuming Apps or agents can now call specific agents directly, bypassing the supervisor routing layer for improved performance. Learn more →

Import/Export Enhancements
The import/export feature now supports MCP server configurations and Memory Stores. This enhancement enables users to include these elements during import and export, simplifying migrations and reducing manual work for more comprehensive deployments across various environments. Learn more →

Typeahead Support for Memory Access
The Agent Platform now includes type-ahead functionality across Code Tools and prompt editors, providing developers with contextual suggestions while they write. This enhancement streamlines variable referencing and reduces common development errors.

Improved User Interface
The Agentic App interface has been redesigned for consistent, informative listings, displaying essential information upfront to reduce navigation. Enhancements include streamlined Agent, Tools, and Knowledge listings, as well as improved App Profile and Configuration pages. The Simulate feature has been transformed into Playground with enhanced debugging and testing capabilities, including chat history management and the ability to resume previous sessions. The new interface offers improved usability, featuring easy message copying and clear agent identification during thought streaming.

Parallel Execution in Workflow Builder
The Agent Platform now supports parallel execution within the workflow builder. You can create and trigger multiple branches simultaneously in a single flow, a major upgrade alongside traditional sequential execution. Key benefits:
- Improved Performance: Run branches concurrently to reduce total execution time.
- Faster Workflows: Significantly lowers overall runtime.
- Simplified Design: Ideal for independent tasks like multi-channel actions or parallel data operations.
- Easier Debugging: Outputs are grouped by branch in logs for clear visibility and troubleshooting.
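The performance benefit of parallel branches can be illustrated in plain Python (branch names and durations are illustrative; the platform's actual executor is not exposed this way):

```python
from concurrent.futures import ThreadPoolExecutor
import time

# Rough analogy: independent branches run concurrently, so total wall time
# approaches the slowest branch rather than the sum of all branches.
def branch(name, seconds):
    time.sleep(seconds)  # stand-in for a multi-channel action or data operation
    return name

start = time.perf_counter()
with ThreadPoolExecutor() as pool:
    results = list(pool.map(lambda args: branch(*args),
                            [("notify", 0.2), ("log", 0.2), ("update_crm", 0.2)]))
elapsed = time.perf_counter() - start  # close to 0.2s, not 0.6s
```

Sequentially, the three branches would take the sum of their durations; run in parallel, the total approaches the longest single branch.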
- Object parameter type: Accept structured JSON input alongside existing string and number types.
- Side-by-side layout: View parameters, code editor, and output simultaneously without scrolling.
- Enhanced usability: Streamlined interface reduces context switching and accelerates development cycles.
- Direct tool execution: Test individual MCP tools immediately after configuration without creating full agents.
- Dynamic input forms: Provide sample data through automatically generated parameter forms.
- Real-time results: View execution output instantly.
- Direct access to AWS Bedrock models via a secure, customer-managed setup.
- Built-in validation, testing, and draft-saving for seamless configuration.
- Automated credential management ensures secure and uninterrupted access to models.
- Easy to implement using a secure API Key and Secret.
- User-agnostic: It doesn’t maintain sessions or track user identity. Each request is treated independently.
- The same credentials remain valid until the connection expires, eliminating the need for re-authentication.
- Write Code: Admins can utilize the integrated code editor to write and run either static or dynamic code, with immediate access to output and logs.
- Custom Function: Admins can select a particular function from an already deployed script or an imported project. This option offers several capabilities:
- Dynamic configuration and execution of input parameters through context objects.
- Mapping of selected function’s input arguments to static or dynamic values.
- The ability to add or remove input arguments as needed.
- Testing of script and function configurations with varied input values.
- Execution of the script as part of the tool’s automation workflow, generating a debug log that includes custom function specifics like script name, function name, and tool parameters.
v1.0.6 June 5, 2025
This update includes new features, enhancements, and bug fixes summarized below.

Memory Stores for Contextual Interactions with Read/Write Support via Code Tools
Agent Platform now supports persistent Memory Stores to retain contextual data, enabling more personalized and intelligent agent interactions. These stores can be directly accessed within prompts and programmatically managed via code tools to support dynamic, stateful behavior. The memory stores can be used to maintain user preferences, conversation history, or custom data. Key benefits:
- Stateful Interactions: Maintain and update contextual data to ensure accurate and consistent information.
- Personalized Experiences: Store user-specific data to tailor responses and behavior.
- Flexible Data Management: Access, modify, and persist custom data within the agent’s execution flow.
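A minimal dict-backed stand-in shows the stateful read/write pattern a Code Tool would use against a Memory Store (the class and method names are illustrative; the platform's real API differs):

```python
# Toy Memory Store: read, write, and persist user-scoped data across turns.
class MemoryStore:
    def __init__(self):
        self._data = {}

    def get(self, user_id, key, default=None):
        """Read a stored value for a user, or a default if absent."""
        return self._data.get(user_id, {}).get(key, default)

    def set(self, user_id, key, value):
        """Write or update a value scoped to a user."""
        self._data.setdefault(user_id, {})[key] = value

store = MemoryStore()
store.set("user-42", "preferred_language", "French")
```

On a later turn, an agent could read `preferred_language` back and tailor its response, which is the personalization benefit described above.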
- Enabling multimodal intelligence powered by Google’s advanced LLM.
- Supporting both Agent and Supervisor roles.
- Maintaining full compatibility with routing logic and tool-calling workflows.
- OpenAI: gpt-4.5-preview, gpt-4.1, gpt-4.1-mini, gpt-4.1-nano
- Anthropic: claude-3-5-sonnet, claude-3-5-haiku, claude-3-7-sonnet
v1.0.5 May 9, 2025
This update includes new features, enhancements, and bug fixes summarized below.

Agentic Evaluation
Introducing Agentic Evaluation, a framework for analyzing the real-world performance of agentic AI applications. It enables multi-level evaluation across sessions and traces, offering deep insights into how orchestrators, agents, and tools operate in production. This feature provides visibility into how your system reasons, acts, and interacts with users over time, enabling data-driven improvements. Scoring interactions across the agentic workflow helps identify strengths, surface inefficiencies, and drive continuous improvement at scale. Key capabilities:
- Model Trace Analysis: Import and filter app sessions and traces by app version, environment, and date range.
- Multi-Level Scoring: Evaluate performance at the session and trace levels.
- Evaluator Library: Apply built-in evaluators to assess reasoning quality, action effectiveness, and goal alignment.
- Interactive Scorecards and Trace Trees: Visualize agent behavior, drill into sessions, and explore full execution paths.
- Actionable Insights: Identify deviations, redundant interactions, or suboptimal tool usage to guide iteration.
- Simplified script deployment with an intuitive wizard.
- No more file and network limitations — run your custom scripts with full flexibility.
- Customizable runtime settings for improved performance.
- Efficient management of deployed scripts, including status tracking and API key management.
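Agentic Evaluation's multi-level scoring rolls trace-level evaluator scores up into session-level scores. The pure-Python fragment below is only an illustration of that roll-up idea, not the platform's implementation; the dict shapes are assumptions made for the example.

```python
# Hedged sketch: illustrates rolling per-trace evaluator scores up into one
# session-level score, the idea behind multi-level scoring. The trace dict
# shape is an assumption for this example.

def score_session(traces):
    """Average per-trace evaluator scores into a session-level score.

    `traces` is a list of dicts like {"evaluator_scores": {"reasoning": 0.9}}.
    """
    totals, counts = {}, {}
    for trace in traces:
        for name, score in trace["evaluator_scores"].items():
            totals[name] = totals.get(name, 0.0) + score
            counts[name] = counts.get(name, 0) + 1
    # Session score per evaluator = mean of that evaluator across traces.
    return {name: totals[name] / counts[name] for name in totals}
```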
v1.0.4 April 26, 2025
This update includes new features, enhancements, and bug fixes summarized below.

Knowledge Integration with Agent Platform
The Agent Platform now integrates with AI for Service (XO) search capabilities, offering a RAG-based knowledge solution that lets users easily leverage knowledge from multiple sources through agents. With this integration, users can link one or more knowledge bases to an agent and access them as Knowledge Tools. The agent can then leverage these tools to provide accurate and relevant responses to user queries, enhancing overall performance.
Key features:
- Users can create a new Search AI app or link to an existing app in the same workspace, streamlining knowledge management.
- Users can link one or more knowledge bases to the Agentic app, enabling access to various sources.
- Agents can access the most up-to-date and relevant knowledge to deliver more accurate responses to user queries.
- Standardized export format ensures consistency.
- A dependency-based import process: the platform imports Tools first, then Agents, and then the App config, ensuring that all components are imported successfully. If any step fails, the process is fully rolled back to maintain system integrity.
- Easy export and import streamlines deployment and accelerates setup across environments, saving time and minimizing errors.
- Intuitive Interface.
- Support for PDF document formats.
- Multiple file upload capability.
- Progress indicators for uploading and processing documents.
- The release supports three system events: welcome events, agent handoffs, and end-of-conversation events. Users can enable or disable these events as needed, offering greater flexibility.
- System events are applied consistently across all agents within an agent-based app, ensuring uniform behavior throughout the platform.
- Users can customize the data passed during agent handoff or end-of-conversation events. This customization enables apps to modify behaviors based on specific scenarios.
- Reduced repetitive inputs from users by using key details from the memory stores.
- Seamless personalization across interactions.
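The Knowledge Tools described in this release follow the retrieve-then-answer (RAG) pattern: the agent first fetches relevant passages from a linked knowledge base, then grounds its response in them. The toy retriever below only illustrates that pattern with naive keyword overlap; the real feature is backed by Search AI's indexing, which this sketch does not attempt to reproduce.

```python
# Conceptual sketch only: a toy retriever showing the retrieval step of a
# RAG pipeline. Real Knowledge Tools use Search AI's index, not keyword
# overlap; this is purely illustrative.

def retrieve(query, documents, top_k=2):
    """Rank documents by naive keyword overlap with the query."""
    query_terms = set(query.lower().split())

    def overlap(doc):
        return len(query_terms & set(doc.lower().split()))

    ranked = sorted(documents, key=overlap, reverse=True)
    # Drop documents with no overlap at all; they add noise, not grounding.
    return [doc for doc in ranked[:top_k] if overlap(doc) > 0]
```

The retrieved passages would then be injected into the agent's prompt so its answer stays grounded in the knowledge base.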
v1.0.3 April 18, 2025
This update includes new features, enhancements, and bug fixes summarized below.

OAuth2 Support in API Node
Users can now select an existing authorization profile from the Auth tab when configuring an API node, allowing a secure connection to third-party services using the saved authentication settings. By default, ‘None’ is selected, allowing users to proceed without choosing a profile for authentication.

Support for New Integration Node
Agent Platform introduces the Integration node in Tool Flow to help users connect to supported third-party services and perform specific actions for different use cases. It supports form-based and JSON configuration for easy, no-code integration into automation flows.

Real-Time Model Support
Support for real-time models (gpt-4o-realtime-preview, gpt-4o-mini-realtime-preview) via API key integration has been added. These models can now be added through the Models module and used within the Agentic Apps section. Support for other modules will be added in future updates.

New Table Features in Evaluation Studio
A new table option has been added to make working with data easier in Evaluation Studio. Users can now filter and sort columns, adjust row heights, and hide or show columns to create a personalized view.

Custom Connection Integration with OAuth 2.0
Users can now select preconfigured custom OAuth 2.0 auth profiles to preauthorize a connection. These profiles automatically populate the required parameters, such as Scopes, Refresh URLs, and more. Once a custom auth profile is selected, no further authentication is needed for the external integration.

Centralized Integrations Management
A dedicated Integrations section has been added to manage all external service integrations on the Agent Platform. Users can now go to Settings → Integrations to:
- View all supported integrations in one place.
- Search and filter integrations by category and authorization type.
- View key details, including supported authentication mechanisms, descriptions, and connection names.
- Easily switch between grid and list views.
- Add and set up a new connection, including the pre-authorization credentials to access the service securely.
- Test a configured connection and fix any errors.
- Edit, delete, enable, or disable a connection.
- Trace & Monitor Responses: Response JSON schemas are now captured in model traces and monitoring, with token usage tracked for better insights.
- Save & Reuse with Templates: You can save prompts with attached schemas as templates and reuse them directly in AI nodes—no need to redefine.
- Seamless Sharing & Import/Export: Shared prompts retain their schemas, and exports now include schema details. Imports restore schema data automatically.
- Customize with Flexibility: Schemas can be added or edited directly in AI nodes, and templates or Prompt hub selections auto-load the schema.
- Consistency Across Tools: Cloned and scoped tools in Agentic Apps preserve schemas, and flow change logs now capture schema-related updates.
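The response-schema capabilities listed above amount to enforcing a declared JSON Schema on model output. The checker below is a deliberately simplified, pure-Python illustration of that enforcement for flat schemas; the platform attaches full JSON Schemas, which support far more than the `required` and primitive-type checks shown here.

```python
import json

# Simplified sketch: validates a model's raw JSON output against a flat
# schema (required keys plus primitive types). Real JSON Schema validation
# covers much more; this only shows the enforcement idea.

def check_response(raw_output, schema):
    """Return the parsed response if it satisfies a flat schema, else raise."""
    data = json.loads(raw_output)  # raises if the model emitted invalid JSON
    type_map = {"string": str, "number": (int, float), "boolean": bool}
    for field in schema.get("required", []):
        if field not in data:
            raise ValueError(f"missing required field: {field}")
    for field, spec in schema.get("properties", {}).items():
        if field in data and not isinstance(data[field], type_map[spec["type"]]):
            raise ValueError(f"field {field!r} is not a {spec['type']}")
    return data
```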
- Fixed an issue where StableDiffusion models were not automatically undeployed after 1 hour. Models now undeploy as expected after the set time.
v1.0.2 April 05, 2025
This update includes new features, enhancements, and bug fixes summarized below.

Seamless Integration of Third-Party Agents
Agent Platform now supports the integration of external agents via a proxy agent architecture, allowing enterprises to leverage their existing investments in agents built on various platforms.
Key benefits:
- Centralized management and monitoring of agents across multiple platforms.
- Consistent user experience when interacting with different agent systems.
- Ability to combine and orchestrate cross-platform agent capabilities.
- Ability to integrate fully autonomous applications from the XO Platform.
- Leverage XO Platform’s channel integrations while using Agent Platform capabilities.
- Streamlined user experience with shared authentication and session management.
- Real-time Voice Streaming: Enables real-time voice interactions when the Voice Gateway is selected as the channel.
- Voice Streaming to Users: Supports streaming voice responses to users in real-time based on agent responses.
- Model and Prompt Selection: Provides options to select supported AI models and prompts specific to voice interactions.
- Full Lifecycle Management: Each tool instance within an Agentic App now has full lifecycle management capabilities, including versioning and editing, ensuring greater control and flexibility.
- Simplified Permissions: The new structure simplifies permissions by making tools more securely controlled within each app, improving sharing control, and reducing complexity.
- Easier Updates: With tools directly associated with Agentic Apps, updates and modifications can be made more easily, without impacting other apps or instances.
- Improved Manageability: App-scoped tool instances make tools more manageable, as they are now organized and accessed within the context of each Agentic App.
- Integrated Draft Development: Thoroughly test application functionality in draft mode before creating versions, minimizing disruptions to active environments.
- Flexible Agent Selection: When creating application versions, users can choose from the current draft, previous versions, or specific versions of agents and tools, ensuring optimal compatibility and performance.
- Unified Version Tracking: Agent versions automatically align with application versions, simplifying tracking and management of complex, multi-component systems.
- Seamless Environment Management: Effortlessly deploy and manage different versions of applications across development, testing, and production environments, ensuring consistency and reliability.
- Detailed Execution Logs: Comprehensive logging for every tool interaction.
- On-Demand Trace Viewing: Instant access to execution details.
- Improved Troubleshooting: Deeper insights into tool performance and behavior.
- Create and manage distinct environments.
- Link specific application versions to environments.
- Use unique endpoints for each environment.
- Customize configurations per environment.
- Track deployment history.
- Streamline CI/CD integration.
- Simple integration via a script tag.
- Ability to launch the application directly within enterprise websites.
- Customizable configuration options.
- Seamless authentication and session management.
- Upgraded the TRL version of ml-training-service to support DPO Reinforcement Learning from Human Feedback (RLHF) fine-tuning, ensuring seamless functionality with custom parameters.
- Fixed an issue with CTranslate2 where deploying models with more than 6 billion parameters (e.g., opt-6.7b) on A10 hardware was stuck in the Deploying state when optimization was not enabled.
- Fixed an issue where the output JSON in the model traces for diffusion models in the Text-To-Image node was returning null.
- Fixed an issue where the output JSON in the model traces for the Whisper model in the Audio-To-Text node was returning null.
v1.0.1 March 22, 2025
This update includes only bug fixes.

v1.0 March 14, 2025
This update includes new features and enhancements summarized below.
New Features:
- JSON schema validation for JSON input type: Users can now define and validate JSON schemas for the JSON input type in Tools. A new JSON editor with schema definition and validation ensures that input data matches the required format, with detailed error messages during agent runs and endpoint failures.
- Mapping environment variables in Tools: Tool builders can now specify and map environment variables when adding tools within an AI node. They can select existing environment variables from the tool’s configuration or context variables or enable tool-specific environment variables.
- Deepseek model support: Added support for deploying Deepseek models from Hugging Face on existing Agent Platform hardware. Users can now deploy models like Deepseek-R1-Distill-Qwen-1.5B, Deepseek-R1-Distill-Llama-8B, Deepseek-R1-Distill-Qwen-14B, and Deepseek-R1-Distill-Qwen-7B. These models are now available in the list of open-source models. This support is only available for the models listed above through Hugging Face connections.
- Text-to-Image support: The AI node now supports text-to-image generation within the tool flow. In prompts, users can specify image details and attributes, including elements to include or exclude. Using the Stable Diffusion model, the system generates images in line with the given instructions and keywords. The output is converted to a URL for further usage. Developers can now seamlessly generate and modify images using text-based instructions for creative purposes such as generating marketing content.
- Audio-to-Text support: The AI node now supports Audio-to-text conversion for multi-speaker, multilingual conversations using the OpenAI Whisper model. It transcribes audio, removes banned words, and translates other languages into English. Users can customize transcription style, proper nouns, punctuation, and context through prompt inputs, ensuring accurate results.
- Support for OpenAI Whisper and Anthropic Claude Sonnet Vision: Agent Platform now supports the following external commercial models in its modules and workflows:
- OpenAI Whisper
- Anthropic Claude Sonnet Vision
- Support for Stable Diffusion: Agent Platform now supports the following variants of the Stable Diffusion open-source models in its modules and workflows:
- stable-diffusion-xl-base-1.0
- stable-diffusion-2-1
- stable-diffusion-v1-5
- Evaluation Studio:
- Added support for sharing and collaboration: Projects can now be shared with collaborators, enabling team-based evaluation in a centralized environment. Permissions can now be applied across all evaluations within the project.
- Added support for creating custom evaluators: Users can now create custom AI evaluators using in-built templates, with the ability to select evaluator categories (Quality or Safety). Users can choose scoring mechanisms, set thresholds, and test evaluators, receiving scores and explanations. Custom evaluators can be edited, and saved as templates for use by other users. They can also be saved as global evaluators, making them accessible across multiple projects.
- Added support for human evaluators: Users can now add human evaluators to datasets with three types: thumbs up/down, better output, and comments. These evaluators are added as columns, where users can use ‘thumbs up/down’ to show approval or disapproval, ‘better output’ to suggest improvements, and ‘comments’ for additional feedback.
- Added support for running an API as an output column: Users can now integrate data from external sources using rows from the Evaluation Studio data table. For example, values from a row can be passed as input to a tool, which then generates a response by triggering an API call. This response is automatically populated into a new output column within Evaluation Studio.
- Added support for RAGAS evaluators: RAGAS evaluators are now integrated into Evaluation Studio as system evaluators, particularly within RAG (Retrieval-Augmented Generation) pipelines. These evaluators assess both the accuracy of the answer and the relevance of the contexts used. The supported evaluators include Context Precision, Context Recall, Context Entity Recall, Noise Sensitivity, and Faithfulness.
- Agentic Apps: We are excited to announce the general availability (GA) of Agentic Apps.
- Tools export with automatic model linking: Improvements have been made to tool imports for better handling of linked models.
- Guardrails model deployment support from file system: The deployment process for Guardrail models has been updated to read model paths directly from the file system instead of S3. The file system is now mounted to the Guardrails pods, enabling seamless deployment and testing of Guardrail models.
- Multimodal input support using vLLM: Added support for models that process image and audio inputs. Supported models include:
- microsoft/Phi-3-vision-128k-instruct
- microsoft/Phi-3.5-vision-instruct
- meta-llama/Llama-3.2-11B-Vision-Instruct
- llava-hf/llava-1.5-7b-hf
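The environment-variable mapping for Tools introduced in this release can be pictured as placeholder substitution over a tool's configuration. The sketch below is not the platform's implementation; the `${VAR}` placeholder syntax and config shape are assumptions made purely to illustrate how app-level variables resolve into a tool's settings.

```python
import re

# Illustrative sketch (not the platform's implementation): resolving
# ${NAME}-style placeholders in a tool's configuration from a namespace of
# app-level environment variables. Unknown names are left untouched.

def resolve_config(config, env_vars):
    """Substitute ${NAME} placeholders in string values with env_vars entries."""
    pattern = re.compile(r"\$\{(\w+)\}")

    def substitute(value):
        if isinstance(value, str):
            return pattern.sub(lambda m: env_vars.get(m.group(1), m.group(0)), value)
        if isinstance(value, dict):
            return {k: substitute(v) for k, v in value.items()}
        return value

    return substitute(config)
```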