Anaconda announced the release of the Anaconda AI Platform, described as the only unified AI platform for open source, providing proven security and governance for open source AI development so enterprises can build reliable, innovative AI systems without sacrificing speed, value, or flexibility. The platform combines trusted distribution, simplified workflows, real-time insights, and governance controls in one place to deliver secure, production-ready enterprise Python. It gives organizations the guardrails to treat open source as a strategic business asset, enabling responsible innovation with documented ROI and enterprise-grade governance, and lets teams build once and deploy anywhere safely and at scale. According to a Forrester study cited by Anaconda, organizations using the platform saw a 119% ROI and $1.18M in benefits over three years, driven by improved operational efficiency (an 80% improvement worth $840,000) and stronger security (an 80% reduction in time spent on package security management and a 60% reduction in security breach risk). The platform eliminates environment-specific barriers, so teams can create, innovate, and run AI applications across on-premise, sovereign cloud, private cloud, and public cloud on any device without reworking code for each target. It is now available on AWS Marketplace for seamless procurement and deployment. Additional features include: Trusted Distribution; Secure Governance; Actionable Insights
Parasoft’s agentic AI assistant automates the generation of API test scenarios from service definition files and parameterizes them for data looping
Parasoft has added agentic AI capabilities to SOAtest, featuring API test planning and creation. Parasoft has also enhanced its Continuous Testing Platform (CTP), extending Test Impact Analysis (TIA) and code coverage collection to manual testers, further reducing technical barriers, accelerating feedback, and improving collaboration between development and quality. Parasoft SOAtest’s AI Assistant now uses agentic AI for API test-scenario generation, making it easier for testing teams with diverse skill sets to adopt API test automation. With this release, a tester can ask the AI in natural language to generate API test scenarios from service definition files. Going beyond simple test creation, the AI Assistant uses AI agents to generate test data and parameterize the test scenario for data looping. Complex, multi-step workflows with dynamic data are handled in collaboration with the user, allowing less technical testers to build complicated tests without scripts, advanced code-level skills, or in-depth domain knowledge. In addition to reducing technical burdens, Parasoft’s AI Assistant will help customers scale API testing and automate other in-product actions, and as additional agents are introduced over time, it will produce even smarter test scenarios and workflow guidance. On the CTP side, QA teams can collect and analyze code coverage from manual test runs, then publish that coverage into Parasoft DTP for deeper analysis. In CTP, a tester can easily create a manual test case and, with a few clicks, ensure code coverage is captured during test runs. With this visibility, teams can fine-tune their manual testing efforts by eliminating redundancies, filling coverage gaps, and focusing on the highest-risk areas. Teams can now create, import, and manage manual tests directly in CTP, capture code coverage as those tests run, and use that data in test impact analysis to pinpoint exactly which manual regression tests need to be rerun to validate application changes. This trims retesting time and effort, reducing testing fatigue while strengthening collaboration between development and QA teams. It also makes it easier to adapt manual regression testing for agile sprints, since teams can focus only on impacted areas. With faster test cycles, QA teams can quickly validate changes and shorten feedback loops.
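The data looping that SOAtest’s assistant automates can be pictured with a generic, hand-written equivalent. The sketch below is not Parasoft output or the SOAtest scenario format; it is a plain pytest/requests illustration of running one API test over a table of data rows, with a hypothetical endpoint and payloads.

```python
# Generic illustration of a data-driven ("data looped") API test.
# The endpoint, payloads, and expected values are hypothetical; Parasoft
# SOAtest builds equivalent scenarios in its own tooling rather than pytest.
import pytest
import requests

BASE_URL = "https://api.example.com"  # assumed service under test

# Test data of the kind an assistant might derive from a service definition file
ACCOUNT_CASES = [
    {"account_id": "A-100", "expected_status": 200},
    {"account_id": "A-200", "expected_status": 200},
    {"account_id": "MISSING", "expected_status": 404},
]

@pytest.mark.parametrize("case", ACCOUNT_CASES, ids=lambda c: c["account_id"])
def test_get_account(case):
    """Run the same scenario for each data row and assert on the response."""
    response = requests.get(f"{BASE_URL}/accounts/{case['account_id']}", timeout=10)
    assert response.status_code == case["expected_status"]
    if response.status_code == 200:
        body = response.json()
        assert body["accountId"] == case["account_id"]
```

Each row drives one execution of the same scenario, which is the effect the AI Assistant’s generated parameterization produces inside SOAtest.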
Pega launches agents for workflow and decisioning design that can instantly create out-of-the-box conversational agents from any workflow
Pegasystems unveiled Pega Predictable AI™ Agents, which give enterprises extraordinary control and visibility as they design and deploy AI-optimized processes, so businesses can deploy them with confidence, accelerating value while minimizing risk. Pega Predictable AI Agents allow enterprises to avoid the sinkhole of “AI black boxes” by thoughtfully integrating AI agents into the world’s leading enterprise platform for workflow automation. Instead of providing nothing more than prompt-based authoring tools, basic dashboards, and vague advice to “use it wisely,” Pega maximizes the value of AI while minimizing risk with the following Pega Predictable AI Agents: Design Agents: At the core of the Pega Predictable AI Agents strategy is Pega Blueprint™, the industry’s first agents for workflow and decisioning design. Pega Blueprint leverages a collection of unique AI models and agents to generate workflows, next-best-action strategies, data structures, interfaces, user screens, security configuration, and more. It can also be invoked at runtime if a user needs to automate a process on the fly that isn’t already defined in the application. Conversation Agents: Leveraging the Pega Agent Experience™ API, Pega Blueprint can instantly create out-of-the-box conversational agents from any workflow. Automation Agents: Clients can incorporate these agents into their workflows as specific workflow steps, orchestrating agents both inside and outside of Pega to accelerate productivity in a transparent and reliable way. Knowledge Agents: Pega Blueprint leverages Pega Knowledge Buddy™ agents to create workflows that draw on industry best practices and to embed guidance inside other workflows. Coach Agents: Agents such as Pega Coach collaborate with humans involved in a workflow step to provide real-time, contextual guidance about the work.
Vectara’s Hallucination Corrector reduces hallucination rates in enterprise AI systems to about 0.9% and provides a detailed explanation of each factual inconsistency along with a corrected version
AI agent and assistant platform provider Vectara launched a new Hallucination Corrector, directly integrated into its service, designed to detect and mitigate costly, unreliable responses from enterprise AI models. In initial testing, Vectara said the Hallucination Corrector reduced hallucination rates in enterprise AI systems to about 0.9%. The Corrector works alongside Vectara’s Hughes Hallucination Evaluation Model (HHEM), which scores an answer against its source material on a scale from 0 to 1, where 0 means completely inaccurate (a total hallucination) and 1 means perfect accuracy. HHEM is available on Hugging Face and received over 250,000 downloads last month, making it one of the most popular hallucination detectors on the platform. When a response is factually inconsistent, the Corrector provides a detailed output that includes an explanation of why the statement is a hallucination and a corrected version incorporating minimal changes for accuracy. By default, the corrected output is used in summaries shown to end users, but experts can use the full explanation and suggested fixes when testing applications to refine or fine-tune their models and guardrails against hallucinations. The Corrector can also show the original summary and use the correction information to flag potential issues, offering the corrected summary as an optional fix. For LLM answers that are misleading but not quite outright false, the Hallucination Corrector can refine the response to reduce its uncertainty score according to the customer’s settings.
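Because HHEM is openly available on Hugging Face, teams can experiment with the same kind of factual-consistency scoring outside Vectara’s managed service. The sketch below follows the usage pattern published on the vectara/hallucination_evaluation_model card; the loading call and the predict helper may differ between model versions, so treat the details as assumptions to check against the current card.

```python
# Minimal sketch of scoring factual consistency with Vectara's open HHEM model.
# Loading/prediction follows the pattern shown on the Hugging Face model card
# (vectara/hallucination_evaluation_model); verify against the current card,
# as the API may change between model versions.
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "vectara/hallucination_evaluation_model", trust_remote_code=True
)

# Each pair is (source passage, generated answer); the example text is made up.
pairs = [
    ("The invoice was issued on May 2 for $1,200.",
     "The invoice was issued on May 2 for $1,200."),
    ("The invoice was issued on May 2 for $1,200.",
     "The invoice was issued on June 2 for $2,100."),
]

scores = model.predict(pairs)  # values near 1.0 = consistent, near 0.0 = hallucinated
print(scores)
```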
LatticeFlow AI’s risk evaluation service delivers independent evaluations of LLMs using benchmarks tailored to real-world, business-oriented requirements for secure and compliant adoption of gen AI
LatticeFlow AI has launched AI Insights, the first independent LLM risk evaluation service for secure business adoption. AI Insights gives AI and governance, risk, and compliance (GRC) leaders clear, actionable intelligence for fast, secure, and confident adoption of foundation models. The service favors transparency, independence, and real-world relevance over leaderboard rankings and raw performance metrics, and is designed to provide enterprise leaders with independent, trustworthy, business-oriented evaluations that support secure and compliant AI adoption. AI Insights delivers independent evaluations of foundation models using the most comprehensive set of benchmarks tailored to real-world business requirements, covering security, fairness, and regulatory alignment. Each evaluation provides clear, actionable recommendations, presented in intuitive reports that explain model behavior, flag critical issues such as bias or prompt vulnerabilities, and offer mitigation guidance. “AI Insights enables organizations to accelerate AI adoption by ensuring secure and compliant AI deployment,” said Dr. Petar Tsankov, CEO and Co-founder of LatticeFlow AI.
Inflectra’s cloud-native generative AI engine is natively integrated into its software development platforms, unlike conventional ‘bolt-on’ AI, offering real-time support and dynamic test automation
Inflectra announced the general availability of Inflectra.ai, its natively integrated generative AI engine designed to accelerate software delivery, improve quality, and optimize development throughput. Inflectra.ai delivers AI capabilities directly within Inflectra’s cloud platforms — starting with Spira — enabling teams to automate routine processes, generate key artifacts, and enhance decision-making without leaving their existing tools or introducing additional overhead. Unlike conventional “bolt-on” AI features, Inflectra.ai is deeply embedded within the fabric of Inflectra’s Software Project Management platforms: SpiraTest, SpiraTeam, and SpiraPlan, and is expected to expand into Rapise later in 2025. Built as a cloud-native and context-aware intelligence layer, Inflectra.ai delivers real-time support across the software lifecycle. Core Capabilities Include: Intelligent Generation of test cases, BDD scenarios, risks, and user stories from structured and unstructured inputs; Dynamic Test Automation that adapts to UI changes without manual rework; Risk Identification and prioritization at the point of planning and analysis; Seamless Contextual Assistance embedded within the Spira UI, aligned to user workflows.
Databricks to integrate Neon’s serverless Postgres architecture, enabling developers to deploy AI agents without having to scale compute and storage in tandem
Databricks announced its intent to acquire Neon, a leading serverless Postgres company. Databricks plans to continue innovating and investing in Neon’s database and developer experience for existing and new Neon customers and partners. Together, Databricks and Neon will work to remove the traditional limitations of databases that require compute and storage to scale in tandem — an inefficiency that hinders AI workloads. The integration of Neon’s serverless Postgres architecture with the Databricks Data Intelligence Platform will help developers and enterprise teams efficiently build and deploy AI agent systems. This approach not only prevents performance bottlenecks from thousands of concurrent agents but also simplifies infrastructure, reduces costs and accelerates innovation — all with Databricks’ security, governance and scalability at the core. Together, Neon and Databricks will empower organizations to eliminate data silos, simplify architecture and build AI agents that are more responsive, reliable and secure.
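A minimal sketch of the pattern the companies describe: each agent provisions its own short-lived Postgres database and talks to it over a standard connection string. The provisioning helper below is hypothetical, since the announcement does not document a specific API; only the psycopg usage is standard Postgres.

```python
# Sketch of the "one ephemeral Postgres per agent" pattern the announcement
# describes. provision_agent_database() is a hypothetical placeholder for a
# call to a serverless-Postgres management API (e.g. Neon's); the rest is
# ordinary Postgres access via psycopg.
import psycopg

def provision_agent_database(agent_id: str) -> str:
    """Hypothetical: ask the control plane for a fresh database and return its DSN."""
    raise NotImplementedError("replace with the provider's provisioning API")

def run_agent_task(agent_id: str) -> None:
    dsn = provision_agent_database(agent_id)
    # Because storage and compute scale independently, thousands of agents can
    # each hold a short-lived database without pre-provisioned capacity.
    with psycopg.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute("CREATE TABLE IF NOT EXISTS scratch (k text PRIMARY KEY, v text)")
        cur.execute(
            "INSERT INTO scratch (k, v) VALUES (%s, %s) ON CONFLICT (k) DO NOTHING",
            (agent_id, "started"),
        )
        conn.commit()
```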
OpenAI releases GPT-4.1 models in ChatGPT, which excel at coding and instruction following compared to GPT-4o and are faster than its o-series reasoning models, though with different safety considerations than frontier models
OpenAI is releasing its GPT-4.1 and GPT-4.1 mini AI models in ChatGPT. The GPT-4.1 models should help software engineers who are using ChatGPT to write or debug code, OpenAI spokesperson Shaokyi Amdo told TechCrunch. GPT-4.1 excels at coding and instruction following compared to GPT-4o, according to OpenAI, and is faster than its o-series reasoning models. The company says it’s now rolling out GPT-4.1 to ChatGPT Plus, Pro, and Team subscribers, while GPT-4.1 mini is available to both free and paying ChatGPT users. As a result of this update, OpenAI is removing GPT-4o mini from ChatGPT for all users. “GPT-4.1 doesn’t introduce new modalities or ways of interacting with the model, and doesn’t surpass o3 in intelligence,” said OpenAI’s Head of Safety Systems Johannes Heidecke in a post. “This means that the safety considerations here, while substantial, are different from frontier models.” Separately, OpenAI has committed to publishing the results of its internal AI model safety evaluations more frequently as part of an effort to increase transparency. Those results will live in OpenAI’s new Safety Evaluations Hub, which it launched on Wednesday.
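The article concerns the ChatGPT rollout, but the same model names are exposed through OpenAI’s API, where developers choose between the two tiers explicitly. A minimal sketch with the official Python SDK, assuming an OPENAI_API_KEY is configured in the environment:

```python
# Minimal sketch of selecting GPT-4.1 vs. GPT-4.1 mini through the OpenAI
# Python SDK. These are the API-side names of the models discussed above;
# the prompt and helper are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(model: str, prompt: str) -> str:
    # "gpt-4.1" for heavier coding tasks, "gpt-4.1-mini" for cheaper, faster calls
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(ask("gpt-4.1-mini", "Explain what this regex matches: ^\\d{4}-\\d{2}-\\d{2}$"))
```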
Capgemini’s mainframe modernization offering automates legacy code analysis and extraction of business rules using a set of generative AI agents
Capgemini has launched a new offering that enables organizations to unlock greater value from their legacy systems with unprecedented speed and accuracy. The new approach, powered by generative and agentic AI, allows organizations to gain cost savings, agility, and a significant improvement in data quality. It converts legacy mainframe applications into modern, agile, cloud-friendly formats that can run more efficiently either on or outside of a mainframe. Capgemini’s automated mainframe application refactoring uses tools and techniques to automatically convert legacy mainframe applications, such as those written in COBOL, into modern architectures. The approach is supported by rigorous automated testing for faster, higher-quality transformations and reduced risk for businesses. Capgemini’s experience in delivering large and complex mainframe modernization programs, its market leadership in AI, deep domain knowledge, and broad understanding of complex industry regulations have already delivered tangible results for blue-chip clients.
Boomi and AWS partner to offer a centralized management solution for deploying, monitoring, and governing AI agents across hybrid and multi-cloud environments, with Amazon Bedrock providing built-in MCP support through a single API
Boomi announced a multi-year Strategic Collaboration Agreement (SCA) with AWS to help customers build, manage, monitor, and govern gen AI agents across enterprise operations. Additionally, the SCA aims to help customers accelerate SAP migrations from on-premises to AWS. By integrating Amazon Bedrock with the Boomi Agent Control Tower, a centralized management solution for deploying, monitoring, and governing AI agents across hybrid and multi-cloud environments, customers can easily discover, build, and manage agents executing in their AWS accounts while maintaining visibility and control over agents running in other cloud provider or third-party environments. Through a single API, Amazon Bedrock provides a broad set of capabilities for building generative AI applications with security, privacy, and responsible AI in mind, including support for Model Context Protocol (MCP), an open standard that enables developers to build secure, two-way connections between their data and AI-powered tools. MCP enables agents to effectively interpret and work with ERP data while complying with data governance and security requirements. “By integrating Amazon Bedrock’s powerful generative AI capabilities with Boomi’s Agent Control Tower, we’re giving organizations unprecedented visibility and control across their entire AI ecosystem while simultaneously accelerating their critical SAP workload migrations to AWS,” said Steve Lucas, Chairman and CEO at Boomi. “This partnership enables enterprises to confidently scale their AI initiatives with the security, compliance, and operational excellence their business demands.” Beyond Agent Control Tower, the collaboration will introduce several strategic joint initiatives, including an Enhanced Agent Designer, and New Native AWS Connectors and Boomi for SAP.
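The “single API” in question is Amazon Bedrock’s model-agnostic Converse interface, on top of which Boomi’s Agent Control Tower layers governance. Below is a minimal boto3 sketch of that Bedrock call; the model ID is a placeholder for whatever model is enabled in the account, and none of this reflects Boomi’s own interfaces.

```python
# Sketch of Amazon Bedrock's single, model-agnostic Converse API via boto3.
# This illustrates the "single API" referenced above; it is not the Boomi
# Agent Control Tower interface. MODEL_ID is a placeholder - use a
# Converse-capable model enabled in your AWS account and region.
import boto3

MODEL_ID = "your-enabled-bedrock-model-id"  # assumption: replace with a real model ID

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId=MODEL_ID,
    messages=[{"role": "user", "content": [{"text": "Summarize this ERP change order."}]}],
    inferenceConfig={"maxTokens": 256, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])
```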