Lumen and IBM announced a new collaboration to develop enterprise-grade AI solutions at the edge, integrating watsonx, IBM’s portfolio of AI products, with Lumen’s Edge Cloud infrastructure and network. The new edge-optimized AI inferencing solutions will deploy IBM watsonx technology in Lumen’s edge data centers and leverage Lumen’s multi-cloud architecture, enabling clients across financial services, healthcare, manufacturing, and retail to analyze massive volumes of data in near real time with minimal latency. Enterprises will be able to develop and deploy AI models closer to the point of data generation, facilitating smarter decision-making while maintaining data control and security and accelerating AI innovation. Lumen’s edge network offers sub-5ms latency and direct connectivity to major cloud providers and enterprise locations; paired with IBM watsonx, the infrastructure can enable real-time AI processing and help mitigate the costs and risks of public cloud dependence. IBM Consulting will act as the preferred systems integrator, drawing on its deep technology, domain, and industry expertise to support clients in scaling deployments, reducing costs, and fully leveraging AI capabilities. The collaboration aims to solve contemporary business challenges by turning AI potential into practical, high-impact outcomes at the edge. For enterprise businesses, this can mean faster insights, lower operational costs, and a smarter path to digital innovation. Ryan Asdourian, Chief Marketing and Strategy Officer at Lumen, said, “By combining IBM’s AI innovation with Lumen’s powerful network edge, we’re making it easier for businesses to tap into real-time intelligence wherever their data lives, accelerate innovation, and deliver smarter, faster customer experiences.”
Anaconda supports enterprise open source, combining trusted distribution, simplified workflows, real-time insights, and governance controls in one place to deliver secure and production-ready enterprise Python
Anaconda announced the release of the Anaconda AI Platform, which it describes as the only unified AI platform for open source, providing proven security and governance when leveraging open source for AI development and empowering enterprises to build reliable, innovative AI systems without sacrificing speed, value, or flexibility. The platform combines trusted distribution, simplified workflows, real-time insights, and governance controls in one place to deliver secure, production-ready enterprise Python. It enables organizations to treat open source as a strategic business asset, providing the essential guardrails for responsible innovation along with documented ROI and enterprise-grade governance capabilities, and lets enterprises build once and deploy anywhere safely and at scale. A Forrester study found a 119% ROI and $1.18M in benefits within three years, including an 80% improvement in operational efficiency worth $840,000, an 80% reduction in time spent on package security management, and a 60% reduction in security breach risk. The Anaconda AI Platform eliminates environment-specific barriers, enabling teams to create, innovate, and run AI applications across on-premises, sovereign cloud, private cloud, and public cloud environments on any device without reworking code for each target. The platform is now available on AWS Marketplace for seamless procurement and deployment. Additional features include: Trusted Distribution; Secure Governance; Actionable Insights
Microsoft’s new tools can build and manage multi-agent workflows and simulate agent behavior locally before deploying to the cloud, while ensuring interoperability via open protocols like MCP and Agent2Agent
Microsoft Corp. is rolling out a suite of new tools and services designed to accelerate the development and deployment of autonomous AI agents across its platforms. The Azure AI Foundry Agent Service is now generally available, allowing developers to build, manage, and scale AI agents that automate business processes. It supports multi-agent workflows, meaning specialized agents can collaborate on complex tasks. The service integrates with various Microsoft services and supports open protocols like Agent2Agent and Model Context Protocol, ensuring interoperability across different agent frameworks. To streamline deployment and testing, Microsoft has introduced a unified runtime that merges the Semantic Kernel SDK and the AutoGen framework, enabling developers to simulate agent behavior locally before deploying to the cloud. The service also includes AgentOps, a set of monitoring and optimization tools, and allows developers to use Azure Cosmos DB for thread storage. Another major announcement is Copilot Tuning, a feature that lets businesses fine-tune Microsoft 365 Copilot using their own organizational data. This means law firms can create AI agents that generate legal documents in their house style, while consultancies can build Q&A agents based on their regulatory expertise. The feature will be available in June through the Copilot Tuning Program, but only for organizations with at least 5,000 Microsoft 365 Copilot licenses. Microsoft is also previewing new developer tools for Microsoft Teams, including secure peer-to-peer communication via the A2A protocol, agent memory for contextual user experiences, and improved development environments for JavaScript and C#.
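The multi-agent pattern the service supports can be sketched in plain Python: an orchestrator routes each step of a workflow to whichever specialized agent handles that kind of task. Everything below (the Agent and Orchestrator classes, the toy skills) is a hypothetical illustration of the pattern, not the Azure AI Foundry SDK.

```python
# Minimal sketch of a multi-agent workflow: specialized agents collaborate
# on a task list, each handling the step type it is built for.
# All names here are illustrative, not Azure AI Foundry APIs.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    name: str
    skill: str                    # the task type this agent handles
    handle: Callable[[str], str]  # task text -> result

class Orchestrator:
    """Routes each workflow step to the agent whose skill matches."""
    def __init__(self, agents):
        self.by_skill = {a.skill: a for a in agents}

    def run(self, workflow):
        # workflow is a list of (skill, task) steps
        results = []
        for skill, task in workflow:
            agent = self.by_skill[skill]
            results.append((agent.name, agent.handle(task)))
        return results

# Two toy specialized agents standing in for LLM-backed ones
summarizer = Agent("summarizer", "summarize", lambda t: t[:20] + "...")
classifier = Agent("classifier", "classify",
                   lambda t: "invoice" if "$" in t else "other")

orch = Orchestrator([summarizer, classifier])
out = orch.run([
    ("classify", "Pay $120 by Friday"),
    ("summarize", "Quarterly revenue grew strongly across regions"),
])
```

In a real deployment each `handle` would wrap a model call, and this kind of loop is what the merged Semantic Kernel/AutoGen runtime lets developers simulate locally before pushing agents to the cloud.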
Unblocked is an AI-powered assistant that answers contextual questions about lines of code and lets developers search for the person who made changes to a particular module
Unblocked is an AI-powered assistant that answers contextual questions about lines of code. Unblocked integrates with development environments and apps like Slack, Jira, Confluence, Google Drive, and Notion. The tool gathers intelligence about a company’s codebase and helps answer questions such as “Where do we define user metrics in our system?” Developers can also use the platform to search for the person who made changes to a particular module and quickly gain insights from them. Unblocked offers admin controls that can be easily adopted by a company’s system admin, and the startup is working on integrating with platforms like Cursor and Lovable to improve code explainability. Beyond this, Unblocked is developing tools that actively help developers with projects rather than simply answering questions. One, Autonomous CI Triage, supports developers in testing code through different scenarios. Unblocked counts companies such as Drata, AppDirect, Big Cartel, and TravelPerk as customers. Unblocked founder Dennis Pilarinos claims that engineers at Drata were able to save one to two hours per week using the platform.
Parasoft’s agentic assistant automates generating API test scenarios using service definition files while also parameterizing for data looping
Parasoft has added agentic AI capabilities to SOAtest, featuring API test planning and creation. Parasoft has also enhanced its Continuous Testing Platform (CTP), extending Test Impact Analysis (TIA) and code coverage collection to manual testers, further reducing technical barriers, accelerating feedback, and improving collaboration between development and quality teams. Parasoft SOAtest’s AI Assistant now utilizes agentic AI for API test-scenario generation, making it easier for testing teams with diverse skill sets to adopt API test automation. With this release, testers can ask the AI in natural language to generate API test scenarios from service definition files. Going beyond simple test creation, the AI Assistant leverages AI agents to generate test data and parameterize the test scenario for data looping. Complex, multi-step workflows with dynamic data are handled in collaboration with the user, allowing less technical testers to build complicated tests without requiring scripts, advanced code-level skills, or in-depth domain knowledge. In addition to reducing technical burdens, Parasoft’s AI Assistant will help customers scale API testing and automate other in-product actions. As additional agents are introduced over time, it will produce even smarter test scenarios and workflow guidance. QA teams can leverage Parasoft CTP to collect and analyze code coverage from manual test runs, then publish that coverage into Parasoft DTP for deeper analysis. In CTP, a tester can easily create a manual test case and, with a few clicks, ensure code coverage is captured during test runs. With this visibility, teams can fine-tune their manual testing efforts, eliminating redundancies, filling coverage gaps, and focusing on the highest-risk areas.
Teams can now create, import, and manage manual tests directly in CTP, capture code coverage as those tests run, and utilize that data in test impact analysis to pinpoint exactly which manual regression tests need to be rerun to validate application changes. This trims retesting time and effort, reducing testing fatigue while strengthening collaboration between development and QA teams. This new capability also makes it easier to adapt manual regression testing for agile sprints, as it allows teams to only focus on impacted areas. With faster test cycles, QA teams can quickly validate changes and shorten feedback loops.
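The "data looping" idea above — one test scenario parameterized to run once per row of a data set — can be shown in a few lines of plain Python. This is a generic illustration of the technique, not Parasoft SOAtest output; the endpoint behavior and test cases are hypothetical.

```python
# Data-driven ("data looping") API testing: the same scenario runs once per
# row of a data table. The endpoint and cases below are hypothetical.
CASES = [
    ({"user_id": 1}, 200),   # existing user -> OK
    ({"user_id": -1}, 404),  # invalid id   -> not found
]

def fake_get_user(params):
    """Stand-in for a real HTTP call so the sketch is self-contained."""
    return 200 if params["user_id"] > 0 else 404

def run_scenario(cases):
    """Loop one test scenario over the data set; return pass/fail per row."""
    return [fake_get_user(params) == expected for params, expected in cases]

results = run_scenario(CASES)  # one boolean per data row
```

Tools like SOAtest generate the data table and the looping scaffold for the tester; the point of the sketch is only the shape of the loop: scenario logic written once, data varied per iteration.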
Nvidia DGX Spark and DGX Station personal AI supercomputers enable developers to prototype, fine-tune, and run inference on models, with DGX Station offering networking speeds of up to 800Gb/s for high-speed connectivity and multi-station scaling
Nvidia announced that Taiwan’s system manufacturers are set to build Nvidia DGX Spark and DGX Station systems. Growing partnerships with Acer, Gigabyte, and MSI will extend the availability of the DGX Spark and DGX Station personal AI supercomputers. Powered by the Nvidia Grace Blackwell platform, DGX Spark and DGX Station will enable developers to prototype, fine-tune, and run inference on models from the desktop to the data center. DGX Spark is equipped with the Nvidia GB10 Grace Blackwell Superchip and fifth-generation Tensor Cores. It delivers up to 1 petaflop of AI compute and 128GB of unified memory, and enables seamless exporting of models to Nvidia DGX Cloud or any accelerated cloud or data center infrastructure. Built for the most demanding AI workloads, DGX Station features the Nvidia GB300 Grace Blackwell Ultra Desktop Superchip, which offers up to 20 petaflops of AI performance and 784GB of unified system memory. The system also includes the Nvidia ConnectX-8 SuperNIC, supporting networking speeds of up to 800Gb/s for high-speed connectivity and multi-station scaling. DGX Station can serve as an individual desktop for one user running advanced AI models on local data, or as an on-demand, centralized compute node for multiple users. The system supports Nvidia Multi-Instance GPU technology to partition into as many as seven instances, each with its own high-bandwidth memory, cache, and compute cores, serving as a personal cloud for data science and AI development teams. To give developers a familiar user experience, DGX Spark and DGX Station mirror the software architecture that powers industrial-strength AI factories. Both systems use the Nvidia DGX operating system, preconfigured with the latest Nvidia AI software stack, and include access to Nvidia NIM microservices and Nvidia Blueprints.
Developers can use common tools, such as PyTorch, Jupyter and Ollama, to prototype, fine-tune and perform inference on DGX Spark and seamlessly deploy to DGX Cloud or any accelerated data center or cloud infrastructure.
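As a minimal sketch of that local-inference workflow, the snippet below talks to an Ollama server over its REST API (Ollama listens on http://localhost:11434 by default); the model name is an example and is assumed to already be pulled locally.

```python
# Sketch: local inference against an Ollama server's /api/generate endpoint.
# Assumes Ollama is running locally and the named model has been pulled.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    # stream=False asks for a single JSON response instead of a token stream
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    payload = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # the non-streaming response carries the text in the "response" field
        return json.loads(resp.read())["response"]

payload = build_request("llama3", "Summarize edge AI in one sentence.")
```

`build_request` is split out so the payload shape can be inspected without a running server; calling `generate(...)` performs the actual round trip once a local Ollama instance is up.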
Iterate.ai offers an on-premises AI appliance that delivers complete control, privacy, and enterprise-grade AI performance without relying on the cloud
Iterate.ai and ASA Computers have launched AIcurate, a turnkey, on-premises AI appliance that delivers complete control, privacy, and enterprise-grade AI performance without relying on the cloud. Built on Iterate.ai’s Generate platform and deployed on Dell PowerEdge servers, AIcurate empowers enterprises to run LLMs and AI workloads securely within their own infrastructure. The system supports integration with popular business tools, is vendor-agnostic, and is optimized for performance-intensive applications such as document analysis, internal search, and workflow automation. Unlike public AI platforms, AIcurate enables secure deployment of powerful LLMs, including models from OpenAI, Google (PaLM 2), Meta (Llama), Mistral, and Microsoft, all without sending data to the cloud. Businesses can build custom AI workflows while ensuring compliance with internal policies and industry regulations. “This collaboration makes advanced AI more accessible for organizations that can’t compromise on data control,” said Ruban Kanapathippillai, SVP of Systems and Solutions at ASA Computers. “AIcurate puts enterprise-grade AI directly into customers’ data centers, giving them full control while supporting the flexible and secure architecture that modern IT teams demand.” Capabilities included in AIcurate: secure on-prem deployment, enterprise tool integration, support for leading LLMs, vendor-agnostic architecture, advanced document processing, role-based access control, and workflow automation with agentic AI.
Pega launches agents for workflow and decisioning design that can instantly create out-of-the-box conversational agents from any workflow
Pegasystems unveiled Pega Predictable AI™ Agents, which give enterprises extraordinary control and visibility as they design and deploy AI-optimized processes. Businesses can deploy Pega Predictable AI Agents with confidence, accelerating value while minimizing risk. The agents allow enterprises to avoid the sinkhole of “AI black boxes” by thoughtfully integrating AI agents into the world’s leading enterprise platform for workflow automation. Instead of providing nothing more than prompt-based authoring tools, basic dashboards, and vague advice to use AI wisely, Pega maximizes the value of AI while minimizing risk with the following Pega Predictable AI Agents:
Design Agents: At the core of the Pega Predictable AI Agents strategy is Pega Blueprint™, the industry’s first agent for workflow and decisioning design. Pega Blueprint leverages a collection of unique AI models and agents to generate workflows, next-best-action strategies, data structures, interfaces, user screens, security configuration, and more. It can also be invoked at runtime if a user needs to automate a process on the fly that isn’t already defined in the application.
Conversation Agents: Leveraging the Pega Agent Experience™ API, Pega Blueprint can instantly create out-of-the-box conversational agents from any workflow.
Automation Agents: Clients can incorporate these agents into their workflows as specific workflow steps, orchestrating agents both inside and outside of Pega to accelerate productivity in a transparent and reliable way.
Knowledge Agents: Pega Blueprint leverages Pega Knowledge Buddy™ agents to create workflows that embody industry best practices and to embed guidance inside other workflows.
Coach Agents: Agents such as Pega Coach collaborate with humans involved in a workflow step to provide real-time, contextual guidance about the work.
NLWeb from Microsoft combines semi-structured data, RSS, and LLMs to turn any website into an AI app powered by natural language that lets visitors query its contents using their voice
Microsoft has launched NLWeb, an open-source project that aims to transform any existing website into an artificial intelligence-powered application by adding natural language capabilities. The project, announced at Microsoft Build 2025, aims to give developers the fastest and easiest way to turn any website into an AI app powered by the large language model of their choice. Once integrated, people can query the contents of any website using their voice, just as they do with AI assistants such as ChatGPT or Microsoft Copilot. NLWeb uses semistructured data that websites already publish, such as Schema.org markup and RSS feeds, combining it with large language models (LLMs) to create a natural language interface accessible to both humans and AI agents. The project is technology-agnostic, supporting major operating systems beyond Windows, including Android, iOS, and Linux. Microsoft aims to bring the benefits of generative AI search directly to every website and is building NLWeb with an eye toward future AI agents.
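As a sketch of the kind of semistructured data NLWeb builds on, the snippet below pulls Schema.org JSON-LD blocks out of a page’s HTML so they could be handed to an LLM. The regex-based extraction and the sample page are illustrative only, not NLWeb’s actual implementation.

```python
# Extract Schema.org JSON-LD objects embedded in a page's HTML.
# Regex-based for brevity; a production parser would use an HTML library.
import json
import re

JSONLD_RE = re.compile(
    r'<script[^>]*type="application/ld\+json"[^>]*>(.*?)</script>',
    re.DOTALL | re.IGNORECASE,
)

def extract_schema_org(html: str) -> list:
    """Return every Schema.org JSON-LD object found in the page."""
    return [json.loads(block) for block in JSONLD_RE.findall(html)]

# Sample page with one embedded Product object (hypothetical content)
page = '''<html><head>
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "Product", "name": "Widget"}
</script></head></html>'''

items = extract_schema_org(page)
```

Structured objects like these, plus RSS entries, give a natural-language layer something machine-readable to ground its answers in, which is the core idea behind NLWeb’s use of data sites already publish.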
ServiceNow’s new AI Control Tower lets AI systems administrators and other AI stakeholders monitor and manage every AI agent, model or workflow in their system
ServiceNow’s new AI Control Tower offers a holistic view of the entire AI ecosystem. AI Control Tower acts as a “command center” to help enterprise customers govern and manage all their AI workflows, including agents and models. The AI Control Tower lets AI systems administrators and other AI stakeholders monitor and manage every AI agent, model, or workflow in their system, even third-party agents. It also provides end-to-end lifecycle management, real-time reporting for different metrics, and embedded compliance and AI governance. The idea behind AI Control Tower is to give users a central location to see all of the AI in the enterprise. “I can go to a single place to see all the AI systems, how many were onboarded or are currently deployed, which ones are an AI agent or classic machine learning,” said Dorit Zilbershot, ServiceNow’s Group Vice President of AI Experiences and Innovation. “I could be managing these in a single place, making sure that I have full governance and understanding of what’s going on across my enterprise.” She added that the platform helps users “really drill down to understand the different systems by the provider and by type” to understand risk and compliance better. The company’s agent library allows customers to choose the agent that best fits their workflows, and it has built-in orchestration features to help manage agent actions. ServiceNow also unveiled its AI Agent Fabric, a way for its agents to communicate with other agents or tools. Zilbershot said ServiceNow will still support other protocols and will continue working with other companies to develop standards for agentic communication.