Lumen and IBM announced a new collaboration to develop enterprise-grade AI solutions at the edge—integrating watsonx, IBM’s portfolio of AI products, with Lumen’s Edge Cloud infrastructure and network. The new AI inferencing solutions, optimized for the edge, will deploy IBM watsonx technology in Lumen’s edge data centers and leverage Lumen’s multi-cloud architecture, enabling clients across financial services, healthcare, manufacturing, and retail to analyze massive volumes of data in near real time with minimal latency. This will allow enterprises to develop and deploy AI models closer to the point of data generation, facilitating smarter decision-making, maintaining data control and security, and accelerating AI innovation. Lumen’s edge network offers <5ms latency and direct connectivity to major cloud providers and enterprise locations. When paired with IBM watsonx, the infrastructure has the potential to enable real-time AI processing, which can help mitigate the costs and risks associated with public cloud dependence. IBM Consulting will act as the preferred systems integrator, drawing on its deep technology, domain, and industry expertise to help clients scale deployments, reduce costs, and fully leverage AI capabilities. The collaboration aims to solve contemporary business challenges by turning AI potential into practical, high-impact outcomes at the edge. For enterprise businesses, this can mean faster insights, lower operational costs, and a smarter path to digital innovation. Ryan Asdourian, Chief Marketing and Strategy Officer at Lumen, said: “By combining IBM’s AI innovation with Lumen’s powerful network edge, we’re making it easier for businesses to tap into real-time intelligence wherever their data lives, accelerate innovation, and deliver smarter, faster customer experiences.”
Unblocked is an AI-powered assistant that answers contextual questions about lines of code and helps developers find the person who made changes to a particular module
Unblocked is an AI-powered assistant that answers contextual questions about lines of code. Unblocked integrates with development environments and apps like Slack, Jira, Confluence, Google Drive, and Notion. The tool gathers intelligence about a company’s codebase and helps answer questions such as “Where do we define user metrics in our system?” Developers can also use the platform to search for the person who made changes to a particular module and quickly gain insights from them. Unblocked offers admin controls that a company’s system administrators can easily adopt, and the startup is working on integrating with platforms like Cursor and Lovable to improve code explainability. Beyond this, Unblocked is developing tools that actively help developers with projects rather than simply answer questions. One, Autonomous CI Triage, supports developers in testing code through different scenarios. Unblocked counts companies such as Drata, AppDirect, Big Cartel, and TravelPerk as customers. Founder and CEO Dennis Pilarinos claims that engineers at Drata were able to save one to two hours per week using Unblocked’s platform.
Iterate.ai offers an on-premises AI appliance that delivers complete control, privacy, and enterprise-grade AI performance without relying on the cloud
Iterate.ai and ASA Computers have launched AIcurate, a turnkey, on-premises AI appliance that delivers complete control, privacy, and enterprise-grade AI performance without relying on the cloud. Built on Iterate.ai’s Generate platform and deployed on Dell PowerEdge servers, AIcurate empowers enterprises to run LLMs and AI workloads securely within their own infrastructure. The system supports integration with popular business tools, is vendor-agnostic, and is optimized for performance-intensive applications such as document analysis, internal search, and workflow automation. Unlike public AI platforms, AIcurate enables secure deployment of powerful LLMs from OpenAI, Google (PaLM 2), Meta (Llama), Mistral, and Microsoft, all without sending data to the cloud. Businesses can build custom AI workflows while ensuring compliance with internal policies and industry regulations. “This collaboration makes advanced AI more accessible for organizations that can’t compromise on data control,” said Ruban Kanapathippillai, SVP of Systems and Solutions at ASA Computers. “AIcurate puts enterprise-grade AI directly into customers’ data centers, giving them full control while supporting the flexible and secure architecture that modern IT teams demand.” Capabilities included in AIcurate: secure on-prem deployment; enterprise tool integration; support for leading LLMs; vendor-agnostic architecture; advanced document processing; role-based access control; and workflow automation with agentic AI.
ServiceNow’s new AI Control Tower lets AI systems administrators and other AI stakeholders monitor and manage every AI agent, model or workflow in their system
ServiceNow’s new AI Control Tower offers a holistic view of the entire AI ecosystem, acting as a “command center” that helps enterprise customers govern and manage all their AI workflows, including agents and models. The AI Control Tower lets AI systems administrators and other AI stakeholders monitor and manage every AI agent, model, or workflow in their system — even third-party agents. It also provides end-to-end lifecycle management, real-time reporting across different metrics, and embedded compliance and AI governance. The idea behind AI Control Tower is to give users a central place to see all of the AI in the enterprise. “I can go to a single place to see all the AI systems, how many were onboarded or are currently deployed, which ones are an AI agent or classic machine learning,” said Dorit Zilbershot, ServiceNow’s Group Vice President of AI Experiences and Innovation. “I could be managing these in a single place, making sure that I have full governance and understanding of what’s going on across my enterprise.” She added that the platform helps users “really drill down to understand the different systems by the provider and by type” to better understand risk and compliance. The company’s agent library allows customers to choose the agent that best fits their workflows, and it has built-in orchestration features to help manage agent actions. ServiceNow also unveiled AI Agent Fabric, a way for its agents to communicate with other agents or tools. Zilbershot said ServiceNow will still support other protocols and will continue working with other companies to develop standards for agentic communication.
Dremio launches new MCP Server integrating with leading AI models like Claude, enabling agents to seamlessly discover and query data with contextual understanding
Dremio has launched the Dremio MCP Server, a solution that brings AI-native data discovery and query capabilities to the lakehouse. By adopting the open Model Context Protocol (MCP), Dremio enables AI agents to dynamically explore datasets, generate queries, and retrieve governed data in real time. Through MCP, Dremio natively integrates with leading AI models like Claude, enabling agents to seamlessly discover and query data with contextual understanding. Claude-powered agents can dynamically interpret user intent, invoke Dremio’s tools, and deliver trusted, real-time insights—all without manual integrations. “The Model Context Protocol is a critical advancement that allows AI systems like Claude to seamlessly interact with enterprise data systems,” said Mahesh Murag, Product Manager at Anthropic. “Dremio’s implementation of MCP enables Claude to extend its reasoning capabilities directly to an organization’s data assets, unlocking new possibilities for AI-powered insights while maintaining enterprise governance.” Powered by Dremio’s semantic layer, which provides a unified, governed view across all data sources, the Dremio MCP Server gives AI agents seamless access to Dremio’s full data environment. This enables agents to: discover datasets and metadata without manual integrations; translate natural language into SQL queries and execute them directly; and automate workflows like reporting, customer segmentation, and operational analysis.
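Under the hood, MCP standardizes how an agent discovers a server’s tools and invokes them over a session. The minimal sketch below uses the open-source MCP Python SDK; the server command (“dremio-mcp”) and tool name (“run_sql”) are hypothetical placeholders, not Dremio’s documented interface.

```python
# Minimal MCP client sketch using the open-source Python SDK.
# The server command ("dremio-mcp") and tool name ("run_sql") are
# hypothetical placeholders, not Dremio's documented interface.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server = StdioServerParameters(command="dremio-mcp", args=["--config", "dremio.yaml"])

async def main():
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # An agent first discovers what the server can do...
            tools = await session.list_tools()
            print([t.name for t in tools.tools])
            # ...then invokes a tool, e.g. executing SQL it generated
            # from a natural-language request.
            result = await session.call_tool(
                "run_sql",
                {"query": "SELECT region, SUM(amount) FROM sales GROUP BY region"},
            )
            print(result.content)

asyncio.run(main())
```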
CodeAnt AI’s platform plugs into developer platforms, reviews the code, gives instant feedback across 30+ programming languages and suggests fixes that developers can apply with a single click
AI might be great at helping engineers write code, but it’s creating a new problem – all that code still needs to be reviewed by humans. CodeAnt AI is stepping in with a solution that uses AI to tackle the review process itself. CodeAnt AI’s platform plugs right into GitHub, GitLab, Bitbucket, and Azure DevOps, giving developers instant feedback on their code across more than 30 programming languages. More impressively, it doesn’t just find problems – it suggests fixes that developers can apply with a single click, turning reviews that used to take hours into quick, five-minute sessions. For companies racing to get products out the door, this means fewer delays and higher-quality code. It also means cost savings – fixing problems during code review costs roughly 10x less than fixing them later in CI/CD or after production deployment. What makes CodeAnt AI different is the technology under the hood. The company built a proprietary language-agnostic AST engine that understands how different parts of a codebase connect, letting it spot issues that isolated, file-by-file reviews would miss. The platform also pulls in data from major security databases and lets companies set up their own rules based on their specific needs. For security-conscious organizations, CodeAnt AI can run entirely within their own infrastructure, ensuring code never leaves their environment. The company says it has helped enterprises reduce manual code review time by over 50%.
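CodeAnt’s engine is proprietary, but the core idea of AST-based review, reasoning over a parsed syntax tree instead of raw diff lines, is easy to illustrate. The toy check below uses Python’s built-in ast module (an illustration of the approach, not CodeAnt’s engine) to flag a classic bug and propose a concrete fix, the shape of a one-click suggestion:

```python
# Toy AST-based review check using Python's built-in ast module
# (an illustration of the approach, not CodeAnt's actual engine).
import ast

SOURCE = """
def add_item(item, bucket=[]):
    bucket.append(item)
    return bucket
"""

tree = ast.parse(SOURCE)
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef):
        for default in node.args.defaults:
            # Mutable default arguments persist across calls -- a classic
            # bug that line-oriented diff review easily misses.
            if isinstance(default, (ast.List, ast.Dict, ast.Set)):
                print(
                    f"line {node.lineno}: '{node.name}' uses a mutable default; "
                    "suggested fix: default to None and initialize in the body"
                )
```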
Mistral’s platform enables enterprises to build AI agents tailored to their operations and gain full control over the AI stack—from infrastructure and platform features to model-level customization and user interfaces—without vendor lock-in
AI startup Mistral unveiled Le Chat Enterprise, a unified AI assistant platform designed for enterprise-scale productivity and privacy, powered by its new Medium 3 model, which outperforms larger models at a fraction of the cost (here, “larger” refers to parameter count, which typically signals more capability but also demands more compute resources, such as GPUs, to run). Available on the web and via mobile apps, Le Chat Enterprise is a ChatGPT competitor built specifically for enterprises and their employees, taking into account the fact that they’ll likely be working across a suite of different applications and data sources. It’s designed to consolidate AI functionality into a single, privacy-first environment that enables deep customization, cross-functional workflows, and rapid deployment. Among its key features that will be of interest to business owners and technical decision makers: enterprise search across private data sources; document libraries with auto-summary and citation capabilities; custom connectors and agent builders for no-code task automation; custom model integrations and memory-based personalization; and hybrid deployment options with support for public cloud, private VPCs, and on-prem hosting. Le Chat Enterprise supports seamless integration into existing tools and workflows. Companies can build AI agents tailored to their operations and maintain full sovereignty over deployment and data—without vendor lock-in. The platform’s privacy architecture adheres to strict access controls and supports full audit logging, ensuring data governance for regulated industries. Enterprises also gain full control over the AI stack—from infrastructure and platform features to model-level customization and user interfaces. Mistral’s new Le Chat Enterprise offering could appeal to enterprises with stricter security and data storage policies (especially medium-to-large and legacy businesses). Mistral Medium 3 introduces a new performance tier in the company’s model lineup, positioned between lightweight and large-scale models. Designed for enterprise use, the model delivers more than 90% of the benchmark performance of Claude 3.7 Sonnet at roughly one-eighth the cost—$0.40 per million input tokens and $2 per million output tokens, compared to Sonnet’s $3/$15 for input/output. Benchmarks show that Mistral Medium 3 is particularly strong in software development tasks. In coding tests like HumanEval and MultiPL-E, it matches or surpasses both Claude 3.7 Sonnet and OpenAI’s GPT-4o models. According to third-party human evaluations, it outperforms Llama 4 Maverick in 82% of coding scenarios and exceeds Command-A in nearly 70% of cases. Mistral Medium 3 is optimized for enterprise integration. It supports hybrid and on-premises deployment, offers custom post-training, and connects easily to business systems.
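The one-eighth figure is easy to sanity-check against the quoted per-token prices. A quick back-of-the-envelope calculation in Python, using only the numbers above (the 80/20 input/output workload split is an arbitrary example):

```python
# Back-of-the-envelope cost comparison using the prices quoted above
# ($ per million tokens); the 80/20 workload split is an arbitrary example.
PRICES = {
    "mistral-medium-3":  {"input": 0.40, "output": 2.00},
    "claude-3.7-sonnet": {"input": 3.00, "output": 15.00},
}

def cost(model: str, input_tokens: int, output_tokens: int) -> float:
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

workload = (800_000, 200_000)  # 800K input tokens, 200K output tokens
for model in PRICES:
    print(f"{model}: ${cost(model, *workload):.2f}")
# mistral-medium-3: $0.72 vs claude-3.7-sonnet: $5.40 -- about 7.5x cheaper,
# consistent with the roughly one-eighth figure.
```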
Claude’s web search API allows the AI assistant to conduct multiple progressive searches, using earlier results to inform subsequent queries, complete with source citations
Anthropic has introduced a web search capability for its Claude AI assistant, intensifying competition in the rapidly evolving AI search market where tech giants are racing to redefine how users find information online. The company announced that developers can now enable Claude to access current web information through its API, allowing the AI assistant to conduct multiple progressive searches to compile comprehensive answers complete with source citations. Anthropic’s technical approach represents a significant advance in how AI systems can be deployed as information-gathering tools. The system employs a sophisticated decision-making layer that determines when external information would improve response quality, generating targeted search queries rather than simply passing user questions verbatim to a search backend. This “agentic” capability — allowing Claude to conduct multiple progressive searches, using earlier results to inform subsequent queries — enables a more thorough research process than traditional search. The implementation essentially mimics how a human researcher might explore a topic, starting with general queries and progressively refining them based on initial findings. Anthropic’s web search API represents more than just another feature in the AI toolkit — it signals the evolution of internet information access toward a more integrated, conversation-based model. The new capability arrives amid signs that traditional search is losing ground to AI-powered alternatives. With Safari searches reportedly declining for the first time ever, we’re witnessing early indicators of a mass shift in consumer behavior. Traditional search engines optimized for advertising revenue are increasingly being bypassed in favor of conversation-based interactions that prioritize information quality over commercial interests.
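In practice, developers opt in per request by attaching the server-side search tool in the Messages API. A minimal sketch with Anthropic’s Python SDK follows; the tool type string and max_uses cap reflect the announced interface and may evolve:

```python
# Minimal sketch: enabling web search as a server-side tool in the
# Messages API (tool type string and fields as announced; may change).
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-7-sonnet-latest",
    max_tokens=1024,
    tools=[{
        "type": "web_search_20250305",
        "name": "web_search",
        "max_uses": 5,  # cap on progressive searches per request
    }],
    messages=[{"role": "user", "content": "Summarize this week's EU AI Act news."}],
)

# The response interleaves text blocks with search-result blocks; cited
# passages carry their source URLs in citation metadata.
for block in response.content:
    print(block)
```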
Neo4j’s serverless solution enables users of all skill levels to access graph analytics without the need for custom queries, ETL pipelines, or specialized graph expertise and can be used seamlessly with any data source
Neo4j has launched Neo4j Aura Graph Analytics, a new serverless offering that for the first time can be used seamlessly with any data source, with zero ETL (extract, transform, load). The solution delivers the power of graph analytics to users of all skill levels, unlocking deeper intelligence and achieving 2X greater insight precision and quality over traditional analytics. The new Neo4j offering makes graph analytics capabilities accessible to everyone and eliminates adoption barriers by removing the need for custom queries, ETL pipelines, or specialized graph expertise – so that business decision-makers, data scientists, and other users can focus on outcomes, not overhead. Neo4j Aura Graph Analytics requires no infrastructure setup and no prior experience with graph technology or the Cypher query language. Users seamlessly deploy and scale graph analytics workloads end-to-end, enabling them to collect, organize, analyze, and visualize data. The offering includes the industry’s largest selection of 65+ ready-to-use graph algorithms and is optimized for high-performance applications and parallel workflows. Users pay only for the processing power and storage they consume. Additional benefits and capabilities below are based on customer-reported outcomes that reflect real-world performance gains: 1) up to 80% model accuracy, leading to 2X greater efficacy of insights that go beyond the limits of traditional analytics; 2) insights achieved twice as fast as open-source alternatives, with parallelized in-memory processing of graph algorithms; 3) 75% less code and zero ETL; 4) no administration overhead and lower total cost of ownership.
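For a sense of the no-query-language workflow, here is a minimal sketch using Neo4j’s graphdatascience Python client; the connection details are placeholders, and the new serverless offering’s session setup may differ from this classic GDS pattern:

```python
# Minimal sketch of running a graph algorithm through the graphdatascience
# Python client -- no hand-written Cypher. Connection details are
# placeholders; the serverless Aura setup may differ from this pattern.
from graphdatascience import GraphDataScience

gds = GraphDataScience("neo4j+s://<your-instance>", auth=("neo4j", "<password>"))

# Project an in-memory graph, then run one of the ready-to-use algorithms.
G, _ = gds.graph.project("customers", "Customer", "REFERRED")
scores = gds.pageRank.stream(G)  # returns a pandas DataFrame (nodeId, score)
print(scores.sort_values("score", ascending=False).head(10))

G.drop()  # release the in-memory projection when done
```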
ChatGPT’s deep research tool gets a GitHub connector allowing developers to ask questions about a codebase and engineering documents
OpenAI announced what it’s calling the first “connector” for ChatGPT deep research, the company’s tool that searches across the web and other sources to compile thorough research reports on a topic. Now, ChatGPT deep research can link to GitHub (in beta), allowing developers to ask questions about a codebase and engineering documents. The connector will be available for ChatGPT Plus, Pro, and Team users over the next few days, with Enterprise and Edu support coming soon. The GitHub connector for ChatGPT deep research arrives as AI companies look to make their AI-powered chatbots more useful by building ways to link them to outside platforms and services. Anthropic, for example, recently debuted Integrations, which gives apps a pipeline into its AI chatbot Claude. In addition to answering questions about codebases, the new ChatGPT deep research GitHub connector lets ChatGPT users break down product specs into technical tasks and dependencies, summarize code structure and patterns, and understand how to implement new APIs using real code examples. The company also launched fine-tuning options for developers looking to customize its newer models for particular applications. Devs can now fine-tune OpenAI’s o4-mini “reasoning” model via a technique OpenAI calls reinforcement fine-tuning, which uses task-specific grading to improve the model’s performance. Fine-tuning has also rolled out for the company’s GPT-4.1 nano model.
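Reinforcement fine-tuning pairs training prompts with a grader that scores sampled outputs, and the model is optimized against those scores. Below is a hedged sketch of submitting such a job with the OpenAI Python SDK; the method and grader field names follow our reading of OpenAI’s fine-tuning documentation, and the training file ID is a placeholder:

```python
# Sketch of creating a reinforcement fine-tuning job for o4-mini.
# The grader shown is a simple string-check grader; field names follow
# our reading of the docs, and the file ID is a placeholder.
from openai import OpenAI

client = OpenAI()

job = client.fine_tuning.jobs.create(
    model="o4-mini-2025-04-16",
    training_file="file-abc123",  # placeholder: JSONL of prompts + reference answers
    method={
        "type": "reinforcement",
        "reinforcement": {
            "grader": {
                "type": "string_check",
                "name": "exact_match",
                "input": "{{sample.output_text}}",  # model's sampled answer
                "reference": "{{item.answer}}",     # reference from the dataset
                "operation": "eq",                  # score 1 if equal, else 0
            },
        },
    },
)
print(job.id, job.status)
```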