SurrealDB has announced the launch of SurrealMCP, the official Model Context Protocol (MCP) server for SurrealDB and SurrealDB Cloud. SurrealMCP gives AI assistants, AI agents, IDEs, chatbots, and data platforms the ability to securely store, recall, and reason over live structured data – giving them the persistent, permission-aware memory they’ve been missing. Built on the open Model Context Protocol standard (modelcontextprotocol.io), SurrealMCP connects any MCP-compatible client to SurrealDB with full portability and interoperability across the AI ecosystem. The result is a secure, real-time memory layer for agents, backed by SurrealDB’s multi-model engine. With SurrealMCP, agents can: Remember and recall events, facts, and conversations over time; Query and update live data with role-based access controls; Link vectors, graphs, and documents to create deep contextual understanding; Perform administrative tasks like creating schemas or seeding data, all through natural language. Example use cases: Agent Memory: “Store this chat and recall anything about shipping delays.” SurrealMCP stores the conversation as vectors, links related data in graph form, and makes it time-travel queryable. Business Intelligence: “Recall customers in the top ten percent by lifetime value.” SurrealMCP translates the request into optimized queries, respecting all access policies. Operational Automation: “Create a dev namespace in Europe, apply the schema, seed sample data.” SurrealMCP executes instantly, with no dashboards and no manual scripts. Enterprise Co-pilots: Power contextual CRM insights, real-time inventory tracking, or customer support histories.
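Because MCP is an open JSON-RPC 2.0 protocol, the wire format of a client session is easy to illustrate. The sketch below shows the three message shapes any MCP-compatible client would exchange with a server such as SurrealMCP; the tool name ("query") and its arguments are hypothetical stand-ins, since a client learns the real tool names from the server's tools/list response.

```python
import json

# Minimal sketch of the JSON-RPC 2.0 messages an MCP client exchanges with
# an MCP server such as SurrealMCP. The "query" tool and its arguments are
# hypothetical; real tool names come from the server's tools/list response.

# 1. Handshake: the client announces its protocol version and capabilities.
initialize = {
    "jsonrpc": "2.0", "id": 1, "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "example-agent", "version": "0.1"},
    },
}

# 2. Discovery: ask the server which tools it exposes.
list_tools = {"jsonrpc": "2.0", "id": 2, "method": "tools/list"}

# 3. Invocation: call a (hypothetical) query tool with a SurrealQL statement.
call_tool = {
    "jsonrpc": "2.0", "id": 3, "method": "tools/call",
    "params": {
        "name": "query",
        "arguments": {"surql": "SELECT * FROM conversation WHERE topic = 'shipping delays'"},
    },
}

for msg in (initialize, list_tools, call_tool):
    print(json.dumps(msg))  # over a stdio transport, each message is one line of JSON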
Edgen rolls out multi‑agent stock intelligence that generates sub‑second market reports, AI ratings, and price forecasts via its EDGM model, enabling rapid, executable equity decisions at scale
Edgen, an AI-driven market intelligence platform, announced a series of major platform upgrades, centered on the launch of AI-generated stock picks, stock ratings, and stock price forecasts. The new AI stock picks feature draws on Edgen’s multi-agent system to identify opportunities across equities with speed and precision. Users can now see which stocks surface as high-potential investments, rated and ranked by AI across multiple dimensions. Stock ratings distill performance into a transparent scoring framework, providing both institutional and retail investors with a quick way to differentiate between stronger and weaker companies. This rating system, combined with stock price forecasts, enables investors to anticipate potential moves rather than react after the fact. The outcome is sharper, faster decision-making, where signals come directly from AI agents trained to scan, assess, and act at scale. Alongside these initiatives, the company is rolling out a new Market Report system and advancing its proprietary model, EDGM, bringing unprecedented speed and depth to investment research. Edgen’s new Market Report delivers professional-grade research in under a second. The platform provides structured analysis that consolidates financial data, market momentum, and forward-looking scenarios into a single, easy-to-read report, enabling confident investment decisions at speed. This capability is powered by EDGM, Edgen’s private model, now upgraded to deliver results almost instantly. What once required hours of manual research, cross-checking analyst notes, and piecing together market commentary can now be compressed into a few seconds of AI-powered insight. Edgen’s multi-agent architecture introduces a dynamic layer of discovery, exploration, recommendation, and rating. Each agent operates with a specialized focus, such as analyzing technical signals, identifying market trends, or flagging undercovered opportunities, before converging on insights to provide a unified view for the user.
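As a purely illustrative sketch (not Edgen's actual implementation), the multi-agent convergence pattern described above can be reduced to a few lines: each specialized agent scores a ticker from one angle, and a coordinator merges the signals into a single rating with supporting evidence.

```python
from dataclasses import dataclass
from statistics import mean

# Illustrative sketch of the multi-agent pattern described above, not
# Edgen's actual system: each "agent" scores a ticker from one specialized
# angle, and a coordinator converges the signals into a unified rating.

@dataclass
class Signal:
    agent: str
    score: float  # normalized to [0, 1]
    note: str

def technical_agent(ticker: str) -> Signal:
    # Stand-in for real technical analysis (momentum, moving averages, ...).
    return Signal("technical", 0.72, f"{ticker}: price above 50-day average")

def trend_agent(ticker: str) -> Signal:
    return Signal("trend", 0.65, f"{ticker}: sector momentum positive")

def coverage_agent(ticker: str) -> Signal:
    return Signal("coverage", 0.80, f"{ticker}: thin analyst coverage, upside if rerated")

def converge(ticker: str) -> dict:
    signals = [agent(ticker) for agent in (technical_agent, trend_agent, coverage_agent)]
    return {
        "ticker": ticker,
        "rating": round(mean(s.score for s in signals), 2),
        "evidence": [f"{s.agent}: {s.note}" for s in signals],
    }

print(converge("EXMP"))
```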
Nvidia details “speculative decoding,” which uses a second, smaller model to guess what the main model will output for a given prompt in an attempt to speed it up
Nvidia announced artificial intelligence software and networking innovations aimed at accelerating AI infrastructure and model deployment. It unveiled Spectrum-XGS, or “giga-scale,” for its Spectrum-X Ethernet switching platform designed for AI workloads. Spectrum-X connects entire clusters within the data center, allowing massive datasets to stream across AI models. Spectrum-XGS extends this by providing orchestration and interconnection between data centers. “We’re introducing this new term, ‘scale across,’” said Dave Salvator, director of accelerated computing products at Nvidia. “These switches are basically purpose built to enable multi-site scale with different data centers able to communicate with each other and essentially act as one gigantic GPU.” Salvator said the system minimizes jitter and latency, the variability in packet arrival times and the delay between sending data and receiving a response. Dynamo is Nvidia’s inference-serving framework, the layer through which deployed models process requests. Nvidia is also researching “speculative decoding,” which uses a second, smaller model to guess what the main model will output for a given prompt in an attempt to speed it up. “The way that this works is you have what’s called a draft model, which is a smaller model which attempts to sort of essentially generate potential next tokens,” said Salvator. Because the smaller model is faster but less accurate, it can generate multiple guesses for the main model to verify. “And we’ve already seen about a 35% performance gain using these techniques.” According to Salvator, the main AI model does verification in parallel against its learned probability distribution. Only accepted tokens are committed, so rejected tokens are discarded. This keeps latency under 200 milliseconds, which he described as “snappy and interactive.”
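The mechanism Salvator describes can be captured in a toy loop. In the sketch below, both “models” are deterministic stand-ins over a tiny vocabulary; the point is the control flow: the draft model cheaply proposes k tokens, the target model verifies them (in a real system, in one batched forward pass), the longest agreeing prefix is committed, and the target's own token is substituted at the first disagreement.

```python
# Toy sketch of the speculative-decoding loop described above. Both models
# are deterministic stand-ins over a tiny vocabulary; only the control flow
# mirrors the real technique.

VOCAB = ["the", "cat", "sat", "on", "a", "mat", "."]

def draft_next(context):
    # Fast, cheap stand-in for the small draft model.
    return VOCAB[len(context) % len(VOCAB)]

def target_next(context):
    # Authoritative stand-in for the large model; it disagrees with the
    # draft at every third position so the example shows accept and reject.
    i = len(context)
    return VOCAB[(i + (1 if i % 3 == 0 else 0)) % len(VOCAB)]

def speculative_step(context, k=4):
    # 1. Draft model proposes k candidate tokens autoregressively.
    ctx = list(context)
    draft = []
    for _ in range(k):
        tok = draft_next(ctx)
        draft.append(tok)
        ctx.append(tok)

    # 2. Target model checks each position (one batched pass in a real
    #    system); the longest agreeing prefix is accepted.
    accepted = []
    ctx = list(context)
    for tok in draft:
        if target_next(ctx) == tok:
            accepted.append(tok)
            ctx.append(tok)
        else:
            # 3. On the first disagreement, commit the target's own token,
            #    so every step still advances by at least one token.
            accepted.append(target_next(ctx))
            break
    return accepted

context = ["the"]
for _ in range(5):
    context += speculative_step(context)
print(" ".join(context))
```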
Testaify’s autonomous platform can begin testing in 5 min; its self‑contained AI swarm discovers apps, generates human‑readable tests, adapts across releases, and delivers expert‑level QA without scripts or consultants
Testaify, the AI-native platform for autonomous software testing, is opening access beyond its initial waitlist, marking another significant milestone in its managed rollout. This next phase comes in response to growing demand and strong early results from early adopters. Since launch, Testaify has helped engineering teams accelerate their testing efforts by discovering applications, generating intelligent tests, and delivering actionable findings—without requiring scripts, training, or complex configuration. Testaify provides a fully self-contained, truly autonomous solution that can begin testing in under five minutes. Teams using Testaify are seeing improvements in coverage, speed, and defect detection. Key Capabilities: Autonomous application discovery—no training or manual intervention required; Intelligent test generation using advanced software testing techniques; Human-readable test steps and visual replays for faster defect triage; Continuous adaptation to product changes across builds and releases. COO and Co-Founder Rafael E. Santos explains, “Testaify’s intelligent testing swarm delivers thorough, high-quality coverage that traditionally requires large QA teams.” Sigma Solve cofounder Prerak Parikh, a Testaify investor turned customer, says, “Teaming up with Testaify offers us a head start in using AI to improve product quality through testing as we deliver enterprise-level development services to our customers. Our investment in Testaify aligns with our business objectives because it’ll help us improve the quality of our work, boost our margins, and grow our revenue streams as we develop a reseller relationship with Testaify over time.”
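Testaify has not published its internals, but the idea of autonomous discovery paired with human-readable steps can be sketched generically: breadth-first exploration of an app's navigation graph, emitting a plain-English step for each page and link found. In the sketch below, a canned site map stands in for a real crawler so the example runs as-is; none of this reflects Testaify's actual engine.

```python
from collections import deque

# Illustrative sketch of autonomous discovery plus human-readable test
# steps (not Testaify's engine). A canned site map replaces a real crawler
# so the example is self-contained.

SITE = {
    "/": ["/login", "/products"],
    "/login": ["/dashboard"],
    "/products": ["/products/42"],
    "/dashboard": [],
    "/products/42": [],
}

def discover_and_generate(start: str = "/") -> list[str]:
    steps, seen, queue = [], {start}, deque([start])
    while queue:
        page = queue.popleft()
        steps.append(f"Open {page} and verify the page renders without errors")
        for link in SITE[page]:
            steps.append(f"On {page}, click the link to {link}")
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return steps

for i, step in enumerate(discover_and_generate(), 1):
    print(f"{i}. {step}")
```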
XerpaAI “growth helper” finds and scales real audiences for new Web3 projects via vetted influencers (KOLs), a verifiable influence score, and a create-distribute-repeat autonomous roadmap
XerpaAI made its official debut at WebX Tokyo, Asia’s largest Web3 conference, unveiling the world’s first AI Growth Agent (AGA) — an intelligent, end-to-end solution built to help emerging projects accelerate growth with speed, precision, and scale. Through a vetted network of KOLs and community leaders, XerpaAI ensures authentic reach. Its proprietary Xerpa Index transforms fragmented influence metrics into a unified, verifiable score — giving projects the clarity and confidence to make smarter, data-driven growth decisions. The company has outlined a bold roadmap that includes Creative Labs 2.0, autonomous AI-operated social accounts, and expanded multi-channel growth initiatives. These innovations will provide emerging businesses with new tools to boost user acquisition, strengthen market presence, and achieve sustainable scaling.
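XerpaAI has not disclosed the Xerpa Index methodology, but a unified influence score of this kind typically blends reach, engagement, and audience authenticity into one normalized number. The inputs, weights, and formula below are invented purely for illustration and are not XerpaAI's actual model.

```python
from math import log10

# Purely illustrative sketch of a unified influence score in the spirit of
# the Xerpa Index; the inputs, weights, and formula are invented for
# illustration and are not XerpaAI's methodology.

def influence_index(followers: int, engagement_rate: float, authentic_ratio: float) -> float:
    """Blend reach, engagement, and audience authenticity into one 0-100 score."""
    reach = min(log10(max(followers, 1)) / 7, 1.0)   # saturates around 10M followers
    engagement = min(engagement_rate / 0.10, 1.0)    # 10%+ engagement maxes out
    score = 100 * (0.4 * reach + 0.4 * engagement + 0.2 * authentic_ratio)
    return round(score, 1)

print(influence_index(followers=250_000, engagement_rate=0.045, authentic_ratio=0.9))
```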
CodeSightAI launches an AI code review platform that integrates with GitHub to cut review time by 60%, detect 90% of security issues, and automate smart fixes
CodeSightAI launched its AI-powered code review platform designed to help development teams deliver high-quality software faster. The platform seamlessly integrates with GitHub to provide intelligent code analysis, real-time collaboration, and comprehensive security scanning. The new platform addresses critical inefficiencies in traditional code review processes that cost the global software industry billions annually. By leveraging advanced AI algorithms, CodeSightAI enables development teams to reduce review time by up to 60% while catching 90% of security issues before deployment. CodeSightAI’s comprehensive feature set includes AI-powered code analysis that detects bugs, security vulnerabilities, and performance issues in real-time. The platform provides smart fix recommendations with automated application capabilities and supports multiple programming languages with pattern-based security scanning. The AI-powered code review platform offers seamless GitHub integration through one-click OAuth authentication, automated pull request analysis, and real-time synchronization with repository changes. Development teams benefit from live collaboration features including real-time cursors, code comments, and team performance analytics. Key capabilities of the platform include: Advanced AI algorithms for bug and vulnerability detection; Real-time code quality assessment with detailed suggestions; Comprehensive security scanning with Row-Level Security; Team collaboration hub with activity feeds and performance metrics; Smart analytics dashboard tracking pull request metrics and quality improvements; Flexible billing and subscription management through Stripe integration.
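CodeSightAI's code is not public, but the GitHub-integration pattern it describes, automated pull request analysis triggered by repository changes, generically looks like the sketch below: a webhook endpoint verifies GitHub's HMAC signature, filters for pull_request events, and hands the PR off to an analysis engine. The enqueue_analysis hand-off is a hypothetical placeholder, not a CodeSightAI API.

```python
import hashlib, hmac, os
from flask import Flask, request, abort

# Generic sketch of GitHub webhook handling for automated PR analysis (not
# CodeSightAI's actual code): verify the signature, filter pull_request
# events, and queue the PR for review.

app = Flask(__name__)
WEBHOOK_SECRET = os.environ.get("WEBHOOK_SECRET", "dev-secret").encode()

def verify_signature(payload: bytes, signature: str) -> bool:
    # GitHub signs the raw request body with HMAC-SHA256 using the secret
    # configured on the webhook, sent in the X-Hub-Signature-256 header.
    expected = "sha256=" + hmac.new(WEBHOOK_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature or "")

@app.post("/webhooks/github")
def on_github_event():
    if not verify_signature(request.get_data(), request.headers.get("X-Hub-Signature-256")):
        abort(401)
    if request.headers.get("X-GitHub-Event") == "pull_request":
        event = request.get_json()
        if event["action"] in ("opened", "synchronize"):  # new PR or new commits pushed
            repo = event["repository"]["full_name"]
            number = event["pull_request"]["number"]
            enqueue_analysis(repo, number)  # hypothetical hand-off to the review engine
    return "", 204

def enqueue_analysis(repo: str, pr_number: int) -> None:
    print(f"queued analysis for {repo}#{pr_number}")

if __name__ == "__main__":
    app.run(port=8000)
```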
Juniper rolls out self‑driving agentic networking featuring no‑touch fixes, natural-language AI queries, and video-issue prediction with a large experience model
Juniper Networks Inc.’s Mist platform was purpose-built with AI in mind, leveraging automation and insight to optimize user experiences. Built into Mist is the Marvis AI engine and Assistant, which uses high-quality data, advanced AI and machine learning data science, and a conversational interface to simplify deployment and troubleshooting. Now under Hewlett Packard Enterprise Co., Mist has been brought together with Aruba Networks to form what the company calls the “secure AI-native network,” a blend of leading AIOps, product breadth and security to solve real customer and partner needs. Ultimately the company has a vision of using the platform to bring all HPE Networking products under a common cloud management and AI engine with centralized operations. “One thing that we added is the ability to choose specific areas for self-driving mode that don’t require human intervention,” said Jeff Aaron, vice president of product and solution marketing at HPE. “If a switch port is stuck or an AP is running non-compliant software, for example, you can tell Marvis to go fix it on its own. We provide reporting to show which features were fixed autonomously, how they were fixed, and why the decision was made so IT still has complete visibility into what is happening.” In addition, Marvis got a back-end upgrade, leveraging more generative AI capabilities and agentic workflows for even better real-time troubleshooting. The assistant has always used natural language processing and understanding to interpret simple language queries and provide insightful answers on par with human experts. Furthermore, Marvis’ AIOps capabilities have been expanded further into the data center through tighter integration with Juniper Apstra’s contextual graph database. This allows Marvis to analyze infrastructure configurations and answer data center-related inquiries using the same Marvis conversational interface employed elsewhere in the network. Finally, HPE Networking also expanded its ability to proactively predict and prevent video issues using what it calls a large experience model, or LEM. This pulls in billions of data points from Zoom and Microsoft Teams clients and correlates them with networking data to identify the root cause of video issues. The LEM framework has now been augmented with data from Marvis digital experience twins, or Minis, which autonomously probe the wired, wireless, WAN and data center networks even when users aren’t present, providing even richer data for predictive and proactive troubleshooting. The impact shows up in different ways across industries. ServiceNow reported a 90% reduction in network trouble tickets, while Blue Diamond Growers cut the time spent managing networks by 80%. Gap achieved 85% fewer truck rolls, and Bethesda Medical reported 85% faster upgrades.
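The opt-in “self-driving mode” Aaron describes, autonomous fixes limited to operator-chosen issue types with full reporting, follows a recognizable pattern. The sketch below is illustrative only, not Marvis internals: remediation runs automatically only for whitelisted issue kinds, and every autonomous action is recorded with its reasoning so IT retains visibility.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative sketch of opt-in "self-driving" remediation (not Marvis
# internals): only operator-chosen issue types are fixed autonomously, and
# each fix is logged with its reasoning for full visibility.

SELF_DRIVING_SCOPES = {"stuck_port", "noncompliant_ap_firmware"}  # chosen by the operator

@dataclass
class Issue:
    kind: str
    device: str
    detail: str

audit_log: list[dict] = []

def remediate(issue: Issue) -> None:
    if issue.kind not in SELF_DRIVING_SCOPES:
        print(f"[{issue.device}] {issue.kind}: flagged for human review")
        return
    # Stand-in for the real action, e.g. bounce the port or schedule an upgrade.
    audit_log.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "device": issue.device,
        "issue": issue.kind,
        "action": "auto-remediated",
        "why": issue.detail,
    })

remediate(Issue("stuck_port", "switch-03/ge-0/0/12", "port flapping, no traffic for 15m"))
remediate(Issue("dhcp_exhaustion", "ap-17", "address scope nearly full"))
print(audit_log)
```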
VMware Cloud’s Private AI native services include agent/LLM tooling, MCP support, multi‑GPU runtime, and secure, GitOps‑ready private clouds
Broadcom Inc. is transforming its VMware Cloud Foundation 9.0 software into an artificial intelligence-native platform, giving developers a secure, modern and private cloud infrastructure that’s geared for the development of sophisticated AI applications and agents. The addition of VMware Private AI Services means VCF 9.0 can now be used as a platform for Private AI, where developers can find everything they need to get started building, and later deploy AI models and AI agents. VMware Private AI Services will launch early next year and span everything from fine-tuning to inference, with capabilities such as graphics processing unit monitoring, an AI model store, a model runtime, agent builder, vector database and data indexing and retrieval services, all available as part of the broader VCF 9.0 subscription. Developers will also be aided by a generative AI assistant called VCF Intelligent Assist, available in preview now, to help diagnose and resolve infrastructure problems. VCF 9.0 also gets support for the Model Context Protocol, enabling AI agents to tap into external data sources and tools and use them to collaborate with other agents, as well as a multi-accelerator model runtime that supports the flexible deployment of AI models on GPUs from Advanced Micro Devices Inc. and Nvidia Corp. In addition, customers will get access to multi-tenant models-as-a-service, which helps to lower costs by securely sharing AI models across tenants or separate lines of business. Broadcom said it wants developers to embrace VCF 9.0 as a single, unified platform for both AI and non-AI workloads, and to that end it has also announced a host of new updates that aim to speed up infrastructure delivery. For instance, VMware vSAN, a software-defined storage solution that combines local storage from multiple servers into a single shared storage pool, now gets native support for Amazon S3 compatible object storage interfaces. This, it said, will enable unstructured data to be stored on vSAN directly without any proprietary hardware or third-party licensing, so organizations can create unified storage policies for block, file and object storage and reduce storage infrastructure complexity. VCF 9.0 is also integrating with GitOps, Argo CD and Istio to secure application delivery, using Git as a source of truth for Kubernetes. It means developers will be able to store both their infrastructure and apps as code in Git, and use Argo CD to automate consistent deployments. Meanwhile, the Istio Service Mesh provides zero-trust networking, traffic control and observability for containers, which host the components of applications.
MIT says partner‑led GenAI deployments beat internal builds, but sustainable value needs persistent memory, integration, and user‑familiar interfaces over brittle bespoke apps
The GenAI implementation failure rate is staggering, according to a new report from MIT. While 80% of organizations have explored GenAI tools and 40% report deployment, only 5% of custom enterprise AI solutions reach production, creating a massive gap between pilot enthusiasm and actual transformation. Investment allocation misses high-ROI opportunities. 50% of GenAI budgets flow to sales and marketing despite back-office automation delivering faster payback periods, with successful implementations generating $2-10M annually in BPO cost reductions. Strategic partnerships dramatically outperform internal builds. External partnerships achieve 66% deployment success compared to just 33% for internally developed tools, yet most organizations continue pursuing expensive internal development efforts. The contrast becomes even sharper when examining enterprise-specific AI solutions. While 60% of organizations have evaluated custom or vendor-sold GenAI systems, only 20% progress to pilot stage. Of those brave enough to attempt implementation, a mere 5% achieve production deployment with sustained business value. The paradox of GenAI adoption becomes clear when examining user preferences. The same professionals who praise ChatGPT for flexibility and immediate utility express deep skepticism about custom enterprise tools. When asked to compare experiences, three consistent themes emerge: generic LLM interfaces consistently produce better answers, users already possess interface familiarity, and trust levels remain higher for consumer tools. This preference reveals the fundamental learning gap. Research reveals a stark preference hierarchy based on task complexity and learning requirements. For simple tasks such as email drafting, basic analysis, and quick summaries, 70% of users prefer AI assistance. But for anything requiring sustained context, relationship memory, or iterative improvement, humans dominate by 9-to-1 margins. The dividing line isn’t intelligence or capability; it’s memory, adaptability, and learning capacity. Current GenAI systems require extensive context input for each session, repeat identical mistakes, and cannot customize themselves to specific workflows or preferences. These limitations explain why 95% of enterprise AI initiatives fail to achieve sustainable value. This shadow usage, employees independently turning to consumer tools like ChatGPT for work tasks, demonstrates that individuals can successfully cross the GenAI Divide when given access to flexible, responsive tools. The pattern suggests that successful enterprise adoption must build on rather than replace this organic usage, providing the memory and integration capabilities that consumer tools lack while maintaining their flexibility and responsiveness.
Google brings air‑gapped, multimodal AI to Distributed Cloud so regulated enterprises can deploy GenAI on premise without sacrificing data sovereignty
Google announced the general availability of its Gemini artificial intelligence models on Google Distributed Cloud, extending its most advanced AI capabilities into enterprise and government data centers. The launch, which sees Gemini now available on GDC in an air-gapped configuration and in preview on GDC connected, allows organizations with strict data residency and compliance requirements to deploy generative AI without sacrificing control over sensitive information. By bringing models on-premises, the release addresses a longstanding issue faced by regulated industries: the forced choice between adopting modern AI tools and maintaining full sovereignty over their data. The integration provides access to Gemini’s multimodal capabilities, including text, images, audio and video. Google says that unlocks a range of use cases, including multilingual collaboration, automated document summarization, intelligent chatbots and AI-assisted code generation. The release also includes built-in safety tools that allow enterprises to improve compliance, detect harmful content and enforce policy adherence. Google argues that delivering these capabilities securely requires more than just models, positioning GDC as a full AI platform that combines infrastructure, model libraries and prebuilt agents such as the preview of Agentspace search. Under the hood, GDC makes use of Nvidia Corp.’s Hopper and Blackwell graphics processing units, paired with automated load balancing and zero-touch updates for high availability. Confidential computing is supported on both central processing units and GPUs, ensuring that sensitive data is encrypted even during processing. Customers also gain audit logging and granular access controls for end-to-end visibility of their AI workloads. Along with Gemini 2.5 Flash and Pro, the platform supports Vertex AI’s task-specific models and Google’s open-source Gemma family. Enterprises can also deploy their own open-source or proprietary models on managed virtual machines and Kubernetes clusters as part of a unified environment.
