Crypto exchange Coinbase has unveiled the x402 protocol to automate online payments with stablecoins, allowing transactions to flow directly from holder to merchant without intermediaries. The protocol will be particularly useful for AI agents paying for digital items automatically, but it can serve any service requiring micropayments – essentially functioning as the online version of automated toll payments. Applications could include API usage, online content, flights, or compute resources. While the protocol is designed for direct payments, various intermediaries will likely emerge to simplify the process for merchants. When browsing the web, a missing page often returns a 404 error. A status code we almost never see is 402, "Payment Required," which signals that a resource sits behind a paywall. Coinbase is building on this long-dormant code to make online payments as seamless as sending a tweet. "We built x402 because the internet has always needed a native way to send and receive payments—and stablecoins finally make that possible," said Erik Reppel, Head of Engineering at Coinbase Developer Platform. "Just like HTTPS secured the web, x402 could define the next era of the internet; one where value moves as freely and instantly as information. We're laying the groundwork for an economy run not just by people, but by software—autonomous, intelligent, and always on."
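In HTTP terms, the flow the protocol builds on is simple: the server answers an unpaid request with status 402 and payment instructions, and the client retries with proof of payment. The sketch below illustrates that loop in plain Python; the header name, payload fields, and verification step are invented for illustration and are not the actual x402 specification.

```python
# Hedged sketch of an HTTP 402 payment flow in the spirit of x402.
# The "X-Payment" header and payload fields are illustrative assumptions.

def handle_request(headers: dict) -> tuple[int, dict]:
    """Server side: demand payment, then serve the resource."""
    proof = headers.get("X-Payment")
    if proof is None:
        # 402 Payment Required, with instructions for the client
        return 402, {"accepts": {"asset": "USDC", "amount": "0.01",
                                 "payTo": "0xMerchant"}}
    if verify_payment(proof):  # stand-in for on-chain verification
        return 200, {"data": "premium content"}
    return 402, {"error": "invalid payment proof"}

def verify_payment(proof: str) -> bool:
    # Stand-in for verifying a settled stablecoin transfer
    return proof == "signed-transfer-of-0.01-USDC"

# Client side (e.g. an AI agent): request, pay on 402, retry with proof
status, body = handle_request({})
assert status == 402                       # server asks for payment
proof = "signed-transfer-of-0.01-USDC"     # agent signs and settles
status, body = handle_request({"X-Payment": proof})
assert status == 200                       # resource is served
```

Because the proof travels in an ordinary header, the same loop works for human users, scripts, and autonomous agents alike.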
CodeAnt AI’s platform plugs into developer platforms, reviews the code, gives instant feedback across 30+ programming languages and suggests fixes that developers can apply with a single click
AI might be great at helping engineers write code, but it’s creating a new problem – all that code still needs to be reviewed by humans. CodeAnt AI is stepping in with a solution that uses AI to tackle the review process itself. CodeAnt AI’s platform plugs right into GitHub, GitLab, Bitbucket, and Azure DevOps, giving developers instant feedback on their code across more than 30 programming languages. More impressively, it doesn’t just find problems – it suggests fixes that developers can apply with a single click, turning reviews that used to take hours into quick, five-minute sessions. For companies racing to get products out the door, this means fewer delays and higher-quality code. It also means cost savings – fixing a problem during code review costs roughly a tenth of what it costs to fix later in CI/CD or after production deployment. What makes CodeAnt AI different is the technology under the hood. The company built a proprietary language-agnostic AST engine that understands how different parts of a codebase connect, letting it spot issues that isolated code reviews would miss. The platform also pulls in data from major security databases and lets companies set up their own rules based on their specific needs. For security-conscious organizations, CodeAnt AI can run entirely within their own infrastructure, ensuring code never leaves their environment. It’s proven to help enterprises reduce manual code review time by over 50%.
Microsoft CEO’s endorsement of Google DeepMind‘s Agent2Agent (A2A) open protocol and Anthropic’s Model Context Protocol (MCP) will immediately accelerate agentic AI-based collaboration and interdependence
Microsoft CEO Satya Nadella’s endorsement of Google DeepMind‘s Agent2Agent (A2A) open protocol and Anthropic’s Model Context Protocol (MCP) will immediately accelerate agentic AI-based collaboration and interdependence, leading to rapid gains in agentic apps and platforms. Nadella’s endorsement delivers the catalyst the agentic AI development community needed to fast-track collaborations that will produce entirely new apps, platforms and networks. Microsoft has historically been open about the potential for agentic AI to integrate across platforms, but yesterday’s announcement, which also unveiled upcoming support in Copilot Studio and Foundry, set a new precedent for how committed the company is to open agentic standards. “Open protocols like A2A and MCP are key to enabling the agentic web,” Nadella said before announcing the upcoming support in Copilot Studio and Foundry. While he has often agreed with the concept of open standards for agentic AI integration, this is the first time he has endorsed a standard publicly, and his influence will push the industry away from proprietary ecosystems toward cross-platform, agentic AI collaboration. Endorsing both protocols on the same day shows that Microsoft’s senior management has agreed an open-protocol approach is the best direction for the company, and how far along its strategies are for agentic AI collaboration, integration and combining diverse agentic AI architectures. Microsoft’s endorsement of A2A and MCP will prove a noteworthy catalyst for agentic AI’s growth: backing both open protocols ensures the core components of any agentic AI tech stack provided by Anthropic, Google or Microsoft will be compatible and interoperable from the first product releases.
That removes significant roadblocks for the hundreds of agentic AI startups and partners that rely on these companies for future growth.
Dianomic’s solution supports live digital twins and OT/IT convergence by abstracting enterprise-wide machines, sensors, and processes into a unified system for streaming analytics and real-time operational data at scale
Dianomic, a leader in intelligent industrial data pipelines and edge AI/ML solutions, has launched FogLAMP Suite 3.0. The solution’s ‘Intelligent Industrial Data Pipelines’ abstract machines, sensors, and processes into a unified real-time data and streaming analytics system for brownfield and greenfield sites alike. By seamlessly connecting the plant floor to the cloud and back with high-quality normalized streaming data, FogLAMP 3.0 enables innovations like AI-driven applications, digital twins, lakehouse data management, unified namespaces and OT/IT convergence. FogLAMP Suite 3.0 creates an intelligent data fabric, unifying and securing real-time operational data at scale with enterprise-grade management. This comprehensive data flow empowers both plant-level optimization and cloud-based insights. Its role-based access control, intuitive graphical interface, and flexible development tools – ranging from no-code to source code – empower IT and OT teams to collaborate effectively or work independently with confidence. FogLAMP Suite 3.0 key features: real-time, full-fidelity streaming analytics and data management, where the physical world meets the digital; enterprise-wide management, integration and monitoring of streaming data from diverse sources to clouds and back; live digital twins, with tag and namespace management, semantic models, and AI/ML to detect, predict and prescribe; compatibility with brownfield, greenfield and IIoT processes, equipment and sensors.
OpenAI’s enterprise adoption appears to be accelerating, at the expense of rivals – 32% of U.S. businesses are paying for subscriptions to OpenAI vs 8% and 0.1% subscribing to Anthropic’s products and Google AI respectively
OpenAI appears to be pulling well ahead of rivals in the race to capture enterprises’ AI spend, according to transaction data from fintech firm Ramp. According to Ramp’s AI Index, which estimates the business adoption rate of AI products by drawing on Ramp’s card and bill pay data, 32.4% of U.S. businesses were paying for subscriptions to OpenAI AI models, platforms, and tools as of April. That’s up from 18.9% in January and 28% in March. Competitors have struggled to make similar progress, Ramp’s data shows. Just 8% of businesses had subscriptions to Anthropic’s products as of last month compared to 4.6% in January. Google AI subscriptions saw a decline from 2.3% in February to 0.1% in April, meanwhile. “OpenAI continues to add customers faster than any other business on Ramp’s platform,” wrote Ramp Economist Ara Kharzian. “Our Ramp AI Index shows business adoption of OpenAI growing faster than competitor model companies.” To be clear, Ramp’s AI Index isn’t a perfect measure. It only looks at a sample of corporate spend data from around 30,000 companies. Moreover, because the index identifies AI products and services using merchant name and line-item details, it likely misses spend lumped into other cost centers. Still, the figures suggest that OpenAI is strengthening its grip on the large and growing enterprise market for AI. OpenAI is projecting $12.7 billion in revenue this year and $29.4 billion in 2026.
Talent development, right data infrastructure, industry-specific strategic bets, responsible AI governance and agentic architecture are key for scaling enterprise AI initiatives
A new study from Accenture provides a data-driven analysis of how leading companies are successfully implementing AI across their enterprises and reveals a significant gap between AI aspirations and execution. Here are five key takeaways for enterprise IT leaders from Accenture’s research.
Talent maturity outweighs investment as the key scaling factor. Accenture’s research reveals that talent development is actually the most critical differentiator for successful AI implementation. “We found the top achievement factor wasn’t investment but rather talent maturity,” said Senthil Ramani, data and AI lead at Accenture. The report shows front-runners differentiate themselves through people-centered strategies: they focus four times more on cultural adaptation than other companies, emphasize talent alignment three times more and implement structured training programs at twice the rate of competitors. IT leader action item: Develop a comprehensive talent strategy that addresses both technical skills and cultural adaptation. Establish a centralized AI center of excellence – the report shows 57% of front-runners use this model compared to just 16% of fast-followers.
Data infrastructure makes or breaks AI scaling efforts. “The biggest challenge for most companies trying to scale AI is the development of the right data infrastructure,” Ramani said. “97% of front-runners have developed three or more new data and AI capabilities for gen AI, compared to just 5% of companies that are experimenting with AI.” These essential capabilities include advanced data management techniques like retrieval-augmented generation (RAG) (used by 17% of front-runners vs. 1% of fast-followers) and knowledge graphs (26% vs. 3%), as well as diverse data utilization across zero-party, second-party, third-party and synthetic sources. IT leader action item: Conduct a comprehensive data readiness assessment explicitly focused on AI implementation requirements. Prioritize building capabilities to handle unstructured data alongside structured data and develop a strategy for integrating tacit organizational knowledge.
Strategic bets deliver superior returns to broad implementation. While many organizations attempt to implement AI across multiple functions simultaneously, Accenture’s research shows that focused strategic bets yield significantly better results. “In the report, we referred to ‘strategic bets,’ or significant, long-term investments in gen AI focusing on the core of a company’s value chain and offering a very large payoff. This strategic focus is essential for maximizing the potential of AI and ensuring that investments deliver sustained business value.” This focused approach pays dividends. Companies that have scaled at least one strategic bet are nearly three times more likely to have their ROI from gen AI surpass forecasts compared to those that haven’t. IT leader action item: Identify 3-4 industry-specific strategic AI investments that directly impact your core value chain rather than pursuing broad implementation.
Responsible AI creates value beyond risk mitigation. Most organizations view responsible AI primarily as a compliance exercise, but Accenture’s research reveals that mature responsible AI practices directly contribute to business performance. “ROI can be measured in terms of short-term efficiencies, such as improvements in workflows, but it really should be measured against longer-term business transformation.” The report emphasizes that responsible AI includes not just risk mitigation but also strengthens customer trust, improves product quality and bolsters talent acquisition – directly contributing to financial performance. IT leader action item: Develop comprehensive responsible AI governance that goes beyond compliance checkboxes. Implement proactive monitoring systems that continually assess AI risks and impacts. Consider building responsible AI principles directly into your development processes rather than applying them retroactively.
Model Context Protocol open standard architecture consisting of servers and clients will be key to building secure, two-way connections between AI agents’ data sources and tools as AI systems mature and start to maintain context
AI agents have been all the rage over the last several months, which has led to a need to come up with a standard for how they communicate with tools and data, leading to the creation of the Model Context Protocol (MCP) by Anthropic. MCP is “an open standard that enables developers to build secure, two-way connections between their data sources and AI-powered tools,” Anthropic wrote in a blog post announcing it was open sourcing the protocol. MCP can do for AI agents what USB does for computers, Lin Sun, senior director of open source at cloud native connectivity company Solo.io, explained. According to Keith Pijanowski, AI solutions engineer at object storage company MinIO, an example use case for MCP is an AI agent for travel that can book a vacation that adheres to someone’s budget and schedule. Using MCP, the agent could look at the user’s bank account to see how much money they have to spend on a vacation, look at their calendar to ensure it’s booking travel when they have time off, or even potentially look at their company’s HR system to make sure they have PTO left. MCP consists of servers and clients. The MCP server is how an application or data source exposes its data, while the MCP client is how AI applications connect to those data sources. MinIO actually developed its own MCP server, which allows users to ask the AI agent about their MinIO installation like how many buckets they have, the contents of a bucket, or other administrative questions. The agent can also pass questions off to another LLM and then come back with an answer. “Instead of maintaining separate connectors for each data source, developers can now build against a standard protocol. As the ecosystem matures, AI systems will maintain context as they move between different tools and datasets, replacing today’s fragmented integrations with a more sustainable architecture,” Anthropic wrote in its blog post.
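The server/client split described above can be illustrated with a toy sketch: a data source exposes its capabilities as named tools through a server, and a single client can drive any server that speaks the same interface. The class and method names below are simplified stand-ins for illustration, not the real MCP SDK.

```python
# Toy illustration of the MCP pattern: one protocol, many data sources.
# Names are simplified stand-ins, not Anthropic's actual MCP interfaces.

class BucketServer:
    """Plays the role of an MCP server: exposes storage as tools."""
    def __init__(self, buckets: dict):
        self.buckets = buckets  # bucket name -> list of object names

    def list_tools(self) -> list:
        return ["count_buckets", "list_objects"]

    def call_tool(self, name: str, args: dict):
        if name == "count_buckets":
            return len(self.buckets)
        if name == "list_objects":
            return self.buckets[args["bucket"]]
        raise ValueError(f"unknown tool: {name}")

class AgentClient:
    """Plays the role of an MCP client: connects an AI app to servers."""
    def __init__(self):
        self.servers = {}

    def connect(self, name: str, server: BucketServer):
        self.servers[name] = server

    def ask(self, server: str, tool: str, args: dict = None):
        return self.servers[server].call_tool(tool, args or {})

# One client works against any server that speaks the same interface
storage = BucketServer({"logs": ["a.txt"], "media": ["cat.png", "dog.png"]})
agent = AgentClient()
agent.connect("storage", storage)
assert agent.ask("storage", "count_buckets") == 2
assert agent.ask("storage", "list_objects", {"bucket": "media"}) == ["cat.png", "dog.png"]
```

The point of the standard is exactly this shape: the agent needs no storage-specific connector, only the shared tool-calling protocol, so swapping in a calendar or HR server changes nothing on the client side.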
A new HPC architecture with a “bring-your-own-code” (BYOC) approach would enable existing code to run unmodified; the underlying technology adapts to each application without new languages or significant code changes
There’s now a need for a new path forward that allows developers to speed up their applications with fewer barriers, which will ensure faster time to innovation without being locked into any particular vendor. The answer is a new kind of accelerator architecture that embraces a “bring-your-own-code” (BYOC) approach. Rather than forcing developers to rewrite code for specialized hardware, accelerators that embrace BYOC would enable existing code to run unmodified. The focus should be on accelerators where the underlying technology adapts to each application without new languages or significant code changes. This approach offers several key advantages: Elimination of Porting Overhead: Developers can focus on maximizing results rather than wrestling with hardware-specific adjustments. Software Portability: As performance accelerates, applications retain their portability and avoid vendor lock-in and proprietary domain-specific languages. Self-Optimizing Intelligence: Advanced accelerator designs can continually analyze runtime behavior and automatically tune performance as the application executes to eliminate guesswork and manual optimizations. These advantages translate directly into faster results, reduced overhead, and significant cost savings. Finally liberated from extensive code adaptation and reliance on specialized HPC experts, organizations can accelerate R&D pipelines and gain insights sooner. The BYOC approach eliminates the false trade-off between performance gains and code stability, which has hampered HPC adoption. By removing these artificial boundaries, BYOC opens the door to a future where computational power accelerates scientific progress. A BYOC-centered ecosystem democratizes access to computational performance without compromise. It will enable domain experts across disciplines to harness the full potential of modern computing infrastructure at the speed of science, not at the speed of code adaptation.
kama.ai supports knowledge management with hybrid agents informed by Knowledge Graph AI, enterprise RAG tech and a Trusted Collection
kama.ai, a leader in responsible conversational AI solutions, announced the commercial release of the industry’s most trustworthy AI Agents powered by GenAI’s Sober Second Mind®, the latest addition to its Designed Experiential Intelligence® platform – Release 4. The new Hybrid AI Agents combine kama.ai’s classic knowledge base AI, guided by human values, with a new enterprise Retrieval Augmented Generation (RAG) process. This in turn is powered by a Trusted Collection feature set that produces the most reliable and accurate generative responses. The Trusted Collection features provide pre-integrated intentional document and collection management with enterprise document repositories like SharePoint, M-Files and AWS S3 Buckets. Designed Experiential Intelligence® Release 4 helps enterprise experts work faster with greater ease. It generates draft responses automatically for a Knowledge Manager or SME to review. This is needed for highly sensitive applications (like HR), or for high volume customer facing applications. User inquiries, feedback, and AI drafts all help improve the system. Together, consumers, clients, partners, and SMEs create a more efficient and effective human-AI ecosystem. kama.ai Release 4 also introduces a new API supporting 3rd party Hybrid AI Agent builders that can deliver 100% accurate and approved information curated for the enterprise.
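The hybrid retrieve-draft-review loop described above can be sketched in a few lines. The function names, the naive keyword retrieval, and the canned drafting step are all illustrative assumptions, not kama.ai's actual API; the point is the shape of the pipeline, with a human gate before anything reaches users.

```python
# Hedged sketch of a retrieve -> draft -> SME-review pipeline.
# All names and logic here are illustrative, not kama.ai's product API.

def retrieve(query: str, trusted_collection: list) -> list:
    """Naive keyword match standing in for enterprise RAG retrieval."""
    terms = set(query.lower().split())
    return [doc for doc in trusted_collection
            if terms & set(doc.lower().split())]

def draft_response(query: str, passages: list) -> str:
    # Stand-in for a generative model grounded on retrieved passages
    if not passages:
        return "No approved source found; escalate to a human."
    return f"Based on approved sources: {passages[0]}"

def review_queue(draft: str, approved: bool) -> str:
    """Knowledge Manager / SME gate before anything reaches users."""
    return draft if approved else "Withheld pending SME revision."

docs = ["Vacation policy: employees accrue 1.5 PTO days per month.",
        "Expense policy: receipts required above 25 USD."]
query = "how many PTO days do employees accrue"
hits = retrieve(query, docs)
answer = review_queue(draft_response(query, hits), approved=True)
```

Restricting retrieval to a curated Trusted Collection is what keeps the generative step from drawing on unvetted sources, and routing drafts through a review queue is what makes the loop safe for sensitive domains like HR.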
Zencoder’s platform offers software teams access to third-party registries that host ready-to-use MCP connectors and MCP-powered pre-packaged AI agent integrations to enable them to build their own custom AI agents
Startup Zencoder, officially For Good AI Inc., introduced a cloud platform called Zen Agents that can be used to create coding-optimized AI agents. The new Zen Agents platform has two main components. The first is a catalog of open-source AI agents that can automate more than a half dozen programming tasks. The platform’s other component, in turn, is a tool that allows software teams to build their own custom AI agents. Developers can create an AI agent by entering a natural language description of the tasks it should perform. Zen Agents provides a collection of prepackaged AI agent integrations powered by MCP. The platform also offers access to third-party registries, or cloud services that host ready-to-use MCP connectors. The company says AI agents powered by its platform can create documentation that explains developers’ code, as well as generate new code in multiple programming languages. Software teams can also deploy AI agents that automatically test application updates for bugs. Zencoder has developed a technology it calls Repo Grokking to improve AI-generated code. It maps out the structure of an application’s code base, including details such as the programming best practices that the application’s developers follow. This information allows the AI models that power its platform to generate more relevant programming suggestions.
