Google is unveiling its “AI Mode,” which replaces the traditional list of search results with Google’s own model-generated summary of whatever was typed into the search box. A demo video from Google shows people asking in-depth questions about topics such as a popsicle-stick bridge and homebuying, with the new search feature replying in cogent, multi-paragraph answers. However, the short, vague phrases users have traditionally typed into search boxes produce vague results in AI Mode: when I supplied the phrase “big dress,” it generated “You seem to be interested in the term big dress, which can have several meanings,” followed by information on everything from voluminous ballgowns to Guinness World Records. The broader concern is that AI-generated answers lead to fewer clicks on the search engine results page (SERP), which harms websites that depend on ad revenue. “Zero-click” searches, where users simply accept the AI’s answer, send no traffic to the rest of the web. Google CEO Sundar Pichai called AI Mode a “total reimagining of search.” “What all this progress means is that we are in a new phase of the AI platform shift, where decades of research are now becoming reality for people all over the world,” Pichai said in a press statement.
Trustworthy agentic AI is the next enterprise mandate: Accenture and ServiceNow’s platform enables development of enterprise-grade, trustworthy agentic AI using a zero trust framework that enforces continuous verification and robust security measures throughout the AI lifecycle
Accenture PLC and ServiceNow are co-developing intelligent platforms that embed security, autonomy and interoperability at the core to deliver enterprise-grade AI agents that organizations not only use — but rely on. This partnership is redefining how trust is built into the very fabric of AI systems, according to Dave Kanter, senior managing director and Global ServiceNow Business Group lead at Accenture. Accenture’s AI Agent Zero Trust Model for Cyber Resilience is a cybersecurity framework developed to ensure the secure deployment and operation of AI agents within enterprise environments. It builds on the core principles of zero trust — where nothing is inherently trusted and everything must be verified. This model supports the development of trustworthy agentic AI by enforcing continuous verification and robust security measures throughout the AI lifecycle, according to Kanter. “You’ll hear our team talking about one of their favorite use cases — they call it Agent Zero,” he said. “This use case is a complex workflow with decision-making and intent-based guardrails, and we can power that with agentic AI. The first full-on agents built on the AI studio will be Agent Zero for three or four of our clients, and we expect those to go live in just a matter of days.” Unlike traditional AI, which typically follows predefined instructions or responds to prompts, agentic AI exhibits higher levels of independence, adaptability and initiative. As a result, it has the potential to redefine the workforce, as Jacqui Canney, chief people and AI enablement officer at ServiceNow, pointed out.
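The zero-trust principle Kanter describes — nothing is inherently trusted, every action must be verified — can be sketched as a policy gate that each agent action passes through before execution. The sketch below is purely illustrative; the action names, limits and function signatures are hypothetical, not part of the Accenture/ServiceNow platform.

```python
# Illustrative zero-trust gate for AI agent actions: deny by default,
# re-verify credentials on every call, and enforce intent-based
# guardrails and least privilege. All names here are hypothetical.

ALLOWED_ACTIONS = {"read_ticket", "update_ticket"}  # intent-based guardrails
MAX_RECORDS = 100                                   # blast-radius limit

def verify_action(agent_id: str, action: str, scope: int, token_valid: bool) -> bool:
    """Continuously re-verify identity, intent and scope for each action."""
    if not token_valid:                # credentials are re-checked every call
        return False
    if action not in ALLOWED_ACTIONS:  # anything not explicitly allowed is denied
        return False
    if scope > MAX_RECORDS:            # least privilege: cap the action's reach
        return False
    return True

def run_action(agent_id: str, action: str, scope: int, token_valid: bool) -> str:
    if not verify_action(agent_id, action, scope, token_valid):
        raise PermissionError(f"{agent_id}: {action} denied by zero-trust policy")
    return f"{action} executed for {agent_id}"
```

The point of the pattern is that the agent never executes anything on trust carried over from a previous step; every call re-runs the full verification.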
Combining agentic workflows with APIs enables scaling enterprise AI by abstracting hardware and infrastructure complexities and offering a modular, collaborative way to integrate seamlessly across diverse environments
Agentic workflows are fast becoming the backbone of enterprise AI, enabling scalable automation that bridges on-prem systems and the cloud without adding complexity. Organizations adopting agentic workflows are increasingly turning to standard APIs and open-source platforms to simplify the deployment of AI at scale. By abstracting the hardware and infrastructure complexities, these workflows allow for seamless integration across diverse environments, giving companies the flexibility to shift workloads without rewriting code, according to Chris Branch, AI strategy sales manager at Intel. “With the agentic workflow combined with APIs, what you can do then is have a dashboard that runs multiple models simultaneously. What that agentic workflow with these APIs allows is for companies to run those on different systems at different times in different locations without changing any of their code.” This modularity also extends to inference use cases, such as chat interfaces, defect detection and IT automation. Each task might leverage a different AI model or compute resource, but with agentic workflows, they can all operate within a unified dashboard. Standards such as the Llama and OpenAI APIs are central to enabling this level of fluidity and collaboration between agents, according to Mungara. At the foundation of this vision is the Open Platform for Enterprise AI, which provides infrastructure-agnostic building blocks for generative AI. Supported by contributions from Advanced Micro Devices Inc., Neo4j Inc., Infosys Ltd. and others, OPEA allows enterprises to rapidly test, validate and deploy scalable solutions across cloud and on-prem infrastructure, Branch explained.
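The portability Branch describes — moving workloads between systems and sites without changing code — follows from binding each inference task to its backend through configuration alone, with every backend exposing the same OpenAI-compatible API. A minimal sketch, in which all endpoint URLs and model names are hypothetical placeholders:

```python
# Each inference task resolves to a backend via config, not code.
# Because all backends speak the same OpenAI-compatible API, moving a
# workload between on-prem and cloud means editing this dict only.
# URLs and model names below are hypothetical.

BACKENDS = {
    "chat":             {"base_url": "http://onprem-gw:8000/v1", "model": "llama-3-8b"},
    "defect-detection": {"base_url": "https://cloud-gw/v1",      "model": "vision-model"},
    "it-automation":    {"base_url": "http://edge-node:8000/v1", "model": "llama-3-70b"},
}

def route(task: str) -> dict:
    """Resolve a task to its endpoint configuration."""
    cfg = BACKENDS[task]
    # A real client would POST to f"{cfg['base_url']}/chat/completions";
    # the application code never changes when the backend moves.
    return cfg

# Shifting the chat workload to the cloud is a config edit, not a rewrite:
BACKENDS["chat"]["base_url"] = "https://cloud-gw/v1"
```

A unified dashboard running multiple models simultaneously is then just this routing table with one entry per task, each possibly pointing at different hardware in a different location.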
Open community platform for AI reliability and evaluation allows testing AI models with diverse, real-world prompts across a range of use cases; has hosted over 400 model evaluations, with over 3 million votes cast on its platform
LMArena, the open community platform for evaluating the best AI models, has secured $100 million in seed funding led by a16z and UC Investments (University of California), with participation from Lightspeed, Laude Ventures, Felicis, Kleiner Perkins and The House Fund. In a space moving at breakneck speed, LMArena is building something foundational: a neutral, reproducible, community-driven layer of infrastructure that allows researchers, developers and users to understand how models actually perform in the real world. Over 400 model evaluations have already been run on the platform, with more than 3 million votes cast, helping shape both proprietary and open-source models across the industry, including those from Google, OpenAI, Meta and xAI. The new LMArena includes a rebuilt UI, mobile-first design, lower latency, and new features like saved chat history and endless chat. The legacy site will remain live for a while, but all future innovation is happening on lmarena.ai. Backers say what makes LMArena different is not just the product, but the principles behind it. Evaluation is open, the leaderboard mechanics are published, and all models are tested with diverse, real-world prompts. This approach makes it possible to explore in-depth how AI performs across a range of use cases.
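The published leaderboard mechanics turn head-to-head community votes into ratings. A minimal Elo-style sketch of that idea — note that the real leaderboard uses a statistically grounded Bradley-Terry-style fit over all votes; this simplified online update is for illustration only:

```python
# Toy Elo-style update: each community vote ("model A beat model B")
# nudges the winner's rating up and the loser's down, weighted by how
# surprising the result was. Illustrative only, not LMArena's exact math.

K = 32  # update step size

def expected(r_a: float, r_b: float) -> float:
    """Modeled probability that the model rated r_a beats the one rated r_b."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update(ratings: dict, winner: str, loser: str) -> None:
    """Apply one vote: winner beat loser."""
    ra, rb = ratings[winner], ratings[loser]
    gain = K * (1 - expected(ra, rb))  # upsets move ratings more
    ratings[winner] = ra + gain
    ratings[loser] = rb - gain

ratings = {"model-a": 1000.0, "model-b": 1000.0}
for _ in range(10):  # ten votes favoring model-a
    update(ratings, "model-a", "model-b")
```

Because the update is zero-sum, the total rating mass is conserved; only relative positions — the leaderboard order — change as votes accumulate.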
Pay-i’s platform measures the revenue, costs, and profit margins of generative AI apps running on usage-based pricing models and allows predicting inference costs pre-launch to help meet profitability targets
There was little evidence, some of Goldman’s analysts pointed out, of organisations worldwide making much of a return on the $1 trillion they had invested in artificial intelligence (AI) tools. Recent research from KPMG found that enthusiasm among enterprise leaders for AI remained high, but that none were yet able to point to significant returns on investment. A Forrester paper warned that some executives might start cutting back on AI investment given their impatience for tangible returns. A study from Appen suggests AI project deployments may already be slowing. Enterprises are right to be sceptical about what GenAI is actually achieving for their businesses, argues David Tepper, co-founder and CEO of Seattle-based start-up Pay-i – and they need more scientific methodologies for analysing returns, both ahead of deployments and once new AI projects are up and running. “C-suite leaders need forecasts of likely returns and reliable proof that they are being achieved,” Tepper says. “That’s how they’ll pinpoint which GenAI business cases and deployments are genuinely creating new value.” Pay-i offers tools to help businesses measure the cost of new GenAI initiatives, broken down into granular detail; such costs are currently opaque, Tepper argues, because they depend on a broad range of factors, from when and how business users make use of GenAI tools to which cloud architecture that business has opted for. In addition, Pay-i’s platform allows businesses to assign specific objectives to AI deployments and then to track the extent to which these objectives are achieved – and what value is realised accordingly. The idea is to give enterprises a means to evaluate both sides of the balance sheet for any given AI use case – what it costs and what it generates.
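The pre-launch cost forecasting described above ultimately rests on token arithmetic: expected call volume times tokens in and out, times the provider’s per-token prices. A hedged sketch of that calculation — all prices and usage figures below are hypothetical examples, not Pay-i’s data or methodology:

```python
# Toy pre-launch inference-cost forecast: cost scales with input and
# output tokens per call, per-token prices, and expected call volume.
# All numbers are hypothetical, for illustration only.

def monthly_inference_cost(
    calls_per_month: int,
    avg_input_tokens: int,
    avg_output_tokens: int,
    price_in_per_1k: float,   # USD per 1,000 input tokens
    price_out_per_1k: float,  # USD per 1,000 output tokens
) -> float:
    per_call = (avg_input_tokens / 1000) * price_in_per_1k \
             + (avg_output_tokens / 1000) * price_out_per_1k
    return calls_per_month * per_call

# Example: 100k calls/month, 500 tokens in / 300 out,
# at $0.01 and $0.03 per 1k tokens respectively.
cost = monthly_inference_cost(100_000, 500, 300, 0.01, 0.03)  # -> $1,400/month
```

Comparing a forecast like this against the measured value an AI deployment generates is the “both sides of the balance sheet” evaluation Tepper describes.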
America’s biggest banks consider a consortium to reinvent the stablecoin, but it requires a shared governance model, common technical standards, airtight security protocols and legislative momentum as prerequisites
JPMorgan Chase, Bank of America, Wells Fargo and Citigroup are exploring the creation of a jointly operated, fully fiat-backed stablecoin, marking a significant shift from skepticism to strategic investment in crypto by traditional finance. The proposed consortium is reportedly considering using existing rails like Early Warning Services (operator of Zelle) and The Clearing House to develop a new kind of stablecoin infrastructure — one built by regulated entities from the ground up. Their idea? To issue a token that could eventually be used for everything from peer-to-peer payments to B2B settlements, all potentially under the watchful eye of federal regulators. Because the U.S. stablecoin landscape has not yet found shelter under a clear regulatory framework, the banks are still in the exploratory phase, with a shared commitment to finding a model that’s compliant, scalable and secure. Their proposed stablecoin would be fully backed by fiat held at the banks and function similarly to other stablecoins, but with a key differentiator: trust in institutional governance. This vision is a clear departure from the early crypto ethos of disrupting incumbents. Instead, it’s a bet that those same incumbents are best positioned to bring digital dollars into the mainstream. Creating a stablecoin is one thing. Coordinating among multiple banks — each with its own technology stack, risk appetite and strategic priorities — is another. This kind of collaboration will require a shared governance model, common technical standards and airtight security protocols. That’s why, for banks, the legislative momentum in the U.S. is a prerequisite. Institutions like JPMorgan and BofA are unlikely to risk their core operations on loosely regulated ventures. Instead, they see regulation as a moat, a way to differentiate themselves from crypto-native competitors and legitimize the space.
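The “fully fiat-backed” property the banks are proposing has a simple invariant: tokens are minted only against fiat deposited into reserve, and burning a token releases the corresponding fiat, so circulating supply always equals reserves. A toy model of that invariant — this is an illustration of the general mechanism, not the consortium’s actual design:

```python
# Toy model of a fully fiat-backed stablecoin: 1 token minted per $1
# deposited, 1 token burned per $1 redeemed, so supply == reserves
# always holds. Illustrative only, not the banks' design.

class FiatBackedStablecoin:
    def __init__(self):
        self.supply = 0    # tokens in circulation
        self.reserves = 0  # fiat dollars held at the issuing banks

    def mint(self, fiat_deposited: int) -> None:
        """Issue tokens only against fiat actually deposited in reserve."""
        self.reserves += fiat_deposited
        self.supply += fiat_deposited

    def burn(self, tokens: int) -> int:
        """Redeem tokens; the matching fiat leaves the reserve."""
        if tokens > self.supply:
            raise ValueError("cannot burn more than circulating supply")
        self.supply -= tokens
        self.reserves -= tokens
        return tokens  # fiat released to the redeemer

    def fully_backed(self) -> bool:
        return self.supply == self.reserves
```

Every path that changes supply changes reserves by the same amount, which is what distinguishes a fully backed token from fractional or algorithmic designs.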
Anthropic’s new AI models can use tools in parallel, extract and save key facts from local files, operate in two modes (near-instant responses and extended thinking) and can maintain full context to sustain focus on longer projects
Anthropic has introduced the next generation of its artificial intelligence (AI) models, Claude Opus 4 and Claude Sonnet 4. “These models advance our customers’ AI strategies across the board: Opus 4 pushes boundaries in coding, research, writing and scientific discovery, while Sonnet 4 brings frontier performance to everyday use cases as an instant upgrade from Sonnet 3.7,” the company said. The company said Claude Opus 4 is its most powerful model yet and “the world’s best coding model,” adding that it delivers sustained performance on complex, long-running tasks and agent workflows. Claude Sonnet 4 balances performance and efficiency. It provides a significant upgrade to its predecessor, Claude Sonnet 3.7, and offers superior coding and reasoning while responding more precisely to user instructions. Both models can use web search and other tools during extended thinking, use tools in parallel, and extract and save key facts from local files, per the announcement. In addition, both models offer two modes: near-instant responses and extended thinking. Anthropic describes the models as a large step toward the virtual collaborator — maintaining full context, sustaining focus on longer projects, and driving transformational impact.
R3 to bridge Corda with Solana’s network to enable regulated FIs to access public blockchain infrastructure for RWA tokenization through an enterprise-grade, compliant service
R3 and Solana Foundation have partnered to bring regulated real-world assets onto a public blockchain. The collaboration will create a consensus service deployed on Solana to enable native interoperability between R3’s existing Corda platform – as well as other private networks – and Solana, bridging the gap between permissioned and public blockchain ecosystems for the first time. This will enable regulated financial institutions – including banks, financial market infrastructure providers, and asset managers – to fully harness the openness and efficiency of Solana without rewriting their applications or compromising on compliance, security, or asset control. It drives institutional adoption of public blockchain networks, capitalizing on greater regulatory clarity and growing institutional demand for tokenized real-world assets (RWAs). The announcement marks a new strategic direction for R3, signalling its leadership in driving the convergence of public and private networks to unlock the next era of internet capital markets. It enables regulated financial institutions to directly access the speed and scale of Solana for broader asset distribution and enhanced liquidity, a decisive step in bringing TradFi to DeFi.
Kraken partners with Backed Finance to offer tokenized versions of popular US equities on Solana’s blockchain; the tokens will be compatible with wallets and protocols on the network
Crypto exchange Kraken plans to offer tokenized versions of popular U.S. equities. Kraken will list a new suite of tokenized equities dubbed xStocks in partnership with Backed Finance. The assets will reportedly be live on the Solana blockchain and represent actual shares held 1:1 by Backed. Clients in selected non-U.S. jurisdictions will reportedly be able to trade more than 50 U.S. stocks and ETFs, including Tesla, Nvidia, Apple, and the SPDR S&P 500 ETF, outside traditional market hours. The launch positions Kraken among the first exchanges to successfully list tokenized U.S. equities since Binance’s short-lived effort in 2021. Unlike earlier iterations, Kraken’s approach relies on real securities held in custody and tokenized on a fast, low-cost blockchain. Mark Greenberg, Kraken’s global head of consumer, said: “Access to traditional U.S. equities remains slow, costly, and restricted. With xStocks, we’re using blockchain technology to deliver something better: open, instant, accessible, and borderless exposure to some of America’s most iconic companies.” The xStocks tokens are reportedly issued as SPL tokens on Solana, meaning they are compatible with wallets and protocols on the network. This integration also allows users to leverage their tokenized stocks in decentralized finance environments, including as collateral.
