Google is unveiling its "AI Mode," where instead of a list of search results, the user gets Google's own model-generated summary of whatever was typed into the search box. A demo video from Google shows people asking in-depth questions about a popsicle bridge, homebuying and other topics, and the new search feature replying with cogent, multi-paragraph answers. However, Googling the kinds of short phrases we traditionally used produces vague results in AI Mode: when I supplied the phrase "big dress," it generated "You seem to be interested in the term big dress, which can have several meanings," along with information on everything from voluminous ballgowns to the Guinness Book of World Records. More fundamentally, AI results lead to fewer clicks on the search engine results page (SERP), which harms websites that depend on ad revenue; zero-click searches, where users simply accept the AI's answer, send no traffic to the rest of the web. Google CEO Sundar Pichai called AI Mode a "total reimagining of search." "What all this progress means is that we are in a new phase of the AI platform shift, where decades of research are now becoming reality for people all over the world," Pichai said in a press statement.
Trustworthy agentic AI is the next enterprise mandate: Accenture-ServiceNow's platform enables development of enterprise-grade, trustworthy agentic AI using a zero-trust framework that enforces continuous verification and robust security measures throughout the AI lifecycle
Accenture PLC and ServiceNow are co-developing intelligent platforms that embed security, autonomy and interoperability at the core to deliver enterprise-grade AI agents that organizations not only use but rely on. This partnership is redefining how trust is built into the very fabric of AI systems, according to Dave Kanter, senior managing director and Global ServiceNow Business Group lead at Accenture. Accenture's AI Agent Zero Trust Model for Cyber Resilience is a cybersecurity framework developed to ensure the secure deployment and operation of AI agents within enterprise environments. It builds on the core principles of zero trust, where nothing is inherently trusted and everything must be verified. This model supports the development of trustworthy agentic AI by enforcing continuous verification and robust security measures throughout the AI lifecycle, according to Kanter. "You'll hear our team talking about one of their favorite use cases; they call it Agent Zero," he said. "This use case is a complex workflow with decision-making and intent-based guardrails, and we can power that with agentic AI. The first full-on agents built on the AI studio will be Agent Zero for three or four of our clients, and we expect those to go live in just a matter of days." Unlike traditional AI, which typically follows predefined instructions or responds to prompts, agentic AI exhibits higher levels of independence, adaptability and initiative. As a result, it has the potential to redefine the workforce, as Jacqui Canney, chief people and AI enablement officer at ServiceNow, pointed out.
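Taken at face value, the zero-trust idea Kanter describes means no agent action is implicitly trusted: each step is checked against policy before it runs. The sketch below is purely illustrative, assuming hypothetical action names, an allow-list and a risk threshold; it is not Accenture's AI Agent Zero Trust Model or ServiceNow's implementation.

```python
# Illustrative sketch of continuous verification for agent actions.
# Hypothetical example only: the policies and action types below are
# invented and do not reflect Accenture's or ServiceNow's framework.
from dataclasses import dataclass

@dataclass
class AgentAction:
    agent_id: str
    action: str          # e.g. "read_record", "close_ticket", "issue_refund"
    risk_score: float    # 0.0 (benign) .. 1.0 (high risk)

# Every action is checked against policy; nothing is trusted by default.
ALLOWED_ACTIONS = {"read_record", "close_ticket"}
MAX_RISK = 0.6

def verify(action: AgentAction) -> bool:
    """Return True only if the action passes every guardrail."""
    if action.action not in ALLOWED_ACTIONS:
        return False                      # intent not on the allow-list
    if action.risk_score > MAX_RISK:
        return False                      # too risky; escalate to a human
    return True

def execute_with_guardrails(action: AgentAction) -> str:
    if not verify(action):
        return f"BLOCKED: {action.action} by {action.agent_id} requires review"
    return f"EXECUTED: {action.action} by {action.agent_id}"

print(execute_with_guardrails(AgentAction("agent-zero", "close_ticket", 0.2)))
print(execute_with_guardrails(AgentAction("agent-zero", "issue_refund", 0.9)))
```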
Combining agentic workflows with APIs enables scaling of enterprise AI by abstracting hardware and infrastructure complexities and offering a modular, collaborative way to achieve seamless integration across diverse environments
Agentic workflows are fast becoming the backbone of enterprise AI, enabling scalable automation that bridges on-prem systems and the cloud without adding complexity. Organizations adopting agentic workflows are increasingly turning to standard APIs and open-source platforms to simplify the deployment of AI at scale. By abstracting the hardware and infrastructure complexities, these workflows allow for seamless integration across diverse environments, giving companies the flexibility to shift workloads without rewriting code, according to Chris Branch, AI strategy sales manager at Intel. “With the agentic workflow combined with APIs, what you can do then is have a dashboard that runs multiple models simultaneously. What that agentic workflow with these APIs allows is for companies to run those on different systems at different times in different locations without changing any of their code.” This modularity also extends to inference use cases, such as chat interfaces, defect detection and IT automation. Each task might leverage a different AI model or compute resource, but with agentic workflows, they can all operate within a unified dashboard. Standards such as the Llama and OpenAI APIs are central to enabling this level of fluidity and collaboration between agents, according to Mungara. At the foundation of this vision is the Open Platform for Enterprise AI, which provides infrastructure-agnostic building blocks for generative AI. Supported by contributions from Advanced Micro Devices Inc., Neo4j Inc., Infosys Ltd. and others, OPEA allows enterprises to rapidly test, validate and deploy scalable solutions across cloud and on-prem infrastructure, Branch explained.
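Because these agentic workflows target standard, OpenAI-compatible APIs, the same client code can be pointed at different models, systems or locations purely through configuration, which is the portability Branch describes. A minimal sketch under that assumption follows; the endpoint URLs, model names and the BACKENDS mapping are hypothetical placeholders, not real deployments or OPEA components.

```python
# Minimal sketch: the same agent code talks to different backends by
# swapping configuration, not code. Endpoints and model names below are
# hypothetical placeholders.
from openai import OpenAI

BACKENDS = {
    "on_prem_defect_detection": {"base_url": "http://onprem.example:8000/v1", "model": "llama-3-8b"},
    "cloud_it_automation":      {"base_url": "https://cloud.example/v1",      "model": "gpt-4o-mini"},
}

def run_task(backend_name: str, prompt: str) -> str:
    cfg = BACKENDS[backend_name]
    client = OpenAI(base_url=cfg["base_url"], api_key="placeholder-key")
    resp = client.chat.completions.create(
        model=cfg["model"],
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Example (would make a network call to the configured backend):
# print(run_task("cloud_it_automation", "Summarize today's open IT tickets."))
```

A dashboard built this way can dispatch each inference task, whether chat, defect detection or IT automation, to whichever system is available at the time without changing the task code itself.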
Open community platform for AI reliability and evaluation allows testing AI models with diverse, real-world prompts across a range of use cases; sees over 400 model evaluations, with more than 3 million votes cast on its platform
LMArena, the open community platform for evaluating the best AI models, has secured $100 million in seed funding led by a16z and UC Investments (University of California), with participation from Lightspeed, Laude Ventures, Felicis, Kleiner Perkins and The House Fund. In a space moving at breakneck speed, LMArena is building something foundational: a neutral, reproducible, community-driven layer of infrastructure that allows researchers, developers and users to understand how models actually perform in the real world. More than 400 model evaluations have already been run on the platform, with over 3 million votes cast, helping shape both proprietary and open-source models across the industry, including those from Google, OpenAI, Meta and xAI. The new LMArena includes a rebuilt UI, a mobile-first design, lower latency, and new features like saved chat history and endless chat. The legacy site will remain live for a while, but all future innovation is happening on lmarena.ai. Backers say what makes LMArena different is not just the product, but the principles behind it: evaluation is open, the leaderboard mechanics are published, and all models are tested with diverse, real-world prompts. This approach makes it possible to explore in-depth how AI performs across a range of use cases.
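LMArena's leaderboard is driven by pairwise human votes between anonymized models, with the ranking mechanics published openly. As a rough illustration of how pairwise votes can be turned into a ranking, here is a minimal Elo-style update in Python; it is not LMArena's exact published methodology, and the model names and votes are invented.

```python
# Illustrative Elo-style ranking from pairwise votes. Not LMArena's exact
# methodology; model names and vote data are invented for the example.
K = 32  # update step size

def expected(r_a: float, r_b: float) -> float:
    """Probability that a player rated r_a beats one rated r_b."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update(ratings: dict, winner: str, loser: str) -> None:
    ea = expected(ratings[winner], ratings[loser])
    ratings[winner] += K * (1 - ea)
    ratings[loser]  -= K * (1 - ea)

ratings = {"model_a": 1000.0, "model_b": 1000.0, "model_c": 1000.0}
votes = [("model_a", "model_b"), ("model_a", "model_c"), ("model_b", "model_c")]
for winner, loser in votes:
    update(ratings, winner, loser)

for name, score in sorted(ratings.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.1f}")
```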
Pay-i's platform measures the revenue, costs and profit margins of generative AI apps running on usage-based pricing models, and predicts inference costs pre-launch to help meet profitability targets
There was little evidence, some of Goldman's analysts pointed out, of organizations worldwide making much of a return on the $1 trillion they had invested in artificial intelligence (AI) tools. Recent research from KPMG found that enthusiasm among enterprise leaders for AI remained high, but that none were yet able to point to significant returns on investment. A Forrester paper warned that some executives might start cutting back on AI investment given their impatience for tangible returns. A study from Appen suggests AI project deployments may already be slowing. Enterprises are right to be skeptical about what GenAI is actually achieving for their businesses, argues David Tepper, co-founder and CEO of Seattle-based start-up Pay-i, and they need more scientific methodologies for analyzing returns, both ahead of deployments and once new AI projects are up and running. "C-suite leaders need forecasts of likely returns and reliable proof that they are being achieved," Tepper says. "That's how they'll pinpoint which GenAI business cases and deployments are genuinely creating new value." Pay-i offers tools to help businesses measure the cost of new GenAI initiatives, broken down into granular detail; such costs are currently opaque, Tepper argues, because they depend on a broad range of factors, from when and how business users make use of GenAI tools to which cloud architecture the business has opted for. In addition, Pay-i's platform allows businesses to assign specific objectives to AI deployments and then to track the extent to which these objectives are achieved, and what value is realized accordingly. The idea is to give enterprises a means to evaluate both sides of the balance sheet for any given AI use case: what it costs and what it generates.
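The granular cost measurement Tepper describes ultimately reduces to attributing per-request inference spend to a business outcome and comparing it with the value generated. A minimal sketch of that arithmetic follows, with entirely hypothetical token prices, traffic volumes and revenue figures; it does not reflect Pay-i's actual platform, pricing data or APIs.

```python
# Hypothetical per-request cost and margin arithmetic for a GenAI feature.
# Prices, traffic and revenue below are invented for illustration; they are
# not Pay-i's data or methodology.
PRICE_PER_1K_INPUT_TOKENS  = 0.0005   # USD, hypothetical
PRICE_PER_1K_OUTPUT_TOKENS = 0.0015   # USD, hypothetical

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Inference cost of a single request under usage-based pricing."""
    return (input_tokens / 1000) * PRICE_PER_1K_INPUT_TOKENS + \
           (output_tokens / 1000) * PRICE_PER_1K_OUTPUT_TOKENS

# Forecast: 50,000 requests/month averaging 800 input / 300 output tokens,
# attached to a feature expected to generate $5,000/month in value.
monthly_cost = 50_000 * request_cost(800, 300)
monthly_value = 5_000.0
margin = (monthly_value - monthly_cost) / monthly_value

print(f"forecast monthly inference cost: ${monthly_cost:,.2f}")
print(f"forecast margin on the feature: {margin:.1%}")
```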
America's biggest banks consider a consortium to reinvent the stablecoin, but it requires a shared governance model, common technical standards, airtight security protocols and legislative momentum as prerequisites
JPMorgan Chase, Bank of America, Wells Fargo and Citigroup are exploring the creation of a jointly operated, fully fiat-backed stablecoin, marking a significant shift from skepticism to strategic investment in crypto by traditional finance. The proposed consortium is reportedly considering using existing rails like Early Warning Services (operator of Zelle) and The Clearing House to develop a new kind of stablecoin infrastructure, one built by regulated entities from the ground up. Their idea? To issue a token that could eventually be used for everything from peer-to-peer payments to B2B settlements, all potentially under the watchful eye of federal regulators. Because the U.S. stablecoin landscape has not yet found shelter under a clear regulatory framework, the banks are still in the exploratory phase, with a shared commitment to finding a model that's compliant, scalable and secure. Their proposed stablecoin would be fully backed by fiat held at the banks and function similarly to other stablecoins, but with a key differentiator: trust in institutional governance. This vision is a clear departure from the early crypto ethos of disrupting incumbents. Instead, it's a bet that those same incumbents are best positioned to bring digital dollars into the mainstream. Still, creating a stablecoin is one thing. Coordinating among multiple banks, each with its own technology stack, risk appetite and strategic priorities, is another. This kind of collaboration will require a shared governance model, common technical standards and airtight security protocols. That's why, for the banks, legislative momentum in the U.S. is a prerequisite. Institutions like JPMorgan and BofA are unlikely to risk their core operations on loosely regulated ventures. Instead, they see regulation as a moat, a way to differentiate themselves from crypto-native competitors and legitimize the space.
Marqeta plans a white-label app that will allow customers to establish a track record for card programs without having to embed Marqeta's solution into their app or website right out of the gate
Under the leadership of Mike Milotich, Marqeta has been making moves to add new revenue sources beyond Jack Dorsey's Block. The plan is diversification through growth, a task easier said than done against the backdrop of macroeconomic and regulatory uncertainty. Milotich currently serves as Marqeta's interim CEO as well as its chief financial officer. Broadly, flexible planning and an emphasis on execution will help Marqeta as an organization reach those goals, he said. The fintech has started planning in quarterly chunks, with mid-quarter check-ins to assess market conditions. Specifically, Marqeta is looking to broaden its customer base with new products, Milotich said, including an expansion into credit card issuing, the addition of more value-added services, such as tokenization and risk services, and new program management services, where Marqeta runs the card program on behalf of the client. "Before, [program management services] used to be more of a bundle, and now we're breaking them up into more a la carte services, which allows our customers a little more flexibility to pick and choose," he said. The card-issuing fintech is also looking to expand abroad, with its pending acquisition of TransAct Pay, a Europe-based BIN sponsorship, e-money licensing and virtual account services company that will allow Marqeta to offer more robust card programs to its multinational clients. "Non-Block [total payment volume] saw continued strength and little to no macro-disruption," KeyBanc Capital Markets analyst Alex Markgraff wrote in a research note. "We view the print as generally positive with respect to new-business, non-Block growth, and macro-related resilience to date. Non-Block TPV grew roughly twice as fast as Block." Block-related revenue was less than half (45%) of Marqeta's total revenue at the end of the quarter, down from 74% at the end of 2022. Marqeta's biggest bets to increase the diversity of its clientele revolve around creating tools that make doing business with the fintech easier. To that end, Marqeta is launching a white-label app that will allow customers to stand up a card program without heavy integration, Milotich said. The white-label app will allow customers to establish a track record for the card program without having to go through the process of embedding Marqeta's solution into their app or website right out of the gate. The white-label app is built with the tools that power its UX Toolkit, a selection of application programming interfaces released in 2024 that are designed to allow customers to more easily embed card solutions into their app or website. At its core, the white-label app is a time-to-market tool.
Wells Fargo’s business data leader stresses the importance of stepping back to assess systemic risk rather than overemphasizing isolated errors in continuous auditing workflows
Nathaniel Bell is the Corporate Functions Business Data Leader at Wells Fargo, where he specializes in optimizing data strategies to support AI initiatives and address organizational challenges. Focusing on bridging infrastructure investments and innovative AI use cases, Nate provides valuable insights into managing risks and aligning AI technologies with business objectives. For his episode in the MindBridge-sponsored series, Nathaniel highlights the ongoing tension in auditing between objectivity and subjectivity. Auditors aim to be objective, but as Nathaniel notes in his podcast appearance, they often work with human-led processes that are inherently subjective, especially when auditors and process owners have different perceptions about what constitutes a risk.
He tells the podcast audience that digital transformation, including AI, can help codify business processes, making them more structured and standardized. The shift will enable auditors to assess risks in a more objective, data-driven way. For example, if something breaks in a system, it becomes immediately transparent and less open to interpretation:
“I tend to focus on highly manual processes because they represent both risk and opportunity. These processes are not only time-consuming, but they also introduce a significant margin for human error. Research shows that in complex spreadsheets, we typically catch only about 70% of errors — leaving a substantial gap in accuracy and oversight. That’s why I always ask: where can we apply AI to reduce that margin of error and drive more reliable, efficient outcomes?”
Nathaniel also reflects on a common pitfall in audit workflows: getting fixated on a single issue within a process and treating it as a major risk without a broader context. He stresses the importance of stepping back to assess systemic risk rather than overemphasizing isolated errors.
Ultimately, Nathaniel believes auditors should and will spend less time on routine tasks and more time on storytelling as AI-driven automation becomes more commonplace in financial institutions. He sees the future of auditing as a discipline that leverages human talent to connect findings with broader business impact, helping stakeholders understand not just what went wrong but why it matters.
USAA taps Dell’s AI PCs leveraging enterprise-grade discrete neural processing units to provide fast and secure on-device inferencing at the edge for LLMs
Dell Technologies Inc. unveiled Dell Pro and Dell Pro Max, the latest additions to the company's AI PC lineup. For customers such as United Services Automobile Association, Dell's AI PC portfolio provides an opportunity to generate productivity boosts and extend AI applications throughout the organization. Dell's AI PC offerings can facilitate a movement the company is seeing toward building intelligent workflows in-house. "We're seeing a lot of customers now start to investigate building their own models and their own AI applications, because we're seeing that's where a lot of the true value is going to be with AI and the enterprise," said Jon Siegal, senior vice president of Dell portfolio marketing. "We're helping a number of companies out there today, and USAA is one of them, to help build these new AI applications and make sure that when they do it, they can do it once and deploy it across an AI fleet of PCs." Dell's new AI PC models leverage neural processing units, accelerators that can optimize AI and machine learning tasks. The Dell Pro Max Plus laptop utilizes an enterprise-grade discrete NPU to provide fast and secure on-device inferencing at the edge for large language models. NPU capabilities for AI PCs are part of what USAA is tracking as it deploys new devices within the company. The deployment of intelligent agents to perform a variety of enterprise tasks will require PCs that can handle a new set of demands, which will likely include self-healing, according to Siegal.