By 2030, preemptive cybersecurity solutions are expected to account for 50% of IT security spending, up from under 5% in 2024. Gartner notes that these solutions employ advanced AI and ML to forecast and mitigate cyber threats through predictive threat intelligence and automated defenses. According to Carl Manion, detection-and-response (DR)-based security will become insufficient against AI-enabled attacks, necessitating preemptive measures. The emergence of the Autonomous Cyber Immune System (ACIS) is deemed crucial, as it addresses the inadequacy of traditional measures amid the evolving global attack surface grid (GASG). The trend will shift toward cybersecurity solutions tailored to specific sectors and threat methodologies, fostering collaboration among niche vendors. Product leaders that fail to adopt preemptive capabilities could suffer significant cyber incidents and near-term market share losses.
A2A payments will reach $195 trillion by 2030, representing 113% growth from $91.5 trillion today, driven by Variable Recurring Payments and real-time rails that enable instant payroll and bill settlements, per Juniper Research.
Account-to-Account (A2A) payments are on track to transform the global payments landscape, with transaction values forecasted to reach $195 trillion by 2030—a staggering 113% rise from $91.5 trillion today, according to a new report by Juniper Research. The surge reflects how real-time infrastructure, advanced data capabilities, and recurring payments are converging to unlock new possibilities for both consumers and businesses. The report highlights the role of real-time payment systems in expanding A2A beyond simple bank transfers. Recurring payments—particularly Variable Recurring Payments (VRPs)—are set to be a cornerstone for A2A’s evolution. VRPs allow businesses to initiate multiple future payments within pre-agreed parameters, minimising admin overhead while offering customers flexibility and control. In the UK, non-sweeping VRPs are seen as critical to unlocking adoption by giving firms a structured yet adaptable way to manage subscription and instalment-based services. The report stresses that opportunities lie beyond developed economies. Emerging markets with supportive regulators and growing real-time payment rails present fertile ground for A2A growth. Vendors that align with evolving compliance frameworks while tailoring solutions to local needs will be best positioned to capitalise. By combining real-time rails, richer data, and recurring capabilities, A2A is set to become the backbone for future digital finance. It can help businesses cut costs, improve liquidity, and deliver more personalised payment experiences—all while competing head-on with cards, wallets, and legacy methods.
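The pre-agreed-parameters idea behind VRPs can be sketched in a few lines (a minimal illustration; the field names and limits below are hypothetical, and real VRP consents follow open banking specifications):

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical mandate parameters; real VRP consents are defined by
# open banking standards and vary by market.
@dataclass
class VRPMandate:
    max_per_payment: float      # cap on any single payment
    max_per_month: float        # cap on total payments in a calendar month
    valid_to: date              # consent expiry
    spent_this_month: float = 0.0

    def authorize(self, amount: float, on: date) -> bool:
        """Approve a merchant-initiated payment only within pre-agreed limits."""
        if on > self.valid_to:
            return False
        if amount > self.max_per_payment:
            return False
        if self.spent_this_month + amount > self.max_per_month:
            return False
        self.spent_this_month += amount
        return True

mandate = VRPMandate(max_per_payment=50.0, max_per_month=120.0,
                     valid_to=date(2030, 1, 1))
print(mandate.authorize(45.0, date(2026, 3, 1)))   # within limits -> True
print(mandate.authorize(80.0, date(2026, 3, 5)))   # exceeds per-payment cap -> False
print(mandate.authorize(50.0, date(2026, 3, 9)))   # ok, monthly total now 95 -> True
print(mandate.authorize(30.0, date(2026, 3, 20)))  # would exceed monthly cap -> False
```

The point of the structure is that the business initiates each payment without fresh customer approval, while the consent itself bounds what the business can do.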
Banks adopt AI for deposit pricing and customer segmentation, while dynamic pricing is hampered by price-discrimination compliance concerns
Banks are cautiously adopting AI in deposit pricing to improve customer segmentation, understand rate sensitivity, and deliver targeted offers, while regulatory hurdles and trust concerns are limiting its use in dynamic pricing. Institutions are moving from intuition-driven or static rate sheets to analytics and machine learning models that assess customer behavior, preferences, and competitive positioning to make more data-driven pricing decisions. With greater insight into customer attributes and preferences, banks say they are using AI to make more informed calls on when to offer pricing exceptions, or how to price their deposit offerings more broadly. AI-driven research on rate-sensitive customers can help paint a picture of future pricing moves across customer segments. For example, Chris Nichols, director of capital markets at SouthState Bank, notes that some less rate-sensitive customers may care more about good customer service than shaving a few basis points off their savings or CD rate. Valley National Bank said it’s in the early stages of using AI to inform pricing strategy. Sanjay Sidhwani, the bank’s chief data and analytics officer, says the bank has used AI — traditional AI and machine learning — to garner insights on which customers are rate-sensitive. But he emphasizes that AI-based analysis isn’t a hands-off process. The bank uses it alongside other data to help inform pricing strategies, including how long someone has been a customer, what products they use, and how they interact with various channels. The bank is also using AI and machine learning to compare its historical competitive positioning — such as how its rates ranked on aggregator sites — with how many new deposits that approach brought in. These insights help inform pricing decisions and also guide how much to spend on marketing a particular deposit product.
To implement true dynamic pricing, banks must navigate regulatory barriers, including compliance with rules on unfair, deceptive, or abusive acts or practices (UDAAP) and concerns about price discrimination, Sidhwani says. Without dynamic pricing, he argues, banks “won’t be able to compete…they’re going to have to get there.” Sidhwani predicts dynamic, AI-powered pricing could arrive within two to three years.
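The rate-sensitivity analysis described above can be sketched in miniature (an illustrative toy, not any bank's model; the segments and figures are invented for the example). The idea is to estimate, per segment, how strongly balance growth responds to a rate advantage over the market:

```python
# Toy sketch: least-squares slope of balance growth against the bank's
# offered-rate advantage (in basis points) as a rate-sensitivity score.

def rate_sensitivity(observations):
    """Least-squares slope of % balance growth vs. rate advantage (bps)."""
    n = len(observations)
    mx = sum(x for x, _ in observations) / n
    my = sum(y for _, y in observations) / n
    num = sum((x - mx) * (y - my) for x, y in observations)
    den = sum((x - mx) ** 2 for x, _ in observations)
    return num / den

# (rate advantage in bps, % balance growth) per invented segment
segments = {
    "rate_shoppers":   [(0, 0.0), (25, 5.0), (50, 10.0)],
    "service_focused": [(0, 1.0), (25, 1.5), (50, 2.0)],
}
for name, obs in segments.items():
    print(name, round(rate_sensitivity(obs), 3))  # 0.2 vs. 0.02
```

A high slope flags customers who chase rates; a low one flags customers who, as Nichols suggests, are unlikely to move for a few basis points, so pricing exceptions can be targeted accordingly.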
RapidFire AI’s engine uses hyper-parallel processing to test and compare multiple LLM configurations simultaneously, speeding up customization and fine-tuning by up to 20X
RapidFire AI’s “rapid experimentation” engine is designed to speed up and simplify large language model (LLM) customization, fine‑tuning and post‑training. Hyper-parallel processing is at its core: instead of just one configuration, users can analyze 20 or more at once, resulting in 20X higher experimentation throughput, the company claims. With RapidFire AI, users can compare potentially dozens of configurations at once on a single machine or across multiple machines — varying base model architectures, training hyperparameters, adapter specifics, data preprocessing and reward functions. The platform processes data in “chunks,” swapping adapters and models in and out to reallocate and maximize GPU use. Users get a live metrics stream on an MLflow dashboard plus interactive control (IC) ops, allowing them to track and visualize all metrics and metadata and to warm-start, stop, resume, clone, modify or prune configurations in real time. The platform is Hugging Face native, works with PyTorch and Transformers, and supports various quantization and fine-tuning methods (such as parameter-efficient fine-tuning, or PEFT, and low-rank adaptation, or LoRA) as well as supervised fine-tuning, direct preference optimization and group relative policy optimization. Using RapidFire AI, the Data Science Alliance has sped up projects 2-3X, according to Ryan Lopez, director of operations and projects; multiple iterations that normally took a week now finish in two days or less. With RapidFire, the team can simultaneously process images and video to see how different vision models perform. RapidFire’s hyperparallelism, automated model selection, adaptive GPU utilization and continual improvement capabilities give customers a “massive increase” in speed and cost optimization compared with in-house hand coding or software tools that focus only on the software-engineering side of model acceleration, noted John Santaferraro, CEO of Ferraro Consulting.
Hyperparallelism accelerates AI-enabled model selection: identifying high-performing models and shutting down low-performing ones while minimizing runtime overhead.
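The chunk-based, multi-configuration scheduling described above can be sketched in plain Python (an illustrative toy, not RapidFire AI's actual engine or API; the configs, scoring and pruning rule are invented for the example):

```python
import random

# Toy sketch of chunk-based hyper-parallel experimentation: several
# training configurations share one "GPU" by taking turns on successive
# data chunks, and clearly underperforming configs are pruned early.
random.seed(0)

configs = {f"cfg-{i}": {"lr": lr, "score": 0.0, "alive": True}
           for i, lr in enumerate([1e-3, 5e-4, 1e-4, 5e-5])}

def train_on_chunk(cfg):
    # Stand-in for a real fine-tuning step; returns a noisy eval score.
    return cfg["score"] + cfg["lr"] * 100 + random.uniform(-0.01, 0.01)

for chunk in range(5):
    # Round-robin: each live config processes the same chunk in turn,
    # emulating adapter/model swapping on shared hardware.
    for name, cfg in configs.items():
        if cfg["alive"]:
            cfg["score"] = train_on_chunk(cfg)
    # Prune: stop configs far behind the current best (early stopping).
    live = [c for c in configs.values() if c["alive"]]
    best = max(c["score"] for c in live)
    for cfg in live:
        if cfg["score"] < best * 0.5 and len(live) > 1:
            cfg["alive"] = False

survivors = [n for n, c in configs.items() if c["alive"]]
print(survivors)
```

The payoff is the same shape as the claimed 20X throughput gain: weak configurations stop consuming GPU time after the first chunks, so the hardware budget concentrates on promising ones.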
Waymo launches robotaxi corporate service with Carvana as first client, enabling companies to subsidize employee rides across five cities
Waymo’s ever-expanding robotaxi aspirations have spread to the corporate world. The self-driving vehicle unit has launched “Waymo for Business,” a new service that lets companies set up accounts so their employees can access robotaxis in cities such as Los Angeles, Phoenix, and San Francisco. Organizations that sign up can subsidize their employees’ rides or purchase promo codes in bulk, which can be handed out to clients, customers, or workers. Waymo for Business rides cost the same as the regular service. One of the first customers is Carvana, the Phoenix-based online used car marketplace. While riders already use Waymo’s robotaxis to commute to work, this is the company’s first coordinated commercial effort to target corporations and other organizations. Waymo has said nearly one in six of its local riders in San Francisco, Los Angeles, and Phoenix rely on its service to commute to work or school. Through Waymo for Business, companies get a business portal that gives them control over their ride programs: corporate customers can dictate the geographic area in which employees use the robotaxis, set pickup and drop-off locations, monitor ride activity, and track their budget.
Apple develops EPICACHE framework reducing LLM memory usage up to 6x through episodic KV-cache compression, while improving accuracy up to 40% and cutting latency up to 2.4x for enterprises
Apple researchers have developed a breakthrough framework called EPICACHE that allows large language models to maintain context across extended conversations while using up to six times less memory than current approaches. The technique could prove crucial as businesses increasingly deploy AI systems for customer service, technical support, and other applications requiring sustained dialogue. “Recent advances in large language models (LLMs) have extended context lengths, enabling assistants to sustain long histories for coherent, personalized responses,” the researchers wrote in their paper. “This ability, however, hinges on Key-Value (KV) caching, whose memory grows linearly with dialogue length and quickly dominates under strict resource constraints.” The Apple team’s solution involves breaking down long conversations into coherent “episodes” based on topic, then selectively retrieving relevant portions when responding to new queries. This approach, they say, mimics how humans might recall specific parts of a long conversation. “EPICACHE bounds cache growth through block-wise prefill and preserves topic-relevant context via episodic KV compression, which clusters conversation history into coherent episodes and applies episode-specific KV cache eviction,” the researchers explained. Testing across three different conversational AI benchmarks, the system showed remarkable improvements. “Across three LongConvQA benchmarks, EPICACHE improves accuracy by up to 40% over recent baselines, sustains near-full KV accuracy under 4–6× compression, and reduces latency and memory by up to 2.4× and 3.5×,” according to the study. The new framework could be particularly valuable for enterprise applications where cost efficiency matters. By reducing both memory usage and computational latency, EPICACHE could make it more economical to deploy sophisticated AI assistants for customer service, technical support, and internal business processes.
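The episode idea can be illustrated with a deliberately simplified sketch. The bag-of-words similarity below is a stand-in assumption, not Apple's actual clustering or KV-eviction method: conversation turns are grouped into topic "episodes," and for a new query only the most relevant episode (and hence only its cache) would be retained.

```python
from collections import Counter
import math

def bow(text):
    """Bag-of-words vector as a Counter."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def segment_episodes(turns, threshold=0.15):
    """Start a new episode when a turn stops resembling the current one."""
    episodes = [[turns[0]]]
    for turn in turns[1:]:
        current = bow(" ".join(episodes[-1]))
        if cosine(bow(turn), current) >= threshold:
            episodes[-1].append(turn)
        else:
            episodes.append([turn])
    return episodes

def retrieve_episode(episodes, query):
    """Keep (the cache for) the episode most similar to the new query."""
    q = bow(query)
    return max(episodes, key=lambda ep: cosine(q, bow(" ".join(ep))))

turns = [
    "my laptop battery drains fast",
    "the battery dies after two hours",
    "also how do I reset my email password",
    "the password reset email never arrives",
]
episodes = segment_episodes(turns)
best = retrieve_episode(episodes, "still waiting on that password reset")
print(len(episodes), best[0])  # the query maps to the password episode
```

In EPICACHE itself the segmentation operates on KV-cache blocks with semantic clustering and episode-specific eviction, but the memory win comes from the same principle: most of a long dialogue is irrelevant to any given query, so its cache need not stay resident.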
Y Combinator debuts “Request for Startups Fintech 3.0” program with Base and Coinbase Ventures, focusing on local stablecoins, asset tokenization and AI financial agents
Y Combinator, Base, and Coinbase Ventures have jointly launched an initiative dubbed Fintech 3.0, urging founders to build financial systems directly on-chain as regulation, infrastructure, and adoption align in crypto and decentralized finance. The concept is that the next wave of fintech innovation will skip over legacy intermediary rails and instead use blockchain underpinnings as native infrastructure. The initiative is accepting applications from startups working on themes like expanding stablecoins beyond the U.S. dollar into local currencies, tokenization of assets (stocks, credit, real estate), and consumer-facing apps and AI-driven financial agents. It builds on Base (the Ethereum overlay blockchain tied to Coinbase), which already is being used for global USDC payments and is positioning itself as a backbone for decentralized financial services. The article frames Fintech 3.0 as a natural progression: Fintech 1.0 is banking going digital (online banking, mobile apps on top of legacy rails), Fintech 2.0 is fintechs layering new services (lending, payments, neobanks) on those rails, and Fintech 3.0 means moving the rails themselves onto programmable, transparent blockchain infrastructure. In this paradigm, financial logic, settlement, identity, compliance, and asset issuance can be native to the chain rather than grafted on. The initiative is both a branding and builder program, aiming to seed and support startups that can realize that vision, with backing, infrastructure access, and capital from Y Combinator, Base, and Coinbase Ventures. It’s a bet that the future of finance will be built on-chain, not beside it.
Fabrix.ai launches Agentic Operational Intelligence Platform featuring data fabric, AI fabric and automation fabric, enabling weeks-to-production deployment for enterprise networking operations
AgentOps, a new paradigm, aims to reduce friction, automate decision-making, and embed intelligence into operations. Beyond a technical upgrade, AgentOps represents the reimagining of IT operations in an era of intelligent autonomy, according to Shailesh Manjrekar, chief AI and marketing officer of Fabrix.ai Inc. “What AgentOps … also [being] known as AgenticOps means is really how you operationalize the entire agentic stack from getting to a prompt all the way to looking at the MCP tools, getting to the proper guardrails, experimenting and then eventually getting to the lifecycle management of these agents.” By embedding large language models into the decision-making process, enterprises can move from reactive monitoring to proactive, predictive and eventually autonomous operations. Networking remains the nervous system of modern enterprises — and increasingly, of AI itself, according to Blili. Yet, networks are growing more complex, spanning data centers, cloud, wireless, mobility and telco domains. Agentic AI is uniquely suited to this challenge because of its ability to reason, deliberate and break down problems into solvable tasks. “In the areas of networking, that’s particularly important because some of the things that you’re trying to automate and improve require enormous amounts of data and understanding of relationships between entities,” Blili said. “In emerging markets, the highest [customer loyalty] number I ever saw was SD-WAN at 30%,” Kerravala said. “To have a number that high just shows the demand today for AI ops and AgentOps and how much customers are looking forward to it helping them with their operational issues.”
Stablecoins will rival Visa and Mastercard when smart contracts embed consumer protections and issuers fund insurance pools for fraud coverage
Stablecoins won’t unseat incumbent payment platforms such as Visa and Mastercard until the blockchain tokens feature robust consumer protections, according to Guillaume Poncin, chief technology officer of payment company Alchemy. Stablecoin projects must integrate these features to attract everyday users, Poncin said. Consumer protection features can be embedded directly in smart contracts, while stablecoin issuers and payment platforms can fund their own insurance pools for payouts in cases of fraud, he said. He expects traditional payment rails and stablecoins to merge: “I expect every major payment processor will integrate stablecoins, and every bank will issue its own. The future is one where traditional rails are enhanced by blockchain’s efficiency and new use cases. For cross-border payments and emerging markets, stablecoins are already winning. For domestic retail, we will see hybrid models combining instant settlement with consumer protections,” he said.
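The kind of protection Poncin describes can be sketched as plain Python modeling the contract logic (an assumption-laden toy, not a real deployment; production contracts would live on-chain in a language like Solidity). Funds sit in escrow for a dispute window before release, and disputed payments draw refunds from a pooled insurance reserve:

```python
# Toy model of escrow-with-dispute-window logic a consumer-protection
# smart contract might encode, plus an issuer-funded insurance pool.

class EscrowPayment:
    DISPUTE_WINDOW = 3 * 24 * 3600  # seconds before funds auto-release

    def __init__(self, amount, paid_at):
        self.amount = amount
        self.paid_at = paid_at
        self.state = "held"

    def release(self, now):
        """Merchant receives funds only after the dispute window closes."""
        if self.state == "held" and now >= self.paid_at + self.DISPUTE_WINDOW:
            self.state = "released"
        return self.state

    def dispute(self, now, insurance_pool):
        """Within the window, refund the buyer from the pooled reserve."""
        if self.state == "held" and now < self.paid_at + self.DISPUTE_WINDOW:
            insurance_pool["balance"] -= self.amount
            self.state = "refunded"
        return self.state

pool = {"balance": 1000.0}
p = EscrowPayment(amount=50.0, paid_at=0)
print(p.release(now=3600))                        # too early -> still "held"
print(p.dispute(now=7200, insurance_pool=pool))   # within window -> "refunded"
print(pool["balance"])                            # 950.0
```

This is the chargeback-like behavior cards provide today, expressed as settlement-layer rules rather than a card network's back-office process.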
Okta launches verified digital credentials with selective disclosure and biometric security, powering adoption in finance, healthcare, and government
Traditional identity checks often expose far more personal data than necessary, increasing the risk of leaks and misuse. Verified digital credentials flip this model by allowing consumers to share only what’s required in a transaction, according to Vivek Raman, vice president and general manager of Okta Personal at Okta Inc. “If I want to go to the liquor store and buy a bottle of wine, which I’ll probably do after this interview, today I hand over my driver’s license, which has my full name, my home address, my photo [and] all that stuff, where all they really need to know is am I over 21 or not? So, selective disclosure with verifiable credentials lets you, the user, be in control of what data you share.” That principle extends to online interactions, where businesses can verify that users meet legal or compliance requirements without maintaining sensitive personal data. Verified digital credentials can expire, live only on a consumer’s device and be locked with biometrics such as Face ID, making them more secure than static, physical documents, according to Raman. The healthcare and government sectors are leading the adoption, demonstrating how the technology can reduce onboarding times and enhance trust, according to Raman. Governments are driving standardization through initiatives such as U.S. state mobile driver’s licenses and forthcoming European legislation mandating interoperable citizen IDs. Looking ahead, Raman envisions individuals carrying a portfolio of verified digital credentials in their digital wallets, much like credit cards have transitioned to tap-to-pay systems. Adoption hurdles remain, but the convergence of standards and government momentum points to broad mainstream use in the near future, according to Raman.
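Selective disclosure along the lines Raman describes can be sketched with salted hash commitments, roughly in the spirit of the SD-JWT approach (a toy sketch; a real credential would also carry an issuer signature over the digests, and the claim names here are illustrative):

```python
import hashlib, json, secrets

def commit(claim_name, value, salt):
    """Salted digest committing to one claim without revealing it."""
    blob = json.dumps([salt, claim_name, value]).encode()
    return hashlib.sha256(blob).hexdigest()

# Issuer: commit to every claim; in practice the digest list is signed.
claims = {"name": "Alex Doe", "address": "12 Main St", "over_21": True}
salts = {k: secrets.token_hex(8) for k in claims}
credential_digests = {k: commit(k, v, salts[k]) for k, v in claims.items()}

# Holder: disclose only the age predicate, with its salt, nothing else.
disclosure = ("over_21", claims["over_21"], salts["over_21"])

# Verifier: recompute the digest and check it against the committed set.
name, value, salt = disclosure
assert commit(name, value, salt) in credential_digests.values()
print("verified:", name, "=", value)  # name and address stay hidden
```

The liquor-store scenario maps directly: the verifier learns `over_21 = True` and can check it was attested by the issuer, while the full name and home address never leave the holder's device.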
