Kira, a next-generation fintech infrastructure startup, has officially emerged from stealth mode, announcing $3 million in revenue in its first year of operations. Kira is building the first all-in-one infrastructure for launching embedded fintech products, supercharged by vertical AI agents and stablecoins. By abstracting the complexity of DeFi, Kira gives businesses a turnkey solution to launch embedded products such as cross-border remittances, treasury automation, currency trading, import/export payments, and global payroll. Kira’s platform is the first of its kind: an end-to-end, stablecoin-native infrastructure built for scale. It includes:
- Universal Payment Gateway: abstracts complex rails into a single, AI-driven interface; accepts payments via cash, debit, ACH, or SWIFT through a white-labeled payment link.
- AI-Powered Treasury & Wallet Infrastructure: vertical agents manage yield-bearing wallets and optimize treasury across stablecoins and U.S. Treasuries, generating up to 7% yield.
- Agentic Compliance: automates onboarding and regulatory workflows, from KYC/KYB to AML, VASP, and sanctions screening, with integrated APIs and customizable agent sessions.
- AI-Managed Instant Global Payouts: autonomous agents initiate global payouts in seconds, using localized payment methods that deliver directly into bank accounts in 35+ countries.
Nymbus core system integrates Bud Financial’s transaction data enrichment and AI-driven insights tech supporting real-time affordability checks and dynamic risk profiling by analyzing actual income and spending behavior
Nymbus, a full-stack banking platform for U.S. banks and credit unions, has announced an agreement with Bud Financial, a leading provider of transaction data enrichment and AI-driven insights for the financial services industry. Nymbus will integrate Bud’s market-leading suite of personal financial management (PFM) widgets into the Nymbus Banking Platform, enhancing the digital banking experience and enabling smarter, more contextual customer engagement. The integration will provide customers with a clear and intuitive view of their finances, deliver proactive content and financial tools through Bud’s widgets, and tailor experiences across digital channels with categorized, contextual data. Nymbus Engage, a new customer engagement solution, will help community banks and credit unions activate data in smarter ways and drive more meaningful, long-term relationships. Bud has been a pioneer in applying AI to financial data since 2015, helping institutions turn raw transaction streams into structured, actionable insights. Bud’s capabilities span three products:
- Engage – Personalized PFM: Banking clients embed Bud’s enriched data into their apps via widgets to deliver real-time, hyper-personalized financial experiences. Use cases include “left-to-spend” balances that account for upcoming bills, visualizations of spending habits such as weekend spikes or predicted future spending, and personalized nudges such as suggesting savings transfers or setting budgets around overspending categories. These insights are powered by Bud APIs combined with large language models (LLMs) to provide contextual, automated intelligence tailored to each customer journey.
- Drive – Portfolio Analytics & Marketing: Drive aggregates individual-level insights across the entire customer base, enabling banks to perform behavioral segmentation, detect deposit activity triggers, and identify churn risks. These insights integrate with CRM systems such as Salesforce or Braze to enable data-driven marketing and relationship management.
- Assess – Credit & Cashflow Underwriting: Bud’s technology supports real-time affordability checks and dynamic risk profiling by analyzing actual income and spending behavior. This enables more accurate credit decisions and reduces default rates by grounding assessments in real cashflow data rather than static credit scores.
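The “left-to-spend” idea is easy to make concrete. Below is a minimal, hypothetical sketch (not Bud’s API): subtract the bills still due before the end of the current month from the current balance. All names and figures are illustrative.

```python
from datetime import date

def left_to_spend(balance: float, upcoming_bills: list[tuple[date, float]],
                  today: date) -> float:
    """Current balance minus bills still due before the end of this month.

    `upcoming_bills` is a list of (due_date, amount) pairs, e.g. recurring
    payments inferred from enriched transaction data.
    """
    # The first day of next month marks the cutoff for "this month's" bills.
    month_end = date(today.year + (today.month == 12), today.month % 12 + 1, 1)
    due_this_month = sum(amount for due, amount in upcoming_bills
                         if today <= due < month_end)
    return balance - due_this_month

# $1,200 balance; rent and a utility bill still due in July. August rent
# falls outside the window and is ignored.
bills = [(date(2025, 7, 25), 800.0), (date(2025, 7, 28), 60.0),
         (date(2025, 8, 1), 800.0)]
remaining = left_to_spend(1200.0, bills, today=date(2025, 7, 10))  # 340.0
```

A production version would also have to predict variable bills and handle pay cycles, which is where the enriched, categorized transaction data comes in.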
Kroger customers with loyalty card can grab printed flyer near the entrance of stores and scan a barcode on the flyer at checkout to download all the digital coupons at once without the need for app or internet
The Kroger Co. is giving its loyal shoppers easier access to digital deals. The national supermarket chain recently started placing paper flyers that mirror its weekly digital deals near the entrances of its stores. Kroger customers with a loyalty card can grab a printed handout when they enter the store to find out about the weekly digital deals. Then, at checkout, shoppers scan a barcode on the flyer to download all of the digital coupons at once. Each coupon can be used up to five times in a single transaction. Using the printed flyer means shoppers don’t have to go online or use the Kroger app to download each individual coupon. The Northeastern supermarket chain Stop & Shop also recently announced a brand-wide rollout of its innovative Savings Station, an in-store kiosk that lets customers quickly and easily activate all weekly circular digital coupons, as well as personalized offers, with no smartphone, internet access, or computer required. These in-store efforts from Kroger and Stop & Shop are finally helping to address digital discrimination in grocery: seniors who aren’t tech-savvy and low-income families without smartphones can wind up paying higher prices because they can’t easily access grocers’ digital deals.
Verizon created the role of dedicated Customer Champion, agents who will handle customer inquiries from start to finish, reducing the exasperating transfer process that plagues call centers
Verizon is trying to alleviate a persistent pain point by giving consumers a single point of contact, reducing the exasperating transfer process that plagues call centers. In an innovative move that few companies have implemented, Verizon recently created the role of dedicated Customer Champion: agents who handle customer inquiries from start to finish. Under the plan, every customer service call will be assigned a champion, who will work on resolving the issue and updating the customer as the sole Verizon contact. The concept is creative and potentially very useful for customers and service teams alike, as are the specifics: the Verizon champions will update customers on progress through the channel of the customer’s choosing. Verizon’s overall focus on customer experience is laudable, yet the customer champions initiative stands out most of all. Research shows that being transferred from agent to agent is a particular source of customer frustration. People who get in touch often have a complaint from the start, and having to get new agents up to speed slows resolution times and can hurt customer satisfaction scores. By eliminating transfers, the initiative should reduce customer frustration and increase satisfaction, while building customer trust that the brand has their back. Dealing with a single agent may even create an emotional connection between customer and company, potentially leading to more memorable customer experience moments. Verizon is also empowering agents to provide continuity and take personal ownership of issues, which could promote accountability and pride within service teams; the result could be higher employee satisfaction and a better employee experience. It should also free agents for more productive work: if first-contact resolution reduces repeat calls, representatives can spend more time tracking the results of their interactions with individual customers, learning from how issues were resolved, and perhaps even preventing future occurrences of the same problem.
QR Codes make a splash in the real time payments pool – X9 payment QR code standard introduces a common language for encoding payment data, so single QR code can work across multiple networks, such as FedNow, ACH, and TCH RTP
QR code payments have taken a big step toward becoming not only a mainstream payment option but also one that can accelerate the adoption of real-time payments. Late last week, the technology was used to facilitate a transaction over the FedNow network using the X9 standard. The demonstration transferred funds in one second from a credit union to a Top 4 bank in the United States. During the test, a bill was presented to a payer with a merchant-generated QR code. Upon scanning the code, the payer authorized the transaction via their credit union’s mobile app. Assisting in the transaction was technology from Matera, a fintech specializing in instant payments and QR code technology. Also involved were Tyfone Inc., a digital-banking and -payments platform provider, and real-time payments provider Payfinia Inc., a Tyfone company. Key to making the transaction possible was the X9 payment QR code standard. Developed by the Accredited Standards Committee X9, the standard introduces a common language for encoding payment data in “a secure, structured, and extensible way,” according to Matera. As a result, a single QR code can work across multiple networks, such as FedNow, the automated clearing house (ACH), and The Clearing House’s RTP network, and with different banks. The standard also supports multiple use cases, such as consumer-to-business, business-to-business, and peer-to-peer payments. Matera chief executive and co-founder Carlos Netto said: “It opens the door to a broad range of use cases: bill payments, in-store payments and ecommerce, all initiated by QR code and settled in real time. Ultimately, this payment QR code can accelerate the adoption of instant payments.”
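To make the idea of network-agnostic payment data concrete, here is a deliberately simplified sketch of encoding structured payment fields into a single QR string and parsing them back. The tag names and layout below are hypothetical illustrations only; they are not the actual X9 format.

```python
# Illustrative only: a simplified tag=value payload showing how one QR string
# can carry payment data that any supporting network or bank could parse.
# Field names ("amount", "networks", ...) are invented, NOT the X9 spec.

def encode_payment_qr(fields: dict[str, str]) -> str:
    """Serialize fields as tag=value pairs joined by ';' (sorted for stability)."""
    return ";".join(f"{tag}={value}" for tag, value in sorted(fields.items()))

def decode_payment_qr(payload: str) -> dict[str, str]:
    """Parse the payload back into a field dictionary."""
    return dict(pair.split("=", 1) for pair in payload.split(";"))

invoice = {
    "amount": "125.00",
    "currency": "USD",
    "payee_account": "123456789",
    "routing": "021000021",
    "networks": "FEDNOW,RTP,ACH",  # rails the payee can settle over
    "ref": "INV-2025-0042",
}
payload = encode_payment_qr(invoice)      # string rendered into the QR image
assert decode_payment_qr(payload) == invoice
```

A real standard adds what this sketch omits: signatures or checksums for integrity, versioning for extensibility, and binary-efficient encoding so the QR stays scannable.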
Sakana AI’s new technique allows multiple LLMs to cooperate on a single task by enabling them to perform trial-and-error and combine their unique strengths to solve problems that are too complex for any individual model
Sakana AI has introduced a new technique that allows multiple large language models (LLMs) to cooperate on a single task, effectively creating a “dream team” of AI agents. The method, called Multi-LLM AB-MCTS (Adaptive Branching Monte Carlo Tree Search), enables models to perform trial and error and combine their unique strengths to solve problems that are too complex for any individual model. For enterprises, this approach provides a means to develop more robust and capable AI systems: instead of being locked into a single provider or model, businesses could dynamically leverage the best aspects of different frontier models, assigning the right AI to the right part of a task to achieve superior results. Sakana AI’s new algorithm is an “inference-time scaling” technique. In the team’s evaluations, on tasks where a clear path to a solution existed, the algorithm quickly identified the most effective LLM and used it more frequently. More impressively, the team observed instances where the models solved problems that had previously been impossible for any single one of them. To help developers and businesses apply this technique, Sakana AI has released the underlying algorithm as an open-source framework called TreeQuest, available under an Apache 2.0 license (usable for commercial purposes). TreeQuest provides a flexible API, allowing users to implement Multi-LLM AB-MCTS for their own tasks with custom scoring and logic.
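TreeQuest’s actual search is tree-based; as a rough illustration of only the trial-and-error allocation idea, the sketch below uses a flat epsilon-greedy loop over stand-in model callables. Every name here (`models`, `score`, `allocate_attempts`) is hypothetical, not the TreeQuest API.

```python
import random

def allocate_attempts(models, task, score, budget=20, epsilon=0.2):
    """Spend an inference budget across candidate models by trial and error,
    favoring whichever model has the best average score so far.

    models: name -> callable(task) -> answer  (stand-ins for LLM calls)
    score:  answer -> float in [0, 1]         (stand-in for a verifier)
    """
    stats = {name: {"tries": 0, "total": 0.0} for name in models}
    best_score, best_answer = float("-inf"), None
    for _ in range(budget):
        untried = [n for n, s in stats.items() if s["tries"] == 0]
        if untried:                      # try every model at least once
            name = untried[0]
        elif random.random() < epsilon:  # keep exploring occasionally
            name = random.choice(list(models))
        else:                            # exploit the best average so far
            name = max(stats, key=lambda n: stats[n]["total"] / stats[n]["tries"])
        answer = models[name](task)
        s = score(answer)
        stats[name]["tries"] += 1
        stats[name]["total"] += s
        if s > best_score:
            best_score, best_answer = s, answer
    return best_answer, stats
```

Multi-LLM AB-MCTS goes further: at each step it also decides whether to refine an existing candidate answer or generate a fresh one, balancing both choices within a search tree rather than a flat loop.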
Kioxia’s algorithm enables vector searches directly on SSDs and reduces host memory requirements, letting architects of RAG systems fine-tune the optimal balance for a variety of contrasting workloads without storing index data in DRAM
Kioxia Corporation, a world leader in memory solutions, announced an update to its KIOXIA AiSAQ (All-in-Storage ANNS with Product Quantization) software. This new open-source release introduces flexible controls that let system architects set the balance point between search performance and the number of vectors, which are opposing factors within the system’s fixed SSD capacity. The resulting benefit enables architects of RAG systems to fine-tune the optimal balance for specific workloads and their requirements, without any hardware modifications. KIOXIA AiSAQ software uses a novel approximate nearest neighbor search (ANNS) algorithm that is optimized for SSDs and eliminates the need to store index data in DRAM. By enabling vector searches directly on SSDs and reducing host memory requirements, KIOXIA AiSAQ technology allows vector databases to scale largely without the restrictions imposed by limited DRAM capacity. This latest update lets administrators select the optimal balance for a variety of contrasting workloads within a RAG system. It makes KIOXIA AiSAQ technology a suitable SSD-based ANNS not only for RAG applications but also for other vector-hungry applications such as offline semantic search. With growing demand for scalable AI services, SSDs offer a practical alternative to DRAM for delivering the high throughput and low latency that RAG systems require. KIOXIA AiSAQ software makes it possible to meet these demands efficiently, enabling large-scale generative AI without being constrained by limited memory resources.
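The “PQ” in AiSAQ stands for product quantization, the standard technique for shrinking vectors so large indexes fit in cheap storage. The toy sketch below (hand-picked codebooks, not Kioxia’s implementation) shows the core idea: split a vector into subvectors and store only each subvector’s nearest-centroid index.

```python
# Minimal product quantization sketch. A vector is split into M subvectors,
# each mapped to its nearest centroid in a small per-subspace codebook, so
# only M small integer codes (plus the shared codebooks) need storing.
# Real systems learn codebooks with k-means; these are toy values.

def encode(vec, codebooks):
    """Replace each subvector with the index of its nearest centroid."""
    m = len(codebooks)            # number of subspaces
    d = len(vec) // m             # dimensions per subspace
    codes = []
    for i, book in enumerate(codebooks):
        sub = vec[i * d:(i + 1) * d]
        dists = [sum((a - b) ** 2 for a, b in zip(sub, c)) for c in book]
        codes.append(dists.index(min(dists)))
    return codes

def decode(codes, codebooks):
    """Reconstruct the (lossy) vector from its compact codes."""
    out = []
    for code, book in zip(codes, codebooks):
        out.extend(book[code])
    return out

# Two subspaces of 2 dims each, 2 centroids per codebook.
codebooks = [
    [(0.0, 0.0), (1.0, 1.0)],
    [(0.0, 1.0), (1.0, 0.0)],
]
vec = [0.9, 1.1, 0.1, 0.8]
codes = encode(vec, codebooks)     # compact representation, one index per subspace
approx = decode(codes, codebooks)  # lossy reconstruction used for distance estimates
```

The compression is what makes SSD residence practical: the search scans compact codes and touches full vectors on flash only for the final candidates, instead of holding everything in DRAM.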
Context engineering is replacing prompt engineering as the key to AI performance through smart context pipelines that integrate semantic search engines, versioned memory banks, and modular knowledge sources to guide LLMs effectively
Context engineering is fast becoming the backbone of serious AI deployments, especially those involving large language models (LLMs). Context engineering is the deliberate design, structuring, and management of the information ecosystem surrounding an AI model. Think of it as crafting not just the question, but the entire briefing memo, mood board, data warehouse, and toolkit that help an LLM give a decent answer. If you’re building a trading bot, customer service assistant, or research analyst powered by an LLM, you don’t want it guessing in the dark. Context engineering ensures it walks into the room prepped, briefed, and ready to speak intelligently about your client’s portfolio, market trends in sub-Saharan Africa, or whatever the task demands. According to LlamaIndex, success in enterprise AI depends less on tweaking prompts and more on designing context pipelines that can integrate domain-specific knowledge, user preferences, compliance requirements, and temporal awareness. Finance is a perfect example: in financial analysis, client-facing chatbots, and portfolio recommendations, context is key. With smart context pipelines, the LLM knows whether it’s speaking to a junior retail trader or a seasoned institutional player and can deliver the information in the appropriate manner. As LangChain’s engineers put it, prompt engineering is fine for demos, but context engineering is what gets deployed in production. And production is where the money is. Context engineering involves integrating semantic search engines, versioned memory banks, and modular knowledge sources so the model doesn’t hallucinate a balance sheet or invent nonexistent market indices.
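As a hedged sketch of what such a pipeline looks like in code, the toy example below assembles retrieved documents, a user profile, and conversation memory into one briefing for the model. `retrieve` is a keyword-overlap stand-in for a real semantic search engine, and every name here is illustrative rather than any framework’s API.

```python
def retrieve(query: str, corpus: dict[str, str], k: int = 2) -> list[str]:
    """Toy keyword-overlap ranking standing in for a vector search engine."""
    terms = set(query.lower().split())
    ranked = sorted(corpus.items(),
                    key=lambda kv: len(terms & set(kv[1].lower().split())),
                    reverse=True)
    return [text for _, text in ranked[:k]]

def build_context(query, corpus, profile, memory):
    """Compose the prompt the LLM actually sees: role, facts, history, question."""
    docs = retrieve(query, corpus)
    return "\n".join([
        f"Audience: {profile['expertise']} {profile['role']}",
        "Relevant documents:",
        *(f"- {d}" for d in docs),
        f"Recent conversation: {'; '.join(memory[-3:])}",
        f"Question: {query}",
    ])

corpus = {
    "fx": "naira exchange rate fell against the dollar this quarter",
    "rates": "central bank held interest rates steady",
    "crypto": "bitcoin volatility rose sharply",
}
profile = {"role": "trader", "expertise": "junior retail"}
memory = ["asked about FX exposure"]
prompt = build_context("what happened to the naira exchange rate",
                       corpus, profile, memory)
```

The design point: the question is the smallest part of the prompt. The audience line steers tone, the retrieved documents ground the facts, and the memory line supplies continuity, which is exactly the layering the paragraph above describes.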
Contify’s agentic AI delivers trusted, decision-ready market and competitor insights by continuously analyzing unstructured updates from millions of verified external and internal sources and connecting information through Knowledge Graph
AI-native Market and Competitive Intelligence (M&CI) platform Contify has launched Athena, its proprietary agentic AI insights engine. Athena eliminates manual M&CI work and delivers trusted, decision-ready market and competitor insights with enterprise-grade accuracy, enabling organizations to make faster, more confident decisions and compete with greater agility. Generic AI assistants like ChatGPT, Gemini, or Perplexity often hallucinate, respond from unverified web content, and even fabricate sources, making them unsuitable for enterprise use. Athena, by contrast, is built on a foundation of data integrity coupled with strict AI-usage guardrails. It continuously analyzes unstructured updates from millions of verified external and internal sources, synthesizes what matters, and connects information through a proprietary Knowledge Graph, which stores the organization’s context, to produce reliable insights. Mohit Bhakuni, CEO of Contify, says: “Intelligence professionals are often stretched thin, grappling with overwhelming data, frequent stakeholder requests, and generic AI assistant limitations. Athena transforms this by automating grunt work and providing rich, verified insights. It frees them to focus on strategic priorities and become trusted advisors their organizations rely on.”
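As an illustration of what “connecting information through a knowledge graph” means mechanically, here is a miniature subject-predicate-object store. The entities and relations are invented; Contify’s Knowledge Graph is proprietary and far richer.

```python
from collections import defaultdict

class TinyKG:
    """Toy knowledge graph: a set of (subject, predicate, object) triples."""

    def __init__(self):
        self.out = defaultdict(set)  # subject -> {(predicate, object), ...}

    def add(self, subj, pred, obj):
        self.out[subj].add((pred, obj))

    def related(self, entity):
        """Everything directly connected to an entity, sorted for stability."""
        return sorted(self.out[entity])

kg = TinyKG()
kg.add("AcmeCorp", "acquired", "BetaPay")          # from a news update
kg.add("BetaPay", "operates_in", "payments")       # from a regulatory filing
kg.add("AcmeCorp", "competitor_of", "OurCompany")  # internal org context

# Connecting the dots across sources: a competitor just moved into payments.
neighbors = kg.related("AcmeCorp")
```

Linking each incoming update to stored organizational context like this is what lets scattered, unstructured items add up to a single decision-relevant insight.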
New Liquid Foundation Models can be deployed on edge devices without the need for extended infrastructure of connected systems and are superior to transformer-based LLMs on cost, performance and operational efficiency
If you can simply run operations locally on a hardware device, that creates all kinds of efficiencies, including some related to energy consumption and fighting climate change. Enter the rise of new Liquid Foundation Models (LFMs), which move away from the traditional transformer-based LLM design. The new LFM models already boast performance superior to transformer-based models of comparable size, such as Meta’s Llama 3.1-8B and Microsoft’s Phi-3.5 3.8B. The models are engineered to be competitive not only on raw performance benchmarks but also in terms of operational efficiency, making them ideal for a variety of use cases, from enterprise-level applications in financial services, biotechnology, and consumer electronics, to deployment on edge devices. These post-transformer models can run on devices, cars, drones, and planes, with applications in predictive finance and predictive healthcare. LFMs can do the job of a GPT while running locally on devices. And if they run offline on a device, you don’t need the extended infrastructure of connected systems: no data center, no cloud services, none of that. In essence, these systems can be low-cost and high-performance, and that’s just one aspect of how people talk about applying a “Moore’s law” concept to AI. It means systems are getting cheaper, more versatile, and easier to manage, quickly.
