Vanguard launched its first client-facing GenAI capability that equips financial advisors with efficient and personalized content for client communications. Vanguard’s Client-Ready Article Summaries produce customizable synopses of its top-read market perspectives, tailored by financial acumen, investing life stage, and tone. The tool also generates the disclosures required to accompany the article summaries, creating an efficient and seamless information-sharing experience for advisors. Sid Ratna, Head of Digital and Analytics for Vanguard Financial Advisor Services, said: “The best advisors can get even better with AI in their client toolkit, and Vanguard’s Client-Ready Article Summaries help advisors drive personalized and actionable conversations that enhance client relationships over the long term.” Vanguard Financial Advisor Services provides investment services, portfolio analytics and consulting, and research to over 50,000 advisory firms comprising 150,000 advisors.1 Supporting advisors so they can best serve their clients is integral to Vanguard’s mission of giving investors the best chance for investment success. In addition to rolling out the Client-Ready Article Summaries, Vanguard continues to experiment with advanced technologies, including spatial and quantum computing and blockchain, to improve investment outcomes, expand investor access, and deliver personalized experiences.
When chatbots replace search bars, retailers will need to ensure their inventory, pricing and product information are optimized for AI crawling and decision-making algorithms
Shopping agents are positioned to become invisible sales conversion engines. As OpenAI, Perplexity, and others race to capture this trillion-dollar opportunity, the future of how consumers search and buy hinges on how these platforms will make money and how their algorithms will decide which products to show consumers (or buy on their behalf). The answers will determine whether these chatbots deliver on their promise of truly personalized commerce or become a more sophisticated version of today’s pay-to-play search and commerce platforms. Their strength as conversational interfaces, capable of understanding complex requests, makes them well suited to complete purchases without users ever visiting a physical or digital store or leaving the conversation, and an emerging agentic AI commerce ecosystem stands ready to advance those ambitions. The speed at which GenAI chatbots have amassed an audience shows how these models could upend the retail and commerce status quo by changing where consumers start their searches and letting them end with a purchase, with few steps or little friction in between. As AI agents increasingly handle the search and presentation of results (or complete sales outright), traditional retailers risk becoming invisible in the commerce ecosystem altogether. In this world, payment credentials might emerge as the real winner, as embedded offers, financing, rewards and other data-driven incentives become an invisible part of the transaction. The venture capital pouring into these platforms signals expectations of massive adoption and ROI. GenAI represents a new form of commerce orchestration across marketplaces, social signals and retail inventory through a single, simple conversational interface. The unique nature of conversational AI suggests that different approaches may be not just possible but necessary to compete.
LLM platforms can build on GenAI’s growing user trust, its potential to create a distinctive new shopping and buying utility, and its ability to monetize these new forms of value. How these models present recommendations and act on behalf of users presents new challenges, because that process is currently opaque, and new opportunities, because of what it could become for the entire commerce ecosystem. That could reshape how retailers and the wider ecosystem adapt their products and platforms to drive sales. For retailers and brands, it now means competing for both customer and AI attention: they will need to ensure their inventory, pricing and product information are optimized for AI crawling and decision-making algorithms. The winners in this new era may be those who recognize that when conversations drive commerce, trust itself becomes the product, and that monetizing trust requires a different business model.
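One concrete, if hedged, illustration of "optimized for AI crawling" is structured product data. Retailers already publish schema.org Product markup as JSON-LD so that crawlers (and plausibly shopping agents) can read inventory, pricing and availability without scraping page layouts; the product, SKU and values below are invented for the example.

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Trail Runner Shoe",
  "sku": "TR-1042",
  "brand": { "@type": "Brand", "name": "ExampleBrand" },
  "offers": {
    "@type": "Offer",
    "price": "89.99",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock",
    "inventoryLevel": { "@type": "QuantitativeValue", "value": 12 }
  }
}
```

A fragment like this, embedded in a product page, gives a decision-making algorithm unambiguous price and stock signals, which is exactly the kind of machine-readable groundwork the article suggests retailers will need.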
Sam Altman’s World iris ID project is evolving into a superapp with a digital currency, a bank account number, and partnerships with gaming specialist Razer and dating platform operator Match Group
AI visionary Sam Altman is leading a project to distinguish real people from software fakes on the internet using eye scans. The World identification project is now entering the money transfer and financial services business: users can send money to friends and family free of charge via the World app and will have an account number for interactions with the banking system. The project responds to the fact that it is becoming increasingly difficult to distinguish people from software online. Users create a profile called a “World ID” using an eye scan on World’s scanners, called Orbs, and as an incentive, World is launching its own digital currency. The project is also targeting the gaming and online dating markets through partnerships with gaming specialist Razer and dating platform operator Match Group. With these new functions, World is moving closer to the vision of a super app that covers all possible areas of everyday life, similar to WeChat in Asia. World, a web3 project started by Altman and Alex Blania that was formerly known as Worldcoin, is based on the idea that it will eventually be impossible to distinguish humans from AI agents on the internet. To address this, World wants to create digital “proof of human” tools; these announcements are part of its effort to get millions of people to sign up. After scanning your eyeball with one of its silver metal Orbs, or now one of its Orb Minis, World will give you a unique identifier on the blockchain to verify that you’re a human.
IBM’s small Tiny Time Mixer models tackle network automation challenges where traditional large language models fall short, thanks to their understanding of time-series data
IBM Corp. is leaning into compact, specialized models — such as its new Tiny Time Mixers — to tackle network automation challenges where traditional large language models fall short. The key lies in understanding time-series data, something most large language models simply weren’t built to handle, according to Andrew Coward, general manager of software networking at IBM. “There’s new models, and IBM’s built one called Tiny Time Mixer. Very small parameters, million parameters, and they understand time. We can take network data, and then we can apply it to weather information or TV schedules. Then we can make predictions about what’s likely to happen. What we are seeing is the democratization of AI,” he said. “It’s almost free to put data in and run it against AI models, but if you need to train it, that’s the expensive bit. The training piece is coming down massively in costs.” Using small models, IBM helps address telco infrastructure problems, such as bandwidth congestion and poor network coverage. This explains why AI model accuracy takes center stage, Coward pointed out.
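Coward's example of combining network data with weather information or TV schedules is, in forecasting terms, conditioning a prediction on exogenous signals. The sketch below is purely illustrative of that idea, not IBM's Tiny Time Mixer (which is a trained neural architecture); the traffic numbers, prime-time flag, and boost factor are invented.

```python
# Illustrative only: a toy forecaster that conditions a network-traffic
# prediction on an exogenous signal (a prime-time TV flag), in the spirit
# of the weather/TV-schedule example. NOT IBM's Tiny Time Mixer model.

def forecast_next(traffic, primetime_next, window=3, primetime_boost=1.4):
    """Naive forecast: average of the last `window` readings, scaled up
    when the next interval is flagged as prime time."""
    baseline = sum(traffic[-window:]) / window
    return baseline * (primetime_boost if primetime_next else 1.0)

# Hourly bandwidth readings in Gbps (made-up numbers).
traffic = [4.0, 4.2, 3.8, 4.0, 4.5, 5.0]

off_peak = forecast_next(traffic, primetime_next=False)  # 4.5
peak = forecast_next(traffic, primetime_next=True)       # about 6.3
print(off_peak, peak)
```

A real model like TTM learns these relationships from data rather than using a hand-set boost factor, but the structure, historical channel plus exogenous channels in, point forecast out, is the same.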
AWS announces Q Developer agentic AI that will generate code using the entire codebase in the GitHub repository
Amazon Web Services (AWS) has introduced a preview for its agentic artificial intelligence software development assistant, Q Developer, for Microsoft Corp.’s open-source code repository GitHub. GitHub is a platform used by millions of developers to store vast amounts of source code for software projects, enabling collaboration, version control, and code management. Q Developer is now available in the GitHub Marketplace, providing AI-powered capabilities such as feature development, code review, and Java code migration directly within the GitHub interface. Q Developer acts as a teammate, automating tedious tasks. Developers can assign issues to it, such as feature requests, and it will generate code using the entire codebase in the GitHub repository by following the description in the request. The AI agent will automatically update the code repository with the changes, performing syntactically sound checks and using GitHub Actions for security vulnerability scans and code quality checks. It will also use its own feedback to improve the code. Q Developer also offers easy migration for legacy codebases, allowing developers to assign a GitHub issue called “Migration” and assign it to the Amazon Q transform agent. This agent will handle all of the migration from the earlier version of Java to the newest, ensuring developers have access to the most recent features and capabilities.
IBM’s hybrid technologies enable businesses to build and deploy AI agents with their own enterprise data, and a new Agent Catalog in watsonx Orchestrate simplifies access to 150+ agents
IBM is unveiling new hybrid technologies that break down the longstanding barriers to scaling enterprise AI – enabling businesses to build and deploy AI agents with their own enterprise data. IBM is providing a comprehensive suite of enterprise-ready agent capabilities in watsonx Orchestrate to help businesses put them into action. The portfolio includes: 1) Build-your-own-agent in under five minutes, with tooling that makes it easier to integrate, customize and deploy agents built on any framework – from no-code to pro-code tools for any kind of user. 2) Pre-built domain agents specialized in areas like HR, sales and procurement – with utility agents for simpler actions like web research and calculations. 3) Integration with 80+ leading enterprise applications from providers like Adobe, AWS, Microsoft, Oracle, Salesforce Agentforce, SAP, ServiceNow, and Workday. 4) Agent orchestration to handle the multi-agent, multi-tool coordination needed to tackle complex projects like planning workflows and routing tasks to the right AI tools across vendors. 5) Agent observability for performance monitoring, guardrails, model optimization, and governance across the entire agent lifecycle. IBM is also introducing the new Agent Catalog in watsonx Orchestrate to simplify access to 150+ agents and pre-built tools from both IBM and its wide ecosystem of partners. IBM is also introducing webMethods Hybrid Integration, a next-generation solution that replaces rigid workflows with intelligent and agent-driven automation. It will help users manage the sprawl of integrations across apps, APIs, B2B partners, events, gateways, and file transfers in hybrid cloud environments.
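The orchestration idea in item 4, routing tasks to the right specialized agent, can be sketched minimally. This is not the watsonx Orchestrate API; the agent names and keyword rules below are invented for illustration.

```python
# Illustrative sketch of agent orchestration: a supervisor inspects each
# task and dispatches it to a specialized agent, falling back to a
# generic utility agent. NOT the watsonx Orchestrate API.

def hr_agent(task):
    return f"HR agent handling: {task}"

def procurement_agent(task):
    return f"Procurement agent handling: {task}"

def web_research_agent(task):
    return f"Utility web-research agent handling: {task}"

# Keyword-based routing table; a production router would use an LLM or
# intent classifier instead of substring matches.
ROUTES = [
    (("onboard", "leave", "payroll"), hr_agent),
    (("purchase", "vendor", "invoice"), procurement_agent),
]

def route(task):
    """Send a task to the first agent whose keywords match."""
    lowered = task.lower()
    for keywords, agent in ROUTES:
        if any(k in lowered for k in keywords):
            return agent(task)
    return web_research_agent(task)

print(route("Onboard the new analyst"))
print(route("Approve the vendor invoice"))
print(route("Summarize market news"))
```

The value of a platform-level orchestrator is precisely that this routing, plus the multi-tool coordination, guardrails and observability around it, does not have to be hand-rolled per project.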
Nvidia has launched Parakeet-TDT-0.6B-v2, an automatic speech recognition (ASR) model that can transcribe 60 minutes of audio in 1 second with an average “Word Error Rate” of just 6.05%
Nvidia has launched Parakeet-TDT-0.6B-v2, an automatic speech recognition (ASR) model that can, in Nvidia’s words, “transcribe 60 minutes of audio in 1 second [mind blown emoji].” The model currently tops the Hugging Face Open ASR Leaderboard with an average Word Error Rate (the percentage of words the model transcribes incorrectly) of just 6.05%. To put that in perspective, it approaches proprietary transcription models such as OpenAI’s GPT-4o-transcribe (with a WER of 2.46% in English) and ElevenLabs Scribe (3.3%). The model has 600 million parameters and combines the FastConformer encoder and TDT decoder architectures. It can transcribe an hour of audio in just one second, provided it is running on Nvidia’s GPU-accelerated hardware; the benchmark is measured at an RTFx (Real-Time Factor) of 3386.02 with a batch size of 128, placing it at the top of current ASR benchmarks maintained by Hugging Face. Parakeet-TDT-0.6B-v2 is aimed at developers, researchers, and industry teams building applications such as transcription services, voice assistants, subtitle generators, and conversational AI platforms. The model supports punctuation, capitalization, and detailed word-level timestamping, offering a full transcription package for a wide range of speech-to-text needs. Developers can deploy the model using Nvidia’s NeMo toolkit. The setup process is compatible with Python and PyTorch, and the model can be used directly or fine-tuned for domain-specific tasks. The open-source license (CC-BY-4.0) also allows for commercial use, making it appealing to startups and enterprises alike. Parakeet-TDT-0.6B-v2 is optimized for Nvidia GPU environments, supporting hardware such as the A100, H100, T4, and V100 boards. While high-end GPUs maximize performance, the model can still be loaded on systems with as little as 2GB of RAM, allowing for broader deployment scenarios.
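The Word Error Rate cited above has a standard definition: word-level edit distance (substitutions, deletions and insertions) divided by the number of words in the reference transcript. The short implementation below shows the metric itself; the sample sentences are invented.

```python
# Word Error Rate (WER): word-level edit distance between a reference
# transcript and an ASR hypothesis, divided by reference word count.

def wer(reference, hypothesis):
    ref = reference.split()
    hyp = hypothesis.split()
    # Classic dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

ref = "the market opened higher on strong earnings"
hyp = "the market opened high on strong earnings"
print(round(wer(ref, hyp), 4))  # one substitution over 7 words: 0.1429
```

On this scale, Parakeet’s 6.05% average means roughly one word in seventeen is wrong across the leaderboard’s test sets.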
Specialized blockchains are shaping the future of DeFi attracting robust ecosystems and offering developers more freedom to innovate in areas like algorithmic credit scoring, IP rights management, and tokenized commodities
Specialized blockchains like Berachain, Story (IPfi), Unichain, Monad, and MegaETH are leading a wave of specialized blockchain launches designed to serve diverse decentralized finance applications. These chains challenge the notion that a handful of general-purpose networks can support all use cases and declare that the future is not one monolithic chain to rule them all. Financial institutions are entering DeFi with expectations shaped by decades of traditional finance, and the demand is clear: performance-optimized platforms catering to high-speed trading, tokenized intellectual property, and sophisticated real-world asset markets. Critics warn that a highly fragmented landscape could dilute liquidity and create inefficiencies, making it harder for assets to flow seamlessly across different platforms. However, emerging data from beta deployments indicates that specialized networks can attract robust ecosystems, offering developers more freedom to innovate in areas like algorithmic credit scoring, IP rights management, and tokenized commodities. Experiments in liquid staking, real-world asset tokenization, and hybrid on-chain/off-chain data verification further validate the need for these chains as key infrastructure layers for the next wave of institutional DeFi. The long-term viability of this multi-chain paradigm will depend on whether interoperability frameworks can facilitate frictionless asset movement and whether institutions gain confidence in the governance and security of specialized chains. The future of blockchains is not monolithic; it’s modular, specialized, and taking off. As the market evolves, it’s crucial to develop seamless user interfaces and robust interoperability mechanisms that abstract away technical friction.
Playtron rolls out Game Dollar, a programmable stablecoin for gaming that powers purchases, subscriptions and rewards with a seamless, consistent payments experience
Playtron, maker of a Web3 gaming operating system, announced plans to roll out Game Dollar, a stablecoin for gaming that will power purchases, subscriptions, and rewards across Playtron’s and, in the future, other gaming ecosystems. Game Dollar will power Playtron’s GameOS, which aims to unify gaming ecosystems across platforms, where gaming economies remain mostly siloed. With Game Dollar, Playtron seeks to create a neutral, programmable financial layer across games and gaming marketplaces, giving gamers a seamless and consistent experience for payments and rewards. Game Dollar will be built on top of the M0 stablecoin platform, with seamless payments APIs powered by Bridge, supporting game marketplaces, publishers, and gamers. Importantly, Game Dollar will be available in a new, first-of-its-kind handheld gaming console, the SuiPlay0X1, allowing for payments and rewards on a wide range of PC games as well as new titles developed for the console using the Sui blockchain. “Programmable stablecoins are the next evolution of digital assets, and Game Dollar is a powerful example of how this innovation unlocks real utility in one of the world’s most dynamic sectors,” said Adeniyi Abiodun, chief product officer of Mysten Labs. Game Dollar will initially launch exclusively on Sui. M0 is the universal platform powering builders of application-specific stablecoins; with M0, developers can build safe, programmable and interoperable digital dollars.
Visa invests in stablecoin infrastructure platform BVNK, which processes more than $12 billion annually for companies like Ferrari and Rapyd
Stablecoin infrastructure platform BVNK received an investment from Visa. The new capital comes on the heels of a $50 million Series B funding round in December. “We’re proud to support BVNK as they help accelerate global adoption of stablecoin payments,” Rubail Birwadker, head of growth products and partnerships at Visa, said. “Stablecoins are fast becoming a part of global payment flows, and Visa invests in new technologies and builders like BVNK, staying at the forefront of what’s next in commerce to better serve our clients and partners.” There was $27 trillion in total stablecoin transaction volume globally across 1.25 billion transactions in 2024, per Visa Onchain Analytics. BVNK processes more than $12 billion annually for companies like Ferrari and Rapyd. “We’re experiencing a once-in-a-generation shift to a new foundational payment technology, powered by stablecoins,” BVNK co-founder and CEO Jesse Hemson Struthers said. “At BVNK, we’re building the infrastructure to make these new rails accessible to businesses, empowering them to operate at the speed of today’s economy.”
