Cash App saw a marked slowdown in the first quarter: monthly transacting actives using the digital wallet showed 0% year-over-year growth, remaining flat at 57 million users. Drill down a bit and Cash App Card usage slowed as well, with growth in monthly transacting actives easing to 7% (at 25 million), where in previous quarters that growth rate had been in the mid-teens percentage points. Inflows growth slowed to 8%, reaching $76.8 billion in the first quarter, down from the 17% year-over-year growth logged in the year-ago first quarter. On an individual basis, inflows came out to $1,355 per transacting active in the latest quarter, up 8%, also down from earlier double-digit growth rates. The read-across here is that, at least for now, users are arguably being conservative about how much money they want to — or can — put to work with the digital wallet as the entry point into the Block financial ecosystem. This performance is attributed to changing consumer behavior, including a shift in spending away from discretionary items like travel and media toward non-discretionary areas like groceries and gas. “Tax refunds are an important seasonal driver of Cash App inflows. This year, we saw a pronounced shift in consumer behavior during the time period that we typically see the largest disbursements, late February and into March,” said CEO Jack Dorsey. “This coincided with inflows coming in below our expectations. During the quarter, non-discretionary Cash App Card spend in areas like grocery and gas was more resilient, while we saw a more pronounced impact to discretionary spending in areas like travel and media. We believe this consumer softness was a key driver of our forecast miss.”
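As a quick sanity check on the per-user figure, dividing quarterly inflows by the monthly active count lands in the same neighborhood as the disclosed number; the small gap presumably reflects how Block averages actives over the period. A minimal sketch using only the figures quoted above:

```python
# Back-of-the-envelope check of inflows per transacting active, using only
# the figures quoted above (Block's actual methodology may differ, e.g. it
# may average actives across the quarter).
total_inflows_usd = 76.8e9       # Q1 inflows, $76.8 billion
monthly_actives = 57e6           # Cash App monthly transacting actives

inflow_per_active = total_inflows_usd / monthly_actives
print(f"Inflows per transacting active: ${inflow_per_active:,.0f}")
# -> roughly $1,347, in the neighborhood of the reported $1,355

# Year-ago figure implied by the reported 8% growth in the per-active metric
reported_per_active = 1355
prior_year_per_active = reported_per_active / 1.08
print(f"Implied year-ago figure: ${prior_year_per_active:,.0f}")  # ~ $1,255
```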
Marketplaces’ third-party sellers’ efforts to stock up to avoid the cost of tariffs are inadequate because shoppers are also buying ahead
The efforts of third-party sellers on platforms like Amazon to stock up on goods to avoid the cost of tariffs will reportedly work for only a limited time. Because shoppers are also buying ahead to avoid the impact of tariffs, merchants will eventually sell down their inventory, place new orders, and be faced with the challenge of trying to avoid price increases. It is unlikely that sellers can stock up on enough inventory to meet their needs for more than six months, which means they will likely feel the full impact of tariffs in the third or fourth quarter. Amazon CEO Andy Jassy said that demand had not yet softened because of tariffs and that, if anything, the company had seen “heightened buying in certain categories that may indicate stocking up in advance of any potential tariff impact.” Amazon pulled forward inventory in the first quarter, while many marketplace merchants accelerated shipments to U.S. warehouses to insulate customers from price spikes. Jassy added that Amazon’s risk is muted relative to rivals because many traditional retailers buy from middlemen who themselves import from China, “so the total tariff will be higher for these retailers than for China-direct sellers” on Amazon’s platform.
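To see why the cushion runs out within roughly two quarters, a simple runway calculation helps: pre-tariff inventory divided by monthly sell-through gives months of cover, and any consumer pull-forward shortens it. The numbers below are purely hypothetical, chosen only to illustrate the timing argument, not figures from the article:

```python
# Hypothetical inventory-runway sketch: how long pre-tariff stock lasts when
# shoppers also buy ahead. All numbers are illustrative.
pre_tariff_inventory_units = 60_000   # stock a seller pulled forward
normal_monthly_sales_units = 10_000   # typical sell-through per month
pull_forward_factor = 1.25            # shoppers buying ahead lifts demand 25%

effective_monthly_sales = normal_monthly_sales_units * pull_forward_factor
months_of_cover = pre_tariff_inventory_units / effective_monthly_sales
print(f"Months of pre-tariff cover: {months_of_cover:.1f}")  # ~4.8 months

# A seller who stocked up in Q1 would therefore be reordering at tariffed
# prices around Q3, which is when the full cost impact shows up.
```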
Personal “digital defense AI agents” can be useful for individuals to keep a lid on the types of bad actors that could otherwise jeopardize systems
The idea of a personal “digital defender” in the form of an AI agent is not yet widely discussed on the web. Alex “Sandy” Pentland describes it this way: your AI agent watches all of the other agent activity that is aimed at you and intervenes on your behalf. In a way, it’s like having a public defender in court: there’s a legal effort against you, so you need your own advocate to represent your side. It’s also somewhat like consumer reporting – Pentland mentioned how Consumer Reports has been doing this kind of work for 80 years with polls and other tools. Another similar idea is the cybersecurity agents created by a company called Twine, which are intended to protect people from cyberattacks. Essentially, Pentland argued, a bad actor can easily throw a system out of balance by being “just a little edgy,” making small changes that lead to a domino effect that can be detrimental. He used the example of a traffic jam, which starts off as just one car in dense traffic changing its behavior. This type of game theory, he asserted, has to be factored into how we create our digital defense agent networks. With all of this in mind, it’s probably a good idea to think about building those digital defense agents. They might not be perfect right away, but they might be the defense we need against an emerging army of hackers wielding some of the most potent technologies we’ve ever seen. The idea also feeds back into the broader debate about open source versus closed source models, and when tools should be published for all the world to use. It’s imperative to keep a lid on the types of bad actors that could otherwise jeopardize systems. In the cryptocurrency days, we had the notion of a 51% attack: as soon as somebody controlled more than half of a blockchain network’s computing power, they effectively controlled the ledger, with no exceptions.
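Pentland’s traffic-jam point is easy to make concrete: in a simple follow-the-leader model, one car braking briefly is enough to ripple a slowdown back through dense traffic long after the original car has recovered. The sketch below is an illustrative toy model, not anything taken from his work:

```python
# Toy follow-the-leader model: one brief brake by the lead car propagates
# backwards through dense traffic. Purely illustrative of the "small change,
# big domino effect" point; not a calibrated traffic model.
N_CARS, STEPS = 10, 30
speeds = [[30.0] * N_CARS]  # everyone cruising at 30 m/s

for t in range(1, STEPS):
    prev = speeds[-1]
    new = prev[:]
    # Lead car (index 0) taps the brakes for two time steps, then recovers.
    new[0] = 15.0 if t in (1, 2) else min(30.0, prev[0] + 3.0)
    # Each follower reacts to the car ahead with a lag, and brakes harder
    # than it accelerates (an asymmetry that lets the disturbance persist).
    for i in range(1, N_CARS):
        ahead = prev[i - 1]
        if ahead < prev[i]:
            new[i] = prev[i] - min(8.0, prev[i] - ahead)   # brake hard
        else:
            new[i] = prev[i] + min(2.0, ahead - prev[i])   # recover slowly
    speeds.append(new)

# The slowdown reaches the last car well after the lead car has recovered.
for i in range(N_CARS):
    trough = min(row[i] for row in speeds)
    when = min(range(STEPS), key=lambda step: speeds[step][i])
    print(f"car {i}: minimum speed {trough:4.1f} m/s at step {when}")
```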
LLMs can still be prohibitively expensive for some, and as with all ML models, LLMs are not always accurate. There will always be use cases where leveraging an ML implementation is not the right path forward. The key considerations for AI project managers evaluating customers’ needs for an AI implementation include the following (see the sketch after this list):
The inputs and outputs required to fulfill your customer’s needs: An input is provided by the customer to your product, and the output is provided by your product. So, for a Spotify ML-generated playlist (an output), inputs could include customer preferences and ‘liked’ songs, artists and music genres.
Combinations of inputs and outputs: Customer needs can vary based on whether they want the same or a different output for the same or a different input. The more permutations and combinations of inputs and outputs we need to support at scale, the more we need to turn to ML rather than rule-based systems.
Patterns in inputs and outputs: Patterns in the required combinations of inputs and outputs help you decide what type of ML model to use. If there are patterns in those combinations (like reviewing customer anecdotes to derive a sentiment score), consider supervised or semi-supervised ML models over LLMs because they might be more cost-effective.
Cost and precision: LLM calls are not always cheap at scale, and the outputs are not always precise or exact, despite fine-tuning and prompt engineering. Sometimes you are better off with supervised models such as neural networks that classify an input using a fixed set of labels, or even rule-based systems, instead of using an LLM.
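As a rough illustration of that decision framework, the permutation count, presence of patterns, and tolerance for cost and imprecision can be turned into a simple triage function. The thresholds and labels below are assumptions for the sketch, not guidance from the article:

```python
# Illustrative triage: rules vs. classic supervised ML vs. an LLM.
def choose_approach(io_permutations: int,
                    has_clear_patterns: bool,
                    needs_exact_outputs: bool,
                    cost_sensitive: bool) -> str:
    # Few input/output combinations: hand-written rules are usually enough.
    if io_permutations < 100:
        return "rule-based system"
    # Patterned combinations (e.g. review text -> sentiment label) suit
    # supervised or semi-supervised models, which are cheaper to run at scale.
    if has_clear_patterns and (needs_exact_outputs or cost_sensitive):
        return "supervised / semi-supervised model"
    # Open-ended, highly varied inputs and outputs are where an LLM earns
    # its per-call cost, accepting some imprecision.
    return "LLM"

# Example: deriving a sentiment score from customer anecdotes.
print(choose_approach(io_permutations=10_000,
                      has_clear_patterns=True,
                      needs_exact_outputs=True,
                      cost_sensitive=True))
# -> supervised / semi-supervised model
```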
AI radically transforms agile software development by reducing the need for multiple teams and diminishing cross-team dependencies
Agile’s focus on delivering working software frequently has evolved into continuous integration/continuous delivery practices. AI is now pushing this boundary further toward what we might call “continuous creation.” When code generation approaches real-time, the limiting factor isn’t producing code but verifying it. AI offers solutions here as well—automated testing, security scanning, and quality analysis can be AI-enhanced. AI agents can write unit tests for new code and help create end-to-end tests, improving quality guarantees. The most successful teams will master this balance between acceleration and validation, exploring more ideas, failing faster, and converging on optimal solutions more quickly—all while maintaining high quality. These transformations create opportunities to streamline traditional Scrum processes. Teams can allocate a higher percentage of their sprint to spontaneous improvements as implementing features and bug fixes with AI may be faster than the overhead of including them in sprint planning. For architecture reviews, AI can serve as your first wave of feedback—a mental sparring partner to develop ideas before presenting to a committee. The AI-written summary can be shared asynchronously, often eliminating the need for formal meetings altogether. Retrospectives should now include discussions about AI usage. The improved individual productivity allows organizations to streamline overhead processes, leading to further increases in velocity. Teams can tackle larger, more complex problem spaces, and projects that previously required multiple teams can often be handled by a single team. Cross-team dependencies—a perennial challenge in scaled agile—diminish significantly. What’s most remarkable about AI’s impact is how it reinforces rather than replaces agile’s core values.
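As a concrete example of the acceleration-plus-validation loop, here is the kind of pytest file an AI agent might be asked to draft for a newly generated helper before it enters the usual quality gates. Both the function and the tests are hypothetical, written to show the shape of the workflow rather than any specific tool:

```python
# Hypothetical example: a freshly AI-generated helper plus the unit tests an
# AI agent might draft for it before it enters the normal CI quality gates.
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by `percent`, which must be between 0 and 100."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Tests like these are cheap for an agent to generate and cheap for a human
# to review, which is where the verification bottleneck moves.
def test_typical_discount():
    assert apply_discount(200.0, 25) == 150.0

def test_zero_and_full_discount():
    assert apply_discount(99.99, 0) == 99.99
    assert apply_discount(99.99, 100) == 0.0

def test_invalid_percent_rejected():
    with pytest.raises(ValueError):
        apply_discount(10.0, 120)
```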
PhotoShelter enables complex organizations to partition their digital asset libraries across different departments or teams, while maintaining a unified platform and contract
PhotoShelter, the digital asset management (DAM) platform, launched a new feature that enables complex organizations to partition their digital asset libraries across different departments or teams, while maintaining a unified platform and contract. Within large organizations, asset access must be tightly managed so that teams can focus on what is relevant to them, without the risk of accidental changes or exposure of sensitive content. This feature enables organizations to segment their library, allowing each team to have a secure workspace and maintain control over its assets and workflows. This not only reduces confusion and clutter but also minimizes the risk of exposing sensitive content or having it unintentionally altered by others. And because all of this happens within a single platform and contract, organizations avoid the inefficiencies and costs associated with using multiple DAM vendors or separate accounts.
Key benefits: Prevents unauthorized access or inadvertent changes to sensitive departmental assets; eliminates inefficiencies from managing multiple DAM vendors or separate accounts; consolidates billing, support contacts, and sharing processes; reduces costs while improving organizational asset security.
Use cases: 1) Higher education: Dozens of university departments can manage their content with a single PhotoShelter account, maintaining independent control of assets and access while benefiting from campus-wide integration and a single contract. 2) Corporate environments: Enterprises can allow separate departments to manage assets in a single library while maintaining each group’s control over its specific assets. 3) Healthcare systems: Providers can maintain stronger HIPAA compliance by allowing separate teams to manage content, ensuring patient data is visible only to those who need access.
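The partitioning model described above can be pictured as workspace-scoped permissions inside a single account. The sketch below is a generic illustration of that idea, not PhotoShelter’s actual data model or API:

```python
# Generic sketch of department-scoped asset workspaces under one account.
from dataclasses import dataclass, field

@dataclass
class Workspace:
    name: str
    members: set[str] = field(default_factory=set)
    assets: dict[str, str] = field(default_factory=dict)  # asset_id -> title

@dataclass
class Account:
    workspaces: dict[str, Workspace] = field(default_factory=dict)

    def add_asset(self, user: str, workspace: str, asset_id: str, title: str):
        ws = self.workspaces[workspace]
        if user not in ws.members:
            raise PermissionError(f"{user} cannot modify '{workspace}'")
        ws.assets[asset_id] = title

    def visible_assets(self, user: str) -> dict[str, str]:
        # A user only ever sees assets from workspaces they belong to.
        visible = {}
        for ws in self.workspaces.values():
            if user in ws.members:
                visible.update(ws.assets)
        return visible

# One account (one contract), two partitioned teams.
acct = Account({
    "athletics": Workspace("athletics", {"coach_kim"}),
    "admissions": Workspace("admissions", {"dana"}),
})
acct.add_asset("coach_kim", "athletics", "img_001", "Season opener photos")
print(acct.visible_assets("dana"))       # {} -- no cross-team visibility
print(acct.visible_assets("coach_kim"))  # {'img_001': 'Season opener photos'}
```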
Bank-fintech-PSP consortium to promote use cases for commercial variable recurring payments (cVRPs) in the UK
A group of 31 fintechs, high street banks, challenger banks and payment providers have agreed to put up initial funding for a new entity that will be wholly owned and run by industry. Barclays, GoCardless, Mastercard, Monzo, Plaid, Revolut and Wise are among the backers. The proposed initial use cases for cVRPs will focus on selected regulated industries such as payments to utility and rail companies, regulated financial firms, e-money institutions, government bodies, and charities. Offering cVRPs in these areas would give Brits better control over regular payments, as well as a frictionless experience when buying goods or services from websites. Henk Van Hulle, CEO of Open Banking Limited, says: “This is a significant moment for the industry, and I sincerely thank the organisations that have committed to fund efforts to create a company that will carry forward the important work on cVRPs. It is testament to the collaborative nature of our ecosystem that it can be industry-led.”
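For readers unfamiliar with what “control over regular payments” means in practice, the core idea of a cVRP is a bank-held consent with customer-set limits that every subsequent collection is checked against. The sketch below is illustrative only; the field names and limits are assumptions, not taken from the Open Banking cVRP specification:

```python
# Illustrative check of a collection against a commercial VRP-style consent.
# Field names and limits are hypothetical, not the official cVRP spec.
consent = {
    "payee": "Example Utility Co",
    "max_individual_amount_gbp": 150.00,   # cap on any single collection
    "max_monthly_total_gbp": 300.00,       # rolling cap the customer agreed to
}

def payment_allowed(amount_gbp: float, spent_this_month_gbp: float) -> bool:
    """A collection only executes if it stays inside the consent limits."""
    within_single = amount_gbp <= consent["max_individual_amount_gbp"]
    within_period = spent_this_month_gbp + amount_gbp <= consent["max_monthly_total_gbp"]
    return within_single and within_period

print(payment_allowed(95.00, spent_this_month_gbp=180.00))   # True
print(payment_allowed(175.00, spent_this_month_gbp=0.00))    # False: over per-payment cap
```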
Vanguard unveils generative AI client summaries for financial advisors
Vanguard launched its first client-facing GenAI capability, which equips financial advisors with efficient and personalized content for client communications. Vanguard’s Client-Ready Article Summaries produce customizable synopses of its top-read market perspectives, tailored by financial acumen, investing life stage, and tone. The tool also generates the necessary disclosures to accompany the article summaries, creating an efficient and seamless information-sharing experience for advisors. Sid Ratna, Head of Digital and Analytics for Vanguard Financial Advisor Services, said: “The best advisors can get even better with AI in their client toolkit, and Vanguard’s Client-Ready Article Summaries help advisors drive personalized and actionable conversations that enhance client relationships over the long-term.” Vanguard Financial Advisor Services provides investment services, portfolio analytics and consulting, and research to over 50,000 advisory firms comprising 150,000 advisors. Supporting advisors so they can best serve their clients is integral to Vanguard’s mission of giving investors the best chance for investment success. In addition to rolling out the Client-Ready Article Summaries, Vanguard continues to experiment with advanced technologies, including spatial and quantum computing and blockchain, to improve investment outcomes, expand investor access, and deliver personalized experiences.
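A plausible shape for this kind of capability is a templated request that carries the personalization dimensions (financial acumen, life stage, tone) and always appends the required disclosure to whatever the model returns. The sketch below is a generic illustration under those assumptions, not Vanguard’s implementation; the disclosure text is a placeholder:

```python
# Generic sketch of a "client-ready summary" request: personalization knobs go
# into the prompt, and a disclosure is always appended to the model output.
DISCLOSURE = ("For illustrative purposes only. Not investment advice. "
              "All investing is subject to risk, including possible loss of principal.")

def build_summary_prompt(article_text: str, acumen: str, life_stage: str, tone: str) -> str:
    return (
        f"Summarize the following market perspective for a client.\n"
        f"Client financial acumen: {acumen}\n"
        f"Client investing life stage: {life_stage}\n"
        f"Tone: {tone}\n"
        f"Keep it under 150 words.\n\n{article_text}"
    )

def client_ready_summary(model_output: str) -> str:
    # Whatever the model produces, the compliance text travels with it.
    return f"{model_output}\n\n{DISCLOSURE}"

prompt = build_summary_prompt("(full article text)", acumen="novice",
                              life_stage="nearing retirement", tone="reassuring")
# The prompt would be sent to the firm's approved LLM endpoint; here we only
# show the assembly and the disclosure step.
print(client_ready_summary("(model-generated summary would appear here)"))
```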
Standards for bank tokens proposed by Kinexys by JP Morgan and MIT
Kinexys by JP Morgan, the bank’s blockchain arm, and the Massachusetts Institute of Technology’s Digital Currency Initiative (MIT DCI) have collaborated on a paper exploring standards for bank tokens on open blockchains. The authors suggest primarily relying on existing Ethereum standards, but propose two new ones they believe are needed for interbank payments. They also suggest areas where regulations might be relaxed for blockchain-based bank payments. By open blockchains they mean permissionless blockchains as well as open permissioned blockchains such as Unified Ledgers and Singapore’s Global Layer One; in the latter case, a key differentiating feature is that the blockchain node operators are regulated. Part of the paper explores potential standards for bank tokens to enable interoperability for payments between banks. It maps various bank token functions against existing Ethereum token standards. However, this mapping process revealed a couple of large gaps, particularly around payment orchestration. One example is AML and fraud analysis, which relies on large datasets, so it would be processed off chain and currently would be executed before the payment is initiated. An ERC-20 transfer carries just three parameters – the payer and recipient wallet addresses and the amount – but banks need more variables. So, when a user wants to make a payment, the wallet would request the format of the payment information needed by the bank (or other entity with authority) and present the appropriate screen to the user for input. Once the user has entered the data, the bank responds to the wallet with an authorization, which is included in the on-chain transfer request. The transfer and authorization would be validated on-chain, for example, to ensure that the payment amount does not exceed the amount authorized. Stepping back, JP Morgan is keen for these standards to be “designed to be narrow in scope and componentized in a way that allows them to be easily composed with other standards,” the authors wrote.
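To make the orchestration flow concrete, the sketch below walks through the sequence as this summary reads it: the wallet asks the bank which payment fields it needs, the bank returns a bounded authorization after its off-chain checks, and the on-chain transfer is only valid if it stays within that authorization. Everything here (names, fields, the toy digest standing in for a signature) is illustrative, not the proposed Kinexys/MIT standard itself:

```python
# Illustrative walk-through of the wallet <-> bank <-> chain flow described
# above. Field names and the toy "authorization" are assumptions for the
# sketch, not the proposed standard.
import hashlib, json

def bank_required_fields() -> list[str]:
    # Step 1: wallet asks the bank which payment details it needs beyond the
    # three ERC-20 transfer parameters (payer, recipient, amount).
    return ["payer", "recipient", "amount", "purpose_code", "recipient_name"]

def bank_authorize(payment: dict) -> dict:
    # Step 2: bank runs its off-chain checks (AML / fraud screening against
    # large datasets would happen here) and returns a bounded authorization.
    digest = hashlib.sha256(json.dumps(payment, sort_keys=True).encode()).hexdigest()
    return {"max_amount": payment["amount"], "payment_digest": digest}

def onchain_transfer_valid(amount: int, authorization: dict) -> bool:
    # Step 3: the transfer and authorization are validated on-chain, e.g. the
    # transferred amount must not exceed what the bank authorized.
    return amount <= authorization["max_amount"]

payment = dict(zip(bank_required_fields(),
                   ["0xPayer", "0xRecipient", 1_000, "UTIL", "Acme Water"]))
auth = bank_authorize(payment)
print(onchain_transfer_valid(1_000, auth))  # True: within the authorization
print(onchain_transfer_valid(1_500, auth))  # False: exceeds authorized amount
```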