Banks have long used traditional AI and machine learning techniques for various functions, such as customer service bots and decision algorithms that respond to market swings faster than any human could. But modern generative AI is different from prior AI/ML methods, with its own strengths and weaknesses. Hari Gopalkrishnan, Bank of America’s chief information officer and head of retail, preferred, small business, and wealth technology, said generative AI is a new tool that offers new capabilities rather than a replacement for prior AI efforts. “We have a four-layer framework that we think about with regards to AI,” Gopalkrishnan said. The first layer is rules-based automation that takes actions based on specific conditions, such as collecting and preserving data about a declined credit card transaction when one occurs. The second is analytical models, such as those used for fraud detection. The third layer is language classification, which Bank of America used to build Erica, a virtual financial assistant, in 2016.

“Our journey of Erica started off with understanding language for the purposes of classification,” Gopalkrishnan said. But the company isn’t generating anything with Erica, he added: “We’re classifying customer questions into buckets of intents and using those intents to take customers to the right part of the app or website to help them serve themselves.” The fourth layer is generative AI.

Given that history, it would be reasonable to expect banks to turn generative-AI tools into new chatbots that serve as better versions of Bank of America’s Erica, or as autonomous financial advisors. Instead, the most immediate changes came to internal processes and tools. Bank of America is pursuing such applications, including a call center tool that saves customer associates’ time by transcribing customer conversations in real time, classifying the customer’s needs, and generating a summary for the agent.

The decision to deploy generative AI internally first, rather than externally, stems in part from generative AI’s most notable weakness: hallucinations. Banks are wary of consumer-facing AI chatbots that could hallucinate details about bank products and policies. Deploying generative AI internally lessens that concern: it is not used to autonomously serve a bank’s customers and clients but to assist bank employees, who can accept or reject its advice or assistance. Bank of America also provides AI tools that help relationship bankers prep.
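The call center tool described above follows a recognizable pipeline: transcribe the conversation, classify the customer’s need, then generate a draft summary for a human agent to review. Below is a minimal, illustrative sketch of that pattern; the intent labels, function names, and stubbed summarization step are assumptions made for illustration, not Bank of America’s actual system.

```python
# Illustrative transcribe -> classify -> summarize pipeline for call-center assist.
# All names and logic here are placeholders, not a real bank implementation.

from dataclasses import dataclass

INTENTS = ["card_declined", "dispute_charge", "balance_inquiry", "other"]

@dataclass
class CallAssistResult:
    transcript: str
    intent: str
    summary: str

def classify_intent(transcript: str) -> str:
    """Layer-3-style classification: map the conversation to a known intent bucket."""
    lowered = transcript.lower()
    if "declined" in lowered:
        return "card_declined"
    if "dispute" in lowered:
        return "dispute_charge"
    if "balance" in lowered:
        return "balance_inquiry"
    return "other"

def summarize(transcript: str, intent: str) -> str:
    """Layer-4-style generation: draft a summary the agent can accept or reject.
    In practice this would call a generative model; here it is a simple stub."""
    return f"Customer call classified as '{intent}'. Key points: {transcript[:200]}"

def assist_agent(transcript: str) -> CallAssistResult:
    """Run the full assist pipeline on an already-transcribed conversation."""
    intent = classify_intent(transcript)
    return CallAssistResult(transcript, intent, summarize(transcript, intent))

if __name__ == "__main__":
    print(assist_agent("My card was declined at the grocery store this morning."))
```

The design point the article stresses is that the generated output assists the agent rather than going straight to the customer; the human remains the final check.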
Morgan Stanley is concentrating on making its AI tools easy to understand, thinking through the associated UX to make them intuitive to use
Koren Picariello, a Morgan Stanley managing director and its head of wealth management generative AI, said Morgan Stanley took a similar path. Throughout the 2010s, the company used machine learning for several purposes, such as finding investment opportunities that fit the needs and preferences of specific clients, and many of those techniques are still in use. Morgan Stanley’s first major generative-AI tool, Morgan Stanley Assistant, launched in September 2023 for employees such as financial advisors and the support staff who help clients manage their money. Powered by OpenAI’s GPT-4, it was designed to give responses grounded in the company’s library of over 100,000 research reports and documents. The second tool, Morgan Stanley Debrief, launched in June 2024; it helps financial advisors create, review, and summarize notes from meetings with clients. “It’s kind of like having the most informed person at Morgan Stanley sitting next to you,” Picariello said. “Because any question you have, whether it was operational in nature or research in nature, what we’ve asked the model to do is source an answer to the user based on our internal content.”

Picariello said Morgan Stanley takes a similar approach to keeping generative AI accurate: humans stay in the loop. The company’s AI-generated meeting summaries could be shared with clients automatically, but they’re not; financial advisors review them before they’re sent. Meanwhile, Morgan Stanley is concentrating on making the company’s AI tools easy to understand. “We’ve spent a lot of time thinking through the UX associated with these tools, to make them intuitive to use, and taking users through the process and cycle of working with generative AI,” Picariello said. “Much of the training is built into the workflow and the user experience.” For example, Morgan Stanley’s tools can advise employees on how to reframe or change a prompt to yield a better response.
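The grounded-answer behavior Picariello describes is essentially retrieval followed by generation: find the most relevant internal documents, then have the model answer only from them. Here is a minimal sketch of that pattern; the toy document store, the keyword-overlap scoring, and the stubbed generate_answer call are assumptions for illustration, not Morgan Stanley’s implementation.

```python
# Minimal retrieval-then-generation sketch: rank internal documents against a
# query, then answer only from the retrieved text. Toy data and stubbed model.

from collections import Counter

DOCUMENTS = {
    "research_note": "Quarterly research note on the municipal bond outlook and rate risk.",
    "ops_guide": "Operational guide covering the steps to open a managed account.",
}

def score(query: str, text: str) -> int:
    """Crude keyword-overlap relevance score; production systems use embeddings."""
    q, t = Counter(query.lower().split()), Counter(text.lower().split())
    return sum(min(q[w], t[w]) for w in q)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k most relevant documents for the query."""
    ranked = sorted(DOCUMENTS.values(), key=lambda d: score(query, d), reverse=True)
    return ranked[:k]

def generate_answer(query: str, context: list[str]) -> str:
    """Stub for the generative step; in practice this would prompt a GPT-4-class
    model to answer strictly from the supplied context."""
    return f"Answer to '{query}' drawn from {len(context)} internal documents."

def assistant(query: str) -> str:
    return generate_answer(query, retrieve(query))

if __name__ == "__main__":
    print(assistant("what are the steps to open a managed account"))
```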
Inflows into the Cash App ecosystem slowed to 8% growth, reaching $77 billion in the first quarter, down from 17% growth in the year-ago quarter; Cash App Card growth in monthly transacting actives slowed to 7% (25 million users)
Cash App saw a marked slowdown in the first quarter: monthly transacting actives using the digital wallet showed 0% year-over-year growth, stagnant at 57 million users. Drill down a bit and use of Cash App Card slowed too, to 7% growth in monthly transacting actives (25 million), where in previous quarters that growth rate had been in the mid-teens percentage points. Inflows slowed to 8% growth, at $76.8 billion in the first quarter, down from the 17% year-on-year growth logged in the year-ago first quarter. On an individual basis, inflows came to $1,355 per transacting active in the latest quarter, representing 8% growth, also down from double-digit rates. The read-across here is that, at least for now, users are arguably being conservative about how much money they want to, or can, put to work with the digital wallet as the entry point into the Block financial ecosystem. Block CEO Jack Dorsey attributed the performance to changing consumer behavior, including a shift in spending away from discretionary items like travel and media toward non-discretionary areas like groceries and gas. “Tax refunds are an important seasonal driver of Cash App inflows. This year, we saw a pronounced shift in consumer behavior during the time period that we typically see the largest disbursements, late February and into March,” said Dorsey. “This coincided with inflows coming in below our expectations. During the quarter, non-discretionary Cash App Card spend in areas like grocery and gas was more resilient, while we saw a more pronounced impact to discretionary spending in areas like travel and media. We believe this consumer softness was a key driver of our forecast miss.”
Marketplaces’ third-party sellers’ efforts to stock up and avoid tariff costs will fall short because shoppers are also buying ahead
The efforts of third-party sellers on platforms like Amazon to stock up on goods to avoid the cost of tariffs will reportedly work for only a limited time. Because shoppers are also buying ahead to avoid the impact of tariffs, merchants will eventually sell down their inventory, place new orders, and be faced with the challenge of trying to avoid price increases. It is unlikely that sellers can stock up on enough inventory to meet their needs for more than six months, and they will likely feel the full impact of tariffs in the third or fourth quarter. Amazon CEO Andy Jassy said that demand had not yet softened because of tariffs and that, if anything, the company had seen “heightened buying in certain categories that may indicate stocking up in advance of any potential tariff impact.” Amazon pulled forward inventory in the first quarter, while many marketplace merchants accelerated shipments to U.S. warehouses to insulate customers from price spikes. Jassy added that Amazon’s risk is muted relative to rivals because many traditional retailers buy from middlemen who themselves import from China, “so the total tariff will be higher for these retailers than for China-direct sellers” on Amazon’s platform.
Personal “digital defense AI agents” could help individuals keep a lid on the bad actors that might otherwise jeopardize systems
The idea of a personal “digital defender” in the form of an AI agent is not yet widely discussed on the web. Alex “Sandy” Pentland describes an AI agent that watches all of the other agent activity aimed at you and intervenes on your behalf. In a way, it’s like having a public defender in court: there’s a legal effort against you, so you need an advocate of your own to represent your side. It’s also somewhat like consumer reporting; Pentland noted that Consumer Reports has been doing this kind of work for 80 years with polls and other tools. A related idea is the cybersecurity agents created by a company called Twine, which are intended to protect people from cyberattacks. Essentially, Pentland argued, a bad actor can easily throw a system out of balance by being “just a little edgy,” making small changes that set off a domino effect. He used the example of a traffic jam, which can start with just one car in dense traffic changing its behavior. This type of game theory, he asserted, has to be factored into how we design our digital defense agent networks. With all of this in mind, it’s probably a good idea to start building those digital defense agents. They might not be perfect right away, but they might be the defense we need against an emerging wave of attackers using some of the most potent technologies we’ve ever seen. The idea also feeds back into the debate about open-source versus closed-source models, and when tools should be published for all the world to use. It’s imperative to keep a lid on the types of bad actors that could otherwise jeopardize systems. Cryptocurrency offers a precedent in the notion of a 51% attack: once somebody controls more than half of a blockchain network’s computing power, they can effectively dictate which transactions are confirmed.
LLMs can still be prohibitively expensive for some, and as with all ML models, LLMs are not always accurate. There will always be use cases where an ML implementation is not the right path forward. Key considerations for AI project managers evaluating customers’ needs for an AI implementation include: 1) Inputs and outputs required to fulfill your customer’s needs: an input is provided by the customer to your product, and the output is provided by your product. For a Spotify ML-generated playlist (an output), inputs could include customer preferences and ‘liked’ songs, artists, and music genres. 2) Combinations of inputs and outputs: customer needs can vary based on whether they want the same or a different output for the same or a different input. The more permutations and combinations of inputs and outputs we need to handle at scale, the more we need to turn to ML rather than rule-based systems. 3) Patterns in inputs and outputs: patterns in the required combinations of inputs or outputs help you decide what type of ML model to use. If there are clear patterns (such as reviewing customer anecdotes to derive a sentiment score), consider supervised or semi-supervised ML models over LLMs, because they might be more cost-effective. 4) Cost and precision: LLM calls are not always cheap at scale, and the outputs are not always precise or exact, despite fine-tuning and prompt engineering. Sometimes you are better off with supervised models or neural networks that classify an input against a fixed set of labels, or even rules-based systems, instead of an LLM; a minimal sketch of that trade-off follows below.
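To make the cost-and-precision point concrete, here is a minimal sketch of the kind of small supervised classifier that can stand in for per-call LLM usage on a bounded task like sentiment labeling. It assumes scikit-learn is available and uses toy training data purely for illustration.

```python
# A small supervised classifier for a fixed-label task (sentiment), as an
# alternative to calling an LLM per input. Toy data for illustration only.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "love this playlist, great picks",
    "terrible recommendations, skipped every song",
    "decent mix, a few good artists",
    "worst suggestions I have ever seen",
]
train_labels = ["positive", "negative", "positive", "negative"]

# TF-IDF features plus logistic regression: cheap to train, cheap to run at
# scale, and the output is a fixed label rather than free-form generated text.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

print(model.predict(["really enjoyed these artist picks"]))  # e.g. ['positive']
```

The trade-off is the one named above: this approach needs labeled data and only handles the labels it was trained on, but it is deterministic, inexpensive per prediction, and easy to evaluate, whereas an LLM handles open-ended inputs at a higher and less predictable cost.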
AI radically transforms agile software development by reducing the need for multiple teams and diminishing cross-team dependencies
Agile’s focus on delivering working software frequently has evolved into continuous integration/continuous delivery practices. AI is now pushing this boundary further toward what we might call “continuous creation.” When code generation approaches real-time, the limiting factor isn’t producing code but verifying it. AI offers solutions here as well—automated testing, security scanning, and quality analysis can be AI-enhanced. AI agents can write unit tests for new code and help create end-to-end tests, improving quality guarantees. The most successful teams will master this balance between acceleration and validation, exploring more ideas, failing faster, and converging on optimal solutions more quickly—all while maintaining high quality. These transformations create opportunities to streamline traditional Scrum processes. Teams can allocate a higher percentage of their sprint to spontaneous improvements as implementing features and bug fixes with AI may be faster than the overhead of including them in sprint planning. For architecture reviews, AI can serve as your first wave of feedback—a mental sparring partner to develop ideas before presenting to a committee. The AI-written summary can be shared asynchronously, often eliminating the need for formal meetings altogether. Retrospectives should now include discussions about AI usage. The improved individual productivity allows organizations to streamline overhead processes, leading to further increases in velocity. Teams can tackle larger, more complex problem spaces, and projects that previously required multiple teams can often be handled by a single team. Cross-team dependencies—a perennial challenge in scaled agile—diminish significantly. What’s most remarkable about AI’s impact is how it reinforces rather than replaces agile’s core values.
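To illustrate the point above about AI agents writing unit tests while humans verify them, here is a sketch of a small function alongside the kind of pytest cases an AI assistant might draft for a reviewer to accept or reject. The function and tests are invented for illustration, not drawn from any particular team’s codebase.

```python
# A simple function plus AI-draftable pytest cases. The reviewer's job is to
# confirm the tests encode the intended behavior before merging.

import pytest

def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent, rejecting out-of-range percentages."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_basic_discount():
    assert apply_discount(100.0, 25) == 75.0

def test_zero_discount_leaves_price_unchanged():
    assert apply_discount(59.99, 0) == 59.99

def test_invalid_percent_rejected():
    with pytest.raises(ValueError):
        apply_discount(10.0, 150)
```

The acceleration comes from the drafting step being nearly free; the validation step, a human confirming the tests describe the right behavior, is what keeps quality high.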
PhotoShelter enables complex organizations to partition their digital asset libraries across different departments or teams, while maintaining a unified platform and contract
PhotoShelter, the digital asset management (DAM) platform, launched a new feature that enables complex organizations to partition their digital asset libraries across different departments or teams, while maintaining a unified platform and contract. Within large organizations, asset access must be tightly managed so that teams can focus on what is relevant to them, without the risk of accidental changes or exposure of sensitive content. This feature enables organizations to segment their library, allowing each team to have a secure workspace and maintain control over its assets and workflows. This not only reduces confusion and clutter but also minimizes the risk of exposing sensitive content or having it unintentionally altered by others. And because all of this happens within a single platform and contract, organizations avoid the inefficiencies and costs associated with using multiple DAM vendors or separate accounts. Key benefits: Prevents unauthorized access or inadvertent changes to sensitive departmental assets; Eliminates inefficiencies from managing multiple DAM vendors or separate accounts; Consolidates billing, support contacts, and sharing processes; Reduces costs while improving organizational asset security. Use cases: 1) Higher education: Dozens of university departments can now manage their content with a single PhotoShelter account, maintaining independent control of assets and access, while benefiting from campus-wide integration and a single contract. 2) Corporate environments: Enterprises can allow separate departments to manage assets in a single library while maintaining each group’s control over its specific assets. 3) Healthcare systems: Providers can maintain stronger HIPAA compliance by allowing separate teams to manage content, ensuring patient data is visible only to those who need access.
Bank, fintech, and PSP consortium to promote use cases for commercial Variable Recurring Payments (cVRPs) in the UK
A group of 31 fintechs, high street banks, challenger banks and payment providers has agreed to put up initial funding for a new entity that will be wholly owned and run by industry. Barclays, GoCardless, Mastercard, Monzo, Plaid, Revolut and Wise are among the backers. The proposed initial use cases for cVRPs will focus on selected regulated industries such as payments to utility and rail companies, regulated financial firms, e-money institutions, government bodies, and charities. Offering cVRPs in these areas would give Brits better control over regular payments, as well as a frictionless experience when buying goods or services from websites. Henk Van Hulle, CEO, Open Banking Limited, says: “This is a significant moment for the industry, and I sincerely thank the organisations that have committed to fund efforts to create a company that will carry forward the important work on cVRPs. It is testament to the collaborative nature of our ecosystem that it can be industry-led.”