Search Engine Optimization (SEO) has been the cornerstone of digital visibility for decades. Now, Generative Engine Optimization (GEO) is emerging as its essential companion. GEO is about writing content that answers real questions thoroughly, so AI systems can quote your expertise. Where SEO meant adding meta tags that humans never notice, GEO’s metadata work means adding clear labels that tell AI exactly what each page is about. SEO success meant counting clicks from search results; GEO success means tracking how often an AI tool mentions your page or links back to your content. Digital agencies are now offering “AI readiness” audits and GEO services to help businesses adapt.

The search landscape is evolving from “find information” to “get answers.” Generative Engine Optimization simply means ensuring your business is part of those answers. By focusing on clear, helpful content with proper technical structure, you can maintain visibility regardless of how people search, whether they’re typing in a search box, asking a voice assistant, or chatting with an AI. For most businesses, the principles aren’t actually new: create valuable content that genuinely helps your audience. What’s changing is how that content gets discovered and consumed. Companies that adapt quickly will maintain their connection to customers, while those that ignore this shift risk becoming increasingly difficult to find.

Some best practices for GEO:
- Answer the obvious questions first
- Use plain headings and short paragraphs
- Add behind-the-scenes labels once
- Let reputable AI bots in
- Earn mentions on trustworthy sites
- Keep pages fresh
- Track “mention share,” not just clicks
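The “behind-the-scenes labels” above typically mean structured data such as schema.org JSON-LD embedded in a page. As a minimal sketch (the function name and question/answer strings are invented for illustration), here is one way to generate FAQPage markup that machine readers can parse:

```python
import json

def faq_jsonld(pairs):
    """Build minimal schema.org FAQPage JSON-LD from (question, answer) pairs."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in pairs
        ],
    }, indent=2)

snippet = faq_jsonld([
    ("What is GEO?", "Optimizing content so AI systems can find and cite it."),
])
print(snippet)
```

The resulting JSON would be placed in a `<script type="application/ld+json">` tag; the label is added once and then served with every page view.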
Senator Warren urges Fed to reconsider Capital One deal for Discover as it would inflict “serious harm” on consumers and the banking system
The top Democrats on congressional banking committees called on the Federal Reserve to reconsider its decision to approve Capital One Financial Corp.’s purchase of Discover Financial Services, saying it would inflict “serious harm” on consumers and the banking system. The decision sounds as if the Fed “had predetermined it was going to approve the transaction and either ignored relevant facts or explained them away with baseless assertions copied and pasted from Capital One’s application,” Senator Elizabeth Warren and Representative Maxine Waters said in a letter sent to the Fed. “Treating the transaction as a traditional bank merger was deeply misguided,” the lawmakers wrote. “These are not two traditional banks — they are card giants.” Warren and Waters emphasized that the Fed’s review failed to appropriately assess the competitive effects on the credit-card market and didn’t take into account the views of the Consumer Financial Protection Bureau and the Federal Deposit Insurance Corp. The Fed said in an order last month that it consulted with other regulatory agencies, including the FDIC and the CFPB. Capital One said the deal’s approval follows “an exhaustive, fact-based 14-month examination where legal and regulatory experts examined the deal’s competitive impact, financial stability considerations, community needs, and all other relevant factors.”
Citi restarts subscription line financing, lending to buyout funds; the business helps banks build relationships with asset managers, who may hire their lenders in the future
Citigroup Inc. is ramping up lending to private equity and private credit groups, working to catch up with peers like JPMorgan Chase & Co. and Goldman Sachs Group Inc. after the bank spent years on the sidelines. The bank told investors it wants to get back into a lending business it retreated from several years ago. Citigroup in the past year returned to offering loans backed by the cash that investors pledge to funds, according to people familiar with the situation, granted anonymity to discuss private matters. As the bank pulled back on this kind of funding, known as subscription line financing, rivals moved to pick up more business. Goldman Sachs, JPMorgan and PNC Financial Services Group scooped up large amounts of the debt from First Republic Bank and Signature Bank, which were big providers of the revolving loans before they failed or were rescued in 2023. Citigroup’s return comes as CEO Jane Fraser pushes to overhaul the bank and boost profits by generating more fee-based revenue and forging ties with alternative asset managers. Last year, the lender hired Vis Raghavan, a rainmaker from rival JPMorgan, to run its global banking business. Subscription lines don’t generate high margins but they do help banks build relationships with asset managers, who may hire their lenders in the future to advise on acquisitions and underwrite junk bond sales. The lines have become extremely popular among fund managers, used by nearly 85% of buyout funds last year, up from just a quarter a decade ago, according to data from MSCI. Altogether, the sublines business is estimated to be roughly $900 billion globally, law firm Dechert LLP wrote last year. The financing is helpful when dealmaking picks up, but it also provides liquidity during a slowdown, which asset managers have faced for years as transactions dried up and some of their bets haven’t paid off. 
The threat of tighter standards under the previous White House led some large banks to exit capital-intensive lines of business. Regulators last year said they were going to ease rules known as Basel III Endgame, potentially freeing up space for banks to offer more financing. Fraser wants to lift Citigroup’s return on tangible common equity — a key measure of profitability — to 10% to 11% by the end of next year, bringing it more in line with its peers. Last quarter, that metric came in at 9.1%. When private equity firms raise funds, their investors agree to provide cash to fund leveraged buyouts over time. But to access that money, managers have to make a “capital call.” Subscription lines are backed by the promises to meet those calls. Because investors have rarely defaulted on capital calls, subscription lines are seen as safe. Many banks have packaged them into securities, freeing up their balance sheets to make new loans.
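The mechanics described above — uncalled LP commitments backing the facility — can be sketched numerically. In this illustrative example, the function, the 90% advance rate, and the LP figures are all invented; real facilities haircut commitments differently by investor credit quality:

```python
def borrowing_base(commitments, advance_rate=0.9):
    """Estimate how much a lender might advance against a fund's uncalled capital.

    commitments: dict of LP name -> (total_commitment, amount_already_called)
    advance_rate: fraction of uncalled capital advanced (hypothetical figure)
    """
    uncalled = sum(total - called for total, called in commitments.values())
    return uncalled * advance_rate

# Two hypothetical LPs: $100M of their $150M in commitments remains uncalled.
lps = {
    "PensionA": (100_000_000, 40_000_000),
    "EndowmentB": (50_000_000, 10_000_000),
}
print(borrowing_base(lps))  # 90% of $100M uncalled = 90,000,000.0
```

Because defaults on capital calls are rare, the collateral is treated as high quality, which is why these lines carry thin margins.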
Bank of America adopts a four-layer framework for AI: rules-based automation, analytical models, language classification, and GenAI
Banks have long used traditional AI and machine learning techniques for various functions, such as customer service bots and decision algorithms that provide a faster-than-human response to market swings. But modern generative AI is different from prior AI/ML methods, and it has its own strengths and weaknesses. Hari Gopalkrishnan, Bank of America’s chief information officer and head of retail, preferred, small business, and wealth technology, said generative AI is a new tool that offers new capabilities, rather than a replacement for prior AI efforts. “We have a four-layer framework that we think about with regards to AI,” Gopalkrishnan said. The first layer is rules-based automation that takes actions based on specific conditions, like collecting and preserving data about a declined credit card transaction when one occurs. The second is analytical models, such as those used for fraud detection. The third layer is language classification, which Bank of America used to build Erica, a virtual financial assistant, in 2016. “Our journey of Erica started off with understanding language for the purposes of classification,” Gopalkrishnan said. But the company isn’t generating anything with Erica, he added: “We’re classifying customer questions into buckets of intents and using those intents to take customers to the right part of the app or website to help them serve themselves.” The fourth layer, of course, is generative AI. Given the history, it’d be reasonable to think banks would turn generative-AI tools into new chatbots that more or less serve as better versions of Bank of America’s Erica, or as autonomous financial advisors. But the most immediate changes instead came to internal processes and tools. Bank of America is pursuing similar applications, including a call center tool that saves customer associates’ time by transcribing customer conversations in real time, classifying the customer’s needs, and generating a summary for the agent.
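The first and third layers of the framework can be sketched in a few lines. This is a hypothetical illustration, not Bank of America’s implementation: the rule conditions, intent buckets, and keyword lists are all invented, and real intent classification uses trained language models rather than keyword matching.

```python
# Layer 1: rules fire on structured events; Layer 3: free text is classified
# into intent buckets that route the customer, with no generation involved.

RULES = {
    "preserve_decline_data": lambda event: event.get("type") == "card_declined",
}

INTENT_KEYWORDS = {
    "check_balance": ["balance", "how much money"],
    "dispute_charge": ["dispute", "unauthorized", "fraud"],
}

def handle(event):
    # Layer 1: rules-based automation on structured events.
    for action, condition in RULES.items():
        if condition(event):
            return ("rule", action)
    # Layer 3: classify a free-text question into an intent bucket.
    text = event.get("text", "").lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(k in text for k in keywords):
            return ("intent", intent)
    return ("fallback", "route_to_human")

print(handle({"type": "card_declined"}))          # ('rule', 'preserve_decline_data')
print(handle({"text": "I think this is fraud"}))  # ('intent', 'dispute_charge')
```

The point of the layering is that each request is handled by the cheapest sufficient layer; generative AI (layer 4) only enters where classification and rules fall short.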
The decision to deploy generative AI internally first, rather than externally, was due in part to generative AI’s most notable weakness: hallucinations. Banks are wary of consumer-facing AI chatbots that could make such errors about bank products and policies. Deploying generative AI internally lessens the concern: it’s not used to autonomously serve a bank’s customers and clients but to assist bank employees, who have the option to accept or reject its advice or assistance. Bank of America, for example, provides AI tools that can help relationship bankers prepare.
Morgan Stanley is concentrating on making its AI tools easy to understand, thinking through the associated UX to make them intuitive to use
Koren Picariello, a Morgan Stanley managing director and its head of wealth management generative AI, said Morgan Stanley took a similar path. Throughout the 2010s, the company used machine learning for several purposes, like seeking investment opportunities that meet the needs and preferences of specific clients. Many of these techniques are still used. Morgan Stanley’s first major generative-AI tool, Morgan Stanley Assistant, was launched in September 2023 for employees such as financial advisors and support staff who help clients manage their money. Powered by OpenAI’s GPT-4, it was designed to give responses grounded in the company’s library of over 100,000 research reports and documents. The second tool, Morgan Stanley Debrief, was launched in June. It helps financial advisors create, review, and summarize notes from meetings with clients. “It’s kind of like having the most informed person at Morgan Stanley sitting next to you,” Picariello said. “Because any question you have, whether it was operational in nature or research in nature, what we’ve asked the model to do is source an answer to the user based on our internal content.” Picariello said Morgan Stanley takes a similar approach to using generative AI while maintaining accuracy. The company’s AI-generated meeting summaries could be automatically shared with clients, but they’re not. Instead, financial advisors review them before they’re sent. Meanwhile, Morgan Stanley is concentrating on making the company’s AI tools easy to understand. “We’ve spent a lot of time thinking through the UX associated with these tools, to make them intuitive to use, and taking users through the process and cycle of working with generative AI,” Picariello said. “Much of the training is built into the workflow and the user experience.” For example, Morgan Stanley’s tools can advise employees on how to reframe or change a prompt to yield a better response.
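The grounding approach described above — answering only from an internal content library — is commonly built as retrieval plus generation. As a toy sketch (document IDs, texts, and the scoring method are invented; production systems use embedding search and an LLM rather than word overlap), the retrieval step looks like this:

```python
# Hypothetical internal library: document id -> text.
DOCS = {
    "401k-rollover": "A rollover moves retirement savings between accounts.",
    "wire-cutoff": "Domestic wire transfers must be submitted by 5 p.m. ET.",
}

def retrieve(question, docs):
    """Return the id of the document best matching the question,
    scored here by simple word overlap for illustration."""
    q_words = {w.strip("?.,") for w in question.lower().split()}
    def overlap(text):
        doc_words = {w.strip("?.,") for w in text.lower().split()}
        return len(q_words & doc_words)
    return max(docs, key=lambda doc_id: overlap(docs[doc_id]))

best = retrieve("When is the cutoff for wire transfers?", DOCS)
print(best, "->", DOCS[best])
```

The model is then asked to answer using only the retrieved text, which is what keeps responses “sourced” from internal content rather than the model’s general knowledge.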
Inflows into the Cash App ecosystem slowed to 8% growth, at $76.8 billion in the first quarter, down from 17% growth in the year-ago first quarter; Cash App Card monthly transacting actives grew 7% in 1Q (to 25 million)
Cash App saw a marked slowdown in the first quarter: monthly transacting members using the digital wallet showed 0% year-over-year growth, stagnant at 57 million users. Drill down a bit and Cash App Card use slowed too, to 7% growth in monthly transacting actives (at 25 million), where in previous quarters that growth rate had been in the mid-teens. Inflows slowed to 8% growth, at $76.8 billion in the first quarter, down from the 17% year-over-year growth logged in the year-ago first quarter. On an individual basis, inflows come out to $1,355 per transacting active in the latest quarter, up 8%, also down from double-digit growth rates. The read-across, at least for now, is that users are arguably being conservative about how much money they want to, or can, put to work with the digital wallet as the entry point into the Block financial ecosystem. CEO Jack Dorsey attributed the performance to changing consumer behavior, including a shift in spending away from discretionary items like travel and media toward non-discretionary areas like groceries and gas. “Tax refunds are an important seasonal driver of Cash App inflows. This year, we saw a pronounced shift in consumer behavior during the time period that we typically see the largest disbursements, late February and into March,” said Dorsey. “This coincided with inflows coming in below our expectations. During the quarter, non-discretionary Cash App Card spend in areas like grocery and gas was more resilient, while we saw a more pronounced impact to discretionary spending in areas like travel and media. We believe this consumer softness was a key driver of our forecast miss.”
Marketplaces’ third-party sellers’ efforts to stock up to avoid the cost of tariffs are inadequate because shoppers are also buying ahead
The efforts of third-party sellers on platforms like Amazon to stock up on goods to avoid the cost of tariffs will reportedly work for only a limited time. Because shoppers are also buying ahead to avoid the impact of tariffs, merchants will eventually sell down their inventory, place new orders, and be faced with the challenge of trying to avoid price increases. It is unlikely that sellers can stock up on enough inventory to meet their needs for more than six months, meaning they will feel the full impact of tariffs in the third or fourth quarter. Amazon CEO Andy Jassy said that demand had not yet softened because of tariffs and that, if anything, the company had seen “heightened buying in certain categories that may indicate stocking up in advance of any potential tariff impact.” Amazon pulled forward inventory in the first quarter, while many marketplace merchants accelerated shipments to U.S. warehouses to insulate customers from price spikes. Jassy added that Amazon’s risk is muted relative to rivals because many traditional retailers buy from middlemen who themselves import from China, “so the total tariff will be higher for these retailers than for China-direct sellers” on Amazon’s platform.
Personal “digital defense AI agents” could help individuals keep a lid on the bad actors that might otherwise jeopardize systems
The idea of a personal “digital defender” in the form of an AI agent is not yet widely discussed on the web. Alex “Sandy” Pentland describes an AI agent that watches the agent activity aimed at you and intervenes on your behalf. In a way, it’s like having a public defender in court: there’s a legal effort against you, so you need your own advocate to represent your side. It’s also somewhat like consumer reporting; Pentland noted that Consumer Reports has been doing this kind of work for 80 years with polls and other tools. A similar idea is the cybersecurity agents created by a company called Twine, which are intended to protect people from cyberattacks. Essentially, Pentland argued, a bad actor can easily throw a system out of balance by being “just a little edgy,” making small changes that trigger a domino effect. He used the example of a traffic jam, which starts with just one car in dense traffic changing its behavior. This type of game theory, he asserted, has to be factored into how we design our digital defense agent networks. With all of this in mind, it’s probably a good idea to start building those digital defense agents. They might not be perfect right away, but they might be the defense we need against an emerging army of hackers wielding some of the most potent technologies we’ve ever seen. The idea also feeds back into the debate about open-source versus closed-source models, and when tools should be published for all the world to use; it’s imperative to keep a lid on the bad actors who could otherwise jeopardize systems. Cryptocurrency offers an analogy in the 51% attack: once someone controls a majority of a blockchain network’s mining or validation power, they can censor transactions and rewrite recent history, effectively controlling the ledger.
LLMs can still be prohibitively expensive for some, and as with all ML models, LLMs are not always accurate. There will always be use cases where leveraging an ML implementation is not the right path forward. The key considerations for AI project managers evaluating customers’ needs for AI implementation include:
- The inputs and outputs required to fulfill your customer’s needs: An input is provided by the customer to your product, and the output is provided by your product. So, for a Spotify ML-generated playlist (an output), inputs could include customer preferences and ‘liked’ songs, artists, and music genres.
- Combinations of inputs and outputs: Customer needs can vary based on whether they want the same or different output for the same or different input. The more permutations and combinations of inputs and outputs we need to support at scale, the more we need to turn to ML versus rule-based systems.
- Patterns in inputs and outputs: Patterns in the required combinations of inputs or outputs help you decide what type of ML model to use. If there are patterns to the combinations (like reviewing customer anecdotes to derive a sentiment score), consider supervised or semi-supervised ML models over LLMs, because they might be more cost-effective.
- Cost and precision: LLM calls are not always cheap at scale, and the outputs are not always precise or exact, despite fine-tuning and prompt engineering. Sometimes you are better off with supervised models or neural networks that classify an input using a fixed set of labels, or even rules-based systems, instead of an LLM.
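The considerations above can be condensed into a rough decision helper. This is a hypothetical sketch, not an established methodology: the function name, thresholds, and return labels are all invented to make the trade-offs concrete.

```python
def choose_approach(io_combinations, patterned, needs_exact_output):
    """Suggest an implementation style from the considerations above.

    io_combinations: rough count of distinct input->output mappings to support
    patterned: whether inputs map to outputs in learnable, repeating patterns
    needs_exact_output: whether the product needs precise, fixed-format answers
    """
    # Few mappings plus a need for exact outputs favors plain rules.
    if io_combinations < 100 and needs_exact_output:
        return "rules-based system"
    # Learnable patterns (e.g., anecdotes -> sentiment score) favor
    # supervised/semi-supervised models, often cheaper than LLM calls at scale.
    if patterned:
        return "supervised/semi-supervised ML"
    # Open-ended, highly varied inputs and outputs are where LLMs earn their cost.
    return "LLM"

print(choose_approach(20, patterned=False, needs_exact_output=True))
print(choose_approach(1_000_000, patterned=True, needs_exact_output=False))
print(choose_approach(1_000_000, patterned=False, needs_exact_output=False))
```

The threshold of 100 combinations is arbitrary; the real judgment is whatever scale makes hand-maintained rules unmanageable for your team.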