McDonald’s is introducing a new version of its McDonaldland promotional concept, one quite different from the 1970s original featuring costumed TV performers. Starting Tuesday, Aug. 12, the chain will roll out “McDonaldland VR,” an interactive digital experience launching alongside the new “McDonaldland Meal,” which will be on McDonald’s menu for a limited time this summer. (The meal features a milkshake in what the company describes as a “surprise flavor”; a choice of a Quarter Pounder with Cheese or 10-piece Chicken McNuggets; fries; and one of six exclusive collectible tins featuring postcards, stickers, and more.) Inside McDonaldland VR, customers can interact with fully animated avatars of McDonald’s promotional characters such as Grimace, Hamburglar and Birdie. Users can also play interactive mini-games; explore themed virtual worlds such as the Hamburger Patch; search for hidden digital collectibles, such as Mt. McDonaldland Shake icons and various Easter eggs scattered throughout the virtual world; and complete quests to unlock in-game wearables like the Mayor McCheese Hat, Burger Buddy Backpack and Ronald McDonald’s Guitar. McDonaldland VR will be available on Meta Horizon Worlds via the Meta Quest VR headset and through browser-based WebVR access.
Google’s tiny AI model brings advanced, quantization-ready AI that fits on smartphones—empowering efficient, on-device reasoning and quick adaptation to enable private, offline AI for specialized and enterprise tasks
Google’s DeepMind AI research team has unveiled a new open source AI model, Gemma 3 270M, with just 270 million parameters (the internal settings that govern a model’s behavior), far smaller than the 70 billion or more parameters of many frontier LLMs. While more parameters generally translate to a larger and more capable model, Google’s focus with this model is nearly the opposite: high efficiency, giving developers a model small enough to run directly on smartphones and other local hardware, without an internet connection, as shown in internal tests on a Pixel 9 Pro SoC. Yet the model can still handle complex, domain-specific tasks and can be fine-tuned in mere minutes to fit an enterprise or indie developer’s needs. Google DeepMind Staff AI Developer Relations Engineer Omar Sanseviero added that Gemma 3 270M can also run directly in a user’s web browser, on a Raspberry Pi, and “in your toaster,” underscoring its ability to operate on very lightweight hardware. Gemma 3 270M combines 170 million embedding parameters (thanks to a large 256k vocabulary capable of handling rare and specific tokens) with 100 million transformer block parameters. According to Google, the architecture supports strong performance on instruction-following tasks out of the box while staying small enough for rapid fine-tuning and deployment on devices with limited resources, including mobile hardware. One of the model’s defining strengths is its energy efficiency: in internal tests using the INT4-quantized model on a Pixel 9 Pro SoC, 25 conversations consumed just 0.75% of the device’s battery. That makes Gemma 3 270M a practical choice for on-device AI, particularly where privacy and offline functionality matter. The release includes both a pretrained and an instruction-tuned model, giving developers immediate utility for general instruction-following tasks.
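The parameter split Google describes can be sanity-checked with quick arithmetic. In the sketch below, the hidden (embedding) dimension of 640 is an assumed value for illustration, not a figure from the announcement; with it, the 256k vocabulary accounts for roughly the cited 170M embedding parameters:

```python
# Rough sanity check of Gemma 3 270M's stated parameter split.
# hidden_dim = 640 is an assumption for illustration, not a number
# from Google's announcement.
vocab_size = 262_144   # the "256k" vocabulary
hidden_dim = 640       # assumed embedding width

embedding_params = vocab_size * hidden_dim
print(f"embedding parameters: ~{embedding_params / 1e6:.0f}M")  # ~168M, close to the cited 170M

total_params = embedding_params + 100_000_000  # plus ~100M transformer block parameters
print(f"total: ~{total_params / 1e6:.0f}M")    # ~268M, i.e. the "270M" in the model name
```

The two published figures (170M embedding, 100M transformer) are what give the model its name; the embedding table dominates because of the unusually large vocabulary.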
Quantization-Aware Training (QAT) checkpoints are also available, enabling INT4 precision with minimal performance loss and making the model production-ready for resource-constrained environments. Google frames Gemma 3 270M as part of a broader philosophy of choosing the right tool for the job rather than relying on raw model size. For tasks like sentiment analysis, entity extraction, query routing, structured text generation, compliance checks, and creative writing, the company says a fine-tuned small model can deliver faster, more cost-effective results than a large general-purpose one. By fine-tuning a Gemma 3 4B model for multilingual content moderation, the team outperformed much larger proprietary systems. Gemma 3 270M is designed to enable similar success at an even smaller scale, supporting fleets of specialized models, each tailored to an individual task.
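The INT4 precision that QAT checkpoints target can be illustrated with a toy symmetric quantizer; this is a minimal sketch of the general idea, not Google's actual quantization scheme:

```python
# Toy symmetric INT4 quantization: map floats onto 16 signed integer
# levels (-8..7). Illustrates the idea behind INT4/QAT checkpoints;
# the scheme actually used for Gemma 3 270M is not described here.
def quantize_int4(weights):
    scale = max(abs(w) for w in weights) / 7.0  # 7 = largest positive INT4 value
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.91, -0.42, 0.07, -0.88]
q, scale = quantize_int4(weights)
approx = dequantize(q, scale)
# Each reconstructed weight is within half a quantization step of the original.
assert all(abs(a - w) <= scale / 2 + 1e-9 for a, w in zip(approx, weights))
```

QAT differs from quantizing after the fact: the model is trained with this rounding simulated in the loop, which is why the INT4 checkpoints lose so little accuracy.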
Lithic card issuance platform’s adoption of Visa’s account-level management API accelerates premium card enrollment, enhances rewards, and drives issuer economics without disrupting cardholder experience
Lithic, the card issuing and processing platform powering next-generation financial experiences, announced its integration with Visa Account Level Management (ALM) through Visa’s Card Program Enrollment (VCPE) API, offering fintech partners faster delivery of premium card programs and improved card economics. Visa ALM evaluates spend across multiple cards under a single account to determine premium program eligibility; this shift from BIN-level to account-level assessment helps improve program economics while accelerating benefit delivery. The integration enables Lithic clients to enroll eligible cardholders into Visa’s Signature and Signature Preferred programs, among others, without card re-issuance or disruption to the cardholder experience. Unlike legacy batch enrollment methods, which can delay benefit activation and interchange assignment for days, Lithic’s VCPE API integration processes card enrollments in near real time. “Integrating with Visa’s ALM system over the VCPE API allows Lithic partners to unlock enhanced revenue opportunities while maintaining a smooth, cardholder-friendly experience,” said Bo Jiang, CEO of Lithic. Fintechs can now manage card portfolios at the account level, paving the way for more personalized cardholder rewards, real-time tier upgrades, and premium Visa benefits like extended warranties, travel protections and concierge services. This “account for life” approach means cardholders keep their existing card number while their account adapts to their evolving spending patterns.
Anthropic tests AI that can see pages, click, and fill forms in Chrome via a plugin, with guardrails to cut prompt‑injection risk from 23.6% to 11.2%.
Anthropic PBC, the startup behind the Claude family of generative AI models, announced on Tuesday the pilot of a browser extension that lets its AI model take control of users’ Google Chrome browser. The experimental capability, called Claude for Chrome, will be available to 1,000 users subscribed to the company’s Max plan, which costs $100 or $200 per month. The company is running the extension as a controlled pilot for a small number of users so it can develop better security practices for this emerging technology. “We view browser-using AI as inevitable: so much work happens in browsers that giving Claude the ability to see what you’re looking at, click buttons, and fill forms will make it substantially more useful,” Anthropic said. The company said early versions of Claude for Chrome showed promise in managing calendars, scheduling meetings, drafting email responses and testing website features. However, the feature is still experimental and raises major new security concerns, which is why it is not being released widely. Giving AI models direct control of browsers means they will encounter a higher chance of malicious instructions in the wild that could be executed on users’ computers, allowing attackers to manipulate the model. In its experiments, Anthropic evaluated 123 prompt injection attacks representing 29 different scenarios. Without safety mitigations, AI-controlled browser use had a 23.6% success rate for deliberate attacks. “When we added safety mitigations to autonomous mode, we reduced the attack success rate from 23.6% to 11.2%, which represents a meaningful improvement over our existing Computer Use capability,” Anthropic said. For the pilot, Anthropic said users will be blocked from sites it considers “high-risk categories,” such as financial services, adult content and pirated content.
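The reported figures can be put in perspective with a quick back-of-the-envelope calculation; with 123 attacks, the cited rates correspond roughly to the following counts (rounded, since the article gives only percentages):

```python
# Back-of-the-envelope reading of Anthropic's reported prompt injection figures.
attacks = 123
before = 0.236   # attack success rate without mitigations
after = 0.112    # attack success rate with mitigations in autonomous mode

print(round(attacks * before))  # roughly 29 successful attacks before mitigations
print(round(attacks * after))   # roughly 14 after

relative_drop = (before - after) / before
print(f"relative reduction: {relative_drop:.0%}")  # roughly a 53% relative reduction
```

In other words, the mitigations cut the success rate roughly in half, though around one in nine deliberate attacks still succeeded, which is why Anthropic is keeping the rollout small.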
The Anthropic team added that it will use insights from the pilot users to refine how prompt injection classifiers operate and how the security mechanisms work to protect users. By building an understanding of user behavior, especially unsafe behavior, and uncovering new attack patterns, the company said it hopes to develop more sophisticated controls for this type of safety-critical application.
Raise unifies public and private assets in one screen, enabling scenario simulation, optimization and personalized advice for family offices and private banks
Raise Partner, a France-based B2B WealthTech provider, has launched a new version of its flagship digital solution, Smart Risk Decisions, designed to meet the needs of family offices and private banks. The web-based platform allows wealth managers to model and optimize multi-asset class portfolios, including equities, fixed income, private assets, and real estate, with an intuitive interface that requires no IT integration. The solution provides a unified global view of client portfolios and supports the simulation, optimization, and risk assessment of complex allocations. The platform addresses growing client expectations for a global wealth approach and personalized, transparent advice. Its modular design allows for the integration of additional asset classes on demand.
Google’s EmbeddingGemma small model delivers 308M-parameter multilingual embeddings optimized for phones and laptops, enabling offline, private semantic search and retrieval in enterprise apps
Google’s open-source Gemma is already a small model designed to run on devices like smartphones, and Google continues to expand the Gemma family and optimize it for local use on phones and laptops. Its newest model, EmbeddingGemma, takes on embedding models already used by enterprises, touting a compact parameter count and strong benchmark performance. EmbeddingGemma is a 308 million parameter, open-source model optimized for devices like laptops, desktops and mobile devices. Min Choi, product manager, and Sahil Dua, lead research engineer at Google DeepMind, wrote in a blog post that EmbeddingGemma “offers customizable output dimensions” and will work with Google’s open-source Gemma 3n model. “Designed specifically for on-device AI, its highly efficient 308 million parameter design enables you to build applications using techniques such as RAG and semantic search that run directly on your hardware,” Choi and Dua said. “It delivers private, high-quality embeddings that work anywhere, even without an internet connection.” The model performed well on the Massive Text Embedding Benchmark (MTEB) multilingual v2, which measures the capabilities of embedding models; it is the highest-ranked model under 500M parameters. A significant use case for EmbeddingGemma is building mobile RAG pipelines and implementing semantic search. RAG relies on embedding models, which create numerical representations of data that models or agents can reference to answer queries. A mobile RAG pipeline lets information gathering and question answering happen directly on local devices: employees can ask questions or direct agents from their phones or other devices to find the information they need. Choi and Dua said EmbeddingGemma is built to produce high-quality embeddings at multiple sizes; to do this, it uses a method called Matryoshka Representation Learning.
This gives the model flexibility, as it can provide multiple embedding sizes within a single model.
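The key property of Matryoshka-style embeddings is that a prefix of the full vector is itself a usable, lower-cost embedding. A minimal sketch of the idea, using made-up vectors rather than the real model:

```python
import math

# Sketch of the Matryoshka Representation Learning property: the first
# k dimensions of a full embedding already form a usable smaller
# embedding. The vectors here are made up purely for illustration.
def truncate_and_renormalize(embedding, k):
    head = embedding[:k]                        # keep the leading k dimensions
    norm = math.sqrt(sum(x * x for x in head))  # re-normalize to unit length
    return [x / norm for x in head]

full = [0.5, -0.1, 0.3, 0.2, -0.4, 0.6, 0.0, 0.1]  # pretend full-size embedding
small = truncate_and_renormalize(full, 4)           # e.g. for tighter memory budgets
print(len(small))                                   # 4
print(round(sum(x * x for x in small), 6))          # 1.0 (unit length again)
```

Because the model is trained so that each prefix is meaningful on its own, a developer can store short vectors for fast on-device search and keep the full-length ones only where quality matters most.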
Google’s new Agent Payments Protocol (AP2) extends A2A and MCP with cryptographically signed mandates, standardizing authorization and accountability for agent‑led purchases across platforms and merchants
Google announced the Agent Payments Protocol (AP2), an open protocol developed with leading payments and technology companies to securely initiate and transact agent-led payments across platforms. The protocol can be used as an extension of the Agent2Agent (A2A) protocol and Model Context Protocol (MCP). In concert with industry rules and standards, it establishes a payment-agnostic framework for users, merchants, and payments providers to transact with confidence across all types of payment methods. Google is collaborating on agentic payments with more than 60 companies, some of which include Adyen, American Express, Mastercard, PayPal, Coinbase and Revolut. AP2 builds trust by using Mandates—tamper-proof, cryptographically-signed digital contracts that serve as verifiable proof of a user’s instructions. These mandates are signed by verifiable credentials (VCs) and act as the foundational evidence for every transaction. Mandates address the two primary ways a user will shop with an agent: Real-time purchases (human present) and Delegated tasks (human not present). This complete sequence—from intent, to cart, to payment—creates a non-repudiable audit trail that answers the critical questions of authorization and authenticity, providing a clear foundation for accountability.
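The mandate concept, a signed and tamper-evident record of each step from intent to cart to payment, can be illustrated with a toy scheme. Real AP2 mandates use verifiable credentials and public-key cryptography; the HMAC below is a stand-in chosen only because it is available in Python's standard library:

```python
import hashlib
import hmac
import json

# Toy illustration of AP2-style mandates: each step is serialized and
# signed so that any tampering after the fact is detectable. Real AP2
# uses verifiable credentials and public-key signatures, not HMAC.
SECRET = b"demo-key"  # hypothetical signing key for this sketch only

def sign_mandate(mandate: dict) -> dict:
    payload = json.dumps(mandate, sort_keys=True).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"mandate": mandate, "signature": sig}

def verify(signed: dict) -> bool:
    payload = json.dumps(signed["mandate"], sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["signature"])

intent = sign_mandate({"step": "intent", "user": "alice", "max_price": 50})
assert verify(intent)

# Any change after signing breaks verification, creating the
# non-repudiable audit trail the protocol is after.
intent["mandate"]["max_price"] = 5000
assert not verify(intent)
```

Chaining one such signed record per step (intent, cart, payment) is what lets merchants and payment providers answer, after the fact, exactly what the user authorized the agent to do.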
Bank of America, Wells Fargo and U.S. Bank now offer predictive checking account insights and forecasts based on transaction patterns, helping customers manage finances and potentially encouraging them to consolidate banking relationships
Bitcoin financial services company Fold has selected Stripe, the programmable financial services company, to power the upcoming launch of the Fold Bitcoin Credit Card™, a bitcoin-only rewards product designed to turn everyday spending into a direct path to bitcoin ownership. The card enables users to accumulate bitcoin with every purchase, offering a simple and consistent way to build long-term wealth. Issued on the Visa network and powered by Stripe Issuing, the Fold Bitcoin Credit Card delivers up to 3.5% back on every purchase, with no categories and no deposit requirements. Cardholders earn an unlimited 2% back instantly, plus up to 1.5% back when they pay off purchases using their Fold Checking Account with qualified activity. In addition, cardholders can earn up to 10% back with top brands in the Fold rewards network, including Amazon, Target, Home Depot, Lowe’s, Uber/Uber Eats, Starbucks, DoorDash, Best Buy, and hundreds more. Fold’s reward system is designed to be simple and transparent, offering bitcoin-only rewards without the complexity of tokens, staking tiers, or exchange lock-ins. The integration with Stripe Issuing marks a key milestone in Fold’s product development and reflects growing demand for digital asset integration in consumer financial tools. With Stripe’s infrastructure in place, Fold is positioned to bring the Fold Bitcoin Credit Card™ to market with the reach and reliability users expect.
Faster Payments Council delivers practical guidance for banks enabling send-side instant payments across dual rails, with business continuity frameworks and interoperability considerations
The U.S. Faster Payments Council (FPC) announced the release of its latest industry resource, Operational Considerations for Instant Payments Send-Side Guidelines. Produced by the FPC’s Operational Considerations Work Group (OCWG), sponsored by Endava, the resource provides financial institutions (FIs) with best practices and detailed guidance for successfully implementing instant payments sending capabilities. Miriam Sheril, Head of Product – US at Form3 and FPC Operational Considerations Work Group Chair, said, “This specific deliverable focuses on helping banks get ready to send payments, with rich detail supported by clear guidelines that make it easy to read and apply for specific purposes.” The new guidelines cover a wide range of operational factors financial institutions must address when enabling send-side functionality, including liquidity management, user experience and interface design, real-time reconciliation, fraud mitigation, compliance requirements, and exception processing. The guidelines also address business continuity, staffing and training considerations, and the critical role of accountholder education and disclosures in ensuring success. In addition to operational details, the guidelines highlight interoperability and routing considerations for financial institutions using both the RTP® network and the FedNow® Service, outlining strategies for managing liquidity, ensuring uptime, and developing fallback options when real-time networks are unavailable. Reed Luhtanen, FPC Executive Director and CEO, said, “This new resource provides the industry with actionable insights that will help accelerate adoption and ensure instant payments are implemented in a way that prioritizes security, resiliency, and accountholder trust.”
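The dual-rail fallback strategy the guidelines describe can be sketched as a simple routing rule. The rail names (RTP, FedNow) are real, but the function and its availability checks are hypothetical, meant only to show the shape of the logic:

```python
# Hypothetical sketch of dual-rail routing with fallback, in the spirit
# of the FPC guidelines: prefer one real-time rail, fall back to the
# other, and queue the payment when neither rail is available.
def route_payment(payment, rails):
    # rails: ordered mapping of rail name -> current availability flag
    for rail, available in rails.items():
        if available:
            return f"sent via {rail}"
    return "queued for retry"  # fallback when no real-time rail is up

payment = {"amount": 250.00, "to": "acct-123"}
print(route_payment(payment, {"RTP": False, "FedNow": True}))   # sent via FedNow
print(route_payment(payment, {"RTP": False, "FedNow": False}))  # queued for retry
```

A production implementation would layer in the concerns the guidelines list, such as liquidity checks per rail, uptime monitoring, and accountholder-facing messaging when a payment is queued rather than sent instantly.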
Noon payments and Visa deploy FIDO-based Payment Passkey, ensuring seamless, fraud-resistant checkout experiences by keeping biometric data on devices across all payment channels
noon payments and Visa have partnered to launch Visa Payment Passkey, making noon payments the first payment service provider (PSP) globally to offer this solution to its merchants and their customers. The strategic collaboration introduces Fast Identity Online (FIDO)-based authentication for payments, leveraging the biometric capabilities of consumer devices to create a smoother, more secure, password-free online checkout experience for consumers and merchants in the Middle East. By leveraging Visa Acceptance Platform solutions such as Payer Authentication and Authorization, the initiative further promotes passkey readiness while ensuring a secure and seamless payment experience across the region. The solution provides a next-generation approach to securing online transactions by eliminating the need for consumers to rely on static passwords or one-time passcodes (OTPs) at checkout, instead using device unlocking methods such as biometrics (e.g., fingerprints or facial scans) or PINs. This not only streamlines the payment journey but also significantly enhances protection against fraud and scams such as phishing. Visa Payment Passkey, which is built on open industry standards from the FIDO Alliance, ensures sensitive biometric data never leaves the consumer’s device while delivering a secure, seamless and convenient payment journey across all devices and payment channels.