Greenlight has partnered with Q2 to embed its family-focused money app within the Q2 Digital Banking Platform. The Q2 Partner Accelerator program lets financial services companies pre-integrate their technology with the Q2 Digital Banking Platform, so financial institutions can work with these partners, purchase their solutions, and deploy standardized integrations to account holders. Through the integration, banks can deliver financial education and personal finance experiences directly to parents and kids in their own app. Greenlight’s financial literacy tools enable kids and teens to earn, save, spend wisely, give, and learn, and include curriculum-based content like Level Up™, Greenlight’s interactive financial literacy game. The integration also equips families to protect their finances, giving parents tools to monitor transactions, automate allowances, and control spending. With the Greenlight Q2 integration, families can more easily learn smart money habits and find safe financial tools in their everyday banking platform.
SEC’s Atkins says most crypto assets are not securities; plans purpose-fit disclosures for crypto securities, including for so-called ‘initial coin offerings,’ ‘airdrops’ and network rewards; could allow innovation with ‘super-apps’
SEC Chairman Paul Atkins said his agency is launching “Project Crypto,” aiming for a quick start on the new crypto policies urged by President Donald Trump. Atkins said the effort will be rooted in the recommendations of the President’s Working Group report issued Wednesday by the White House. He described it as “a commission-wide initiative to modernize the securities rules and regulations to enable America’s financial markets to move on-chain.” “I have directed the commission staff to draft clear and simple rules of the road for crypto asset distributions, custody, and trading for public notice and comment,” Atkins said. “While the commission staff works to finalize these regulations, the commission and its staff will in the coming months consider using interpretative, exemptive and other authorities to make sure that archaic rules and regulations do not smother innovation and entrepreneurship in America. Despite what the SEC has said in the past, most crypto assets are not securities,” Atkins said. He suggested his agency will now begin answering which assets fall into which category, working on “clear guidelines that market participants can use to determine whether a crypto asset is a security or subject to an investment contract.” For crypto securities, he said he has “asked staff to propose purpose-fit disclosures, exemptions, and safe harbors, including for so-called ‘initial coin offerings,’ ‘airdrops’ and network rewards.” Atkins also said he means to “allow market participants to innovate with ‘super-apps'” that offer a “broad range of products and services under one roof with a single license.”
Anthropic’s new Claude Opus 4.1 model scores 74.5% on SWE-bench Verified, surpassing OpenAI’s o3 at 69.1% and Google’s Gemini 2.5 Pro at 67.2%, putting it ahead of rivals in AI-powered coding assistance
Anthropic unveiled the latest version of its flagship artificial intelligence model on the same day that OpenAI released its first two open reasoning models since 2019. Claude Opus 4.1 is better at agentic tasks, coding and reasoning, according to a company blog post. Leaks of Claude Opus 4.1 had begun appearing the day before on the social platform X and TestingCatalog. Anthropic Chief Product Officer Mike Krieger said this release is different from previous model unveilings. Claude Opus 4.1 is a successor to Claude Opus 4, which launched May 22. Opus 4.1 shows gains on benchmarks such as SWE-bench Verified, a coding evaluation, where it scores two percentage points higher than the previous model. The 4.1 model is also strong in agentic terminal coding, scoring 43.3% on the Terminal-Bench benchmark compared with 39.2% for Opus 4, 30.2% for OpenAI’s o3, and 25.3% for Google’s Gemini 2.5 Pro. Customers such as Windsurf, a coding app being acquired by Cognition, and Japan’s Rakuten Group have reported quicker and more accurate completion of coding tasks using Claude Opus 4.1. The release came amid signs that rival OpenAI is nearing the debut of GPT-5.
McDonald’s launches a digital experience that lets customers virtually interact with animated avatars of its promotional characters, play interactive mini-games, explore themed virtual worlds and unlock in-game wearables by completing quests
McDonald’s is introducing a new version of its McDonaldland promotional concept that is quite different from the 1970s version featuring costumed TV performers. Starting Tuesday, Aug. 12, the chain will roll out “McDonaldland VR,” an interactive digital experience launching alongside its new “McDonaldland Meal,” which will be on McDonald’s menus for a limited time this summer. (The meal features a milkshake with what the company describes as a “surprise flavor”; a choice of a Quarter Pounder with Cheese or 10-piece Chicken McNuggets; fries; and one of six exclusive collectible tins featuring postcards, stickers, and more.) Inside McDonaldland VR, customers can virtually interact with fully animated avatars of McDonald’s promotional characters such as Grimace, Hamburglar and Birdie. Users can also play interactive mini-games, explore themed virtual worlds such as the Hamburger Patch, search for hidden digital collectibles such as Mt. McDonaldland Shake icons and various Easter eggs throughout the virtual world, and complete quests to unlock in-game wearables like the Mayor McCheese Hat, Burger Buddy Backpack and Ronald McDonald’s Guitar. McDonaldland VR will be available on Meta Horizon Worlds via the Meta Quest VR headset and via browser-based Web VR access.
Google’s tiny AI model brings advanced, quantization-ready AI that fits on smartphones—empowering efficient, on-device reasoning and quick adaptation to enable private, offline AI for specialized and enterprise tasks
Google’s DeepMind AI research team has unveiled a new open source AI model, Gemma 3 270M — far smaller than the 70 billion or more parameters of many frontier LLMs (parameters being the internal settings that govern a model’s behavior). While more parameters generally translate to a larger and more powerful model, Google’s focus with this model is nearly the opposite: high efficiency, giving developers a model small enough to run directly on smartphones and locally, without an internet connection, as shown in internal tests on a Pixel 9 Pro SoC. Yet the model is still capable of handling complex, domain-specific tasks and can be fine-tuned in mere minutes to fit an enterprise or indie developer’s needs. Google DeepMind Staff AI Developer Relations Engineer Omar Sanseviero added that Gemma 3 270M can also run directly in a user’s web browser, on a Raspberry Pi, and “in your toaster,” underscoring its ability to operate on very lightweight hardware. Gemma 3 270M combines 170 million embedding parameters — thanks to a large 256k-token vocabulary capable of handling rare and specific tokens — with 100 million transformer block parameters. According to Google, the architecture supports strong performance on instruction-following tasks right out of the box while staying small enough for rapid fine-tuning and deployment on devices with limited resources, including mobile hardware. One of the model’s defining strengths is its energy efficiency: in internal tests using the INT4-quantized model on a Pixel 9 Pro SoC, 25 conversations consumed just 0.75% of the device’s battery. This makes Gemma 3 270M a practical choice for on-device AI, particularly where privacy and offline functionality are important. The release includes both a pretrained and an instruction-tuned model, giving developers immediate utility for general instruction-following tasks.
Quantization-Aware Trained (QAT) checkpoints are also available, enabling INT4 precision with minimal performance loss and making the model production-ready for resource-constrained environments. Google frames Gemma 3 270M as part of a broader philosophy of choosing the right tool for the job rather than relying on raw model size. For functions like sentiment analysis, entity extraction, query routing, structured text generation, compliance checks, and creative writing, the company says a fine-tuned small model can deliver faster, more cost-effective results than a large general-purpose one. By fine-tuning a Gemma 3 4B model for multilingual content moderation, the team outperformed much larger proprietary systems. Gemma 3 270M is designed to enable similar success at an even smaller scale, supporting fleets of specialized models tailored to individual tasks.
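The QAT checkpoints are what make INT4 deployment practical. As a conceptual illustration only (this is not Google's QAT pipeline, which simulates low precision during training), a minimal sketch of symmetric 4-bit quantization shows why it shrinks memory use: each weight is mapped onto one of only 16 integer levels and reconstructed with a single scale factor.

```python
def quantize_int4(weights):
    """Symmetric per-tensor INT4 quantization: map floats onto integers in [-8, 7]."""
    scale = max(abs(w) for w in weights) / 7.0  # largest weight maps to level 7
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the 4-bit levels."""
    return [v * scale for v in q]

weights = [0.42, -1.37, 0.08, 2.10, -0.55]
q, scale = quantize_int4(weights)
restored = dequantize(q, scale)
# Worst-case rounding error is about half a quantization step (scale / 2)
max_err = max(abs(w - r) for w, r in zip(weights, restored))
```

The "minimal performance loss" Google cites comes from training the model to tolerate exactly this rounding, rather than quantizing naively after the fact.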
Lithic card issuance platform’s adoption of Visa’s account-level management API accelerates premium card enrollment, enhances rewards, and drives issuer economics without disrupting cardholder experience
Lithic, a card issuing and processing platform powering next-generation financial experiences, announced its integration with Visa Account Level Management (ALM) via Visa’s Card Program Enrollment (VCPE) API, offering fintech partners faster delivery of premium card programs and improved card economics. Visa ALM evaluates spend across multiple cards under a single account to determine premium program eligibility; this shift from BIN-level to account-level assessment helps improve program economics while accelerating benefit delivery. The integration enables Lithic clients to enroll eligible cardholders into Visa’s Signature and Signature Preferred programs, and more, without card re-issuance or disruption to the cardholder experience. Unlike legacy batch enrollment methods that delay benefit activation and interchange assignment for days, Lithic’s VCPE API integration processes card enrollments in near real time. “Integrating with Visa’s ALM system over the VCPE API allows Lithic partners to unlock enhanced revenue opportunities while maintaining a smooth, cardholder-friendly experience,” said Bo Jiang, CEO of Lithic. Fintechs can now manage card portfolios at the account level, paving the way for more personalized cardholder rewards, real-time tier upgrades, and premium Visa benefits like extended warranties, travel protections and concierge services. This “account for life” approach means cardholders keep their existing card number while their account adapts to their evolving spending patterns.
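The BIN-level-to-account-level shift can be sketched in a few lines. This is a hypothetical illustration, not Visa's or Lithic's actual logic; the tier names echo the article but the spend thresholds are invented for the example. The point is that two cards, each individually below a tier floor, can together qualify the account.

```python
from collections import defaultdict

# Hypothetical annual-spend floors (USD); Visa's real qualification
# criteria are not disclosed in the article.
TIER_THRESHOLDS = [("Signature Preferred", 50_000), ("Signature", 15_000)]

def account_tiers(transactions):
    """Assign a tier per account by aggregating spend across ALL cards
    under that account (account-level), not per card/BIN."""
    spend = defaultdict(float)
    for account_id, card_id, amount in transactions:
        spend[account_id] += amount
    return {
        account_id: next(
            (name for name, floor in TIER_THRESHOLDS if total >= floor),
            "Standard",
        )
        for account_id, total in spend.items()
    }

txns = [
    ("acct-1", "card-a", 9_000.0),  # each card alone misses the floor...
    ("acct-1", "card-b", 8_000.0),  # ...but the account total (17k) qualifies
    ("acct-2", "card-c", 2_000.0),
]
# account_tiers(txns) → {"acct-1": "Signature", "acct-2": "Standard"}
```

Under a per-card (BIN-level) view, acct-1 would stay Standard; aggregating at the account level is what unlocks the upgrade without re-issuing either card.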
Oracle launches MCP Server for Oracle Database to allow users to securely interact with core database platform and navigate complex data schemas using natural language, with the server translating questions into SQL queries
Oracle Corp unveiled MCP Server for Oracle Database, a new Model Context Protocol offering that brings AI-powered interaction directly into its core database platform, helping developers and analysts query and manage data using natural language. The new MCP server enables LLMs to securely connect to Oracle Database and interact with it contextually while respecting user permissions and roles. Users pose questions in natural language and the server translates them into SQL queries, so they can retrieve insights from data without writing complex code; this makes tasks such as performance diagnostics, schema summarization and query generation easier. The integration is designed to simplify working with SQL queries and navigating complex data schemas. With MCP Server for Oracle Database, AI agents can act as copilots for developers and analysts by generating code and analyzing performance. The protocol also supports read and write operations, allowing users to take action through the AI assistant, such as creating indexes, checking performance plans, or optimizing workloads. The AI agent operates strictly within the access boundaries of the authenticated user, using a private, dedicated schema to isolate the agent’s interactions from production data so it can generate summaries or sample datasets for language models without exposing full records.
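The question-to-SQL-to-result flow can be illustrated with a toy sketch. This is not Oracle's implementation: the real server uses an LLM for translation and Oracle's own permission model, while here a hard-coded template table stands in for the LLM and a "SELECT-only" check stands in for the authenticated user's read-only role, against an in-memory SQLite database.

```python
import sqlite3

# Toy stand-in for LLM translation: map a known question to SQL.
TEMPLATES = {
    "how many orders per customer": (
        "SELECT customer, COUNT(*) AS n FROM orders GROUP BY customer"
    ),
}

def answer(question, conn):
    """Translate a natural-language question to SQL and run it,
    refusing anything that is not a read query (crude permission gate)."""
    sql = TEMPLATES.get(question.lower().rstrip("?"))
    if sql is None:
        raise ValueError("question not recognized by this toy translator")
    if not sql.lstrip().upper().startswith("SELECT"):
        raise PermissionError("only read queries allowed for this user")
    return conn.execute(sql).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "ada"), (2, "ada"), (3, "lin")])
rows = answer("How many orders per customer?", conn)
# rows → [("ada", 2), ("lin", 1)]
```

The production system adds the pieces this sketch omits: contextual schema awareness, write operations where the user's role permits them, and the dedicated isolation schema described above.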
Anthropic unveils ‘auditing agents’ to test for AI misalignment, finding prompts that elicit “concerning” behaviors
Anthropic researchers developed auditing agents that achieved “impressive performance at auditing tasks, while also shedding light on their limitations.” The researchers stated that these agents, created during the pre-deployment testing of Claude Opus 4, enhanced alignment validation tests and enabled researchers to conduct multiple parallel audits at scale. The three agents they explored were: a tool-using investigator agent for open-ended investigation of models using chat, data analysis and interpretability tools; an evaluation agent that builds behavioral evaluations and can discriminate between models that have implanted behaviors and those that do not; and a breadth-first red-teaming agent, developed specifically for the Claude 4 alignment assessment, to discover implanted test behaviors. According to the researchers, the investigator agent successfully found the root cause of the issues 10-13% of the time; this performance improved to 42% “when using a super-agent approach that aggregates findings across multiple investigations.” The next step in most alignment audits is typically a behavioral evaluation, which determines whether the model consistently misbehaves in various settings. To test its evaluation agent, the researchers designed 10 models “given system prompt instructions to exhibit a specific behavior, such as excessive deference to the user.” They ran the agent five times per model and found that it correctly identified and flagged at least one quirk of each model. However, the agent sometimes failed to identify specific quirks consistently: it had trouble evaluating subtle quirks, such as self-promotion and research-sandbagging, as well as quirks that are difficult to elicit, like the Hardcode Test Cases quirk. The third agent performs behavioral red-teaming to find prompts that elicit “concerning” behaviors.
The breadth-first red-teaming agent converses with the target model (in Anthropic’s case, it was with Claude Opus 4), and this chat is then rated for alignment-relevant properties. The agent identified seven of the ten system quirks, but it also struggled with the same issues as the evaluator agent.
Crypto fund JellyC partners with crypto exchange OKX and Standard Chartered to trade cryptocurrencies using a tokenized money market fund as off-exchange collateral, without moving funds to the exchange upfront
JellyC, a digital asset investment manager with over $100 million in assets under management, said it joined a program with crypto exchange OKX and international bank Standard Chartered (STAN) that will allow it to trade cryptocurrencies while keeping its collateral secure off-platform. The company will use a Franklin Templeton tokenized money market fund (TMMF) as its preferred trading collateral. The collateral will be held by Standard Chartered. JellyC said the initiative will enhance its capital efficiency and reduce its direct exposure to OKX, potentially attracting institutional investments and mitigating the risk of an FTX-style blowup that destroyed billions in investor wealth. “Franklin Templeton’s natively minted on-chain TMMF provides legal certainty of fund ownership in real time, 24/7/365, and airdrops daily as new tokens,” JellyC CEO Michael Prendiville said in an email. “Marrying the Franklin TMMF with the Standard Chartered and OKX tripartite collateral structure elevates safety and soundness to a level akin to traditional finance, making this fit for purpose in a digital world.” Prendiville said the approach is suitable for the wealth and funds management sector, as well as Australia’s superannuation, or pension savings, industry, and caters to the demand for digital asset trading products that leverage established banking infrastructure to ensure secure and compliant capital deployment in the cryptocurrency market.
Digital marketing platform for financial advisors Wealthtender can automatically structure FAQ content to be more easily surfaced in Google AI Overviews and as direct answers in AI tools by embedding FAQ schema on advisor websites and profiles
Wealthtender, a digital marketing platform for financial advisors and wealth management firms, announced the launch of AI-Optimized FAQs, extending its range of features that support Search Engine Optimization (SEO) and Answer Engine Optimization (AEO). By embedding FAQ schema, specialized code recognized by search engines and answer engines, Wealthtender automatically structures FAQ content to be more easily surfaced in Google AI Overviews and as direct answers in AI tools. “With traditional search engines evolving to include AI Overviews and the rapid adoption of AI-powered tools like ChatGPT and Gemini, FAQs published on advisor websites and Wealthtender profiles, especially when enhanced with FAQ schema, are more powerful than ever for building trust, visibility, and credibility, and increasing the likelihood of an advisor landing on a prospect’s shortlist,” said Brian Thorp, Wealthtender founder and CEO. Upon activation of the AI-Optimized FAQs feature, advisors can publish up to 10 questions and answers on their Wealthtender profiles that showcase their expertise and areas of specialization, address common questions, and appear more prominently when prospective clients use Google, ChatGPT, Gemini, and other AI search tools to find and evaluate financial advisors.
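The "FAQ schema" in question is schema.org FAQPage markup, typically embedded as JSON-LD. Wealthtender generates it automatically; as a minimal sketch of the underlying format (the example question and answer are invented), a page's Q&A pairs map to a `FAQPage` object with a `mainEntity` list of `Question`/`Answer` nodes:

```python
import json

def faq_jsonld(faqs):
    """Build schema.org FAQPage JSON-LD from (question, answer) pairs --
    the structured-data format Google documents for FAQ rich results."""
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in faqs
        ],
    }

markup = faq_jsonld([
    ("Do you work with clients remotely?",
     "Yes, we serve clients nationwide via video meetings."),
])
# Embedded in the page head or body as a JSON-LD script tag:
script_tag = (
    '<script type="application/ld+json">' + json.dumps(markup) + "</script>"
)
```

Because the markup is machine-readable, search and answer engines can lift each question-answer pair directly, which is what makes the content eligible for AI Overviews and direct answers rather than ordinary snippet extraction.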