Google Wallet and Pay had a number of announcements, including some new features (like nearby passes) that end users will benefit from. A redesign of the Google Pay payment sheet adds a dark theme “for a more integrated feel.” We’re already seeing it live on our devices, with Google also adding “richer card art and names” to make card selection faster. Meanwhile, Digital IDs are a big focus for Google Wallet, with their availability helping power other capabilities. With zero-knowledge proofs, Google wants to allow “age verification without any possibility to link back to a user’s personal identity,” and the company will open-source these libraries. Currently, this is available to Android apps through the Credential Manager Jetpack library and on the mobile web, with desktop Chrome in testing. Google showed off a “seamless experience between Chrome on desktop and your Android device” that involves QR code scanning. Google Wallet is also adding Nearby Passes notifications that alert users when they’re near a specific location; these can be used for loyalty cards, offers, boarding passes, or event tickets. By highlighting these value-added benefits, such as exclusive offers or upgrade options, you can guide users back to your app or website, creating a dynamic gateway for ongoing user interaction. With an update to Auto Linked Passes, airlines with frequent flyer loyalty programs can “automatically push boarding passes to their users’ wallets once they check in for a flight.” Google is also adding passes that can include a picture of the user.
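For Android developers, the entry point Google points to is the Credential Manager Jetpack library. As a rough sketch only, assuming the digital-ID flow goes through androidx.credentials’ GetDigitalCredentialOption with a verifier-defined request JSON (details Google has not spelled out here), an age-verification request might look something like this in Kotlin:

```kotlin
// Rough sketch only: requesting a digital ID attribute (such as an over-18
// claim) through the Credential Manager Jetpack library (androidx.credentials).
// GetDigitalCredentialOption and the shape of the request JSON are assumptions
// for illustration; follow Google's Digital Credentials documentation for the
// real verifier protocol.
import android.content.Context
import androidx.credentials.CredentialManager
import androidx.credentials.GetCredentialRequest
import androidx.credentials.GetDigitalCredentialOption
import androidx.credentials.exceptions.GetCredentialException

suspend fun requestAgeVerification(activityContext: Context): Boolean {
    val credentialManager = CredentialManager.create(activityContext)

    // Placeholder payload; a real verifier would encode exactly which claims
    // (e.g. "age over 18") it needs, and nothing more.
    val requestJson = """{"providers":[{"protocol":"openid4vp","request":"..."}]}"""

    val request = GetCredentialRequest(
        listOf(GetDigitalCredentialOption(requestJson))
    )

    return try {
        // Launches the Wallet-backed selector; the returned credential carries
        // the verifier response (e.g. a zero-knowledge age proof), not the
        // user's full identity document.
        credentialManager.getCredential(activityContext, request)
        true
    } catch (e: GetCredentialException) {
        false // user cancelled or no matching credential available
    }
}
```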
Google is betting on a ‘world model’, an AI operating system that mirrors the human brain with a deep understanding of real-world dynamics, simulating cause and effect and learning by observing
Google is doubling down on what it calls “a world model,” an AI it aims to imbue with a deep understanding of real-world dynamics, and with it a vision for a universal assistant powered by Google. This concept of “a world model,” as articulated by Demis Hassabis, CEO of Google DeepMind, is about creating AI that learns the underlying principles of how the world works: simulating cause and effect, understanding intuitive physics, and ultimately learning by observing, much like a human does. In his words, “That is a model that can make plans and imagine new experiences by simulating aspects of the world, just like the brain does.” An early but significant indicator of this direction, easily overlooked by those not steeped in foundational AI research, is Google DeepMind’s work on models like Genie 2. This research shows how to generate interactive, two-dimensional game environments and playable worlds from varied prompts like images or text, offering a glimpse of an AI that can simulate and understand dynamic systems. Google also demoed a new app called Flow, a drag-and-drop filmmaking canvas that preserves character and camera consistency, built on Veo 3, the new model that layers physics-aware video with native audio. To Hassabis, that pairing is early proof that “world-model understanding is already leaking into creative tooling.” For robotics, he separately highlighted the fine-tuned Gemini Robotics model, arguing that “AI systems will need world models to operate effectively.” CEO Sundar Pichai reinforced this, citing Project Astra, which “explores the future capabilities of a universal AI assistant that can understand the world around you.” These Astra capabilities, like live video understanding and screen sharing, are now integrated into Gemini Live. Josh Woodward, who leads Google Labs and the Gemini App, detailed the app’s goal of being the “most personal, proactive, and powerful AI assistant.” He showcased how “personal context” (connecting search history, and soon Gmail and Calendar) enables Gemini to anticipate needs, such as providing personalized exam quizzes or custom explainer videos using analogies a user understands. This, Woodward emphasized, is “where we’re headed with Gemini,” enabled by the Gemini 2.5 Pro model allowing users to “think things into existence.” Gemini 2.5 Pro with “Deep Think” and the hyper-efficient 2.5 Flash (now with native audio and URL context grounding from the Gemini API) form the core intelligence. Google also quietly previewed Gemini Diffusion, signaling its willingness to move beyond pure Transformer stacks when that yields better efficiency or latency. Google’s path to potential leadership, its “end-run” around Microsoft’s enterprise hold, lies in redefining the game with a fundamentally superior, AI-native interaction paradigm. If Google delivers a truly “universal AI assistant” powered by a comprehensive world model, it could become the new indispensable layer, the effective operating system, for how users and businesses interact with technology.
Google updates media apps on Android Auto to let apps show different sections in the browsing UI and offer more layout flexibility for richer, more complete experiences
Google introduced two new changes to media apps on Android Auto. The first change is to the browsing interface in media apps. The new “SectionedItemTemplate” will allow apps to show different sections in the browsing UI, with Google’s example showing “Recent search” above a list of albums. The other change is to the “MediaPlaybackTemplate,” which is used as the “Now Playing” screen. It appears that Google is going to grant developers more flexibility in layout here, with the demo putting the media controls in the bottom right corner instead of the center, and in a different order than usual – although that might become the standard at some point. The UI isn’t drastically different or any harder to understand, but it’s a different layout than we usually see on Android Auto, which is actually a bit refreshing. Google is also allowing developers to build “richer and more complete experiences” for media apps using the “Car App Library.” This could make it easier to navigate some apps, as most media apps on Android Auto are shells of their smartphone counterparts in terms of functionality. This category is just in beta for now, though.
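For a sense of what sectioned browsing means in code, here is a minimal Kotlin sketch using Car App Library pieces that already ship today (ListTemplate, SectionedItemList, Row). The new SectionedItemTemplate isn’t documented in this announcement, so assume it follows a similar builder style rather than exactly this API:

```kotlin
// Sketch of a sectioned media-browse screen using existing Car App Library
// classes (androidx.car.app). Google's new SectionedItemTemplate presumably
// offers a comparable structure; this uses ListTemplate + SectionedItemList,
// which are available now.
import androidx.car.app.CarContext
import androidx.car.app.Screen
import androidx.car.app.model.Action
import androidx.car.app.model.ItemList
import androidx.car.app.model.ListTemplate
import androidx.car.app.model.Row
import androidx.car.app.model.SectionedItemList
import androidx.car.app.model.Template

class BrowseScreen(carContext: CarContext) : Screen(carContext) {

    override fun onGetTemplate(): Template {
        // "Recent search" section, mirroring Google's example.
        val recentSearches = ItemList.Builder()
            .addItem(Row.Builder().setTitle("Daft Punk").build())
            .addItem(Row.Builder().setTitle("Lo-fi beats").build())
            .build()

        // Albums section shown below the recent searches.
        val albums = ItemList.Builder()
            .addItem(Row.Builder().setTitle("Discovery").build())
            .addItem(Row.Builder().setTitle("Random Access Memories").build())
            .build()

        return ListTemplate.Builder()
            .setTitle("Browse")
            .setHeaderAction(Action.BACK)
            .addSectionedList(SectionedItemList.create(recentSearches, "Recent search"))
            .addSectionedList(SectionedItemList.create(albums, "Albums"))
            .build()
    }
}
```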
Apple Watch growth lags with a modest 5% increase over 2024, as rivals push hard on health features & lower prices while Apple focuses on enhancing the stickiness of its ecosystem
The global wearable band market grew 13% year over year in the first quarter of 2025, reaching 46.6 million shipments, according to new data from Canalys. The rebound was driven by broad demand across categories, especially in emerging markets, and a low comparison base from the first quarter of 2024. Xiaomi surged back into the lead with 8.7 million units shipped, up 44% from 2024. Apple came in second with 7.6 million Apple Watch shipments, a modest 5% increase from 2024. That’s in line with seasonal expectations, as the first quarter tends to be the furthest point from Apple’s typical September refresh cycle. Instead of chasing hardware overhauls, Apple is focusing on enhancing the stickiness of its ecosystem. Huawei held third place with 7.1 million units shipped, a 36% year-over-year gain. Its GT and Fit series found traction outside China, supported by a wider rollout of the Huawei Health app. Samsung followed with 4.9 million shipments, a sharp 74% increase driven by a dual-market strategy. Garmin rounded out the top five with 1.8 million units shipped, up 10%. The launch of Garmin Connect+, a subscription platform for deeper health insights and training tools, signals the brand’s move toward recurring revenue. As hardware margins tighten, vendors are shifting focus from features to ecosystems. Huawei is taking a more health-centric approach, building a closed-loop system through its Health app. Price, battery life, and health tracking remain the top buying factors. But as ecosystems mature and software capabilities expand, vendors that offer reliable integration and trusted data handling will have the edge. Xiaomi’s rise highlights how affordable devices, when paired with a growing ecosystem, can take the lead even against brands with a head start.
PayPal-owned Honey drops to 15 million users on Chrome, down 5 million in less than six months after being exposed for shady tactics
In late 2024, the popular Chrome extension Honey was exposed for shady tactics, including simply not doing what it promised to do, and uninstalls have followed suit: Honey has now dropped to 15 million users on Chrome from a peak of over 20 million. An exposé from YouTube channel MegaLag previously highlighted two things about PayPal-owned Honey: it didn’t actually surface the best coupons as promised, and it hijacked affiliate codes. The extension had already dropped to 16 million users as of March 2025, and as of this week it sits at 15 million, down from over 20 million at its peak prior to the exposé. This puts Honey well below the “17+ million members” it advertises on the Chrome Web Store, though it’s still very possible the extension has a couple million more members across browsers such as Microsoft Edge, Safari, and Firefox. Mozilla shows around 460,000 Honey users on its browser, while Apple doesn’t show a figure. Microsoft says Honey has 5,000,000 users on Edge, though that number may be inflated by Microsoft’s own questionable tactic of having Edge copy over Chrome data when installed on the same machine. Regardless, it’s clear that these numbers are dropping. The immediate impact of the MegaLag video has passed, with the video gaining roughly 3 million views since January. Google has since implemented restrictions on extensions following the debacle, which led to Honey making changes to some of its biggest abuses of affiliate codes.
Gmail can now automatically show “Gemini summary cards” that summarize all the key points from an email thread and refresh when people reply to stay fresh
Last year, Gmail for Android and iOS introduced a summarize feature, and Gemini can now surface summary cards automatically. At launch, the Gemini-powered “Summarize this email” capability was a button underneath the subject line that you had to manually tap. Doing so would slide up the Gemini sheet with the requested bullet points. On mobile, Gmail will now show “Gemini summary cards” automatically “when a summary could be helpful, such as with longer email threads.” You can collapse the card from the top-right corner if it’s not helpful. Google notes how “Gemini summarizes all the key points from the email thread and refreshes it when people reply,” so it’s always fresh. This launch was detailed in the “May Workspace feature drop.” Google also highlighted the availability of Gemini summaries in Google Chat’s Home view and for documents. There’s also the Google Docs summary building block. Google is also recapping Mind Maps and Discover sources in NotebookLM. In Meet, Google is highlighting Dynamic layouts, as well as Studio look, lighting, and sound.
Apple’s LLM for Siri with 150 billion parameters reportedly approaches the quality of ChatGPT’s recent releases but shows higher levels of hallucination
A new report claims that internally, Apple has already been testing large language models for Siri that are vastly more powerful than the shipping Apple Intelligence, but executives disagree about when to release them. Apple is said to be testing models with 3 billion, 7 billion, 33 billion, and 150 billion parameters. For comparison, Apple said in 2024 that Apple Intelligence’s foundation language models were on the order of 3 billion parameters. That version of Apple Intelligence is intentionally small so that it can run on-device instead of requiring all prompts and requests to be sent to the cloud. The larger versions are cloud-based, and the 150 billion parameter model is said to approach the quality of ChatGPT’s most recent releases. However, there reportedly remain concerns over AI hallucinations, and Apple is said to have held off releasing this Apple Intelligence model in part because the level of hallucinations is still too high. There is said to be another reason for not yet shipping this cloud-based and much improved Siri chatbot, though: it is claimed that there are philosophical differences among Apple’s senior executives over the release.
Google’s new app lets users find, download, and run openly available AI models that generate images, answer questions, write and edit code, and more on their phones without needing an internet connection
Google quietly released an app that lets users run a range of openly available AI models from the AI dev platform Hugging Face on their phones. Called Google AI Edge Gallery, the app is available for Android and will soon come to iOS. It allows users to find, download, and run compatible models that generate images, answer questions, write and edit code, and more. The models run offline, without needing an internet connection, tapping into supported phones’ processors. Google AI Edge Gallery, which Google is calling an “experimental Alpha release,” can be downloaded from GitHub. The home screen shows shortcuts to AI tasks and capabilities like “Ask Image” and “AI Chat.” Tapping on a capability pulls up a list of models suited for the task, such as Google’s Gemma 3n. Google AI Edge Gallery also provides a “Prompt Lab” users can use to kick off “single-turn” tasks powered by models, like summarizing and rewriting text. The Prompt Lab comes with several task templates and configurable settings to fine-tune the models’ behaviors. Your mileage may vary in terms of performance, Google warns. Modern devices with more powerful hardware will predictably run models faster, but the model size also matters. Larger models will take more time to complete a task — say, answering a question about an image — than smaller models. Google’s inviting members of the developer community to give feedback on the Google AI Edge Gallery experience. The app is under an Apache 2.0 license, meaning it can be used in most contexts — commercial or otherwise — without restriction.
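For developers who want the same on-device behavior in their own apps, Google’s AI Edge stack exposes it through the MediaPipe LLM Inference API. Assuming that is essentially the layer the Gallery app wraps (not confirmed in the app’s description), a minimal Kotlin sketch of running a downloaded model offline might look like this, with the model path and token limit as placeholders:

```kotlin
// Minimal sketch of on-device text generation with the MediaPipe LLM Inference
// API (com.google.mediapipe:tasks-genai), part of Google's AI Edge stack.
// Whether the Gallery app wraps exactly this API is an assumption, and the
// model path and token limit below are placeholders.
import android.content.Context
import com.google.mediapipe.tasks.genai.llminference.LlmInference

fun summarizeOffline(context: Context, text: String): String {
    val options = LlmInference.LlmInferenceOptions.builder()
        // Model bundle previously downloaded to the device (e.g. a Gemma
        // .task file fetched from Hugging Face).
        .setModelPath("/data/local/tmp/llm/gemma3-1b-it.task")
        .setMaxTokens(512) // cap on prompt + response length
        .build()

    // Everything below runs on the phone's CPU/GPU; no network is required.
    val llm = LlmInference.createFromOptions(context, options)
    try {
        return llm.generateResponse("Summarize the following text:\n$text")
    } finally {
        llm.close()
    }
}
```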
Google is aiming to control a distributed AI network and win the data privacy war through its experimental Android app that enables running gen AI models entirely on the edge
Google has quietly released an experimental Android application that enables users to run sophisticated AI models directly on their smartphones without requiring an internet connection. The app, called AI Edge Gallery, allows users to download and execute AI models from the popular Hugging Face platform entirely on their devices, enabling tasks such as image analysis, text generation, coding assistance, and multi-turn conversations while keeping all data processing local. The application, released under an open-source Apache 2.0 license and available through GitHub rather than official app stores, represents Google’s latest effort to democratize access to advanced AI capabilities while addressing growing privacy concerns about cloud-based artificial intelligence services. As Google describes it, “The Google AI Edge Gallery is an experimental app that puts the power of cutting-edge Generative AI models directly into your hands, running entirely on your Android devices.” At the heart of the offering is Google’s Gemma 3 model, a compact 529-megabyte language model that can process up to 2,585 tokens per second during prefill inference on mobile GPUs. This performance enables sub-second response times for tasks like text generation and image analysis, making the experience comparable to cloud-based alternatives. The app includes three core capabilities: AI Chat for multi-turn conversations, Ask Image for visual question-answering, and Prompt Lab for single-turn tasks such as text summarization, code generation, and content rewriting. Users can switch between different models to compare performance and capabilities, with real-time benchmarks showing metrics like time-to-first-token and decode speed. The local processing approach addresses growing concerns about data privacy in AI applications, particularly in industries handling sensitive information; by keeping data on-device, organizations can maintain compliance with privacy regulations while still leveraging AI capabilities. Google is not alone in pushing on-device AI: Qualcomm’s AI Engine, built into Snapdragon chips, drives voice recognition and smart assistants in Android smartphones, while Samsung uses embedded neural processing units in its Galaxy devices. By open-sourcing the technology and making it widely available, Google ensures broad adoption while maintaining control over the underlying infrastructure that powers the ecosystem. Google open-sources its tools and makes on-device AI widely available because it believes controlling tomorrow’s AI infrastructure matters more than owning today’s data centers. If the strategy works, every smartphone becomes part of Google’s distributed AI network. That possibility makes this quiet app launch far more important than its experimental label suggests.
Google has a new voice input waveform for AI Mode; the transcription appears in real-time below it
Ahead of Search Live, Google is giving AI Mode a straightforward voice input feature with a particularly delightful animation. The main input screen (from the Search bar shortcut) now has a microphone icon next to the submit button, joining the gallery shortcut for adding existing images and Google Lens on the left side. Tapping it brings up an arc-shaped speech-to-text indicator that alternates between the AI Mode colors as you talk, with the transcription appearing in real time below it. This replaces a more generic rectangular version that was available at launch on the AI Mode homepage. Search Live will use this same animation for the immersive conversation experience, and it’s nice that we’re getting it ahead of time. Google has long used the four bouncing dots that morph into a waveform for voice input in Search and Assistant. This new one makes for a nice modernization, and contributes to how AI Mode is one of the nicest interfaces to come out of Google Search in quite some time.