In late 2024, the popular Chrome extension Honey was exposed for shady tactics, including simply not doing what it promised to do, and uninstalls have followed. An exposé from YouTube channel MegaLag highlighted two problems with the PayPal-owned extension. Honey had dropped to 16 million Chrome users as of March 2025, and as of this week it sits at 15 million, down from over 20 million at its peak prior to the exposé. That puts Honey well below the “17+ million members” it advertises on the Chrome Web Store, though it may still have a couple million more users across browsers such as Microsoft Edge, Safari, and Firefox. Mozilla shows around 460,000 Honey users on Firefox, while Apple doesn’t publish a figure. Microsoft lists 5,000,000 users on Edge, though that number may be inflated by Edge’s own questionable practice of copying Chrome extensions when installed on the same machine. Regardless, the trend is clearly downward. The immediate impact of the MegaLag video, which has gathered roughly 3 million views since January, has passed. Google has since implemented restrictions on extensions, which forced Honey to change some of its most egregious abuses of affiliate codes.
Gmail can now automatically show “Gemini summary cards” that summarize the key points from an email thread and refresh when people reply to stay current
Last year, Gmail for Android and iOS introduced a summarize feature, and Gemini can now surface summary cards automatically. At launch, the Gemini-powered “Summarize this email” capability was a button underneath the subject line that you had to tap manually. Doing so would slide up the Gemini sheet with the requested bullet points. On mobile, Gmail will now show “Gemini summary cards” automatically “when a summary could be helpful, such as with longer email threads.” You can collapse the card from the top-right corner if it’s not helpful. Google notes that “Gemini summarizes all the key points from the email thread and refreshes it when people reply,” so it’s always fresh. This launch was detailed in the May Workspace feature drop, where Google also highlighted the availability of Gemini summaries in Google Chat’s Home view and for documents, as well as the Google Docs summary building block. The drop also recaps Mind Maps and Discover sources in NotebookLM, and, in Meet, Dynamic layouts plus Studio look, lighting, and sound.
Apple’s LLM for Siri with 150 billion parameters reportedly matches the quality of ChatGPT’s recent releases but shows a higher rate of hallucination
A new report claims that Apple has already been internally testing large language models for Siri that are vastly more powerful than the shipping Apple Intelligence, but executives disagree about when to release them. Apple is said to be testing models with 3 billion, 7 billion, 33 billion, and 150 billion parameters. For comparison, Apple said in 2024 that Apple Intelligence’s foundation language models were on the order of 3 billion parameters. That version of Apple Intelligence is intentionally small so it can run on-device instead of requiring all prompts and requests to be sent to the cloud. The larger versions are cloud-based, and the 150 billion parameter model is said to approach the quality of ChatGPT’s most recent releases. However, concerns reportedly remain over AI hallucinations, and Apple is said to have held off releasing this model in part because the hallucination rate is still too high. There is said to be another reason for not yet shipping this much-improved, cloud-based Siri chatbot: philosophical differences between Apple’s senior executives over the release.
Google’s new app lets users find, download, and run openly available AI models that generate images, answer questions, write and edit code, and more on their phones without needing an internet connection
Google quietly released an app that lets users run a range of openly available AI models from the AI dev platform Hugging Face on their phones. Called Google AI Edge Gallery, the app is available for Android and will soon come to iOS. It allows users to find, download, and run compatible models that generate images, answer questions, write and edit code, and more. The models run offline, without needing an internet connection, tapping into supported phones’ processors. Google AI Edge Gallery, which Google is calling an “experimental Alpha release,” can be downloaded from GitHub. The home screen shows shortcuts to AI tasks and capabilities like “Ask Image” and “AI Chat.” Tapping on a capability pulls up a list of models suited for the task, such as Google’s Gemma 3n. Google AI Edge Gallery also provides a “Prompt Lab” users can use to kick off “single-turn” tasks powered by models, like summarizing and rewriting text. The Prompt Lab comes with several task templates and configurable settings to fine-tune the models’ behaviors. Your mileage may vary in terms of performance, Google warns. Modern devices with more powerful hardware will predictably run models faster, but model size also matters: larger models will take more time to complete a task — say, answering a question about an image — than smaller ones. Google is inviting members of the developer community to give feedback on the Google AI Edge Gallery experience. The app is under an Apache 2.0 license, meaning it can be used in most contexts — commercial or otherwise — without restriction.
Google is aiming to control the distributed AI network and win data privacy war through its experimental Android app that enables running gen AI models entirely on the edge
Google has quietly released an experimental Android application that enables users to run sophisticated AI models directly on their smartphones without requiring an internet connection. The app, called AI Edge Gallery, allows users to download and execute AI models from the popular Hugging Face platform entirely on their devices, enabling tasks such as image analysis, text generation, coding assistance, and multi-turn conversations while keeping all data processing local. The application, released under an open-source Apache 2.0 license and available through GitHub rather than official app stores, represents Google’s latest effort to democratize access to advanced AI capabilities while addressing growing privacy concerns about cloud-based artificial intelligence services. Google describes it as “an experimental app that puts the power of cutting-edge Generative AI models directly into your hands, running entirely on your Android devices.” At the heart of the offering is Google’s Gemma 3 model, a compact 529-megabyte language model that can process up to 2,585 tokens per second during prefill inference on mobile GPUs. This performance enables sub-second response times for tasks like text generation and image analysis, making the experience comparable to cloud-based alternatives. The app includes three core capabilities: AI Chat for multi-turn conversations, Ask Image for visual question-answering, and Prompt Lab for single-turn tasks such as text summarization, code generation, and content rewriting. Users can switch between models to compare performance and capabilities, with real-time benchmarks showing metrics like time-to-first-token and decode speed. The local processing approach addresses growing concerns about data privacy in AI applications, particularly in industries handling sensitive information. By keeping data on-device, organizations can maintain compliance with privacy regulations while still leveraging AI capabilities.
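As a rough back-of-the-envelope check on those throughput claims, time-to-first-token scales with prompt length divided by prefill speed. A minimal sketch, where the 2,585 tokens/s figure comes from the article above and the prompt sizes are purely illustrative:

```python
# Estimate prefill latency (time-to-first-token) from prompt length and
# prefill throughput. Decode time for the generated reply adds more on top.

def prefill_seconds(prompt_tokens: int, tokens_per_second: float) -> float:
    """Estimated seconds to process the prompt before the first output token."""
    return prompt_tokens / tokens_per_second

# Peak prefill speed reported for Gemma 3 on mobile GPUs (per the article).
GEMMA3_PREFILL_TPS = 2585

for prompt in (256, 1024, 2048):  # illustrative prompt sizes
    t = prefill_seconds(prompt, GEMMA3_PREFILL_TPS)
    print(f"{prompt} tokens -> ~{t:.2f} s prefill")
```

At that throughput, even a 2,048-token prompt prefills in under a second, which is why the article can describe on-device responses as comparable to cloud-based alternatives.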
Qualcomm’s AI Engine, built into Snapdragon chips, drives voice recognition and smart assistants in Android smartphones, while Samsung uses embedded neural processing units in Galaxy devices. By open-sourcing the technology and making it widely available, Google ensures broad adoption while maintaining control over the underlying infrastructure that powers the entire ecosystem. Google open-sources its tools and makes on-device AI widely available because it believes controlling tomorrow’s AI infrastructure matters more than owning today’s data centers. If the strategy works, every smartphone becomes part of Google’s distributed AI network. That possibility makes this quiet app launch far more important than its experimental label suggests.
Google has a new voice input waveform for AI Mode; the transcription appears in real time below it
Ahead of Search Live, Google is giving AI Mode a straightforward voice input feature with a particularly delightful animation. The main input screen (from the Search bar shortcut) now has a microphone icon next to the submit button, joining the gallery button for adding existing images and Google Lens on the left side. Upon tapping it, you get an arc-shaped speech-to-text indicator that alternates between the AI Mode colors as you talk, with the transcription appearing in real time below it. This replaces a more generic rectangular version that was available at launch on the AI Mode homepage. Search Live will use this same animation for the immersive conversation experience, and it’s nice that we’re getting it ahead of time. Google has long used the four bouncing dots that morph into a waveform for voice input in Search and Assistant. This new one makes for a nice modernization, and contributes to how AI Mode is one of the nicest interfaces to come out of Google Search in quite some time.
The 2025 Apple Design Awards include the text-to-audio app Speechify and Play, a tool that lets users build interactive prototypes with SwiftUI frameworks
Apple announced the winners and finalists of this year’s Apple Design Awards, celebrating 12 standout apps and games that set a high bar in design.
App: CapWords
Developer: HappyPlan Tech (China)
CapWords is a dynamic language learning tool that transforms images of everyday objects into interactive stickers — helping learners explore new words in a more intuitive and visual way. Supporting nine languages, the app is a delightful way to learn independently while immersing users in their surroundings.
Game: Balatro
Developer: LocalThunk (Canada)
Balatro is a satisfying fusion of poker, solitaire, and deck-building with roguelike elements. Players combine poker hands with joker cards — each with their own unique abilities — to create varied synergies. Hallmarked by clever details and gripping gameplay, Balatro challenges players to advance their scores by crafting original decks to beat devious blinds and secure victory.
App: Play
Developer: Rabbit 3 Times (United States)
Play is a sophisticated yet accessible tool that lets users build interactive prototypes with SwiftUI frameworks. Its thoughtfully crafted user interface is both powerful and easy to navigate, helping designers create interactive prototypes and collaborate across Mac and iPhone, all synced in real time for seamless creativity.
Game: PBJ — The Musical
Developer: Philipp Stollenmayer (Germany)
PBJ — The Musical is snack-based Shakespeare, a charming game that tells the story of Romeo and Juliet with condiments. PBJ creatively mixes rhythm-based gameplay with narrative storytelling and a wonderful soundtrack. And with haptic feedback, clever camera work, and fun dialogue, it’s joyful from the start.
App: Taobao
Developer: Zhejiang Taobao Network (China)
Taobao offers a convenient and engaging shopping experience on Apple Vision Pro, providing incredible 3D models comparable to their real-life counterparts. The immersive experience enhances shopping for users, taking into consideration placement, position, controls, size, and function, and giving people the ability to compare items side by side from an extensive selection of products.
Game: DREDGE
Developer: Black Salt Games (New Zealand)
DREDGE blends slow-burn horror with exploration and adventure. Players take the helm of a fishing boat to navigate eerie islands, uncover strange wildlife, and piece together a haunting mystery. The game offers seamless interactions and a fun world of hidden treasures across iPhone, iPad, and Mac.
App: Speechify
Developer: Speechify (United States)
With support for hundreds of voices in over 50 languages, Speechify is a powerful tool that transforms written text into audio with ease. Designed with accessibility at its core, and by offering features like Dynamic Type and VoiceOver, the app serves as a vital resource for people with dyslexia, ADHD, and low vision, as well as anyone who learns best by listening.
Game: Art of Fauna
Developer: Klemens Strasser (Austria)
Beautifully illustrated and mindfully designed, Art of Fauna is a puzzle game that blends vintage-inspired wildlife imagery with a deep commitment to inclusivity and conservation. Players can solve puzzles by rearranging visual elements or reordering descriptive text, making gameplay uniquely interactive. With features like full VoiceOver support and haptic feedback, accessibility is woven throughout the experience.
App: Watch Duty
Developer: Sherwood Forestry Service (United States)
During devastating wildfires in Southern California, Watch Duty once again served as a lifeline, delivering up-to-the-minute updates, evacuation information, and critical resources with clarity and reliability. The app reports information like active fire perimeters and progress, wind speed and direction, and evacuation orders.
Game: Neva
Developer: Devolver Digital (United States)
Visually stunning and emotionally resonant, Neva is an action-adventure tale that follows a girl and her wolf companion through a beautiful world in decline. As the seasons shift, so does their relationship — offering a quiet meditation on care, connection, and the cost of environmental loss. With themes of friendship and leadership, players guide the pair through breathtaking landscapes, and a story that is as moving as it is timely.
App: Feather: Draw in 3D
Developer: Sketchsoft (South Korea)
This drawing tool allows users to transform 2D designs into 3D masterpieces. Developed with a focus on creativity and user experience, Feather makes it easy for people of all skill levels to build advanced 3D modeling designs on iPad, drawing on touch and Apple Pencil interactions to help users bring their imaginations to life.
Game: Infinity Nikki
Developer: Infold Games (Singapore)
With its enchanted realm of color, detail, and rendering, Infinity Nikki is a true visual achievement. This cozy open-world adventure challenges players to collect wonderful things, and is packed with magical outfits, whimsical creatures, and unexpected moments.
Google Photos albums redesign adds Material 3 Expressive toolbar, QR code sharing for albums
Google’s Material 3 Expressive for Photos — a redesign of the albums view — is now live on Android. QR code sharing for albums is also now available. Upon opening, the previous design showed an “Add description” field and buttons for Share, Add photos, and Order photos underneath the album cover. That is now gone and replaced by a Material 3 Expressive toolbar. That floating toolbar is how you Share, Add photos, and Edit the album. The latter was previously in the overflow menu, which has been tweaked and gains some icons. Google has also elevated the Sort photos button to the top bar. The Edit view has a docked toolbar to Add photos, text, and locations, which was previously in the top bar. You can add the description here, while there are two cards up top for editing Highlights and Album cover. There are more than a few Material 3 Expressive components with this albums redesign, but the full bleed design for the cover is not here yet. Overall, it’s a bit cleaner than before, with M3E giving apps the opportunity to consolidate things. The rest of Google Photos has yet to be updated, with that possibly coming later this month with the redesigned editor interface. Meanwhile, opening the share sheet shows the new “Show QR Code” option that was announced last week. The design uses a Material 3 shape with the Google Photos logo at the center. We’re seeing both the albums redesign and QR code sharing with Google Photos 7.30 on Android.
Pret A Manger secures global payment processing with store-and-forward functionality that helps ensure uninterrupted transaction processing even during connectivity disruptions
Pret A Manger is leveraging the FreedomPay payment acceptance solution in the U.S., U.K., and Hong Kong, with further deployments planned in Europe later in 2025. With FreedomPay’s payment acceptance technology, Pret A Manger seeks consistently available and operational processing of customer payments. The platform’s store-and-forward functionality helps ensure uninterrupted transaction processing even during connectivity disruptions, helping the retailer maintain business continuity and maximize revenue. Chris Matthews, global retail technology director at Pret A Manger, said: “Partnering with FreedomPay allows us to leverage their best-in-class technology to ensure secure and reliable payment processing, no matter where our customers are in the world. This partnership is a key ingredient in our recipe for international success, allowing us to focus on what we do best: delivering delicious, freshly made food and organic coffee.” With the help of Dallas Holdings Limited, the retailer opened its first shop in Los Angeles in 2024 as part of its U.S. growth trajectory, with the chain aiming to reach 300 stores by 2029.
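Store-and-forward is a general resilience pattern rather than anything specific to FreedomPay’s platform: transactions captured while the payment processor is unreachable are queued locally and submitted once connectivity returns. A minimal sketch of the idea, with all class and function names hypothetical (real systems add encryption, persistent storage, offline floor limits, and duplicate-submission checks):

```python
from collections import deque

class StoreAndForwardQueue:
    """Illustrative sketch of store-and-forward payment capture:
    approve online when possible, otherwise queue the transaction
    locally and forward it later."""

    def __init__(self, send):
        self.send = send        # callable that submits one transaction upstream
        self.pending = deque()  # transactions captured while offline

    def capture(self, txn, online: bool):
        if online:
            try:
                self.send(txn)
                return "approved-online"
            except ConnectionError:
                pass  # link dropped mid-request; fall through to local storage
        self.pending.append(txn)
        return "stored-offline"  # checkout completes; customer is not kept waiting

    def flush(self):
        """Forward everything queued once connectivity is restored."""
        sent = 0
        while self.pending:
            self.send(self.pending.popleft())
            sent += 1
        return sent
```

The design choice that matters for business continuity is in `capture`: the point-of-sale never blocks on the network, so a connectivity disruption degrades settlement timing rather than halting sales.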
iOS 18 saw below-average adoption despite Apple Intelligence, reaching 82% against a ten-year average of 83.2%; iOS 14 saw the highest adoption rate at 90%.
By January 2025, iOS 18 appeared to be ahead of its predecessor, reaching 76% of all compatible iPhones a month earlier than iOS 17 did the year before, with users adopting iOS 18.1 at twice the rate they adopted 17.1 in the year-ago quarter. Apple says that iOS 18 is currently installed on 82% of all compatible iPhones. In announcing that figure, Apple attributed the adoption rate to users being aware of the benefits of updating, plus how simple the company has made it to update. Comparing Apple’s own figures from the last ten years, however, iOS 18 comes in just under the average of 83.2%. In the last decade, iOS 14 saw the highest adoption rate with 90%, while iOS 17 scored the lowest with 77%. Since 2019, the company has separately recorded the iOS adoption rate for iPhones released in the previous four years. It’s not clear why it introduced this metric, or why it chose four years, but the figures do not materially help iOS 18’s case. Using only this last-four-years data from 2019 onward, the average adoption rate is 87.9%, which means iOS 18’s figure of 88% is just 0.1 points above the average. The minimum adoption rate during this period and for this range of iPhones is 85%, achieved by both iOS 12 and iOS 14, while the maximum was 92% for iOS 13. Overall, the iOS adoption rate for all compatible devices is reasonably steady, having never fallen below 77% in the last ten years, and never risen above 90%.
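The comparisons above reduce to simple differences against two averages. A quick check using only the figures quoted in the piece (note that naive float subtraction like 88 - 87.9 yields 0.0999…, so the results are rounded):

```python
# Sanity-check the adoption-rate comparisons using only figures from the text.
all_devices_avg = 83.2   # ten-year average, all compatible devices
ios18_all = 82.0         # iOS 18, all compatible devices

last4_avg = 87.9         # average for iPhones from the previous four years
ios18_last4 = 88.0       # iOS 18 on that newer-device cohort

print(round(all_devices_avg - ios18_all, 1))  # -> 1.2 (iOS 18 trails the broad average)
print(round(ios18_last4 - last4_avg, 1))      # -> 0.1 (but edges out the newer-device average)
```

So on the broad measure iOS 18 trails by 1.2 percentage points, while on the newer-device measure it leads by only 0.1, which is why neither dataset makes a strong case for an Apple Intelligence adoption boost.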