Last year, Gmail for Android and iOS introduced a summarize feature, and Gemini can now surface summary cards automatically. At launch, the Gemini-powered “Summarize this email” capability was a button underneath the subject line that you had to tap manually; doing so would slide up a Gemini sheet with the requested bullet points. On mobile, Gmail will now show “Gemini summary cards” automatically “when a summary could be helpful, such as with longer email threads.” You can collapse the card from the top-right corner if it’s not useful. Google notes that “Gemini summarizes all the key points from the email thread and refreshes it when people reply,” so the summary stays current. This launch was detailed in the “May Workspace feature drop.” Google also highlighted the availability of Gemini summaries in Google Chat’s Home view and for documents, and there’s also the Google Docs summary building block. Google is also recapping Mind Maps and Discover sources in NotebookLM. In Meet, Google is highlighting Dynamic layouts, as well as Studio look, lighting, and sound.
Google’s new app lets users find, download, and run openly available AI models that generate images, answer questions, write and edit code, and more on their phones without needing an internet connection
Google quietly released an app that lets users run a range of openly available AI models from the AI dev platform Hugging Face on their phones. Called Google AI Edge Gallery, the app is available for Android and will soon come to iOS. It allows users to find, download, and run compatible models that generate images, answer questions, write and edit code, and more. The models run offline, without needing an internet connection, tapping into supported phones’ processors. Google AI Edge Gallery, which Google is calling an “experimental Alpha release,” can be downloaded from GitHub. The home screen shows shortcuts to AI tasks and capabilities like “Ask Image” and “AI Chat.” Tapping on a capability pulls up a list of models suited for the task, such as Google’s Gemma 3n. Google AI Edge Gallery also provides a “Prompt Lab” that users can use to kick off “single-turn” tasks powered by models, like summarizing and rewriting text. The Prompt Lab comes with several task templates and configurable settings to fine-tune the models’ behaviors. Your mileage may vary in terms of performance, Google warns. Modern devices with more powerful hardware will predictably run models faster, but model size also matters: larger models will take more time to complete a task (say, answering a question about an image) than smaller ones. Google is inviting members of the developer community to give feedback on the Google AI Edge Gallery experience. The app is under an Apache 2.0 license, meaning it can be used in most contexts, commercial or otherwise, without restriction.
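The Gallery’s internals aren’t documented in this post, but the offline flow it describes can be sketched with Google’s MediaPipe LLM Inference API, an on-device runtime in the same AI Edge family. Everything below (the model file path, the token cap, and the prompt) is an illustrative assumption rather than the app’s actual configuration.

```kotlin
import android.content.Context
import com.google.mediapipe.tasks.genai.llminference.LlmInference

// Minimal sketch: generate text fully on-device from a locally downloaded
// model file, with no network call involved. Path and limits are placeholders.
fun summarizeOffline(context: Context, text: String): String {
    val options = LlmInference.LlmInferenceOptions.builder()
        .setModelPath("/data/local/tmp/llm/gemma.task") // assumed local model file
        .setMaxTokens(512)                              // cap on prompt + response tokens
        .build()

    // Loading the model is the expensive step; real code would reuse this instance.
    val llm = LlmInference.createFromOptions(context, options)
    return llm.generateResponse("Summarize the following text:\n$text")
}
```

The slow part is loading the model into memory, which lines up with Google’s warning that larger models simply take longer on the same hardware.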
Google is aiming to control the distributed AI network and win the data privacy war with an experimental Android app that enables running gen AI models entirely on the edge
Google has quietly released an experimental Android application that enables users to run sophisticated AI models directly on their smartphones without requiring an internet connection. The app, called AI Edge Gallery, allows users to download and execute AI models from the popular Hugging Face platform entirely on their devices, enabling tasks such as image analysis, text generation, coding assistance, and multi-turn conversations while keeping all data processing local. The application, released under an open-source Apache 2.0 license and available through GitHub rather than official app stores, represents Google’s latest effort to democratize access to advanced AI capabilities while addressing growing privacy concerns about cloud-based artificial intelligence services. “The Google AI Edge Gallery is an experimental app that puts the power of cutting-edge Generative AI models directly into your hands, running entirely on your Android devices,” Google writes in the project’s documentation. At the heart of the offering is Google’s Gemma 3 model, a compact 529-megabyte language model that can process up to 2,585 tokens per second during prefill inference on mobile GPUs. That performance enables sub-second response times for tasks like text generation and image analysis, making the experience comparable to cloud-based alternatives. The app includes three core capabilities: AI Chat for multi-turn conversations, Ask Image for visual question-answering, and Prompt Lab for single-turn tasks such as text summarization, code generation, and content rewriting. Users can switch between different models to compare performance and capabilities, with real-time benchmarks showing metrics like time-to-first-token and decode speed. The local processing approach addresses growing concerns about data privacy in AI applications, particularly in industries handling sensitive information. By keeping data on-device, organizations can maintain compliance with privacy regulations while leveraging AI capabilities. Qualcomm’s AI Engine, built into Snapdragon chips, drives voice recognition and smart assistants in Android smartphones, while Samsung uses embedded neural processing units in Galaxy devices. By open-sourcing the technology and making it widely available, Google ensures broad adoption while maintaining control over the underlying infrastructure that powers the entire ecosystem. Google open-sources its tools and makes on-device AI widely available because it believes controlling tomorrow’s AI infrastructure matters more than owning today’s data centers. If the strategy works, every smartphone becomes part of Google’s distributed AI network. That possibility makes this quiet app launch far more important than its experimental label suggests.
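To make the benchmark figures concrete, here is a rough sketch of how time-to-first-token and decode time could be measured around a streaming on-device call. It assumes the MediaPipe tasks-genai streaming pattern (a result listener set on the options) and a placeholder model path; it is not the Gallery’s own benchmarking code.

```kotlin
import android.content.Context
import android.os.SystemClock
import com.google.mediapipe.tasks.genai.llminference.LlmInference

// Rough sketch of the metrics the Gallery surfaces: time-to-first-token and
// decode time, timed around a streaming generateResponseAsync() call.
fun benchmarkPrompt(context: Context, prompt: String) {
    val start = SystemClock.elapsedRealtime()
    var firstTokenAt = 0L
    var chunks = 0

    val options = LlmInference.LlmInferenceOptions.builder()
        .setModelPath("/data/local/tmp/llm/gemma.task") // assumed local model file
        .setResultListener { _, done ->                 // streaming callback
            if (firstTokenAt == 0L) firstTokenAt = SystemClock.elapsedRealtime()
            chunks++
            if (done) {
                val ttftMs = firstTokenAt - start
                val decodeMs = SystemClock.elapsedRealtime() - firstTokenAt
                println("TTFT: $ttftMs ms, $chunks chunks decoded in $decodeMs ms")
            }
        }
        .build()

    LlmInference.createFromOptions(context, options).generateResponseAsync(prompt)
}
```

Prefill speed (the 2,585 tokens-per-second figure) governs the first number, while decode speed governs the second; both improve on phones with stronger GPUs or NPUs.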
Google has a new voice input waveform for AI Mode; the transcription appears in real time below it
Ahead of Search Live, Google is giving AI Mode a straightforward voice input feature that has a particularly delightful animation. The main input screen (from the Search bar shortcut) now has a microphone icon next to the submit button. It joins the gallery to add existing images and Google Lens on the left side. Upon tapping it, you get an arc-shaped speech-to-text indicator that alternates between the AI Mode colors as you talk. The transcription appears in real time below it. This replaces a more generic rectangular version that was available at launch on the AI Mode homepage. Search Live will use this same animation for the immersive conversation experience, and it’s nice that we’re getting it ahead of time. Google has long used the four bouncing dots that morph into a waveform for voice input in Search and Assistant. This new one makes for a nice modernization, and contributes to how AI Mode is one of the nicest interfaces out of Google Search in quite some time.
Google Photos albums redesign adds Material 3 Expressive toolbar, QR code sharing for albums
Google Photos’ Material 3 Expressive redesign of the albums view is now live on Android, and QR code sharing for albums is also available. Upon opening an album, the previous design showed an “Add description” field and buttons for Share, Add photos, and Order photos underneath the album cover. That is now gone, replaced by a Material 3 Expressive floating toolbar for Share, Add photos, and Edit album. The latter was previously in the overflow menu, which has been tweaked and gains some icons. Google has also elevated the Sort photos button to the top bar. The Edit view has a docked toolbar to Add photos, text, and locations, which was previously in the top bar. You can add the description here, while there are two cards up top for editing Highlights and Album cover. There are more than a few Material 3 Expressive components with this albums redesign, but the full-bleed design for the cover is not here yet. Overall, it’s a bit cleaner than before, with M3E giving apps the opportunity to consolidate things. The rest of Google Photos has yet to be updated, with that possibly coming later this month with the redesigned editor interface. Meanwhile, opening the share sheet shows the new “Show QR Code” option that was announced last week. The design uses a Material 3 shape with the Google Photos logo at the center. We’re seeing both the albums redesign and QR code sharing with Google Photos 7.30 on Android.
Google AI Mode can create charts to answer financial questions; Google credits “advanced models [that] understand the intent of the question”
Google can now answer your questions with custom data visualizations and graphs. The first domain for this is financial data, for questions about stocks and mutual funds. This can be used to compare stocks, see prices during a specific period, and more. Google credits “advanced models [that] understand the intent of the question,” with AI Mode using historical and real-time information. It will then “intelligently determine how to present information to help you make sense of it.” You can interact with the generated chart and ask follow-up questions. Other AI Mode features Google previewed at I/O include Search Live, Deep Search, Personal Context, and agentic capabilities powered by Project Mariner. In other AI Mode tweaks, Google restored Lens and voice input to the Search bar when you’re scrolling through the Discover feed. Meanwhile, Google Labs announced an experiment that “lets you interact conversationally with AI representations of trusted experts built in partnership with the experts themselves.” You can ask questions of these “Portraits” and get back responses based on their knowledge and “authentic” content/work, delivered in the expert’s voice “via an illustrated avatar.” The first is from “Radical Candor” author Kim Scott; you might want to ask about “tough workplace situations or practice difficult conversations.” Portraits use “Gemini’s understanding and reasoning capabilities to generate a relevant and insightful response.” Google says it “conducted extensive testing and implemented user feedback mechanisms to proactively identify and address potential problematic scenarios.”
Google starts testing ‘Search Live’ in AI Mode, letting you have a real-time conversation with Google
Google is beginning to test AI Mode’s new “Search Live” experience. Powered by Project Astra (just like Gemini Live), it lets you have a real-time conversation with Google. If it has rolled out to you, the Google app will show a waveform badged by a sparkle underneath the Search bar. (That is curiously the same icon used by Gemini Live; as such, this must be Google’s icon for “Live” conversational experiences.) It replaces the left Google Lens shortcut that immediately opened your gallery/screenshots. Another way to launch Search Live is from the new circular button to the right of the text field in AI Mode conversations. The fullscreen interface has a light or dark background with the new ‘G’ logo in the top-left corner. There’s a curved waveform in the Google colors, while pill-shaped buttons let you “Mute” and get a “Transcript.” Currently, that second button just opens the AI Mode text chat (ending the Live conversation) instead of showing you real-time captions. Tap the three-dot overflow menu for Voice settings, with four options available: Cosmo, Neso, Terra, and Cassini. After you ask a question, Search Live will surface the sites used to inform the answer in a scrollable carousel. Google can ask you clarifying questions to refine your query, while you’re free to ask follow-ups. You can exit the Google app and continue your conversation in the background. (The iOS app makes use of Live Activities.) As of today, Search Live’s camera capability that lets you stream video is not yet available. It’s similar to how Gemini Live first rolled out the voice experience before getting camera sharing.
Apple Wallet gains new travel-friendly features in iOS 26: a digital passport and upgraded boarding passes
Apple Wallet is enhancing its travel capabilities with the introduction of a digital passport and new features for boarding passes. The digital passport is not a replacement for a physical passport, but it can be used in apps requiring age and identity verification and at TSA checkpoints. Boarding passes now include links to terminal maps, making it easier to find your gate or baggage claim. Additionally, users can track the progress of AirTag-equipped luggage using a Find My link.
iOS 26 to allow reporting spam voicemails via a new “Report Spam” button on voicemails from unknown numbers
iOS 26 has an updated Phone app with several new functions. When you tap into a voicemail from an unknown number, you’ll see a new “Report Spam” button that you can tap if it is a spam call. Tapping the option sends the voicemail to Apple, and you can either report the message as spam and keep it, or report it and delete it. The Call Screening option in iOS 26 intercepts calls from numbers that are not saved in your contacts list and asks the caller for more information, like a name and reason for calling, before forwarding the call along to you. The Messages app also has a refined spam reporting workflow in iOS 26. Messages that Apple detects as spam are sent to a dedicated Spam folder, which is now distinct from the Unknown Senders folder. Messages from numbers that aren’t in your contacts, such as 2FA messages, go in Unknown Senders, while scam messages are sent to the Spam folder. Messages from unknown senders and spam messages are both silenced, so you won’t get a notification for them, but you will see a badge at the top of the Messages app. You can disable these features in the Messages section of the Settings app, if desired. There is no automatic filtering of spam voicemails yet, but that is a feature Apple could add in the future after receiving enough voicemails that people flag as spam.
New Gemini 2.5 models can process problems more deliberately before responding, spending additional computational resources to work through complex problems step by step, while a tunable “thinking budget” keeps them cost-effective for high-throughput enterprise tasks like large-scale document summarization
Google has announced that its most powerful Gemini 2.5 models are ready for enterprise production while unveiling a new ultra-efficient variant designed to undercut competitors on cost and speed. The announcements represent Google’s most assertive challenge yet to OpenAI’s market leadership. Two of its flagship AI models, Gemini 2.5 Pro and Gemini 2.5 Flash, are now generally available, signaling the company’s confidence that the technology can handle mission-critical business applications. Google simultaneously introduced Gemini 2.5 Flash-Lite, positioning it as the most cost-effective option in its model lineup for high-volume tasks. What distinguishes Google’s approach is its emphasis on “reasoning” or “thinking” capabilities: a technical architecture that allows models to process problems more deliberately before responding. Unlike traditional language models that generate responses immediately, Gemini 2.5 models can spend additional computational resources working through complex problems step by step. Developers allocate this extra compute through a “thinking budget,” which gives them unprecedented control over AI behavior. They can instruct models to think longer for complex reasoning tasks or respond quickly for simple queries, optimizing both accuracy and cost. The feature addresses a critical enterprise need: predictable AI behavior that can be tuned for specific business requirements. Gemini 2.5 Pro, positioned as Google’s most capable model, excels at complex reasoning, advanced code generation, and multimodal understanding. Gemini 2.5 Flash strikes a balance between capability and efficiency, designed for high-throughput enterprise tasks like large-scale document summarization and responsive chat applications. The newly introduced Flash-Lite variant sacrifices some intelligence for dramatic cost savings, targeting use cases like classification and translation where speed and volume matter more than sophisticated reasoning.
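As a concrete illustration of the thinking budget, the sketch below sends a request to the Gemini API’s REST generateContent endpoint with a per-request budget under generationConfig.thinkingConfig. The endpoint path, field names, and the idea that a budget of 0 disables thinking on Flash-class models reflect our reading of Google’s public documentation; treat the snippet as an assumption-laden sketch, not a verified integration.

```kotlin
import java.net.HttpURLConnection
import java.net.URL

// Minimal sketch: ask Gemini 2.5 Flash a question with an explicit thinking
// budget (in tokens). Larger budgets trade cost and latency for more
// deliberate reasoning; 0 is assumed to disable thinking entirely.
fun askWithThinkingBudget(apiKey: String, question: String, budgetTokens: Int): String {
    val url = URL(
        "https://generativelanguage.googleapis.com/v1beta/models/" +
            "gemini-2.5-flash:generateContent?key=$apiKey"
    )
    val escaped = question.replace("\\", "\\\\").replace("\"", "\\\"")
    val body = """
        {
          "contents": [{"parts": [{"text": "$escaped"}]}],
          "generationConfig": {
            "thinkingConfig": {"thinkingBudget": $budgetTokens}
          }
        }
    """.trimIndent()

    val conn = (url.openConnection() as HttpURLConnection).apply {
        requestMethod = "POST"
        setRequestProperty("Content-Type", "application/json")
        doOutput = true
    }
    conn.outputStream.use { it.write(body.toByteArray()) }
    return conn.inputStream.bufferedReader().use { it.readText() } // raw JSON response
}
```

A high-volume pipeline like document summarization would presumably run with a small or zero budget, while a complex code-generation task would get a larger one; that per-request dial is what makes the behavior predictable and tunable in the way the section describes.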