Google can now answer your questions with custom data visualization and graphs. The first domain for this is financial data when asking about stocks and mutual funds. This can be used to compare stocks, see prices during a specific period, and more. Google credits “advanced models [that] understand the intent of the question,” with AI Mode using historical and real-time information. It will then “intelligently determine how to present information to help you make sense of it.” You can interact with the generated chart and ask follow-up questions.

Other AI Mode features Google previewed at I/O include Search Live, Deep Search, Personal Context, and agentic capabilities powered by Project Mariner. In other AI Mode tweaks, Google restored Lens and voice input to the Search bar when you’re scrolling through the Discover feed.

Meanwhile, Google Labs announced an experiment that “lets you interact conversationally with AI representations of trusted experts built in partnership with the experts themselves.” You can ask questions of these “Portraits” and get back responses based on their knowledge and “authentic” content/work in the expert’s voice “via an illustrated avatar.” The first is from “Radical Candor” author Kim Scott. You might want to ask about “tough workplace situations or practice difficult conversations.” Portraits use “Gemini’s understanding and reasoning capabilities to generate a relevant and insightful response.” Google says it “conducted extensive testing and implemented user feedback mechanisms to proactively identify and address potential problematic scenarios.”
Google starts testing ‘Search Live’ in AI Mode, letting you have a real-time conversation with Google
Google is beginning to test AI Mode’s new “Search Live” experience. Powered by Project Astra (just like Gemini Live), it lets you have a real-time conversation with Google. If it has rolled out to you, the Google app will show a waveform badged by a sparkle underneath the Search bar. (That is curiously the same icon used by Gemini Live; as such, this appears to be Google’s icon for “Live” conversational experiences.) It replaces the left Google Lens shortcut that immediately opened your gallery/screenshots. Another way to launch Search Live is from the new circular button to the right of the text field in AI Mode conversations. The fullscreen interface has a light or dark background with the new ‘G’ logo in the top-left corner. There’s a curved waveform in the Google colors, while pill-shaped buttons let you “Mute” and get a “Transcript.” Currently, that second button just opens the AI Mode text chat (ending the Live conversation) instead of showing you real-time captions. Tap the three-dot overflow menu for Voice settings, with four options available: Cosmo, Neso, Terra, and Cassini. After you ask a question, Search Live will surface the sites that informed the answer in a scrollable carousel. Google can ask you clarifying questions to refine your query, and you’re free to ask follow-ups. You can exit the Google app and continue your conversation in the background. (The iOS app makes use of Live Activities.) As of today, Search Live’s camera capability, which lets you stream video, is not yet available. That mirrors how Gemini Live first rolled out the voice experience before getting camera sharing.
Apple Wallet gains new travel-friendly features in iOS 26: a digital passport and new features for boarding passes
Apple Wallet is enhancing its travel capabilities with the introduction of a digital passport and new features for boarding passes. The digital passport is not a replacement for a physical passport, but it can be used in apps requiring age and identity verification and at TSA checkpoints. Boarding passes now include links to terminal maps, making it easier to find your gate or baggage claim. Additionally, users can track AirTag-equipped luggage using a Find My link.
iOS 26 lets you report spam voicemails by tapping a new “Report Spam” button on voicemails from unknown numbers
iOS 26 has an updated Phone app with several new functions. When you tap into a voicemail from an unknown number, you’ll see a new “Report Spam” button that you can tap if it is a spam call. Tapping the option sends the voicemail to Apple, and you can either report the message as spam and keep it, or report it and delete it. The Call Screening option in iOS 26 intercepts calls from numbers that are not saved in your contacts list, and asks the caller for more information, like a name and reason for calling, before forwarding the call along to you. The Messages app also has a refined spam reporting workflow in iOS 26. Messages that Apple detects as spam are sent to a dedicated Spam folder, which is now distinct from the Unknown Senders folder. Messages from numbers that aren’t in your contacts, such as 2FA messages, go in Unknown Senders, while scam messages go to the Spam folder. Both unknown-sender and spam messages are silenced, so you won’t get a notification for them, but you will see a badge at the top of the Messages app. You can disable these features in the Messages section of the Settings app, if desired. There is no automatic filtering of spam voicemails yet, but that is a feature Apple could add in the future once enough people have flagged voicemails as spam.
New Gemini 2.5 models can spend additional computational resources working through complex problems step-by-step before responding, while the new Flash-Lite variant targets cost-effective, high-volume enterprise tasks
Google has announced that its most powerful Gemini 2.5 models are ready for enterprise production while unveiling a new ultra-efficient variant designed to undercut competitors on cost and speed. The announcements represent Google’s most assertive challenge yet to OpenAI’s market leadership. Two of its flagship AI models—Gemini 2.5 Pro and Gemini 2.5 Flash—are now generally available, signaling the company’s confidence that the technology can handle mission-critical business applications. Google simultaneously introduced Gemini 2.5 Flash-Lite, positioning it as the most cost-effective option in its model lineup for high-volume tasks. What distinguishes Google’s approach is its emphasis on “reasoning” or “thinking” capabilities — a technical architecture that allows models to process problems more deliberately before responding. Unlike traditional language models that generate responses immediately, Gemini 2.5 models can spend additional computational resources working through complex problems step-by-step. This “thinking budget” gives developers unprecedented control over AI behavior. They can instruct models to think longer for complex reasoning tasks or respond quickly for simple queries, optimizing both accuracy and cost. The feature addresses a critical enterprise need: predictable AI behavior that can be tuned for specific business requirements. Gemini 2.5 Pro, positioned as Google’s most capable model, excels at complex reasoning, advanced code generation, and multimodal understanding. Gemini 2.5 Flash strikes a balance between capability and efficiency, designed for high-throughput enterprise tasks like large-scale document summarization and responsive chat applications. The newly introduced Flash-Lite variant sacrifices some intelligence for dramatic cost savings, targeting use cases like classification and translation where speed and volume matter more than sophisticated reasoning.
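The “thinking budget” described above is exposed as a per-request parameter in Google’s Gemini API. Below is a minimal sketch of how a developer might tune it per task, assuming the `google-genai` Python SDK; the `pick_thinking_budget` helper and its specific token values are illustrative assumptions, not Google’s recommendations.

```python
def pick_thinking_budget(task_kind: str) -> int:
    """Map a task category to a thinking budget in tokens (illustrative values).

    0 disables thinking entirely (fastest and cheapest); larger values let the
    model reason step-by-step before producing its final answer.
    """
    budgets = {
        "classification": 0,      # high-volume, latency-sensitive: skip thinking
        "summarization": 1024,    # modest reasoning helps structure the output
        "code_generation": 8192,  # complex tasks benefit from a larger budget
    }
    return budgets.get(task_kind, 1024)  # fall back to a moderate budget


def summarize(client, text: str):
    """Sketch of where the budget plugs into a generate_content call.

    Requires `pip install google-genai` and an API key; not executed here.
    """
    from google.genai import types  # hypothetical usage of the public SDK

    return client.models.generate_content(
        model="gemini-2.5-flash",
        contents=f"Summarize:\n{text}",
        config=types.GenerateContentConfig(
            thinking_config=types.ThinkingConfig(
                thinking_budget=pick_thinking_budget("summarization")
            )
        ),
    )
```

The design point is that the budget is chosen by the caller per request, so one deployment can serve both cheap bulk classification (budget 0) and deeper reasoning tasks without switching models.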
Google’s AI Mode now lets users have a free-flowing, back-and-forth voice conversation with Search and explore links from across the web, with a “transcript” button to view the text response
Google is rolling out the ability for users to have a back-and-forth voice conversation with AI Mode, its experimental Search feature that lets users ask complex, multi-part questions. With the new Search Live integration, users can have a free-flowing voice conversation with Search and explore links from across the web. Users can access the feature by opening the Google app and tapping the new “Live” icon to ask their question aloud. They will then hear an AI-generated audio response, and they can follow up with another question. The feature will be useful when you’re on the go or multitasking. As you’re having the conversation, you’ll find links right on your screen if you want to dig deeper into your search. Because Search Live works in the background, you can continue the conversation while in another app. Plus, you have the option to tap the “transcript” button to view the text response and continue asking questions by typing if you’d like. You can also revisit a Search Live response by navigating to your AI Mode history. The custom model is built on Search’s best-in-class quality and information systems, so you still get reliable, helpful responses no matter where or how you’re asking your question. Search Live with voice also uses a query fan-out technique to show you a wider and more diverse set of helpful web content, enabling new opportunities for exploration.
Google’s virtual try-on app lets users not only virtually “try on” outfits but also see themselves in motion wearing them in AI-generated videos
Google launched an experimental app that lets users not only virtually “try on” outfits but also see themselves in motion while wearing them. The new Doppl app from Google Labs builds on the capabilities of the AI Mode virtual try-on feature launched in May by Google Shopping, adding the ability to turn static images into artificial intelligence-generated videos. The dynamic visuals give users “an even better sense for how an outfit might feel.” Users can generate these images and videos by uploading a full-body photo of themselves as well as photos or screenshots of the items they would like to try on. “With Doppl, you can try out any look, so if you see an outfit you like from a friend, at a local thrift shop, or featured on social media, you can upload a photo of it into Doppl and imagine how it might look on you,” Google’s post said. “You can also save or share your best looks with friends or followers.”
Google Wallet starts rolling out Material 3 Expressive redesign on Android
Google Wallet is the latest first-party app to get a Material 3 Expressive redesign on Android in a simple modernization. On the homepage, “Wallet” in the top-left corner is replaced by the app’s logo to provide a nice balance with your profile avatar on the other side. The list of pass cards is a bit larger than before, while the “Archived passes” button is placed in a pill with an accompanying icon. Lastly, a large FAB (floating action button) is in use. The Recent activity page has been updated to place everything in containers, with the first and last cards featuring more rounded corners. Overall, this is a pretty straightforward Material 3 Expressive redesign for Google Wallet. In other Google Wallet developments, the web app picker in the top-right corner of every Google website recently added a “Wallet App” shortcut to wallet.google.com.
Google’s AI Overviews, which summarizes results from the web in AI-generated text, is now used by more than 1.5 billion users; Circle to Search is now available on more than 250 million devices
By Google’s estimation, AI Overviews is now used by more than 1.5 billion users monthly across over 100 countries. AI Overviews compiles results from around the web to answer certain questions and will show AI-generated text at the top of the Google Search results page. While the feature has dampened traffic to some publishers, Google sees it and other AI-powered search capabilities as potentially meaningful revenue drivers and ways to boost engagement on Search. During its Q1 2025 earnings call, Google highlighted the growth of its other AI-based search products as well, including Circle to Search. Circle to Search, which lets you highlight something on your smartphone’s screen and ask questions about it, is now available on more than 250 million devices, Google said — up from around 200 million devices as of late last year. Circle to Search usage rose close to 40% quarter-over-quarter, according to the company. Google also noted in its call that visual searches on its platforms are growing at a steady clip. According to CEO Sundar Pichai, searches through Google Lens, Google’s multimodal AI-powered search technology, have increased by 5 billion since October. The number of people shopping on Lens was up over 10% in Q1, meanwhile.
Android device backups may soon include your SIM in addition to contacts, call history, device settings, apps & app data, and SMS & MMS messages, potentially making it that much easier to swap phones
Device backups currently save things such as your app list, contacts, SMS/MMS/RCS messages, call history, and some device settings. Combined with Google Photos for photo/video backup, that makes it easier to swap phones, especially if your previous device is lost, stolen, or broken. Google is apparently looking to extend this. New findings suggest that Android devices may soon be able to include your SIM in a device backup, potentially making it that much easier to swap phones. Google’s services would be able to “back up contacts, call history, device settings, apps & app data, SMS & MMS messages, and SIMs.” This is very likely referring to eSIM rather than a physical SIM card, but the utility here is obvious. Google is already working to make it easier to transfer an eSIM between devices, and the ability to back that SIM up would make things all the more painless when restoring from a device you no longer have access to. There are still a lot of questions around how SIM backup on Android would work, including how carriers would be involved, but it’s a nice idea. As for when it might be implemented, that’s not remotely clear either.