Google can now answer your questions with custom data visualizations and graphs. The first domain for this is financial data when asking about stocks and mutual funds. This can be used to compare stocks, see prices during a specific period, and more. Google credits “advanced models [that] understand the intent of the question,” with AI Mode drawing on historical and real-time information. It will then “intelligently determine how to present information to help you make sense of it.” You can interact with the generated chart and ask follow-up questions. Other AI Mode features Google previewed at I/O include Search Live, Deep Search, Personal Context, and agentic capabilities powered by Project Mariner.

In other AI Mode tweaks, Google restored Lens and voice input to the Search bar when you’re scrolling through the Discover feed.

Meanwhile, Google Labs announced an experiment that “lets you interact conversationally with AI representations of trusted experts built in partnership with the experts themselves.” You can ask questions of these “Portraits” and get back responses based on their knowledge and “authentic” content/work, delivered in the expert’s voice “via an illustrated avatar.” The first is from “Radical Candor” author Kim Scott; Google suggests using it to work through “tough workplace situations or practice difficult conversations.” Portraits use “Gemini’s understanding and reasoning capabilities to generate a relevant and insightful response,” and Google says it “conducted extensive testing and implemented user feedback mechanisms to proactively identify and address potential problematic scenarios.”
Google starts testing ‘Search Live’ in AI Mode, letting you have a real-time conversation with Google
Google is beginning to test AI Mode’s new “Search Live” experience. Powered by Project Astra (just like Gemini Live), it lets you have a real-time conversation with Google. If it has rolled out to you, the Google app will show a waveform badged by a sparkle underneath the Search bar. (Curiously, that’s the same icon used by Gemini Live, so it appears to be Google’s icon for “Live” conversational experiences.) It replaces the left Google Lens shortcut that immediately opened your gallery/screenshots. Another way to launch Search Live is from the new circular button to the right of the text field in AI Mode conversations.

The fullscreen interface has a light or dark background with the new ‘G’ logo in the top-left corner. There’s a curved waveform in the Google colors, while pill-shaped buttons let you “Mute” and get a “Transcript.” Currently, that second button just opens the AI Mode text chat (ending the Live conversation) instead of showing you real-time captions. Tap the three-dot overflow menu for Voice settings, with four options available: Cosmo, Neso, Terra, and Cassini.

After you ask a question, Search Live will surface the sites used to inform the answer in a scrollable carousel. Google can ask you clarifying questions to refine your query, while you’re free to ask follow-ups. You can exit the Google app and continue your conversation in the background. (The iOS app makes use of Live Activities.) As of today, Search Live’s camera capability that lets you stream video is not yet available. That’s similar to how Gemini Live first rolled out the voice experience before getting camera sharing.
Apple introduces iOS 26 and macOS 26 in major operating system rebrand
Apple changed its operating system names at WWDC 2025, adopting a year-based version number. Its platforms are now named iOS 26, iPadOS 26, macOS 26, tvOS 26, watchOS 26, and visionOS 26, which Apple says makes naming clearer and more consistent across platforms. The change also brings Apple in line with rivals such as Samsung and Microsoft and should make it easier for users to identify the latest updates. The accompanying design overhaul, known as “Liquid Glass,” features a translucent, glass-like interface.
Apple introduces live translation across Messages, FaceTime, and Phone at WWDC 25
Apple is introducing Live Translation, powered by Apple Intelligence, for Messages, FaceTime, and Phone calls. Live Translation can translate conversations on the fly. The feature is “enabled by Apple-built models that run entirely on your device, so your personal conversations stay personal.” In Messages, Live Translation will automatically translate text for you as you type and deliver it in your preferred language. Similarly, when the person you’re texting responds, each text can be instantly translated. When catching up on FaceTime, Apple’s translation feature will provide live captions. And on a phone call — whether you’re talking to an Apple user or not — your words can be translated as you talk, and the translation is spoken aloud for the call recipient. As the person you’re speaking to responds in their own language, you’ll hear a spoken translation of their voice.
Apple Wallet gains new travel-friendly features in iOS 26: a digital passport and upgraded boarding passes
Apple Wallet is enhancing its travel capabilities with the introduction of a digital passport and new boarding pass features. The digital passport is not a replacement for a physical passport, but it can be used in apps that require age and identity verification and at TSA checkpoints. Boarding passes now include links to terminal maps, making it easier to find your gate or baggage claim, and users can track AirTag-tagged luggage through a Find My link.
Developers can use API keys to bring AI models from other providers to Xcode
Apple has released a new version of its app development suite, Xcode, which integrates OpenAI’s ChatGPT for coding, documentation generation, and more. Developers can also use API keys to bring AI models from other providers into Xcode for AI-powered programming suggestions. The new AI integrations let developers generate code previews, iterate on designs, and fix errors. ChatGPT can be accessed without creating an account, and paid subscribers can connect their accounts for higher rate limits. Apple also launched the Foundation Models framework, which lets developers access its on-device AI models with just three lines of code. Apple reportedly chose ChatGPT over the vibe-coding tool it had been developing in partnership with Anthropic.
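For context, the “three lines of code” claim maps onto a minimal sketch like the one below, based on the FoundationModels API Apple showed at WWDC 25; the prompt string is purely illustrative, and the exact surface may still shift in later betas.

```swift
import FoundationModels

// Start a session with the on-device Apple Intelligence model.
let session = LanguageModelSession()

// Ask for a response; the prompt here is just an illustrative example.
let response = try await session.respond(to: "Suggest three names for a hiking app.")
print(response.content)
```

Because the model runs on device, a call like this needs no network access or API key, unlike the bring-your-own-provider path mentioned above.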
Apple’s widgets, now integrated into your space with visionOS 26, offer personalized information at a glance
Personalized spatial widgets: Apple’s widgets, now integrated into your space with visionOS 26, offer personalized information at a glance. Users can customize widgets’ size, color, and depth, and add features like customizable clocks, weather updates, quick access to music, and photos that can transform into panoramas.

Adding depth to 2D images: Apple has updated the visionOS Photos app with an AI algorithm that creates multiple perspectives for 2D photos, allowing users to “lean right into them and look around.” Spatial browsing in Safari can also enhance web browsing by hiding distractions and revealing inline photos, and developers can add it to their apps.

Talking heads: Apple has overhauled Personas, its AI avatars for video calls on the Vision Pro. The new avatars, created using volumetric rendering and machine learning, are more realistic and accurate in appearance, including hair, eyelashes, and complexion, and they are generated on-device in seconds.

Immerse together: visionOS 26 lets users join headset-wearing friends to watch movies or play spatial games. The feature is also being marketed to enterprise clients such as Dassault Systèmes, which uses the 3DLive app to visualize 3D designs in person and with remote colleagues.

Enterprise APIs and tools: visionOS 26 also lets organizations share devices among team members, securely saving each user’s eye and hand data, vision prescription, and accessibility settings to their iPhone. The system includes a “for your eyes only” mode to restrict access to confidential materials. Additionally, Apple previewed Logitech Muse, a spatial accessory for Vision Pro that allows precise 3D drawing and collaboration, and the company plans to add more APIs for app development.
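Apple has indicated that widgets built with its existing WidgetKit framework carry over to visionOS 26, so a spatial widget can start life as an ordinary WidgetKit widget. Below is a minimal, hypothetical sketch of such a widget (the StatusWidget and StatusEntry names are illustrative, not Apple’s); visionOS then handles the spatial placement, sizing, and depth described above.

```swift
import WidgetKit
import SwiftUI

// The data a single widget refresh displays.
struct StatusEntry: TimelineEntry {
    let date: Date
}

// Supplies placeholder, snapshot, and timeline entries to the system.
struct StatusProvider: TimelineProvider {
    func placeholder(in context: Context) -> StatusEntry { StatusEntry(date: .now) }

    func getSnapshot(in context: Context, completion: @escaping (StatusEntry) -> Void) {
        completion(StatusEntry(date: .now))
    }

    func getTimeline(in context: Context, completion: @escaping (Timeline<StatusEntry>) -> Void) {
        // One entry now, refreshed roughly every hour.
        let timeline = Timeline(entries: [StatusEntry(date: .now)],
                                policy: .after(.now.addingTimeInterval(3600)))
        completion(timeline)
    }
}

// A simple clock-style widget; on visionOS 26 the system can pin it in the room.
@main
struct StatusWidget: Widget {
    var body: some WidgetConfiguration {
        StaticConfiguration(kind: "StatusWidget", provider: StatusProvider()) { entry in
            Text(entry.date, style: .time)
                .font(.largeTitle)
                .containerBackground(.fill.tertiary, for: .widget)
        }
        .configurationDisplayName("Clock")
        .description("Shows the current time.")
        .supportedFamilies([.systemSmall, .systemMedium])
    }
}
```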
WWDC 25 was notably quiet on a more personalized, AI-powered Siri
Apple announced several updates to its operating systems, services, and software, including a new look called “Liquid Glass” and a rebranded naming convention. However, the company was notably quiet on the more personalized, AI-powered Siri it first introduced at WWDC 24. Apple’s SVP of Software Engineering, Craig Federighi, only briefly mentioned the Siri update during the keynote, saying the work needed more time to reach the company’s quality bar. That suggests Apple won’t have news about the update until 2026, a significant delay in the AI era. The more personalized Siri is expected to bring artificial intelligence upgrades to the virtual assistant built into the iPhone and other Apple devices. Bloomberg reported that the in-development version was functional but not consistently working properly, making it not viable to ship, and Apple officially said in March that the update would take longer to deliver than anticipated.
Apple Intelligence opened up to all developers with Foundation Models Framework
Apple has announced that developers will soon be able to access the on-device large language models that power Apple Intelligence in their own apps through the Foundation Models framework, letting third-party apps use the models for image creation, text generation, and more. On-device processing enables fast, powerful, privacy-focused AI features that work without an internet connection. Apple also plans to expand the number of languages its AI platform supports and to make the underlying generative models more capable and efficient. The move comes as Apple continues to open its intelligence systems to third-party apps.
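Beyond free-form text, Apple’s WWDC material also showed a guided-generation pattern in which the on-device model fills in a Swift type directly. A hedged sketch of that pattern follows, assuming the @Generable macro and respond(to:generating:) call as presented at WWDC; the TripIdea type, instructions, and prompt are illustrative assumptions, not Apple’s examples.

```swift
import FoundationModels

// A type the on-device model can generate directly (illustrative example).
@Generable
struct TripIdea {
    var title: String
    var activities: [String]
}

// Instructions steer the session; this wording is an assumption, not Apple's.
let session = LanguageModelSession(instructions: "You help plan short weekend trips.")

// Ask the model to return a structured TripIdea value instead of free-form text.
let response = try await session.respond(
    to: "Plan a one-day trip along the coast.",
    generating: TripIdea.self
)
print(response.content.title)
```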
Apple Vision Pro ‘Spatial Widgets’ blend digital life into your real space
Apple has introduced spatial widgets on its Vision Pro headset, allowing users to pin interactive elements like clocks, music controls, weather panels, and photo galleries directly into their physical space. These widgets are customizable in size, color, depth, and layout, and are meant to feel like part of the user’s space. The update marks a clear step toward persistent spatial computing, with widgets like Photos, Clock, Weather, and Music playing the role of physical objects. Still, the experience of using spatial widgets raises questions about how digital environments are changing the way we relate to physical ones: while Vision Pro is still shown in cozy, furnished homes, the integration of digital objects into physical spaces could lead to a very different reality. The visionOS 26 update is currently available in developer beta, with a public release expected in fall 2025. As more developers build spatial widgets, the headset might feel useful in quiet, everyday ways. The end goal of AR/VR is the augmentation of reality, overlaying digital things on the analog world, but Apple is not pushing that path hard for now, as it would be crucified if it did. The company has a decent track record for a corporation, despite the potential for a dystopian future where technology works against us.