Digital traffic pipeline management startup Gravitee Topco has closed on a $60 million Series C funding round, bringing its total amount raised to date to more than $125 million. The company is the creator of an open-source API management platform that provides developers with the tools they need to easily manage both legacy and newer data streaming protocols. It also provides a wealth of API security tools with its platform. Gravitee’s core offering is split into two products, with the Gravitee API Management tool designed for API publishers and the Gravitee Access Management offering aimed at the developers who need to use those APIs. Through the two platforms, it provides tools that span API design, access, management, deployment and security. Gravitee can therefore be thought of as a kind of control plane for APIs, which, despite being intended to simplify development, often come with a confusing array of protocols and tools that can quickly overwhelm developers. Companies can deploy Gravitee’s core, open-source offering in the cloud or on-premises, or they can access the premium platform through the startup’s software-as-a-service offering. Its core features include a tool for designing and deploying APIs, mock testing and a dashboard that provides an overview of a team’s API deployments. What makes Gravitee different is that it supports both asynchronous and synchronous APIs: those that stream data as events over time and those that return data immediately in response to a request.
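To make the synchronous/asynchronous distinction concrete, here is a minimal Kotlin sketch of a client consuming both kinds of API through a gateway. The gateway host, the paths, and the use of Server-Sent Events for the asynchronous case are illustrative assumptions, not Gravitee-specific endpoints.

```kotlin
// Minimal sketch contrasting a synchronous REST call with an asynchronous event
// stream, both fronted by an API gateway. URLs and paths are hypothetical.
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

fun main() {
    val client = HttpClient.newHttpClient()

    // Synchronous API: one request, one immediate response.
    val syncRequest = HttpRequest.newBuilder()
        .uri(URI.create("https://gateway.example.com/orders/42"))
        .GET()
        .build()
    val syncResponse = client.send(syncRequest, HttpResponse.BodyHandlers.ofString())
    println("sync: ${syncResponse.body()}")

    // Asynchronous API: subscribe once, then receive events as they are produced
    // (Server-Sent Events chosen here purely for illustration).
    val streamRequest = HttpRequest.newBuilder()
        .uri(URI.create("https://gateway.example.com/orders/events"))
        .header("Accept", "text/event-stream")
        .GET()
        .build()
    client.send(streamRequest, HttpResponse.BodyHandlers.ofLines())
        .body()
        .forEach { line -> if (line.isNotBlank()) println("event: $line") }
}
```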
Google Wallet adding nearby pass notifications, providing timely alerts for relevant passes stored in the app
Google Wallet and Pay had a number of announcements, including some new features (like nearby passes) that end users will benefit from. A redesign for the Google Pay payment sheet adds a dark theme “for a more integrated feel.” We’re already seeing it live on our devices, with Google also adding “richer card art and names” to make card selection faster. Meanwhile, Digital IDs are a big focus for Google Wallet, with their availability helping power other capabilities. With Zero-Knowledge Proof, Google wants to allow “age verification without any possibility to link back to a user’s personal identity,” and the company will open-source these libraries. Currently, digital ID verification is available to Android apps through the Credential Manager Jetpack Library and on the mobile web, with desktop Chrome support in testing. Google showed off a “seamless experience between Chrome on desktop and your Android device” that involves QR code scanning. Google Wallet is adding Nearby Passes notifications that send users an alert when they’re near a specific location. This can be used with loyalty cards, offers, boarding passes, and event tickets; Google pitches it to developers as a way to highlight value-added benefits, such as exclusive offers or upgrade options, and guide users back to an app or website for ongoing interaction. With an update to Auto Linked Passes, airlines that have loyalty cards for frequent flyer programs can “automatically push boarding passes to their users’ wallets once they check in for a flight.” Google is also adding passes that can include a picture of the user.
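As a rough illustration of how a pass with a location attached might reach a user’s wallet (so a nearby-pass alert could later fire for it), here is a hedged Kotlin sketch using the Google Wallet Android SDK’s savePasses flow. The loyalty-object JSON shape, the assumption that the locations field drives nearby alerts, and the ISSUER_ID/class/object IDs are all illustrative rather than confirmed details of the new feature.

```kotlin
// Hedged sketch: saving a loyalty pass that carries a location, so Google Wallet
// could surface a nearby-pass notification for it. The JSON shape and the role of
// "locations" are assumptions for illustration; IDs are placeholders.
import android.app.Activity
import com.google.android.gms.pay.Pay
import com.google.android.gms.pay.PayClient

class WalletDemoActivity : Activity() {
    private val addToWalletRequestCode = 1000

    fun savePassNearStore() {
        val passJson = """
        {
          "loyaltyObjects": [{
            "id": "ISSUER_ID.example-member-123",
            "classId": "ISSUER_ID.example-loyalty-class",
            "state": "ACTIVE",
            "locations": [{ "latitude": 37.4220, "longitude": -122.0841 }]
          }]
        }
        """.trimIndent()

        // Launches the "Add to Google Wallet" sheet; the result arrives in
        // onActivityResult() under addToWalletRequestCode. Production apps usually
        // send a server-signed JWT via savePassesJwt() instead of raw JSON.
        val payClient: PayClient = Pay.getClient(this)
        payClient.savePasses(passJson, this, addToWalletRequestCode)
    }
}
```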
Google is betting on a ‘world model’, an AI operating system that mirrors the human brain with a deep understanding of real-world dynamics, simulating cause and effect and learning by observing
Google is doubling down on what it calls “a world model” – an AI it aims to imbue with a deep understanding of real-world dynamics – and, with it, a vision for a universal assistant powered by Google. This concept of “a world model,” as articulated by Demis Hassabis, CEO of Google DeepMind, is about creating AI that learns the underlying principles of how the world works – simulating cause and effect, understanding intuitive physics, and ultimately learning by observing, much like a human does. As Hassabis put it: “That is a model that can make plans and imagine new experiences by simulating aspects of the world, just like the brain does.” An early but significant indicator of this direction, perhaps easily overlooked by those not steeped in foundational AI research, is Google DeepMind’s work on models like Genie 2. This research shows how to generate interactive, playable game environments and worlds from varied prompts like images or text, and it offers a glimpse at an AI that can simulate and understand dynamic systems. Google demoed a new app called Flow – a drag-and-drop filmmaking canvas that preserves character and camera consistency – that leverages Veo 3, the new model that layers physics-aware video and native audio. To Hassabis, that pairing is early proof that “world-model understanding is already leaking into creative tooling.” For robotics, he separately highlighted the fine-tuned Gemini Robotics model, arguing that “AI systems will need world models to operate effectively.” CEO Sundar Pichai reinforced this, citing Project Astra, which “explores the future capabilities of a universal AI assistant that can understand the world around you.” These Astra capabilities, like live video understanding and screen sharing, are now integrated into Gemini Live. Josh Woodward, who leads Google Labs and the Gemini App, detailed the app’s goal to be the “most personal, proactive, and powerful AI assistant.” He showcased how “personal context” (connecting search history, and soon Gmail/Calendar) enables Gemini to anticipate needs, like providing personalized exam quizzes or custom explainer videos using analogies a user understands. This, Woodward emphasized, is “where we’re headed with Gemini,” enabled by the Gemini 2.5 Pro model allowing users to “think things into existence.” Gemini 2.5 Pro with “Deep Think” and the hyper-efficient 2.5 Flash (now with native audio and URL context grounding in the Gemini API) form the core intelligence. Google also quietly previewed Gemini Diffusion, signaling its willingness to move beyond pure Transformer stacks when that yields better efficiency or latency. Google’s path to potential leadership – its “end-run” around Microsoft’s enterprise hold – lies in redefining the game with a fundamentally superior, AI-native interaction paradigm. If Google delivers a truly “universal AI assistant” powered by a comprehensive world model, it could become the new indispensable layer – the effective operating system – for how users and businesses interact with technology.
Google’s updates to media apps on Android Auto allow apps to show different sections in the browsing UI and offer more layout flexibility to build richer and more complete experiences
Google introduced two changes to media apps on Android Auto. The first is to the browsing interface in media apps. The new “SectionedItemTemplate” will allow apps to show different sections in the browsing UI, with Google’s example showing “Recent search” above a list of albums. The other change is to the “MediaPlaybackTemplate,” which is used as the “Now Playing” screen. It appears that Google is going to grant developers more flexibility in layout here, with the demo putting the media controls in the bottom right corner instead of the center, and in a different order than usual – although that might become the standard at some point. The UI isn’t drastically different or any harder to understand, but it’s a different layout than we usually see on Android Auto, which is actually a bit refreshing. Google is also allowing developers to build “richer and more complete experiences” for media apps using the “Car App Library.” This could make it easier to navigate some apps, as most media apps on Android Auto are shells of their smartphone counterparts in terms of functionality. The Car App Library media category is still in beta for now, though.
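For context, here is a hedged Kotlin sketch of a sectioned browse screen built with the existing Car App Library APIs (ListTemplate plus SectionedItemList). The new SectionedItemTemplate named above presumably offers a similar but more flexible structure; its exact builder surface isn’t shown here, and the section titles and rows are made up for illustration.

```kotlin
// Sketch of a sectioned media-browse screen using the Car App Library's existing
// ListTemplate/SectionedItemList APIs; content is placeholder data.
import androidx.car.app.CarContext
import androidx.car.app.Screen
import androidx.car.app.model.ItemList
import androidx.car.app.model.ListTemplate
import androidx.car.app.model.Row
import androidx.car.app.model.SectionedItemList
import androidx.car.app.model.Template

class BrowseScreen(carContext: CarContext) : Screen(carContext) {
    override fun onGetTemplate(): Template {
        val recentSearches = ItemList.Builder()
            .addItem(Row.Builder().setTitle("Morning Jazz").build())
            .build()
        val albums = ItemList.Builder()
            .addItem(Row.Builder().setTitle("Album One").build())
            .addItem(Row.Builder().setTitle("Album Two").build())
            .build()

        return ListTemplate.Builder()
            .setTitle("Browse")
            // Each SectionedItemList renders as a titled section in the browse UI.
            .addSectionedList(SectionedItemList.create(recentSearches, "Recent search"))
            .addSectionedList(SectionedItemList.create(albums, "Albums"))
            .build()
    }
}
```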
Gmail can now automatically show “Gemini summary cards” that summarize the key points of an email thread and refresh when people reply, so they stay current
Last year, Gmail for Android and iOS introduced a summarize feature, and Gemini can now surface summary cards automatically. At launch, the Gemini-powered “Summarize this email” capability was a button underneath the subject line that you had to manually tap. Doing so would slide up the Gemini sheet with the requested bullet points. On mobile, Gmail will now show “Gemini summary cards” automatically “when a summary could be helpful, such as with longer email threads.” You can collapse the card from the top-right corner if it’s not helpful. Google notes how “Gemini summarizes all the key points from the email thread and refreshes it when people reply,” so it’s always fresh. This launch was detailed in the “May Workspace feature drop.” Google also highlighted the availability of Gemini summaries in Google Chat’s Home view and for documents. There’s also the Google Docs summary building block. Google is also recapping Mind Maps and Discover sources in NotebookLM. In Meet, Google is highlighting Dynamic layouts, as well as Studio look, lighting, and sound.
Google’s new app lets users find, download, and run openly available AI models that generate images, answer questions, write and edit code, and more on their phones without needing an internet connection
Google quietly released an app that lets users run a range of openly available AI models from the AI dev platform Hugging Face on their phones. Called Google AI Edge Gallery, the app is available for Android and will soon come to iOS. It allows users to find, download, and run compatible models that generate images, answer questions, write and edit code, and more. The models run offline, without needing an internet connection, tapping into supported phones’ processors. Google AI Edge Gallery, which Google is calling an “experimental Alpha release,” can be downloaded from GitHub. The home screen shows shortcuts to AI tasks and capabilities like “Ask Image” and “AI Chat.” Tapping on a capability pulls up a list of models suited for the task, such as Google’s Gemma 3n. Google AI Edge Gallery also provides a “Prompt Lab” users can use to kick off “single-turn” tasks powered by models, like summarizing and rewriting text. The Prompt Lab comes with several task templates and configurable settings to fine-tune the models’ behaviors. Your mileage may vary in terms of performance, Google warns. Modern devices with more powerful hardware will predictably run models faster, but the model size also matters. Larger models will take more time to complete a task — say, answering a question about an image — than smaller models. Google’s inviting members of the developer community to give feedback on the Google AI Edge Gallery experience. The app is under an Apache 2.0 license, meaning it can be used in most contexts — commercial or otherwise — without restriction.
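For developers curious what on-device inference looks like in code, here is a minimal Kotlin sketch using MediaPipe’s LLM Inference task from Google’s AI Edge stack, which is my assumption for roughly how the Gallery runs these models; the model file path and name are placeholders.

```kotlin
// Rough sketch of running a locally downloaded model with MediaPipe's LLM Inference
// task. Everything runs on-device; no network access is needed at inference time.
import android.content.Context
import com.google.mediapipe.tasks.genai.llminference.LlmInference

fun summarizeOffline(context: Context, prompt: String): String {
    val options = LlmInference.LlmInferenceOptions.builder()
        // Placeholder path to a model bundle previously downloaded to local storage.
        .setModelPath("/data/local/tmp/llm/gemma3-1b-it-int4.task")
        .setMaxTokens(512)
        .build()

    val llm = LlmInference.createFromOptions(context, options)
    return llm.generateResponse(prompt)
}
```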
Google is aiming to control the distributed AI network and win the data privacy war through its experimental Android app that enables running gen AI models entirely on the edge
Google has quietly released an experimental Android application that enables users to run sophisticated AI models directly on their smartphones without requiring an internet connection. The app, called AI Edge Gallery, allows users to download and execute AI models from the popular Hugging Face platform entirely on their devices, enabling tasks such as image analysis, text generation, coding assistance, and multi-turn conversations while keeping all data processing local. The application, released under an open-source Apache 2.0 license and available through GitHub rather than official app stores, represents Google’s latest effort to democratize access to advanced AI capabilities while addressing growing privacy concerns about cloud-based artificial intelligence services. As Google describes it: “The Google AI Edge Gallery is an experimental app that puts the power of cutting-edge Generative AI models directly into your hands, running entirely on your Android devices.” At the heart of the offering is Google’s Gemma 3 model, a compact 529-megabyte language model that can process up to 2,585 tokens per second during prefill inference on mobile GPUs. This performance enables sub-second response times for tasks like text generation and image analysis, making the experience comparable to cloud-based alternatives. The app includes three core capabilities: AI Chat for multi-turn conversations, Ask Image for visual question-answering, and Prompt Lab for single-turn tasks such as text summarization, code generation, and content rewriting. Users can switch between different models to compare performance and capabilities, with real-time benchmarks showing metrics like time-to-first-token and decode speed. The local processing approach addresses growing concerns about data privacy in AI applications, particularly in industries handling sensitive information. By keeping data on-device, organizations can maintain compliance with privacy regulations while leveraging AI capabilities. Google is not alone in pushing AI to the edge: Qualcomm’s AI Engine, built into Snapdragon chips, drives voice recognition and smart assistants in Android smartphones, while Samsung uses embedded neural processing units in Galaxy devices. By open-sourcing the technology and making it widely available, Google ensures broad adoption while maintaining control over the underlying infrastructure that powers the entire ecosystem. Google open-sources its tools and makes on-device AI widely available because it believes controlling tomorrow’s AI infrastructure matters more than owning today’s data centers. If the strategy works, every smartphone becomes part of Google’s distributed AI network. That possibility makes this quiet app launch far more important than its experimental label suggests.
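A hedged sketch of how one might approximate the time-to-first-token and decode-speed readouts described above, using the MediaPipe LLM Inference task’s streaming API. The setResultListener/generateResponseAsync pairing and the model path are assumptions for illustration, not taken from the Gallery’s source.

```kotlin
// Hedged benchmark sketch: time-to-first-token and a rough decode-speed proxy,
// measured around an assumed streaming API of the MediaPipe LLM Inference task.
import android.content.Context
import android.os.SystemClock
import com.google.mediapipe.tasks.genai.llminference.LlmInference

fun benchmarkPrompt(context: Context, prompt: String) {
    val start = SystemClock.elapsedRealtime()
    var firstChunkAt = 0L
    var chunks = 0

    val options = LlmInference.LlmInferenceOptions.builder()
        .setModelPath("/data/local/tmp/llm/gemma3-1b-it-int4.task") // placeholder path
        .setResultListener { _, done ->                             // assumed streaming hook
            if (chunks == 0) firstChunkAt = SystemClock.elapsedRealtime()
            chunks++
            if (done) {
                val total = SystemClock.elapsedRealtime() - start
                println("time-to-first-token: ${firstChunkAt - start} ms")
                println("chunks: $chunks over $total ms") // crude stand-in for decode speed
            }
        }
        .build()

    LlmInference.createFromOptions(context, options).generateResponseAsync(prompt)
}
```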
Google has a new voice input waveform for AI Mode; the transcription appears in real time below it
Ahead of Search Live, Google is giving AI Mode a straightforward voice input feature that has a particularly delightful animation. The main input screen (from the Search bar shortcut) now has a microphone icon next to the submit button. It joins the gallery shortcut for adding existing images and Google Lens on the left side. Upon tapping it, you get an arc-shaped speech-to-text indicator that alternates between the AI Mode colors as you talk. The transcription appears in real time below it. This replaces a more generic rectangular version that was available at launch on the AI Mode homepage. Search Live will use this same animation for the immersive conversation experience, and it’s nice that we’re getting it ahead of time. Google has long used the four bouncing dots that morph into a waveform for voice input in Search and Assistant. This new one makes for a nice modernization, and contributes to how AI Mode is one of the nicest interfaces to come out of Google Search in quite some time.
Google Photos albums redesign adds Material 3 Expressive toolbar, QR code sharing for albums
Google’s Material 3 Expressive for Photos — a redesign of the albums view — is now live on Android. QR code sharing for albums is also now available. Upon opening, the previous design showed an “Add description” field and buttons for Share, Add photos, and Order photos underneath the album cover. That is now gone and replaced by a Material 3 Expressive toolbar. That floating toolbar is how you Share, Add photos, and Edit the album. The latter was previously in the overflow menu, which has been tweaked and gains some icons. Google has also elevated the Sort photos button to the top bar. The Edit view has a docked toolbar to Add photos, text, and locations, which was previously in the top bar. You can add the description here, while there are two cards up top for editing Highlights and Album cover. There are more than a few Material 3 Expressive components with this albums redesign, but the full bleed design for the cover is not here yet. Overall, it’s a bit cleaner than before, with M3E giving apps the opportunity to consolidate things. The rest of Google Photos has yet to be updated, with that possibly coming later this month with the redesigned editor interface. Meanwhile, opening the share sheet shows the new “Show QR Code” option that was announced last week. The design uses a Material 3 shape with the Google Photos logo at the center. We’re seeing both the albums redesign and QR code sharing with Google Photos 7.30 on Android.
Google AI Mode can create charts to answer financial questions; Google credits “advanced models [that] understand the intent of the question”
Google can now answer your questions with custom data visualizations and graphs. The first domain for this is financial data, such as questions about stocks and mutual funds. This can be used to compare stocks, see prices during a specific period, and more. Google credits “advanced models [that] understand the intent of the question,” with AI Mode using historical and real-time information. It will then “intelligently determine how to present information to help you make sense of it.” You can interact with the generated chart and ask follow-up questions. Other AI Mode features Google previewed at I/O include Search Live, Deep Search, Personal Context, and agentic capabilities powered by Project Mariner. In other AI Mode tweaks, Google restored Lens and voice input to the Search bar when you’re scrolling through the Discover feed. Meanwhile, Google Labs announced an experiment that “lets you interact conversationally with AI representations of trusted experts built in partnership with the experts themselves.” You can ask questions of these “Portraits” and get back responses based on their knowledge and “authentic” content and work, delivered in the expert’s voice “via an illustrated avatar.” The first is from “Radical Candor” author Kim Scott; you might want to ask about “tough workplace situations or practice difficult conversations.” Portraits use “Gemini’s understanding and reasoning capabilities to generate a relevant and insightful response.” Google says it “conducted extensive testing and implemented user feedback mechanisms to proactively identify and address potential problematic scenarios.”