Google’s AdSense for Search now places ads inside chatbot conversations run by AI startups
Google’s ad network has begun showing advertising within the flow of conversations with chatbots — part of Alphabet Inc.’s efforts to keep its edge in digital advertising as generative artificial intelligence takes off. Earlier this year, Google’s AdSense for Search network, which traditionally shows ads within the search results of other websites, expanded to include conversations with chatbots operated by AI startups. Google made the move after conducting tests last year and earlier this year with a handful of startups, including AI search apps iAsk and Liner, according to people familiar with the matter who asked not to be identified discussing private information. Showing ads alongside its own search results is the heart of Google’s business, bolstered by a network that serves up advertising across much of the web. That empire has come under threat as new entrants like OpenAI and Perplexity AI seek to siphon off the search giant’s audience with products that aim to help users find what they’re looking for more quickly. Running experiments with AI startups allows the company to test the waters for advertising in the relatively new world of AI chats. Generative AI startups are increasingly exploring advertising-based business models to offset the high costs of answering users’ questions with artificial intelligence. For example, iAsk shows ads below its AI-generated responses before inviting users to ask follow-up questions. In addition to Google, startups such as Koah Labs have begun allowing brands to serve ads to the chatbot audience. AI search startup Perplexity, one of the most prominent players using AI to reshape internet services, establishes relationships directly with brands that want to buy ads on its site, according to a person familiar with the matter; it allows brands to sponsor follow-up questions to users’ queries.
Google improves coding capabilities in the Gemini 2.5 Pro preview – enhancing code transformation, code editing, and the development of complex agentic workflows – especially for interactive web applications
Google is providing early access to an updated version of its Gemini 2.5 Pro multimodal AI model. Called Gemini 2.5 Pro Preview, the model has “significantly” improved capabilities for coding, especially for interactive web applications. Google said it released the preview model early due to “overwhelming enthusiasm” for the model, “so people can start building”; it was originally supposed to be unveiled at Google’s I/O developer conference later this month. The updates include enhanced code transformation, code editing and the development of complex agentic workflows. Google said the updates enabled the model to top the WebDev Arena Leaderboard, a third-party metric that ranks large language models by human preference on their ability to build “aesthetically pleasing and functional” web applications. In the arena, AI models compete with each other in front-end UI design and coding contests to earn Elo points. Google said Gemini 2.5 Pro Preview (05-06) has surpassed the prior version by 147 Elo points, overtaking Anthropic’s Claude 3.7 Sonnet for the number one spot; the prior Gemini 2.5 Pro now sits in third place, behind Claude 3.7 Sonnet, with OpenAI’s GPT-4.1 fourth and Claude 3.5 Sonnet fifth. Gemini 2.5 Pro Preview also scored well in video understanding, at 84.8% on the VideoMME benchmark, which assesses how well multimodal models analyze videos.
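To put that 147-point Elo gap in perspective, the standard Elo expected-score formula converts a rating difference into a head-to-head win probability. A quick back-of-the-envelope sketch (the resulting win rate is simple arithmetic from the formula, not a figure Google reported):

```kotlin
import kotlin.math.pow

// Standard Elo expected-score formula: the probability that the
// higher-rated model is preferred in a single head-to-head matchup.
fun expectedScore(ratingDiff: Double): Double =
    1.0 / (1.0 + 10.0.pow(-ratingDiff / 400.0))

fun main() {
    // The 147-point lead reported over the prior Gemini 2.5 Pro.
    val winRate = expectedScore(147.0)
    println("Expected preference rate: %.1f%%".format(winRate * 100))
    // Prints ~70.0%: raters would be expected to prefer the newer
    // model in roughly 7 of 10 WebDev Arena matchups.
}
```

In other words, a 147-point gap implies human raters would pick the newer model about 70% of the time, a substantial margin by arena standards.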
RCS protocol adoption is rising, with the standard now supporting over a billion messages per day in the U.S. following its adoption by Apple Messages
Google offered a brief update on the adoption of the RCS (Rich Communication Services) protocol, an upgrade to SMS that offers high-resolution photos and videos, typing indicators, read receipts, improved group chat, and more. The company shared that the messaging standard now supports over a billion messages per day in the U.S. This metric is based on an average of the last 28 days, Google noted. The stat is notable because Google fought for years to get Apple to adopt support for RCS on iOS, allowing for better communication between Android and Apple devices. Before then, unlike in iMessage-only threads, group chats with Android users couldn’t be renamed, people couldn’t be added or removed, and you couldn’t leave when you wanted. That changed with the fall 2024 launch of iOS 18, when Apple finally added RCS support to its Messages app. Though the functionality has been upgraded, Apple still displays RCS chats as green bubbles, hoping to keep the stigma of being an Android user intact. This is particularly important among young people in the U.S., where demand for blue bubbles has cemented the iPhone as teens’ most popular device.
Gemini Advanced users can now directly add a public or private GitHub codebase to the chatbot, allowing it to generate and explain code, debug existing code, and more
Gemini, Google’s AI-powered chatbot, can now connect to GitHub — for users subscribed to the $20-per-month Gemini Advanced plan, that is. Gemini Advanced customers can directly add a public or private GitHub codebase to Gemini, allowing the chatbot to generate and explain code, debug existing code, and more. Users can connect GitHub to Gemini by clicking the “+” button in the prompt bar, selecting “import code,” and pasting a GitHub URL. A word of warning: AI models, including Google’s, still struggle to produce quality software. Code-generating AI tends to introduce security vulnerabilities and errors, owing to weaknesses in areas like understanding programming logic. One recent evaluation of Devin, a popular AI coding tool, found that it could complete only three out of 20 programming tests.
Google Wallet now requires ‘Verify it’s you’ authentication just to open the app
For the past year or so, opening Google Wallet 3+ minutes after unlocking your phone would result in a “For your security, you need to verify it’s you before paying” message appearing at the top of the app. As such, three minutes after unlock, tap-to-pay transactions don’t work until you authenticate with a PIN, pattern, password, fingerprint, or face (a Class 3 biometric unlock). Recently, Google Wallet has changed, or is in the process of testing, a new behavior. Now, after three minutes, you cannot even access the app’s homepage, with its carousel of cards and list of passes, without authentication. Google throws up a splash screen with the Wallet logo up top and a system-level “Verify it’s you” sheet to authenticate. Sometimes we still see the old card prompt at the top instead of the new fullscreen version, but the latter is appearing more frequently. We’re seeing this change with version 25.18 of Google Wallet on both Pixel and Samsung phones. This is quite a security escalation. As our digital wallets hold more and more (including state IDs, passports, home/room and car keys, boarding passes, medical information, etc.), you might not want people with your phone to even know what’s stored there.
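Google hasn’t said how Wallet implements this gate, but the system-level “Verify it’s you” sheet is what Android’s androidx.biometric library produces. Below is a minimal sketch of how any app could impose the same three-minute rule; the AndroidX calls are real, while WalletHomeActivity and the timeout logic are our hypothetical reconstruction of the behavior described above:

```kotlin
import android.os.SystemClock
import androidx.biometric.BiometricManager.Authenticators.BIOMETRIC_STRONG
import androidx.biometric.BiometricManager.Authenticators.DEVICE_CREDENTIAL
import androidx.biometric.BiometricPrompt
import androidx.core.content.ContextCompat
import androidx.fragment.app.FragmentActivity

// Hypothetical activity: require a Class 3 biometric or the device
// PIN/pattern/password before revealing the card carousel.
class WalletHomeActivity : FragmentActivity() {

    private var lastAuthMs = 0L
    private val authTimeoutMs = 3 * 60 * 1000L  // the three-minute window

    override fun onResume() {
        super.onResume()
        if (SystemClock.elapsedRealtime() - lastAuthMs > authTimeoutMs) {
            showVerifySheet()
        }
    }

    private fun showVerifySheet() {
        val prompt = BiometricPrompt(
            this,
            ContextCompat.getMainExecutor(this),
            object : BiometricPrompt.AuthenticationCallback() {
                override fun onAuthenticationSucceeded(
                    result: BiometricPrompt.AuthenticationResult
                ) {
                    lastAuthMs = SystemClock.elapsedRealtime()
                    // ...reveal the cards and passes UI here...
                }
            })

        // DEVICE_CREDENTIAL adds the PIN/pattern/password fallback, so
        // no negative button is allowed on the prompt.
        val info = BiometricPrompt.PromptInfo.Builder()
            .setTitle("Verify it's you")
            .setAllowedAuthenticators(BIOMETRIC_STRONG or DEVICE_CREDENTIAL)
            .build()

        prompt.authenticate(info)
    }
}
```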
Gravitee Topco’s open-source API management platform offers an array of developer tools spanning API design, access, management, deployment and security, with support for both asynchronous and synchronous APIs
Digital traffic pipeline management startup Gravitee Topco has closed a $60 million Series C funding round, bringing its total raised to date to more than $125 million. The company is the creator of an open-source API management platform that gives developers the tools they need to easily manage both legacy and newer data streaming protocols, alongside a wealth of API security tools. Gravitee’s core offering is split into two products: the Gravitee API Management tool, designed for API publishers, and the Gravitee Access Management offering, aimed at the developers who need to use those APIs. Through the two platforms, it provides tools that span API design, access, management, deployment and security. Gravitee can therefore be thought of as a kind of control plane for APIs, which often come with a confusing array of protocols and tools that can quickly overwhelm developers, despite their intention of making life simpler. Companies can deploy Gravitee’s core, open-source offering in the cloud or on-premises, or access the premium platform through the startup’s software-as-a-service offering. Its core features include a tool for designing and deploying APIs, mock testing, and a dashboard that provides an overview of a team’s API deployments. What makes Gravitee different is that it supports both synchronous APIs, which return data immediately in response to a request, and asynchronous APIs, which deliver data later, as events become available.
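To make the synchronous/asynchronous distinction concrete, here is a small client-side sketch. The api.example.com endpoints are hypothetical, and Gravitee itself would sit in front of such APIs as the management layer rather than appear in the code:

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

fun main() {
    val client = HttpClient.newHttpClient()

    // Synchronous API: one request, one immediate response.
    val syncReq = HttpRequest.newBuilder(
        URI.create("https://api.example.com/orders/42")  // hypothetical endpoint
    ).build()
    val order = client.send(syncReq, HttpResponse.BodyHandlers.ofString())
    println("Order right now: ${order.body()}")

    // Asynchronous API: subscribe once, then receive events whenever
    // they happen (here, a server-sent event stream, one event per line).
    val asyncReq = HttpRequest.newBuilder(
        URI.create("https://api.example.com/orders/42/events")  // hypothetical
    ).header("Accept", "text/event-stream").build()
    client.send(asyncReq, HttpResponse.BodyHandlers.ofLines())
        .body()
        .forEach { event -> println("Order update arrived: $event") }
}
```

The first call blocks until the server answers; the second holds a connection open and processes updates as the server pushes them, which is the style of API (Kafka, MQTT, WebSockets, SSE) that most management platforms historically ignored.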
Google Wallet is adding nearby pass notifications – timely alerts for relevant passes stored in the app
Google Wallet and Pay had a number of announcements, including some new features (like nearby passes) that end users will benefit from. A redesign of the Google Pay payment sheet adds a dark theme “for a more integrated feel.” We’re already seeing it live on our devices, with Google also adding “richer card art and names” to make card selection faster. Meanwhile, digital IDs are a big focus for Google Wallet, with their availability helping power other capabilities. With Zero-Knowledge Proof, Google wants to allow “age verification without any possibility to link back to a user’s personal identity,” and the company will open-source these libraries. Currently, it’s available to Android apps through the Credential Manager Jetpack Library and on mobile web, with desktop Chrome in testing. Google showed off a “seamless experience between Chrome on desktop and your Android device” that involves QR code scanning. Google Wallet is adding Nearby Passes notifications that alert users when they’re near a specific location; this can be used by loyalty cards, offers, boarding passes, or event tickets. Google pitches the feature to developers as a way to highlight value-added benefits, such as exclusive offers or upgrade options, guiding users back to an app or website and creating a dynamic gateway for ongoing interaction. With an update to Auto Linked Passes, airlines with frequent flyer loyalty programs can “automatically push boarding passes to their users’ wallets once they check in for a flight.” Google is also adding passes that can include a picture of the user.
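Google didn’t publish the developer surface for Nearby Passes in this announcement. For a rough idea of the shape involved, the Wallet REST API’s pass objects have historically carried a list of geo coordinates; the sketch below models a loyalty pass payload on that pattern, with all IDs and fields to be treated as illustrative rather than the new feature’s actual schema:

```kotlin
// Illustrative only: a loyalty-pass payload with a geo location attached,
// sketched as the JSON a server might POST to the Google Wallet REST API.
// Issuer ID, class ID, and coordinates are hypothetical placeholders.
val loyaltyObject = """
    {
      "id": "3388000000000000000.user-123-card",
      "classId": "3388000000000000000.coffee-loyalty",
      "state": "ACTIVE",
      "locations": [
        { "latitude": 37.422, "longitude": -122.084 }
      ]
    }
""".trimIndent()
```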
Google is betting on a ‘world model’, an AI operating system that mirrors the human brain with a deep understanding of real-world dynamics, simulating cause and effect and learning by observing
Google is doubling down on what it calls “a world model” – an AI it aims to imbue with a deep understanding of real-world dynamics – and with it a vision for a universal assistant, one powered by Google. This concept of “a world model,” as articulated by Demis Hassabis, CEO of Google DeepMind, is about creating AI that learns the underlying principles of how the world works – simulating cause and effect, understanding intuitive physics, and ultimately learning by observing, much like a human does. “That is a model that can make plans and imagine new experiences by simulating aspects of the world, just like the brain does,” Hassabis said. An early indicator of this direction – significant, if easily overlooked by those not steeped in foundational AI research – is Google DeepMind’s work on models like Genie 2. This research shows how to generate interactive, two-dimensional game environments and playable worlds from varied prompts like images or text, offering a glimpse of an AI that can simulate and understand dynamic systems. Google demoed a new app called Flow – a drag-and-drop filmmaking canvas that preserves character and camera consistency – that leverages Veo 3, the new model that layers physics-aware video and native audio. To Hassabis, that pairing is early proof that “world-model understanding is already leaking into creative tooling.” For robotics, he separately highlighted the fine-tuned Gemini Robotics model, arguing that “AI systems will need world models to operate effectively.” CEO Sundar Pichai reinforced this, citing Project Astra, which “explores the future capabilities of a universal AI assistant that can understand the world around you.” These Astra capabilities, like live video understanding and screen sharing, are now integrated into Gemini Live. Josh Woodward, who leads Google Labs and the Gemini App, detailed the app’s goal to be the “most personal, proactive, and powerful AI assistant.” He showcased how “personal context” (connecting search history, and soon Gmail/Calendar) enables Gemini to anticipate needs, like providing personalized exam quizzes or custom explainer videos using analogies a user understands. This, Woodward emphasized, is “where we’re headed with Gemini,” enabled by the Gemini 2.5 Pro model allowing users to “think things into existence.” Gemini 2.5 Pro with “Deep Think” and the hyper-efficient 2.5 Flash (now with native audio and URL context grounding via the Gemini API) form the core intelligence. Google also quietly previewed Gemini Diffusion, signaling its willingness to move beyond pure Transformer stacks when that yields better efficiency or latency. Google’s path to potential leadership – its “end-run” around Microsoft’s enterprise hold – lies in redefining the game with a fundamentally superior, AI-native interaction paradigm. If Google delivers a truly “universal AI assistant” powered by a comprehensive world model, it could become the new indispensable layer – the effective operating system – for how users and businesses interact with technology.
Google updates media apps on Android Auto, allowing apps to show different sections in the browsing UI and offering more layout flexibility to build richer and more complete experiences
Google introduced two new changes to media apps on Android Auto. The first change is to the browsing interface in media apps. The new “SectionedItemTemplate” will allow apps to show different sections in the browsing UI, with Google’s example showing “Recent search” above a list of albums. The other change is to the “MediaPlaybackTemplate,” which is used as the “Now Playing” screen. It appears that Google is going to grant developers more flexibility in layout here, with the demo putting the media controls in the bottom right corner instead of the center, and in a different order than usual – although that might become the standard at some point. The UI isn’t drastically different or any harder to understand, but it’s a different layout than we usually see on Android Auto, which is actually a bit refreshing. Google is also allowing developers to build “richer and more complete experiences” for media apps using the Car App Library. This could make it easier to navigate some apps, as most media apps on Android Auto are shells of their smartphone counterparts in terms of functionality. This category of the Car App Library is only in beta for now.
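Google didn’t share code for the new SectionedItemTemplate, but the existing Car App Library already expresses the section idea through SectionedItemList inside a ListTemplate. Here is a minimal sketch of a browse screen with “Recent search” above “Albums”, using real androidx.car.app classes but a hypothetical BrowseScreen and made-up item titles; the new template presumably refines this pattern:

```kotlin
import androidx.car.app.CarContext
import androidx.car.app.Screen
import androidx.car.app.model.ItemList
import androidx.car.app.model.ListTemplate
import androidx.car.app.model.Row
import androidx.car.app.model.SectionedItemList
import androidx.car.app.model.Template

// Hypothetical browse screen: a "Recent search" section above "Albums",
// built with the Car App Library's existing sectioned-list support.
class BrowseScreen(carContext: CarContext) : Screen(carContext) {

    override fun onGetTemplate(): Template {
        val recent = ItemList.Builder()
            .addItem(Row.Builder().setTitle("Morning jazz").build())
            .build()

        val albums = ItemList.Builder()
            .addItem(Row.Builder().setTitle("Kind of Blue").build())
            .addItem(Row.Builder().setTitle("Blue Train").build())
            .build()

        return ListTemplate.Builder()
            .setTitle("Browse")
            .addSectionedList(SectionedItemList.create(recent, "Recent search"))
            .addSectionedList(SectionedItemList.create(albums, "Albums"))
            .build()
    }
}
```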