While most wallet apps, like Samsung Wallet, let you store cards and even digital keys, Google Wallet offers features you don’t often see elsewhere. You can store your passport, various IDs including your driver’s license, loyalty cards, and hotel keys, all within the Google Wallet app. Impressive as those features are, another one changes how the app is used entirely, and it might be useful for you too. You can favorite frequently used cards and passes, but if you’re like me and use a mix of both, you’re still stuck hunting through the list. The Nearby Passes notification feature in Google Wallet uses your device’s location and the cards or passes in your wallet to surface the right one at the right time. For example, say you have a loyalty card for a coffee shop near your place. Using your device’s location, Google Wallet sends a notification to your phone’s lock screen so you can access that card instantly, without opening the app or scrolling through everything. The only catch is that it isn’t always enabled by default, particularly on devices that have had Google Wallet installed for a while or are running an older version of Android. Thankfully, you can enable it easily on your phone. Beyond Nearby Passes, another Google Wallet feature I’ve been using a lot is the ability to manually create a loyalty card or pass, even for items the app doesn’t natively support. So, if you need to access a card but don’t have your physical wallet on hand, this can be incredibly useful. It’s a lifesaver and gives you a centralized place to store all your passes.
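For context, this kind of custom pass can also be issued programmatically. Below is a minimal sketch that creates a generic pass with a QR code through the Google Wallet API, assuming you have an issuer account and a service-account key; the issuer ID, class ID, and card values are hypothetical placeholders, and this is an illustration rather than what the in-app create-a-pass flow does.

# Hedged sketch: creating a simple loyalty-style "generic" pass object via the
# Google Wallet API. Assumes an issuer account and a service-account key;
# the issuer ID, class ID and card values below are hypothetical placeholders.
import json
import requests
from google.auth.transport.requests import Request
from google.oauth2 import service_account

SCOPES = ["https://www.googleapis.com/auth/wallet_object.issuer"]
BASE_URL = "https://walletobjects.googleapis.com/walletobjects/v1"

creds = service_account.Credentials.from_service_account_file(
    "service_account.json", scopes=SCOPES)
creds.refresh(Request())

# A generic pass object: a title, a header, and a QR code the phone or watch can show.
pass_object = {
    "id": "3388000000012345678.coffee-club-001",    # "<issuerId>.<objectSuffix>", placeholder
    "classId": "3388000000012345678.coffee-club",   # must reference an existing pass class
    "cardTitle": {"defaultValue": {"language": "en", "value": "Coffee Club"}},
    "header": {"defaultValue": {"language": "en", "value": "Member: Jane Doe"}},
    "barcode": {"type": "QR_CODE", "value": "member-0042"},
}

resp = requests.post(
    f"{BASE_URL}/genericObject",
    headers={"Authorization": f"Bearer {creds.token}"},
    json=pass_object,
)
print(resp.status_code, json.dumps(resp.json(), indent=2)[:300])

In practice a signed “Save to Google Wallet” link is what actually puts a pass like this on someone’s phone; the in-app flow described above needs none of that, which is why it’s so handy when you just want a barcode off a physical card.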
Google Pixel 10 leads smartphone AI race with native AI for real-time translation, voice cloning, photo coaching, and editing; putting Apple’s iPhone at risk of losing innovation edge.
Apple is behind Google in the race to add artificial intelligence (AI) features to smartphones, according to Wall Street Journal Personal Tech Columnist Nicole Nguyen. An iPhone user, Nguyen wrote that her experience with Google’s upcoming Pixel 10 showed that Google has “lapped” Apple as both companies work to develop the “killer AI-powered phone.” Nguyen highlighted the Pixel 10’s AI-powered ability to surface information when needed, provide translations via a real-time voice clone and transcript, coach users to take good photos, and edit photos that have already been taken. “The race continues and for now, Apple has a lot of catching up to do,” Nguyen wrote. Apple faces the risk of its iPhone becoming a commodity because the Pixel 9 already has, and the Pixel 10 will ship with, embedded AI that lets users speak, search, transact and navigate with a native AI experience. The question is how many consumers will keep waiting around for Apple to deliver. It’s a massive pain to switch from iOS to Android, and most people don’t. But getting an AI-powered Android device just may be enough for some people to dump their iPhones.
Google reduces switching friction from iPhone to Pixel 10 with pre-shipment data prep, auto-prepared password/app transfers, an AI assistant for real-time help, and contextual tips.
Google is making it easier than ever for potential iPhone converts to make the jump to Pixel 10 and switch allegiances over to Android. If you pre-order or purchase a Pixel 10 series handset directly from the Google Store, you’ll receive a helpful email that prepares your iPhone data for transfer. This includes passwords from iOS, wallet items, and app data, and it happens even before your new phone arrives. Once you have your new Pixel 10 in hand, the support continues. If you’re new to Android, your Pixel 10 will provide contextual tips as you use it, guiding you through basic functions like taking a screenshot or turning the device off. Most of that is not new, but it might help those not familiar with the intricacies of Android make that daunting jump over from the mess that is Liquid Glass on iPhone. To simplify things further, the upgraded My Pixel app works in tandem with these features to get you up to speed quickly. Combined, these tools aim to make your switch as effortless as possible, so you can start enjoying your new device without any stress. You can also stay connected with your friends and family using RCS in Google Messages, no matter what phone they have. A new on-device, AI-powered agent is also available to provide instant support and help troubleshoot issues, and it can seamlessly hand you off to a live customer support representative if you need further assistance. It’s up to Google now to convince people to switch from iPhone to the Pixel 10, but this might give them an easier “out” from Apple if they want it.
Apple pivots to a full LLM Siri after its hybrid approach led to delays, promising context-aware tasks and legacy app control while evaluating external models to accelerate capability without compromising privacy
Apple is developing a new version of Siri that’s supposed to be better than the existing Siri in every way. It will be smarter and able to do more, functioning like ChatGPT or Claude instead of a barely competent 2012-era smartphone assistant. The next-generation version of Siri will use advanced large language models, similar to ChatGPT, Claude, Gemini, and other AI chatbots. Here’s what we’re waiting on. Personal context: Siri will be able to keep track of emails, messages, files, photos, and more, learning about you to help you complete tasks and keep track of what you’ve been sent. Onscreen awareness: Siri will be able to see what’s on your screen and complete actions involving whatever you’re looking at. Deeper app integration: Siri will be able to do more in and across apps, performing actions and completing tasks that are just not possible with the assistant right now. Apple is rumored to be considering a partnership with ChatGPT creator OpenAI or Claude creator Anthropic to power the smarter version of Siri. Both companies are reportedly training versions of their models that would work with Apple’s Private Cloud Compute servers, and Apple is running tests with both its own models and models from outside companies. No final decision has been made yet. Partnering with a company like Anthropic or OpenAI would allow Apple to deliver the exact Siri feature set it is aiming for, while also giving it time to continue work on its own LLM behind the scenes.
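The deeper app integration Apple is chasing is typically built on tool (function) calling, where the model returns a structured request and the operating system executes it. Here is a minimal sketch of that idea using the Anthropic Python SDK, since Anthropic is one of the rumored partners; the tool name, schema, model ID, and handler are hypothetical, and this is an illustration of the mechanism, not Apple’s actual design.

# Hedged sketch of assistant-to-app integration via tool calling: the model emits
# a structured request (send_message) and the OS-side code would execute it.
# Illustration only, not Apple's design; tool name, schema and model ID are placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

tools = [{
    "name": "send_message",
    "description": "Send a text message to a contact on the user's phone.",
    "input_schema": {
        "type": "object",
        "properties": {
            "contact": {"type": "string"},
            "body": {"type": "string"},
        },
        "required": ["contact", "body"],
    },
}]

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model ID
    max_tokens=512,
    tools=tools,
    messages=[{"role": "user", "content": "Tell Sam I'm running 10 minutes late."}],
)

for block in response.content:
    if block.type == "tool_use" and block.name == "send_message":
        # A real assistant would hand this off to the Messages app here.
        print(f"Would text {block.input['contact']}: {block.input['body']}")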
Google moves Gemini beyond chat into a full creative and daily-productivity platform with guided learning, flash-card generation, privacy-tuned temporary chat and watch integration
Google is beefing up its features for Gemini, its primary suite of generative AI models and the chatbot that serves as its main interface. For creative folks, the Gemini app now offers image editing using text prompts through its viral Gemini 2.5 Flash Image model, codenamed Nano Banana. Google has also added Veo 3, the newest version of its video generation model. The tool can animate still photos, drawings or digital art into moving video clips, complete with AI-generated audio. For productivity, Google is adding scheduled actions, a feature that lets users queue tasks and recurring requests directly within the Gemini app. The Productivity Planner Gem integrates email, Calendar and Drive into a single view, designed to help users prioritize daily tasks more easily. Meanwhile, Temporary Chat allows people to hold private conversations with Gemini that won’t be saved or affect future responses, an answer to growing demand for more user control over AI memory. Gemini can now draw on past chat history, if users opt in, to provide more relevant answers, and users can manage or delete stored conversations. Real-time captions are also coming to Gemini Live, its voice chatbot, which can connect with Google services such as Maps. For education, one new feature is Guided Learning, which helps users break down complex topics into digestible steps. The tool is designed to make explanations more interactive, with the AI walking learners through a process rather than delivering a static answer. Students and business professionals can also now generate study guides and flash cards directly from their own notes, readings or problem sets, automating one of the more time-consuming aspects of learning. Google has also introduced Storybook, a feature that allows users to turn personal memories or even dense concepts into illustrated stories that can be read, shared or printed. The tool can add text and audio, blending creative writing with multimodal AI generation.
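The image editing piece is also exposed to developers through the Gemini API. Below is a minimal sketch of a text-prompt edit with the google-genai Python SDK, assuming GEMINI_API_KEY is set and the image model is available on your key; the model ID and file names are placeholders and may differ from what ships in the consumer app.

# Hedged sketch: text-prompt image editing with Gemini 2.5 Flash Image ("Nano Banana")
# through the google-genai SDK. Model ID and file names are placeholders.
from google import genai
from PIL import Image

client = genai.Client()  # picks up GEMINI_API_KEY from the environment

source = Image.open("porch_photo.jpg")  # hypothetical input photo
response = client.models.generate_content(
    model="gemini-2.5-flash-image-preview",
    contents=["Remove the trash bin in the background and warm up the lighting.", source],
)

# Responses can interleave text and image parts; save any returned image bytes.
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        with open("porch_photo_edited.png", "wb") as f:
            f.write(part.inline_data.data)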
Pixel 10 can now connect to Galaxy Watch 8 following update
Following the August 1 Google Play services update, users are reporting that the Galaxy Watch 8 can now connect to the Pixel 10. It’s unclear whether the fix extends to other Pixel devices, as the issue appeared to affect other models as well. In our own testing, a Galaxy Watch 8 Classic that previously refused to pair connected flawlessly with our Pixel 10 Pro XL on the first try following the update. It’s worth noting we skipped signing into Samsung Health at the start, though that should have no bearing on a successful connection. The connection appears stable after rebooting both devices. One user had submitted a ticket indicating that their Pixel 10 Pro would not connect with the Galaxy Watch 8, and over 260 users have indicated that they’re experiencing the same issue. According to them, after confirming pairing codes on both watch and phone, the setup fails or glitches out. Pending software updates don’t appear to be the cause. Some devices on the 9to5Google team had the same issue. After getting the Watch 8 Classic to work on the Pixel 9 Pro Fold with no problems, upgrading to the Pixel 10 Pro XL resulted in multiple failed setup attempts. Sometimes the Pixel 10 Pro would refuse to give a pairing code, and the Watch 8 Classic would have to be reset because of the error. Other times, the Galaxy Watch 8 Classic and Pixel 10 Pro XL could agree on codes and begin the pairing process, though it never got past 84% completion before failing. Each attempt takes a tremendous amount of time and ends in failure.
70% of Galaxy S25 owners are using Galaxy AI features and more than half are using Circle to Search; Galaxy AI to expand to 400 million devices by the end of 2025
Samsung is planning a big expansion of AI features on Galaxy phones, and claims that a huge percentage of its users are already leveraging those features in one way or another. Samsung says that 70% of Galaxy S25 owners are using Galaxy AI features. There’s no detail on frequency (as in, how often the features are being used), but it’s still a big number. Samsung further adds that: “More than half” of Galaxy S25 owners use Circle to Search (a Google feature); Photo Assist usage “doubled” compared to Galaxy S24 users; Now Brief is used by “one in three” Galaxy S25 owners; and Google Gemini use “tripled” on “the latest Galaxy S series.” With all of this in mind, Samsung says that it will expand Galaxy AI to hundreds of millions of devices over the course of 2025. Specifically, the company wants to double its previous “200+ million” figure to over 400 million. As Samsung puts it: “At the center of our innovation is a desire to bring consumers seamless and secure mobile AI experiences that align with their needs. That’s why Samsung Galaxy is committed to expanding Galaxy AI to 400 million devices by the end of this year — democratizing the power and possibilities of mobile AI to even more users.” It stands to reason that new device launches and updates to existing devices will play a big role, but it’s still a big promise.
Google Messages testing RCS’s new MLS encryption, which makes E2E encryption possible across different RCS clients and providers
Google Messages is beginning to test the new Messaging Layer Security (MLS) protocol for RCS. Universal Profile 3.0 adds support for MLS, which makes E2E encryption possible across different RCS clients and providers. Google first announced its support for this interoperable protocol in 2023, and the GSMA and Apple announced official adoption this March. The testing starts with a new message “Details” screen (long-press on a message) that’s fullscreen, compared to the current approach. You get a preview of the message at the top, with Google also showing a “Status” section for “Sent” and “Delivered” that explains the new checkmarks. We see Google using the latest single-circle design that has yet to become widely available. There’s also a “From” section, while the bottom portion provides more technical details including Type, Priority, Message ID and Encryption Protocol. This new design is not widely rolled out in the beta channel, and it’s unclear if that’s also the case for MLS, as the old UI gives no indication; Apple, meanwhile, has yet to specify when its support is coming.
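The reason MLS can deliver end-to-end encryption across different clients and providers is that it is an open IETF standard (RFC 9420): every conforming client derives the same group secrets from the same labeled key schedule, so interoperability does not depend on any one vendor’s implementation. The snippet below is a loose, simplified illustration of that label-based derivation using the Python cryptography package; the labels, lengths, and structure are placeholders, not the real MLS key schedule and certainly not Google Messages code.

# Loose illustration only (not the real MLS key schedule, not Google Messages code):
# every conforming client expands the shared epoch secret with the same labeled
# HKDF, so any standards-compliant RCS client derives identical message keys.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDFExpand

def derive_secret(secret: bytes, label: str, length: int = 32) -> bytes:
    # Simplified stand-in for MLS's labeled expansion; the label format is a placeholder.
    info = b"mls-demo " + label.encode()
    return HKDFExpand(algorithm=hashes.SHA256(), length=length, info=info).derive(secret)

epoch_secret = os.urandom(32)  # in real MLS this comes from the group's ratchet tree
sender_data_secret = derive_secret(epoch_secret, "sender data")
encryption_secret = derive_secret(epoch_secret, "encryption")
print(sender_data_secret.hex()[:16], encryption_secret.hex()[:16])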
Google’s AI research agent combines diffusion mechanisms and retrieval tools to produce more comprehensive and accurate research on complex topics by emulating the human process of making iterative revisions that turn a draft into higher-quality outputs
Google researchers have developed a new framework for AI research agents that outperforms leading systems from rivals OpenAI, Perplexity and others on key benchmarks. The new agent, called Test-Time Diffusion Deep Researcher (TTD-DR), is inspired by the way humans write by going through a process of drafting, searching for information, and making iterative revisions. The system uses diffusion mechanisms and evolutionary algorithms to produce more comprehensive and accurate research on complex topics. For enterprises, this framework could power a new generation of bespoke research assistants for high-value tasks that standard retrieval augmented generation (RAG) systems struggle with, such as generating a competitive analysis or a market entry report. Unlike the linear process of most AI agents, human researchers work in an iterative manner. They typically start with a high-level plan, create an initial draft, and then engage in multiple revision cycles. During these revisions, they search for new information to strengthen their arguments and fill in gaps. Google’s researchers observed that this human process could be emulated using a diffusion model augmented with a retrieval component: a trained diffusion model initially generates a noisy draft, and the denoising module, aided by retrieval tools, revises this draft into higher-quality (or higher-resolution) outputs. TTD-DR is built on this blueprint. The framework treats the creation of a research report as a diffusion process, where an initial, “noisy” draft is progressively refined into a polished final report. This is achieved through two core mechanisms. The first, which the researchers call “Denoising with Retrieval,” starts with a preliminary draft and iteratively improves it. In each step, the agent uses the current draft to formulate new search queries, retrieves external information, and integrates it to “denoise” the report by correcting inaccuracies and adding detail. The second mechanism, “Self-Evolution,” ensures that each component of the agent (the planner, the question generator, and the answer synthesizer) independently optimizes its own performance. The resulting research companion is “capable of generating helpful and comprehensive reports for complex research questions across diverse industry domains.” In side-by-side comparisons with OpenAI Deep Research on long-form report generation, TTD-DR achieved win rates of 69.1% and 74.5% on two different datasets. It also surpassed OpenAI’s system on three separate benchmarks that required multi-hop reasoning to find concise answers, with performance gains of 4.8%, 7.7%, and 1.7%.
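To make that loop concrete, here is a structural sketch of the denoising-with-retrieval cycle and the self-evolution step described above. The llm(), search(), and judge() helpers are hypothetical stand-ins (wired to trivial stubs here) for a language model, a retrieval tool, and a quality scorer; this is a reading of the paper’s idea, not Google’s implementation.

# Structural sketch of TTD-DR's "denoising with retrieval" loop (not Google's code).
# llm(), search() and judge() are hypothetical stand-ins wired to trivial stubs.

def llm(prompt: str) -> str:
    # Stand-in: swap in a real model call (e.g. a Gemini or Claude client).
    return f"[model output for: {prompt[:60]}...]"

def search(query: str) -> str:
    # Stand-in: swap in a real web or RAG retrieval call.
    return f"[retrieved passages for: {query[:60]}...]"

def judge(text: str) -> float:
    # Stand-in quality score; the paper uses model-based evaluation.
    return float(len(text))

def deep_research(question: str, steps: int = 4) -> str:
    plan = llm(f"Write a section-by-section research plan for: {question}")
    draft = llm(f"Write a rough first draft (the 'noisy' report) following:\n{plan}")
    for _ in range(steps):
        # Use the current draft to decide what is missing, then retrieve.
        queries = llm(f"List search queries that would fix gaps or errors in:\n{draft}")
        evidence = [search(q) for q in queries.splitlines() if q.strip()]
        # "Denoise": revise the draft against the newly retrieved evidence.
        draft = llm("Revise the draft using this evidence, fixing inaccuracies and "
                    f"adding detail.\nDraft:\n{draft}\nEvidence:\n{evidence}")
    return draft

def self_evolve(component_output: str, variants: int = 3) -> str:
    # Each component (planner, question generator, answer synthesizer) keeps the
    # best-scoring variant of its own output; a simple evolutionary step.
    candidates = [llm(f"Rewrite to improve: {component_output}") for _ in range(variants)]
    return max(candidates, key=judge)

print(deep_research("Market entry report for a mid-size EV charging company")[:200])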
Wear OS watches are showing Google Wallet photo passes containing a barcode or QR code, suggesting Google may be experimenting with support specifically for scannable photo passes rather than full photo pass support
Some Wear OS watches are starting to show Google Wallet photo passes that contain patterns like QR codes or barcodes. The interface in the screenshots is in Spanish, and the pass includes a label that translates to “Press to scan,” suggesting it’s the kind of pass that contains a barcode or QR code. Another screenshot shows a disclaimer explaining that the pass was created from a photo and that some of its information might not be visible on the watch, with a prompt to open it on your phone for the full view. Photo passes that include a QR code or barcode appear to show up on some Wear OS watches, while passes that are simply photos, like an image of a document, don’t. That suggests Google may be experimenting with support specifically for scannable photo passes rather than full photo pass support. There’s been no official announcement from Google, and its support pages still state that “private” passes aren’t supported on Wear OS. But this is the clearest sign yet that things might be changing.
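That distinction between a scannable pass and a plain photo is easy to draw programmatically: a client can simply try to decode a barcode or QR code from the image. Here is a small illustration using the third-party pyzbar library (which requires the zbar system library installed); it is a sketch of the idea and an assumption on my part, not how Google Wallet actually classifies passes.

# Hedged sketch: classify a photo pass as "scannable" if a barcode or QR code
# can be decoded from the image. Illustration only; file names are hypothetical.
from PIL import Image
from pyzbar.pyzbar import decode

def is_scannable_pass(image_path: str) -> bool:
    results = decode(Image.open(image_path))  # finds QR codes and 1D barcodes
    return len(results) > 0

for path in ["gym_membership_photo.jpg", "passport_scan.jpg"]:  # hypothetical files
    kind = "scannable pass" if is_scannable_pass(path) else "plain photo"
    print(f"{path}: {kind}")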