While most wallet apps, like Samsung Wallet, let you store cards and even digital keys, Google Wallet offers features you don’t often see elsewhere: you can store your passport, various IDs (including your driver’s license), loyalty cards, and hotel keys all within the Google Wallet app. You can favorite frequently used cards and passes, but if you’re like me and use a mix of both, you’re still stuck hunting through the list. The Nearby Passes notification feature solves this: it uses your device’s location and the cards or passes in your wallet to surface the right one at the right time. Say you have a loyalty card for a coffee shop near your place. Google Wallet sends a notification to your phone’s lock screen so you can access that card instantly, without opening the app or scrolling through everything. The only catch is that the feature isn’t always on by default, particularly on devices that have had Google Wallet installed for a while or are running an older version of Android. Thankfully, you can activate it easily on your phone. Another Google Wallet feature I’ve been using a lot is the ability to manually create a loyalty card or pass, even for items the app doesn’t natively support. If you need a card but don’t have your physical wallet on hand, this is incredibly useful, and it gives you a centralized place to store all your passes.
Google Pixel 10 leads the smartphone AI race with native AI for real-time translation, voice cloning, photo coaching, and editing, putting Apple’s iPhone at risk of losing its innovation edge.
Apple is behind Google in the race to add artificial intelligence (AI) features to smartphones, according to Wall Street Journal personal tech columnist Nicole Nguyen. An iPhone user, Nguyen wrote that her experience with Google’s upcoming Pixel 10 showed that Google has “lapped” Apple as both companies work to develop the “killer AI-powered phone.” Nguyen highlighted the Pixel 10’s AI-powered ability to surface information when needed, provide translations via a real-time voice clone and transcript, coach users to take good photos, and edit photos that have already been taken. “The race continues and for now, Apple has a lot of catching up to do,” Nguyen wrote. Apple faces the risk of its iPhone becoming a commodity because the Pixel 9 already has, and the Pixel 10 will ship with, embedded AI that lets users speak, search, transact, and navigate with a native AI experience. The question is how many consumers will keep waiting for Apple to deliver. Switching from iOS to Android is a massive pain, and most people don’t do it. But an AI-powered Android device just may be enough for some people to dump their iPhones.
Google reduces switching friction from iPhone to Pixel 10 with pre-shipment data prep, auto-prepared password/app transfers, an AI assistant for real-time help, and contextual tips.
Google is making it easier than ever for potential iPhone converts to make the jump to the Pixel 10 and switch allegiances to Android. If you pre-order or purchase a Pixel 10 series handset directly from the Google Store, you’ll receive a helpful email that prepares your iPhone data, including passwords, wallet items, and app data, for transfer even before your new phone arrives. Once you have your Pixel 10 in hand, the support continues. If you’re new to Android, the phone provides contextual tips as you use it, guiding you through basic functions like taking a screenshot or turning the device off. Most of that isn’t new, but it might help those unfamiliar with the intricacies of Android make the daunting jump from the mess that is Liquid Glass on iPhone. To simplify things further, the upgraded My Pixel app works in tandem with these features to get you up to speed quickly. Combined, these tools aim to make your switch as effortless as possible, so you can start enjoying your new device without stress. You can also stay connected with friends and family using RCS in Google Messages, no matter what phone they have. A new on-device, AI-powered agent provides instant support and helps troubleshoot issues, and can seamlessly hand you off to a live customer support representative if you need further assistance. It’s up to Google to convince people to switch from iPhone to the Pixel 10, but this might give them an easier “out” from Apple if they want one.
Zil Money transforms spending control with AI-powered virtual card features to automate receipt categorization, analyze transaction data, and generate actionable reports within seconds.
Zil Money, a leading fintech solution, has introduced two innovative features in its Virtual Card suite: AI-powered receipt parsing and automated spending analysis reports. These features are designed to provide businesses with real-time insights and complete control over their expenses. Businesses can now automatically track spending, with detailed breakdowns by category, merchant, and more, offering unparalleled visibility into financial transactions. Following the introduction of its Virtual Card, Zil Money has empowered users to create an unlimited number of cards instantly, set customized spending limits, and easily manage expenses. Businesses have reported significant improvements in efficiency, particularly in areas like vendor payments, employee reimbursements, and subscription management. These features automate receipt categorization, analyze transaction data, and generate actionable reports within seconds. With real-time analysis, businesses can instantly verify transactions, categorize purchases, and gain valuable insights, streamlining expense management. This innovation reinforces Zil Money’s commitment to delivering cutting-edge tools that empower smarter financial decisions.
Apple adds granular enterprise controls to gate employee access to external AI, letting IT disable or route ChatGPT requests and restrict other providers while preserving on-device privacy.
Apple is introducing tools for businesses to manage how and when employees can use artificial intelligence. The controls are granular enough to manage which features are enabled or disabled. The system also reportedly lets companies restrict whether an employee’s AI requests go to ChatGPT’s cloud service, even if the business doesn’t buy services from OpenAI directly. This can prevent employees from accidentally handing internal-only IP or data to ChatGPT, where it could be used elsewhere. And while the focus is on ChatGPT, the tools won’t be limited to OpenAI’s service: the same controls can restrict any “external” AI provider, which could include Anthropic or Google, for example. Apple has a public deal with OpenAI that enables deep ChatGPT integration on the iPhone, but the new tools may indicate that Apple is preparing for a future where corporate users want more freedom in choosing an AI service, and where Apple offers more such integrations. While Apple has its own Private Cloud Compute architecture to protect user data under Apple Intelligence, it has no way of ensuring security or privacy for third-party services. The tools are an attempt to give enterprise customers more control over them.
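For IT admins, gating like this is typically expressed as a Restrictions payload in an MDM configuration profile. The sketch below shows roughly what that could look like; the `allowExternalIntelligenceIntegrations` key is the restriction Apple’s device-management documentation lists for external-AI (ChatGPT) integration, but treat the exact key name and OS availability as an assumption to verify against Apple’s documentation and your MDM vendor.

```xml
<!-- Fragment of a Restrictions payload in an MDM configuration profile.
     Key name per Apple's device-management docs; verify availability
     with your MDM vendor before deploying. -->
<dict>
    <key>PayloadType</key>
    <string>com.apple.applicationaccess</string>
    <!-- Block requests from being routed to external AI providers
         such as ChatGPT (assumed restriction key) -->
    <key>allowExternalIntelligenceIntegrations</key>
    <false/>
</dict>
```

In practice this fragment would sit inside a full configuration profile pushed by the MDM server; on-device Apple Intelligence features would be governed by their own, separate restriction keys.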
Apple pivots to a full LLM Siri after a hybrid approach led to delays, promising context-aware tasks, legacy app control, and evaluation of external models to accelerate capability without compromising privacy.
Apple is developing a new version of Siri that’s supposed to be better than the existing Siri in every way. It will be smarter and able to do more, functioning like ChatGPT or Claude instead of a barely competent 2012-era smartphone assistant. The next-generation Siri will use advanced large language models, similar to ChatGPT, Claude, Gemini, and other AI chatbots. Here’s what we’re waiting on:
Personal Context: Siri will be able to keep track of emails, messages, files, photos, and more, learning about you to help you complete tasks and keep track of what you’ve been sent.
Onscreen Awareness: Siri will be able to see what’s on your screen and complete actions involving whatever you’re looking at.
Deeper App Integration: Siri will be able to do more in and across apps, performing actions and completing tasks that are just not possible with the assistant right now.
Apple is rumored to be considering a partnership with ChatGPT creator OpenAI or Claude creator Anthropic to power the smarter version of Siri. Both companies are reportedly training versions of their models to work with Apple’s Private Cloud Compute servers, and Apple is running tests with both its own models and models from outside companies. No final decision has been made yet. Partnering with a company like Anthropic or OpenAI would let Apple deliver the exact Siri feature set it’s aiming for, while giving it time to continue work on its own LLM behind the scenes.
Bloomberg tips a three‑year iPhone redesign: slim “Air” now, foldable with reduced‑crease display in 2026, and all‑around curved glass for the 20th‑anniversary model in 2027
Apple made a big splash in 2017 by introducing an all-screen iPhone with a notch and no physical home button for the 10th anniversary of the iPhone. The company is now preparing another major overhaul for the iPhone’s 20th anniversary, featuring a new curved glass design. The iPhone 20, set to launch in 2027, will have curved glass edges all around, likely to suit the new “Liquid Glass” design philosophy of iOS. Bloomberg’s report also noted that before the 20th-anniversary iPhone’s release, Apple will launch its first foldable phone in 2026, and said that Apple is in the process of switching screen technology for the foldable, which might result in a display that hides the crease well.
Apple’s first foldable reportedly adopts a book‑style form with less crease visibility, four cameras and replaces Face ID with Touch ID on the power button
A new report by Bloomberg’s Mark Gurman details some of the features of Apple’s upcoming foldable iPhone, likely to launch in the fall of 2026. Apple’s first foldable phone will reportedly be a book-style foldable, opening vertically into a small tablet. It will have a total of four cameras – two on the back, one on the inside, and one on the front. It will not have Face ID; instead, it will have Touch ID built into the power button, similar to what we’ve seen on some of the company’s iPads. Other features of note include new screen tech that should make the crease in the unfolded display less visible. The foldable iPhone will come with Apple’s own C2 modem, the same chip that will be used by the iPhone 18 Pro line, and it won’t have a physical SIM-card slot, claims Gurman. The specs line up with Apple analyst Ming-Chi Kuo’s report earlier this year, which also said that the foldable iPhone will have a 7.8-inch inner display, a 5.5-inch outer display, and a thickness of just 9 to 9.5mm when folded. It’s all part of Apple’s big plan to shake up its lineup for three years straight, starting with the new iPhone 17 Air this September, followed by the foldable iPhone next year, and, in 2027, the “iPhone 20,” a sort of anniversary model that will have curved glass edges all around.
iPadOS 26 brings Mac-style windowing and a menu system, turning iPad multitasking into desktop-class control with resizable windows.
The lines between iPad and Mac have never been blurrier thanks to iPadOS 26. The update brings a suite of powerful new features that elevate the iPad’s utility, bridging the gap between touch-first tablet and full-fledged desktop machine:
Menu Bar: Within any active app, swipe down from the top of the screen to reveal a fully functioning macOS-style menu bar. The foremost dropdown menu is the app’s name (where app settings are typically accessed); other standard menus can include File, Edit, Format, View, Window, and Help. The menu bar is dynamic and displays menus specific to the app.
Windowed Apps: A new Windowed Apps mode lets users arrange and resize multiple app windows in a single space, similar to how it works on a Mac. The mode can also be activated from Control Center using a new button, which supports a long press to switch between Windowed Apps and Stage Manager. Users can move and stack windows by dragging them from the top, resize them by dragging the bottom-right corner, and quickly snap them to half the screen by dragging to a corner. Tapping a space on the Home Screen scatters all open windows to the sides, creating room to open other apps.
Traffic Lights: Tapping the familiar traffic-lights symbol, straight out of macOS, expands it into red, amber, and green buttons for closing, minimizing, and expanding the window to full screen. Long-pressing the buttons also reveals the Mac-style Move & Resize and Fill & Arrange options, as well as an option to park the app off-screen via Add a New Window.
App Exposé: In the new Windowed Apps mode, iPadOS 26 also includes an App Exposé-style view similar to the App Switcher. Swipe up from the bottom of the screen to invoke it and see all the open apps in the current space. You can also scroll the interface to see your other open apps, whether they’re sharing spaces or open in full-screen mode.
Preview: The iPad finally gets the Mac’s long-standing Preview app, now with Apple Pencil support, letting you easily open, edit, and mark up a range of images, documents, and file types. The app’s browsing menu is a lot like the Files interface, where you can browse your files and check out recent and shared items. You can also scan documents from right within the app.
Trackpad Pointer: If you have a Magic Keyboard trackpad or a Bluetooth mouse connected to your iPad, the cursor is now a Mac-like pointer rather than a circle. Shake it, and the pointer grows so you can easily locate it on the screen.
Advanced File Management: The Files app gains a new List view with resizable columns, collapsible folders, and new filters, letting users see more document details at a glance and organize their files. To help you identify folders more easily, the app now supports folder customization with custom colors, icons, and emoji, all of which sync across devices.
Folders in Dock: In the Files app, long press on a folder and you’ll see a new Add to Dock option in the contextual dropdown menu, so you can now park any folder in your Dock. Long press its icon and you’ll see Mac-style display options to view the contents as a Grid or a Fan, as well as the typical sorting preferences. In iPadOS 26, you can fit up to 23 icons in the Dock, so there’s nothing stopping you from adding multiple folders. In Settings ➝ Multitasking & Gestures, there’s also a new option to Automatically Show and Hide the Dock, just like in macOS.
Google moves Gemini beyond chat into a full creative and daily-productivity platform with guided learning, flash-card generation, privacy-tuned temporary chat, and watch integration.
Google is beefing up its features for Gemini, its primary suite of generative AI models and the chatbot that serves as its main interface. For creative folks, the Gemini app now offers image editing using text prompts through its viral Gemini 2.5 Flash Image model, codenamed Nano Banana. Google has also added Veo 3, the newest version of its video generation model. The tool can animate still photos, drawings, or digital art into moving video clips, complete with AI-generated audio. For productivity, Google is adding scheduled actions, a feature that lets users queue tasks and recurring requests directly within the Gemini app. The Productivity Planner Gem integrates email, Calendar, and Drive into a single view, designed to help users prioritize daily tasks more easily. Meanwhile, Temporary Chat allows people to hold private conversations with Gemini that won’t be saved or affect future responses, an answer to growing demand for more user control over AI memory. Gemini can now draw on past chat history, if users opt in, to provide more relevant answers, and users can manage or delete stored conversations. Gemini Live, its voice chatbot, gains real-time captions and can connect with Google services such as Maps. For education, one new feature is Guided Learning, which helps users break down complex topics into digestible steps. The tool is designed to make explanations more interactive, with the AI walking learners through a process rather than delivering a static answer. Students and business professionals can also now generate study guides and flash cards directly from their own notes, readings, or problem sets, automating one of the more time-consuming aspects of learning. Google has also introduced Storybook, a feature that allows users to turn personal memories or even dense concepts into illustrated stories that can be read, shared, or printed. The tool can add text and audio, blending creative writing with multimodal AI generation.
