With the introduction of the Phone app on macOS, you can now take and place calls directly from your Mac. The macOS Phone app takes many cues from the updated iOS Phone app. The new Unified view is recreated on the bigger screen, defaulting to a list of recent calls with favorite callers on top. These take advantage of contact photos and posters, if your contacts have set them up. Selecting a recent call brings up a larger version of the contact poster, along with more information about that person. There’s also a Manage Filtering option, which on an iPhone would open the Settings app with options for handling unknown or spam callers. An Edit button lets you change your Favorites list or select multiple call logs for mass deletion. The iPhone’s built-in voicemail is also accessible from the macOS Phone app: if a voicemail recording and transcript are available on the connected iPhone, they can be heard and read directly from the Mac. When you place a call by pressing the relevant icon or using the on-screen keypad, or receive one, a box appears in the top-right corner of your Mac’s display. The buttons on the box offer extra features, including a compact keypad for navigating phone menu systems and options for enabling Call Recording, Live Translation, Hold Assist, and Screen Sharing.
Overhauled Shortcuts app in iOS 26 supports Apple Intelligence models for actions like summarizing PDFs, generating recipes, answering questions, and more
Apple overhauled the Shortcuts app in iOS 26, iPadOS 26, and macOS Tahoe, adding Apple Intelligence options that users can take advantage of. The app supports Apple Intelligence models for things like summarizing PDFs, generating recipes, answering questions, and more. Here’s what Apple offers, along with the descriptions:

- Morning Summary – Use Model to describe the day ahead of you.
- Action Items From Meeting Notes – Use Model to grab action items from meeting notes.
- Summarize PDF – Use Model to summarize the open PDF in Safari.
- Is Severance Season 3 Out? – Use Model to find out if something has been released.
- ASCII Art – Use Model to draw you some ASCII art.
- Document Review – Use Model to help you compare and contrast documents.
- Reminders Roulette – Use Model to punt an unimportant reminder to tomorrow.
- Get Started With Language Models – A tutorial for Use Model, with examples.

As the last pre-made Shortcut suggests, you can create your own shortcuts that incorporate Apple’s AI models, with Apple’s offerings serving as examples. When you go to create a Shortcut, there’s a new Apple Intelligence section. You can opt to use an on-device model, a cloud model that takes advantage of Private Cloud Compute, or ChatGPT. There are some pre-determined options, so you can do things like open Visual Intelligence or generate an image with Image Playground. There are also several Writing Tools features for adjusting the tone of text, proofreading, creating a list from text, summarizing text, or rewriting text. When you tap Cloud, On-Device Model, or ChatGPT, there’s an open-ended prompt where you can write in what you want the model to do. You need to work within the confines of the model Apple provides, pairing it with other functionality in Shortcuts. You can pull in data from the Weather app, your Calendar, and Reminders, then ask the model to prepare a summary, for example. AI models can be incorporated into any Shortcut.
Stripe buys crypto wallet startup Privy, which helps developers build products on crypto rails; using a single API, clients can spin up wallets rather than use external ones
Payments giant Stripe is acquiring crypto wallet infrastructure startup Privy for an undisclosed sum. The deal is part of Stripe’s aggressive push back into crypto following a six-year hiatus, building on its recent $1.1 billion takeover of stablecoin platform Bridge. Privy aims to make it easy for developers to build products on crypto rails. Through a single API, clients can spin up wallets rather than use external ones, sign transactions, and integrate with any onchain system. The firm now claims to power over 75 million accounts across more than 1,000 developer teams, orchestrating billions of dollars in transactions. Among its clients are trading platform Hyperliquid and restaurant app Blackbird. Like Bridge, the startup will operate as an independent product under Stripe.
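The "spin up wallets, then sign transactions" flow described above can be sketched in Python. Note that the base URL, endpoint paths, and field names below are hypothetical stand-ins chosen for illustration; they are not Privy's actual API surface.

```python
# Illustrative sketch of a "single API" embedded-wallet flow like the one the
# article describes. All URLs, paths, and field names here are HYPOTHETICAL
# stand-ins, not Privy's real API.

API_BASE = "https://api.example-wallets.dev/v1"  # hypothetical base URL


def create_wallet_request(user_id: str, chain_type: str = "ethereum") -> tuple:
    """Build (url, body) to provision an embedded wallet for a user."""
    return (f"{API_BASE}/wallets", {"user_id": user_id, "chain_type": chain_type})


def sign_transaction_request(wallet_id: str, tx: dict) -> tuple:
    """Build (url, body) asking the provisioned wallet to sign a transaction."""
    return (f"{API_BASE}/wallets/{wallet_id}/sign", {"transaction": tx})
```

In a real integration these payloads would be POSTed with the provider's credentials; the point of the pattern is that a single API handles wallet creation, transaction signing, and onchain integration, rather than each user bringing an external wallet.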
iOS 26 update allows users to deploy Visual Intelligence on anything on their iPhone’s screen, without requiring them to point the iPhone camera at anything
Apple has made what looks like the smallest update to Visual Intelligence in iOS 26, and yet the impact of being able to use it on any image is huge, at least doubling the usefulness of this one feature. Previously, Visual Intelligence involved pointing your iPhone camera at whatever you were interested in. What Apple has done with iOS 26 is take that step away: everything else is the same, but you no longer have to use your camera. You can instead deploy Visual Intelligence on anything on your iPhone’s screen. This one change means researchers can find out more about objects they see on websites, and shoppers can freeze-frame a YouTube video and use Visual Intelligence to track down the bag an influencer is wearing. One wrinkle is that there are now two different ways to use Visual Intelligence, each started differently: the new version is an extra part of Visual Intelligence, not a replacement, and one of its many modes provides a very different service from the rest. Yet being able to identify just about anything on your screen is a huge boon, and Apple achieved it simply by removing the requirement to point your iPhone camera at anything.
New Gemini 2.5 models can process problems more deliberately before responding, spending additional computational resources working through complex problems step-by-step; the Flash variants target cost-effective, high-throughput enterprise tasks like large-scale document summarization
Google has announced that its most powerful Gemini 2.5 models are ready for enterprise production while unveiling a new ultra-efficient variant designed to undercut competitors on cost and speed. The announcements represent Google’s most assertive challenge yet to OpenAI’s market leadership. Two of its flagship AI models, Gemini 2.5 Pro and Gemini 2.5 Flash, are now generally available, signaling the company’s confidence that the technology can handle mission-critical business applications. Google simultaneously introduced Gemini 2.5 Flash-Lite, positioning it as the most cost-effective option in its model lineup for high-volume tasks. What distinguishes Google’s approach is its emphasis on “reasoning” or “thinking” capabilities, a technical architecture that allows models to process problems more deliberately before responding. Unlike traditional language models that generate responses immediately, Gemini 2.5 models can spend additional computational resources working through complex problems step-by-step. This “thinking budget” gives developers unprecedented control over AI behavior. They can instruct models to think longer for complex reasoning tasks or respond quickly for simple queries, optimizing both accuracy and cost. The feature addresses a critical enterprise need: predictable AI behavior that can be tuned for specific business requirements. Gemini 2.5 Pro, positioned as Google’s most capable model, excels at complex reasoning, advanced code generation, and multimodal understanding. Gemini 2.5 Flash strikes a balance between capability and efficiency, designed for high-throughput enterprise tasks like large-scale document summarization and responsive chat applications. The newly introduced Flash-Lite variant sacrifices some intelligence for dramatic cost savings, targeting use cases like classification and translation where speed and volume matter more than sophisticated reasoning.
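The "thinking budget" control described above can be sketched as a request to the Gemini REST API. This builds the URL and JSON body only (no network call is made); the exact model names and budget limits may differ from what Google ships, so treat this as an illustrative shape rather than a definitive client.

```python
# Sketch of a Gemini 2.5 generateContent request with an explicit thinking
# budget. Builds the URL and JSON body only; nothing is sent over the network.

def build_request(prompt: str, thinking_budget: int,
                  model: str = "gemini-2.5-flash") -> tuple:
    """Return (url, body) for a generateContent call.

    thinking_budget caps the tokens the model may spend reasoning
    step-by-step before answering; 0 disables thinking for fast,
    cheap responses to simple queries.
    """
    url = (
        "https://generativelanguage.googleapis.com/v1beta/models/"
        f"{model}:generateContent"
    )
    body = {
        "contents": [{"role": "user", "parts": [{"text": prompt}]}],
        "generationConfig": {
            "thinkingConfig": {"thinkingBudget": thinking_budget},
        },
    }
    return url, body


# A complex task gets a generous budget; a simple lookup gets none.
hard = build_request("Compare these two contracts clause by clause.", 8192)
easy = build_request("What is the capital of France?", 0)
```

This is the tuning knob the article describes: the same model, dialed up for accuracy on hard problems or down for cost and latency on easy ones.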
Apple Intelligence’s transcription tool is as accurate as, and twice as fast as, OpenAI’s Whisper
Newly released to developers, Apple Intelligence’s transcription tools are fast, accurate, and typically double the speed of OpenAI’s longstanding equivalent. Pitting Apple Intelligence against MacWhisper’s Large V3 Turbo model showed a dramatic difference: Apple’s Speech framework tools were consistently just over twice the speed of that Whisper-based app. A test 4K 7GB video file was read and transcribed into subtitles by Apple Intelligence in 45 seconds. It took MacWhisper with the Large V3 Turbo model a total of 1 minute and 41 seconds, and the MacWhisper Large V2 model took 3 minutes and 55 seconds to do the same job. None of these transcriptions were perfect, and all required editing. But the Apple Intelligence version was as accurate as the Whisper-based tools, and twice as fast. As well as releasing these Apple Intelligence tools to developers, Apple has published videos with details of how to implement the technology.
Google’s AI Mode now lets users have a free-flowing, back-and-forth voice conversation with Search and explore links from across the web, with the option to tap the “transcript” button to view the text response
Google is rolling out the ability for users to have a back-and-forth voice conversation with AI Mode, its experimental Search feature that lets users ask complex, multi-part questions. With the new Search Live integration, users can have a free-flowing voice conversation with Search and explore links from across the web. Users will be able to access the feature by opening the Google app and tapping the new “Live” icon to ask their question aloud. They will then hear an AI-generated audio response, and they can follow up with another question. The feature will be useful in instances where you’re on the go or multitasking. As you’re having the conversation, you’ll find links right on your screen if you want to dig deeper into your search. Because Search Live works in the background, you can continue the conversation while in another app. Plus, you have the option to tap the “transcript” button to view the text response and continue to ask questions by typing if you’d like to. You can also revisit a Search Live response by navigating to your AI Mode history. The custom model is built on Search’s best-in-class quality and information systems, so you still get reliable, helpful responses no matter where or how you’re asking your question. Search Live with voice also uses a query fan-out technique to show you a wider and more diverse set of helpful web content, enabling new opportunities for exploration.
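The query fan-out technique mentioned above can be illustrated generically: expand one question into several sub-queries, run them concurrently, and merge the results. The `expand` and `search` functions below are stand-ins for illustration, not Google's implementation.

```python
# Generic sketch of query fan-out: one question is expanded into several
# sub-queries that run concurrently, and their results are merged with
# order-preserving de-duplication.
from concurrent.futures import ThreadPoolExecutor


def fan_out(question, expand, search, max_workers=4):
    """expand(question) -> list of sub-queries; search(query) -> list of results."""
    sub_queries = expand(question)
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        result_lists = list(pool.map(search, sub_queries))
    seen, merged = set(), []
    for results in result_lists:
        for item in results:
            if item not in seen:  # drop duplicates shared across sub-queries
                seen.add(item)
                merged.append(item)
    return merged
```

Fanning one spoken question out into multiple sub-queries is what lets a single conversational turn surface a wider and more diverse set of web links than a single literal search would.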
Apple’s speech transcription AI is twice as fast as OpenAI’s Whisper, and more cost-effective
Apple’s speech transcription AI is twice as fast as OpenAI’s Whisper, and more cost-effective, according to early testing by MacStories. The AI is used in Apple’s apps like Notes and phone call transcriptions, and Apple has made its native speech frameworks available to developers within macOS Tahoe. The AI processes a 7GB, 34-minute video file in just 45 seconds, 55% faster than Whisper’s fastest model. This is due to Apple processing speech on the device, making it faster and more secure. This indicates that Apple will continue to introduce new large language models (LLMs) to drive software solutions that compete well in the market, boosted by privacy and price.
iPadOS 26 turns the iPad into a productivity powerhouse: it lets iPad users export or download large files in the background while they do other things, open several windows at once and freely resize them, and access downloads and documents right from the Dock, making the iPad more Mac-like
iPadOS 26 is going to boost iPad users’ productivity not only with the new design, but with several new features that make the iPad with a Magic Keyboard the ultimate laptop replacement. Here are five ways iPadOS 26 is going to improve productivity for iPad users:

- Folders in the Dock: For the first time, users will be able to access downloads, documents, and other folders right from the Dock, making it more Mac-like.
- Supercharged Files app: The Files app is a key part of the iPad experience. With iPadOS 26, Apple takes this application to the next level, from an updated list view with resizable columns to collapsible folders. Users can add colors and other customization options to make it easier to find important documents. They can also set default apps for opening specific file types.
- Preview app: It’s easier than ever to open, edit, and mark up PDFs and images. Apple says the new Preview app was designed for a proper Apple Pencil experience, which means signing documents and taking notes should be faster and more reliable than ever.
- Background Tasks: Believe it or not, iPadOS 26 finally unlocks true background tasks. Users can now export or download large files in the background while they do other stuff. This might be one of the best iPadOS 26 productivity features.
- Better windowing system: Apple revamped the iPadOS 18 windowing system. Forget about Stage Manager, Split View, and Slide Over. With the upcoming iPadOS 26 update, users will be able to open several windows at once and freely resize and arrange them. There are also new ways to control windows with a familiar menu bar and Mac-like controls.
Car makers are holding off on Apple’s CarPlay Ultra in favor of their own solutions, as infotainment systems and in-car services remain a potential avenue to sell subscriptions to drivers, along with design and UI concerns
Apple’s CarPlay Ultra faces a long road to becoming a widely-used feature, as car makers are pushing back on supporting Apple’s system in favor of their own solutions. Car manufacturers Mercedes-Benz, Audi, Volvo, Polestar, and Renault have shown no interest in including CarPlay Ultra support in their vehicles. While Volvo is among those rejecting CarPlay Ultra, chief executive Hakan Samuelsson did admit that car makers don’t do software as well as tech companies. “There are others who can do that better, and then we should offer that in our cars,” he insisted. While design and interface disagreements are the more obvious reasons for holding off on CarPlay Ultra, manufacturers also have another incentive: the infotainment system and in-car services are still a possible revenue source for car makers. This was one of the reasons why GM ditched CarPlay in favor of its own system in 2023, citing the potential to sell subscriptions to drivers. Some car manufacturers shying away from handing over control to CarPlay Ultra are stopping short of blocking Apple entirely: in most cases, the current, more limited CarPlay will still be offered in tandem with their own systems. BMW insisted that CarPlay will be used in its infotainment system, while Audi believes it should provide drivers “a customized and seamless digital experience” of its own creation, while still maintaining CarPlay support.