Google has announced that its most powerful Gemini 2.5 models are ready for enterprise production while unveiling a new ultra-efficient variant designed to undercut competitors on cost and speed. The announcements represent Google’s most assertive challenge yet to OpenAI’s market leadership. Two of its flagship AI models, Gemini 2.5 Pro and Gemini 2.5 Flash, are now generally available, signaling the company’s confidence that the technology can handle mission-critical business applications. Google simultaneously introduced Gemini 2.5 Flash-Lite, positioning it as the most cost-effective option in its model lineup for high-volume tasks.

What distinguishes Google’s approach is its emphasis on “reasoning” or “thinking” capabilities: a technical architecture that allows models to process problems more deliberately before responding. Unlike traditional language models that generate responses immediately, Gemini 2.5 models can spend additional computational resources working through complex problems step by step. This “thinking budget” gives developers unprecedented control over AI behavior: they can instruct models to think longer for complex reasoning tasks or respond quickly for simple queries, optimizing both accuracy and cost. The feature addresses a critical enterprise need: predictable AI behavior that can be tuned for specific business requirements.

Gemini 2.5 Pro, positioned as Google’s most capable model, excels at complex reasoning, advanced code generation, and multimodal understanding. Gemini 2.5 Flash strikes a balance between capability and efficiency, designed for high-throughput enterprise tasks like large-scale document summarization and responsive chat applications. The newly introduced Flash-Lite variant sacrifices some intelligence for dramatic cost savings, targeting use cases like classification and translation where speed and volume matter more than sophisticated reasoning.
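To make the “thinking budget” idea concrete, here is a minimal conceptual sketch of how a caller might route requests. This is not Google’s actual SDK; every name and number below is hypothetical and purely illustrative of the cost/accuracy trade-off described above.

```python
# Hypothetical sketch of the "thinking budget" idea: the caller decides how much
# extra reasoning compute to spend per request, trading accuracy against cost
# and latency. Names and numbers are illustrative, not Google's actual API.

def pick_thinking_budget(task: str) -> int:
    """Return a reasoning-token budget based on a crude task classification."""
    complex_markers = ("prove", "debug", "plan", "multi-step", "analyze")
    if any(m in task.lower() for m in complex_markers):
        return 8192   # spend more compute on hard reasoning tasks
    return 0          # skip extended thinking for simple queries

def estimated_cost(budget: int, per_1k_tokens: float = 0.003) -> float:
    """Illustrative cost of the extra reasoning tokens alone."""
    return budget / 1000 * per_1k_tokens

print(pick_thinking_budget("Translate 'hello' to French"))    # 0
print(pick_thinking_budget("Debug this multi-step pipeline")) # 8192
```

The point of the sketch is the control knob itself: a simple query pays nothing extra, while a complex one buys additional reasoning, which is the predictability enterprises are said to want.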
Apple Intelligence’s transcription tool is as accurate as OpenAI’s Whisper and twice as fast
Newly released to developers, Apple Intelligence’s transcription tools are fast, accurate, and typically double the speed of OpenAI’s longstanding equivalent. Pitching Apple Intelligence against MacWhisper’s Large V3 Turbo model showed a dramatic difference: Apple’s Speech framework tools were consistently just over twice the speed of that Whisper-based app. A test 4K, 7GB video file was read and transcribed into subtitles by Apple Intelligence in 45 seconds. MacWhisper with the Large V3 Turbo model took a total of 1 minute and 41 seconds, and the MacWhisper Large V2 model took 3 minutes and 55 seconds to do the same job. None of these transcriptions were perfect, and all required editing. But the Apple Intelligence version was as accurate as the Whisper-based tools, and twice as fast. As well as releasing these Apple Intelligence tools to developers, Apple has published videos with details of how to implement the technology.
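The timings reported above can be turned into speed ratios directly, which bears out the “just over twice the speed” claim:

```python
# Speed ratios from the timings reported above (all converted to seconds).
apple = 45                        # Apple Intelligence / Speech framework
whisper_v3_turbo = 1 * 60 + 41    # 101 s
whisper_v2 = 3 * 60 + 55          # 235 s

print(round(whisper_v3_turbo / apple, 2))  # 2.24x vs. the fastest Whisper model
print(round(whisper_v2 / apple, 2))        # 5.22x vs. the larger Whisper model
```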
Google’s AI Mode now lets users have a free-flowing, back-and-forth voice conversation with Search and explore links from across the web with the option to tap the “transcript” button to view the text response
Google is rolling out the ability for users to have a back-and-forth voice conversation with AI Mode, its experimental Search feature that lets users ask complex, multi-part questions. With the new Search Live integration, users can have a free-flowing voice conversation with Search and explore links from across the web. Users will be able to access the feature by opening the Google app and tapping the new “Live” icon to ask their question aloud. They will then hear an AI-generated audio response, and they can follow up with another question. The feature will be useful in instances where you’re on the go or multitasking.

As you’re having the conversation, you’ll find links right on your screen if you want to dig deeper into your search. Because Search Live works in the background, you can continue the conversation while in another app. Plus, you have the option to tap the “transcript” button to view the text response and continue to ask questions by typing if you’d like to. You can also revisit a Search Live response by navigating to your AI Mode history. The custom model is built on Search’s best-in-class quality and information systems, so you still get reliable, helpful responses no matter where or how you’re asking your question. Search Live with voice also uses a query fan-out technique to show you a wider and more diverse set of helpful web content, enabling new opportunities for exploration.
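Google has not published implementation details, but “query fan-out” generally means expanding one question into several related sub-queries, fetching results for each in parallel, and merging them. A minimal sketch of that pattern, with all names and the stand-in retrieval backend hypothetical:

```python
# Minimal sketch of a query fan-out pattern: expand one question into several
# sub-queries, fetch results for each concurrently, then merge and de-duplicate.
# search_backend is a stand-in; Google's actual systems are not public.
from concurrent.futures import ThreadPoolExecutor

def expand(query: str) -> list[str]:
    """Generate related sub-queries (a real system would use a model here)."""
    return [query, f"{query} examples", f"{query} explained"]

def search_backend(subquery: str) -> list[str]:
    """Stand-in retrieval: return fake result URLs for a sub-query."""
    return [f"https://example.com/{subquery.replace(' ', '-')}/{i}" for i in range(2)]

def fan_out(query: str) -> list[str]:
    """Run all sub-queries in parallel and merge results, dropping duplicates."""
    with ThreadPoolExecutor() as pool:
        result_lists = pool.map(search_backend, expand(query))
    seen, merged = set(), []
    for results in result_lists:
        for url in results:
            if url not in seen:   # de-duplicate across sub-queries
                seen.add(url)
                merged.append(url)
    return merged

print(len(fan_out("solar eclipse")))  # 6 distinct results from 3 sub-queries
```

The wider, more diverse link set the article describes falls out of the expansion step: each sub-query can surface pages the literal original query would not.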
Apple’s speech transcription AI is twice as fast as OpenAI’s Whisper, and more cost-effective
Apple’s speech transcription AI is twice as fast as OpenAI’s Whisper, and more cost-effective, according to early testing by MacStories. The AI is used in Apple apps such as Notes and in phone call transcriptions, and Apple has made its native speech frameworks available to developers within macOS Tahoe. The AI processed a 7GB, 34-minute video file in just 45 seconds, taking 55% less time than Whisper’s fastest model. This is partly because Apple processes speech on the device, making it faster and more secure. It suggests that Apple will continue to introduce new large language models (LLMs) to drive software solutions that compete well in the market, boosted by privacy and price.
iPadOS 26 turns iPad into a productivity powerhouse: it lets iPad users export or download large files in the background while they do other things, open several windows at once and freely resize them, and access downloads and documents right from the Dock, making it more Mac-like
iPadOS 26 is going to boost iPad users’ productivity not only with the new design, but with several new features that make the iPad with a Magic Keyboard the ultimate laptop replacement. Here are five ways iPadOS 26 is going to improve productivity for iPad users:

Folders in the Dock: For the first time, users will be able to access downloads, documents, and other folders right from the Dock, making it more Mac-like.

Supercharged Files app: The Files app is a key part of the iPad experience. With iPadOS 26, Apple takes this application to the next level, from an updated list view with resizable columns to collapsible folders. Users can add colors and other customization options to make it easier to find important documents. They can also set default apps for opening specific file types.

Preview app: It’s easier than ever to open, edit, and mark up PDFs and images. Apple says the new Preview app was designed for a proper Apple Pencil experience, which means signing documents and taking notes should be faster and more reliable than ever.

Background Tasks: Believe it or not, iPadOS 26 finally unlocks true background tasks. Users can now export or download large files in the background while they do other stuff. This might be one of the best iPadOS 26 productivity features.

Better windowing system: Apple revamped the iPadOS 18 windowing system. Forget about Stage Manager, Split View, and Slide Over. With the upcoming iPadOS 26 update, users will be able to open several windows at once and freely resize and arrange them. There are also new ways to control windows with a familiar menu bar and Mac-like controls.
Car makers are holding off from Apple’s CarPlay Ultra in favor of their own solutions, citing the limited avenue to sell subscriptions to drivers through the infotainment system and in-car services, along with design and UI challenges
Apple’s CarPlay Ultra faces a long road to becoming a widely used feature, as car makers are pushing back on supporting Apple’s system in favor of their own solutions. Car manufacturers Mercedes-Benz, Audi, Volvo, Polestar, and Renault have no interest in including CarPlay Ultra support in their vehicles. While Volvo is among those rejecting CarPlay Ultra, chief executive Hakan Samuelsson did admit that car makers don’t do software as well as tech companies. “There are others who can do that better, and then we should offer that in our cars,” he insisted. While design and interface discussions are the more obvious reasons for holding off from CarPlay Ultra, manufacturers also have another incentive: the infotainment system and in-car services are still a possible revenue source for car makers. This was one of the reasons why GM ditched CarPlay in favor of its own system in 2023, given the potential to sell subscriptions to drivers. Some car manufacturers shying away from handing over control to CarPlay Ultra are stopping short of blocking Apple entirely. In most cases, the current, more limited CarPlay will still be offered in tandem with their own systems. BMW insisted that CarPlay will be used in its infotainment system. Meanwhile, Audi believes it should provide drivers “a customized and seamless digital experience” of its own creation, while still maintaining CarPlay support.
Apple’s Swift coding language to add support for the Android platform, with a focus on improving support in the official distribution, determining the range of supported Android API levels, and developing support for debugging Swift applications
Apple usually doesn’t give Android the time of day, but that’s not stopping the company’s Swift coding language from expanding over to Android app development. Android apps are generally coded in Kotlin, but Apple is looking to provide its Swift coding language as an alternative. Apple first launched its coding language back in 2014 with its own platforms in mind, but currently also supports Windows and Linux officially. Swift has opened up an “Android Working Group” which will “establish and maintain Android as an officially supported platform for Swift.” A few of the key pillars the Working Group will look to accomplish include:

1) Improve and maintain Android support for the official Swift distribution, eliminating the need for out-of-tree or downstream patches
2) Recommend enhancements to core Swift packages such as Foundation and Dispatch to work better with Android idioms
3) Work with the Platform Steering Group to officially define platform support levels generally, and then work towards achieving official support of a particular level for Android
4) Determine the range of supported Android API levels and architectures for Swift integration
5) Develop continuous integration for the Swift project that includes Android testing in pull request checks
6) Identify and recommend best practices for bridging between Swift and Android’s Java SDK and packaging Swift libraries with Android apps
7) Develop support for debugging Swift applications on Android
8) Advise and assist with adding support for Android to various community Swift packages
Google’s virtual try-on app lets users not only virtually “try on” outfits but also see themselves in motion while wearing them in AI-generated videos, with static images turned into dynamic visuals
Google launched an experimental app that lets users not only virtually “try on” outfits but also see themselves in motion while wearing them. The new Doppl app from Google Labs builds on the capabilities of the AI Mode virtual try-on feature launched in May by Google Shopping, adding the ability to turn static images into artificial intelligence-generated videos. The dynamic visuals give users “an even better sense for how an outfit might feel.” Users can generate these images and videos by uploading a full-body photo of themselves as well as photos or screenshots of the items they would like to try on. “With Doppl, you can try out any look, so if you see an outfit you like from a friend, at a local thrift shop, or featured on social media, you can upload a photo of it into Doppl and imagine how it might look on you,” Google’s post said. “You can also save or share your best looks with friends or followers.”