Apple is introducing tools for businesses to manage how and when employees can use artificial intelligence. The controls are granular enough to enable or disable individual features. The system also appears to let companies restrict whether an employee’s AI requests go to ChatGPT’s cloud service, even if the business doesn’t buy services from OpenAI directly. This can prevent employees from accidentally handing internal-only IP or data to ChatGPT, where it could be used elsewhere. While the focus is on ChatGPT, the tools won’t be limited to OpenAI’s service: the same controls can restrict any “external” AI provider, which could include Anthropic or Google, for example. Apple has a public deal with OpenAI that enables deep ChatGPT integration on the iPhone, but the new tools may indicate that Apple is preparing for a future where corporate users want more freedom over which AI service they use, and where Apple offers more such integrations. While Apple’s own Private Cloud Compute architecture protects user data under Apple Intelligence, Apple has no way of ensuring security or privacy for third-party services; the tools are an attempt to give enterprise customers more control over them.
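Conceptually, the controls described above resemble an MDM-style restrictions payload evaluated per feature on the device. The sketch below is purely illustrative: every key and function name is invented for the example and none of them are Apple’s actual configuration profile keys.

```python
# Hypothetical sketch of a per-feature AI restriction policy of the kind an
# MDM server might push to managed devices. Key names are invented, not
# Apple's real configuration profile keys.

AI_RESTRICTIONS = {
    "allow_external_ai_providers": False,   # block routing requests to ChatGPT etc.
    "allow_on_device_features": True,       # on-device Apple Intelligence stays available
    "feature_toggles": {                    # granular per-feature switches
        "writing_tools": True,
        "image_generation": False,
    },
}

def is_feature_allowed(feature, uses_external_provider, policy=AI_RESTRICTIONS):
    """Decide whether a managed device should enable a given AI feature."""
    # External cloud providers are gated first, regardless of the feature.
    if uses_external_provider and not policy["allow_external_ai_providers"]:
        return False
    # Otherwise fall back to the per-feature toggle, defaulting to the
    # general on-device policy when the feature isn't listed.
    return policy["feature_toggles"].get(feature, policy["allow_on_device_features"])
```

Under this toy policy, on-device writing tools stay enabled while the same feature backed by an external cloud provider is blocked, which is the kind of split the report describes.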
The GENIUS Act’s requirement that all approved stablecoin issuers maintain robust AML, KYC, and risk-monitoring programs, coupled with a national trust bank charter that lets companies offer crypto custody services, could open the stablecoin market to a wider range of users
With the GENIUS Act now law, stablecoin issuers—including banks and fintechs—finally have regulatory clarity under federal oversight, allowing for broader adoption. Federally insured banks can issue stablecoins, while fintechs require Federal Reserve approval. The law aims to legitimize stablecoins with rules on consumer protections, reserves, AML, and KYC, drawing more users and revenue. Paxos CEO Charles Cascarilla said the law will help stablecoins go mainstream, while Mastercard’s Jorn Lambert emphasized regulation as key to adoption. Paxos, PayPal, Fiserv, and Mastercard are part of the Global Dollar Network pushing for scale. Though stablecoins won’t likely replace everyday payments in developed economies, they’re seen as transformative for cross-border transactions, gig economy pay, and digital wallets. Tether and Circle welcomed the law, with Circle seeking a national trust bank charter to expand its services. Critics, including the ABA and Consumer Reports, warn that stablecoins could disrupt traditional banking and lack adequate consumer safeguards. Still, large banks like Citi, JPMorgan, and BofA are exploring stablecoin strategies, with Citi appearing the most bullish, according to KeyBanc.
Google’s AI research agent combines diffusion mechanisms and retrieval tools to produce more comprehensive and accurate research on complex topics by emulating the human process of iterative revision, turning a rough draft into higher-quality outputs
Google researchers have developed a new framework for AI research agents that outperforms leading systems from rivals OpenAI, Perplexity and others on key benchmarks. The new agent, called Test-Time Diffusion Deep Researcher (TTD-DR), is inspired by the way humans write: drafting, searching for information, and making iterative revisions. The system uses diffusion mechanisms and evolutionary algorithms to produce more comprehensive and accurate research on complex topics. For enterprises, this framework could power a new generation of bespoke research assistants for high-value tasks that standard retrieval-augmented generation (RAG) systems struggle with, such as generating a competitive analysis or a market entry report. Unlike the linear process of most AI agents, human researchers work iteratively. They typically start with a high-level plan, create an initial draft, and then engage in multiple revision cycles. During these revisions, they search for new information to strengthen their arguments and fill in gaps. Google’s researchers observed that this human process could be emulated with a diffusion model augmented by a retrieval component: a trained diffusion model initially generates a noisy draft, and a denoising module, aided by retrieval tools, revises this draft into higher-quality (or higher-resolution) outputs. TTD-DR is built on this blueprint. The framework treats the creation of a research report as a diffusion process, where an initial, “noisy” draft is progressively refined into a polished final report. This is achieved through two core mechanisms. The first, which the researchers call “Denoising with Retrieval,” starts with a preliminary draft and iteratively improves it. In each step, the agent uses the current draft to formulate new search queries, retrieves external information, and integrates it to “denoise” the report by correcting inaccuracies and adding detail.
The second mechanism, “Self-Evolution,” ensures that each component of the agent (the planner, the question generator, and the answer synthesizer) independently optimizes its own performance. The resulting research companion is “capable of generating helpful and comprehensive reports for complex research questions across diverse industry domains.” In side-by-side comparisons with OpenAI Deep Research on long-form report generation, TTD-DR achieved win rates of 69.1% and 74.5% on two different datasets. It also surpassed OpenAI’s system on three separate benchmarks that required multi-hop reasoning to find concise answers, with performance gains of 4.8%, 7.7%, and 1.7%.
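The denoising-with-retrieval loop can be sketched in miniature. Everything below is a toy stand-in, assuming simple string-matching retrieval in place of real search and a claim-tracking dict in place of an LLM-generated draft; none of the function names come from Google’s paper, and the real components (planner, question generator, answer synthesizer) are LLM-based.

```python
# Toy sketch of TTD-DR-style "denoising with retrieval": a noisy draft with
# unsupported claims is iteratively refined by generating queries from the
# draft, retrieving evidence, and folding it back in.

def generate_queries(draft):
    """Stand-in for an LLM that reads the draft and proposes searches."""
    return [f"evidence for: {claim}" for claim in draft["open_claims"]]

def retrieve(query, corpus):
    """Toy retrieval: return corpus entries sharing a keyword with the query."""
    return [doc for doc in corpus if any(word in doc for word in query.split())]

def denoise(draft, evidence):
    """'Denoising' step: claims backed by evidence move from open to supported."""
    still_open = []
    for claim in draft["open_claims"]:
        if any(claim in doc for doc in evidence):
            draft["supported_claims"].append(claim)
        else:
            still_open.append(claim)
    draft["open_claims"] = still_open
    return draft

def ttd_dr_loop(draft, corpus, max_steps=3):
    """Refine the draft over several revision cycles, stopping when clean."""
    for _ in range(max_steps):
        if not draft["open_claims"]:
            break
        evidence = []
        for query in generate_queries(draft):
            evidence.extend(retrieve(query, corpus))
        draft = denoise(draft, evidence)
    return draft

draft = {"open_claims": ["market grew 12%", "rival exited segment"],
         "supported_claims": []}
corpus = ["analyst note: the market grew 12% in 2024",
          "press release: rival exited segment in Q2"]
final = ttd_dr_loop(draft, corpus)
# final["open_claims"] is now empty; both claims are supported
```

The self-evolution mechanism, by contrast, would have each of the helper functions above independently tuning its own behavior between runs, which this sketch does not attempt to model.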
Apple pivots to a fully LLM‑based Siri after a hybrid approach led to delays; promising context‑aware tasks, legacy app control, and evaluation of external models to accelerate capability without compromising privacy
Apple is developing a new version of Siri that’s supposed to be better than the existing Siri in every way. It will be smarter and able to do more, functioning like ChatGPT or Claude instead of a barely competent 2012-era smartphone assistant. The next-generation Siri will use advanced large language models, similar to ChatGPT, Claude, Gemini, and other AI chatbots. Here’s what we’re waiting on. Personal context: Siri will be able to keep track of emails, messages, files, photos, and more, learning about you to help you complete tasks and keep track of what you’ve been sent. Onscreen awareness will let Siri see what’s on your screen and complete actions involving whatever you’re looking at. Deeper app integration means that Siri will be able to do more in and across apps, performing actions and completing tasks that are simply not possible with the assistant right now. Apple is rumored to be considering a partnership with ChatGPT creator OpenAI or Claude creator Anthropic to power the smarter version of Siri. Both companies are reportedly training versions of their models to work with Apple’s Private Cloud Compute servers, and Apple is running tests with both its own models and models from outside companies. No final decision on Siri has been made yet. Partnering with a company like Anthropic or OpenAI would let Apple deliver the exact Siri feature set it is aiming for, while giving it time to continue work on its own LLM behind the scenes.
Amazon acquires AI wearables startup Bee, whose stand-alone Fitbit-like bracelet records everything it hears, listening to conversations to create reminders and to-do lists for the user
Amazon has acquired the AI wearables startup Bee. Bee makes both a stand-alone Fitbit-like bracelet (which retails for $49.99, plus a $19-per-month subscription) and an Apple Watch app. The product records everything it hears — unless the user manually mutes it — with the goal of listening to conversations to create reminders and to-do lists for the user. The company hopes to create a “cloud phone,” or a mirror of your phone that gives the personal Bee device access to the user’s accounts and notifications, making it possible to get reminders about events or send messages. At a $50 price point, Bee’s devices are more cost-accessible to a curious consumer who doesn’t want to make a big financial commitment. This acquisition signals Amazon’s interest in developing wearable AI devices, a different avenue from its voice-controlled home assistant products like its line of Echo speakers. Bee says that users can delete their data at any time and that audio recordings are not saved, stored, or used for AI training. The app does store data that the AI learns about the user, however, which is how it can function as an assistant. Bee also says it’s working on a feature to allow users to define boundaries — both based on topic and location — that will automatically pause the device’s learning. The company noted that it plans to build on-device AI processing, which generally poses less of a privacy risk than processing data in the cloud.
New Siri will bring voice control to just about all apps — but maybe not banking
Apple’s goal for its Apple Intelligence-based Siri is to give it the ability to control apps through voice commands, but the feature will be limited at launch and may never control banking apps. Apple is reportedly already testing this functionality across a series of popular third-party apps, such as Uber, Amazon, YouTube, and Facebook. Apple is also building it into its own apps, plus some selected games. Reportedly, the feature will either exclude or at least limit control of banking, financial, and other sensitive apps, perhaps including health. That’s because the feature has to be entirely and constantly reliable, and Apple will not be rolling it out to all apps at once. The report claims that this use of Siri is enormously significant, and specifically that it is vastly more important than the promised ability to ask Siri the name of someone a user has forgotten. But that promise was a rare demonstration of AI being used for something people would actually do. Voice control would turn the iPhone and Siri into a “Star Trek”-like device, and it could be an impressively fast way of using devices. But Apple needs to show us a reason to want Apple Intelligence, and “what’s the name of the guy I met that time” would persuade more people than a quicker way to leave a sarky comment on Facebook. Perhaps an improved search facility would help, too. The latest reports are that Siri may gain ChatGPT-like search powers in the first half of 2026.
Bloomberg tips a three‑year iPhone redesign: slim “Air” now, foldable with reduced‑crease display in 2026, and all‑around curved glass for the 20th‑anniversary model in 2027
Apple made a big splash in 2017 by introducing an all-screen iPhone with a notch and no physical home button for the 10th anniversary of the iPhone. The company is now preparing another major overhaul for the iPhone’s 20th anniversary, featuring a new curved glass design. The iPhone 20, set to launch in 2027, will have curved glass edges all around, likely to suit the new “Liquid Glass” design philosophy of iOS. Bloomberg’s report also noted that before the 20th-anniversary iPhone’s release, Apple will launch its first foldable phone in 2026. It said that Apple is in the process of switching screen technology for its upcoming foldable, which might result in a display that hides the crease well.
WEX and bp partnership enables fleet drivers to pay for parts and service, tolls, car washes, parking, and roadside assistance using the earnify™ fleet card and earn rebates for fuel and vehicle-related purchases at over 8,000 stations
WEX and bp announced a new partnership to provide fleet drivers access to fuel savings through the earnify™ fleet fuel card program in the U.S. Moving forward, earnify™ fleet cards can be used for fuel and vehicle-related purchases at merchants that accept WEX and Mastercard®, with ongoing fuel rebates available at over 8,000 stations across the bp family of brands. Designed for small businesses and large fleets, this partnership will expand the program’s valuable fuel rebates to bp, Amoco, TravelCenters of America, TA Express, and Petro stations across the country. “With the earnify™ fleet card, we’re combining WEX’s payments technology with bp’s fueling network to give fleets a smarter, more efficient way to manage operations,” said Brian Fournier, Americas SVP & GM, Mobility at WEX. Drivers can use their earnify™ fleet card to pay for parts and service, tolls, car washes, parking, and roadside assistance. With all of these options on one card, the solution also supports integrated reporting and invoicing. Business owners and fleet operators can set purchase controls on employee spending based on product type, dollar amount, time of day, and more. The cards come with EMV chip technology, giving businesses extra protection from fraud. earnify™ fleet drivers can sign up for the earnify™ rewards program and earn personal loyalty points when they fuel for work at participating locations. This benefit is designed to enhance driver satisfaction and encourage in-network fueling, benefiting both drivers and fleet managers.
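The purchase controls described above amount to a small rule-evaluation step at authorization time: each transaction is checked against the fleet’s policy on product type, dollar amount, and time of day. Here is a minimal sketch of that idea; the field names, rule shapes, and `authorize` function are invented for illustration and do not reflect WEX’s actual platform.

```python
# Toy illustration of fleet-card purchase controls: product type, dollar
# amount, and time-of-day limits, as described in the article. All names
# and rule formats are hypothetical.
from datetime import time

CONTROLS = {
    "allowed_products": {"fuel", "parts_and_service", "tolls", "car_wash",
                         "parking", "roadside_assistance"},
    "max_amount": 200.00,                      # per-transaction dollar limit
    "allowed_hours": (time(6, 0), time(20, 0)),  # 6 AM to 8 PM
}

def authorize(purchase, controls=CONTROLS):
    """Return (approved, reason) for a purchase against the fleet's controls."""
    if purchase["product"] not in controls["allowed_products"]:
        return False, "product type not allowed"
    if purchase["amount"] > controls["max_amount"]:
        return False, "over per-transaction limit"
    start, end = controls["allowed_hours"]
    if not (start <= purchase["time"] <= end):
        return False, "outside allowed hours"
    return True, "approved"

ok, reason = authorize({"product": "fuel", "amount": 85.50, "time": time(9, 30)})
denied, why = authorize({"product": "fuel", "amount": 85.50, "time": time(23, 0)})
```

A daytime fuel purchase under the limit passes all three checks, while the same purchase at 11 PM is declined for being outside the allowed hours, which is the kind of employee-spending control the program advertises.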
Wear OS watches are showing Google Wallet photo passes containing a barcode or QR code, suggesting Google may be experimenting with support specifically for scannable photo passes rather than full photo pass support
Some Wear OS watches are starting to show Google Wallet photo passes with patterns like QR codes or barcodes. The interface is in Spanish, and the pass includes a label that translates to “Press to scan,” suggesting it’s the kind of pass that contains a barcode or QR code. Another screenshot shows a disclaimer explaining that the pass was created with a photo and that some of its information might not be visible on the watch, with a prompt to open it on your phone for the full view. Photo passes that include a QR code or barcode appear to show up on some Wear OS watches, while passes that are simply photos, like an image of a document, don’t. That suggests Google may be experimenting with support specifically for scannable photo passes rather than full photo pass support. There’s been no official announcement from Google, and its support pages still state that “private” passes aren’t supported on Wear OS. But this is the clearest sign yet that things might be changing.
Apple’s first foldable reportedly adopts a book‑style form with less crease visibility, four cameras and replaces Face ID with Touch ID on the power button
A new report by Bloomberg’s Mark Gurman details some of the features of Apple’s upcoming foldable iPhone, likely to launch in the fall of 2026. Apple’s first foldable phone will reportedly be a book-style foldable, opening vertically into a small tablet. It will have a total of four cameras – two on the back, one on the inside, and one on the front. It will not have Face ID; instead, it will have Touch ID built into the power button, similar to what we’ve seen on some of the company’s iPads. Other features of note include new screen tech that should make the crease in the unfolded display less visible. The foldable iPhone will come with Apple’s own C2 modem, which is the same chip that will be used by the iPhone 18 Pro line of products. And it won’t have a physical SIM-card slot, claims Gurman. The specs line up with Apple analyst Ming-Chi Kuo’s report earlier this year, which also said that the foldable iPhone will have a 7.8-inch inner display, a 5.5-inch outer display, and just 9 to 9.5mm of thickness when folded. It’s all part of Apple’s big plan to shake up its lineup for three years straight: starting with the new iPhone 17 Air this September, followed by the foldable iPhone next year, and, in 2027, the “iPhone 20,” a sort of anniversary model that will have curved glass edges all around.