OpenAI is reportedly close to releasing a browser that could challenge Google LLC’s market dominance with Chrome, several months after the company said it would be interested in buying Chrome from Google. The browser is slated to launch in the coming weeks and uses artificial intelligence to fundamentally change how consumers browse the web. Notably, the browser would also give OpenAI direct access to user data, which it could use to train its models. The browser is expected to be built on Chromium, the open-source codebase that underpins Chrome and most other browsers except Firefox, but with AI tightly integrated into the user experience. Rather than simply serving as a traditional interface for web navigation, the OpenAI browser is said to include a chat-style assistant that can perform complex tasks on behalf of the user, such as summarizing pages, autofilling forms, booking travel, or completing online purchases, without requiring users to click through websites manually. Where things could get particularly interesting is that the browser may include OpenAI’s Operator agent, an agentic AI offering designed to handle multistep tasks across the web, allowing users to delegate responsibilities such as scheduling appointments or ordering food to an AI agent. Its inclusion could turn the browser from a mere gateway to the internet into a fully capable assistant embedded directly in the browsing environment. The move could place OpenAI in direct competition with Google on multiple fronts, not only in search but also in advertising and data collection.
Microsoft’s Edge browser claims it can now load websites faster: First Contentful Paint (FCP) begins rendering the first parts of a website — whether text, images, or interface components — in under 300 milliseconds
Microsoft is pitching its Edge browser to users who are tired of Google Chrome’s memory and storage consumption. Edge now loads websites faster than before, letting users see text and images appear on the webpage sooner. Microsoft has long faced criticism that Edge was slow to load pages, and industry standards suggest any loading time beyond 300 ms is not ideal for users. The new-generation browser with Copilot support has reduced load times by an average of 40% across features like read aloud, split screen, and workspaces. Microsoft has shared details about the upgrade, and even the Settings page on Edge now loads faster, giving people good reasons to consider the Microsoft browser for their daily use. While Chrome has a seemingly unassailable lead with over 68% market share, Edge is used by less than 5% of users worldwide. Microsoft hopes Edge can become a strong challenger to Google’s dominance in the near future.
Pinwheel’s smartwatch for kids aged 7 to 14 prevents access to social media and the internet, and features an AI chatbot that lets them ask questions about everyday curiosities, social interactions, and homework
Pinwheel, a kid-friendly tech company, is introducing a new solution for parents who want to stay connected with their children without giving them a phone. The Pinwheel Watch is a recently launched smartwatch designed specifically for kids aged 7 to 14, offering a child-safe alternative that prevents access to social media and the internet. It features parental management tools, GPS tracking, a camera, voice-to-text messaging, fun mini-games, and — here’s a surprise — an AI chatbot. The smartwatch itself features a sleek black design and a screen that is slightly larger than that of an Apple Watch. In addition to a more standard set of parental controls, the feature some parents might be wary of is the watch’s AI assistant, “PinwheelGPT.” PinwheelGPT is designed as a safer alternative to typical AI chatbots, enabling kids to ask questions about various topics, including everyday curiosities, social interactions, and homework-related questions. In addition to the AI feature, kids and tweens can make calls and send texts on the watch by using voice commands or a keyboard. There’s also a camera for video calls and selfies, along with a voice recorder app. The parent-monitoring features are available through the “Caregiver” app. This allows parents to create a “Safelist” of contacts that their children are permitted to talk to, as well as reject certain phone numbers from being added to the list.
Truv, a provider of direct-to-source income, employment, and asset verification solutions, integrates with Blue Sage Solutions, a cloud-based digital lending platform for mortgage originators; direct-to-source verification improves processing turn times and reduces time to close
Truv, a provider of direct-to-source income, employment, and asset verification solutions, announced a strategic integration with Blue Sage Solutions, a cloud-based digital lending platform for mortgage originators. The integration gives lenders access to Truv’s advanced verification capabilities within their existing workflow in the Blue Sage Loan Origination System (LOS), creating a streamlined verification process that significantly reduces costs and improves efficiency for mortgage lenders from application to closing. The integration delivers substantial benefits to mortgage lenders and borrowers: Significant Cost Savings: Lenders using Truv save 60-80% on verification costs compared to traditional solutions, increasing margins per loan file; Accelerated Loan Processing: Direct-to-source verification data coupled with Blue Sage’s automation capabilities improves processing turn times and reduces time to close; Enhanced Borrower Experience: The fully digital verification process reduces the paper chase and cumbersome manual steps for borrowers; Streamlined Implementation: Lenders can go live quickly with minimal effort, using straightforward configurations. Carmine Cacciavillani, CEO of Blue Sage Solutions, remarked, “This partnership with Truv aligns perfectly with our mission to modernize mortgage lending through technology. The integration provides our clients with instant access to critical verification data, eliminating manual processes while ensuring compliance and accuracy.” Andrew Badstubner, CIO at First Community Mortgage, said, “The integration between Truv and Blue Sage supports that mission by eliminating document clutter and verification delays by providing fast, reliable data straight from the source.”
New technique for detecting tampering in PDF documents uses Python to generate hashes and access intricate PDF structures such as metadata and images, embedding them as hidden keys in the relevant file’s page objects
Researchers from the University of Pretoria have developed a new technique for detecting tampering in PDF documents by analyzing the file’s page objects. The new prototype uses Python to detect changes to a PDF document, such as changes to text, images, or metadata. PDFs are widely used across industries and are a target for criminals who want to alter contracts or spread misinformation. Current techniques for detecting changes in PDFs rely on watermarking and hashing, which only cover the visible parts of a PDF. These methods do not analyze hidden elements like metadata or background data, making it difficult to identify exactly where or what was changed. The new prototype uses the hashlib, Merkly, and PDFRW libraries to generate hashes and access intricate PDF structures. It performs two primary functions: protecting a PDF and assessing a PDF for forgery. To protect a PDF, the prototype reads the PDF document and calculates unique digital fingerprints, known as hashes, from its various elements. These hashes are then secretly embedded as new, hidden keys in the relevant page objects and in the PDF’s main “root” object. The PDF tampering prototype works well with Adobe Acrobat, but it does not yet detect all possible PDF changes, such as altering a document’s font without changing the actual content or adding JavaScript code.
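The protect/assess scheme described above can be sketched in a few lines of Python. This is a simplified illustration, not the researchers’ actual prototype: a page is modeled as a plain dict rather than a real PDF object (a real implementation would use a library such as pdfrw, as the article notes), the key name “/TamperRoot” is hypothetical, and the Merkle-style root stands in for what the Merkly library provides. Only the standard-library hashlib is used so the idea stays self-contained.

```python
import hashlib

def element_hash(data: bytes) -> str:
    """Fingerprint one page element (text stream, image bytes, metadata)."""
    return hashlib.sha256(data).hexdigest()

def merkle_root(hashes):
    """Pairwise-combine hashes until one root remains (Merkle tree)."""
    level = list(hashes)
    if not level:
        return element_hash(b"")
    while len(level) > 1:
        if len(level) % 2:                     # duplicate last node on odd levels
            level.append(level[-1])
        level = [element_hash((a + b).encode())
                 for a, b in zip(level[::2], level[1::2])]
    return level[0]

def protect(page: dict) -> None:
    """Embed a hidden integrity key computed from the page's elements."""
    hashes = [element_hash(v) for k, v in sorted(page.items())
              if k != "/TamperRoot"]
    page["/TamperRoot"] = merkle_root(hashes)  # the hidden key

def assess(page: dict) -> bool:
    """Recompute the root and compare it to the embedded hidden key."""
    stored = page.get("/TamperRoot")
    hashes = [element_hash(v) for k, v in sorted(page.items())
              if k != "/TamperRoot"]
    return stored == merkle_root(hashes)

page = {"/Text": b"Pay the bearer 100 euros", "/Image": b"<jpeg bytes>"}
protect(page)
assert assess(page)                            # untouched page verifies
page["/Text"] = b"Pay the bearer 900 euros"
assert not assess(page)                        # tampering is detected
```

Because each element is hashed separately before the hashes are combined, a failed check can be narrowed down by comparing per-element hashes, which is what lets the prototype report where a change occurred rather than only that one occurred.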
Naext’s indoor spatial computing enables people with visual impairments to navigate large-scale, complex buildings independently through smartphones and smart glasses
Naext, founded by Lukas van Delft and Victor van Dinten, is a European startup focused on creating a more accessible world through innovative, privacy-friendly technology. The company uses AI, computer vision, and immersive experiences to enable people to navigate complex buildings independently, using smartphones and smart glasses. Naext’s most visible application is making Dutch public transportation more accessible for people with and without visual impairments. The startup has raised €1.5 million in funding and plans to roll out its technology across Europe through European mobility hubs. The company competes with the likes of Be My Eyes, GoodMaps, and Niantic, but sees them as ecosystem partners. Naext’s biggest success to date is a deployment at the largest hospital in the Netherlands, which now has over 1,000 active users every month. The Brainport ecosystem supports Naext’s development with investors, partners, and talent. However, the company sees room for improvement in the willingness of large clients to adopt startup technology at scale.
Apple’s AI agent can provide accessible interactions using Street View imagery, analyze what is seen on a route, and describe the details of the elements to offer contextual clues for visually impaired users
A paper released through Apple Machine Learning Research describes SceneScout, a multimodal LLM-driven AI agent that can view Street View imagery, analyze what is seen, and describe it to the viewer. At the moment, pre-travel advice provides details like landmarks and turn-by-turn navigation, which offer little in the way of landscape context for visually impaired users. Street View-style imagery, such as Apple Maps Look Around, presents sighted users with far more contextual clues, cues that people who cannot see the imagery miss out on. This is where SceneScout steps in, as an AI agent that provides accessible interactions with Street View imagery. SceneScout has two modes. Route Preview provides details of elements it can observe along a route; for example, it could advise the user of trees at a turning and other tactile landmarks. The second mode, Virtual Exploration, enables free movement within Street View imagery, describing elements to the user as they virtually move around. In its user study, the team determined that SceneScout is helpful to visually impaired people in uncovering information that they could not otherwise access using existing methods. If the research pans out, it could become a tool to help visually impaired people virtually explore a location in advance.
Dynamic Island again rumored to change with iPhone 17
The iPhone 17 range has, again, been rumored to use a new Dynamic Island, changing how the UI elements appear for the new smartphone line. Serial leaker Digital Chat Station has posted a series of details about the iPhone 17 lineup that includes a mention of the Dynamic Island. “The system has a brand new Smart Island UI,” the leaker says, according to a computerized translation. The Dynamic Island’s UI change is only part of the short list of changes on the way, according to the leaker’s post. The rest of the list includes a mention that the standard iPhone 17 will have a fine-tuned design, without specifying what will change. The Pro series will have a new-design “horizontal large matrix,” but again, there is no explanation of what this specifically applies to. There is also the expectation of LIPO screens with narrower bezels, the use of a high-resolution 5x optical zoom camera on the Pro models, and the previously rumored camera bump changes. In January, analyst Ming-Chi Kuo said that the Dynamic Island’s size will “remain largely unchanged across the 2H25 iPhone 17 series.” This does give a little leeway for the Pro Max to be changed while the others stay the same, but Digital Chat Station’s latest claim seems to apply to more than one model.
iOS 26 lends a frosted glass appearance to the Lock Screen clock, which can be applied to any of the clock fonts, tinted with a color of the user’s choice for a realistic glass effect, and resized to better match the iPhone’s wallpaper
Liquid Glass is everywhere in iOS 26, and it starts right when you pick up your device. Here’s what you’ll see first when you upgrade to iOS 26. The two customizable control buttons on the Lock Screen are larger and have a floating, glass-like appearance like the other Liquid Glass interface options in iOS 26. The clock has a frosted glass appearance with the new “Glass” option, using lighting effects to make it look like glass in the real world. Glass can be selected for any of the clock fonts, and you can choose a color to tint the glass. Apple has multiple preset options, or you can select your own. When you tilt your iPhone, light reflects and glints with the movement, for a realistic glass effect. Notifications on your Lock Screen have a Liquid Glass aesthetic with a frosted glass look that leaves your wallpaper visible behind them. In addition to having a Liquid Glass aesthetic, the clock can be resized to better match your iPhone’s wallpaper using a new adaptive feature. When you’re customizing your Lock Screen, you can grab the corner of the time and drag it down to expand it. Adjusting the size of the time only works with the first font option, and only with standard Western Arabic numerals. With photo wallpapers, the time can automatically expand to fill in missing space, and it can change based on the image if you have Photo Shuffle set. The subject in photo wallpapers is meant to always be visible, and can overlap the time in unique ways in iOS 26. There is a new default wallpaper that was designed for iOS 26. It’s multiple shades of blue, with the same floating glass aesthetic as the rest of iOS 26. The wallpaper can subtly shift with iPhone movement. Aside from the Liquid Glass time, Spatial Scenes are the biggest change to the Lock Screen. 2D photos that you set as wallpaper can be turned into 3D spatial images that separate the subject of the photo from the background using depth information.
When you move your iPhone, Spatial Scenes shift and move along with it, making the images feel alive. Spatial Scenes is a feature in the Photos app too, and it can be applied to any image that you’ve taken with your iPhone, including older ones. Lock Screen widgets can be placed at the top of the display under the time, or at the bottom of the display. With the adaptive clock and new wallpaper options, widgets can also shift down automatically to ensure the subject of an image is always visible. Apple added a new Lock Screen widget for Apple Music search, but there are no other new Lock Screen widget options. What has changed, though, is a full-screen Now Playing interface that shows album art. Artwork expands and animates right on the Lock Screen.
Google enables launching AI Mode with one-tap search on Android and iOS, doing away with the introductory homepage; on iOS, it adds a slick animation with a four-color glow as the Search field expands to fill the entire screen
Besides the widget shortcut, Google is making AI Mode faster to access with one-tap search on Android and iOS. Previously, launching AI Mode from the shortcut beneath the Search bar in the Google app or widget would bring you to an introductory homepage. You’d then have to tap the “Ask AI Mode” field before you could start typing. Opening AI Mode now immediately takes you to the input box with the keyboard open. The header shows just the ‘G’ logo (and a close button), while the suggested queries carousel disappears after you enter text, for a minimalist look. With the previous homepage no longer available, you cannot quickly access conversation history; Google says direct access from the text field is coming soon. One-tap AI Mode access is live on both Google for Android and iOS. On the latter platform, Google has introduced a very slick animation. Tapping the AI Mode button expands the usual Search field to encompass your entire screen as the keyboard pops up. As this occurs, a four-color glow traces the expanding perimeter, fading out just as everything settles, and closing AI Mode also results in a visual effect. There’s no equivalent animation on Android right now, but there are other colorful touches.