Researchers from the University of Pretoria have developed a new technique for detecting tampering in PDF documents by analyzing the file’s page objects. The prototype, written in Python, detects changes to a PDF document’s text, images, or metadata. PDFs are increasingly used across industries and are a target for criminals who want to alter contracts or aid in misinformation. Current techniques for detecting changes in PDFs rely on watermarking and hashing, which cover only the visible parts of a PDF and do not analyze hidden elements like metadata or background data, making it difficult to identify exactly where or what was changed. The new prototype uses the hashlib, Merkly, and pdfrw libraries to generate hashes and access intricate PDF structures. It performs two primary functions: protecting a PDF and assessing a PDF for forgery. To protect a PDF, the prototype reads the document and calculates unique digital fingerprints, known as hashes, from its various elements. These hashes are then embedded as new, hidden keys in the relevant page objects and in the PDF’s main “root” object. The prototype works well with Adobe Acrobat, but it does not yet detect all possible PDF changes, such as altering a document’s font without changing the actual content, or adding JavaScript code.
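A minimal sketch of what the “protect” step could look like with hashlib and pdfrw, based on the description above. This is not the authors’ code: the /PageHash and /DocHash key names are invented for illustration, a single content stream per page is assumed, and the Merkle-tree step via Merkly is omitted.

```python
# Illustrative sketch only: hash each page's content stream and embed the
# digests as hidden keys in the page objects and the document root.
import hashlib
from pdfrw import PdfReader, PdfWriter, PdfName, PdfString

def protect(src_path: str, dst_path: str) -> None:
    pdf = PdfReader(src_path)
    page_hashes = []
    for page in pdf.pages:
        # Hash the page's (possibly compressed) content stream; a single
        # stream per page is assumed here for simplicity.
        stream = getattr(page.Contents, 'stream', '') or ''
        digest = hashlib.sha256(stream.encode('latin-1')).hexdigest()
        page_hashes.append(digest)
        # Embed the fingerprint as a hidden, non-standard key on the page object.
        page[PdfName('PageHash')] = PdfString('(' + digest + ')')
    # A combined hash over all pages goes into the PDF's main "root" object.
    root_digest = hashlib.sha256(''.join(page_hashes).encode('ascii')).hexdigest()
    pdf.Root[PdfName('DocHash')] = PdfString('(' + root_digest + ')')
    PdfWriter(dst_path, trailer=pdf).write()
```

Assessing a PDF for forgery would then recompute the same hashes and compare them against the embedded keys; any mismatch points to the specific page object that changed.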
Naext’s indoor spatial computing enables people with visual impairments to navigate large-scale, complex buildings independently through smartphones and smart glasses
Naext, founded by Lukas van Delft and Victor van Dinten, is a European startup focused on creating a more accessible world through innovative, privacy-friendly technology. The company uses AI, computer vision, and immersive experiences to enable people to navigate complex buildings independently, using smartphones and smart glasses. Naext’s most visible application is making Dutch public transportation more accessible for people with and without visual impairments. The startup has raised €1.5 million in funding and plans to roll out its technology across Europe through European mobility hubs. The company competes with companies like Be My Eyes, GoodMaps, and Niantic, but sees them as ecosystem partners. Naext’s biggest success is the largest hospital in the Netherlands, which now has over 1,000 active users every month. The Brainport ecosystem supports Naext’s development with investors, partners, and talent. However, there is room for improvement in the willingness of large clients to adopt startup technology at scale.
Apple’s AI agent can provide accessible interactions using Street View imagery, analyze what is seen on a route, and describe the details of the elements to offer contextual clues for visually impaired users
A paper released through Apple Machine Learning Research describes SceneScout, a multimodal LLM-driven AI agent that can view Street View imagery, analyze what is seen, and describe it to the viewer. At the moment, pre-travel advice provides details like landmarks and turn-by-turn navigation, which do not offer much in the way of landscape context for visually impaired users. Street View-style imagery, such as Apple Maps Look Around, often presents sighted users with far more contextual clues, which people who cannot see the imagery miss out on. This is where SceneScout steps in, as an AI agent that provides accessible interactions using Street View imagery. SceneScout has two modes. Route Preview provides details of elements it can observe along a route; for example, it could point out trees at a turning and other, more tactile elements to the user. The second mode, Virtual Exploration, is described as enabling free movement within Street View imagery, describing elements to the user as they virtually move around. In its user study, the team determined that SceneScout is helpful to visually impaired people in uncovering information that they would not otherwise be able to access using existing methods. If the research pans out, it could become a tool to help visually impaired people virtually explore a location in advance.
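To make the Route Preview idea concrete, here is a minimal sketch under stated assumptions: it is not Apple’s SceneScout implementation, the model name and prompt are illustrative, and a generic multimodal chat-completions API is used purely as a stand-in for whatever model the paper employed.

```python
# Illustrative sketch: send one street-level image per route step to a
# multimodal LLM and ask for an accessibility-oriented description.
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "Describe this street-level image for a blind pedestrian. "
    "Mention landmarks, tactile cues (trees, poles, curb cuts, railings), "
    "and anything useful for orienting at this point of the route."
)

def describe_route_step(image_path: str) -> str:
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative choice of multimodal model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": PROMPT},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

# A route preview would simply loop over the imagery captured along the path:
# for step, path in enumerate(route_images):
#     print(step, describe_route_step(path))
```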
Dynamic Island again rumored to change with iPhone 17
The iPhone 17 range has, again, been rumored to use a new Dynamic Island, changing how the UI elements appear for the new smartphone line. Serial leaker Digital Chat Station has posted a series of details about the iPhone 17 collection that includes a mention of the Dynamic Island. “The system has a brand new Smart Island UI,” the leaker says, according to a computerized translation. The Dynamic Island’s UI change is only part of the short list of changes that are on the way, according to the leaker’s post. The rest of the list includes a mention of how the standard iPhone 17 will have a fine-tuned design, but without saying what will actually change. The Pro series will have a new-design “horizontal large matrix,” but again, there is no explanation of what this specifically applies to. There is also the expectation of LIPO screens with narrower bezels, the use of a high-resolution 5x optical zoom camera on the Pro models, and the previously rumored camera bump changes. In January, analyst Ming-Chi Kuo said that the Dynamic Island’s size will “remain largely unchanged across the 2H25 iPhone 17 series.” This does give a little leeway for the Pro Max to be changed while the others stay the same, but Digital Chat Station’s latest claim seems to apply to more than one model.
iOS 26 lends a frosted glass appearance to the Lock Screen clock, lets users apply the lighting-effect Glass look to any clock font and choose a color to tint the glass for a realistic effect, and allows the clock to be resized to better match the iPhone’s wallpaper
Liquid Glass is everywhere in iOS 26, and it starts right when you pick up your device. Here’s what you’ll see first when you upgrade to iOS 26. The two customizable control buttons on the Lock Screen are larger and have a floating, glass-like appearance like the other Liquid Glass interface options in iOS 26. The clock has a frosted glass appearance with the new “Glass” option, which uses lighting effects to make it look like glass in the real world. Glass can be selected for any of the clock fonts, and you can choose a color to tint the glass; Apple has multiple preset options, or you can select your own. When you tilt your iPhone, light reflects and glints with the movement for a realistic glass effect. Notifications on your Lock Screen have a Liquid Glass aesthetic with a frosted glass look that leaves your wallpaper visible behind them.

In addition to having a Liquid Glass aesthetic, the clock can be resized to better match your iPhone’s wallpaper using a new adaptive feature. When you’re customizing your Lock Screen, you can grab the corner of the time and drag it down to expand it. Adjusting the size of the time only works with the first font option, and only with the standard Arabic (Western) numbering. With photo wallpapers, the time can automatically expand to fill in missing space, and it can change based on the image if you have Photo Shuffle set. The subject of a photo wallpaper is meant to always be visible, and it can overlap the time in unique ways in iOS 26. There is also a new default wallpaper designed for iOS 26: multiple shades of blue, with the same floating glass aesthetic that the rest of iOS 26 features, and it can subtly shift with iPhone movement.

Aside from the Liquid Glass time, Spatial Scenes are the biggest change to the Lock Screen. 2D photos that you set as wallpaper can be turned into 3D spatial images that separate the subject of the photo from the background using depth information. When you move your iPhone, Spatial Scenes shift and move along with it, making the images feel alive. Spatial Scenes is also a feature in the Photos app, and it can be applied to any image you’ve taken with your iPhone, including older ones.

Lock Screen widgets can be placed at the top of the display under the time, or at the bottom of the display. With the adaptive clock and new wallpaper options, widgets can also shift down automatically to ensure the subject of an image is always visible. Apple added a new Lock Screen widget for Apple Music search, but there are no other new Lock Screen widget options. What is new, though, is a full-screen Now Playing interface that shows album art, with artwork that expands and animates right on the Lock Screen.
Google enables launching AI Mode with one-tap search on Android and iOS, doing away with the introductory homepage; adds a slick animation with a four-color glow that expands to encompass the entire screen on iOS
Besides the widget shortcut, Google is making AI Mode faster to access with one-tap search on Android and iOS. Previously, launching AI Mode from the shortcut beneath the Search bar in the Google app or widget would bring you to an introductory homepage. You’d then have to touch the “Ask AI Mode” field before you could start typing. Opening AI Mode now immediately takes you to the input box with the keyboard open. The header just shows the ‘G’ logo (and close button), while the suggested queries carousel disappears after you enter text for a minimalist look. With the previous homepage no longer available, you cannot quickly access conversation history; Google tells us to expect direct access to history from the text field soon. One-tap AI Mode access is live in the Google app on both Android and iOS. On the latter platform, Google has introduced a very slick animation: tapping the AI Mode button expands the usual Search field to encompass your entire screen as the keyboard pops up. As this occurs, there’s a four-color glow around the expanding perimeter that looks very nice. It fades out just as everything settles, and closing AI Mode also results in a visual effect. There’s no equivalent animation on Android right now, but there are other colorful touches.
Eppo, a feature-flagging and experimentation platform, offers “confidence intervals” to make it easier to understand and interpret the results of randomized experiments across different versions of apps and models
Datadog has acquired Eppo, a feature-flagging and experimentation platform. Despite the demand for tools that let developers experiment with different versions of apps, the infrastructure required for product analytics remains relatively complex to build. Beyond data pipelines and statistical methods, experimentation infrastructure relies on analytics workflows often sourced from difficult-to-configure cloud environments. Eppo will continue supporting existing customers and bringing on new ones under the brand “Eppo by Datadog.” Eppo offers “confidence intervals” to make it easier to understand and interpret the results of a randomized app experiment. The platform supports experimentation with AI and machine learning models, leveraging techniques to perform live experiments that show whether one model is outperforming another. Eppo co-founder and CEO Che Sharma said “With Datadog, we are uniting product analytics, feature management, AI, and experimentation capabilities for businesses to reduce risk, learn quickly, and ship high-quality products.” For Datadog, the Eppo buy could bolster the company’s current product analytics solutions. “The use of multiple AI models increases the complexity of deploying applications in production,” Michael Whetten, VP of Product at Datadog, said. “Experimentation solves this correlation and measurement problem, enabling teams to compare multiple models side-by-side, determine user engagement against cost tradeoffs, and ultimately build AI products that deliver measurable value.”
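As a rough illustration of why confidence intervals help in interpreting a randomized experiment, here is a classic two-proportion interval for the lift between two app variants. This is a generic statistics sketch, not Eppo’s actual methodology, and the example numbers are invented.

```python
# Illustrative only: ~95% confidence interval for the difference in conversion
# rate between variant A and variant B of an app.
import math

def diff_confidence_interval(conv_a, n_a, conv_b, n_b, z=1.96):
    """Return (lower, upper) bounds on p_b - p_a at ~95% confidence."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    diff = p_b - p_a
    return diff - z * se, diff + z * se

# Example: variant B converts 600 of 10,000 users vs. 500 of 10,000 for A.
low, high = diff_confidence_interval(500, 10_000, 600, 10_000)
# If the whole interval lies above 0, B is very likely the better variant.
print(f"lift: [{low:+.4f}, {high:+.4f}]")
```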
Google is redesigning the Search bar widget on Android, taking after the Circle to Search revamp earlier this year with an overarching pill-shaped container
Google is rolling out a redesign of the Search bar homescreen widget on Android that better emphasizes the optional shortcut. The previous design was a pill with the Google ‘G’ logo at the left, followed by a custom shortcut, a voice input microphone, and a Google Lens shortcut. The new design takes after the Circle to Search revamp earlier this year with an overarching pill-shaped container. It’s slightly taller than before, which aligns with Material 3’s preference for thicker search fields. At the left is a large Search bar that’s unchanged. What’s new is how Google moved the optional shortcut to a standalone circle at the right. This makes the custom button stand out much more and makes it easier to tap. The available options are: None, AI Mode, Translate (text), Song Search, Weather, Translate (camera), Sports, Dictionary, Homework, Finance, Saved, and News. The minimum width to have everything appear is 4×1 instead of 3×1, which might disrupt some layouts. When you adjust the transparency slider, the outer container is what changes the most. We’re seeing this Search bar redesign with Google app 16.17 (latest beta). If you don’t have this change yet, highlight the widget on your homescreen and tap the pencil icon.
Web browsers with agentic AI capabilities (understanding context, automating and executing multi-step tasks) could let users access information without any tabs, clicking, or scrolling
From Netscape to Chrome, browsers are digital windows to the world. But that era is potentially poised to quickly circle the drain as AI comes to control a greater share of the flow of information. ChatGPT.com is now the fifth-most visited website in the world, with Google.com on top, followed by YouTube, Facebook and Instagram. The news that Perplexity is developing its own web browser, Comet, which is expected to include agentic AI capabilities and the ability to automate certain tasks, already shows that how users find things, how they buy things, and even how they know things could increasingly be up for grabs. Instead of opening a browser window and typing a URL, users may soon speak or text a request to an agent that goes out, searches the internet and delivers what they need. No tabs, no clicking and no endless scrolling. That, at least, is the envisioned future. The whole concept of a web browser may be absorbed into an ecosystem of intelligent, personalized, persistent AI agents. The advent of the agentic AI web experience could mark a transformative period in how users access and interact with information online. At the heart of the potential evolution are large language models (LLMs) like OpenAI’s GPT-4, Google’s Gemini and Anthropic’s Claude. These systems are increasingly capable of understanding context, maintaining memory and executing multi-step tasks. But true agency requires more than linguistic prowess. Integration is key. APIs now serve as conduits through which AI agents interact with apps, services and devices. If AI agents are making purchasing decisions, traditional advertising strategies could falter. SEO, influencer marketing and even visual design may lose relevance if AI agents bypass websites in favor of direct API transactions. Brands will need to pivot, optimizing not for human attention but for AI interoperability. The AI browser wars have begun, and the outcome will shape the future of the digital landscape.
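A compact sketch of the agent pattern described above, with the tool wiring invented for illustration: instead of the user opening tabs, an LLM decides which API “conduit” to call and loops until it can answer. The fetch_page tool is hypothetical and the model choice is arbitrary; this is not any particular browser’s implementation.

```python
# Illustrative agent loop: the model requests tool calls until it can answer.
import json
import urllib.request
from openai import OpenAI

client = OpenAI()

def fetch_page(url: str) -> str:
    """Hypothetical tool: retrieve a page's raw text for the agent to read."""
    with urllib.request.urlopen(url) as resp:
        return resp.read(20_000).decode("utf-8", errors="replace")

TOOLS = [{
    "type": "function",
    "function": {
        "name": "fetch_page",
        "description": "Fetch the text of a web page by URL.",
        "parameters": {
            "type": "object",
            "properties": {"url": {"type": "string"}},
            "required": ["url"],
        },
    },
}]

def run_agent(user_request: str) -> str:
    messages = [{"role": "user", "content": user_request}]
    while True:
        reply = client.chat.completions.create(
            model="gpt-4o", messages=messages, tools=TOOLS
        ).choices[0].message
        if not reply.tool_calls:       # the agent is done: no tabs, no scrolling
            return reply.content
        messages.append(reply)
        for call in reply.tool_calls:  # execute each requested tool call
            args = json.loads(call.function.arguments)
            messages.append({
                "role": "tool",
                "tool_call_id": call.id,
                "content": fetch_page(**args),
            })
```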
Apple Vision Pro’s new eye-tracking feature to let users move around apps simply by looking around, without requiring any hand gestures to select or interact
Owners of the Apple Vision Pro will soon have the option of scrolling through apps using their eyes, without lifting a finger. Apple is working on a feature that builds upon the existing eye-tracking functionality of the Apple Vision Pro. Allegedly being tested for possible inclusion in visionOS 3, it will let users move around the app simply by looking around. The Apple Vision Pro already uses eye-tracking to determine what a user is looking at, with a pinching hand gesture used to select what is being focused upon. This seems like it would be a fairly reasonable progression of the functionality, and could be a boon for users who don’t necessarily wish to keep raising and lowering their hands to interact with an app. Apple will be making the functionality available across its own app collection. Developers will also be able to use the feature in their visionOS apps. The Apple Vision Pro is not the only device with eye-tracking functions. In June 2024, Apple introduced eye-tracking features to iOS 18 and iPadOS 18 as an accessibility feature, using the front-facing camera. In that iteration, Dwell Control automatically selects an item for a user once they have rested their gaze on a selectable element for a period of time. Smoothing and Snap-to-Item were also configurable to help with hands-free navigation.