Meta is planning to use its annual Connect conference next month to announce a deeper push into smart glasses, including the launch of the company’s first consumer-ready glasses with a display. That’s one of the two new devices Meta is planning to unveil at the event. The company will also launch its first wristband that will allow users to control the glasses with hand gestures. The glasses are internally codenamed Hypernova and will include a small digital display in the right lens of the device. The device is expected to cost about $800 and will be sold in partnership with EssilorLuxottica. With Hypernova, Meta will finally be offering glasses with a display to consumers, but the company is setting low expectations for sales. That’s because the device requires more components than its voice-only predecessors and will be slightly heavier and thicker. Although Hypernova will feature a display, those visual features are expected to be limited. The color display will offer about a 20-degree field of view — meaning it will appear in a small window in a fixed position — and will be used primarily to relay simple bits of information, such as incoming text messages. The Hypernova glasses will also come paired with a wristband that will use technology built by Meta’s CTRL Labs. The wristband is expected to be a key input component for the company’s future release of full AR glasses, so getting data now with Hypernova could improve future versions of the wristband. In addition to Hypernova and the wristband, Meta will also announce a third generation of its voice-only smart glasses with Luxottica at Connect.
Meta’s ‘Hypernova’ smart glasses to launch in September at a more affordable $800; a right-lens display and EMG wristband aim to push mainstream mixed-reality adoption.
Meta’s upcoming smart glasses with a built-in display, codenamed Hypernova, could be cheaper than we initially thought. In April, Bloomberg’s Mark Gurman reported that the company is working on a new type of smart glasses, which will look a lot like a regular pair but come equipped with a tiny display that can show photos and apps. That report claimed the new smart glasses will cost upwards of $1,000 — perhaps as high as $1,300 to $1,400. But now, Gurman has revised that price significantly, claiming that Meta’s glasses, which are expected to launch in September, will be priced from $800. “The change stems in part from the company accepting lower margins to boost demand — a common tactic for new products,” he wrote on X. That’s a pretty large difference in price, and one that will probably entice many buyers to make the leap into Meta’s version of (slightly) mixed reality. For comparison, the Ray-Ban Meta smart glasses, which have a built-in camera but no display, start at $299. Meta’s own Quest 3 headset, which is a far bulkier mixed reality headset, starts at $499.99. And Apple’s Vision Pro, which is an even bulkier, but also far more powerful, mixed reality headset, still starts at $3,499 — though Apple is reportedly working on a cheaper version.
Amazon readies AR eyewear for consumers and delivery drivers; the consumer model gets a full-color display in one eye plus a mic, speakers and camera, while driver units provide turn-by-turn navigation on a small screen
Amazon.com is developing AR glasses for consumers, a move that would put the company in competition with Facebook owner Meta. The glasses, internally codenamed “Jayhawk,” will include microphones, speakers, a camera and a full-color display in one eye. Amazon is aiming to roll out the product to consumers in late 2026 or early 2027. Amazon is also developing AR glasses for delivery drivers, with features that help with the sorting and delivery of packages. Both products will use the same underlying technology, but only the consumer one will have a full-color display. It was reported Aug. 22 that Meta will debut its first pair of smart glasses with a display at this month’s Connect conference. Codenamed Hypernova, these glasses will include a small digital display in the right lens.
Merchants adopt credit card surcharges to offset interchange as debit fee rules face 8th Circuit appeal; cost recovery shifts amid legal uncertainty and state-specific surcharge rules
As the fate of debit card interchange costs is decided in the Federal Reserve’s appeal to the 8th U.S. Circuit Court of Appeals, many merchants are quietly embracing credit card surcharges as a way to offset interchange costs on credit card transactions. A credit card surcharge is an incremental fee that a merchant charges a customer who pays with a credit card. The surcharges are designed to offset the merchant’s credit card interchange costs, i.e., the fees that a payment network charges a merchant for the privilege of accepting credit card transactions. Surcharging software can be extremely effective if the surcharging program is designed properly and the agreement between the merchant and the provider properly allocates compliance responsibilities, protects the merchant with strong indemnifications and requires the provider to maintain a robust regulatory change management program. Payment-card-accepting merchants of all types, large and small, are rapidly implementing credit card surcharges, including for large-ticket business-to-business and commercial transactions where end users routinely settle six-figure invoices with a payment card. A surcharge program can help a merchant recoup credit card interchange costs from the end user, but the absence of a federal surcharging standard has led to complex compliance requirements. Credit card surcharges are governed by a balkanized regime of disparate state laws in combination with stringent payment card network rules. The payment card networks’ operating regulations act as minimum compliance standards for all surcharging transactions, and they apply even in states that have no surcharging limitations. And when a state has a surcharging law that is more restrictive than the payment card networks’ rules, the merchant must follow the stricter state law. Merchants operating across large geographic footprints are frequently managing myriad state surcharging laws in addition to payment card network operating regulations for each brand.
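To make the cost-recovery arithmetic concrete, here is a minimal sketch of how a compliant surcharge might be computed: the applied rate is the smallest of the merchant’s actual cost of acceptance, a network cap, and any stricter state cap, mirroring the “stricter rule wins” logic described above. The function name and all rate values are illustrative assumptions, not any network’s or state’s actual limits.

```python
# Illustrative sketch only: the surcharge rate is capped by the merchant's cost
# of acceptance, an assumed network cap, and (if present) a stricter state cap.
# All numbers here are placeholders, not actual network or state limits.

def surcharge_amount(sale_amount: float,
                     cost_of_acceptance_rate: float,
                     network_cap_rate: float = 0.03,      # assumed network cap
                     state_cap_rate: float | None = None) -> float:
    """Return the surcharge in dollars for a credit card sale."""
    allowed_rate = min(cost_of_acceptance_rate, network_cap_rate)
    if state_cap_rate is not None:                        # stricter state law wins
        allowed_rate = min(allowed_rate, state_cap_rate)
    return round(sale_amount * allowed_rate, 2)

# Example: a $125,000 B2B invoice, a 2.5% effective cost of acceptance,
# in a state with a hypothetical 2% surcharge cap -> $2,500 surcharge.
print(surcharge_amount(125_000, 0.025, state_cap_rate=0.02))
```

In a multi-state footprint, the same invoice can therefore yield different surcharges depending on which state’s cap applies, which is exactly the compliance burden described above.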
AmEx offers US cardholders NFT-format travel stamps as non-transferable souvenirs; Fireblocks’ Wallet-as-a-Service infrastructure keeps private keys and transaction costs away from users
American Express has launched Amex Passport, a program for US cardholders to collect digital NFT-format passport stamps. The initiative aims to replace physical passport stamps with a blockchain-based solution, reducing the risk of personal data exposure. Amex stamps are designed to be non-transferable and have no economic value, making them digital souvenirs. The technical side is managed via Fireblocks’ Wallet-as-a-Service infrastructure, ensuring users never directly come into contact with private keys or transaction costs. The project is part of a broader digital renewal strategy that includes the release of the new Amex Travel app, which provides tools for travel planning and management. However, Amex Passport is only available in the United States, which limits its immediate exposure to the European regulatory framework. The immutable nature of the blockchain conflicts with certain GDPR principles, such as the right to be forgotten and data erasure. Amex Passport’s model could be replicated by other players in the financial or tourism sector interested in strengthening customer relations through certified digital tools. However, significant hurdles may arise in the European market, including GDPR compliance, taxation, and broader ecosystem questions. As the NFT market matures, Amex Passport represents a significant test of NFTs as functional tools rather than investment objects; if successful, it could pave the way for their wider use in that role.
Fortnite has evolved into a full-blown digital stage, blurring the line between gameplay and global events and featuring concerts, movie previews, and live storytelling
Fortnite has evolved into a full-blown digital stage, blurring the line between gameplay and global events. Players can now experience concerts, movie previews, and live storytelling alongside their friends in a virtual world. These digital spectacles have become a staple of the Fortnite experience, with players often wanting to fully immerse themselves in the world through character skins, themed gear, and timed bundles. Fortnite’s social layer adds a sense of togetherness rarely felt in traditional live streams, positioning it as a platform for shared cultural experiences. For artists, studios, and brands, it’s a new form of exposure. For players, it’s a free ticket to an event where participation isn’t passive—it’s expressive. Current packages, like Fortnite bundles from digital platforms, are more than just cosmetic packs; they’re a way to instantly gear up for the moment, whether it’s a live event or a new chapter launch. Platforms like Eneba offer access to these bundles without forcing full-price commitments, making them ideal for specific collaborations or limited-time celebrations. Fortnite remains at the center of the conversation as digital experiences evolve, redefining what’s possible when digital space is treated like a social stage. More integration, creativity, and opportunities for players to participate in globally shared events, right from their console or PC, are on the horizon. Bundles allow players to upgrade their look or emotes, making them appealing for those looking to engage more deeply with these moments.
Ray-Ban Meta smart glasses become an intuitive accessibility tool through Be My Eyes, offering object recognition and live help at the wearer’s command.
Ray-Ban Meta smart glasses are not experimental hardware for early adopters. They are designed for anyone to wear daily. That is the shift. Meta may not have set out to design an accessibility tool. However, by focusing on natural interaction, not screens or buttons, the glasses open the door to new ways of participating in the world. That shift, whether intentional or not, makes them worth a closer look. That potential becomes even more powerful when paired with human insight and real-world use. That is precisely what Be My Eyes is delivering. Be My Eyes has become an essential mobile experience for many blind and low-vision users, a go-to tool for accessing the visual world through real-time human connection. The platform supports nearly 1 million users and is powered by over 8.8 million sighted volunteers who offer assistance via live video calls. With Ray-Ban Meta, as Mike Buckley, CEO of Be My Eyes, explained, the service moves from the palm of your hand to the bridge of your nose. Real-time environmental information, object recognition, and volunteer support are now available without lifting a finger. What stands out now is this. While Meta and EssilorLuxottica are the brands behind the Ray-Ban Meta glasses, the real breakthrough is Be My Eyes. They are not just adding functionality. They are changing how vision support is delivered, making it intuitive, integrated, and empowering. With a simple voice command, “Call a volunteer,” they have transformed smart glasses into a tool for independence, dignity, and connection. And they have done it at scale. This is not just about innovation. It is about expanding who can participate and making accessibility part of everyday life.
Meta to unveil smart glasses with a display in September, priced from $800
Meta is reportedly set to unveil smart glasses that incorporate a display in September and offer them at a price starting at $800. The new smart glasses, internally named Hypernova, were initially expected to be priced at $1,000 or more, but will now start at $800 before style variations or prescription lenses are added. The current Meta Ray-Ban glasses are priced at $200 to $400 and the Oakley smart glasses cost up to $500. The new Hypernova smart glasses will feature a screen for apps and alerts on one lens and a wrist accessory that can control the glasses. While the Hypernova smart glasses are designed to be used with a mobile phone, they hint at a future when glasses might replace phones. Hypernova is rumored to have a smartphone-quality camera and a voice-activated artificial intelligence query tool, according to Frederick Stanbrell, head of wearables for EMEA at IDC. “We are likely seeing the first generation of a device that Mark Zuckerberg intends to one day replace phones,” Stanbrell said.
Amazon and Nvidia roll out zero-touch manufacturing, using AI-powered photorealistic digital twins and synthetic simulations to streamline factory-floor production amid supply chain challenges
The hype around the metaverse fizzled out, but its core technologies, like real-time 3D modeling, simulation and synthetic environments, have found practical use in manufacturing, particularly through digital twins. Amazon Devices & Services, in collaboration with Nvidia, for example, is rolling out “zero-touch manufacturing” powered by Nvidia AI and digital twin simulations. The concept sounds deceptively simple and entails robots that can learn to assemble and inspect new products without ever touching a single prototype. Robotic arms can now rely on photorealistic replicas of a factory floor where every movement is modeled, tested and optimized in software. By combining synthetic data, AI-driven planning and physics-based simulations, the companies said they can cut prototyping costs, slash lead times, and push more devices into production faster using robotic arms. A robotic arm can be trained to assemble a fragile part virtually, learning thousands of variations overnight in Nvidia’s servers. Synthetic data generated in Omniverse teaches the AI how to respond to edge cases like a slightly warped component, a misaligned feeder or a sudden change in torque. When the real robot is finally tasked with assembly, it arrives with “experience” accumulated in a risk-free virtual environment. The result is fewer prototypes, less scrap and faster scaling from pilot runs to mass production.
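As a rough illustration of the training loop implied above, the sketch below randomizes the edge cases the article names (a slightly warped component, a misaligned feeder, a torque change) and sweeps candidate grip settings across thousands of simulated trials. The environment, parameters, and policy are hypothetical toy stand-ins, not Nvidia Omniverse or Isaac APIs.

```python
# Hypothetical sketch of domain-randomized synthetic training for a virtual
# assembly task. None of these classes correspond to actual Omniverse/Isaac APIs.
import random
from dataclasses import dataclass

@dataclass
class AssemblyScenario:
    part_warp_mm: float      # slight warping of the fragile part
    feeder_offset_mm: float  # misaligned feeder position
    torque_scale: float      # sudden change in required torque

def sample_scenario() -> AssemblyScenario:
    """Randomize the edge cases mentioned in the article so the policy sees them in simulation."""
    return AssemblyScenario(
        part_warp_mm=random.uniform(0.0, 0.8),
        feeder_offset_mm=random.uniform(-2.0, 2.0),
        torque_scale=random.uniform(0.8, 1.2),
    )

def simulate_assembly(scenario: AssemblyScenario, grip_force: float) -> bool:
    """Toy physics stand-in: the grip succeeds if it tolerates the randomized conditions."""
    required = 5.0 + 2.0 * scenario.part_warp_mm + abs(scenario.feeder_offset_mm)
    return required <= grip_force <= required * scenario.torque_scale + 4.0

def train_overnight(trials_per_setting: int = 10_000) -> float:
    """Sweep thousands of randomized variations and keep the best-performing grip force."""
    best_force, best_rate = 0.0, 0.0
    for force in [4.0 + 0.5 * i for i in range(16)]:      # candidate grip forces in newtons
        wins = sum(simulate_assembly(sample_scenario(), force)
                   for _ in range(trials_per_setting))
        if wins / trials_per_setting > best_rate:
            best_force, best_rate = force, wins / trials_per_setting
    return best_force

if __name__ == "__main__":
    print(f"grip force chosen in simulation: {train_overnight():.1f} N")
```

The point of the exercise is the one the article makes: the physical robot only ever runs the setting that already survived those virtual variations, so prototypes, scrap, and ramp-up time all shrink.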
Apple has published a patent application that introduces a more advanced system for generating contextual interfaces, which are dynamic control panels that adapt to the content and user intent within a 3D or XR environment. These interfaces are designed to appear proximate to specific UI elements and are customized based on content type, user gaze, and gesture data. The patent emphasizes privacy-preserving input recognition and out-of-process interaction handling, which goes beyond Vision Pro’s current capabilities. The system interprets user activity as intentional input, generating a contextual interface nearby when a user views a 2D webpage or application window within a 3D XR environment. This interface provides relevant controls without cluttering the main content, making it easier to interact with media players, navigate long articles, or manipulate panoramic and stereoscopic visuals. The system pairs machine learning with spatial awareness: it classifies content types, segments webpages into meaningful categories, and determines the appropriate interface shape, layout, and control set based on this classification. The patent also introduces an input support framework that operates outside of individual application processes, enabling legacy applications to function seamlessly in XR environments without needing custom 3D input logic. This patent suggests a more intelligent and adaptive interface paradigm, enhancing usability in complex XR environments and laying the groundwork for more secure and privacy-conscious interaction models. If implemented, this technology could redefine how users engage with digital content in mixed reality, making interactions more fluid, personalized, and secure.
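A minimal sketch of the idea, assuming a hypothetical content classifier and control mapping (not Apple’s actual design or any visionOS API), might look like this: classify the region the user is looking at, pick a matching control set, and anchor the panel just below the gaze point.

```python
# Hypothetical illustration of content-driven contextual controls in an XR scene.
# The categories, control sets, and placement rule are assumptions for this sketch,
# not the mechanism claimed in Apple's patent or any visionOS API.
from dataclasses import dataclass

CONTROLS_BY_CONTENT = {
    "media_player": ["play_pause", "scrub", "volume"],
    "long_article": ["scroll", "jump_to_section", "text_size"],
    "panorama":     ["pan", "zoom", "reset_view"],
}

@dataclass
class ContextualPanel:
    controls: list[str]
    anchor_xyz: tuple[float, float, float]   # placed just below the gazed element

def classify_region(region_tag: str) -> str:
    """Stand-in for the ML classifier that segments a webpage into content categories."""
    if region_tag in ("video", "audio"):
        return "media_player"
    if region_tag in ("article", "main", "text"):
        return "long_article"
    return "panorama"

def build_panel(region_tag: str, gaze_point: tuple[float, float, float]) -> ContextualPanel:
    """Choose controls for the classified content and anchor them near the user's gaze."""
    content_type = classify_region(region_tag)
    x, y, z = gaze_point
    return ContextualPanel(CONTROLS_BY_CONTENT[content_type], (x, y - 0.15, z))

print(build_panel("video", (0.0, 1.4, -1.0)))
```

Running this classification and panel construction outside the application’s own process, as the patent’s input support framework suggests, is what would let legacy 2D apps gain such controls without custom 3D input logic.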