Fortnite has evolved into a full-blown digital stage, blurring the line between gameplay and global events. Players can experience concerts, movie previews, and live storytelling alongside their friends in a virtual world. These digital spectacles have become a staple of the Fortnite experience, and players often want to immerse themselves fully through character skins, themed gear, and timed bundles. Fortnite’s social layer adds a sense of togetherness rarely felt in traditional live streams, positioning it as a platform for shared cultural experiences. For artists, studios, and brands, it’s a new form of exposure; for players, it’s a free ticket to an event where participation isn’t passive but expressive. Bundles let players upgrade their look or emotes, which makes them appealing to anyone looking to engage more deeply with these moments: current packages, like Fortnite bundles from digital platforms, are more than cosmetic packs, offering a way to gear up instantly for a live event or a new chapter launch. Platforms like Eneba offer access to these bundles without forcing full-price commitments, making them a good fit for specific collaborations or limited-time celebrations. As digital experiences evolve, Fortnite remains at the center of the conversation, redefining what’s possible when digital space is treated as a social stage. More integration, more creativity, and more opportunities for players to join globally shared events, right from their console or PC, are on the horizon.
Ray-Ban Meta smart glasses become an intuitive accessibility tool through Be My Eyes, offering object recognition and live help at the wearer’s command.
Ray-Ban Meta smart glasses are not experimental hardware for early adopters; they are designed for anyone to wear daily. That is the shift. Meta may not have set out to design an accessibility tool, but by focusing on natural interaction rather than screens or buttons, the glasses open the door to new ways of participating in the world. That shift, whether intentional or not, makes them worth a closer look, and the potential becomes even more powerful when paired with human insight and real-world use. That is precisely what Be My Eyes is delivering. Be My Eyes has become an essential mobile experience for many blind and low-vision users, a go-to tool for accessing the visual world through real-time human connection. The platform supports nearly 1 million users and is powered by over 8.8 million sighted volunteers who offer assistance via live video calls. With Ray-Ban Meta, explained Mike Buckley, CEO of Be My Eyes, the service moves from the palm of your hand to the bridge of your nose: real-time environmental information, object recognition, and volunteer support are now available without lifting a finger. While Meta and EssilorLuxottica are the brands behind the Ray-Ban Meta glasses, the real breakthrough belongs to Be My Eyes. The company is not just adding functionality; it is changing how vision support is delivered, making it intuitive, integrated, and empowering. With a simple voice command, “Call a volunteer,” it has turned smart glasses into a tool for independence, dignity, and connection, and it has done so at scale. This is not just about innovation. It is about expanding who can participate and making accessibility part of everyday life.
Meta to unveil Smart Glasses with display in September at a price starting at $800
Meta is reportedly set to unveil smart glasses that incorporate a display in September and offer them at a price starting at $800. The new smart glasses, internally named Hypernova, were initially planned to cost at least $1,000 but will now start at $800 before style variations or prescription lenses are added. The current Meta Ray-Ban glasses are priced at $200 to $400, and the Oakley smart glasses cost up to $500. The new Hypernova smart glasses will feature a screen for apps and alerts on one lens, plus a wrist accessory that can control the glasses. While the Hypernova smart glasses are designed to be used with a mobile phone, they hint at a future in which glasses might replace phones. Hypernova is rumored to have a smartphone-quality camera and a voice-activated artificial intelligence query tool, according to Frederick Stanbrell, head of wearables for EMEA at IDC. “We are likely seeing the first generation of a device that Mark Zuckerberg intends to one day replace phones,” Stanbrell said.
Amazon and Nvidia deploy zero-touch manufacturing leveraging AI-powered photorealistic digital twins and synthetic simulations to revolutionize factory floor production amid supply chain challenges
The hype around the metaverse fizzled out, but its core technologies, like real-time 3D modeling, simulation and synthetic environments, have found practical use in manufacturing, particularly through digital twins. Amazon Devices & Services, in collaboration with Nvidia, for example, is rolling out “zero-touch manufacturing” powered by Nvidia AI and digital twin simulations. The concept sounds deceptively simple and entails robots that can learn to assemble and inspect new products without ever touching a single prototype. Robotic arms can now rely on photorealistic replicas of a factory floor where every movement is modeled, tested and optimized in software. By combining synthetic data, AI-driven planning and physics-based simulations, the companies said they can cut prototyping costs, slash lead times, and push more devices into production faster using robotic arms. A robotic arm can be trained to assemble a fragile part virtually, learning thousands of variations overnight in Nvidia’s servers. Synthetic data generated in Omniverse teaches the AI how to respond to edge cases like a slightly warped component, a misaligned feeder or a sudden change in torque. When the real robot is finally tasked with assembly, it arrives with “experience” accumulated in a risk-free virtual environment. The result is fewer prototypes, less scrap and faster scaling from pilot runs to mass production.
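To make the “experience accumulated in a risk-free virtual environment” idea concrete, here is a minimal, hypothetical sketch of the domain-randomization pattern the article describes: sample thousands of synthetic part variations (slight warp, feeder misalignment, torque changes), run a grasp policy against each, and measure how robustly it copes before any real hardware is touched. All names and tolerances are invented for illustration; this is not Amazon’s or Nvidia’s actual pipeline.

```python
import random

def sample_synthetic_part(rng):
    """Randomize the nuisance factors the article mentions:
    a slightly warped component, a misaligned feeder, torque variation."""
    return {
        "warp_mm": rng.gauss(0.0, 0.2),
        "offset_mm": rng.gauss(0.0, 1.0),
        "torque_nm": 1.5 + rng.gauss(0.0, 0.1),
    }

def grasp_plan(part):
    """Toy policy: shift the grasp point to cancel the observed
    feeder offset and clamp slightly above the measured torque."""
    return {"x": -part["offset_mm"], "clamp": part["torque_nm"] * 1.1}

def train_in_sim(n_episodes, seed=0):
    """Run the policy against many randomized synthetic parts and
    report the fraction handled within tolerance (0.5 mm here)."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(n_episodes):
        part = sample_synthetic_part(rng)
        plan = grasp_plan(part)
        if abs(plan["x"] + part["offset_mm"]) < 0.5 and plan["clamp"] > part["torque_nm"]:
            successes += 1
    return successes / n_episodes

success_rate = train_in_sim(10_000)
```

In a real system the toy `grasp_plan` would be a learned policy and the sampler would be a physics-based renderer such as Omniverse, but the loop is the same: vary everything that can go wrong in software, so the physical robot arrives with the edge cases already rehearsed.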
Apple has published a patent application that introduces a more advanced system for generating contextual interfaces: dynamic control panels that adapt to the content and user intent within a 3D or XR environment. These interfaces are designed to appear proximate to specific UI elements and are customized based on content type, user gaze, and gesture data. The patent emphasizes privacy-preserving input recognition and out-of-process interaction handling, which goes beyond Vision Pro’s current capabilities. The system interprets user activity as intentional input, generating a contextual interface nearby when a user views a 2D webpage or application window within a 3D XR environment. This interface provides relevant controls without cluttering the main content, making it easier to interact with media players, navigate long articles, or manipulate panoramic and stereoscopic visuals. Machine learning meets spatial awareness: the system classifies content types, segments webpages into meaningful categories, and determines the appropriate interface shape, layout, and control set based on that classification. The patent also introduces an input support framework that operates outside of individual application processes, enabling legacy applications to function seamlessly in XR environments without needing custom 3D input logic. The patent suggests a more intelligent and adaptive interface paradigm, enhancing usability in complex XR environments and laying the groundwork for more secure and privacy-conscious interaction models. If implemented, this technology could redefine how users engage with digital content in mixed reality, making interactions more fluid, personalized, and secure.
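The classify-then-present flow the patent describes can be sketched in a few lines. Everything below is hypothetical and invented for illustration — the feature names, control sets, and anchoring offset are not from Apple’s filing — but it shows the shape of the idea: classify what the user is gazing at, then emit a matching control panel anchored near it rather than over it.

```python
# Hypothetical sketch of a contextual-interface generator.
def classify_content(element):
    """Map crude content features to a content category."""
    if element.get("has_video"):
        return "media"
    if element.get("word_count", 0) > 1200:
        return "long_article"
    if element.get("is_panoramic"):
        return "panorama"
    return "generic"

# Each category gets its own control set, so the panel only shows
# what is relevant to the content being viewed.
CONTROLS = {
    "media": ["play_pause", "seek", "volume"],
    "long_article": ["scroll_to_top", "reader_mode", "font_size"],
    "panorama": ["pan", "zoom", "reset_view"],
    "generic": ["back", "share"],
}

def contextual_interface(element, gaze_point):
    """Build a panel for the gazed-at element, offset slightly so it
    appears proximate to the content without occluding it."""
    kind = classify_content(element)
    x, y = gaze_point
    return {"kind": kind, "anchor": (x + 0.1, y), "controls": CONTROLS[kind]}
```

The patent’s out-of-process framework would run logic like this outside the application itself, which is why legacy 2D apps could get XR controls without any custom 3D input code of their own.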
Meta’s debut display smart glasses are an incremental step towards full AR glasses: they prioritize a lightweight heads‑up display and a muscle‑signal gesture wristband to capture input data and normalize the technology
Meta is planning to use its annual Connect conference next month to announce a deeper push into smart glasses, including the launch of the company’s first consumer-ready glasses with a display. That’s one of two new devices Meta is planning to unveil at the event. The company will also launch its first wristband that will allow users to control the glasses with hand gestures. The glasses are internally codenamed Hypernova and will include a small digital display in the right lens of the device. The device is expected to cost about $800 and will be sold in partnership with EssilorLuxottica. With Hypernova, Meta will finally be offering glasses with a display to consumers, but the company is setting low expectations for sales. That’s because the device requires more components than its voice-only predecessors, and will be slightly heavier and thicker. Although Hypernova will feature a display, those visual features are expected to be limited. The color display will offer about a 20 degree field of view — meaning it will appear in a small window in a fixed position — and will be used primarily to relay simple bits of information, such as incoming text messages. The Hypernova glasses will also come paired with a wristband that will use technology built by Meta’s CTRL Labs. The wristband is expected to be a key input component for the company’s future release of full AR glasses, so gathering data now with Hypernova could improve future versions of the wristband. In addition to Hypernova and the wristband, Meta will also announce a third generation of its voice-only smart glasses with Luxottica at Connect.
Smart glasses are enabling design and CX teams to rapidly prototype customer-facing immersive and context-aware experiences by tapping into an open ecosystem for XR devices, such as Gemini AI and Android XR, and integrating spatial computing
The advent of smart glasses and advanced voice agents like Gemini and ChatGPT is transforming the way we interact with and serve customers, and these devices are now widely available. The Ray-Ban Meta smart glasses, for example, have sold over 2 million units since their launch in October 2023, demonstrating public readiness for wearables that blend digital capabilities with physical interaction. Platform technology advancements, such as Google’s Android XR, are driving this shift, creating a robust ecosystem for XR devices, including smart glasses. This technology offers more affordable hardware, increased variety, and a familiar Android development environment, making it easier to build and deploy scalable immersive solutions. The broader XR ecosystem continues to evolve, with devices like the Meta Quest series and Apple Vision Pro serving as leading prototyping and development tools for spatial computing solutions. Platforms like ShapesXR are emerging as the “Figma” or “Canva” of the spatial computing world, democratizing the creation process and empowering design and CX teams to rapidly prototype customer-facing immersive experiences. The global smart glasses market is projected to reach $8.26 billion by 2030, signaling a significant opportunity for businesses to gain a competitive edge through enhanced customer experiences.
Rokid’s new AR Spatial feels like a regular pair of glasses, offers three floating app windows that respond to user gaze and gestures and comes with built-in diopter adjustment that allows you to dial in your prescription
Rokid’s new AR Spatial is designed to be a companion to your attention rather than a competitor for it. At 75 grams and $648, it doesn’t aim to reshape the world but to exist alongside your routines, distractions, and activities. The AR Spatial makes its 300-inch virtual screen feel almost mundane: a tool for watching The Bear in bed, catching up on messages, or referencing recipes while cooking. It feels like a regular pair of glasses, so you can wear it on a commute, in a café, or while lying down. Instead of being a productivity maximizer, it offers three floating app windows that respond to your gaze and gestures. The headset supports 3D video, Spotify, Netflix, email, and browser tabs, making life slightly more flexible. Its most impressive features are the built-in diopter adjustment, which lets you dial in your prescription, compatibility with Android apps, and the simplicity of charging while wearing it.
Google’s Veo 3 video-generating model, which creates realistic movements by simulating real-world physics and can generate audio to go along with its clips, could potentially be used for video games
Demis Hassabis, CEO of Google’s AI research organization DeepMind, appeared to suggest that Veo 3, Google’s latest video-generating model, could potentially be used for video games. World models are different from video-generation models. The former simulates the dynamics of a real-world environment, which lets agents predict how the world will evolve in response to their actions. Video-gen models synthesize realistic video sequences. Google has plans to turn its multimodal foundation model, Gemini 2.5 Pro, into a world model that simulates aspects of the human brain. In December, DeepMind unveiled Genie 2, a model that can generate an “endless” variety of playable worlds. Veo 3, which is still in public preview, can create video as well as audio to go along with clips — anything from speech to soundtracks. While Veo 3 creates realistic movements by simulating real-world physics, it isn’t quite a world model yet. Instead, it could be used for cinematic storytelling in games, like cutscenes, trailers, and narrative prototyping. The model is also still a “passive output” generative model, and it (or a future Veo generation) would need to shift to a simulator that’s more active, interactive, and predictive. But the real challenge with video game production isn’t just impressive visuals; it’s real-time, consistent, and controllable simulation. That’s why it might make sense to see Google take a hybrid approach that leverages Veo and Genie in the future, should it pursue video game or playable world development.
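The article’s distinction between a video-generation model and a world model can be made concrete with a small sketch. This is an invented, illustrative pair of interfaces, not Google’s API: the key difference is that a video model is “passive output” (prompt in, fixed clip out), while a world model exposes a step function, so an agent’s actions change the simulated state.

```python
# Hypothetical interfaces illustrating "passive output" vs. simulator.
class VideoModel:
    """Prompt -> fixed sequence of frames; nothing can act inside it."""
    def generate(self, prompt):
        # Stand-in for rendering: the clip is fixed once generated.
        return [f"frame_{i}:{prompt}" for i in range(3)]

class WorldModel:
    """(state, action) -> next state; agents can predict and act."""
    def step(self, state, action):
        # Stand-in for physics/dynamics: each action evolves the world.
        return state + [action]

# Passive: the knight's clip is the same no matter what you do next.
clip = VideoModel().generate("knight enters castle")

# Interactive: the state reflects every action the agent took.
world = WorldModel()
state = []
for action in ["walk", "open_door", "draw_sword"]:
    state = world.step(state, action)
```

This is why the article suggests a hybrid: Veo-class models supply the realistic rendering, while a Genie-class simulator supplies the `step` loop that real-time, controllable gameplay requires.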
Doji’s app makes apparel try-ons both fun and social by guiding users through the process of creating personalized avatars in roughly 30 minutes and then serving different looks while also letting them scroll through the collections
Startup Doji is launching its app designed to make apparel try-ons both fun and social. It does so by creating your avatar and then serving you different looks that may inspire you to buy new clothes. The company uses its own diffusion models to create its personalized avatars and to make clothing try-ons more realistic. Doji, which is still in invite-only mode, guides users through the process of taking six selfies and uploading two full-body images to create an avatar. The app takes roughly 30 minutes to create an avatar, then notifies you when the avatar is ready. You can also choose your favorite brands during onboarding to see more items from them in the app. By default, the app shows you clothes that might suit you through a series of looks with your avatar. You can scroll through the different tops and bottoms listed on the site and tap on them to create a new look for your avatar. Plus, you can post a link to apparel from the web to check if it would suit you. While the app lets you try on different clothes to see how certain apparel would look on you, it can’t yet tell you how an item would fit. The team is also working to make the virtual try-on process faster and integrate the buying process in the app, instead of directing users to external sites.