The advent of smart glasses and advanced voice agents like Gemini and ChatGPT is transforming the way we interact with and serve customers, and these devices are now widely available. The Ray-Ban Meta smart glasses, for example, have sold over 2 million units since their launch in October 2023, demonstrating public readiness for wearables that blend digital capabilities with physical interaction. Platform advancements such as Google’s Android XR are driving this shift, creating a robust ecosystem for XR devices, including smart glasses. This technology offers more affordable hardware, greater variety, and a familiar Android development environment, making it easier to build and deploy scalable immersive solutions. The broader XR ecosystem continues to evolve, with devices like the Meta Quest series and Apple Vision Pro serving as leading prototyping and development tools for spatial computing solutions. Platforms like ShapesXR are emerging as the “Figma” or “Canva” of the spatial computing world, democratizing the creation process and empowering design and CX teams to rapidly prototype customer-facing immersive experiences. The global smart glasses market is projected to reach $8.26 billion by 2030, signaling a significant opportunity for businesses to gain a competitive edge through enhanced customer experiences.
Rokid’s new AR Spatial feels like a regular pair of glasses, offers three floating app windows that respond to user gaze and gestures, and comes with built-in diopter adjustment that lets you dial in your prescription
Rokid’s new AR Spatial is designed to be a companion to your attention rather than a competitor for it. At 75 grams and $648, it doesn’t aim to reshape the world but to exist alongside your routines, distractions, and activities. The AR Spatial makes its 300-inch virtual screen feel almost mundane, turning it into a tool for watching The Bear in bed, catching up on messages, or referencing recipes while cooking. It feels like a regular pair of glasses, so you can wear it on a commute, in a café, or while lying down. Rather than being a productivity maximizer, it offers three floating app windows that respond to your gaze and gestures. The headset supports 3D video, Spotify, Netflix, email, and browser tabs, making everyday life slightly more flexible. The most impressive touches are the built-in diopter adjustment, compatibility with Android apps, and the simplicity of charging it while wearing it.
Google’s Veo 3 video-generating model, which creates realistic movement by simulating real-world physics and can generate audio to go along with its clips, could potentially be used for video games
Demis Hassabis, CEO of Google’s AI research organization DeepMind, appeared to suggest that Veo 3, Google’s latest video-generating model, could potentially be used for video games. World models are different from video-generation models. The former simulates the dynamics of a real-world environment, which lets agents predict how the world will evolve in response to their actions. Video-gen models synthesize realistic video sequences. Google has plans to turn its multimodal foundation model, Gemini 2.5 Pro, into a world model that simulates aspects of the human brain. In December, DeepMind unveiled Genie 2, a model that can generate an “endless” variety of playable worlds. Veo 3, which is still in public preview, can create video as well as audio to go along with clips — anything from speech to soundtracks. While Veo 3 creates realistic movements by simulating real-world physics, it isn’t quite a world model yet. Instead, it could be used for cinematic storytelling in games, like cutscenes, trailers, and narrative prototyping. The model is also still a “passive output” generative model, and it (or a future Veo generation) would need to shift to a simulator that’s more active, interactive, and predictive. But the real challenge with video game production isn’t just impressive visuals; it’s real-time, consistent, and controllable simulation. That’s why it might make sense to see Google take a hybrid approach that leverages Veo and Genie in the future, should it pursue video game or playable world development.
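To make the distinction concrete, here is a minimal, purely illustrative Python sketch contrasting a “passive output” video generator with a world model’s interactive step loop. The class names, state fields, and actions are hypothetical and do not reflect Veo’s or Genie’s actual interfaces.

```python
# Illustrative sketch only: a "passive" video generator vs. a world model.
# All names here are hypothetical; nothing reflects Google's real APIs.

from dataclasses import dataclass, field


@dataclass
class VideoGenerator:
    """Passive output: a prompt goes in, a fixed clip comes out."""

    def generate(self, prompt: str, seconds: int = 8) -> list[str]:
        # Stand-in for sampling a sequence of frames from a prompt.
        return [f"frame_{i} for '{prompt}'" for i in range(seconds * 24)]


@dataclass
class WorldModel:
    """Interactive simulator: state evolves in response to an agent's actions."""

    state: dict = field(default_factory=lambda: {"t": 0, "player_pos": (0, 0)})

    def step(self, action: str) -> dict:
        # Predict the next state from the current state and the chosen action,
        # which is what lets an agent plan ahead inside the simulation.
        x, y = self.state["player_pos"]
        moves = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}
        dx, dy = moves.get(action, (0, 0))
        self.state = {"t": self.state["t"] + 1, "player_pos": (x + dx, y + dy)}
        return self.state


if __name__ == "__main__":
    clip = VideoGenerator().generate("a chef plating pasta")  # one-shot output
    sim = WorldModel()
    for a in ["up", "up", "right"]:                           # closed interactive loop
        print(sim.step(a))
```

The point of the contrast: the generator produces a finished clip once, while the world model is queried repeatedly, with each action feeding back into the next prediction.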
Doji’s app makes apparel try-ons both fun and social by guiding users through creating a personalized avatar in roughly 30 minutes, then serving up different looks and letting them scroll through collections
Startup Doji is launching its app designed to make apparel try-ons both fun and social. It does so by creating your avatar and then serving you different looks that may inspire you to buy new clothes. The company uses its own diffusion models to create its personalized avatars and to make clothing try-ons more realistic. Doji, which is still in invite-only mode, guides users through the process of taking six selfies and uploading two full-body images to create an avatar. The app takes roughly 30 minutes to create an avatar, then notifies you when it is ready. You can also choose your favorite brands during onboarding to see more items from them in the app. By default, the app shows you clothes that might suit you through a series of looks with your avatar. You can scroll through the different tops and bottoms listed on the site and tap on them to create a new look for your avatar. Plus, you can paste a link to apparel from the web to check whether it would suit you. While the app lets you try on different clothes to see how certain apparel would look on you, it can’t yet tell you how an item would fit. The team is also working to make the virtual try-on process faster and to integrate the buying process into the app, instead of directing users to external sites.
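As a rough illustration of the onboarding flow described above (six selfies plus two full-body images, then a roughly 30-minute wait and a notification), here is a minimal Python sketch. Doji has not published an API; every function, field, and timing value below is a hypothetical stand-in.

```python
# Hypothetical sketch of an avatar-creation onboarding flow; not Doji's API.

import time
from dataclasses import dataclass


@dataclass
class AvatarJob:
    user_id: str
    selfies: list[str]      # paths to the six selfies
    full_body: list[str]    # paths to the two full-body images
    status: str = "queued"


def submit_avatar_job(user_id: str, selfies: list[str], full_body: list[str]) -> AvatarJob:
    # The app collects six selfies and two full-body shots before queueing
    # the (roughly 30-minute) avatar-generation job.
    assert len(selfies) == 6 and len(full_body) == 2, "expects 6 selfies + 2 full-body images"
    return AvatarJob(user_id=user_id, selfies=selfies, full_body=full_body)


def poll_until_ready(job: AvatarJob, poll_seconds: int = 60) -> AvatarJob:
    # Stand-in for the notification the app sends when the avatar is ready;
    # a real backend would report actual progress instead of flipping a flag.
    while job.status != "ready":
        time.sleep(poll_seconds)
        job.status = "ready"
    return job
```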
Internet Roadtrip lets a thousand users simultaneously simulate a virtual road trip on Google Street View by voting on which direction the “car” should drive, whether to honk the horn, or whether to change the radio station
Internet Roadtrip is an MMORTG (massive multiplayer online road trip game). Neal Agarwal, the game’s creator, calls it a “road-trip simulator.” Every 10 seconds, viewers vote on which direction the “car” should drive on Google Street View — or they can vote to honk the horn or change the radio station. The option with the most votes gets clicked, and the car continues on its scenic path to … wherever the chat decides to go. Internet Roadtrip is reminiscent of Twitch Plays Pokémon, an iconic stream from over 10 years ago in which viewers voted on what button to press as part of a collective Pokémon Red game. But Internet Roadtrip is far less chaotic — both because only a thousand or so people are playing at a time, and because we have better organizational tools than we did in the Twitch Plays Pokémon era. Progress on the virtual road trip is slow. The car moves at a pace slower than walking. Discord moderators have had to temper newcomers’ expectations, explaining that it’s pointless to suggest driving to Las Vegas from Maine, since it would likely take almost 10 months of real-world time to get there. The same goes for Alaska, but there it’s not just a matter of time. “Google Street View works by taking multiple pictures and putting them together. In some areas of the roads leading to Alaska, there are gaps in pictures available and so we would get stuck there, were we to go to these roads,” the Discord FAQ reads. “All potential roads to Alaska have these gaps. We checked.”
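The core mechanic is a simple vote-aggregation loop: collect votes for a fixed window, pick the most popular option, apply it, repeat. A minimal Python sketch is below; the function names and canned vote counts are hypothetical, since the real backend isn’t public.

```python
# Rough sketch of a 10-second voting loop like the one described above.

import time
from collections import Counter

VOTE_WINDOW_SECONDS = 10
ACTIONS = {"left", "right", "straight", "honk", "radio"}


def collect_votes(window_seconds: int) -> Counter:
    # Placeholder: the real game would tally live votes from connected viewers;
    # here we just return a canned tally after the window elapses.
    time.sleep(window_seconds)
    return Counter({"straight": 412, "left": 98, "honk": 55, "radio": 12})


def run_roadtrip_tick() -> str:
    votes = collect_votes(VOTE_WINDOW_SECONDS)
    winner, _count = votes.most_common(1)[0]
    # The winning option is "clicked": either a Street View move or a side
    # action like honking the horn or changing the radio station.
    return winner


if __name__ == "__main__":
    print("this tick's action:", run_roadtrip_tick())
```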
Minecraft movie release helps drive a 35% rise in both mobile in-app purchase revenue and console unit sales, indicative of scripted content driving re-engagement with and renewed popularity of video games
A Minecraft Movie has generated $941 million at the worldwide box office, and it has helped sales of the Minecraft game as well, Sensor Tower said. In an entertainment era hallmarked by sequels, reboots, and nostalgia plays, Hollywood is increasingly turning to video game IP to create scripted content. Sensor Tower, a measurement firm, has produced a report on the impact of games IP on the entertainment world. Sensor Tower said the release of scripted content has created a boomerang effect on the original games, often leading to re-engagement and renewed popularity. In the report, Sensor Tower said mobile in-app purchase revenue and console units sold each rose 35% after the release of the Minecraft movie in April 2025. The Fallout TV show drove a +20% surge in Amazon Prime Video app downloads in its release week. Fallout 3 and Fallout 4 PC daily active users remained +225% higher for 12 and 20 weeks, respectively, after the show’s release, and Amazon increased U.S. ad spend on desktop video (+20x) to maximize the show’s impact. Max app downloads and The Last of Us console daily active users each soared 40% when the show’s second season premiered. A Minecraft Movie’s impact wasn’t limited to new gamers: active user spikes in Minecraft mobile (+9%) and console (+41%) illustrate that the movie likely spurred historical players to jump back into the sandbox, Sensor Tower said.
DreamPark’s tech uses physical markers like QR codes that, when scanned, unlock digital overlays on real-world spaces, transforming physical locations into immersive mixed-reality environments
DreamPark, the creator of what it calls “the world’s largest downloadable mixed reality (XR) theme park,” has raised $1.1 million in seed funding to accelerate its mission to make Earth worth playing again by transforming ordinary spaces into extraordinary adventures through mixed reality technology. DreamPark said it is capturing a significant early advantage in the global XR (extended reality) live event market, valued at $3.6 billion in 2024 and projected to surge to $190.3 billion by 2034 at a 48.7% compound annual growth rate (CAGR). This explosive growth trajectory presents an opportunity that DreamPark’s technology and business model are uniquely designed to address. DreamPark transforms physical locations into immersive mixed-reality environments through its network of access points: physical markers, like QR codes, that, when scanned with a Meta Quest 3 headset or mobile device, unlock digital overlays on real-world spaces. The company has already established successful installations at Santa Monica’s Third Street Promenade and the LA County Fair, with planned expansions in Seattle, Orange County, and several expos and corporate events. To create a new location, the company scans an area, overlays a digital layer filled with simple games, and then drops a mat with a QR code on the property so people can scan it and start playing. For property owners, this means they can draw people back to their location and get them to re-engage with the place, because visitors want to play a digital game at a physical place. It’s a way to enhance the value of a physical property using virtual entertainment. There’s no construction or permanent infrastructure required. It’s a radically more affordable way to turn underused spaces into high-impact destinations.
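Conceptually, each access point is just a lookup from a scanned marker to a pre-authored overlay for that space. The following Python sketch illustrates that idea with hypothetical names and a placeholder URL; it is not DreamPark’s actual schema.

```python
# Hypothetical sketch of QR-code "access points" resolving to digital overlays.

from dataclasses import dataclass


@dataclass
class AccessPoint:
    marker_id: str        # value encoded in the QR code on the mat
    location_name: str
    overlay_url: str      # pre-authored mixed-reality scene for this space


# Registry built when an operator scans an area and authors its overlay.
ACCESS_POINTS = {
    "dp-santa-monica-001": AccessPoint(
        marker_id="dp-santa-monica-001",
        location_name="Third Street Promenade",
        overlay_url="https://example.com/overlays/promenade",  # placeholder URL
    ),
}


def resolve_scan(marker_id: str) -> str:
    """Return the overlay to load on the headset or phone that scanned the mat."""
    point = ACCESS_POINTS.get(marker_id)
    if point is None:
        raise KeyError(f"unknown access point: {marker_id}")
    return point.overlay_url
```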
Warby Parker Co-CEO expects AI glasses built with Google to arrive after 2025
AI-powered glasses from Google and eyewear retailer Warby Parker will not hit the market until after this year, according to Dave Gilboa, co-founder and co-CEO of Warby Parker. “We believe that glasses are the perfect form factor for AI,” Gilboa said. Warby Parker’s alliance with Google on AI-powered smart glasses marks a major milestone for the D2C eyewear brand. Gilboa said the companies are still engaged in early work on the glasses, but he believes that AI-powered glasses’ ability to offer real-time contextual information will make them much more useful to consumers. Much like how smartphones unleashed a wave of innovation by enabling connected mobility, Gilboa said smart glasses will do the same for AI. “It can know what you’re looking at and can understand what you’re hearing. As a result, AI can process information in real time,” Gilboa said. “They have context around you as an individual; they know what’s on your calendar. They know everything about you.” “For the first time, people will really want to adopt smart glasses for all-day everyday use because they look good and have so much utility,” Gilboa added. The smart glasses project benefits from Google’s broad ecosystem, which includes Android, Search, Maps and YouTube, as well as its experience in generative AI. “They have such depth of data around their individual users that being able to tap into that with a smart glasses product is what we think is going to be really transformative in a number of ways to the world, but also to our business,” Gilboa said.
Snap launches Lens Studio iOS and web apps for creating AR Lenses with AI and simple tools
Snap has launched a stand-alone Lens Studio iOS app and web tool, designed to make it easier for anyone to create AR Lenses through text prompts and simple editing tools. With the Lens Studio app, users will be able to do things like generate their own AI effects, add their Bitmoji, and browse trending templates to create customized Lenses. Until now, Lens Studio has only been accessible via a desktop application aimed at professional developers. The desktop application will remain the primary tool for professionals, but Snap says the new iOS app and web tool are designed to let people at all skill levels create Lenses. Snap currently has an ecosystem of over 400,000 professional AR developers, and with the launch of these simpler tools it is looking to attract more people who are interested in creating Lenses. The company is also rolling out advanced tools for professionals: new Lens Studio tools that AR creators and developers can use to build Bitmoji games. The tools include a turn-based system to enable back-and-forth gameplay, a new customizable Character Controller that supports different gameplay styles, and more.
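As a rough illustration of what back-and-forth, turn-based gameplay involves, here is a minimal Python sketch. It deliberately does not use Snap’s actual Lens Studio APIs; the class and method names are hypothetical, illustrating only the general pattern of alternating turns.

```python
# Generic turn-based match sketch; not Snap's Lens Studio API.

from dataclasses import dataclass, field


@dataclass
class TurnBasedMatch:
    players: list[str]
    moves: list[tuple[str, str]] = field(default_factory=list)
    current: int = 0

    def submit_move(self, player: str, move: str) -> str:
        # Only the active player may move; then the turn passes to the other side.
        if player != self.players[self.current]:
            raise ValueError(f"not {player}'s turn")
        self.moves.append((player, move))
        self.current = (self.current + 1) % len(self.players)
        return self.players[self.current]  # whose turn is next


if __name__ == "__main__":
    match = TurnBasedMatch(players=["bitmoji_a", "bitmoji_b"])
    print(match.submit_move("bitmoji_a", "roll_dice"))  # -> bitmoji_b
    print(match.submit_move("bitmoji_b", "roll_dice"))  # -> bitmoji_a
```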
RP1’s Metaverse Browser connects the entire global population with 3D content within a single, persistent XR ecosystem; includes a fully continuous, 1:1 scale digital twin of Earth and the solar system; and links to any third-party service in real time via API
RP1, a leader in spatial computing software and infrastructure, has unveiled the world’s first Metaverse Browser, a gateway to the first open ecosystem for 3D content. The future of the spatial internet will consist of persistent 3D content and millions of real-time, third-party services, transforming how people interact and businesses operate. RP1’s 3D Browser connects the entire global population with 3D content within a single, persistent XR ecosystem, spanning education, commerce, entertainment, digital twins, smart cities, work, transportation, and even space exploration. This marks the first real software foundation for the spatial internet, featuring industry-first technologies. Major players like Apple, Meta, Google, and Samsung are racing to develop AR glasses that will eventually replace smartphones. Breakthrough innovations behind RP1’s Metaverse Browser:
- Unprecedented Scalability to connect the entire world’s population (vs. 40 users per instance/server in current 3D platforms like Roblox or Meta Horizon Worlds) in a single unsharded architecture with full spatial audio and 6DOF, making it seamless to connect with anyone, at any time, and with any content.
- Unlimited Map that includes a fully continuous, 1:1 scale digital twin of Earth, our solar system, and the farthest reaches of the universe, for frictionless discovery and navigation of augmented and virtual spatial content.
- Real-time API that enables any real-time third-party service, including AI, payments, games, and businesses (stores, hotels, etc.), to easily connect to the 3D browser across both augmented and virtual environments.
- Decentralized Hosting for businesses to run their own worlds and services on their own servers, not inside a closed platform like Roblox or Meta.
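As a thought experiment, a real-time API for third-party services could look something like the Python sketch below, where a service registers a handler at a spatial anchor and the browser dispatches user interactions to it. RP1 has not published its interface; the anchor format, function names, and handler shape here are all assumptions for illustration only.

```python
# Hypothetical sketch of third-party services attaching to a spatial browser.

from typing import Callable

# The browser keeps a registry of third-party services (payments, AI, a
# store's checkout, a hotel's booking system, ...) keyed by spatial anchor.
ServiceHandler = Callable[[dict], dict]
SERVICES: dict[str, ServiceHandler] = {}


def register_service(anchor_id: str, handler: ServiceHandler) -> None:
    """A third party attaches a real-time handler to a place in the 1:1 map."""
    SERVICES[anchor_id] = handler


def dispatch(anchor_id: str, request: dict) -> dict:
    """The browser forwards a user interaction at that anchor to the service."""
    handler = SERVICES.get(anchor_id)
    if handler is None:
        return {"error": f"no service registered at {anchor_id}"}
    return handler(request)


if __name__ == "__main__":
    # Example: a store registers a checkout endpoint at its storefront anchor.
    register_service(
        "earth/paris/rue-de-rivoli/42",
        lambda req: {"status": "paid", "item": req["item"]},
    )
    print(dispatch("earth/paris/rue-de-rivoli/42", {"item": "tour ticket"}))
```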