Google has been building the foundations of the modern AI era, from pioneering the Transformer architecture to developing agent systems that can learn and plan. They are now working to extend their best multimodal foundation model, Gemini 2.5 Pro, into a “world model” that can make plans and imagine new experiences by understanding and simulating aspects of the world, just as the brain does.

This is a critical step toward a universal AI assistant: one that is intelligent, understands the context you are in, and can plan and take action on your behalf across any device. The ultimate vision is to transform the Gemini app into such an assistant, performing everyday tasks, taking care of mundane admin, and surfacing delightful new recommendations to make users more productive and enrich their lives.

That work includes capabilities like video understanding, screen sharing, and memory. Over the past year, they have been integrating these capabilities into Gemini Live, and are now gathering feedback from trusted testers while working to bring them to Gemini Live more broadly, to new experiences in Search, to the Live API for developers (sketched at the end of this section), and to new form factors like glasses.

Safety and responsibility are central to this work, and they recently conducted a large research project exploring the ethical issues surrounding advanced AI assistants.

Project Mariner, a research prototype exploring the future of human-agent interaction, includes a system of agents that can complete up to ten different tasks at a time. It is available to Google AI Ultra subscribers in the U.S. and will be brought into the Gemini API and Google products throughout the year.
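Google has not described how Mariner schedules these parallel agents. Purely as a hypothetical illustration of the “up to ten tasks at a time” behavior, the sketch below bounds concurrent agent work with an asyncio semaphore; every name in it (run_agent_task, MAX_CONCURRENT_TASKS) is invented for the example and reflects nothing about Mariner's actual implementation.

```python
# Hypothetical sketch: capping an agent system at ten concurrent tasks,
# in the spirit of Project Mariner's "up to ten tasks at a time".
# All names here are invented for illustration only.
import asyncio

MAX_CONCURRENT_TASKS = 10  # assumed cap, per the announcement


async def run_agent_task(task: str) -> str:
    """Stand-in for an agent completing one user-assigned task."""
    await asyncio.sleep(0.1)  # placeholder for real agent work
    return f"done: {task}"


async def run_all(tasks: list[str]) -> list[str]:
    semaphore = asyncio.Semaphore(MAX_CONCURRENT_TASKS)

    async def bounded(task: str) -> str:
        # At most ten tasks hold the semaphore at any moment;
        # the rest queue until a slot frees up.
        async with semaphore:
            return await run_agent_task(task)

    return await asyncio.gather(*(bounded(t) for t in tasks))


if __name__ == "__main__":
    results = asyncio.run(run_all([f"task-{i}" for i in range(25)]))
    print(results)
```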
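Separately, the Live API mentioned earlier is the developer entry point for these streaming, multimodal capabilities. Below is a minimal sketch of a text-only session using the google-genai Python SDK; the model ID, config fields, and method names follow the SDK's public documentation but can differ across versions, so treat them as assumptions rather than a definitive integration.

```python
# Minimal sketch of a Gemini Live API session (text-only).
# Assumes the google-genai Python SDK; the model ID, config fields,
# and method names may differ across SDK versions.
import asyncio

from google import genai

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key


async def main() -> None:
    # Open a bidirectional streaming session with a Live-capable model.
    async with client.aio.live.connect(
        model="gemini-2.0-flash-live-001",  # assumed Live-capable model ID
        config={"response_modalities": ["TEXT"]},
    ) as session:
        # Send one user turn and mark it complete.
        await session.send_client_content(
            turns={"role": "user", "parts": [{"text": "Summarize what you can do."}]},
            turn_complete=True,
        )
        # Stream the model's response as it arrives.
        async for message in session.receive():
            if message.text:
                print(message.text, end="")


asyncio.run(main())
```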