Step Into the Painting: Google's New AI Generates Infinite, Interactive Worlds in Real Time
The frontier of generative AI is expanding from images and text into fully navigable spaces. Google DeepMind has begun rolling out an experimental research prototype called Project Genie, offering a first-hand look at the future of interactive world generation. Currently available to Google AI Ultra subscribers in the U.S., the prototype allows users to create, explore, and remix immersive environments that unfold in real time.
From Static Images to Living Worlds
At its core, Project Genie is powered by Genie 3, a foundational world model developed by Google DeepMind. Unlike a traditional 3D engine that renders pre-built environments, a world model simulates environmental dynamics, predicting how a scene evolves and responds to actions. Genie 3 doesn't just create a static snapshot; it generates the path ahead moment by moment as the user moves and interacts within the world.
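Google has not published a programming interface for Genie 3, but the interaction loop a world model implies can be sketched in a few lines. The snippet below is a purely illustrative Python sketch of that idea; the names `WorldModel`, `predict_next_frame`, and `explore` are hypothetical placeholders, not part of any Google product or API.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Frame:
    """A single rendered moment of the generated world (e.g. an image)."""
    pixels: bytes


class WorldModel:
    """Stands in for a learned model in the spirit of Genie 3: given the
    history of frames and the user's latest action, it predicts what
    comes next. (Hypothetical interface for illustration only.)"""

    def predict_next_frame(self, history: List[Frame], action: str) -> Frame:
        # A real system would run a large neural network here; this stub
        # only illustrates the shape of the call: state + action -> next state.
        return Frame(pixels=b"")


def explore(model: WorldModel, first_frame: Frame, actions: List[str]) -> List[Frame]:
    """Autoregressive rollout: each new frame is generated on the fly from
    the frames so far plus the user's chosen action, rather than being read
    from a pre-authored 3D environment."""
    history = [first_frame]
    for action in actions:  # e.g. "walk forward", "turn left", "fly up"
        history.append(model.predict_next_frame(history, action))
    return history
```

The point of the sketch is the contrast with conventional rendering: nothing about the world exists ahead of time, so every step the user takes is answered by a fresh prediction conditioned on what has happened so far.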
Three Core Capabilities for Creation
The prototype web app, which also leverages Nano Banana Pro and Gemini, centers on three creative functions:
1. World Sketching: Users can prompt worlds using text or images (either generated or uploaded). They can define a character and choose a mode of navigation, such as walking, flying, or driving. Integrated tools let users refine a world's preview before entering it.
2. World Exploration: Once inside, the environment is fully navigable. The model generates the unfolding scenery in real time based on the user's chosen actions and perspective.
3. World Remixing: Users can take existing worlds—either from a curated gallery or their own creations—and remix them into new interpretations by modifying the original prompts. Completed explorations can be downloaded as videos.
A Responsible Research Prototype
Google emphasizes that Project Genie is an early experimental prototype with acknowledged areas for improvement. Generated worlds may not always be photorealistic, may diverge from their prompts, or may exhibit imperfect physics. Character controllability and latency are still being refined, and generations are currently limited to 60 seconds. Some advanced capabilities announced for the Genie 3 model, such as promptable in-world events, are not yet included in this release.
This controlled rollout aims to gather feedback on how people interact with world-model technology, informing future research in both AI and generative media. The goal, according to Google, is to eventually make such interactive experiences accessible to a broader audience. For now, it provides a groundbreaking glimpse into a future where AI can serve as the engine for boundless, personalized digital worlds.