Google DeepMind Unveils Gemini Robotics: A Leap Toward More Adaptive and Interactive Robots

Mia Cruz

Updated:
March 15, 2025

Google DeepMind is taking AI beyond screens and into the physical world with Gemini Robotics, a family of models built on Gemini 2.0 that lets robots see, understand, and act on their surroundings. The announcement introduces two models, Gemini Robotics and Gemini Robotics-ER, each designed to improve how robots perceive, interact with, and respond intelligently to the world around them.


For robots to be truly useful, they need more than computation; they must be able to see, understand, and act in dynamic environments. Gemini Robotics is built on a vision-language-action (VLA) framework, enabling it to interpret the world, respond to commands, and take physical action.
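DeepMind hasn't published the model's interface, but conceptually a VLA system closes a perceive-reason-act loop: camera frames and a language instruction go in, low-level actions come out. The Python sketch below is a hypothetical illustration of that loop only; get_camera_frame, vla_policy, and execute_action are stand-ins invented for this example, not real Gemini Robotics calls.

```python
import random

def get_camera_frame():
    """Stand-in for a robot's camera feed: a tiny dummy 'image'."""
    return [[random.random() for _ in range(4)] for _ in range(4)]

def vla_policy(frame, instruction):
    """Hypothetical vision-language-action call. A real VLA model
    would jointly encode the image and the instruction and decode
    an action; this toy version just keys off the instruction text."""
    if "pick" in instruction.lower():
        return {"gripper": "close", "dz": -0.05}  # move down and grasp
    return {"gripper": "open", "dz": 0.0}         # default: hold position

def execute_action(action):
    """Stand-in for the robot's low-level motor controller."""
    print(f"executing: {action}")

# Closed perceive-reason-act loop: see, understand, act.
instruction = "Pick up the banana"
for _ in range(3):
    frame = get_camera_frame()
    action = vla_policy(frame, instruction)
    execute_action(action)
```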


Key Features:

  1. Generalization: Gemini Robotics adapts to new objects, instructions, and environments, including ones it has never seen before, making it highly flexible.
  2. Interactivity: It understands natural language commands, adjusts to real-time changes, and responds conversationally in multiple languages.
  3. Dexterity: It can manipulate delicate objects and perform fine motor tasks, making it valuable for both household and industrial applications.
  4. Multiple embodiments: Gemini Robotics is designed to integrate with robots of different shapes and sizes.


Gemini Robotics-ER: Enhancing Spatial Intelligence

Alongside Gemini Robotics, DeepMind is introducing Gemini Robotics-ER (embodied reasoning), a model designed to improve spatial understanding and reasoning in robotics. This extension allows robots to:

  1. Interpret 3D environments with enhanced perception.
  2. Plan and execute precise movements, such as grasping or manipulating objects.
  3. Autonomously complete complex tasks, leveraging real-world data to refine decision-making.

By bridging high-level AI reasoning with low-level robot control systems, Gemini Robotics-ER enables roboticists to integrate its capabilities into existing robotic frameworks more efficiently.
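As a rough illustration of that bridge (the real interface is not public), the sketch below uses a hypothetical spatial_reasoner standing in for an ER-style model: it emits a Cartesian grasp pose, which an existing low-level controller then converts into joint targets. Every name here is an assumption made for illustration.

```python
from dataclasses import dataclass

@dataclass
class GraspPose:
    """A Cartesian grasp target in the robot's base frame (meters)."""
    x: float
    y: float
    z: float

def spatial_reasoner(scene: str) -> GraspPose:
    """Hypothetical high-level call: an ER-style model would infer
    where to grasp from perception; here we return a fixed pose."""
    return GraspPose(x=0.42, y=-0.10, z=0.15)

def low_level_controller(pose: GraspPose) -> list:
    """Stand-in for the existing robot stack (e.g., an IK solver)
    that turns a Cartesian pose into joint-space targets."""
    return [pose.x * 2.0, pose.y * 2.0, pose.z * 2.0]  # toy mapping

pose = spatial_reasoner("mug on the table, handle facing left")
print(f"grasp pose: {pose}")
print(f"joint targets: {low_level_controller(pose)}")
```

The point of the split is that the high-level model stays robot-agnostic: only the controller layer needs to change when the same reasoning is dropped onto a different embodiment.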


Advancing AI Safety in Robotics

As AI-driven robots become more capable, safety remains a priority. DeepMind is implementing multiple layers of protection:

  1. Physical Safety: Ensuring robots avoid collisions, maintain stability, and limit contact forces.
  2. Semantic Safety: Developing a data-driven constitution to guide robot behavior and prevent unsafe actions.
  3. Responsible Development: Collaborating with trusted testers including Boston Dynamics, Agile Robots, Agility Robotics, and Enchanted Tools to rigorously evaluate Gemini Robotics-ER’s real-world impact.

DeepMind is also introducing ASIMOV, a dataset designed to evaluate and improve AI safety in robotic applications, reinforcing a structured approach to responsible AI development.


The future of robotics isn't just about automation; it's about AI that perceives, learns, and collaborates with us in everyday life.


Artificial Intelligence, Robotics

About the Author

Mia Cruz

Mia Cruz is an AI news correspondent based in the United States.
