Seattle Daily News


Google, Boston Dynamics Teach Robot Dog ‘Spot’ to See, Think, and Act

Apr 18, 2026  Twila Rosenbaum

Google DeepMind and Boston Dynamics have joined forces to upgrade Spot, the iconic yellow quadruped robot dog. The collaboration brings the Gemini Robotics-ER 1.6 model onboard, moving Spot beyond pre-programmed scripts to what the companies call 'embodied reasoning': the ability to autonomously assess a complex environment, whether a cluttered room or an industrial site, and decide on an appropriate course of action.

The Gemini Robotics-ER 1.6 model is described by Google DeepMind as a 'reasoning-first' system. This innovative approach enables robots to navigate their surroundings and interpret physical data, such as reading measurements from a pressure gauge. By bridging the gap between digital AI and physical actions, Spot can now perform tasks that require a deeper understanding of its environment.

One of the standout features of the new system is its agentic vision. Where conventional robot vision pipelines passively process a single flat image, Spot can now actively zoom in on fine details and generate code to estimate measurements. This is particularly useful in industrial settings where Spot is tasked with monitoring analog gauges or verifying the status of chemical sight glasses.
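For readers curious what that looks like in practice, the sketch below illustrates one plausible crop-and-requery loop using the publicly available google-genai Python SDK: ask the model where the gauge is, crop the image to that region, then re-query the close-up. The model identifier, prompts, and JSON schema are illustrative assumptions based on the version named in this article, not Boston Dynamics' production pipeline.

```python
# A rough sketch of the crop-and-requery ("zoom in") idea using the
# google-genai Python SDK. The model id is a placeholder based on the
# version named in this article; prompts and schema are assumptions.
import json
from PIL import Image
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")
MODEL_ID = "gemini-robotics-er-1.6"  # placeholder; check Google AI Studio

scene = Image.open("inspection_photo.jpg")

# Step 1: ask for the gauge's bounding box. Gemini's documented convention
# is [ymin, xmin, ymax, xmax] normalized to a 0-1000 grid.
locate = client.models.generate_content(
    model=MODEL_ID,
    contents=[scene,
              'Return the pressure gauge bounding box as JSON: '
              '{"box_2d": [ymin, xmin, ymax, xmax]}'],
    config=types.GenerateContentConfig(response_mime_type="application/json"),
)
ymin, xmin, ymax, xmax = json.loads(locate.text)["box_2d"]

# Step 2: crop so the dial fills the frame, then read it at full detail.
w, h = scene.size
gauge = scene.crop((xmin * w // 1000, ymin * h // 1000,
                    xmax * w // 1000, ymax * h // 1000))

reading = client.models.generate_content(
    model=MODEL_ID,
    contents=[gauge, "Read the needle and report the value with units."],
)
print(reading.text)
```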

Marco da Silva, Vice President and General Manager of Spot at Boston Dynamics, emphasized the importance of these advancements: 'Capabilities like instrument reading and more reliable task reasoning will enable Spot to see, understand, and react to real-world challenges completely autonomously.'

Safety and the Touch Gap

Additionally, the Gemini Robotics-ER 1.6 system is evaluated against a safety benchmark known as ASIMOV, designed to catch hazardous errors such as placing objects too close to the edge of a surface. Safety checks of this kind become crucial as Spot's applications expand into more complex and potentially risky environments.
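To give a concrete feel for the kind of error the article describes, the toy check below rejects any placement within a safety margin of a tabletop's edge. It is a simple geometric illustration only; the actual ASIMOV benchmark is far broader than this.

```python
# A toy stand-in for the edge-placement hazard described above. This is
# not the ASIMOV benchmark itself, just the geometric idea behind one of
# the errors it is meant to catch.
from dataclasses import dataclass

@dataclass
class Surface:
    width_m: float   # extent of the tabletop along x, in meters
    depth_m: float   # extent along y, in meters

def is_safe_placement(x_m: float, y_m: float, surface: Surface,
                      margin_m: float = 0.05) -> bool:
    """Reject placements within margin_m of any edge of the surface."""
    return (margin_m <= x_m <= surface.width_m - margin_m and
            margin_m <= y_m <= surface.depth_m - margin_m)

table = Surface(width_m=1.2, depth_m=0.8)
print(is_safe_placement(0.02, 0.4, table))  # False: too close to the left edge
print(is_safe_placement(0.60, 0.4, table))  # True: well inside the margin
```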

However, a significant challenge remains in the realm of tactile interaction. Most AI models, including the one now powering Spot, are trained predominantly on visual and textual data from the internet and therefore capture little about how objects feel. Consequently, Spot relies heavily on its visual sensors when deciding how to physically interact with its surroundings.

Availability

The Gemini Robotics-ER 1.6 model is currently accessible to developers through the Gemini API and Google AI Studio. Google DeepMind has also published a developer Colab with examples of how to configure the model for various embodied reasoning tasks. For existing Boston Dynamics customers, the switch to the Gemini-powered AIVI-Learning model was completed on April 8, 2026.
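Getting started requires nothing Spot-specific. A minimal call through the google-genai Python SDK might look like the following, with the model identifier left as a placeholder (the article names version 1.6; Google AI Studio lists the identifiers actually published):

```python
# Minimal Gemini API call via the google-genai Python SDK. The model id
# is a placeholder; consult Google AI Studio for the published identifier.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemini-robotics-er-1.6",  # placeholder id
    contents="Plan the steps a quadruped robot should take to inspect a "
             "pressure gauge in a cluttered plant room.",
)
print(response.text)
```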

In a recent demonstration, Spot showcased its versatility by performing tasks typically associated with human behavior. Observers noted the robot reading a handwritten to-do list, organizing footwear, and even taking a real dog for a walk using a leash. These demonstrations highlight Spot's evolving capabilities beyond industrial applications, suggesting potential for broader uses in everyday scenarios.

As Google and Boston Dynamics continue to push the boundaries of robotics and AI, the enhancements to Spot signify a significant leap toward more intelligent and capable robotic systems. The integration of advanced reasoning and perception technologies heralds a new era in robotic automation, promising to transform industries and daily life alike.


Source: eWEEK News

