Google DeepMind’s new AI models help robots perform physical tasks, even without training

By News Room | 12 March 2025 | Updated: 12 March 2025

Google DeepMind is launching two new AI models designed to help robots “perform a wider range of real-world tasks than ever before.” The first, called Gemini Robotics, is a vision-language-action model capable of understanding new situations, even if it hasn’t been trained on them.

Gemini Robotics is built on Gemini 2.0, the latest version of Google’s flagship AI model. During a press briefing, Carolina Parada, the senior director and head of robotics at Google DeepMind, said Gemini Robotics “draws from Gemini’s multimodal world understanding and transfers it to the real world by adding physical actions as a new modality.”

The new model makes advances in three key areas that Google DeepMind says are essential to building helpful robots: generality, interactivity, and dexterity. In addition to its ability to generalize to new scenarios, Gemini Robotics is better at interacting with people and their environment. It is also capable of performing more precise physical tasks, such as folding a piece of paper or removing a bottle cap.

“While we have made progress in each one of these areas individually in the past with general robotics, we’re [drastically] increasing performance in all three areas with a single model,” Parada said. “This enables us to build robots that are more capable, that are more responsive, and that are more robust to changes in their environment.”

Google DeepMind is also launching Gemini Robotics-ER (or embodied reasoning), which the company describes as an advanced visual language model that can “understand our complex and dynamic world.”

As Parada explains, when you’re packing a lunchbox and have items on a table in front of you, you’d need to know where everything is, as well as how to open the lunchbox, how to grasp the items, and where to place them. That’s the kind of reasoning Gemini Robotics-ER is expected to do. It’s designed for roboticists to connect with existing low-level controllers — the system that controls a robot’s movements — allowing them to enable new capabilities powered by Gemini Robotics-ER.
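The article does not describe the actual interface, but conceptually the division of labour looks something like the sketch below: a hypothetical EmbodiedReasoner stands in for the model that works out where objects are and in what order to handle them, and a hypothetical LowLevelController stands in for the existing system that moves the robot. All class and method names here are illustrative assumptions, not DeepMind's API.

```python
from dataclasses import dataclass

# Hypothetical types for illustration only; none of this is the Gemini Robotics API.

@dataclass
class GraspStep:
    """One high-level action: which object to pick up and where to put it."""
    object_name: str
    pickup_xyz: tuple[float, float, float]  # object location in the robot frame
    place_xyz: tuple[float, float, float]   # target location (e.g. inside the lunchbox)

class EmbodiedReasoner:
    """Stand-in for an embodied-reasoning model: given a camera image and an
    instruction, it returns an ordered plan of grasp-and-place steps."""
    def plan(self, image, instruction: str) -> list[GraspStep]:
        # A real model would infer this from the image; hard-coded for illustration.
        return [
            GraspStep("sandwich", (0.40, 0.10, 0.02), (0.10, 0.00, 0.05)),
            GraspStep("apple",    (0.35, -0.15, 0.03), (0.10, 0.05, 0.05)),
        ]

class LowLevelController:
    """Stand-in for the existing controller that actually moves the robot."""
    def move_to(self, xyz): print(f"moving gripper to {xyz}")
    def close_gripper(self): print("closing gripper")
    def open_gripper(self): print("opening gripper")

def pack_lunchbox(reasoner: EmbodiedReasoner, controller: LowLevelController, image):
    """Bridge layer: turn each reasoning step into low-level controller calls."""
    for step in reasoner.plan(image, "pack the lunchbox"):
        controller.move_to(step.pickup_xyz)
        controller.close_gripper()           # grasp the object
        controller.move_to(step.place_xyz)
        controller.open_gripper()            # release it in the lunchbox

if __name__ == "__main__":
    pack_lunchbox(EmbodiedReasoner(), LowLevelController(), image=None)
```

In this reading, the reasoning model only has to answer "what goes where, in what order", while the robot maker's own controller keeps responsibility for the actual motion.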

In terms of safety, Google DeepMind researcher Vikas Sindhwani told reporters that the company is developing a “layered approach,” adding that Gemini Robotics-ER models “are trained to evaluate whether or not a potential action is safe to perform in a given scenario.” The company is also releasing new benchmarks and frameworks to help further safety research in the AI industry. Last year, Google DeepMind introduced its “Robot Constitution,” a set of Isaac Asimov-inspired rules for its robots to follow.

Google DeepMind is working with Apptronik to “build the next generation of humanoid robots.” It’s also giving “trusted testers” access to its Gemini Robotics-ER model, including Agile Robots, Agility Robotics, Boston Dynamics, and Enchanted Tools. “We’re very focused on building the intelligence that is going to be able to understand the physical world and be able to act on that physical world,” Parada said. “We’re very excited to basically leverage this across multiple embodiments and many applications for us.”
