Meta's chief AI scientist, Yann LeCun, has said that artificial general intelligence (AGI) will be viable in three to five years.
AGI is a theoretical system that can match or even surpass the capabilities of a human being.
Speaking on Tuesday at Nvidia’s GTC, the tech giant’s largest annual gathering, LeCun told the audience of more than 25,000 people that he prefers the term “advanced machine intelligence” over AGI because “human intelligence is superspecialised, so calling it general is a misnomer.”
The chief scientist also said that it is important for open-source projects to support the development of diverse AI assistants.
“We need assistants that are extremely diverse,” he continued. “We need to speak all languages, understand all the cultures, all the value systems, all the sectors of interest.
“So we need a platform that anybody can use to build those assistants, a diverse population of assistants — and right now that can only be done through open-source platforms.”
LeCun went on to talk about how Meta is developing world models that can understand, reason, and plan around physical environments.
“What you need is a predictor that, given the state of the world and an action you imagine, can predict the next state of the world,” he said. “And if you have such a system, then you can plan a sequence of actions to arrive at a particular outcome.”
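The planning loop LeCun describes can be illustrated with a toy sketch: a predictor maps a state and an imagined action to the next state, and a planner searches over action sequences whose predicted rollout reaches a desired outcome. The predictor and grid-world state here are entirely hypothetical stand-ins (a real world model would be learned), assumed only to show the idea.

```python
from itertools import product

# Toy "world model": given a state (x, y) and an action, predict the next state.
# In a real system this predictor would be a learned model; this hand-written
# version exists only to illustrate the planning loop.
def predict_next_state(state, action):
    x, y = state
    moves = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}
    dx, dy = moves[action]
    return (x + dx, y + dy)

def plan(start, goal, horizon=3):
    """Brute-force search over action sequences: roll each candidate forward
    through the predictor and return the first one whose predicted final
    state matches the goal."""
    actions = ["up", "down", "left", "right"]
    for seq in product(actions, repeat=horizon):
        state = start
        for a in seq:
            state = predict_next_state(state, a)
        if state == goal:
            return list(seq)
    return None  # no sequence of this length reaches the goal

# Plan a path from (0, 0) to (2, 1) in three imagined steps.
print(plan((0, 0), (2, 1)))  # e.g. ['up', 'right', 'right']
```

Real systems replace the exhaustive search with gradient-based or sampled optimisation over the model's predictions, but the structure is the same: imagine, predict, and pick the sequence of actions that leads to the outcome you want.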
Bill Dally, chief scientist at Nvidia, said that world models such as these need significant AI infrastructure powered by Nvidia GPUs.
Agentic AI
At the event, which took place in San Jose, Nvidia founder and chief executive Jensen Huang announced a host of new partnerships and technologies.
The company said it has expanded its existing relationship with Google to help advance AI, democratise access to AI tools, speed up the development of physical AI, and transform industries such as healthcare, manufacturing, and energy.
Huang revealed that engineers and researchers throughout Alphabet are working closely with technical teams at Nvidia to use AI and simulation to develop robots with grasping skills, reimagine drug discovery, optimise energy grids, and more.
Oracle, which earlier this week announced plans to invest $5 billion in UK cloud infrastructure over the next five years, has also partnered with Nvidia for a “first-of-its-kind” integration.
The companies are integrating Nvidia’s accelerated computing and inference software with Oracle’s infrastructure and genAI services to help organisations speed up the creation of agentic AI applications.
Nvidia additionally unveiled a new agentic AI platform with EY, designed to combine private, domain-specific Nvidia AI reasoning models with human expertise and use the resulting AI agents to boost operational productivity.
Capgemini is also introducing customised agentic solutions designed in collaboration with the US tech giant to accelerate enterprise AI adoption.
Humanoid model launch
As well as its new advancements and partnerships on agentic AI, Nvidia announced a portfolio of new technologies to “supercharge humanoid robot development.”
The chief executive introduced Nvidia Isaac GR00T N1, which the company describes as the “world’s first” open, fully customisable foundation model for generalised humanoid reasoning and skills.
The other technologies include simulation frameworks and blueprints such as the Nvidia Isaac GR00T Blueprint for generating synthetic data, as well as Newton, an open-source physics engine — under development with Google DeepMind and Disney Research — purpose-built for developing robots.
Available now, GR00T N1 is the first of a family of fully customisable models that Nvidia will pretrain and release to robotics developers worldwide, accelerating the transformation of industries challenged by global labour shortages estimated at more than 50 million people.
“The age of generalist robotics is here,” said Huang. “With Nvidia Isaac GR00T N1 and new data-generation and robot-learning frameworks, robotics developers everywhere will open the next frontier in the age of AI.”