Alibaba is turbocharging its AI push as it announces new data centre openings, a partnership with Nvidia, and the release of its new large language model, Qwen3-Max.
At the company’s annual technology conference on Wednesday, Alibaba Cloud revealed it has launched its first data centres in Brazil, France and the Netherlands.
The move comes as the e-commerce giant continues to prioritise AI and cloud as key business objectives alongside its traditional e-commerce operations.
The business confirmed that additional data centres in Mexico, Japan, South Korea, Malaysia and Dubai will also open over the course of the next 12 months.
At the event, chief executive Eddie Wu said that the strategic expansion will also include the creation of new regional service centres in Indonesia and Germany to provide 24-hour multilingual customer support.
Wu added that Alibaba, which currently operates in 29 regions globally, is committed to further boosting spend on AI developments.
“The speed of AI industry development has far exceeded our expectations, and the industry’s demand for AI infrastructure has also far exceeded our expectations,” Wu said.
Earlier this year, the company announced plans to invest 380 billion yuan (£45.4 billion) in AI-related infrastructure over the next three years as competition in the development of advanced AI capabilities intensifies in the Chinese market.
At the conference, Alibaba also announced a partnership with Nvidia to develop physical AI capabilities covering data synthesis and processing, model training, reinforcement learning with environmental simulation, and model validation testing.
The collaboration aims to provide a cloud-native platform that equips developers with the tools and agility needed to build physical AI solutions and rapidly advance innovations in humanoid robotics, the firm said.
The e-commerce giant also unveiled its largest-ever AI language model, Qwen3-Max.
The model has over one trillion parameters and delivers impressive performance across a wide range of benchmarks, especially in code generation and agentic capabilities, the firm said.
“In the future, large AI models will be deeply integrated into a wide range of devices, functioning like operating systems – equipped with persistent memory, seamless cloud-edge coordination, and the ability to continuously evolve,” Wu said at the conference.
He added that the firm remains committed to open-sourcing Qwen and shaping it into the “operating system of the AI era”.
According to a statement from the firm, since the launch of the first generation of Qwen in 2023, Alibaba has open-sourced over 300 AI models built on its two foundation models – the large language model Qwen and the visual generation model Wan.
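For developers, the open-weight Qwen releases are typically consumed through public model hubs rather than Alibaba's own cloud. As a rough illustration (assuming the Hugging Face transformers library; the checkpoint name below is an example from earlier open Qwen releases, not taken from this announcement), loading one of the open-sourced models might look like this:

```python
# Minimal sketch: loading an open-weight Qwen checkpoint with Hugging Face transformers.
# The model ID is illustrative; open Qwen releases are published under the "Qwen" organisation.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-7B-Instruct"  # example open-weight checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Run a short generation to confirm the model loads and responds.
prompt = "Write a haiku about cloud computing."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```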