Google updated its artificial intelligence principles on Tuesday, removing commitments not to use the technology in ways “that cause or are likely to cause overall harm.” The section scrubbed from the revised AI ethics guidelines had previously committed Google not to design or deploy AI for use in surveillance, weapons, or technology intended to injure people. The change was first spotted by The Washington Post and captured here by the Internet Archive.
Coinciding with these changes, Google DeepMind CEO Demis Hassabis and James Manyika, Google’s senior executive for technology and society, published a blog post detailing new “core tenets” that the company’s AI principles will focus on. These include innovation, collaboration, and “responsible” AI development, with the last of these making no specific commitments.
“There’s a global competition taking place for AI leadership within an increasingly complex geopolitical landscape,” reads the blog post. “We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights. And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.”
Hassabis joined Google after it acquired DeepMind in 2014. In a 2015 interview with Wired, he said the acquisition included terms preventing DeepMind technology from being used in military or surveillance applications.