OpenAI and Google are pushing the US government to allow their AI models to train on copyrighted material. Both companies outlined their stances in proposals published this week, with OpenAI arguing that applying fair use protections to AI “is a matter of national security.”
The proposals come in response to a request from the White House, which asked governments, industry groups, private sector organizations, and others for input on President Donald Trump’s “AI Action Plan.” The initiative is supposed to “enhance America’s position as an AI powerhouse,” while preventing “burdensome requirements” from impacting innovation.
In its comment, OpenAI claims that allowing AI companies to access copyrighted content would help the US “avoid forfeiting” its lead in AI to China, citing the rise of DeepSeek.
“There’s little doubt that the PRC’s [People’s Republic of China] AI developers will enjoy unfettered access to data — including copyrighted data — that will improve their models,” OpenAI writes. “If the PRC’s developers have unfettered access to data and American companies are left without fair use access, the race for AI is effectively over.”
Google, unsurprisingly, agrees. The company’s response similarly states that copyright, privacy, and patent policies “can impede appropriate access to data necessary for training leading models.” It adds that fair use policies, along with text and data mining exceptions, have been “critical” to training AI on publicly available data.
“These exceptions allow for the use of copyrighted, publicly available material for AI training without significantly impacting rightsholders and avoid often highly unpredictable, imbalanced, and lengthy negotiations with data holders during model development or scientific experimentation,” Google says.
Anthropic, the company behind the AI chatbot Claude, also submitted a proposal, but it doesn’t mention copyright at all. Instead, it asks the US government to develop a system to assess an AI model’s national security risks and to strengthen export controls on AI chips. Like Google and OpenAI, Anthropic also suggests that the US bolster its energy infrastructure to support the growth of AI.