Google calls for weakened copyright and export rules in AI policy proposal

Google, following on the heels of OpenAI, published a policy proposal in response to the Trump Administration’s call for a national “AI Action Plan.” The tech giant endorsed weak copyright restrictions on AI training, as well as “balanced” export controls that “protect national security while enabling U.S. exports and global business operations.”
“The U.S. needs to pursue an active international economic policy to advocate for American values and support AI innovation internationally,” Google wrote in the document. “For too long, AI policymaking has paid disproportionate attention to the risks, often ignoring the costs that misguided regulation can have on innovation, national competitiveness, and scientific leadership — a dynamic that is beginning to shift under the new Administration.”
One of Google’s more controversial recommendations pertains to the use of IP-protected material.
Google argues that “fair use and text-and-data mining exceptions” are “critical” to AI development and AI-related scientific innovation. Like OpenAI, the company seeks to codify the right for it and its rivals to train on publicly available data — including copyrighted data — largely without restriction.
“These exceptions allow for the use of copyrighted, publicly available material for AI training without significantly impacting rightsholders,” Google wrote, “and avoid often highly unpredictable, imbalanced, and lengthy negotiations with data holders during model development or scientific experimentation.”
Google, which has reportedly trained a number of models on public, copyrighted data, is battling lawsuits with data owners who accuse the company of failing to notify and compensate them before doing so. U.S. courts have yet to decide whether fair use doctrine effectively shields AI developers from IP litigation.
In its AI policy proposal, Google also takes issue with certain export controls imposed under the Biden Administration, which it says “may undermine economic competitiveness goals” by “imposing disproportionate burdens on U.S. cloud service providers.” That contrasts with statements from Google competitors like Microsoft, which in January said that it was “confident” it could “comply fully” with the rules.
Importantly, the export rules, which seek to limit the availability of advanced AI chips in disfavored countries, carve out exemptions for trusted businesses seeking large clusters of chips.
Pointing to the chaotic regulatory environment created by the U.S.’ patchwork of state AI laws, Google urged the government to pass federal legislation on AI, including a comprehensive privacy and security framework. Just over two months into 2025, the number of pending AI bills in the U.S. has grown to 781, according to an online tracking tool.
Google cautions the U.S. government against imposing what it perceives to be onerous obligations around AI systems, like usage liability obligations. In many cases, Google argues, the developer of a model “has little to no visibility or control” over how a model is being used and thus shouldn’t bear responsibility for misuse.
Historically, Google has opposed laws like California’s defeated SB 1047, which laid out precautions an AI developer should take before releasing a model and specified the cases in which developers could be held liable for model-induced harms.
“Even in cases where a developer provides a model directly to deployers, deployers will often be best placed to understand the risks of downstream uses, implement effective risk management, and conduct post-market monitoring and logging,” Google wrote.
Google in its proposal also called disclosure requirements like those being contemplated by the EU “overly broad,” and said the U.S. government should oppose transparency rules that require “divulging trade secrets, allow competitors to duplicate products, or compromise national security by providing a roadmap to adversaries on how to circumvent protections or jailbreak models.”
A growing number of countries and states have passed laws requiring AI developers to reveal more about how their systems work. California’s AB-2013 mandates that companies developing AI systems publish a high-level summary of the data sets that they used to train their systems. In the EU, to comply with the AI Act once it comes into force, companies will have to supply model deployers with detailed instructions on the operation, limitations, and risks associated with the model.