
In the United States, the debate over artificial intelligence regulation has entered a new and more heated phase. Among his earliest executive actions upon returning to the White House, Donald Trump directed the formulation of an “AI Action Plan,” a comprehensive national strategy to foster AI advancement, and called on all stakeholders to articulate their visions for the technology’s future. Recently, following OpenAI’s lead, Google responded to that call.
Company representatives argue that it is time for U.S. authorities to proactively champion American values on the global stage and stimulate innovation. Google’s response contends that regulators have become excessively preoccupied with identifying risks while neglecting how overly restrictive measures can undermine technological progress and erode the nation’s scientific leadership.
First and foremost, the technology giant argues that the government should ease copyright-related restrictions on AI model training. Developers assert that unrestricted access to publicly available data, even when it is copyrighted, is essential to accelerating technological and scientific progress. OpenAI shares this view, and both companies are pushing to enshrine the right in legislation.
Google argues that such an approach would inflict minimal harm upon rights holders while enabling corporations to bypass protracted negotiations with content owners during model creation and research processes.
This stance is far from new, and the company has held firm despite a series of ongoing legal battles. Google, which allegedly used copyrighted materials to train several of its models, faces lawsuits from content creators who contend that the company neither sought their permission nor compensated them for the use of their data. American courts have yet to determine whether the fair use doctrine shields AI developers from such claims.
The document also criticizes export restrictions imposed by the Biden administration. According to Google, these measures unduly burden U.S.-based cloud service providers and may weaken the nation’s economic standing. Interestingly, competitors hold differing views; Microsoft, for instance, has publicly declared its full readiness to comply with the new regulations.
Nevertheless, current export regulations, designed to restrict certain nations’ access to advanced AI chips, include exceptions for verified companies purchasing processors in substantial quantities.
Google weighs in on several other fronts as well. The company stresses the need for sustained federal funding of fundamental research in the U.S., opposing recent efforts to curtail scientific grants. Its specific recommendations include granting developers access to government datasets for training commercial AI models. Google also underscores the importance of backing key initiatives at an early stage and ensuring that researchers have free access to the computing resources they need.
The technology giant also voices serious concern about the fragmented regulatory landscape in the United States, where each state is enacting its own AI legislation. The company advocates unified federal legislation that comprehensively addresses safety and personal data protection. The scale of the problem is evident in the numbers alone: in the first two months of 2025, lawmakers introduced 781 AI-related bills.
Moreover, the company cautions against overly restrictive AI regulations, particularly concerning liability for AI applications. Developers, it is argued, often cannot trace or control how their models are utilized, making it unjust to hold them accountable in certain scenarios.
Google has consistently upheld this position. It actively opposed California’s SB 1047, a bill that spelled out stringent precautionary measures for deploying AI models and explicit circumstances under which developers could be held liable for harm. Fortunately for the company, the bill was ultimately vetoed and never became law.
Google also harshly criticizes proposed European Union transparency requirements for AI systems. The company perceives these measures as excessively intrusive, urging U.S. officials to resist regulations that would compel disclosure of technological trade secrets, simplify competitors’ attempts to replicate products, or inadvertently aid malicious actors in exploiting algorithm vulnerabilities.
The broader global trend, however, is moving in precisely the opposite direction: a growing number of countries and regions are demanding greater transparency from AI developers. California’s AB 2013, for example, requires companies to publicly disclose the datasets used to train their AI systems. The European Union’s AI Act goes even further, compelling developers to explain to users their models’ operating principles, limitations, and potential risks.