Trump changes artificial intelligence regulations – new approach to AI in the USA
10 February 2025 / AI
Donald Trump began his term of office with significant changes to the regulation of artificial intelligence (AI). One of his first steps was to repeal Joe Biden’s 2023 executive order, which introduced specific safety requirements for AI systems. The decision has sparked controversy among experts, who stress that the absence of adequate regulation may create opportunities but also serious risks, both for society and for the United States’ position as a technological leader.
What was the Biden AI regulation about?
Joe Biden’s executive order aimed to ensure the safe and responsible development of artificial intelligence. It focused on several key areas:
- Safety standards and testing of AI systems: companies developing artificial intelligence were required to conduct safety tests on their systems and share the results with the US government. This was intended to identify potential risks, such as algorithmic bias or the use of AI in activities that threaten national security.
- Protection against AI-generated disinformation: the Department of Commerce was to develop guidelines for watermarking and content authentication systems so that AI-generated material could be easily identified, limiting the impact of disinformation and fake news on society.
- Privacy and data protection: the order emphasised protecting citizens’ data from being used unlawfully to train AI models. Although President Biden urged Congress to pass appropriate legislation, there were no specific regulations governing this issue at the time.
- Preventing algorithmic discrimination: one of the key points of the order concerned counteracting AI algorithms built on unrepresentative data that could lead to discrimination, for example in recruitment systems, the judiciary or healthcare.
- Safety in healthcare and life sciences: the Biden administration introduced mechanisms to prevent AI from being used to create dangerous biological materials. The Department of Health and Human Services was to develop AI safety programmes in medicine, focused on improving healthcare and developing innovative therapies; compliance with these programmes was to be a condition for obtaining federal funding for life-sciences projects.
- The labour market and the impact of AI: the order called for rules protecting employees from the unfair use of AI in performance appraisal and recruitment systems.
You can read more about artificial intelligence systems in the United States here:
- ChatGPT at the centre of controversy. The Cybertruck explosion in Las Vegas
- Kidnapping by AI: the case of Mike Johns and the legal risks of autonomous vehicles
It is worth noting that the text of the repealed order is no longer available on the White House website: the regulation has not only been revoked but also removed, along with archived versions, from the official source. At the moment, it can only be found here.
Why did Trump repeal the regulation?
Donald Trump argued that the rules introduced by Joe Biden were too strict and could limit the development of innovative technologies. From the Republicans’ perspective, requirements such as the obligation to report safety-test results and share information with the government could hinder the activities of technology companies and weaken their competitiveness in the global market.
Trump emphasised that the US approach to AI should be less bureaucratic and more focused on supporting innovation. The decision to repeal the regulation is in line with his philosophy of deregulation and limiting government interference in the private sector.
In addition, Biden’s order aimed to improve the safety of AI development by introducing transparency standards, reducing the risk of misinformation and counteracting algorithmic discrimination. Tech companies also had to disclose information about potential flaws in their models, including AI biases, a requirement that circles close to Trump criticised in particular as threatening their competitiveness.
Trump’s decision – liberalisation or risk?
The decision to repeal the order has met with mixed reactions. Experts such as Alondra Nelson of the Center for American Progress warn that the lack of safety standards will weaken consumer protection against AI-related risks. Alexander Nowrasteh from the Cato Institute, on the other hand, noted that abandoning some of Biden’s measures, such as provisions easing immigration for AI specialists, could have negative effects on the sector.
Trump’s supporters, however, argue that his decision is an opportunity to accelerate technological development. They emphasise that overly strict regulations, such as those introduced in Europe, can hamper innovation.
Image source: White House website
Consequences of Trump’s decision
Experts warn that the lack of clearly defined rules governing the development of AI can lead to a number of risks:
- Disinformation and fake news: The lack of guidelines for authenticating AI-generated content can facilitate the spread of false information.
- Threats to national security: Without proper safety testing, AI systems can be exploited for cybercrime or in warfare.
- Ethics and trust: The lack of regulation increases the risk of algorithmic discrimination and privacy violations, which can undermine public trust in AI technology.
On the other hand, supporters of Trump’s decision emphasise that liberalising regulations will allow for faster technology development and attract investment in the AI sector.
Will the US remain the leader in AI?
Trump’s decision to repeal Biden’s executive order opens a new chapter in the US approach to AI regulation. While Europe is focusing on protecting civil rights, the US may take a more liberal course, favouring freedom of innovation while at the same time putting even basic civil rights at risk.
However, the lack of a clearly defined legal framework in the long term may weaken the US’s position as a leader in the field of AI, especially in the context of international cooperation and the creation of global standards. It will be crucial to find a balance between supporting development and minimising the risks posed by this revolutionary technology.
Artificial intelligence remains one of the most important technologies of the 21st century, and the decisions made this week in the United States will shape its development for decades to come.