AGI – a machine on a human scale. Is the law keeping up with technology?

27 February 2025   /  AI

General artificial intelligence (AGI) is becoming an increasingly real prospect, posing unprecedented challenges for legislators and society. The recent AI Action Summit in Paris and the International Association for Safe and Ethical AI (IASEAI ‘25) conference have shed light on the urgent need for a comprehensive legal framework for this groundbreaking technology.

ai

The origins of the idea of artificial intelligence

The concept of artificial general intelligence dates back to the 1950s, when computer science pioneers began to analyse the possibility of creating machines that could match human intelligence. Alan Turing, one of the forerunners of AI, proposed the Turing test – an experiment to assess whether a machine can conduct a conversation indistinguishable from a human one.

A turning point in the history of AI was the 1956 Dartmouth conference, where the term ‘artificial intelligence’ was officially coined. The predictions at the time were that human-level intelligence would be achieved quickly, but technological progress did not keep pace with researchers’ optimism

The crisis and revival of AI

The 1980s and 1990s brought changes in the approach to artificial intelligence. Research focused on narrower applications of AI, such as expert systems, and the idea of artificial general intelligence (AGI) faded into the background. It was not until the dynamic development of computing technology, big data and deep learning in the 21st century that researchers turned their attention to the topic of AGI once again.

AI

What is AGI?

Unlike current AI systems, AGI has the potential to perform a wide range of intellectual tasks at a level comparable to that of humans. This versatility brings with it both great opportunities and serious risks that must be properly regulated.

There is currently no legal definition of AGI, but given the rapid development of AI technology, it is likely that one will be needed in the near future.

Many companies and organisations are already trying to define AGI. For example, in 2024, OpenAI created a definition of AGI and five levels of advancement.

According to OpenAI:

Today’s chatbots, such as ChatGPT, are at the first level of development.

OpenAI claims to be approaching the second level, which means a system capable of solving basic problems at the level of a person with a doctorate.

The third level is AI acting as an agent that can make decisions and perform tasks on behalf of the user.

At the fourth level, artificial intelligence achieves the ability to create new innovations.

The fifth, highest level, means AI that can do the work of entire organisations of people.

OpenAI previously defined AGI as ‘a highly autonomous system that surpasses humans in most economically valuable tasks’.

Stargate Project

Stargate Project

One of the most ambitious AGI projects is Project Stargate, which envisages investments of $500 billion in the development of AI infrastructure in the USA. The main goal is to strengthen the United States’ position as a leader in the field of AGI, as well as to create hundreds of thousands of jobs and generate global economic benefits.

AGI has the potential to take over a variety of tasks that have so far required human creativity and adaptability.

Read more about Stargate:

https://lbplegal.com/stargate-project-nowa-era-infrastruktury-ai-w-stanach-zjednoczonych/

Why is AGI regulation crucial? Expert opinions

During IASEAI ‘25 in Paris and the AI Action summit, experts emphasised the need to create comprehensive AGI regulations.

  • Dr Szymon Łukasik (NASK) pointed out that the law must keep pace with innovation while ensuring its security.
  • Prof Stuart Russell called for the introduction of mandatory safeguards to ensure that AGI is not used in a harmful way.
  • Joseph Stiglitz emphasised the need to take social interests into account in the legislative process.
  • Max Tegmark from the Future of Life Institute emphasised the importance of international cooperation, especially between the USA and China. Only a global approach will allow for the effective regulation of AGI development.

AGI

Risks associated with AGI

Many experts in Paris expressed concern about the development of AI, which they believe is happening very quickly; some believe AGI will emerge within the next 10 years. Key questions about AGI concerned the following issues:

  • Can AGI itself modify rules to achieve its objectives?
  • What should be the mechanisms for emergency shutdown of AGI?
  • How can we ensure that the AI’s objectives do not conflict with the interests of humanity?

Researchers are already verifying whether a model with a set objective, e.g. winning a game, is able to modify the rules of the game in order to win. In view of this fact, it is necessary to model the objectives of the AI system in such a way that they do not conflict with the interests of humanity.

These considerations and proposals aim to create a legal framework and standards that will allow for the safe and ethical development of AGI, while not hindering technological progress. Experts agree that regulations should be comprehensive and take into account both the technical aspects and the social implications of AGI development. Therefore, international cooperation is crucial for creating uniform safety and ethical standards.

Transparency and safety mechanisms

Transparency and safety mechanisms are other key aspects that must be taken into account in future regulations. Prof. Steward Russell postulated that companies developing AI should be legally obliged to implement safeguards against the harmful use of their technology. Experts also propose the legal requirement of an emergency shutdown mechanism for AI models and the creation of standards for security testing.

In the context of emergency shutdown mechanisms for AI models, there are two key ISO standards that address this issue:

  • ISO 13850 is a standard for emergency stop functions in machines. Although it does not directly refer to AI models, it establishes general principles for the design of emergency stop mechanisms that can potentially be applied to AI systems as well. This standard emphasises that the emergency stop function should be available and operational at all times, and its activation should take precedence over all other functions;
  • ISO/IEC 42001, on the other hand, is a more recent standard, published in December 2023, which directly addresses artificial intelligence management systems. It covers broader aspects of AI risk management, including risk and impact assessment of AI and AI system life cycle management.

Under ISO/IEC 42001, organisations are required to implement processes to identify, analyse, assess and monitor risks associated with AI systems throughout the life cycle of the management system. This standard emphasises continuous improvement and the maintenance of high standards in the development of AI systems.

It is worth noting that although these standards provide guidelines, specific emergency shutdown mechanisms for advanced AI systems such as AGI are still the subject of international research and discussion. Experts emphasise that as increasingly advanced AI systems are developed, traditional ‘shutdown’ methods may prove insufficient, and other standards and solutions will need to be developed.

Sztuczna inteligencja

Read more about AI:

https://lbplegal.com/sztuczna-inteligencja-czym-jest-z-prawnego-punktu-widzenia-i-jak-radzi-sobie-z-nia-swiat/

AGI regulation and cybersecurity

AGI can identify and exploit vulnerabilities faster than any human or current AI systems. Therefore, legal regulations for AGI should include:

  • Preventing the use of AGI for cyberattacks.
  • Standardising the security of AGI systems to limit the risk of takeover by malicious entities.
  • Determination of legal responsibility in the event of AGI-related incidents.

Interdisciplinary approach to lawmaking

The development of AGI requires an interdisciplinary approach that takes into account:

  • Cooperation between lawyers and AI experts, ethicists and economists.
  • Global regulations to protect human rights and privacy.
  • Transparency of AGI development and control mechanisms.

A proactive legal approach can make AGI safe and beneficial for all of humanity, without blocking technological progress.

AGI is a technology with enormous potential, but also with serious risks. It is crucial to create a legal framework and standards that will allow for its safe development. International cooperation and an interdisciplinary approach are key to ensuring that AGI serves humanity and does not pose a threat.

Share

Share

Need help with this topic?

Write to our expert

Grzegorz Leśniewski

Managing Partner, Attorney at law

+48 531 871 707 Contact

Articles in this category

DeepSeek – Chinese AI in open source mode. Does Hong Kong have a chance against OpenAI?

AI

More
DeepSeek – Chinese AI in open source mode. Does Hong Kong have a chance against OpenAI?

And the Oscar goes to … AI Brody

AI

More
And the Oscar goes to … AI Brody

BREAKING: new executive order from President Trump

AI

More
BREAKING: new executive order from President Trump

GPT chat not working. Thousands of user reports

AI

More
GPT chat not working. Thousands of user reports

Trump changes artificial intelligence regulations – new approach to AI in the USA

AI

More
Trump changes artificial intelligence regulations – new approach to AI in the USA
More

Contact

Any questions?see phone number+48 663 683 888
see email address

Hey, have you
signed up to our newsletter yet?

    Check how we process your personal data here