CUDA and NVIDIA’s dominance – invisible AI infrastructure beyond the scope of regulation?

18 July 2025

In April 2025, NVIDIA's market capitalisation surpassed $2.79 trillion. Its shares rose by over 170% in a year, making it the third most valuable publicly traded company in the world, behind Microsoft and Apple, and in 2024 it captured a 90% share of the AI chip market. Although just a few years ago NVIDIA was associated mainly with graphics cards for gamers, today it is a foundation of the global, AI-driven digital economy. Its GPUs – particularly the H100 series – are not only a strategic asset for data centres, but also the main engine behind foundation models, including the most advanced general-purpose systems such as ChatGPT.


CUDA – the AI engine that is changing the rules of the game

At the heart of NVIDIA’s transformation into a global leader in artificial intelligence is CUDA (Compute Unified Device Architecture), a proprietary programming platform that allows the full power of GPUs to be harnessed for scientific, industrial and commercial applications. CUDA is not just a technology layer – it is critical infrastructure for the scalability and efficiency of AI models.

It is not without reason that this platform is sometimes referred to as the ‘invisible AI operating system.’ It is a key element in the lifecycle of AI-based systems: from training and validation to the deployment of models in real-world applications. In practice, it is CUDA that defines how quickly and at what scale modern AI systems can be developed.
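
What ‘programming platform’ means here is easiest to see in code. Below is a minimal, illustrative sketch of the canonical CUDA pattern: a kernel marked `__global__` that the GPU runs across thousands of threads in parallel, launched from ordinary C++ host code via NVIDIA’s proprietary `<<<...>>>` syntax. It is a simplified example (error handling omitted), not production code:

```cuda
// Minimal CUDA sketch: element-wise vector addition.
#include <cuda_runtime.h>
#include <cstdio>

// The __global__ qualifier marks a kernel that runs on the GPU;
// each thread handles exactly one array element.
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                 // ~one million elements
    size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    // Unified (managed) memory is accessible from both CPU and GPU.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough blocks of 256 threads to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();               // wait for the GPU to finish

    printf("c[0] = %f\n", c[0]);           // expected: 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Everything in this snippet – the nvcc compiler, the runtime API and the launch syntax – belongs to NVIDIA’s toolchain, which is precisely what makes CUDA infrastructural rather than incidental.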

GPU vs CPU – why are graphics processing units crucial for artificial intelligence?

In the context of training large language models and processing data at massive scale, conventional processors (CPUs) are no longer sufficient. The key features of GPUs – especially NVIDIA’s – give them a decisive advantage in AI environments:

  • Parallel architecture – GPUs such as the NVIDIA H100 contain thousands of cores that enable simultaneous processing of large data sets – ideal for the matrix operations used in neural networks (see the sketch after this list).
  • Energy efficiency – next-generation graphics chips offer up to 25 times higher energy efficiency compared to previous solutions, which translates into lower operating costs and greater scalability.
  • High-bandwidth memory – technologies such as HBM2e and HBM3 (High Bandwidth Memory) enable lightning-fast processing of terabytes of data – essential for real-time and critical applications.
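
To make the first bullet concrete, here is an illustrative sketch of the matrix multiplication at the heart of every neural-network layer, written as a deliberately naive CUDA kernel in which each GPU thread computes one output element. Real deployments would call optimised, closed-source libraries such as cuBLAS instead:

```cuda
// Sketch: naive matrix multiplication C = A * B for n x n matrices.
// One GPU thread computes one output element - the uniform, massively
// parallel workload shape that GPUs are built for.
__global__ void matMul(const float* A, const float* B, float* C, int n) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < n && col < n) {
        float acc = 0.0f;
        for (int k = 0; k < n; ++k)
            acc += A[row * n + k] * B[k * n + col];
        C[row * n + col] = acc;
    }
}

// Launch configuration: a 2D grid of 16x16-thread blocks tiling the
// output matrix (dA, dB, dC are device pointers allocated beforehand):
//   dim3 threads(16, 16);
//   dim3 blocks((n + 15) / 16, (n + 15) / 16);
//   matMul<<<blocks, threads>>>(dA, dB, dC, n);
```

For a 4096 × 4096 product this launch keeps roughly 16.7 million threads in flight – a degree of parallelism that a CPU with a few dozen cores simply cannot match.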


The closed CUDA ecosystem – both a strength and a weakness

As a closed solution, CUDA offers huge performance gains – NVIDIA reports up to a 1,000-fold increase in GPU computing performance over the last decade. However, the fact that this technology is controlled by a single company raises concerns:

  • Technological dominance – over 80% of AI models – including all major foundation models – are trained in the CUDA environment.
  • Lack of alternatives – open solutions such as AMD ROCm and Intel oneAPI have less than 10% market share, mainly due to weaker optimisation and lack of full compatibility with popular AI libraries.
  • Network effect – the more developers use CUDA, the more difficult it is to switch to competing solutions – this creates a closed ecosystem that is difficult for the market to counterbalance (see the sketch below).
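
A small sketch makes the network effect tangible. The function below (denseLayerForward is our own illustrative name, not a library API) implements a dense-layer forward pass the way most AI stacks ultimately do – by calling cuBLAS, NVIDIA’s closed-source linear-algebra library. Neither these symbols nor the `<<<...>>>` launch syntax exist outside NVIDIA’s toolchain, so migrating means rewriting and revalidating against ROCm/HIP or SYCL rather than swapping a dependency:

```cuda
// Sketch of CUDA lock-in in practice: a dense layer y = W * x
// delegated to cuBLAS, whose hand-tuned kernels are closed source.
#include <cublas_v2.h>

void denseLayerForward(cublasHandle_t handle,
                       const float* W,   // out_dim x in_dim, column-major
                       const float* x,   // in_dim
                       float* y,         // out_dim
                       int out_dim, int in_dim) {
    const float alpha = 1.0f, beta = 0.0f;
    // A single cuBLAS call dispatches a proprietary, hardware-specific
    // kernel - the performance developers depend on and cannot inspect.
    cublasSgemv(handle, CUBLAS_OP_N, out_dim, in_dim,
                &alpha, W, out_dim, x, 1, &beta, y, 1);
}
```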

AI infrastructure and European law: a gap in the AI Act?

The AI Act (Regulation (EU) 2024/1689) is the first comprehensive piece of legislation regulating the use of artificial intelligence in Europe. However, it focuses mainly on the algorithmic level – on training data, model transparency and the risks of their use.

Meanwhile, the computational layer – the infrastructure without which these systems cannot exist – remains outside its direct scope.

CUDA is not classified as a standalone AI system, but its impact on the compliance, auditability and security of AI systems is undeniable. Without the ability to verify how the infrastructure operates – both the hardware (black-box GPUs) and the closed software – it is difficult to speak of fully implementing the principles of transparency and accountability.

Legal consequences – monopoly, dependency, lack of audit

The lack of regulation in the field of computing infrastructure raises specific legal and systemic issues:

  • Limited auditability – the closed nature of CUDA makes it difficult to meet the requirements of Article 13 of the AI Act regarding transparency and verifiability.
  • Monopoly risk – a price increase of over 300% for GPUs between 2020 and 2024 may indicate abuse of a dominant position (Article 102 TFEU).
  • Lack of EU technological sovereignty – as many as 98% of European AI data centres use NVIDIA technology, raising serious questions about infrastructure independence and resilience to external disruption.


Is accountability without transparency possible?

The AI Act establishes liability along the value chain – obligations apply not only to the developers of systems, but also to their users and distributors. Market reality, however, shows that end users have no real way of assessing the CUDA infrastructure they rely on indirectly. There are no technical standards or disclosure requirements covering how closed platforms operate.

Recommendations for regulators and the AI industry

Although not formally classified as an AI system, CUDA should be recognised as a component that affects compliance, auditability and security. Recommendations:

  • EC and AI Office guidance – it is necessary to develop legal interpretations that take into account the impact of computing platforms on AI systems, as has been done for cloud computing under the GDPR.
  • Promoting technological neutrality – EU technology support programmes (e.g. Digital Europe) should favour open, interoperable technologies.
  • Revision of the scope of the AI Act – in the long term, it is worth considering updating the AI Act to also cover technological infrastructure as a factor determining the safety and compliance of AI systems.

CUDA – a technological marvel or a legal risk?

CUDA is undoubtedly a technology that has enabled unprecedented progress in the field of AI. However, its closed structure, market dominance and lack of regulatory oversight may mean that responsibility for AI systems becomes illusory. For the EU, which is committed to transparency, ethics and digital sovereignty, this is a challenge that can no longer be ignored.

* * *

ART. 13 AI Act

Transparency and information sharing with users

  1. High-risk AI systems shall be designed and developed in a manner that ensures sufficient transparency of their performance, enabling users to interpret the results of the system and use them appropriately. The appropriate type and level of transparency shall be ensured in order to enable the supplier and the user to fulfil their respective obligations set out in Section 3.
  2. High-risk AI systems shall be accompanied by a user manual in an appropriate digital or other format containing concise, complete, accurate and clear information that is relevant, accessible and understandable to users.
  3. The user manual shall contain at least the following information:

(a) the identity and contact details of the supplier and, where applicable, its authorised representative;

(b) the characteristics, capabilities and limitations of the performance of the high-risk AI system, including:

(i) its intended use;

(ii) the level of accuracy, including its indicators, the level of robustness and cybersecurity referred to in Article 15, against which the high-risk AI system has been tested and validated and which can be expected, as well as any known and foreseeable circumstances that may affect those expected levels of accuracy, robustness and cybersecurity;

(iii) any known or foreseeable circumstances related to the use of the high-risk AI system in accordance with its intended purpose or under reasonably foreseeable conditions of misuse that could give rise to a risk to health and safety or fundamental rights as referred to in Article 9(2);

(iv) where applicable, the technical capabilities and features of the high-risk AI system to provide information relevant to the explanation of its performance;

(v) where applicable, the performance of the system in relation to specific individuals or groups of individuals for whom it is intended to be used;

(vi) where applicable, specifications regarding input data or any other relevant information regarding the training, validation and testing data sets used, taking into account the intended use of the high-risk AI system;

(vii) where applicable, information enabling users to interpret the results of the high-risk AI system and to use those results appropriately;

(c) changes to the high-risk AI system and its performance that have been planned in advance by the supplier at the time of the initial conformity assessment;

(d) the human oversight measures referred to in Article 14, including technical measures introduced to facilitate the interpretation of the results of high-risk AI systems by users;

(e) the necessary computing and hardware resources, the expected life cycle of the high-risk AI system and any necessary maintenance and servicing measures, including their frequency, to ensure the proper functioning of that AI system, including software updates;

(f) where applicable, a description of the mechanisms included in the high-risk AI system that enable entities using it to correctly collect, store and interpret event logs in accordance with Article 12.

ART. 102 TFEU

Prohibition of abuse of a dominant position

Any abuse by one or more undertakings of a dominant position within the internal market or in a significant part thereof shall be prohibited as incompatible with the internal market in so far as it may affect trade between Member States.

Such abuse may, in particular, consist in:

(a) directly or indirectly imposing unfair purchase or selling prices or other unfair trading conditions;

(b) limiting production, markets or technical development to the prejudice of consumers;

(c) applying dissimilar conditions to equivalent transactions with other trading partners, thereby placing them at a competitive disadvantage;

(d) making the conclusion of contracts subject to acceptance by the other parties of supplementary obligations which, by their nature or according to commercial practice, do not relate to the subject of such contracts.
