CUDA and NVIDIA’s dominance – invisible AI infrastructure beyond the scope of regulation?

In April 2025, NVIDIA surpassed a market capitalisation of $2.79 trillion. Its shares had risen by over 170% in a year, making it the third most valuable publicly traded company in the world, behind Microsoft and Apple, with a roughly 90% share of the AI chip market in 2024. Although just a few years ago NVIDIA was mainly associated with graphics cards for gamers, today it is a foundation of the global, AI-driven digital economy. Its GPUs – particularly the H100 series – are not only a strategic asset for data centres, but also the main driver behind the development of foundation models, including the most advanced general-purpose language models such as those behind ChatGPT.


CUDA – the AI engine that is changing the rules of the game

At the heart of NVIDIA’s transformation into a global leader in artificial intelligence is CUDA (Compute Unified Device Architecture), a proprietary programming platform that allows the full power of GPUs to be harnessed for scientific, industrial and commercial applications. CUDA is not just a technology layer – it is critical infrastructure for the scalability and efficiency of AI models.

It is not without reason that this platform is sometimes referred to as the ‘invisible AI operating system.’ It is a key element in the lifecycle of AI-based systems: from training and validation to the deployment of models in real-world applications. In practice, it is CUDA that defines how quickly and at what scale modern AI systems can be developed.
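
To make this layering concrete, below is a minimal sketch – in Python, assuming the widely used PyTorch framework – of how application code sits on top of CUDA: the developer writes hardware-neutral tensor code, and the framework silently dispatches it to CUDA kernels whenever an NVIDIA GPU is present.

```python
# A minimal sketch (Python + PyTorch) of the CUDA dependency described above.
# The framework, not the user, decides whether the matrix multiply below is
# executed by CUDA kernels on an NVIDIA GPU or by the CPU.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
if device.type == "cuda":
    # PyTorch's CUDA backend is only usable with NVIDIA hardware and the
    # proprietary CUDA toolkit -- the "invisible" layer this article describes.
    print("CUDA device:", torch.cuda.get_device_name(0))

x = torch.randn(4096, 4096, device=device)
y = torch.randn(4096, 4096, device=device)
z = x @ y  # dispatched to cuBLAS (a CUDA library) when device is "cuda"
print(z.shape, z.device)
```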

GPU vs CPU – why are graphics processing units crucial for artificial intelligence?

When it comes to training large language models and processing data at massive scale, conventional processors (CPUs) are no longer sufficient. The key features of GPUs – especially NVIDIA’s – give them the advantage in AI environments:

  • Parallel architecture – GPUs such as the NVIDIA H100 contain thousands of cores that process large data sets simultaneously – ideal for the matrix operations at the heart of neural networks (a timing sketch follows this list).
  • Energy efficiency – NVIDIA advertises up to 25 times higher energy efficiency for its newest generation of chips compared with the previous one, which translates into lower operating costs and greater scalability.
  • High-bandwidth memory – technologies such as HBM (High Bandwidth Memory) allow terabytes of data to be processed at very high speed – essential for real-time and other critical applications.
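
As promised above, the following sketch (Python + PyTorch; the absolute numbers depend entirely on the hardware it runs on) times the same matrix multiplication on CPU and GPU. The point is not the exact figures but the order-of-magnitude gap that parallel execution produces.

```python
# A rough, hedged CPU-vs-GPU comparison of one matrix multiplication.
import time
import torch

def time_matmul(device: str, n: int = 4096) -> float:
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()  # GPU work is asynchronous; sync before timing
    start = time.perf_counter()
    _ = a @ b
    if device == "cuda":
        torch.cuda.synchronize()  # wait for the kernel to finish
    return time.perf_counter() - start

print(f"CPU: {time_matmul('cpu'):.4f} s")
if torch.cuda.is_available():
    print(f"GPU: {time_matmul('cuda'):.4f} s")
```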


The closed CUDA ecosystem – both a strength and a weakness

As a closed solution, CUDA has delivered huge performance gains – reportedly up to 1,000-fold over the last decade. However, the fact that this technology is controlled by a single company raises concerns:

  • Technological dominance – over 80% of AI models – including all major foundation models – are trained in the CUDA environment.
  • Lack of alternatives – open solutions such as AMD ROCm and Intel oneAPI have less than 10% market share, mainly due to weaker optimisation and lack of full compatibility with popular AI libraries.
  • Network effect – the more developers use CUDA, the more difficult it becomes to switch to competing solutions – a closed ecosystem that the market struggles to counterbalance (a device-agnostic coding pattern that partially mitigates this lock-in is sketched below).
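
For completeness, here is a sketch of the device-agnostic pattern mentioned in the last bullet. It softens, but does not remove, the lock-in – notably, even AMD’s ROCm build of PyTorch reuses the torch.cuda API, which itself shows how deeply CUDA conventions shape the ecosystem. Python with PyTorch is assumed, as above.

```python
# A sketch of device-agnostic code: prefer CUDA, fall back to other backends.
import torch

def pick_device() -> torch.device:
    if torch.cuda.is_available():          # NVIDIA CUDA (or ROCm posing as it)
        return torch.device("cuda")
    if torch.backends.mps.is_available():  # Apple's Metal backend
        return torch.device("mps")
    return torch.device("cpu")             # portable fallback

model_input = torch.randn(8, 512, device=pick_device())
print(model_input.device)
```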

AI infrastructure and European law: a gap in the AI Act?

The AI Act (EU 2024/1689) is the first comprehensive piece of legislation regulating the use of artificial intelligence in Europe. However, it focuses mainly on the algorithmic level – on training data, model transparency and the risks of their use.

Meanwhile, the computational layer – the infrastructure without which these systems cannot exist – remains outside its direct scope.

CUDA is not classified as a standalone AI system, but its impact on the compliance, auditability and security of AI systems is undeniable. Without the ability to verify how the infrastructure operates – both the hardware (black-box GPUs) and the closed software – it is hard to speak of fully implementing the principles of transparency and accountability.

Legal consequences – monopoly, dependency, lack of audit

The lack of regulation in the field of computing infrastructure raises specific legal and systemic issues:

  • Limited auditability – the closed nature of CUDA makes it difficult to meet the requirements of Article 13 of the AI Act regarding transparency and verifiability.
  • Monopoly risk – a price increase of over 300% for GPUs between 2020 and 2024 may indicate abuse of a dominant position (Article 102 TFEU).
  • Lack of EU technological sovereignty – as many as 98% of European AI data centres use NVIDIA technology, raising serious questions about infrastructure independence and resilience to external disruption.


Is accountability without transparency possible?

The AI Act establishes chain liability – responsibilities apply not only to system developers, but also to users and distributors. However, market reality shows that end users have no real way of assessing the CUDA infrastructure they use indirectly. There are no technical standards or requirements disclosing the details of how closed platforms operate.
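
Purely as an illustration of what such a standard could look like – the field names below are the author’s own shorthand, not an official schema – the information required by Article 13(3) of the AI Act (quoted in full at the end of this article) could be captured as machine-readable metadata. Point (e) would be the natural place to disclose GPU and CUDA dependencies.

```python
# A hypothetical, illustrative data structure for Article 13(3) disclosures.
from dataclasses import dataclass

@dataclass
class InstructionsForUse:
    provider_identity: str               # Art. 13(3)(a)
    intended_purpose: str                # Art. 13(3)(b)(i)
    accuracy_metrics: dict[str, float]   # Art. 13(3)(b)(ii)
    known_risk_circumstances: list[str]  # Art. 13(3)(b)(iii)
    human_oversight_measures: list[str]  # Art. 13(3)(d)
    compute_and_hardware: str            # Art. 13(3)(e) -- where GPU/CUDA
                                         # dependencies would surface
    log_interpretation_notes: str        # Art. 13(3)(f)

doc = InstructionsForUse(
    provider_identity="Example Provider sp. z o.o.",
    intended_purpose="CV screening support for recruiters",
    accuracy_metrics={"f1": 0.91},
    known_risk_circumstances=["reduced accuracy on scanned, non-digital CVs"],
    human_oversight_measures=["a recruiter reviews every negative decision"],
    compute_and_hardware="NVIDIA GPU, CUDA 12.x runtime, 24 GB VRAM",
    log_interpretation_notes="event logs follow Article 12 requirements",
)
print(doc.compute_and_hardware)
```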

Recommendations for regulators and the AI industry

Although not formally classified as an AI system, CUDA should be recognised as a component that affects compliance, auditability and security. Recommendations:

  • EC guidelines and the AI Office – legal interpretations should be developed that take into account the impact of computing platforms on AI systems, much as was done for cloud computing under the GDPR.
  • Promoting technological neutrality – EU technology support programmes (e.g. Digital Europe) should favour open, interoperable technologies.
  • Revision of the scope of the AI Act – in the long term, it is worth considering updating the AI Act to also cover technological infrastructure as a factor determining the safety and compliance of AI systems.

CUDA – a technological marvel or a legal risk?

CUDA is undoubtedly a technology that has enabled unprecedented progress in the field of AI. However, its closed structure, market dominance and lack of regulatory oversight may mean that responsibility for AI systems becomes illusory. For the EU, which is committed to transparency, ethics and digital sovereignty, this is a challenge that can no longer be ignored.

* * *

ART. 13 AI Act

Transparency and information sharing with users

  1. High-risk AI systems shall be designed and developed in a manner that ensures sufficient transparency of their performance, enabling users to interpret the results of the system and use them appropriately. The appropriate type and level of transparency shall be ensured in order to enable the supplier and the user to fulfil their respective obligations set out in Section 3.
  2. High-risk AI systems shall be accompanied by a user manual in an appropriate digital or other format containing concise, complete, accurate and clear information that is relevant, accessible and understandable to users.
  3. The user manual shall contain at least the following information:

(a) the identity and contact details of the supplier and, where applicable, its authorised representative;

(b) the characteristics, capabilities and limitations of the performance of the high-risk AI system, including:

(i) its intended use;

(ii) the level of accuracy, including its indicators, the level of robustness and cybersecurity referred to in Article 15, against which the high-risk AI system has been tested and validated and which can be expected, as well as any known and foreseeable circumstances that may affect those expected levels of accuracy, robustness and cybersecurity;

(iii) any known or foreseeable circumstances related to the use of the high-risk AI system in accordance with its intended purpose or under reasonably foreseeable conditions of misuse that could give rise to a risk to health and safety or fundamental rights as referred to in Article 9(2);

(iv) where applicable, the technical capabilities and features of the high-risk AI system to provide information relevant to the explanation of its performance;

(v) where applicable, the performance of the system in relation to specific individuals or groups of individuals for whom it is intended to be used;

(vi) where applicable, specifications regarding input data or any other relevant information regarding the training, validation and testing data sets used, taking into account the intended use of the high-risk AI system;

(vii) where applicable, information enabling users to interpret the results of the high-risk AI system and to use those results appropriately;

(c) changes to the high-risk AI system and its performance that have been planned in advance by the supplier at the time of the initial conformity assessment;

(d) the human oversight measures referred to in Article 14, including technical measures introduced to facilitate the interpretation of the results of high-risk AI systems by users;

(e) the necessary computing and hardware resources, the expected life cycle of the high-risk AI system and any necessary maintenance and servicing measures, including their frequency, to ensure the proper functioning of that AI system, including software updates;

(f) where applicable, a description of the mechanisms included in the high-risk AI system that enable entities using it to correctly collect, store and interpret event logs in accordance with Article 12.

ART. 102 TFEU

Prohibition of abuse of a dominant position

Any abuse by one or more undertakings of a dominant position within the internal market or in a significant part thereof shall be prohibited as incompatible with the internal market in so far as it may affect trade between Member States.

Such abuse may, in particular, consist in:

(a) directly or indirectly imposing unfair purchase or selling prices or other unfair trading conditions;

(b) limiting production, markets or technical development to the prejudice of consumers;

(c) applying dissimilar conditions to equivalent transactions with other trading partners, thereby placing them at a competitive disadvantage;

(d) making the conclusion of contracts subject to acceptance by the other parties of supplementary obligations which, by their nature or according to commercial practice, do not relate to the subject of such contracts.

Types of crypto assets regulated by MiCA

The MiCA (Markets in Crypto-Assets) Regulation is the first European Union legal act that comprehensively regulates the rights and obligations of issuers and service providers related to crypto-assets. The aim of MiCA is to ensure a high level of investor protection, particularly for retail investors, to increase the transparency of the crypto-asset market and to harmonise the rules governing this market across the European Union. Thanks to MiCA, the crypto-asset market is gaining clear rules, which promotes investment security and the development of the industry.



Under MiCA, crypto-assets are digital representations of value or rights that can be transferred and stored electronically using distributed ledger technology (DLT) or similar technologies.

The MiCA Regulation distinguishes between three main types of crypto assets, which differ in terms of their characteristics and level of risk. This distinction is crucial as it determines the regulatory obligations of companies issuing crypto assets or offering crypto assets to investors. Thanks to the clear definitions in MiCA, companies can align their activities with legal requirements and investors are better protected in the crypto asset market.

Categories of crypto assets:

Asset-Referenced Tokens (ART)

Asset-Referenced Tokens (ART) are crypto-assets that purport to maintain a stable value by referencing another value or right, or a combination thereof, including one or more official currencies.

ARTs are not considered electronic money tokens (EMTs). The key difference is that the value of an ART is not determined solely by a single official currency. If a crypto-asset bases its value on more than one measure, or on a combination of assets that includes at least one official currency, it will be classified as an ART.

The issuer of an ART token is required to enable its redemption, either by paying cash other than electronic money corresponding to the market value of the assets associated with the token, or by delivering those assets.

  • MiCA allows some flexibility in determining the ART value measure, but redemption must be possible in cash or through the delivery of the underlying asset.
  • In particular, the issuer should always ensure that redemption is possible in cash (other than electronic money) denominated in the same official currency that was accepted at the time of sale of the token.

E-Money Tokens (EMT)

EMT tokens are linked to a single official currency (e.g. the euro) and serve as a digital equivalent of traditional money. Their key feature is a guaranteed redemption at face value.

Only credit institutions or electronic money institutions may issue e-money tokens. These entities must ensure that token holders can exercise their redemption right at any time, at face value and in the currency to which the token is linked.

Examples of such tokens are euro-linked stablecoins, which aim to maintain a 1:1 parity with the euro. Under MiCA, issuers of such tokens must meet strict regulatory requirements, including having the appropriate legal status and ensuring that holders can actually redeem their tokens at nominal value.

Other crypto assets

This category covers crypto-assets that are neither asset-referenced tokens nor e-money tokens, such as Bitcoin (BTC) and Ethereum (ETH), which have no value-stabilisation mechanism.

It also includes utility tokens, which give holders access to specific services or goods offered by the issuer. Such a token can be compared to a digital voucher or ticket entitling the holder to use a specific service or purchase a specific good.
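
With heavy simplification – classification is ultimately a legal assessment, not a mechanical test – the decision tree described in this section can be sketched as follows. The Python data model below is the author’s own illustration, not anything defined by the Regulation.

```python
# An illustrative, deliberately simplified MiCA classification sketch:
# EMT = pegged to exactly one official currency; ART = pegged to any other
# value/right or basket; everything else falls into "other crypto-assets".
from dataclasses import dataclass

@dataclass
class CryptoAsset:
    name: str
    references_value: bool    # does it peg its value to anything at all?
    official_currencies: int  # number of official currencies referenced
    other_assets: int         # other values/rights in the reference basket

def classify_mica(asset: CryptoAsset) -> str:
    if not asset.references_value:
        return "other crypto-asset (e.g. BTC/ETH or a utility token)"
    if asset.official_currencies == 1 and asset.other_assets == 0:
        return "e-money token (EMT)"
    return "asset-referenced token (ART)"

print(classify_mica(CryptoAsset("euro stablecoin", True, 1, 0)))  # EMT
print(classify_mica(CryptoAsset("basket token", True, 1, 2)))     # ART
print(classify_mica(CryptoAsset("bitcoin", False, 0, 0)))         # other
```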


Crypto assets excluded from MiCA regulation

MiCA does not cover all digital assets. The Regulation excludes from its scope certain categories of digital assets that are either already regulated by other EU legal acts or do not meet the definition of crypto assets within the meaning of MiCA. In particular, the provisions exclude financial instruments and financial products that are subject to MiFID II.

In accordance with Article 2(4) of the Regulation, the following are also excluded from the scope of MiCA:

  • deposits, including structured deposits,
  • funds (unless they qualify as e-money tokens),
  • insurance, pension products and schemes.

Non-fungible tokens (NFTs)

The MiCA Regulation also does not regulate non-fungible tokens (NFTs), provided that they are truly unique and non-fungible. This applies, for example, to digital artworks or unique collectibles in computer games.

One important distinction, however: if crypto-assets are issued as non-fungible tokens in a large series or collection, this may indicate that they are in fact fungible, which would bring them within the scope of MiCA. Likewise, fractional parts of a unique and non-fungible crypto-asset are not themselves considered unique and non-fungible, so they too will be subject to MiCA.

Crypto assets limited to internal networks

The MiCA Regulation also does not cover crypto assets used in closed networks, such as loyalty points or vouchers accepted only by their issuer. This exception applies to digital assets that operate within a limited ecosystem and are not intended for wider trading on the market.


Conclusions and Recommendations

We encourage you to contact a lawyer for comprehensive legal support in determining the classification of crypto assets and ensuring compliance with regulations governing the crypto asset and financial instrument markets.

Sójka AI – digital guardian of ethics and security

In an age of constantly evolving technology, artificial intelligence (AI) plays an increasingly important role in our everyday lives. From apps that help us organise our time to image recognition algorithms, AI is gaining popularity, but it also poses serious challenges – particularly in the areas of ethics, the spread of fake news, law and cybersecurity. The Sójka AI project was created in response to these challenges, combining ethical boundaries for the technology with a focus on user safety.


Ethical Artificial Intelligence – a necessity in the digital age

The internet is a space that offers unlimited possibilities, but at the same time it is full of threats. At every turn, you can encounter toxic content, hate speech or manipulation. We are also increasingly using artificial intelligence tools designed to support us in our daily lives. Unfortunately, AI can also pose a threat if it is not properly designed and controlled. That is why the Sójka AI project was created, which acts as a digital ethics guardian, protecting users from harmful content.

Why do we need ethical artificial intelligence?

The advantage of artificial intelligence is its ability to analyse huge amounts of data in a short time. This allows AI to identify and respond to problems that often escape human attention.

However, without appropriate ethical boundaries, such algorithms can generate dangerous content or encourage inappropriate behaviour. Sójka AI is the answer to these threats, providing tools to moderate content and eliminate toxic and potentially dangerous interactions.


Bielik AI – a taste of responsible artificial intelligence

Bielik AI is the first step towards more responsible AI technology in Poland. The project has gained recognition for its ability to analyse data, detect threats and support the ethical development of AI, using advanced algorithms that analyse data in an ethical context and help ensure a safe and positive online experience for users.

Community collaboration

An important aspect of Bielik AI was its collaboration with users, who helped develop the project through their experiences and suggestions. The same approach has been adopted by the creators of Sójka AI, which also relies on collaboration with Internet users to create an algorithm that will effectively protect users from online threats. Link to the survey.

How does Bielik AI influence the development of Sójka AI?

Bielik AI served as inspiration for the creation of Sójka AI. Thanks to the experience gained in creating Bielik, the creators of Sójka were able to focus on developing new technologies that will enable even more effective content moderation, detection of harmful activities, and protection of users against manipulation and inappropriate content.


What can Sójka AI – a digital ethics guardian – do?

Sójka AI is an algorithm that acts as a digital guardian, eliminating toxic content and protecting users from harmful information. Thanks to its advanced design, Sójka can:

  • analyse chats and detect toxic intent – the algorithm can recognise when an online conversation takes a negative turn and users begin using offensive words or expressions (a simplified pipeline sketch follows this list);
  • verify AI responses and eliminate dangerous content – Sójka AI can analyse responses generated by other AI models, checking that they comply with ethical principles and contain no harmful content;
  • moderate content, protecting against hate and manipulation – Sójka AI can moderate content across different spaces, eliminating hate speech and manipulation;
  • protect the mental well-being of the youngest users – Sójka AI pays special attention to younger Internet users, shielding them from content that could harm their mental health.
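
Sójka AI’s internal models and thresholds are not public, so the Python sketch below only illustrates the general shape of such a moderation pipeline. The toxicity_score function is a deliberately naive stand-in for a real classifier; the thresholds are invented for the example.

```python
# A generic content-moderation pipeline sketch: score a message, then block,
# escalate to a human, or allow. NOT Sójka AI's actual logic.
import re

def toxicity_score(text: str) -> float:
    """Hypothetical placeholder: a trivial keyword heuristic, not a real model."""
    toxic_markers = {"idiot", "hate", "kill"}
    words = set(re.findall(r"\w+", text.lower()))
    return min(1.0, 0.4 * len(words & toxic_markers))

def moderate(message: str, block_threshold: float = 0.7,
             review_threshold: float = 0.3) -> str:
    score = toxicity_score(message)
    if score >= block_threshold:
        return "blocked"
    if score >= review_threshold:
        return "flagged for human review"  # keeps a human in the loop
    return "allowed"

print(moderate("have a nice day"))         # allowed
print(moderate("you idiot, I hate this"))  # blocked
```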

Practical applications of Sójka AI

Sójka AI can be used in many areas, both in business and in the everyday lives of users. Thanks to its ability to analyse large data sets and respond quickly to threats, it can be used, for example, to:

  • moderating online discussions – companies and online platforms can use Sójka to manage discussions and eliminate negative content in real time;
  • supporting educational platforms – Sójka AI can monitor student interactions on educational platforms, ensuring they are not exposed to harmful or inappropriate content;
  • enhancing social media platforms – with Sójka AI, social media can become a friendlier and safer place for all users, especially the youngest;
  • and much more.


Legal context and the future of ethical AI models

Projects such as Sójka AI are only the beginning of the quest to create a safe, ethical and responsible Internet. With each passing year, AI technology will become more complex and its role in our lives will become more significant.

Ethical AI models in the European Union – law, practice and directions for development

With the rapid development of artificial intelligence, the European Union has become a global leader in shaping the legal and ethical framework for this technology. Projects such as Sójka AI and Bielik AI in Poland illustrate the evolution from pure innovation to the responsible implementation of systems that respect fundamental rights, transparency and user safety. At the heart of this transformation is the AI Act – the world’s first comprehensive regulation governing AI, which, together with other initiatives, creates a coherent ecosystem for ethical artificial intelligence.

Legal basis for ethical AI in the EU

AI Act – risk classification and obligations

The AI Act, which entered into force on 1 August 2024, introduces a four-tier risk model:

  • unacceptable risk (e.g. social scoring systems, manipulative AI exploiting the vulnerabilities of specific groups) – total ban on use;
  • high risk (e.g. recruitment systems, credit scoring, medical diagnostics) – requires certification, audits and continuous supervision;
  • limited risk (e.g. chatbots) – obligation to inform users that they are interacting with AI;
  • minimal risk (e.g. spam filters) – no additional regulations beyond general ethical principles.

Example: content moderation systems such as Sójka AI may qualify as high-risk systems due to their impact on freedom of expression and data protection. If an AI system is classified in this category, it must then comply with the AI Act requirements regarding data quality, algorithm transparency and appeal mechanisms.
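
The tier logic can be sketched as a simple lookup table. The example use-case assignments below are illustrative guesses, not legal conclusions – in practice each system must be assessed against Annex III of the AI Act.

```python
# An illustrative mapping of the four AI Act risk tiers to obligations.
RISK_TIERS = {
    "unacceptable": "prohibited outright",
    "high": "certification, audits, registration and continuous supervision",
    "limited": "duty to inform users they are interacting with AI",
    "minimal": "no obligations beyond general ethical principles",
}

# Hypothetical examples only -- real classification needs legal analysis.
EXAMPLE_USE_CASES = {
    "social scoring": "unacceptable",
    "recruitment screening": "high",
    "content moderation at scale": "high",
    "customer service chatbot": "limited",
    "spam filter": "minimal",
}

def obligations(use_case: str) -> str:
    tier = EXAMPLE_USE_CASES.get(use_case)
    if tier is None:
        return f"{use_case}: unclassified -- assess against Annex III"
    return f"{use_case}: {tier} risk -> {RISK_TIERS[tier]}"

print(obligations("recruitment screening"))
print(obligations("spam filter"))
```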

EU ethical guidelines

In 2019, the European Commission defined seven pillars of trustworthy AI:

  1. respect for human autonomy;
  2. technical and societal safety;
  3. privacy protection;
  4. transparency;
  5. diversity and non-discrimination;
  6. social responsibility;
  7. sustainable development.

These principles have become the basis for the AI Act, especially in the context of the requirement for ‘human-centric’ system design.

Initiatives in the EU

  • GenAI4EU – a programme supporting the implementation of generative AI in key sectors of the economy.

https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence

The STEP Platform – a new era for AI in the European Union

 

Steps for businesses – how to build ethical AI in line with the AI Act?

1. Risk mapping and system audits

– ensure that functionalities and objectives are lawful, ethical and do not create opportunities for abuse. Consider how the results of the AI system’s work may be used by third parties;

– identify any compliance requirements (these may vary depending on your company’s location and target markets) and confirm that you are able to meet them;

– create a list of all potential risks and risk mitigation measures in relation to functionality and legal requirements;

– conduct risk analyses in the context of the compliance requirements you are subject to. For example, if the system will use personal data as part of a training database, conduct a DPIA (Data Protection Impact Assessment). This will help you understand the scope of the project and the challenges it faces.

More here: How to develop AI tools legally?

2. Implement governance

  • Establish an AI ethics team consisting of lawyers, engineers, and compliance specialists, depending on the situation and capabilities.
  • Develop an AI use policy that takes into account EU guidelines and industry specifics. You can read more about this here:

What should an AI systems use policy contain?

3. Transparency and control

4. Data management

  • Use debiasing algorithms to eliminate bias in training data (a minimal sketch follows this list).

Debiasing algorithms are techniques used in artificial intelligence (AI) and machine learning to remove or reduce bias in data that can lead to unfair, discriminatory or inaccurate results. Bias in the context of AI refers to unintended tendencies in a system that may arise as a result of errors in the input data or in the way the AI model is trained.

  • Perform regular data quality audits, especially for systems that use sensitive data (race, religion, health).
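
To make the concept concrete, below is a minimal Python sketch of one classic debiasing technique, reweighing (Kamiran and Calders): training examples are weighted so that the protected attribute and the label look statistically independent. It is one of many possible approaches, shown here only as an illustration.

```python
# Reweighing: weight(g, y) = P(g) * P(y) / P(g, y), estimated from the data.
# Under-represented (group, label) pairs receive weights greater than 1.
from collections import Counter

def reweighing(groups: list[str], labels: list[int]) -> list[float]:
    n = len(labels)
    g_count = Counter(groups)
    y_count = Counter(labels)
    gy_count = Counter(zip(groups, labels))
    return [
        (g_count[g] / n) * (y_count[y] / n) / (gy_count[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

groups = ["a", "a", "a", "b", "b", "b"]
labels = [1, 1, 0, 1, 0, 0]
print(reweighing(groups, labels))  # e.g. ("a", 0) is rarer, so it gets weight 1.5
```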

5. Certification and compliance

  • Use the certification platforms developed as part of the CERTAIN project, which automate compliance checks with the AI Act.
  • Register high-risk systems in the European AI database.

6. Training and organisational culture

  • Conduct educational programmes for employees.

AI Literacy and the AI Act – how can companies adapt to the new regulations?

Challenges and future of ethical AI in the EU

Key trends for the coming years:

  1. development of AI for ESG;
  2. regulation of foundation models – plans to extend the rules with requirements for general-purpose models (e.g. Meta Llama 3);
  3. cross-border cooperation.

Summary

The EU’s approach to ethical AI, while challenging, creates a unique opportunity for companies to build customer trust and competitive advantage. Projects such as Sójka AI show that combining innovation with responsibility is possible – but it requires strategic planning, investment in governance and close cooperation with regulators.

In the coming decade, ethics will become the main driver of technological progress in Europe.

 
