Tax aspects of doing business in Germany

Dear Sir or Madam,

We would like to invite you to a free webinar on the tax aspects of doing business in Germany, organised by the law firm Leśniewski Borkiewicz Kostka & Partners (LBK&P), in cooperation with the German tax consultancy Dr Klein, Dr Mönstermann International Tax Services GmbH. The event is aimed at both companies just starting out on the German market and those already operating there – especially in the SME sector and large enterprises. Please feel free to forward the invitation to your friends and colleagues who may be interested in the topic of the webinar.

📅 Date: 18 February 2025

⏰ Time: 10:00

📍 Venue: online

During the webinar, LBK&P experts Paweł Suliga (a tax advisor in Poland and Germany and a recognised expert in German tax and business law, who advises Polish companies in Germany on a daily basis, including construction companies listed on the Warsaw Stock Exchange) and Bartłomiej Chałupiński (a tax advisor in Poland and Head of Tax at LBK&P) will discuss key issues concerning tax regulations and the obligations involved in doing business in Germany.

Agenda:

✅ The most important tax changes in Germany since 01.01.2025.

✅ What tax obligations are associated with setting up a business in Germany?

✅ Documentation and information obligations when conducting cross-border business – what to look out for.

NOTE:

After the webinar, you will have the opportunity to take part in a free 30-minute individual consultation with our experts (separate booking required; the number of places is limited). To book your consultation, please email rezerwacje@lbplegal.com.

🔗 Registration link for the webinar – the link to join the webinar and your login details will be sent to you at least 3 days before the event.

Details of the event are also described in the attachment, along with a link to registration.

We look forward to seeing you there!

Attachment for download:

Webinar - Podatkowe aspekty prowadzenia działalności gospodarczej w Niemczech (Tax aspects of doing business in Germany)

Articles 1-5 of the AI Act have been in force since 2 February 2025; failure to comply may result in heavy fines.

🚨 IMPORTANT INFORMATION – Articles 1-5 of the AI Act have been in force since 2 February 2025; failure to comply may result in heavy fines. Many companies in the EU have not yet taken the required action – here are the most important details.


Since 2 February 2025, the first provisions of the Artificial Intelligence Act (AI Act) have applied; they are aimed at increasing safety and regulating the AI market in the European Union.

The most important changes include:

  • Prohibited practices – a ban on placing on the market, putting into service or using AI systems that meet the criteria of prohibited practices. Examples include manipulative systems that exploit human weaknesses, social scoring systems and systems that analyse emotions in the workplace or in education. Violations of these provisions are subject to fines of up to EUR 35 million or 7% of a company’s total worldwide annual turnover, whichever is higher.
  • AI literacy obligation – employers must provide their employees with adequate training and knowledge about AI so that they can use AI systems safely at work. A lack of AI training can lead to non-compliance and increases the risk of AI systems being used incorrectly. In connection with AI literacy, it is also worth implementing an AI use policy in the company. How can this be done?


The policy for using AI in a company can include, for example, clear procedures, rules for using AI, conditions for the approval of systems, procedures in case of incidents and the appointment of a person responsible for the effective implementation and use of AI in the organisation (AI Ambassador).
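As a purely illustrative sketch (not a legal template), the policy elements listed above can be captured as structured data and checked for completeness; all names here are invented for the example:

```python
# Illustrative skeleton of an internal AI-use policy, expressed as data.
# The element names follow the article above; this is not legal advice.
AI_USE_POLICY = {
    "procedures": "step-by-step rules for everyday use of AI tools",
    "usage_rules": "what may and may not be entered into AI systems",
    "system_approval": "conditions under which a new AI tool is admitted",
    "incident_handling": "who to notify and what to record when something goes wrong",
    "ai_ambassador": "person responsible for implementation and oversight",
}

REQUIRED_ELEMENTS = {"procedures", "usage_rules", "system_approval",
                     "incident_handling", "ai_ambassador"}

def missing_elements(policy: dict) -> set:
    """Return the mandatory sections a draft policy still lacks."""
    return REQUIRED_ELEMENTS - set(policy)

print(missing_elements(AI_USE_POLICY))          # empty set – policy is complete
print(missing_elements({"procedures": "..."}))  # four elements still missing
```

A check like this can be run whenever the policy document is revised, so that no mandatory section is silently dropped.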

 

The importance of AI education (Article 4 AI Act)

Awareness and knowledge of AI is not only a legal requirement, but also a strategic necessity for organisations. Article 4 of the AI Act obliges companies to implement training programmes tailored to the knowledge, role and experience of their employees.

‘Providers and deployers of AI systems shall take measures to ensure, to the greatest extent possible, an appropriate level of competence with regard to AI among their personnel and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training, and the context in which the AI systems are to be used, as well as the persons or groups of persons on whom the AI systems are to be used.’

Failure to act in this regard has serious consequences, including:

  • The risk of violating personal data protection and privacy regulations.
  • An increased likelihood of violating the law and incurring financial penalties.

In addition to regulatory compliance, AI education helps to build a culture of responsible use of technology and minimises potential operational risks.

Where can I find guidance?

The ISO/IEC 42001 standard on artificial intelligence management systems can help. As part of the measures relating to the relevant competences of persons dealing with AI in an organisation, the standard indicates, for example, the following issues:

  • mentoring
  • training
  • transferring employees to appropriate tasks within the organisation based on an analysis of their competences.

At the same time, roles or areas of responsibility should be assigned for matters such as:

  • oversight of the AI system
  • security
  • safety
  • privacy


Prohibited AI practices (Article 5 AI Act)

The AI Act prohibits the use of certain AI systems that could pose serious risks to society. Suppliers and companies using AI must ensure that they are not directly or indirectly involved in their development or implementation. Among other things, the AI Act lists specific prohibited practices that are considered particularly dangerous. These include:

  • Subliminal or manipulative techniques – AI systems that subconsciously change the user’s behaviour so that they make a decision they would not otherwise have made.
  • Exploitation of human weaknesses – systems that take advantage of a person’s disability, social or economic situation.
  • Social scoring – systems that evaluate or classify people on the basis of their social behaviour or personal characteristics, leading to unfavourable treatment in unrelated contexts.
  • Assessment of the risk of committing a crime – systems that assess the risk of a person committing a crime based solely on profiling or the evaluation of personality traits.
  • Creation of facial image databases – the untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases.
  • Analysis of emotions in the workplace or education – AI systems that analyse the emotions of employees or students.
  • Biometric categorisation of sensitive data – using biometric data to gain information about race, political views, etc.
  • Remote biometric identification in real time – the use of real-time facial recognition systems in publicly accessible spaces for law enforcement purposes (subject to narrow exceptions).

Where to look for guidance?

  • Draft Guidelines on prohibited artificial intelligence (AI) practices – on 4 February 2025, the Commission published draft guidelines on prohibited AI practices to ensure the consistent, effective and uniform application of the AI Act across the EU.

Important dates:

  • From 2 February 2025 – Chapter II (prohibited practices)
  • From 2 August 2025 – Chapter V (general-purpose models), Chapter XII (Penalties) without Article 101
  • From 2 August 2026 – Article 6(2) and Annex III (high-risk systems), Chapter IV (transparency obligations)
  • From 2 August 2027 – Article 6(1) (high-risk systems) and corresponding obligations

Key takeaways for companies:

  • Compliance with Articles 1-5 of the AI Act is mandatory and cannot be ignored.
  • Training in AI is crucial to avoid mistakes related to employee ignorance and potential company liability.
  • Conducting audits of technology providers is necessary to ensure that AI systems comply with regulations.
  • Implement an AI use policy – introduce clear documentation to organise the risks. The policy can include, for example, clear procedures, rules for using AI, conditions for admitting systems, how to deal with incidents, and appointing a person responsible for supervision (AI Ambassador).
  • Developing AI tools in accordance with the law – companies developing AI tools must consider legal and ethical aspects at every stage of development. This includes analysing the compliance of the system’s objectives with the law, the legality of training databases, cybersecurity and system testing. It is important that the process of creating AI systems complies with the principles of privacy by design and privacy by default under the GDPR.

Be sure to check out these sources:

https://www.gov.pl/attachment/9bb34f05-037d-4e71-bb7a-6d5ace419eeb

DeepSeek – Chinese AI in open-source mode. Does China stand a chance against OpenAI?

DeepSeek is a series of Chinese language models that impresses with its performance and low training costs. Thanks to their open source approach, DeepSeek-R1 and DeepSeek-V3 are causing quite a stir in the AI industry.


Source: www.deepseek.com

DeepSeek: a revolution in the world of AI from China

DeepSeek is increasingly mentioned in discussions about the future of artificial intelligence. This Chinese project provides open-source large language models (LLMs) with high performance and, crucially, significantly lower training costs than competing solutions from OpenAI or Meta.

In this article, we take a closer look at DeepSeek-R1 and DeepSeek-V3 and provide an update on the development and distribution of these models, based on official materials available on the Hugging Face platform as well as publications from Spider’s Web and chiny24.com.

Table of contents

  1. How was DeepSeek created?
  2. DeepSeek-R1 and DeepSeek-V3: a brief technical introduction
  3. Training costs and performance: what’s the secret?
  4. Open source and licensing
  5. DeepSeek-R1, R1-Zero and Distill models: what are the differences?
  6. The rivalry between China and the USA: sanctions, semiconductors and innovation
  7. Will DeepSeek threaten OpenAI’s dominance?
  8. Summary
  9. Sources


How was DeepSeek created?

Press reports indicate that DeepSeek was created by High-Flyer Capital Management, a Chinese fund founded in 2015 that until recently was almost unknown in the IT industry outside Asia. This changed dramatically with DeepSeek, a series of large language models that took Silicon Valley experts by storm.

However, DeepSeek is not only a commercial project – it is also a breath of fresh air in a world where closed solutions with huge budgets, such as models from OpenAI (including GPT-4 and OpenAI o1), usually dominate.

DeepSeek-R1 and DeepSeek-V3: a brief technical introduction

According to information from the official project page on Hugging Face, DeepSeek is currently publishing several variants of its models:

  1. DeepSeek-R1-Zero: created through advanced training without the initial SFT (Supervised Fine-Tuning) stage, focusing on strengthening reasoning skills (the so-called chain-of-thought).
  2. DeepSeek-R1: in which the authors included additional, preliminary fine-tuning (SFT) before the reinforcement learning phase, which improved the readability and consistency of the generated text.
  3. DeepSeek-V3: the base model from which the R1-Zero and R1 variants described above are derived. DeepSeek-V3 has up to 671 billion parameters and was trained in two months at a cost of approximately $5.58 million (source: chiny24.com).


Technical background

  • The high number of parameters (up to 671 billion) means that very complex statements and analyses can be generated.
  • Thanks to the optimised training process, even such a large architecture does not require a budget comparable to that of OpenAI.
  • The main goal: to independently develop multi-stage solutions and minimise ‘hallucinations’, so common in other models.

Training costs and performance: what’s the secret?

Both Spider’s Web and chiny24.com emphasise that the training costs of DeepSeek-R1 (approx. $5 million for the first version) are many times lower than the figures reported for GPT-4 and other closed OpenAI models, which run into billions of dollars.

Where does the recipe for success lie?

  • Proprietary methods of optimising the learning process,
  • Agile architecture that allows the model to learn more effectively with fewer GPUs,
  • Economical management of training data (avoiding unnecessary repetitions and precisely selecting the data set).


Open source and licensing

DeepSeek, unlike most of its Western competitors, relies on open source. As stated in the official documentation of the model on Hugging Face:

‘DeepSeek-R1 series support commercial use, allow for any modifications and derivative works, including, but not limited to, distillation…’

This means that the community is not only free to use these models, but also to modify and develop them. In addition, several variants have already been developed within the DeepSeek-R1-Distill line, optimised for lower resource requirements.

Important:

  • The DeepSeek-R1-Distill models are based, among other things, on the publicly available Qwen2.5 and Llama3, which are linked to the relevant Apache 2.0 and Llama licences.
  • Nevertheless, the whole is made available to the community on very liberal terms – which stimulates experimentation and further innovation.


DeepSeek-R1, R1-Zero and Distill models: what are the differences?

From the documentation published on Hugging Face, a three-tier division emerges:

1. DeepSeek-R1-Zero

  • Training only with RL (reinforcement learning), without prior SFT,
  • The model can generate very complex chains of thought (chain-of-thought),
  • However, it can suffer from problems with text reproducibility and readability.

2. DeepSeek-R1

  • Including the SFT phase before RL solved the problems noticed in R1-Zero,
  • Better consistency and less tendency to hallucinate,
  • According to benchmarks, it is comparable to OpenAI o1 in math, programming, and analytical tasks.

3. DeepSeek-R1-Distill

  • ‘Slimmed-down’ versions of the model (1.5B, 7B, 8B, 14B, 32B, 70B parameters),
  • Enable easier implementation on weaker hardware,
  • Created by distillation (transferring knowledge from the full R1 model to smaller architectures).
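The distillation idea behind the Distill line can be sketched in a few lines of Python. This is a generic illustration of the soft-target objective from the knowledge-distillation literature (temperature-softened teacher probabilities used as training targets for the smaller model), not DeepSeek’s actual training code:

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T gives softer targets."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()                      # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, T=2.0):
    """KL divergence between the teacher's and student's softened
    distributions -- the core objective of knowledge distillation,
    scaled by T^2 as is conventional."""
    p = softmax(teacher_logits, T)    # soft targets from the large model
    q = softmax(student_logits, T)    # current student predictions
    return float(np.sum(p * np.log(p / q))) * T * T

teacher = [2.0, 1.0, 0.1]
print(distillation_loss(teacher, teacher))          # ≈ 0 – student matches teacher
print(distillation_loss(teacher, [0.0, 0.0, 3.0]))  # clearly larger mismatch
```

Minimising this loss over a training set pushes the small model to reproduce the full model’s output distribution, which is what allows the 1.5B–70B Distill variants to inherit much of R1’s behaviour at a fraction of its size.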

Rivalry between China and the USA: sanctions, semiconductors and innovation

As noted by the ‘South China Morning Post’ (cited by chiny24.com), the development of Chinese AI models is taking place under conditions of limited access to advanced semiconductors due to US sanctions.

Meanwhile, Chinese companies – including DeepSeek and ByteDance (Doubao) – are showing that even in such an unfavourable climate, they are able to create models:

  • that are not inferior to Western solutions,
  • and often much cheaper to maintain.

As Jim Fan (researcher at Nvidia) points out, the DeepSeek project may be proof that innovation and restrictive conditions (less funding, sanctions) do not have to be mutually exclusive.

Will DeepSeek threaten OpenAI’s dominance?

High-Flyer Capital Management and other Chinese companies are entering the market with a model that:

  • performs better than Western competitors in some tests,
  • is cheaper to develop and maintain,
  • makes open repositories available, allowing for the rapid development of a community-based ecosystem.

If OpenAI (and other giants) do not develop a strategy to compete with cheaper and equally good models, Chinese solutions – such as DeepSeek or Doubao – could capture a significant share of the market.


Is the era of expensive AI models coming to an end?

DeepSeek is a prime example of how the era of gigantic and ultra-expensive AI models may be coming to an end. Open source, low training costs and very good benchmark results mean that ambitious start-ups from China could shake up the current balance of power in the artificial intelligence industry.

Due to the growing technological tensions between China and the USA, the further development of DeepSeek and similar projects will probably become one of the main themes in the global rivalry for the title of AI leader.

Sources

  1. ‘Chinese DeepSeek beats all OpenAI models. The West has a big problem’ – Spider’s Web
  2. ‘DeepSeek. Chinese startup builds open-source AI’ – chiny24.com
  3. Official DeepSeek-R1 website on Hugging Face

Author: own work based on the indicated publications.

Text intended for information and journalistic purposes.

And the Oscar goes to … AI Brody

The Oscar nominations have been announced. One of the favourites is Brady Corbet’s The Brutalist, starring Adrien Brody, who is himself nominated for the prestigious award. The film tells the story of a Jewish architect who emigrates from post-war Europe to the USA in search of a safe haven for himself and his wife. Even before the nominations were announced, there was a heated debate about whether Adrien Brody should receive the award for his phenomenal performance, because the accent we hear throughout the film was improved with AI tools. Evidently this did not matter for the nomination itself, but will the controversy surrounding the use of AI in the film ultimately sway the verdict of the American Film Academy?


AI improved Adrien Brody’s accent in The Brutalist – how does technology change cinema?

The film The Brutalist uses artificial intelligence to subtly correct the Hungarian pronunciation of actors Adrien Brody and Felicity Jones. The film’s editor, Dávid Jancsó, revealed that Respeecher technology was used to improve the authenticity of the Hungarian dialogue. Both actors worked with a dialect coach, but the producers wanted perfect pronunciation, which is difficult to achieve using traditional methods. Are such practices the future of cinema, or rather a threat to the authenticity of acting performances?

How Respeecher technology works

Respeecher is an advanced speech synthesis tool that allows the voice of one person to be transformed into the voice of another, while retaining all the emotions, intonations and natural sound. The process is based on machine learning algorithms that work in several key stages:

  1. Collecting voice data – the developers first record samples of the target voice. In the case of The Brutalist, these were recordings of actors who were to use a Hungarian accent.
  2. Acoustic analysis – the Respeecher system analyses the unique characteristics of the voice, such as timbre, speech rate and the way certain phonemes are pronounced.
  3. Machine learning – based on the provided samples, algorithms learn the characteristics of the voice and then generate a digital version that faithfully reflects the original.
  4. Sound synthesis – in the final step, the actor’s voice is modified to fit the creators’ requirements – here, the authentic sound of a Hungarian accent.

The advantage of this approach is that the actor does not have to re-record the dialogue. As emphasised by the film’s creators, the technology was used exclusively as a supporting tool, not replacing the work of the actors.
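To make the acoustic-analysis stage concrete: such systems typically start from a time-frequency representation of the recording. The following is a minimal, generic numpy sketch of computing a magnitude spectrogram, not Respeecher’s actual (proprietary) pipeline:

```python
import numpy as np

def magnitude_spectrogram(signal, frame_len=256, hop=128):
    """Split a mono signal into overlapping windowed frames and take the
    FFT magnitude of each -- the raw time-frequency features on which
    voice-analysis models commonly operate."""
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len] * np.hanning(frame_len)
        frames.append(np.abs(np.fft.rfft(frame)))
    return np.array(frames)           # shape: (n_frames, frame_len // 2 + 1)

# 1 second of a 440 Hz tone sampled at 8 kHz
t = np.arange(8000) / 8000.0
spec = magnitude_spectrogram(np.sin(2 * np.pi * 440 * t))
print(spec.shape)                     # (61, 129)
```

Features like these (timbre, pitch, phoneme timing) are what the learning stage models before the synthesis stage reshapes the actor’s voice.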


AI in the film industry – breakthrough or threat?

The use of AI in films such as The Brutalist is just the tip of the iceberg. The technology is finding more and more applications in cinematography:

  • Special effects – AI allows for the generation of realistic visual effects, reducing production costs.
  • Post-production – AI-based tools automate processes such as editing, colour correction and sound-quality improvement.
  • Scriptwriting – algorithms that analyse popular film plots can suggest new storylines.
  • Digitisation of actors – AI makes it possible to rejuvenate actors or ‘revive’ deceased ones for use in new productions. (Raindance)


Legal and ethical aspects of using AI in cinematography

The use of AI in film raises many legal and ethical questions, which are becoming increasingly pressing with the growing use of this technology. In the case of The Brutalist, we can identify several key issues:

  1. Copyright – Does the digital voice generated by AI belong to the actor, the technology company, or the film producers? The use of an actor’s voice in synthetic form may give rise to claims for remuneration for additional use of the image.
  2. Transparency – Viewers were not informed about the use of Respeecher during the first screenings of the film. Should filmmakers openly communicate such practices, especially when they affect the perception of performances?
  3. Impact on the acting profession – Critics fear that the development of AI may lead to a decrease in the demand for voice actors or even actors themselves, as their voices could be generated synthetically.


Controversy and impact on Oscar chances

The revelation of AI use in The Brutalist has sparked controversy in the film industry. Concerns have been raised that such practices could undermine the authenticity of acting performances and lead to ethical dilemmas surrounding the use of technology in the arts. In the context of the upcoming Oscars, some experts suggest that the use of AI in The Brutalist could affect Adrien Brody’s chances of winning the Best Actor award. Nevertheless, both the director and the film editor assure that AI was only a supporting tool, not a substitute for the talent and work of the actors.

Summary

The Brutalist has become a symbol of a new era in cinematography, in which artificial intelligence is becoming a creative tool, but also a subject of controversy. The case of Adrien Brody and the use of Respeecher opens a discussion about the limits of technology in art. Should AI only support creators or will it dominate the industry, replacing human creativity? One thing is certain – the future of cinema will be closely linked to the development of technology.


BREAKING: new executive order from President Trump

On 23 January 2025, President Donald Trump signed an executive order that defines the United States’ new priorities in the field of artificial intelligence (AI). This decision was made in connection with the repeal of EO 14110, issued by Joe Biden in 2023, and introduces new rules governing the development of AI in the US. This step by the Trump administration emphasises the importance of AI in maintaining US global dominance, promoting innovation and ensuring national security.


New priorities in AI policy

President Trump’s executive order sets out the United States’ main objectives in the field of artificial intelligence. The document emphasises that AI development should be based on free market principles and that ideological biases should be avoided. A key element of the new policy is to strengthen the US’s global position in AI, which is intended to promote economic competitiveness, human development and national security.

In contrast to the Biden administration’s approach, which called for stricter security tests and the sharing of results with the government, the new policy favours greater freedom for technology companies. Trump described the previous regulations as too restrictive, which he believed could have hampered the development of AI in the United States.


Key elements of Trump’s regulation

  1. Repeal of EO 14110 – the Biden administration’s order, which aimed to ensure the safe and trusted development of AI, was considered by the Trump administration to be overly restrictive of freedom of innovation. The new rules pave the way for a review of all regulations and actions resulting from the repealed order.
  2. Strengthening the US position as a global leader in AI – the document emphasises that the goal of US policy is to maintain and develop dominance in the field of artificial intelligence, promoting technological innovation, economic competitiveness and national security.
  3. Development of an AI Action Plan – within 180 days of the signing of the order, special advisors, including the Assistant to the President for Science and Technology and the Special Advisor for AI and Cryptocurrencies, in cooperation with the Assistant to the President for Economic Policy, the Assistant to the President for Domestic Policy, the Director of the Office of Management and Budget (OMB Director) and the heads of such executive departments and agencies as they deem relevant, are to present an action plan setting out the details of implementing the new priorities.
  4. Update of supervisory policies – the Office of Management and Budget (OMB) has been instructed to update existing memoranda on the supervision of AI systems to comply with the new policy.


Significance for the United States

1. Impact on the technology sector

Trump’s new executive order could boost innovation in the AI sector by removing regulatory barriers. This will give technology companies more freedom to develop new systems and applications, which could strengthen the US global position in this field. At the same time, the lack of strict regulations raises concerns about the ethical use of technology and the risk of abuse.

2. National security and global competition

Emphasising the role of AI in the context of national security indicates the growing importance of this technology in defence and intelligence. As an innovation leader, the US must face competition primarily from China, which is also investing heavily in AI. The new policy is intended to ensure the technological advantage of the United States.

3. Criticism and controversy

The decision to repeal the Biden regulation has been criticised by numerous experts who fear that the lack of restrictions could lead to the development of dangerous AI technologies. Alondra Nelson of the Center for American Progress warns that the American public could be left unprotected from the potential harms of AI development.

AI Act – a pioneering regulation for artificial intelligence in the EU

The European Union, on the other hand, has taken a different approach. The AI Act is a regulation that puts the European Union at the forefront of global efforts to create a responsible and transparent legal framework for artificial intelligence. Published on 12 July 2024 in the Official Journal of the European Union, the regulation is crucial for shaping the future of AI technology in Europe while ensuring a high level of protection for citizens’ and consumers’ rights.

The regulation defines AI systems in terms of their risk, classifying them into different levels (minimal, limited, high and unacceptable). High-risk AI systems, such as those used in healthcare, education or recruitment processes, will have to meet specific requirements regarding safety, transparency and reliability. Furthermore, applications of artificial intelligence that are considered unacceptable, such as mass biometric surveillance in public spaces or the manipulation of human behaviour, are prohibited.
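The four risk tiers can be illustrated with a toy lookup. This is purely illustrative: the example use cases are simplifications, and the legally binding classification criteria are those of the AI Act itself (Articles 5 and 6 and Annex III):

```python
# Illustrative mapping of example use cases to AI Act risk tiers.
# Not legal advice -- the Act's own criteria govern actual classification.
RISK_TIERS = {
    "unacceptable": {"social scoring", "subliminal manipulation",
                     "emotion recognition at work"},
    "high": {"recruitment screening", "medical diagnosis support",
             "exam scoring in education"},
    "limited": {"customer-service chatbot"},   # transparency duties apply
}

def classify(use_case: str) -> str:
    """Return the risk tier for a use case; anything unlisted falls into
    the default 'minimal' tier with no extra obligations."""
    for tier, examples in RISK_TIERS.items():
        if use_case in examples:
            return tier
    return "minimal"

print(classify("social scoring"))         # unacceptable – prohibited outright
print(classify("recruitment screening"))  # high – strict requirements apply
print(classify("spam filtering"))         # minimal – no extra obligations
```

The point of the tiered structure is that obligations scale with risk: prohibited systems may not be deployed at all, high-risk ones must meet safety, transparency and reliability requirements, and minimal-risk ones face no additional duties.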

Comparison with the US approach

In contrast to the more liberal US policy, the European Union places greater emphasis on prevention and the protection of citizens’ rights. In the EU, regulations are aimed at preventing potential risks associated with AI, such as algorithmic discrimination, surveillance or manipulation of public opinion.


Conclusions and outlook

President Trump’s new executive order shows that the administration is committed to the development of AI as a key element of the US economic and technological strategy. Free-market policies are intended to attract investment and promote innovation, but their side effects can lead to a lack of adequate ethical and legal safeguards.

ChatGPT not working. Thousands of user reports


Illustration generated by competitor #GoogleGemini.

On Thursday, 23 January, the popular ChatGPT stopped working. The outage was reported by thousands of concerned internet users.

The website providing online access to ChatGPT went down for a while on Thursday, 23 January. The outage occurred around 1 p.m. CET, as can be clearly seen on the report chart of the Downdetector.pl service. Interestingly, the websites of OpenAI, the company behind ChatGPT, were still working at the same time.

Problems were also reported with other OpenAI services. The GPT-4o and GPT-4 models were not working. Some users reported that the chatgpt.com and chat.openai.com websites would not open; others noticed that ChatGPT was not responding to their questions. Applications built on the model were also unresponsive.

This is not the first ChatGPT outage. In recent weeks there have been brief service interruptions, the biggest in December, when a major failure in the United States also caused errors in other OpenAI services.

Trump changes artificial intelligence regulations – new approach to AI in the USA

Donald Trump began his term of office with significant changes in the approach to artificial intelligence (AI) regulation. One of the first steps was to repeal Joe Biden’s 2023 executive order, which introduced specific safety requirements for AI systems. This decision has sparked controversy among experts, who emphasise that the lack of adequate regulations can bring both opportunities and serious threats to society and the United States’ position as a technological leader.


What was the Biden AI regulation about?

Joe Biden’s executive order aimed to ensure the safe and responsible development of artificial intelligence. It focused on several key areas:

  1. Safety standards and testing of AI systems – companies developing artificial intelligence were required to conduct safety tests on their systems and share the results with the US government. This was intended to identify potential risks, such as algorithmic bias or the risk of AI being used in ways that threaten national security.
  2. Protection against AI-generated disinformation – the Department of Commerce was to develop guidelines for watermarking and content-authentication systems to enable easy identification of AI-generated material, limiting the impact of disinformation and fake news on society.
  3. Privacy and data protection – the order emphasised protecting citizens’ data from being used illegally to train AI models. Although President Biden urged Congress to pass appropriate laws, there were no specific regulations governing this issue at the time.
  4. Preventing algorithmic discrimination – one of the key points concerned counteracting AI algorithms built on unrepresentative data that could lead to discrimination, e.g. in recruitment systems, the judiciary or healthcare.
  5. Security in healthcare and life sciences – the Biden administration introduced mechanisms to prevent the use of AI to create dangerous biological materials. The Department of Health was to develop AI safety programmes in medicine, focusing on improving healthcare and developing innovative therapies; compliance was to be a condition for obtaining federal funding for life-sciences projects.
  6. The labour market and the impact of AI – rules were to be developed to protect employees from the unfair use of AI in performance-appraisal or recruitment systems.

You can read more about artificial intelligence regulation in the United States here:

It is worth noting that the repealed order is no longer available on the White House website – it has not only been repealed, but also removed, along with archived versions, from the official source. At the moment, it can only be found here.


Why did Trump repeal the regulation?

Donald Trump argued that the regulations introduced by Joe Biden were too strict and could limit the development of innovative technologies. From the Republicans’ perspective, regulations such as the obligation to report security tests and share information with the government could hinder the activities of technology companies and weaken their competitiveness in the global market.

Trump emphasised that the US approach to AI should be less bureaucratic and more focused on supporting innovation. The decision to repeal the regulation is in line with his philosophy of deregulation and limiting government interference in the private sector.

In addition, Biden’s regulation aimed to increase the security of AI development by introducing transparency standards, reducing the risk of misinformation and counteracting algorithmic discrimination. Tech companies also had to disclose information about potential flaws in their models, including AI biases, which was particularly criticised by Trump-related circles as threatening their competitiveness.

Trump’s decision – liberalisation or risk?

The decision to repeal the regulation has met with mixed reactions. Experts such as Alondra Nelson of the Center for American Progress warn that the lack of safety standards will weaken consumer protection against AI-related risks. Alexander Nowrasteh of the Cato Institute, in turn, noted that abandoning some of Biden’s measures – e.g. those making it easier for AI specialists to immigrate to the US – could have negative effects on the sector.

Trump’s supporters, however, argue that his decision is an opportunity to accelerate technological development. They emphasise that overly strict regulations, such as those introduced in Europe, can hamper innovation.


Image source: website of the White House

Consequences of Trump’s decision

Experts warn that the lack of clearly defined rules governing the development of AI can lead to a number of risks:

  • Disinformation and fake news: The lack of guidelines for authenticating AI-generated content can facilitate the spread of false information.
  • Threats to national security: Without proper security testing, AI systems can be vulnerable to use in cybercrime or warfare.
  • Ethics and trust: The lack of regulation increases the risk of algorithmic discrimination and privacy violations, which can undermine public trust in AI technology.

On the other hand, supporters of Trump’s decision emphasise that liberalising regulations will allow for faster technology development and attract investment in the AI sector.


Will the US remain the leader in AI?

Trump’s decision to repeal Biden’s executive order opens a new chapter in the US approach to AI regulation. While Europe is focusing on protecting civil rights, the US may take a more liberal course, favouring freedom of innovation but at the same time putting even basic human rights at risk.

However, the lack of a clearly defined legal framework in the long term may weaken the US’s position as a leader in the field of AI, especially in the context of international cooperation and the creation of global standards. It will be crucial to find a balance between supporting development and minimising the risks posed by this revolutionary technology.

Artificial intelligence remains one of the most important technologies of the 21st century, and the decisions now being taken in the United States will influence its development for decades to come.

How does artificial intelligence improve the analysis of GOCC’s financial data and increase the transparency of the charity?

The situation in which Jerzy Owsiak has found himself, discussed and commented on in recent days in the media, gives rise to numerous discussions and questions about the legal possibilities of action in such cases. Below we point out the key legal provisions on criminal threats, public incitement to hatred or violation of personal rights that could apply. These incidents serve as a reminder of how important legal protection is in situations where words and actions can escalate so far as to take the form of criminal acts.

In the second part of the article, we also outline how AI is revolutionising the analysis of financial data, increasing the transparency and efficiency of charities.

The final part of the article is based on a thorough analysis of the financial reports of the GOCC, available on the organisation’s official website, and demonstrates the practical application of technology in the non-profit sector.


Graphics: WOŚP Foundation materials

Criminal threats and incitement to hatred – legal aspects in light of the well-known case of Jerzy Owsiak

Jerzy Owsiak, president of the Great Orchestra of Christmas Charity (WOŚP) Foundation, last week reported to law enforcement authorities the occurrence of criminal threats and public incitement to hatred against him and the Foundation. Phone and email threats, including calls for violence, caused the WOŚP Foundation president to reasonably fear their fulfilment. Owsiak also pointed to media activities that he considered manipulative and escalating negative emotions against him and the WOŚP. As a result, the foundation has banned certain editorial and television outlets from entering the foundation’s headquarters.


GOCC Foundation materials – photographer Marcin Michon

Criminal threats – when is it a crime?


Art. 190 CC [Criminal threat].

  1. Whoever threatens another person with committing a criminal offence to his/her detriment or to the detriment of a person close to him/her, if the threat induces in the person to whom it was addressed or whom it concerns a reasonable fear that it will be carried out, shall be subject to the penalty of deprivation of liberty for up to 3 years.
  2. Prosecution takes place at the request of the victim.

According to Article 190 of the Criminal Code, a criminal threat consists in threatening another person with the commission of an offence to his or her detriment or to the detriment of a person close to him or her, if it arouses in him or her a well-founded fear that it will be fulfilled. The key considerations here are:

  1. Objective considerations – whether an average person in similar circumstances would consider the threat to be real.
  2. Subjective feelings of the victim – how the threat affects a particular person.

In the case of Jerzy Owsiak and the employees of WOŚP, threats were made both by telephone and by email. Although the recordings of the conversations have not been preserved, the emails constitute important evidence in the case that can be used in the proceedings.

Incitement to hatred – legal consequences


Art. 255 CC [Public incitement to commit a misdemeanour or fiscal offence].

  1. Whoever publicly incites to the commission of a misdemeanour or a fiscal offence shall be subject to a fine, the penalty of restriction of liberty or the penalty of deprivation of liberty for up to 2 years.
  2. Whoever publicly incites to the commission of a felony shall be subject to the penalty of deprivation of liberty for up to 3 years.
  3. Whoever publicly praises the commission of an offence shall be subject to a fine of up to 180 daily rates, the penalty of restriction of liberty or the penalty of deprivation of liberty for up to one year.

Actions involving public incitement to hatred under Article 255 of the Criminal Code may lead to criminal liability. In the case of the GOCC, the president of the Foundation pointed to media activities carried out by certain TV stations and editorial offices. In his view, these may bear the hallmarks of manipulation and of escalating negative emotions towards the Foundation and its activities – emotions which, when further fuelled, can end very seriously (the case of the Mayor of Gdańsk, Paweł Adamowicz, is a particularly stark example).


GOCC Foundation materials – photographer Paweł Krup

Defamation and insult – protection of image and dignity

Going further, media activities which violate the image and good name of the Foundation may also potentially constitute a form of defamation (Article 212 CC) or insult (Article 216 CC).

Defamation refers to a situation in which a person or institution is accused of conduct or qualities that may discredit them in public opinion.


Article 212 CC [Defamation].

  1. Whoever imputes to another person, a group of persons, an institution, a legal person or an organisational unit without legal personality such conduct or qualities as may humiliate them in public opinion or expose them to the loss of confidence necessary for a given position, profession or type of activity, shall be subject to a fine or the penalty of restriction of liberty.
  2. If the perpetrator commits the act specified in § 1 by means of mass communication media, he shall be subject to a fine, the penalty of restriction of liberty or the penalty of deprivation of liberty for up to one year.
  3. In the event of a conviction for the offence specified in § 1 or 2, the court may order a supplementary payment (nawiązka) in favour of the injured party, the Polish Red Cross or another social purpose indicated by the injured party.
  4. The prosecution of the offence specified in § 1 or 2 shall be by private prosecution.

Insult includes utterances or gestures that affront the dignity of a person, also in the presence of others.


Article 216 CC [Insult].

  1. Whoever insults another person in his presence or even in his absence, but in public or with the intention that the insult should reach that person, shall be subject to a fine or the penalty of restriction of liberty.
  2. Whoever insults another person by means of mass communication shall be subject to a fine, the penalty of restriction of liberty or imprisonment of up to one year.
  3. If the insult has been provoked by defiant behaviour on the part of the victim or if the victim has responded by violating bodily integrity or by mutual insult, the court may waive punishment.
  4. In the event of a conviction for the offence specified in § 2, the court may order a supplementary payment (nawiązka) in favour of the injured party, the Polish Red Cross or another social purpose indicated by the injured party.
  5. Prosecution shall be by private prosecution.

Both offences are prosecuted by private prosecution, which means that the indictment must be brought by the victim. The assistance of an advocate or attorney-at-law is invaluable in such situations.

Infringement of personal rights – civil action

In situations such as this, the victim may also avail himself of civil law protection on the basis of the provisions on the protection of personal rights. This includes the possibility of claiming:

  • Financial compensation.
  • A public apology.
  • The cessation of the action infringing personal rights and the removal of its effects.


How to act in case of threats or slander?

  1. Report the suspected offence to the relevant services – notify the police or the public prosecutor’s office of threats or statements that may be insulting or defamatory. In the case of a criminal threat, a request for prosecution must also be made. The request can be made verbally on the record as well as in writing.
  2. Secure evidence – emails, recordings of conversations and other forms of documentation are key to corroborating the allegations (since evidence may be deleted, it is a good idea to take screenshots or to secure the content of a website or social media post with a notarial record).
  3. Seek legal advice – an advocate or attorney-at-law will assist in drafting the notification and support you during criminal or civil proceedings. It is worth remembering that in justified cases such support may also be provided free of charge (ex officio).

GOCC Foundation materials – photographer Łukasz Widziszowski

 


WOŚP – social activity in the shadow of threats

The WOŚP Foundation has been supporting the Polish health service since 1993, collecting nearly PLN 2.3 billion for its cause so far. Despite its invaluable contribution to the healthcare system, its activities are sometimes the target of attacks, which shows how important it is to have effective legal protection mechanisms, but also – to focus on reliable information.

In this regard, it is always worth paying special attention to so-called ‘fake news’ appearing in the media or on television. Such items are often distinguishable from genuine information at first glance: they are aggressive in tone and refer to non-existent persons or events. Typical features include flashy headlines, biased manipulation of statements by various people, and manipulated images or video. To verify such content (especially if the fake news concerns NGOs / non-governmental organisations), it is worth using the Demagog website.

The Demagog Association is the first Polish fact-checking organisation. Since 2014 it has been verifying politicians’ statements and fighting fake news and disinformation. It is also a team of analysts and educators for whom facts matter. The organisation’s mission is to combat fake news and disinformation and to provide citizens with reliable, unbiased and verified information.

Artificial intelligence can also be used to analyse fake news.
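As a toy illustration (our own sketch, not a tool used by Demagog or any fact-checking organisation), the surface signals mentioned above – flashy headlines and an aggressive tone – can be scored heuristically. Real systems rely on trained language models and human verification; this is only a caricature of the idea:

```python
# Toy heuristic, NOT a production fact-checking model: it only scores
# surface features (sensational phrases, exclamation marks, ALL-CAPS words).
SENSATIONAL_MARKERS = (
    "shocking",
    "they don't want you to know",
    "share before it's deleted",
    "100% proof",
)

def sensationalism_score(headline: str) -> float:
    """Higher score = more surface markers typical of clickbait/fake news."""
    text = headline.lower()
    hits = sum(marker in text for marker in SENSATIONAL_MARKERS)
    exclamations = headline.count("!")
    caps_words = sum(w.isupper() and len(w) > 2 for w in headline.split())
    return hits + 0.5 * exclamations + 0.25 * caps_words

print(sensationalism_score("SHOCKING proof they don't want you to know!!!"))
print(sensationalism_score("Local hospital receives new equipment"))
```

A real classifier would be trained on labelled examples; the point here is only that headlines leave measurable traces that automated tools can flag for human review.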


How artificial intelligence improves the analysis of GOCC’s financial data and increases the transparency of the charity’s activities

  1. Automating data acquisition: OCR as a key tool in the analysis of GOCC financial reports


  • Precise Data Extraction from Official Sources: The first step in the analysis process is data acquisition. In the case of WOŚP, the main source of information is the detailed financial reports, published annually in the form of images on the foundation’s official website. The AI modules responsible for data retrieval automatically acquire these images, eliminating the need for manual data entry. The images, which include the accounts, are available here: https://www.wosp.org.pl/fundacja/wazne/rozliczenia
  • Optical Character Recognition (OCR) in the Service of Good: Optical Character Recognition (OCR) technology plays a key role. OCR algorithms accurately analyse the images of reports, converting the textual and numerical information they contain (dates, amounts, descriptions) into a digital format for further processing and analysis. This is a key element in automating the analysis of financial reports.
  • Intelligent Structuring of Financial Data: The extracted data is then organised and categorised. AI algorithms automatically classify the information, separating it into readable sections such as: medical equipment expenditure, medical programme delivery costs, administrative costs, in-kind donations and others. This makes GOCC financial reports more readable and understandable, without the need for expert knowledge to analyse them.
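The OCR-and-structuring pipeline described above can be sketched as follows. This is a minimal illustration: the amount format and sample lines are our own assumptions, not the actual layout of WOŚP reports, and the commented-out OCR step uses the open-source pytesseract library as one possible choice:

```python
import re

def parse_report_lines(ocr_text):
    """Extract (description, amount) pairs from OCR'ed report text.

    Assumes amounts are formatted Polish-style, e.g. '1 234 567,89 zł';
    this pattern is an illustrative assumption, not WOŚP's real layout.
    """
    rows = []
    pattern = re.compile(r"^(.*?)\s+([\d\s]+,\d{2})\s*zł\s*$")
    for line in ocr_text.splitlines():
        m = pattern.match(line.strip())
        if m:
            desc = m.group(1).strip()
            # '1 234 567,89' -> 1234567.89
            amount = float(m.group(2).replace(" ", "").replace(",", "."))
            rows.append((desc, amount))
    return rows

# The OCR step itself (requires the Tesseract binary plus the pytesseract
# and Pillow packages; shown for context only):
# from PIL import Image
# import pytesseract
# text = pytesseract.image_to_string(Image.open("rozliczenie.png"), lang="pol")

sample = ("Zakup sprzętu medycznego   1 234 567,89 zł\n"
          "Koszty administracyjne   98 765,43 zł")
print(parse_report_lines(sample))
```

In a real pipeline, the parsed rows would then be mapped to the expenditure categories listed above before any analysis or visualisation.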

2. Objective analysis of GOCC financial data: conclusions drawn by AI and their relevance for donors


  • Visualising Data for Better Understanding: Based on the processed data, AI generates clear visualisations, including charts and tables. These facilitate a quick overview of key information, trends in GOCC finances and expenditure structure for donors and all stakeholders.
  • Detailed Analysis of Expenditure Structure: An in-depth analysis of the data, supported by AI algorithms, makes it possible to determine precisely where GOCC spends the funds raised. For example, the analysis clearly shows that a significant portion of the budget is consistently invested in the purchase of specialised medical equipment for hospitals across Poland, which directly improves the quality of healthcare.
  • Identification of Changes and Trends Over the Years: AI algorithms make it possible to track changes in the structure of GOCC spending over successive finals. For example, the analysis can show an increase in spending in specific areas, such as support for specific medical programmes, or a flexible response to current needs, as in the case of the increase in spending to fight COVID-19 in the 27th Final.
  • Assessing Cost Effectiveness and Optimising Activities: AI can also support the analysis of administrative and operational costs of the GOCC, identifying potential areas for optimisation and increasing the efficiency of the use of every penny donated by donors.
  • Stories Hidden in the Data: Real Impact on People’s Lives: It is important to emphasise that behind every row in the table and every bar in the graph are real human stories. The medical equipment purchased by the GOCC saves the lives and health of patients across Poland, and the support of medical programmes translates into an improvement in the quality of life for many people. Every number in the WOŚP financial report is not just a dry fact, but first and foremost a testimony of real help and goodness brought to those in need. These are hundreds of thousands of lives saved and smiles on the faces of children and their parents. Data gains real value when we realise that it reflects a real impact on the lives of people helped by WOŚP.
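At its core, the expenditure-structure analysis described above boils down to aggregating amounts by category and computing percentage shares, which can then be visualised as charts. A minimal sketch with purely illustrative numbers (not actual WOŚP figures):

```python
from collections import defaultdict

# Illustrative (category, amount in PLN) rows, e.g. as produced by an
# OCR/structuring step. These numbers are made up for the example.
expenses = [
    ("medical equipment", 150_000_000.0),
    ("medical programmes", 40_000_000.0),
    ("administrative costs", 8_000_000.0),
    ("medical equipment", 25_000_000.0),
]

def expenditure_shares(rows):
    """Aggregate amounts per category and return each category's share in %."""
    totals = defaultdict(float)
    for category, amount in rows:
        totals[category] += amount
    grand_total = sum(totals.values())
    return {cat: round(100 * amt / grand_total, 1) for cat, amt in totals.items()}

print(expenditure_shares(expenses))
```

The resulting shares are what a charting library would turn into the pie or bar charts that make a financial report readable to donors at a glance.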
3. Why is AI such an effective tool in analysing charities’ data? Benefits for GOCC and donors


  • Efficiency and Automation: the automation of the data acquisition and processing process, thanks to AI and OCR, significantly reduces the time and effort required to prepare comprehensive financial reports. This allows GOCC staff and volunteers to focus on achieving their statutory goals.
  • Transparency and Credibility: Using objective and precise AI algorithms to analyse GOCC data increases the transparency and credibility of the information presented. Donors can be assured that the data is analysed fairly and impartially, which builds trust in the organisation.
  • Scalability: AI-based systems can process huge amounts of data, regardless of the scale of the organisation and the number of financial reports. This is crucial for fast-growing organisations such as GOCC.
  • Detecting Trends and Patterns for Better Planning: Advanced machine learning algorithms are able to identify hidden patterns and trends in GOCC financial data. These provide valuable information that supports strategic planning and decision-making, allowing the organisation to help even more effectively.
4. Development perspectives and conclusions: GOCC as a role model


The use of artificial intelligence in the analysis of Great Orchestra of Christmas Charity’s financial data is an excellent example of the effective application of modern technology in the charity sector. Automation, objective analysis and the possibility of generating detailed reports contribute to the transparency, efficiency and credibility of the organisation’s operations.

The example of the Great Orchestra of Christmas Charity can serve as an inspiration for other entities operating in the field of charity, pointing to the benefits of adapting innovative technological solutions. Further development and implementation of AI in the sector may bring even greater improvements to the financial management and delivery of charities’ statutory objectives. However, it is worth remembering that technology is only a tool, and the real value comes from the human heart and the desire to help. It is empathy and commitment, combined with the analytical power of AI, that create the power to make a real difference in the world for the better.


Summary

Protecting against threats, defamation and hate speech requires the conscious use of available legal, but also technological tools. In the case of incidents such as those reported by Jerzy Owsiak, both criminal law actions and protection of personal rights through civil means are possible. An effective reaction allows not only to defend against further attacks, but also to build public awareness of the consequences of such actions.

Let us remember – if any information raises our doubts, it is worth checking it and reaching for the source. Technology, to the extent indicated above, can help us not only to verify the accuracy of data or information, but also its proper interpretation.

Why are we afraid of the simple joint-stock company (PSA)?

The simple joint-stock company (PSA) is a relatively new legal form, introduced into the Polish legal system only in 2021. Its main purpose is to make it easier to do business, which can be particularly useful for young entrepreneurs and start-ups (including technology start-ups working, among other things, on AI or GameDev).

Despite its many advantages, the PSA raises some concerns and uncertainties among potential founders. We therefore encourage you to read a summary created with the help of OpenAI’s ChatGPT and Google Gemini, together with a commentary by our expert, to allay your doubts.

Commentary by OpenAI ChatGPT and Google Gemini


A new legal form

The simple joint-stock company (PSA) is a relatively new legal form, which translates into low popularity. The lack of established legal practice can leave potential entrepreneurs uncertain about how the rules will be interpreted and applied in real business situations. In addition, the PSA is less well known among the public than traditional legal forms, such as the limited liability company (sp. z o.o.). This can make it harder to establish business relationships and gain the trust of partners.

Low share capital

A PSA requires a minimum share capital of just PLN 1. For many entrepreneurs this may seem too low, raising concerns about the company’s financial credibility in the eyes of contractors and investors. It also creates a potential risk of abuse – the low barrier to entry and the high flexibility of this form may encourage abuse by rogue shareholders.

Flexibility in shaping shareholder relations

PSAs offer a high degree of flexibility in shaping shareholder rights and obligations. This can lead to misunderstandings and conflicts, especially when there are more shareholders.

A simple joint-stock company is an advantageous solution for start-ups

Low cost of incorporation and financial security

Setting up a PSA is cheaper than a limited liability company due to the low share capital. In addition, as with a limited liability company, the shareholders of a PSA are only liable for the company’s obligations up to the amount of their contributions.

Flexibility in management and the opportunities offered by a PSA

A PSA offers greater flexibility in the management of the company, which can be beneficial for rapidly growing businesses. In addition, a PSA allows for liability insurance and for raising capital through share issues.

Simpler procedures and growing popularity

A PSA requires simpler registration and operational procedures, which speeds up the process of setting up and running a business.

Did you know that you can set up a simple joint-stock company via the S24 portal today? Simply use the service: https://ekrs.ms.gov.pl/s24/strona-glowna

Legal and institutional support

Public institutions offer various support programmes for entrepreneurs, including those who opt for a PSA. In addition, numerous platforms and portals have been created to support the management of PSAs, offering tools and services adapted to this legal form.

General meetings in the PSA – modern solutions and facilitations for shareholders – expert commentary

General meetings in a simple joint-stock company (PSA) benefit from innovative solutions that significantly simplify shareholder decision-making. The PSA, as a recently introduced legal form in Poland, focuses on flexibility and modernity, which is also reflected in the organisation of general meetings.


Facilitation of Minutes of General Meetings

One of the key facilitations introduced in the PSA is that general meetings do not need to be minuted by a notary public, which is the standard in traditional joint-stock companies. The only exception concerns resolutions amending the articles of association, which require notarial minutes. This is a significant convenience that reduces the costs and formalities of general meetings.

Variety of Voting Forms

The PSA offers a wide range of solutions for the organisation of general meetings, both onsite and online. Shareholders can vote at the general meeting, but also outside the meeting, either in writing or using electronic means of communication. This flexible approach enables shareholders to participate in votes regardless of their location, which is particularly important in this age of digitalisation and globalisation of business.

Minutes and Documentation

The resolutions of the general meeting in the PSA are recorded in the minutes, which include information on the correctness of the convening of the meeting, its capacity to adopt resolutions and the resolutions themselves. The minutes shall be accompanied by evidence of the convening of the meeting, the attendance list and the list of shareholders voting electronically. Resolutions should be signed by those present or at least by the chairman and the person drawing up the minutes. Shareholders furthermore have the right to inspect the minute book and to request certified copies of the resolutions.

Share capital in a PSA

One of the most important features of the PSA, already highlighted above by OpenAI ChatGPT and Google Gemini, is the very low required share capital of just PLN 1. This is a significant convenience for young entrepreneurs and start-ups that do not have large financial resources. The share capital can be covered by both monetary and non-monetary contributions, such as shares in another company, movable property (e.g. a car, office equipment) or even the provision of labour or services. Furthermore, the amount of share capital is not set in the articles of association, which gives additional flexibility in managing the company’s finances.

Shareholders’ register and supervisory board

The shareholder register in a PSA can be maintained by a notary public or an entity authorised to maintain securities accounts. This is an important convenience as it ensures the security and integrity of the data contained in the register. The register is kept in electronic form, which facilitates access to information and share management.

The supervisory board, on the other hand, is an optional body, meaning that its establishment is not mandatory. This flexible approach allows the governance structure of the company to be adapted to the specific needs of the entrepreneur. The company’s articles of association may provide for the establishment of a supervisory board, but it is not required.

How to establish a simple joint-stock company?

There are several legal forms of companies in Poland, such as a limited liability company (sp. z o.o.), general partnership, limited partnership and the simple joint-stock company (PSA) discussed above. Each of these forms can be incorporated in two ways: traditionally or online. To set up a company online, simply use the website: https://ekrs.ms.gov.pl/s24/

What information will you need to set up a PSA?

1. Details of the shareholders

  1. Natural person:
  • first name and last name
  • PESEL number
  • address of residence / address for service
  • e-mail address
  2. Limited partnership (sp. k.):
  • name
  • KRS number
  • address for service
  • address for service of the general partner
  • information on how the partnership was registered (via S24 / traditionally)
  • e-mail addresses of the partners

2. Company details

  1. Name
  2. Registered office + registered office address
  3. Amount of share capital
  4. Nominal value per share
  5. Information on the distribution of shares between the natural person and the sp. k., and the amount of contributions
  6. Information on the financial year (whether it coincides with the calendar year)
  7. PKD codes + object of the predominant activity
  8. Duration of the company (definite/indefinite)
  9. Manner of representation
  10. Information regarding the management board, i.e.:
  • how many persons the management board may consist of (e.g. 1 to 3),
  • names, PESEL numbers and addresses for service of the members of the management board,
  • function of each person on the board (president/member of the board),
  • the term of office of each board member
  11. Information on whether a supervisory board is to be appointed; if so:
  • the number of supervisory board members,
  • full names, PESEL numbers and addresses for service of the members of the supervisory board,
  • function of each member of the supervisory board (chairman/member),
  • the term of office of each supervisory board member
  12. Information as to whether a resolution is required to commit the company to a performance with a value of twice the company’s share capital
  13. Whether the company is to have any branches
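A checklist like the one above can be captured as a simple data structure with basic validation before filing. This is our own hypothetical sketch – the field names below do not reflect the S24 portal’s actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class BoardMember:
    full_name: str
    pesel: str            # Polish national ID number
    service_address: str
    role: str             # e.g. "president" or "member"
    term_years: int

@dataclass
class PsaRegistration:
    """Hypothetical container for the PSA registration checklist above."""
    company_name: str
    registered_office: str
    share_capital_pln: float
    nominal_value_per_share_pln: float
    pkd_code: str                      # predominant activity code
    representation: str
    duration_indefinite: bool = True
    management_board: list = field(default_factory=list)
    supervisory_board: list = field(default_factory=list)  # optional body in a PSA

    def validate(self):
        errors = []
        if self.share_capital_pln < 1:
            errors.append("share capital must be at least PLN 1")
        if not self.management_board:
            errors.append("at least one management board member is required")
        return errors

app = PsaRegistration(
    company_name="Przykład P.S.A.",
    registered_office="Warszawa",
    share_capital_pln=1.0,
    nominal_value_per_share_pln=1.0,
    pkd_code="62.01.Z",
    representation="each board member acting alone",
    management_board=[BoardMember("Jan Kowalski", "00000000000", "Warszawa", "president", 3)],
)
print(app.validate())
```

Collecting the data in one structure makes it easy to check completeness (e.g. the PLN 1 minimum capital and at least one board member) before starting the actual S24 filing.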


The simple joint-stock company is a modern legal form that adapts to the needs of modern business, offering flexibility and simpler general meetings: votes may be cast in writing or by electronic means, and minutes do not need to be taken by a notary public (except for resolutions amending the articles of association). In addition, the low required share capital, the flexibility in covering it, the possibility of having the shareholders’ register kept by a notary and the optional nature of the supervisory board are just some of the advantages of the PSA. These features make the simple joint-stock company an attractive option for entrepreneurs seeking efficient and cost-effective solutions.

Provisions cited in the article:

  • Art. 300(80) [Shareholder resolutions].
  • Art. 300(100) [Minutes].
  • Art. 300(3) [Share capital].
  • Art. 300(31) [Tasks of the shareholder registrar].
  • Art. 300(52) [Management board, board of directors].

Sources:

www.biznes.gov.pl/pl/portal/00168

US export restrictions on AI chips: what do they mean for the world, the gaming industry and … Poland?

The US has introduced new export restrictions on advanced chips and artificial intelligence (AI) models. The official aim is to protect national security. However, the consequences of these measures may be felt by companies around the world – including in the gaming industry, and even in Poland. In the article below, we explain what exactly these restrictions are, who they affect and what long-term effects they may have on AI development.


What are the new restrictions?

The main provisions are set out in the document ‘Framework for Artificial Intelligence Diffusion’, issued by the US Department of Commerce’s Bureau of Industry and Security (BIS). The document places restrictions on the export of advanced integrated circuits (ICs) and the weights of advanced AI models to selected countries. The restrictions are intended to prevent the unauthorised use of these technologies in areas that could potentially threaten US interests and security.

Why is the US imposing restrictions?

A key concern of the US authorities relates to the use of advanced technologies in a manner contrary to US interests. Potential threats cited by government experts include:

  1. Military and intelligence applications of AI by countries considered potential adversaries.
  2. Development of weapons of mass destruction, supported by AI algorithms in design and production processes.
  3. Risk of advanced cyber attacks, facilitated by powerful chips and machine learning methods.
  4. Mass surveillance of citizens using facial recognition and data analytics technologies.

Restrictions are intended to prevent access to these solutions by entities that could destabilise the international situation or threaten US security. At the same time, the US wants to allow those countries and companies that comply with security rules and use AI chips responsibly.


Which processors do the restrictions cover?

The restrictions mainly affect high-performance computing chips, crucial in the processes of training and deploying advanced AI models. Among these are:

  • GPUs (Graphical Processing Units), e.g. Nvidia A100, H100
  • TPUs (Tensor Processing Units), such as Google TPUs
  • Neural Processors, In-memory Processors, Vision Processors
  • Application-Specific Integrated Circuits (ASICs)
  • Field-Programmable Logic Devices (FPLDs).

It is these types of circuits that make it possible to build and train the most complex neural networks, used for example in image analysis systems, speech recognition or natural language processing.

Do gamers have reason to be concerned?

Although the restrictions mainly focus on processors for data centre and professional AI applications, some high-performance GPUs used in gaming laptops may meet the criteria described in the BIS document. However:

  • Manufacturers such as Nvidia do not disclose detailed specifications, making it difficult to assess whether a particular GPU model is subject to the restrictions.
  • Most gaming GPUs, however, have much lower processing power than chips used in data centres, so the risk of restrictions in the gaming area remains relatively low.
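The gap between data-centre and gaming chips can be sketched numerically. The snippet below is a rough illustration, not legal analysis: the ‘TPP’-style formula and the 4,800 threshold are modelled on public summaries of the US controls, and the performance figures are hypothetical, rounded values, not actual product specifications.

```python
# Illustrative sketch only: comparing chips against a TPP-style control
# threshold. The formula, threshold, and performance numbers below are
# assumptions for illustration, not the regulation's exact text.

def tpp(tera_ops_per_sec: float, bit_length: int) -> float:
    """TPP-style metric: 2 x tera-operations/second x operation bit length."""
    return 2 * tera_ops_per_sec * bit_length

THRESHOLD = 4800  # assumed control threshold for this sketch

# Hypothetical, rounded figures for illustration only.
chips = {
    "data-centre GPU (A100/H100 class)": tpp(tera_ops_per_sec=989, bit_length=16),
    "gaming GPU (consumer class)": tpp(tera_ops_per_sec=83, bit_length=16),
}

for name, score in chips.items():
    verdict = "may fall under the controls" if score >= THRESHOLD else "below the threshold"
    print(f"{name}: TPP ~ {score:.0f} -> {verdict}")
```

On these illustrative numbers, the data-centre chip exceeds the assumed threshold by an order of magnitude, while the consumer chip stays below it, which is why the risk for typical gaming hardware is considered low.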


Why is Poland on the list?

It comes as a surprise that Poland is also on the list of restricted countries. The annual limit set by the US authorities for imports of US chips into Poland is 50,000 units, with the possibility of increasing it to 100,000.

  • Deputy Minister of Digitalisation Dariusz Standerski admits that in the short term this should not significantly harm the development of AI in Poland.
  • In the longer term, however, the cap may slow down investment in cutting-edge AI solutions and limit Polish companies’ access to the latest chips.

Reaction from Nvidia and the European Commission

The new regulations have drawn criticism from key technology players and representatives of the European Union:

  • Nvidia officially expressed concern that the restrictions would limit innovation and the global competitiveness of the US technology sector.
  • The European Commission published a statement highlighting the importance of a secure transatlantic AI supply chain. The Commission believes that the free purchase of AI chips from US companies is in both the economic and security interests of the United States, and looks forward to finding a constructive compromise with the US administration.


Potential consequences of the new rules

The introduction of export restrictions creates uncertainty in the high-tech market. Here are some scenarios that could materialise:

  1. Further tightening of restrictions – the US may expand the list of restricted solutions or countries.
  2. Easing of regulations – under international pressure and industry lobbying, the US may grant export licences for some chips.
  3. Price increases and supply delays – limited chip availability may translate into higher costs for manufacturers and consumers and cause production downtime.
  4. Development of alternative technologies – countries affected by restrictions may choose to build their own ecosystem of AI solutions and become independent of US supply.
  5. International cooperation – countries affected by restrictions may strengthen relationships in the area of research and development, creating new AI initiatives and projects.

Impact on AI development and innovation

Lack of access to state-of-the-art AI chips and models can effectively hinder research, especially in countries and companies that relied on US processors:

  • Slower scientific progress – difficulties in acquiring the necessary hardware will delay research work in areas such as medicine or pharmaceuticals.
  • Fragmentation of the AI market – restrictions may lead to the formation of technology blocks, hindering global collaboration and knowledge sharing.
  • Inhibition of innovation – restrictions on access to key components will lengthen the path to the introduction of new products and services.
  • Increased international tensions – restrictions may exacerbate relations between global powers, especially when strategic technologies are involved.


Summary and outlook

The new US export restrictions on advanced chips and AI models send an important signal to the world. On the one hand, they are intended to prevent the dangerous use of AI; on the other, they are already causing concern in the markets and may slow down the global development of artificial intelligence.

Poland, despite its so far marginal role in the market for advanced AI processors, has not been excluded from the US measures. Caps on chip imports may constrain the pace at which new technologies are adopted in Poland in the future. In the long term, much depends on whether the US relaxes or tightens the existing regulations, and on how individual countries respond to these challenges – whether they choose to strengthen international cooperation or build autonomous AI ecosystems.

Ultimately, the changes being implemented by the US require a balance between protecting national security and maintaining continuity and freedom in the development of modern solutions. Keeping an eye on developments is crucial, especially as the AI industry continues to grow in strength year after year and underpins many areas of the economy and science.

Contact

Any questions? +48 663 683 888
