The music of the future – Suno AI and Sora AI: will artificial intelligence be the new generation of music creators?

Artificial intelligence (AI) is changing the way music is created. Tools such as Suno AI and Sora AI allow artists and producers to generate melodies and lyrics and experiment with new sounds. However, the development of this technology raises questions about copyright, intellectual property and ethical aspects of using AI in music.

Are AI-generated songs protected by copyright? Who is the author? What are the consequences of using artificial intelligence in the creative process? In this article, we will look at these issues based on the analysis presented by Anna Adamiak, trainee solicitor and Junior Associate at LBK&P, in ‘Przegląd Radcowski’ and the applicable legal regulations.


Is AI-generated music protected by copyright?

According to Polish copyright law, for a piece of work to be protected, it must be the result of human creative activity with an individual character (Article 1 of the Act on Copyright and Related Rights). This means that works created exclusively by Suno AI, Sora AI or other generative models cannot be protected by copyright.

Since a machine does not have the ability to express personality or make conscious creative choices, works created without significant human input are considered to be part of the public domain.

Who is the creator of AI-generated works?

The question of authorship of AI-generated works is one of the most important legal challenges. There are three main positions:

  1. User of the AI tool – a person using Suno AI or Sora AI can be considered a co-creator if their contribution to the creative process was significant. Merely setting a few parameters or entering a short description, however, does not meet the threshold of creative contribution that copyright law requires to confer the status of a creator.
  2. Creator of the algorithm – companies developing AI often claim rights to the output of their models. This position is questionable, however, because the AI works autonomously and there is no individual human contribution to any specific work.
  3. Lack of an author – in many jurisdictions (e.g. the USA, the UK), AI-generated works are treated as part of the public domain and have no assigned author.

Anna Adamiak analyses these issues in ‘Przegląd Radcowski’, emphasising that current law cannot keep up with the development of generative technology.


Suno AI – artificial intelligence in the service of music

Suno AI is an advanced AI tool that allows you to generate music in different styles. It analyses huge databases of music and then creates new songs, lyrics and arrangements. It is particularly popular with independent artists and music producers.

Sora AI – the future of composition generation

Sora AI, on the other hand, is an AI platform for the automatic composition of music. Its algorithms allow for the personalisation of sounds, the adaptation of melodies to the user’s preferences and integration with music production software.

The impact of Suno AI and Sora AI on the music industry

AI tools are changing the music industry by offering:

  • Automation of the composing process – artists can generate unique melodies in seconds.
  • Lower production costs – AI reduces the need for expensive producers.
  • New creative possibilities – AI supports experimentation with unusual styles and harmonies.


However, the use of AI in music raises numerous legal problems:

  • Risk of plagiarism – models such as Suno AI are trained on huge datasets, which can lead to existing songs being unknowingly copied.
  • Lack of regulation – there are no clear rules governing AI, which raises questions about copyright and intellectual property.

Legal regulations regarding AI-generated works

In order to adapt the law to AI technology, new legal initiatives have emerged:

🔹 Generative AI Copyright Disclosure Act (USA) – a bill introduced in 2024 that would require disclosure of the copyrighted works used to train AI.

🔹 ZAiKS guidelines (Poland) – clearly state that works generated exclusively by AI are not subject to legal protection.

Both regulations are an attempt to solve the problem of the lack of global standards regarding music created by artificial intelligence.


Summary

AI in music is revolutionising but also challenging copyright law. Suno AI and Sora AI offer new creative possibilities, but their use raises questions about intellectual property, plagiarism and authorship.

An analysis by Anna Adamiak in ‘Przegląd Radcowski’ shows that current regulations are not keeping up with the development of AI, and that music generated by artificial intelligence does not meet current criteria for legal protection.

What does the future hold? For the time being, artificial intelligence contributes to music, but does not fully replace human artists. However, this may change over the years as technology develops. 🎶

Will AI dominate the music industry? The next few years of technological development and legal regulations will provide the answer.

Sources:

📌 ‘Przegląd Radcowski’ – article by Anna Adamiak ‘Wyzwania dla prawa autorskiego w muzyce’

Personal data breaches – what do you need to know?

In today’s digital world, data protection is becoming an increasingly important topic. Every organisation that processes personal data must be prepared for potential data breaches and know how to proceed in such a situation. In this article, we will discuss the most important issues related to personal data breaches in the light of the GDPR based on the publication of the UODO (Polish Data Protection Authority) entitled ‘Guide under the GDPR – obligations of administrators related to personal data breaches v2’.


What is a data breach?

A data breach is a security incident that leads to accidental or unlawful:

  • data destruction
  • data loss
  • data modification
  • unauthorised disclosure of data
  • unauthorised access to data

A breach can be both a deliberate action (e.g. a cyber attack) and an accidental event (e.g. losing a data carrier). The key point is that the breach concerns personal data being processed and can have a negative impact on the rights and freedoms of the data subjects.

Why are breaches dangerous?

Data breaches can have serious consequences for data subjects, such as:

  • physical injury
  • property damage (e.g. identity theft, financial fraud)
  • non-pecuniary damage (e.g. damage to reputation, mental stress)

Even seemingly insignificant incidents can have far-reaching consequences. It is therefore important that data controllers respond appropriately to any violations.


Who is responsible for data protection?

The main responsibility lies with the data controller, i.e. the entity that determines the purposes and means of processing personal data. It is the controller who must implement appropriate technical and organisational measures to ensure data security.

The following also play an important role:

  • Processors – process data on behalf of the controller
  • Data Protection Officers (DPO) – advise and monitor compliance with the GDPR

What are the responsibilities of the controller?

In the context of personal data breaches, the controller has the following responsibilities:

  1. Preventing breaches by implementing appropriate safeguards
  2. Detecting and identifying breaches
  3. Responding to breaches, which includes:
     • remediating the breach and minimising its effects
     • assessing the risk associated with the breach
     • reporting the breach to the supervisory authority (if there is a risk)
     • notifying the data subjects (in the case of high risk)
     • documenting the breach


How can data breaches be prevented?

The key is to implement appropriate technical and organisational measures, such as:

  • Data encryption and pseudonymisation
  • Regular testing and evaluation of the effectiveness of security measures
  • Employee training
  • Incident response procedures
  • Control of data access
  • Data backups

The selection of measures should be based on an analysis of the risks associated with the processing.

How to detect breaches?

Controllers should implement monitoring and incident detection measures, such as:

  • Intrusion detection systems (IDS/IPS)
  • Anti-virus software
  • Analysis of system logs
  • Procedures for reporting incidents by employees

It is also important to train staff to recognise potential breaches.


What to do after a breach has been detected?

After a breach has been detected, the controller should:

  1. Take immediate action to contain the breach and minimise its impact
  2. Assess the risk to the rights and freedoms of data subjects
  3. Report the breach to the supervisory authority within 72 hours of becoming aware of it, unless the breach is unlikely to result in a risk to the rights and freedoms of natural persons
  4. Notify the data subjects if there is a high risk
  5. Document the breach and the measures taken
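
The decision logic above can be sketched in a few lines of Python. This is an illustrative simplification of Articles 33–34 GDPR, not legal advice; the risk labels used here are assumptions made for the example:

```python
from datetime import datetime, timedelta

def breach_obligations(risk: str) -> list[str]:
    """Obligations triggered by the assessed risk level (simplified sketch)."""
    obligations = ["document the breach internally"]  # always required
    if risk in ("risk", "high risk"):
        obligations.append("report to the supervisory authority within 72 hours")
    if risk == "high risk":
        obligations.append("notify the data subjects without undue delay")
    return obligations

def reporting_deadline(awareness: datetime) -> datetime:
    """Latest moment to notify the authority after becoming aware of the breach."""
    return awareness + timedelta(hours=72)

print(breach_obligations("high risk"))
print(reporting_deadline(datetime(2025, 3, 3, 14, 30)))  # 2025-03-06 14:30:00
```

Note that the 72-hour clock runs from the moment the controller becomes aware of the breach, not from the moment the breach occurred.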

Reporting breaches to the supervisory authority

The notification to the President of the Personal Data Protection Office should include:

  • A description of the nature of the breach
  • The categories and approximate number of data subjects
  • The possible consequences of the breach
  • The measures taken to remedy the breach
  • The contact details of the Data Protection Officer or other contact point

The notification can be made electronically via a dedicated form or ePUAP.


Notification of data subjects

In the event of a high risk, the controller must notify the data subjects without undue delay. The notification should:

  • Be written in simple and clear language
  • Describe the nature of the breach
  • Include the contact details of the DPO or other contact point
  • Describe the possible consequences of the breach
  • Describe the measures taken to remedy the breach
  • Include recommendations for individuals to minimise potential negative effects

Notifications can be made directly (e.g. by email) or through a public announcement.

Documenting breaches

The controller must document all breaches, regardless of whether they were reported. The documentation should include:

  • The circumstances of the breach
  • Its effects
  • The remedial measures taken
  • The reasoning behind the decision regarding the report/notification

The documentation serves as proof of compliance with the GDPR and may be subject to inspection by the supervisory authority.


Cross-border personal data breaches

A cross-border data breach is an incident that involves the processing of personal data in more than one member state of the European Union. This may be because the controller or processor has establishments in several EU countries, or because the breach affects data subjects in different member states.

In the case of cross-border data breaches, the incident reporting and management process becomes more complex. Controllers must cooperate with supervisory authorities in different countries and also take into account differences in local regulations and procedures. It is crucial to quickly determine which supervisory authority is the lead authority in a given case and to ensure effective communication between all parties involved. The cross-border nature of the breach can also affect the risk assessment and the way in which data subjects are notified, especially when it is necessary to take into account cultural and linguistic differences in different countries.

Summary

Responding appropriately to personal data breaches is crucial to protecting the rights of data subjects. This requires controllers to:

  • Implement appropriate safeguards
  • Prepare incident response procedures
  • Act quickly in the event of a breach
  • Communicate transparently with the supervisory authority and data subjects

Remember that the main purpose of these measures is to protect the rights and freedoms of individuals, not to avoid penalties. A responsible approach to data protection builds trust and minimises the negative effects of possible violations.

Want to know more?

Read the new guide from the UODO (the Polish Data Protection Authority):

https://uodo.gov.pl/pl/138/3561


What’s new in the guide?

The new version takes into account the latest interpretations of the regulations, case law and practical tips that will help controllers make the right decisions in the event of a personal data breach. It includes, among other things:

  • updated procedures for responding to breaches (reporting to the President of the Personal Data Protection Office);
  • practical examples and case studies;
  • guidelines on cooperation with the President of the Personal Data Protection Office and other supervisory authorities;
  • key recommendations on risk assessment and breach prevention.

 

MiCA implementation begins – what does it mean for the crypto market in Poland?

MiCA and Polish regulations – who are the new regulations aimed at?

The MiCA (Markets in Crypto-Assets Regulation) regulation introduces uniform rules for the crypto-asset market in the European Union. The new regulations cover both the issuance of tokens and the activities of crypto-asset service providers (CASP).
In practice, this means that entities offering crypto assets to the public or operating trading platforms will have to meet certain licensing and transparency requirements.


Who is subject to MiCA?

According to MiCA, a crypto-asset service provider is a legal person or company that professionally provides crypto-asset services to clients. To operate legally, each such entity must obtain a CASP licence, which in Poland will be issued by the Polish Financial Supervision Authority (KNF).

The regulation distinguishes ten categories of crypto-asset services, defined in Article 3(1)(16) of MiCA:

  • providing custody and administration of crypto-assets on behalf of clients;
  • operating a trading platform for crypto-assets;
  • exchanging crypto-assets for funds;
  • exchanging crypto-assets for other crypto-assets;
  • executing orders for crypto-assets on behalf of clients;
  • placing crypto-assets;
  • receiving and transmitting orders for crypto-assets on behalf of clients;
  • providing advice on crypto-assets;
  • managing a portfolio of crypto-assets;
  • providing crypto-asset transfer services on behalf of clients.

Any entity that provides even one of the above services must comply with the new MiCA regulations.


MiCA and the Polish crypto market – key changes

Until now, the Polish cryptocurrency market has operated in the absence of dedicated sectoral regulations. The only requirement was to be entered in the register of activities in the field of virtual currencies and to comply with the AML Act.
After MiCA comes into force, the situation will change dramatically:

✅ The Polish Financial Supervision Authority (KNF) will gain full supervisory powers over cryptocurrency exchanges, crypto bureaux de change and custody companies,
✅ Obligation to obtain a CASP licence – operating without one will become illegal after a transitional period,
✅ Introduction of capital requirements – 50,000, 125,000 or 150,000 euros, depending on the services provided,
✅ Necessity to implement risk management, audit and compliance systems.
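
As an illustration, the tiered capital requirements can be sketched as a simple lookup. This is a simplified sketch based on the minimum-capital classes in MiCA's Annex IV; the assignment of services to classes is abbreviated in the comments and should always be verified against the regulation itself:

```python
# Illustrative sketch of MiCA minimum capital classes (Annex IV) – not legal advice.
CAPITAL_REQUIREMENTS_EUR = {
    1: 50_000,   # e.g. order execution, placing, advice, portfolio management, transfers
    2: 125_000,  # e.g. custody and administration, exchange for funds or other crypto-assets
    3: 150_000,  # operating a trading platform
}

def minimum_capital(service_classes: set[int]) -> int:
    """A provider offering several services must satisfy the highest applicable class."""
    return max(CAPITAL_REQUIREMENTS_EUR[c] for c in service_classes)

# A provider running both custody (class 2) and advisory (class 1) services:
print(minimum_capital({1, 2}))  # 125000
```

The point of the `max()` rule is that the requirements do not add up: a provider is bound by the single highest class among the services it offers.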


For Polish companies, this means having to comply with strict requirements or ceasing their operations. It is also possible that market consolidation will accelerate, with smaller entities being taken over by larger companies. 

Is Poland ready for MiCA?

The Polish draft law on the crypto-asset market generally reflects the assumptions of MiCA, adapting the national legal order to the new EU regulations, but potential legislative delays could lead to regulatory chaos.

📌 Transition period – Poland shortened the transitional period to the end of June 2025; to effectively limit the EU ‘grace period’ in this way, it had to implement the relevant regulations before 30 December 2024.
📌 Risk of companies leaving – as it is still not possible to apply for a licence in Poland, cryptocurrency companies may move their operations to other EU countries where the procedures are already being implemented.


Summary – how to prepare your company for MiCA?

MiCA is a revolution for the crypto market in Poland. Every company operating in this sector should as soon as possible:

✅ Check whether it is subject to MiCA and what licences it will need to obtain,
✅ Prepare to implement organisational and capital requirements,
✅ Monitor legislative progress and adapt to new regulations,
✅ Consider seeking legal assistance in the process of obtaining a CASP licence.

Failure to comply with the new regulations will mean that you will have to either close down or move your business to another EU country.
If you run a company related to the cryptocurrency market, contact us now to prepare for the changes!

 

SME Fund 2025 – funding for trademark registration for SMEs

Protect your brand and benefit from funding in 2025!

Good news for small and medium-sized enterprises (SMEs)! On 3 February 2025, the next edition of the SME Fund was launched, offering funding for the registration of trademarks at the European Union level. This programme is an excellent opportunity to protect your brand and increase your competitiveness in the European market.


What is SME Fund 2025?

The SME Fund is an initiative of the European Union Intellectual Property Office (EUIPO) that aims to support small and medium-sized enterprises in protecting their intellectual property rights. Under the programme, entrepreneurs can obtain a voucher worth up to 1,000 euros to cover the costs of registering trademarks and industrial designs.

Why is it worth protecting a trademark?

Registering a trademark at the EU level provides a company with a number of benefits, including:

✅ Exclusive right to the brand throughout the European Union,

✅ Protection against unfair competition and counterfeiting,

✅ Increased company value and market recognition,

✅ Secure investment in branding and marketing,

✅ Facilitated acquisition of investors and financing.


Who can benefit from the funding?

The SME Fund 2025 programme is aimed at small and medium-sized enterprises (SMEs) based in the European Union, including:

✔ Natural persons conducting business activities,

✔ Commercial law companies,

✔ Civil law partnerships.

The prerequisite for participation is that the SME criteria are met, i.e. fewer than 250 employees and an annual turnover not exceeding 50 million euros.

How do I get funding from the SME Fund 2025?

To receive funding, you must go through a four-step process:

1. Submission of an application for funding via the EUIPO platform,

2. Receiving a decision on the voucher award,

3. Registering the trademark and paying the fees,

4. Applying for reimbursement of the costs incurred.

The whole process takes about 2 months – from submitting the application to receiving reimbursement.


How much can you get? Amount of funding

Under the SME Fund 2025 programme, entrepreneurs can get:

💰 75% reimbursement of fees for trademark applications at EU or national level,

💰 50% reimbursement of basic fees for international applications,

💰 The maximum funding amount is 1,000 euros per SME.
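
A quick worked example of how the voucher cap interacts with the reimbursement rates. The 850-euro fee used below is a hypothetical figure chosen only for illustration; check the current EUIPO fee schedule for actual amounts:

```python
VOUCHER_CAP_EUR = 1_000  # maximum reimbursement per SME under the programme

def reimbursement(fees_eur: float, rate: float, already_used_eur: float = 0) -> float:
    """Reimbursement at the given rate, limited by the remaining voucher balance."""
    remaining = max(VOUCHER_CAP_EUR - already_used_eur, 0)
    return min(fees_eur * rate, remaining)

# Hypothetical example: an application fee of 850 EUR at the 75% rate
print(reimbursement(850, 0.75))  # 637.5

# If 800 EUR of the voucher has already been used, only 200 EUR remains available
print(reimbursement(900, 0.75, already_used_eur=800))  # 200
```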

Why is it worth acting quickly?

🕒 The programme’s budget is limited!

Funding is awarded on a first-come, first-served basis, so it’s a good idea to apply as soon as possible after the programme launches on 3 February 2025.

How can we help?

Our law firm has been helping entrepreneurs register trademarks for years. We offer comprehensive support at every stage of the process:

🔹 Analysing the registrability of the trademark,

🔹 Preparing and submitting an application for funding,

🔹 Registering the trademark with the relevant authority,

🔹 Handling the formalities related to reimbursement.

Protect your brand with SME Fund 2025 and our law firm! With the SME Fund 2025 and our support, you can protect your brand at minimal cost.

📞 Contact us today and arrange a free consultation:

Don’t wait! Funding is limited – apply as soon as possible and protect your brand in the EU!

You can find more information here:

https://uprp.gov.pl/pl/aktualnosci/informacje/rusza-kolejna-edycja-programu-pn-fundusz-dla-msp

 

 

 

Tax aspects of doing business in Germany

Dear Sir or Madam,

We would like to invite you to a free webinar on the tax aspects of doing business in Germany, organised by the law firm Leśniewski Borkiewicz Kostka & Partners (LBK&P), in cooperation with the German tax consultancy Dr Klein, Dr Mönstermann International Tax Services GmbH. The event is aimed at both companies just starting out on the German market and those already operating there – especially in the SME sector and large enterprises. Please feel free to forward the invitation to your friends and colleagues who may be interested in the topic of the webinar.

📅 Date: 18 February 2025

⏰ Time: 10:00

📍 Venue: online

During the webinar, LBK&P experts Paweł Suliga (a tax advisor in Poland and Germany and a recognised expert in German tax and business law, who advises, among others, Polish construction companies listed on the Warsaw Stock Exchange on their German operations) and Bartłomiej Chałupiński (a tax advisor in Poland and head of tax at LBK&P) will discuss key issues concerning tax regulations and obligations related to doing business in Germany.

Agenda:

✅ The most important tax changes in Germany since 01.01.2025.

✅ What tax obligations are associated with setting up a business in Germany?

✅ Documentation and information obligations when conducting cross-border business – what to look out for.

NOTE:

After the webinar, you will have the opportunity to take part in a free, 30-minute individual consultation with our experts (separate booking required, limited number of places). To book your consultation, please send an email to rezerwacje@lbplegal.com.

🔗 Registration link for the webinar – the link to participate in the webinar and the login details will be sent to you at least 3 days before the event.

Details of the event are also described in the attachment, along with a link to registration.

We look forward to seeing you there!

Attachment for download:

Webinar – Podatkowe aspekty prowadzenia działalności gospodarczej w Niemczech (Tax aspects of doing business in Germany)

Articles 1-5 of the AI Act have been in force since 2 February 2025, failure to comply may result in heavy fines.

🚨 IMPORTANT INFORMATION – many companies in the EU have not yet taken the required action. Here are the most important details.


Since 2 February 2025, the first provisions of the Artificial Intelligence Act (AI Act) have applied, aimed at increasing safety and regulating the AI market in the European Union.

The most important changes include:

  • Prohibited practices – a ban on placing on the market, putting into service or using AI systems that meet the criteria of prohibited practices. Examples include manipulative systems that exploit human weaknesses, social scoring systems and systems that analyse emotions in the workplace or in education. Violations of these provisions are subject to fines of up to 35 million euros or 7% of a company’s annual global turnover.
  • Obligation of AI literacy. Employers must provide their employees with adequate training and knowledge about AI so that they can safely use AI systems at work. Lack of AI training can lead to non-compliance with regulations and increase the risk of incorrect use of AI systems. In connection with AI literacy, it is also worth ensuring the implementation of an AI use policy in the company. How to do it?


The policy for using AI in a company can include, for example, clear procedures, rules for using AI, conditions for the approval of systems, procedures in case of incidents and the appointment of a person responsible for the effective implementation and use of AI in the organisation (AI Ambassador).

 

The importance of AI education (Article 4 AI Act)

Awareness and knowledge of AI is not only a legal requirement, but also a strategic necessity for organisations. Article 4 of the AI Act obliges companies to implement training programmes tailored to the knowledge, role and experience of their employees.

‘Suppliers and deployers of AI systems shall take measures to ensure, to the greatest extent possible, an appropriate level of competence with regard to AI among their personnel and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training, and the context in which the AI systems are to be used, as well as taking into account the persons or groups of persons against whom the AI systems are to be used.’

Failure to act in this regard has serious consequences, including:

  • The risk of violating personal data protection and privacy regulations.
  • An increased likelihood of violating the law and incurring financial penalties.

In addition to regulatory compliance, AI education helps to build a culture of responsible use of technology and minimises potential operational risks.

Where can I find guidance?

The ISO/IEC 42001 standard on artificial intelligence management systems can help. As part of the measures relating to the relevant competences of persons dealing with AI in an organisation, the standard indicates, for example, the following issues:

  • mentoring
  • training
  • transferring employees to appropriate tasks within the organisation based on an analysis of their competences.

At the same time, important roles or areas of responsibility should be assigned to, for example:

  • supervision of the AI system
  • security
  • safety
  • privacy


Prohibited AI practices (Article 5 AI Act)

The AI Act prohibits the use of certain AI systems that could pose serious risks to society. Suppliers and companies using AI must ensure that they are not directly or indirectly involved in their development or implementation. Among other things, the AI Act lists specific prohibited practices that are considered particularly dangerous. These include:

  • Subliminal or manipulative techniques – AI systems that subconsciously change the user’s behaviour so that they make a decision they would not otherwise have made.
  • Exploitation of human weaknesses – systems that take advantage of a person’s disability, social or economic situation.
  • Social scoring – systems that evaluate citizens and grant them certain rights based on their behaviour.
  • Assessment of the risk of committing a crime – systems that profile individuals and evaluate their individual characteristics without legitimate grounds.
  • Creation of facial image databases – untargeted acquisition of images from the internet or city surveillance for the purpose of creating facial recognition systems.
  • Analysis of emotions in the workplace or education – AI systems that analyse the emotions of employees or students.
  • Biometric categorisation of sensitive data – using biometric data to gain information about race, political views, etc.
  • Remote biometric identification in real time – using facial recognition systems in public spaces for law enforcement purposes (subject to narrow exceptions).

Where to look for guidance?

  • Draft Guidelines on prohibited artificial intelligence (AI) practices – published by the Commission on 4 February 2025 to ensure consistent, effective and uniform application of the AI Act across the EU.

Important dates:

  • From 2 February 2025 – Chapter II (prohibited practices)
  • From 2 August 2025 – Chapter V (general-purpose models), Chapter XII (Penalties) without Article 101
  • From 2 August 2026 – Article 6(2) and Annex III (high-risk systems), Chapter IV (transparency obligations)
  • From 2 August 2027 – Article 6(1) (high-risk systems) and corresponding obligations
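
For planning purposes, the timetable above can be expressed as a small lookup. This is only a sketch: the descriptions are abbreviated from the list above, and the full scope of each milestone should be checked against the AI Act itself:

```python
from datetime import date

# Application dates of selected AI Act provisions, abbreviated from the list above.
MILESTONES = {
    date(2025, 2, 2): "Chapter II (prohibited practices)",
    date(2025, 8, 2): "Chapter V (general-purpose models), Chapter XII (penalties) except Art. 101",
    date(2026, 8, 2): "Art. 6(2) + Annex III (high-risk systems), Chapter IV (transparency)",
    date(2027, 8, 2): "Art. 6(1) (high-risk systems) and corresponding obligations",
}

def provisions_in_force(on: date) -> list[str]:
    """Milestones whose application date has already passed on the given day."""
    return [desc for start, desc in sorted(MILESTONES.items()) if start <= on]

print(provisions_in_force(date(2025, 9, 1)))
```

Running the example for 1 September 2025 returns the first two milestones, since only the February and August 2025 dates have passed by then.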

Key takeaways for companies:

  • Compliance with Articles 1-5 of the AI Act is mandatory and cannot be ignored.
  • Training in AI is crucial to avoid mistakes related to employee ignorance and potential company liability.
  • Conducting audits of technology providers is necessary to ensure that AI systems comply with regulations.
  • Implement an AI use policy – introduce clear documentation to organise the risks. The policy can include, for example, clear procedures, rules for using AI, conditions for admitting systems, how to deal with incidents, and appointing a person responsible for supervision (AI Ambassador).
  • Developing AI tools in accordance with the law – companies developing AI tools must consider legal and ethical aspects at every stage of development. This includes analysing the compliance of the system’s objectives with the law, database legality, cybersecurity and system testing. It is important that the process of creating AI systems complies with the principles of privacy by design and privacy by default under the GDPR – How to create AI tools legally?.

Be sure to check out these sources:

https://www.gov.pl/attachment/9bb34f05-037d-4e71-bb7a-6d5ace419eeb

DeepSeek – Chinese AI in open source mode. Does China have a chance against OpenAI?

DeepSeek is a series of Chinese language models that impresses with its performance and low training costs. Thanks to their open source approach, DeepSeek-R1 and DeepSeek-V3 are causing quite a stir in the AI industry.


Source: www.deepseek.com

DeepSeek: a revolution in the world of AI from China

DeepSeek is increasingly being mentioned in discussions about the future of artificial intelligence. This Chinese project provides open-source large language models (LLMs) with high performance and, crucially, significantly lower training costs than competing solutions from OpenAI or Meta.

In this article, we will take a closer look at DeepSeek-R1 and DeepSeek-V3 and provide an update on the development and distribution of these models based on official materials available on the Hugging Face platform as well as publications from Spider’s Web and china24.com.

Table of contents

  1. How was DeepSeek created?
  2. DeepSeek-R1 and DeepSeek-V3: a brief technical introduction
  3. Training costs and performance: what’s the secret?
  4. Open source and licensing
  5. DeepSeek-R1, R1-Zero and Distill models: what are the differences?
  6. The rivalry between China and the USA: sanctions, semiconductors and innovation
  7. Will DeepSeek threaten OpenAI’s dominance?
  8. Summary
  9. Sources


How was DeepSeek created?

Press reports indicate that DeepSeek was created by High-Flyer Capital Management, a company founded in China in 2015 that until recently was almost unknown in the IT industry outside Asia. That changed dramatically with DeepSeek, a series of large language models that took Silicon Valley experts by storm.

However, DeepSeek is not only a commercial project – it is also a breath of fresh air in a world where closed solutions with huge budgets, such as models from OpenAI (including GPT-4 and OpenAI o1), usually dominate.

DeepSeek-R1 and DeepSeek-V3: a brief technical introduction

According to information from the official project page on Hugging Face, DeepSeek is currently publishing several variants of its models:

  1. DeepSeek-R1-Zero: created through advanced training without the initial SFT (Supervised Fine-Tuning) stage, focusing on strengthening reasoning skills (the so-called chain-of-thought).
  2. DeepSeek-R1: in which the authors included additional, preliminary fine-tuning (SFT) before the reinforcement learning phase, which improved the readability and consistency of the generated text.
  3. DeepSeek-V3: the base model from which the R1-Zero and R1 variants described above are derived. DeepSeek-V3 has up to 671 billion parameters and was trained in two months at a cost of approximately $5.58 million (source: chiny24.com).

Technical background

  • The high number of parameters (up to 671 billion) means that very complex statements and analyses can be generated.
  • Thanks to the optimised training process, even such a large architecture does not require a budget comparable to that of OpenAI.
  • The main goal: to independently develop multi-stage solutions and to minimise the ‘hallucinations’ that are so common in other models.

Training costs and performance: what’s the secret?

Both Spider’s Web and chiny24.com emphasise that the training costs of DeepSeek-R1 (approx. $5 million for the first version) are many times lower than the figures cited for GPT-4 and other closed OpenAI models, which run into billions of dollars.

Where does the recipe for success lie?

  • Proprietary methods of optimising the learning process,
  • Agile architecture that allows the model to learn more effectively with fewer GPUs,
  • Economical management of training data (avoiding unnecessary repetitions and precisely selecting the data set).

Open source and licensing

DeepSeek, unlike most of its Western competitors, relies on open source. As stated in the official documentation of the model on Hugging Face:

‘DeepSeek-R1 series support commercial use, allow for any modifications and derivative works, including, but not limited to, distillation…’

This means that the community is not only free to use these models, but also to modify and develop them. In addition, several variants have already been developed within the DeepSeek-R1-Distill line, optimised for lower resource requirements.

Important:

  • The DeepSeek-R1-Distill models are based, among other things, on the publicly available Qwen2.5 and Llama3, which are linked to the relevant Apache 2.0 and Llama licences.
  • Nevertheless, the whole is made available to the community on very liberal terms – which stimulates experimentation and further innovation.

DeepSeek-R1, R1-Zero and Distill models: what are the differences?

From the documentation published on Hugging Face, a three-tier division emerges:

1. DeepSeek-R1-Zero

  • Training only with RL (reinforcement learning), without prior SFT,
  • The model can generate very complex chains of thought (chain-of-thought),
  • However, it can suffer from problems such as endless repetition and poor readability of the generated text.

2. DeepSeek-R1

  • Including the SFT phase before RL solved the problems noticed in R1-Zero,
  • Better consistency and less tendency to hallucinate,
  • According to benchmarks, it is comparable to OpenAI o1 in math, programming, and analytical tasks.

3. DeepSeek-R1-Distill

  • ‘Slimmed-down’ versions of the model (1.5B, 7B, 8B, 14B, 32B, 70B parameters),
  • Enable easier implementation on weaker hardware,
  • Created by distillation (transferring knowledge from the full R1 model to smaller architectures).
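The distillation mentioned above can be illustrated with a toy example. The snippet below is only a conceptual sketch of the standard soft-target objective (the KL divergence between temperature-softened teacher and student outputs); it is not DeepSeek’s actual training code, and all names and numbers are illustrative.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Turn raw logits into a probability distribution, softened by temperature."""
    z = np.asarray(logits, dtype=float) / temperature
    z = z - z.max()            # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the softened teacher and student distributions --
    the core objective used to transfer knowledge to a smaller model."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return float(np.sum(p * np.log(p / q)))

# The student is nudged towards the teacher's output distribution:
loss_far  = distillation_loss([4.0, 1.0, 0.5], [0.1, 3.0, 1.0])   # poor student
loss_near = distillation_loss([4.0, 1.0, 0.5], [3.9, 1.1, 0.4])   # good student
```

In the R1-Distill line, the “teacher” is the full DeepSeek-R1 model and the “students” are the smaller Qwen- and Llama-based architectures listed above.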

The rivalry between China and the USA: sanctions, semiconductors and innovation

As noted by the ‘South China Morning Post’ (cited by chiny24.com), the development of Chinese AI models is taking place under conditions of limited access to advanced semiconductors due to US sanctions.

Meanwhile, Chinese companies – including DeepSeek and ByteDance (Doubao) – are showing that even in such an unfavourable climate, they are able to create models:

  • that are not inferior to Western solutions,
  • and often much cheaper to maintain.

As Jim Fan (researcher at Nvidia) points out, the DeepSeek project may be proof that innovation and restrictive conditions (less funding, sanctions) do not have to be mutually exclusive.

Will DeepSeek threaten OpenAI’s dominance?

High-Flyer Capital Management and other Chinese companies are entering the market with a model that:

  • performs better than Western competitors in some tests,
  • is cheaper to develop and maintain,
  • makes open repositories available, allowing for the rapid development of a community-based ecosystem.

If OpenAI (and other giants) do not develop a strategy to compete with cheaper and equally good models, Chinese solutions – such as DeepSeek or Doubao – could capture a significant share of the market.

Is the era of expensive AI models coming to an end?

DeepSeek is a prime example of how the era of gigantic and ultra-expensive AI models may be coming to an end. Open source, low training costs and very good benchmark results mean that ambitious start-ups from China could shake up the current balance of power in the artificial intelligence industry.

Due to the growing technological tensions between China and the USA, the further development of DeepSeek and similar projects will probably become one of the main themes in the global rivalry for the title of AI leader.

Sources

  1. ‘Chinese DeepSeek beats all OpenAI models. The West has a big problem’ – Spider’s Web
  2. ‘DeepSeek. Chinese startup builds open-source AI’ – chiny24.com
  3. Official DeepSeek-R1 website on Hugging Face

Author: own work based on the indicated publications.

Text intended for information and journalistic purposes.

And the Oscar goes to … AI Brody

The Oscar nominations have been announced. One of the favourites is Brady Corbet’s The Brutalist, starring Adrien Brody, who is himself nominated for the prestigious award. The film tells the story of a Jewish architect who emigrates from post-war Europe to the USA in search of a safe haven for himself and his wife. Even before the nominations were announced, there was a heated debate about whether Adrien Brody should receive the award for his phenomenal performance, because his accent, heard throughout the entire film, was refined with AI tools. Evidently this did not stand in the way of Brody’s nomination, but will the controversy surrounding the use of AI in the film ultimately sway the verdict of the American Film Academy?

AI improved Adrien Brody’s accent in The Brutalist – how does technology change cinema?

The film The Brutalist uses artificial intelligence to subtly correct the Hungarian pronunciation of actors Adrien Brody and Felicity Jones. The film’s editor, Dávid Jancsó, revealed that Respeecher technology was used to improve the authenticity of the Hungarian dialogue. Both actors worked with a dialect coach, but the producers wanted perfect pronunciation, which is difficult to achieve using traditional methods. Are such practices the future of cinema, or rather a threat to the authenticity of acting performances?

How Respeecher technology works

Respeecher is an advanced speech synthesis tool that allows the voice of one person to be transformed into the voice of another, while retaining all the emotions, intonations and natural sound. The process is based on machine learning algorithms that work in several key stages:

  1. Collecting voice data – the developers first record samples of the target voice. In the case of The Brutalist, these were recordings of the actors who were to use a Hungarian accent.
  2. Acoustic analysis – the Respeecher system analyses the unique characteristics of the voice, such as timbre, speech rate and the way certain phonemes are pronounced.
  3. Machine learning – based on the provided samples, the algorithms learn the characteristics of the voice and then generate a digital version of it that faithfully reflects the original.
  4. Sound synthesis – in the final step, the actor’s voice is modified to fit the requirements of the creators; in this case, the authentic sound of a Hungarian accent.

The advantage of this approach is that the actor does not have to re-record the dialogue. As emphasised by the film’s creators, the technology was used exclusively as a supporting tool, not replacing the work of the actors.
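The four stages above can be pictured as a toy pipeline. To be clear, this is not Respeecher’s actual API or algorithm; it is just an illustrative skeleton with the same shape, using random noise in place of real recordings and trivial statistics in place of real acoustic features.

```python
import numpy as np

def collect_voice_data(n_samples=100, seed=0):
    """Stage 1: gather recordings of the target voice (here: random noise)."""
    rng = np.random.default_rng(seed)
    return [rng.standard_normal(1600) for _ in range(n_samples)]

def acoustic_analysis(samples):
    """Stage 2: reduce each recording to simple 'acoustic features' (mean, spread)."""
    return np.array([[s.mean(), s.std()] for s in samples])

def fit_voice_profile(features):
    """Stage 3: 'learn' the voice -- here, just the average feature vector."""
    return features.mean(axis=0)

def synthesize(source, profile):
    """Stage 4: shift a source recording towards the learned target profile."""
    normalized = (source - source.mean()) / (source.std() + 1e-8)
    return normalized * profile[1] + profile[0]

samples   = collect_voice_data()
profile   = fit_voice_profile(acoustic_analysis(samples))
converted = synthesize(np.ones(1600) + np.arange(1600) * 1e-3, profile)
```

A real system would of course replace the mean-and-spread “features” with learned acoustic representations, which is where the machine-learning stage does its actual work.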

AI in the film industry – breakthrough or threat?

The use of AI in films such as The Brutalist is just the tip of the iceberg. The technology is finding more and more applications in cinematography:

  • Special effects – AI allows for the generation of realistic visual effects, which reduces production costs.
  • Post-production – AI-based tools automate processes such as editing, colour correction and sound quality improvement.
  • Scriptwriting – algorithms that analyse popular film plots can suggest new storylines.
  • Digitalisation of actors – AI makes it possible to rejuvenate actors or ‘revive’ deceased ones for use in new productions. (Raindance)

Legal and ethical aspects of using AI in cinematography

The use of AI in film raises many legal and ethical questions, which are becoming increasingly pressing with the growing use of this technology. In the case of The Brutalist, we can identify several key issues:

  1. Copyright – Does the digital voice generated by AI belong to the actor, the technology company, or the film producers? The use of an actor’s voice in synthetic form may give rise to claims for remuneration for additional use of the image.
  2. Transparency – Viewers were not informed about the use of Respeecher during the first screenings of the film. Should filmmakers openly communicate such practices, especially when they affect the perception of performances?
  3. Impact on the acting profession – Critics fear that the development of AI may lead to a decrease in the demand for voice actors or even actors themselves, as their voices could be generated synthetically.

Controversy and impact on Oscar chances

The revelation of AI use in The Brutalist has sparked controversy in the film industry. Concerns have been raised that such practices could undermine the authenticity of acting performances and lead to ethical dilemmas surrounding the use of technology in the arts. In the context of the upcoming Oscars, some experts suggest that the use of AI in The Brutalist could affect Adrien Brody’s chances of winning the Best Actor award. Nevertheless, both the director and the film editor assure that AI was only a supporting tool, not a substitute for the talent and work of the actors.

Summary

The Brutalist has become a symbol of a new era in cinematography, in which artificial intelligence is becoming a creative tool, but also a subject of controversy. The case of Adrien Brody and the use of Respeecher opens a discussion about the limits of technology in art. Should AI only support creators or will it dominate the industry, replacing human creativity? One thing is certain – the future of cinema will be closely linked to the development of technology.

BREAKING: new executive order from President Trump

On 23 January 2025, President Donald Trump signed an executive order that defines the United States’ new priorities in the field of artificial intelligence (AI). The decision accompanies the repeal of EO 14110, issued by Joe Biden in 2023, and introduces new rules governing the development of AI in the US. This step by the Trump administration underlines the importance of AI in maintaining US global dominance, promoting innovation and ensuring national security.

New priorities in AI policy

President Trump’s executive order sets out the United States’ main objectives in the field of artificial intelligence. The document emphasises that AI development should be based on free market principles and that ideological biases should be avoided. A key element of the new policy is to strengthen the US’s global position in AI, which is intended to promote economic competitiveness, human development and national security.

In contrast to the Biden administration’s approach, which called for stricter security tests and the sharing of results with the government, the new policy favours greater freedom for technology companies. Trump described the previous regulations as too restrictive, which he believed could have hampered the development of AI in the United States.

Key elements of Trump’s regulation

  1. Repeal of EO 14110 – The Biden administration’s order, which aimed to ensure the safe and trustworthy development of AI, was considered by the Trump administration to be overly restrictive of innovation. The new rules pave the way for a review of all regulations and actions resulting from the repealed order.
  2. Strengthening the US position as a global leader in AI – The document emphasises that the goal of US policy is to maintain and extend dominance in the field of artificial intelligence, in order to promote technological innovation, economic competitiveness and national security.
  3. Development of an AI Action Plan – Within 180 days of the signing of the order, special advisors, including the Assistant to the President for Science and Technology and the Special Advisor for AI and Cryptocurrencies, in cooperation with the Assistant to the President for Economic Policy, the Assistant to the President for Domestic Policy, the Director of the Office of Management and Budget (OMB Director) and the heads of such executive departments and agencies as they deem relevant, are to present an action plan setting out the details of implementing the new priorities.
  4. Update of supervisory policies – The Office of Management and Budget (OMB) has been instructed to update existing memoranda on the supervision of AI systems to bring them into line with the new policy.

Significance for the United States

1. Impact on the technology sector

Trump’s new executive order could boost innovation in the AI sector by removing regulatory barriers. This will give technology companies more freedom to develop new systems and applications, which could strengthen the US global position in this field. At the same time, the lack of strict regulations raises concerns about the ethical use of technology and the risk of abuse.

2. National security and global competition

Emphasising the role of AI in the context of national security indicates the growing importance of this technology in defence and intelligence. As an innovation leader, the US must face competition primarily from China, which is also investing heavily in AI. The new policy is intended to ensure the technological advantage of the United States.

3. Criticism and controversy

The decision to repeal the Biden regulation has been criticised by numerous experts who fear that the lack of restrictions could lead to the development of dangerous AI technologies. Alondra Nelson of the Center for American Progress warns that the American public could be left unprotected from the potential harms of AI development.

AI Act – a pioneering regulation for artificial intelligence in the EU

The European Union, on the other hand, has taken a different approach. The AI Act is a regulation that puts the European Union at the forefront of global efforts to create a responsible and transparent legal framework for artificial intelligence. Published on 12 July 2024 in the Official Journal of the European Union, the regulation is crucial for shaping the future of AI technology in Europe, while ensuring a high level of protection for citizens’ and consumers’ rights.

The regulation defines AI systems in terms of their risk, classifying them into different levels (minimal, limited, high and unacceptable). High-risk AI systems, such as those used in healthcare, education or recruitment processes, will have to meet specific requirements regarding safety, transparency and reliability. Furthermore, applications of artificial intelligence that are considered unacceptable, such as mass biometric surveillance in public spaces or the manipulation of human behaviour, are prohibited.
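The tiered approach can be pictured as a simple lookup from use case to risk level. The category assignments below are simplified illustrations for the purpose of this sketch, not a legal classification of any real system.

```python
# Illustrative sketch of the AI Act's four risk tiers.
# The example use cases are simplified and hypothetical, not legal advice.
RISK_TIERS = {
    "unacceptable": {"mass biometric surveillance", "behavioural manipulation"},
    "high":         {"healthcare diagnostics", "recruitment screening",
                     "education scoring"},
    "limited":      {"chatbot"},       # transparency duties apply
    "minimal":      {"spam filter"},   # no specific obligations
}

def classify(use_case: str) -> str:
    """Return the risk tier for a use case; unknown cases default to 'minimal'."""
    for tier, cases in RISK_TIERS.items():
        if use_case in cases:
            return tier
    return "minimal"
```

In the regulation itself, of course, classification depends on detailed criteria and context rather than a fixed list, which is precisely what the compliance requirements for high-risk systems are meant to capture.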

Comparison with the US approach

In contrast to the more liberal US policy, the European Union places greater emphasis on prevention and the protection of citizens’ rights. In the EU, regulations are aimed at preventing potential risks associated with AI, such as algorithmic discrimination, surveillance or manipulation of public opinion.


Conclusions and outlook

President Trump’s new executive order shows that the administration is committed to the development of AI as a key element of the US economic and technological strategy. Free-market policies are intended to attract investment and promote innovation, but their side effects can lead to a lack of adequate ethical and legal safeguards.

ChatGPT not working. Thousands of user reports

Illustration generated by competitor #GoogleGemini.

On Thursday, 23 January, the popular ChatGPT stopped working. The outage was reported by thousands of concerned Internet users.

The website providing online access to ChatGPT went down for a while on Thursday, 23 January. The outage began around 1 p.m. CET, as can be clearly seen on the report chart of the Downdetector.pl service. Interestingly, the websites of OpenAI, the company behind ChatGPT, were still working at the same time.

Problems were also reported with other OpenAI services at the same time. The GPT-4o and GPT-4 models were not working. Some users reported that the chatgpt.com and chat.openai.com websites would not open; others noticed that ChatGPT was not responding to their questions. Applications built on the model were also unresponsive.

This is not ChatGPT’s first outage. In recent weeks there have been brief service interruptions; the biggest occurred in December, when a major failure in the United States also caused errors in other OpenAI services.
