AGI – a machine on a human scale. Is the law keeping up with technology?

General artificial intelligence (AGI) is becoming an increasingly real prospect, posing unprecedented challenges for legislators and society. The recent AI Action Summit in Paris and the International Association for Safe and Ethical AI (IASEAI ‘25) conference have shed light on the urgent need for a comprehensive legal framework for this groundbreaking technology.

ai

The origins of the idea of artificial intelligence

The concept of artificial general intelligence dates back to the 1950s, when computer science pioneers began to analyse the possibility of creating machines that could match human intelligence. Alan Turing, one of the forerunners of AI, proposed the Turing test – an experiment to assess whether a machine can conduct a conversation indistinguishable from a human one.

A turning point in the history of AI was the 1956 Dartmouth conference, where the term ‘artificial intelligence’ was officially coined. The predictions at the time were that human-level intelligence would be achieved quickly, but technological progress did not keep pace with researchers’ optimism

The crisis and revival of AI

The 1980s and 1990s brought changes in the approach to artificial intelligence. Research focused on narrower applications of AI, such as expert systems, and the idea of artificial general intelligence (AGI) faded into the background. It was not until the dynamic development of computing technology, big data and deep learning in the 21st century that researchers turned their attention to the topic of AGI once again.

AI

What is AGI?

Unlike current AI systems, AGI has the potential to perform a wide range of intellectual tasks at a level comparable to that of humans. This versatility brings with it both great opportunities and serious risks that must be properly regulated.

There is currently no legal definition of AGI, but given the rapid development of AI technology, it is likely that one will be needed in the near future.

Many companies and organisations are already trying to define AGI. For example, in 2024, OpenAI created a definition of AGI and five levels of advancement.

According to OpenAI:

Today’s chatbots, such as ChatGPT, are at the first level of development.

OpenAI claims to be approaching the second level, which means a system capable of solving basic problems at the level of a person with a doctorate.

The third level is AI acting as an agent that can make decisions and perform tasks on behalf of the user.

At the fourth level, artificial intelligence achieves the ability to create new innovations.

The fifth, highest level, means AI that can do the work of entire organisations of people.

OpenAI previously defined AGI as ‘a highly autonomous system that surpasses humans in most economically valuable tasks’.

Stargate Project

Stargate Project

One of the most ambitious AGI projects is Project Stargate, which envisages investments of $500 billion in the development of AI infrastructure in the USA. The main goal is to strengthen the United States’ position as a leader in the field of AGI, as well as to create hundreds of thousands of jobs and generate global economic benefits.

AGI has the potential to take over a variety of tasks that have so far required human creativity and adaptability.

Read more about Stargate:

https://lbplegal.com/stargate-project-nowa-era-infrastruktury-ai-w-stanach-zjednoczonych/

Why is AGI regulation crucial? Expert opinions

During IASEAI ‘25 in Paris and the AI Action summit, experts emphasised the need to create comprehensive AGI regulations.

  • Dr Szymon Łukasik (NASK) pointed out that the law must keep pace with innovation while ensuring its security.
  • Prof Stuart Russell called for the introduction of mandatory safeguards to ensure that AGI is not used in a harmful way.
  • Joseph Stiglitz emphasised the need to take social interests into account in the legislative process.
  • Max Tegmark from the Future of Life Institute emphasised the importance of international cooperation, especially between the USA and China. Only a global approach will allow for the effective regulation of AGI development.

AGI

Risks associated with AGI

Many experts in Paris expressed concern about the development of AI, which they believe is happening very quickly; some believe AGI will emerge within the next 10 years. Key questions about AGI concerned the following issues:

  • Can AGI itself modify rules to achieve its objectives?
  • What should be the mechanisms for emergency shutdown of AGI?
  • How can we ensure that the AI’s objectives do not conflict with the interests of humanity?

Researchers are already verifying whether a model with a set objective, e.g. winning a game, is able to modify the rules of the game in order to win. In view of this fact, it is necessary to model the objectives of the AI system in such a way that they do not conflict with the interests of humanity.

These considerations and proposals aim to create a legal framework and standards that will allow for the safe and ethical development of AGI, while not hindering technological progress. Experts agree that regulations should be comprehensive and take into account both the technical aspects and the social implications of AGI development. Therefore, international cooperation is crucial for creating uniform safety and ethical standards.

Transparency and safety mechanisms

Transparency and safety mechanisms are other key aspects that must be taken into account in future regulations. Prof. Steward Russell postulated that companies developing AI should be legally obliged to implement safeguards against the harmful use of their technology. Experts also propose the legal requirement of an emergency shutdown mechanism for AI models and the creation of standards for security testing.

In the context of emergency shutdown mechanisms for AI models, there are two key ISO standards that address this issue:

  • ISO 13850 is a standard for emergency stop functions in machines. Although it does not directly refer to AI models, it establishes general principles for the design of emergency stop mechanisms that can potentially be applied to AI systems as well. This standard emphasises that the emergency stop function should be available and operational at all times, and its activation should take precedence over all other functions;
  • ISO/IEC 42001, on the other hand, is a more recent standard, published in December 2023, which directly addresses artificial intelligence management systems. It covers broader aspects of AI risk management, including risk and impact assessment of AI and AI system life cycle management.

Under ISO/IEC 42001, organisations are required to implement processes to identify, analyse, assess and monitor risks associated with AI systems throughout the life cycle of the management system. This standard emphasises continuous improvement and the maintenance of high standards in the development of AI systems.

It is worth noting that although these standards provide guidelines, specific emergency shutdown mechanisms for advanced AI systems such as AGI are still the subject of international research and discussion. Experts emphasise that as increasingly advanced AI systems are developed, traditional ‘shutdown’ methods may prove insufficient, and other standards and solutions will need to be developed.

Sztuczna inteligencja

Read more about AI:

https://lbplegal.com/sztuczna-inteligencja-czym-jest-z-prawnego-punktu-widzenia-i-jak-radzi-sobie-z-nia-swiat/

AGI regulation and cybersecurity

AGI can identify and exploit vulnerabilities faster than any human or current AI systems. Therefore, legal regulations for AGI should include:

  • Preventing the use of AGI for cyberattacks.
  • Standardising the security of AGI systems to limit the risk of takeover by malicious entities.
  • Determination of legal responsibility in the event of AGI-related incidents.

Interdisciplinary approach to lawmaking

The development of AGI requires an interdisciplinary approach that takes into account:

  • Cooperation between lawyers and AI experts, ethicists and economists.
  • Global regulations to protect human rights and privacy.
  • Transparency of AGI development and control mechanisms.

A proactive legal approach can make AGI safe and beneficial for all of humanity, without blocking technological progress.

AGI is a technology with enormous potential, but also with serious risks. It is crucial to create a legal framework and standards that will allow for its safe development. International cooperation and an interdisciplinary approach are key to ensuring that AGI serves humanity and does not pose a threat.

SME Fund 2025 – funding for trademark registration for SMEs

Protect your brand and benefit from funding in 2025!

Good news for small and medium-sized enterprises (SMEs)! On 3 February 2025, the next edition of the SME Fund was launched, offering funding for the registration of trademarks at the European Union level. This programme is an excellent opportunity to protect your brand and increase your competitiveness in the European market.

sme fund 2025

What is SME Fund 2025?

The SME Fund is an initiative of the European Union Intellectual Property Office (EUIPO) that aims to support small and medium-sized enterprises in protecting their intellectual property rights. Under the programme, entrepreneurs can obtain a voucher worth up to 1,000 euros to cover the costs of registering trademarks and industrial designs.

Why is it worth protecting a trademark?

Registering a trademark at the EU level provides a company with a number of benefits, including:

✅ Exclusive right to the brand throughout the European Union,

✅ Protection against unfair competition and counterfeiting,

✅ Increased company value and market recognition,

✅ Secure investment in branding and marketing,

✅ Facilitated acquisition of investors and financing.

sme fund 2025

Who can benefit from the funding?

The SME Fund 2025 programme is aimed at small and medium-sized enterprises (SMEs) based in the European Union, including:

✔ Natural persons conducting business activities,

✔ Commercial law companies,

✔ Civil law partnerships.

The prerequisite for participation is that the SME criteria are met, i.e. fewer than 250 employees and an annual turnover not exceeding 50 million euros.

How do I get funding from the SME Fund 2025?

To receive funding, you must go through a four-step process:

1. Submission of an application for funding via the EUIPO platform,

2. Receiving a decision on the voucher award,

3. Registering the trademark and paying the fees,

4. Applying for reimbursement of the costs incurred.

The whole process takes about 2 months – from submitting the application to receiving reimbursement.

sme fund 2025

How much can you get? Amount of funding

Under the SME Fund 2025 programme, entrepreneurs can get:

💰 75% reimbursement of fees for trademark applications at EU or national level,

💰 50% reimbursement of basic fees for international applications,

💰 Maximum funding amount is 1000 euros per SME.

Why is it worth acting quickly?

🕒 The programme’s budget is limited!

Funding is awarded on a first-come, first-served basis, so it’s a good idea to apply as soon as possible after the programme launches on 3 February 2025.

How can we help?

Our law firm has been helping entrepreneurs register trademarks for years. We offer comprehensive support at every stage of the process:

🔹 Analysing the registrability of the trademark,

🔹 Preparing and submitting an application for funding,

🔹 Registering the trademark with the relevant authority,

🔹 Handling the formalities related to reimbursement.

Protect your brand with SME Fund 2025 and our law firm! With the SME Fund 2025 and our support, you can protect your brand at minimal cost.

📞 Contact us today and arrange a free consultation:

Don’t wait! Funding is limited – apply as soon as possible and protect your brand in the EU!

You can find more information here:

https://uprp.gov.pl/pl/aktualnosci/informacje/rusza-kolejna-edycja-programu-pn-fundusz-dla-msp

 

 

 

Tax aspects of doing business in Germany

Dear Sir or Madam,

We would like to invite you to a free webinar on the tax aspects of doing business in Germany, organised by the law firm Leśniewski Borkiewicz Kostka & Partners (LBK&P), in cooperation with the German tax consultancy Dr Klein, Dr Mönstermann International Tax Services GmbH. The event is aimed at both companies just starting out on the German market and those already operating there – especially in the SME sector and large enterprises. Please feel free to forward the invitation to your friends and colleagues who may be interested in the topic of the webinar.

📅 Date: 18 February 2025

⏰ Time: 10:00

📍 Venue: online

During the webinar, LBK&P experts Paweł Suliga (tax advisor in Poland and Germany, recognised expert in German taxes and business law, on a daily basis he provides services in Germany to, among others, Polish companies from the construction industry listed on the Warsaw Stock Exchange) and Bartłomiej Chałupiński (tax advisor in Poland, head of tax @ LBK&P) will discuss key issues regarding tax regulations and obligations related to doing business in Germany.

Agenda:

✅ The most important tax changes in Germany since 01.01.2025.

✅ What tax obligations are associated with setting up a business in Germany?

✅ Documentation and information obligations when conducting cross-border business – what to look out for.

NOTE:

After the webinar, you have the opportunity to take part in a free, 30-minute individual consultation with our experts (separate booking required, limited number of places). To book your consultation, please send an email to rezerwacje@lbplegal.com .

🔗 registration link for the webinar – the link to participate in the webinar and the login details will be sent to you at least 3 days before the event.

Details of the event are also described in the attachment, along with a link to registration.

We look forward to seeing you there!

Attachment for download:

Webinar - Podatkowe aspekty prowadzenia działalności gospodarczej w Niemczech

Articles 1-5 of the AI Act have been in force since 2 February 2025, failure to comply may result in heavy fines.

🚨 IMPORTANT INFORMATION – Articles 1-5 of the AI Act have been in force since 2 February, failure to comply may result in heavy fines. Many companies in the EU have not yet taken the required action – here are the most important details.

ai law

From 2 February 2025, the first provisions of the Artificial Intelligence Act (AI Act) will apply, aimed at increasing safety and regulating the AI market in the European Union.

The most important changes include:

  • Prohibited practices – a ban on the use and placing on the market or for trading of AI systems that meet the criteria of prohibited practices. Examples include manipulative systems that take advantage of human weaknesses, social scoring systems and systems that analyse emotions in the workplace or education. Violations of these regulations are subject to high fines of up to 35 million euros or 7% of a company’s annual global turnover.
  • Obligation of AI literacy. Employers must provide their employees with adequate training and knowledge about AI so that they can safely use AI systems at work. Lack of AI training can lead to non-compliance with regulations and increase the risk of incorrect use of AI systems. In connection with AI literacy, it is also worth ensuring the implementation of an AI use policy in the company. How to do it?

See articles:

The policy for using AI in a company can include, for example, clear procedures, rules for using AI, conditions for the approval of systems, procedures in case of incidents and the appointment of a person responsible for the effective implementation and use of AI in the organisation (AI Ambassador).

 

The importance of AI education (Article 4 AI Act)

Awareness and knowledge of AI is not only a legal requirement, but also a strategic necessity for organisations. Article 4 of the AI Act obliges companies to implement training programmes tailored to the knowledge, role and experience of their employees.

‘Suppliers and deployers of AI systems shall take measures to ensure, to the greatest extent possible, an appropriate level of competence with regard to AI among their personnel and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training, and the context in which the AI systems are to be used, as well as taking into account the persons or groups of persons against whom the AI systems are to be used.’ Failure to act in this regard carries serious consequences. in which the AI systems are to be used, as well as taking into account the persons or groups of persons against whom the AI systems are to be used.

Failure to act in this regard has serious consequences, including:

  • The risk of violating personal data protection and privacy regulations.
  • An increased likelihood of violating the law and incurring financial penalties.

In addition to regulatory compliance, AI education helps to build a culture of responsible use of technology and minimises potential operational risks.

Where can I find guidance?

The ISO/IEC 42001 standard on artificial intelligence management systems can help. As part of the measures relating to the relevant competences of persons dealing with AI in an organisation, the standard indicates, for example, the following issues:

  • mentoring
  • training
  • transferring employees to appropriate tasks within the organisation based on an analysis of their competences.

At the same time, important roles or areas of responsibility should be assigned to, for example:

  • supervision of the AI system
  • security
  • protection
  • privacy

LLM security

Prohibited AI practices (Article 5 AI Act)

The AI Act prohibits the use of certain AI systems that could pose serious risks to society. Suppliers and companies using AI must ensure that they are not directly or indirectly involved in their development or implementation. Among other things, the AI Act lists specific prohibited practices that are considered particularly dangerous. These include:

  • Subliminal or manipulative techniques – AI systems that subconsciously change the user’s behaviour so that they make a decision they would not otherwise have made.
  • Exploitation of human weaknesses – systems that take advantage of a person’s disability, social or economic situation.
  • Social scoring – systems that evaluate citizens and grant them certain rights based on their behaviour.
  • Assessment of the risk of committing a crime – systems that profile individuals and evaluate their individual characteristics without legitimate grounds.
  • Creation of facial image databases – untargeted acquisition of images from the internet or city surveillance for the purpose of creating facial recognition systems.
  • Analysis of emotions in the workplace or education – AI systems that analyse the emotions of employees or students.
  • Biometric categorisation of sensitive data – using biometric data to gain information about race, political views, etc.
  • Remote biometric identification in real time – using facial recognition systems in public spaces for the purpose of prosecuting crimes.

Where to look for guidance?

  • draft Guidelines on prohibited artificial intelligence (AI) practices – 4 February 2025. The Commission published guidelines on prohibited artificial intelligence practices to ensure consistent, effective and uniform application of the AI Act across the EU.

Important dates:

  • From 2 February 2025 – Chapter II (prohibited practices)
  • From 2 August 2025 – Chapter V (general-purpose models), Chapter XII (Penalties) without Article 101
  • From 2 August 2026 – Article 6(2) and Annex III (high-risk systems), Chapter IV (transparency obligations)
  • From 2 August 2027 – Article 6(1) (high-risk systems) and corresponding obligations

Key takeaways for companies:

  • Compliance with Articles 1-5 of the AI Act is mandatory and cannot be ignored.
  • Training in AI is crucial to avoid mistakes related to employee ignorance and potential company liability.
  • Conducting audits of technology providers is necessary to ensure that AI systems comply with regulations.
  • Implement an AI use policy – introduce clear documentation to organise the risks. The policy can include, for example, clear procedures, rules for using AI, conditions for admitting systems, how to deal with incidents, and appointing a person responsible for supervision (AI Ambassador).
  • Developing AI tools in accordance with the law – companies developing AI tools must consider legal and ethical aspects at every stage of development. This includes analysing the compliance of the system’s objectives with the law, database legality, cybersecurity and system testing. It is important that the process of creating AI systems complies with the principles of privacy by design and privacy by default under the GDPR – How to create AI tools legally?.

Be sure to check out these sources:

https://www.gov.pl/attachment/9bb34f05-037d-4e71-bb7a-6d5ace419eeb

DeepSeek – Chinese AI in open source mode. Does Hong Kong have a chance against OpenAI?

DeepSeek is a series of Chinese language models that impresses with its performance and low training costs. Thanks to their open source approach, DeepSeek-R1 and DeepSeek-V3 are causing quite a stir in the AI industry.

DeepSeek

Source: www.deepseek.com

DeepSeek: a revolution in the world of AI from Hong Kong

DeepSeek is increasingly being mentioned in discussions about the future of artificial intelligence. This Hong Kong project provides open-source large language models (LLMs) with high performance and, crucially, significantly lower training costs than competing solutions from OpenAI or Meta.

In this article, we will take a closer look at DeepSeek-R1 and DeepSeek-V3 and provide an update on the development and distribution of these models based on official materials available on the Hugging Face platform as well as publications from Spider’s Web and china24.com.

Table of contents

  1. How was DeepSeek created?
  2. DeepSeek-R1 and DeepSeek-V3: a brief technical introduction
  3. Training costs and performance: what’s the secret?
  4. Open source and licensing
  5. DeepSeek-R1, R1-Zero and Distill models: what are the differences?
  6. The rivalry between China and the USA: sanctions, semiconductors and innovation
  7. Will DeepSeek threaten OpenAI’s dominance?
  8. Summary
  9. Sources

AI born

How was DeepSeek created?

The latest press reports indicate that High-Flyer Capital Management, a company that until recently was almost unknown in the IT industry outside of Asia, was founded in Hong Kong in 2015. This changed dramatically with DeepSeek, a series of large language models that took Silicon Valley experts by storm.

However, DeepSeek is not only a commercial project – it is also a breath of fresh air in a world where closed solutions with huge budgets, such as models from OpenAI (including GPT-4 and OpenAI o1), usually dominate.

DeepSeek-R1 and DeepSeek-V3: a brief technical introduction

According to information from the official project page on Hugging Face, DeepSeek is currently publishing several variants of its models:

  1. DeepSeek-R1-Zero: created through advanced training without the initial SFT (Supervised Fine-Tuning) stage, focusing on strengthening reasoning skills (the so-called chain-of-thought).
  2. DeepSeek-R1: in which the authors included additional, preliminary fine-tuning (SFT) before the reinforcement learning phase, which improved the readability and consistency of the generated text.
  3. DeepSeek-V3: named after the base model from which the R1-Zero and R1 variants described above are derived. DeepSeek-V3 can have up to 671 billion parameters and was trained in two months at a cost of approximately $5.58 million (data: china24.com).

ai tech

Technical background

  • The high number of parameters (up to 671 billion) means that very complex statements and analyses can be generated.
  • Thanks to the optimised training process, even such a large architecture does not require a budget comparable to that of OpenAI.
  • The main goal: to independently develop multi-stage solutions and minimise ‘hallucinations’, so common in other models.

Training costs and performance: what’s the secret?

Both the Spider’s Web service and the china24. com emphasise that the training costs of DeepSeek-R1 (approx. $5 million for the first version) are many times lower than those we hear about in the context of GPT-4 or other closed OpenAI models, where we hear about billions of dollars.

Where does the recipe for success lie?

  • Proprietary methods of optimising the learning process,
  • Agile architecture that allows the model to learn more effectively with fewer GPUs,
  • Economical management of training data (avoiding unnecessary repetitions and precisely selecting the data set).

open source

Open source and licensing

DeepSeek, unlike most of its Western competitors, relies on open source. As stated in the official documentation of the model on Hugging Face:

‘DeepSeek-R1 series support commercial use, allow for any modifications and derivative works, including, but not limited to, distillation…’

This means that the community is not only free to use these models, but also to modify and develop them. In addition, several variants have already been developed within the DeepSeek-R1-Distill line, optimised for lower resource requirements.

Important:

  • The DeepSeek-R1-Distill models are based, among other things, on the publicly available Qwen2.5 and Llama3, which are linked to the relevant Apache 2.0 and Llama licences.
  • Nevertheless, the whole is made available to the community on very liberal terms – which stimulates experimentation and further innovation.

AI

DeepSeek-R1, R1-Zero and Distill models: what are the differences?

From the documentation published on Hugging Face, a three-tier division emerges:

1. DeepSeek-R1-Zero

  • Training only with RL (reinforcement learning), without prior SFT,
  • The model can generate very complex chains of thought (chain-of-thought),
  • However, it can suffer from problems with text reproducibility and readability.

2. DeepSeek-R1

  • Including the SFT phase before RL solved the problems noticed in R1-Zero,
  • Better consistency and less tendency to hallucinate,
  • According to benchmarks, it is comparable to OpenAI o1 in math, programming, and analytical tasks.

3. DeepSeek-R1-Distill

  • ‘Slimmed-down’ versions of the model (1.5B, 7B, 8B, 14B, 32B, 70B parameters),
  • Enable easier implementation on weaker hardware,
  • Created by distillation (transferring knowledge from the full R1 model to smaller architectures).

Rivalry between China and the USA: sanctions, semiconductors and innovation

As noted by the ‘South China Morning Post’ (cited by chiny24.com), the development of Chinese AI models is taking place under conditions of limited access to advanced semiconductors due to US sanctions.

Meanwhile, Chinese companies – including DeepSeek and ByteDance (Doubao) – are showing that even in such an unfavourable climate, they are able to create models:

  • that are not inferior to Western solutions,
  • and often much cheaper to maintain.

As Jim Fan (researcher at Nvidia) points out, the DeepSeek project may be proof that innovation and restrictive conditions (less funding, sanctions) do not have to be mutually exclusive.

Will DeepSeek threaten OpenAI’s dominance?

High-Flyer Capital Management and other Chinese companies are entering the market with a model that:

  • performs better than Western competitors in some tests,
  • is cheaper to develop and maintain,
  • makes open repositories available, allowing for the rapid development of a community-based ecosystem.

If OpenAI (and other giants) do not develop a strategy to compete with cheaper and equally good models, Chinese solutions – such as DeepSeek or Doubao – could capture a significant share of the market.

LLM przyszłość

The era of expensive AI models is coming to an end?

DeepSeek is a prime example of how the era of gigantic and ultra-expensive AI models may be coming to an end. Open source, low training costs and very good benchmark results mean that ambitious start-ups from China could shake up the current balance of power in the artificial intelligence industry.

Due to the growing technological tensions between China and the USA, the further development of DeepSeek and similar projects will probably become one of the main themes in the global rivalry for the title of AI leader.

Sources

  1. ‘Chinese DeepSeek beats all OpenAI models. The West has a big problem’ – Spider’s Web
  2. ‘DeepSeek. Chinese startup builds open-source AI’ – chiny24.com
  3. Official DeepSeek-R1 website on Hugging Face

Author: own work based on the indicated publications.

Text intended for information and journalistic purposes.

And the Oscar goes to … AI Brody

The Oscar nominations have been announced. One of the favourites is Brady Corbet’s The Brutalist, starring Adrien Brody, who is also nominated for this prestigious award. The film tells the story of a Jewish architect who emigrated from post-war Europe to the USA in search of a safe haven for himself and his wife. Even before the nominations were announced, there was a heated debate about whether Adrien Brody should receive the award for his phenomenal acting performance, due to the fact that his accent, which we hear throughout the entire film, was improved by AI tools. As you can see, it didn’t matter in the context of Adrien’s nomination, but will the controversy surrounding the use of AI in the film ultimately have a decisive impact on the verdict of the American Film Academy?

cinema

AI improved Adrien Brody’s accent in The Brutalist – how does technology change cinema?

The film The Brutalist uses artificial intelligence to subtly correct the Hungarian pronunciation of actors Adrien Brody and Felicity Jones. The film’s editor, Dávid Jancsó, revealed that Respeecher technology was used to improve the authenticity of the Hungarian dialogue. Both actors worked with a dialect coach, but the producers wanted perfect pronunciation, which is difficult to achieve using traditional methods. Are such practices the future of cinema, or rather a threat to the authenticity of acting performances?

How Respeecher technology works

Respeecher is an advanced speech synthesis tool that allows the voice of one person to be transformed into the voice of another, while retaining all the emotions, intonations and natural sound. The process is based on machine learning algorithms that work in several key stages:

  1. Collecting voice data – The developers first record samples of the target voice. In the case of The Brutalist, these were recordings of actors who were to use a Hungarian accent.
  2. Acoustic analysis – The Respeecher system analyses the unique characteristics of the voice, such as timbre, speech rate and the way certain phonemes are pronounced.
  3. Machine learning: Based on the provided samples, algorithms learn the characteristics of the voice and then generate a digital version of it that faithfully reflects the original characteristics.
  4. Sound synthesis: In the final process, the actor’s voice is modified to fit the requirements of the creators – in this case, it was about the authentic sound of a Hungarian accent.

The advantage of this approach is that the actor does not have to re-record the dialogue. As emphasised by the film’s creators, the technology was used exclusively as a supporting tool, not replacing the work of the actors.

ai cinema

AI in the film industry – breakthrough or threat?

The use of AI in films such as The Brutalist is just the tip of the iceberg. The technology is finding more and more applications in cinematography:

  • Special effects – AI allows for the generation of realistic visual effects, which reduces production costs.
  • Post-production: AI-based tools automate processes such as editing, colour correction and sound quality improvement.
  • Scriptwriting: Algorithms that analyse popular film plots can suggest new storylines.
  • Digitalisation of actors: AI makes it possible to rejuvenate or ‘revive’ deceased actors for use in new productions. (Raindance)

usa law ai

Legal and ethical aspects of using AI in cinematography

The use of AI in film raises many legal and ethical questions, which are becoming increasingly pressing with the growing use of this technology. In the case of The Brutalist, we can identify several key issues:

  1. Copyright – Does the digital voice generated by AI belong to the actor, the technology company, or the film producers? The use of an actor’s voice in synthetic form may give rise to claims for remuneration for additional use of the image.
  2. Transparency – Viewers were not informed about the use of Respeecher during the first screenings of the film. Should filmmakers openly communicate such practices, especially when they affect the perception of performances?
  3. Impact on the acting profession – Critics fear that the development of AI may lead to a decrease in the demand for voice actors or even actors themselves, as their voices could be generated synthetically.

oscar ai

Controversy and impact on Oscar chances

The revelation of AI use in The Brutalist has sparked controversy in the film industry. Concerns have been raised that such practices could undermine the authenticity of acting performances and lead to ethical dilemmas surrounding the use of technology in the arts. In the context of the upcoming Oscars, some experts suggest that the use of AI in The Brutalist could affect Adrien Brody’s chances of winning the Best Actor award. Nevertheless, both the director and the film editor assure that AI was only a supporting tool, not a substitute for the talent and work of the actors.

Summary

The Brutalist has become a symbol of a new era in cinematography, in which artificial intelligence is becoming a creative tool, but also a subject of controversy. The case of Adrien Brody and the use of Respeecher opens a discussion about the limits of technology in art. Should AI only support creators or will it dominate the industry, replacing human creativity? One thing is certain – the future of cinema will be closely linked to the development of technology.

Sources:

BREAKING: new executive order from President Trump

On 23 January 2025, President Donald Trump signed an executive order that defines the United States’ new priorities in the field of artificial intelligence (AI). This decision was made in connection with the repeal of EO 14110, issued by Joe Biden in 2023, and introduces new rules governing the development of AI in the US. This step by the Trump administration emphasises the importance of AI in maintaining US global dominance, promoting innovation and ensuring national security

AI USA

New priorities in AI policy

President Trump’s executive order sets out the United States’ main objectives in the field of artificial intelligence. The document emphasises that AI development should be based on free market principles and that ideological biases should be avoided. A key element of the new policy is to strengthen the US’s global position in AI, which is intended to promote economic competitiveness, human development and national security.

In contrast to the Biden administration’s approach, which called for stricter security tests and the sharing of results with the government, the new policy favours greater freedom for technology companies. Trump described the previous regulations as too restrictive, which he believed could have hampered the development of AI in the United States.

business ai

Key elements of Trump’s regulation

  1. Repeal of EO 14110
  2. The Biden administration’s regulation, which aimed to ensure the safe and trusted development of AI, was considered by the Trump administration to be overly restrictive of freedom of innovation. The new regulations pave the way for a review of all regulations and actions resulting from the repealed regulation.
  3. Strengthening the US position as a global leader in AI
  4. The document emphasises that the goal of US policy is to maintain and develop dominance in the field of artificial intelligence. This is intended to promote technological innovation, economic competitiveness and national security.
  5. Development of an AI Action Plan
  6. Within 180 days of the signing of the regulation, special advisors, including the Assistant to the President for Science and Technology and the Special Advisor for AI and Cryptocurrencies, in cooperation with the Assistant to the President for Economic Policy, the Assistant to the President for Domestic Policy, the Director of the Office of Management and Budget (OMB Director) and the heads of such executive departments and agencies as they deem relevant, are to present an action plan. Domestic Policy, the Director of the Office of Management and Budget (OMB Director) and the heads of such executive departments and agencies as the above-mentioned deem relevant are to present an action plan that will set out the details of the implementation of the new priorities.
  7. Update of supervisory policies
  8. The Office of Management and Budget (OMB) has been instructed to update existing memoranda regarding the supervision of AI systems to comply with the new policy.

bialy dom

Significance for the United States

1. Impact on the technology sector

Trump’s new executive order could boost innovation in the AI sector by removing regulatory barriers. This will give technology companies more freedom to develop new systems and applications, which could strengthen the US global position in this field. At the same time, the lack of strict regulations raises concerns about the ethical use of technology and the risk of abuse.

2. National security and global competition

Emphasising the role of AI in the context of national security indicates the growing importance of this technology in defence and intelligence. As an innovation leader, the US must face competition primarily from China, which is also investing heavily in AI. The new policy is intended to ensure the technological advantage of the United States.

3. Criticism and controversy

The decision to repeal the Biden regulation has been criticised by numerous experts who fear that the lack of restrictions could lead to the development of dangerous AI technologies. Alondra Nelson of the Center for American Progress warns that the American public could be left unprotected from the potential harms of AI development.

AI Act – a pioneering regulation for artificial intelligence in the EU

The European Union, on the other hand, has taken a different approach. The AI Act is a regulation that puts the European Union at the forefront of global efforts to create a responsible and transparent legal framework for artificial intelligence. Published on 12 July 2024 in the Official Journal of the European Commission, the regulation is crucial for shaping the future of AI technology in Europe, while ensuring a high level of protection for citizens‘ and consumers’ rights.

The regulation defines AI systems in terms of their risk, classifying them into different levels (minimal, limited, high and unacceptable). High-risk AI systems, such as those used in healthcare, education or recruitment processes, will have to meet specific requirements regarding safety, transparency and reliability. Furthermore, applications of artificial intelligence that are considered unacceptable, such as mass biometric surveillance in public spaces or the manipulation of human behaviour, are prohibited.

Comparison with the US approach

In contrast to the more liberal US policy, the European Union places greater emphasis on prevention and the protection of citizens’ rights. In the EU, regulations are aimed at preventing potential risks associated with AI, such as algorithmic discrimination, surveillance or manipulation of public opinion.


Conclusions and outlook

President Trump’s new executive order shows that the administration is committed to the development of AI as a key element of the US economic and technological strategy. Free-market policies are intended to attract investment and promote innovation, but their side effects can lead to a lack of adequate ethical and legal safeguards.

GPT chat not working. Thousands of user reports

Chat GPT

Illustration generated by competitor #GoogleGemini.

On Thursday, 23 January, the popular GPT Chat stopped working. The failure was reported by thousands of concerned Internet users.

The website providing online access to the popular GPT Chat stopped working for a while on Thursday, 23 January. The failure occurred around 1 p.m. CET, as can be clearly seen on the Downdetector.pl service’s notification chart. Interestingly, the websites of OpenAI, the company responsible for GPT Chat, were still working at the same time.

At the same time, problems were also reported with other OpenAI services. The GPT-4o and GPT-4 models were not working. Some users reported that the chatgpt.com and chat.openai.com websites were not opening. Others noticed that GPT Chat was not responding to their questions. The applications created to operate this model were also not responding.

This would not be the first failure of GPT Chat. In the last few weeks, there have been brief service interruptions. The biggest one occurred in December, when there was a major failure in the United States, which also resulted in errors in other OpenAI services.

Trump changes artificial intelligence regulations – new approach to AI in the USA

Donald Trump began his term of office with significant changes in the approach to artificial intelligence (AI) regulation. One of the first steps was to repeal Joe Biden’s 2023 executive order, which introduced specific safety requirements for AI systems. This decision has sparked controversy among experts, who emphasise that the lack of adequate regulations can bring both opportunities and serious threats to society and the United States’ position as a technological leader.

AI USA

What was the Biden AI regulation about?

Joe Biden’s executive order aimed to ensure the safe and responsible development of artificial intelligence. It focused on several key areas:

  1. Safety standards and testing of AI systems
  2. Companies involved in the development of artificial intelligence were required to conduct safety tests on their systems and share the results with the US government. This was intended to identify potential risks, such as algorithmic bias or the risk of using AI in activities that threaten national security.
  3. Protection against AI-generated disinformation
  4. The Department of Commerce was to develop guidelines for watermarks and content authentication systems to enable easy identification of AI-generated material. This was intended to limit the impact of disinformation and fake news on society.
  5. Privacy and data protection
  6. The regulation emphasised the protection of citizens’ data from being used illegally to train AI models. Although President Biden urged Congress to pass appropriate laws, there were no specific regulations governing this issue at the time.
  7. Preventing algorithmic discrimination
  8. One of the key points of the regulation concerned counteracting the creation of AI algorithms based on unrepresentative data that could lead to discrimination, e.g. in recruitment systems, the judiciary or healthcare.
  9. Security in healthcare and life sciences
  10. The Biden administration introduced mechanisms to prevent the use of AI to create dangerous biological materials. The Department of Health was to develop AI safety programmes in medicine, focusing on improving healthcare and developing innovative therapies. Their fulfilment was to be a condition for obtaining federal funding for life sciences projects.
  11. The labour market and the impact of AI
  12. The regulations were intended to develop rules to protect employees from the unfair use of AI in performance appraisal or recruitment systems.

You can read more about artificial intelligence systems in the states here:

It is worth noting that the repealed version of the regulation is no longer even available on the White House website, so the regulation has not only been repealed, but also removed along with the archive versions from the official source. At the moment, it can only be found here.

bialy dom

Why did Trump repeal the regulation?

Donald Trump argued that the regulations introduced by Joe Biden were too strict and could limit the development of innovative technologies. From the Republicans’ perspective, regulations such as the obligation to report security tests and share information with the government could hinder the activities of technology companies and weaken their competitiveness in the global market.

Trump emphasised that the US approach to AI should be less bureaucratic and more focused on supporting innovation. The decision to repeal the regulation is in line with his philosophy of deregulation and limiting government interference in the private sector.

In addition, Biden’s regulation aimed to increase the security of AI development by introducing transparency standards, reducing the risk of misinformation and counteracting algorithmic discrimination. Tech companies also had to disclose information about potential flaws in their models, including AI biases, which was particularly criticised by Trump-related circles as threatening their competitiveness.

Trump’s decision – liberalisation or risk?

The decision to repeal the regulation has met with mixed reactions. Experts such as Alondra Nelson of the Center for American Progress warn that the lack of safety standards will weaken consumer protection against AI-related risks. Alexander Nowrasteh from the Cato Institute, on the other hand, noted that abandoning some of Biden’s regulations, e.g. immigration facilities for AI specialists, could have negative effects on the sector.

Trump’s supporters, however, argue that his decision is an opportunity to accelerate technological development. They emphasise that overly strict regulations, such as those introduced in Europe, can hamper innovation.

404 Biały dom

Image source: website of the White House

Consequences of Trump’s decision

Experts warn that the lack of clearly defined rules governing the development of AI can lead to a number of risks:

  • Disinformation and fake news: The lack of guidelines for authenticating AI-generated content can facilitate the spread of false information.
  • Threats to national security: Without proper security testing, AI systems can be vulnerable to use in cybercrime or warfare.
  • Ethics and trust: The lack of regulation increases the risk of algorithmic discrimination and privacy violations, which can undermine public trust in AI technology.

On the other hand, supporters of Trump’s decision emphasise that liberalising regulations will allow for faster technology development and attract investment in the AI sector.

business ai

Will the US remain the leader in AI?

Trump’s decision to repeal Biden’s executive order opens a new chapter in the US approach to AI regulation. While Europe is focusing on protecting civil rights, the US may take a more liberal course, favouring freedom of innovation but at the same time exposing even basic human rights.

However, the lack of a clearly defined legal framework in the long term may weaken the US’s position as a leader in the field of AI, especially in the context of international cooperation and the creation of global standards. It will be crucial to find a balance between supporting development and minimising the risks posed by this revolutionary technology.

Artificial intelligence remains one of the most important technologies of the 21st century, and the decisions made this week by world leaders in the United States will influence its development for decades to come.

How does artificial intelligence improve the analysis of GOCC’s financial data and increase the transparency of the charity?

The situation in which Jerzy Owsiak has found himself, discussed and commented on in recent days in the media, gives rise to numerous discussions and questions about the legal possibilities of action in such cases. Below we point out the key legal provisions on criminal threats, public incitement to hatred or violation of personal rights that could apply. These incidents serve as a reminder of how important legal protection is in situations where words and actions can escalate so far as to take the form of criminal acts.

In the second part of the article, we also outline how AI is revolutionising the analysis of financial data, increasing the transparency and efficiency of charities.

The final part of the article is based on a thorough analysis of the financial reports of the GOCC, available on the organisation’s official website, and demonstrates the practical application of technology in the non-profit sector.

wosp-2025

Graphics: WOŚP Foundation materials

Criminal threats and incitement to hatred – legal aspects on the example of the famous case of Jerzy Owsiak

Jerzy Owsiak, president of the Great Orchestra of Christmas Charity (WOŚP) Foundation, last week reported to law enforcement authorities the occurrence of criminal threats and public incitement to hatred against him and the Foundation. Phone and email threats, including calls for violence, caused the WOŚP Foundation president to reasonably fear their fulfilment. Owsiak also pointed to media activities that he considered manipulative and escalating negative emotions against him and the WOŚP. As a result, the foundation has banned certain editorial and television outlets from entering the foundation’s headquarters.

WOŚP Marcin Michon

GOCC Foundation materials – photographer Marcin Michon

Criminal threats – when is it a crime?


Art. 190 CC [Criminal threat].

  1. Whoever threatens another person with committing a criminal offence to his/her detriment or to the detriment of a person close to him/her, if the threat induces in the person to whom it was addressed or whom it concerns a reasonable fear that it will be carried out, shall be subject to the penalty of deprivation of liberty for up to 3 years.
  2. Prosecution takes place at the request of the victim.

According to Article 190 of the Criminal Code, a criminal threat consists in threatening another person with the commission of an offence to his or her detriment or to the detriment of a person close to him or her, if it arouses in him or her a well-founded fear that it will be fulfilled. The key considerations here are:

  1. Objective considerations – whether an average person in similar circumstances would consider the threat to be real.
  2. Subjective feelings of the victim – how the threat affects a particular person.

In the case of Jerzy Owsiak and the employees of WOŚP, threats were made both by telephone and by email. Although the recordings of the conversations have not been preserved, the emails constitute important evidence in the case that can be used in the proceedings.

Incitement to hatred – legal consequences


Art. 255 CC [Public incitement to commit a misdemeanour or fiscal offence].

  1. Whoever publicly incites to the commission of a misdemeanour or fiscal offence shall be subject to a fine, the penalty of restriction of liberty or the penalty of deprivation of liberty for up to 2 years.
  2. Whoever publicly incites to the commission of a crime shall be subject to the penalty of deprivation of liberty for up to 3 years.
  3. Whoever publicly incites to the commission of a crime shall be subject to a fine of up to 180 daily rates, the penalty of restriction of liberty or the penalty of deprivation of liberty for up to one year.

Actions involving public incitement to hatred under Article 255 of the Criminal Code may lead to criminal liability. In the case of the GOCC, the president of the Foundation pointed to media activities carried out by certain TV stations and editorial offices. According to him, they may bear the hallmarks of manipulation and escalation of negative emotions towards the Foundation and its activities, which, as a result of strong social emotions, which are additionally escalated and fuelled, may end very seriously (the case of the President of Gdańsk, Mr Paweł Adamowicz, should be indicated here in particular).

WOŚP Paweł Krup

GOCC Foundation materials – photographer Paweł Krup

Defamation and insult – protection of image and dignity

Going further, media activities which violate the image and good name of the Foundation may also potentially constitute a form of defamation (Article 212 CC) or insult (Article 216 CC).

Defamation refers to a situation where a person or institution is slandered for actions that may bring it into disrepute in public opinion.


Article 212 CC [Defamation].

  1. Whoever slanders another person, a group of persons, an institution, a legal person or an organisational unit without legal personality of such conduct or qualities which may humiliate him in public opinion or expose him to loss of confidence necessary for a given position, profession or type of activity, shall be subject to a fine or the penalty of restriction of liberty.
  2. If the perpetrator commits the act specified in § 1 by means of mass communication media, he shall be subject to a fine, the penalty of restriction of liberty or the penalty of deprivation of liberty for up to one year.
  3. In the event of a conviction for the offence specified in § 1 or 2, the court may rule on a surcharge in favour of the injured party, the Polish Red Cross or for another social purpose indicated by the injured party.
  4. The prosecution of the offence specified in § 1 or 2 shall be by private prosecution.

Insult includes utterances or gestures that affront the dignity of a person, also in the presence of others.


Article 216 CC [Insult].

  1. Whoever insults another person in his presence or even in his absence, but in public or with the intention that the insult should reach that person, shall be subject to a fine or the penalty of restriction of liberty.
  2. Whoever insults another person by means of mass communication shall be subject to a fine, the penalty of restriction of liberty or imprisonment of up to one year.
  3. If the insult has been provoked by defiant behaviour on the part of the victim or if the victim has responded by violating bodily integrity or by mutual insult, the court may waive punishment.
  4. In the event of a conviction for an offence specified in § 2, the court may order a surcharge for the benefit of the wronged party, the Polish Red Cross or for another social purpose indicated by the wronged party.
  5. Prosecution shall be by private prosecution.

Both offences are prosecuted by private prosecution, which means that an indictment must be brought by the victim. The assistance of a lawyer or solicitor is invaluable in such situations.

Infringement of personal rights – civil action

In situations such as this, the victim may also avail himself of civil law protection on the basis of the provisions on the protection of personal rights. This includes the possibility of claiming:

  • Financial compensation.
  • A public apology.
  • The cessation of the action infringing personal rights and the removal of its effects.

groźba fake news

How to act in case of threats or slander?

  1. Report the suspected offence to the relevant services – notify the police or the public prosecutor’s office of threats or statements that may be insulting or defamatory. In the case of a criminal threat, a request for prosecution must also be made. The request can be made verbally on the record as well as in writing.
  2. Secure evidence – emails, recordings of conversations, or other forms of documentation are key to corroborating the allegations (for fear of deleting the evidence, it is a good idea to prepare screenshots or secure data on a website or social media with a notarised record).
  3. Use legal advice – a solicitor or barrister will assist in drafting the notice and also support you during criminal or civil proceedings. It is worth remembering that advocates or legal advisors also provide support in these cases free of charge (ex officio).

GOCC Foundation materials – photographer Łukasz Widziszowski

 

WOŚP Łukasz Widziszowski

Materiały Fundacji WOŚP – fotograf Łukasz Widziszowski

WOŚP – social activity in the shadow of threats

The WOŚP Foundation has been supporting the Polish health service since 1993, collecting nearly PLN 2.3 billion for its cause so far. Despite its invaluable contribution to the healthcare system, its activities are sometimes the target of attacks, which shows how important it is to have effective legal protection mechanisms, but also – to focus on reliable information.

In this regard, it is always worth paying special attention to so-called ‘fake news’ appearing in the media or on television, which are often already at first glance distinguishable from other information (they are aggressive, refer to non-existent persons or events). Among fake news, there are many: flashy headlines, biased manipulations appearing in the statements of various people, or manipulated images or video. For this purpose (especially if the fake news concerns NGOs / non-governmental organisations), it is worth using the Demagog website .

The Demagog Association is the first Polish factchecking organisation. Since 2014, it has been verifying the statements of politicians, fighting against fake news and disinformation. In addition, it is also a team of analysts and educators for whom facts matter. The organisation’s mission is to fight fake news and disinformation and to provide citizens with reliable, unbiased and verified information.

Artificial intelligence can also be used to analyse fake news.

sztuczna inteligencja WOŚP

How artificial intelligence improves the analysis of GOCC’s financial data and increases the transparency of the charity’s activities

  1. Automating data acquisition: OCR as a key tool in the analysis of GOCC financial reports

dane OCR

  • Precise Data Extraction from Official Sources: The first step in the analysis process is data acquisition. In the case of WOŚP, the main source of information is the detailed financial reports, published annually in the form of images on the foundation’s official website. The AI modules responsible for data retrieval automatically acquire these images, eliminating the need for manual data entry. The images, which include the accounts, are available here: https://www.wosp.org.pl/fundacja/wazne/rozliczenia
  • Optical Character Recognition (OCR) in the Service of Good: Optical Character Recognition (OCR) technology plays a key role. OCR algorithms accurately analyse the images of reports, converting the textual and numerical information they contain (dates, amounts, descriptions) into a digital format for further processing and analysis. This is a key element in automating the analysis of financial reports.
  • Intelligent Structuring of Financial Data: The extracted data is then organised and categorised. AI algorithms automatically classify the information, separating it into readable sections, such as: medical equipment expenditure, medical programme delivery costs , administrative costs, in-kind donations and others. This makes GOCC financial reports more readable and understandable, without the need for expert knowledge to analyse them.

2. Objective analysis of GOCC financial data: lessons learned by AI and their relevance for donors

analiza ai wośp

  • Visualising Data for Better Understanding: Based on the processed data, AI generates clear visualisations, including charts and tables. These facilitate a quick overview of key information, trends in GOCC finances and expenditure structure for donors and all stakeholders.
  • Detailed Analysis of Expenditure Structure: An in-depth analysis of the data, supported by AI algorithms , allows you to precisely determine where GOCC is spending the funds raised. For example, it is clear from the analysis that a significant portion of the budget is consistently invested in the purchase of specialised medical equipment for hospitals across Poland, which directly improves the quality of healthcare.
  • Identification of Changes and Trends Over the Years: AI algorithms make it possible to track changes in the structure of GOCC spending over successive finals. For example, the analysis can show an increase in spending in specific areas, such as support for specific medical programmes, or a flexible response to current needs, as in the case of the increase in spending to fight COVID-19 in the 27th Final.
  • Assessing Cost Effectiveness and Optimising Activities: AI can also support the analysis of administrative and operational costs of the GOCC, identifying potential areas for optimisation and increasing the efficiency of the use of every penny donated by donors.
  • Stories Hidden in the Data: Real Impact on People’s Lives: It is important to emphasise that behind every row in the table and every bar in the graph are real human stories. The medical equipment purchased by the GOCC saves the lives and health of patients across Poland, and the support of medical programmes translates into an improvement in the quality of life for many people. Every number in the WOŚP financial report is not just a dry fact, but first and foremost a testimony of real help and goodness brought to those in need. These are hundreds of thousands of lives saved and smiles on the faces of children and their parents. Data gains real value when we realise that it reflects a real impact on the lives of people helped by WOŚP.
  1. Why is AI such an effective tool in analysing charities’ data? Benefits for GOCC and donors

ai analitics

  • Efficiency and Automation: the automation of the data acquisition and processing process, thanks to AI and OCR, significantly reduces the time and effort required to prepare comprehensive financial reports. This allows GOCC staff and volunteers to focus on achieving their statutory goals.
  • Transparency and Credibility: Using objective and precise AI algorithms to analyse GOCC data increases the transparency and credibility of the information presented. Donors can be assured that the data is analysed fairly and impartially, which builds trust in the organisation.
  • Scalability: AI-based systems can process huge amounts of data, regardless of the scale of the organisation and the number of financial reports. This is crucial for fast-growing organisations such as GOCC.
  • Detecting Trends and Patterns for Better Planning: Advanced machine learning algorithms are able to identify hidden patterns and trends in GOCC financial data . These provide valuable information that supports strategic planning and decision-making, allowing us to help even more effectively.
  1. Development perspectives and conclusions: GOCC as a role model

WOŚP AI

The use of artificial intelligence in the analysis of Great Orchestra of Christmas Charity’s financial data is an excellent example of the effective application of modern technology in the charity sector. Automation, objective analysis and the possibility of generating detailed reports contribute to the transparency, efficiency and credibility of the organisation’s operations.

The example of the Great Orchestra of Christmas Charity can serve as an inspiration for other entities operating in the field of charity, pointing to the benefits of adapting innovative technological solutions. Further development and implementation of AI in the sector may bring even greater improvements to the financial management and delivery of charities’ statutory objectives. However, it is worth remembering that technology is only a tool, and the real value comes from the human heart and the desire to help. It is empathy and commitment, combined with the analytical power of AI, that create the power to make a real difference in the world for the better.

Sources:

Summary

Protecting against threats, defamation and hate speech requires the conscious use of available legal, but also technological tools. In the case of incidents such as those reported by Jerzy Owsiak, both criminal law actions and protection of personal rights through civil means are possible. An effective reaction allows not only to defend against further attacks, but also to build public awareness of the consequences of such actions.

Let us remember – if any information raises our doubts, it is worth checking it and reaching for the source. Technology, to the extent indicated above, can help us not only to verify the accuracy of data or information, but also its proper interpretation.

Contact

Any questions?see phone number+48 663 683 888
see email address

Hey, have you
signed up to our newsletter yet?

    Check how we process your personal data here