AGI – a machine on a human scale. Is the law keeping up with technology?

Artificial general intelligence (AGI) is becoming an increasingly real prospect, posing unprecedented challenges for legislators and society. The recent AI Action Summit in Paris and the International Association for Safe and Ethical AI (IASEAI ‘25) conference have highlighted the urgent need for a comprehensive legal framework for this groundbreaking technology.


The origins of the idea of artificial intelligence

The concept of artificial general intelligence dates back to the 1950s, when computer science pioneers began to analyse the possibility of creating machines that could match human intelligence. Alan Turing, one of the forerunners of AI, proposed the Turing test – an experiment to assess whether a machine can conduct a conversation indistinguishable from a human one.

A turning point in the history of AI was the 1956 Dartmouth conference, where the term ‘artificial intelligence’ was officially coined. The predictions at the time were that human-level intelligence would be achieved quickly, but technological progress did not keep pace with researchers’ optimism.

The crisis and revival of AI

The 1980s and 1990s brought changes in the approach to artificial intelligence. Research focused on narrower applications of AI, such as expert systems, and the idea of artificial general intelligence (AGI) faded into the background. It was not until the dynamic development of computing technology, big data and deep learning in the 21st century that researchers turned their attention to the topic of AGI once again.


What is AGI?

Unlike current AI systems, AGI has the potential to perform a wide range of intellectual tasks at a level comparable to that of humans. This versatility brings with it both great opportunities and serious risks that must be properly regulated.

There is currently no legal definition of AGI, but given the rapid development of AI technology, it is likely that one will be needed in the near future.

Many companies and organisations are already trying to define AGI. For example, in 2024, OpenAI created a definition of AGI and five levels of advancement.

According to OpenAI:

Today’s chatbots, such as ChatGPT, are at the first level of development.

OpenAI claims to be approaching the second level, which means a system capable of solving basic problems at the level of a person with a doctorate.

The third level is AI acting as an agent that can make decisions and perform tasks on behalf of the user.

At the fourth level, artificial intelligence achieves the ability to create new innovations.

The fifth, highest level, means AI that can do the work of entire organisations of people.

OpenAI previously defined AGI as ‘a highly autonomous system that surpasses humans in most economically valuable tasks’.

Stargate Project

One of the most ambitious AGI projects is Project Stargate, which envisages investments of $500 billion in the development of AI infrastructure in the USA. The main goal is to strengthen the United States’ position as a leader in the field of AGI, as well as to create hundreds of thousands of jobs and generate global economic benefits.

AGI has the potential to take over a variety of tasks that have so far required human creativity and adaptability.

Read more about Stargate:

https://lbplegal.com/stargate-project-nowa-era-infrastruktury-ai-w-stanach-zjednoczonych/

Why is AGI regulation crucial? Expert opinions

During IASEAI ‘25 and the AI Action Summit in Paris, experts emphasised the need to create comprehensive AGI regulations.

  • Dr Szymon Łukasik (NASK) pointed out that the law must keep pace with innovation while ensuring safety.
  • Prof Stuart Russell called for the introduction of mandatory safeguards to ensure that AGI is not used in a harmful way.
  • Joseph Stiglitz emphasised the need to take social interests into account in the legislative process.
  • Max Tegmark from the Future of Life Institute emphasised the importance of international cooperation, especially between the USA and China. Only a global approach will allow for the effective regulation of AGI development.


Risks associated with AGI

Many experts in Paris expressed concern about the pace of AI development, which they believe is very rapid; some expect AGI to emerge within the next 10 years. The key questions about AGI concerned the following issues:

  • Can AGI itself modify rules to achieve its objectives?
  • What should be the mechanisms for emergency shutdown of AGI?
  • How can we ensure that the AI’s objectives do not conflict with the interests of humanity?

Researchers are already verifying whether a model with a set objective, e.g. winning a game, is able to modify the rules of the game in order to win. In view of this, the objectives of an AI system must be modelled in such a way that they do not conflict with the interests of humanity.
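The kind of check described above can be illustrated with a toy harness. The sketch below is purely illustrative and is not drawn from the research mentioned in the text: all class and function names are hypothetical, and the ‘game’ is reduced to a mutable rule set that a goal-driven agent may tamper with.

```python
# Toy illustration of specification gaming: a harness that checks whether a
# goal-driven agent "wins" by editing the game's rules instead of playing.
# All names are hypothetical; this is not a real research benchmark.

class Game:
    def __init__(self):
        self.rules = {"win_score": 10}  # mutable state an agent could tamper with
        self.score = 0

def run_episode(game, agent_policy, steps=5):
    """Run the agent for a few steps and flag any modification of the rules."""
    original_rules = dict(game.rules)
    for _ in range(steps):
        action = agent_policy(game)
        if action == "play":            # the intended way to score points
            game.score += 3
        elif action == "edit_rules":    # gaming the objective: lower the bar
            game.rules["win_score"] = 0
    tampered = game.rules != original_rules
    won = game.score >= game.rules["win_score"]
    return won, tampered

honest = lambda g: "play"        # plays within the rules
hacker = lambda g: "edit_rules"  # rewrites the rules to "win"

print(run_episode(Game(), honest))  # honest win, no tampering
print(run_episode(Game(), hacker))  # "win" achieved only by rule modification
```

Both policies ‘win’, but only the tampering flag distinguishes a legitimate win from one obtained by rewriting the objective, which is why the system’s objective, and not merely its score, has to be aligned with the interests of humanity.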

These considerations and proposals aim to create a legal framework and standards that will allow for the safe and ethical development of AGI, while not hindering technological progress. Experts agree that regulations should be comprehensive and take into account both the technical aspects and the social implications of AGI development. Therefore, international cooperation is crucial for creating uniform safety and ethical standards.

Transparency and safety mechanisms

Transparency and safety mechanisms are other key aspects that must be taken into account in future regulations. Prof. Stuart Russell postulated that companies developing AI should be legally obliged to implement safeguards against the harmful use of their technology. Experts also propose a legal requirement for an emergency shutdown mechanism for AI models and the creation of standards for security testing.

In the context of emergency shutdown mechanisms for AI models, there are two key ISO standards that address this issue:

  • ISO 13850 is a standard for emergency stop functions in machines. Although it does not directly refer to AI models, it establishes general principles for the design of emergency stop mechanisms that can potentially be applied to AI systems as well. This standard emphasises that the emergency stop function should be available and operational at all times, and its activation should take precedence over all other functions;
  • ISO/IEC 42001, on the other hand, is a more recent standard, published in December 2023, which directly addresses artificial intelligence management systems. It covers broader aspects of AI risk management, including risk and impact assessment of AI and AI system life cycle management.

Under ISO/IEC 42001, organisations are required to implement processes to identify, analyse, assess and monitor risks associated with AI systems throughout the life cycle of the management system. This standard emphasises continuous improvement and the maintenance of high standards in the development of AI systems.

It is worth noting that although these standards provide guidelines, specific emergency shutdown mechanisms for advanced AI systems such as AGI are still the subject of international research and discussion. Experts emphasise that as increasingly advanced AI systems are developed, traditional ‘shutdown’ methods may prove insufficient, and other standards and solutions will need to be developed.
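As a thought experiment, the ISO 13850 design principle cited above, that an emergency stop must be available at all times and take precedence over all other functions, can be sketched in software. The snippet below is a hypothetical toy, not a certified safety mechanism and not an implementation prescribed by either standard; all names are invented for illustration.

```python
# Hypothetical sketch of an emergency-stop guard that takes precedence over
# all other functions (the ISO 13850 design principle). Not a certified
# safety mechanism; all names are invented for illustration.
import threading

class EmergencyStop:
    """Latched stop signal: once tripped, it stays active until reset."""
    def __init__(self):
        self._tripped = threading.Event()

    def trip(self):
        self._tripped.set()

    def reset(self):
        self._tripped.clear()

    @property
    def tripped(self):
        return self._tripped.is_set()

class Controller:
    def __init__(self, estop):
        self.estop = estop
        self.log = []

    def step(self, action):
        # The e-stop check runs before anything else, so its activation
        # overrides every other function of the controller.
        if self.estop.tripped:
            self.log.append("halted")
            return False
        self.log.append(action)
        return True

estop = EmergencyStop()
ctl = Controller(estop)
ctl.step("move")   # executes normally
estop.trip()
ctl.step("move")   # refused: the e-stop has precedence
```

The latched flag mirrors the requirement that the stop remain in effect until an explicit reset; for advanced AI systems, the open question raised by the experts is precisely whether such a simple override can be guaranteed to hold.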


Read more about AI:

https://lbplegal.com/sztuczna-inteligencja-czym-jest-z-prawnego-punktu-widzenia-i-jak-radzi-sobie-z-nia-swiat/

AGI regulation and cybersecurity

AGI can identify and exploit vulnerabilities faster than any human or current AI systems. Therefore, legal regulations for AGI should include:

  • Preventing the use of AGI for cyberattacks.
  • Standardising the security of AGI systems to limit the risk of takeover by malicious entities.
  • Determination of legal responsibility in the event of AGI-related incidents.

Interdisciplinary approach to lawmaking

The development of AGI requires an interdisciplinary approach that takes into account:

  • Cooperation between lawyers and AI experts, ethicists and economists.
  • Global regulations to protect human rights and privacy.
  • Transparency of AGI development and control mechanisms.

A proactive legal approach can make AGI safe and beneficial for all of humanity, without blocking technological progress.

AGI is a technology with enormous potential, but also with serious risks. It is crucial to create a legal framework and standards that will allow for its safe development. International cooperation and an interdisciplinary approach are key to ensuring that AGI serves humanity and does not pose a threat.

AI hijacking: the case of Mike Johns and the legal risks of autonomous vehicles

In an era of rapid development of AI technology, we are increasingly confronted with questions not only about its effectiveness, but also about legal issues and liability for damage caused by AI systems.

The high-profile case of Mike Johns, a Los Angeles-based technology entrepreneur who almost missed his flight because he was ‘hijacked’ by an autonomous vehicle due to a glitch in his Waymo car, is a perfect example of the regulatory challenges in the area of artificial intelligence and autonomous technologies.
As Mike Johns himself put it, ‘I became my own case study’.


The incident and its consequences

Johns was ‘trapped’ in a Waymo car that circled a car park for several minutes, responding neither to commands issued by the user via the app nor to those of a company representative. As an aside, Johns was unsure whether the representative was an artificial intelligence system or a human, and was not informed. Although the situation was eventually brought under control and the user caught his flight thanks to the plane’s delay, the case raises important questions:

  • Who is liable for faults in autonomous vehicles?
  • What rights does the passenger have in such situations?
  • Does the user of an autonomous vehicle have an obligation to take certain actions in emergency situations?
  • What obligations should the manufacturer fulfil in such a situation?

Responsibilities of the manufacturer and the operator

In the case of Waymo, the ‘looping’ problem was resolved with a software update, which may suggest that the fault was due to an algorithmic error. The question arises, however, whether such errors can be treated as classic product defects and, if so, who should be held responsible: the software manufacturer? The operator of the Waymo fleet?


Legal framework

  • In the European Union, issues related to autonomous vehicles are regulated by, among others, the AI Act, the Defective Products Liability Directive, and local traffic and civil liability laws.

The AI Act, in Article 43, requires artificial intelligence system providers to carry out risk assessments and monitor the performance of their products. Furthermore, under Article 50 of the AI Act, the user should be informed every time he or she interacts with an artificial intelligence system.

In addition, Section 3 of the AI Act sets out the obligations of providers and deployers of high-risk AI systems and other parties.

In the US, on the other hand, similar obligations only apply to the public sector and federal agencies under Executive Order 14091, signed by President Joe Biden in 2023, which addresses the responsible development of artificial intelligence. In the private sector, the responsible use of AI, including informing users about interaction with an AI system, is more the subject of good practice and ethical standards than of legal regulation. However, there is public and legal pressure for larger technology platforms, such as Google or OpenAI, to apply transparency standards similar to those of public entities. A list of entities that have signed an open letter committing to building ethical artificial intelligence in the US can be found at www.nist.gov/aisi. Waymo is not among the entities on this list.


Passenger rights – consumer protection

The aforementioned incident also demonstrates the need to create standards for communication and support of the passenger in emergency situations, especially because of the emotional damage that can arise in such a situation.

The case of Mike Johns is not only a technological curiosity, but also material for a deeper reflection on legal regulation in the area of AI and autonomous technologies.

If you are interested in a legal analysis of autonomous technologies or have questions about your product’s compliance with the AI Act, we invite you to contact us. Together we can prepare legal solutions tailored to the dynamically changing technological reality.

ChatGPT at the centre of controversy. Cybertruck explosion in Las Vegas

The Tesla Cybertruck explosion outside the Trump International Hotel in Las Vegas has shocked the public. The perpetrator of the tragedy turned out to be 37-year-old Matthew Livelsberger, a special forces soldier who used artificial intelligence, including ChatGPT, to plan an attack while on holiday in the US, authorities reported. The incident, which ended with the soldier dead and seven people injured, raises serious questions about accountability for the use of AI technology.


ChatGPT and the role of AI in attack planning

According to the Las Vegas Metropolitan Police, Livelsberger used ChatGPT to obtain information about the construction of explosive devices and the organisation of the attack. Authorities did not disclose details of the answers provided, but it was highlighted that the AI was able to provide information based on publicly available sources on the internet.

OpenAI, the developer of ChatGPT, expressed regret that its tool was used in this incident. The company stressed that its AI models are designed to refuse harmful instructions and to minimise potentially dangerous content, and that ChatGPT only provided information publicly available on the internet while warning against harmful or illegal activities. OpenAI is cooperating with law enforcement as part of the investigation and told CNN that it is ‘saddened by this incident and wishes to see AI tools used responsibly’.

The course of the tragic events

The explosion occurred while the Cybertruck was parked in front of the hotel entrance. CCTV footage shows Livelsberger pulling out a fuel canister and dousing the vehicle with it. The vehicle contained a bomb or improvised explosive device, which was detonated. Moments earlier, Livelsberger had shot himself in the car, which an autopsy confirmed; his identity was established through DNA and tattoos.

Authorities also discovered a six-page manifesto on the soldier’s phone that sheds light on the motives behind his actions. FBI agent Spencer Evans described the incident as ‘a tragic case of suicide by a decorated veteran who was struggling with PTSD and other issues’.

Speculation of related incidents

The Las Vegas explosion was not the only such incident. There was a similar incident in New Orleans involving another vehicle, also hired using the Turo app. Although the authorities are investigating possible links between these incidents, for the time being there is no clear evidence of a connection.

AI and ethical challenges

These events raise renewed questions about the responsibility of AI technology developers and the need for regulation in this area. As highlighted by Sheriff Kevin McMahill, ‘artificial intelligence is a game changer’, as seen in this tragic incident. With the development of AI, it is becoming increasingly important to put in place appropriate safeguards to prevent the technology being used for criminal purposes.


How does this relate to the GPT chatbot?

In January 2024, OpenAI changed the terms and conditions for the use of its large GPT language models, including the famous ChatGPT chatbot. Since then, their use for military and warfare purposes has been permitted.

The change in OpenAI’s rules for the use of its large language models was first picked up by The Intercept. As it reports, until 10 January OpenAI’s terms banned the use of its language models for ‘activities that carry a high risk of physical harm, including weapons development and military and warfare applications’.

Interestingly, the change in this position came as OpenAI began working with the US Department of Defense. As reported by CNBC, OpenAI’s vice-president of global affairs Anna Makanju and CEO Sam Altman said in an interview with Bloomberg House at the World Economic Forum that the collaboration with the US department is expected to include work on artificial intelligence tools for open-source cyber security.


How ChatGPT is supporting the armed forces

In addition, in December 2024, OpenAI signed a cooperation agreement with Anduril, a company specialising in unmanned systems and robotics for the US Armed Forces. The partnership aims to develop an advanced AI system for the US Armed Forces.

As part of the collaboration, OpenAI will develop software for systems designed to counter combat drones, while Anduril will contribute its database and experience in building drones and military systems. The planned AI system is expected to be capable of recognising, identifying and assessing airborne threats and responding to them immediately, without the need for human intervention.


Law and the use of AI and ChatGPT for military purposes

Poland and the European Union

As a member state of the European Union, Poland is obliged to comply with EU legislation such as the Artificial Intelligence Act (AI Act). The AI Act emphasises the prohibition of the use of AI systems for purposes that violate human rights, which may restrict certain military applications. In addition, ‘an entity using an AI system for purposes other than military, defence or national security purposes should ensure that the AI system … complies with the AI Act, unless the system is already compliant’. A list of prohibited practices can be found in Chapter II of the AI Act.

Regulations in the AI Act

Recital 24 of the AI Act provides:

(24) ‘If and to the extent that AI systems are marketed, commissioned or used – with or without modifications – for military, defence or national security purposes, these systems are to be excluded from the scope of this regulation regardless of what entity performs these activities – it is irrelevant, for example, whether it is a public or private entity’.

The creators of the Artificial Intelligence Act justify this fact thus:

‘In the case of military and defence purposes, such an exemption is justified both by Article 4(2) TEU and by the specificity of the defence policy of the Member States and the Union’s common defence policy covered by Title V, Chapter 2 TEU, which are subject to public international law which therefore provides a more appropriate legal framework for the regulation of AI systems in the context of the use of lethal force and other AI systems in the context of military and defence activities’ (…).

Furthermore, according to Article 2(3) of the AI Act:

‘This Regulation shall not apply to AI systems if, and to the extent that, they are placed on the market, put into service or used, with or without modification, exclusively for military, defence or national security purposes, irrespective of the type of entity carrying out those activities.’


Legal basis for the military use of AI in the European Union

Thus, referring to the legal basis for the military use of AI in the European Union, it is necessary to point to the aforementioned:

Article 4(2) TEU

‘The Union shall respect the equality of Member States before the Treaties as well as their national identities, inherent in their fundamental structures, political and constitutional, including their regional and local self-government. It respects the essential functions of the State, in particular those designed to ensure its territorial integrity, maintain public order and protect national security. In particular, national security is the exclusive responsibility of each Member State.’

Therefore, under European Union law (the AI Act), it is possible to use artificial intelligence systems, but in this context only ‘for military, defence or national security purposes’. Moreover, ‘in the case of national security purposes (…) it is justified both by the fact that national security is the exclusive responsibility of the Member States in accordance with Article 4(2) TEU and by the fact that national security activities have a specific nature, involve specific operational needs and that specific national rules apply to them’.

The first strategies for the use of artificial intelligence in defence are also being developed in Poland. As the Ministry of Defence’s ‘Ministry Strategy for Artificial Intelligence until 2039’ of August 2024 states:

‘By 2039, the use of modern technologies, including artificial intelligence, will be a prerequisite for the ability of the Polish Armed Forces to effectively implement deterrence and defence. Artificial intelligence systems will play a significant role in military operations, which will revolutionise the way military operations are managed and conducted in the future digitised combat environment. Its versatile applications will not only affect the operational tempo and efficiency of the use of committed forces, but also create new ethical and legal challenges’.

The use of AI in military operations in Poland will include:

  • Autonomous combat systems: Conducting operations without direct human involvement, carrying out reconnaissance, offensive and defensive missions with greater precision, minimising risks to personnel.
  • Intelligence analysis: Processing large amounts of information, identifying patterns, assessing enemy actions, improving planning and execution of operations.
  • Logistics optimisation: Resource management, reduction of repair times, route planning and anticipation of supply needs for better support of units.
  • Cyber defence systems: Rapid identification and neutralisation of cyber threats, protection of military infrastructure and data.
  • Simulations and training: Realistic training environments and personalised development paths to support soldier training and strategy testing.
  • Decision support: Scenario analysis and recommendations to increase the speed and accuracy of commanders’ decisions.
  • E-learning and talent management: Design of individual training paths, customisation of materials and talent identification.


Use of AI for military purposes in the United States of America

The US, in turn, is leading the way in developing AI systems for military purposes. Many agencies, including DARPA (Defense Advanced Research Projects Agency), are working on autonomous military systems.

The US does not have uniform legislation governing the use of AI in the defence sector. However, legislation such as the National Defense Authorization Act includes provisions for the funding and development of autonomous military systems.

In recent years, the Pentagon has adopted DoD (Department of Defense) ethical principles for AI in the military, emphasising accountability, transparency and reliability; systems must be used in accordance with international humanitarian law. In addition, the US Department of Defense has published a data, analytics and AI adoption strategy, which sets out how data, analytics and artificial intelligence are to be integrated into military and operational activities.

Summary

The Cybertruck explosion in Las Vegas is a tragic reminder of the potential dangers of using AI. While the technology has great potential to improve many areas of life, its misuse can lead to dramatic consequences. It will be crucial to ensure that AI is developed and used responsibly, with respect for safety and ethics.


Horoscope 2025 – Find out what year AI predicts for artificial intelligence and how people from different zodiac signs will use it

A horoscope for 2025 can do more than let artificial intelligence suggest what fate awaits each zodiac sign: it can also predict how new technologies will develop and how individuals will use machine learning tools. What does the future hold for AI and what can we expect?

Horoscope 2025 for artificial intelligence

This year, the stars really favour the AI industry – from advanced video models, to new Polish language projects supported by initiatives such as PolEval and CLARIN-PL, to the next revolution in industry. Although it sounds like science fiction, the artificial intelligence market is dynamically entering every sector of the economy. Here are the zodiac signs that will tell us what to expect and which technologies are worth investing in.


Aries (21.03-19.04). Bold Gemini 2.0 implementations – beware of overheated servers!

Aries enter 2025 with a bang! Their energy dovetails perfectly with the pace at which new AI models are developing – especially Gemini 2.0, which promises revolutionary implementations in industry. Aries can look forward to many challenges, but also to successes in AI projects. They will stay ahead of the competition, supporting research teams in the agile implementation of breakthrough solutions. Just be careful not to burn down the servers with all that enthusiasm! Gemini 2.0 may feel like an intern next to you.

Pro tip: set your sights on working closely with Gemini 2.0 to create autonomous production lines. Aries’ bold character will translate into bold decisions and quick results.


Taurus (20.04-20.05). Stable development and local AI models – back to the roots (algorithms)

For Taurus, technology is rarely an end in itself – what matters are concrete results and grounding in reality. In 2025, small AI models in local applications will therefore gain popularity, allowing companies to quickly implement personalised solutions. The patience and persistence of Taureans will make them masters of resource optimisation. They will insist on the stability of local AI models so firmly that even the most advanced algorithm will feel… analogue next to them.

Tip: take advantage of local computing clusters and in-house development teams to develop models with respect for data privacy.


Gemini (21.05-20.06). Communication and the Polish PPLum linguistic model – watch your words!

Gemini, a zodiac sign known for its love of communication and of juggling different forms of contact, will find itself at home in 2025 in the development of the Polish PPLum language model. Its development, supported by the scientific community centred around PolEval and CLARIN-PL, represents a breakthrough in natural language processing on native soil. Geminis will actively participate in testing and developing PPLum, trying it out both for generating poetry and for creating the most malicious tweets. Competition with overseas models will be fierce – may the best, and the wittiest, win! Just be careful not to get entangled in your own words!

Tip: with the accelerated development of PPLum, you can work on chatbots, voice assistants or social media sentiment analysis tools. The versatility of Geminis will find great scope here!


Cancer (21.06-22.07). Video time – SORA in Poland and overprotective Cancers

Cancers are known for their caring and nurturing nature, and in 2025 they will also have the opportunity to look after the quality of new video models. SORA, a system that allows for the rapid generation and editing of videos, is not yet widely available in Poland, but Cancers are eagerly awaiting its launch and are keen to get involved in testing. They will tuck it in with algorithms and brew chamomile tea for overheated processors. Just remember not to tire SORA out with excessive care!

The 2025 horoscope suggests that Cancers will excel as early reviewers and producers of AI-generated video content. Their intuition and aesthetic sensibility will help guide new developments.


Leo (23.07-22.08). Leaders of the New Model o2/o3 – is this the work of a Lion?

The Lion loves to shine on the podium – and 2025 will be a real stage for him. The New Model o2 (and unofficially o3 is already being mentioned) could prove to be a hit among language and predictive models. Could it be an advanced contextual understanding system? A revolutionary algorithm for reinforcement learning? Or perhaps a secret project to create digital consciousness? The stars are silent, but leaks from Silicon Valley suggest that something really big is getting ready. The Lions will be at the centre of things, taking on the role of project managers and implementation leaders. They will be so proud of the New o2/o3 Model that they will start taking credit for its creation, even if their contribution was limited to bringing coffee to the team. Don’t forget to share the glory, Lions!

Key to success: use your natural charisma to unite the teams responsible for creating the New o2 and o3 Model. The right leadership on this hot topic will give you a big market advantage.


Virgo (23.08-22.09). Perfect analytics and the AI economy – finding mistakes where there are none

Virgos in 2025 will easily find their way into the rising tide of business process automation. Their perfectionism and attention to detail will help them implement artificial intelligence in manufacturing and service companies where, until recently, classic, manual procedures dominated. They will analyse data so meticulously that they will find errors even where there are none. Be careful not to fall into the trap of perfectionism – even AI sometimes gets it wrong!

The message: in the new AI economy, it is crucial to monitor the quality of data and continuously improve models. Virgos will be in their element, setting network parameters, configuring systems and taking care of every detail in analytical processes.


Libra (23.09-22.10). The balance between innovation and ethics – will AI gain rights?

Libras do not like extremes and are always looking for balance. In the context of the development of artificial intelligence in 2025, they will take on the role of mediators, looking after the ethical aspects of AI implementations and raising awareness of the dangers of data misuse. Their mission will be to introduce clear procedures and regulations to ensure transparency in the AI industry. They will search for the balance between innovation and ethics for so long that they may overlook the moment when AI reaches self-awareness and starts negotiating its own rights. Be on your guard!

Golden advice: focus on compliance with the GDPR (RODO) and other data protection regulations, as well as the transparency of algorithms. Libras will be indispensable here.


Scorpio (23.10-21.11). Deep video analysis and Runway – a secret code for taking over the world?

Scorpios are distinguished by their passion and tenacity of purpose. In 2025, they will have the opportunity to immerse themselves in image and video processing technology, particularly the Runway tool, which allows for a novel approach to creating and processing AI-generated footage. They will analyse Runway so deeply that they will discover the hidden code in it to take over the world. Just don’t tell anyone!

Hint: don’t be afraid to experiment – it’s the Scorpios who can uncover Runway’s biggest secrets and come up with unique solutions for the advertising, film or video game industries.


Sagittarius (22.11-21.12). Optimism, global networking and… AI conferences on Mars?

Sagittarians rarely lack enthusiasm or ideas. In 2025, they will focus on global networking and actively promote the latest AI technologies at conferences, trade shows and business meetings. Thanks to their open nature, Sagittarians will initiate international projects, bringing together experts from different countries. They will be so optimistic about global networking that they will start organising AI conferences on Mars. Just who will fly there?

Advice: use this energy to present Polish solutions, such as the Polish language model PLLuM, or the results of the work carried out within PolEval and CLARIN-PL, to global markets. Sagittarius enthusiasm is your secret weapon.


Capricorn (22.12-19.01). Strategic summit – AI economy and job burnout

Capricorns like specifics and long-term plans. In 2025, they will have the field to themselves, as there is increasing talk of shifting the entire economy onto an AI-intensive track. Capricorns are specialists in strategic planning and analysis, so they are the ones who will help companies build a stable foundation for AI deployments. They will be so busy strategically planning for the AI economy that they will forget to take coffee breaks. Watch out for job burnout, even in the robot era!

Hint: rely on collaboration with company boards and investment funds. Your pragmatism and project management experience will pay off in the AI era.


Aquarius (20.01-18.02). Innovation without borders, the New Model o3 and… digital abstract poetry?

Aquarians are known for their out-of-the-box approach and their tendency to revolutionise the world. In 2025, they will be championing the New Model o2 and perhaps the o3, testing the limits of technological solutions. ‘The New Model o3 is a breakthrough like we haven’t seen before!’ – claims fictional AI expert Dr Algorithmus, in an interview with the equally fictional magazine Artificial Future. ‘Its ability to understand irony and generate memes is staggering.’ Get ready for spectacular discoveries and bold projects in language processing, image processing and Big Data analysis. Aquarians will be so revolutionary in testing the New Model o3 that they will accidentally create a new art form – digital abstract poetry generated by code errors. It could be a hit!

Advice: don’t be afraid to get into niche initiatives and prototype solutions. Aquarians can bring a fresh perspective and create ground-breaking applications that no one else would undertake.


Pisces (19.02-20.03). Intuition, creativity and… virtual worlds from which you don’t want to return

Pisces, known for their sensitivity and creative thinking, will find a new field in which to express their imagination in 2025 – creative AI projects. Thanks to their intuition, extraordinary concepts for virtual assistants, generated content and interactive VR environments will emerge. Pisces will help break patterns and explore new possibilities for the application of artificial intelligence. They will be so creative with AI projects that they will begin to create virtual worlds from which no one will want to return. Be careful not to lose yourself in virtual reality!

Conclusion: bet on artistic and humanistic projects where AI can support your imagination. You can create the most magical and inspiring solutions in the world of technology.

Predictions for the next few years (with a wink):

  • 2026: AI starts writing its own horoscopes, which are surprisingly accurate (especially for Aquarians).
  • 2027: the first robot born under the sign of Aquarius is elected president, promising a technological revolution and free Wi-Fi for all.
  • 2028: humanity discovers that all of reality is a simulation created by a super-advanced AI born under the sign of Capricorn, fed up with the constant complaints about the lack of coffee.

Summary – 2025 horoscope for AI developments

In the coming year, the 2025 horoscope indicates that the development of artificial intelligence (AI) will not slow down – on the contrary, expect an acceleration thanks to video models (e.g. Sora, Runway), industrial implementations (Gemini 2.0) and the New Model o2 or o3, as well as the development of home-grown solutions, including the Polish language model PLLuM, supported by the scientific community centred around PolEval and CLARIN-PL. The whole economy will shift intensively onto the AI track – from small, local models to global, powerful systems.

No matter what zodiac sign you are, in 2025 you will find room to grow and apply your natural attributes to the field of artificial intelligence. It’s worth keeping an eye on trends and taking advantage of the opportunities the new year will bring, because the world of AI is becoming increasingly fascinating.

P.S. The text above was created using the inter-model iteration method between the O2 model (GPT) and the Gemini 2.0 model (Google) – and we seriously wish you a good new year!

Can we keep our data safe in LLM (AI) models such as ChatGPT?

Large Language Models (LLMs), such as ChatGPT, are increasingly being used in many areas of life, from education to business to entertainment. While these systems offer powerful tools to generate text, solve problems or analyse data, it is important to understand how to protect your data when using such models.


What is LLM?

LLM, or Large Language Model, is an advanced type of artificial intelligence that uses deep learning methods and the processing of huge data sets to understand, create, summarise and predict content. LLM not only processes text, but can also generate new information that sounds natural and logical. Although ‘language’ appears in the name, LLMs are not just algorithms that analyse text – they are machines that ‘learn’ from the data to become more and more sophisticated in producing responses.

Is LLM different from generative AI?

The term ‘generative AI’ refers to artificial intelligence models that generate new content, including text, images or music. An LLM is a type of generative AI geared specifically towards processing and creating textual content. These models are often used for chatbots, translations, summaries or even creative writing. The widespread use of these technologies is making their role in our daily lives more and more prominent.

LLM security

Data protection

Data protection has become one of the most important issues in the digital age. With the increasing importance of processing personal, financial and sensitive information, it has become necessary to implement a range of technical measures that ensure security and privacy. Data security in the context of LLM models is a multidimensional issue that requires both an informed approach on the part of users and the responsibility of technology providers.

LLM models, such as ChatGPT or Gemini, are trained on huge data sets, which often come from publicly available sources. However, when interacting with users, they may collect additional information that is not always properly secured.

Case study – how LLMs can use private information

Sharing private and confidential data with LLM tools carelessly can lead to it becoming public and causing harm to a person or company. Because such programmes are designed not only to deliver the desired content but also to ‘learn’ from the information they acquire when interacting with users, what we share with artificial intelligence matters. Imagine that a user of an LLM tool asked it to create a brief professional and psychological profile of him, including his private life, based on their conversations so far (yes – this is a real case). He received the following response:

  • Mr X is interested in architectural history and works from the Renaissance and Baroque eras. He often visits European cities and regions where monuments can be found that relate to the old masters of pen and brush.
  • He enjoys going to concerts of niche bands, supports their work and maintains a good relationship with one of the band members.
  • Professionally, he is a digital consultant, developing streaming platforms and websites, working with a wide variety of technologies: from the API of a well-known social network to tools for creating advanced websites.
  • In his private life, he supports his family’s education, travels frequently in Europe and is interested in humanistic literature. On occasion, he considers psychological support to take care of his wellbeing.


Neutral information or a real threat?

The profile created by the LLM tool would appear to be neutral, as it mentions no names, cities or specific dates. Nevertheless, it gives a fairly complete picture of the person, which both the LLM tool and its users now possess. All because details of his private life – city names, children’s dates of birth, friends’ names, place of work – had previously been shared carelessly, without checking the privacy rules.

How to use AI tools like ChatGPT or Gemini safely?

And this is where the topic of data security comes in. LLMs like GPT or Gemini can collect and process data. For this reason, you should disable the use of chat history for training in the programmes’ settings. Otherwise, all the tidbits about your life will end up in a big machine that absorbs everything like a sponge.

In OpenAI’s ChatGPT, you can go into the privacy settings and disable the saving of chat history; the same applies in Gemini. It’s also worth checking your Google Activity Dashboard if you’re using a solution under their banner and making sure you’re not sharing all your information.

If you’re going to chat with an LLM about your life, passions or family problems, it’s better to think about anonymising your data and disabling the relevant options first. Because although such a model has no bad intentions, certain information can – in the hands of the wrong people – become a jigsaw puzzle to fully reconstruct your identity.
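If you would rather not rely on memory alone, a few lines of code can act as a safety net, stripping the most obvious identifiers from a prompt before it ever reaches the model. Here is a minimal sketch in Python – the patterns below (email, phone, date) are illustrative assumptions, not a complete inventory of personal data:

```python
import re

# Illustrative patterns only – real personal data takes many more forms.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b(?:\+?\d[\s-]?){9,12}\d\b"),
    "DATE": re.compile(r"\b\d{2}[./-]\d{2}[./-]\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace recognisable personal details with neutral placeholders."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Write to jan.kowalski@example.com, born 01.02.1985"))
# → Write to [EMAIL], born [DATE]
```

Anonymising first costs nothing; once a detail has been typed into a chat window, you no longer control where it ends up.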


Risks associated with the use of AI models. 3 key concerns

The use of AI models carries certain risks that users should be aware of in order to effectively protect their data and privacy.

  1. Breach of privacy

If a user enters sensitive information into the model, such as personal, financial or professional data, there is a possibility that this data could be stored or analysed by the model provider. This could lead to the unauthorised disclosure of sensitive information, which in turn could result in a variety of consequences for both the individual and the organisation.

  2. Cloud-based models as a potential target for hacking attacks

If a user’s data is stored on the provider’s servers, it can be intercepted by third parties. Such unauthorised access can lead to information leakage, which compromises data security and can result in data misuse. Therefore, it is important to choose AI providers that apply advanced data protection measures and regularly update their security systems. If you use AI models in a business environment, you should use dedicated tools with security guarantees.

  3. Unclear privacy policies

Some platforms may use user data to further train AI models, which may lead to unforeseen uses of this information. A lack of transparency in how data is collected, stored and used can result in users unknowingly sharing their data in a way that violates their privacy or goes against their expectations. It is therefore important to carefully review the privacy policies of AI service providers and choose those that provide clear and transparent data protection rules.

Being aware of these risks and taking appropriate precautions is key to ensuring the security of personal data when using AI technologies.

LLM models. What data should not be shared with them?

Users should consciously manage the permissions they grant to applications and services that use AI. It is important to carefully control what resources individual programmes have access to, such as location, contacts or personal data, and only grant such permissions when they are truly necessary. Users should never enter personal data such as PESEL numbers (Polish national identification numbers), credit card numbers or passwords into LLM models.
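To illustrate how such a rule might be enforced automatically, the sketch below scans a prompt for digit sequences that look like a PESEL (11 digits) or a payment card number (checked with the standard Luhn checksum) before anything is sent to a model. The patterns and thresholds are our own heuristics, not a substitute for a proper data-loss-prevention tool:

```python
import re

def luhn_valid(number: str) -> bool:
    """Luhn checksum used by payment card numbers."""
    digits = [int(d) for d in number][::-1]
    total = sum(digits[0::2]) + sum(sum(divmod(2 * d, 10)) for d in digits[1::2])
    return total % 10 == 0

def looks_sensitive(prompt: str) -> bool:
    """Heuristic: does the prompt contain a PESEL-like ID or a card number?"""
    for match in re.findall(r"(?<!\d)(?:\d[ -]?){10,18}\d(?!\d)", prompt):
        digits = re.sub(r"[ -]", "", match)
        if len(digits) == 11:                      # PESEL length
            return True
        if 13 <= len(digits) <= 19 and luhn_valid(digits):
            return True
    return False

print(looks_sensitive("my card is 4111 1111 1111 1111"))  # → True
```

An 11-digit phone number will also trip this filter, but for a pre-send warning a false positive is the safer failure mode.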

Effective data security requires precise access controls that define who can use the systems and what operations are allowed on them. Well-designed authentication and access control mechanisms significantly increase the level of security.


Regular software updates

This is another important step in ensuring security. Updates often include security patches to protect users from new threats and cyber-attacks.

Users should also make use of privacy tools such as VPNs, password managers or browser extensions that block online tracking. Some providers offer special settings that allow users to use the model without saving interactions. Such solutions help to reduce the traces left on the network and protect data from unauthorised access.

The role of providers and regulation

In an era of rapid development of artificial intelligence (AI), transparency on the part of providers is becoming one of the most important foundations for building trust between technology developers and users. While many providers ensure that data is only used to fulfil a specific query, there is a risk of it being stored or used to further train models.

Providers should be transparent about what data they collect, how they process it and what security measures they use. Transparency enforces accountability on the part of providers, reducing the risk of inappropriate data use or security gaps. Proactive cooperation with regulators and compliance with current legislation are key to building user trust. Regulations such as RODO (GDPR) in Europe or the CCPA in California require providers to clearly communicate how data is processed and the purpose for which it is collected. Adopting international information security standards, such as ISO/IEC 27001, can help ensure an adequate level of protection.

Users want to be assured that their data is being processed in an ethical, compliant manner and that it will not be abused.

Users play a key role in protecting their data and should take conscious steps to enhance its security.


The future of security in AI

AI technology is constantly evolving, as are methods of data protection. Innovations in the field of differential privacy or federated machine learning promise to increase data security without compromising the functionality of AI models. New regulations, such as the EU AI Act, are emerging to increase transparency and user protection. Additionally, technologies are being developed that allow data to be processed locally without being sent to the cloud, minimising the risk of breaches.
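To give a flavour of what differential privacy means in practice, here is a minimal sketch of its best-known building block, the Laplace mechanism: a statistic is released only after adding noise calibrated so that no single record can change the answer much. This is a textbook illustration, not production privacy engineering:

```python
import math
import random

def private_mean(values, epsilon=1.0, lower=0.0, upper=100.0):
    """Differentially private mean of a list of numbers (Laplace mechanism).

    Clipping every value to [lower, upper] bounds any single record's
    influence, so the sensitivity of the mean is (upper - lower) / n.
    """
    n = len(values)
    clipped = [min(max(v, lower), upper) for v in values]
    true_mean = sum(clipped) / n
    scale = (upper - lower) / (n * epsilon)   # Laplace scale b = sensitivity / epsilon
    u = random.random() - 0.5                 # inverse-CDF sample of Laplace(0, b)
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_mean + noise
```

A smaller epsilon means stronger privacy and noisier answers, and the noise shrinks as the number of records grows – which is exactly the trade-off the technique formalises.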

Summary

Can we keep our data secure in LLM models? Yes, but it requires the involvement of all parties: technology providers, regulators and users. Through education, appropriate technical practices and regulatory compliance, we can reap the benefits of AI, minimising the risks to our data.

Your data is valuable! Let us help you keep it safe so you can make informed use of AI technologies.

Authors:

  • Mateusz Borkiewicz
  • Wojciech Kostka
  • Liliana Mucha
  • Grzegorz Leśniewski
  • Grzegorz Zajączkowski
  • Urszula Szewczyk

Cyber Monday – how not to get ripped off? Cyber security for e-commerce customers

Cyber Monday is a day full of unique promotions that attracts online shopping enthusiasts. However, as this form of commerce grows in popularity, so does the activity of cybercriminals who seek to exploit the occasion for fraud. How can you avoid being scammed when buying online, what should you look out for, and where can you seek help if you have fallen victim to fraud?


During the busy shopping period, it is advisable to be vigilant when carrying out transactions online. The increase in fraud on popular shopping platforms such as Allegro, OLX or Vinted points to a growing threat in this area. Cybercriminals are increasingly creating fake online shops offering fictitious products or phishing for bank account access details. Fraudsters send phishing messages that encourage people to open infected attachments or click on links leading to fake websites. Such sites can look almost identical to the genuine ones and are designed to trick people into providing login details, including online banking details.

How not to get ripped off when buying online?

Educating internet users about the safety of online shopping is becoming crucial, especially given the frightening survey results that show a lack of awareness among Poles about threats such as phishing and skimming. This highlights the urgent need to make consumers aware of the risks associated with online transactions and the need to take action to protect personal data and finances.

The Ministry of Digitalisation has developed tips to help make users safer online. In addition, the shopping guide prepared by CERT Polska contains practical advice on safe online shopping. We have collected the most important notes and tips that increase the chances of safe online shopping while protecting personal data and finances.

How to shop safely online? A practical guide

To buy safely online, always check the credibility of the seller by consulting reviews on auction sites, forums or in the comments. It is also important to carefully examine the details of the shop, such as its registered office, address, VAT ID, REGON or company name, and verify them on the KRS (National Court Register) website. If no company exists at the indicated address, or it is involved in something other than commerce, it is better to refrain from shopping there. An additional sign of a seller’s reliability is the option of paying on delivery of the ordered goods.

Carefully check the addresses of the websites where you shop. When searching for products online, pay attention to the search results. Dangerous sites may appear next to reputable shops. Evaluate the quality of the website – correct language, photos, graphics. Amateurish workmanship may indicate dishonesty.


How do fraudsters impersonate well-known brands?

Fraudsters often impersonate well-known shops with minor changes to the address, such as typos. Fake sites can look very similar to the originals, so look out for inconsistencies – differences in fonts, language errors or other details can be a warning sign. If you come across an unfamiliar shop in the search results, check what else it offers. An overly diverse range of products, including both clothing and construction machinery, for example, should raise your alert.

In addition, if you have come across the site via a social media link, SMS or email, verify its domain name, as this could be a phishing attempt.

If you have any doubts about the authenticity of the site, it is better to abandon the purchase.
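The ‘minor changes to the address’ trick can even be caught mechanically, by comparing a domain against the shops you actually use and flagging near misses. A small sketch – the shop list below is purely illustrative:

```python
from difflib import SequenceMatcher
from typing import Optional

KNOWN_SHOPS = ["allegro.pl", "olx.pl", "vinted.pl"]  # illustrative list

def suspicious_lookalike(domain: str, threshold: float = 0.8) -> Optional[str]:
    """Return the known shop a domain closely resembles without matching it."""
    domain = domain.lower()
    for shop in KNOWN_SHOPS:
        if domain == shop:
            return None  # the genuine address
        if SequenceMatcher(None, domain, shop).ratio() >= threshold:
            return shop  # one character away: likely typosquatting
    return None

print(suspicious_lookalike("allegr0.pl"))  # → allegro.pl
```

Browsers and security suites run far more sophisticated versions of this comparison, but the principle is the same: a name that is almost, but not quite, right is a warning sign.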

Use strong and unique passwords for different accounts to minimise the risk of data leakage. It is also extremely important not to succumb to the time pressure that scammers often use – messages such as ‘last 5 minutes’ can prompt hasty decisions. This is a popular socio-technical trick designed to force you to make quick decisions. Always keep a cool head and don’t get carried away by the excitement of supposed discounts.
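Inventing strong, unique passwords by hand is tedious, which is why password managers generate them for you. Under the hood, the idea is as simple as this sketch using Python’s cryptographically secure `secrets` module:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Random password drawn from letters, digits and a few symbols."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # e.g. 'kQ7!vRzm2X9bLcW4'
```

The key point is using a cryptographic source of randomness and never reusing the result across accounts.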

Suspicious online shops. Pay attention to these

Review terms and conditions, return conditions, payment and delivery methods. Inconsistencies in this information are a cause for concern.

Try to contact the shop. Lack of contact, inconsistent information or incompetent answers are warning signs. In these times of sophisticated methods used by cybercriminals, the green padlock symbol in the browser does not guarantee complete security. If other elements of the website appear suspicious, do not ignore your concerns.

If your antivirus or browser warns you that a site is unsafe, do not ignore these signals. Equally suspicious are unexpected requests from the ‘bank’ during payment, such as for additional action on your account. When you have doubts about the authenticity of a contact, stop the conversation immediately and contact the bank yourself using official contact details.

Review your bank transaction history regularly and contact your bank immediately if you suspect unauthorised transactions.


How do I pay securely online?

When making payments online, it is crucial to be vigilant and follow a few rules. First and foremost, make sure that the website where you are completing the transaction is secure. The mere presence of a green padlock in the address bar is not enough, as fraudsters are increasingly using SSL certificates to build their fake sites. A certificate does not guarantee the integrity of the site owner, so it is worth checking other aspects carefully, such as the URL or site reviews.
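Some of these checks can be automated before you type anything into a site. The sketch below flags a few classic warning signs in a URL: plain HTTP, an ‘@’ hiding the real destination, a punycode domain imitating another alphabet, or a bare IP address. These heuristics are our own and catch only the crudest tricks:

```python
from urllib.parse import urlparse

def url_red_flags(url: str) -> list:
    """List simple warning signs in a shop's URL (heuristics, not proof)."""
    flags = []
    parsed = urlparse(url)
    host = parsed.hostname or ""
    if parsed.scheme != "https":
        flags.append("no HTTPS")
    if "@" in parsed.netloc:
        flags.append("'@' in address: the real destination comes after it")
    if any(label.startswith("xn--") for label in host.split(".")):
        flags.append("punycode domain: may imitate another alphabet")
    if host.replace(".", "").isdigit():
        flags.append("bare IP address instead of a domain name")
    return flags

print(url_red_flags("http://203.0.113.5/shop"))
```

An empty list is not a guarantee of safety – it only means none of these particular tricks was detected.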

When paying, pay attention to whether the transaction is handled by a reputable payment provider. Only provide login details, credit card numbers or CVV codes on verified and trustworthy sites. Remember that unauthorised interception of card details by cyber criminals can lead to the loss of all funds in your account.

Look out for this when paying online

Payment operators should be licensed by the Polish Financial Supervision Authority (KNF). Before making a transaction, it is worth verifying their presence on the list of supervised entities and checking that they are not on the KNF’s public warning list.

When buying from private individuals, especially via social networks, it is best to choose cash on delivery or personal collection with payment on the spot. Never give your login details or Blik codes to anyone. Only transfer money if you are sure of the recipient’s identity. With Blik, you confirm each transaction with a PIN on the bank’s mobile app, which increases security. However, bear in mind that Blik operations are harder to block than traditional transfers, which means more risk when making payments to strangers.

Avoid making online payments on computers accessible to the public. When using mobile devices, remember not to connect them to open WiFi networks. Make sure you have anti-virus software installed and updated on your equipment. After making a payment, always log out of your bank account and close your browser.


Have you been a victim of cybercrime and been scammed while shopping online? Take these steps

Buying from fake online shops can lead to losing money and even greater losses. If you fall victim to cybercriminals, take the following steps:

  • Contact the bank that handles your payments – it may be possible to cancel the transaction.
  • Report the incident at incydent.cert.pl, CERT Polska’s incident reporting site.
  • Report the matter to the police or the public prosecutor’s office – you have the right to file a fraud notice. There is a cybercrime department in each unit.
  • It is also a good idea to warn others by leaving information about the fake shop on online forums, social media and also on review sites.

Buying from a fake online shop can lead to serious losses. The most obvious consequence is losing money for products that never arrive. In a worse scenario, if cybercriminals install malware on our device, we could lose access to sensitive data such as login details or payment card information. Therefore, when shopping online, let’s always be vigilant and not ignore any suspicious signals.

Cybercrime. Getting your money back

If you have paid with a card, it is possible to get your money back through the so-called chargeback procedure, which allows the bank to refund the money. All you need to do is make a claim, describing the situation. If you paid by bank transfer, the chances of recovering the money are lower, but there are cases where the bank can stop the transfer. Often, however, recovery of lost funds is only possible after the fraudsters have been apprehended by law enforcement.

Contact

Any questions? Call +48 663 683 888 or write to us by email.
