ChatGPT at the centre of controversy: Cybertruck explosion in Las Vegas

13 January 2025   /  AI

The Tesla Cybertruck explosion outside the Trump International Hotel in Las Vegas has shocked the public. The perpetrator turned out to be 37-year-old Matthew Livelsberger, a special forces soldier who, according to authorities, used artificial intelligence tools, including ChatGPT, to plan the attack while on leave in the US. The incident, which left the soldier dead and seven people injured, raises serious questions about accountability for the use of AI technology.


ChatGPT and the role of AI in attack planning

According to the Las Vegas Metropolitan Police, Livelsberger used ChatGPT to obtain information about the construction of explosive devices and the organisation of the attack. Authorities did not disclose the details of the answers provided, but highlighted that the AI was able to supply information based on publicly available sources on the internet.

OpenAI, the developer of ChatGPT, expressed regret over the use of its tool in this incident. The company stressed that its AI models are designed to refuse harmful instructions and to minimise potentially dangerous content. OpenAI told CNN that it is ‘saddened by this incident and wishes to see AI tools used responsibly’.

The company also said that ChatGPT only provided information publicly available on the internet and warned against harmful or illegal activities. OpenAI is cooperating with law enforcement as part of the investigation.

The course of the tragic events

The explosion occurred while the Cybertruck was parked in front of the hotel entrance. CCTV footage shows Livelsberger pulling out a fuel canister and dousing the vehicle with it. The vehicle contained an improvised explosive device, which was then detonated. Moments earlier, Livelsberger had shot himself inside the car, which an autopsy confirmed; his identity was established through DNA and tattoos.

Authorities also discovered a six-page manifesto on the soldier’s phone that sheds light on the motives behind his actions. FBI agent Spencer Evans described the incident as ‘a tragic case of suicide by a decorated veteran who was struggling with PTSD and other issues’.

Speculation of related incidents

The Las Vegas explosion was not an isolated incident. A similar incident occurred in New Orleans involving another vehicle, also rented through the Turo app. Although the authorities are investigating possible links between the two incidents, there is so far no clear evidence of a connection.

AI and ethical challenges

These events raise renewed questions about the responsibility of AI technology developers and the need for regulation in this area. As Sheriff Kevin McMahill put it, ‘artificial intelligence is a game changer’, as this tragic incident shows. As AI develops, it is becoming increasingly important to put appropriate safeguards in place to prevent the technology from being used for criminal purposes.


How does this relate to the GPT chatbot?

In January 2024, OpenAI changed the terms of use for its large GPT language models, including the famous ChatGPT chatbot. Since then, use for military and warfare purposes is no longer explicitly prohibited.

The change in OpenAI’s usage policy was first reported by The Intercept. As it reports, until 10 January OpenAI’s policy banned the use of its language models for ‘activities that carry a high risk of physical harm, including weapons development and military and warfare applications’.

Interestingly, the change came as OpenAI began working with the US Department of Defense. As reported by CNBC, OpenAI’s vice-president of global affairs Anna Makanju and CEO Sam Altman said in an interview at Bloomberg House during the World Economic Forum that the collaboration is expected to include work on artificial intelligence tools for open-source cybersecurity.


How OpenAI is supporting the armed forces

In addition, in December 2024 OpenAI signed a cooperation agreement with Anduril, a company specialising in unmanned systems and robotics for the US Armed Forces. The partnership aims to develop advanced AI systems for the US military.

As part of the collaboration, OpenAI will develop software for counter-drone systems, while Anduril will contribute its data and its experience in building drones and military systems. The planned AI system is expected to be capable of recognising, identifying and assessing airborne threats and responding to them immediately, without the need for human intervention.


Law and the use of AI and ChatGPT for military purposes

Poland and the European Union

As a member state of the European Union, Poland is obliged to comply with EU legislation such as the Artificial Intelligence Act (AI Act). The AI Act prohibits the use of AI systems for purposes that violate human rights, which may restrict certain military applications. In addition, ‘an entity using an AI system for purposes other than military, defence or national security purposes should ensure that the AI system … complies with the AI Act, unless the system is already compliant’. A list of prohibited practices can be found in Chapter II of the AI Act.

Regulations in the AI Act

The text of the AI Act contains the following provision:

(24) ‘If and to the extent that AI systems are placed on the market, put into service or used – with or without modification – for military, defence or national security purposes, those systems are to be excluded from the scope of this Regulation regardless of the type of entity carrying out those activities – it is irrelevant, for example, whether it is a public or private entity’.

The creators of the Artificial Intelligence Act justify this exclusion as follows:

‘In the case of military and defence purposes, such an exemption is justified both by Article 4(2) TEU and by the specificity of the defence policy of the Member States and the Union’s common defence policy covered by Title V, Chapter 2 TEU, which are subject to public international law which therefore provides a more appropriate legal framework for the regulation of AI systems in the context of the use of lethal force and other AI systems in the context of military and defence activities’ (…).

Furthermore, Article 2(3) of the AI Act provides:

‘This Regulation shall not apply to AI systems if, and to the extent that, they are placed on the market, put into service or used, with or without modification, exclusively for military, defence or national security purposes, irrespective of the type of entity carrying out those activities’.


Legal basis for the military use of AI in the European Union

The legal basis for the military use of AI in the European Union is therefore the aforementioned:

Article 4(2) TEU

‘The Union shall respect the equality of Member States before the Treaties as well as their national identities, inherent in their fundamental structures, political and constitutional, including their regional and local self-government. It respects the essential functions of the State, in particular those designed to ensure its territorial integrity, maintain public order and protect national security. In particular, national security is the exclusive responsibility of each Member State’.

Therefore, under European Union law (the AI Act), artificial intelligence systems may be used ‘for military, defence or national security purposes’ outside the Act’s scope. Likewise, ‘in the case of national security purposes (…) it is justified both by the fact that national security is the exclusive responsibility of the Member States in accordance with Article 4(2) TEU and by the fact that national security activities have a specific nature, involve specific operational needs and that specific national rules apply to them’.

Poland, too, is developing its first strategies for the use of artificial intelligence in defence. As set out in the Ministry of Defence’s ‘Ministry Strategy for Artificial Intelligence until 2039’ of August 2024:

‘By 2039, the use of modern technologies, including artificial intelligence, will be a prerequisite for the ability of the Polish Armed Forces to effectively implement deterrence and defence. Artificial intelligence systems will play a significant role in military operations, revolutionising the way they are managed and conducted in the future digitised combat environment. Their versatile applications will not only affect the operational tempo and the efficiency of the forces committed, but will also create new ethical and legal challenges’.

The use of AI in military operations in Poland will include:

  • Autonomous combat systems: Conducting operations without direct human involvement, carrying out reconnaissance, offensive and defensive missions with greater precision, minimising risks to personnel.
  • Intelligence analysis: Processing large amounts of information, identifying patterns, assessing enemy actions, improving planning and execution of operations.
  • Logistics optimisation: Resource management, reduction of repair times, route planning and anticipation of supply needs for better support of units.
  • Cyber defence systems: Rapid identification and neutralisation of cyber threats, protection of military infrastructure and data.
  • Simulations and training: Realistic training environments and personalised development paths to support soldier training and strategy testing.
  • Decision support: Scenario analysis and recommendations to increase the speed and accuracy of commanders’ decisions.
  • E-learning and talent management: Design of individual training paths, customisation of materials and talent identification.


Use of AI for military purposes in the United States of America

The US, in turn, is leading the way in developing AI systems for military purposes. Many agencies, including DARPA (Defense Advanced Research Projects Agency), are working on autonomous military systems.

The US does not, however, have uniform legislation governing the use of AI in the defence sector. Legislation such as the National Defense Authorization Act nevertheless includes provisions for the funding and development of autonomous military systems.

DoD (Department of Defense) principles: in recent years the Pentagon has adopted ethical principles for AI in the military, emphasising accountability, transparency and reliability, and requiring that systems be used in accordance with international humanitarian law. The US Department of Defense has also published a data, analytics and AI adoption strategy, which sets out how data, analytics and artificial intelligence (AI) are to be integrated into military and operational activities.

Summary

The Cybertruck explosion in Las Vegas is a tragic reminder of the potential dangers of using AI. While the technology has great potential to improve many areas of life, its misuse can lead to dramatic consequences. It will be crucial to ensure that AI is developed and used responsibly, with respect for safety and ethics.

Cybertruck explosion, AI in the media, Matthew Livelsberger, artificial intelligence and crime, ChatGPT under attack, Tesla Cybertruck Las Vegas, AI ethics, PTSD in veterans.
