AI hijacking: the case of Mike Johns and the legal risks of autonomous vehicles
13 January 2025 / AI
In an era of rapid development of AI technology, we are increasingly confronted with questions not only about its effectiveness, but also about legal issues and liability for damage caused by AI systems.
The high-profile case of Mike Johns, a Los Angeles-based technology entrepreneur who almost missed his flight after being ‘hijacked’ by a glitching Waymo autonomous vehicle, is a perfect example of the regulatory challenges surrounding artificial intelligence and autonomous technologies.
As Mike Johns himself put it, ‘I became my own case study’.
The incident and its consequences
Johns was ‘trapped’ in a Waymo car that circled a car park for several minutes, responding neither to the commands the user issued via the app nor to those of a company representative. Notably, Johns was unsure whether that representative was an artificial intelligence system or a human, and was never informed. Although the situation was eventually brought under control and, thanks to a flight delay, Johns still caught his plane, the case raises important questions:
- Who is liable for faults in autonomous vehicles?
- What rights does the passenger have in such situations?
- Does the user of an autonomous vehicle have an obligation to take certain actions in emergency situations?
- What obligations should the manufacturer fulfil in such a situation?
Responsibilities of the manufacturer and the operator
In the case of Waymo, the ‘looping’ problem was resolved with a software update, which may suggest that the fault was due to an algorithmic error. The question arises, however, whether such errors can be treated as classic product defects and, if so, who should be held responsible: the software manufacturer? The operator of the Waymo fleet?
Legal framework
- In the European Union, issues related to autonomous vehicles are regulated by, among others, the AI Act Regulation, the Defective Products Liability Directive, and national traffic and civil liability laws.
The AI Act, in Article 43, requires providers of artificial intelligence systems to carry out risk assessments and monitor the performance of their products. Furthermore, under Article 50 of the AI Act, users must be informed every time they interact with an artificial intelligence system.
In addition, Section 3 of the AI Act sets out, among other things, the obligations of providers and users of high-risk AI systems.
In the US, by contrast, comparable obligations apply only to the public sector and federal agencies, under Executive Order 14110 on the safe, secure, and trustworthy development of artificial intelligence, signed by President Joe Biden in 2023. In the private sector, the responsible use of AI, including informing users that they are interacting with an AI system, remains more a matter of good practice and ethical standards than of legal regulation. There is, however, growing public and legal pressure on larger technology companies, such as Google or OpenAI, to apply the same transparency standards as public entities. A list of entities that have signed an open letter committing to building ethical artificial intelligence in the US can be found at www.nist.gov/aisi; Waymo is not among them.
Passenger rights – consumer protection
The incident also demonstrates the need for standards governing communication with, and support of, passengers in emergency situations, not least because of the emotional distress such situations can cause.
The case of Mike Johns is not only a technological curiosity, but also material for a deeper reflection on legal regulation in the area of AI and autonomous technologies.
If you are interested in a legal analysis of autonomous technologies, or have questions about your product’s compliance with the AI Act, please contact us. Together we can prepare legal solutions tailored to a rapidly changing technological reality.