Obligations of online platforms under the Digital Services Act – selected issues

In this article, I focus on the obligations that online platform providers have under the Digital Services Act (DSA).

Many services qualify as online platforms: social media (e.g. Meta services such as Facebook or Instagram), marketplaces (e.g. Allegro, Amazon) and video-sharing services (e.g. YouTube). In recent years, the provision of services in the form of online platforms has become one of the leading sectors of the digital economy. The EU legislator therefore paid a great deal of attention to these entities when creating the DSA and imposed a number of obligations on them.

Who is an online platform provider?

At the outset, it is important to establish who an online platform provider is.

An online platform under the DSA is a hosting service (see our blog article [__] for more information on this) that stores information at the request of a recipient of the service and disseminates it to the public.

However, the DSA provides for an exception under which a service meeting this definition does not qualify as an online platform. The exception applies where the storage and dissemination of information is a minor or purely ancillary feature of another service, or a minor functionality of the principal service, which for objective and technical reasons cannot be used without that other service, and whose integration is not a means of circumventing the application of the DSA.

Merely establishing that a particular provider meets the definition of an online platform does not automatically mean that the online platform rules of the DSA will apply to it. If the provider is a micro or small enterprise as defined in European Commission Recommendation 2003/361/EC, then – with one reporting exception – it will not be required to implement the arrangements provided for online platform providers. The size categories (illustrated in the code sketch after the list) are as follows:

– micro enterprise – fewer than 10 employees and an annual turnover or annual balance sheet total not exceeding EUR 2 million;

– small enterprise – fewer than 50 employees and an annual turnover or annual balance sheet total not exceeding EUR 10 million.
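For illustration only, the size test can be expressed in a few lines of code. This is a minimal sketch with invented field names, assuming the annual figures are already known; the full test in Recommendation 2003/361/EC also takes into account partner and linked enterprises and data over consecutive financial years:

```typescript
// Minimal sketch of the size test from Recommendation 2003/361/EC.
// Field names are invented; the real assessment also considers ownership
// links and figures over consecutive financial years.
type EnterpriseSize = "micro" | "small" | "other";

interface EnterpriseData {
  staffCount: number;      // annual headcount
  turnoverEUR: number;     // annual turnover
  balanceSheetEUR: number; // annual balance sheet total
}

function classifyEnterprise(e: EnterpriseData): EnterpriseSize {
  // The financial criterion is met if EITHER turnover OR the balance
  // sheet total stays within the ceiling; the headcount limit is mandatory.
  if (e.staffCount < 10 && (e.turnoverEUR <= 2_000_000 || e.balanceSheetEUR <= 2_000_000)) {
    return "micro";
  }
  if (e.staffCount < 50 && (e.turnoverEUR <= 10_000_000 || e.balanceSheetEUR <= 10_000_000)) {
    return "small";
  }
  return "other"; // medium or large: the online platform obligations apply in full
}
```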

But here, too, we have an exception to the exception 😊 Even a micro or small enterprise will have to comply with these obligations if it also has the status of a very large online platform provider. With that designation, it is the platform's reach (more on this below), not the size of the provider's business, that matters.

Below is a general summary of the obligations provided only for online platform providers (Chapter III Section 3 DSA).

But beware: we still need to remember the following rules:

I. The online platform provider must also comply with the obligations provided for every intermediary service provider (Chapter III Section 1 DSA) and every hosting service provider (Chapter III Section 2 DSA). This is because an online platform provider is, by definition, also a hosting provider – one of the types of intermediary services.

II. Further obligations arise if the platform allows consumers to conclude distance contracts with traders (so-called B2C platforms) (Chapter III Section 4 DSA).

III. Even more obligations arise when the provider has the status of a very large online platform provider, i.e. its platform has an average of at least 45 million monthly active recipients of the service in the Union and it has been designated by the European Commission as a very large online platform.

IV. Last but not least, notwithstanding the obligations imposed by the DSA, an online platform provider must comply with the obligations imposed on it by other legislation (such as Regulation 2021/784 on addressing the dissemination of terrorist content online, or the GDPR – referred to in Polish sources as RODO).

The obligations of online platform providers that I have listed above are briefly summarised below.

Internal complaint handling system

This is an extension of the content moderation obligations imposed on every hosting service provider. The online platform provider must give recipients of the service, including individuals and entities that have submitted notices, access to an internal complaint handling system for at least six months from the moderation-related decision, allowing them to lodge complaints against that decision.

For example, suppose user A reported a piece of user B's content on platform XYZ. After some time, the platform provider issued a decision in which it sided with user A and removed user B's content from platform XYZ, informing user B of this. User B does not agree with the decision and therefore files a complaint against it, which will be handled by that provider's internal complaint handling system.

Of course, this is only one of the scenarios in which the internal complaint handling system applies.

More information about this procedure can be found in this article.

Out-of-court dispute resolution mechanism

A further obligation on online platform providers is to ensure that recipients of the service, including individuals and entities that have submitted notices, can use an out-of-court dispute settlement mechanism.

It is important to note that this is not another stage of the online platform provider's own handling of the case: the dispute is resolved by an external body. The online platform provider must inform the interested party of the right to use this measure.

The interested party may refer the dispute to a certified out-of-court dispute settlement body. As a general rule, the online platform provider may not refuse to engage with such a body.

Prioritisation of notices from trusted flaggers

The online platform provider must implement the necessary technical and organisational measures to ensure that notices submitted by trusted flaggers are treated with priority. Trusted flaggers are entities awarded that status by the Digital Services Coordinator because they:

  1. have particular expertise and competence for the purposes of detecting, identifying and reporting illegal content;
  2. are independent of online platform providers;
  3. carry out their reporting activities diligently, accurately and objectively.

Meeting this obligation may involve, for example, establishing a separate reporting channel for trusted flaggers, independent of notices submitted by others.

Mechanisms for responding to misuse of services

The DSA requires the online platform provider to implement mechanisms for responding to misuse of the services it provides. Very often, such misuse is the work of those engaged in 'trolling'.

Firstly, the provider suspends, for a reasonable period of time and after issuing a prior warning, the provision of services to recipients of the service who frequently provide manifestly illegal content.

Secondly, the provider suspends, for a reasonable period of time and after issuing a prior warning, the processing of notices submitted through the notice and action mechanisms, and of complaints submitted through the internal complaint handling system, by individuals or entities that frequently submit manifestly unfounded notices or complaints.

For example, user A, who has an account on social networking platform XYZ, reports every post by user B concerning the political situation in the country to the provider as potentially illegal content. User A simply disagrees with user B's views, as he supports a different political option. The XYZ platform provider, without needing any expert analysis, notices that none of user B's posts contain illegal content and that the posts themselves amount to constructive criticism. In this situation, the XYZ platform provider warns user A that he is abusing his right to submit notices and calls on him to stop this practice. Despite the warning, user A keeps reporting user B's statements. Consequently, the provider suspends the processing of user A's notices for one month.
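Purely as an illustration of the warn-then-suspend logic described above, here is a toy sketch; the threshold and field names are invented, and the DSA itself requires a case-by-case assessment rather than a fixed counter:

```typescript
// Toy sketch of the 'prior warning, then suspension for a reasonable period'
// mechanism. Threshold and names are invented for the example.
interface AbuseRecord {
  manifestlyUnfoundedNotices: number; // within a recent assessment window
  warned: boolean;
}

type Action = "none" | "warn" | "suspend";

function nextStep(record: AbuseRecord, threshold = 5): Action {
  if (record.manifestlyUnfoundedNotices < threshold) return "none";
  if (!record.warned) return "warn"; // a prior warning is mandatory
  return "suspend";                  // for a reasonable, pre-defined period
}

// User A from the example above, already warned, keeps filing notices:
console.log(nextStep({ manifestlyUnfoundedNotices: 12, warned: true })); // "suspend"
```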

Additional reporting obligations

An online platform provider has more reporting obligations than a standard intermediary service provider. Examples of these obligations are given below.

In addition to the information required by Article 15 of the DSA (see the article at this link for more information), the online platform provider must also make the following data publicly available (usually on the online platform's website):

  1. information related to disputes involving the online platform provider:

  • the number of disputes submitted to out-of-court dispute settlement bodies;
  • the outcomes of the resolution of those disputes;
  • the median time taken to complete the dispute settlement proceedings; and
  • the share of disputes in which the online platform provider implemented the decisions of that body;

  2. the number of suspensions imposed, broken down by suspensions made due to:

  • the provision of manifestly illegal content;
  • the submission of manifestly unfounded notices; and
  • the submission of manifestly unfounded complaints.

In addition, at least once every six months, providers must publish, in a publicly available section of their online interface, information on the average number of monthly active recipients of the service in the Union, for each online platform or online search engine.

At the same time, providers of online platforms or online search engines must, upon request and without undue delay, provide this information to the Digital Services Coordinator of the place of establishment and to the European Commission.

Prohibition on the use of dark patterns

An online platform provider must not design, organise or operate its online interfaces in a way that deceives or manipulates the recipients of the service or otherwise materially distorts or impairs their ability to make free and informed decisions. The DSA describes such practices as deceptive interface design, but business most commonly uses the phrase 'dark patterns'. Dark patterns are an extremely common phenomenon, including in e-commerce: through such interfaces, users often buy products they do not need at all, or buy more than necessary.

 

It should be noted that the GDPR and the Unfair Commercial Practices Directive (implemented in the Polish legal order as the Act on Counteracting Unfair Market Practices) take precedence over the DSA provision in this respect.

Transparency of online advertising

Online platform providers that present advertisements on their online interfaces must ensure that – for each specific advertisement presented to each individual recipient – the recipients of the service are able, in a clear, concise and unambiguous manner and in real time, to (a short illustrative sketch follows the list):

  • identify that the information is an advertisement;
  • identify the natural or legal person on whose behalf the advertisement is presented;
  • identify the natural or legal person who paid for the advertisement, if different from the person on whose behalf it is presented;
  • find meaningful information, directly and easily accessible from the advertisement, about the main parameters used to determine the recipients to whom the advertisement is presented and, where applicable, how to change those parameters.
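Purely for illustration, the four disclosure points could be modelled as a small metadata record attached to every ad impression. The field names below are assumptions for the sketch, not a format prescribed by the DSA:

```typescript
// Illustrative per-impression transparency record mirroring the four
// disclosure points above. Field names are invented, not a prescribed schema.
interface AdTransparencyInfo {
  isAdvertisement: true;       // the material must be clearly marked as an ad
  presentedOnBehalfOf: string; // person on whose behalf the ad is presented
  paidFor?: string;            // payer, only if different from presentedOnBehalfOf
  mainTargetingParameters: string[]; // e.g. ["approximate location", "language"]
  whereToChangeParameters?: string;  // if applicable
}

const example: AdTransparencyInfo = {
  isAdvertisement: true,
  presentedOnBehalfOf: "Example Brand sp. z o.o.",
  mainTargetingParameters: ["approximate location", "language"],
  whereToChangeParameters: "Settings > Ad preferences",
};
```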

In addition, providers must offer a function allowing recipients of the service to declare whether the content they provide is or contains commercial communications (do you sometimes see on platforms that particular material is 'sponsored'? 😊).

Another important obligation imposed on online platform providers is the prohibition on presenting advertisements based on profiling, within the meaning of the GDPR, that uses special categories of personal data (e.g. data concerning health or political opinions).

Use of transparent recommender systems

Online platform providers that use recommender systems must set out in plain and intelligible language, in their terms of service, the main parameters used in those systems, as well as any options for recipients of the service to modify or influence those parameters.

The main parameters explain why certain information is suggested to the recipient of the service. They include, at a minimum:

  1. the criteria that are most significant in determining the information suggested to the recipient of the service; and
  2. the relative importance of each parameter ('how much it weighs') in determining the recommendation.

 

In other words, this is how we learn that we often see pictures of funny cats on the platform because we watch a lot of cat videos 😊.

If several options are available for recommender systems that determine the relative order of the information presented to recipients of the service, providers must also offer a function allowing the recipient to select and change the preferred option at any time. This function must be directly and easily accessible in the specific section of the online platform's interface where the information is being prioritised (a toy sketch below illustrates the idea).
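To make this more concrete, here is a minimal sketch – with invented names and structure – of how the main parameters and selectable options of a recommender system might be exposed:

```typescript
// Minimal sketch of machine-readable recommender options. Names are invented.
interface RecommenderParameter {
  name: string;   // e.g. "watch history"
  reason: string; // why it matters, in plain language
  weight: number; // relative importance, 0..1
}

interface RecommenderOption {
  id: string;
  label: string; // shown directly in the section where content is ranked
  usesProfiling: boolean;
  parameters: RecommenderParameter[];
}

const options: RecommenderOption[] = [
  {
    id: "personalised",
    label: "Recommended for you",
    usesProfiling: true,
    parameters: [
      { name: "watch history", reason: "you watch a lot of cat videos", weight: 0.6 },
      { name: "topic popularity", reason: "currently popular in your region", weight: 0.4 },
    ],
  },
  {
    id: "chronological",
    label: "Newest first",
    usesProfiling: false, // cf. the non-profiling option required of very large platforms
    parameters: [{ name: "publication time", reason: "newer items rank higher", weight: 1 }],
  },
];

// The recipient's choice must be directly accessible and changeable at any time:
const selectOption = (id: string) => options.find((o) => o.id === id);
```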

Protection of minors on the internet

A final area of obligations for online platform providers relates to the use of their services by minors. These obligations are as follows:

  1. the introduction of appropriate and proportionate measures to ensure a high level of privacy, safety and security of minors within their services (the solutions developed under the GDPR are very helpful here);
  2. a prohibition on presenting advertisements based on profiling using the personal data of a recipient of the service whom the provider knows with reasonable certainty to be a minor.

Compliance with these obligations does not require online platform providers to process additional personal data in order to assess whether a recipient of the service is a minor.

 

Machinery Regulation – new standards for production and distribution

Significant changes for machine builders and distributors coming soon!

The law of new technologies does not end with the IT or game dev industry. New technologies are also strongly present in automation and robotics, machine manufacturing, and electrical and electronic equipment – including the household appliances we use every day in our homes, and not only smart and IoT devices.

The production of machines, tools and other equipment, their safety requirements and the rules for placing them on the market and distributing them are regulated by numerous pieces of EU legislation (mainly directives).

Those involved in the production and distribution of machinery will soon be facing significant changes. These will be introduced by the already enacted Regulation (EU) 2023/1230 of the European Parliament and of the Council of 14 June 2023 on machinery and repealing Directive 2006/42/EC of the European Parliament and of the Council and Council Directive 73/361/EEC (the “Machinery Regulation”).

It will replace the Machinery Directive 2006/42/EC currently in force. The Machinery Regulation does not apply only to industrial and consumer machinery: it also covers small vehicles for personal use and light electric vehicles, such as electric scooters and bicycles.

The Machinery Regulation introduces changes such as:

  • provision of product manuals in digital format – hard-copy information will still have to be made available, but only at the customer's request;
  • clarification of when a substantial modification of a machine occurs, triggering a reassessment of the product's conformity with safety requirements and the issuing of a new CE marking;
  • mandatory third-party conformity assessment for six categories of 'high-risk' machinery;
  • introduction of general requirements on cyber security and artificial intelligence.

The provisions of the Machinery Regulation will apply from 14 January 2027. However, many businesses are – rightly – already preparing to implement them.

If you have any questions related to the current regulations on the topic of machine manufacturing and distribution or the changes that the Machine Ordinance will introduce, please feel free to contact Ewa Knapińska of our law firm.

#NewTechnologies #Robotics #Manufacturing #Distribution #MachineIntelligence #LegalRegulations

The relationship between the GDPR and the Digital Services Act

1. DSA and GDPR – relationship status: "it's complicated"

Recent months have seen many e-commerce businesses implementing the EU regulation, the Digital Services Act (DSA), in their organisations. 

It is important to remember that the DSA does not operate in a vacuum. E-commerce entrepreneurs need to be aware of other regulations they must comply with in order to be fully compliant, and one of them is the GDPR. One can even venture to say that the DSA will not be properly implemented if the GDPR has not been implemented in the organisation beforehand.

The DSA itself indicates how it relates to the GDPR. In general, the DSA is without prejudice to the EU data protection rules (primarily the GDPR). A lawyer would say that the GDPR is lex specialis to the DSA: its provisions are the more specific ones, and the Digital Services Act is merely complementary to the GDPR.

Below are a few areas where you should be mindful of the GDPR when implementing the DSA in your organisation.

2. GDPR and dark patterns

One example is the prohibition of dark patterns ('deceptive interfaces'). Under Article 25(1) of the DSA, online platform providers may not design, organise or operate their online interfaces in a way that deceives or manipulates the recipients of the service or otherwise materially distorts or impairs their ability to make free and informed decisions. Importantly, this provision applies only where the GDPR and the Unfair Commercial Practices Directive do not. What does this mean? Even if an online platform provider uses dark patterns, it must first be established whether they relate to the collection or processing of personal data, or whether they are targeted at consumers. Only if neither is the case does the DSA provision apply.

Thus, the GDPR remains more relevant than the DSA when combating dark patterns. In this context, it is worth paying attention to, among others, the European Data Protection Board's Guidelines 3/2022 (Guidelines 3/2022 on deceptive design patterns in social media platform interfaces: how to recognise and avoid them, version 2.0, adopted on 14 February 2023).

3. GDPR and profiling

One area that the DSA has paid particular attention to is the issue of the presentation of advertising based on profiling using personal data.

What is profiling? Under the GDPR, it is any form of automated processing of personal data that involves using personal data to evaluate certain personal aspects of an individual, in particular to analyse or predict aspects of that individual's performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements.

The DSA Regulation primarily refers to profiling in the case of online platform providers.

Firstly, online platform providers are not allowed to present advertisements based on profiling using special categories of personal data. What are these 'special categories of data', also referred to as 'sensitive data'? The GDPR indicates that they are personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs or trade union membership, as well as genetic data, biometric data processed for the purpose of uniquely identifying a natural person, and data concerning a person's health, sex life or sexual orientation.

Secondly, online platform providers are not allowed to present advertisements based on profiling using the personal (not only sensitive!) data of a recipient of the service whom they know with reasonable certainty to be a minor.

Thirdly, providers of very large online platforms and very large online search engines that use recommender systems (more at this link) must provide at least one option for each of their recommender systems that is not based on profiling.

4. GDPR and the protection of minors

Another area of the DSA whose implementation requires knowledge of the GDPR is the protection of minors (under the DSA, persons under 18). I mentioned above the prohibition on presenting profiling-based advertising to minors using their personal data. Another obligation is described below.

Providers of online platforms accessible to minors must put in place appropriate and proportionate measures to ensure a high level of privacy, safety and security of minors in the services they provide. This approach is similar to the privacy by design and privacy by default model introduced in the GDPR. In other words, when putting in place appropriate measures to ensure the privacy, safety and protection of minors, it is necessary to draw on the acquis of the GDPR, particularly Articles 25 and 34 of that Regulation. It is also worth using the guidance developed by the EDPB (e.g. Guidelines 5/2020 on consent under Regulation 2016/679) and by the supervisory authorities of individual EU Member States (e.g. 'The Fundamentals for a Child-Oriented Approach to Data Processing' developed by the Irish Data Protection Commission).

5. Summary

These are just a few examples showing how important the GDPR is from a DSA perspective. The implementation of the Digital Services Act in an organisation will not be complete without a prior, proper implementation of the GDPR. It is therefore already worthwhile for every e-commerce business to check that its data protection measures are up to date.

Get to know our team of experts

Advisors #LBKPteam

The last few years have been a time of dynamic development not only for Leśniewski Borkiewicz Kostka & Partners, but above all for our advisors.

We would like to thank the #LBKPteam – together we have managed to build a Team of exceptional competence and experience.

Here they are – our indispensable team:

Senior Manager:

r. pr. Monika Skaba-Szklarska, r. pr. Paweł Kempa-Dymiński

Manager:

r. pr. Ewa Knapińska, r. pr. Jacek Cieśliński, r. pr. Paulina Jeziorska, r. pr. Anna Żmidzińska, r. pr. Marta Żukowska

Senior Associate:

r. pr. Marta Czeladzka, r. pr. Dr. Wojciech Lamik, r. pr. Natalia Wojciechowska-Chałupińska, Adw. Marta Maliszewska

Associate:

Marek Czwojdziński, Albert Krynicki

Junior Associate:

Agata Jałowiecka

Intern:

Maciej Małek, Gracjan Ciupa

Basic obligations of intermediary service providers under the DSA

The Digital Services Act (hereinafter: "DSA") imposes due diligence obligations on intermediary service providers in order to ensure a transparent and safe online environment.

Because the concept of an intermediary service provider covers many different types of entities (often of different structure and size), and because the concept of an intermediary service is itself broad and covers a catalogue of services with different characteristics, the legislator recognised that due diligence obligations must be tailored to the type, size and nature of the intermediary service in question. The DSA therefore distinguishes between basic obligations, which apply to all intermediary service providers, and additional obligations, which apply to particular types of providers given the specific nature and scale of the services they provide. The basic obligations must be performed by every intermediary service provider, whatever its type.

The basic due diligence obligations of intermediary service providers include:

  • Designation of points of contact.

 

Intermediary service providers are required to designate a single electronic point of contact and to publish and keep up to date the relevant information about it. This obligation is intended to ensure smooth communication between the provider and the recipients of the service, as well as with Member State authorities, the European Commission and the European Board for Digital Services.

Unlike a legal representative, a point of contact does not have to have a physical location – it is a virtual one. A point of contact may serve duties imposed under various other laws, not just the DSA. Information on points of contact must be easy to find and kept up to date on the provider's website.

The point of contact for recipients of the service should, first and foremost, allow them to communicate with the intermediary service provider directly and quickly, electronically and in a user-friendly manner, including by allowing recipients to choose their means of communication, which must not rely solely on automated tools. In practice, this means that the recipient should have a choice of at least two communication channels, at least one of which must not rely solely on automated tools – see the sketch below.
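As a sketch only – the data shape is invented – the "choice of channels, not all automated" rule could be checked like this:

```typescript
// Minimal sketch: validating a point-of-contact configuration. The recipient
// must be able to choose between channels, and communication must not rely
// solely on automated tools. The config shape is invented for the example.
interface ContactChannel {
  kind: string; // e.g. "email", "chatbot", "phone", "web form"
  fullyAutomated: boolean;
}

function isValidPointOfContact(channels: ContactChannel[]): boolean {
  const offersChoice = channels.length >= 2;
  const humanReachable = channels.some((c) => !c.fullyAutomated);
  return offersChoice && humanReachable;
}

// A chatbot alone fails; a chatbot plus a monitored mailbox passes:
console.log(isValidPointOfContact([{ kind: "chatbot", fullyAutomated: true }])); // false
console.log(isValidPointOfContact([
  { kind: "chatbot", fullyAutomated: true },
  { kind: "email", fullyAutomated: false },
])); // true
```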

 

  • Appointment of legal representative.

 

Intermediary service providers established in a third country (i.e. outside the EU) that offer services in the European Union must appoint a legal representative in the EU with sufficient authority, provide information about their legal representative to the relevant authorities, and make that information public.

Intermediary service providers must designate in writing a legal or natural person to act as their legal representative in one of the Member States where the provider offers its services. The legal representative may represent one or more intermediary service providers.

 

A legal representative is not merely an agent for service of process in matters related to DSA decisions issued by the authorities. The representative must also be able to cooperate with the authorities and respond to the summonses received, and should hold authorisations enabling actions that ensure compliance with the decisions of the competent authorities.

To fulfil this obligation, intermediary service providers should ensure that the appointed legal representative has the powers and resources necessary to cooperate with the relevant authorities. Adequate resources should be understood as appropriate competence and experience, as well as the relevant organisational, legal or technical capabilities to perform such a role.

 

  • Inclusion in the terms of service of information on restrictions on the use of the services.

 

Intermediary service providers should include in their terms of service (i.e. the regulations that form part of user contracts, for example) information on any restrictions they impose on the use of their services in relation to information provided by recipients of the service. This must include information on any policies, procedures, measures and tools used for content moderation, as well as on the rules of procedure of the internal complaint handling system. The DSA additionally requires that this information be provided in clear, plain language and be publicly available in a machine-readable format.

Providers of intermediary services directed primarily at minors (for example, because of the type of service or its marketing) should make a special effort to explain the terms of use in a manner that minors can easily understand.

Special obligations related to the inclusion of information on restrictions in the terms of service have been imposed on intermediary service providers qualifying as very large online platforms or very large online search engines. The rationale given for these additional obligations was primarily the need for such large entities to ensure particular transparency regarding the terms of use of their services. Providers of very large online platforms and very large online search engines, in addition to the obligation applicable to all intermediary service providers, are also required, among other things, to provide recipients with a concise summary of the terms of service and to make the terms available in the official languages of all Member States in which they offer their services.

 

  • Reporting obligations.

The DSA also imposes an annual reporting obligation on intermediary service providers covering any content moderation they engaged in during the reporting period.

The report should include, among other things, the following information (a simple data sketch follows the list):

  • the number of orders received from Member State authorities;
  • the number of notices submitted under Article 16 of the DSA;
  • meaningful and comprehensible information about content moderation engaged in at the provider's own initiative;
  • the number of complaints received through the internal complaint handling system, in accordance with the provider's terms of service;
  • any use of automated means for content moderation.
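By way of illustration only – pending the Commission's binding template – the report's data points could be collected in a structure like the following (all field names are assumptions for the sketch):

```typescript
// Illustrative sketch of the annual transparency report data points listed
// above. Field names are invented; the binding form will come from the
// European Commission's implementing template.
interface TransparencyReport {
  reportingPeriod: { from: string; to: string }; // ISO dates, e.g. "2024-01-01"
  ordersFromMemberStateAuthorities: number;
  noticesUnderArticle16: number;
  ownInitiativeModeration: {
    summary: string; // meaningful, comprehensible description
    itemsActioned: number;
  };
  complaintsViaInternalSystem: number;
  automatedModeration: {
    used: boolean;
    safeguardsApplied?: string;
  };
}
```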

Under the DSA, the European Commission may adopt implementing acts establishing templates that set out the form, content and other details of the reports, including harmonised reporting periods. The Commission is currently working on such a template.

***

Want to learn more about the basic obligations of intermediate service providers under the Digital Services Act? For more, check out the publication by our law firm’s advisors: [Link to publication].

Artificial intelligence – what it is (from a legal point of view) and how the world is dealing with it

“In the rapidly evolving field of technology, artificial intelligence (AI) is a disruptive force that has not only transformed industries, but has also raised many questions and legal challenges.”

ChatGPT, when asked to present artificial intelligence in the context of legal challenges.

Is there a definition of artificial intelligence?

Currently, there is no legal definition of artificial intelligence either in Poland or in the European Union. The situation is similar in other major jurisdictions around the world. Probably the closest existing legal concept is the definition of 'automated decision-making' in the GDPR, which may cover some AI systems.

The GDPR, in Article 22, refers to automated decision-making as:

“… a decision which is based solely on automated processing, including profiling, and which produces legal effects in relation to (…) a person or significantly affects that person in a similar manner.”.

However, this definition in its current form is not specific enough to sufficiently ‘cover’ the concept of artificial intelligence systems as we know them today.

From a legal point of view, artificial intelligence is therefore ‘just’ a technology or a set of technologies and is regulated in the same way as any other technology – through a number of different rules applicable to specific contexts or applications. It can be used for good purposes or to cause harm, its use can be legal or illegal – it all depends on the situation and the context.

Why is the regulation of artificial intelligence so important?

The pace of artificial intelligence development is accelerating. And because artificial intelligence is a ‘disruptive force’, different countries are struggling to describe the technology for legislative purposes. In the past, legislators rarely considered creating new legislation at an international level specifically for a single technology. However, recent years have proven that more and more technological breakthroughs require a rapid legal response – you don’t have to look far, just think of cloud computing, blockchain and now artificial intelligence.

For example, different parts or components of this technology may be owned by different people or companies (for example, copyright of a certain programme code or ownership of databases), but the idea of artificial intelligence is public. And as more and more AI tools and knowledge are made available to everyone, in theory anyone can use AI tools or create new tools. This may involve potential abuse, which is why regulation of the technology is so important.

Why else? Everyone agrees that artificial intelligence has the potential to change the economic and social landscape around the world. Of course, this is already happening, and the process is accelerating every day – which is as exciting as it is frightening. The speed at which new technologies are developing makes it difficult to predict the results. It is therefore crucial to have some legal principles in place to ensure that artificial intelligence is used in a way that benefits everyone. And since it is a ‘global phenomenon’, it would be best if there was at least a universal agreement on what artificial intelligence is from a legal point of view.

However, this is unlikely to happen globally. Some countries are trying to define artificial intelligence by its purpose or functions, others by the technologies used, and some are combining different approaches. However, many key jurisdictions are trying to agree on a definition of AI and find common principles. This is important to avoid practical problems, especially for providers of global AI solutions, as they will soon face numerous compliance issues. Only at least basic interoperability between jurisdictions will allow AI to reach its full potential.

EU approach

Various countries in the European Union have tried to 'approach' the AI issue in many ways. However, if we are looking for a quick answer to the question "what is the most likely definition of AI in the EU?", most will point us to the Artificial Intelligence Act (AI Act) – or rather its draft. Member States are deferring concrete decisions until the final version of the AI Act, which will comprehensively regulate the technology at European level in all Member States, is adopted.

The current publicly available version of the AI Act contains the following definition of an artificial intelligence system:

“An AI system is a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

Source: https://www.linkedin.com/feed/update/urn:li:activity:7155091883872964608/


This is in contrast to the earlier text of the AI Act, which defined an artificial intelligence system as "software developed using one or more of the techniques and approaches listed in Annex I that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions that affect the environments with which it interacts."

The EU has thus moved closer in its definition of an artificial intelligence system to the OECD standard.

And what is this standard? In November 2023, the OECD (Organisation for Economic Co-operation and Development) updated the definition of AI contained in the OECD AI Principles – the first intergovernmental standard on AI, adopted in 2019. Numerous authorities around the world have committed to applying this definition directly or with minor modifications; the European Union is part of this group.

Source: https://oralytics.com/2022/03/14/oced-framework-for-classifying-of-ai-systems/

OECD definition of an AI System:

"An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment."


Current OECD artificial intelligence system model

In addition to this definition, the OECD recommendations set out five additional value-based principles for the responsible management of trustworthy artificial intelligence.

These include:

inclusive growth, sustainable development and well-being;
human-centred values and fairness;
transparency and 'explainability';
robustness, security and safety;
accountability.

Countries that have committed to the OECD AI Principles should (at least in theory) reflect the aspects listed above in their approach. In this context, the EU is on the right track.

How is artificial intelligence interpreted at a global level?

United States

Obviously, one of the most active jurisdictions when it comes to artificial intelligence is the United States. According to the National Conference of State Legislatures website, at least 25 states, Puerto Rico and the District of Columbia introduced AI-related legislation in 2023, with 15 states and Puerto Rico adopting resolutions in this area. Individual states have taken more than 120 initiatives concerning general AI issues (legislation on specific AI technologies, such as facial recognition or autonomous cars, is tracked separately).

The approach in the United States thus varies. As an interesting aside, in May 2023 a bill was introduced in California calling on the US government to impose an immediate moratorium on the training of AI systems more powerful than GPT-4 for at least six months, to allow time for the development of an AI governance system. Its status is currently 'pending', but it seems unlikely to be adopted.

As for the definition of artificial intelligence, there is no uniform legal definition in the US. However, one of the key pieces of AI-related legislation – the National AI Initiative Act of 2020 – established the National Artificial Intelligence Initiative Office and defined artificial intelligence as "a machine-based system that can, for a given set of human-defined goals, make predictions, recommendations or decisions affecting real or virtual environments". It goes on to explain that "artificial intelligence systems use machine- and human-based inputs to – (A) perceive real and virtual environments; (B) abstract such perceptions into models through analysis in an automated fashion; and (C) use model inference to formulate options for information or action". However, the document focuses mainly on organising the AI Office to support the development of this technology in the United States, rather than on regulating artificial intelligence itself.

The US has committed to the OECD's principles on artificial intelligence. However, there is also other guidance on what to expect from federal AI regulation. "The Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People" is the place to start. It was published by the White House Office of Science and Technology Policy in October 2022 and contains a list of five principles intended to "help provide guidance whenever automated systems can meaningfully impact the public's rights, opportunities, or access to critical needs". These principles are:

1. Safe and Effective Systems

2. Algorithmic Discrimination Protections

3. Data Privacy

4. Notice and Explanation

5. Human Alternatives, Consideration, and Fallback

The definition of artificial intelligence systems covered by the Blueprint includes two elements: (i) the system has the potential to meaningfully affect the rights, opportunities or access of individuals or communities, and (ii) it is an "automated system". An automated system is further defined as "any system, software or process that uses computing as all or part of a system to determine outcomes, make or support decisions, inform policy implementation, collect data or observations, or otherwise interact with individuals and/or communities. Automated systems include, but are not limited to, systems derived from machine learning, statistics or other data processing techniques or artificial intelligence and exclude passive computing infrastructure." To clarify, "passive computing infrastructure is any intermediary technology that does not influence or determine the outcome of a decision, make or assist in making a decision, inform the implementation of a policy or collect data or observations", including, for example, web hosting.

In terms of other key jurisdictions, none of the following have any widely recognised legal definition, but:

China

China has adopted standards at the national level, along with local adaptations, based on certain definitions relating to the functionality of artificial intelligence systems;

Hong Kong

Hong Kong has created guidelines for the ethical development and use of artificial intelligence, which define artificial intelligence as "a family of technologies that involve the use of computer programmes and machines to mimic the problem-solving and decision-making abilities of humans".

Japan

Japan has set out an 'AI Strategy 2022', issued by the Cabinet Office's Integrated Innovation Strategy Promotion Council, which suggests that 'AI' refers to a system capable of performing functions deemed intelligent.

Singapore

Singapore, in turn, has attempted to define 'AI' as a set of technologies designed to simulate human traits such as knowledge, reasoning, problem solving, perception, learning and planning and, depending on the AI model, to produce an output or decision (such as a prediction, recommendation and/or classification). This definition appears in the Model AI Governance Framework issued by the Infocomm Media Development Authority and the Personal Data Protection Commission.

***

Attempts to create a legal definition of artificial intelligence are ongoing around the world. One of the most recent proposals is the OECD's. The enactment of the AI Act in its final version will certainly accelerate the process of unifying the approach to the definition of AI worldwide. An open question remains whether some countries will nevertheless seek to 'distinguish' themselves with a strongly liberal approach to AI in order to attract the creators of this technology (without much regard for the legal and ethical aspects).

Authors: Mateusz Borkiewicz, Agata Jałowiecka, Grzegorz Leśniewski

The new Internet Constitution is now in force

February 17, 2024 is an important date for Internet users and for the many businesses for which the Internet is the main channel of doing business. On that day, the provisions of the EU regulation adopted on October 19, 2022 – the Digital Services Act, called the new Constitution of the Internet – began to apply to all entities covered by it (earlier, by August 25, 2023, very large online platforms and search engines had to implement the DSA's requirements in their organizations).

Why was the DSA created?

When the European Union first attempted to regulate the Internet in 2000, the global and Polish digital space looked vastly different than it does today. E-commerce was in its infancy; the reign of social media giants such as Facebook and TikTok was yet to come. At that stage it was difficult to fully imagine what enormous opportunities for influencing almost every aspect of daily life the development of the Internet would bring. In those realities, the regulation introduced by the European Union – the E-Commerce Directive – was limited in scope, which is well illustrated by the fact that the directive had just over 20 articles. Its provisions were introduced into the Polish legal order in 2002 by the Act on Provision of Electronic Services, familiar to all those who deal with e-commerce-related topics.

After two decades of intense development and change, EU policymakers recognized that the Internet was in dire need of new rules covering this important area of the digital space – intermediary services – comprehensively: on the one hand ensuring that users' rights are protected at an appropriate level, and on the other giving Member States the tools to combat significant threats, such as disinformation. It was precisely the need to adapt the rules to the new – as exciting as it is challenging – digital reality that was one of the goals behind the enactment of the Digital Services Act of October 19, 2022, dubbed the new Internet Constitution.

Three primary objectives of the DSA

An analysis of the Recitals to the DSA makes it possible to distinguish three fundamental goals that guided the drafters of the regulation:

UPDATE
UNIFICATION
CYBER-SECURITY

The update covers the issues described in the introduction – that is, adapting regulations passed more than 20 years ago to new technologies and business models, as well as to the challenges and threats arising from the Internet's important role and its impact on the world.

The unification of regulations – the second major goal of the DSA – is intended to ensure, through close harmonization (mainly through the EU's use of a directly applicable regulation rather than a directive), the removal of obstacles for businesses that have so far resulted from regulatory differences between Member States.

And finally, cyber-security: the EU’s goal is to create a secure, predictable and transparent digital space that protects the fundamental rights set forth in the Charter of Fundamental Rights, on the one hand, and is free of illegal content and disinformation, on the other.

🤔 Who is affected?

The DSA defines duties and responsibilities for intermediary service providers, such as online platforms and search engines. It might seem that the act is addressed to a rather narrow audience, but in practice – due, for example, to the very broad definition of a hosting provider – many businesses running a "traditional" online store may also be required to implement solutions under the DSA.

***

There is no doubt that the European Union has undertaken an ambitious task, implementing a revolution that is to be for intermediary services what the GDPR was for personal data. At the same time, an analysis of the DSA suggests that its authors wanted to avoid the most significant mistakes of the GDPR (among which is the imposition of essentially analogous obligations on all businesses, regardless of the size and scope of their activities). Whether this mission succeeds, we will find out in time.

💬 Want to learn more about the goals of the Digital Services Act and the impact of the new regulations on your business? For more, check out the publications of our law firm’s advisors: Link to publication

Implementation of the Digital Services Act in e-commerce

Our advisors – together with Mirosław Sanek, long-standing Deputy President of the Personal Data Protection Office (and former Deputy GIODO), and the publishing house C.H.Beck – not only discuss the new "constitution of the Internet" in detail, but also provide numerous examples (including on the implementation of the DSA, the mechanisms of its operation in e-commerce and violations of the regulations in question), tables (including comparisons of the old and the new rules) and diagrams showing how the various DSA provisions function, including in comparison with the GDPR, to make the issues easier to understand.

We address the publication not only to legal practitioners, but especially to online entrepreneurs, e-commerce employees and anyone who wants to expand their knowledge of Internet security.
We would like to thank the C.H.Beck Publishing House for its commitment and support, as well as all the authors: Mateusz Borkiewicz, attorney-at-law Jacek Cieśliński, Marta Czeladzka, attorney-at-law Marek Czwojdziński, attorney-at-law Paulina Jeziorska, attorney-at-law Ewa Knapińska, adw. Wojciech Kostka, attorney-at-law Grzegorz Leśniewski, Mirosław Sanek, attorney-at-law Monika Skaba-Szklarska, attorney-at-law Marta Żukowska and, above all, attorney-at-law Dr. Wojciech Lamik, whose determination and substantive supervision made it possible to bring this publication to the finish line.

Link to pre-order:

https://www.ksiegarnia.beck.pl/22073-wdrozenie-aktu-o-uslugach-cyfrowych-w-e-commerce-mateusz-borkiewicz?fbclid=IwAR1dMiTKmyhuFW0C8h90Ys_mYkZof9ekmnOMi9J2vfbA8qdlzoFQEpD0LsI

CREDITS IN VIDEO GAMES

Continuing our series of posts on the legal aspects of the game dev industry, separate space should be devoted to end credits in video games, i.e. the issue of so-called Credits. The key questions here are who should be listed in the Credits, to what extent and in what way. Because the process of creating a video game involves many people, institutions and other entities, the answers are governed, among others, by the rules of copyright law – more on which later in the post.

WHAT ARE CREDITS AND WHAT SHOULD BE KEPT IN MIND?

End credits in all kinds of works (not only video games but also, for example, films) are the most common and optimal way of fulfilling the copyright-law obligation to attribute authorship of a game as a work. At the same time, Credits can be (and most often are) also used to acknowledge non-creative contributions to the game. As a rule, such acknowledgements are not regulated by law, unless the agreement between the video game producer and the person or entity concerned provides otherwise.

Importantly, when publishing Credits, it is essential to keep in mind, first and foremost, the rules on moral rights (personal copyrights), the GDPR, the protection of personal interests, unequal treatment in employment, and contractual obligations to subcontractors.

CREDITS AND THE GDPR

In the context of Credits, special attention should definitely be paid to the GDPR. First of all, it is necessary to identify the legal basis for publishing the data of particular contributors. Interestingly, the basis for processing will differ between creators and non-creators:

  • vis-à-vis creators: performance of the contract or compliance with the law (the authorisation/obligation to attribute authorship), i.e. Article 6(1)(b) or (c) of the GDPR, respectively;
  • vis-à-vis non-creators: legitimate interest, i.e. Article 6(1)(f) of the GDPR (a broad range of arguments is available here, in particular: protecting the video game producer's image and its relations with employees and co-workers, and following market practices and relevant standards), taking into account the positive effect of Credits on the interests of the employee or co-worker and his or her legitimate expectations – all of which supports a positive outcome of the so-called balancing test, which makes it possible to rely on this basis for processing.

Of course, beyond the bases for processing mentioned above, one can also point to consent to such processing, especially with respect to non-creators – an approach that can be viewed positively in terms of the transparency of the processing and handing actual decision-making power to the data subjects.

Importantly, however, relying on consent as a basis for processing can prove very problematic in practice, particularly in large-scale game production (when many people are involved). The obvious issue is obtaining consent from each of the individuals whose data is to be published in the Credits: there is no way to guarantee responses from everyone, consent cannot be 'forced', and it can be withdrawn at any time – which may mean frequently updating the Credits. Where processing is based on legitimate interest, the data subject may object, but the controller has legal instruments to override the objection (assessed case by case).

It therefore seems that a far better and more practical approach is to base the publication of personal data in Credits on legitimate interest. Importantly, if the producer chooses consent as the basis for processing and it proves impossible to collect consent from some individuals, the eventual publication of the game may constitute an incident within the meaning of the GDPR.

OTHER LEGAL ISSUES

As for creators' moral rights, they include, among other things, the right to decide how authorship is designated. In this context, special attention should be paid to the provisions of the contract between the producer and the creator. It is usually the contract that indicates whether the right to decide on the designation of authorship rests with the producer or with the creator – and this, in turn, determines whether permission for the manner of crediting is required.

Another important aspect of Credits is the protection of the personal interests of those whose data is – or is not – included in them. If such inclusion or omission is unlawful, or creates the impression that authorship has been wrongly attributed to another person, the personal interests of the creator or non-creator may be infringed.

The risk of unequal treatment in employment is also worth mentioning. Importantly, any unilateral decision by a video game producer not to publish the data of a particular employee (in an employment relationship) should have an objective justification; otherwise, an allegation of unequal treatment cannot be ruled out. Such an allegation may also come from former employees, although the risk is then lower (depending on when the employment relationship ended).

Good practice with regard to Credits, which helps avoid ambiguity of all kinds, is certainly the introduction of an internal Credits Policy defining, among other things, the producer's approach to Credits, together with the justification for the legal bases adopted (including, where applicable, for collecting consents). Such a policy may also specify the periods of employment or cooperation, or other criteria, that justify including particular persons in the Credits, as well as the rules of procedure in the event of objections to publication.

SUMMARY

The issue of Credits is another video game topic that should be viewed from multiple perspectives. Whether publishing the data of creators or of non-creative contributors, producers must pay attention to many areas of the law so as not to commit violations and expose themselves to legal or reputational consequences. Credits are yet another illustration of how complex the process of creating a video game is.

Important developments in the right of access to personal data

The past year has seen a lot of developments in the context of the right of access to personal data, especially in light of the rulings of the Court of Justice of the European Union on this issue.

 

One of the hot topics in this area has been informing a data subject who exercises the right of access about the recipients of his or her personal data. In today's reality, it is difficult to imagine processing personal data without sharing it with third parties, such as hosting providers, providers of other IT services, or courier companies.

 

Pursuant to Article 15(1) of the General Data Protection Regulation (GDPR), the data controller must give the data subject access to his or her processed personal data upon request. As part of the right of access, the data subject is entitled to obtain information about, among other things, the recipients or categories of recipients of his or her personal data.

 

Data controllers sometimes face challenges regarding the proper implementation of this obligation. Is it necessary to provide information about the identity of specific data recipients, such as the names of specific hosting providers, or is it sufficient to indicate the categories of data recipients?

 

Ewa Knapińska of our Team examined this issue in the latest issue of "ABI Expert". In her article "Right of access vs. data recipients", Ewa discusses, among other things, the CJEU's January 12, 2023 ruling in Case C-154/21 on this question, as well as the updated version of the European Data Protection Board's Guidelines 1/2022 on the right of access.

 

#GDPR #DataProtection #RightOfAccess #CJEU #ABIExpert

 

Link to the full article:

October – December 2023
