ARTIFICIAL INTELLIGENCE ACT

WHAT IT MEANS FOR YOUR BUSINESS

Now that the amended proposal for the regulation of artificial intelligence (“AI”) systems (the “Proposed AI Regulation”) has been adopted by the European Parliament, we are entering the so-called “trilogue procedure”, in which the European Parliament, the European Commission and the EU Council (representing the Member States) will conduct interinstitutional negotiations with a view to finalising the Proposed AI Regulation and passing it as an EU-wide regulation by the end of 2023.

For businesses and organisations, now is the right time to start preparing by assessing whether they will fall within the scope of the Proposed AI Regulation and, where that is the case, taking steps to address their obligations. The Proposed AI Regulation is anticipated to apply 24 months after it is voted into law (currently expected to happen by the end of 2023).

This is particularly important where AI systems will be used for key business operations and/or where significant costs have been or will be incurred to introduce AI systems within the organisation. Businesses should consider now the role played by AI in their organisation so as to limit or avoid the potential costs of fulfilling the Proposed AI Regulation’s requirements in the future.

We set out below some key decision items that should be considered:

1- Is our system an AI system, a general purpose AI system and/or a foundation model as defined in the Proposed AI Regulation?

An “AI system” is defined as “a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations or decisions that influence physical or virtual environments.”

A “general purpose AI system” is defined as “an AI system that can be used in and adapted to a wide range of applications for which it was not intentionally and specifically designed”. This would capture, for example, an AI system for language processing or image and speech recognition which can be used for a variety of applied models (such as chatbots, search engines, translation assistants, ad generators, etc.).

A “foundation model” is defined as “an AI model that is trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks”. This would capture “generative AI” models such as Dall-E, ChatGPT, Bard or Midjourney.

Answering the above question requires analysing each process and component of a product that includes an automated machine-based system, as well as the level of automation and human involvement in that process or product.

2- Is the Proposed AI Regulation applicable to our organisation? Is our organisation one of the following entities?

The Proposed AI Regulation will apply to the following entities:

a. Provider, being any natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into service in the EU under its own name or trademark, whether for payment or free of charge.

b. Authorised representative, being any natural or legal person established in the EU who has received a written mandate from a provider of an AI system to perform and carry out on its behalf the obligations and procedures established by the Proposed AI Regulation.

c. Deployer, being any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal, non-professional activity.

d. Distributor, being any natural or legal person in the supply chain, other than the provider or the importer, that makes AI systems available in the EU in the course of a commercial activity, whether for payment or free of charge.

e. Importer, being any natural or legal person established in the EU placing AI systems on the market or putting into service an AI system that bears the name or trademark of a natural or legal person established outside the EU.

f. Affected person, being a natural person or group of persons who are subject to or otherwise affected by an AI system (whose health, safety or fundamental rights were adversely impacted by the use of an AI system).

g. Product manufacturer. This term is not defined in the legislation; however, it can be assumed that it will have the meaning given in the EU harmonisation legislation listed in Annex II to the Proposed AI Regulation.

3- For what purpose are we using an AI system?

Is our AI system a safety component of a product or system covered by the harmonisation legislation listed in Annex II to the Proposed AI Regulation, or is it itself such a product?

Or does our AI system fall within one of the following exceptions (in which case the Proposed AI Regulation does not apply, subject to specific conditions being met):

a. Is it to be used exclusively for military purposes?
b. Is it to be used by public bodies in third countries within an international framework for law enforcement and judicial co-operation?
c. Is it to be used for research and development?
d. Does the system include AI components provided under free and open-source licences (this exception does not apply to “foundation models”)?
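For organisations building an internal compliance screening tool, the scope questions above can be captured as a simple checklist. The sketch below is purely illustrative: the field and function names are our own shorthand, not terms defined in the Proposed AI Regulation, and each exception remains subject to specific conditions being met under the final text.

```python
from dataclasses import dataclass

@dataclass
class ScopeAssessment:
    """Illustrative answers to the scope-exception questions (a)-(d)."""
    exclusively_military_use: bool = False
    third_country_law_enforcement_cooperation: bool = False
    research_and_development_use: bool = False
    free_open_source_components: bool = False
    is_foundation_model: bool = False  # open-source exception does not apply

def may_fall_outside_scope(a: ScopeAssessment) -> bool:
    """Return True if any exception *may* apply and a closer legal
    analysis of its specific conditions is warranted."""
    # The open-source exception (d) is expressly unavailable to foundation models.
    open_source_exception = a.free_open_source_components and not a.is_foundation_model
    return (a.exclusively_military_use
            or a.third_country_law_enforcement_cooperation
            or a.research_and_development_use
            or open_source_exception)
```

Such a checklist is a triage aid only; a positive answer flags the system for legal review rather than concluding that the Regulation does not apply.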

4- Is it a prohibited AI system?

The Proposed AI Regulation expressly prohibits the following AI practices:

a. Subliminal, manipulative, or exploitative AI systems that cause physical or psychological harm.
b. Emotion recognition, predictive policing, and real-time biometric identification AI systems used in publicly accessible spaces for law enforcement purposes.
c. All forms of social scoring, such as AI or technology that evaluates an individual based on social behaviour or predicted personality traits.

5- Is it a “high-risk” AI system?

Under the Proposed AI Regulation, an AI system shall be considered “high-risk” where both the following conditions are fulfilled:

a. the AI system is intended to be used as a safety component of a product, or is itself a product, covered by the harmonisation legislation listed in Annex II to the Proposed AI Regulation; and
b. the product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment related to risks for health and safety, with a view to the placing on the market or putting into service that product pursuant to the EU’s harmonisation legislation listed in Annex II.

In addition to the above, an AI system falling under Annex III of the Proposed AI Regulation shall also be considered “high-risk”, but only if it poses a “significant risk” of harm to the health, safety or fundamental rights of natural persons.

Where an AI system falls under Annex III point 2 (critical infrastructure), it shall also be considered “high-risk” if it poses a “significant risk” of harm to the environment.

Annex III includes AI systems that are biometrics or biometrics-based; for management or operation of critical infrastructure; education and vocational training; employment, workers management and access to self-employment; access to and enjoyment of essential private services and public services and benefits; law enforcement; migration, asylum and border control management; or administration of justice and democratic processes.

A “significant risk” is defined as a risk that is significant as a result of the combination of its severity, intensity, probability of occurrence, and duration of its effects, and its ability to affect an individual, a plurality of persons or to affect a particular group of persons.
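The two routes to a “high-risk” classification described above can likewise be sketched as decision logic for an internal screening tool. Again, this is an illustrative sketch under our own naming, not a definitive legal test; whether a “significant risk” exists is itself a legal assessment.

```python
from dataclasses import dataclass

@dataclass
class RiskProfile:
    """Illustrative answers to the high-risk questions in section 5."""
    annex_ii_safety_component_or_product: bool = False
    requires_third_party_conformity_assessment: bool = False
    falls_under_annex_iii: bool = False
    annex_iii_point_2_critical_infrastructure: bool = False
    significant_risk_health_safety_rights: bool = False
    significant_risk_environment: bool = False

def is_potentially_high_risk(p: RiskProfile) -> bool:
    # Route 1: Annex II safety component or product that must undergo a
    # third-party conformity assessment (both conditions must be met).
    route_1 = (p.annex_ii_safety_component_or_product
               and p.requires_third_party_conformity_assessment)
    # Route 2: Annex III system posing a "significant risk" to health,
    # safety or fundamental rights; for critical infrastructure
    # (Annex III point 2), a significant risk of harm to the
    # environment also suffices.
    route_2 = p.falls_under_annex_iii and (
        p.significant_risk_health_safety_rights
        or (p.annex_iii_point_2_critical_infrastructure
            and p.significant_risk_environment))
    return route_1 or route_2
```

A flag from either route should trigger the high-risk obligations analysis, not replace it.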

Obligations under the Proposed AI Regulation will vary depending on the type of entity your organisation is and the type of AI system it uses.

6- Other considerations

Have we considered transparency? For instance, do we make natural persons who interact with our AI system aware that we are using an AI system?

Has our AI system been developed and used in accordance with the Proposed AI Regulation’s general principles? These principles include: human agency and oversight; technical robustness and safety; privacy and data governance; transparency (including explainability); diversity, non-discrimination and fairness; and social and environmental well-being.

7- Understanding the organisation’s obligations

Once the above questions have been answered, an organisation will be in a much better position to assess its obligations under the Proposed AI Regulation and what steps it should take to prepare for compliance.

For further information in relation to this article please contact Eoghan Doyle and Hugo Grattirola.