The European Union takes control of artificial intelligence

  • 2024-02-06

Representatives from EU Member States reached an agreement on the Artificial Intelligence Act (AI Act), which sets the stage for the next important steps towards its entry into force. 

This landmark legislation sets forth a comprehensive framework for the regulation of artificial intelligence (AI) across the EU. It marks a significant step towards ensuring AI technologies are developed and deployed in a safe, transparent, and accountable manner.

Toomas Seppel, an attorney-at-law at Hedman Law Firm, says: “The AI Act is a pioneering piece of legislation, the first in the world to regulate the development of artificial intelligence systems, as well as their placing on the market, supply and use.”

What is the EU AI Act about?

The Act classifies AI systems according to their risk level, from minimal risk to unacceptable risk, imposing strict requirements on high-risk AI applications. These include critical sectors such as healthcare, policing, and employment, where AI systems must adhere to stringent transparency, data governance, and human oversight standards.

For the first time, the AI Act introduces the concept of general-purpose artificial intelligence (GPAI). The best-known examples today are ChatGPT, Bard, and DALL-E: systems that can perform a wide range of tasks, such as generating text, images, or sound. In the future, GPAI systems will have to comply with transparency requirements. The AI Act also regulates the more powerful GPAI models that may pose a systemic risk. The most prominent of these come from OpenAI (GPT-3, GPT-4), DeepMind (AlphaGo, AlphaFold), and IBM (Watson), and such models will be subject to additional obligations.

Breaches of the AI Act will carry fines running into millions of euros. Violators could face fines of up to €7.5 million or 1.5% of a company's annual turnover; for large global companies, fines could reach €35 million or up to 7% of global annual turnover. “The size of the fines depends on the seriousness of the infringement as well as the size and turnover of the company. Such a strong control mechanism is essential to prevent infringements and to ensure that the Act protects people's rights effectively by promoting the development and use of responsible artificial intelligence,” explained Seppel.

The AI Act should also enhance transparency for end-users: when interacting with humans, artificial intelligence systems must inform the user that they are dealing with a machine. This applies, for example, to automated identity verification at border control and to chatbots in customer service, on Snapchat, and in ChatGPT.

In addition, people using AI-generated 'deepfakes' will have to disclose that the content was generated by AI. There are no fines for individuals, but this does not exclude a deepfake's user from liability under general law, for example for defamation.

Limited, high and prohibited risk artificial intelligence systems

Toomas Seppel explained that the AI Act classifies artificial intelligence systems into three categories according to potential risk: limited, high, and prohibited risk.

Limited-risk AI systems are subject to general transparency requirements, such as preparing technical documentation, complying with copyright requirements, and disclosing summaries of the data used in training.

Additional obligations and prohibitions will be imposed on GPAI systems, especially those that pose a systemic risk.

High-risk artificial intelligence systems will be subject to extensive obligations, such as establishing a risk management system, creating and updating technical documentation, complying with transparency requirements, and ensuring human oversight. Examples of high-risk AI systems include those used in critical infrastructure, medical devices, law enforcement, and the administration of justice.

The European Union agreed that AI systems posing a threat to fundamental human rights will be classified as prohibited-risk systems, and their development and use will be banned in the EU. Examples of prohibited systems include biometric categorisation systems that use sensitive data, facial recognition databases, emotion recognition in the workplace and in educational institutions, and systems that manipulate people's subconscious behaviour or exploit their vulnerabilities.

Estonia is developing its own AI strategy

The AI Act now awaits final clarification and adoption by EU lawmakers. Estonia is also developing its own AI strategy based on the AI Act, which will provide guidance, set standards, and look more systematically at how to ensure the trustworthiness of AI and mitigate risks both in developing and in using AI.

The AI Act is expected to be approved at the committee level within the next two weeks. It will then be forwarded to a plenary vote in the European Parliament, expected to take place in April. More details on the regulation can be found on the website of Hedman Law Firm: https://hedman.legal/articles/the-european-union-unveils-groundbreaking-ai-act-to-foster-responsible-ai-development/

Hedman Law Firm specialises in commercial and corporate law and assists its clients in investment fundraising, shareholder relations, technology law, mergers and acquisitions, cross-border corporate transactions, IT law, data protection, and intellectual property matters.