
EU to adopt the world’s first comprehensive AI regulation

29/08/2023

The European Union Artificial Intelligence Act (EU AI Act) is expected to be adopted by the Council of the European Union by the end of 2023, which would make it the first comprehensive AI regulation in the world.


The EU AI Act was approved by the European Parliament in June 2023. Parliament set out the rationale for the legislation, stating: “Parliament’s priority is to make sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly.


“AI systems should be overseen by people, rather than by automation, to prevent harmful outcomes. Parliament also wants to establish a technology-neutral, uniform definition for AI that could be applied to future AI systems.”


If passed, the framework will be binding on all 27 EU Member States and will apply to anyone who creates or disseminates AI systems in the European Union. Under the Act, each Member State must establish at least one regulatory sandbox in which AI systems can be tested before they are deployed.


The emergence of publicly available generative AI applications has led to last-minute proposals for amendments to the Act, including a globally unique transparency obligation regarding copyrighted material used as training data. Other copyright issues related to generative AI remain unresolved for now.


The EU AI Act is intended to strike a balance between mitigating the risks of AI and protecting fundamental rights and principles, enabling AI to reach its full potential while supporting the EU’s global competitiveness.


If the framework becomes law, it will require companies to formally assess the risks posed by their AI systems before they are put into use, and it will give regulators the authority to fine companies that violate the framework’s compliance rules.


Non-compliance can trigger penalties of up to €40 million or 7% of a company’s annual global revenue, whichever is higher.
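
For illustration, a company with annual global revenue of €1 billion could face a maximum fine of €70 million, since 7% of its revenue exceeds the €40 million threshold; for a company whose 7% figure falls below €40 million, the €40 million ceiling applies instead.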


The legislation will also give European citizens the power to file complaints against AI providers they believe are in breach of the Act. The new rules establish obligations for providers and users depending on the level of risk posed by the AI system. While many AI systems pose only minimal risk, they still need to be assessed.


Under the Act there are different rules for various risk levels:


· Unacceptable risk: AI systems considered a threat to people will be banned. They include: cognitive behavioural manipulation of people or specific vulnerable groups, for example voice-activated toys that encourage dangerous behaviour in children; social scoring, meaning the classification of people based on behaviour, socio-economic status or personal characteristics; and real-time and remote biometric identification systems, such as facial recognition.


Some exceptions may be allowed: for instance, “post” remote biometric identification systems, where identification occurs after a significant delay, will be allowed for the prosecution of serious crimes, but only after court approval.


· High risk: AI systems that negatively affect safety or fundamental rights will be considered high risk and will be divided into two categories:

1) AI systems used in products falling under the EU’s product safety legislation, including toys, aviation, cars, medical devices and lifts.

2) AI systems falling into eight specific areas that will have to be registered in an EU database: biometric identification and categorisation of natural persons; management and operation of critical infrastructure; education and vocational training; employment, worker management and access to self-employment; access to and enjoyment of essential private services and public services and benefits; law enforcement; migration, asylum and border control management; and assistance in legal interpretation and application of the law.


All high-risk AI systems will be assessed before being put on the market and throughout their lifecycle. Generative AI, like ChatGPT, would have to comply with transparency requirements: disclosing that the content was generated by AI; designing the model to prevent it from generating illegal content; and publishing summaries of copyrighted data used for training.


· Limited risk: AI systems should comply with minimal transparency requirements that allow users to make informed decisions. Users should be made aware when they are interacting with AI, and after interacting with an application they can decide whether they want to continue using it. This includes AI systems that generate or manipulate image, audio or video content, for example deepfakes.


The Act is intended to ensure that high-risk AI systems are used only in accordance with the provider’s instructions, that records of their input data are kept, and that competent human oversight and cybersecurity measures are in place. Importers must ensure that the provider has carried out the conformity assessment, that the technical documentation has been prepared, and that the system bears the conformity mark.



Under the Act, distributors of high-risk AI systems are required to ensure that the system carries the CE marking, is accompanied by the necessary documentation and instructions, and conforms to the Act.


W Denis Europe arranges comprehensive insurance for EEA-based businesses, large and small, including Data Protection Infringement Cover, Cyber, Errors & Omissions, Directors & Officers Liability and much more. If you wish to discuss your insurance requirements, please visit www.wdenis.eu or contact Vida Jarašiūnaitė at vida.jarasiunaite@wdenis.eu or Mark Dutton at mark.dutton@wdenis.com
