
European AI Act comes into force with significant implications
03/04/2024
The European Union Artificial Intelligence Act (EU AI Act), which has been passed by the EU Parliament, has the potential to significantly change the way companies operate within the EU by setting a new global standard for AI policy.
The Act means companies releasing products or operating systems with AI exposure will need to make sure they comply with the law, or risk being fined. The Act is intended to strike a balance between mitigating the risks AI poses to fundamental rights and principles and enabling AI to reach its full potential, supporting the EU’s global competitiveness.
AI systems currently being used in industries such as banking, insurance, healthcare, life sciences, human capital management and others will be classified as “high risk” under the Act.
Accenture research shows that this pressure is starting to influence how firms invest in technology: 72% of companies globally are now approaching investments with more caution because of societal concerns about the responsible use of AI. Europe shows the highest level of caution worldwide, at 77%, compared with 58% in North America.
It is clear that organisations affected by the Act need to understand how these regulations apply to them and what strategies they can deploy to ensure they operate in compliance with their obligations once the new law takes effect.
The new legal obligations carry significant financial penalties for non-compliance, ranging from up to €35 million or 7 percent of global revenue down to €7.5 million or 1.5 percent of revenue, depending on the infringement and the size of the company.
"Businesses have no time to lose when it comes to getting ready for the EU AI Act—the most significant step to regulating AI in the world," Ray Eitel-Porter, AI lead at Accenture UKIA told Forbes.
The Act aims to classify and regulate AI applications based on their risk of causing harm. This classification includes four categories of risk: "unacceptable", "high", "limited" and "minimal", plus one additional category for general-purpose AI.
Applications deemed to represent unacceptable risks are banned. High-risk applications must comply with security, transparency and quality obligations and undergo conformity assessments. Limited-risk AI applications have only transparency obligations, and those representing minimal risk are not regulated. For general-purpose AI, transparency requirements are imposed, with additional and more thorough evaluations where the systems pose particularly high risks.
After the Act comes into force, there will be a delay before it becomes applicable, with some industry sectors more immediately impacted than others. The Act also requires that AI systems intended to interact directly with humans be clearly marked as such, unless this is obvious under the circumstances.
Under the Act, distributors of high-risk AI systems are required to ensure the system carries the CE marking and the necessary documentation and instructions, and conforms to the Act. One of the key aspects of the Act is its requirement for companies to disclose the content used to train AI models and to comply with European copyright laws.
W Denis Europe arranges comprehensive insurance for EEA-based businesses, large and small, including Data Protection Infringement Cover, Cyber, Errors & Omissions, Directors & Officers Liability and much more.
For more information, please contact:
Eastern Europe
Southern Europe
Christos.Hadjisotiris@wdenis.com
Western Europe &/or elsewhere worldwide