Artificial Intelligence Act highlights need for cyber insurance

14 July 2022

The European Union has responded to the lack of comprehensive regulation of artificial intelligence with the EU AI Act, which is expected to come into force in late 2022.


The Act is seen as an important step that will shape the future of artificial intelligence and the protection of personal data, and it is the first law on AI by a major regulator anywhere in the world.

The EU Act places AI systems into three risk categories:


Unacceptable risk: AI systems considered a clear threat to the safety, livelihoods and rights of people, including systems that manipulate human behaviour or allow ‘social scoring’ by governments.


High risk: Technologies that affect material aspects of people’s lives, for example through assessments, evaluations, access to services and employment.


Low risk: AI technology that is not classified as unacceptable or high risk; it is largely left unregulated.


For organisations with machine learning systems at the centre of their business, there is a real risk of system failures. Cyber insurance covers information security and privacy liability, as well as business interruption. However, AI failures resulting in brand damage, bodily harm or property damage are unlikely to be covered by existing cyber insurance policies.


In a fast-moving market, cyber insurance products are being created to target the major concerns of businesses, including guarantees to absorb the underperformance of AI. The global cyber insurance market is predicted to grow from €6.0 billion ($7bn) in gross written premiums (GWP) in 2020 to €20.4 billion ($20.6bn) by 2025.


The European Act arrives at a time when organisations are increasingly using AI-assisted recruitment tools, which carry a risk of discriminating against applicants and could trigger legal action. AI makes it possible to filter candidates automatically through online assessments and tests that assess an applicant’s job history, personal characteristics or cognitive ability.


However, there is a danger of bias being introduced into the AI recruitment model, resulting in systematically skewed hiring decisions.


There have already been significant incidents involving automated systems. In 2018, Amazon was forced to scrap its own CV screening algorithm, which had been trained on the company’s recruitment data from the previous ten years. From that data, the algorithm taught itself that male candidates were preferable to female candidates, based on Amazon’s previous recruitment decisions. It therefore reportedly penalised CVs that included the word “women’s” and downgraded graduates of certain all-women colleges.


It has been noted that voice-testing technology may discriminate against candidates based on their accent, reflecting assumptions about the candidate’s nationality or race. Indirect discrimination occurs where a “provision, criterion or practice” disadvantages a certain group of people who share the same “protected characteristic”.


For further information visit www.wdenis.eu or contact Vida Jarašiūnaitė at Vida.Jarasiunaite@wdenis.eu or Mark Dutton at mark.dutton@wdenis.com
