Bad robot: Europe plans product liability changes to make it easier to sue AIs

The European Union is to update its product liability laws to tackle the risk of damage caused by artificial intelligence systems, and to address other liability issues arising from digital products such as drones and smart devices.

Presenting a proposal for revisions to long-standing EU product rules, which includes a dedicated AI Liability Directive, justice commissioner Didier Reynders said modernization of the legal framework is needed to take account of “digital transformation” generally, and of the ‘black box’ explainability challenge AI specifically poses, in order to ensure that consumers are able to obtain redress for harms caused by modern products.

The EU’s executive also argues its approach will give businesses legal certainty, as well as help foster consumer trust in their products.

“Current liability rules are not equipped to handle claims for damage caused by AI-enabled products and services,” said Reynders, discussing the AI Liability Directive in a press briefing. “We must change this and guarantee protection for all consumers.”

The directive contains two main measures: disclosure requirements and a rebuttable presumption of causality.

“With these measures victims will have an effective chance to prove their justified liability claims in court,” he suggested. “Because it is only when the parties have equal tools to make their case before a judge that the fundamental right to access to justice becomes effective.

“Our proposal will ensure that justified claims are not hindered by specific difficulties of proof linked to AI.”

The Commission’s AI liability proposal would apply its protections to both individuals and businesses, not only consumers.

On the culpability side, meanwhile, the draft legislation is not limited in scope to the original maker of an AI system; rather, liability risk spans manufacturers, developers or users of an AI system that causes harm as a result of errors or omissions. So it looks rather more broadly drawn than the earlier AI Act proposal (which targets “high risk” AI systems), as it does not limit liability to the producer but opens it up to the whole supply chain.

That’s an interesting difference — especially bearing in mind certain civil society criticisms of the AI Act for lacking rights and avenues for individuals to seek redress when they are negatively impacted by AI.

The Commission’s riposte appears to be that it’s going to make it easier for people to sue if they are harmed by AIs. (In a Q&A on the linkage between the AI Act and the Liability Directive, it writes: “Safety-oriented rules aim primarily to reduce risks and prevent damages, but those risks will never be eliminated entirely. Liability provisions are needed to ensure that, in the event that a risk materialises in damage, compensation is effective and realistic. While the AI Act aims at preventing damage, the AI Liability Directive lays down a safety-net for compensation in the event of damage.”)

“The principle is simple,” said Reynders of the AI Liability Directive. “The new rules apply when a product that functions thanks to AI technology causes damage and that this damage is the result of an error made by manufacturers, developers or users of this technology.”

He gave examples of the sorts of scenarios that would be covered: damage caused by a package-delivery drone whose operator did not respect user instructions specifically relating to AI; a manufacturer failing to apply “essential remedial measures” for recruitment services using AI; or an operator giving incorrect instructions to an AI-equipped mobile robot, which then collides with a parked car.

Currently, he said, it’s difficult to obtain redress for liability around such AI products, given what he described as “the obscurity of these technologies, their unique nature, and their extreme complexity”.

The directive proposes to circumvent the ‘black box’ of AI by laying out powers for victims to obtain documents or recorded data generated by an AI system to build their case (aka disclosure powers), with provisions also put in place to protect commercially sensitive information, such as trade secrets.

The law will also introduce a rebuttable presumption of causality to alleviate the ‘burden of proof’ problem attached to complex AI systems.

“This [presumption] means that if the victim can show that the liable person committed a fault by not complying with a certain obligation, such as an AI Act requirement or an obligation set by EU or national law to avoid harm from happening, the court can presume that this non-compliance led to the damage,” he explained.

A potentially liable person could rebut the presumption, however, if they can demonstrate that another cause led the AI to give rise to the damage, he added.

“The directive covers all types of damage that are currently compensated for in each Member State’s national law — such as issues resulting in physical injury, damage of materials or discrimination,” Reynders went on, adding: “This directive will act in the interests of all victims.”

Commenting on this aspect of the proposal in a statement, John Buyers, head of AI at the international law firm Osborne Clarke, said: “There’s a very intentional interplay between the AI Act and the proposed new presumptions on liability, linking non-compliance with the EU’s planned regulatory regime with increased exposure to damages actions. Instead of having to prove that the AI system caused the harm suffered, claimants that can prove non-compliance with the Act (or certain other regulatory requirements) will benefit from a presumption that their damages case is proven. The focus will then shift to the defendant to show that its system is not the cause of the harm suffered.”

“The potential for claimants to get hold of a defendant’s regulatory compliance documentation to inform their claims may add a tactical layer to how those technical documents are written,” he added. “There’s no doubt that the AI industry, at least as regards applications classified as high risk under the Act, is going to need to apply time and thought to compliance with the new Act and how best to protect their interests.”

In a Q&A, the Commission further specifies that the new AI liability rules will cover compensation “of any type of damage covered by national law (life, health, property, privacy, etc)”, which raises the interesting prospect of privacy litigation (notoriously difficult to pull off under current legal frameworks in Europe) potentially getting a boost, given how far and wide AI is spreading (and how fast and loose with people’s information AI data-miners can be).

Could Facebook be sued for the privacy harms of behavioral profiling and ad targeting under the incoming directive? It’s a thought.

That said, the Commission pours some cold water on the notion of the revised liability framework empowering citizens to sue directly for damages over infringements of their fundamental rights — writing: “The new rules do not allow compensation for infringements of fundamental rights, for example if someone failed a job interview because of discriminatory AI recruitment software. The draft AI Act currently being negotiated aims to prevent such infringements from occurring. Where they nevertheless do occur, people can turn to national liability rules for compensation, and the proposed AI Liability Directive could assist people in such claims.”

But its response on that also specifies that a damages claim could be brought for “data loss”.

Not just high risk AIs…

While Reynders made mention in today’s press briefing of the “high risk” category of AI systems that is contained in the AI Act — appearing to suggest the liability directive would be limited to that narrow subset of AI applications — he said that is not actually the Commission’s intention.

“The reference is of course to the AI high risk products that we have put in the AI Act, but with the possibility to go further than that if there is some evidence about the link with the damage,” he said, adding: “So it’s not a limitation exclusively to the high risk applications, but it’s the first reference, and the reference is linked to the AI Act.”

Revisions to the EU’s existing Product Liability Directive, also adopted today (paving the way for the AI Liability Directive to slot in uniform rules around AI products), include some further modernization focused on liability rules for digital products, such as allowing compensation for damage when products like robots, drones or smart-home systems are made unsafe by software updates; by digital services (or AI) that are needed to operate the product; or by manufacturers failing to address cybersecurity vulnerabilities.

Earlier this month, the EU laid out plans for a Cyber Resilience Act to bring in mandatory cybersecurity requirements for smart products that apply throughout their lifetimes.

The proposed revision to EU product liability rules, which date back to 1985, is also intended to cover products originating from circular economy business models, i.e. where products are modified or upgraded, with the EU saying it wants to create legal certainty to help support circularity as part of its broader push for a green transition.

Commenting in a statement, commissioner for the internal market Thierry Breton added: “The Product Liability Directive has been a cornerstone of the internal market for four decades. Today’s proposal will make it fit to respond to the challenges of the decades to come. The new rules will reflect global value chains, foster innovation and consumer trust, and provide stronger legal certainty for businesses involved in the green and digital transition.”

The Commission’s product liability proposals will now move through the EU’s co-legislative process, meaning they will be debated and potentially amended by the European Parliament and the Council, which will both need to give their agreement to the changes if the package is to become EU law. So it remains to be seen how the policy package might shift.

The EU’s proposal does not cover no-fault/unintentional harm caused by AI systems, but the Commission said it will review that position five years after the directive comes into force.

“Unanticipated outcomes — so-called “edge cases” — are one of the potential risks of machine learning AI systems and it will be interesting to see whether that turns out to be a theoretical problem or a material one,” added Osborne Clarke’s Buyers.

“The AI Act is moving fairly slowly through the EU legislative process; we certainly don’t expect it to become law before late 2023, with a period for compliance after that, likely to be two years but still being debated,” he continued. “On the other hand, monitoring what will need to be done is important so that businesses can plan ahead in terms of resourcing and planning this work.”

This report was updated with additional comment.
