From Our Pipe Dream Department: Harmful AI Must Pay Victims!
October 28, 2022
It looks like the European Commission is taking the potential for algorithms to cause harm seriously. The Register reports, “Europe Just Might Make it Easier for People to Sue for Damage Caused by AI Tech.” Vice-President for Values and Transparency Věra Jourová frames the measure as a way to foster trust in AI technologies. Apparently EU officials believe technical innovation is helped when the public knows appropriate guardrails are in place. What an interesting perspective. Writer Katyanna Quach describes:
“The proposed AI Liability Directive aims to do a few things. One main goal is updating product liability laws so that they effectively cover machine-learning systems and lower the burden-of-proof for a compensation claimant. This ought to make it easier for people to claim compensation, provided they can prove damage was done and that it’s likely a trained model was to blame. This means someone could, for instance, claim compensation if they believe they’ve been discriminated against by AI-powered recruitment software. The directive opens the door to claims for compensation following privacy blunders and damage caused by poor safety in the context of an AI system gone wrong. Another main aim is to give people the right to demand from organizations details of their use of artificial intelligence to aid compensation claims. That said, businesses can provide proof that no harm was done by an AI and can argue against giving away sensitive information, such as trade secrets. The directive is also supposed to give companies a clear understanding and guarantee of what the rules around AI liability are.”
Officials hope such clarity will encourage developers to move forward with AI technologies without the fear of being blindsided by unforeseen allegations. Another goal is to consolidate the current patchwork of AI standards and legislation across Europe into a cohesive set of rules. Commissioner for Justice Didier Reynders declares citizen protection the top priority, stating, “technologies like drones or delivery services operated by AI can only work when consumers feel safe and protected.” Really? I’d like to see US officials tell that to Amazon.
Cynthia Murrell, October 28, 2022