Charles Nerko, partner and team leader for data security litigation in the firm’s Data Security & Technology Practice Area, was featured in the InformationWeek article “AI Is Creating New Forms of Liability. How Can It Be Managed?”
AI offers significant efficiency gains, making it essential for businesses that want to stay competitive. But its growing adoption also brings novel and poorly understood liabilities. These arise from AI’s autonomous learning capabilities and its unpredictability, which can produce errors and unforeseen consequences. The “black box” problem, in which AI decisions are difficult to trace, exacerbates these risks; when compounded by faulty data or improper implementation, it can create legal and reputational challenges. Charles said, “In the United States, AI-specific laws are still developing. But we have what’s called common law. That’s the system of legal principles that evolves through court decisions over time. Those common law principles already apply to artificial intelligence.” He continued, “The law right now treats an AI system similarly to how a human worker is treated. Any time a business has a representative, the representative can create legal liability for that business.”
Liability from AI spans several areas of the law, including contract and tort law. Charles said, “Businesses have to supervise AI the same way that they would supervise a human workforce. That applies even if the company didn’t code the AI. If the company is using an AI system designed by a third party and using a training set that a third party developed, that doesn’t matter. Under the law, the company that’s using it to make those decisions and convey that information is still liable under common law principles.”
To mitigate these risks, companies need to establish clear AI usage policies, ensure transparency, and structure contracts that outline performance standards and liabilities. Charles said, “Having an AI policy discourages employees from being overly ambitious and using personal AI systems. It makes sure the AI enterprise system is properly vetted, receives legal review, and has the proper contract in place to protect the organization using it.”
Monitoring AI systems, ensuring proper data use, and securing appropriate cyber insurance are critical to managing the liabilities that AI introduces. As laws and regulations evolve, businesses must adapt their practices to protect themselves against the unpredictable behavior of AI systems.
Click here to read the full article.