
News

October 11, 2024

Charles Nerko Featured in InformationWeek Article on Managing Liabilities Created by Implementation of Artificial Intelligence

Charles Nerko, partner and team leader for data security litigation in the firm’s Data Security & Technology Practice Area, was featured in the InformationWeek article “AI Is Creating New Forms of Liability. How Can It Be Managed?”

AI offers significant efficiency gains, making it essential for competitive businesses. However, its growing implementation also brings novel and poorly understood liabilities. These arise from AI's autonomous learning capabilities and its unpredictability, which can result in errors and unforeseen consequences. The "black box" problem, where AI decisions are hard to trace, exacerbates these risks, especially when compounded by faulty data or improper implementation, leading to legal and reputational challenges.

Charles said, "In the United States, AI-specific laws are still developing. But we have what's called common law. That's the system of legal principles that evolves through court decisions over time. Those common law principles already apply to artificial intelligence." He continued, "The law right now treats an AI system similarly to how a human worker is treated. Any time a business has a representative, the representative can create legal liability for that business."

Liability from AI spans various areas, including contract and tort law. Charles said, "Businesses have to supervise AI the same way that they would supervise a human workforce. That applies even if the company didn't code the AI. If the company is using an AI system designed by a third party and using a training set that a third party developed, that doesn't matter. Under the law, the company that's using it to make those decisions and convey that information is still liable under common law principles."

To mitigate these risks, companies need to establish clear AI usage policies, ensure transparency, and structure contracts that outline performance standards and liabilities. Charles said, “Having an AI policy discourages employees from being overly ambitious and using personal AI systems. It makes sure the AI enterprise system is properly vetted, receives legal review, and has the proper contract in place to protect the organization using it.” 

Monitoring AI systems, ensuring proper data use, and securing appropriate cyber insurance are critical to managing the potential liabilities that AI introduces. As laws and regulations evolve, businesses must adapt their practices to protect themselves against the unpredictable nature of AI systems.


