Joe Wilson, special counsel, was featured in the Rochester Business Journal article “Insurance Underwriting & Claims Make the Shift to AI Tools,” which explored how generative artificial intelligence (AI) is transforming insurance underwriting and claims processing, offering efficiency gains and cost reductions. While machine learning has long played a role in the industry, AI’s newer capabilities are expediting underwriting decisions, cutting turnaround times from days to minutes while maintaining high accuracy. This shift not only streamlines operations but also reduces the human error that historically contributed to ambiguous or inconsistent policy outcomes.
AI is seen as a way to mitigate issues that previously led to legal disputes under doctrines like contra proferentem, where ambiguities are interpreted in favor of the insured. By minimizing human handling of policy documents, AI helps eliminate contradictions and standardizes underwriting practices. However, with this technological leap comes a growing regulatory focus, especially on ensuring that AI does not inadvertently introduce bias or violate consumer protection laws.
A key concern among regulators is the potential for AI models to produce discriminatory outcomes if they are trained on inappropriate data sets. States like New York have issued guidance cautioning insurers to ensure compliance with antidiscrimination laws, particularly regarding race, gender, and other protected classes. Currently, 27 states have implemented or are drafting regulations to monitor AI use in insurance, highlighting the need for oversight to prevent unfair pricing or policy decisions stemming from flawed algorithms.
While AI offers powerful tools for tasks like fraud detection, health risk analysis, and claims evaluation, the reliability of the underlying data is critical. “When utilizing AI systems for functions that were traditionally performed by humans—reviewing records or investigating a policyholder’s history for example—an insurer needs to confirm that the output is accurate and complies with existing law and isn’t arbitrary or discriminatory,” Wilson said. Even unintended biases, such as those stemming from names or geographic data, can skew decisions.
Businesses are advised to stay alert for unusual outcomes, such as unexpected denials or pricing, which may warrant closer scrutiny of how AI-driven decisions were made.
Rochester Business Journal subscribers can read the full article here.