On April 24, 2024, the Connecticut State Senate passed a bill titled “An Act Concerning Artificial Intelligence” (SB 2 2024).1 The bill passed the Senate 24–12, but only after lawmakers voiced concerns that it would stifle innovation, burden small businesses, and make the state a regulatory outlier. The House of Representatives has pushed back against the bill and proposed major amendments. If enacted as is, the bill would impose a wide array of restrictions and obligations on businesses and individuals in Connecticut who use AI. Certain provisions are slated to take effect on October 1, 2024.
AI and Harm
The bill defines artificial intelligence (AI) differently depending on the context. Some definitions characterize AI as a technology that uses data to train algorithms or predictive models; others characterize AI as a machine-based system that infers, from the inputs it receives, how to generate outputs such as content, decisions, predictions, or recommendations.
The bill generally defines a high-risk AI system as one developed to make a consequential decision. For AI-generated decisions, the bill seeks to mitigate algorithmic discrimination, generally defined as any condition in which an AI system materially increases the risk of unjustified differential treatment or impact that disfavors an individual or group on the basis of a protected class or other differentiating characteristic.
Scope
Generally, the bill establishes requirements regarding the development and deployment of AI; establishes an advisory council; and places restrictions on high-risk AI systems, certain types of intimate synthetic images, and certain types of deceptive media. The bill provides the following definitions for key actors:
- Consumer: “Any individual who is a resident of this state.”
- Deployer: “. . . any person doing business in this state that deploys (A) a generative artificial intelligence system, or (B) a high-risk artificial intelligence system.”
- Developer: “. . . any person doing business in this state that develops, or intentionally and substantially modifies, (A) a general-purpose artificial intelligence model, (B) a generative artificial intelligence system, or (C) a high-risk artificial intelligence system.”
Key Takeaways
Below are key takeaways regarding the bill’s restrictions and obligations:
- Developers must use reasonable care to protect consumers from the risks of algorithmic discrimination.
- The restrictions on algorithmic discrimination do not apply to expanding an applicant or customer pool to increase diversity or redress historic discrimination.
- Before a developer provides a high-risk AI system to any deployer, the developer must satisfy a number of requirements, including:
- providing an intended-use statement; and
- providing documentation that describes the risks of discrimination.
- Developers of high-risk AI systems must:
- complete an impact assessment; and
- make a public statement (e.g., a statement posted on the developer’s website) regarding the types of high-risk AI systems the developer has developed and makes available.
- Within 90 days after a developer of a high-risk AI system discovers that the system has caused, or is likely to have caused, algorithmic discrimination, the developer must disclose the risk arising from the discrimination to the Attorney General of Connecticut (AG), the Connecticut Commissioner of Consumer Protection, and all deployers of the system.
- Developers of high-risk AI systems enjoy a rebuttable presumption that they used reasonable care under the bill if they, among other steps, implement a risk management policy and program that is reasonable in view of the guidance and standards set forth in the latest version of the Artificial Intelligence Risk Management Framework published by the National Institute of Standards and Technology (NIST) or another nationally or internationally recognized risk management framework for AI systems.
- Before, or at the time, a deployer uses a high-risk AI system to make a consequential decision concerning a consumer, the deployer must notify the consumer of this fact and provide a statement describing the system and how it works.
- Anyone in Connecticut who uses an AI system to interact with consumers must ensure the system discloses to consumers that they are interacting with an AI system.
- Developers of AI systems that generate synthetic digital content must ensure that:
- the systems’ outputs are marked in a machine-readable format and detectable as synthetic digital content (a minimal marking sketch follows this list); and
- the outputs are further marked in a manner that is clear and respects accessibility requirements.
- Deployers of AI systems that generate synthetic digital content must disclose to consumers that the synthetic digital content has been artificially generated upon the consumers’ first interaction with, or exposure to, the synthetic digital content.
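The bill does not prescribe a particular marking technology for the machine-readable requirement noted above; emerging provenance standards such as C2PA content credentials may ultimately fill that role. As a minimal sketch of one possible approach, assuming PNG output and the Pillow imaging library, a developer might embed a metadata marker like the following. The metadata keys (`ai_generated`, `synthesis_tool`) are hypothetical illustrations, not drawn from the bill or any standard:

```python
# Illustrative sketch only: the bill does not specify a marking format.
# Embeds a machine-readable "AI-generated" marker in PNG text metadata
# using Pillow, and shows how a reader could detect it.
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def mark_as_synthetic(src_path: str, dst_path: str, tool_name: str) -> None:
    """Embed a machine-readable synthetic-content marker in PNG metadata."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")       # hypothetical key name
    meta.add_text("synthesis_tool", tool_name)  # hypothetical key name
    img.save(dst_path, pnginfo=meta)


def is_marked_synthetic(path: str) -> bool:
    """Detect the marker when reading the image back (PNG text chunks)."""
    img = Image.open(path)
    text_chunks = getattr(img, "text", {})
    return text_chunks.get("ai_generated") == "true"


if __name__ == "__main__":
    mark_as_synthetic("generated.png", "generated_marked.png", "example-model-v1")
    print(is_marked_synthetic("generated_marked.png"))  # True
```

Metadata alone would not satisfy the bill’s separate requirement that markings be clear and respect accessibility requirements; a visible, human-perceivable disclosure at first exposure would still be needed. Plain metadata can also be stripped during transcoding, which is one reason provenance standards bind their markings to content cryptographically.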
New Crimes
The bill sets forth several notable crimes related to synthetic intimate images and deceptive media involving elections. Creators should be aware that some deepfakes may fall within the scope of these crimes.
Under certain provisions of the bill, a person may be guilty of unlawful dissemination of a synthetic intimate image when all three of the following conditions are met:
- the person disseminates a film or other image that is at least partially generated by a computer system and includes a synthetic representation, virtually indistinguishable from an actual depiction, of certain private body parts of another person or of another person engaged in sexual intercourse;
- the person disseminates this synthetic intimate image without the consent of the other person; and
- the nonconsenting person suffers harm as a result of the dissemination.
This unlawful dissemination may constitute a class A misdemeanor if the violator disseminates such a synthetic intimate image to even a single person by any means. The offense rises to a class D felony if the violator disseminates the image to multiple people by means of an interactive computer service, an information service, or a telecommunication service.
The bill also restricts, subject to exceptions for parody and satire, the distribution of deceptive media during the 90-day period preceding the availability of overseas ballots for an election, where a number of elements are present, including:
- the violator knew that the deceptive media depicted a human engaged in speech or conduct in which the human did not engage; and
- it was reasonably foreseeable that the distribution would (i) harm the reputation or electoral prospects of a candidate and (ii) change the voting behavior of electors.
Distributing deceptive media in violation of the bill may constitute a class C misdemeanor. The offense rises to a class D felony if the violator commits a similar act within five years of a prior conviction for such a violation.
Enforcement and Damages
The bill provides enforcement authority to several Connecticut entities. The Commission on Human Rights and Opportunities (CHRO) may enforce the law if it finds that a developer or deployer failed to use reasonable care to prevent a discriminatory practice, with fines between $3,000 and $7,000.
Apart from the CHRO’s powers, the AG and the Commissioner of Consumer Protection will, under the bill, retain exclusive authority to enforce the provisions of the law; violations are defined as unfair trade practices.
The bill creates no private right of action for violations; however, certain provisions grant candidates and other persons the right to file a civil action for an injunction against violators. The bill also provides cure periods for certain violations.
Artificial Intelligence Advisory Council
The bill establishes an Artificial Intelligence Advisory Council, tasked with making recommendations to the General Law Committee and the Department of Economic and Community Development. The council’s responsibilities include studying other states’ AI laws; maintaining an ongoing dialogue among academia, government, and industry; and making recommendations for adopting new legislation on AI.
If you have any questions regarding this alert, please contact Renato Smith, Data Security & Technology Practice Area co-chair, at rsmith@barclaydamon.com; Joshua Maddox, summer associate, at jmaddox@barclaydamon.com; or another member of the firm’s Data Security & Technology Practice Area.
1Sub. B. No. 2, Gen. Assemb., Feb. Sess. (Conn. 2024).