Sweeping Artificial Intelligence Bill Stalls in House

Despite the Senate passing sweeping legislation last month to regulate artificial intelligence in the private sector, the bill died when it was not brought to a vote in the House before the General Assembly’s May 8 adjournment deadline.

The decision not to call the bill for a vote came shortly after Gov. Ned Lamont told reporters he would veto it if it reached his desk.

SB 2 would have been the first law in the nation to regulate artificial intelligence applications developed and deployed by private industry.

The bill, which passed the Senate by a 24-12 vote, also criminalized deceptive synthetic media in elections and the nonconsensual dissemination of synthetic intimate images, and created a number of artificial intelligence-related workforce development programs.

Sen. James Maroney (D-Milford), the bill’s lead sponsor, developed the regulatory framework in close collaboration with a bipartisan group of legislators from other states.

Colorado has introduced nearly identical legislation that is currently under consideration in that state’s legislature. It is unclear where Colorado Gov. Jared Polis stands on the current version of the bill.

Working group

SB 2 was a continuation of the work of the Connecticut Artificial Intelligence Working Group established last year.

The working group – composed of legislators, industry representatives, scientists and academics – was created as a result of Public Act 23-16.

The group was tasked with making policy recommendations for the ethical and equitable use of artificial intelligence by state government and private industry, as well as assessing the White House Office of Science and Technology Policy’s Blueprint for an AI Bill of Rights.

While the task force reached consensus on many of the policy recommendations that went into SB 2 (e.g., synthetic media in elections, workforce development, synthetic intimate images), it failed to reach consensus on provisions governing private sector development and deployment.

Although the working group did not reach consensus on private sector regulations, sections 1-7 of the latest version of the bill passed by the Senate included broad measures establishing a series of reporting requirements for developers and deployers of “high-risk artificial intelligence systems,” defined as “any artificial intelligence system that, when deployed, makes, or is a substantial factor in making, a consequential decision.”

A “consequential decision,” in turn, whose definition changed several times through successive amendments, meant “any decision that has a material legal, or similarly significant, effect on the provision or denial to any consumer of, or the cost or terms of, (A) any criminal case assessment, any sentencing or plea agreement analysis, or any pardon, parole, probation, or release decision, (B) any education enrollment or education opportunity, (C) any employment or employment opportunity, (D) any financial or lending service, (E) any essential government service, (F) any health care service, or (G) any housing, insurance, or legal service.”

When developers “developed” and deployers “deployed” these high-risk systems, the bill imposed a series of reporting requirements owed to the state and to the entity or consumer using the system.

If a developer or deployer failed to exercise “reasonable care” to protect consumers from “any known or reasonably foreseeable risk of algorithmic discrimination,” that entity (1) had to notify affected parties using the system; and (2) could face an enforcement action brought by the Attorney General if such discrimination was not remedied within 60 days.

Broad concerns

These sections sparked widespread concern from industry, consumer advocates, lawmakers, the Department of Economic and Community Development and Lamont himself.

During a public hearing on the bill earlier this year, CBIA supported several of the bill’s workforce development initiatives but called on lawmakers to work with industry to develop regulations that would not hinder economic growth:

Regarding the definitions, regulations, and reporting requirements that were not consensus items of the Artificial Intelligence Working Group (Sections 1-7), CBIA is concerned that these sections, if adopted in their proposed form, could have unintended consequences that undermine AI innovation in our state. CBIA is committed to working with this committee and other stakeholders to develop an artificial intelligence regulatory framework that builds on best practices currently being developed around the world and that supports innovation, grows our economy, and protects the people of Connecticut.

Shortly after the Senate passed SB 2, the U.S. Chamber of Commerce sent a letter to House Speaker Matt Ritter (D-Hartford) urging caution before passing such far-reaching legislation:

“Existing laws and regulations already cover many activities related to artificial intelligence. Where gaps exist, policymakers should ensure that new policies are risk-based and do not unnecessarily hinder innovation… Given the significant complexities associated with AI and the broad scope of SB 2, we urge you to provide more time for review and analysis.”

On Tuesday, the Connecticut Post reported that Lamont would veto SB 2 if it reached his desk.

Lamont told the newspaper that Connecticut would be better off working with a consortium of other states to address the development and deployment of artificial intelligence in the private sector.

Maroney, who led the multiyear effort that produced the state’s data privacy law two years ago, is expected to raise the issue again next session.


For more information, contact Wyatt Bosworth at CBIA (860.244.1155) | @WyattBosworthCT.