State lawmakers are paying closer attention to artificial intelligence, and they have plenty of questions.

As they scramble to keep pace with rapidly advancing artificial intelligence technologies, state lawmakers are often turning to their own state governments first, before setting limits on the private sector.

Legislators are looking for ways to shield their constituents from discrimination and other harms without stifling advances in business, research, education, and other fields.

“The government will be our first target. We’re trying to set a good example,” Connecticut state Sen. James Maroney said during a floor debate in May.
By the end of 2023, Connecticut intends to compile an inventory of every artificial intelligence-based system used by its government and post it online. Starting the following year, state officials will be required to review those systems regularly to ensure they do not result in unlawful discrimination.

Maroney, a Democrat who has become the General Assembly’s go-to expert on artificial intelligence, said he expects Connecticut lawmakers to turn their attention to the private sector next year. This fall he plans to work on model AI legislation with legislators from Colorado, New York, Virginia, Minnesota, and other states. That legislation would include “broad guardrails” and focus on issues such as product liability and requiring impact assessments of AI systems.


“It’s evolving quickly, and people are embracing it quickly. Therefore, we must anticipate this,” he said in a subsequent interview. “We’re actually already behind it, but we really can’t put off creating any kind of responsibility for much longer.”

This year, legislation pertaining to artificial intelligence was introduced in at least 25 states, Puerto Rico, and the District of Columbia. As of late July, 14 states and Puerto Rico had adopted resolutions or enacted laws, according to the National Conference of State Legislatures. That count does not include bills focused on specific AI technologies, such as autonomous vehicles or facial recognition, which NCSL tracks separately.

While Louisiana formed a new technology and cybersecurity committee to study AI’s impact on state operations, procurement, and policy, legislatures in Texas, North Dakota, West Virginia, and Puerto Rico established advisory bodies to study and monitor the AI systems their respective state agencies are using. Several other states took similar steps last year.

Lawmakers want to know who is using these systems and how. Heather Morton, a legislative analyst at NCSL who tracks artificial intelligence, cybersecurity, privacy, and internet issues in state legislatures, described the effort as “just gathering that data to figure out what’s out there, who’s doing what.” States are trying to sort that out within their own borders.


Connecticut passed its new law, which requires AI systems used by state agencies to be periodically checked for potential unlawful discrimination, after an investigation by Yale Law School’s Media Freedom and Information Access Clinic found that AI is already being used to assign students to magnet schools, set bail, and distribute welfare benefits, among other tasks. The details of those algorithms, however, remain largely unknown to the public.

The organization claimed that artificial intelligence (AI) “has spread throughout Connecticut’s government rapidly and largely unchecked, a development that’s not unique to this state.”

The “secret computerized algorithms” Idaho was using to assess people with developmental disabilities for federally funded health care services came to light through a lawsuit, according to Richard Eppink, legal director of the American Civil Liberties Union of Idaho, who testified before Congress in May about the discovery. In written testimony, he said the automated system relied on inaccurate data and on inputs the state had never validated.

The term “AI” covers a wide range of technologies, from algorithms that recommend what Netflix users should watch next to generative AI tools like ChatGPT that can help writers and artists create new works. The boom in business investment in generative AI has sparked public fascination and raised concerns about, among other risks, its capacity to deceive people and spread misinformation.


Some states haven’t yet tried to tackle the issue. Democratic state Sen. Chris Lee of Hawaii said lawmakers in his state failed to pass any legislation governing AI this year “simply because I think at the time, we didn’t know what to do.”

Instead, the Hawaii House and Senate approved a resolution from Lee urging Congress to adopt safety regulations for artificial intelligence and to restrict its use in the application of force by the military and police.

Lee, vice-chair of the Senate Labor and Technology Committee, said he plans to introduce a bill similar to Connecticut’s new law in the upcoming legislative session. He also wants to establish a permanent working group or department to handle AI-related issues with the appropriate expertise, which he acknowledges is hard to find.

There aren’t many people currently employed within traditional institutions or state governments who have this kind of expertise, he said.

The European Union is leading the way in building guardrails around AI. Members of Congress have discussed bipartisan AI legislation, which Senate Majority Leader Chuck Schumer said in June would maximize the technology’s benefits and significantly reduce its risks.


Arizona Gov. Katie Hobbs, a Democrat, vetoed a bill that would have barred the use of artificial intelligence in voting equipment. In her veto letter, Hobbs wrote that the measure “attempts to solve challenges that do not currently face our state.”

In Washington state, Democratic Sen. Lisa Wellman, a former systems analyst and programmer, said lawmakers need to prepare for a world in which machine systems are used more and more frequently.

Next year, she plans to introduce legislation that would make computer science a high school graduation requirement.

In Wellman’s view, AI and computer science are now an essential part of education. “And we must really comprehend how to incorporate it.”
