Hawaii Is Taking the Lead on Regulating Artificial Intelligence

A new state Office of AI Safety and Regulation could take a risk-based approach to regulating various AI products.

Not a day passes without a major news headline about the great strides being made in artificial intelligence, along with warnings from industry insiders, academics, and activists about the potentially very serious risks AI poses.

A 2023 survey of AI experts found that 36% fear that AI development may result in a “nuclear-level catastrophe.”

Almost 28,000 people, including Steve Wozniak, Elon Musk, the CEOs of several AI companies, and many other prominent technologists, have signed an open letter written by the Future of Life Institute asking for a six-month pause or a moratorium on new advanced AI development.

Why are we all so concerned? In short: AI development is going way too fast, and it’s not being regulated.

Quick Acceleration

The primary concern is the new generation of highly complex “chatbots,” officially known as “large language models,” which include ChatGPT, Bard, Claude 2, and many others on the horizon.

These AIs are improving at an incredibly fast rate. Because of this tremendous acceleration, “artificial general intelligence,” defined as AI that is as good as or better than humans at practically everything a person can do, is expected to become a reality soon.

When AGI is developed, which could happen soon or could take a decade or more, AI will be able to advance on its own without human help. It would do so much as Google’s AlphaZero AI did in 2017, when, within just nine hours of being turned on, it learned to play chess better than even the very best human or AI players.

It achieved this feat by playing against itself over and over. On the Uniform Bar Exam, the standardized test used to certify attorneys for practice in many states, GPT-4 outperformed 90% of human test-takers.

That is up from only 10% for the earlier GPT-3.5 version, which was trained on a smaller data set. GPT-4 showed similar gains on numerous other standardized tests, most of which focus on reasoning rather than memorization.

Since reasoning is perhaps the defining characteristic of general intelligence, even today’s AIs exhibit strong traits of general intelligence. The New York Times quoted AI researcher Geoffrey Hinton, who worked at Google for years, as saying: “Look at how it was five years ago and how it is now. Take the difference and propagate it forwards. That’s scary.”

Sam Altman, the CEO of OpenAI, called regulation “crucial” at a Senate hearing on the potential of AI held in mid-May.

Since then, however, Congress has taken little action on AI, and the White House recently sent a letter praising top AI developers such as Google and OpenAI for their entirely voluntary approach to safety.

A voluntary approach to AI safety is like asking the oil industry to voluntarily certify that its products protect us from climate change. Given the “AI explosion” now unfolding and the potential arrival of artificial general intelligence, we may get only one chance to regulate AI well enough to ensure its safety.

Because the threat is so immediate, I am working with state lawmakers in Hawaii to establish a new Office of AI Safety and Regulation.

Congress is working on AI safety issues, but given the severity of the threat, it does not appear able to move quickly enough. The new office would follow the “precautionary principle,” putting the onus on AI developers to show that their products are safe before they are approved for use in Hawaii.

Regulators’ current approach is to let AI companies simply release their products to the public, where they are being adopted at an unprecedented rate, with no real assurance of their safety.

My hope is that this approach will help protect Hawaii from the more severe risks that artificial intelligence poses. In a recent open letter, hundreds of AI scientists and executives cautioned that AI could be as catastrophic as nuclear weapons or pandemics.

Hawaii can and should set an example for how to manage these risks at the state level. We cannot afford to wait for Congress to act, because whatever it does will almost certainly be far too little, far too late.
