Beijing is set to adopt sweeping new restrictions on AI services this week, attempting to balance state control of the technology with enough support for its enterprises to become globally competitive.
Platform providers must register their services and pass a security review before bringing them to market, according to the 24 rules issued by the government. Seven agencies, including the National Development and Reform Commission and the Cyberspace Administration of China, will share oversight.
The final rules are less burdensome than an earlier draft from April, but they nonetheless demonstrate that China, like Europe, is pushing forward with governmental control of what may be the most promising — and divisive — technology of the past 30 years.
The United States, by contrast, has no AI legislation under serious consideration, even though business executives have warned that AI poses a “risk of extinction” and OpenAI’s Sam Altman has urged Congress in public hearings to get involved.
“China got started very quickly,” said Matt Sheehan, a fellow at the Carnegie Endowment for International Peace and author of several research papers on the subject. “It started developing the regulatory tools and the regulatory muscles, so they’ll be more prepared to control more sophisticated technological applications.”
China’s regulations are more stringent than anything contemplated in Western democracies. But they also include measures that would be feasible, and find support, in countries like the United States.
For instance, Beijing will require clear labels on all artificially generated content, including images and videos. That aims to stop deceptions like a video of Nancy Pelosi that was altered online to make her appear intoxicated.
China will also require any company introducing an AI model to train it on “legitimate data” and to disclose that data to regulators as needed. A rule like this would assuage media companies’ concerns about AI engines appropriating their works. Chinese businesses must also offer a clear process for handling public complaints about their products or content.
The U.S. has generally taken a hands-off approach to regulation, one that has allowed Silicon Valley titans to grow into global powerhouses. But that approach poses severe risks with generative AI, according to Andy Chun, an artificial intelligence expert and adjunct professor at the City University of Hong Kong.
“We are just beginning to realize how profoundly AI has the potential to change how people work, live, and play,” he said. Humanity, he added, is also clearly at risk if AI research is allowed to continue unchecked.
Businesses in China need to exercise much greater caution. Only a few days after launching its ChatYuan service in February, the Hangzhou-based Yuanyu Intelligence shut it down. According to screenshots that were shared online, the bot had referred to Russia’s invasion of Ukraine as a “war of aggression”—contrary to Beijing’s position—and cast doubt on China’s economic prospects.
The business has since abandoned the ChatGPT-style model entirely in favor of KnowX, an AI productivity tool. Its founder, Xu Liang, said that “machines cannot achieve 100% filtering,” but that the model can be improved by instilling human traits like patriotism, dependability, and wisdom.
Beijing plays by different rules than Washington because of its totalitarian government. Tech giants cannot fight back when Chinese agencies criticize or sanction them, and they frequently praise the government’s oversight in public.
Big Tech in the United States employs armies of attorneys and lobbyists to fight practically every regulatory measure. That, along with heated public debate among stakeholders, will make effective AI legislation difficult to enact, according to Aynne Kokas, associate professor of media studies at the University of Virginia.
AI is starting to permeate China’s extensive censorship system, which keeps the nation’s internet free of taboo and contentious topics. Technically speaking, that is no simple feat.
“One of the most alluring innovations of ChatGPT and similar AI innovations is its unpredictability, or its own innovation beyond our human intervention,” said You of the Chinese University of Hong Kong. “In many instances, the platform service providers have no control over it.”
Some Chinese IT businesses use one large language model to verify that another LLM’s output is free of any content that might be deemed problematic, applying two-way keyword filtering.
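The approach described above, one model screening another’s input and output, can be illustrated with a toy sketch. Everything here is a hypothetical stand-in: the keyword set, the `generate` and `censor` functions, and the blocking messages are invented for illustration, and the companies’ real systems reportedly use a second LLM rather than a simple keyword list.

```python
# Toy sketch of two-way filtering: a "generator" produces text, and a
# separate "censor" pass screens both the incoming prompt and the
# outgoing response. All names and terms here are illustrative only.

BANNED_KEYWORDS = {"example-banned-term", "another-banned-term"}

def generate(prompt: str) -> str:
    # Stand-in for a large language model's response.
    return f"Reply to: {prompt}"

def censor(text: str) -> bool:
    # Stand-in for the checker model: flag text containing banned terms.
    lowered = text.lower()
    return any(term in lowered for term in BANNED_KEYWORDS)

def answer(prompt: str) -> str:
    # Filter on the way in and again on the way out ("two-way").
    if censor(prompt):
        return "[request blocked]"
    response = generate(prompt)
    if censor(response):
        return "[response withheld]"
    return response

print(answer("hello"))                        # passes both checks
print(answer("about example-banned-term"))    # blocked on input
```

The point of the two-stage design is that even if an unexpected prompt slips past the input check, the model’s unpredictable output still faces a second, independent screen before reaching the user.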
The founder of a software company, who asked to remain anonymous owing to political sensitivities, said the government will even perform spot checks on how AI systems label data.
According to Nathan Freitas, fellow at Harvard University’s Berkman Klein Center for Internet and Society, “what is potentially the most fascinating and concerning timeline is the one where censorship happens through new large language models developed specifically as censors.”
The European Union might be the most forward-thinking in defending people from such overreach. The June draft bill limits the use of facial recognition technologies and establishes privacy restrictions. The EU proposal would also mandate that businesses conduct some examination of the risks associated with their services, such as those to national security or health systems.
But the EU’s strategy has drawbacks. OpenAI’s Altman has said his company may “cease operating” in regions that impose excessively onerous restrictions.
One lesson Washington could take from Chinese regulators, according to Sheehan, is to be “targeted and iterative.”
“Build these tools so that they can keep improving as they keep regulating.”