MOUNTAIN VIEW, Calif. — At Google’s annual developer conference in May, James Manyika, the company’s new director of “tech and society,” was left to strike a sobering note amid the enthusiastic excitement about artificial intelligence.
Speaking to a large crowd gathered in an outdoor stadium, Manyika addressed the problem of fake images and the ways artificial intelligence can reinforce racism and misogyny in society. He warned that as the technology advances, more problems will surface.
But don’t worry, he assured the audience: Google is taking “a responsible approach to AI.” As Manyika spoke, the words “bold and responsible” flashed across a giant screen.
The phrase has become Google’s catchphrase for the AI era, replacing “don’t be evil,” which the company removed from the preamble of its code of conduct in 2018. The slogan captures Silicon Valley’s overarching message on AI, as many of the industry’s most important figures race to build ever more potent versions of the technology while warning of its dangers and calling for government oversight and regulation.
Manyika, a Zimbabwean-born former technology adviser to the Obama administration who holds a PhD in AI from Oxford, has embraced this duality in his role as Google’s AI ambassador.
He insists that technology will have incredible positive effects on human society and that Google is the ideal custodian of this promising future. Yet shortly after the developer conference, Manyika joined hundreds of other AI experts in signing a one-sentence statement warning that AI poses a “risk of extinction” comparable to “pandemics and nuclear war.”
AI is “an amazing, powerful, transformational technology,” Manyika said in a recent interview. At the same time, he acknowledged, “bad things could happen.”
Bad things, critics say, are already happening. Since its launch in November, OpenAI’s ChatGPT has produced reams of false content, including a fabricated sexual harassment allegation that named a real law professor.
Open-source variants of Stability AI’s Stable Diffusion model have flooded the internet with realistic images of child sexual abuse, hampering efforts to stop real crimes. Early versions of Microsoft’s Bing chatbot took an unsettlingly ominous and antagonistic tone with early users.
And a recent Washington Post examination found that several chatbots, including Google’s Bard, recommended dangerously low-calorie diets, smoking, and even tapeworms as weight-loss methods.
“Google’s AI products, like Bard, are already having negative effects. And that’s the issue with ‘boldness’ as opposed to ‘responsible’ AI development,” said Tamara Kneese, a senior researcher and project manager at Data & Society, a nonprofit that studies the consequences of AI.
“Big tech companies are calling for regulation,” Kneese said. “But at the same time, they are shipping products quickly and with little to no oversight.”
As prominent experts warn of longer-term dangers, including the possibility that the technology could one day surpass human intelligence, regulators around the world are racing to decide how to rein it in. Nearly every week brings a Capitol Hill hearing devoted to AI.
If AI has a trust problem, so does Google. The company has long struggled to convince users that it can protect the enormous quantity of data it gathers from their search histories and email inboxes.
On AI in particular, the company’s reputation is shaky: in 2020, it dismissed the renowned AI ethics researcher Timnit Gebru after she released a paper arguing that the company’s AI could absorb racism and sexism from the data it was trained on.
Google debuted its chatbot earlier this year, rushing to catch up after ChatGPT and other rivals had already captured the public’s attention. Meanwhile, the tech giant faces intense competition: rivals such as Microsoft and a raft of well-funded start-ups see AI as a way to loosen Google’s grip on the internet economy.
Manyika has entered this high-stakes environment with poise and assurance. He is a seasoned presence on the conference circuit and sits on a staggering array of powerful boards, including the White House AI advisory group, where he serves as vice chair.
He spoke at the Cannes Lions Festival in June and appeared on “60 Minutes” in April. He has addressed the United Nations and is a regular at Davos.
And in every interview, conference appearance, and blog post, he offers reassurance about Google’s role in the AI gold rush, describing the company’s strategy with the same three words: “bold and responsible.”