As tech corporations begin to fold AI into all of their products and into every aspect of our lives, the creators of this ground-breaking technology frequently struggle to predict or explain the behavior of their own systems.
Why is it important?
That this may be the scariest part of the current AI boom is well known among those who design AI, but far less well understood by the general public.
- “It is not at all clear — not even to the scientists and programmers who build them — how or why the generative language and image models work,” said Alex Karp, CEO of Palantir, recently in The New York Times.
What’s Happening?
For many years we have relied on computer systems that, given the same input, produce the same output.
- In contrast, generative AI systems are designed to produce a range of possible outputs from a single query.
- The identical query can easily yield a variety of responses.
Because that element of randomness plays out on a scale involving billions of parameters, it is difficult to trace how generative AI arrives at any specific result.
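As a rough, hypothetical illustration (not how any particular product is built), the Python sketch below contrasts a conventional program, which always returns the same answer for the same input, with a toy "generator" that draws its next word at random from a probability distribution, so the same prompt can come back different each time. The vocabulary and probabilities are invented for the example.

```python
import random

# A conventional program: the same input always gives the same output.
def add(a, b):
    return a + b

# A toy stand-in for a generative model: the "next word" is drawn at
# random from a probability distribution over candidates, so repeated
# calls with the identical prompt can produce different continuations.
# (The words and probabilities below are made up for illustration.)
NEXT_WORD_PROBS = {
    "blue": 0.5,
    "cloudy": 0.3,
    "falling": 0.15,
    "a simulation": 0.05,
}

def toy_generate(prompt: str) -> str:
    words = list(NEXT_WORD_PROBS)
    weights = list(NEXT_WORD_PROBS.values())
    choice = random.choices(words, weights=weights, k=1)[0]
    return f"{prompt} {choice}"

print(add(2, 3), add(2, 3))            # always 5 5
print(toy_generate("The sky is"))      # may differ from run to run
print(toy_generate("The sky is"))
```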
Yes, at bottom it is all math. But saying so is like saying the human body is made entirely of atoms: true, but not much help when you need to solve a problem in a reasonable amount of time.
Driving the news:
In a paper published on Thursday, four researchers demonstrate how users can get around the “guardrails” that are meant to stop AI systems from, for example, explaining “how to make a bomb.”
- When asked directly, the most popular chatbots, such as ChatGPT, Bing, and Bard, won’t respond. But add a particular string of code to the prompt and they’ll go into great detail.
- The researchers speculated that such risks may be unavoidable given the fundamental nature of deep learning: you can’t build guardrails that will hold if you can’t precisely predict how the system will react to a new prompt.
Between the lines
Because AI developers find it so difficult to explain their systems’ behavior, the industry now relies as much on oral tradition and trade secrets as it does on formal science.
- In February, mathematician Stephen Wolfram wrote in “What Is ChatGPT Doing … and Why Does It Work?” that “it’s part of the lore of neural nets that — in some sense — so long as the setup one has is ‘roughly right,’” the network can typically zero in on the specifics just by doing enough training, without ever actually needing to “understand at an engineering level quite how the neural net has ended up configuring itself.”
This is where the “voodoo” begins, according to Wolfram:
“For reasons that perhaps one day we’ll have a scientific-style explanation of, if we always choose the term with the highest ranking, we’ll typically get a very ‘flat’ essay that never seems to ‘show any innovation’ (and sometimes even repeats word for word). But if we occasionally (at random) choose terms with lower rankings, we get a ‘more interesting’ essay.”
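What Wolfram is describing maps onto the difference between greedy decoding (always take the top-ranked word) and sampling with a “temperature” that sometimes picks lower-ranked words. Here is a small, hypothetical Python sketch of that contrast; the candidate words and their scores are made up, not taken from any real model:

```python
import math
import random

# Toy "scores" a model might assign to candidate next words.
# These numbers are invented for illustration only.
SCORES = {"the": 4.0, "a": 3.5, "its": 2.0, "gleaming": 0.5, "whispering": 0.1}

def greedy_pick(scores):
    """Always take the highest-ranked word: deterministic, often 'flat'."""
    return max(scores, key=scores.get)

def temperature_pick(scores, temperature=0.8):
    """Sample from a softmax over the scores: occasionally picks
    lower-ranked words, which Wolfram credits with making the text
    'more interesting'."""
    words = list(scores)
    weights = [math.exp(scores[w] / temperature) for w in words]
    return random.choices(words, weights=weights, k=1)[0]

print(greedy_pick(SCORES))                            # always "the"
print([temperature_pick(SCORES) for _ in range(5)])   # varies run to run
```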
What about the other side?
Some experts contend that “we don’t understand AI” is itself a misconception.
- Earlier this year, Princeton computer scientist Arvind Narayanan tweeted that “the black boxness of neural networks is greatly exaggerated,” arguing that researchers have good tools to reverse engineer them and that the real barriers are political (funding flows to companies rather than to research on societal benefit) and cultural (building things is seen as cooler than understanding them).
- Other critics contend that the “we don’t know how it works” defense is a dodge that AI companies use to escape accountability.
What to watch for?
It’s still unclear whether AI developers will eventually be able to offer more thorough explanations of how and why their systems work.
But the odds of getting those answers improve as more companies build AI that can clearly explain its decisions and how it reaches them.