‘Yeah, we’re spooked’: AI starting to have big real-world impact, says expert

According to a scientist who wrote a leading textbook on artificial intelligence, specialists are “spooked” by their own success in the field, comparing the advance of AI to the development of the atomic bomb.

Prof Stuart Russell, founder of the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley, said most experts believed machines more intelligent than humans would be developed this century, and he called for international treaties to govern the technology’s development.

“The AI community has yet to adjust to the fact that we are now starting to have a really big impact in the real world,” he told the Guardian. “For most of the field’s existence we were simply in the lab, building things, trying to get them to work, and mostly failing. So the question of real-world impact can no longer be ignored. We have to grow up very quickly to catch up.”

Artificial intelligence is employed in many facets of modern life, from search engines to banking, with recent breakthroughs in image recognition and machine translation among the most prominent.

Russell co-wrote the pioneering book Artificial Intelligence: A Modern Approach in 1995 and will give this year’s BBC Reith lectures on “Living with Artificial Intelligence”, which begin on Monday. He believes urgent work is needed to ensure humans retain control if super-intelligent AI is produced.

“AI has been built with a particular methodology and general approach in mind, and we’re not careful enough to use that kind of technology in complex real-world settings,” he added.

For example, asking AI to cure cancer as quickly as possible could be hazardous. “It would probably find ways of inducing tumours in the whole human population, so that it could run millions of experiments in parallel, using all of us as guinea pigs,” Russell said. “And that’s because that’s the solution to the objective we gave it. We just forgot to specify that you can’t use humans as guinea pigs and you can’t use up the whole world’s GDP to run your experiments, and you can’t do this and you can’t do that.”

Russell said there was still a big gap between today’s AI and that depicted in films such as Ex Machina, but a future with machines more intelligent than humans was on the horizon.

“I think estimates range from about 10 years for the most optimistic to a few hundred years,” said Russell. “But almost all AI specialists agree it will happen this century.”

One concern is that a machine does not need to be smarter than humans in every respect to pose a serious risk. “It’s something that’s unfolding now,” he said. “If you look at social media and the algorithms that choose what people read and watch, they have a huge amount of control over our cognitive input.”

The result, he says, is that the algorithms manipulate the user, brainwashing them so that their behaviour becomes more predictable in terms of what they choose to engage with, which boosts click-based revenue.

Have AI researchers been spooked by their own success? “Yeah, I think we are increasingly spooked,” Russell said.

“It reminds me a little bit of what happened in physics,” he added, noting that specialists had always stressed the idea was theoretical. “Physicists knew that atomic energy existed. They could measure the masses of different atoms, and they could work out how much energy could be released if you could do conversion between different types of atoms.” Then it happened, and they were taken aback.

He is particularly concerned about the use of artificial intelligence in military applications, such as small anti-personnel weapons. “Those are the ones that are very easily scalable, meaning you could put a million of them in a single truck, open the back, and off they go and wipe out a whole city,” Russell explained.

Russell believes the future of AI lies in building machines that know the true objective is uncertain, as are our preferences, meaning they must check in with humans – rather like a butler – over any decision. But the idea is complex, not least because different people have different, and sometimes conflicting, preferences, and those preferences are not fixed.

Russell called for a code of conduct for researchers, for laws and treaties to ensure the safety of AI systems in use, and for training of researchers to ensure AI is not susceptible to problems such as racial bias. He argued that European Union legislation barring machines from impersonating humans should be adopted worldwide.

Russell said he hoped the Reith lectures would emphasise that there are still choices to be made about the future. “It really matters that the public are involved in those choices,” he said, “because it’s the public who will benefit or not.”

But there was another message too. “Progress in AI may take a while, but that does not make it science fiction,” he said.
