Big tech has succeeded in distracting the world from the existential risks artificial intelligence still poses to humanity, a leading scientist and artificial intelligence campaigner has warned.
Max Tegmark told The Guardian, in an interview at the AI summit in Seoul, South Korea, that the shift in focus from the extinction of life to the broader notion of artificial intelligence safety risks an unacceptable delay in imposing strict regulation on the creators of the most powerful AI programs.
“In 1942, Enrico Fermi built the first reactor with a self-sustaining nuclear chain reaction under the Chicago football stadium,” said Tegmark, who trained as a physicist. “When the leading physicists of the day found out, they were really horrified, because they realized that the single biggest obstacle remaining to building a nuclear bomb had just been overcome. They realized the bomb was only a few years away; in fact, it was three years, with the Trinity test in 1945.
“An artificial intelligence model that can pass the Turing test [where someone cannot tell in conversation that they are not speaking to another human] is the same kind of warning for the sort of AI you could lose control over. That’s why people like Geoffrey Hinton and Yoshua Bengio, and even many tech CEOs, at least in private, are freaking out now.”
Because of these concerns, Tegmark’s nonprofit Future of Life Institute last year led a call for a six-month “pause” in advanced artificial intelligence research. The launch of OpenAI’s GPT-4 model that March, he said, was the canary in the coal mine, proof that the risk was unacceptably close.
Despite signatures from thousands of experts, including Hinton and Bengio, two of the three “godfathers” of artificial intelligence who pioneered the machine learning approach that underpins the field today, no pause was agreed.
Instead, the AI summits, of which Seoul’s is the second after the one held at Bletchley Park in the UK last November, have led the fledgling field of AI regulation. “We felt that letter legitimized this conversation, and it’s a great achievement,” Tegmark said. “Once people see that people like Bengio are worried, they think: ‘It’s OK for me to worry about it.’ Even the guy at my gas station said to me, after that, that he’s worried about AI replacing us.
“But now we need to move from talk to action.”
However, since the initial announcement of the Bletchley Park Summit, the focus of international AI regulation has shifted away from existential risks.
In Seoul, only one of the three “high-level” panels addressed safety directly, and it looked at “the full range” of risks, “from privacy breaches to job market disruption and potentially catastrophic consequences”. Tegmark argues that this downplaying of the most severe risks is not healthy, and is not accidental.
“This is exactly what I predicted would happen through industry lobbying,” he said. “In 1955, the first journal articles came out saying that smoking causes lung cancer, and you would have thought there would be some regulation fairly quickly. But no, it took until 1980, because there was a huge push by industry to distract attention. I feel that’s what’s happening now.
“Of course, artificial intelligence causes present-day harms as well: there’s bias, it harms marginalized groups … But as [the UK science and technology secretary] Michelle Donelan herself says, it’s not that we can’t deal with both. It’s a bit like saying, ‘Let’s not pay attention to climate change because there’s going to be a hurricane this year, so we should just focus on the hurricane.’”
Tegmark’s critics have made the same argument about his own claims: that the industry wants everyone to talk about hypothetical future risks in order to distract from concrete present-day harms, an accusation he dismisses. “Even if you think about it on its own merits, it’s pretty galaxy-brained: it would be quite the 4D chess move for someone like [OpenAI boss] Sam Altman, in order to avoid regulation, to tell everybody it could be lights out for everyone, and then try to convince people like us to sound the alarm.”
Instead, he argues, the muted support from some tech leaders comes about because “I think they all feel that they’re stuck in an impossible situation where, even if they want to stop, they can’t. If the CEO of a tobacco company wakes up one morning and feels that what they’re doing is not right, what’s going to happen? They’re going to replace the CEO. So the only way to get safety first is for the government to put in place safety standards for everybody.”