Sam Altman says international agency should monitor AI models

OpenAI CEO Sam Altman says an international agency should be set up to monitor powerful future frontier AI models and ensure their safety.

In an interview on the All-In podcast, Altman said that we’ll soon see frontier AI models that will be significantly more powerful and potentially more dangerous.

Altman said, “I think there will come a time in the not super distant future, like we’re not talking decades and decades from now, where frontier AI systems are capable of causing significant global harm.”

Authorities in the US and the EU have both been passing legislation to regulate AI, but Altman doesn’t believe inflexible legislation can keep up with how quickly AI is advancing. He is also critical of individual US states attempting to regulate AI independently.

Speaking about anticipated advanced AI systems, Altman said, “And for those kinds of systems, in the same way we have like global oversight of nuclear weapons or synthetic bio or things that can really like have a very negative impact way beyond the realm of one country, I would like to see some sort of international agency that is looking at the most powerful systems and ensuring like reasonable safety testing.”

Altman said this kind of international oversight would be necessary to prevent a superintelligent AI from being able to “escape and recursively self-improve.”

Altman acknowledged that while oversight of powerful AI models is necessary, overregulation of AI could stifle progress.

His suggested approach is similar to international nuclear regulation. The International Atomic Energy Agency has oversight over member states with access to meaningful amounts of nuclear material.

“If the line where we’re only going to look at models that are trained on computers that cost more than 10 billion or more than 100 billion or whatever dollars, I’d be fine with that. There’d be some line that’d be fine. And I don’t think that puts any regulatory burden on startups,” he explained.

Altman explained why he felt the agency approach was better than trying to legislate AI.

“The reason I have pushed for an agency-based approach for kind of like the big picture stuff and not… write it in laws… in 12 months, it will all be written wrong… And I don’t think even if these people were, like, true world experts, I don’t think they could get it right looking at 12 or 24 months,” he said.

When will GPT-5 be released?

When asked about a GPT-5 release date, Altman was predictably unforthcoming but hinted that it may not happen the way we think.

“We take our time when releasing major models…Also, I don’t know if we’ll call it GPT-5,” he said.

Altman pointed to the iterative improvements OpenAI has made to GPT-4 and said these are a better indication of how the company will roll out future improvements.

So it seems like we’re less likely to see a release of “GPT-5” and more likely to have additional features added to GPT-4.

We’ll have to wait for OpenAI’s update announcements later today for more clues about what ChatGPT changes to expect.

You can listen to the full interview here.
