Tech companies across the globe commit to fresh set of voluntary rules
Leading AI companies have agreed to a new set of voluntary safety commitments, announced by the UK and South Korean governments before a two-day AI summit in Seoul.
The commitments involve 16 tech companies, including Amazon, Google, Meta, Microsoft, OpenAI, xAI, and Zhipu AI.
Among the commitments, companies pledge “not to develop or deploy a model at all” if severe risks can’t be managed.
Companies have also agreed to publish how they’ll measure and mitigate risks associated with AI models.
The new commitments come after eminent AI researchers, including Yoshua Bengio, Geoffrey Hinton, Andrew Yao, and Yuval Noah Harari, published a paper in Science titled “Managing extreme AI risks amid rapid progress.”
That paper made several recommendations which helped guide the new safety framework:
- Oversight and honesty: Developing methods to ensure AI systems are transparent and produce reliable outputs.
- Robustness: Ensuring AI systems behave predictably in new situations.
- Interpretability and transparency: Understanding AI decision-making processes.
- Inclusive AI development: Mitigating biases and integrating diverse values.
- Evaluation for dangerous actions: Developing rigorous methods to assess AI capabilities and predict risks before deployment.
- Evaluating AI alignment: Ensuring AI systems align with intended goals and do not pursue harmful objectives.
- Risk assessments: Comprehensively assessing societal risks associated with AI deployment.
- Resilience: Creating defenses against AI-enabled threats such as cyberattacks and social manipulation.
Anna Makanju, vice president of global affairs at OpenAI, said of the new commitments: “The field of AI safety is quickly evolving, and we are particularly glad to endorse the commitments’ emphasis on refining approaches alongside the science. We remain committed to collaborating with other research labs, companies, and governments to ensure AI is safe and benefits all of humanity.”
Michael Sellitto, head of global affairs at Anthropic, commented similarly: “The Frontier AI safety commitments underscore the importance of safe and responsible frontier model development. As a safety-focused organization, we have made it a priority to implement rigorous policies, conduct extensive red teaming, and collaborate with external experts to make sure our models are safe. These commitments are an important step forward in encouraging responsible AI development and deployment.”
Another voluntary framework
This mirrors the “voluntary commitments” made at the White House in July last year by Amazon, Anthropic, Google, Inflection AI, Meta, Microsoft, and OpenAI to encourage AI technology’s safe, secure, and transparent development.
The new rules state that the 16 companies will “provide public transparency” on their safety implementations, except where doing so would increase risk or disclose sensitive commercial information out of proportion to the societal benefit.
UK Prime Minister Rishi Sunak said, “It’s a world first to have so many leading AI companies from so many different parts of the globe all agreeing to the same commitments on AI safety.”
It’s a world first partly because firms from outside North America, such as China’s Zhipu AI, have signed up.
However, voluntary commitments to AI safety have been in vogue for a while. There’s little risk for AI companies in agreeing to them, as there’s no means of enforcing them, which also shows how blunt an instrument they are when push comes to shove.
Dan Hendrycks, the safety adviser to Elon Musk’s startup xAI, noted that the voluntary commitments would help “lay the foundation for concrete domestic regulation.”
A fair comment, but by that admission, the foundations have yet to be laid, even as some leading researchers warn that extreme risks are imminent.
Not everyone agrees on how dangerous AI really is, but the point stands: the sentiment behind these frameworks has yet to translate into action.
Nations form AI safety network
As this smaller AI safety summit got underway in Seoul, South Korea, ten nations and the European Union (EU) agreed to establish an international network of publicly backed “AI Safety Institutes.”
The agreement, the “Seoul Statement of Intent toward International Cooperation on AI Safety Science,” involves the UK, the United States, Australia, Canada, France, Germany, Italy, Japan, South Korea, Singapore, and the EU.
Notably absent from the agreement was China. However, the Chinese government participated in the summit, and a Chinese firm, Zhipu AI, signed up to the framework described above.
China has previously expressed a willingness to cooperate on AI safety and has been in ‘secret’ talks with the US.
This smaller interim summit came with less fanfare than the first, held at Bletchley Park in the UK last November.
However, several well-known tech figures attended, including Elon Musk, former Google CEO Eric Schmidt, and DeepMind co-founder Sir Demis Hassabis.
More commitments and discussions will come to light over the coming days.