EU kickstarts AI code of practice to balance innovation & safety

Oct 1, 2024 - 07:00

The European Commission has kicked off its project to develop the first-ever General-Purpose AI Code of Practice, and it’s tied closely to the recently passed EU AI Act.

The Code aims to set clear ground rules for general-purpose AI models like ChatGPT and Google Gemini, especially around transparency, copyright, and managing the risks these powerful systems pose.

At a recent online plenary, nearly 1,000 experts from academia, industry, and civil society gathered to help shape what this Code will look like.

The process is being led by a group of 13 international experts, including Yoshua Bengio, one of the ‘godfathers’ of AI, who is chairing the working group focused on technical risks. Bengio won the Turing Award, effectively the Nobel Prize of computing, so his opinions carry considerable weight.

Bengio’s outspoken concerns about the catastrophic risks that powerful AI could pose to humanity hint at the direction his working group is likely to take.

The working groups will meet regularly to draft the Code, with the final version expected by April 2025. Once finalized, the Code will have a significant impact on any company looking to deploy its AI products in the EU.

The EU AI Act lays out a strict regulatory framework for AI providers, but the Code of Practice will be the practical guide companies will have to follow. The Code will deal with issues like making AI systems more transparent, ensuring they comply with copyright laws, and setting up measures to manage the risks associated with AI.

The teams drafting the Code will need to balance responsible, safe AI development against the risk of stifling innovation, something the EU is already being criticized for. The latest AI models and features from Meta, Apple, and OpenAI have not been fully deployed in the EU due to its already strict GDPR privacy laws.

The implications are huge. If done right, this Code could set global standards for AI safety and ethics, giving the EU a leadership role in how AI is regulated. But if the Code is too restrictive or unclear, it could slow down AI development in Europe, pushing innovators elsewhere.

While the EU would no doubt welcome global adoption of its Code, this is unlikely, as China and the US appear more pro-development than risk-averse. The veto of California’s SB 1047 AI safety bill is a good example of these differing approaches to AI regulation.

AGI is unlikely to emerge from the EU tech industry, but the EU is also less likely to be ground zero for any potential AI-powered catastrophe.
