Foo Yun Chee of Reuters reports the G7 countries, including Canada, France, Germany, Italy, Japan, Britain, and the United States, along with the European Union, will agree on Monday to a voluntary code of conduct for companies involved in the development of advanced AI systems.
The 11-point code aims to establish guidelines for creating “safe, secure, and trustworthy AI worldwide.”
The 11-point code encourages companies to identify and evaluate risks throughout the AI lifecycle and calls for public reporting on AI systems’ capabilities and limitations.
Though the code “is meant to help seize the benefits and address the risks and challenges brought by these technologies,” it is described as “voluntary.” It does not include mandates or legally binding requirements; instead, it provides “voluntary guidance” for organizations developing advanced AI systems and encourages them to identify, evaluate, and mitigate risks.
Regulation may be on the horizon, Chee reports.
European Commission digital chief Vera Jourova, speaking at a forum on internet governance in Kyoto, Japan, earlier this month, said that a code of conduct was a strong basis for ensuring safety and that it would serve as a bridge until formal regulation is in place.
From my perspective, countries, including the UK and the US, will want to go slow on AI regulation so as not to get in the way of the innovation AI is likely to bring. There’s also the question of whether government agencies are competent to regulate AI at all.
Note that the G7 code applies to advanced AI systems, generally understood to mean AI technologies that go beyond basic machine learning algorithms. These involve complex models such as deep neural networks, natural language processing, computer vision, and AI systems capable of performing cognitive tasks typically done by humans.