Collaboration on AI Safety Established between China, US, and EU

China and the US agreed to work together with at least 25 other countries to mitigate risks arising from the progression of AI.

The two countries, along with several others including the EU, India, Germany, and France, signed an agreement, dubbed the Bletchley Declaration, at the UK AI Safety Summit to form a common line of thinking that would oversee the evolution of AI and ensure the technology advances safely.

The agreement is named after Bletchley Park, the Buckinghamshire estate that was home to British code breakers during the Second World War. The summit at which the agreement was signed was attended by various political leaders and chief executives of technology companies, including Elon Musk and OpenAI’s Sam Altman.

The Bletchley Declaration calls upon signatory countries to take two broad approaches to controlling frontier risks arising from AI, such as the technology’s potential role in nuclear, chemical, or biological warfare.

“We are especially concerned by such risks in domains such as cybersecurity and biotechnology, as well as where frontier AI systems may amplify risks such as disinformation. There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models,” the agreement read.

The first approach is to identify AI safety risks of shared concern and then build a shared scientific and evidence-based understanding of these risks.

Once an understanding is formed, the member countries will assess evolving AI capabilities against it to gauge the impact of new capabilities on society at large, according to the agreement.

The second approach is to build respective risk-based policies across member countries to ensure safety in light of such risks, with the understanding that individual approaches may vary depending on national circumstances and applicable legal frameworks.

“This (the second approach) includes, alongside increased transparency by private actors developing frontier AI capabilities, appropriate evaluation metrics, tools for safety testing, and developing relevant public sector capability and scientific research,” the agreement read.

Earlier, in May, hundreds of tech industry leaders, academics, and other public figures signed an open letter warning that AI evolution could lead to an extinction event and saying that controlling the technology should be a top global priority.

This week, ChatGPT creator OpenAI said it was readying a team to prevent what it calls frontier AI models from starting a nuclear war, among other threats.

US President Joe Biden’s administration also issued a long-awaited executive order this week that hammers out clear rules and oversight measures to keep AI in check while providing paths for the technology to grow.
