Major powers including the US, UK, and Japan join forces to sign AI security agreement

Twenty-two law enforcement and intelligence agencies from 18 countries signed an international agreement on AI safety over the weekend, designed to make new versions of the technology “secure by design.”

This agreement comes months after the European Parliament approved its draft of the EU AI Act in June, which would ban certain AI technologies, including biometric surveillance and predictive policing, and classify AI systems that could significantly affect health, safety, rights, or elections as high risk.

“AI systems have the potential to bring many benefits to society. However, for the opportunities of AI to be fully realized, it must be developed, deployed, and operated in a secure and responsible way,” the latest agreement stated.

The agreement emphasized that with the rapid pace of AI development, security must not be an afterthought but rather a core requirement integrated throughout the life cycle of AI systems.

“AI systems are subject to novel security vulnerabilities that need to be considered alongside standard cyber security threats,” the report said. “When the pace of development is high — as is the case with AI — security can often be a secondary consideration.”

One way AI-specific security differs from conventional cybersecurity is a phenomenon called “adversarial machine learning.”

The report calls adversarial machine learning a critical concern in the developing field of AI security, defining it as the strategic exploitation of fundamental vulnerabilities inherent in machine learning components.

By manipulating these elements, adversaries can potentially disrupt or deceive AI systems, leading to erroneous outcomes or compromised functionality.
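
As a rough illustration of the idea (not taken from the agreement itself), the Python sketch below applies a fast-gradient-sign-style perturbation to a toy logistic-regression classifier; the model, its weights, the input, and the perturbation budget are all hypothetical.

```python
# Minimal sketch of an adversarial machine learning "evasion" attack in the
# spirit of the fast gradient sign method (FGSM). Everything here is a toy
# example; real attacks target trained production models.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained logistic-regression classifier: weights and bias.
w = np.array([2.0, -3.0, 1.5])
b = 0.1

def predict(x):
    # Probability that the input belongs to the positive class.
    return sigmoid(np.dot(w, x) + b)

# A legitimate input the model currently classifies as positive (y = 1).
x = np.array([0.5, -0.1, 0.2])
y = 1.0

# Gradient of the binary cross-entropy loss with respect to the INPUT.
# For logistic regression this reduces to (p - y) * w.
grad_x = (predict(x) - y) * w

# FGSM step: nudge every feature slightly in the direction that increases
# the loss, bounded by a small budget epsilon.
epsilon = 0.3
x_adv = x + epsilon * np.sign(grad_x)

print(f"clean input       -> P(positive) = {predict(x):.3f}")     # ~0.85
print(f"adversarial input -> P(positive) = {predict(x_adv):.3f}")  # ~0.44, label flips
```

A perturbation of at most 0.3 per feature is enough to flip the toy model’s prediction, the kind of erroneous outcome the guidance urges developers to anticipate and design against.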

Beyond the EU’s AI bill, US President Joe Biden signed an executive order in October to regulate AI development, requiring developers of powerful AI models to share safety test results and critical information with the government.

China is not a signatory

The agreement was signed by government agencies from Australia, Canada, Chile, the Czech Republic, Estonia, France, Germany, Israel, Italy, Japan, New Zealand, Nigeria, Norway, Poland, South Korea, and Singapore, in addition to the UK and the US. Absent from the agreement was China, a powerhouse of AI development and the target of several US trade sanctions intended to limit its access to the high-powered silicon required for AI development.

In a speech at a chamber of commerce event in Taiwan on Sunday, TSMC chairman Mark Liu argued that the US move to exclude China would lead to a global slowdown in innovation and a fragmentation of globalization.

AI remains a legal minefield

The agreement is nonbinding and primarily offers general recommendations; it does not address thornier questions about the proper applications of AI or how the data that feeds AI models is collected.

It does not touch on the ongoing civil litigation within the US over how AI models ingest data to grow their…

2023-11-28 10:41:03
Article from www.computerworld.com
