The US government has established the US AI Safety Institute Consortium (AISIC) to address the safety and security of artificial intelligence. The consortium brings together AI developers, users, and academics under the National Institute of Standards and Technology (NIST). Its goal is to develop guidelines for red-teaming AI systems, evaluating AI capabilities, managing risk, ensuring safety and security, and watermarking AI-generated content.
More than 200 companies and organizations, including Amazon.com, Carnegie Mellon University, Duke University, the Free Software Foundation, and Visa, are part of AISIC. Major developers of AI tools such as Apple, Google, Microsoft, and OpenAI are also members of the consortium.
The US Department of Commerce announced the creation of AISIC and emphasized the importance of setting AI safety standards while promoting innovation. The Biden administration has also named Elizabeth Kelly as the director of the newly formed US Artificial Intelligence Safety Institute (USAISI), which will house AISIC.
While the timeline for the consortium’s work is uncertain, President Joe Biden’s executive order on AI safety signals that regulation is on the way. The order emphasizes mitigating the risks associated with AI and calls for a society-wide effort to address these challenges.
Biden’s goals include requiring developers to share safety test results, developing standards and tools for safe AI deployment, protecting Americans against AI-enabled fraud, and establishing a cybersecurity program to address vulnerabilities in critical software. AISIC is seeking contributions from its members in each of these areas.
Lawmakers have introduced several AI-related bills in the US Congress, indicating a growing focus on AI regulation and safety.