Microsoft outlines AI governance strategy, seeks US agency collaboration

Microsoft has proposed the creation of a new US agency to oversee AI, citing concerns about the safety and security of the technology. Brad Smith, Microsoft’s president, called for the establishment of an agency dedicated to governing AI and AI-based tools, echoing earlier remarks by OpenAI CEO Sam Altman. During a speech in Washington, Smith also warned about the potential for AI-generated false content, or “deepfakes”.

In a separate blog post, Smith outlined a five-point plan for the public governance of AI. It includes implementing government-led AI safety frameworks and a mechanism to identify AI-generated content. Smith also called for effective safety brakes on AI systems that control critical infrastructure, and for a broad legal and regulatory framework that reflects the technology architecture of AI itself.

Smith argued that companies should build their next AI tools on top of government-led regulations, and highlighted the US National Institute of Standards and Technology’s new AI Risk Management Framework as a useful starting point. He also suggested that laws governing AI models and those governing AI infrastructure operators should be developed separately.

Overall, Smith’s proposals aim to ensure that AI is developed and used in a responsible and safe manner, with effective human oversight and resilience.

Published 2023-05-29 | Source: www.computerworld.com