Is Microsoft’s Reception of Biden’s New AI Order Positive or Negative?

President Joseph R. Biden Jr. last month laid down a wide-ranging executive order targeting generative AI, covering everything from safety and security measures to bias and civil rights to oversight of how genAI is produced. On the surface, the order sounds like a comprehensive and powerful one.

But is it really? Microsoft, along with most other big genAI creators, welcomed the order, with Microsoft Vice Chair and President Brad Smith calling it “another critical step forward in the governance of AI technology…. We look forward to working with US officials to fully realize the power and promise of this emerging technology.”

He wasn’t alone. Other tech execs hailed it as well. Why? The New York Times put it this way: “Executives at companies like Microsoft, Google, OpenAI and Meta have all said that they fully expect the United States to regulate the technology — and some executives, surprisingly, have seemed a bit relieved. Companies say they are worried about corporate liability if the more powerful systems they use are abused. And they are hoping that putting a government imprimatur on some of their AI-based products may alleviate concerns among consumers.”

That brings up a basic question: Does Smith’s and other tech leaders’ support for government regulation mean we can feel secure that AI will be deployed responsibly? Or are they pleased with Biden’s action because they’ll be left alone to do what they please?

To answer that, we first need to look into the details of the order.

Biden faces off against unregulated AI

Biden was blunt about why he issued the order: “To realize the promise of AI and avoid the risks, we need to govern this technology. There’s no other way around it.”

Presidents frequently use executive orders as a way to make it appear they’re taking serious action while doing little more than scoring political points. This time, it’s different. The genAI regulations are based on a carefully researched analysis of the many ways the technology could go off the rails and cause serious harm if developed unfettered. They’re designed to erect guardrails around it.

The standards focus on multiple areas, the most important of which are safety and security, privacy, and equity and civil rights. Among the safety and security strictures is a requirement that companies that develop the biggest AI systems — think Microsoft, Google, Facebook, and OpenAI — must safety-test their systems and share the results with the government. That way, the order claims, the government can make sure the systems are safe and secure before they’re released.

Additionally, several government agencies, including the National Institute of Standards and Technology and the US Department of Homeland Security, will establish “red-team” testing standards overseeing “critical infrastructure, as well as chemical, biological, radiological, nuclear, and…

2023-11-20 10:41:02
Post from www.computerworld.com