Can generative AI surpass the law’s power?

Generative AI, led by Microsoft and Microsoft-backed OpenAI, has turned into what seems an unstoppable juggernaut. Since OpenAI released an early demo of its generative AI tool ChatGPT less than eight months ago, the technology has seemingly taken over the tech world.

Tech behemoths like Microsoft, Google, and Meta have gone all in, with countless smaller companies and startups searching for tech gold. Critics, including many AI researchers, worry that if the technology continues unchecked, it could become increasingly dangerous, spread misinformation, invade privacy, steal intellectual property, take control of vital infrastructure, and even pose an existential threat to humankind.

The only recourse, it seems, is the courts and federal agencies. As I’ve noted before, Microsoft and OpenAI have insinuated themselves into the good graces of many lawmakers, including those who will decide whether and how to regulate AI, so Congress may well be beyond hope. That’s why government agencies and the courts need to act.

The stakes couldn’t be higher. And now, thanks to a spate of lawsuits and action by the US Federal Trade Commission (FTC), we may soon find out whether Microsoft’s AI and OpenAI are mightier than the law.

The FTC steps up

Federal agencies have rarely been aggressive with tech companies. If they do try to act, it’s usually well after harm has been done. And the result is typically, at best, a slap on the wrist.

That’s not the case under the Biden administration, though. The FTC hasn’t been shy about going after Big Tech. And in the middle of July, it took its most important step yet: it opened an investigation into whether Microsoft-backed OpenAI has violated consumer protection laws and harmed consumers by illegally collecting data, violating consumer privacy, and publishing false information about people.

In a 20-page letter to OpenAI, the FTC said it’s probing whether the company “engaged in unfair or deceptive privacy or data security practices or engaged in unfair or deceptive practices relating to risks of harm to consumers.”

The letter made clear how seriously the FTC takes the investigation. It wants vast amounts of information, including technical details about how ChatGPT gathers, uses, and stores data; the company’s use of APIs and plugins; and how OpenAI trains, builds, and monitors the large language models (LLMs) that fuel its chatbot.

None of this should be a surprise to Microsoft or OpenAI. In May, FTC Chair Lina Khan wrote an opinion piece in The New York Times laying out how she believed AI must be regulated. She wrote that the FTC wouldn’t allow “business models or practices involving the mass exploitation of their users,” adding, “Although these tools are novel, they are not exempt from existing rules, and the FTC will vigorously enforce the laws we are charged with administering, even in this new market.”

In addition to the…

