Controlling AI deep fakes, mistakes, and biases may be possible despite their inevitability.

As generative AI platforms such as ChatGPT, Dall-E2, and AlphaCode barrel ahead at a breakneck pace, keeping the technology from hallucinating and spewing erroneous or offensive responses is nearly impossible.

As AI tools get better by the day at mimicking natural language, it will soon be impossible to distinguish fake results from real ones, prompting companies to set up “guardrails” against the worst outcomes, whether accidental or the deliberate work of bad actors.

AI industry experts speaking at the MIT Technology Review’s EmTech Digital conference this week weighed in on how generative AI companies are dealing with a variety of ethical and practical hurdles, even as they push ahead on developing the next generation of the technology.

“This is a problem in general with technologies,” said Margaret Mitchell, chief ethics scientist at machine learning app vendor Hugging Face. “It can be developed for really positive uses and then also be used for negative, problematic, or malicious uses; that’s called dual use. I don’t know that there’s a way to have any sort of guarantee any technology you put out won’t have dual use.

“But I do think it’s important to try to minimize it as much as possible,” she added.

Generative AI relies on large language models (LLMs), a type of machine learning technology that uses algorithms to generate responses to user prompts or queries. The LLMs draw on massive troves of information in databases or directly from the internet, and their behavior is governed by millions or even hundreds of billions of parameters that determine how that information is turned into responses.
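To make that prompt-and-response loop concrete, here is a minimal sketch using the open-source Hugging Face transformers library; the small “gpt2” model is used only as a stand-in for the far larger commercial models described above.

```python
# Minimal sketch: prompting a small open LLM with the Hugging Face
# transformers library. The "gpt2" model is illustrative only; production
# systems use models with billions of parameters.
from transformers import pipeline

# Build a text-generation pipeline around a pretrained model.
generator = pipeline("text-generation", model="gpt2")

prompt = "Explain what a large language model is in one sentence."

# Ask the model to continue the prompt; sampling makes the output vary
# from run to run, which is also why responses can be hard to predict.
result = generator(prompt, max_new_tokens=50, do_sample=True)

# The pipeline returns a list of candidate completions; print the first.
print(result[0]["generated_text"])
```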

The key to ensuring responsible research is robust documentation of LLMs: how their datasets were developed, why the models were created, and watermarks that identify content created by a computer model. Even then, problems are likely to emerge.
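Such documentation often takes the form of a “model card” published alongside the model. The sketch below is a hypothetical illustration of the kinds of fields that documentation might record; the field names and values are assumptions for illustration, not any vendor’s actual schema.

```python
# Hypothetical sketch of model-card-style documentation for an LLM release.
# Field names and values are illustrative assumptions, not a real schema.
model_card = {
    "model_name": "example-llm",  # placeholder name
    "intended_use": "research on dialogue safety",
    "training_data": "publicly available web text, filtered for quality",
    "known_limitations": [
        "may produce factually incorrect or toxic output",
        "reflects biases present in its training data",
    ],
    "watermarking": "generated text is tagged so downstream tools can flag it",
}

# Print the documentation so reviewers can see what the model claims to be.
for field, value in model_card.items():
    print(f"{field}: {value}")
```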

“In many ways, we cannot guarantee that these models will not produce toxic speech, [and] in some cases reinforce biases in the data they digested,” said Joelle Pineau, a vice president of AI research at Meta AI. “We believe more research is necessary…for those models.”

For generative AI developers, there’s a tradeoff between legitimate safety concerns and transparency for crowdsourcing development, according to Pineau. Meta AI, the research arm of Meta Platforms (formerly Facebook), won’t release some of the LLMs it creates for commercial use because it cannot guarantee there aren’t baked-in biases, toxic speech, or otherwise errant content. But it would allow them to be used for research to build trust, allow other researchers and application developers to know “what’s under the hood,” and help speed innovation.

Generative AI has been shown to have “baked-in biases,” meaning that when it is used for the discovery, screening, interviewing, and hiring of candidates, it can favor people based on race or gender. As a result, states,…
