Experts Claim That Halting AI Is Neither Feasible Nor Desirable: Q&A

As generative AI tools such as OpenAI’s ChatGPT and Google’s Bard evolve at a breakneck pace, raising questions about trustworthiness and even human rights, experts are weighing whether, and how, the technology can be slowed and made safer.

In March, the nonprofit Future of Life Institute published an open letter calling for a six-month pause on training AI systems more powerful than GPT-4, the model behind ChatGPT, the chatbot from Microsoft-backed OpenAI. The letter, now signed by more than 31,000 people, argued that powerful AI systems should be developed only once their risks can be managed.

“Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?” the letter asked.

Apple co-founder Steve Wozniak and SpaceX and Tesla CEO Elon Musk joined thousands of other signatories in agreeing that AI poses “profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs.”

In May, the nonprofit Center for AI Safety published a similar open letter declaring that AI poses a global extinction risk on par with pandemics and nuclear war. Signatories to that statement included many of the very AI scientists and executives who brought generative AI to the masses.

Generative AI is also expected to replace jobs, and a lot of them. In March, Goldman Sachs released a report estimating that generative AI, with its ability to automate tasks, could affect as many as 300 million jobs globally. And in early May, IBM said it would pause plans to fill about 7,800 positions, estimating that nearly three in 10 of its back-office jobs could be replaced by AI over a five-year period, according to a Bloomberg report.

While past industrial revolutions automated tasks and displaced workers, those changes ultimately created more jobs than they eliminated. The steam engine, for example, needed coal to function, and it needed people to build and maintain it.

Generative AI, however, has no clear equivalent in those industrial revolutions. AI systems can learn with minimal human direction, and the largest models have already ingested much of the information humans have created. Soon, AI may begin to supplement human knowledge with output of its own.

Geoff Schaefer is head of Responsible AI at Booz Allen Hamilton, a US government and military contractor specializing in intelligence work. Susannah Shattuck is head of product at Credo AI, an AI governance SaaS vendor.

Computerworld spoke recently with Schaefer and Shattuck about the future of AI and its impact on jobs and society as a whole. The following are excerpts from that interview.

What risks does generative AI pose?

Shattuck: “Algorithmic bias. These are systems that are making predictions based on patterns in data that they’ve been trained on. And as we all know, we live in a biased world. That data we’re training these systems on is often biased, and if we’re not careful and thoughtful…
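
Shattuck’s point about patterns in biased training data can be made concrete with a small sketch. The example below is not from the interview; it assumes Python with NumPy and scikit-learn, and every name and number in it is hypothetical. It trains a simple classifier on synthetic loan records in which past approvals depended partly on group membership, then shows the model reproducing that disparity for otherwise identical applicants.

    # A minimal, hypothetical sketch (not from the interview) of the failure
    # mode Shattuck describes: a model trained on skewed historical data
    # reproduces that skew in its predictions. All data here is synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    # Hypothetical loan records: a protected attribute (group) and one
    # legitimate feature (score). Both groups share the same score distribution.
    group = rng.integers(0, 2, size=n)
    score = rng.normal(600.0, 50.0, size=n)

    # Historical bias baked into the labels: past approvals depended on
    # group membership, not just on score.
    approved = (score + 30 * group + rng.normal(0.0, 20.0, size=n)) > 615

    X = np.column_stack([group, score])
    model = LogisticRegression(max_iter=1000).fit(X, approved)

    # Two applicants with identical scores but different group membership:
    # the trained model assigns them different approval probabilities,
    # because it has learned the historical disparity.
    applicants = np.array([[0, 620.0], [1, 620.0]])
    print(model.predict_proba(applicants)[:, 1])

The two printed probabilities differ only because of group membership; that gap is the learned bias Shattuck warns about, and it persists unless the protected attribute and its proxies are handled deliberately.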
