AI: Debunking the Exaggerated Fear and Hype

Two reputable news organizations — Reuters and The Information — recently reported sources claiming that the recent drama around OpenAI’s leadership was based in part on a massive technological breakthrough at the company.

That breakthrough is something called Q* (pronounced cue-star), which is claimed to be able to do grade-school-level math and to integrate that mathematical reasoning to improve how it chooses responses.

Here’s everything you need to know about Q*, and why it’s nothing to freak out about.

The problem: AI can’t think

The LLM-based generative AI (genAI) revolution we’ve all been obsessing over this year is based on what is essentially a word- or number-prediction algorithm. It’s basically Gmail’s “Smart Compose” feature on steroids.

When you interact with a genAI chatbot, such as ChatGPT, it takes your input and responds based on prediction. It predicts the first word will be X, then the second word will be Y and the third word will be Z, all based on its training on massive amounts of data. But these chatbots don’t know what the words mean, or what the concepts are. They just predict the next words, within the confines of human-generated parameters.
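To make that prediction idea concrete, here is a minimal sketch of next-word prediction in Python, using a toy bigram model built from word counts. It is purely illustrative: real LLMs such as ChatGPT use transformer neural networks over subword tokens, and the training sentence, function names and sampling scheme below are invented for the example.

```python
# Toy next-word prediction: count which word follows which, then sample.
# Illustrative only -- real LLMs learn these statistics with neural networks.
from collections import Counter, defaultdict
import random

training_text = "the cat sat on the mat the cat ate the fish"

# Count how often each word follows each other word in the training data.
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def predict_next(word):
    """Pick a next word in proportion to how often it followed `word`."""
    candidates = follows.get(word)
    if not candidates:
        return None
    choices, counts = zip(*candidates.items())
    return random.choices(choices, weights=counts, k=1)[0]

# Generate a short continuation one predicted word at a time.
output = ["the"]
for _ in range(6):
    nxt = predict_next(output[-1])
    if nxt is None:
        break
    output.append(nxt)
print(" ".join(output))
```

Scale that word-count table up to billions of learned parameters trained on a huge slice of the internet and the output gets far more fluent, but the core move is the same: pick a statistically likely next word.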

That’s why artificial intelligence can be artificially stupid.

In May, a lawyer named Steven A. Schwartz used ChatGPT to write a legal brief for a case in Federal District Court. The brief cited cases that never existed. ChatGPT just made them up because LLMs don’t know or care about reality, only likely word order.

In September, the Microsoft-owned news site MSN published an LLM-written obituary for former NBA player Brandon Hunter. The headline read: “Brandon Hunter useless at 42.” The article claimed Hunter had “handed away at the age of 42” and that during his two-season career, he played “67 video games.”

GenAI can’t reason. It knows it’s possible to replace “dead” with “useless,” “passed” with “handed” and “games” with “video games.” But it’s too dumb to know that these alternatives are nonsensical in a basketball player’s obit.

The Q* solution: AI that can think

Although no actual facts are publicly known about Q*, the emerging consensus in AI circles is that the technology is being developed by a team led by OpenAI’s chief scientist, Ilya Sutskever, and that it combines the AI techniques Q-learning and A* search (hence the name Q*).

(Q-learning is a training technique that rewards an AI model for making the correct “decision” in the process of formulating a response. A* is an algorithm for searching the nodes of a graph and finding paths between them. Neither of these techniques is new or unique to OpenAI.)
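For a rough sense of the reward-driven half of that pairing, here is a minimal tabular Q-learning sketch in Python. The five-cell corridor environment, the hyperparameters and the variable names are all invented for illustration; nothing here reflects how OpenAI actually uses the technique, and the A* graph-search half isn’t shown.

```python
# Tabular Q-learning on a tiny corridor: the agent starts at cell 0 and is
# rewarded only for reaching cell 4. Illustrative toy example only.
import random

N_STATES = 5          # cells 0..4; reaching cell 4 ends an episode
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

# Q[state][action_index]: the learned value of taking an action in a state.
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for _ in range(500):                      # training episodes
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = max(range(2), key=lambda i: Q[state][i])
        next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Q-learning update: nudge the estimate toward the reward plus the
        # discounted value of the best action available from the next state.
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][a])
        state = next_state

# After training, the greedy policy should step right (action index 1) everywhere.
print([max(range(2), key=lambda i: Q[s][i]) for s in range(N_STATES - 1)])
```

The whole technique lives in the single update line: each time the agent acts, its estimate of that action’s value is nudged toward the reward it just received plus the discounted value of the best action available from the next state.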

The idea is that it could enhance ChatGPT by applying something like reason or mathematical logic — i.e., “thinking” — to arrive at better results. And, the hype goes, a ChatGPT that can think approaches artificial general intelligence (AGI).

The AGI goal,…

Original from www.computerworld.com
