What happens when genAI vendors kill off their best sources?

If you think the latest generative AI (genAI) tools such as Google AI Overviews and OpenAI GPT-4o will change the world, you’re right. They will. But will they change it for the better? That’s another question.

I’ve been playing with both tools (and other genAI programs, as well). I’ve found they’re still prone to hallucinations, but sound more convincing than ever. That’s not a good thing.

One of the reasons I’m still making a living as a tech journalist is that I’m very good at discerning fact from fantasy. Part of that skill set comes from being an excellent researcher. The large language models (LLMs) that underpin genAI chatbots…, not so much. Today, and for the foreseeable future, genAI at its best is really just very good at copying and pasting from the work of others.

That means the results these tools spit out are only as good as their sources. Look at it this way: if I want to know the latest news, I go to The New York Times, The Washington Post, and The Wall Street Journal. Not only do I trust their reporters, but I know what their biases are.

For example, I know I can believe what the Journal has to say about financial news, but I take their columnists with a huge grain of salt. (That’s just me; you might love them.)

As for the Times, remember that it claims OpenAI stole its stories to train ChatGPT — and if it wins its case, genAI is in trouble, because other publishers will follow in quick succession. When that happens, all the genAI engines will have to steal — uhm, learn — their content from the likes of Reddit; your “private” Slack messages; and Stack Overflow, where users are sabotaging their answers to screw up OpenAI.

2024-05-24 01:51:02
Link from www.computerworld.com
