Ask ChatGPT “Why is the sky blue?” and seconds later, it will tell you: “The blue color of the sky is primarily due to a phenomenon called Rayleigh scattering,” which the chatbot goes on to explain in a textbook-like, six-paragraph response. Follow up with, “Explain like I am 5 and make it short, please,” and back will come: “The sky is blue because tiny things in the air make the blue light from the sun bounce around and come to our eyes.”
Released late last year, ChatGPT quickly captivated public imagination, raising the visibility of generative AI. More chatbots, such as Google’s Bard, followed. But amid the buzz, critics have warned of generative AI’s inaccuracies, biases and plagiarism (SN: 4/12/23). And then in mid-November, Sam Altman, the CEO of OpenAI, the company that developed ChatGPT and other generative AI models such as DALL-E 3, was fired, then rehired days later. In response, most of the company’s board resigned. The upheaval sparked widespread discussion about the risks of rushing to commercialize generative AI without building in safeguards to ensure the technology doesn’t cause harm.
To understand how generative AI came to dominate headlines and what’s next, Science News spoke with Melanie Mitchell of the Santa Fe Institute, one of the world’s leading AI experts. This interview has been edited for length and clarity.
SN: Why was generative AI big this year?
Source: www.sciencenews.org, December 11, 2023