Google’s AI chatbot appears reluctant to depict white men

Google, the technological giant, has recently unveiled its updated smart assistant. Formerly known as Bard, the chatbot has been rebranded as Gemini and now features enhanced artificial intelligence. While developers have likened its capabilities to those of OpenAI’s GPT-4, some of Gemini’s results have left users puzzled.

Following extensive use and testing, users have raised concerns about the quality of Gemini’s responses. At times, its answers defy the logic of the questions posed, something rarely seen even in early AI chatbots. Furthermore, the model appears biased toward depicting dark-skinned individuals and people of other ethnicities.

Users have found it nearly impossible to prompt Gemini to generate an image of a white man. Even when asked to be “historically accurate,” the neural network consistently distorts the context of requests, favoring images of dark-skinned individuals or representatives of other ethnic groups.

As a result, social media has been abuzz with jokes about black kings in medieval Europe and Asians of Scottish descent.

Article from www.playground.ru
