Elon Musk has revealed a new generative AI chatbot that will take on rival offerings built on several large language models, including OpenAI’s ChatGPT, Google’s PaLM 2, and Anthropic’s Claude 2.
The chatbot, christened Grok AI, was developed by xAI — the generative AI venture Musk launched in July with the stated goal of building AI that understands the true nature of the universe and is safer as a result.
“Grok is an AI modeled after the ‘Hitchhiker’s Guide to the Galaxy’, so intended to answer almost anything and, far harder, even suggest what questions to ask,” the xAI team posted on its website.
The team further said that Grok has been designed to answer questions with a bit of wit and “has a rebellious streak.”
The team also said that Grok has access to X, formerly Twitter, allowing it to learn from the content posted on the platform.
“It will also answer spicy questions that are rejected by most other AI systems,” the xAI team wrote.
However, the team warned that the chatbot is a “very early beta product” that has been unveiled after just two months of training. The model behind Grok is expected to improve “rapidly” with every passing week, the team claimed.
Grok-1 beats GPT-3.5 but fails to surpass GPT-4
Grok-1, which is the engine behind Grok, can be used for natural language processing tasks including question-answering, information retrieval, creative writing, and coding assistance, according to the xAI team.
“While Grok-1 excels in information processing, it is crucial to have humans review Grok-1’s work to ensure accuracy. The Grok-1 language model does not have the capability to search the web independently,” the team said, adding that search tools and databases enhance the capabilities and factualness of the model when deployed in Grok.
“The model can still hallucinate, despite the access to external information sources,” the team warned.
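The setup the team describes — a model that cannot browse on its own but is wired to search tools and databases at deployment time — follows the familiar retrieval-augmentation pattern. The sketch below illustrates that general pattern only; the `search_index` and `generate` callables are hypothetical stand-ins, not xAI’s actual API.

```python
# Minimal sketch of the retrieval-augmentation pattern described above:
# the language model itself cannot search the web, so the deployment layer
# fetches documents first and injects them into the prompt. All names here
# (search_index, generate) are hypothetical stand-ins, not xAI's API.

from typing import Callable, List


def retrieve(query: str, search_index: Callable[[str], List[str]], k: int = 3) -> List[str]:
    """Fetch the top-k documents for a query from an external search tool."""
    return search_index(query)[:k]


def answer_with_context(query: str,
                        search_index: Callable[[str], List[str]],
                        generate: Callable[[str], str]) -> str:
    """Build a prompt that grounds the model in retrieved text, then generate.

    Even with this grounding, the model can still hallucinate, so a human
    should review the output, as the xAI team notes.
    """
    documents = retrieve(query, search_index)
    context = "\n".join(f"- {doc}" for doc in documents)
    prompt = (
        "Answer the question using only the sources below.\n"
        f"Sources:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )
    return generate(prompt)
```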
In order to develop Grok-1, the xAI team first trained a prototype large language model (LLM), dubbed Grok-0, with 33 billion parameters.
The Grok-0 model performed close to Meta’s LLaMA 2 model, which was trained with 70 billion parameters. The prototype was then further trained and improved upon to produce Grok-1.
The xAI team also ran benchmark tests to compare Grok-1’s performance against the PaLM 2, Claude 2, Inflection-1, LLaMA 2, GPT-3.5, and GPT-4 models. The benchmarks included middle school math problems, multidisciplinary multiple-choice questions, and Python code completion tasks.
On these benchmarks, Grok-1 surpassed the performance of all the other models except GPT-4, which the company claims was trained on a “significantly larger amount of training data and compute resources.”
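For readers unfamiliar with how a Python code-completion benchmark of the kind mentioned above is typically scored, the sketch below shows the common approach: append the model’s completion to the task prompt and run the task’s unit tests. This is a generic illustration under that assumption, not xAI’s internal harness; `model_complete` and the task fields are hypothetical.

```python
# Illustrative sketch of scoring a Python code-completion benchmark:
# a task passes if the model's completion, run together with the task's
# unit tests, raises no errors. This mirrors the general pattern of such
# evaluations; `model_complete` is a hypothetical stand-in.

from typing import Callable, Dict, List


def score_completions(tasks: List[Dict[str, str]],
                      model_complete: Callable[[str], str]) -> float:
    """Return the fraction of tasks whose generated code passes its tests."""
    passed = 0
    for task in tasks:
        candidate = task["prompt"] + model_complete(task["prompt"])
        namespace: Dict[str, object] = {}
        try:
            exec(candidate, namespace)      # define the completed function
            exec(task["test"], namespace)   # run the task's unit tests
            passed += 1
        except Exception:
            pass                            # any failure counts as a miss
    return passed / len(tasks) if tasks else 0.0
```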
The Grok AI chatbot, according to the company, is currently being offered to a limited number of users in the US. Users interested in testing out Grok AI can sign up for a waitlist, the team said, adding that it has…
Source: www.infoworld.com