xAI blames Grok's 'white genocide' obsession on an 'unauthorized modification' | TechCrunch
Grok repeatedly brought up "white genocide in South Africa" when tagged in certain situations on X.
On Wednesday, Grok began replying to dozens of posts on X with information about "white genocide in South Africa," even in response to unrelated topics. The strange replies came from Grok's X account, which responds with an AI-generated post whenever a user tags "@grok."
According to a Thursday post from xAI's official X account, a change was made Wednesday morning to the Grok bot's system prompt (the high-level instructions that guide the bot's behavior) that directed it to provide a "specific response" on a "political topic." xAI says the tweak "violated [its] internal policies and core values," and that the company has conducted a thorough investigation.
This is the second time xAI has publicly acknowledged that an unauthorized change to Grok's code caused the AI to respond in a controversial way.
In February, Grok briefly censored unflattering mentions of Donald Trump and Elon Musk, xAI's billionaire founder and the owner of X. An xAI engineering lead said a rogue employee had instructed Grok to ignore sources that mentioned Musk or Trump spreading misinformation, and xAI reversed the change as soon as users began to point it out.
On Thursday, xAI said it will make several changes to prevent similar incidents from occurring in the future.
Starting today, xAI will publish Grok's system prompts on GitHub along with a changelog. The company also said it will "put in place additional checks and measures" to ensure that xAI employees can't modify the system prompt without review.
Despite Musk's frequent warnings about the dangers of AI gone unchecked, xAI has a poor AI safety track record. A recent report found that Grok would undress photos of women when asked. The chatbot can also be considerably more crass than AIs like Google's Gemini and ChatGPT, cursing without much restraint.
A study by SaferAI, a nonprofit aiming to improve the accountability of AI labs, found that xAI ranks poorly on safety among its peers, owing to its "very weak" risk management practices. Earlier this month, xAI missed a self-imposed deadline to publish a finalized AI safety framework.