xAI’s promised safety report is MIA

Elon Musk’s AI company, xAI, has missed a self-imposed deadline to publish a finalized AI safety framework, as noted by watchdog group The Midas Project.
xAI isn’t exactly known for its strong commitments to AI safety as it’s commonly understood. A recent report found that the company’s AI chatbot, Grok, would undress photos of women when asked. Grok can also be considerably more crass than chatbots like Gemini and ChatGPT, cursing without much restraint to speak of.
Yet in February, at the AI Seoul Summit, a global gathering of AI leaders and stakeholders, xAI published a draft framework outlining the company’s approach to AI safety. The eight-page document laid out xAI’s safety priorities and philosophy, including the company’s benchmarking protocols and AI model deployment considerations.
As The Midas Project noted in a blog post on Tuesday, the draft applied only to unspecified future AI models “not currently in development.” Moreover, it failed to articulate how xAI would identify and implement risk mitigations, a core component of a document the company signed at the Seoul summit.
In the draft, xAI said it planned to release a revised version of its safety policy “within three months,” by May 10. That deadline came and went without acknowledgment on xAI’s official channels.
Despite Musk’s frequent warnings about the dangers of AI gone unchecked, xAI has a poor AI safety track record. A recent study by SaferAI, a nonprofit that aims to improve the accountability of AI labs, found that xAI ranks poorly among its peers, owing to its “very weak” risk management practices.
That’s not to suggest other AI labs are faring dramatically better. In recent months, xAI rivals, including Google and OpenAI, have rushed safety testing and have been slow to publish model safety reports (or have skipped publishing reports altogether). Some experts have expressed concern that this apparent deprioritization of safety efforts comes at a time when AI is more capable, and therefore potentially more dangerous, than ever.




