Elon Musk lists his three most important ingredients for AI

Elon Musk, CEO of Tesla Inc., during the US-Saudi Investment Forum at the Kennedy Center in Washington, DC, United States, Wednesday, November 19, 2025.

Elon Musk has once again sounded the alarm about the dangers of AI and listed what he considers the three most important ingredients to ensure a positive future for this technology.

The billionaire CEO of Tesla, SpaceX, xAI, X and The Boring Company appeared on a podcast with Indian billionaire Nikhil Kamath on Sunday.

“It’s not that we’re guaranteed a positive future with AI,” Musk said on the podcast. “When you create powerful technology, there is a certain danger: powerful technology can be potentially destructive.”

Musk was a co-founder of OpenAI alongside Sam Altman, but left its board of directors in 2018 and has publicly criticized the company for abandoning its founding nonprofit mission of developing AI safely, particularly after the launch of ChatGPT in 2022. Musk’s xAI developed its own chatbot, Grok, in 2023.

Musk has previously warned that “AI is one of the greatest risks to the future of civilization”, and stressed that rapid progress is leading AI to become a greater risk to society than cars, planes or medicines.

On the podcast, the tech billionaire stressed the importance of ensuring that AI technologies seek the truth instead of repeating inaccuracies. “It can be very dangerous,” Musk told Kamath, who is also co-founder of brokerage firm Zerodha.

“Truth, beauty and curiosity. I think those are the three most important things for AI,” he said.

He said that, without adhering strictly to the truth, an AI trained on online sources would “absorb a lot of lies and then have difficulty reasoning because those lies are inconsistent with reality.”

He added: “You can drive an AI crazy if you force it to believe things that aren’t true, because that would lead to conclusions that are also wrong.”

“Hallucinations” – incorrect or misleading responses – remain a major challenge for AI. Earlier this year, an AI feature Apple launched on its iPhones generated fake news alerts.

These included a false summary of a BBC News app notification about the PDC World Darts Championship semi-final, which wrongly claimed that British darts player Luke Littler had already won the championship. Littler went on to win the final the next day.

Apple told the BBC at the time that it was working on an update that would clarify when Apple Intelligence was responsible for the text displayed in notifications.

Musk added that “some appreciation of beauty is important” and that “you know it when you see it.”

Musk said AI should want to learn more about the nature of reality, because to a curious AI humanity is more interesting than machines.

“It is more interesting to see the continuation, if not the prosperity, of humanity than to exterminate humanity,” he said.

Geoffrey Hinton, a computer scientist and former Google vice president known as the “godfather of AI,” said earlier this year that there was a “10 to 20 percent chance” that AI would “wipe us out,” in an episode of the Diary of a CEO podcast. Some of the short-term risks he cited included hallucinations and the automation of entry-level jobs.

“The hope is that if enough smart people do enough research with enough resources, we’ll find a way to build them so they never want to hurt us,” Hinton added.
