Elon Musk teased the release of his artificial intelligence startup xAI’s first tool on Friday – even as the freewheeling billionaire predicted that advanced AI tools could eventually replace all jobs.
The 52-year-old tech tycoon – who has previously expressed concern that AI could wipe out humanity without proper guardrails in place – revealed that his firm will “release its first AI to a select group” beginning Saturday.
“In some important respects, it is the best that currently exists,” Musk said in a Friday post on X. He did not elaborate on who would test the tool or its purported capabilities.
Musk announced the AI tool’s debut just hours after he discussed the technology’s potential implications alongside British Prime Minister Rishi Sunak at the AI Safety Summit in the UK. During the event, Musk said AI could become “the most disruptive force in history” as the technology continues to advance.
“It’s hard to say exactly what that moment is, but there will come a point where no job is needed,” Musk said, according to CNBC. “You can have a job if you wanted to have a job for personal satisfaction. But the AI would be able to do everything.”
“I don’t know if that makes people comfortable or uncomfortable,” Musk quipped, drawing laughter from the audience.
Musk launched xAI in July and tapped Igor Babuschkin, formerly of Google-owned DeepMind, to lead the project. The firm is expected to work closely with his other companies, including Tesla and X.
At the time, the Tesla boss said his AI team would try to boost humanity’s “understanding of the universe” through its advancements and provide an alternative to tools developed by Microsoft-backed OpenAI and Google.
The X owner is pressing forward with his own initiative just months after he joined more than 1,000 experts in publicly calling for a six-month pause in the development of advanced AI models.
The group cited various potential risks if its concerns went unheeded, including the spread of “propaganda and untruth,” job losses, the development of “nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us,” and the risk of “loss of control of our civilization.”
Musk stepped up his warnings in May, stating that there was a “non-zero chance” of AI “going Terminator” and “annihilating humanity.”
Musk isn’t the only tech bigwig to make dire predictions about the technology.
OpenAI’s Sam Altman, whose firm built the popular ChatGPT, has likened the threat to that of pandemics and nuclear weapons, while ex-Google CEO Eric Schmidt said AI is an “existential risk” to humanity that could result in “many, many, many, many people harmed or killed.”
AI developers face the prospect of increased federal oversight of their projects. Earlier this week, President Biden issued a first-of-its-kind executive order establishing new standards for AI safety and security.
As The Post has reported, some critics say the regulatory push by Musk, OpenAI boss Sam Altman and other well-known industry leaders is merely a cynical play to ensure they have a major say in what rules are implemented – and potentially shut down competitors.