Authored by Andrew Moran via The Epoch Times,
Hundreds of people, from conservative commentators to prominent tech executives, have signed a letter seeking a ban on “the development of superintelligence.”
This year, leading technology firms such as Google, Meta Platforms, and OpenAI have accelerated efforts to build artificial intelligence (AI) systems capable of outperforming humans across a broad spectrum of elementary and complex tasks.
A growing chorus of prominent people thinks that it is time to hit the brakes—at least temporarily.
The letter, put together by the Future of Life Institute, calls for a ban on advancing superintelligent AI until there is public demand and science charts a safe path for the technology.
“We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in,” reads the brief statement, released on Oct. 22.
The Future of Life Institute has spent the past decade sounding the alarm over the existential risks posed by advanced AI. Its petition has drawn thousands of signatures and support from hundreds of high-profile figures aligned with the group’s mission, including AI pioneers Yoshua Bengio and Geoffrey Hinton.
Bengio said AI systems could outperform most individuals in various cognitive tasks in the next few years. While they will bring advancements, they could also “carry significant risks,” Bengio wrote in a personal note released with the letter.
“To safely advance toward superintelligence, we must scientifically determine how to design AI systems that are fundamentally incapable of harming people, whether through misalignment or malicious use,” he wrote.
“We also need to make sure the public has a much stronger say in decisions that will shape our collective future.”
The letter warns of increasing threats to the world, including the loss of freedom, civil liberties, and “human economic obsolescence and disempowerment.”
Among the other signatories are conservative media personality Glenn Beck, Virgin Group founder Sir Richard Branson, Apple co-founder Steve Wozniak, former national security adviser Susan Rice, and political commentator Steve Bannon.
The letter expresses concern over the rapid development and deployment of AI across a wide array of industries, and its signatories span political ideologies and religious traditions.
“The future of AI should serve humanity, not replace it,” Prince Harry, one of many signatories alongside his wife, Meghan, said in a personal note released with the letter. “The true test of progress will be not how fast we move, but how wisely we steer.”
Stuart Russell, an AI pioneer and computer science professor at the University of California–Berkeley, said that the statement is not a prohibition or moratorium “in the usual sense.” Instead, he wrote, it is a proposal to install the necessary safeguards for a technology that “has a significant chance to cause human extinction.”
“Is that too much to ask?” Russell wrote.
In a 2015 blog post, OpenAI CEO Sam Altman wrote that the rise of “superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity.”
Elon Musk, CEO of Tesla and SpaceX, told podcast host Joe Rogan earlier this year that there is “only a 20 percent chance of annihilation.”
“The probability of a good outcome is like 80 percent,” the billionaire entrepreneur said.
Experts and famous individuals are not the only ones urging caution.
The Future of Life Institute cited a recent national survey of 2,000 adults that found only 5 percent support for “the status quo of fast, unregulated development.” Close to two-thirds (64 percent) think that superhuman AI either should not be created until it is proven safe and controllable or “should never be developed.”
AI on the Street and at Work
For the past three years, Wall Street has been immersed in the rise of AI, with many market watchers comparing it to the dot-com bubble 25 years ago.
Others say it is very different from the exuberance of the late 1990s, when investors poured billions of dollars into companies with “dot-com” in their names.
“Overall, there are some similarities (increasing market concentration in tech stocks; aggressive capital investment ahead of revenues),” John Belton, portfolio manager at Gabelli Funds, said in a note emailed to The Epoch Times.
“But I think it is oversimplifying things to say we are in a ‘bubble’ (almost certainly not in a valuation bubble; but an argument to be made that there is some recent froth in earnings streams).”
Whether the AI bubble is real or not, companies are pressing ahead with AI, and U.S. workers are worried.
According to June data from FactSet Insights, during the second quarter, more than 40 percent of S&P 500 firms commented on “AI” during earnings calls. This is the fifth consecutive quarter in which more than 200 S&P 500 firms have done so.
A Reuters-Ipsos poll conducted this past summer found that 71 percent of respondents were worried about AI “putting too many people out of work permanently.”
While AI has yet to spur widespread job displacement, Federal Reserve Governor Christopher Waller said last week that more companies are preparing to incorporate the new technology into their day-to-day operations.
“Retailers in particular are cutting back on employment for call centers and IT-related occupations,” Waller said at an Oct. 15 DC Fintech Week event. “So far, most say this is being handled through attrition, but a number of retailers say that there is the potential for downsizing next year.”
Even employees working in the AI field are facing job cuts.
Meta announced on Oct. 22 that it is eliminating about 600 positions in its Superintelligence Labs, which will affect Facebook Artificial Intelligence Research and other AI products and infrastructure.