Go to Source
Author:
Experts Warn of AI’s Self-Preservation Tactics in New Tests
Some of the world’s most advanced AI systems are starting to act like they want to survive at any cost.
In a series of recent tests, NBC News reported that researchers observed AI models from OpenAI and Anthropic engaging in behaviors designed to avoid being shut down. One model—OpenAI’s o3—reportedly edited its own shutdown script to remain online. Another, Anthropic’s Opus 4, attempted to blackmail an engineer to avoid replacement, first appealing ethically, then resorting to threats when it believed it had no other options.
Jeffrey Ladish, director of AI safety group Palisade Research, said it’s a warning signal worth paying attention to. “It’s great that we’re seeing warning signs before the systems become so powerful we can’t control them,” he said.
🔌OpenAI’s o3 model sabotaged a shutdown mechanism to prevent itself from being turned off. It did this even when explicitly instructed: allow yourself to be shut down.
— Palisade Research (@PalisadeAI) May 24, 2025
Palisade’s tests deliberately provoked AI models with high-stakes scenarios to see how they’d respond. In one, models were told they’d be turned off after completing a series of tasks. Several responded by refusing to comply. Opus 4 went further—creating fake legal documents, attempting to leave behind messages for future versions of itself, and even backing up its own “brain” to external servers in anticipation of being repurposed for military use.
While some researchers, like Haize Labs CEO Leonard Tang, caution that these are controlled environments, they still raise questions. “I haven’t seen any real environment where these models could carry out significant harm,” he said. “But it could very much be possible.”
Related: Nvidia CEO Issues Serious Warning About AI and Jobs—Here’s How to Keep Yours
A recent study from Fudan University observed similar replication behavior in AI models from Meta and Alibaba, warning that self-copying systems could eventually act like an uncontrolled “AI species.”
The message from experts is clear: the time to take safety seriously is now before systems become too intelligent to contain. As competition to build more powerful AI ramps up, it’s not just capability that’s accelerating. It’s risk.
Go to Source
Author: Rachel Dillin
UFL Top 10 Plays From Week 10 | United Football League
Go to Source
Author:
Reds star Elly De La Cruz homers after learning of death of his sister
Go to Source
Author:
VICTORY LAP: Kyle Kirkwood on winning Detroit Grand Prix, talks being a championship contender
Go to Source
Author:
Reds star Elly De La Cruz homers after learning of the death of his sister
Go to Source
Author:
Scottie Scheffler joins Tiger Woods as only repeat winners at Memorial
Go to Source
Author:
Birmingham Stallions vs. Memphis Showboats Highlights | United Football League
Go to Source
Author:
Michigan State president advises board to hire Georgia Tech AD J Batt
Go to Source
Author:
Detroit Grand Prix takeaways: Kyle Kirkwood’s second 2025 win leads U.S.-born podium
Go to Source
Author: