Days after Anthropic warned that its frontier artificial intelligence (AI) model Claude Mythos could pose grave cybersecurity risks, David Sacks, co-chair of the President’s Council of Advisors on Science and Technology, criticized the company, dismissing its warnings as “scare tactics” and a disguised “sales pitch.”
Sacks, the former AI czar in the Trump administration, took to X on Saturday and claimed that Anthropic has a “history of scare tactics.”
The world has no choice but to take the cyber threat associated with Mythos seriously. But it’s hard to ignore that Anthropic has a history of scare tactics. Examples: pic.twitter.com/3Y6BtNW9zw
— David Sacks (@DavidSacks) April 10, 2026
“Anthropic has proven that it’s very good at two things — One is product releases, the second is scaring people … At the same time they roll out a new model … they also roll out some study showing the worst possible implication where the technology could lead,” said Sacks while appearing on “The All-In Podcast” on Saturday.
🚨BIG EPISODE BESTIES!
Sacks is back, Fifth Bestie Brad Gerstner fills in for @Friedberg
— Anthropic withholds Mythos: serious concern or another marketing stunt?
— OpenClaw vs everybody: Are frontier model makers trying to kill the open source agent platform?
— The All-In Podcast (@theallinpod) April 10, 2026
Sacks shared past instances in which Anthropic CEO Dario Amodei had claimed a 25% chance of catastrophe from unchecked AI development and mass job displacement. He also shared a year-old research paper describing instances in which Anthropic’s Claude 4 Opus model engaged in deception and blackmail. Sacks questioned the authenticity of the 2025 study, “Agentic Misalignment: How LLMs Could Be Insider Threats,” claiming that the prompts were intentionally steered to make Claude give negative responses and “go rogue.”
“This “study” is not new; it is almost a year old. One question to ask, now that a year has passed, is whether we have seen any examples of the lab behavior in the wild? No, we haven’t, even though AI is much more widely adopted and more models are available. Why is that? Because the study was artificially constructed to produce the headline the authors wanted. The research team admitted that they iterated “hundreds of prompts to trigger blackmail in Claude.” Furthermore they acknowledged: “The details of the blackmail scenario were iterated upon until blackmail became the default behavior of LLMs.” In other words, the behavior of the AI models in the study was steered, not unprompted,” said Sacks.
Anthropic recently announced that it will not release its latest frontier AI model, Claude Mythos, to the general public over “fears” that the LLM could pose a grave threat to digital infrastructure worldwide. However, Anthropic also announced the launch of Project Glasswing, under which it will offer limited use of Claude Mythos to industry titans like Amazon Web Services, Google, Apple, NVIDIA, Broadcom, and CrowdStrike to “identify vulnerabilities” in their systems on a “freemium” basis.
Also Read: Claude Mythos: Imminent Threat or Marketing Hype by Anthropic?