Anthropic's AI Model Mythos Ignites Controversy Over Transparency and Investment Motives
This week, the artificial intelligence company Anthropic announced that it had created an AI model so powerful that the company chose not to release it to the public, citing overwhelming responsibility and cybersecurity risks. The model, named Mythos, quickly drew attention from high-profile figures, including US Treasury Secretary Scott Bessent and Reform UK MP Danny Kruger, who urged the UK government to engage with Anthropic over potentially catastrophic cybersecurity threats.
Scepticism and Criticism from AI Experts
However, not everyone is convinced by Anthropic's narrative. Prominent AI critic Gary Marcus expressed scepticism, comparing Anthropic CEO Dario Amodei to OpenAI's Sam Altman and suggesting that both graduated from the same school of hype and exaggeration. That sentiment is echoed by other experts who question the company's motives.
Dr. Heidy Khlaaf, chief AI scientist at the AI Now Institute, pointed out that the claimed capabilities of Mythos have not been substantiated. She criticized the company for releasing a marketing post written in purposely vague language that obscures evidence, raising concerns that Anthropic is attempting to attract further investment without proper scrutiny.
Anthropic's Media Strategy and Public Perception
Anthropic has been highly successful in its media outreach, securing features in prestigious publications such as the New Yorker, Wall Street Journal, and Time magazine. The company's co-founders, Dario Amodei and Jack Clark, have appeared on multiple podcasts, discussing profound questions about AI consciousness and economic impact.
Despite this positive coverage, some tech public relations professionals argue that Anthropic deserves the same scrutiny as its larger rivals. One PR expert noted that the company accidentally leaked part of Claude's internal source code in early April, an incident that would typically have drawn ridicule had it involved a bigger tech firm.
Technical and Practical Concerns
Jameison O'Reilly, an expert in offensive cybersecurity, acknowledged that Mythos appears to be a real development but questioned the significance of some of Anthropic's claims. He noted, for instance, that finding thousands of zero-day vulnerabilities in major operating systems might not matter as much in real-world cybersecurity scenarios as the company suggests.
Additionally, Anthropic faces practical constraints, such as limited computing capacity. The company has introduced usage caps on its popular Claude model and charges extra for third-party tools, suggesting infrastructure limitations that could hinder the release of a hyped new creation like Mythos.
The Broader AI Market and Strategic Implications
Anthropic, like its rival OpenAI, is in a race to raise billions of dollars and capture a market that remains ill-defined. The battle for dominance often hinges on intangible attributes like a sense of self or soul in AI agents, making marketing and public perception crucial.
Dr. Khlaaf suggested that Mythos might be a strategic announcement to signal that Anthropic is open for business, using safety as a PR tool to win public trust before prioritizing profits. This approach, she warned, could mirror the bait-and-switch playbook previously employed by OpenAI, albeit better obscured.
As protests in San Francisco call for a pause in AI development, the debate around Anthropic's actions highlights the tension between innovation, responsibility, and commercial interests in the rapidly evolving AI landscape.