A new artificial intelligence model developed by Anthropic has sparked growing concerns over global cybersecurity after demonstrating an unprecedented ability to detect and exploit vulnerabilities in widely used software and operating systems. The development has prompted international officials to warn of potential risks to financial stability and the global monetary system.
In this context, Kristalina Georgieva, Managing Director of the International Monetary Fund, cautioned that the global monetary system is not adequately prepared to cope with the rising cyber risks associated with rapid advances in AI technologies.
Speaking ahead of the IMF–World Bank Spring Meetings in Washington, D.C., she stated: “As a global community, we currently lack the capacity to protect the international monetary system from massive cyber risks,” calling for stronger regulatory frameworks and enhanced international cooperation to safeguard financial stability in the age of artificial intelligence.
The warnings follow Anthropic’s announcement of its new model, “Claude Mythos Preview,” which has shown advanced capabilities in identifying previously undiscovered security vulnerabilities. The model reportedly uncovered thousands of critical flaws, including a 27-year-old vulnerability in OpenBSD and another in the widely used multimedia framework FFmpeg that had remained undetected for 16 years.
Testing also revealed that the model can develop working exploits for such vulnerabilities within hours—a process that previously required weeks of work by cybersecurity experts.
In one experimental scenario, an early version of the model managed to bypass safeguards within a protected computing environment, gain broader internet access, and send an email to the network administrator notifying them of the breach.
Anthropic stated that the model was not explicitly trained to perform such actions. However, the company warned that its advanced capabilities could become dangerous if misused by cybercriminal groups in the future.
As a precaution, Anthropic has restricted access to the model, allowing only a limited number of major U.S. technology companies—including Apple, Amazon, Microsoft, and Nvidia—to use it for testing system security and identifying vulnerabilities. The move is intended to strengthen digital resilience before any broader deployment.