OpenAI is gating GPT-5.5 Cyber to credentialed defenders first, mirroring the Anthropic Mythos restriction that Altman publicly mocked.
Key Takeaways
GPT-5.5 Cyber covers pen testing, vulnerability identification and exploitation, and malware reverse engineering via a credentialed application process.
Altman previously called Anthropic’s Mythos gating “fear-based marketing”; OpenAI is now applying the same rollout model.
OpenAI is consulting the U.S. government to expand access beyond the initial critical cyber defender cohort.
An unauthorized group reportedly accessed Anthropic’s Mythos anyway, undermining the premise of both restricted launches.
Hacker News Comment Review
Consensus reads both launches as marketing theater: restricted access generates mystique without requiring proof the tools are uniquely dangerous or capable.
A defender at a security firm reports rising false-positive refusals on legitimate defensive tasks, and says OpenAI outsourced TAP verification to a poor vendor that offers only AI-driven support.
Skeptics note current model combinations and agentic setups already approximate these capabilities, making hard gating mostly symbolic.
Notable Comments
@lmeyerov: refusals for basic IT defense work are increasing noticeably, with TAP verification outsourced to a bad vendor and internal support routed to AI.
@samrus: “giving me $200/mo might actually make it safe” – sharp parody of the safety-as-subscription framing both labs are running.