Google says criminal hackers used AI to find a major software flaw


TLDR

  • Google reports criminal hackers leveraged AI to discover a significant software vulnerability, signaling a new phase in offensive security.

Key Takeaways

  • AI-assisted vulnerability discovery is no longer theoretical; criminal actors have used it against real targets.
  • Anthropic’s Mythos model, flagged for exceptional exploit-finding capability, was released only to select firms and US/UK government agencies.
  • Restricted-access security-focused models like Mythos and OpenAI’s GPT-5.5-Cyber differ from consumer models by removing standard safety guardrails.
  • The pattern mirrors the fuzzing era: a new automated technique arrives, discovered bugs surge, and access asymmetry shapes who benefits first.
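To make the fuzzing analogy concrete, here is a minimal sketch of the technique: a mutation-less random fuzzer that throws random byte strings at a toy parser until one triggers a planted bug. The `target` and `fuzz` names and the planted bug are illustrative inventions, not any real tool or vulnerability.

```python
import random

def target(data: bytes) -> None:
    """Toy 'parser' with a planted bug: it crashes on any zero byte,
    standing in for a real parsing flaw."""
    if 0 in data:
        raise ValueError("planted bug triggered")

def fuzz(target, rounds: int = 10_000, seed: int = 1):
    """Minimal random fuzzer: feed random inputs to `target` and
    return the first input that makes it raise, or None."""
    rng = random.Random(seed)
    for _ in range(rounds):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 8)))
        try:
            target(data)
        except Exception:
            return data  # crashing input found
    return None

crasher = fuzz(target)
```

Real fuzzers add coverage feedback and input mutation on top of this loop, but the economics are the same as the article describes: the tool runs cheaply at scale, and one crash is enough.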

Hacker News Comment Review

  • Commenters dispute the article’s framing, noting that GPT-5.5-Cyber offers capability comparable to Mythos and arguing the “exclusive access” narrative reflects marketing more than a genuine capability gap.
  • Restricting open-weight models to contain the threat is seen as ineffective: capable Chinese models face no such restrictions and are already accessible globally.
  • A recurring observation is that attackers only need one success, so LLM error rates that would be unacceptable for defense are tolerable for offense. Several commenters predict this will accelerate the burning of hoarded zero-days before AI rediscovers them independently.

Notable Comments

  • @gman2093: “Attackers only need to be right once.” LLM unreliability is asymmetrically less costly for offense than for defense, which may also deflate the value of zero-day stockpiles.
