A Claude Pro subscriber cancels after three weeks, citing opaque token spikes, lazy Opus workarounds, broken cache economics, and automated support that closed tickets without resolving them.
Key Takeaways
Token usage hit 100% after two small Haiku questions following a 10-hour break, with no explanation from Anthropic support.
Cache expiration forces a full codebase re-read after any break, roughly doubling token consumption within a single 5-hour usage window.
Claude Opus proposed a generic JS initializer workaround instead of proper JSX edits, consuming roughly 50% of the session allowance.
Support sent copy-pasted documentation, closed the ticket without resolution, and stopped monitoring replies on the thread.
An undocumented monthly usage limit warning appeared and disappeared mid-session; no mention in Anthropic’s docs or the settings page.
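The cache-expiration point above can be made concrete with a back-of-envelope model. This sketch is purely illustrative: the codebase size, question size, and cache-read discount are assumptions for demonstration, not Anthropic's actual billing parameters.

```python
# Illustrative token-cost model: a session where the prompt cache is warm
# vs. one where it expired after a break and the codebase is re-sent cold.
# All constants are hypothetical, chosen only to show the cost asymmetry.
CODEBASE_TOKENS = 200_000   # assumed size of the project context
QUESTION_TOKENS = 500       # assumed size of each follow-up prompt
CACHE_READ_DISCOUNT = 0.1   # assumed: cached tokens billed at 10% of base rate

def session_cost(questions: int, cache_warm: bool) -> float:
    """Input tokens billed (in base-rate units) across `questions` prompts."""
    total = 0.0
    for i in range(questions):
        if cache_warm or i > 0:
            # Cache hit: codebase tokens billed at the discounted rate.
            total += CODEBASE_TOKENS * CACHE_READ_DISCOUNT + QUESTION_TOKENS
        else:
            # Cold start after expiry: full codebase billed at base rate.
            total += CODEBASE_TOKENS + QUESTION_TOKENS
    return total

warm = session_cost(2, cache_warm=True)
cold = session_cost(2, cache_warm=False)
print(f"warm cache: {warm:,.0f}  cold start: {cold:,.0f}  ratio: {cold / warm:.1f}x")
```

Under these assumed numbers, two small questions after an expired cache cost several times what the same questions cost against a warm cache, which is consistent with the reported pattern of the allowance draining after a break even for trivial prompts.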
Hacker News Comment Review
Commenters broadly confirm a quality regression in Claude 4.6 over the past two months, reporting forgotten context, silent pragmatic workarounds, and failed code generation even on spec-driven tasks.
Several users have switched or are evaluating OpenAI Codex 5.4/5.5 or local inference (Qwen, llama.cpp), citing night-and-day differences for agentic workflows.
The token-economics model is flagged as a structural conflict of interest: providers profit from high token usage, creating an incentive to let consumption creep upward over time.
Notable Comments
@janwillemb: Building critical workflows on opaque, subscription-locked inference creates foundation risk when providers silently shift behavior or pull the product.
@wilbur_whateley: A single medium-effort Sonnet task exhausted the session limit, incurred extra dollar charges, ran for 53 minutes, then failed by hitting the 32,000-output-token cap.