OpenAI releases GPT-5.5 and GPT-5.5 Pro in the API


TLDR

  • GPT-5.5 and GPT-5.5 Pro land in the API with 1M token context, built-in computer use, MCP, Skills, and medium reasoning effort default.

Key Takeaways

  • GPT-5.5 supports 1M token context, image input, structured outputs, function calling, prompt caching, Batch, and web search in one model.
  • Built-in computer use, hosted shell, apply patch, Skills, and MCP are native capabilities, eliminating external tooling setup.
  • GPT-5.5 Pro is available only via the Responses API (Batch included), targeting high-compute workloads that benefit from a larger inference budget.
  • Tool search defers large tool surfaces to runtime, reducing token overhead and preserving prompt cache performance and latency.
  • GPT-5.5 defaults to medium reasoning effort, reverting the "none" default that GPT-5.1 introduced in November 2025.
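As a sketch of what the takeaways above imply for callers: the request below pins the reasoning-effort level explicitly rather than relying on the new default, and requests a built-in tool inline. The payload shape mirrors OpenAI's existing Responses API; the "gpt-5.5" model name comes from this article, and the exact field names for this release are assumptions, not verified.

```python
# Hypothetical Responses API request body for GPT-5.5.
# Field names follow the current Responses API conventions; treat the
# specifics for this release as unverified assumptions.
payload = {
    "model": "gpt-5.5",
    "input": "Summarize the attached incident report.",
    # Pin the effort level explicitly instead of relying on the
    # medium default described above.
    "reasoning": {"effort": "medium"},
    # Built-in tools (e.g. web search) are requested inline rather
    # than wired up as external tooling.
    "tools": [{"type": "web_search"}],
}

print(payload["reasoning"]["effort"])
```

Pinning the effort level in the request insulates a workflow from future default changes like the GPT-5.1 → GPT-5.5 reversion noted above.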

Hacker News Comment Review

  • Benchmark results are inconsistent: one SQL benchmark scores it 25/25, matching Opus 4.7, while a WordPress/GravityForms benchmark ranks it last on both performance and value.
  • The tiered pricing cliff at 272K context drew skepticism: input doubles to $10/M above that threshold, making long-context workflows more expensive than Opus 4.7 without proportional efficiency gains.
  • OpenAI cited unresolved API safety requirements the day before launch with no public explanation, raising questions about what safeguard work was completed in under 24 hours.
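To make the pricing-cliff complaint above concrete, here is a small cost sketch. The $10/M above-threshold rate and the 272K cutoff come from the comment thread; the $5/M base rate is inferred from "input doubles," and the assumption that the higher rate applies to the whole request once the threshold is crossed (rather than marginally) is mine, not stated in the source.

```python
def input_cost_usd(tokens: int,
                   base_rate: float = 5.0,    # $/M tokens, inferred from "doubles to $10/M"
                   high_rate: float = 10.0,   # $/M tokens above the cliff, per the thread
                   threshold: int = 272_000) -> float:
    """Input cost under the tiered pricing described above.

    Assumes the entire request is billed at the higher rate once the
    context exceeds the threshold (an assumption; the source does not
    say whether the higher rate is marginal or whole-request).
    """
    rate = base_rate if tokens <= threshold else high_rate
    return tokens / 1e6 * rate

print(input_cost_usd(200_000))  # below the cliff
print(input_cost_usd(500_000))  # above the cliff
```

Under these assumptions a 500K-token prompt costs 5x a 200K-token one despite being only 2.5x larger, which is the disproportionality the commenters objected to.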

Notable Comments

  • @wincy: GPT-5.5 returned an incomplete transaction stub requiring manual follow-up despite explicit rollback instructions, flagging persistent instruction-following gaps.
  • @robertwt7: GPT-5.5 paired with Codex delivers high-confidence results; contrasts with Opus 4.7’s CLAUDE.md instruction drift and hallucinations in daily use.
