A recent experience with ChatGPT 5.5 Pro


TLDR

  • Tim Gowers used ChatGPT 5.5 Pro to produce PhD-level results in additive number theory in under an hour, with no mathematical input of his own.

Key Takeaways

  • ChatGPT 5.5 Pro solved an open problem from Nathanson’s additive number theory paper, constructing a quadratic upper bound after 17 minutes of reasoning; the proof was verified as correct by MIT student Isaac Rajagopal.
  • It then improved Rajagopal’s exponential bound to polynomial in k, using what Rajagopal called a clever, original idea he would have been proud to find after weeks of work.
  • The LLM’s edge came from recognizing that Nathanson’s inductive construction implicitly used a Sidon set (a set in which all pairwise sums are distinct), then substituting a more efficient Sidon set of quadratic diameter.
  • Gowers flags a structural problem for math research culture: “gentle” open problems, traditionally used to onboard PhD students, are now solvable by an LLM in under an hour.
  • No clear publication venue exists for AI-produced correct mathematics; Gowers proposes a moderated repository requiring human certification or proof-assistant formalization.
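The source does not spell out which Sidon set the model substituted. As a minimal sketch of the kind of object involved, the classical Erdős–Turán construction gives k elements within a quadratic diameter, matching the "quadratic diameter" property mentioned above (the construction and prime p = 11 here are illustrative, not taken from the post):

```python
# Sketch: the classical Erdos-Turan Sidon set construction, which packs
# k = p elements into the interval [0, 2*p^2), i.e. quadratic diameter in k.
# This is a standard example, not necessarily the set the model used.

def erdos_turan_sidon(p):
    """For a prime p, return {2*p*i + (i*i mod p) : 0 <= i < p}.

    Every element lies in [0, 2*p*p), so the p elements span a
    diameter quadratic in the set size.
    """
    return [2 * p * i + (i * i) % p for i in range(p)]

def is_sidon(s):
    """Sidon property: all pairwise sums a+b with a <= b are distinct."""
    sums = [s[i] + s[j] for i in range(len(s)) for j in range(i, len(s))]
    return len(sums) == len(set(sums))

p = 11  # any prime works here
s = erdos_turan_sidon(p)
print(is_sidon(s), max(s) < 2 * p * p)  # both True
```

The point of such a substitution is efficiency: Nathanson’s implicit Sidon set grows faster, so swapping in a quadratic-diameter one tightens the resulting bound.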

Hacker News Comment Review

  • Commenters broadly agree that expert human oversight remains necessary: LLMs still make conceptual errors that only domain experts catch, making deep human knowledge a prerequisite for reliable use.
  • A recurring thread debates credit: commenters liken the human-LLM dynamic to an F1 driver and car, where the human who directs and certifies the work still contributes meaningfully, even if the LLM does the technical lifting.
  • Access inequality surfaced as a concrete concern: frontier long-thinking models like GPT-5.5 Pro are unaffordable under typical Eastern European academic budgets, creating a two-tier research environment.

Notable Comments

  • @vthallam: an OpenAI employee publicly offered a free Pro account to the academic commenter who raised the access-cost barrier.
