Grok 4.3


TLDR

  • xAI releases Grok 4.3 with a 1M token context window, reasoning, function calling, and structured outputs at $1.25/$2.50 per 1M input/output tokens.

Key Takeaways

  • 1,000,000 token context window; requests exceeding 200K tokens are charged at higher rates.
  • Pricing: $1.25/1M input, $0.20/1M cached input, $2.50/1M output; cached input tokens cost 84% less than uncached input.
  • Rate limits: 1,800 requests/minute, 10,000,000 tokens/minute across us-east-1 and eu-west-1 regions.
  • Capabilities include function calling, structured outputs, and built-in reasoning (model thinks before responding).
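The quoted rates can be turned into a quick per-request cost estimator. A minimal sketch in Python, using only the numbers above; the higher rate for requests over 200K tokens is mentioned but not quantified in the post, so it is deliberately ignored here:

```python
# Per-1M-token rates as quoted in the announcement (USD).
RATE_INPUT = 1.25    # uncached input
RATE_CACHED = 0.20   # cached input
RATE_OUTPUT = 2.50   # output

def request_cost(input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """Estimate the USD cost of one request; cached_tokens is the
    portion of input_tokens served from the prompt cache."""
    fresh = input_tokens - cached_tokens
    cost = (fresh * RATE_INPUT
            + cached_tokens * RATE_CACHED
            + output_tokens * RATE_OUTPUT) / 1_000_000
    return round(cost, 6)

# 100K input tokens (half of them cached) plus 10K output tokens:
print(request_cost(100_000, 10_000, cached_tokens=50_000))  # → 0.0975
```

At these rates a fully uncached 1M-token input alone costs $1.25, which is why the cache discount dominates the economics of long-context workloads.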

Hacker News Comment Review

  • Commenters broadly see Grok as cheaper and faster than Claude or GPT-4.1, but coding capability is rated below current frontier models, limiting its appeal for dev tooling use cases.
  • A practical niche emerged in discussion: lower guardrails make Grok useful for sensitive classification tasks (e.g., trafficking-related charity work) where other models refuse outright.
  • Sentiment on positioning is mixed; some see it as a strong chat/voice model, others view it as trailing Claude and Codex on the tasks builders care most about.

Notable Comments

  • @michaelbuckbee: Quick eval vs Opus 4.7 and GPT-4.1 showed similar tone quality; Grok was fastest and cheapest, Claude slowest and priciest.
  • @sudb: A charity's anti-trafficking app uses Grok for one-shot classification that every other model refused to perform; credits near-frontier quality combined with cheap, fast inference.
