Rate limits: 1,800 requests/minute, 10,000,000 tokens/minute across us-east-1 and eu-west-1 regions.
Capabilities include function calling, structured outputs, and built-in reasoning (model thinks before responding).
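To make the function-calling and structured-output capabilities concrete, here is a minimal sketch of a request payload in the OpenAI-compatible chat-completions style. The model name "grok-4", the tool name `classify_text`, and the exact schema fields are illustrative assumptions, not details confirmed above; it builds the JSON locally without making a network call.

```python
import json

def build_request(question: str) -> dict:
    """Build a hypothetical chat-completions payload that combines
    function calling (a declared tool) with structured outputs
    (a JSON-schema-constrained reply)."""
    return {
        "model": "grok-4",  # assumed model identifier
        "messages": [{"role": "user", "content": question}],
        # Function calling: declare a tool the model may choose to invoke.
        "tools": [{
            "type": "function",
            "function": {
                "name": "classify_text",  # hypothetical helper name
                "description": "Classify a passage into a risk category.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "category": {"type": "string"},
                        "confidence": {"type": "number"},
                    },
                    "required": ["category"],
                },
            },
        }],
        # Structured outputs: constrain the final reply to a JSON schema.
        "response_format": {
            "type": "json_schema",
            "json_schema": {
                "name": "classification",
                "schema": {
                    "type": "object",
                    "properties": {"category": {"type": "string"}},
                    "required": ["category"],
                },
            },
        },
    }

payload = build_request("Classify this listing.")
print(json.dumps(payload, indent=2))
```

A classification use case is chosen here because it mirrors the one-shot classification workflow discussed in the comments below.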
Hacker News Comment Review
Commenters broadly see Grok as cheaper and faster than Claude or GPT-4.1, but rate its coding capability below current frontier models, which limits its appeal for dev-tooling use cases.
A practical niche emerged in discussion: lower guardrails make Grok useful for sensitive classification tasks (e.g., trafficking-related charity work) where other models refuse outright.
Sentiment on positioning is mixed: some see it as a strong chat/voice model, while others view it as trailing Claude and Codex on the tasks builders care about most.
Notable Comments
@michaelbuckbee: Quick eval vs Opus 4.7 and GPT-4.1 showed similar tone quality; Grok was fastest and cheapest, Claude slowest and priciest.
@sudb: Charity anti-trafficking app used Grok for one-shot classification where all other models refused; cites near-frontier quality plus cheap, fast models as the draw.