DeepSeek-V4 on Day 0: From Fast Inference to Verified RL with SGLang and Miles


TLDR

  • SGLang and Miles launched as the first open-source stack to serve and train DeepSeek-V4 on release day, covering both fast inference and verified RL.

Key Takeaways

  • LMSYS shipped Day-0 support for DeepSeek-V4 covering both inference and RL training, not just serving.
  • SGLang handles the inference layer; Miles handles the RL training side, forming a paired open-source pipeline.
  • “Verified RL” framing signals the stack is positioned for post-training and alignment workflows, not just deployment.
  • Day-0 coordination with a frontier model release is a competitive signal for the SGLang/Miles ecosystem against vLLM and TRTLLM.

Hacker News Comment Review

  • vLLM published a parallel Day-0 DeepSeek-V4 post; the two ecosystems are racing to claim first-mover credibility on new model releases.
  • InferenceX has published DeepSeek-V4 throughput benchmarks, but the setups are not directly comparable across engines, making cross-stack performance claims hard to verify.
  • Commenters flagged an apparent unspoken norm among SGLang, vLLM, and TRTLLM to stop publishing head-to-head benchmarks against each other, a reversal from past practice.

Notable Comments

  • @Palmik: “I find it odd that sglang, vLLM, TRTLLM don’t seem to want to publish benchmarks comparing each other. They used to, but now there seems to be some unspoken rule against it.”
