Agents need control flow, not more prompts

· ai-agents coding ·

TLDR

  • Reliable agents require deterministic control flow encoded in software, not elaborate prompt chains that collapse under complexity.

Key Takeaways

  • Prompt chains are non-deterministic, weakly specified, and unverifiable; logic must move out of prose into runtime scaffolds.
  • Explicit state transitions and validation checkpoints treat the LLM as a component, not the orchestrating system.
  • Without programmatic error detection, teams default to one of three failure modes: human babysitter, exhaustive auditor, or vibe-accepting the output.
  • Silent failure is the core hazard; deterministic orchestration alone is insufficient without aggressive runtime verification.
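The pattern the takeaways describe can be sketched as a small state machine where transitions and retries are decided in code, and the model output must pass a programmatic checkpoint before advancing. This is a minimal, hypothetical illustration (the `call_llm` stub and the JSON shape it returns are assumptions, not from the source):

```python
import json
from enum import Enum, auto
from typing import Optional

class State(Enum):
    DRAFT = auto()
    VALIDATE = auto()
    DONE = auto()
    FAILED = auto()

def call_llm(prompt: str) -> str:
    # Stub standing in for any model API call.
    return '{"total": 42}'

def validate(output: Optional[str]) -> bool:
    # Programmatic checkpoint: parse and type-check, never trust prose.
    try:
        data = json.loads(output or "")
        return isinstance(data.get("total"), int)
    except (ValueError, AttributeError):
        return False

def run(task: str, max_retries: int = 2) -> Optional[str]:
    # The orchestrating system is this loop, not the LLM: every
    # transition is explicit, and failure is loud, never silent.
    state, output, attempts = State.DRAFT, None, 0
    while state not in (State.DONE, State.FAILED):
        if state is State.DRAFT:
            output = call_llm(task)
            state = State.VALIDATE
        elif state is State.VALIDATE:
            if validate(output):
                state = State.DONE          # checkpoint passed
            elif attempts < max_retries:
                attempts += 1
                state = State.DRAFT         # deterministic retry path
            else:
                state = State.FAILED        # surfaced, not swallowed
    return output if state is State.DONE else None
```

The LLM appears only inside one state; everything that decides what happens next is ordinary, testable control flow.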

Hacker News Comment Review

  • Broad commenter consensus: use LLMs to generate deterministic artifacts (code, structured objects) rather than letting them drive runtime decisions directly via API calls.
  • Practitioners reported the same arc: week one is prompt expansion with degrading reliability; week two is defining precise objects, methods, and typed schemas in actual code.
  • A dissenting thread questions whether chasing determinism is the right goal at all, arguing compensation controls around an assumed failure rate may be more realistic than eliminating non-determinism.
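The consensus pattern (model emits a deterministic artifact once; runtime decisions run through typed code) might look like the following sketch. The `DiscountRule` schema and its fields are invented for illustration and are not from the source:

```python
import json
from dataclasses import dataclass

@dataclass(frozen=True)
class DiscountRule:
    min_quantity: int
    percent_off: float

def parse_rule(llm_output: str) -> DiscountRule:
    # Coerce the model's output into the typed schema and reject
    # anything that does not fit, rather than acting on raw text.
    data = json.loads(llm_output)
    rule = DiscountRule(int(data["min_quantity"]), float(data["percent_off"]))
    if not 0 <= rule.percent_off <= 100:
        raise ValueError("percent_off out of range")
    return rule

def apply_discount(rule: DiscountRule, quantity: int, price: float) -> float:
    # The hard business rule lives in software; no model call at runtime.
    if quantity >= rule.min_quantity:
        return price * (1 - rule.percent_off / 100)
    return price

# The LLM produced this JSON once, at design time, not per request.
rule = parse_rule('{"min_quantity": 10, "percent_off": 15}')
```

After parsing, the model is out of the loop entirely: pricing decisions are made by `apply_discount`, which can be unit-tested like any other function.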

Notable Comments

  • @bwestergard: LLMs at runtime should shrink to helping users choose compliant inputs; hard business rules belong in software.
  • @pdp: “LLMs do not run deterministically and that is ok”; argues for assuming a failure rate and building compensation controls around it instead.
