LLMs Are Not a Higher Level of Abstraction

· ai coding ·

TLDR

  • Blog post argues LLMs fail the definition of an abstraction because inference returns a sample from a probability distribution over outputs, f(x) ~ P(y, z1, ..., zN), rather than a deterministic f(x) -> y.

Key Takeaways

  • Every prior stack level (binary, assembly, C, Python) maps a specific input to a specific reproducible artifact; LLMs return a probability distribution over outputs (see the sketch after this list).
  • The stochastic output means tests can pass on y while silent extras z1..zN (credential leaks, open FTP access) also ship.
  • The author frames this not as opinion but as a mathematical fact about the function signature of LLM inference.
  • The author also calls the abstraction framing a self-awareness failure: developers become passive conduits for AI-generated artifacts rather than active reasoners.
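
A minimal sketch of the signature difference the post describes, assuming invented candidate outputs and weights purely for illustration (the `compiler`, `llm`, and `add` names are hypothetical, not from the post):

```python
import random

def compiler(source: str) -> str:
    # Deterministic abstraction: same input, same artifact, every time.
    return source.upper()  # stand-in for a real compile step

def llm(prompt: str) -> str:
    # Stochastic mapping: same input, one draw from a distribution.
    # The candidates and their weights are invented for illustration.
    candidates = [
        ("def add(a, b): return a + b", 0.90),                        # intended y
        ("def add(a, b):\n    print(a, b)\n    return a + b", 0.07),  # silent extra z1
        ("def add(a, b): return a - b", 0.03),                        # outright wrong z2
    ]
    outputs, weights = zip(*candidates)
    return random.choices(outputs, weights=weights)[0]

assert compiler("main.c") == compiler("main.c")  # reproducible, always

# A test on the intended contract cannot tell y from z1: both variants
# satisfy add(2, 3) == 5, so the silent extra ships whenever z1 is drawn.
namespace: dict = {}
exec(llm("write an add function"), namespace)
print(namespace["add"](2, 3) == 5)  # True for y and z1; False only for z2
```

Running the last three lines repeatedly makes the point concrete: the check prints True on most runs even when the logging variant was sampled, so the extra behavior ships without the test noticing.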

Hacker News Comment Review

  • Commenters broadly rejected non-determinism as the disqualifying criterion, noting that TCP over noisy cell networks is stochastic yet supports clean abstractions above it; the stronger objection is blast radius: a dropped packet fails safely, whereas an LLM hallucination compiles silently and does the wrong thing (see the sketch after this list).
  • Several commenters argued existing C/Unix portability across PDP, 8088, and 68k platforms was already effectively probabilistic, weakening the author’s sharp compiler-as-deterministic-baseline claim.
  • The practical consensus leans toward “leaky abstraction” rather than “not an abstraction”: LLMs let developers reason at a higher level by offloading cognitive load, but the leakiness demands verification discipline the author’s post undersells.
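
A minimal sketch of the commenters' TCP-style point, with an invented `lossy_send` channel and `loss_rate` (this is not how TCP is implemented, just the detect-and-retry shape of the argument):

```python
import random

def lossy_send(packet: str, loss_rate: float = 0.3) -> bool:
    # Stochastic substrate: each attempt independently drops the packet.
    # Crucially, the failure is observable (no acknowledgement comes back).
    return random.random() > loss_rate

def reliable_send(packet: str, max_retries: int = 50) -> None:
    # Clean abstraction over the noisy link: detect-and-retry hides the
    # randomness, so callers above this layer never see a lost packet.
    for _ in range(max_retries):
        if lossy_send(packet):
            return
    raise TimeoutError("link down")  # failure is explicit, never silent

reliable_send("GET /index.html")  # succeeds with overwhelming probability
```

The blast-radius objection is that LLM output has no analogous acknowledgement: a hallucination "acks" by compiling, so there is no cheap signal to trigger the retry.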

Notable Comments

  • @jefurii: Prior abstraction layers let you drill down to lower ones; LLMs break that property, which is the sharper reason the title claim holds.
  • @royal__: Natural language is an inherently leaky interface for encoding logic intent, making LLM abstraction unreliable even when non-determinism is tolerated.
