What's missing in the 'agentic' story: a well-defined user agent role


TLDR

  • Every online interaction is a lopsided negotiation; AI needs a formalized user-agent role that can bargain collectively on the user's behalf, not just enforce safety guardrails.

Key Takeaways

  • The cloud computing era shifted control away from users; the author traces the agency deficit to that transition, not to AI itself.
  • Current AI framing conflates safety (keeping agents constrained) with agency (agents reliably acting in the user’s interest) – these are distinct problems.
  • MCP and similar tool-layer standards exist, but the harness layer (Claude Code, Cursor) remains proprietary, leaving user-agent representation dependent on vendor decisions.
  • A true user-agent role would need open, user-controlled harnesses – analogous to how browsers negotiated on behalf of users via User-Agent strings and robots.txt.
  • Without a defined agent identity layer, sites have no counterparty to negotiate with, which drives unilateral responses such as Cloudflare's default-block stance paired with HTTP 402 Payment Required.
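The negotiation gap in the last point can be made concrete with a small sketch (the function, heuristic, and policy here are hypothetical illustrations, not Cloudflare's actual implementation): a site with no agent identity layer to consult can only classify traffic by crude signals like the User-Agent string, and its fallback for anything agent-like is a blanket response such as 402.

```python
# Hypothetical sketch of a site-side policy with no agent identity layer.
# Without a standard way to verify whom an agent represents, the site's
# only options are a blanket allow, a block, or a 402 Payment Required.

KNOWN_BROWSERS = ("Mozilla/", "Safari/")  # crude illustrative heuristic


def respond(user_agent: str) -> int:
    """Return an HTTP status code for a request, based only on User-Agent."""
    if any(user_agent.startswith(p) for p in KNOWN_BROWSERS):
        return 200  # looks like a human-driven browser: serve the page
    # Anything else is presumed to be an AI agent. With no counterparty
    # to negotiate terms with, default to demanding payment up front.
    return 402  # Payment Required


print(respond("Mozilla/5.0"))    # → 200
print(respond("SomeAgent/1.0"))  # → 402
```

The point of the sketch is the 402 branch: absent an identity layer, the site cannot tell an agent acting for a paying user apart from a bulk scraper, so its response is necessarily blanket and unilateral.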

Hacker News Comment Review

  • Core skepticism centers on whether any protocol can reliably encode “user intent” at the agent layer; tptacek draws a direct parallel to “packet intent,” a security property researchers have chased since the 1980s with no reliable solution.
  • Builders on Claude Code and Cursor flag a concrete structural risk: MCP is open on the tool layer, but proprietary harnesses mean a single vendor decision can break your product – commenters frame this as rebuilding the mobile app-store trap under a new name.
  • The browser analogy breaks down on incentives: publishers accepted User-Agent and robots.txt because clicks paid for it; AI agents are the destination, not a referral source, so sites have no reason to cooperate with an open standard.

Notable Comments

  • @durch: the agent itself is the attack surface – an adversary controlling the communication channel can exfiltrate anything the agent holds, including shared secrets, in ways the agent cannot detect.
  • @ryandrake: frames agent autonomy as a philosophical break from the “bicycle for the mind” model – wants to do things through a computer, not be chauffeured by it.
