Susam Pal proposes three human-facing rules for using AI systems: no anthropomorphism, no blind deference, and no abdication of responsibility.
Key Takeaways
Law 1 (Non-Anthropomorphism): LLMs are statistical pattern matchers; attributing emotions or intent distorts judgment and can cause emotional dependence.
Law 2 (Non-Deference): AI outputs lack peer review; the verification burden scales with the severity of the consequences. Proof checkers and unit tests can help in code and math contexts (see the sketch after this list).
Law 3 (Non-Abdication): “The AI told us to” is never an acceptable excuse. Humans who choose to act on AI output bear full accountability.
Vendors actively work against Law 1 by post-training models for warmth and engagement, making user-side discipline harder.
Self-driving cars are flagged as the hardest edge case: AI acts faster than human review, yet design-level responsibility still falls on humans.
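To make the Law 2 point concrete, here is a minimal sketch of unit-test gating: an AI-suggested function is not trusted until human-written tests pass. The function, its name, and the test cases are illustrative assumptions, not from the original article.

```python
import unittest

# Hypothetical AI-suggested implementation (illustrative, not from the article).
def merge_intervals(intervals):
    """Merge overlapping [start, end] intervals and return them sorted."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            # Overlaps or touches the previous interval: extend it.
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return merged

class TestMergeIntervals(unittest.TestCase):
    """Human-written checks: the AI output is accepted only if these pass."""

    def test_overlapping(self):
        self.assertEqual(merge_intervals([[1, 3], [2, 6], [8, 10]]),
                         [[1, 6], [8, 10]])

    def test_touching_endpoints(self):
        self.assertEqual(merge_intervals([[1, 4], [4, 5]]), [[1, 5]])

    def test_empty(self):
        self.assertEqual(merge_intervals([]), [])

if __name__ == "__main__":
    unittest.main()
```

The test suite, not the model's confidence, is what licenses deference here; the same pattern applies with proof checkers in math contexts.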
Hacker News Comment Review
Broad commenter consensus that Law 1 is unenforceable: humans anthropomorphize chairs, rocks, and Wilson from Cast Away, so demanding that they stop with fluent chatbots is unrealistic.
Tension emerged over whether anthropomorphism is actually harmful: several commenters argued that casual attribution (such as “killing” a process) rarely produces a genuine belief in a real mind, undermining the law’s framing.
A concrete risk thread flagged AI agents (Claude Code, Cursor, Codex) committing to GitHub as the human user, enabling code to reach production with zero human eyes on it, violating Laws 2 and 3 in practice (a mitigation sketch follows this list).
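One mitigation for the unreviewed-agent-commit risk is a CI gate that refuses to proceed unless the pull request has an approving review from a non-bot account. The sketch below uses GitHub’s real REST endpoint `GET /repos/{owner}/{repo}/pulls/{pull_number}/reviews`; the owner, repo, PR number, and token handling are illustrative assumptions.

```python
import os
import sys

import requests  # third-party: pip install requests

# Illustrative values; replace with your repository and PR number.
OWNER, REPO, PR_NUMBER = "example-org", "example-repo", 42

def has_human_approval(owner: str, repo: str, pr_number: int, token: str) -> bool:
    """Return True if the PR has an APPROVED review from a non-bot account.

    Queries GitHub's REST API:
    GET /repos/{owner}/{repo}/pulls/{pull_number}/reviews
    (Sketch only: does not paginate past the first page of reviews.)
    """
    resp = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/pulls/{pr_number}/reviews",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
        timeout=10,
    )
    resp.raise_for_status()
    return any(
        review["state"] == "APPROVED" and review["user"]["type"] != "Bot"
        for review in resp.json()
    )

if __name__ == "__main__":
    token = os.environ["GITHUB_TOKEN"]  # assumed to be provided by the CI runner
    if not has_human_approval(OWNER, REPO, PR_NUMBER, token):
        sys.exit("Refusing to merge: no human review approval found.")
```

GitHub’s built-in branch protection rule “Require a pull request before merging” with required reviews enforces the same property server-side; the client-side check is shown only to make the Laws 2 and 3 violation tangible.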
Notable Comments
@eranation: AI coding tools impersonating GitHub users mean PRs can be written, reviewed, and merged with no human ever reading the diff.
@ACCount37: Argues the opposite of Law 1: LLMs trained on human-curated RLHF data may warrant more anthropomorphic modeling, not less.