Generative AI Vegetarianism

TLDR

  • A Canadian civic-tech blogger coins “generative AI vegetarianism”: deliberately avoiding ChatGPT, Copilot, Claude, and Gemini in daily life.

Key Takeaways

  • The practice means disabling Microsoft Copilot, Google Gemini, and Apple Intelligence wherever possible, and not resharing AI-generated content.
  • The author distinguishes GenAI from older algorithmic tools: spam filters, facial recognition, OCR, and procedural art are explicitly not in scope.
  • Nine named reasons are given, including bias in training data, skill atrophy, cliché-prone output, artist displacement, vendor lock-in risk, and energy use.
  • The “accountability sink” argument: harmful decisions made via AI tools obscure which humans are responsible, shielding decision-makers from scrutiny.
  • The author frames it as a lifestyle default rather than a case-by-case judgment, citing reduced cognitive load as a practical benefit.

Hacker News Comment Review

  • Near-universal consensus in the comments that “vegetarianism” is the wrong metaphor, since its moral associations dilute the argument; alternatives such as “GenAI-free” and “organic software” were proposed.
  • Sharp disagreement on career implications: one camp predicts non-senior developers who avoid LLMs will be unhirable by mid-2027; another describes late-career engineers quietly ignoring mandates and doing token minimum usage to satisfy metrics.
  • A thread noted Steam’s mandatory generative-content disclosure as a real-world analog where market signals are already forming around AI-free creative work.

Notable Comments

  • @ryandrake: describes late-career engineers “setting aside time weekly to do the bare minimum token-spending needed to appease the AI metric gods.”
  • @manvel_hn: Steam requires disclosure of generative content in games; titles like Clair Obscur and The Finals already drew player backlash.
