Our newsroom AI policy

TL;DR

  • Ars Technica published its formal AI policy: reporters may use vetted AI tools for research but cannot attribute AI-generated or AI-summarized material to named sources.

Key Takeaways

  • AI tools are permitted for navigating large document volumes, summarizing background material, and searching datasets, but all sourced claims require direct reporter review.
  • No AI-generated quotes, paraphrases, or characterizations of named sources are allowed, overriding any use of AI in the research phase.
  • AI-generated images, audio, and video cannot be published as authentic documentation; synthetic media in AI coverage must be clearly labeled near the material.
  • Every reporter using AI in a story must disclose it to editors and retains full personal accountability; neither tools nor colleagues can absorb that responsibility.
  • The policy was last updated April 22, 2026, and Ars states violations have already led to action.

Hacker News Comment Review

  • The policy is widely read as a direct response to a prior incident where a reporter was fired over fabricated AI-generated quotes; commenters note several rules are narrowly scoped to that specific failure mode rather than systemic AI risk.
  • Skeptics flag a structural tension: permitting AI to summarize research material while banning attribution of AI-summarized material to named sources creates an ambiguous middle layer that editors must police case by case.
  • A broader concern raised is that AI content generation degrades the original-content ecosystem that LLMs depend on for training data, threatening the long-term quality of both journalism and AI outputs.

Notable Comments

  • @legitster: flags the bootstrap problem – AI needs original human content to train on, but mass AI generation erodes the incentive to produce it.
  • @applfanboysbgon: argues the permitted research-summarization use is itself unreliable given LLM hallucination behavior, making the policy self-contradictory at its foundation.
