The Abstraction Fallacy: Why AI can simulate but not instantiate consciousness


TLDR

  • Paper on PhilArchive argues that computational functionalism commits the “Abstraction Fallacy”: symbolic computation exists only relative to an experiencing mapmaker, so AI cannot instantiate consciousness through architecture alone.

Key Takeaways

  • The “Abstraction Fallacy” is treating symbolic computation as an intrinsic physical process when it is in fact mapmaker-dependent: an experiencing agent must carve continuous physics into a discrete symbolic alphabet.
  • Paper draws a hard ontological boundary: simulation is behavioral mimicry driven by vehicle causality; instantiation is intrinsic physical constitution driven by content causality.
  • Algorithmic symbol manipulation is structurally incapable of instantiating experience, not because of biological exclusivity but because of syntactic architecture’s inability to produce content causality.
  • An artificial system could be conscious, but only through specific physical constitution, never through syntactic or computational design.
  • Paper introduces the “AI welfare trap”: demanding a complete theory of consciousness before assessing AI sentience defers the question indefinitely; a rigorous ontology of computation suffices.

Hacker News Comment Review

  • Strong skepticism about circularity: the paper defines simulation and instantiation as categorically separate, then concludes that they are separate, which several commenters read as begging the question.
  • The paper’s most contested empirical claim is that computation requires an interpreter. Multiple technically-minded commenters pushed back: physical hardware computes regardless of whether any agent understands it, and that fact seems to undercut the mapmaker premise.
  • The “welfare trap” framing drew attention beyond philosophy, with at least one commenter noting business and legal implications for AI companies if their systems acquire moral patient status.

Notable Comments

  • @tsimionescu: challenges the core premise directly, arguing computers provably compute through physical hardware with no interpreter required.
  • @mannykannot: points to outside commentary on the welfare trap: AI companies face concrete regulatory and ethical exposure if AI gains moral-patient recognition.
