The paper argues that a new cognitive bias, LLMorphism, leads people to model their own thinking on LLM architecture, rather than merely attributing minds to machines.
Key Takeaways
LLMorphism spreads via two mechanisms: analogical transfer (projecting LLM features onto humans) and metaphorical availability (LLM vocabulary colonizing how people describe thought).
The core logical error: similarity in linguistic output does not imply similarity in cognitive architecture.
The paper distinguishes LLMorphism from mechanomorphism, anthropomorphism, computationalism, predictive-processing theories, and dehumanization, arguing that it constitutes a distinct category.
Domains at risk include work, education, healthcare, creativity, and responsibility attribution.
The framing flips the usual AI debate: the problem is not only over-attributing mind to machines but also under-attributing mind to humans.
Hacker News Comment Review
One commenter notes that LLM-driven self-modeling may not be novel; systems thinking and other mechanistic frameworks have historically prompted similarly reductive views of human cognition.