An essay in Computational Brain & Behavior argues that scientists routinely overestimate the depth of their own understanding, and that predictive models, from linear regression to ChatGPT, amplify the illusion that prediction implies causality.
Key Takeaways
Prediction does not imply causality; well-fitting mathematical or simulation models create false confidence that mechanisms are understood.
Even Isaac Newton computed dice probabilities correctly but gave a verbal explanation that would have failed on variant problems (see the probability sketch after this list).
The essay identifies nine distinct types of illusion, including illusions of explanatory depth, completeness, causal strength, and recipient comprehension.
Linear regression serves as a case study: a tool most scientists believe they fully understand, yet one that few understand at a deep causal level (see the confounding sketch after this list).
Incomplete understanding is universal across expertise levels; even the experts who model airplane-wing lift and bicycle self-stability admit they cannot deeply explain their own models.
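The Newton example is usually identified with the Newton-Pepys problem: is it more likely to roll at least one six with 6 dice, at least two with 12, or at least three with 18? A minimal sketch, assuming that is the problem the essay refers to, reproduces the correct binomial computation:

```python
from math import comb

def p_at_least(k: int, n: int, p: float = 1 / 6) -> float:
    """Probability of at least k successes in n independent trials."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

for k in (1, 2, 3):
    print(f"P(at least {k} sixes in {6 * k} dice) = {p_at_least(k, 6 * k):.4f}")
# -> 0.6651, 0.6187, 0.5973: the 6-dice wager is the most probable
```

The arithmetic is exact; yet, as the takeaway above notes, the verbal explanation attached to it would have failed on variant problems, which is precisely the gap between computing and understanding that the essay targets.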
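To make the regression takeaway concrete, here is a minimal synthetic sketch (hypothetical data, not from the essay) in which ordinary least squares predicts well while the fitted slope carries no causal meaning, because a hidden confounder drives both variables:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hidden confounder z drives both x and y; x has no causal effect on y.
z = rng.normal(size=n)
x = 2 * z + rng.normal(size=n)
y = 3 * z + rng.normal(size=n)

# OLS of y on x alone fits well, but its slope is purely confounded.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
r2 = 1 - np.var(y - X @ beta) / np.var(y)
print(f"slope = {beta[1]:.2f}, R^2 = {r2:.2f}")  # slope ~ 1.2, R^2 ~ 0.72
```

Here the regression "works" by every predictive criterion, yet intervening on x would leave y unchanged; nothing in the fit itself distinguishes the causal story from the confounded one.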