Dartmouth provost Santiago Schnell argues AI industrializes education’s pre-existing mistake: rewarding verbal fluency over the formation of genuine judgment.
Key Takeaways
Milton’s 1644 warning: language is the instrument, not the substance; LLMs scale this confusion to industrial throughput across every institution.
LLMs supply finished prose before students have undergone the reading, hesitation, and revision that make language meaningful – a mistake of sequence, now scaled into a system.
Weighing evidence, judging conclusions, and taking responsibility for claims form the mind only when a person actually performs them; once delegated, they simply stop happening.
Recommended fixes: in-class writing, oral defense of arguments, and AI transparency disclosures covering what was asked, kept, rejected, and why.
The root problem predates LLMs; institutions were already rewarding performance over understanding – AI makes the emptiness undeniable.
Hacker News Comment Review
Commenters converge on a floor problem: humans need enough internalized knowledge to evaluate AI outputs usefully; without that baseline, AI interaction is noise.
Historical skeptics note industrial-era schools were explicitly factory prep; commenters doubt institutions will choose formation over output optimization when outsourcing looks like efficiency.
One practitioner applied the thesis directly to engineering teams: oral defense of AI-drafted design documents as friction that surfaces fragile thinking before it ships.
Notable Comments
@lokimedes: names “absorptive capacity” as the key variable – education sets the floor that determines whether AI outputs are useful or just noise.
@mncharity: proposes Fermi questions with collaborative iterative bounding as a concrete AI-resistant exercise that forces genuine quantitative reasoning under uncertainty.
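The bounding exercise @mncharity describes can be sketched in code. This is a hypothetical illustration, not from the comment: each factor in a classic Fermi question (piano tuners in a city) gets a low and high bound, the bounds are multiplied through, and the geometric midpoint gives the estimate. All factor names and numbers here are illustrative assumptions; in the classroom version, students iterate by tightening whichever factor's bounds are loosest.

```python
import math

# Hypothetical Fermi estimate: piano tuners in a large city.
# Each entry is a multiplicative factor with (low, high) bounds.
# All values are illustrative assumptions for the sketch.
factors = [
    ("city population",            2.5e6, 3.0e6),
    ("pianos per person",          0.01,  0.05),
    ("tunings per piano per year", 0.5,   1.0),
    ("tuner-years per tuning",     1/1000, 1/500),  # inverse of a tuner's annual capacity
]

low = high = 1.0
for _, lo, hi in factors:
    low *= lo
    high *= hi

# Geometric midpoint is the conventional point estimate for
# multiplicative bounds spanning orders of magnitude.
estimate = math.sqrt(low * high)
print(f"bounds: [{low:.1f}, {high:.1f}], estimate ≈ {estimate:.0f}")
```

The exercise resists AI outsourcing because the deliverable is not the final number but the defensibility of each bound, which students must argue for factor by factor.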