A UChicago undergrad documents campus-wide LLM capture: from business-econ problem sets to in-exam phone submissions to AI-written school-newspaper articles, and possibly even professors' lectures.
Key Takeaways
A 40-percentage-point gap between take-home and in-person logic exams signals that take-home assessments are functionally broken as evaluation tools.
AI use spread in stages: business-econ electives first, then core econ, then humanities, then student publications like The Maroon, then faculty.
Students were photographing exams mid-test to submit to LLMs, copying responses into blue books while a proctor sat at the front of the room.
The Scott Alexander “Whispering Earring” analogy frames LLM dependency as incremental, muscle-movement-level outsourcing of cognition rather than discrete cheating events.
UChicago’s $50M Mansueto AI gift, alongside parallel commitments at Harvard, Yale, and Columbia, signals institutional acceleration into the same dynamic the author describes as pathological.
Hacker News Comment Review
Commenters largely agreed the root problem predates AI: credential-seeking over learning means the “battle was already lost” before LLMs arrived, with cramming-and-forgetting as the prior equilibrium.
The practical fix proposed repeatedly was supervised, in-person, device-free exams (the historical norm), though replies questioned whether exam performance actually predicts competence or good citizenship.
A dissenting thread noted alumni systematically misremember universities as rigorous institutions; the gap between idealized and actual standards is structural, not AI-induced.
Notable Comments
@dgellow: Flags the 40-percentage-point take-home vs. in-person gap as evidence that take-homes are “pretty much dead” for job interviews too, not just coursework.
@arjie: Notes the persistent alumni-vs.-current-student perception gap: standards were never as high as outsiders claim.