A builder reflects on neurons-playing-DOOM demos and whether 200,000 cultured human neurons constitute a conscious entity deserving moral consideration.
Key Takeaways
A lab grew cultured neurons and trained them with reinforcement-style reward signals to play DOOM, raising the question of whether reward-driven training implies inner experience (a sketch of what such a loop might look like follows this list).
The author distinguishes LLMs (next-token predictors, in his view without inner life) from biological neurons, which he argues perform genuine signal interpretation analogous to biological seeing.
At 200,000 neurons, the culture exceeds the neuron counts of jellyfish and worms, leaving existing “not enough to matter” intuitions with little to anchor to.
Commercial incentives (energy efficiency, storage density vs. silicon) ensure biocomputing development continues regardless of unresolved ethics.
No regulatory framework or established public discourse currently exists for assigning moral status to cultured neural tissue.
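As a rough illustration of what “reinforcement-style rewards” can mean for cultured neurons, here is a minimal sketch in the spirit of earlier neurons-playing-Pong work: structured, predictable stimulation serves as the “reward” and unstructured noise as the “punishment.” Every name here (NeuronChip, read_spikes, stimulate, the game hook) is a hypothetical stand-in, not the demo’s actual interface.

```python
import random

# Hypothetical multi-electrode-array interface; these names are invented
# stand-ins, not the demo's real API.
class NeuronChip:
    N_CHANNELS = 64

    def read_spikes(self) -> list[float]:
        """Return recent firing rates per electrode (stubbed here)."""
        return [random.random() for _ in range(self.N_CHANNELS)]

    def stimulate(self, pattern: list[float]) -> None:
        """Deliver a stimulation pattern to the electrodes (no-op stub)."""

def decode_action(spikes: list[float]) -> str:
    """Toy decoder: pooled activity on each half of the array picks a move."""
    half = len(spikes) // 2
    return "left" if sum(spikes[:half]) > sum(spikes[half:]) else "right"

def reward_pattern(n: int) -> list[float]:
    """Structured, predictable stimulation: the 'reward'."""
    return [1.0 if i % 2 == 0 else 0.0 for i in range(n)]

def punish_pattern(n: int) -> list[float]:
    """Unstructured noise: the 'punishment'."""
    return [random.random() for _ in range(n)]

def training_step(chip: NeuronChip, game) -> None:
    """One closed-loop step: read activity, act, feed back reward or noise."""
    action = decode_action(chip.read_spikes())
    went_well = game.step(action)  # hypothetical hook: True on a good outcome
    pattern = reward_pattern(chip.N_CHANNELS) if went_well else punish_pattern(chip.N_CHANNELS)
    chip.stimulate(pattern)
```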
Hacker News Comment Review
The technical reality of the DOOM demo is more complex than the post implies: a full PyTorch stack wraps the neuron chip, making it unclear how much work the biological tissue actually does versus the silicon scaffolding.
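To make that concern concrete, here is a hypothetical sketch of such a closed loop (every module and method name is invented, not taken from the repo): trained PyTorch networks encode the game state into stimulation and decode recorded activity into actions, so much of the gameplay competence could plausibly live in the silicon on either side of the tissue.

```python
import torch
import torch.nn as nn

# Hypothetical closed loop. The encoder and decoder are ordinary trained
# PyTorch networks; the biological chip sits between them. If most of the
# learned structure lives in these nets, the tissue may contribute little.
encoder = nn.Sequential(nn.Linear(128, 64), nn.Tanh())  # game state -> stimulation
decoder = nn.Linear(64, 4)                              # recorded activity -> action logits

def closed_loop_step(game_state: torch.Tensor, chip) -> int:
    stim = encoder(game_state)   # silicon: learned encoding
    spikes = chip.run(stim)      # biology: opaque transformation (hypothetical API)
    logits = decoder(spikes)     # silicon: learned decoding
    return int(logits.argmax())  # index of the chosen game action
```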
Commenters debate whether consciousness requires embodied brainstem signals (hunger, pain) rather than cortical tissue or petri-dish cultures, citing Mark Solms’ “The Hidden Spring” as a framework suggesting isolated neurons are unlikely to be conscious.
A recurring thread questions why AI-consciousness advocates are not more vocal about biocomputing risks, and notes that moral intuitions tend to track physical resemblance to humans rather than neuron count or capability.
Notable Comments
@pjs_: Points to the actual doom-neuron GitHub repo, which shows a PyTorch stack underneath, and urges careful interpretation of what the neurons themselves contribute.
@croemer: Suggests replacing the neurons’ output with /dev/urandom as a control, to test whether the chip’s DOOM performance is real signal or artifact, echoing the qday prize methodology.
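A minimal sketch of that control, under the same hypothetical interface as the sketches above: feed the decoding pipeline bytes from the OS entropy pool instead of recordings and compare scores. If the gap is small, the silicon scaffolding, not the neurons, is carrying the performance.

```python
import os

def urandom_spikes(n_channels: int = 64) -> list[float]:
    """Stand-in 'neural' readout: bytes from the OS entropy pool
    (/dev/urandom on Unix), scaled to [0, 1]."""
    return [b / 255.0 for b in os.urandom(n_channels)]

def random_baseline(run_episode, n_trials: int = 100) -> float:
    """Mean game score when random bytes replace the neural signal.

    `run_episode` is a hypothetical harness that plays one DOOM episode
    given a spike-source callable and returns the final score.
    """
    return sum(run_episode(urandom_spikes) for _ in range(n_trials)) / n_trials

# Hypothetical comparison against real tissue:
#   real = sum(run_episode(chip.read_spikes) for _ in range(100)) / 100
#   rand = random_baseline(run_episode)
# A small gap between real and rand would suggest the silicon,
# not the tissue, is doing the work.
```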