Addy Osmani’s Agent Skills (26K stars) wraps AI coding agents in senior-engineer scaffolding: specs, tests, reviews, and scope discipline, all enforced as markdown workflow files.
Key Takeaways
A “skill” is a markdown workflow with explicit steps and exit criteria, not a prose best-practices essay – the distinction determines whether agents actually follow it.
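To make that distinction concrete, here is a hypothetical sketch of a step-based skill file; the frontmatter fields, step wording, and exit criteria are illustrative assumptions, not text from the repo.

```markdown
---
name: spec-first
description: Write and confirm a short spec before any code changes.
---

## Steps

1. Restate the task in one paragraph and list the files it affects.
2. Write acceptance criteria as checkable bullet points.
3. Stop and ask the user to confirm the spec before editing any code.

## Exit criteria

- A spec exists in the conversation and the user has approved it.
- Every acceptance criterion is testable.
```

The point is that each step is an instruction the agent can check off, and the exit criteria give it a verifiable definition of done — which a prose essay never provides.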
Anti-rationalization tables pre-empt LLM excuses: each skill pairs common skip-justifications (“too small for a spec”) with written rebuttals baked into the prompt.
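A sketch of what such a table might look like; apart from the “too small for a spec” excuse quoted above, the entries are invented for illustration:

```markdown
## Anti-rationalization

| If you are thinking…                    | Then…                                                           |
| --------------------------------------- | --------------------------------------------------------------- |
| “This change is too small for a spec.”  | Small changes still get a one-paragraph spec.                    |
| “Tests can come after I ship.”          | /build does not exit without a failing-then-passing test.        |
| “The user is in a hurry.”               | Speed does not waive review; surface the time pressure instead.  |
```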
Six lifecycle phases mirror the standard SDLC, each invoked by a slash command (/spec, /plan, /build, /test, /review, /ship); a meta-skill router activates only the skills relevant to the current phase to preserve the context budget.
Progressive disclosure keeps the 20-skill library usable: load skills by phase, not all at session start, to avoid context poisoning.
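As a concrete illustration of the router and phase-based loading described in the last two takeaways, a hypothetical meta-skill might look like this; the command-to-skill mapping and the skill names are assumptions, not taken from the repo:

```markdown
---
name: skill-router
description: Load only the skills relevant to the current lifecycle phase.
---

| Command  | Skills loaded               |
| -------- | --------------------------- |
| /spec    | spec-first                  |
| /plan    | task-breakdown              |
| /build   | scope-discipline, tdd-loop  |
| /test    | coverage-check              |
| /review  | self-review-checklist       |
| /ship    | changelog, release-notes    |

Do not read any other skill file until its phase is active.
```

Under this scheme only the routing table occupies context at session start; the full skill files are read in when their phase begins.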
Scope discipline is treated as non-negotiable: the meta-skill explicitly forbids touching code outside the stated task, directly targeting the “agent rewrites three unrelated files” failure mode.
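For illustration, such a rule might read as follows (phrasing invented, not quoted from the repo):

```markdown
## Scope

- Touch only the files named in the approved spec.
- If a fix seems needed elsewhere, record it as a follow-up task; do not edit the file.
- Refactors outside the stated task require a new /spec cycle.
```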
Hacker News Comment Review
The core skepticism is reliability, not design: commenters argue LLMs will selectively drop workflow requirements from AGENTS.md or skill files, making harness approaches fragile regardless of how well-specified they are.
A counterpoint with traction: human engineers also drop requirements regularly, and the same process-and-review methods used to manage human reliability apply here – perfection is not the bar.
Practical feedback favors treating the repo as a reference library rather than bulk-installing it: team and individual workflows vary too much for a shared 20-skill config to fit without heavy trimming.
Notable Comments
@dmix: Recommends cherry-picking individual skills like vim plugins rather than installing the full set; bulk configs don’t fit varied team workflows.
@zmmmmm: Flags that individual skills run pages long with tables and code examples, raising real context-window cost concerns for multi-skill sessions.