TacoSkill LAB

The full-lifecycle AI skills platform.



skills-eval

6.6

by athola

167 Favorites
156 Upvotes
0 Downvotes

Evaluate and improve Claude skill quality through auditing. Triggers: quality-assurance, skills, optimization, tool-use, performance-metrics, skill audit, quality review, compliance check, improvement suggestions, token usage analysis, skill evaluation, skill assessment, skill optimization, skill standards, skill metrics, skill performance. Use when reviewing skill quality, preparing skills for production, or auditing existing skills. Do not use when creating new skills (use modular-skills) or writing prose (use writing-clearly-and-concisely). Use this skill before shipping any skill to production.

quality-assurance

Rating: 6.6
Installs: 0
Category: Testing & Quality

Quick Review

Excellent skill with clear description, comprehensive task knowledge, and well-organized structure. The description clearly specifies when to use (reviewing skill quality, pre-production audits) and when not to use (creating new skills, writing prose). Task knowledge is thorough with specific scripts (skills_auditor.py, improvement_suggester.py, compliance_checker.py, etc.) and detailed workflows covering audit, analysis, optimization, and compliance checking. Structure is exemplary with concise SKILL.md providing overview and quick start, while deferring detailed methodologies to referenced modules. Novelty is solid: skill evaluation with multi-dimensional scoring, automated compliance checking, and token optimization represents meaningful complexity that would require significant agent effort to replicate. Minor deduction on novelty as skill evaluation frameworks, while valuable, are conceptually straightforward compared to highly specialized technical domains.
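
The review references auditing scripts (skills_auditor.py, improvement_suggester.py, compliance_checker.py) and multi-dimensional scoring, but none of their code is shown on this page. The Python sketch below is a purely hypothetical illustration of what such an audit pass could look like; the dimension names, heuristics, and aggregation are assumptions, not the skill's actual implementation.

# Hypothetical sketch only; not taken from skills-eval. Names, heuristics,
# and the scoring formula are assumed for illustration.
from dataclasses import dataclass


@dataclass
class DimensionScore:
    name: str
    score: int  # 0-10, higher is better
    rationale: str


def audit_skill(skill_md: str) -> list[DimensionScore]:
    """Score a SKILL.md along a few illustrative quality dimensions."""
    text = skill_md.lower()
    has_use_when = "use when" in text
    has_do_not_use = "do not use" in text
    word_count = len(skill_md.split())

    description = DimensionScore(
        name="description coverage",
        score=9 if (has_use_when and has_do_not_use) else 5,
        rationale=(
            "states both when to use and when not to use"
            if (has_use_when and has_do_not_use)
            else "missing explicit usage boundaries"
        ),
    )
    structure = DimensionScore(
        name="structure",
        score=9 if word_count < 1500 else 6,
        rationale=(
            "SKILL.md stays concise and defers detail to referenced modules"
            if word_count < 1500
            else "SKILL.md is long; consider moving detail into modules"
        ),
    )
    return [description, structure]


def overall(scores: list[DimensionScore]) -> float:
    """Unweighted mean of dimension scores; the platform's real rating formula is not published here."""
    return round(sum(s.score for s in scores) / len(scores), 1)


if __name__ == "__main__":
    with open("SKILL.md", encoding="utf-8") as f:
        report = audit_skill(f.read())
    for item in report:
        print(f"{item.name}: {item.score}/10 ({item.rationale})")
    print("overall:", overall(report))

A real auditor in the spirit of the review would presumably cover more dimensions (task knowledge, novelty, token usage) and apply explicit compliance rules rather than these two toy heuristics.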

LLM Signals

Description coverage: 9
Task knowledge: 9
Structure: 9
Novelty: 7

GitHub Signals

135
16
1
73
Last commit 0 days ago

Publisher

athola

Skill Author

Related Skills

  • code-reviewer by Jeffallan (rating 6.4)
  • debugging-wizard by Jeffallan (rating 6.4)
  • test-master by Jeffallan (rating 6.4)
  • playwright-expert by Jeffallan (rating 6.4)