TacoSkill LAB

The full-lifecycle AI skills platform.



© 2026 TacoSkill LAB. All rights reserved.


agent-evaluation

1.3

by majiayu000

0 Views · 74 Favorites · 122 Upvotes · 0 Downvotes

Evaluate and improve Claude Code commands, skills, and agents. Use when testing prompt effectiveness, validating context engineering choices, or measuring improvement quality.
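Claude Code skills of this kind are packaged as a SKILL.md file with YAML frontmatter. A minimal sketch of what this skill's frontmatter might look like, using the description shown on this page; the body content is illustrative, not taken from the actual repository:

```markdown
---
name: agent-evaluation
description: Evaluate and improve Claude Code commands, skills, and agents. Use when testing prompt effectiveness, validating context engineering choices, or measuring improvement quality.
---

# Agent Evaluation

(Instructions the model follows when the skill is invoked go here.)
```

The `description` field is what the agent uses to decide when to load the skill, so it typically states both what the skill does and when to use it, as this one does.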

evaluation

Rating: 1.3
Installs: 0
Category: AI & LLM

Quick Review

No summary available.

LLM Signals

Description coverage: -
Task knowledge: -
Structure: -
Novelty: -

GitHub Signals

49 · 7 · 1 · 1
Last commit: 0 days ago




Publisher

majiayu000 (Skill Author)

Related Skills

  • prompt-engineer · Jeffallan · 7.0
  • mcp-developer · Jeffallan · 6.4
  • rag-architect · Jeffallan · 7.0
  • fine-tuning-expert · Jeffallan · 6.4