# auto-claude-optimization
Auto-Claude performance optimization and cost management. Use when optimizing token usage, reducing API costs, improving build speed, or tuning agent performance.
Copy the appropriate command below and paste it into Terminal (Mac/Linux) or PowerShell (Windows). Download → extract → install is fully automated.

macOS / Linux:

```bash
mkdir -p ~/.claude/skills && cd ~/.claude/skills && curl -L -o auto-claude-optimization.zip https://jpskill.com/download/9359.zip && unzip -o auto-claude-optimization.zip && rm auto-claude-optimization.zip
```

Windows (PowerShell):

```powershell
$d = "$env:USERPROFILE\.claude\skills"; ni -Force -ItemType Directory $d | Out-Null; iwr https://jpskill.com/download/9359.zip -OutFile "$d\auto-claude-optimization.zip"; Expand-Archive "$d\auto-claude-optimization.zip" -DestinationPath $d -Force; ri "$d\auto-claude-optimization.zip"
```
Once it finishes, restart Claude Code. Then just ask for something in this area as you normally would (for example, "optimize my token usage") and the skill is invoked automatically.
## 💾 Manual download (if the command line is not your thing)

1. Click the download button below to get auto-claude-optimization.zip.
2. Double-click the ZIP file to extract it → an auto-claude-optimization folder is created.
3. Move that folder to C:\Users\あなたの名前\.claude\skills\ (Windows) or ~/.claude/skills/ (Mac).
4. Restart Claude Code.
⚠️ Download and use at your own risk. This site accepts no responsibility for the skill's content, behavior, or safety.
## 🎯 What this Skill does

The description below explains what this Skill will do for you. When you ask Claude for help in this area, it is invoked automatically.
## 📦 Installation (3 steps)

1. Click the "Download" button above to get the .skill file.
2. Rename the extension from .skill to .zip and extract it (macOS can extract it automatically).
3. Place the extracted folder in .claude/skills/ under your home folder:
   - macOS / Linux: ~/.claude/skills/
   - Windows: %USERPROFILE%\.claude\skills\

Restart Claude Code and you're done. Even without saying "use this Skill…", it is invoked automatically for related requests.
- Last updated: 2026-05-18
- Retrieved: 2026-05-18
- Included files: 1
## 📖 Original SKILL.md (the text Claude reads)

The content below is the original English text that Claude reads.
# Auto-Claude Optimization

Performance tuning, cost reduction, and efficiency improvements.

## Performance Overview

### Key Metrics
| Metric | Impact | Optimization |
|---|---|---|
| API latency | Build speed | Model selection, caching |
| Token usage | Cost | Prompt efficiency, context limits |
| Memory queries | Speed | Embedding model, index tuning |
| Build iterations | Time | Spec quality, QA settings |
## Model Optimization

### Model Selection
| Model | Speed | Cost | Quality | Use Case |
|---|---|---|---|---|
| claude-opus-4-5-20251101 | Slow | High | Best | Complex features |
| claude-sonnet-4-5-20250929 | Fast | Medium | Good | Standard features |
```bash
# Override model in .env
AUTO_BUILD_MODEL=claude-sonnet-4-5-20250929
```
### Extended Thinking Tokens
Configure thinking budget per agent:
| Agent | Default | Recommended |
|---|---|---|
| Spec creation | 16000 | Keep default for quality |
| Planning | 5000 | Reduce to 3000 for speed |
| Coding | 0 | Keep disabled |
| QA Review | 10000 | Reduce to 5000 for speed |
```python
# In agent configuration
max_thinking_tokens = 5000  # or None to disable
```
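As a rough sketch, the per-agent budgets from the table above could be collected in one place like this. The dictionary keys and structure are illustrative, not Auto-Claude's actual configuration schema:

```python
# Illustrative mapping of agents to thinking budgets, mirroring the table above.
# The real Auto-Claude configuration keys may differ.
THINKING_BUDGETS: dict[str, int | None] = {
    "spec_creation": 16000,  # keep the default for quality
    "planning": 3000,        # reduced from 5000 for speed
    "coding": None,          # disabled
    "qa_review": 5000,       # reduced from 10000 for speed
}
```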
## Token Optimization

### Reduce Context Size
- **Smaller spec files**

  ```
  # Keep specs concise
  # Bad: 5000 word spec
  # Good: 500 word spec with clear criteria
  ```

- **Limit codebase scanning**

  ```python
  # In context/builder.py
  MAX_CONTEXT_FILES = 50  # Reduce from 100
  ```

- **Use targeted searches** (see the sketch after this list)

  ```python
  # Instead of full codebase scan
  # Focus on relevant directories
  ```
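For example, a targeted scan might look something like the following sketch, which walks only a handful of directories and stops at the `MAX_CONTEXT_FILES` cap. The helper and directory names are hypothetical, not part of Auto-Claude:

```python
from pathlib import Path

MAX_CONTEXT_FILES = 50  # same cap as in context/builder.py

def collect_context_files(root: str, relevant_dirs: list[str]) -> list[Path]:
    """Gather source files from a few relevant directories instead of the whole repo."""
    files: list[Path] = []
    for d in relevant_dirs:
        base = Path(root, d)
        if not base.is_dir():
            continue
        for path in sorted(base.rglob("*.py")):
            files.append(path)
            if len(files) >= MAX_CONTEXT_FILES:
                return files
    return files

# Example: only look at the backend API and its tests, not the entire codebase.
context = collect_context_files(".", ["apps/backend/api", "tests/api"])
```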
### Efficient Prompts

Optimize system prompts in `apps/backend/prompts/`:

```markdown
<!-- Bad: Verbose -->
You are an expert software developer who specializes in building
high-quality, production-ready applications. You have extensive
experience with many programming languages and frameworks...

<!-- Good: Concise -->
Expert full-stack developer. Build production-quality code.
Follow existing patterns. Test thoroughly.
```
## Memory Optimization

```bash
# Use efficient embedding model
OPENAI_EMBEDDING_MODEL=text-embedding-3-small

# Or offline with smaller model
OLLAMA_EMBEDDING_MODEL=all-minilm
OLLAMA_EMBEDDING_DIM=384
```
## Speed Optimization

### Parallel Execution

```bash
# Enable more parallel agents (default: 4)
MAX_PARALLEL_AGENTS=8
```
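Conceptually, a cap like `MAX_PARALLEL_AGENTS` is just a bound on how many agent tasks run at once. A minimal asyncio sketch of that idea follows; it illustrates the concept only and is not Auto-Claude's actual scheduler:

```python
import asyncio
import os

MAX_PARALLEL_AGENTS = int(os.getenv("MAX_PARALLEL_AGENTS", "4"))

async def run_agent(task: str) -> str:
    # Placeholder for a real agent invocation.
    await asyncio.sleep(1)
    return f"done: {task}"

async def run_all(tasks: list[str]) -> list[str]:
    sem = asyncio.Semaphore(MAX_PARALLEL_AGENTS)

    async def bounded(task: str) -> str:
        async with sem:  # at most MAX_PARALLEL_AGENTS agents run concurrently
            return await run_agent(task)

    return await asyncio.gather(*(bounded(t) for t in tasks))

if __name__ == "__main__":
    print(asyncio.run(run_all([f"subtask-{i}" for i in range(10)])))
```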
### Reduce QA Iterations

```bash
# Limit QA loop iterations
MAX_QA_ITERATIONS=10  # Default: 50

# Skip QA for quick iterations
python run.py --spec 001 --skip-qa
```
### Faster Spec Creation

```bash
# Force simple complexity for quick tasks
python spec_runner.py --task "Fix typo" --complexity simple

# Skip research phase
SKIP_RESEARCH_PHASE=true python spec_runner.py --task "..."
```
### API Timeout Tuning

```bash
# Reduce timeout for faster failure detection
API_TIMEOUT_MS=120000  # 2 minutes (default: 10 minutes)
```
## Cost Management

### Monitor Token Usage

```bash
# Enable cost tracking
ENABLE_COST_TRACKING=true

# View usage report
python usage_report.py --spec 001
```
### Cost Reduction Strategies

- **Use cheaper models for simple tasks**

  ```bash
  # For simple specs
  AUTO_BUILD_MODEL=claude-sonnet-4-5-20250929 python spec_runner.py --task "..."
  ```

- **Limit context window**

  ```bash
  MAX_CONTEXT_TOKENS=50000  # Reduce from 100000
  ```

- **Batch similar tasks**

  ```bash
  # Create specs together, run together
  python spec_runner.py --task "Add feature A"
  python spec_runner.py --task "Add feature B"
  python run.py --spec 001
  python run.py --spec 002
  ```

- **Use local models for memory**

  ```bash
  # Ollama for memory (free)
  GRAPHITI_LLM_PROVIDER=ollama
  GRAPHITI_EMBEDDER_PROVIDER=ollama
  ```
### Cost Estimation
| Operation | Estimated Tokens | Cost (Opus) | Cost (Sonnet) |
|---|---|---|---|
| Simple spec | 10k | ~$0.30 | ~$0.06 |
| Standard spec | 50k | ~$1.50 | ~$0.30 |
| Complex spec | 200k | ~$6.00 | ~$1.20 |
| Build (simple) | 50k | ~$1.50 | ~$0.30 |
| Build (standard) | 200k | ~$6.00 | ~$1.20 |
| Build (complex) | 500k | ~$15.00 | ~$3.00 |
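The figures above imply a blended rate of roughly $30 per million tokens on Opus and $6 per million on Sonnet. A small back-of-the-envelope estimator using those assumed blended rates (illustrative only; real pricing splits input and output tokens differently):

```python
# Blended $/1M-token rates implied by the table above (an assumption, not official pricing).
BLENDED_RATE_PER_MTOK = {
    "claude-opus-4-5-20251101": 30.00,
    "claude-sonnet-4-5-20250929": 6.00,
}

def estimate_cost(tokens: int, model: str) -> float:
    """Rough cost estimate for a spec or build, given total token usage."""
    return tokens / 1_000_000 * BLENDED_RATE_PER_MTOK[model]

# A standard build (~200k tokens): ~$6.00 on Opus vs ~$1.20 on Sonnet.
print(estimate_cost(200_000, "claude-opus-4-5-20251101"))
print(estimate_cost(200_000, "claude-sonnet-4-5-20250929"))
```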
## Memory System Optimization

### Embedding Performance

```bash
# Faster embeddings
OPENAI_EMBEDDING_MODEL=text-embedding-3-small  # 1536 dim, fast

# Higher quality (slower)
OPENAI_EMBEDDING_MODEL=text-embedding-3-large  # 3072 dim

# Offline (fastest, free)
OLLAMA_EMBEDDING_MODEL=all-minilm
OLLAMA_EMBEDDING_DIM=384
```
### Query Optimization

```python
# Limit search results
memory.search("query", limit=10)  # Instead of 100
```

```bash
# Use semantic caching
ENABLE_MEMORY_CACHE=true
```
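To illustrate what a memory cache buys you, here is a minimal query-result cache around a search call. This is an exact-match memoization sketch with a stand-in memory client, not Auto-Claude's actual `ENABLE_MEMORY_CACHE` implementation; a true semantic cache would also match near-duplicate queries by embedding similarity:

```python
from functools import lru_cache

class FakeMemory:
    """Stand-in for the project's memory client, for demonstration only."""
    def search(self, query: str, limit: int = 10) -> list[str]:
        print(f"expensive lookup: {query!r}")
        return [f"result-{i}" for i in range(limit)]

memory = FakeMemory()

@lru_cache(maxsize=256)
def cached_search(query: str, limit: int = 10) -> tuple[str, ...]:
    # Identical (query, limit) pairs reuse the previous result instead of hitting the store.
    return tuple(memory.search(query, limit=limit))

cached_search("authentication flow")  # performs the lookup
cached_search("authentication flow")  # served from the cache
```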
### Database Maintenance

```bash
# Compact database periodically
python -c "from integrations.graphiti.memory import compact_database; compact_database()"

# Clear old episodes
python query_memory.py --cleanup --older-than 30d
```
## Build Efficiency

### Spec Quality = Build Speed

High-quality specs reduce iterations:

```markdown
# Good spec (fewer iterations)
## Acceptance Criteria
- [ ] User can log in with email/password
- [ ] Invalid credentials show error message
- [ ] Successful login redirects to /dashboard
- [ ] Session persists for 24 hours
```

```markdown
# Bad spec (more iterations)
## Acceptance Criteria
- [ ] Login works
```
### Subtask Granularity
Optimal subtask size:
- Too large: Agent gets stuck, needs recovery
- Too small: Overhead per subtask
- Optimal: 30-60 minutes of work each
### Parallel Work

Let agents spawn subagents for parallel execution:

```
Main Coder
├── Subagent 1: Frontend (parallel)
├── Subagent 2: Backend (parallel)
└── Subagent 3: Tests (parallel)
```
## Environment Tuning

### Optimal .env Configuration

```bash
# Performance-focused configuration
AUTO_BUILD_MODEL=claude-sonnet-4-5-20250929
API_TIMEOUT_MS=180000
MAX_PARALLEL_AGENTS=6

# Memory optimization
GRAPHITI_LLM_PROVIDER=ollama
GRAPHITI_EMBEDDER_PROVIDER=ollama
OLLAMA_LLM_MODEL=llama3.2:3b
OLLAMA_EMBEDDING_MODEL=all-minilm
OLLAMA_EMBEDDING_DIM=384

# Reduce verbosity
DEBUG=false
ENABLE_FANCY_UI=false
```
### Resource Limits

```bash
# Use the system malloc allocator (PYTHONMALLOC does not cap memory,
# but can reduce allocator overhead and help when profiling)
export PYTHONMALLOC=malloc

# Set max file descriptors
ulimit -n 4096
```
## Benchmarking

### Measure Build Time

```bash
# Time a build
time python run.py --spec 001

# Compare models
time AUTO_BUILD_MODEL=claude-opus-4-5-20251101 python run.py --spec 001
time AUTO_BUILD_MODEL=claude-sonnet-4-5-20250929 python run.py --spec 001
```
### Profile Memory Usage

```bash
# Monitor memory
watch -n 1 'ps aux | grep python | head -5'

# Profile script
python -m cProfile -o profile.stats run.py --spec 001
python -c "import pstats; p = pstats.Stats('profile.stats'); p.sort_stats('cumulative').print_stats(20)"
```
## Quick Wins

### Immediate Optimizations

- **Switch to Sonnet for most tasks**

  ```bash
  AUTO_BUILD_MODEL=claude-sonnet-4-5-20250929
  ```

- **Use Ollama for memory**

  ```bash
  GRAPHITI_LLM_PROVIDER=ollama
  GRAPHITI_EMBEDDER_PROVIDER=ollama
  ```

- **Skip QA for prototypes**

  ```bash
  python run.py --spec 001 --skip-qa
  ```

- **Force simple complexity for small tasks**

  ```bash
  python spec_runner.py --task "..." --complexity simple
  ```
### Medium-Term Improvements

- Optimize prompts in `apps/backend/prompts/`
- Configure project-specific security allowlist
- Set up memory caching
- Tune parallel agent count
### Long-Term Strategies
- Self-hosted LLM for memory (Ollama)
- Caching layer for common operations
- Incremental context building
- Project-specific prompt optimization
## Related Skills

- `auto-claude-memory`: Memory configuration
- `auto-claude-build`: Build process
- `auto-claude-troubleshooting`: Debugging