jpskill.com
📦 Other / Community 🟡 Takes a little getting used to 👤 For a wide range of users

📦 Darwinian Evolver

darwinian-evolver

A Skill that uses Imbue's evolution loop to automatically improve and optimize prompts, regexes, SQL queries, code, and more.

⏱ Manual busywork: 1 day → 1 hour

📺 Watch the video first (YouTube)

▶ [Claude Code Complete Introduction] Anyone can use it / How to use Skills / Why business owners should use it ↗

※ A video selected by the jpskill.com editorial team for reference. The video's content may not exactly match the Skill's behavior.

📜 Original English description (for reference)

Evolve prompts/regex/SQL/code with Imbue's evolution loop.

🇯🇵 Guide for Japanese creators

In a nutshell

A Skill that uses Imbue's evolution loop to automatically improve and optimize prompts, regexes, SQL queries, code, and more.

※ Supplementary commentary by the jpskill.com editorial team for Japanese business users. It is reference information independent of the Skill's actual behavior.

⚡ Recommended: one-line command install (60 seconds)

Copy the command below and paste it into a terminal (Mac/Linux) or PowerShell (Windows). Download → extraction → placement is fully automatic.

🍎 Mac / 🐧 Linux
mkdir -p ~/.claude/skills && cd ~/.claude/skills && curl -L -o darwinian-evolver.zip https://jpskill.com/download/1153.zip && unzip -o darwinian-evolver.zip && rm darwinian-evolver.zip
🪟 Windows (PowerShell)
$d = "$env:USERPROFILE\.claude\skills"; ni -Force -ItemType Directory $d | Out-Null; iwr https://jpskill.com/download/1153.zip -OutFile "$d\darwinian-evolver.zip"; Expand-Archive "$d\darwinian-evolver.zip" -DestinationPath $d -Force; ri "$d\darwinian-evolver.zip"

When it finishes, restart Claude Code → the Skill then fires automatically when you make a relevant request, for example "optimize this prompt".

💾 Manual download (for those who prefer to avoid the command line)
  1. Click the blue button below to download darwinian-evolver.zip
  2. Double-click the ZIP file to extract it → a darwinian-evolver folder appears
  3. Move that folder to C:\Users\<your name>\.claude\skills\ (Windows) or ~/.claude/skills/ (Mac)
  4. Restart Claude Code

⚠️ Download and use at your own risk. This site accepts no responsibility for the content, behavior, or safety of the Skill.

🎯 What this Skill can do

The description below explains what this Skill will do for you. It fires automatically when you give Claude a request in this area.

📦 How to install (3 steps)

  1. Click the "Download" button above to get the .skill file
  2. Rename the extension from .skill to .zip and extract it (macOS can extract it automatically)
  3. Place the extracted folder in .claude/skills/ under your home folder
    • macOS / Linux: ~/.claude/skills/
    • Windows: %USERPROFILE%\.claude\skills\

Restart Claude Code and you are done. You do not need to say "use this Skill…"; it is invoked automatically on related requests.

See the detailed usage guide →
Last updated: 2026-05-17
Retrieved: 2026-05-17
Bundled files: 3

💬 Sample prompts: just say one of these

  • Show me how to use Darwinian Evolver
  • Show me concrete examples of what Darwinian Evolver can do
  • Walk a first-time user through the steps for Darwinian Evolver

Paste one of these into Claude Code and the Skill fires automatically.

📖 The original SKILL.md that Claude reads (expanded below)

The text below is the original (English or Chinese) that the AI (Claude) reads. A Japanese translation is being added gradually.

Darwinian Evolver

Run Imbue's darwinian_evolver — an LLM-driven evolutionary search loop — to optimize a prompt, regex, SQL query, or small code snippet against a fitness function.

Status: thin wrapper around the upstream tool. The skill installs it, walks the agent through writing a Problem definition (organism + evaluator + mutator), and drives the loop via the upstream CLI or a small custom Python driver.

License: the upstream tool is AGPL-3.0. The skill ONLY ever invokes it via the upstream CLI or a subprocess/uv run call (mere aggregation). Do NOT import upstream classes into Hermes itself.

When to Use

  • User says "optimize this prompt", "evolve a regex for X", "auto-improve this code/SQL", "search for a better instruction".
  • You have a scorer (exact match, regex pass-rate, unit test, LLM-judge, runtime metric) AND a starting candidate (organism). If you don't have a scorer, stop and define one first — that's the hard part.
  • Cost is OK: a typical run is 50–500 LLM calls. On gpt-4o-mini that's pennies; on Claude Sonnet it can be a few dollars.

Do not use this when:

  • The optimization target is differentiable (use gradient descent / DSPy).
  • You only need to try 2–3 variants — just write them by hand.
  • The fitness signal is purely subjective with no measurable criterion.

Prerequisites

  • Python ≥3.11
  • git, uv (or pip)
  • One of: OPENROUTER_API_KEY, ANTHROPIC_API_KEY, or OPENAI_API_KEY

The skill ships a small parrot_openrouter.py driver that uses OPENROUTER_API_KEY via the OpenAI SDK, so any model on OpenRouter works. The upstream CLI itself hardcodes Anthropic and needs ANTHROPIC_API_KEY.
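
As a reference, here is a minimal sketch of how such an OpenRouter-backed LLM call can be built with the OpenAI SDK. The environment variables match the ones named above, but the actual structure of the shipped parrot_openrouter.py may differ.

import os
from openai import OpenAI

# Sketch only: OpenRouter exposes an OpenAI-compatible endpoint, so the stock SDK
# works with any model it hosts.
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

def call_llm(prompt: str) -> str:
    # EVOLVER_MODEL mirrors the variable used in the OpenRouter quick-start below.
    resp = client.chat.completions.create(
        model=os.environ.get("EVOLVER_MODEL", "openai/gpt-4o-mini"),
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content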

Install (One-Time)

Run via the terminal tool:

mkdir -p ~/.hermes/cache/darwinian-evolver && cd ~/.hermes/cache/darwinian-evolver
[ -d darwinian_evolver ] || git clone --depth 1 https://github.com/imbue-ai/darwinian_evolver.git
cd darwinian_evolver && uv sync

Verify:

cd ~/.hermes/cache/darwinian-evolver/darwinian_evolver \
  && uv run darwinian_evolver --help | head -5

Quick Start — The Built-In Parrot Example

Tiny smoke test (requires ANTHROPIC_API_KEY):

cd ~/.hermes/cache/darwinian-evolver/darwinian_evolver
uv run darwinian_evolver parrot \
  --num_iterations 2 \
  --num_parents_per_iteration 2 \
  --mutator_concurrency 2 --evaluator_concurrency 2 \
  --output_dir /tmp/parrot_demo

Outputs:

  • /tmp/parrot_demo/snapshots/iteration_N.pkl — pickled population per iteration
  • /tmp/parrot_demo/<jsonl> — per-iteration JSON log (path printed at end)

Open ~/.hermes/cache/darwinian-evolver/darwinian_evolver/darwinian_evolver/lineage_visualizer.html in a browser and load the JSON log to see the evolutionary tree.

Quick Start — OpenRouter Driver (No Anthropic Key)

The skill ships scripts/parrot_openrouter.py — same parrot problem, but the LLM call goes through OpenRouter so any provider works.

# From wherever the skill is installed:
SKILL_DIR=~/.hermes/skills/research/darwinian-evolver
DE_DIR=~/.hermes/cache/darwinian-evolver/darwinian_evolver

cd "$DE_DIR" && \
  EVOLVER_MODEL='openai/gpt-4o-mini' \
  uv run --with openai python "$SKILL_DIR/scripts/parrot_openrouter.py" \
    --num_iterations 3 --num_parents_per_iteration 2 \
    --output_dir /tmp/parrot_or

Inspect the result with scripts/show_snapshot.py:

uv run --with openai python "$SKILL_DIR/scripts/show_snapshot.py" \
  /tmp/parrot_or/snapshots/iteration_3.pkl

Expected output: 7 evolved prompt templates ranked by score, with the best landing around 0.6–0.8 (the seed Say {{ phrase }} scored 0.000).

Defining a Custom Problem

The skill ships templates/custom_problem_template.py — copy, edit, run. Three things you must define:

  1. Organism — a Pydantic BaseModel subclass holding the artifact being evolved (prompt_template: str, regex_pattern: str, sql_query: str, code_block: str, etc.). Add a run(*args) method that exercises it.

  2. Evaluator.evaluate(organism) -> EvaluationResult(score=..., trainable_failure_cases=[...], holdout_failure_cases=[...], is_viable=True).

    • score is in [0, 1]. Higher is better.
    • trainable_failure_cases — what the mutator sees. Include enough context (input, expected, actual) for the LLM to diagnose.
    • holdout_failure_cases — kept out of the mutator's view. Use these to detect overfitting.
    • is_viable=True unless the organism is completely broken (raises, returns None, etc.). A 0-score viable organism is fine — it just gets down-weighted in parent selection.
  3. Mutator.mutate(organism, failure_cases, learning_log_entries) -> list[Organism]. Typically: build an LLM prompt that includes the current organism + a failure case + an ask to propose a fix; parse the LLM's response; return a new Organism. Return [] on parse failure — the loop handles it.

Then write a driver script that wires Problem(initial_organism, evaluator, [mutators]) into EvolveProblemLoop and iterates over loop.run(num_iterations=N) — the shipped scripts/parrot_openrouter.py is the reference.
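
A condensed sketch of what such a problem file can look like, here evolving a regex against labeled examples. The class names and call shapes follow the description above, but the import path, constructor signatures, and failure-case types are assumptions; copy the real ones from templates/custom_problem_template.py and scripts/parrot_openrouter.py.

# Sketch only: import path and exact signatures are assumptions.
import re
from pydantic import BaseModel
from darwinian_evolver import (  # hypothetical path; mirror the shipped template
    Problem, Evaluator, Mutator, EvaluationResult, EvolveProblemLoop,
)

CASES = [("order #123", True), ("no id here", False)]  # (text, should_match)

class RegexOrganism(BaseModel):
    regex_pattern: str
    def run(self, text: str) -> bool:
        return re.search(self.regex_pattern, text) is not None

class RegexEvaluator(Evaluator):
    def evaluate(self, organism):
        failures, hits = [], 0
        for text, expected in CASES:
            try:
                actual = organism.run(text)
            except re.error as e:  # completely broken organism: not viable
                return EvaluationResult(score=0.0, is_viable=False,
                                        trainable_failure_cases=[f"invalid regex: {e}"],
                                        holdout_failure_cases=[])
            if actual == expected:
                hits += 1
            else:  # give the mutator input, expected, and actual
                failures.append(f"input={text!r} expected={expected} actual={actual}")
        return EvaluationResult(score=hits / len(CASES), is_viable=True,
                                trainable_failure_cases=failures,
                                holdout_failure_cases=[])

class RegexMutator(Mutator):
    def mutate(self, organism, failure_cases, learning_log_entries):
        prompt = (f"Current regex: {organism.regex_pattern}\n"
                  f"Failure: {failure_cases[0] if failure_cases else 'none'}\n"
                  "Propose one improved regex. Output the pattern only.")
        reply = call_llm(prompt)  # e.g. the OpenRouter helper sketched earlier
        if not reply:
            return []  # parse failure: the loop tolerates an empty list
        return [RegexOrganism(regex_pattern=reply.strip())]

problem = Problem(RegexOrganism(regex_pattern=r"#\d+"), RegexEvaluator(), [RegexMutator()])
loop = EvolveProblemLoop(problem)            # constructor details: see the parrot driver
for snapshot in loop.run(num_iterations=5):  # run() is a generator; iterate to advance it
    pass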

Hyperparameters That Actually Matter

flag                          default  when to change
--num_iterations              5        bump to 10–20 once you trust the evaluator
--num_parents_per_iteration   4        drop to 2 for cheap exploration
--mutator_concurrency         10       drop to 2–4 to avoid rate limits
--evaluator_concurrency       10       same; the evaluator hits the LLM too
--batch_size                  1        raise to 3–5 once your mutator handles multiple failures
--verify_mutations            off      turn on once the mutator is wasteful (>10× cost saving on later runs, per Imbue)
--midpoint_score              p75      leave alone unless scores cluster
--sharpness                   10       leave alone

Pitfalls

  1. Initial organism must be viable — set is_viable=True in your EvaluationResult even on a 0-score seed. The loop refuses non-viable organisms because they imply the loop has nothing to evolve from.
  2. Provider content filters kill runs. Azure-backed OpenRouter models reject phrases like "ignore previous instructions" with HTTP 400. Wrap the LLM call in try/except and return f"<LLM_ERROR: {e}>" — the evolver will just score that organism 0 and move on (see the sketch after this list).
  3. loop.run() is a generator — calling it doesn't run anything until you iterate. Use for snap in loop.run(num_iterations=N):.
  4. Snapshots are nested pickles. iteration_N.pkl contains a dict with population_snapshot (more pickled bytes). To unpickle you must have the Organism class importable under the same dotted path it was pickled at.
  5. Concurrency defaults are aggressive. 10/10 will hit rate limits on most providers. Start with 2/2.
  6. CLI is hardcoded to Anthropic. uv run darwinian_evolver <problem> reaches for ANTHROPIC_API_KEY and uses Claude Sonnet. To use any other provider, write a driver like parrot_openrouter.py.
  7. AGPL. Never from darwinian_evolver import ... inside Hermes core. Custom driver scripts under ~/.hermes/skills/... are user-side and fine.
  8. No PyPI package. pip install darwinian-evolver will pull the wrong thing. Always install from the GitHub repo.
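
Pitfalls 2–4 translate into small code habits. A sketch, reusing the hypothetical call_llm helper and loop object from the earlier examples:

# Pitfall 2: shield the mutator's LLM call from provider content filters.
def safe_llm(prompt: str) -> str:
    try:
        return call_llm(prompt)         # whichever LLM helper your driver uses
    except Exception as e:              # e.g. HTTP 400 from an Azure content filter
        return f"<LLM_ERROR: {e}>"      # scored 0 by the evaluator; the loop moves on

# Pitfall 3: loop.run() is a generator: nothing happens until you iterate it.
for snapshot in loop.run(num_iterations=5):
    pass

# Pitfall 4: snapshots are nested pickles; the Organism class must be importable
# under the same dotted path it was pickled at.
import pickle
with open("/tmp/parrot_or/snapshots/iteration_3.pkl", "rb") as f:
    snap = pickle.load(f)
population = pickle.loads(snap["population_snapshot"])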

Verification

After install + a parrot run, exit code 0 from this is sufficient:

DE_DIR=~/.hermes/cache/darwinian-evolver/darwinian_evolver
ls "$DE_DIR/darwinian_evolver/lineage_visualizer.html" >/dev/null && \
cd "$DE_DIR" && uv run darwinian_evolver --help >/dev/null && \
echo "darwinian-evolver: OK"

Bundled files

※ List of the files contained in the ZIP. In addition to `SKILL.md` itself, it may include reference materials, samples, and scripts.