cs-ml
ML: supervised/unsupervised/RL, CNN/RNN/Transformer, training, evaluation, MLOps, LLM fine-tuning
Copy the command below and paste it into your terminal (Mac/Linux) or PowerShell (Windows). Download → extract → install, fully automatic.

Mac/Linux:

```
mkdir -p ~/.claude/skills && cd ~/.claude/skills && curl -L -o cs-ml.zip https://jpskill.com/download/22166.zip && unzip -o cs-ml.zip && rm cs-ml.zip
```

Windows (PowerShell):

```
$d = "$env:USERPROFILE\.claude\skills"; ni -Force -ItemType Directory $d | Out-Null; iwr https://jpskill.com/download/22166.zip -OutFile "$d\cs-ml.zip"; Expand-Archive "$d\cs-ml.zip" -DestinationPath $d -Force; ri "$d\cs-ml.zip"
```
When it finishes, restart Claude Code → then just talk to Claude normally, e.g. "Make me a video prompt", and the Skill activates automatically.
💾 Manual download (if the commands are too difficult)
- 1. Press the blue button below to download cs-ml.zip
- 2. Double-click the ZIP file to extract it → a cs-ml folder appears
- 3. Move that folder to C:\Users\あなたの名前\.claude\skills\ (Windows) or ~/.claude/skills/ (Mac)
- 4. Restart Claude Code
⚠️ Download and use at your own risk. This site accepts no responsibility for the content, behavior, or safety of this Skill.
🎯 What this Skill can do
The description below explains what this Skill will do for you. It activates automatically when you ask Claude for help in this domain.
📦 Installation (3 steps)
- 1. Press the "Download" button above to get the .skill file
- 2. Rename the extension from .skill to .zip and extract it (macOS can extract automatically)
- 3. Place the extracted folder in .claude/skills/ under your home folder
  - macOS / Linux: ~/.claude/skills/
  - Windows: %USERPROFILE%\.claude\skills\

Restart Claude Code and you are done. You do not need to say "use this Skill…"; it is invoked automatically on related requests.
- Last updated: 2026-05-18
- Retrieved: 2026-05-18
- Included files: 1
📖 SKILL.md (the original text Claude reads)
This text is the original (English or Chinese) that the AI (Claude) reads. A Japanese translation is being added progressively.
cs-ml
Purpose
This skill handles machine learning tasks, including supervised/unsupervised learning, reinforcement learning, CNN/RNN/Transformer models, training pipelines, evaluation metrics, MLOps workflows, and LLM fine-tuning. It integrates with OpenClaw to automate code generation and execution for ML projects.
When to Use
Use this skill when building ML models from scratch, fine-tuning pre-trained models like BERT, deploying models via MLOps, or evaluating performance. Apply it for tasks involving large datasets, neural networks, or production pipelines, such as image recognition with CNNs or text generation with Transformers.
Key Capabilities
- Train supervised models using algorithms like linear regression or decision trees via scikit-learn integration.
- Implement unsupervised learning with K-means clustering or PCA for dimensionality reduction.
- Build and train deep learning models: CNNs for images (e.g., using Keras), RNNs for sequences, or Transformers for NLP tasks.
- Handle RL environments with libraries like Stable Baselines, including Q-learning loops.
- Evaluate models with metrics like accuracy, F1-score, or ROC curves, and generate confusion matrices.
- Manage MLOps: model deployment to containers, monitoring with MLflow, and CI/CD integration.
- Fine-tune LLMs like GPT variants using Hugging Face Transformers, with techniques like LoRA for efficiency.
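To make the evaluation capability concrete, here is a minimal, dependency-free sketch of accuracy and F1 computed from a binary confusion matrix (in practice the skill would generate this via scikit-learn; the labels below are invented for illustration):

```python
from collections import Counter

def confusion_counts(y_true, y_pred):
    """Count TP/FP/FN/TN for binary labels (1 = positive class)."""
    c = Counter(zip(y_true, y_pred))
    return {
        "tp": c[(1, 1)], "fp": c[(0, 1)],
        "fn": c[(1, 0)], "tn": c[(0, 0)],
    }

def accuracy(m):
    # fraction of all predictions that were correct
    return (m["tp"] + m["tn"]) / sum(m.values())

def f1_score(m):
    # harmonic mean of precision and recall
    precision = m["tp"] / (m["tp"] + m["fp"])
    recall = m["tp"] / (m["tp"] + m["fn"])
    return 2 * precision * recall / (precision + recall)

m = confusion_counts([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
print(accuracy(m), round(f1_score(m), 3))  # → 0.6 0.667
```

The same four counts also back the confusion matrices the skill generates; ROC curves additionally require predicted scores rather than hard labels.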
Usage Patterns
Invoke this skill via OpenClaw's CLI or API to generate code snippets. For training, specify the model type and data source; for evaluation, provide a trained model path. Always set the environment variable for authentication, e.g., `export OPENCLAW_API_KEY=your_key`. Patterns include:
- Pipeline mode: Chain training and evaluation in a single command.
- Interactive mode: Use for iterative fine-tuning, querying the skill for code adjustments.
- Example 1: Train a CNN for image classification – Call the skill with data path, then run the generated script.
- Example 2: Fine-tune an LLM – Provide a base model and dataset, get a fine-tuning script, and execute it with specified hyperparameters.
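The CLI invocations in these patterns can also be driven from Python. A minimal sketch (the `openclaw` binary and its flags are taken from this document; the command is only constructed here, not executed):

```python
import shlex

def build_train_cmd(model: str, data: str, epochs: int = 10, batch_size: int = 32) -> list:
    """Build the argv for `openclaw cs-ml train` with the flags documented above."""
    return [
        "openclaw", "cs-ml", "train",
        "--model", model,
        "--data", data,
        "--epochs", str(epochs),
        "--batch-size", str(batch_size),
    ]

cmd = build_train_cmd("cnn", "/path/to/images")
print(shlex.join(cmd))
# subprocess.run(cmd, check=True)  # would require OPENCLAW_API_KEY in the environment
```

Pipeline mode would chain a second such command (e.g. an evaluate step) on the output of the first.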
Common Commands/API
Use OpenClaw's CLI for direct execution or API for programmatic access. Authentication requires $OPENCLAW_API_KEY in your environment.
- CLI command for training a CNN:

  ```
  openclaw cs-ml train --model cnn --data /path/to/images --epochs 10 --batch-size 32
  ```

  This generates a Python script using TensorFlow:

  ```python
  from tensorflow import keras

  model = keras.Sequential([keras.layers.Conv2D(32, 3, activation='relu')])
  model.fit(train_data, epochs=10)
  ```
- CLI command for LLM fine-tuning:

  ```
  openclaw cs-ml fine-tune --model bert --dataset /path/to/text.json --learning-rate 5e-5
  ```

  Output script example:

  ```python
  from transformers import BertForSequenceClassification, Trainer

  model = BertForSequenceClassification.from_pretrained('bert-base')
  trainer = Trainer(model=model, train_dataset=dataset)
  trainer.train()
  ```
- API endpoint for evaluation: POST to https://api.openclaw.com/cs-ml/evaluate with JSON body:

  ```json
  {
    "model_path": "/path/to/model.h5",
    "data_path": "/path/to/test.csv",
    "metrics": ["accuracy", "f1"]
  }
  ```

  The response includes the metrics output.
- Config format: use YAML for hyperparameters, e.g.:

  ```yaml
  model: transformer
  params:
    layers: 12
    hidden_size: 768
  ```
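Assembling the evaluation request above in Python might look like the following. This is a sketch only: the endpoint and body are taken from this document, the request object is built but deliberately not sent, and the `Authorization: Bearer` header is an assumption about how `OPENCLAW_API_KEY` is passed:

```python
import json
import os
import urllib.request

payload = {
    "model_path": "/path/to/model.h5",
    "data_path": "/path/to/test.csv",
    "metrics": ["accuracy", "f1"],
}

# assumption: the key is sent as a Bearer token; the docs only say it must be set
api_key = os.environ.get("OPENCLAW_API_KEY", "your_key")

req = urllib.request.Request(
    "https://api.openclaw.com/cs-ml/evaluate",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    },
    method="POST",
)
print(req.get_method(), req.full_url)
# urllib.request.urlopen(req)  # not executed in this sketch
```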
Integration Notes
Integrate this skill with other OpenClaw skills by chaining commands, e.g., run the "data-processing" skill first for data cleaning, then pass its output to cs-ml for training. For external tools, set up dependencies such as `pip install tensorflow` in your generated scripts. Use `OPENCLAW_API_KEY` for API calls in custom code. For MLOps, link with cloud services: export the model to S3 with the AWS CLI, then deploy via a cs-ml command. Ensure compatibility by specifying library versions, e.g., Transformers 4.20+.
Error Handling
Common errors include data mismatches, authentication failures, or library version conflicts. Handle them as follows:
- Data errors: check for shape issues in training commands; e.g., if `openclaw cs-ml train` fails with "Input shape mismatch", verify the data with the `--validate-data` flag.
- Authentication: if API calls fail with 401, ensure `OPENCLAW_API_KEY` is set and not expired; retry with `openclaw cs-ml --retry-auth`.
- Runtime errors: for GPU issues in deep learning, add `--device cuda` and handle failures with try-except in the generated code:

  ```python
  try:
      model.fit(data)
  except RuntimeError as e:
      print(f"Error: {e}, falling back to CPU")
  ```

- General: log outputs with the `--verbose` flag and debug generated scripts line by line.
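The GPU-to-CPU fallback above generalizes to a small retry helper. A dependency-free sketch, where the `fit_on` callable stands in for the generated training code and the simulated CUDA failure is invented for illustration:

```python
def fit_with_fallback(fit_on, devices=("cuda", "cpu")):
    """Try each device in order; return (device, result) for the first that succeeds."""
    last_err = None
    for device in devices:
        try:
            return device, fit_on(device)
        except RuntimeError as e:
            print(f"Error on {device}: {e}, falling back")
            last_err = e
    raise last_err  # every device failed

def fake_fit(device):
    # simulate a CUDA out-of-memory error so the fallback path is exercised
    if device == "cuda":
        raise RuntimeError("CUDA out of memory")
    return "trained-on-" + device

device, result = fit_with_fallback(fake_fit)
print(device, result)  # → cpu trained-on-cpu
```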
Graph Relationships
- Related to cluster: computer-science
- Connected tags: ml, deep-learning, neural-networks, transformers, cs
- Links to other skills: depends on "data-processing" for preprocessing; enhances "deployment" for MLOps pipelines; integrates with "nlp" for Transformer-based tasks