jpskill.com
🛠️ Development & MCP Community

mistral-api

Mistral AI API — European LLM provider with strong code and reasoning models. Use when you need GDPR-compliant AI inference, code generation with Codestral, multilingual tasks, cost-efficient inference, or a European data-residency option.

⚡ Recommended: One-Command Install (60 seconds)

Copy the command below and paste it into a terminal (Mac/Linux) or PowerShell (Windows). Download → extract → install, fully automated.

🍎 Mac / 🐧 Linux
mkdir -p ~/.claude/skills && cd ~/.claude/skills && curl -L -o mistral-api.zip https://jpskill.com/download/15130.zip && unzip -o mistral-api.zip && rm mistral-api.zip
🪟 Windows (PowerShell)
$d = "$env:USERPROFILE\.claude\skills"; ni -Force -ItemType Directory $d | Out-Null; iwr https://jpskill.com/download/15130.zip -OutFile "$d\mistral-api.zip"; Expand-Archive "$d\mistral-api.zip" -DestinationPath $d -Force; ri "$d\mistral-api.zip"

When it finishes, restart Claude Code → then just ask Claude normally, e.g. "call the Mistral API to generate some code", and the skill activates automatically.

💾 Manual Download (if the command feels difficult)
  1. Click the blue button below to download mistral-api.zip
  2. Double-click the ZIP file to extract it → a mistral-api folder will appear
  3. Move that folder to C:\Users\<your name>\.claude\skills\ (Windows) or ~/.claude/skills/ (Mac)
  4. Restart Claude Code

⚠️ Download and use at your own risk. This site accepts no responsibility for the skill's content, behavior, or safety.

🎯 What This Skill Can Do

The description below explains what this skill does for you. When you give Claude a request in this domain, it activates automatically.

📦 Installation (3 Steps)

  1. Click the "Download" button above to get the .skill file
  2. Rename the extension from .skill to .zip and extract it (macOS can extract automatically)
  3. Place the extracted folder in .claude/skills/ under your home folder
    • macOS / Linux: ~/.claude/skills/
    • Windows: %USERPROFILE%\.claude\skills\

Restart Claude Code and you're done. You don't need to say "use this skill" — it is invoked automatically for related requests.

See the detailed usage guide →
Last updated
2026-05-18
Retrieved
2026-05-18
Included files
1
📖 Original SKILL.md (the file Claude reads)

This body is the original text (English or Chinese) read by the AI (Claude). Japanese translations are being added over time.

Mistral AI API

Overview

Mistral AI is a French AI company providing high-quality, cost-efficient language models with EU data residency and GDPR compliance. Their models excel at code generation (Codestral), multilingual tasks, and reasoning. Mistral's API follows OpenAI conventions closely, making integration straightforward.
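
Because the API mirrors OpenAI's chat-completions schema, the raw request body is the familiar model-plus-messages shape. A minimal sketch of that payload, with illustrative values (the endpoint in the comment is Mistral's documented chat-completions URL):

```python
# The chat request body Mistral accepts follows OpenAI's /chat/completions
# schema: a model name plus a list of {role, content} messages. Values here
# are illustrative only.
payload = {
    "model": "mistral-large-latest",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Say hello."},
    ],
    "temperature": 0.7,
}

# This dict could be POSTed directly to
# https://api.mistral.ai/v1/chat/completions with an
# "Authorization: Bearer $MISTRAL_API_KEY" header, no SDK required.
```

The SDK examples below build exactly this shape for you.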

Setup

# Python
pip install mistralai

# TypeScript/Node
npm install @mistralai/mistralai

# API key (read by both SDKs)
export MISTRAL_API_KEY=...

Available Models

Model                  Context  Best For
mistral-large-latest   128k     Most capable, complex reasoning
mistral-small-latest   128k     Cost-efficient, everyday tasks
codestral-latest       256k     Code generation & completion
mistral-embed          8k       Text embeddings
open-mistral-nemo      128k     Open-weight, edge deployment
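
The table above can be encoded as a small lookup helper. The model names come from the table; the task-category labels are our own illustrative convention, not part of Mistral's API:

```python
# Illustrative mapping from a task category to a model from the table above.
# The category keys are hypothetical labels, not Mistral API values.
MODEL_FOR_TASK = {
    "reasoning": "mistral-large-latest",   # most capable
    "everyday": "mistral-small-latest",    # cost-efficient
    "code": "codestral-latest",            # code generation & FIM
    "embedding": "mistral-embed",          # text embeddings
    "edge": "open-mistral-nemo",           # open-weight deployment
}

def pick_model(task: str) -> str:
    """Return the suggested model for a task category, defaulting to small."""
    return MODEL_FOR_TASK.get(task, "mistral-small-latest")
```

Defaulting to mistral-small-latest keeps unrecognized tasks on the cheapest general-purpose tier.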

Instructions

Basic Chat Completion (Python)

from mistralai import Mistral

client = Mistral(api_key="your_api_key")  # or reads MISTRAL_API_KEY

response = client.chat.complete(
    model="mistral-large-latest",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain the difference between async and sync programming."},
    ],
)

print(response.choices[0].message.content)
print(f"Prompt tokens: {response.usage.prompt_tokens}")
print(f"Completion tokens: {response.usage.completion_tokens}")

TypeScript/Node.js

import { Mistral } from "@mistralai/mistralai";

const client = new Mistral({ apiKey: process.env.MISTRAL_API_KEY });

const response = await client.chat.complete({
  model: "mistral-large-latest",
  messages: [{ role: "user", content: "Hello from TypeScript!" }],
});

console.log(response.choices[0].message.content);

Streaming

from mistralai import Mistral

client = Mistral()

stream = client.chat.stream(
    model="mistral-small-latest",
    messages=[{"role": "user", "content": "Write a haiku about programming."}],
)

for event in stream:
    chunk = event.data.choices[0].delta.content
    if chunk:
        print(chunk, end="", flush=True)
print()

Function Calling

import json
from mistralai import Mistral

client = Mistral()

tools = [
    {
        "type": "function",
        "function": {
            "name": "search_products",
            "description": "Search for products in a catalog",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {"type": "string"},
                    "max_price": {"type": "number"},
                    "category": {"type": "string"},
                },
                "required": ["query"],
            },
        },
    }
]

messages = [{"role": "user", "content": "Find laptops under $1000"}]

response = client.chat.complete(
    model="mistral-large-latest",
    messages=messages,
    tools=tools,
    tool_choice="auto",
)

if response.choices[0].finish_reason == "tool_calls":
    tool_call = response.choices[0].message.tool_calls[0]
    args = json.loads(tool_call.function.arguments)
    print(f"Function: {tool_call.function.name}, Args: {args}")

    # Add tool result and continue
    messages.append(response.choices[0].message)
    messages.append({
        "role": "tool",
        "tool_call_id": tool_call.id,
        "content": json.dumps([{"name": "ThinkPad X1", "price": 899}]),
    })

    final = client.chat.complete(model="mistral-large-latest", messages=messages)
    print(final.choices[0].message.content)

JSON Mode

from mistralai import Mistral
import json

client = Mistral()

response = client.chat.complete(
    model="mistral-small-latest",
    messages=[
        {
            "role": "user",
            "content": "Return a JSON object with fields: title, author, year for the book '1984'",
        }
    ],
    response_format={"type": "json_object"},
)

data = json.loads(response.choices[0].message.content)
print(data)  # {"title": "1984", "author": "George Orwell", "year": 1949}

Text Embeddings

from mistralai import Mistral

client = Mistral()

response = client.embeddings.create(
    model="mistral-embed",
    inputs=["Machine learning is transforming industries.", "AI is the future of technology."],
)

embeddings = [item.embedding for item in response.data]
print(f"Embedding dimension: {len(embeddings[0])}")  # 1024

# Compute cosine similarity
import numpy as np

def cosine_similarity(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

similarity = cosine_similarity(embeddings[0], embeddings[1])
print(f"Similarity: {similarity:.3f}")

Codestral for Code Completion

from mistralai import Mistral

client = Mistral()

# Fill-in-the-middle (FIM) — Codestral's signature feature
response = client.fim.complete(
    model="codestral-latest",
    prompt="def fibonacci(n):\n    if n <= 1:\n        return n\n    ",
    suffix="\n\nresult = fibonacci(10)\nprint(result)",
)

print(response.choices[0].message.content)
# Returns the middle code that connects prompt to suffix

# Standard code generation
response = client.chat.complete(
    model="codestral-latest",
    messages=[
        {
            "role": "user",
            "content": "Write a Python class for a rate limiter using token bucket algorithm.",
        }
    ],
)
print(response.choices[0].message.content)

GDPR Compliance Notes

  • All API data processed in EU data centers by default.
  • Mistral AI is headquartered in Paris, France — subject to EU/GDPR jurisdiction.
  • For enterprise data residency guarantees, use Mistral's Azure or GCP deployments.
  • No training on user data by default — check your plan's DPA for details.

Guidelines

  • Use mistral-large-latest for complex tasks, mistral-small-latest for cost savings.
  • Codestral is specialized for code and significantly outperforms general models on FIM tasks.
  • The mistral-embed model produces 1024-dimensional vectors.
  • Mistral models have strong multilingual performance, especially in French, Spanish, Italian, German, and Portuguese.
  • Use tool_choice to control tool use: "auto" (the default) lets the model decide, "any" forces a tool call, and "none" disables tools.
  • JSON mode requires the system or user prompt to explicitly mention JSON output.
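
The last guideline can be enforced mechanically. Below is a hypothetical guard (the function name and reminder wording are our own, not part of the Mistral SDK) that appends a JSON reminder to the final message when the prompt never mentions JSON:

```python
# Hypothetical guard for the JSON-mode guideline: Mistral's JSON mode expects
# the prompt to mention JSON explicitly, so append a reminder to the last
# message when it is missing. Illustrative helper, not part of the SDK.
def ensure_json_hint(messages: list[dict]) -> list[dict]:
    if any("json" in m.get("content", "").lower() for m in messages):
        return messages
    patched = [dict(m) for m in messages]  # shallow copies; don't mutate input
    patched[-1]["content"] += "\n\nRespond with a JSON object only."
    return patched

msgs = [{"role": "user", "content": "List three primes as an object."}]
msgs = ensure_json_hint(msgs)
# msgs can now be sent with response_format={"type": "json_object"}
```

Running prompts through a guard like this avoids the silent failure mode where JSON mode is requested but the model was never told to emit JSON.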