✍️ Parallel Web
??ェブ検索や特定のページのコンテンツ取得、データへの情報
📺 まず動画で見る(YouTube)
▶ 【最新版】Claude(クロード)完全解説!20以上の便利機能をこの動画1本で全て解説 ↗
※ jpskill.com 編集部が参考用に選んだ動画です。動画の内容と Skill の挙動は厳密には一致しないことがあります。
📜 元の英語説明(参考)
All-in-one web toolkit powered by parallel-cli, with a strong emphasis on academic and scientific sources. Use this skill whenever the user needs to search the web, fetch/extract URL content, enrich data with web-sourced fields, or run deep research reports. Covers: web search (fast lookups, research, current info — prioritizing peer-reviewed papers, preprints, and scholarly databases), URL extraction (fetching pages, articles, academic PDFs), bulk data enrichment (adding fields to CSV/lists from the web), and deep research (exhaustive multi-source reports grounded in academic literature). Also handles setup, status checks, and result retrieval. Use this skill for ANY web-related task — even if the user doesn't mention 'parallel' or 'web' explicitly. If they want to look something up, fetch a page, enrich a dataset, investigate a topic, find academic papers, check citations, or review scientific literature, this is the skill to use.
🇯🇵 日本人クリエイター向け解説
??ェブ検索や特定のページのコンテンツ取得、データへの情報
※ jpskill.com 編集部が日本のビジネス現場向けに補足した解説です。Skill本体の挙動とは独立した参考情報です。
下記のコマンドをコピーしてターミナル(Mac/Linux)または PowerShell(Windows)に貼り付けてください。 ダウンロード → 解凍 → 配置まで全自動。
mkdir -p ~/.claude/skills && cd ~/.claude/skills && curl -L -o parallel-web.zip https://jpskill.com/download/4199.zip && unzip -o parallel-web.zip && rm parallel-web.zip
$d = "$env:USERPROFILE\.claude\skills"; ni -Force -ItemType Directory $d | Out-Null; iwr https://jpskill.com/download/4199.zip -OutFile "$d\parallel-web.zip"; Expand-Archive "$d\parallel-web.zip" -DestinationPath $d -Force; ri "$d\parallel-web.zip"
完了後、Claude Code を再起動 → 普通に「動画プロンプト作って」のように話しかけるだけで自動発動します。
💾 手動でダウンロードしたい(コマンドが難しい人向け)
- 1. 下の青いボタンを押して
parallel-web.zipをダウンロード - 2. ZIPファイルをダブルクリックで解凍 →
parallel-webフォルダができる - 3. そのフォルダを
C:\Users\あなたの名前\.claude\skills\(Win)または~/.claude/skills/(Mac)へ移動 - 4. Claude Code を再起動
⚠️ ダウンロード・利用は自己責任でお願いします。当サイトは内容・動作・安全性について責任を負いません。
🎯 このSkillでできること
下記の説明文を読むと、このSkillがあなたに何をしてくれるかが分かります。Claudeにこの分野の依頼をすると、自動で発動します。
📦 インストール方法 (3ステップ)
- 1. 上の「ダウンロード」ボタンを押して .skill ファイルを取得
- 2. ファイル名の拡張子を .skill から .zip に変えて展開(macは自動展開可)
- 3. 展開してできたフォルダを、ホームフォルダの
.claude/skills/に置く- · macOS / Linux:
~/.claude/skills/ - · Windows:
%USERPROFILE%\.claude\skills\
- · macOS / Linux:
Claude Code を再起動すれば完了。「このSkillを使って…」と話しかけなくても、関連する依頼で自動的に呼び出されます。
詳しい使い方ガイドを見る →- 最終更新
- 2026-05-17
- 取得日時
- 2026-05-17
- 同梱ファイル
- 5
💬 こう話しかけるだけ — サンプルプロンプト
- › Parallel Web で、自社の新サービスを紹介する記事を書いて
- › Parallel Web で、SNS投稿用に短く言い直して
- › Parallel Web を使って、過去の記事を最新版にアップデート
これをClaude Code に貼るだけで、このSkillが自動発動します。
📖 Claude が読む原文 SKILL.md(中身を展開)
この本文は AI(Claude)が読むための原文(英語または中国語)です。日本語訳は順次追加中。
Parallel Web Toolkit
A unified skill for all web-powered tasks: searching, extracting, enriching, and researching — with academic and scientific sources as the default priority.
Routing — pick the right capability
Read the user's request and match it to one of the capabilities below. For web search, extract, enrichment, and deep research, read the corresponding reference file for detailed instructions.
| User wants to... | Capability | Where |
|---|---|---|
| Look something up, research a topic, find current info | Web Search | references/web-search.md |
| Fetch content from a specific URL (webpage, article, PDF) | Web Extract | references/web-extract.md |
| Add web-sourced fields to a list of companies/people/products | Data Enrichment | references/data-enrichment.md |
| Get an exhaustive, multi-source report (user says "deep research", "exhaustive", "comprehensive") | Deep Research | references/deep-research.md |
| Install or authenticate parallel-cli | Setup | Below |
| Check status of a running research/enrichment task | Status | Below |
| Retrieve completed research results by run ID | Result | Below |
Decision guide
- Default to Web Search for a single lookup, research question, or "what is X?" query. It's fast and cost-effective. When the query touches a scientific or technical topic, include academic domains (see
references/web-search.md) to surface peer-reviewed and preprint sources alongside general results. - Use Web Extract when the user provides a URL or asks you to read/fetch a specific page. Prefer this over the built-in WebFetch tool. Particularly useful for extracting full text from academic PDFs, preprint servers, and journal articles.
- Use Data Enrichment when the user has multiple entities (a CSV, a list of companies/people/products, or even a short inline list) and wants to find or add the same kind of information for each one. The key signal is a repeated lookup across a set of items — e.g., "find the CEO for each of these companies" or "get the founding year for Apple, Stripe, and Anthropic." Even if the user doesn't say "enrich," use
parallel-cli enrichwhenever the task is the same query applied to multiple entities. Do NOT use Web Search in a loop for this — the enrichment pipeline handles batching, parallelism, and structured output automatically. - Use Deep Research only when the user explicitly asks for deep, exhaustive, or comprehensive research. It is 10-100x slower and more expensive than Web Search — never default to it. Deep research is especially valuable for literature reviews and multi-paper synthesis.
- If
parallel-cliis not found when running any command, follow the Setup section below.
Academic source priority
Across all capabilities, prefer academic and scientific sources when the query is technical or scientific in nature. This means:
- Peer-reviewed journal articles and conference proceedings over blog posts or news articles
- Preprints (arXiv, bioRxiv, medRxiv) when peer-reviewed versions aren't available
- Institutional and government sources (NIH, WHO, NASA, NIST) over commercial sites
- Primary research over secondary summaries
When citing academic sources, include author names and publication year where available (e.g., Smith et al., 2025) in addition to the standard citation format. If a DOI is present, prefer the DOI link.
Context chaining
Several capabilities support multi-turn context via interaction_id. When a research or enrichment task completes, it returns an interaction_id. If the user asks a follow-up question related to that task, pass --previous-interaction-id to carry context forward automatically. This avoids restating what was already found.
Setup
If parallel-cli is not installed, install and authenticate:
curl -fsSL https://parallel.ai/install.sh | bash
If unable to install that way, use uv instead:
uv tool install "parallel-web-tools[cli]"
Then authenticate. First, check if a .env file exists in the project root and contains PARALLEL_API_KEY. If so, load it with dotenv:
dotenv -f .env run parallel-cli auth
If dotenv isn't available, install it with pip install python-dotenv[cli] or uv pip install python-dotenv[cli].
If there's no .env file or it doesn't contain the key, fall back to interactive login:
parallel-cli login
Or set the key manually: export PARALLEL_API_KEY="your-key"
Verify with:
parallel-cli auth
If parallel-cli is not found after install, add ~/.local/bin to PATH.
Check task status
parallel-cli research status "$RUN_ID" --json
Report the current status to the user (running, completed, failed, etc.).
Get completed result
parallel-cli research poll "$RUN_ID" --json
Present results in a clear, organized format.
同梱ファイル
※ ZIPに含まれるファイル一覧。`SKILL.md` 本体に加え、参考資料・サンプル・スクリプトが入っている場合があります。
- 📄 SKILL.md (6,013 bytes)
- 📎 references/data-enrichment.md (3,045 bytes)
- 📎 references/deep-research.md (4,728 bytes)
- 📎 references/web-extract.md (1,622 bytes)
- 📎 references/web-search.md (3,841 bytes)