mcp2cli

Installs: 364
Rank: #2576

Install

npx skills add https://github.com/knowsuchagency/mcp2cli --skill mcp2cli

Turn any MCP server or OpenAPI spec into a CLI at runtime. No codegen.

Install

Run directly (no install needed)

uvx mcp2cli --help

Or install

pip install mcp2cli

Core Workflow

1. Connect to a source (MCP server or OpenAPI spec)
2. Discover available commands with --list
3. Inspect a specific command with --help
4. Execute the command with flags

MCP over HTTP

mcp2cli --mcp https://mcp.example.com/sse --list
mcp2cli --mcp https://mcp.example.com/sse create-task --help
mcp2cli --mcp https://mcp.example.com/sse create-task --title "Fix bug"

MCP over stdio

mcp2cli --mcp-stdio "npx @modelcontextprotocol/server-filesystem /tmp" --list
mcp2cli --mcp-stdio "npx @modelcontextprotocol/server-filesystem /tmp" read-file --path /tmp/hello.txt

OpenAPI spec (remote or local, JSON or YAML)

mcp2cli --spec https://petstore3.swagger.io/api/v3/openapi.json --list
mcp2cli --spec ./openapi.json --base-url https://api.example.com list-pets --status available

CLI Reference

mcp2cli [global options] <command> [command options]

Source (mutually exclusive, one required):
  --spec URL|FILE      OpenAPI spec (JSON or YAML, local or remote)
  --mcp URL            MCP server URL (HTTP/SSE)
  --mcp-stdio CMD      MCP server command (stdio transport)

Options:
  --auth-header K:V    HTTP header sent with requests (repeatable)
  --base-url URL       Override base URL from spec
  --env KEY=VALUE      Env var for stdio server process (repeatable)
  --cache-key KEY      Custom cache key
  --cache-ttl SECONDS  Cache TTL (default: 3600)
  --refresh            Bypass cache
  --list               List available subcommands
  --pretty             Pretty-print JSON output
  --raw                Print raw response body
  --toon               Encode output as TOON (token-efficient for LLMs)
  --version            Show version

Subcommands and flags are generated dynamically from the source.

Patterns

Authentication

API key header

mcp2cli --spec ./spec.json --auth-header "Authorization:Bearer tok_..." list-items
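Pasting a token directly on the command line leaves it in shell history. A minimal sketch of the same call with the token read from an environment variable; `API_TOKEN` and the `tok_example` value are placeholders, not part of mcp2cli:

```shell
# Sketch: keep the token out of shell history via an env var.
# API_TOKEN and tok_example are illustrative placeholders.
export API_TOKEN="tok_example"
AUTH="Authorization:Bearer ${API_TOKEN}"
echo "header: ${AUTH}"

# With mcp2cli installed, the header is passed unchanged (path is illustrative):
# mcp2cli --spec ./spec.json --auth-header "${AUTH}" list-items
```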

Multiple headers

mcp2cli --mcp https://mcp.example.com/sse \
  --auth-header "x-api-key:sk-..." \
  --auth-header "x-org-id:org_123" \
  search --query "test"

POST with JSON body from stdin

echo '{"name": "Fido", "tag": "dog"}' | mcp2cli --spec ./spec.json create-pet --stdin

Env vars for stdio servers

mcp2cli --mcp-stdio "node server.js" --env API_KEY=sk-... --env DEBUG=1 search --query "test"

Caching

Specs and MCP tool lists are cached in ~/.cache/mcp2cli/ (1h TTL). Local files are never cached.

Force refresh

mcp2cli --spec https://api.example.com/spec.json --refresh --list

24h TTL

mcp2cli --spec https://api.example.com/spec.json --cache-ttl 86400 --list
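Since the cache is plain files under ~/.cache/mcp2cli/, it can also be inspected or cleared wholesale. A sketch, assuming the documented directory layout; deleting the directory has the same effect as passing --refresh on every subsequent run:

```shell
# Assumption: the cache lives under ~/.cache/mcp2cli/ as documented above.
CACHE_DIR="$HOME/.cache/mcp2cli"
ls "$CACHE_DIR" 2>/dev/null || echo "cache is empty"
rm -rf "$CACHE_DIR"   # next run re-fetches specs and MCP tool lists
```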

TOON output (token-efficient for LLMs)

mcp2cli --mcp https://mcp.example.com/sse --toon list-tags

Best for large uniform arrays: 40-60% fewer tokens than JSON.

Generating a Skill from an API

When the user asks to create a skill from an MCP server or OpenAPI spec, follow this workflow:

1. Discover all available commands:

   uvx mcp2cli --mcp https://target.example.com/sse --list

2. Inspect each command to understand parameters:

   uvx mcp2cli --mcp https://target.example.com/sse <command> --help

3. Test key commands to verify they work:

   uvx mcp2cli --mcp https://target.example.com/sse <command> --param value

4. Create a SKILL.md that teaches another AI agent how to use this API via mcp2cli. Include:
   - The source flag (--mcp, --mcp-stdio, or --spec) and URL
   - Any required auth headers
   - Common workflows with example commands
   - The --list and --help discovery pattern for commands not covered

The generated skill should use mcp2cli as its execution layer: the agent runs uvx mcp2cli ... commands rather than making raw HTTP/MCP calls.
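As a concrete starting point, a minimal SKILL.md skeleton covering those four items might look like the following; every name and URL here is a placeholder, not output from mcp2cli:

```markdown
# example-api

Use mcp2cli as the execution layer for the Example API. Do not make raw HTTP/MCP calls.

## Source

All commands go through: `uvx mcp2cli --mcp https://target.example.com/sse ...`

## Auth

Pass required headers with `--auth-header "x-api-key:<key>"` (repeatable).

## Common workflows

- List everything available: `uvx mcp2cli --mcp https://target.example.com/sse --list`
- Inspect a command: `uvx mcp2cli --mcp https://target.example.com/sse <command> --help`

## Anything not covered

Fall back to the `--list` / `--help` discovery pattern above.
```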
