deepseek

Installs: 98
Rank: #8421

Install

npx skills add https://github.com/vm0-ai/vm0-skills --skill deepseek

DeepSeek API

Use the DeepSeek API via direct curl calls to access powerful AI language models for chat, reasoning, and code generation.

Official docs: https://api-docs.deepseek.com/

When to Use

Use this skill when you need to:

- Chat completions with the DeepSeek-V3.2 model
- Deep reasoning tasks using the reasoning model
- Code generation and completion (FIM, Fill-in-the-Middle)
- An OpenAI-compatible API as a cost-effective alternative

Prerequisites

1. Sign up at the DeepSeek Platform and create an account
2. Go to API Keys and generate a new API key
3. Top up your balance (no free tier, but very affordable pricing)
4. Export your key:

export DEEPSEEK_API_KEY="your-api-key"

Pricing (per 1M tokens)

| Type               | Price  |
|--------------------|--------|
| Input (cache hit)  | $0.028 |
| Input (cache miss) | $0.28  |
| Output             | $0.42  |

Rate Limits

DeepSeek does not enforce strict rate limits and will try to serve every request. During periods of high traffic, connections are kept open with keep-alive signals.

Important: When using $VAR in a command that pipes to another command, wrap the command containing $VAR in bash -c '...'. Due to a Claude Code bug, environment variables are silently cleared when pipes are used directly.

bash -c 'curl -s "https://api.example.com" -H "Authorization: Bearer $API_KEY"'

How to Use

All examples below assume you have DEEPSEEK_API_KEY set.

The DeepSeek API is served from two base URLs:

- https://api.deepseek.com (recommended)
- https://api.deepseek.com/v1 (OpenAI-compatible)

1. Basic Chat Completion

Send a simple chat message:

Write to /tmp/deepseek_request.json:

{ "model": "deepseek-chat", "messages": [ { "role": "system", "content": "You are a helpful assistant." }, { "role": "user", "content": "Hello, who are you?" } ] }

Then run:

bash -c 'curl -s "https://api.deepseek.com/chat/completions" -X POST -H "Content-Type: application/json" -H "Authorization: Bearer ${DEEPSEEK_API_KEY}" -d @/tmp/deepseek_request.json'

Available models:

- deepseek-chat: DeepSeek-V3.2 non-thinking mode (128K context, 8K max output)
- deepseek-reasoner: DeepSeek-V3.2 thinking mode (128K context, 64K max output)

2. Chat with Temperature Control

Adjust creativity/randomness with temperature:

Write to /tmp/deepseek_request.json:

{ "model": "deepseek-chat", "messages": [ { "role": "user", "content": "Write a short poem about coding." } ], "temperature": 0.7, "max_tokens": 200 }

Then run:

bash -c 'curl -s "https://api.deepseek.com/chat/completions" -X POST -H "Content-Type: application/json" -H "Authorization: Bearer ${DEEPSEEK_API_KEY}" -d @/tmp/deepseek_request.json' | jq -r '.choices[0].message.content'

Parameters:

- temperature (0-2, default 1): higher = more creative, lower = more deterministic
- top_p (0-1, default 1): nucleus sampling threshold
- max_tokens: maximum number of tokens to generate

3. Streaming Response

Get real-time token-by-token output:

Write to /tmp/deepseek_request.json:

{ "model": "deepseek-chat", "messages": [ { "role": "user", "content": "Explain quantum computing in simple terms." } ], "stream": true }

Then run:

bash -c 'curl -s "https://api.deepseek.com/chat/completions" -X POST -H "Content-Type: application/json" -H "Authorization: Bearer ${DEEPSEEK_API_KEY}" -d @/tmp/deepseek_request.json'

Streaming returns Server-Sent Events (SSE) with delta chunks, ending with data: [DONE].
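To consume the stream programmatically rather than reading raw SSE, a minimal Python sketch like the following works (it assumes the requests library and the same /tmp/deepseek_request.json payload with "stream": true):

import json
import os

import requests

# Stream the chat completion and print tokens as they arrive.
# Assumes DEEPSEEK_API_KEY is exported and the request file sets "stream": true.
with open("/tmp/deepseek_request.json") as f:
    payload = json.load(f)

resp = requests.post(
    "https://api.deepseek.com/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['DEEPSEEK_API_KEY']}"},
    json=payload,
    stream=True,
)
resp.raise_for_status()

for line in resp.iter_lines():
    if not line.startswith(b"data: "):
        continue  # skip blank keep-alive lines and SSE comments
    data = line[len(b"data: "):]
    if data == b"[DONE]":
        break
    chunk = json.loads(data)
    # Each chunk carries an incremental delta, not the full message.
    print(chunk["choices"][0]["delta"].get("content", ""), end="", flush=True)
print()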

4. Deep Reasoning (Thinking Mode)

Use the reasoner model for complex reasoning tasks:

Write to /tmp/deepseek_request.json:

{ "model": "deepseek-reasoner", "messages": [ { "role": "user", "content": "What is 15 * 17? Show your work." } ] }

Then run:

bash -c 'curl -s "https://api.deepseek.com/chat/completions" -X POST -H "Content-Type: application/json" -H "Authorization: Bearer ${DEEPSEEK_API_KEY}" -d @/tmp/deepseek_request.json' | jq -r '.choices[0].message.content'

The reasoner model excels at math, logic, and multi-step problems.
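Per the DeepSeek docs, the reasoner's response also carries the model's chain of thought in a separate reasoning_content field next to content, so you can inspect both. A minimal Python sketch (same request file and key as above):

import json
import os

import requests

# Call the reasoner and print the chain of thought and the final answer separately.
with open("/tmp/deepseek_request.json") as f:
    payload = json.load(f)

resp = requests.post(
    "https://api.deepseek.com/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['DEEPSEEK_API_KEY']}"},
    json=payload,
)
resp.raise_for_status()

message = resp.json()["choices"][0]["message"]
print("=== reasoning ===")
print(message.get("reasoning_content", ""))
print("=== answer ===")
print(message["content"])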

5. JSON Output Mode

Force the model to return valid JSON:

Write to /tmp/deepseek_request.json:

{ "model": "deepseek-chat", "messages": [ { "role": "system", "content": "You are a JSON generator. Always respond with valid JSON." }, { "role": "user", "content": "List 3 programming languages with their main use cases." } ], "response_format": { "type": "json_object" } }

Then run:

bash -c 'curl -s "https://api.deepseek.com/chat/completions" -X POST -H "Content-Type: application/json" -H "Authorization: Bearer ${DEEPSEEK_API_KEY}" -d @/tmp/deepseek_request.json' | jq -r '.choices[0].message.content'
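Note that the JSON document comes back as a string inside message.content, so you still parse it client-side. A sketch, reusing the request file above:

import json
import os

import requests

# JSON mode guarantees message.content is a valid JSON string; parse it
# into a native object before use.
with open("/tmp/deepseek_request.json") as f:
    payload = json.load(f)

resp = requests.post(
    "https://api.deepseek.com/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['DEEPSEEK_API_KEY']}"},
    json=payload,
)
languages = json.loads(resp.json()["choices"][0]["message"]["content"])
print(json.dumps(languages, indent=2))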

6. Multi-turn Conversation

Continue a conversation with message history:

Write to /tmp/deepseek_request.json:

{ "model": "deepseek-chat", "messages": [ { "role": "user", "content": "My name is Alice." }, { "role": "assistant", "content": "Nice to meet you, Alice." }, { "role": "user", "content": "What is my name?" } ] }

Then run:

bash -c 'curl -s "https://api.deepseek.com/chat/completions" -X POST -H "Content-Type: application/json" -H "Authorization: Bearer ${DEEPSEEK_API_KEY}" -d @/tmp/deepseek_request.json' | jq -r '.choices[0].message.content'
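The API is stateless: the model only sees what is in the messages array, so your client must append each assistant reply before the next turn. A minimal Python loop illustrating the pattern (the ask() helper is hypothetical; same endpoint and key as above):

import os

import requests

API_URL = "https://api.deepseek.com/chat/completions"
HEADERS = {"Authorization": f"Bearer {os.environ['DEEPSEEK_API_KEY']}"}

def ask(history: list[dict]) -> str:
    """Send the full conversation so far and return the assistant's reply."""
    resp = requests.post(
        API_URL, headers=HEADERS, json={"model": "deepseek-chat", "messages": history}
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

history = [{"role": "user", "content": "My name is Alice."}]
reply = ask(history)
history.append({"role": "assistant", "content": reply})

history.append({"role": "user", "content": "What is my name?"})
print(ask(history))  # the model can now answer "Alice"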

7. Code Completion (FIM)

Use Fill-in-the-Middle for code completion (beta endpoint). Pass the text before the gap as prompt and, optionally, the text after it as suffix:

Write to /tmp/deepseek_request.json:

{ "model": "deepseek-chat", "prompt": "def add(a, b):\n ", "max_tokens": 20 }

Then run:

bash -c 'curl -s "https://api.deepseek.com/beta/completions" -X POST -H "Content-Type: application/json" -H "Authorization: Bearer ${DEEPSEEK_API_KEY}" -d @/tmp/deepseek_request.json' | jq -r '.choices[0].text'

FIM is useful for:

- Code completion in editors
- Filling gaps in documents
- Context-aware text generation

8. Function Calling (Tools)

Define functions the model can call:

Write to /tmp/deepseek_request.json:

{ "model": "deepseek-chat", "messages": [ { "role": "user", "content": "What is the weather in Tokyo?" } ], "tools": [ { "type": "function", "function": { "name": "get_weather", "description": "Get the current weather for a location", "parameters": { "type": "object", "properties": { "location": { "type": "string", "description": "The city name" } }, "required": ["location"] } } } ] }

Then run:

bash -c 'curl -s "https://api.deepseek.com/chat/completions" -X POST -H "Content-Type: application/json" -H "Authorization: Bearer ${DEEPSEEK_API_KEY}" -d @/tmp/deepseek_request.json'

The model will return a tool_calls array when it wants to use a function.
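To complete the round trip, you execute the function yourself and send the result back as a tool message, following the standard OpenAI-compatible flow. A Python sketch (the get_weather stub below is hypothetical):

import json
import os

import requests

API_URL = "https://api.deepseek.com/chat/completions"
HEADERS = {"Authorization": f"Bearer {os.environ['DEEPSEEK_API_KEY']}"}

def get_weather(location: str) -> str:
    return f"Sunny, 22C in {location}"  # stand-in for a real weather lookup

with open("/tmp/deepseek_request.json") as f:
    payload = json.load(f)

first = requests.post(API_URL, headers=HEADERS, json=payload).json()
assistant_msg = first["choices"][0]["message"]

if assistant_msg.get("tool_calls"):
    call = assistant_msg["tool_calls"][0]
    args = json.loads(call["function"]["arguments"])
    result = get_weather(**args)

    # Append the assistant's tool call and our result, then ask again.
    payload["messages"].append(assistant_msg)
    payload["messages"].append(
        {"role": "tool", "tool_call_id": call["id"], "content": result}
    )
    final = requests.post(API_URL, headers=HEADERS, json=payload).json()
    print(final["choices"][0]["message"]["content"])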

9. Check Token Usage

Extract the usage information from the response:

Write to /tmp/deepseek_request.json:

{ "model": "deepseek-chat", "messages": [ { "role": "user", "content": "Hello" } ] }

Then run:

bash -c 'curl -s "https://api.deepseek.com/chat/completions" -X POST -H "Content-Type: application/json" -H "Authorization: Bearer ${DEEPSEEK_API_KEY}" -d @/tmp/deepseek_request.json' | jq '.usage'

Response includes:

- prompt_tokens: input token count
- completion_tokens: output token count
- total_tokens: sum of both

DeepSeek responses also report prompt_cache_hit_tokens and prompt_cache_miss_tokens, which correspond to the two input prices in the pricing table above.
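As a worked example, the usage block plus the pricing table gives an estimated cost. A minimal Python sketch (assumes the requests library and that the cache fields are present; prices are per 1M tokens):

import json
import os

import requests

# Per-1M-token prices from the pricing table above.
PRICE_IN_HIT, PRICE_IN_MISS, PRICE_OUT = 0.028, 0.28, 0.42

with open("/tmp/deepseek_request.json") as f:
    payload = json.load(f)

usage = requests.post(
    "https://api.deepseek.com/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['DEEPSEEK_API_KEY']}"},
    json=payload,
).json()["usage"]

cost = (
    usage.get("prompt_cache_hit_tokens", 0) * PRICE_IN_HIT
    # If the cache breakdown is missing, bill all input at the miss price.
    + usage.get("prompt_cache_miss_tokens", usage["prompt_tokens"]) * PRICE_IN_MISS
    + usage["completion_tokens"] * PRICE_OUT
) / 1_000_000
print(f"~${cost:.6f} for {usage['total_tokens']} tokens")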

OpenAI SDK Compatibility

DeepSeek is fully compatible with the OpenAI SDKs. Just change the base URL:

Python:

from openai import OpenAI

client = OpenAI(api_key="your-deepseek-key", base_url="https://api.deepseek.com")

Node.js:

import OpenAI from 'openai';

const client = new OpenAI({ apiKey: 'your-deepseek-key', baseURL: 'https://api.deepseek.com' });
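From there, calls look exactly like OpenAI calls. For example, in Python:

from openai import OpenAI

client = OpenAI(api_key="your-deepseek-key", base_url="https://api.deepseek.com")

# Identical to an OpenAI chat call, just pointed at DeepSeek's models.
response = client.chat.completions.create(
    model="deepseek-chat",
    messages=[{"role": "user", "content": "Hello, who are you?"}],
)
print(response.choices[0].message.content)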

Tips: Complex JSON Payloads

For complex requests with nested JSON (like function calling), use a temp file to avoid shell escaping issues:

Write to /tmp/deepseek_request.json:

{ "model": "deepseek-chat", "messages": [{"role": "user", "content": "What is the weather in Tokyo?"}], "tools": [{ "type": "function", "function": { "name": "get_weather", "description": "Get current weather", "parameters": { "type": "object", "properties": {"location": {"type": "string"}}, "required": ["location"] } } }] }

Then run:

bash -c 'curl -s "https://api.deepseek.com/chat/completions" -X POST -H "Content-Type: application/json" -H "Authorization: Bearer ${DEEPSEEK_API_KEY}" -d @/tmp/deepseek_request.json'

Guidelines

- Choose the right model: use deepseek-chat for general tasks, deepseek-reasoner for complex reasoning
- Use caching: repeated prompts with the same prefix benefit from cache pricing ($0.028 vs $0.28)
- Set max_tokens: prevent runaway generation by setting appropriate limits
- Use streaming for long responses: better UX for real-time applications
- JSON mode requires a system prompt: when using response_format, include JSON instructions in the system message
- FIM uses the beta endpoint: the code completion endpoint is at api.deepseek.com/beta
- Complex JSON: use temp files with -d @filename to avoid shell quoting issues
