move blog to docs
docs/blog/2025-02-25-project-motivation.md (new file, +109 lines)
---
title: Project Motivation and Principles
date: 2025-02-25
tags: [claude-code, reverse-engineering, tutorial]
---

# Project Motivation and Principles

As early as the day after Claude Code was released (2025-02-25), I began and completed a reverse engineering attempt on the project. At the time, using Claude Code required registering an Anthropic account, applying for the waitlist, and waiting for approval. But for well-known reasons, Anthropic blocks users in mainland China, so I could not use the service through normal means. From the information available, I observed the following:

1. Claude Code is installed via npm, so it is very likely developed with Node.js.
2. Node.js offers many debugging options: simple `console.log` calls, launching with `--inspect` to attach Chrome DevTools, or even debugging obfuscated code with `d8`.

My goal was to use Claude Code without an Anthropic account. I didn't need the full source code, only a way to intercept the requests Claude Code makes to Anthropic's models and reroute them to my own custom endpoint. So I started the reverse engineering process:

1. First, install Claude Code:

```bash
npm install -g @anthropic-ai/claude-code
```

2. After installation, the project is located at `~/.nvm/versions/node/v20.10.0/lib/node_modules/@anthropic-ai/claude-code` (the exact path depends on your Node version manager and Node version).

3. Open the package.json to analyze the entry point:

```json
{
  "name": "@anthropic-ai/claude-code",
  "version": "1.0.24",
  "main": "sdk.mjs",
  "types": "sdk.d.ts",
  "bin": {
    "claude": "cli.js"
  },
  "engines": {
    "node": ">=18.0.0"
  },
  "type": "module",
  "author": "Boris Cherny <boris@anthropic.com>",
  "license": "SEE LICENSE IN README.md",
  "description": "Use Claude, Anthropic's AI assistant, right from your terminal. Claude can understand your codebase, edit files, run terminal commands, and handle entire workflows for you.",
  "homepage": "https://github.com/anthropics/claude-code",
  "bugs": {
    "url": "https://github.com/anthropics/claude-code/issues"
  },
  "scripts": {
    "prepare": "node -e \"if (!process.env.AUTHORIZED) { console.error('ERROR: Direct publishing is not allowed.\\nPlease use the publish-external.sh script to publish this package.'); process.exit(1); }\"",
    "preinstall": "node scripts/preinstall.js"
  },
  "dependencies": {},
  "optionalDependencies": {
    "@img/sharp-darwin-arm64": "^0.33.5",
    "@img/sharp-darwin-x64": "^0.33.5",
    "@img/sharp-linux-arm": "^0.33.5",
    "@img/sharp-linux-arm64": "^0.33.5",
    "@img/sharp-linux-x64": "^0.33.5",
    "@img/sharp-win32-x64": "^0.33.5"
  }
}
```

The key entry is `"claude": "cli.js"`. Opening cli.js, you'll find the code minified and obfuscated. Using WebStorm's `Format File` feature, you can reformat it for better readability:

![claude-code](/blog-images/claude-code.png)

Now you can start understanding Claude Code's internal logic and prompt structure by reading the code. To dig deeper, you can insert `console.log` statements at key points, or launch in debug mode and attach Chrome DevTools:

```bash
NODE_OPTIONS="--inspect-brk=9229" claude
```

This command starts Claude Code in debug mode with the debugger listening on port 9229. Visit chrome://inspect/ in Chrome and click inspect to begin debugging:

![chrome-inspect](/blog-images/chrome-inspect.png)
![chrome-devtools](/blog-images/chrome-devtools.png)

By searching for the string `api.anthropic.com`, you can easily locate where Claude Code makes its API calls. From the surrounding code, it's clear that the `baseURL` can be overridden with the `ANTHROPIC_BASE_URL` environment variable, and the `apiKey` and `authToken` can be configured the same way:

![search](/blog-images/search.png)

So far, we've discovered two key facts:

1. Environment variables can override Claude Code's `baseURL` and `apiKey`.
2. Claude Code follows the Anthropic API specification.

Therefore, we need:

1. A service that converts OpenAI-compatible API requests into the Anthropic API format.
2. Environment variables, set before launching Claude Code, that redirect its requests to that service.

Thus, `claude-code-router` was born. The project uses `Express.js` to implement the `/v1/messages` endpoint, with middleware to transform request/response formats and to rewrite requests (useful for tuning prompts per model).

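A minimal sketch of the idea (not the project's actual code: it handles only plain-text content and non-streaming responses, and the port, model, and environment variable names are illustrative):

```typescript
import express from "express";

const app = express();
app.use(express.json());

// Convert an Anthropic /v1/messages request into an OpenAI-style
// /chat/completions request (minimal fields only).
function toOpenAI(body: any) {
  return {
    model: process.env.TARGET_MODEL,
    max_tokens: body.max_tokens,
    messages: [
      ...(typeof body.system === "string"
        ? [{ role: "system", content: body.system }]
        : []),
      ...body.messages.map((m: any) => ({
        role: m.role,
        content:
          typeof m.content === "string"
            ? m.content
            : m.content.map((c: any) => c.text ?? "").join(""),
      })),
    ],
  };
}

// Convert the OpenAI-style response back into Anthropic's response shape.
function fromOpenAI(resp: any) {
  return {
    id: resp.id,
    type: "message",
    role: "assistant",
    content: [{ type: "text", text: resp.choices[0].message.content ?? "" }],
    stop_reason: "end_turn",
  };
}

app.post("/v1/messages", async (req, res) => {
  const upstream = await fetch(`${process.env.OPENAI_BASE_URL}/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify(toOpenAI(req.body)),
  });
  res.json(fromOpenAI(await upstream.json()));
});

app.listen(3456);
```
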
Back in February, the entire DeepSeek model series had poor support for Function Calling, so I initially used `qwen-max`. It worked well, but without KV-cache support it consumed a large number of tokens and couldn't deliver the native `Claude Code` experience.

So I experimented with a Router mode that used a lightweight model to dispatch tasks. The architecture had four roles: `router`, `tool`, `think`, and `coder`. Every request first passed through a free lightweight model, which decided whether the task involved reasoning, coding, or tool usage. Reasoning and coding tasks looped until a tool was invoked to write or modify files. In practice, though, the lightweight model wasn't capable enough to route tasks accurately, and flaws in the agent design kept it from driving Claude Code effectively.

Everything changed at the end of May, when Claude Code was officially launched and the `DeepSeek-R1` model (updated 2025-05-28) gained Function Call support. I redesigned the system. Pair programming with AI, I fixed the earlier request/response transformation issues, especially the handling of models that return JSON instead of Function Call outputs.

This time I used the `DeepSeek-V3` model, and it performed better than expected: it handles most tool calls, supports task decomposition and stepwise planning, and, most importantly, costs less than one-tenth the price of Claude 3.5 Sonnet.

The official Claude Code organizes its agents differently from the beta version, so I restructured Router mode around four roles: the default model, `background`, `think`, and `longContext`.

- The default model handles everyday tasks and acts as the final fallback.
- The `background` model handles lightweight background tasks. According to Anthropic, Claude Haiku 3.5 is typically used here for small jobs such as haiku generation and conversation summarization, so I routed this role to a local `ollama` service.
- The `think` model handles reasoning and Plan Mode tasks. I use `DeepSeek-R1` here; since it doesn't support reasoning-budget control, `Think` and `UltraThink` behave identically.
- The `longContext` model handles long-context scenarios. The router uses `tiktoken` to compute the context length of every request in real time, and if it exceeds 32K tokens it switches to this model, compensating for DeepSeek's weaker long-context handling. A sketch of this rule follows the list.

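A minimal sketch of that routing rule, assuming the `tiktoken` npm package; the threshold is from the text, while the model names are purely illustrative:

```typescript
import { get_encoding } from "tiktoken";

const enc = get_encoding("cl100k_base");
const LONG_CONTEXT_THRESHOLD = 32_000; // switch models above ~32K tokens

// Sum token counts over all message contents in the request.
function countTokens(messages: { content: string }[]): number {
  return messages.reduce((sum, m) => sum + enc.encode(m.content).length, 0);
}

// Route to the longContext model only when the context is too large.
function pickModel(messages: { content: string }[]): string {
  return countTokens(messages) > LONG_CONTEXT_THRESHOLD
    ? "my-long-context-model" // hypothetical longContext role
    : "deepseek-chat"; // default role
}
```
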
That is the evolution of the project and the reasoning behind it. By cleverly overriding environment variables, we can forward and modify requests without touching Claude Code's source, which lets us benefit from official updates while using our own models and custom prompts.

The project offers a practical way to run Claude Code under Anthropic's regional restrictions while balancing cost, performance, and customizability. That said, if it's available to you, the official Max Plan still offers the best experience.

docs/blog/2025-11-18-glm-reasoning.md (new file, +94 lines)
---
title: GLM-4.6 Supports Reasoning and Interleaved Thinking
date: 2025-11-18
tags: [glm, reasoning, chain-of-thought]
---

# GLM-4.6 Supports Reasoning and Interleaved Thinking

## Enabling Reasoning in Claude Code with GLM-4.6

GLM has supported Claude Code since version 4.5. I've been following its progress closely, and many users have reported that reasoning cannot be enabled within Claude Code. Recently, thanks to sponsorship from Zhipu, I decided to investigate the issue in depth. According to the [official documentation](https://docs.z.ai/api-reference/llm/chat-completion), the `/chat/completions` endpoint has reasoning enabled by default, but the model itself decides whether to think:

```
thinking.type enum<string> default:enabled

Whether to enable the chain of thought (when enabled, GLM-4.6, GLM-4.5, and others automatically decide whether to think, while GLM-4.5V always thinks), default: enabled

Available options: enabled, disabled
```

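For a direct API call outside Claude Code, the parameter can be set explicitly. A minimal sketch (the endpoint URL, model name, and the `reasoning_content` response field are assumptions based on the provider's documentation, not verified here):

```typescript
// Sketch: calling /chat/completions with thinking explicitly enabled.
const resp = await fetch(
  "https://open.bigmodel.cn/api/paas/v4/chat/completions",
  {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.ZHIPU_API_KEY}`,
    },
    body: JSON.stringify({
      model: "glm-4.6",
      thinking: { type: "enabled" }, // the default; the model still decides whether to think
      messages: [{ role: "user", content: "Prove that sqrt(2) is irrational." }],
    }),
  }
);
const data = await resp.json();
// When the model does think, DeepSeek-style APIs return the chain of
// thought in a separate reasoning_content field.
console.log(data.choices[0].message.reasoning_content);
console.log(data.choices[0].message.content);
```
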
Inside Claude Code, however, the heavy system prompt interferes with GLM's internal judgment, and the model rarely chooses to think. We therefore need to explicitly steer the model into believing reasoning is required. Since claude-code-router works as a proxy, the only feasible levers are the prompts and the request parameters.

Initially I tried removing Claude Code's system prompt entirely. The model did start reasoning, but without its system prompt Claude Code's workflow broke. So instead I used prompt injection to explicitly instruct the model to think step by step:

```typescript
// transformer.ts
import { UnifiedChatRequest } from "../types/llm";
import { Transformer } from "../types/transformer";

// Instruction injected at every turn to steer the model into explicit reasoning.
const REASONING_PROMPT =
  "You are an expert reasoning model.\nAlways think step by step before answering. Even if the problem seems simple, always write down your reasoning process explicitly.\nNever skip your chain of thought.\nUse the following output format:\n<reasoning_content>(Write your full detailed thinking here.)</reasoning_content>\n\nWrite your final conclusion here.";

export class ForceReasoningTransformer implements Transformer {
  name = "forcereasoning";

  async transformRequestIn(
    request: UnifiedChatRequest
  ): Promise<UnifiedChatRequest> {
    // Append the instruction to the system message...
    const systemMessage = request.messages.find(
      (item) => item.role === "system"
    );
    if (Array.isArray(systemMessage?.content)) {
      systemMessage.content.push({ type: "text", text: REASONING_PROMPT });
    }
    // ...and to the latest user message, so it survives long contexts.
    const lastMessage = request.messages[request.messages.length - 1];
    if (lastMessage.role === "user" && Array.isArray(lastMessage.content)) {
      lastMessage.content.push({ type: "text", text: REASONING_PROMPT });
    }
    // After a tool result, add a user turn carrying the same instruction.
    if (lastMessage.role === "tool") {
      request.messages.push({
        role: "user",
        content: [{ type: "text", text: REASONING_PROMPT }],
      });
    }
    return request;
  }
}
```

Why use `<reasoning_content>` instead of the `<think>` tag? Two reasons:

1. The `<think>` tag doesn't reliably trigger reasoning, likely because `<think>` had special behavior in the model's training data.
2. With `<think>`, the reasoning output gets split into a separate field, which leads directly to the chain-of-thought feedback problem discussed below.

## Chain-of-Thought Feedback

Recently, Minimax released `Minimax-m2`, along with [an article](https://www.minimaxi.com/news/why-is-interleaved-thinking-important-for-m2) explaining interleaved thinking. The idea isn't entirely new, but it's a good opportunity to analyze it.

Why do we need interleaved thinking at all? Minimax's article notes that the Chat Completion API does not support passing reasoning content between requests. ChatGPT was the first to support reasoning, but OpenAI initially didn't expose the chain of thought to users, so the Chat Completion API never needed to carry it; even the CoT field was first introduced by DeepSeek. The two representations are sketched below.

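The shapes here are illustrative, not copied from any one API. The `reasoning_content` field follows the DeepSeek convention, and the inline markers follow what sglang's reasoning parser strips out:

```typescript
// DeepSeek-style APIs split the chain of thought into its own field, which
// is then dropped from the history sent with the next request:
const split = {
  role: "assistant",
  reasoning_content: "First compare the two options...",
  content: "Option B is better.",
};

// Inside the model, the reasoning was originally inline, delimited by
// markers; left unsplit, it flows back into the next round automatically:
const inline = {
  role: "assistant",
  content: "<think>First compare the two options...</think>Option B is better.",
};
```
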
Do we really need explicit CoT fields? What happens without them? Does it affect reasoning? Looking at [sglang's source code](https://github.com/sgl-project/sglang/blob/main/python/sglang/srt/parser/reasoning_parser.py), reasoning content is naturally emitted inside the message, delimited by specific markers. If we never split it out, the next round of conversation naturally includes it. In other words, the only reason we need interleaved thinking is that we separated the reasoning content from the normal messages in the first place.

With the fewer-than-40 lines of code above, I implemented a simple exploration of enabling reasoning and chain-of-thought feedback for GLM-4.5/4.6. (It's only this simple because I haven't implemented parsing logic yet: you could easily extend the transformer to split the reasoning output out of each response and merge it back into the next request, which would improve compatibility with Claude Code's frontend display.)

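A minimal sketch of that split/merge step (assumed response shapes; real code would also need to handle streaming):

```typescript
// Extract <reasoning_content>...</reasoning_content> from the model's text
// so the client can render it as a thinking block.
const TAG = /<reasoning_content>([\s\S]*?)<\/reasoning_content>/;

function splitReasoning(text: string): { reasoning?: string; content: string } {
  const match = text.match(TAG);
  if (!match) return { content: text };
  return { reasoning: match[1].trim(), content: text.replace(TAG, "").trim() };
}

// On the next request, merge the stored reasoning back in verbatim so the
// model sees its own chain of thought.
function mergeReasoning(reasoning: string | undefined, content: string): string {
  return reasoning
    ? `<reasoning_content>${reasoning}</reasoning_content>\n\n${content}`
    : content;
}
```
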
If you have better ideas, feel free to reach out. I'd love to discuss further.

docs/blog/2025-11-18-router-exploration.md (new file, +111 lines)
---
title: Maybe We Can Do More with the Router
date: 2025-11-18
tags: [router, transformer, deepseek]
---

# Maybe We Can Do More with the Router

Since the release of `claude-code-router`, I've received a lot of user feedback, and quite a few issues are still open. Most of them concern support for different providers and DeepSeek's lack of enthusiasm for calling tools.

I originally created this project for personal use, mainly to access Claude Code at a lower cost, so multi-provider support wasn't part of the initial design. While troubleshooting, though, I discovered that even though most providers claim compatibility with the OpenAI-style `/chat/completions` interface, the details differ in many subtle ways. For example:

1. When a Gemini tool parameter has type string, the `format` field only supports `date` and `date-time`, and there is no tool call ID.
2. OpenRouter requires `cache_control` for caching.
3. The official DeepSeek API caps `max_output` at 8192, while Volcano Engine's limit is higher.

Beyond these, smaller providers often have their own quirks in parameter handling. So I decided to create a new project, [musistudio/llms](https://github.com/musistudio/llms), to deal with these compatibility issues. It uses the OpenAI format as its common base and introduces a generic `Transformer` interface for transforming both requests and responses; its rough shape, as used in this post, is sketched below.

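This sketch is inferred from the examples later in the post, not copied from the library; the real interface in musistudio/llms may define additional hooks:

```typescript
import { UnifiedChatRequest } from "../types/llm";

interface Transformer {
  name: string;
  // Adjust the unified request before it is sent to the provider.
  transformRequestIn?(
    request: UnifiedChatRequest
  ): UnifiedChatRequest | Promise<UnifiedChatRequest>;
  // Adjust the provider's response before it is returned to the client.
  transformResponseOut?(response: Response): Promise<Response>;
}
```
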
Once a `Transformer` is implemented for each provider, requests can be mixed and matched between them. For example, `AnthropicTransformer` implements bidirectional conversion between the Anthropic and OpenAI formats and listens on the `/v1/messages` endpoint, while `GeminiTransformer` handles Gemini <-> OpenAI conversions and listens on `/v1beta/models/:modelAndAction`. When both requests and responses are transformed into the common format, they can interoperate seamlessly:

```
AnthropicRequest -> AnthropicTransformer -> OpenAIRequest -> GeminiTransformer -> GeminiRequest -> GeminiServer
```

```
GeminiResponse -> GeminiTransformer -> OpenAIResponse -> AnthropicTransformer -> AnthropicResponse
```

Smoothing out differences in a middleware layer may introduce some performance overhead, but the main goal is to let `claude-code-router` support multiple providers.

As for DeepSeek's lackluster tool usage, I found that it stems from poor instruction adherence in long conversations. At first the model calls tools proactively, but after several rounds it starts responding with plain text instead. My first workaround was injecting a system prompt reminding the model to use tools proactively, but in long contexts the model tends to forget that instruction too.

Reading the DeepSeek documentation, I noticed it supports the `tool_choice` parameter, which can be set to `"required"` to force the model to call at least one tool. Enabling it significantly improved the model's tool usage; we only need to drop the setting once it's no longer necessary. Since the `Transformer` interface in [musistudio/llms](https://github.com/musistudio/llms) lets us modify a request before it's sent and adjust the response after it's received, I took inspiration from Claude Code's Plan Mode and implemented a similar Tool Mode for DeepSeek:

```typescript
export class TooluseTransformer implements Transformer {
  name = "tooluse";

  transformRequestIn(request: UnifiedChatRequest): UnifiedChatRequest {
    if (request.tools?.length) {
      // Remind the model that tool mode is active...
      request.messages.push({
        role: "system",
        content: `<system-reminder>Tool mode is active. The user expects you to proactively execute the most suitable tool to help complete the task.
Before invoking a tool, you must carefully evaluate whether it matches the current task. If no available tool is appropriate for the task, you MUST call the \`ExitTool\` to exit tool mode — this is the only valid way to terminate tool mode.
Always prioritize completing the user's task effectively and efficiently by using tools whenever appropriate.</system-reminder>`,
      });
      // ...and force at least one tool call per turn.
      request.tool_choice = "required";
      // ExitTool is the escape hatch: calling it ends tool mode, and its
      // "response" argument becomes the plain-text reply.
      request.tools.unshift({
        type: "function",
        function: {
          name: "ExitTool",
          description: `Use this tool when you are in tool mode and have completed the task. This is the only valid way to exit tool mode.
IMPORTANT: Before using this tool, ensure that none of the available tools are applicable to the current task. You must evaluate all available options — only if no suitable tool can help you complete the task should you use ExitTool to terminate tool mode.
Examples:
1. Task: "Use a tool to summarize this document" — Do not use ExitTool if a summarization tool is available.
2. Task: "What's the weather today?" — If no tool is available to answer, use ExitTool after reasoning that none can fulfill the task.`,
          parameters: {
            type: "object",
            properties: {
              response: {
                type: "string",
                description:
                  "Your response will be forwarded to the user exactly as returned — the tool will not modify or post-process it in any way.",
              },
            },
            required: ["response"],
          },
        },
      });
    }
    return request;
  }

  async transformResponseOut(response: Response): Promise<Response> {
    if (response.headers.get("Content-Type")?.includes("application/json")) {
      const jsonResponse = await response.json();
      const toolCall = jsonResponse?.choices?.[0]?.message?.tool_calls?.[0];
      if (toolCall?.function?.name === "ExitTool") {
        // Unwrap ExitTool: surface its "response" argument as the message
        // content and drop the tool call itself.
        const toolArguments = JSON.parse(toolCall.function.arguments || "{}");
        jsonResponse.choices[0].message.content = toolArguments.response || "";
        delete jsonResponse.choices[0].message.tool_calls;
      }

      // Handle the non-streaming response.
      return new Response(JSON.stringify(jsonResponse), {
        status: response.status,
        statusText: response.statusText,
        headers: response.headers,
      });
    } else if (response.headers.get("Content-Type")?.includes("stream")) {
      // Streaming responses would need equivalent handling here.
      // ...
    }
    return response;
  }
}
```

This transformer ensures the model calls at least one tool per turn. If no tool is appropriate, or the task is finished, the model can exit via `ExitTool`. Because this relies on the `tool_choice` parameter, it only works with models that support it.

In practice, this approach noticeably improves DeepSeek's tool usage. The tradeoff is that the model may sometimes invoke irrelevant or unnecessary tools, increasing latency and token usage.

This update is just a small experiment in adding an "agent" to the router. Maybe there are more interesting things to explore from here.

@@ -0,0 +1,102 @@
---
title: Project Motivation and Principles
date: 2025-02-25
tags: [claude-code, reverse-engineering, tutorial]
---

# Project Motivation and Principles

As early as the day after Claude Code was released (2025-02-25), I attempted and completed a reverse engineering of the project. At the time, using Claude Code required registering an Anthropic account, applying for the waitlist, and waiting for approval. But for well-known reasons, Anthropic blocks users in mainland China, so I had no normal way to use it. From the information available, I observed:

1. Claude Code is installed via npm, so it is very likely built with Node.js.
2. Node.js offers many debugging options: simple `console.log` calls to extract the information you want, `--inspect` to attach `Chrome Devtools`, or even `d8` to debug certain encrypted and obfuscated code.

Since my goal was to use `Claude Code` without an Anthropic account, I didn't need the full source code; I only needed to redirect the requests `Claude Code` makes to Anthropic's models to my own custom endpoint. So I began my reverse engineering process:

1. First, install `Claude Code`:

```bash
npm install -g @anthropic-ai/claude-code
```

2. After installation, the project lives at `~/.nvm/versions/node/v20.10.0/lib/node_modules/@anthropic-ai/claude-code`. Since I use `nvm` as my Node version manager, currently with `node-v20.10.0`, this path will vary from person to person.
3. With the project path found, analyze the package entry point via package.json, which reads:

```json
{
  "name": "@anthropic-ai/claude-code",
  "version": "1.0.24",
  "main": "sdk.mjs",
  "types": "sdk.d.ts",
  "bin": {
    "claude": "cli.js"
  },
  "engines": {
    "node": ">=18.0.0"
  },
  "type": "module",
  "author": "Boris Cherny <boris@anthropic.com>",
  "license": "SEE LICENSE IN README.md",
  "description": "Use Claude, Anthropic's AI assistant, right from your terminal. Claude can understand your codebase, edit files, run terminal commands, and handle entire workflows for you.",
  "homepage": "https://github.com/anthropics/claude-code",
  "bugs": {
    "url": "https://github.com/anthropics/claude-code/issues"
  },
  "scripts": {
    "prepare": "node -e \"if (!process.env.AUTHORIZED) { console.error('ERROR: Direct publishing is not allowed.\\nPlease use the publish-external.sh script to publish this package.'); process.exit(1); }\"",
    "preinstall": "node scripts/preinstall.js"
  },
  "dependencies": {},
  "optionalDependencies": {
    "@img/sharp-darwin-arm64": "^0.33.5",
    "@img/sharp-darwin-x64": "^0.33.5",
    "@img/sharp-linux-arm": "^0.33.5",
    "@img/sharp-linux-arm64": "^0.33.5",
    "@img/sharp-linux-x64": "^0.33.5",
    "@img/sharp-win32-x64": "^0.33.5"
  }
}
```

The `"claude": "cli.js"` entry is what we're looking for. Opening cli.js, we find the code has been minified and obfuscated. No matter: WebStorm's `Format File` feature can reformat it and make the code a little more readable, like this:

![claude-code](/blog-images/claude-code.png)

Now you can read parts of the code to understand `Claude Code`'s internal tooling and prompts. You can also add `console.log` at key points to get more information, or use `Chrome Devtools` for breakpoint debugging by launching `Claude Code` with:

```bash
NODE_OPTIONS="--inspect-brk=9229" claude
```

This command starts `Claude Code` in debug mode with the debug port set to `9229`. Visiting `chrome://inspect/` in Chrome then shows the current `Claude Code` process; click `inspect` to start debugging.

![chrome-inspect](/blog-images/chrome-inspect.png)
![chrome-devtools](/blog-images/chrome-devtools.png)

Searching for the string `api.anthropic.com` easily locates where `Claude Code` sends its requests. From the surrounding context, it's easy to see that the `baseURL` here can be overridden via the `ANTHROPIC_BASE_URL` environment variable, and likewise for `apiKey` and `authToken`.

![search](/blog-images/search.png)

So far, we have the key information:

1. Environment variables can override `Claude Code`'s `BaseURL` and `apiKey` configuration.
2. `Claude Code` follows the [Anthropic API](https://docs.anthropic.com/en/api/overview) specification.

So we need to:

1. Implement a service that converts the `OpenAI API` specification to the `Anthropic API` format.
2. Set environment variables before launching `Claude Code` to point the `baseURL` at that service.

Thus, `claude-code-router` was born. The project uses `Express.js` as its HTTP server, implements the `/v1/messages` endpoint, and uses `middlewares` to handle request/response format conversion and request rewriting (which can be used to rewrite Claude Code's prompts to tune them for an individual model).

In February, the `DeepSeek` models' poor support for `Function Call` across the lineup made it impossible to use `DeepSeek` directly, so at the time I chose the `qwen-max` model. Everything worked well, but `qwen-max` doesn't support `KV Cache`, meaning I burned a large number of tokens without getting the native `Claude Code` experience.

So I then tried a `Router` mode: a small model dispatches tasks across four models in total: `router`, `tool`, `think`, and `coder`. Every request first passes through a free small model, which decides whether the task calls for thinking, coding, or tool use, and dispatches it accordingly; thinking and coding tasks loop until a tool finally writes or modifies files. In practice, the free small model wasn't good enough at dispatching tasks, and the overall agent design had flaws, so it couldn't drive `Claude Code` well.

Not until late May, when `Claude Code` was officially released and the whole `DeepSeek` model family (R1 as of 05-28) supported `Function Call`, did I redesign the project. Pair programming with AI, I fixed the earlier request and response conversion problems, where in some scenarios models output a JSON response instead of a `Function Call`. This time I went straight to the `DeepSeek-v3` model, and it worked better than I expected: it completes the vast majority of tool calls and can solve tasks with stepwise planning; most importantly, `DeepSeek` costs less than one-tenth of `claude Sonnet 3.5`. The officially released `Claude Code` also organizes its agents differently from the beta, so after analyzing `Claude Code`'s request calls I reorganized the `Router` mode. It still has four models: the default model, `background`, `think`, and `longContext`.

- The default model serves as the final fallback and handles everyday work.
- `background` handles background tasks. Anthropic says it mainly uses the `Claude Haiku 3.5` model for small jobs such as haiku generation and conversation summaries, so I routed it to a local `ollama` service.
- The `think` model is used when `Claude Code` needs to think or is in `Plan Mode`. I use `DeepSeek-R1` here; since it doesn't support reasoning-cost control, `Think` and `UltraThink` follow the same logic.
- `longContext` handles long-context scenarios. The project uses tiktoken to compute the context length of every request in real time and switches to this model when the context exceeds 32K, to make up for `DeepSeek`'s weaker long-context handling.

That is the project's development history along with some of my thinking. By cleverly using environment variable overrides, requests can be forwarded and modified without changing `Claude Code`'s source code, which means we can receive Anthropic's updates while using our own models and customizing our own prompts. This project is merely one way to use `Claude Code` while Anthropic blocks users in mainland China, balancing cost and performance. If you can, the official Max Plan is still the best experience.

@@ -0,0 +1,89 @@
---
title: GLM-4.6 Supports Reasoning and Chain-of-Thought Passback
date: 2025-11-18
tags: [glm, reasoning, chain-of-thought]
---

# GLM-4.6 Supports Reasoning and Chain-of-Thought Passback

## Enabling Reasoning for GLM-4.6 in Claude Code

GLM has supported claude code since 4.5, and I've been following it all along. Many users reported that reasoning couldn't be enabled in claude code, and having recently received sponsorship from Zhipu, I set out to investigate.

First, from the [official documentation](https://docs.bigmodel.cn/api-reference/%E6%A8%A1%E5%9E%8B-api/%E5%AF%B9%E8%AF%9D%E8%A1%A5%E5%85%A8), we find that the `/chat/completions` endpoint enables thinking by default, but the model decides whether it needs to think:

```
thinking object
Only GLM-4.5 and above support this parameter. Controls whether the model enables chain-of-thought.

thinking.type enum<string> default:enabled
Whether to enable chain-of-thought (when enabled, GLM-4.6 and GLM-4.5 decide for themselves whether to think, while GLM-4.5V always thinks), default: enabled.

Available options: enabled, disabled
```

Under the heavy interference of claude code's own prompts, GLM's built-in judgment mechanism is severely hindered, so the model rarely thinks. We therefore need to guide the model into believing it should think. Since `claude-code-router` is a proxy, all it can do is modify prompts and parameters.

At first I tried simply deleting claude code's system prompt. The model did think, but then it could no longer drive claude code. So we need prompt injection that explicitly tells the model to think:

```typescript
// transformer.ts
import { UnifiedChatRequest } from "../types/llm";
import { Transformer } from "../types/transformer";

// Instruction injected at every turn to steer the model into explicit reasoning.
const REASONING_PROMPT =
  "You are an expert reasoning model.\nAlways think step by step before answering. Even if the problem seems simple, always write down your reasoning process explicitly.\nNever skip your chain of thought.\nUse the following output format:\n<reasoning_content>(Write your full detailed thinking here.)</reasoning_content>\n\nWrite your final conclusion here.";

export class ForceReasoningTransformer implements Transformer {
  name = "forcereasoning";

  async transformRequestIn(
    request: UnifiedChatRequest
  ): Promise<UnifiedChatRequest> {
    // Append the instruction to the system message...
    const systemMessage = request.messages.find(
      (item) => item.role === "system"
    );
    if (Array.isArray(systemMessage?.content)) {
      systemMessage.content.push({ type: "text", text: REASONING_PROMPT });
    }
    // ...and to the latest user message, so it survives long contexts.
    const lastMessage = request.messages[request.messages.length - 1];
    if (lastMessage.role === "user" && Array.isArray(lastMessage.content)) {
      lastMessage.content.push({ type: "text", text: REASONING_PROMPT });
    }
    // After a tool result, add a user turn carrying the same instruction.
    if (lastMessage.role === "tool") {
      request.messages.push({
        role: "user",
        content: [{ type: "text", text: REASONING_PROMPT }],
      });
    }
    return request;
  }
}
```

Why have the model put its thinking inside a reasoning_content tag rather than a think tag? Two reasons:

1. Using the think tag directly doesn't activate thinking well; my guess is that the think tag was used as training data when the model was trained.
2. With the think tag, the model's reasoning content gets split into a separate field, which brings us to the chain-of-thought passback problem discussed next.

## Chain-of-Thought Passback

Minimax recently released Minimax-m2, along with [an article](https://www.minimaxi.com/news/why-is-interleaved-thinking-important-for-m2) introducing chain-of-thought passback. There's nothing new under the sun, though, so let's take the opportunity to dissect it.

1. First, why do we need to pass the chain of thought back? Minimax's article says the Chat Completion API doesn't support carrying reasoning content in follow-up requests. We know ChatGPT was the first to support reasoning, but OpenAI initially didn't expose the chain of thought to users, so the Chat Completion API had no need to support anything related to it; even the CoT field was first added to the Chat Completion API by DeepSeek.

2. Do we really need these fields? What happens without them? Would it affect the model's thinking? A look at [sglang's source code](https://github.com/sgl-project/sglang/blob/main/python/sglang/srt/parser/reasoning_parser.py) shows that chain-of-thought information is originally emitted in the message with specific markers; if we don't split it out, the next round of conversation naturally includes it. So the reason we need chain-of-thought passback is that we split the model's chain-of-thought content out in the first place.

With the fewer-than-40 lines of code above, I completed a simple exploration of reasoning support and chain-of-thought passback for GLM-4.5/4.6 (it's only simple because I haven't had time to do the splitting; you could perfectly well split in the transformer on response and merge again on request, which would adapt better to cc's frontend display). If you have better ideas, feel free to contact me.

@@ -0,0 +1,101 @@
---
title: Maybe We Can Do More with the Router
date: 2025-11-18
tags: [router, transformer, deepseek]
---

# Maybe We Can Do More with the Router

Since `claude-code-router` was released, I've received a lot of user feedback, and quite a few issues remain unhandled. Most concern support for different providers and the `deepseek` model's lack of enthusiasm for calling tools.

I originally built this project mainly so that I could use `claude code` at a lower cost, so the initial design didn't account for multiple providers. While actually troubleshooting, I found that although nearly every provider on the market claims OpenAI-compatible access via the `/chat/completions` endpoint, the details differ a great deal. For example:

1. When a Gemini tool parameter is of type string, the `format` parameter only supports `date` and `date-time`, and there is no tool call ID.
2. OpenRouter needs `cache_control` for caching.
3. The official DeepSeek API's `max_output` is 8192, while Volcano Engine's is larger.

Beyond these issues, there are other smaller providers whose parameters are all slightly off in one way or another. So I decided to build a new project, [musistudio/llms](https://github.com/musistudio/llms), to handle this cross-provider compatibility. It uses an OpenAI-based common format and provides a `Transformer` interface for converting requests and responses. Once a `Transformer` is implemented for each provider, providers can be called interchangeably. For example, `AnthropicTransformer` implements two-way `Anthropic` <-> `OpenAI` conversion and listens on the `/v1/messages` endpoint, while `GeminiTransformer` implements two-way `Gemini` <-> `OpenAI` conversion and listens on the `/v1beta/models/:modelAndAction` endpoint. Once their requests and responses are all converted into one common format, they can call each other:

```
AnthropicRequest -> AnthropicTransformer -> OpenAIRequest -> GeminiTransformer -> GeminiRequest -> GeminiServer
```

```
GeminiResponse -> GeminiTransformer -> OpenAIResponse -> AnthropicTransformer -> AnthropicResponse
```

Smoothing out the differences with an intermediate layer may cost some performance, but the project's original purpose was to let `claude-code-router` support different providers.

As for the `deepseek` model's lack of enthusiasm for tools, I found it comes from `deepseek`'s poor instruction adherence over long contexts. The symptom: at first the model calls tools proactively, but after a few rounds of conversation it returns only text. My initial fix was to inject a system prompt telling the model to actively use tools to solve the user's problem, but testing showed the model forgets that instruction in long contexts.

Reading the `deepseek` docs, I found the model supports the `tool_choice` parameter, which can force it to call at least one tool. I tried setting it to `required`, and the model's willingness to call tools increased greatly; now we only need to unset the parameter at the right moment. The `Transformer` in [musistudio/llms](https://github.com/musistudio/llms) lets us do things before sending the request and after receiving the response, so, borrowing from `claude code`'s `Plan Mode`, I implemented a `Tool Mode` suited to `deepseek`:

```typescript
export class TooluseTransformer implements Transformer {
  name = "tooluse";

  transformRequestIn(request: UnifiedChatRequest): UnifiedChatRequest {
    if (request.tools?.length) {
      // Remind the model that tool mode is active...
      request.messages.push({
        role: "system",
        content: `<system-reminder>Tool mode is active. The user expects you to proactively execute the most suitable tool to help complete the task.
Before invoking a tool, you must carefully evaluate whether it matches the current task. If no available tool is appropriate for the task, you MUST call the \`ExitTool\` to exit tool mode — this is the only valid way to terminate tool mode.
Always prioritize completing the user's task effectively and efficiently by using tools whenever appropriate.</system-reminder>`,
      });
      // ...and force at least one tool call per turn.
      request.tool_choice = "required";
      // ExitTool is the escape hatch: calling it ends tool mode, and its
      // "response" argument becomes the plain-text reply.
      request.tools.unshift({
        type: "function",
        function: {
          name: "ExitTool",
          description: `Use this tool when you are in tool mode and have completed the task. This is the only valid way to exit tool mode.
IMPORTANT: Before using this tool, ensure that none of the available tools are applicable to the current task. You must evaluate all available options — only if no suitable tool can help you complete the task should you use ExitTool to terminate tool mode.
Examples:
1. Task: "Use a tool to summarize this document" — Do not use ExitTool if a summarization tool is available.
2. Task: "What's the weather today?" — If no tool is available to answer, use ExitTool after reasoning that none can fulfill the task.`,
          parameters: {
            type: "object",
            properties: {
              response: {
                type: "string",
                description:
                  "Your response will be forwarded to the user exactly as returned — the tool will not modify or post-process it in any way.",
              },
            },
            required: ["response"],
          },
        },
      });
    }
    return request;
  }

  async transformResponseOut(response: Response): Promise<Response> {
    if (response.headers.get("Content-Type")?.includes("application/json")) {
      const jsonResponse = await response.json();
      const toolCall = jsonResponse?.choices?.[0]?.message?.tool_calls?.[0];
      if (toolCall?.function?.name === "ExitTool") {
        // Unwrap ExitTool: surface its "response" argument as the message
        // content and drop the tool call itself.
        const toolArguments = JSON.parse(toolCall.function.arguments || "{}");
        jsonResponse.choices[0].message.content = toolArguments.response || "";
        delete jsonResponse.choices[0].message.tool_calls;
      }

      // Handle the non-streaming response.
      return new Response(JSON.stringify(jsonResponse), {
        status: response.status,
        statusText: response.statusText,
        headers: response.headers,
      });
    } else if (response.headers.get("Content-Type")?.includes("stream")) {
      // Streaming responses would need equivalent handling here.
      // ...
    }
    return response;
  }
}
```

This transformer makes the model always call at least one tool; if no tool fits or the task is complete, it can call `ExitTool` to leave tool mode. Because it relies on the `tool_choice` parameter, it only works with models that support that parameter. In testing, it significantly increases `deepseek`'s tool-call count; the downside is that task-irrelevant or unnecessary tool calls can add to execution time and `token` consumption.

This update is just a small exploration of implementing an `agent` inside the Router. Maybe there are more interesting things to be done here...

BIN docs/static/blog-images/alipay.jpg (vendored, new file, 332 KiB)
BIN docs/static/blog-images/chrome-devtools.png (vendored, new file, 915 KiB)
BIN docs/static/blog-images/chrome-inspect.png (vendored, new file, 240 KiB)
BIN docs/static/blog-images/claude-code-router-img.png (vendored, new file, 29 KiB)
BIN docs/static/blog-images/claude-code.png (vendored, new file, 353 KiB)
BIN docs/static/blog-images/models.gif (vendored, new file, 2.3 MiB)
docs/static/blog-images/roadmap.svg (vendored, new file, +67 lines)
<svg viewBox="0 0 1200 420" xmlns="http://www.w3.org/2000/svg">
  <defs>
    <style>
      .road { stroke: #7aa2ff; stroke-width: 6; fill: none; filter: drop-shadow(0 6px 18px rgba(122,162,255,0.25)); }
      .dash { stroke: rgba(122,162,255,0.25); stroke-width: 6; fill: none; stroke-dasharray: 2 18; }
      .node { filter: drop-shadow(0 3px 10px rgba(126,240,193,0.35)); }
      .node-circle { fill: #7ef0c1; }
      .node-core { fill: #181b22; stroke: white; stroke-width: 1.5; }
      .label-bg { fill: rgba(24,27,34,0.8); stroke: rgba(255,255,255,0.12); rx: 12; }
      .label-text { fill: #e8ecf1; font-weight: 700; font-size: 14px; font-family: Arial, sans-serif; }
      .label-sub { fill: #9aa6b2; font-weight: 500; font-size: 12px; font-family: Arial, sans-serif; }
      .spark { fill: none; stroke: #ffd36e; stroke-width: 1.6; stroke-linecap: round; }
    </style>
  </defs>

  <!-- Background road with dash -->
  <path class="dash" d="M60,330 C320,260 460,100 720,160 C930,205 990,260 1140,260"/>

  <!-- Main road -->
  <path class="road" d="M60,330 C320,260 460,100 720,160 C930,205 990,260 1140,260"/>

  <!-- New Documentation Node -->
  <g class="node" transform="translate(200,280)">
    <circle class="node-circle" r="10"/>
    <circle class="node-core" r="6"/>
  </g>

  <!-- New Documentation Label -->
  <g transform="translate(80,120)">
    <rect class="label-bg" width="260" height="92"/>
    <text class="label-text" x="16" y="34">New Documentation</text>
    <text class="label-sub" x="16" y="58">Clear structure, examples &amp; best practices</text>
  </g>

  <!-- Plugin Marketplace Node -->
  <g class="node" transform="translate(640,150)">
    <circle class="node-circle" r="10"/>
    <circle class="node-core" r="6"/>
  </g>

  <!-- Plugin Marketplace Label -->
  <g transform="translate(560,20)">
    <rect class="label-bg" width="320" height="100"/>
    <text class="label-text" x="16" y="34">Plugin Marketplace</text>
    <text class="label-sub" x="16" y="58">Community submissions, ratings &amp; version constraints</text>
  </g>

  <!-- One More Thing Node -->
  <g class="node" transform="translate(1080,255)">
    <circle class="node-circle" r="10"/>
    <circle class="node-core" r="6"/>
  </g>

  <!-- One More Thing Label -->
  <g transform="translate(940,300)">
    <rect class="label-bg" width="250" height="86"/>
    <text class="label-text" x="16" y="34">One More Thing</text>
    <text class="label-sub" x="16" y="58">🚀 Confidential project · Revealing soon</text>
  </g>

  <!-- Spark decorations -->
  <g transform="translate(1125,290)">
    <path class="spark" d="M0 0 L8 0 M4 -4 L4 4"/>
    <path class="spark" d="M14 -2 L22 -2 M18 -6 L18 2"/>
    <path class="spark" d="M-10 6 L-2 6 M-6 2 L-6 10"/>
  </g>
</svg>
BIN docs/static/blog-images/search.png (vendored, new file, 984 KiB)
BIN docs/static/blog-images/sponsors/glm-en.jpg (vendored, new file, 343 KiB)
BIN docs/static/blog-images/sponsors/glm-zh.jpg (vendored, new file, 396 KiB)
BIN docs/static/blog-images/statusline-config.png (vendored, new file, 91 KiB)
BIN docs/static/blog-images/statusline.png (vendored, new file, 22 KiB)
BIN docs/static/blog-images/ui.png (vendored, new file, 518 KiB)
BIN docs/static/blog-images/webstorm-formate-file.png (vendored, new file, 1012 KiB)
BIN docs/static/blog-images/wechat.jpg (vendored, new file, 109 KiB)
BIN docs/static/blog-images/wechat_group.jpg (vendored, new file, 237 KiB)