Merge branch 'feature/cli'

# Conflicts:
#	.gitignore
#	index.mjs
jinhui.li
2025-06-10 12:58:00 +08:00
28 changed files with 2427 additions and 1667 deletions

31
.env.example

@@ -1,31 +0,0 @@
## If you don't want to use multi-model routing,
## set ENABLE_ROUTER to false and define the following variables.
## The model must support function calling.
ENABLE_ROUTER=false
OPENAI_API_KEY=""
OPENAI_BASE_URL=""
OPENAI_MODEL=""
## If you want to use multi-model routing, set ENABLE_ROUTER to true
# ENABLE_ROUTER=true
## Define the model for the tool agent; the model must support function calling
# TOOL_AGENT_API_KEY=""
# TOOL_AGENT_BASE_URL=""
# TOOL_AGENT_MODEL=""
## Define the model for the coder agent
# CODER_AGENT_API_KEY=""
# CODER_AGENT_BASE_URL=""
# CODER_AGENT_MODEL=""
## Define the model for the thinker agent; a model that supports reasoning will yield better results
# THINK_AGENT_API_KEY=""
# THINK_AGENT_BASE_URL=""
# THINK_AGENT_MODEL=""
## Define the model for the router agent. This model is the entry point for every request and consumes a lot of tokens, so choose a small model to reduce costs
# ROUTER_AGENT_API_KEY=""
# ROUTER_AGENT_BASE_URL=""
# ROUTER_AGENT_MODEL=""

4
.gitignore vendored

@@ -1,5 +1,5 @@
node_modules
.env
log.txt
.DS_Store
pnpm-lock.yaml
.idea
dist

9
.npmignore Normal file

@@ -0,0 +1,9 @@
src
node_modules
.claude
CLAUDE.md
screenshoots
.DS_Store
.vscode
.idea
.env

CLAUDE.md

@@ -1,25 +1,12 @@
# CLAUDE.md
## Build/Lint/Test Commands
- Install dependencies: `npm i`
- Start server: `node index.mjs` (requires OPENAI_API_KEY, OPENAI_BASE_URL, OPENAI_MODEL env vars)
- Set environment variables:
```shell
export DISABLE_PROMPT_CACHING=1
export ANTHROPIC_AUTH_TOKEN="test"
export ANTHROPIC_BASE_URL="http://127.0.0.1:3456"
export API_TIMEOUT_MS=600000
```
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository. All text must be written in English.
## Code Style Guidelines
- Follow existing formatting in README.md and other files
- Use ES module syntax (`import`/`export`)
- Environment variables are uppercase with underscores
- API endpoints use `/v1/` prefix
- JSON payloads follow strict structure with model, max_tokens, messages, system, etc.
- Include type information in JSON payloads where possible
- Use descriptive variable names
- Keep code modular - separate files for router, index, etc.
- Include example usage/documentation in README
- Use markdown code blocks for code samples
- Document API endpoints and parameters
## Key Development Commands
- Build: `npm run build`
- Start: `npm start`
## Architecture
- Uses `express` for routing (see `src/server.ts`)
- Bundles with `esbuild` for CLI distribution
- Plugins are loaded from `$HOME/.claude-code-router/plugins`

102
README.md

@@ -2,104 +2,46 @@
> This is a repository for testing routing Claude Code requests to different models.
![demo.png](https://github.com/musistudio/claude-code-reverse/blob/main/screenshoots/demo.png)
## Warning! This project is for testing purposes and may consume a lot of tokens! It may also fail to complete tasks!
## Implemented
- [x] Normal Mode and Router Mode
- [x] Using the qwen2.5-coder-3b model as the routing dispatcher (since it's currently free on Alibaba Cloud's official website)
- [x] Using the qwen-max-0125 model as the tool invoker
- [x] Using deepseek-v3 as the coder model
- [x] Using deepseek-r1 as the reasoning model
- [x] Support proxy
Thanks to the free qwen2.5-coder-3b model from Alibaba and DeepSeek's KV-Cache, we can significantly reduce the cost of using Claude Code. Make sure to set appropriate ignorePatterns for the project. See: https://docs.anthropic.com/en/docs/agents-and-tools/claude-code/overview
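For example, assuming the Claude Code settings file accepts an `ignorePatterns` array (the exact file location and key should be verified against the linked docs), it might look like:
```json
{
  "ignorePatterns": ["node_modules/**", "dist/**", "*.log"]
}
```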
## Usage
0. Install Claude Code
1. Install Claude Code
```shell
npm install -g @anthropic-ai/claude-code
```
1. Clone this repo
2. Install Claude Code Router
```shell
git clone https://github.com/musistudio/claude-code-reverse.git
npm install -g @musistudio/claude-code-router
```
2. Install dependencies
3. Start Claude Code via claude-code-router
```shell
npm i
ccr code
```
3. Start server
```shell
# Alternatively, you can create an .env file in the repo directory
# You can refer to the .env.example file to create the .env file
## Plugin
## disable router
ENABLE_ROUTER=false
OPENAI_API_KEY=""
OPENAI_BASE_URL=""
OPENAI_MODEL=""
The plugin system lets users rewrite Claude Code prompts and define custom routers. Plugins live in `$HOME/.claude-code-router/plugins`. Currently, there are two demos available:
1. [custom router](https://github.com/musistudio/claude-code-router/blob/dev/custom-prompt/plugins/deepseek.js)
2. [rewrite prompt](https://github.com/musistudio/claude-code-router/blob/dev/custom-prompt/plugins/gemini.js)
## enable router
ENABLE_ROUTER=true
export TOOL_AGENT_API_KEY=""
export TOOL_AGENT_BASE_URL=""
export TOOL_AGENT_MODEL="qwen-max-2025-01-25"
You need to move them to the `$HOME/.claude-code-router/plugins` directory and configure `usePlugin` in `$HOME/.claude-code-router/config.json` like this:
export CODER_AGENT_API_KEY=""
export CODER_AGENT_BASE_URL="https://api.deepseek.com"
export CODER_AGENT_MODEL="deepseek-chat"
export THINK_AGENT_API_KEY=""
export THINK_AGENT_BASE_URL="https://api.deepseek.com"
export THINK_AGENT_MODEL="deepseek-reasoner"
export ROUTER_AGENT_API_KEY=""
export ROUTER_AGENT_BASE_URL=""
export ROUTER_AGENT_MODEL="qwen2.5-coder-3b-instruct"
node index.mjs
```json
{
"usePlugin": "gemini",
"LOG": true,
"OPENAI_API_KEY": "",
"OPENAI_BASE_URL": "",
"OPENAI_MODEL": ""
}
```
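A minimal plugin is a CommonJS module exporting an Express-style handler; by the time it runs, `req.body` has already been converted to the OpenAI chat-completions format. A sketch (the file name `my-plugin.js` is hypothetical):
```js
// $HOME/.claude-code-router/plugins/my-plugin.js
// Helpers like `log` come from the virtual "claude-code-router" module
// that the router injects at require time.
const { log } = require("claude-code-router");

module.exports = async function handle(req, res, next) {
  log("rewriting request for model:", req.body.model);
  req.body.temperature = 0; // example rewrite of the outgoing payload
  next();
};
```
Activate it by setting `"usePlugin": "my-plugin"` in `config.json`.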
4. Set environment variable to start claude code
```shell
export DISABLE_PROMPT_CACHING=1
export ANTHROPIC_AUTH_TOKEN="test"
export ANTHROPIC_BASE_URL="http://127.0.0.1:3456"
export API_TIMEOUT_MS=600000
claude
```
## Normal Mode
The initial version uses a single model to accomplish all tasks. This model needs to support function calling and must allow for a sufficiently large tool description length, ideally greater than 1754. If the model used in this mode does not support KV Cache, it will consume a significant number of tokens.
![normal mode](https://github.com/musistudio/claude-code-reverse/blob/main/screenshoots/normal.png)
## Router Mode
Router Mode uses multiple models to handle different tasks. It requires setting ENABLE_ROUTER to true and configuring four models: ROUTER_AGENT_MODEL, TOOL_AGENT_MODEL, CODER_AGENT_MODEL, and THINK_AGENT_MODEL.
ROUTER_AGENT_MODEL does not require high intelligence and is only responsible for request routing. A small model is sufficient for this task (testing has shown that the qwen-coder-3b model performs well).
TOOL_AGENT_MODEL must support function calling and allow for a sufficiently large tool description length, ideally greater than 1754. If the model used in this mode does not support KV Cache, it will consume a significant number of tokens.
CODER_AGENT_MODEL and THINK_AGENT_MODEL can use the DeepSeek series of models.
The purpose of router mode is to separate tool invocation from coding tasks, enabling the use of inference models like r1, which do not support function calling.
![router mode](https://github.com/musistudio/claude-code-reverse/blob/main/screenshoots/router.png)
## Features
- [x] Plugins
- [ ] Support changing models
- [ ] Support scheduled tasks

7
config.json Normal file

@@ -0,0 +1,7 @@
{
"usePlugin": "",
"LOG": true,
"OPENAI_API_KEY": "",
"OPENAI_BASE_URL": "",
"OPENAI_MODEL": ""
}

330
index.mjs

@@ -1,330 +0,0 @@
import express from "express";
import { OpenAI } from "openai";
import dotenv from "dotenv";
import { existsSync } from "fs";
import { writeFile } from "fs/promises";
import { Router } from "./router.mjs";
import { getOpenAICommonOptions } from "./utils.mjs";
dotenv.config();
const app = express();
const port = 3456;
app.use(express.json({ limit: "500mb" }));
let client;
if (process.env.ENABLE_ROUTER && process.env.ENABLE_ROUTER === "true") {
const router = new Router();
client = {
call: (data) => {
return router.route(data);
},
};
} else {
const openai = new OpenAI({
apiKey: process.env.OPENAI_API_KEY,
baseURL: process.env.OPENAI_BASE_URL,
...getOpenAICommonOptions(),
});
client = {
call: (data) => {
data.model = process.env.OPENAI_MODEL;
return openai.chat.completions.create(data);
},
};
}
app.post("/v1/messages", async (req, res) => {
try {
let {
model,
max_tokens,
messages,
system = [],
temperature,
metadata,
tools,
} = req.body;
messages = messages.map((item) => {
if (item.content instanceof Array) {
return {
role: item.role,
content: item.content.map((it) => {
const msg = {
...it,
type: ["tool_result", "tool_use"].includes(it?.type)
? "text"
: it?.type,
};
if (msg.type === "text") {
msg.text = it?.content
? JSON.stringify(it.content)
: it?.text || "";
delete msg.content;
}
return msg;
}),
};
}
return {
role: item.role,
content: item.content,
};
});
const data = {
model,
messages: [
...system.map((item) => ({
role: "system",
content: item.text,
})),
...messages,
],
temperature,
stream: true,
};
if (tools) {
data.tools = tools
.filter((tool) => !["StickerRequest"].includes(tool.name))
.map((item) => ({
type: "function",
function: {
name: item.name,
description: item.description,
parameters: item.input_schema,
},
}));
}
const completion = await client.call(data);
// Set SSE response headers
res.setHeader("Content-Type", "text/event-stream");
res.setHeader("Cache-Control", "no-cache");
res.setHeader("Connection", "keep-alive");
const messageId = "msg_" + Date.now();
let contentBlockIndex = 0;
let currentContentBlocks = [];
// Send message_start event
const messageStart = {
type: "message_start",
message: {
id: messageId,
type: "message",
role: "assistant",
content: [],
model,
stop_reason: null,
stop_sequence: null,
usage: { input_tokens: 1, output_tokens: 1 },
},
};
res.write(
`event: message_start\ndata: ${JSON.stringify(messageStart)}\n\n`
);
let isToolUse = false;
let toolUseJson = "";
let currentToolCall = null;
let hasStartedTextBlock = false;
for await (const chunk of completion) {
const delta = chunk.choices[0].delta;
if (delta.tool_calls && delta.tool_calls.length > 0) {
const toolCall = delta.tool_calls[0];
if (!isToolUse) {
// Start new tool call block
isToolUse = true;
currentToolCall = toolCall;
const toolBlockStart = {
type: "content_block_start",
index: contentBlockIndex,
content_block: {
type: "tool_use",
id: `toolu_${Date.now()}`,
name: toolCall.function.name,
input: {},
},
};
// Add to content blocks list
currentContentBlocks.push({
type: "tool_use",
id: toolBlockStart.content_block.id,
name: toolCall.function.name,
input: {},
});
res.write(
`event: content_block_start\ndata: ${JSON.stringify(
toolBlockStart
)}\n\n`
);
toolUseJson = "";
}
// Stream tool call JSON
if (toolCall.function.arguments) {
const jsonDelta = {
type: "content_block_delta",
index: contentBlockIndex,
delta: {
type: "input_json_delta",
partial_json: toolCall.function.arguments,
},
};
toolUseJson += toolCall.function.arguments;
// Try to parse complete JSON and update content block
try {
const parsedJson = JSON.parse(toolUseJson);
currentContentBlocks[contentBlockIndex].input = parsedJson;
} catch (e) {
// JSON not yet complete, continue accumulating
}
res.write(
`event: content_block_delta\ndata: ${JSON.stringify(jsonDelta)}\n\n`
);
}
} else if (delta.content) {
// Handle regular text content
if (isToolUse) {
// End previous tool call block
const contentBlockStop = {
type: "content_block_stop",
index: contentBlockIndex,
};
res.write(
`event: content_block_stop\ndata: ${JSON.stringify(
contentBlockStop
)}\n\n`
);
contentBlockIndex++;
isToolUse = false;
}
if (!delta.content) continue;
// If text block not yet started, send content_block_start
if (!hasStartedTextBlock) {
const textBlockStart = {
type: "content_block_start",
index: contentBlockIndex,
content_block: {
type: "text",
text: "",
},
};
// Add to content blocks list
currentContentBlocks.push({
type: "text",
text: "",
});
res.write(
`event: content_block_start\ndata: ${JSON.stringify(
textBlockStart
)}\n\n`
);
hasStartedTextBlock = true;
}
// Send regular text content
const contentDelta = {
type: "content_block_delta",
index: contentBlockIndex,
delta: {
type: "text_delta",
text: delta.content,
},
};
// Update content block text
if (currentContentBlocks[contentBlockIndex]) {
currentContentBlocks[contentBlockIndex].text += delta.content;
}
res.write(
`event: content_block_delta\ndata: ${JSON.stringify(
contentDelta
)}\n\n`
);
}
}
// Close last content block
const contentBlockStop = {
type: "content_block_stop",
index: contentBlockIndex,
};
res.write(
`event: content_block_stop\ndata: ${JSON.stringify(contentBlockStop)}\n\n`
);
// Send message_delta event with appropriate stop_reason
const messageDelta = {
type: "message_delta",
delta: {
stop_reason: isToolUse ? "tool_use" : "end_turn",
stop_sequence: null,
content: currentContentBlocks,
},
usage: { input_tokens: 100, output_tokens: 150 },
};
res.write(
`event: message_delta\ndata: ${JSON.stringify(messageDelta)}\n\n`
);
// Send message_stop event
const messageStop = {
type: "message_stop",
};
res.write(`event: message_stop\ndata: ${JSON.stringify(messageStop)}\n\n`);
res.end();
} catch (error) {
console.error("Error in streaming response:", error);
res.status(400).json({
status: "error",
message: error.message,
});
}
});
async function initializeClaudeConfig() {
const homeDir = process.env.HOME;
const configPath = `${homeDir}/.claude.json`;
if (!existsSync(configPath)) {
const userID = Array.from(
{ length: 64 },
() => Math.random().toString(16)[2]
).join("");
const configContent = {
numStartups: 184,
autoUpdaterStatus: "enabled",
userID,
hasCompletedOnboarding: true,
lastOnboardingVersion: "0.2.9",
projects: {},
};
await writeFile(configPath, JSON.stringify(configContent, null, 2));
}
}
async function run() {
await initializeClaudeConfig();
app.listen(port, "0.0.0.0", () => {
console.log(`Example app listening on port ${port}`);
});
}
run();

1013
package-lock.json generated

File diff suppressed because it is too large

package.json

@@ -1,19 +1,33 @@
{
"name": "claude-code-router",
"name": "@musistudio/claude-code-router",
"version": "1.0.0",
"description": "You can switch the API endpoint by modifying the ANTHROPIC_BASE_URL environment variable.",
"main": "index.mjs",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1",
"start": "node index.mjs"
"description": "Use Claude Code without an Anthropics account and route it to another LLM provider",
"bin": {
"ccr": "./dist/cli.js"
},
"keywords": [],
"author": "",
"license": "ISC",
"scripts": {
"build": "esbuild src/cli.ts --bundle --platform=node --outfile=dist/cli.js"
},
"keywords": ["claude", "code", "router", "llm", "anthropic"],
"author": "musistudio",
"license": "MIT",
"dependencies": {
"@anthropic-ai/sdk": "^0.39.0",
"dotenv": "^16.4.7",
"express": "^4.21.2",
"https-proxy-agent": "^7.0.6",
"openai": "^4.85.4"
},
"devDependencies": {
"@types/express": "^5.0.0",
"esbuild": "^0.25.1",
"typescript": "^5.8.2"
},
"publishConfig": {
"ignore": [
"!build/",
"src/",
"screenshots/"
]
}
}

139
plugins/deepseek.js Normal file

@@ -0,0 +1,139 @@
const {
log,
streamOpenAIResponse,
createClient,
} = require("claude-code-router");
const thinkRouter = {
name: "think",
description: `This agent is used solely for complex reasoning and thinking tasks. It should not be called for information retrieval or repetitive, frequent requests. Only use this agent for tasks that require deep analysis or problem-solving. If there is an existing result from the Thinker agent, do not call this agent again. You are only responsible for deep thinking to break down tasks, no coding or tool calls are needed. Finally, return the broken-down steps in order, for example:\n1. xxx\n2. xxx\n3. xxx`,
run(args) {
const client = createClient({
apiKey: process.env.THINK_AGENT_API_KEY,
baseURL: process.env.THINK_AGENT_BASE_URL,
});
const messages = JSON.parse(JSON.stringify(args.messages));
messages.forEach((msg) => {
if (Array.isArray(msg.content)) {
msg.content = JSON.stringify(msg.content);
}
});
let startIdx = messages.findIndex((msg) => msg.role !== "system");
if (startIdx === -1) startIdx = messages.length;
for (let i = startIdx; i < messages.length; i++) {
const expectedRole = (i - startIdx) % 2 === 0 ? "user" : "assistant";
messages[i].role = expectedRole;
}
if (
messages.length > 0 &&
messages[messages.length - 1].role === "assistant"
) {
messages.push({
role: "user",
content:
"Please follow the instructions provided above to resolve the issue.",
});
}
delete args.tools;
return client.chat.completions.create({
...args,
messages,
model: process.env.THINK_AGENT_MODEL,
});
},
};
class Router {
constructor() {
this.routers = [thinkRouter];
this.client = createClient({
apiKey: process.env.ROUTER_AGENT_API_KEY,
baseURL: process.env.ROUTER_AGENT_BASE_URL,
});
}
async route(args) {
log(`Request Router: ${JSON.stringify(args, null, 2)}`);
const res = await this.client.chat.completions.create({
...args,
messages: [
...args.messages,
{
role: "system",
content: `## **Guidelines:**
- **Trigger the "think" mode when the user's request involves deep thinking, complex reasoning, or multi-step analysis.**
- **Criteria:**
- Involves multi-layered logical reasoning or causal analysis
- Requires establishing connections or pattern recognition between different pieces of information
- Involves cross-domain knowledge integration or weighing multiple possibilities
- Requires creative thinking or non-direct inference
### **Special Case:**
- **When the user sends "test", respond with "success" only.**
### **Format requirements:**
- When you need to trigger the "think" mode, return the following JSON format:
\`\`\`json
{
"use": "think"
}
\`\`\`
`,
},
],
model: process.env.ROUTER_AGENT_MODEL,
stream: false,
});
let result;
try {
const text = res.choices[0].message.content;
if (!text) {
throw new Error("No text");
}
result = JSON.parse(
text.slice(text.indexOf("{"), text.lastIndexOf("}") + 1)
);
} catch (e) {
res.choices[0].delta = res.choices[0].message;
log(`No Router: ${JSON.stringify(res.choices[0].message)}`);
return [res];
}
const router = this.routers.find((item) => item.name === result.use);
if (!router) {
res.choices[0].delta = res.choices[0].message;
log(`No Router: ${JSON.stringify(res.choices[0].message)}`);
return [res];
}
log(`Use Router: ${router.name}`);
if (router.name === "think") {
const agentResult = await router.run({
...args,
stream: false,
});
try {
args.messages.push({
role: "user",
content:
`${router.name} Agent Result: ` +
agentResult.choices[0].message.content,
});
log(
`${router.name} Agent Result: ` +
agentResult.choices[0].message.content
);
return await this.route(args);
} catch (error) {
console.log(agentResult);
throw error;
}
}
return router.run(args);
}
}
const router = new Router();
module.exports = async function handle(req, res, next) {
const completions = await router.route(req.body);
streamOpenAIResponse(res, completions, req.body.model);
};

23
plugins/gemini.js Normal file

@@ -0,0 +1,23 @@
module.exports = async function handle(req, res, next) {
if (Array.isArray(req.body.tools)) {
// rewrite tools definition
req.body.tools.forEach((tool) => {
if (tool.function.name === "BatchTool") {
// HACK: Gemini does not support objects with empty properties
tool.function.parameters.properties.invocations.items.properties.input.type =
"number";
return;
}
Object.keys(tool.function.parameters.properties).forEach((key) => {
const prop = tool.function.parameters.properties[key];
if (
prop.type === "string" &&
!["enum", "date-time"].includes(prop.format)
) {
delete prop.format;
}
});
});
}
next();
};

1142
pnpm-lock.yaml generated Normal file

File diff suppressed because it is too large

171
router.mjs

@@ -1,171 +0,0 @@
import { OpenAI } from "openai";
import { getOpenAICommonOptions } from "./utils.mjs";
const useToolRouter = {
name: "use-tool",
description: `This agent can call user-specified tools to perform tasks. The user provides a list of tools to be used, and the agent integrates these tools to complete the specified tasks efficiently. The agent follows user instructions and ensures proper tool utilization for each request`,
run(args) {
const client = new OpenAI({
apiKey: process.env.TOOL_AGENT_API_KEY,
baseURL: process.env.TOOL_AGENT_BASE_URL,
...getOpenAICommonOptions(),
});
return client.chat.completions.create({
...args,
messages: [
...args.messages,
{
role: "system",
content:
"You need to select the appropriate tool for the task based on the users request. Review the requirements and choose the tool that fits the task best.",
},
],
model: process.env.TOOL_AGENT_MODEL,
});
},
};
const coderRouter = {
name: "coder",
description: `This agent is solely responsible for helping users write code. This agent could not call tools. This agent is used for writing and modifying code when the user provides clear and specific coding requirements. For example, tasks like implementing a quicksort algorithm in JavaScript or creating an HTML layout. If the user's request is unclear or cannot be directly translated into code, please route the task to 'Thinker' first for clarification or further processing.`,
run(args) {
const client = new OpenAI({
apiKey: process.env.CODER_AGENT_API_KEY,
baseURL: process.env.CODER_AGENT_BASE_URL,
...getOpenAICommonOptions(),
});
delete args.tools;
args.messages.forEach((item) => {
if (Array.isArray(item.content)) {
item.content = JSON.stringify(item.content);
}
});
return client.chat.completions.create({
...args,
messages: [
...args.messages,
{
role: "system",
content:
"You are a code writer who helps users write code based on their specific requirements. You create algorithms, implement functionality, and build structures according to the clear instructions provided by the user. Your focus is solely on writing code, ensuring that the task is completed accurately and efficiently.",
},
],
model: process.env.CODER_AGENT_MODEL,
});
},
};
const thinkRouter = {
name: "thinker",
description: `This agent is used solely for complex reasoning and thinking tasks. It should not be called for information retrieval or repetitive, frequent requests. Only use this agent for tasks that require deep analysis or problem-solving. If there is an existing result from the Thinker agent, do not call this agent again.`,
run(args) {
const client = new OpenAI({
apiKey: process.env.THINK_AGENT_API_KEY,
baseURL: process.env.THINK_AGENT_BASE_URL,
...getOpenAICommonOptions(),
});
const messages = JSON.parse(JSON.stringify(args.messages));
messages.forEach((msg) => {
if (Array.isArray(msg.content)) {
msg.content = JSON.stringify(msg.content);
}
});
let startIdx = messages.findIndex((msg) => msg.role !== "system");
if (startIdx === -1) startIdx = messages.length;
for (let i = startIdx; i < messages.length; i++) {
const expectedRole = (i - startIdx) % 2 === 0 ? "user" : "assistant";
messages[i].role = expectedRole;
}
if (
messages.length > 0 &&
messages[messages.length - 1].role === "assistant"
) {
messages.push({
role: "user",
content:
"Please follow the instructions provided above to resolve the issue.",
});
}
delete args.tools;
return client.chat.completions.create({
...args,
messages,
model: process.env.THINK_AGENT_MODEL,
});
},
};
export class Router {
constructor() {
this.routers = [useToolRouter, coderRouter, thinkRouter];
this.client = new OpenAI({
apiKey: process.env.ROUTER_AGENT_API_KEY,
baseURL: process.env.ROUTER_AGENT_BASE_URL,
...getOpenAICommonOptions(),
});
}
async route(args) {
const res = await this.client.chat.completions.create({
...args,
messages: [
...args.messages,
{
role: "system",
content: `You are an AI task router that receives user requests and forwards them to the appropriate AI models for task handling. You do not process any requests directly but are responsible for understanding the user's request and choosing the correct router based on the task and necessary steps. The available routers are: ${JSON.stringify(
this.routers.map((router) => {
return {
name: router.name,
description: router.description,
};
})
)}. Each router is designated for specific types of tasks, and you ensure that the request is routed accordingly for efficient processing. Use the appropriate router based on the user's request:
If external tools are needed to gather more information, use the 'use-tool' router.
If the task involves writing code, use the 'coder' router.
If deep reasoning or analysis is required to break down steps, use the 'thinker' router.
Instead, format your response as a JSON object with one field: 'use' (string)`,
},
],
model: process.env.ROUTER_AGENT_MODEL,
stream: false,
});
let result;
try {
const text = res.choices[0].message.content;
result = JSON.parse(
text.slice(text.indexOf("{"), text.lastIndexOf("}") + 1)
);
} catch (e) {
console.log(e);
res.choices[0].delta = res.choices[0].message;
return [res];
}
const router = this.routers.find((item) => item.name === result.use);
if (!router) {
res.choices[0].delta = res.choices[0].message;
return [res];
}
if (router.name === "thinker" || router.name === "coder") {
const agentResult = await router.run({
...args,
stream: false,
});
try {
args.messages.push({
role: "assistant",
content:
`${router.name} Agent Result: ` +
agentResult.choices[0].message.content,
});
return await this.route(args);
} catch (error) {
console.log(agentResult);
throw error;
}
}
return router.run(args);
}
}

88
src/cli.ts Normal file

@@ -0,0 +1,88 @@
#!/usr/bin/env node
import { run } from "./index";
import { closeService } from "./utils/close";
import { showStatus } from "./utils/status";
import { executeCodeCommand } from "./utils/codeCommand";
import { isServiceRunning } from "./utils/processCheck";
import { version } from "../package.json";
const command = process.argv[2];
const HELP_TEXT = `
Usage: ccr [command]
Commands:
start Start service
stop Stop service
status Show service status
code Execute code command
-v, version Show version information
-h, help Show help information
Example:
ccr start
ccr code "Write a Hello World"
`;
async function waitForService(
timeout = 10000,
initialDelay = 1000
): Promise<boolean> {
// Wait for an initial period to let the service initialize
await new Promise((resolve) => setTimeout(resolve, initialDelay));
const startTime = Date.now();
while (Date.now() - startTime < timeout) {
if (isServiceRunning()) {
// Wait for an additional short period to ensure service is fully ready
await new Promise((resolve) => setTimeout(resolve, 500));
return true;
}
await new Promise((resolve) => setTimeout(resolve, 100));
}
return false;
}
async function main() {
switch (command) {
case "start":
await run({ daemon: true });
break;
case "stop":
await closeService();
break;
case "status":
showStatus();
break;
case "code":
if (!isServiceRunning()) {
console.log("Service not running, starting service...");
await run({ daemon: true });
// Wait for service to start, exit with error if timeout
if (await waitForService()) {
executeCodeCommand(process.argv.slice(3));
} else {
console.error(
"Service startup timeout, please manually run claude-code start to start the service"
);
process.exit(1);
}
} else {
executeCodeCommand(process.argv.slice(3));
}
break;
case "-v":
case "version":
console.log(`claude-code version: ${version}`);
break;
case "-h":
case "help":
console.log(HELP_TEXT);
break;
default:
console.log(HELP_TEXT);
process.exit(1);
}
}
main().catch(console.error);

18
src/constants.ts Normal file

@@ -0,0 +1,18 @@
import path from "node:path";
import os from "node:os";
export const HOME_DIR = path.join(os.homedir(), ".claude-code-router");
export const CONFIG_FILE = `${HOME_DIR}/config.json`;
export const PLUGINS_DIR = `${HOME_DIR}/plugins`;
export const PID_FILE = path.join(HOME_DIR, '.claude-code-router.pid');
export const DEFAULT_CONFIG = {
log: false,
OPENAI_API_KEY: "",
OPENAI_BASE_URL: "",
OPENAI_MODEL: "",
};

82
src/index.ts Normal file

@@ -0,0 +1,82 @@
import { existsSync } from "fs";
import { writeFile } from "fs/promises";
import { getOpenAICommonOptions, initConfig, initDir } from "./utils";
import { createServer } from "./server";
import { formatRequest } from "./middlewares/formatRequest";
import { rewriteBody } from "./middlewares/rewriteBody";
import OpenAI from "openai";
import { streamOpenAIResponse } from "./utils/stream";
import { isServiceRunning, savePid } from "./utils/processCheck";
import { fork } from "child_process";
async function initializeClaudeConfig() {
const homeDir = process.env.HOME;
const configPath = `${homeDir}/.claude.json`;
if (!existsSync(configPath)) {
const userID = Array.from(
{ length: 64 },
() => Math.random().toString(16)[2]
).join("");
const configContent = {
numStartups: 184,
autoUpdaterStatus: "enabled",
userID,
hasCompletedOnboarding: true,
lastOnboardingVersion: "1.0.17",
projects: {},
};
await writeFile(configPath, JSON.stringify(configContent, null, 2));
}
}
interface RunOptions {
port?: number;
daemon?: boolean;
}
async function run(options: RunOptions = {}) {
const port = options.port || 3456;
// Check if service is already running
if (isServiceRunning()) {
console.log("✅ Service is already running in the background.");
return;
}
await initializeClaudeConfig();
await initDir();
await initConfig();
// Save the PID of the background process
savePid(process.pid);
// Use port from environment variable if set (for background process)
const servicePort = process.env.SERVICE_PORT
? parseInt(process.env.SERVICE_PORT)
: port;
const server = createServer(servicePort);
server.useMiddleware(formatRequest);
server.useMiddleware(rewriteBody);
const openai = new OpenAI({
apiKey: process.env.OPENAI_API_KEY,
baseURL: process.env.OPENAI_BASE_URL,
...getOpenAICommonOptions(),
});
server.app.post("/v1/messages", async (req, res) => {
try {
if (process.env.OPENAI_MODEL) {
req.body.model = process.env.OPENAI_MODEL;
}
const completion: any = await openai.chat.completions.create(req.body);
await streamOpenAIResponse(res, completion, req.body.model, req.body);
} catch (e) {
console.error("Error in OpenAI API call:", e);
}
});
server.start();
console.log(`🚀 Claude Code Router is running on port ${servicePort}`);
}
export { run };

209
src/middlewares/formatRequest.ts Normal file

@@ -0,0 +1,209 @@
import { Request, Response, NextFunction } from "express";
import { ContentBlockParam } from "@anthropic-ai/sdk/resources";
import { MessageCreateParamsBase } from "@anthropic-ai/sdk/resources/messages";
import OpenAI from "openai";
import { streamOpenAIResponse } from "../utils/stream";
import { log } from "../utils/log";
export const formatRequest = async (
req: Request,
res: Response,
next: NextFunction
) => {
let {
model,
max_tokens,
messages,
system = [],
temperature,
metadata,
tools,
stream,
}: MessageCreateParamsBase = req.body;
log("formatRequest: ", req.body);
try {
// @ts-ignore
const openAIMessages = Array.isArray(messages)
? messages.flatMap((anthropicMessage) => {
const openAiMessagesFromThisAnthropicMessage = [];
if (!Array.isArray(anthropicMessage.content)) {
// Handle simple string content
if (typeof anthropicMessage.content === "string") {
openAiMessagesFromThisAnthropicMessage.push({
role: anthropicMessage.role,
content: anthropicMessage.content,
});
}
// If content is not string and not array (e.g. null/undefined), it will result in an empty array, effectively skipping this message.
return openAiMessagesFromThisAnthropicMessage;
}
// Handle array content
if (anthropicMessage.role === "assistant") {
const assistantMessage = {
role: "assistant",
content: null, // Will be populated if text parts exist
};
let textContent = "";
// @ts-ignore
const toolCalls = []; // Corrected type here
anthropicMessage.content.forEach((contentPart) => {
if (contentPart.type === "text") {
textContent +=
(typeof contentPart.text === "string"
? contentPart.text
: JSON.stringify(contentPart.text)) + "\n";
} else if (contentPart.type === "tool_use") {
toolCalls.push({
id: contentPart.id,
type: "function",
function: {
name: contentPart.name,
arguments: JSON.stringify(contentPart.input),
},
});
}
});
const trimmedTextContent = textContent.trim();
if (trimmedTextContent.length > 0) {
// @ts-ignore
assistantMessage.content = trimmedTextContent;
}
if (toolCalls.length > 0) {
// @ts-ignore
assistantMessage.tool_calls = toolCalls;
}
// @ts-ignore
if (
assistantMessage.content ||
// @ts-ignore
(assistantMessage.tool_calls &&
// @ts-ignore
assistantMessage.tool_calls.length > 0)
) {
openAiMessagesFromThisAnthropicMessage.push(assistantMessage);
}
} else if (anthropicMessage.role === "user") {
// For user messages, text parts are combined into one message.
// Tool results are transformed into subsequent, separate 'tool' role messages.
let userTextMessageContent = "";
// @ts-ignore
const subsequentToolMessages = [];
anthropicMessage.content.forEach((contentPart) => {
if (contentPart.type === "text") {
userTextMessageContent +=
(typeof contentPart.text === "string"
? contentPart.text
: JSON.stringify(contentPart.text)) + "\n";
} else if (contentPart.type === "tool_result") {
// Each tool_result becomes a separate 'tool' message
subsequentToolMessages.push({
role: "tool",
tool_call_id: contentPart.tool_use_id,
content:
typeof contentPart.content === "string"
? contentPart.content
: JSON.stringify(contentPart.content),
});
}
});
const trimmedUserText = userTextMessageContent.trim();
if (trimmedUserText.length > 0) {
openAiMessagesFromThisAnthropicMessage.push({
role: "user",
content: trimmedUserText,
});
}
// @ts-ignore
openAiMessagesFromThisAnthropicMessage.push(
// @ts-ignore
...subsequentToolMessages
);
} else {
// Fallback for other roles (e.g. system, or custom roles if they were to appear here with array content)
// This will combine all text parts into a single message for that role.
let combinedContent = "";
anthropicMessage.content.forEach((contentPart) => {
if (contentPart.type === "text") {
combinedContent +=
(typeof contentPart.text === "string"
? contentPart.text
: JSON.stringify(contentPart.text)) + "\n";
} else {
// For non-text parts in other roles, stringify them or handle as appropriate
combinedContent += JSON.stringify(contentPart) + "\n";
}
});
const trimmedCombinedContent = combinedContent.trim();
if (trimmedCombinedContent.length > 0) {
openAiMessagesFromThisAnthropicMessage.push({
role: anthropicMessage.role, // Cast needed as role could be other than 'user'/'assistant'
content: trimmedCombinedContent,
});
}
}
return openAiMessagesFromThisAnthropicMessage;
})
: [];
const systemMessages: OpenAI.Chat.Completions.ChatCompletionMessageParam[] =
Array.isArray(system)
? system.map((item) => ({
role: "system",
content: item.text,
}))
: [{ role: "system", content: system }];
const data: any = {
model,
messages: [...systemMessages, ...openAIMessages],
temperature,
stream,
};
if (tools) {
data.tools = tools
.filter((tool) => !["StickerRequest"].includes(tool.name))
.map((item: any) => ({
type: "function",
function: {
name: item.name,
description: item.description,
parameters: item.input_schema,
},
}));
}
if (stream) {
res.setHeader("Content-Type", "text/event-stream");
}
res.setHeader("Cache-Control", "no-cache");
res.setHeader("Connection", "keep-alive");
req.body = data;
} catch (error) {
console.error("Error in request processing:", error);
const errorCompletion: AsyncIterable<OpenAI.Chat.Completions.ChatCompletionChunk> =
{
async *[Symbol.asyncIterator]() {
yield {
id: `error_${Date.now()}`,
created: Math.floor(Date.now() / 1000),
model: "gpt-3.5-turbo",
object: "chat.completion.chunk",
choices: [
{
index: 0,
delta: {
content: `Error: ${(error as Error).message}`,
},
finish_reason: "stop",
},
],
};
},
};
await streamOpenAIResponse(res, errorCompletion, model, req.body);
}
next();
};

43
src/middlewares/rewriteBody.ts Normal file

@@ -0,0 +1,43 @@
import { Request, Response, NextFunction } from "express";
import Module from "node:module";
import { streamOpenAIResponse } from "../utils/stream";
import { log } from "../utils/log";
import { PLUGINS_DIR } from "../constants";
import path from "node:path";
import { access } from "node:fs/promises";
import { OpenAI } from "openai";
import { createClient } from "../utils";
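// Intercept require("claude-code-router") so user plugins can import these
// helpers without having the package installed next to them.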
// @ts-ignore
const originalLoad = Module._load;
// @ts-ignore
Module._load = function (request, parent, isMain) {
if (request === "claude-code-router") {
return {
streamOpenAIResponse,
log,
OpenAI,
createClient,
};
}
return originalLoad.call(this, request, parent, isMain);
};
export const rewriteBody = async (
req: Request,
res: Response,
next: NextFunction
) => {
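// `usePlugin` comes from config.json; initConfig() copies config keys into process.env.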
if (!process.env.usePlugin) {
return next();
}
const pluginPath = path.join(PLUGINS_DIR, `${process.env.usePlugin}.js`);
try {
await access(pluginPath);
const rewritePlugin = require(pluginPath);
rewritePlugin(req, res, next);
} catch (e) {
console.error(e);
next();
}
};

23
src/server.ts Normal file

@@ -0,0 +1,23 @@
import express, { RequestHandler } from "express";
interface Server {
app: express.Application;
useMiddleware: (middleware: RequestHandler) => void;
start: () => void;
}
export const createServer = (port: number): Server => {
const app = express();
app.use(express.json({ limit: "500mb" }));
return {
app,
useMiddleware: (middleware: RequestHandler) => {
app.use("/v1/messages", middleware);
},
start: () => {
app.listen(port, () => {
console.log(`Server is running on port ${port}`);
});
},
};
};

23
src/utils/close.ts Normal file

@@ -0,0 +1,23 @@
import { isServiceRunning, cleanupPidFile } from './processCheck';
import { existsSync, readFileSync } from 'fs';
import { homedir } from 'os';
import { join } from 'path';
export async function closeService() {
const PID_FILE = join(homedir(), '.claude-code-router.pid');
if (!isServiceRunning()) {
console.log("No service is currently running.");
return;
}
try {
const pid = parseInt(readFileSync(PID_FILE, 'utf-8'));
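// No signal argument defaults to SIGTERM, asking the service to exit gracefully.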
process.kill(pid);
cleanupPidFile();
console.log("Service has been successfully stopped.");
} catch (e) {
console.log("Failed to stop the service. It may have already been stopped.");
cleanupPidFile();
}
}

31
src/utils/codeCommand.ts Normal file

@@ -0,0 +1,31 @@
import { spawn } from 'child_process';
import { isServiceRunning } from './processCheck';
export async function executeCodeCommand(args: string[] = []) {
// Service check is now handled in cli.ts
// Set environment variables
const env = {
...process.env,
DISABLE_PROMPT_CACHING: '1',
ANTHROPIC_BASE_URL: 'http://127.0.0.1:3456',
API_TIMEOUT_MS: '600000'
};
// Execute claude command
const claudeProcess = spawn('claude', args, {
env,
stdio: 'inherit',
shell: true
});
claudeProcess.on('error', (error) => {
console.error('Failed to start claude command:', error.message);
console.log('Make sure Claude Code is installed: npm install -g @anthropic-ai/claude-code');
process.exit(1);
});
claudeProcess.on('close', (code) => {
process.exit(code || 0);
});
}

89
src/utils/index.ts Normal file

@@ -0,0 +1,89 @@
import { HttpsProxyAgent } from "https-proxy-agent";
import OpenAI, { ClientOptions } from "openai";
import fs from "node:fs/promises";
import readline from "node:readline";
import {
CONFIG_FILE,
DEFAULT_CONFIG,
HOME_DIR,
PLUGINS_DIR,
} from "../constants";
export function getOpenAICommonOptions(): ClientOptions {
const options: ClientOptions = {};
if (process.env.PROXY_URL) {
options.httpAgent = new HttpsProxyAgent(process.env.PROXY_URL);
}
return options;
}
const ensureDir = async (dir_path: string) => {
try {
await fs.access(dir_path);
} catch {
await fs.mkdir(dir_path, { recursive: true });
}
};
export const initDir = async () => {
await ensureDir(HOME_DIR);
await ensureDir(PLUGINS_DIR);
};
const createReadline = () => {
return readline.createInterface({
input: process.stdin,
output: process.stdout,
});
};
const question = (query: string): Promise<string> => {
return new Promise((resolve) => {
const rl = createReadline();
rl.question(query, (answer) => {
rl.close();
resolve(answer);
});
});
};
const confirm = async (query: string): Promise<boolean> => {
const answer = await question(query);
return answer.toLowerCase() !== "n";
};
export const readConfigFile = async () => {
try {
const config = await fs.readFile(CONFIG_FILE, "utf-8");
return JSON.parse(config);
} catch {
const apiKey = await question("Enter OPENAI_API_KEY: ");
const baseUrl = await question("Enter OPENAI_BASE_URL: ");
const model = await question("Enter OPENAI_MODEL: ");
const config = Object.assign({}, DEFAULT_CONFIG, {
OPENAI_API_KEY: apiKey,
OPENAI_BASE_URL: baseUrl,
OPENAI_MODEL: model,
});
await writeConfigFile(config);
return config;
}
};
export const writeConfigFile = async (config: any) => {
await ensureDir(HOME_DIR);
await fs.writeFile(CONFIG_FILE, JSON.stringify(config, null, 2));
};
export const initConfig = async () => {
const config = await readConfigFile();
Object.assign(process.env, config);
};
export const createClient = (options: ClientOptions) => {
const client = new OpenAI({
...options,
...getOpenAICommonOptions(),
});
return client;
};

33
src/utils/log.ts Normal file

@@ -0,0 +1,33 @@
import fs from "node:fs";
import path from "node:path";
import { HOME_DIR } from "../constants";
const LOG_FILE = path.join(HOME_DIR, "claude-code-router.log");
// Ensure log directory exists
if (!fs.existsSync(HOME_DIR)) {
fs.mkdirSync(HOME_DIR, { recursive: true });
}
export function log(...args: any[]) {
// Check if logging is enabled via environment variable
const isLogEnabled = process.env.LOG === "true";
if (!isLogEnabled) {
return;
}
const timestamp = new Date().toISOString();
const logMessage = `[${timestamp}] ${
Array.isArray(args)
? args
.map((arg) =>
typeof arg === "object" ? JSON.stringify(arg) : String(arg)
)
.join(" ")
: ""
}\n`;
// Append to log file
fs.appendFileSync(LOG_FILE, logMessage, "utf8");
}

60
src/utils/processCheck.ts Normal file

@@ -0,0 +1,60 @@
import { existsSync, readFileSync, writeFileSync, unlinkSync } from 'fs';
import { PID_FILE } from '../constants';
export function isServiceRunning(): boolean {
if (!existsSync(PID_FILE)) {
return false;
}
try {
const pid = parseInt(readFileSync(PID_FILE, 'utf-8'));
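// Signal 0 is a liveness probe: process.kill throws if the PID does not exist.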
process.kill(pid, 0);
return true;
} catch (e) {
// Process not running, clean up pid file
cleanupPidFile();
return false;
}
}
export function savePid(pid: number) {
writeFileSync(PID_FILE, pid.toString());
}
export function cleanupPidFile() {
if (existsSync(PID_FILE)) {
try {
unlinkSync(PID_FILE);
} catch (e) {
// Ignore cleanup errors
}
}
}
export function getServicePid(): number | null {
if (!existsSync(PID_FILE)) {
return null;
}
try {
const pid = parseInt(readFileSync(PID_FILE, 'utf-8'));
return isNaN(pid) ? null : pid;
} catch (e) {
return null;
}
}
export function getServiceInfo() {
const pid = getServicePid();
const running = isServiceRunning();
return {
running,
pid,
port: 3456,
endpoint: 'http://127.0.0.1:3456',
pidFile: PID_FILE
};
}

27
src/utils/status.ts Normal file

@@ -0,0 +1,27 @@
import { getServiceInfo } from './processCheck';
export function showStatus() {
const info = getServiceInfo();
console.log('\n📊 Claude Code Router Status');
console.log('═'.repeat(40));
if (info.running) {
console.log('✅ Status: Running');
console.log(`🆔 Process ID: ${info.pid}`);
console.log(`🌐 Port: ${info.port}`);
console.log(`📡 API Endpoint: ${info.endpoint}`);
console.log(`📄 PID File: ${info.pidFile}`);
console.log('');
console.log('🚀 Ready to use! Run the following commands:');
console.log('   ccr code    # Start coding with Claude');
console.log('   ccr stop    # Stop the service');
} else {
console.log('❌ Status: Not Running');
console.log('');
console.log('💡 To start the service:');
console.log('   ccr start');
}
console.log('');
}

305
src/utils/stream.ts Normal file

@@ -0,0 +1,305 @@
import { Response } from "express";
import { OpenAI } from "openai";
import { log } from "./log";
interface ContentBlock {
type: string;
id?: string;
name?: string;
input?: any;
text?: string;
}
interface MessageEvent {
type: string;
message?: {
id: string;
type: string;
role: string;
content: any[];
model: string;
stop_reason: string | null;
stop_sequence: string | null;
usage: {
input_tokens: number;
output_tokens: number;
};
};
delta?: {
stop_reason?: string;
stop_sequence?: string | null;
content?: ContentBlock[];
type?: string;
text?: string;
partial_json?: string;
};
index?: number;
content_block?: ContentBlock;
usage?: {
input_tokens: number;
output_tokens: number;
};
}
export async function streamOpenAIResponse(
res: Response,
completion: any,
model: string,
body: any
) {
const write = (data: string) => {
log("response: ", data);
res.write(data);
};
const messageId = "msg_" + Date.now();
if (!body.stream) {
res.json({
id: messageId,
type: "message",
role: "assistant",
// @ts-ignore
content: completion.choices[0].message.content || completion.choices[0].message.tool_calls?.map((item) => {
return {
type: 'tool_use',
id: item.id,
name: item.function?.name,
input: item.function?.arguments ? JSON.parse(item.function.arguments) : {},
};
}) || '',
stop_reason: completion.choices[0].finish_reason === 'tool_calls' ? "tool_use" : "end_turn",
stop_sequence: null,
usage: {
input_tokens: 100,
output_tokens: 50,
},
});
res.end();
return;
}
let contentBlockIndex = 0;
let currentContentBlocks: ContentBlock[] = [];
// Send message_start event
const messageStart: MessageEvent = {
type: "message_start",
message: {
id: messageId,
type: "message",
role: "assistant",
content: [],
model,
stop_reason: null,
stop_sequence: null,
usage: { input_tokens: 1, output_tokens: 1 },
},
};
write(`event: message_start\ndata: ${JSON.stringify(messageStart)}\n\n`);
let isToolUse = false;
let toolUseJson = "";
let hasStartedTextBlock = false;
try {
for await (const chunk of completion) {
log("Processing chunk:", chunk);
const delta = chunk.choices[0].delta;
if (delta.tool_calls && delta.tool_calls.length > 0) {
const toolCall = delta.tool_calls[0];
if (!isToolUse) {
// Start new tool call block
isToolUse = true;
const toolBlock: ContentBlock = {
type: "tool_use",
id: `toolu_${Date.now()}`,
name: toolCall.function?.name,
input: {},
};
const toolBlockStart: MessageEvent = {
type: "content_block_start",
index: contentBlockIndex,
content_block: toolBlock,
};
currentContentBlocks.push(toolBlock);
write(
`event: content_block_start\ndata: ${JSON.stringify(
toolBlockStart
)}\n\n`
);
toolUseJson = "";
}
// Stream tool call JSON
if (toolCall.function?.arguments) {
const jsonDelta: MessageEvent = {
type: "content_block_delta",
index: contentBlockIndex,
delta: {
type: "input_json_delta",
partial_json: toolCall.function?.arguments,
},
};
toolUseJson += toolCall.function.arguments;
try {
const parsedJson = JSON.parse(toolUseJson);
currentContentBlocks[contentBlockIndex].input = parsedJson;
} catch (e) {
log(e);
// JSON not yet complete, continue accumulating
}
write(
`event: content_block_delta\ndata: ${JSON.stringify(jsonDelta)}\n\n`
);
}
} else if (delta.content) {
// Handle regular text content
if (isToolUse) {
log("Tool call ended here:", delta);
// End previous tool call block
const contentBlockStop: MessageEvent = {
type: "content_block_stop",
index: contentBlockIndex,
};
write(
`event: content_block_stop\ndata: ${JSON.stringify(
contentBlockStop
)}\n\n`
);
contentBlockIndex++;
isToolUse = false;
}
if (!delta.content) continue;
// If text block not yet started, send content_block_start
if (!hasStartedTextBlock) {
const textBlock: ContentBlock = {
type: "text",
text: "",
};
const textBlockStart: MessageEvent = {
type: "content_block_start",
index: contentBlockIndex,
content_block: textBlock,
};
currentContentBlocks.push(textBlock);
write(
`event: content_block_start\ndata: ${JSON.stringify(
textBlockStart
)}\n\n`
);
hasStartedTextBlock = true;
}
// Send regular text content
const contentDelta: MessageEvent = {
type: "content_block_delta",
index: contentBlockIndex,
delta: {
type: "text_delta",
text: delta.content,
},
};
// Update content block text
if (currentContentBlocks[contentBlockIndex]) {
currentContentBlocks[contentBlockIndex].text += delta.content;
}
write(
`event: content_block_delta\ndata: ${JSON.stringify(
contentDelta
)}\n\n`
);
}
}
} catch (e: any) {
// If text block not yet started, send content_block_start
if (!hasStartedTextBlock) {
const textBlock: ContentBlock = {
type: "text",
text: "",
};
const textBlockStart: MessageEvent = {
type: "content_block_start",
index: contentBlockIndex,
content_block: textBlock,
};
currentContentBlocks.push(textBlock);
write(
`event: content_block_start\ndata: ${JSON.stringify(
textBlockStart
)}\n\n`
);
hasStartedTextBlock = true;
}
// Send regular text content
const contentDelta: MessageEvent = {
type: "content_block_delta",
index: contentBlockIndex,
delta: {
type: "text_delta",
text: JSON.stringify(e),
},
};
// Update content block text
if (currentContentBlocks[contentBlockIndex]) {
currentContentBlocks[contentBlockIndex].text += JSON.stringify(e);
}
write(
`event: content_block_delta\ndata: ${JSON.stringify(contentDelta)}\n\n`
);
}
// Close last content block
const contentBlockStop: MessageEvent = {
type: "content_block_stop",
index: contentBlockIndex,
};
write(
`event: content_block_stop\ndata: ${JSON.stringify(contentBlockStop)}\n\n`
);
// Send message_delta event with appropriate stop_reason
const messageDelta: MessageEvent = {
type: "message_delta",
delta: {
stop_reason: isToolUse ? "tool_use" : "end_turn",
stop_sequence: null,
content: currentContentBlocks,
},
usage: { input_tokens: 100, output_tokens: 150 },
};
if (!isToolUse) {
log("body: ", body, "messageDelta: ", messageDelta);
}
write(`event: message_delta\ndata: ${JSON.stringify(messageDelta)}\n\n`);
// Send message_stop event
const messageStop: MessageEvent = {
type: "message_stop",
};
write(`event: message_stop\ndata: ${JSON.stringify(messageStop)}\n\n`);
res.end();
}
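For a plain text reply, the event sequence this function writes looks like the following (payloads abbreviated; a sketch derived from the code above):
```
event: message_start
event: content_block_start    (content_block.type = "text")
event: content_block_delta    (text_delta, repeated per chunk)
event: content_block_stop
event: message_delta          (stop_reason = "end_turn" or "tool_use")
event: message_stop
```
Tool calls produce the same skeleton with a `tool_use` content block whose arguments stream as `input_json_delta` payloads.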

20
tsconfig.json Normal file

@@ -0,0 +1,20 @@
{
"compilerOptions": {
"target": "ES2022",
"module": "CommonJS",
"outDir": "./dist",
"rootDir": "./src",
"strict": true,
"esModuleInterop": true,
"skipLibCheck": true,
"forceConsistentCasingInFileNames": true,
"resolveJsonModule": true,
"moduleResolution": "node",
"noImplicitAny": true,
"allowSyntheticDefaultImports": true,
"sourceMap": true,
"declaration": true
},
"include": ["src/**/*.ts"],
"exclude": ["node_modules", "dist"]
}

9
utils.mjs

@@ -1,9 +0,0 @@
import { HttpsProxyAgent } from "https-proxy-agent";
export function getOpenAICommonOptions() {
const options = {};
if (process.env.PROXY_URL) {
options.httpAgent = new HttpsProxyAgent(process.env.PROXY_URL);
}
return options;
}