Proxy Settings

Proxy Overview

Chat2API provides an OpenAI-compatible API proxy, so any OpenAI-compatible client can talk to your configured AI providers through a single local endpoint.

Status Monitoring

The proxy settings page displays real-time status at the top:

| Status  | Description                       |
|---------|-----------------------------------|
| Running | Proxy server is running           |
| Stopped | Proxy server is stopped           |
| Error   | Proxy server encountered an error |

Status Information

  • Port: The port the proxy is currently listening on
  • Uptime: How long the proxy server has been running
  • Requests: Total number of requests processed
  • Success Rate: Percentage of requests that completed successfully

Quick Actions

  • Start Proxy: Start the proxy server
  • Stop Proxy: Stop the proxy server
  • Restart Proxy: Restart the proxy server

API Endpoints

| Endpoint             | Method | Description                           |
|----------------------|--------|---------------------------------------|
| /v1/chat/completions | POST   | Chat completion (streaming supported) |
| /v1/completions      | POST   | Text completion                       |
| /v1/models           | GET    | List available models                 |
| /v1/models/:model    | GET    | Get model details                     |
| /health              | GET    | Health check                          |
| /stats               | GET    | Usage statistics                      |

Basic Usage

Using curl

curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "DeepSeek-V3.2",
    "messages": [
      {"role": "user", "content": "Hello!"}
    ]
  }'

Using OpenAI SDK (Python)

from openai import OpenAI

client = OpenAI(
    api_key="your-api-key",
    base_url="http://localhost:8080/v1"
)

response = client.chat.completions.create(
    model="DeepSeek-V3.2",
    messages=[{"role": "user", "content": "Hello!"}]
)

Using OpenAI SDK (JavaScript)

import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'your-api-key',
  baseURL: 'http://localhost:8080/v1',
});

const response = await client.chat.completions.create({
  model: 'DeepSeek-V3.2',
  messages: [{ role: 'user', content: 'Hello!' }],
});

Streaming Response

Set stream: true to enable streaming:

curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_KEY" \
  -d '{
    "model": "DeepSeek-V3.2",
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": true
  }'

Response Format

Non-streaming

{
  "id": "chatcmpl-xxx",
  "object": "chat.completion",
  "created": 1234567890,
  "model": "DeepSeek-V3.2",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 10,
    "completion_tokens": 20,
    "total_tokens": 30
  }
}
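Given this response shape, a minimal sketch of pulling out the assistant's reply and token usage (using the sample payload above) could be:

```python
import json

def extract_reply(response: dict) -> tuple[str, str, int]:
    """Return (content, finish_reason, total_tokens) from a chat completion response."""
    choice = response["choices"][0]
    return (
        choice["message"]["content"],
        choice["finish_reason"],
        response["usage"]["total_tokens"],
    )

# The sample non-streaming response shown above.
sample = json.loads('''
{"id": "chatcmpl-xxx", "object": "chat.completion", "created": 1234567890,
 "model": "DeepSeek-V3.2",
 "choices": [{"index": 0,
              "message": {"role": "assistant",
                          "content": "Hello! How can I help you today?"},
              "finish_reason": "stop"}],
 "usage": {"prompt_tokens": 10, "completion_tokens": 20, "total_tokens": 30}}
''')

content, reason, total_tokens = extract_reply(sample)
```

A finish_reason of "stop" means the model ended its reply normally rather than hitting a length limit.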

Streaming

data: {"id":"chatcmpl-xxx","object":"chat.completion.chunk","created":1234567890,"model":"DeepSeek-V3.2","choices":[{"index":0,"delta":{"role":"assistant"},"finish_reason":null}]}

data: {"id":"chatcmpl-xxx","object":"chat.completion.chunk","created":1234567890,"model":"DeepSeek-V3.2","choices":[{"index":0,"delta":{"content":"Hello"},"finish_reason":null}]}

data: [DONE]
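If you are not using an SDK, you can reassemble the streamed text yourself: each data: line carries a chunk whose delta may contain a content fragment, and the stream ends with data: [DONE]. A minimal sketch of parsing these lines (fed here with the two sample chunks above):

```python
import json

def iter_content(sse_lines):
    """Yield content fragments from the 'data:' lines of a streaming response."""
    for line in sse_lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines between events
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break  # sentinel marking the end of the stream
        delta = json.loads(payload)["choices"][0]["delta"]
        if "content" in delta:  # the first chunk carries only the role
            yield delta["content"]

stream = [
    'data: {"id":"chatcmpl-xxx","object":"chat.completion.chunk","created":1234567890,"model":"DeepSeek-V3.2","choices":[{"index":0,"delta":{"role":"assistant"},"finish_reason":null}]}',
    'data: {"id":"chatcmpl-xxx","object":"chat.completion.chunk","created":1234567890,"model":"DeepSeek-V3.2","choices":[{"index":0,"delta":{"content":"Hello"},"finish_reason":null}]}',
    'data: [DONE]',
]
text = "".join(iter_content(stream))
```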
