Chat Completions

The OpenAI-compatible POST /v1/chat/completions endpoint. Parameters, response shape, tool calling, structured outputs, and errors for supported models.

What is the Chat Completions endpoint?

POST /v1/chat/completions is opengateway's OpenAI-compatible chat endpoint. It accepts the OpenAI Chat Completions request shape, supports tools and provider-aware structured output parameters, and routes supported model IDs across OpenAI, Anthropic, Google, and OpenAI-compatible providers.

Endpoint

POST https://api.opengateway.ai/v1/chat/completions

Minimum request

curl https://api.opengateway.ai/v1/chat/completions \
  -H "Authorization: Bearer $OPENGATEWAY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello"}]
  }'

Parameters

The request body follows the OpenAI Chat Completions shape. The parameters you will use most often:

| Parameter | Type | Description |
| --- | --- | --- |
| model | string | owner/model-name. Required. |
| messages | array | Conversation history. Required. |
| stream | boolean | Stream tokens via SSE. Defaults to false. |
| temperature | number | 0.0 to 2.0. Defaults vary by provider. |
| max_tokens | number | Cap on the output length. |
| tools | array | OpenAI-format function definitions. |
| tool_choice | string or object | "auto", "required", "none", or a specific tool. |
| response_format | object | { "type": "json_object" } or JSON Schema on providers that support it. |
| extra.fallbacks | array | Model IDs to try on failure. See Fallbacks. |
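Putting the parameters together, a request body can be assembled with a small helper. This is an illustrative sketch, not part of any SDK: build_chat_request and its keyword names are made up here, and only the body fields themselves come from the table above.

```python
import json

def build_chat_request(model, messages, *, stream=False, tools=None,
                       response_format=None, fallbacks=None, **params):
    """Assemble a /v1/chat/completions request body (illustrative helper)."""
    body = {"model": model, "messages": messages, **params}
    if stream:
        body["stream"] = True
    if tools is not None:
        body["tools"] = tools
    if response_format is not None:
        body["response_format"] = response_format
    if fallbacks is not None:
        # Gateway extension: model IDs to try when the primary model fails.
        body["extra"] = {"fallbacks": fallbacks}
    return body

body = build_chat_request(
    "openai/gpt-4o-mini",
    [{"role": "user", "content": "Hello"}],
    temperature=0.2,
    max_tokens=256,
    fallbacks=["anthropic/claude-sonnet-4"],
)
print(json.dumps(body, indent=2))
```

Omitted optional fields stay out of the body entirely, so provider-side defaults (for example, for temperature) still apply.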

Response

The response shape follows OpenAI's Chat Completions response. opengateway also includes an extra.routing.attempts extension with the provider attempts made for the request.

{
  "id": "chatcmpl_og_abc123",
  "object": "chat.completion",
  "created": 1729584000,
  "model": "openai/gpt-4o-mini",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 9,
    "completion_tokens": 7,
    "total_tokens": 16
  },
  "extra": {
    "routing": {
      "attempts": [
        {
          "provider": "openai",
          "region": "default",
          "status": "succeeded"
        }
      ]
    }
  }
}

The extra.routing.attempts field tells you which provider target served the request and which fallback attempts failed or succeeded. You can ignore it during normal operation; it is most useful when debugging routing and fallback behavior.
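When debugging, a client can pull the winning attempt out of this extension. A minimal sketch: served_by is an illustrative name, and it assumes unsuccessful attempts carry a "failed" status, which the single-attempt example above does not show.

```python
def served_by(response):
    """Split extra.routing.attempts into the successful attempt and the
    failed ones (sketch based on the response shape above)."""
    attempts = response.get("extra", {}).get("routing", {}).get("attempts", [])
    winner = next((a for a in attempts if a["status"] == "succeeded"), None)
    failed = [a for a in attempts if a["status"] != "succeeded"]
    return winner, failed

response = {
    "extra": {"routing": {"attempts": [
        # Assumed status value for an unsuccessful attempt.
        {"provider": "openai", "region": "default", "status": "failed"},
        {"provider": "anthropic", "region": "default", "status": "succeeded"},
    ]}}
}
winner, failed = served_by(response)
print(winner["provider"])  # the provider that actually served the request
```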

Tool calling

Declare tools in the OpenAI format. opengateway translates the definitions into each provider's native format before sending.

{
  "model": "anthropic/claude-sonnet-4",
  "messages": [{"role": "user", "content": "What's the weather in Tokyo?"}],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get weather for a city",
        "parameters": {
          "type": "object",
          "properties": {
            "city": {"type": "string"}
          },
          "required": ["city"]
        }
      }
    }
  ]
}

The response always uses OpenAI's tool_calls shape, regardless of which provider ran the model.
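Because the tool_calls shape is uniform, a single dispatch loop covers every provider. A minimal sketch, where dispatch_tool_calls and the local get_weather handler are illustrative names, not part of the gateway:

```python
import json

def dispatch_tool_calls(message, handlers):
    """Run each OpenAI-format tool call through a local handler and return
    the tool-result messages to append to the conversation."""
    results = []
    for call in message.get("tool_calls", []):
        fn = call["function"]
        args = json.loads(fn["arguments"])  # arguments arrive as a JSON string
        output = handlers[fn["name"]](**args)
        results.append({
            "role": "tool",
            "tool_call_id": call["id"],
            "content": json.dumps(output),
        })
    return results

# Assistant message with a call to the get_weather tool declared above.
message = {
    "role": "assistant",
    "tool_calls": [{
        "id": "call_1",
        "type": "function",
        "function": {"name": "get_weather", "arguments": '{"city": "Tokyo"}'},
    }],
}
# Hypothetical local implementation of the tool.
results = dispatch_tool_calls(
    message, {"get_weather": lambda city: {"city": city, "temp_c": 18}}
)
```

Appending the assistant message and these tool-result messages to messages, then calling the endpoint again, completes the tool-calling round trip.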

Structured outputs

Ask for a JSON object:

{
  "model": "openai/gpt-4o-mini",
  "messages": [...],
  "response_format": { "type": "json_object" }
}

Ask for a specific schema:

{
  "response_format": {
    "type": "json_schema",
    "json_schema": {
      "name": "event",
      "schema": { /* JSON Schema */ }
    }
  }
}

Structured output support is provider-specific. When opengateway knows that the target provider does not support the requested format, it rejects the request before making the upstream call.
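Even with a schema attached, it is prudent to validate the assistant's content client-side before using it. A minimal sketch: parse_structured and EVENT_SCHEMA are illustrative, and a real client might use the jsonschema package for full validation rather than this required-keys check.

```python
import json

# The schema that would be sent under response_format.json_schema.schema
# (hypothetical example fields).
EVENT_SCHEMA = {
    "type": "object",
    "properties": {"name": {"type": "string"}, "date": {"type": "string"}},
    "required": ["name", "date"],
}

def parse_structured(content, schema):
    """Parse the assistant's content as JSON and check the schema's
    required keys (a minimal check, not full schema validation)."""
    data = json.loads(content)
    missing = [k for k in schema.get("required", []) if k not in data]
    if missing:
        raise ValueError(f"model output missing required keys: {missing}")
    return data

event = parse_structured('{"name": "Launch", "date": "2025-06-01"}', EVENT_SCHEMA)
```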

Errors

Errors use standard HTTP status codes and a JSON body.

{
  "error": {
    "message": "An upstream service provider is currently unavailable",
    "type": "service_unavailable_error"
  }
}

Use the response id or the matching log row when you investigate a request in the dashboard.

Common error codes:

  • authentication_error: the API key is invalid or missing.
  • rate_limit_error: an upstream provider rate-limited the request.
  • service_unavailable_error: an upstream provider is unavailable.
  • invalid_request_error: the body is malformed or uses an unsupported parameter.
  • not_found_error: the model ID does not exist or is not enabled for your team.
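The split between transient and permanent errors above suggests a simple client-side retry policy. A sketch, where should_retry and the RETRYABLE set are illustrative assumptions about which types are worth retrying, not gateway-defined behavior; a real client would pair this with exponential backoff.

```python
# Error types that indicate a transient upstream condition (assumption).
RETRYABLE = {"rate_limit_error", "service_unavailable_error"}

def should_retry(status_code, body):
    """Decide whether to retry a failed request from its HTTP status and
    the error.type field in the JSON body shown above."""
    err = body.get("error", {})
    return status_code >= 500 or err.get("type") in RETRYABLE
```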

See also