# Migrate from OpenAI
Move from OpenAI to OpenGateway in three lines. Keep the OpenAI SDK, Chat Completions API, streaming shape, tool calling, and response format.
OpenGateway speaks OpenAI's Chat Completions API. If your app already uses the OpenAI SDK, you change three lines and you are done. The model parameter, the streaming format, the function calling shape, even the error envelope — they all stay identical.
## What changes

Three things, no more:

- **Base URL** — `https://api.openai.com/v1` becomes `https://api.opengateway.ai/v1`
- **API key** — your OpenAI key becomes an OpenGateway key
- **Model ID** — `gpt-4o` becomes `openai/gpt-4o` (the `owner/model` namespace)
Everything else — request body, response body, streaming events, tool calling, JSON mode, vision — stays exactly the same.
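The model-ID change is mechanical, so if you call several models it can live in one helper instead of being scattered across call sites. A minimal sketch — the `to_gateway_model` name is ours, not part of any SDK:

```python
def to_gateway_model(model: str, owner: str = "openai") -> str:
    """Prefix a bare model ID with its owner namespace.

    IDs that are already namespaced (contain a "/") pass through unchanged.
    """
    return model if "/" in model else f"{owner}/{model}"


print(to_gateway_model("gpt-4o"))         # openai/gpt-4o
print(to_gateway_model("openai/gpt-4o"))  # already namespaced, unchanged
```

Passing a different `owner` covers non-OpenAI models, e.g. `to_gateway_model("claude-sonnet-4", owner="anthropic")`.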
## Side-by-side

```python
# Before — OpenAI direct
from openai import OpenAI

client = OpenAI(api_key="sk-...")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
)
```

```python
# After — OpenGateway
from openai import OpenAI

client = OpenAI(
    api_key="og_live_...",
    base_url="https://api.opengateway.ai/v1",  # ← line 1
)

response = client.chat.completions.create(
    model="openai/gpt-4o",  # ← line 2 (added "openai/" prefix)
    messages=[{"role": "user", "content": "Hello"}],
)
```

Two lines, plus the API key swap. Your existing code works without further changes. Streaming, tool calling, and `response_format` work identically.
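Because the streaming chunk shape is unchanged, code that accumulates deltas needs no edits. A sketch of the accumulation loop, run here against stand-in chunk objects rather than a live stream:

```python
from types import SimpleNamespace


def collect_text(stream) -> str:
    """Join the text deltas from a Chat Completions stream.

    Works the same whether the stream comes from api.openai.com or the
    gateway: each chunk carries choices[0].delta.content (or None).
    """
    parts = []
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:
            parts.append(delta)
    return "".join(parts)


# Stand-in chunks with the same attribute shape as the SDK's stream events.
fake_stream = [
    SimpleNamespace(choices=[SimpleNamespace(delta=SimpleNamespace(content=c))])
    for c in ["Hel", "lo", None]
]
print(collect_text(fake_stream))  # Hello
```

With a real client, the same loop consumes `client.chat.completions.create(..., stream=True)` directly.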
## TypeScript

```typescript
// Before
import OpenAI from "openai";

const client = new OpenAI();
```

```typescript
// After
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.OPENGATEWAY_API_KEY,
  baseURL: "https://api.opengateway.ai/v1",
});
```

Calls to `client.chat.completions.create({...})` work as before. Just prefix the model with `openai/`.
## Why migrate
You get three things you cannot get from OpenAI alone:
- Request-level fallbacks. Chat Completions requests can include fallback model IDs, so the gateway can try another model when the first target fails.
- Built-in observability. Every request shows up in the dashboard with cost, latency, token counts, and the provider that served it. No middleware, no agent.
- One dashboard. API keys, request logs, dashboard metrics, and billing live in the control plane.
## Add a fallback in one line

Once you are on OpenGateway, adding fallback models to Chat Completions takes one extra field in the request body:

```python
response = client.chat.completions.create(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
    extra_body={"extra": {"fallbacks": ["anthropic/claude-sonnet-4", "google/gemini-2.5-pro"]}},
)
```

When `openai/gpt-4o` returns a 5xx, times out, or rate-limits, the gateway tries `anthropic/claude-sonnet-4`, then `google/gemini-2.5-pro`. Your code does not need to know.
See Fallbacks for the rules that decide what counts as a "failure."
## What stays the same

- Streaming via `stream: true` → SSE events with the same `data: {...}` envelope
- Tool calling via `tools` and `tool_choice` (translated to Anthropic/Google shapes behind the scenes when you use those models)
- `response_format: { type: "json_object" }` and JSON schema on providers that support it
- Vision inputs (`image_url`) where the underlying model supports them
- Error envelope: `{ "error": { "message": "...", "type": "..." } }`
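Since the `tools` shape and the `tool_calls` field on responses are unchanged, a dispatch loop written against OpenAI keeps working. A sketch of the client side, exercised here with a stand-in `tool_call` object; the `get_weather` tool and its handler are hypothetical:

```python
import json
from types import SimpleNamespace

# The same tools schema you would send to api.openai.com works unchanged.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]


def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # stand-in implementation


def dispatch(tool_call) -> str:
    """Route one tool_call (name + JSON-encoded arguments) to a local handler."""
    handlers = {"get_weather": get_weather}
    args = json.loads(tool_call.function.arguments)
    return handlers[tool_call.function.name](**args)


# Stand-in for a tool_call as it appears in response.choices[0].message.tool_calls.
call = SimpleNamespace(function=SimpleNamespace(
    name="get_weather", arguments='{"city": "Seoul"}'))
print(dispatch(call))  # Sunny in Seoul
```

The same `dispatch` works on real `tool_calls` entries regardless of whether the model behind the namespace is OpenAI, Anthropic, or Google.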
## What is different

- Model IDs are namespaced: `openai/gpt-4o`, not `gpt-4o`. Strip the namespace and routing breaks.
- Gateway headers (`x-opengateway-user-id`, `x-opengateway-session-id`, and `x-sionic-task-id`) are recorded for attribution in logs. See HTTP Headers.
- The dashboard at opengateway.ai/dashboard shows every request, not the OpenAI dashboard.
## What to know
### Will my old OpenAI key still work?
It will not. OpenGateway issues its own keys (og_live_...). The OpenAI key
stays at OpenAI; the gateway charges you for the upstream call separately.
Get a key from API Keys.
### Do I lose anything by going through the gateway?
You add a network hop. In practice, the p50 overhead is single-digit milliseconds. For long-running streaming responses, you do not notice.
### Can I keep using OpenAI directly for some calls and OpenGateway for others?
Yes. You can run both clients side by side. Many teams cut over one feature at a time.
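A gradual cutover usually comes down to two client configurations and a per-feature switch. A minimal sketch of just the routing decision — the feature flag, helper name, and `migrated` set are ours:

```python
GATEWAY_BASE_URL = "https://api.opengateway.ai/v1"
OPENAI_BASE_URL = "https://api.openai.com/v1"


def routing_for(feature: str, migrated: set) -> tuple:
    """Return (base_url, model) for a feature during a gradual cutover."""
    if feature in migrated:
        return GATEWAY_BASE_URL, "openai/gpt-4o"  # namespaced model ID
    return OPENAI_BASE_URL, "gpt-4o"              # bare model ID

migrated = {"summarize"}  # features already cut over to the gateway
print(routing_for("summarize", migrated))  # gateway URL + namespaced model
print(routing_for("search", migrated))     # OpenAI URL + bare model
```

Each side then gets its own `OpenAI(api_key=..., base_url=...)` client; only the key and base URL differ.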
### Does the gateway store my prompts?
Requests and responses are logged when team body logging is enabled. Teams can turn body logging on or off in Settings → Team.