Observability
Built-in LLM observability. Every request logged and costed. No SDK to wrap, no agent to install. Cost, latency, and logs in one dashboard.
What is opengateway Observability?#
Observability in opengateway is the built-in dashboard and log viewer that captures every request through the Gateway. Cost, latency, tokens, model, and provider are recorded for every request; request and response bodies are captured when body logging is enabled for the team. No SDK to wrap and no agent to install.
Overview#
Every request that passes through the Gateway is automatically logged and costed. There is nothing to install and no SDK to wrap. When you open the dashboard, the data is already there.
This is deliberate. Wiring observability into an LLM application is usually a well-known tax: pick a tracing tool, wrap every SDK call, propagate request IDs, and reconcile cost reports at month end. When the gateway itself is producing the traffic, the data is already in the right place. We keep it there rather than ask you to re-plumb it.
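To make that concrete, here is a minimal sketch of why a gateway can log for free. All names here are hypothetical, not the Gateway's actual internals: the point is that the proxy function forwarding a request is already the single place that sees the model, provider, latency, tokens, and outcome, so recording them is one extra line rather than a separate instrumentation layer.

```python
import time
from dataclasses import dataclass, field

@dataclass
class LogEntry:
    model: str
    provider: str
    latency_ms: float
    tokens: int
    cost_usd: float
    status: str

@dataclass
class Gateway:
    # Hypothetical in-memory log store; a real gateway would persist these.
    log: list = field(default_factory=list)

    def forward(self, model: str, provider: str, call) -> str:
        """Forward one request upstream, recording metrics as a side effect."""
        start = time.perf_counter()
        try:
            tokens, cost = call()  # the upstream provider call
            status = "ok"
        except Exception:
            tokens, cost, status = 0, 0.0, "error"
        latency_ms = (time.perf_counter() - start) * 1000
        self.log.append(LogEntry(model, provider, latency_ms, tokens, cost, status))
        return status

gw = Gateway()
gw.forward("gpt-4o-mini", "openai", lambda: (120, 0.0002))
```

The same `forward` path handles every request, which is why nothing has to be installed on the client side.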
What you get today#
- Real-time KPIs for traffic, cost, latency, and error rate.
- Every request, filterable by status, provider, model, API key, and time.
What it answers#
- How much did we spend on Claude yesterday? (Dashboard, model filter.)
- Why was that one response so slow? (Logs, open the request, inspect latency and provider.)
- Which users are burning the most tokens? (Logs, grouped by API key or user tag.)
- Did the gpt-4o-mini migration actually save money? (Dashboard, compare two periods.)
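Each of these questions reduces to a filter or aggregation over the same log records. As an illustration, the first one is a filter-and-sum; the record shape below is hypothetical, and the dashboard does this computation for you:

```python
from datetime import date

# Hypothetical log records, shaped like rows in the log viewer.
logs = [
    {"model": "claude-sonnet-4", "cost_usd": 0.013, "day": date(2025, 6, 1)},
    {"model": "gpt-4o-mini",     "cost_usd": 0.002, "day": date(2025, 6, 1)},
    {"model": "claude-sonnet-4", "cost_usd": 0.021, "day": date(2025, 6, 2)},
]

def spend(logs, model_prefix: str, day: date) -> float:
    """Sum cost over requests matching a model prefix on a given day."""
    return sum(
        r["cost_usd"]
        for r in logs
        if r["model"].startswith(model_prefix) and r["day"] == day
    )

yesterday_claude = spend(logs, "claude", date(2025, 6, 1))
```

The other questions follow the same pattern with a different filter (API key, user tag, or time period) or a different aggregate.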
What you do not have to do#
- Install a Python SDK that wraps the OpenAI SDK.
- Keep a request ID alive through your async code.
- Reconcile cost reports from three different providers at month end.
- Write a retry-aware timing wrapper.
- Guess which request produced a strange answer.
All of this is already paid for by using the Gateway.
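For a sense of what the last two bullets cost in practice, here is roughly the retry-aware timing wrapper you would otherwise end up writing and maintaining yourself. This is a sketch of the boilerplate being avoided, not anything you need with the Gateway:

```python
import time

def timed_with_retries(call, retries: int = 3, backoff_s: float = 0.01):
    """Retry a flaky call, timing every attempt individually --
    the kind of wrapper built-in gateway logging makes unnecessary."""
    attempts = []  # (status, seconds) per attempt, for later reconciliation
    for attempt in range(1, retries + 1):
        start = time.perf_counter()
        try:
            result = call()
            attempts.append(("ok", time.perf_counter() - start))
            return result, attempts
        except Exception:
            attempts.append(("error", time.perf_counter() - start))
            if attempt == retries:
                raise
            time.sleep(backoff_s)

# Simulate a provider call that fails once, then succeeds.
responses = iter([Exception("boom"), "answer"])
def flaky_call():
    item = next(responses)
    if isinstance(item, Exception):
        raise item
    return item

result, attempts = timed_with_retries(flaky_call)
```

Multiply that by every call site, plus request-ID propagation and per-provider cost reconciliation, and the tax in the bullets above adds up quickly.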