
OpenAI (Native Mode)

Use the OpenAI SDK with Enterprise Router

Setup

Install the OpenAI SDK:

```shell
pip install openai
```

Configure the client to point at your gateway:

```python
from openai import OpenAI

client = OpenAI(
    api_key="sk-proxy-YOUR_KEY_HERE",
    base_url="https://gateway.example.com/openai",
)
```

That’s it. Use the client exactly as you would with OpenAI directly.
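Hard-coding credentials is fine for a quick test, but in practice you will usually read them from the environment. A minimal sketch, assuming `GATEWAY_API_KEY` and `GATEWAY_BASE_URL` as the variable names (pick whatever fits your deployment):

```python
import os

def gateway_config() -> dict:
    # Assumed variable names; the literals below are the placeholder fallbacks.
    return {
        "api_key": os.environ.get("GATEWAY_API_KEY", "sk-proxy-YOUR_KEY_HERE"),
        "base_url": os.environ.get("GATEWAY_BASE_URL", "https://gateway.example.com/openai"),
    }

# client = OpenAI(**gateway_config())
```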

Basic chat completion

```python
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain quantum computing in one paragraph."},
    ],
)

print(response.choices[0].message.content)
```
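Besides the message text, the response carries token counts, which are worth logging when requests go through a metered gateway. A small helper (a sketch; the field names are the standard Chat Completions response fields):

```python
def summarize_response(response) -> dict:
    # Pull the assistant reply plus token counts for logging/cost tracking.
    return {
        "content": response.choices[0].message.content,
        "prompt_tokens": response.usage.prompt_tokens,
        "completion_tokens": response.usage.completion_tokens,
    }
```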

Streaming

```python
stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write a haiku about APIs."}],
    stream=True,
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```
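If you need the full text afterwards (for example, to store it) rather than just echoing chunks, accumulate the deltas as they arrive. A sketch that works with any iterable of chat-completion chunks:

```python
def collect_stream(stream) -> str:
    # Concatenate content deltas; some chunks carry no content (e.g. role-only).
    parts = []
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:
            parts.append(delta)
    return "".join(parts)
```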

Function calling

```python
import json

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string", "description": "City name"}
                },
                "required": ["location"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What's the weather in Tokyo?"}],
    tools=tools,
    tool_choice="auto",
)

tool_call = response.choices[0].message.tool_calls[0]
arguments = json.loads(tool_call.function.arguments)  # arguments arrive as a JSON string
print(f"Function: {tool_call.function.name}")
print(f"Arguments: {arguments}")
```
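The snippet above stops at inspecting the tool call; in a real loop you execute the named function and send its result back to the model as a `tool` message. A sketch of the local dispatch side (the `get_weather` implementation and registry here are hypothetical stand-ins):

```python
import json

def get_weather(location: str) -> str:
    # Hypothetical stub; a real implementation would call a weather service.
    return f"Sunny in {location}"

TOOL_REGISTRY = {"get_weather": get_weather}

def dispatch_tool_call(name: str, arguments_json: str) -> str:
    # The model returns arguments as a JSON string; parse before calling.
    args = json.loads(arguments_json)
    return TOOL_REGISTRY[name](**args)

# result = dispatch_tool_call(tool_call.function.name, tool_call.function.arguments)
# Then append {"role": "tool", "tool_call_id": tool_call.id, "content": result}
# to the messages list and call client.chat.completions.create again.
```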

TypeScript

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: "sk-proxy-YOUR_KEY_HERE",
  baseURL: "https://gateway.example.com/openai",
});

const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello!" }],
});

console.log(response.choices[0].message.content);
```

Streaming (TypeScript)

```typescript
const stream = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Write a haiku about APIs." }],
  stream: true,
});

for await (const chunk of stream) {
  const content = chunk.choices[0]?.delta?.content;
  if (content) process.stdout.write(content);
}
```

Supported models

All OpenAI models work through native mode, including:

  • gpt-4o, gpt-4o-mini
  • gpt-5.4, gpt-5.4-mini, gpt-5.4-nano
  • o3, o3-mini, o4-mini

Any new model OpenAI releases will work automatically — the gateway proxies requests without modification.