Text generation is available through three protocols: OpenAI-compatible, Anthropic-compatible, and native Gemini AI Studio.
OpenAI-Compatible Interface
For features not covered here, refer to the OpenAI API Reference.
Message Roles
| Role | Purpose | Example |
| --- | --- | --- |
| system | Sets the model's behavior and persona | "You are an experienced software engineer." |
| user | The end user's input | "How do I reverse a string in Python?" |
| assistant | Prior model responses, for multi-turn context | "You can use s[::-1] or ''.join(reversed(s))." |
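The three roles combine into a single messages list; a minimal multi-turn sketch (the conversation content here is illustrative):

```python
# A multi-turn conversation: the system message sets the persona once,
# then user and assistant turns alternate in order.
messages = [
    {"role": "system", "content": "You are an experienced software engineer."},
    {"role": "user", "content": "How do I reverse a string in Python?"},
    {"role": "assistant", "content": "You can use s[::-1] or ''.join(reversed(s))."},
    {"role": "user", "content": "Which one is faster?"},
]
```

This list is passed as the `messages` parameter in the calls below; the model sees the full history on every request.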
Basic Conversation
from openai import OpenAI

client = OpenAI(
    base_url="https://api.ephone.ai/v1",
    api_key="API_KEY",
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
Streaming
from openai import OpenAI

client = OpenAI(
    base_url="https://api.ephone.ai/v1",
    api_key="API_KEY",
)

stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write a short poem about spring."}],
    stream=True,
)
for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
Tool Use
For more details, see the OpenAI tool use guide.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.ephone.ai/v1",
    api_key="API_KEY",
)

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"}
                },
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What's the weather in London?"}],
    tools=tools,
)
print(response.choices[0].message)
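When the model decides to call the tool, the returned message carries a `tool_calls` list rather than text, and your code must run the tool and send the result back in a second request. A minimal sketch of that round-trip; the `tool_call` dict and the `get_weather` implementation are hypothetical stand-ins (the SDK returns typed objects with the same fields):

```python
import json

# Hypothetical tool call, shaped like response.choices[0].message.tool_calls[0].
tool_call = {
    "id": "call_123",
    "function": {"name": "get_weather", "arguments": '{"city": "London"}'},
}

def get_weather(city):
    # Placeholder implementation; a real tool would query a weather service.
    return {"city": city, "temp_c": 14, "conditions": "cloudy"}

# Arguments arrive as a JSON string and must be parsed before dispatch.
args = json.loads(tool_call["function"]["arguments"])
result = get_weather(**args)

# The tool result goes back as a "tool"-role message keyed by tool_call_id,
# appended to the prior messages in a second chat.completions.create call.
followup_message = {
    "role": "tool",
    "tool_call_id": tool_call["id"],
    "content": json.dumps(result),
}
print(followup_message["content"])
```

The model then reads the tool result and produces its final text answer in that second response.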
Responses API
The OpenAI Responses API is supported for OpenAI models. See the OpenAI Responses API docs for usage details.
Set OPENAI_BASE_URL to https://api.ephone.ai/v1
Set OPENAI_API_KEY to your API key
Some parameters (presence_penalty, frequency_penalty, logit_bias, etc.) may be ignored by certain models
The legacy function_call parameter is deprecated — use tools instead
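A minimal sketch of a Responses API request through this gateway; the field names (`model`, `input`) and the `output_text` accessor follow the OpenAI Responses API, and the endpoint and key are the same placeholders used above:

```python
# Request payload for a single-turn Responses API call.
# Field names (model, input) follow the OpenAI Responses API.
payload = {
    "model": "gpt-4o",
    "input": "Hello!",
}

# With the OpenAI SDK this would be sent as (not executed here):
#   client = OpenAI(base_url="https://api.ephone.ai/v1", api_key="API_KEY")
#   response = client.responses.create(**payload)
#   print(response.output_text)
print(payload["input"])
```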
Anthropic-Compatible Interface
For features not covered here, refer to the Anthropic API Reference.
Basic Conversation
import anthropic

client = anthropic.Anthropic(
    base_url="https://api.ephone.ai/anthropic",
    api_key="API_KEY",
)

message = client.messages.create(
    model="claude-opus-4-5-20251101",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello!"}],
)
print(message.content[0].text)
Streaming
import anthropic

client = anthropic.Anthropic(
    base_url="https://api.ephone.ai/anthropic",
    api_key="API_KEY",
)

with client.messages.stream(
    model="claude-opus-4-5-20251101",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Write a short poem about spring."}],
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)
Tool Use
For more details, see the Anthropic tool use guide.
import anthropic

client = anthropic.Anthropic(
    base_url="https://api.ephone.ai/anthropic",
    api_key="API_KEY",
)

tools = [
    {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "input_schema": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"}
            },
            "required": ["city"],
        },
    }
]

message = client.messages.create(
    model="claude-opus-4-5-20251101",
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "What's the weather in London?"}],
)
print(message.content)
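When the model calls the tool, `message.content` contains a `tool_use` block; your code runs the tool and returns the result as a `tool_result` block inside a user message on the next `messages.create` call. A minimal sketch; the `tool_use` dict and the `get_weather` implementation are hypothetical stand-ins (the SDK returns typed objects with the same fields, and `input` is already a parsed dict, not a JSON string):

```python
# Hypothetical tool_use block as it appears among message.content blocks.
tool_use = {
    "type": "tool_use",
    "id": "toolu_123",
    "name": "get_weather",
    "input": {"city": "London"},
}

def get_weather(city):
    # Placeholder implementation; a real tool would query a weather service.
    return f"14°C and cloudy in {city}"

result = get_weather(**tool_use["input"])

# Tool results go back as a user message containing a tool_result block,
# appended to the prior turns in a second messages.create call.
tool_result_message = {
    "role": "user",
    "content": [
        {
            "type": "tool_result",
            "tool_use_id": tool_use["id"],
            "content": result,
        }
    ],
}
print(tool_result_message["content"][0]["content"])
```

The model then incorporates the result into its final text reply.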
Set ANTHROPIC_BASE_URL to https://api.ephone.ai/anthropic
Set ANTHROPIC_API_KEY to your API key
Gemini AI Studio Compatible Interface
Gemini models support direct calls using the official Google AI Studio API format — no conversion to OpenAI format required. Ideal for projects already using the Google genai SDK.
For features not covered here, refer to the Google AI Studio API Reference.
Basic Conversation
from google import genai
from google.genai import types

client = genai.Client(
    api_key="API_KEY",
    http_options=types.HttpOptions(
        api_version="v1beta",
        base_url="https://api.ephone.ai",
    ),
)

response = client.models.generate_content(
    model="gemini-2.5-pro",
    contents="Hello!",
)
print(response.text)
Streaming
from google import genai
from google.genai import types

client = genai.Client(
    api_key="API_KEY",
    http_options=types.HttpOptions(
        api_version="v1beta",
        base_url="https://api.ephone.ai",
    ),
)

for chunk in client.models.generate_content_stream(
    model="gemini-2.5-pro",
    contents="Write a short poem about spring.",
):
    print(chunk.text, end="", flush=True)
Function Calling
For more details, see the Google AI Studio function calling guide.
from google import genai
from google.genai import types

client = genai.Client(
    api_key="API_KEY",
    http_options=types.HttpOptions(
        api_version="v1beta",
        base_url="https://api.ephone.ai",
    ),
)

get_weather = types.FunctionDeclaration(
    name="get_weather",
    description="Get the current weather for a city",
    parameters=types.Schema(
        type="OBJECT",
        properties={
            "city": types.Schema(type="STRING", description="City name"),
        },
        required=["city"],
    ),
)

response = client.models.generate_content(
    model="gemini-2.5-pro",
    contents="What's the weather in London?",
    config=types.GenerateContentConfig(
        tools=[types.Tool(function_declarations=[get_weather])]
    ),
)
print(response.candidates[0].content.parts)
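When the model chooses to call the function, one of the returned parts carries a `function_call` with the function name and parsed arguments. A minimal sketch of handling it; the `function_call` dict and the `get_weather` implementation are hypothetical stand-ins for the SDK's typed objects:

```python
# Hypothetical function_call part, shaped like the function_call field
# found in response.candidates[0].content.parts.
function_call = {"name": "get_weather", "args": {"city": "London"}}

def get_weather(city):
    # Placeholder implementation; a real tool would query a weather service.
    return {"city": city, "temp_c": 14, "conditions": "cloudy"}

# Args arrive as a parsed dict, so they can be dispatched directly.
result = get_weather(**function_call["args"])

# With the google-genai SDK, the result would be returned to the model as a
# function response part in a follow-up generate_content call, e.g.:
#   types.Part.from_function_response(name="get_weather",
#                                     response={"result": result})
print(result["conditions"])
```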
Set base_url / baseUrl to https://api.ephone.ai (without the /v1beta suffix; the SDK appends it automatically)
Set api_version / apiVersion to v1beta
Set api_key / apiKey to your API key
Install dependencies: Python: pip install google-genai; Node.js: npm install @google/genai
OpenAI Official Docs: OpenAI Chat Completions API reference
Anthropic Official Docs: Anthropic Messages API reference
Google AI Studio Docs: Gemini GenerateContent API reference