Request Headers

authorization · string · required

The authorization token.

Request Body

messages · array · required

A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, such as text, images, and audio.

Items
Developer Message · object
Properties
role · "developer" · required
content · enum · required
Variants
Text Content · string
Array Content · array
Items
Text Content Part · object
Properties
type · "text" · required
text · string · required
name · string · optional
System Message · object
Properties
role · "system" · required
content · enum · required
Variants
Text Content · string
Array Content · array
Items
Text Content Part · object
Properties
type · "text" · required
text · string · required
name · string · optional
User Message · object
Properties
role · "user" · required
content · enum · required
Variants
Text Content · string
Array Content · array
Items
Text Content Part · object
Properties
type · "text" · required
text · string · required
Image Content Part · object
Properties
type · "image_url" · required
image_url · object · required
Properties
url · string · required
detail · enum · optional
Variants
Auto · "auto"
Low · "low"
High · "high"
Audio Content Part · object
Properties
type · "audio_url" · required
input_audio · object · required
Properties
data · string · required
format · enum · required
Variants
WAV · "wav"
MP3 · "mp3"
File Content Part · object
Properties
type · "file" · required
file · object · required
Properties
file_data · string · optional
file_id · string · optional
filename · string · optional
name · string · optional
Assistant Message · object
Properties
role · "assistant" · required
content · enum · required
Variants
Text Content · string
Array Content · array
Items
Text Content Part · object
Properties
type · "text" · required
text · string · required
Refusal Content Part · object
Properties
type · "refusal" · required
refusal · string · required
name · string · optional
refusal · string · optional
tool_calls · array · optional
Items
Tool Call · object
Properties
id · string · required
function · object · required
Properties
name · string · required
arguments · string · required
type · "function" · required
Tool Message · object
Properties
role · "tool" · required
content · enum · required
Variants
Text Content · string
Array Content · array
Items
Text Content Part · object
Properties
type · "text" · required
text · string · required
tool_call_id · string · required
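
As an illustration of the schema above, here is a minimal sketch of a messages array mixing string content and array content (the values are invented for the example):

```python
# A system message with plain string content, followed by a user turn
# combining a text part and an image part.
messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "What is shown in this image?"},
            {
                "type": "image_url",
                "image_url": {
                    "url": "https://example.com/photo.jpg",  # illustrative URL
                    "detail": "auto",
                },
            },
        ],
    },
]
```
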
model · enum · required

The Query Model to use for the query completion.

Variants
Query Model Name · string

The base62 22-character unique identifier for the Query Model.

Query Model Object · object

The provided Query Model object.

Properties
models · array · min: 1 · required

The LLMs which make up the Model.

Items
Query LLM · object

An LLM which is part of the Model.

Properties
id · string · required

Model ID used to generate the response.

mode · enum · required

The mode of the model, which determines whether it generates a response or selects from the generated options.

Variants
Generate · "generate"

The model generates a response.

Select Thinking · "select_thinking"

The model selects a Generate ID. The model will output reasoning, even if the LLM is not a reasoning model. Best for non-reasoning models.

Select Non Thinking · "select_non_thinking"

The model selects a Generate ID.

Select Thinking Logprobs · "select_thinking_logprobs"

The model selects one or more Generate IDs as a probability distribution. The model will output reasoning, even if the LLM is not a reasoning model. Best for non-reasoning models.

Select Non Thinking Logprobs · "select_non_thinking_logprobs"

The model selects one or more Generate IDs as a probability distribution.

select_top_logprobs · number · min: 0, max: 20 · optional

If the mode is one of the select logprobs modes, this controls how many of the top options are returned with their probabilities.

frequency_penalty · number · min: -2, max: 2 · optional

Controls the repetition of tokens based on how often they appear in the input: tokens that appear more frequently are penalized proportionally to how often they occur, making them less likely to be reused. The penalty scales with the number of occurrences. Negative values will encourage token reuse.

logit_bias · map<string, number> · optional

Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
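
As a hedged sketch of the mechanics described above (the token IDs are invented; real IDs depend on the model's tokenizer):

```python
# Token ID -> bias. -100 effectively bans a token; small values nudge it.
logit_bias = {
    "1234": -100,  # never emit this token
    "5678": 1,     # slightly increase this token's likelihood
}
```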

max_completion_tokens · number · min: 1 · optional

An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens.

presence_penalty · number · min: -2, max: 2 · optional

Controls the repetition of tokens that are already present in the input: such tokens are penalized, encouraging the model to introduce new ones. Unlike frequency_penalty, the penalty does not scale with the number of occurrences. Negative values will encourage token reuse.

reasoning_effort · enum · optional

Constrains effort on reasoning for some reasoning models.

Variants
Low · "low"
Medium · "medium"
High · "high"
stop · enum · optional

Stop generation immediately if the model encounters any sequence specified in the stop parameter.

Variants
Stop String · string
Stop Array · array
Items
Stop String · string
temperature · number · min: 0, max: 2 · optional

This setting influences the variety in the model’s responses. Lower values lead to more predictable and typical responses, while higher values encourage more diverse and less common responses. At 0, the model always gives the same response for a given input.

top_p · number · min: 0, max: 1 · optional

This setting limits the model’s choices to a percentage of likely tokens: only the top tokens whose probabilities add up to P. A lower value makes the model’s responses more predictable, while the default setting allows for a full range of token choices. Think of it like a dynamic Top-K.

max_tokens · number · min: 1 · optional

This sets the upper limit for the number of tokens the model can generate in response. It won’t produce more than this limit. The maximum value is the context length minus the prompt length.

min_p · number · min: 0, max: 1 · optional

Represents the minimum probability for a token to be considered, relative to the probability of the most likely token. (The value changes depending on the confidence level of the most probable token.) If your Min-P is set to 0.1, that means it will only allow for tokens that are at least 1/10th as probable as the best possible option.
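
A short worked example of the Min-P rule under that 1/10th interpretation:

```python
top_prob = 0.60               # probability of the most likely token
min_p = 0.1                   # request parameter
threshold = min_p * top_prob  # 0.06: tokens below this probability are filtered out
```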

provider · object · optional

OpenRouter provider preferences.

Properties
order · array · optional

List of provider slugs to try in order.

Items
Provider Slug · string
allow_fallbacks · boolean · optional

Whether to allow backup providers when the primary is unavailable.

require_parameters · boolean · optional

Only use providers that support all parameters in your request.

data_collection · enum · optional

Control whether to use providers that may store data.

Variants
Allow · "allow"
Deny · "deny"
only · array · optional

List of provider slugs to allow for this request.

Items
Provider Slug · string
ignore · array · optional

List of provider slugs to skip for this request.

Items
Provider Slug · string
quantizations · array · optional

List of quantization levels to filter by.

Items
Quantization Level · string
sort · string · optional

Sort providers by price or throughput.

reasoning · object · optional

OpenRouter reasoning configuration.

Properties
max_tokens · number · min: 1 · optional

An upper bound for the number of tokens that can be generated for reasoning.

effort · enum · optional

Constrains effort on reasoning for some reasoning models.

Variants
Low · "low"
Medium · "medium"
High · "high"
enabled · boolean · optional

Whether reasoning is enabled for this request.

repetition_penalty · number · min: 0, max: 2 · optional

Helps to reduce the repetition of tokens from the input. A higher value makes the model less likely to repeat tokens, but too high a value can make the output less coherent (often with run-on sentences that lack small words). The penalty scales based on the original token’s probability.

top_a · number · min: 0, max: 1 · optional

Consider only the top tokens with “sufficiently high” probabilities based on the probability of the most likely token. Think of it like a dynamic Top-P. A lower Top-A value focuses the choices based on the highest probability token, but with a narrower scope. A higher Top-A value does not necessarily affect the creativity of the output, but rather refines the filtering process based on the maximum probability.

top_k · number · min: 1 · optional

This limits the model’s choice of tokens at each step, making it choose from a smaller set. A value of 1 means the model will always pick the most likely next token, leading to predictable results. By default this setting is disabled, allowing the model to consider all choices.

verbosity · enum · optional

Controls the verbosity and length of the model response. Lower values produce more concise responses, while higher values produce more detailed and comprehensive responses.

Variants
Low · "low"
Medium · "medium"
High · "high"
models · array · optional

Fallback models, tried in order if the first one fails.

Items
Model ID · string
weight · enum · required

The weight of the LLM, which determines its influence on the Confidence Score. Must match the weight strategy of the parent Model.

Variants
Static Weight · object

A static weight value.

Properties
type · "static" · required
weight · number · required

The static weight value.

Training Table Weight · object

A dynamic weight value based on training table data.

Properties
type · "training_table" · required
base_weight · number · required

The base weight value, uninfluenced by training table data.

min_weight · number · required

The minimum weight value. A model that never matches the correct answer will have this weight.

max_weight · number · required

The maximum weight value. A model that always matches the correct answer will have this weight.

weight · enum · required

The weight strategy for the Model, which determines how the Confidence Score is calculated.

Variants
Static Weight · object

Each LLM has a fixed weight.

Properties
type · "static" · required
Training Table Weight · object

Each LLM has a dynamic weight based on training table data.

Properties
type · "training_table" · required
embeddings_model · string · required

The embedding model used to compute prompt embeddings for a training table vector search.

top · number · min: 1 · required

The number of most similar training table entries to consider when computing the dynamic weight.
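
Putting the preceding fields together, a hedged sketch of a Query Model object with one "generate" LLM and one "select" LLM under the static weight strategy (the model IDs are placeholders):

```python
query_model = {
    "models": [
        {
            "id": "placeholder/generator-model",  # hypothetical model ID
            "mode": "generate",
            "temperature": 0.7,
            "weight": {"type": "static", "weight": 1.0},
        },
        {
            "id": "placeholder/selector-model",   # hypothetical model ID
            "mode": "select_thinking_logprobs",
            "select_top_logprobs": 5,  # applies to the select logprobs modes
            "weight": {"type": "static", "weight": 2.0},
        },
    ],
    # The strategy here must match the per-LLM weight types above.
    "weight": {"type": "static"},
}
```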

logprobs · boolean · optional

Whether to return log probabilities of the output tokens. If true, returns the log probabilities of each output token.

n · number · min: 1 · optional

How many query completion choices to generate for each LLM in the Model. For example, if the Model contains 4 LLMs, setting this to 2 will generate 8 choices.

prediction · string · optional

The predicted output from the model.

response_format · enum · optional

An object specifying the format that the model must output.

Variants
Text Response Format · object

Responses will have no specific format.

Properties
type · "text" · required
JSON Object Response Format · object

Responses will be JSON objects.

Properties
type · "json_object" · required
JSON Schema Response Format · object

Responses will adhere to the provided JSON Schema. The schema may include custom "_confidence" or "_preserveOrder" fields to control Confidence ID computation.

"_confidence" may be applied to any typed field in the schema to indicate whether the field should be included when computing the Confidence ID (true by default).

"_preserveOrder" may be applied to array fields to indicate whether the order of items in the array should be preserved when computing the Confidence ID (false by default).

If the Query Model contains only "select" LLMs, this field is required, and the schema must contain only "object", "boolean", or "string" (with "enum") fields.

Properties
type · "json_schema" · required
json_schema · object · required
Properties
name · string · required
description · string · optional
strict · boolean · optional

Whether to strictly enforce the schema. If true, the model will only output properties defined in the schema. If false, the model may output additional properties.

schema · json_value · optional

The JSON Schema object defining the expected structure.
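
For instance, a sketch of a JSON Schema response format using the custom "_confidence" and "_preserveOrder" fields described above (the schema itself is invented):

```python
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "verdict",
        "strict": True,
        "schema": {
            "type": "object",
            "properties": {
                "answer": {"type": "string", "enum": ["yes", "no"]},
                # Ignored when computing the Confidence ID:
                "explanation": {"type": "string", "_confidence": False},
                # Item order ignored when computing the Confidence ID:
                "tags": {
                    "type": "array",
                    "items": {"type": "string"},
                    "_preserveOrder": False,
                },
            },
        },
    },
}
```

With this schema, two choices that agree on answer and tags (in any order) but differ in explanation would share a Confidence ID.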

seed · number · optional

If specified, inferencing will sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed for some models.

service_tier · enum · optional

Specifies the processing type used for serving the request.

Variants
Auto · "auto"
Default · "default"
Flex · "flex"
stream · boolean · optional

If set to true, the model response data will be streamed to the client as it is generated, using server-sent events.

stream_options · object · optional

Options for the streaming response.

Properties
include_usage · boolean · optional

If set, an additional chunk will be streamed before the data: [DONE] message. The usage field on this chunk shows the token usage statistics for the entire request, as well as the cost, if requested.
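
A minimal sketch of consuming the stream as server-sent events, assuming an OpenAI-compatible chat completions endpoint (the URL, token, and model value are placeholders):

```python
import json
import requests  # third-party HTTP client, assumed installed

resp = requests.post(
    "https://api.example.com/v1/chat/completions",  # placeholder endpoint
    headers={"authorization": "Bearer YOUR_TOKEN"},
    json={
        "model": "QUERY_MODEL_ID",  # placeholder Query Model identifier
        "messages": [{"role": "user", "content": "Hello!"}],
        "stream": True,
        "stream_options": {"include_usage": True},
    },
    stream=True,
)
for line in resp.iter_lines():
    if not line.startswith(b"data: "):
        continue  # skip keep-alive blank lines and comments
    payload = line[len(b"data: "):]
    if payload == b"[DONE]":
        break
    chunk = json.loads(payload)
    for choice in chunk["choices"]:
        print(choice["delta"].get("content") or "", end="")
```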

tools · array · optional

A list of tools the model may call.

Items
Tool · object
Properties
type · "function" · required
function · object · required
Properties
name · string · required
description · string · optional
parameters · json_value · optional

The JSON Schema object defining the expected structure.

strict · boolean · optional

Whether to strictly enforce the schema. If true, the model will only output properties defined in the schema. If false, the model may output additional properties.
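
A short sketch of a single function tool definition (the function name and parameters are invented):

```python
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical function
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
            "strict": True,
        },
    }
]
```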

top_logprobs · number · min: 0, max: 20 · optional

An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability.

usage · object · optional

OpenRouter accounting configuration.

Properties
include · boolean · optional

Whether to include Cost in the response usage.

embeddings · string · optional

If specified, each choice outputted by a "generate" LLM will include an embedding vector of its text content.

select_deterministic · boolean · optional

If true, Response Format must be "json_schema", and the schema must contain only object, string enum, or boolean properties.

A choice will be generated for each possible JSON output that can be constructed from the provided schema. The "model" field of these choices will be the "name" of the "json_schema". "select" LLMs will be able to vote on these choices.

If the Query Model contains only "select" LLMs, this field must be set to true.
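
Tying the request body together, a hedged end-to-end sketch of a unary request (the endpoint URL and token are placeholders; query_model, messages, and response_format are the sketches from earlier):

```python
import requests  # third-party HTTP client, assumed installed

resp = requests.post(
    "https://api.example.com/v1/chat/completions",  # placeholder endpoint
    headers={"authorization": "Bearer YOUR_TOKEN"},
    json={
        "model": query_model,  # or a base62 22-character Query Model ID
        "messages": messages,
        "response_format": response_format,
        "usage": {"include": True},
    },
)
completion = resp.json()
# Pick the choice with the highest Confidence.
best = max(completion["choices"], key=lambda c: c.get("confidence") or 0)
print(best["message"]["content"], best.get("confidence"))
```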

Response Body (Unary)

id · string · required

A unique identifier for the chat completion.

choices · array · required

An array of choices returned by the Query Model.

Items
message · object · required

The message generated by the model for this choice.

Properties
content · string · optional

The content of the message generated by the model.

refusal · string · optional

The refusal information if the model refused to generate a message.

role · "assistant" · required

The role of the message, which is always assistant for model-generated messages.

annotations · array · optional

The annotations added by the model in this message.

Items
Annotation · object
Properties
type · "url_citation" · required
url_citation · object · required
Properties
end_index · number · required

The end index of the citation in the message content.

start_index · number · required

The start index of the citation in the message content.

title · string · required

The title of the cited webpage.

url · string · required

The URL of the cited webpage.

audio · object · optional

The audio generated by the model in this message.

Properties
id · string · required
data · string · required
expires_at · number · required
transcript · string · required
tool_calls · array · optional

The tool calls made by the model in this message.

Items
id · string · required

The tool call ID.

type · "function" · required
function · object · required
Properties
name · string · required

The name of the function being called.

arguments · string · required

The arguments passed to the function.

reasoning · string · optional

The reasoning text generated by the model in this message.

images · array · optional

The images generated by the model in this message.

Items
Image · object
Properties
type · "image_url" · required
image_url · object · required
Properties
url · string · required
finish_reason · enum · required

The reason why the model finished generating the response.

Variants
Stop · "stop"

The model finished generating because it reached a natural stopping point.

Length · "length"

The model finished generating because it reached the maximum token limit.

ToolCalls · "tool_calls"

The model finished generating because it made one or more tool calls.

ContentFilter · "content_filter"

The model finished generating because it triggered a content filter.

Error · "error"

The model finished generating because an error occurred.

index · number · required

The index of the choice in the list of choices.

logprobs · object · optional

The log probabilities of the tokens in the message.

Properties
content · array · optional

An array of log probabilities for each token in the content.

Items
Logprob · object
Properties
token · string · required

The token text.

bytes · array · optional

The byte representation of the token.

Items
Byte · number

A byte in the token's byte representation.

logprob · number · required

The log probability of the token.

top_logprobs · array · required
Items
Top Logprob · object
Properties
token · string · required

The token text.

bytes · array · optional

The byte representation of the token.

Items
Byte · number

A byte in the token's byte representation.

logprob · number · optional

The log probability of the token.

refusal · array · optional

An array of log probabilities for each token in the refusal.

Items
Logprob · object
Properties
token · string · required

The token text.

bytes · array · optional

The byte representation of the token.

Items
Byte · number

A byte in the token's byte representation.

logprob · number · required

The log probability of the token.

top_logprobs · array · required
Items
Top Logprob · object
Properties
token · string · required

The token text.

bytes · array · optional

The byte representation of the token.

Items
Byte · number

A byte in the token's byte representation.

logprob · number · optional

The log probability of the token.

generate_id · string · optional

A hash of the text content of the choice. Only present for "generate" LLMs.

confidence_id · enum · optional

For "generate" LLMs, a hash of the text content of the choice. When the content is JSON and a Response Format was provided, the content may be modified before the hash is computed: properties marked "_confidence": false (true by default) are omitted, so choices that differ only on unimportant properties are treated as the same; object property keys and array items marked "_preserveOrder": false (the default) are sorted, so choices with the same content in different orders are treated as the same.

For "select" LLMs, the Confidence ID is the Confidence ID of the Generate ID that was selected. If using select logprobs, and the LLM selected multiple Generate IDs, the Confidence ID will be a probability distribution.

Variants
Confidence ID · string

A single Confidence ID.

Confidence ID Distribution · map<string, number>

A map of Confidence IDs to their probabilities. All probabilities will sum to 1.

confidence_weight · number · optional

The weight of the LLM that produced this choice. For "static" weight, this is always a fixed value. For "training_table" weight, it depends on the training table data.

confidence · number · optional

The confidence of the choice. Each choice with the same Confidence ID will have the same Confidence. Computed by dividing the total weight of LLMs that produced a choice with the same Confidence ID by the total weight of all Confidence IDs.
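
A worked sketch of the Confidence computation just described, using invented weights and Confidence IDs:

```python
from collections import defaultdict

# (confidence_id, confidence_weight) for each choice -- invented values.
choices = [("abc", 1.0), ("abc", 2.0), ("xyz", 1.0)]

weight_by_id = defaultdict(float)
for cid, weight in choices:
    weight_by_id[cid] += weight
total = sum(weight_by_id.values())  # 4.0

confidence = {cid: w / total for cid, w in weight_by_id.items()}
# {"abc": 0.75, "xyz": 0.25} -- every choice sharing a Confidence ID
# receives the same Confidence.
```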

embedding · enum · optional

If the "embeddings" field was specified in the request, an embedding vector of the text content of the choice, or an error if the embedding failed.

Variants
Response · object

The embedding response.

Properties
data · array · required

An array of embedding objects.

Items
Embedding · object

An embedding vector.

Properties
embedding · array · required

The embedding vector as an array of floats.

Items
Float · number

A float in the embedding vector.

index · number · required
object · "embedding" · required
model · string · required

The name of the model used to generate the embeddings.

object · "list" · required
usage · object · optional

An object containing token usage statistics for the embeddings request.

Properties
completion_tokens · number · required

The number of tokens generated in the completion.

prompt_tokens · number · required

The number of tokens in the input prompt.

total_tokens · number · required

The total number of tokens used (prompt + completion).

completion_tokens_details · object · optional
Properties
accepted_prediction_tokens · number · optional
rejected_prediction_tokens · number · optional
audio_tokens · number · optional

The number of audio tokens generated.

reasoning_tokens · number · optional

The number of reasoning tokens generated.

prompt_tokens_details · object · optional
Properties
audio_tokens · number · optional

The number of audio tokens in the input prompt.

cached_tokens · number · optional

The number of cached tokens in the input prompt.

cost · number · optional

The cost incurred for this request, in Credits.

cost_details · object · optional
Properties
upstream_inference_cost · number · optional

The cost charged by the upstream LLM provider, in Credits.

upstream_upstream_inference_cost · number · optional

The cost charged by the upstream LLM provider's own upstream LLM provider, in Credits.

Error · object

An error occurred while generating the embedding.

Properties
code · number · required

The HTTP status code for the error.

message · json_value

A JSON message describing the error. Typically either a string or an object.

error · object · optional

If an error occurred while generating this choice, the error object.

Properties
code · number · required

The HTTP status code for the error.

message · json_value

A JSON message describing the error. Typically either a string or an object.

model · string · required

The base62 22-character unique identifier for the LLM that produced this choice. If this choice was produced by the Response Format (with "select_deterministic": true), it will contain the name of the "json_schema".

model_index · number · optional

The index of the LLM in the Query Model that produced this choice. May be missing if this choice was produced by the Response Format.

completion_metadata · object · required

Details about the chat completion which produced this choice.

Properties
id · string · required

A unique identifier for the chat completion.

created · number · required

The Unix timestamp (in seconds) when the chat completion was created.

model · string · required

The model used for the chat completion.

service_tier · enum · optional

The service tier used for the chat completion.

Variants
Auto · "auto"
Default · "default"
Flex · "flex"
system_fingerprint · string · optional

A fingerprint representing the system configuration used for the chat completion.

usage · object · optional

An object containing token usage statistics for the chat completion.

Properties
completion_tokens · number · required

The number of tokens generated in the completion.

prompt_tokens · number · required

The number of tokens in the input prompt.

total_tokens · number · required

The total number of tokens used (prompt + completion).

completion_tokens_details · object · optional
Properties
accepted_prediction_tokens · number · optional
rejected_prediction_tokens · number · optional
audio_tokens · number · optional

The number of audio tokens generated.

reasoning_tokens · number · optional

The number of reasoning tokens generated.

prompt_tokens_details · object · optional
Properties
audio_tokens · number · optional

The number of audio tokens in the input prompt.

cached_tokens · number · optional

The number of cached tokens in the input prompt.

cost · number · optional

The cost incurred for this chat completion, in Credits.

cost_details · object · optional
Properties
upstream_inference_cost · number · optional

The cost charged by the upstream LLM provider, in Credits.

upstream_upstream_inference_cost · number · optional

The cost charged by the upstream LLM provider's own upstream LLM provider, in Credits.

provider · string · optional

The upstream (or upstream upstream) LLM provider used for the chat completion.

created · number · required

The Unix timestamp (in seconds) when the chat completion was created.

model · string · required

The model which generated the completion. Will be prefixed by "objectiveai/".

object · "chat.completion" · required
service_tier · enum · optional

The service tier used for the chat completion.

Variants
Auto · "auto"
Default · "default"
Flex · "flex"
system_fingerprint · string · optional

A fingerprint representing the system configuration used for the chat completion.

usage · object · optional

An object containing token usage statistics for the chat completion.

Properties
completion_tokens · number · required

The number of tokens generated in the completion.

prompt_tokens · number · required

The number of tokens in the input prompt.

total_tokens · number · required

The total number of tokens used (prompt + completion).

completion_tokens_details · object · optional
Properties
accepted_prediction_tokens · number · optional
rejected_prediction_tokens · number · optional
audio_tokens · number · optional

The number of audio tokens generated.

reasoning_tokens · number · optional

The number of reasoning tokens generated.

prompt_tokens_details · object · optional
Properties
audio_tokens · number · optional

The number of audio tokens in the input prompt.

cached_tokens · number · optional

The number of cached tokens in the input prompt.

cost · number · optional

The cost incurred for this chat completion, in Credits.

cost_details · object · optional
Properties
upstream_inference_cost · number · optional

The cost charged by the upstream LLM provider, in Credits.

upstream_upstream_inference_cost · number · optional

The cost charged by the upstream LLM provider's own upstream LLM provider, in Credits.

training_table_data · object · optional

Training table data associated with the completion, if applicable.

Properties
response_format_hash · string · required

The hash of the response format used to generate the completion. Each response format has separate training table data.

embeddings_response · object · required

The embeddings response computed from the request messages.

Properties
data · array · required

An array of embedding objects.

Items
Embedding · object

An embedding vector.

Properties
embedding · array · required

The embedding vector as an array of floats.

Items
Float · number

A float in the embedding vector.

index · number · required
object · "embedding" · required
model · string · required

The name of the model used to generate the embeddings.

object · "list" · required
usage · object · optional

An object containing token usage statistics for the embeddings request.

Properties
completion_tokens · number · required

The number of tokens generated in the completion.

prompt_tokens · number · required

The number of tokens in the input prompt.

total_tokens · number · required

The total number of tokens used (prompt + completion).

completion_tokens_details · object · optional
Properties
accepted_prediction_tokens · number · optional
rejected_prediction_tokens · number · optional
audio_tokens · number · optional

The number of audio tokens generated.

reasoning_tokens · number · optional

The number of reasoning tokens generated.

prompt_tokens_details · object · optional
Properties
audio_tokens · number · optional

The number of audio tokens in the input prompt.

cached_tokens · number · optional

The number of cached tokens in the input prompt.

cost · number · optional

The cost incurred for this request, in Credits.

cost_details · object · optional
Properties
upstream_inference_cost · number · optional

The cost charged by the upstream LLM provider, in Credits.

upstream_upstream_inference_cost · number · optional

The cost charged by the upstream LLM provider's own upstream LLM provider, in Credits.

Response Body (Streaming)

id · string · required

A unique identifier for the chat completion.

choices · array · required

An array of choices returned by the Query Model.

Items
delta · object · required

An object containing the incremental updates to the chat message.

Properties
content · string · optional

The content of the message delta.

refusal · string · optional

The refusal reason if the model refused to generate a response.

role · "assistant" · optional

The role of the message delta.

tool_calls · array · optional

The tool calls made by the model in this delta.

Items
index · number · required

The index of the tool call in the message.

id · string · optional

The tool call ID.

type · "function" · optional
function · object · optional
Properties
name · string · optional

The name of the function being called.

arguments · string · optional

The arguments passed to the function.

reasoning · string · optional

The reasoning text generated by the model in this delta.

images · array · optional

The images generated by the model in this delta.

Items
Image · object
Properties
type · "image_url" · required
image_url · object · required
Properties
url · string · required
finish_reason · enum · optional

The reason why the model finished generating the response.

Variants
Stop · "stop"

The model finished generating because it reached a natural stopping point.

Length · "length"

The model finished generating because it reached the maximum token limit.

ToolCalls · "tool_calls"

The model finished generating because it made one or more tool calls.

ContentFilter · "content_filter"

The model finished generating because it triggered a content filter.

Error · "error"

The model finished generating because an error occurred.

index · number · required

The index of the choice in the list of choices.

logprobs · object · optional

The log probabilities of the tokens in the delta.

Properties
content · array · optional

An array of log probabilities for each token in the content.

Items
Logprob · object
Properties
token · string · required

The token text.

bytes · array · optional

The byte representation of the token.

Items
Byte · number

A byte in the token's byte representation.

logprob · number · required

The log probability of the token.

top_logprobs · array · required
Items
Top Logprob · object
Properties
token · string · required

The token text.

bytes · array · optional

The byte representation of the token.

Items
Byte · number

A byte in the token's byte representation.

logprob · number · optional

The log probability of the token.

refusal · array · optional

An array of log probabilities for each token in the refusal.

Items
Logprob · object
Properties
token · string · required

The token text.

bytes · array · optional

The byte representation of the token.

Items
Byte · number

A byte in the token's byte representation.

logprob · number · required

The log probability of the token.

top_logprobs · array · required
Items
Top Logprob · object
Properties
token · string · required

The token text.

bytes · array · optional

The byte representation of the token.

Items
Byte · number

A byte in the token's byte representation.

logprob · number · optional

The log probability of the token.

generate_id · string · optional

A hash of the text content of the choice. Only present for "generate" LLMs.

confidence_id · enum · optional

For "generate" LLMs, a hash of the text content of the choice. When the content is JSON and a Response Format was provided, the content may be modified before the hash is computed: properties marked "_confidence": false (true by default) are omitted, so choices that differ only on unimportant properties are treated as the same; object property keys and array items marked "_preserveOrder": false (the default) are sorted, so choices with the same content in different orders are treated as the same.

For "select" LLMs, the Confidence ID is the Confidence ID of the Generate ID that was selected. If using select logprobs, and the LLM selected multiple Generate IDs, the Confidence ID will be a probability distribution.

Variants
Confidence ID · string

A single Confidence ID.

Confidence ID Distribution · map<string, number>

A map of Confidence IDs to their probabilities. All probabilities will sum to 1.

confidence_weight · number · optional

The weight of the LLM that produced this choice. For "static" weight, this is always a fixed value. For "training_table" weight, it depends on the training table data.

confidence · number · optional

The confidence of the choice. Each choice with the same Confidence ID will have the same Confidence. Computed by dividing the total weight of LLMs that produced a choice with the same Confidence ID by the total weight of all Confidence IDs.

embedding · enum · optional

If the "embeddings" field was specified in the request, an embedding vector of the text content of the choice, or an error if the embedding failed.

Variants
Response · object

The embedding response.

Properties
data · array · required

An array of embedding objects.

Items
Embedding · object

An embedding vector.

Properties
embedding · array · required

The embedding vector as an array of floats.

Items
Float · number

A float in the embedding vector.

index · number · required
object · "embedding" · required
model · string · required

The name of the model used to generate the embeddings.

object · "list" · required
usage · object · optional

An object containing token usage statistics for the embeddings request.

Properties
completion_tokens · number · required

The number of tokens generated in the completion.

prompt_tokens · number · required

The number of tokens in the input prompt.

total_tokens · number · required

The total number of tokens used (prompt + completion).

completion_tokens_details · object · optional
Properties
accepted_prediction_tokens · number · optional
rejected_prediction_tokens · number · optional
audio_tokens · number · optional

The number of audio tokens generated.

reasoning_tokens · number · optional

The number of reasoning tokens generated.

prompt_tokens_details · object · optional
Properties
audio_tokens · number · optional

The number of audio tokens in the input prompt.

cached_tokens · number · optional

The number of cached tokens in the input prompt.

cost · number · optional

The cost incurred for this request, in Credits.

cost_details · object · optional
Properties
upstream_inference_cost · number · optional

The cost charged by the upstream LLM provider, in Credits.

upstream_upstream_inference_cost · number · optional

The cost charged by the upstream LLM provider's own upstream LLM provider, in Credits.

Error · object

An error occurred while generating the embedding.

Properties
code · number · required

The HTTP status code for the error.

message · json_value

A JSON message describing the error. Typically either a string or an object.

error · object · optional

If an error occurred while generating this choice, the error object.

Properties
code · number · required

The HTTP status code for the error.

message · json_value

A JSON message describing the error. Typically either a string or an object.

model · string · required

The base62 22-character unique identifier for the LLM that produced this choice. If this choice was produced by the Response Format (with "select_deterministic": true), it will contain the name of the "json_schema".

model_index · number · optional

The index of the LLM in the Query Model that produced this choice. May be missing if this choice was produced by the Response Format.

completion_metadata · object · required

Details about the chat completion which produced this choice.

Properties
id · string · required

A unique identifier for the chat completion.

created · number · required

The Unix timestamp (in seconds) when the first chat completion chunk was created.

model · string · required

The model used for the chat completion.

service_tier · enum · optional

The service tier used for the chat completion chunk.

Variants
Auto · "auto"
Default · "default"
Flex · "flex"
system_fingerprint · string · optional

A fingerprint representing the system configuration used for the chat completion chunk.

usage · object · optional

An object containing token usage statistics for the chat completion.

Properties
completion_tokens · number · required

The number of tokens generated in the completion.

prompt_tokens · number · required

The number of tokens in the input prompt.

total_tokens · number · required

The total number of tokens used (prompt + completion).

completion_tokens_details · object · optional
Properties
accepted_prediction_tokens · number · optional
rejected_prediction_tokens · number · optional
audio_tokens · number · optional

The number of audio tokens generated.

reasoning_tokens · number · optional

The number of reasoning tokens generated.

prompt_tokens_details · object · optional
Properties
audio_tokens · number · optional

The number of audio tokens in the input prompt.

cached_tokens · number · optional

The number of cached tokens in the input prompt.

cost · number · optional

The cost incurred for this chat completion, in Credits.

cost_details · object · optional
Properties
upstream_inference_cost · number · optional

The cost charged by the upstream LLM provider, in Credits.

upstream_upstream_inference_cost · number · optional

The cost charged by the upstream LLM provider's own upstream LLM provider, in Credits.

provider · string · optional

The upstream (or upstream upstream) LLM provider used for the chat completion chunk.

created · number · required

The Unix timestamp (in seconds) when the first chat completion chunk was created.

model · string · required

The model which generated the completion. Will be prefixed by "objectiveai/".

object · "chat.completion.chunk" · required
service_tier · enum · optional

The service tier used for the chat completion chunk.

Variants
Auto · "auto"
Default · "default"
Flex · "flex"
system_fingerprint · string · optional

A fingerprint representing the system configuration used for the chat completion chunk.

usage · object · optional

An object containing token usage statistics for the chat completion.

Properties
completion_tokens · number · required

The number of tokens generated in the completion.

prompt_tokens · number · required

The number of tokens in the input prompt.

total_tokens · number · required

The total number of tokens used (prompt + completion).

completion_tokens_details · object · optional
Properties
accepted_prediction_tokens · number · optional
rejected_prediction_tokens · number · optional
audio_tokens · number · optional

The number of audio tokens generated.

reasoning_tokens · number · optional

The number of reasoning tokens generated.

prompt_tokens_details · object · optional
Properties
audio_tokens · number · optional

The number of audio tokens in the input prompt.

cached_tokens · number · optional

The number of cached tokens in the input prompt.

cost · number · optional

The cost incurred for this chat completion, in Credits.

cost_details · object · optional
Properties
upstream_inference_cost · number · optional

The cost charged by the upstream LLM provider, in Credits.

upstream_upstream_inference_cost · number · optional

The cost charged by the upstream LLM provider's own upstream LLM provider, in Credits.

training_table_data · object · optional

Training table data associated with the completion, if applicable.

Properties
response_format_hash · string · required

The hash of the response format used to generate the completion. Each response format has separate training table data.

embeddings_response · object · required

The embeddings response computed from the request messages.

Properties
data · array · required

An array of embedding objects.

Items
Embedding · object

An embedding vector.

Properties
embedding · array · required

The embedding vector as an array of floats.

Items
Float · number

A float in the embedding vector.

index · number · required
object · "embedding" · required
model · string · required

The name of the model used to generate the embeddings.

object · "list" · required
usage · object · optional

An object containing token usage statistics for the embeddings request.

Properties
completion_tokens · number · required

The number of tokens generated in the completion.

prompt_tokens · number · required

The number of tokens in the input prompt.

total_tokens · number · required

The total number of tokens used (prompt + completion).

completion_tokens_details · object · optional
Properties
accepted_prediction_tokens · number · optional
rejected_prediction_tokens · number · optional
audio_tokens · number · optional

The number of audio tokens generated.

reasoning_tokens · number · optional

The number of reasoning tokens generated.

prompt_tokens_details · object · optional
Properties
audio_tokens · number · optional

The number of audio tokens in the input prompt.

cached_tokens · number · optional

The number of cached tokens in the input prompt.

cost · number · optional

The cost incurred for this request, in Credits.

cost_details · object · optional
Properties
upstream_inference_cost · number · optional

The cost charged by the upstream LLM provider, in Credits.

upstream_upstream_inference_cost · number · optional

The cost charged by the upstream LLM provider's own upstream LLM provider, in Credits.
