Request Headers

authorization (string, optional)

The authorization token. Required when the me query parameter is true.

Request Path

slug (string, required)

The unique identifier of the model.

Request Query

metadata (boolean, optional)

Whether to include metadata in the response. If false, all other query fields are ignored.

me (boolean, optional)

Whether to only return the user's own metadata statistics. If true, authorization is required.

from (string, optional)

An RFC 3339 start date to filter metadata statistics by.

to (string, optional)

An RFC 3339 end date to filter metadata statistics by.
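To show how the path, header, and query fields fit together, here is a rough request sketch in Python. The base URL, path layout, and token format are assumptions for illustration, not part of this reference; substitute the values for your deployment.

    import requests

    slug = "my-model-slug"  # the Query Model's unique identifier (path parameter)
    response = requests.get(
        f"https://api.example.com/models/{slug}",        # hypothetical base URL and path
        headers={"authorization": "Bearer YOUR_TOKEN"},  # required when me=true
        params={
            "metadata": "true",              # include metadata statistics
            "me": "true",                    # only the caller's own statistics
            "from": "2024-01-01T00:00:00Z",  # RFC 3339 start of the statistics window
            "to": "2024-02-01T00:00:00Z",    # RFC 3339 end of the statistics window
        },
    )
    response.raise_for_status()
    model = response.json()

If metadata is false or omitted, the me, from, and to parameters are ignored.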

Response Body

name (string, required)

A base62 22-character unique identifier for the Query Model. This is a hash of all parameters.

training_table_name (string, optional)

A base62 22-character unique identifier for the Query Model. This is a hash of some parameters. Only present with Training Table Weight.

models (array, min: 1, required)

The LLMs which make up the Model.

Items
Query LLM (object)

An LLM which is part of the Model.

Properties
id (string, required)

Model ID used to generate the response.

mode (enum, required)

The mode of the model, which determines whether it generates a response or selects from the generated options.

Variants
Generate ("generate")

The model generates a response.

Select Thinking ("select_thinking")

The model selects a Generate ID. The model will output reasoning, even if the LLM is not a reasoning model. Best for non-reasoning models.

Select Non Thinking ("select_non_thinking")

The model selects a Generate ID.

Select Thinking Logprobs ("select_thinking_logprobs")

The model selects one or more Generate IDs as a probability distribution. The model will output reasoning, even if the LLM is not a reasoning model. Best for non-reasoning models.

Select Non Thinking Logprobs ("select_non_thinking_logprobs")

The model selects one or more Generate IDs as a probability distribution.

select_top_logprobs (number, min: 0, max: 20, optional)

If the mode is one of the select logprobs modes, this controls how many of the top options are returned with their probabilities.
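As a minimal sketch, a Query LLM entry using one of the select logprobs modes might look like the following Python dictionary; the model ID is a placeholder, and the required weight field is only hinted at in a comment because its variants are described further below.

    selector_llm = {
        "id": "openai/gpt-4o",               # placeholder model ID
        "mode": "select_thinking_logprobs",  # select among Generate IDs as a probability
                                             # distribution, with reasoning output
        "select_top_logprobs": 5,            # return the top 5 options with probabilities
        # "weight": {...},                   # required; see the weight variants below
    }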

frequency_penalty (number, min: -2, max: 2, optional)

Controls repetition based on how often tokens already appear in the input: the more often a token has occurred, the less likely it is to be used again, with the penalty scaling with the number of occurrences. Negative values encourage token reuse.

logit_bias (map<string, number>, optional)

Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
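For example, a bias map keys token IDs (as strings, specific to the target model's tokenizer) to bias values; the IDs below are placeholders and must be looked up with the actual tokenizer.

    logit_bias = {
        "50256": -100,  # effectively ban this token
        "1820": 5,      # make this token somewhat more likely
    }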

max_completion_tokens (number, min: 1, optional)

An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens.

presence_penalty (number, min: -2, max: 2, optional)

Controls repetition based on whether a token has already appeared in the input: tokens that have occurred at least once are penalized, encouraging more diverse token usage. Unlike the frequency penalty, the penalty does not scale with the number of occurrences. Negative values encourage token reuse.
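One common formulation (the OpenAI-style sampler; whether this exact formula is used here is not stated in this reference) applies both penalties to a token's logit before sampling, based on how many times the token has already occurred:

    def penalized_logit(logit: float, count: int,
                        frequency_penalty: float, presence_penalty: float) -> float:
        # The frequency term scales with the occurrence count;
        # the presence term applies once for any token already seen.
        return (logit
                - count * frequency_penalty
                - (1.0 if count > 0 else 0.0) * presence_penalty)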

reasoning_effort (enum, optional)

Constrains effort on reasoning for some reasoning models.

Variants
Low ("low")
Medium ("medium")
High ("high")

stop (enum, optional)

Stop generation immediately if the model encounters any sequence specified in stop, given as a single string or an array of strings.

Variants
Stop String (string)
Stop Array (array)
Items
Stop String (string)

temperature (number, min: 0, max: 2, optional)

This setting influences the variety in the model’s responses. Lower values lead to more predictable and typical responses, while higher values encourage more diverse and less common responses. At 0, the model always gives the same response for a given input.

top_p (number, min: 0, max: 1, optional)

This setting limits the model’s choices to a percentage of likely tokens: only the top tokens whose probabilities add up to P. A lower value makes the model’s responses more predictable, while the default setting allows for a full range of token choices. Think of it like a dynamic Top-K.

max_tokens (number, min: 1, optional)

This sets the upper limit for the number of tokens the model can generate in response. It won’t produce more than this limit. The maximum value is the context length minus the prompt length.

min_p (number, min: 0, max: 1, optional)

Represents the minimum probability for a token to be considered, relative to the probability of the most likely token. (The value changes depending on the confidence level of the most probable token.) If your Min-P is set to 0.1, that means it will only allow for tokens that are at least 1/10th as probable as the best possible option.
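As a conceptual sketch, the snippet below prunes a toy token distribution with Top-P and Min-P; real samplers differ in details such as renormalization and tie-breaking.

    probs = {"the": 0.50, "a": 0.30, "dog": 0.15, "zebra": 0.05}

    def top_p_filter(probs, top_p):
        # Keep the most likely tokens until their cumulative probability reaches top_p.
        kept, total = {}, 0.0
        for token, p in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
            kept[token] = p
            total += p
            if total >= top_p:
                break
        return kept

    def min_p_filter(probs, min_p):
        # Keep tokens at least min_p times as probable as the most likely token.
        cutoff = min_p * max(probs.values())
        return {token: p for token, p in probs.items() if p >= cutoff}

    print(top_p_filter(probs, 0.8))  # {'the': 0.5, 'a': 0.3}
    print(min_p_filter(probs, 0.2))  # drops 'zebra' (0.05 < 0.2 * 0.5)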

provider (object, optional)

OpenRouter provider preferences.

Properties
order (array, optional)

List of provider slugs to try in order.

Items
Provider Slug (string)

allow_fallbacks (boolean, optional)

Whether to allow backup providers when the primary is unavailable.

require_parameters (boolean, optional)

Only use providers that support all parameters in your request.

data_collection (enum, optional)

Control whether to use providers that may store data.

Variants
Allow ("allow")
Deny ("deny")

only (array, optional)

List of provider slugs to allow for this request.

Items
Provider Slug (string)

ignore (array, optional)

List of provider slugs to skip for this request.

Items
Provider Slug (string)

quantizations (array, optional)

List of quantization levels to filter by.

Items
Quantization Level (string)

sort (string, optional)

Sort providers by price or throughput.
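Putting the provider preferences together, a hypothetical configuration might look as follows; the provider slugs and quantization level are placeholders.

    provider = {
        "order": ["provider-a", "provider-b"],  # placeholder slugs, tried in this order
        "allow_fallbacks": True,                # allow backups if the primary is unavailable
        "require_parameters": True,             # only providers supporting every request parameter
        "data_collection": "deny",              # skip providers that may store data
        "ignore": ["provider-c"],               # never route to this slug
        "quantizations": ["fp16"],              # placeholder quantization filter
        "sort": "price",                        # by price or throughput; accepted values not listed here
    }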

reasoning (object, optional)

OpenRouter reasoning configuration.

Properties
max_tokens (number, min: 1, optional)

An upper bound for the number of tokens that can be generated for reasoning.

effort (enum, optional)

Constrains effort on reasoning for some reasoning models.

Variants
Low ("low")
Medium ("medium")
High ("high")

enabled (boolean, optional)

Whether reasoning is enabled for this request.
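For example, a minimal reasoning configuration might be:

    reasoning = {
        "enabled": True,
        "effort": "medium",   # or set max_tokens to bound reasoning tokens instead
        # "max_tokens": 2048,
    }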

repetition_penalty (number, min: 0, max: 2, optional)

Helps to reduce the repetition of tokens from the input. A higher value makes the model less likely to repeat tokens, but too high a value can make the output less coherent (often producing run-on sentences that lack small words). The penalty scales based on the original token's probability.

top_a (number, min: 0, max: 1, optional)

Consider only the top tokens with “sufficiently high” probabilities based on the probability of the most likely token. Think of it like a dynamic Top-P. A lower Top-A value focuses the choices based on the highest probability token but with a narrower scope. A higher Top-A value does not necessarily affect the creativity of the output, but rather refines the filtering process based on the maximum probability.

top_k (number, min: 1, optional)

This limits the model’s choice of tokens at each step, making it choose from a smaller set. A value of 1 means the model will always pick the most likely next token, leading to predictable results. By default this setting is disabled, allowing the model to consider all choices.

verbosity (enum, optional)

Controls the verbosity and length of the model response. Lower values produce more concise responses, while higher values produce more detailed and comprehensive responses.

Variants
Low ("low")
Medium ("medium")
High ("high")

models (array, optional)

Fallback models, tried in order if the primary model fails.

Items
Model ID (string)

weight (enum, required)

The weight of the model, which determines its influence on the Confidence Score. Must match the weight strategy of the parent Model.

Variants
Static Weight (object)

A static weight value.

Properties
type ("static", required)
weight (number, required)

The static weight value.

Training Table Weight (object)

A dynamic weight value based on training table data.

Properties
type ("training_table", required)
base_weight (number, required)

The base weight value, uninfluenced by training table data.

min_weight (number, required)

The minimum weight value. A model that never matches the correct answer will have this weight.

max_weight (number, required)

The maximum weight value. A model that always matches the correct answer will have this weight.
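For illustration, the two weight variants could be written as follows; the numeric values are placeholders.

    static_weight = {"type": "static", "weight": 1.0}

    training_table_weight = {
        "type": "training_table",
        "base_weight": 1.0,   # weight before training table influence
        "min_weight": 0.25,   # weight for an LLM that never matches the correct answer
        "max_weight": 2.0,    # weight for an LLM that always matches the correct answer
    }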

name (string, required)

A base62 22-character unique identifier for the Query LLM. This is a hash of all parameters.

training_table_name (string, optional)

A base62 22-character unique identifier for the Query LLM. This is a hash of some parameters. Only present with Training Table Weight.

index (string, required)

The index of the Query LLM within the parent Query Model.

training_table_index (string, optional)

The index of the Query LLM within the Training Table. Only present with Training Table Weight.

weight (enum, required)

The weight strategy for the Model, which determines how the Confidence Score is calculated.

Variants
Static Weight (object)

Each LLM has a fixed weight.

Properties
type ("static", required)

Training Table Weight (object)

Each LLM has a dynamic weight based on training table data.

Properties
type ("training_table", required)
embeddings_model (string, required)

The embedding model used to compute prompt embeddings for a training table vector search.

top (number, min: 1, required)

The number of most similar training table entries to consider when computing the dynamic weight.
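A sketch of the two Model-level weight strategies; the embeddings model ID is a placeholder.

    # Static strategy: each Query LLM carries its own fixed weight.
    static_strategy = {"type": "static"}

    # Training table strategy: each LLM's weight is derived from the `top` most similar
    # training table entries, found by a vector search over prompt embeddings.
    training_table_strategy = {
        "type": "training_table",
        "embeddings_model": "text-embedding-3-small",  # placeholder embedding model ID
        "top": 10,
    }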

user_id (string, optional)

The ID of the user who created the Query Model.

created (string, optional)

The RFC 3339 timestamp when the Query Model was created.

requests (number, optional)

The number of requests made with the Query Model.

chat_completion_tokens (number, optional)

The number of chat completion tokens generated by the Query Model.

chat_prompt_tokens (number, optional)

The number of chat prompt tokens processed by the Query Model.

chat_cost (number, optional)

The total cost of chat completions generated by the Query Model, in Credits.

embedding_completion_tokens (number, optional)

The number of embedding completion tokens generated by the Query Model.

embedding_prompt_tokens (number, optional)

The number of embedding prompt tokens processed by the Query Model.

embedding_cost (number, optional)

The total cost of embedding completions generated by the Query Model, in Credits.
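Continuing the request sketch from the Request Query section, the metadata statistics can be read from the parsed response; since these fields are optional, treat them defensively.

    # `model` is the parsed JSON response from the earlier request sketch.
    print(model["name"], "created", model.get("created"))
    if model.get("requests") is not None:
        chat_tokens = model.get("chat_prompt_tokens", 0) + model.get("chat_completion_tokens", 0)
        total_cost = model.get("chat_cost", 0) + model.get("embedding_cost", 0)
        print(f"{model['requests']} requests, {chat_tokens} chat tokens, {total_cost} Credits")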
