Request Headers
Authorization token (required).
Request Path
The owner of the GitHub repository containing the function.
The name of the GitHub repository containing the function.
The commit SHA of the GitHub repository containing the function.
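A minimal request sketch follows. The base URL, route shape, and Bearer-style Authorization header are placeholders for illustration only; substitute the actual route and auth scheme defined by this reference.

```typescript
// Hypothetical sketch only: the base URL, route shape, and Bearer scheme are
// placeholders for illustration, not the documented values.
const owner = "my-org";       // owner of the GitHub repository containing the function
const repo = "my-function";   // name of the repository containing the function
const commit = "0123abcd";    // commit SHA pinning the function version

const url = `https://api.example.com/functions/${owner}/${repo}/${commit}`;
const headers = {
  Authorization: `Bearer ${process.env.API_TOKEN}`, // required authorization token
  "Content-Type": "application/json",
};
```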
Request Body
Parameters for executing a remote function with an inline profile and streaming the response.
Properties
The retry token provided by a previous incomplete or failed function execution.
The input provided to the function.
Variants
A text rich content part.
Properties
The text content.
An image rich content part.
Properties
The URL of the image and its optional detail level.
Properties
Either a URL of the image or the base64 encoded image data.
Specifies the detail level of the image.
Variants
An audio rich content part.
Properties
The audio data and its format.
Properties
Base64 encoded audio data.
The format of the encoded audio data.
Variants
A video rich content part.
Properties
Variants
Properties
URL of the video.
A file rich content part.
Properties
The file to be used as input, either as base64 data, an uploaded file ID, or a URL.
Properties
The base64 encoded file data, used when passing the file to the model as a string.
The ID of an uploaded file to use as input.
The name of the file, used when passing the file to the model as a string.
The URL of the file, used when passing the file to the model as a URL.
Values
The input provided to the function.
Items
The input provided to the function.
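As a rough illustration of the input shapes above, the sketch below builds a rich-content input from a text part and an image part, plus the map form. The field names (`type`, `text`, `image_url`, `detail`) follow the common chat-completions convention and are an assumption, not a guarantee of this schema.

```typescript
// Illustrative only: part field names follow the common rich-content convention
// (text / image_url / input_audio / file) and may not match this schema exactly.
const input = [
  { type: "text", text: "Summarize the attached diagram." }, // a text rich content part
  {
    type: "image_url",
    image_url: {
      url: "https://example.com/diagram.png", // a URL or base64-encoded image data
      detail: "auto",                          // optional detail level
    },
  },
];

// The input may also be a map of named inputs ("Values") or a list of inputs ("Items").
const mappedInput = { document: input, question: "Is the diagram internally consistent?" };
```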
Options for selecting the upstream provider of this completion.
Properties
Specifies whether to allow providers which collect data.
Variants
Whether to enforce Zero Data Retention (ZDR) policies when selecting providers.
Specifies the sorting strategy for provider selection.
Variants
Properties
Maximum price for prompt tokens.
Maximum price for completion tokens.
Maximum price for image generation.
Maximum price for audio generation.
Maximum price per request.
Preferred minimum throughput for the provider.
Preferred maximum latency for the provider.
Minimum throughput for the provider.
Maximum latency for the provider.
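A sketch of the provider-selection options, assuming they are sent as a single `provider` object with snake_case keys; the key names below are inferred from the property descriptions above and are not authoritative.

```typescript
// Key names are assumptions inferred from the property descriptions above.
const provider = {
  data_collection: "deny",     // disallow providers which collect data
  zdr: true,                   // enforce Zero Data Retention policies
  sort: "throughput",          // sorting strategy for provider selection
  max_prompt_price: 1.0,       // maximum price for prompt tokens
  max_completion_price: 2.0,   // maximum price for completion tokens
  min_throughput: 50,          // minimum acceptable throughput for the provider
  max_latency: 2000,           // maximum acceptable latency for the provider
};
```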
If specified, upstream systems will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result.
The maximum total time in milliseconds to spend on retries when a transient error occurs.
The maximum time in milliseconds to wait for the first chunk of a streaming response.
The maximum time in milliseconds to wait between subsequent chunks of a streaming response.
A function profile; remote profiles may omit the commit SHA.
Variants
Properties
The owner of the GitHub repository containing the profile.
The name of the GitHub repository containing the profile.
The commit SHA of the GitHub repository containing the profile.
Whether to stream the response as a series of chunks.
Parameters for executing a remote function with an inline profile and a unary response.
Properties
The retry token provided by a previous incomplete or failed function execution.
The input provided to the function.
Variants
A text rich content part.
Properties
The text content.
An image rich content part.
Properties
The URL of the image and its optional detail level.
Properties
Either a URL of the image or the base64 encoded image data.
Specifies the detail level of the image.
Variants
An audio rich content part.
Properties
The audio data and its format.
Properties
Base64 encoded audio data.
The format of the encoded audio data.
Variants
A video rich content part.
Properties
Variants
Properties
URL of the video.
A file rich content part.
Properties
The file to be used as input, either as base64 data, an uploaded file ID, or a URL.
Properties
The base64 encoded file data, used when passing the file to the model as a string.
The ID of an uploaded file to use as input.
The name of the file, used when passing the file to the model as a string.
The URL of the file, used when passing the file to the model as a URL.
Values
The input provided to the function.
Items
The input provided to the function.
Options for selecting the upstream provider of this completion.
Properties
Specifies whether to allow providers which collect data.
Variants
Whether to enforce Zero Data Retention (ZDR) policies when selecting providers.
Specifies the sorting strategy for provider selection.
Variants
Properties
Maximum price for prompt tokens.
Maximum price for completion tokens.
Maximum price for image generation.
Maximum price for audio generation.
Maximum price per request.
Preferred minimum throughput for the provider.
Preferred maximum latency for the provider.
Minimum throughput for the provider.
Maximum latency for the provider.
If specified, upstream systems will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result.
The maximum total time in milliseconds to spend on retries when a transient error occurs.
The maximum time in milliseconds to wait for the first chunk of a streaming response.
The maximum time in milliseconds to wait between subsequent chunks of a streaming response.
A function profile; remote profiles may omit the commit SHA.
Variants
Properties
The owner of the GitHub repository containing the profile.
The name of the GitHub repository containing the profile.
The commit SHA of the GitHub repository containing the profile.
Whether to stream the response as a series of chunks.
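Putting the pieces together, below is a hedged end-to-end sketch of a unary execution (`stream: false`); setting `stream: true` instead selects the streaming variant consumed at the end of this page. The body key spellings (`input`, `provider`, `seed`, `profile`, `stream`, `retry_token`) are assumptions based on the property descriptions above.

```typescript
// Hedged sketch of a unary execution request; body key spellings are assumptions.
async function executeFunction(
  url: string,                     // route built from owner/repo/commit (see the path sketch above)
  headers: Record<string, string>, // Authorization and Content-Type headers
  input: unknown,                  // rich-content input (see the input sketch above)
  provider: unknown,               // provider-selection options (see the provider sketch above)
  retryToken?: string,             // retry token from a previous incomplete or failed execution
): Promise<unknown> {
  const body = {
    input,
    provider,
    seed: 42,                      // best-effort deterministic sampling
    profile: { owner: "my-org", repository: "my-profile" }, // remote profile; commit may be omitted
    stream: false,                 // unary response; true selects the streaming variant
    ...(retryToken ? { retry_token: retryToken } : {}),
  };

  const response = await fetch(url, {
    method: "POST",
    headers,
    body: JSON.stringify(body),
  });
  if (!response.ok) throw new Error(`execution failed: ${response.status}`);
  return response.json();          // the function execution object described below
}
```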
Response Body
The unique identifier of the function execution.
The tasks executed as part of the function execution.
Items
A function execution task.
Properties
The unique identifier of the function execution.
The tasks executed as part of the function execution.
When true, indicates that one or more tasks encountered errors during execution.
The output of the function execution.
Variants
The scalar output of the function execution.
The vector output of the function execution.
Items
Items
A JSON value.
Values
A JSON value.
When non-null, indicates that an error occurred during the function execution.
Properties
The status code of the error.
The message or details of the error.
A token which may be used to retry the function execution.
The Unix timestamp (in seconds) when the function execution was created.
The unique identifier of the function being executed.
The unique identifier of the profile being used.
The object type.
Variants
Token and cost usage statistics for the completion.
Properties
The number of tokens generated in the completion.
The number of tokens in the prompt.
The total number of tokens used in the prompt or generated in the completion.
Detailed breakdown of generated completion tokens.
Properties
The number of accepted prediction tokens in the completion.
The number of generated audio tokens in the completion.
The number of generated reasoning tokens in the completion.
The number of rejected prediction tokens in the completion.
Detailed breakdown of prompt tokens.
Properties
The number of audio tokens in the prompt.
The number of cached tokens in the prompt.
The number of prompt tokens written to cache.
The number of video tokens in the prompt.
The cost in credits incurred for this completion.
Detailed breakdown of upstream costs incurred.
Properties
The cost incurred upstream.
The cost incurred by upstream's upstream.
The total cost in credits incurred including upstream costs.
The index of the task in the sequence of tasks.
The index of the task amongst all mapped and non-skipped compiled tasks. Used internally.
The path of this task, which may be used to locate this nested task amongst the root function's tasks and sub-tasks.
Items
A vector completion task.
Properties
The unique identifier of the vector completion.
The list of chat completions created for this vector completion.
Items
A chat completion generated in the pursuit of a vector completion.
Properties
The unique identifier of the chat completion.
The unique identifier of the upstream chat completion.
The list of choices in this chat completion.
Items
A choice in a unary chat completion response.
Properties
A message generated by the assistant.
Properties
The content of the message.
The refusal message, if any.
The role of the message author.
Variants
The tool calls made by the assistant, if any.
Items
A function tool call made by the assistant.
Properties
The unique identifier of the function tool.
Properties
The name of the function.
The arguments passed to the function.
The reasoning provided by the assistant, if any.
The images generated by the assistant, if any.
Items
Properties
Properties
The base64-encoded data URL of the generated image.
The reason why the assistant ceased to generate further tokens.
Variants
The index of the choice in the list of choices.
The log probabilities of the tokens generated by the model.
Properties
The log probabilities of the tokens in the content.
Items
The token selected by the sampler for this position, as well as the log probabilities of the top options.
Properties
The token string which was selected by the sampler.
The byte representation of the token which was selected by the sampler.
Items
The log probability of the token which was selected by the sampler.
The log probabilities of the top tokens for this position.
Items
The log probability of a token in the list of top tokens.
Properties
The token string.
The byte representation of the token.
Items
The log probability of the token.
The log probabilities of the tokens in the refusal.
Items
The token selected by the sampler for this position, as well as the log probabilities of the top options.
Properties
The token string which was selected by the sampler.
The byte representation of the token which was selected by the sampler.
Items
The log probability of the token which was selected by the sampler.
The log probabilities of the top tokens for this position.
Items
The log probability of a token in the list of top tokens.
Properties
The token string.
The byte representation of the token.
Items
The log probability of the token.
The Unix timestamp (in seconds) when the chat completion was created.
The unique identifier of the Ensemble LLM used for this chat completion.
The upstream model used for this chat completion.
Token and cost usage statistics for the completion.
Properties
The number of tokens generated in the completion.
The number of tokens in the prompt.
The total number of tokens used in the prompt or generated in the completion.
Detailed breakdown of generated completion tokens.
Properties
The number of accepted prediction tokens in the completion.
The number of generated audio tokens in the completion.
The number of generated reasoning tokens in the completion.
The number of rejected prediction tokens in the completion.
Detailed breakdown of prompt tokens.
Properties
The number of audio tokens in the prompt.
The number of cached tokens in the prompt.
The number of prompt tokens written to cache.
The number of video tokens in the prompt.
The cost in credits incurred for this completion.
Detailed breakdown of upstream costs incurred.
Properties
The cost incurred upstream.
The cost incurred by upstream's upstream.
The total cost in credits incurred including upstream costs.
The cost multiplier applied to upstream costs for computing ObjectiveAI costs.
Whether the completion used a BYOK (Bring Your Own Key) API Key.
The provider used for this chat completion.
The index of the completion amongst all chat completions.
An error encountered during the generation of this chat completion.
Properties
The status code of the error.
The message or details of the error.
The votes cast by the Ensemble LLMs within the provided Ensemble for the responses in the request.
Items
A vote from an Ensemble LLM within a Vector Completion.
Properties
The unique identifier of the Ensemble LLM which generated this vote.
The index of the Ensemble LLM in the Ensemble.
The flat index of the Ensemble LLM in the expanded Ensemble, accounting for counts.
The vote generated by this Ensemble LLM. Its length equals the number of responses provided in the request. If the Ensemble LLM used logprobs, it may be a probability distribution; otherwise, one response will have a value of 1 and the rest 0.
Items
The weight assigned to this vote.
Whether this vote came from a previous Vector Completion which was retried.
The scores for each response in the request, aggregated from the votes of the Ensemble LLMs.
Items
The weights assigned to each response in the request, aggregated from the votes of the Ensemble LLMs.
Items
The Unix timestamp (in seconds) when the vector completion was created.
The unique identifier of the Ensemble used for this vector completion.
Token and cost usage statistics for the completion.
Properties
The number of tokens generated in the completion.
The number of tokens in the prompt.
The total number of tokens used in the prompt or generated in the completion.
Detailed breakdown of generated completion tokens.
Properties
The number of accepted prediction tokens in the completion.
The number of generated audio tokens in the completion.
The number of generated reasoning tokens in the completion.
The number of rejected prediction tokens in the completion.
Detailed breakdown of prompt tokens.
Properties
The number of audio tokens in the prompt.
The number of cached tokens in the prompt.
The number of prompt tokens written to cache.
The number of video tokens in the prompt.
The cost in credits incurred for this completion.
Detailed breakdown of upstream costs incurred.
Properties
The cost incurred upstream.
The cost incurred by upstream's upstream.
The total cost in credits incurred including upstream costs.
The index of the task in the sequence of tasks.
The index of the task amongst all mapped and non-skipped compiled tasks. Used internally.
The path of this task, which may be used to locate this nested task amongst the root function's tasks and sub-tasks.
Items
When non-null, indicates that an error occurred during the vector completion task.
Properties
The status code of the error.
The message or details of the error.
When true, indicates that one or more tasks encountered errors during execution.
The output of the function execution.
Variants
The scalar output of the function execution.
The vector output of the function execution.
Items
Items
A JSON value.
Values
A JSON value.
When non-null, indicates that an error occurred during the function execution.
Properties
The status code of the error.
The message or details of the error.
A token which may be used to retry the function execution.
The Unix timestamp (in seconds) when the function execution was created.
The unique identifier of the function being executed.
The unique identifier of the profile being used.
The object type.
Variants
Token and cost usage statistics for the completion.
Properties
The number of tokens generated in the completion.
The number of tokens in the prompt.
The total number of tokens used in the prompt or generated in the completion.
Detailed breakdown of generated completion tokens.
Properties
The number of accepted prediction tokens in the completion.
The number of generated audio tokens in the completion.
The number of generated reasoning tokens in the completion.
The number of rejected prediction tokens in the completion.
Detailed breakdown of prompt tokens.
Properties
The number of audio tokens in the prompt.
The number of cached tokens in the prompt.
The number of prompt tokens written to cache.
The number of video tokens in the prompt.
The cost in credits incurred for this completion.
Detailed breakdown of upstream costs incurred.
Properties
The cost incurred upstream.
The cost incurred by upstream's upstream.
The total cost in credits incurred including upstream costs.
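For reference, a partial TypeScript view of the unary response fields a caller is most likely to read; the property names below are inferred from the descriptions above and capture only a subset of the full schema.

```typescript
// Partial, assumed shape of the unary response; property names are inferred
// from the descriptions above and not guaranteed to match the wire format.
interface FunctionExecution {
  id: string;                                       // unique identifier of the function execution
  errored: boolean;                                 // true if one or more tasks encountered errors
  output?: unknown;                                 // scalar or vector output of the function execution
  error?: { code: number; message: string } | null; // non-null when the execution itself failed
  retry_token?: string;                             // token which may be used to retry the execution
  usage?: { prompt_tokens: number; completion_tokens: number; total_tokens: number; cost: number };
}

function summarize(execution: FunctionExecution): void {
  if (execution.error) {
    // The retry token, if present, can be passed back on a subsequent request.
    console.error(`execution failed (${execution.error.code}): ${execution.error.message}`);
  } else {
    console.log("output:", execution.output, "credits spent:", execution.usage?.cost);
  }
}
```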
Response Body (Streaming)
The unique identifier of the function execution.
The tasks executed as part of the function execution.
Items
A chunk of a function execution task.
Properties
The unique identifier of the function execution.
The tasks executed as part of the function execution.
When true, indicates that one or more tasks encountered errors during execution.
The output of the function execution.
Variants
The scalar output of the function execution.
The vector output of the function execution.
Items
Items
A JSON value.
Values
A JSON value.
When present, indicates that an error occurred during the function execution.
Properties
The status code of the error.
The message or details of the error.
A token which may be used to retry the function execution.
The Unix timestamp (in seconds) when the function execution chunk was created.
The unique identifier of the function being executed.
The unique identifier of the profile being used.
The object type.
Variants
Token and cost usage statistics for the completion.
Properties
The number of tokens generated in the completion.
The number of tokens in the prompt.
The total number of tokens used in the prompt or generated in the completion.
Detailed breakdown of generated completion tokens.
Properties
The number of accepted prediction tokens in the completion.
The number of generated audio tokens in the completion.
The number of generated reasoning tokens in the completion.
The number of rejected prediction tokens in the completion.
Detailed breakdown of prompt tokens.
Properties
The number of audio tokens in the prompt.
The number of cached tokens in the prompt.
The number of prompt tokens written to cache.
The number of video tokens in the prompt.
The cost in credits incurred for this completion.
Detailed breakdown of upstream costs incurred.
Properties
The cost incurred upstream.
The cost incurred by upstream's upstream.
The total cost in credits incurred including upstream costs.
The index of the task in the sequence of tasks.
The index of the task amongst all mapped and non-skipped compiled tasks. Used internally.
The path of this task, which may be used to locate this nested task amongst the root function's tasks and sub-tasks.
Items
A chunk of a vector completion task.
Properties
The unique identifier of the vector completion.
The list of chat completion chunks created for this vector completion.
Items
A chat completion chunk generated in the pursuit of a vector completion.
Properties
The unique identifier of the chat completion.
The unique identifier of the upstream chat completion.
The list of choices in this chunk.
Items
A choice in a streaming chat completion response.
Properties
A delta in a streaming chat completion response.
Properties
The content added in this delta.
The refusal message added in this delta.
The role of the message author.
Variants
Tool calls made in this delta.
Items
A function tool call made by the assistant.
Properties
The index of the tool call in the sequence of tool calls.
The unique identifier of the function tool.
Properties
The name of the function.
The arguments passed to the function.
The reasoning added in this delta.
Images added in this delta.
Items
Properties
Properties
The base64-encoded data URL of the generated image.
The reason why the assistant ceased to generate further tokens.
Variants
The index of the choice in the list of choices.
The log probabilities of the tokens generated by the model.
Properties
The log probabilities of the tokens in the content.
Items
The token selected by the sampler for this position, as well as the log probabilities of the top options.
Properties
The token string which was selected by the sampler.
The byte representation of the token which was selected by the sampler.
Items
The log probability of the token which was selected by the sampler.
The log probabilities of the top tokens for this position.
Items
The log probability of a token in the list of top tokens.
Properties
The token string.
The byte representation of the token.
Items
The log probability of the token.
The log probabilities of the tokens in the refusal.
Items
The token selected by the sampler for this position, as well as the log probabilities of the top options.
Properties
The token string which was selected by the sampler.
The byte representation of the token which was selected by the sampler.
Items
The log probability of the token which was selected by the sampler.
The log probabilities of the top tokens for this position.
Items
The log probability of a token in the list of top tokens.
Properties
The token string.
The byte representation of the token.
Items
The log probability of the token.
The Unix timestamp (in seconds) when the chat completion was created.
The unique identifier of the Ensemble LLM used for this chat completion.
The upstream model used for this chat completion.
Token and cost usage statistics for the completion.
Properties
The number of tokens generated in the completion.
The number of tokens in the prompt.
The total number of tokens used in the prompt or generated in the completion.
Detailed breakdown of generated completion tokens.
Properties
The number of accepted prediction tokens in the completion.
The number of generated audio tokens in the completion.
The number of generated reasoning tokens in the completion.
The number of rejected prediction tokens in the completion.
Detailed breakdown of prompt tokens.
Properties
The number of audio tokens in the prompt.
The number of cached tokens in the prompt.
The number of prompt tokens written to cache.
The number of video tokens in the prompt.
The cost in credits incurred for this completion.
Detailed breakdown of upstream costs incurred.
Properties
The cost incurred upstream.
The cost incurred by upstream's upstream.
The total cost in credits incurred including upstream costs.
The cost multiplier applied to upstream costs for computing ObjectiveAI costs.
Whether the completion used a BYOK (Bring Your Own Key) API Key.
The provider used for this chat completion.
The index of the completion amongst all chat completions.
An error encountered during the generation of this chat completion.
Properties
The status code of the error.
The message or details of the error.
The votes cast by the Ensemble LLMs within the provided Ensemble for the responses in the request.
Items
A vote from an Ensemble LLM within a Vector Completion.
Properties
The unique identifier of the Ensemble LLM which generated this vote.
The index of the Ensemble LLM in the Ensemble.
The flat index of the Ensemble LLM in the expanded Ensemble, accounting for counts.
The vote generated by this Ensemble LLM. Its length equals the number of responses provided in the request. If the Ensemble LLM used logprobs, it may be a probability distribution; otherwise, one response will have a value of 1 and the rest 0.
Items
The weight assigned to this vote.
Whether this vote came from a previous Vector Completion which was retried.
The scores for each response in the request, aggregated from the votes of the Ensemble LLMs.
Items
The weights assigned to each response in the request, aggregated from the votes of the Ensemble LLMs.
Items
The Unix timestamp (in seconds) when the vector completion was created.
The unique identifier of the Ensemble used for this vector completion.
Token and cost usage statistics for the completion.
Properties
The number of tokens generated in the completion.
The number of tokens in the prompt.
The total number of tokens used in the prompt or generated in the completion.
Detailed breakdown of generated completion tokens.
Properties
The number of accepted prediction tokens in the completion.
The number of generated audio tokens in the completion.
The number of generated reasoning tokens in the completion.
The number of rejected prediction tokens in the completion.
Detailed breakdown of prompt tokens.
Properties
The number of audio tokens in the prompt.
The number of cached tokens in the prompt.
The number of prompt tokens written to cache.
The number of video tokens in the prompt.
The cost in credits incurred for this completion.
Detailed breakdown of upstream costs incurred.
Properties
The cost incurred upstream.
The cost incurred by upstream's upstream.
The total cost in credits incurred including upstream costs.
The index of the task in the sequence of tasks.
The index of the task amongst all mapped and non-skipped compiled tasks. Used internally.
The path of this task, which may be used to locate this nested task amongst the root function's tasks and sub-tasks.
Items
When present, indicates that an error occurred during the vector completion task.
Properties
The status code of the error.
The message or details of the error.
When true, indicates that one or more tasks encountered errors during execution.
The output of the function execution.
Variants
The scalar output of the function execution.
The vector output of the function execution.
Items
Items
A JSON value.
Values
A JSON value.
When present, indicates that an error occurred during the function execution.
Properties
The status code of the error.
The message or details of the error.
A token which may be used to retry the function execution.
The Unix timestamp (in seconds) when the function execution chunk was created.
The unique identifier of the function being executed.
The unique identifier of the profile being used.
The object type.
Variants
Token and cost usage statistics for the completion.
Properties
The number of tokens generated in the completion.
The number of tokens in the prompt.
The total number of tokens used in the prompt or generated in the completion.
Detailed breakdown of generated completion tokens.
Properties
The number of accepted prediction tokens in the completion.
The number of generated audio tokens in the completion.
The number of generated reasoning tokens in the completion.
The number of rejected prediction tokens in the completion.
Detailed breakdown of prompt tokens.
Properties
The number of audio tokens in the prompt.
The number of cached tokens in the prompt.
The number of prompt tokens written to cache.
The number of video tokens in the prompt.
The cost in credits incurred for this completion.
Detailed breakdown of upstream costs incurred.
Properties
The cost incurred upstream.
The cost incurred by upstream's upstream.
The total cost in credits incurred including upstream costs.
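When `stream` is true, chunks shaped like the schema above arrive incrementally. The consumer below assumes server-sent events with `data:` lines carrying JSON chunks and a terminal `[DONE]` sentinel, which is the common convention for streaming completion APIs; confirm the actual framing before relying on it.

```typescript
// Hedged SSE consumer; assumes "data: <json>" lines and a "[DONE]" sentinel,
// which is conventional for streaming completion APIs but not confirmed here.
async function consumeStream(response: Response): Promise<void> {
  const reader = response.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = "";

  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });

    // Process complete lines; keep any trailing partial line in the buffer.
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? "";
    for (const line of lines) {
      if (!line.startsWith("data:")) continue;
      const payload = line.slice("data:".length).trim();
      if (payload === "[DONE]") return;
      const chunk = JSON.parse(payload); // a function execution chunk, as described above
      // Later chunks accumulate task output; the final chunk carries the output,
      // error, retry token, and usage for the execution as a whole.
      console.log("chunk created at", chunk.created, "errored:", chunk.errored);
    }
  }
}
```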