Response Format

A Response Format of type "json_schema" is an important way to get the most out of an ObjectiveAI Query Completion. It lets you control the structure of the output, making it machine-readable and parsable into native data structures. It can also lead to a demonstrable improvement in response quality by controlling the chain of steps the LLM takes.

The provided Response Format will be used by every "generate" LLM in the Query Model. "select" LLMs will not see it.

Using our custom schema extensions, _confidence and _preserveOrder, you can control when distinct outputs are given the same Confidence ID, increasing their Confidence Score.

_confidence

The _confidence field may be applied to any property in the schema. It is a boolean that defaults to true. If set to false, the property will be omitted when computing the Confidence ID of a choice. This is useful for properties that are not important to the meaning of the output, such as reasoning fields.

In this request, we've applied "_confidence": false to the "1_options" and "2_think_rank_1_option" properties. All we care about is the best website.
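
Here is a sketch of what the relevant part of such a request might look like, written as a TypeScript object literal so the assumed pieces can be annotated. The "json_schema" wrapper layout, the schema name, and the final "3_best_website" property are assumptions for illustration; only "1_options" and "2_think_rank_1_option" come from the request described above.

```typescript
// Sketch of a json_schema Response Format using the _confidence extension.
// The wrapper fields ("name"/"schema") and the "3_best_website" property are
// assumptions; "1_options" and "2_think_rank_1_option" come from the example.
const responseFormat = {
  type: "json_schema",
  json_schema: {
    name: "best_website", // assumed name
    schema: {
      type: "object",
      properties: {
        // Brainstormed candidates: excluded from the Confidence ID.
        "1_options": {
          type: "array",
          items: { type: "string" },
          _confidence: false,
        },
        // Free-form reasoning: also excluded from the Confidence ID.
        "2_think_rank_1_option": {
          type: "string",
          _confidence: false,
        },
        // The final answer: the only property that counts toward the
        // Confidence ID, so choices that agree here share a Confidence Score.
        "3_best_website": { type: "string" },
      },
      required: ["1_options", "2_think_rank_1_option", "3_best_website"],
      additionalProperties: false,
    },
  },
};
```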

Let's see what the response looks like, with parsed content:
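
Below is a rough, purely illustrative sketch of the two relevant choices with their content parsed. The confidence field names, option lists, and reasoning text are placeholders and assumptions rather than the actual API shape; only the models, the shared Confidence ID, the winning website, and the 66.67% score come from the example.

```typescript
// Illustrative sketch only: field names and placeholder values are assumptions.
const parsedChoices = [
  {
    model: "gpt-5-nano",
    content: {
      "1_options": ["https://www.italki.com", "..."], // differs per model
      "2_think_rank_1_option": "...",                 // differs per model
      "3_best_website": "https://www.italki.com",     // same final answer
    },
    confidence_id: "...", // assumed field name; identical for both choices
    confidence: 0.6667,   // assumed field name; shared 66.67% Confidence Score
  },
  {
    model: "gpt-5-mini",
    content: {
      "1_options": ["https://www.italki.com", "..."],
      "2_think_rank_1_option": "...",
      "3_best_website": "https://www.italki.com",
    },
    confidence_id: "...",
    confidence: 0.6667,
  },
];
```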

In the response, we can see that while gpt-5-nano and gpt-5-mini differed in their options and their reasoning, they ultimately arrived at the same website. Because of how we configured which properties count toward the Confidence ID, both choices have the same Confidence ID and Confidence Score. That makes https://www.italki.com the winner, with 66.67% Confidence.

_preserveOrder

The _preserveOrder field may be applied to any object or array property in the schema. It is a boolean that defaults to false.

For objects, if _preserveOrder is absent or set to false, the property keys are sorted alphabetically. If the order in which the LLM writes out the keys is important, set _preserveOrder to true.

For arrays, if _preserveOrder is absent or set to false, the items are sorted alphabetically. If the order in which the LLM writes out the items is important, such as when ranking, set _preserveOrder to true.
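
For example, here is a sketch of a ranking property that keeps the LLM's ordering; the "ranked_websites" property name is an assumption for illustration.

```typescript
// Sketch of a schema where item order matters.
// The "ranked_websites" name is an assumption.
const rankingSchema = {
  type: "object",
  properties: {
    ranked_websites: {
      type: "array",
      items: { type: "string" },
      // Keep the LLM's ranking order instead of sorting alphabetically,
      // so ["a", "b"] and ["b", "a"] get different Confidence IDs.
      _preserveOrder: true,
    },
  },
  required: ["ranked_websites"],
  additionalProperties: false,
};
```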

select_deterministic

Query Completion request objects contain the optional field "select_deterministic". When set to true, ObjectiveAI will attempt to generate a choice for each possible output that can be produced by the Schema.

This is a very powerful feature: when the number of possible outputs is known, it enables the most mission-critical applications of Query Completions.

This also enables (and is required for) Query Models which contain only "select" LLMs. Unlike "generate" LLMs, "select" LLMs can vote for more than one option at a time using "select_top_logprobs" as a probability distribution, resulting in much more granular Confidence Scores.

In the example Response Format, we're asking the AI to classify the support ticket by severity and category. Each possible combination of severity and category will be generated as a choice, with "model": "TicketLabel" and content set to the JSON output for that combination.
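
Here is a sketch of what that request fragment might look like. The specific severity and category values are assumptions, and "TicketLabel" is assumed here to be the schema name, since deterministic choices come back with that value in "model".

```typescript
// Sketch of a request fragment for deterministic selection over a small,
// enumerable output space. The severity/category values are assumptions;
// "TicketLabel" is assumed to be the schema name.
const requestFragment = {
  select_deterministic: true,
  response_format: {
    type: "json_schema",
    json_schema: {
      name: "TicketLabel",
      schema: {
        type: "object",
        properties: {
          severity: { type: "string", enum: ["low", "medium", "high"] },
          category: { type: "string", enum: ["billing", "bug", "how-to", "outage"] },
        },
        required: ["severity", "category"],
        additionalProperties: false,
      },
    },
  },
};
// With 3 severities and 4 categories, ObjectiveAI would attempt to generate
// 3 x 4 = 12 choices, one per possible combination.
```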

You could then label the ticket with the severity and category of the choice that has the highest Confidence Score. Or you could write code that takes the single severity with the highest Confidence Score and every category with a Confidence Score above some threshold, as sketched below.
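
As a sketch of that second approach, assuming the per-combination choices have already been reduced to a simple array of labels with their Confidence Scores (the data shape here is an assumption, not the API's response type), one way is to marginalize the scores by severity and by category:

```typescript
// Hypothetical post-processing: pick the single best severity and every
// category above a threshold. The input shape (label + Confidence Score)
// is an assumption made for this sketch.
interface LabelConfidence {
  severity: string;
  category: string;
  confidence: number;
}

function summarize(choices: LabelConfidence[], categoryThreshold = 0.2) {
  // Sum Confidence Scores per severity and per category across all choices.
  const severityScores = new Map<string, number>();
  const categoryScores = new Map<string, number>();
  for (const c of choices) {
    severityScores.set(c.severity, (severityScores.get(c.severity) ?? 0) + c.confidence);
    categoryScores.set(c.category, (categoryScores.get(c.category) ?? 0) + c.confidence);
  }

  // The single severity with the highest total Confidence Score.
  const topSeverity = [...severityScores.entries()]
    .sort((a, b) => b[1] - a[1])[0]?.[0];

  // Every category whose total Confidence Score clears the threshold.
  const categories = [...categoryScores.entries()]
    .filter(([, score]) => score >= categoryThreshold)
    .map(([category]) => category);

  return { severity: topSeverity, categories };
}
```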
