What is Objective AI?
Objective AI is an AI platform built around our primary product, the Objective AI Query.
What is the Objective AI Query?
The Objective AI Query is a novel way of using large language models (LLMs). Instead of getting one response from one LLM, you get multiple responses from an Objective AI Model, each with a Confidence Score.
What is an Objective AI Model?
An Objective AI Model is a collection of LLMs, each with a mode ("generate" or "select"), a weight, and other optional parameters, such as temperature or top_p.
- First, the "generate" LLMs each output a response. Each response is assigned both a Generate ID and a Confidence ID. Some responses may have the same IDs.
- Next, the "select" LLMs vote on the best response by outputting the Generate ID of the response they think is correct. Sometimes, they'll vote for more than one.
- Finally, each Confidence ID is assigned a Confidence Score, which is based on the sum of the weights of each LLM that generated or voted for a response with that Confidence ID.
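The three steps above can be sketched as a weighted tally. This is a minimal illustration, not Objective AI's actual implementation; the data shapes and the normalization of scores to sum to 1 are assumptions.

```python
from collections import defaultdict

def confidence_scores(generations, votes):
    """generations: list of (confidence_id, weight) pairs from "generate" LLMs.
    votes: list of (confidence_id, weight) pairs from "select" LLMs.
    Returns each Confidence ID's share of the total weight (assumed normalized)."""
    totals = defaultdict(float)
    # Sum the weights of every LLM that generated, or voted for, each Confidence ID.
    for cid, weight in list(generations) + list(votes):
        totals[cid] += weight
    total = sum(totals.values())
    return {cid: w / total for cid, w in totals.items()}

scores = confidence_scores(
    generations=[("a", 1.0), ("a", 1.0), ("b", 0.5)],  # two generators agree on "a"
    votes=[("a", 2.0)],                                 # one selector votes for "a"
)
# "a" accumulates 4.0 of the 4.5 total weight
```

Here two "generate" LLMs produced the same response (same Confidence ID "a"), a third produced "b", and a "select" LLM voted for "a", so "a" ends up with the higher Confidence Score.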
How do Generate IDs and Confidence IDs work?
Every response, whether outputted by a "generate" or "select" LLM, is assigned a Confidence ID. Only responses outputted by "generate" LLMs are assigned a Generate ID.
- For "generate" LLM responses, the Generate ID is a hash of the response text. If a response format was specified, it has a field marked with _confidence: false, and the generated response is valid JSON, then the Confidence ID will be a hash of the response text with the excluded fields removed. Additionally, if the response format contains an Array property that is not marked with _preserve_order: true, then the generated array property will be sorted prior to calculating the Confidence ID. Otherwise, the Confidence ID is the same as the Generate ID.
- For "select" LLM responses, the Confidence ID is simply the Confidence ID of the response with the Generate ID that was selected. If multiple Generate IDs are selected, the associated Confidence IDs will all be included. This means that "select" LLM responses may have multiple Confidence IDs.
What is a Confidence Score?
A Confidence Score can be thought of as a ranking for a response, or as a measure of its reliability. We've found that responses with higher Confidence Scores are more likely to be correct than responses with lower Confidence Scores.
When does a "select" model vote for more than one Confidence ID?
If a "select" model has been configured with select_top_logprobs, and the upstream LLM supports logprobs, then we use those logprobs to turn the vote into a probability distribution.
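The idea can be sketched as follows: exponentiating a token's logprob recovers the model's probability for that token, and renormalizing over the returned top-k yields a distribution over the candidate Generate IDs. This is a general illustration of the technique, not Objective AI's exact computation.

```python
import math

def vote_distribution(top_logprobs):
    """top_logprobs: {generate_id: logprob} for the select LLM's top vote tokens.
    Returns a probability distribution over the candidate Generate IDs."""
    probs = {gid: math.exp(lp) for gid, lp in top_logprobs.items()}
    total = sum(probs.values())
    return {gid: p / total for gid, p in probs.items()}

# The model strongly preferred "a" but assigned some probability to "b",
# so its vote is split proportionally rather than all-or-nothing.
dist = vote_distribution({"a": -0.1, "b": -2.5})
```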
What is weight?
Each LLM in an Objective AI Model has a weight, which may either be Static (a single number) or use Training Tables.
What are Training Tables?
Training Tables mode is intended for Developers, and is a way to make the weight for each LLM dynamic, based on the input prompt.
Users can train their own Training Tables mode Objective AI Model by thumbs-upping the correct response in the Studio or through the API. Under Training Tables mode, these weights are distinct for each user, and use only the user's own training data.
Training Tables mode is an important way to hone an Objective AI Model to your own use case, and to improve the model over time. This can be especially useful for use cases that need high accuracy and high reliability.
COMING SOON: Training Tables mode is not yet available, but will be soon.
How many Objective AI Models are there?
Each Objective AI Model is user-defined. To take a look at the existing ones, check out our Models. To build your own, check out the Studio.
What LLMs are supported?
Objective AI uses OpenRouter as our upstream LLM provider. An Objective AI Model may contain any LLM supported by OpenRouter. To see the full list, check out OpenRouter's models.
My head is spinning.
Objective AI is easier to use than it sounds. It works, too. See for yourself.
How much does Objective AI cost?
Objective AI, like many AI services, uses credits, which may be purchased. The number of credits used for each request depends on the LLMs used, and the number of tokens processed. We charge a 5.5% ($0.80 minimum) fee when purchasing credits, to cover payment processing fees. Credit usage is the same as charged by OpenRouter, plus a 10% service fee.
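The two fees described above can be expressed as simple arithmetic. A minimal sketch, assuming fees are computed exactly as stated (5.5% with a $0.80 minimum on purchases; OpenRouter's charge plus 10% on usage):

```python
def purchase_fee(amount_usd):
    """Fee charged when purchasing credits: 5.5% with a $0.80 minimum."""
    return max(0.055 * amount_usd, 0.80)

def usage_cost(openrouter_cost):
    """Credit usage per request: OpenRouter's charge plus a 10% service fee."""
    return openrouter_cost * 1.10

# Buying $10 of credits hits the $0.80 minimum; buying $100 pays the 5.5% rate.
small = purchase_fee(10.0)
large = purchase_fee(100.0)
# A request OpenRouter would bill at $2.00 consumes $2.20 of credits.
request = usage_cost(2.00)
```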