FAQ
What is ObjectiveAI?
ObjectiveAI is an AI platform built around our primary product, the ObjectiveAI Query.
What is the ObjectiveAI Query?
The ObjectiveAI Query is a novel way of using large language models (LLMs). Instead of getting one response from one LLM, you get multiple responses from a Query Model, each with a Confidence Score.
Are you asking the LLMs to provide a Confidence Score?
No.
LLMs are not good at self-assessing their own reliability, and they're especially not good at quantifying it. Neither are most people, for that matter.
We take a different approach that minimizes prompt engineering. The result is a flexible system that can handle any prompt, works with almost any LLM, and minimizes waste, which keeps it affordable.
What is a Query Model?
A Query Model is a collection of LLMs, each with a mode ("generate" or "select"), a weight, and other optional parameters, such as temperature or top_p.
- First, the "generate" LLMs each output a response. Each response is assigned both a Generate ID and a Confidence ID. Some responses may have the same IDs. "generate" LLMs use zero prompt engineering on our side.
- Next, the "select" LLMs vote on the best response by outputting the Generate ID of the response they think is correct. Sometimes they'll vote for more than one. "select" LLMs use minimal, barebones prompt engineering; this is required to provide them the list of choices to choose from.
- Finally, each Confidence ID is assigned a Confidence Score, based on the sum of the weights of each LLM that generated or voted for a response with that Confidence ID.
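The scoring step above can be sketched in a few lines. This is an illustrative sketch, not ObjectiveAI's implementation: it assumes each generation or vote arrives as a (Confidence ID, LLM weight) pair, and it normalizes the totals so the scores sum to 1 (the source only says scores are based on the sum of weights; the normalization is an assumption).

```python
from collections import defaultdict

def confidence_scores(events):
    """events: list of (confidence_id, llm_weight) pairs, one per
    generation or vote. Returns total weight per Confidence ID,
    normalized so the scores sum to 1 (normalization is an assumption)."""
    totals = defaultdict(float)
    for confidence_id, weight in events:
        totals[confidence_id] += weight
    total = sum(totals.values())
    return {cid: w / total for cid, w in totals.items()}

# Example: two LLMs produced or voted for "abc", one produced "xyz".
scores = confidence_scores([("abc", 1.0), ("xyz", 1.0), ("abc", 2.0)])
```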
How do Generate IDs and Confidence IDs work?
Every response, whether outputted by a "generate" or "select" LLM, is assigned a Confidence ID. Only responses outputted by "generate" LLMs are assigned a Generate ID.
- For "generate" LLM responses, the Generate ID is a hash of the response text. The Confidence ID is as well, potentially with some transforms applied first. See Response Format for more details.
- For "select" LLM responses, the Confidence ID is simply the Confidence ID of the response whose Generate ID was selected. If multiple Generate IDs are selected, the associated Confidence IDs are all included. This means that "select" LLM responses may have multiple Confidence IDs.
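The hash-based IDs can be sketched as follows. This is a hypothetical illustration: the choice of SHA-256, the truncation, and the whitespace/case normalization are assumptions for the example, not the actual transforms, which are described in Response Format.

```python
import hashlib

def generate_id(response_text: str) -> str:
    # Hash of the raw response text (SHA-256 here is an assumption).
    return hashlib.sha256(response_text.encode()).hexdigest()[:12]

def confidence_id(response_text: str) -> str:
    # Hypothetical transform: normalize whitespace and case before hashing,
    # so trivially different generations can share a Confidence ID.
    normalized = " ".join(response_text.lower().split())
    return generate_id(normalized)

# Two responses that differ only in case and spacing share a Confidence ID.
a = confidence_id("The answer is 42.")
b = confidence_id("the answer  is 42.")
```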
What is a Confidence Score?
A Confidence Score can be thought of as a ranking for a response, or as a measure of reliability. We've found that responses with higher Confidence Scores are more likely to be correct than those with lower scores. It's an excellent way to deal with ambiguity, which the world is full of.
When does a "select" model vote for more than one Confidence ID?
If a "select" model has been configured with select_top_logprobs, and the upstream LLM supports logprobs, then we use those logprobs to turn the vote into a probability distribution.
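Turning logprobs into a probability distribution can be sketched like this. This is a minimal illustration, assuming we receive a logprob for each candidate Generate ID and simply exponentiate and renormalize over the listed choices:

```python
import math

def vote_distribution(top_logprobs):
    """top_logprobs: dict mapping Generate ID -> logprob of that choice
    (an assumed input shape). Returns renormalized probabilities over
    the listed choices."""
    probs = {gid: math.exp(lp) for gid, lp in top_logprobs.items()}
    total = sum(probs.values())
    return {gid: p / total for gid, p in probs.items()}

# The selector strongly favors "a1" but spreads some vote onto "b2".
dist = vote_distribution({"a1": -0.1, "b2": -2.5})
```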
What is weight?
Each LLM in a Query Model has a weight, which may either be Static (a single number) or use Training Tables.
What are Training Tables?
Training Tables mode is intended for Developers, and is a way to make the weight for each LLM dynamic, based on the input prompt.
Users can train their own Training Tables mode Query Model by thumbs-upping the correct response in the Studio or by marking the Correct Confidence ID through the API. Training Table data is separate for each user.
Training Tables mode is an important way to hone a Query Model to your own use case, and to improve the model over time.
How many Query Models are there?
Each Query Model is user-defined. To take a look at the existing ones, check out our Models. To build your own, check out the Studio.
What LLMs are supported?
ObjectiveAI uses OpenRouter as our upstream LLM provider. A Query Model may contain any LLM supported by OpenRouter. To see the full list, check out OpenRouter's models.
How much does ObjectiveAI cost?
ObjectiveAI, like many AI services, uses credits, which may be purchased. The number of credits used for each request depends on the LLMs used and the number of tokens processed. We charge a 5.5% fee ($0.80 minimum) on credit purchases to cover payment processing. Credit usage costs the same as charged by OpenRouter, plus a 10% service fee.
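As a worked example of the fee arithmetic above (illustrative only; function names are ours, not ObjectiveAI's API):

```python
def purchase_fee(amount: float) -> float:
    """5.5% payment-processing fee on credit purchases, $0.80 minimum."""
    return max(0.055 * amount, 0.80)

def usage_cost(openrouter_cost: float) -> float:
    """Upstream OpenRouter cost plus the 10% service fee."""
    return openrouter_cost * 1.10

# Buying $10 of credits hits the $0.80 minimum (5.5% would be $0.55);
# $2.00 of OpenRouter usage is billed as $2.20 in credits.
fee = purchase_fee(10.0)
cost = usage_cost(2.00)
```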
If you are an Enterprise customer, or are otherwise interested in custom pricing, please contact us.