Create Ensemble LLM
Configure a single LLM with a specific model, parameters, and settings. The resulting configuration can be used directly in your code.
Configuration
- Model: the model identifier (e.g., provider/model-name)
- Temperature: controls randomness (0 = deterministic, 2 = most random)
- Top P: nucleus sampling threshold
- Top Logprobs: enables probabilistic voting (required for vector completions)
- Frequency Penalty: penalizes repeated tokens based on their frequency
- Presence Penalty: penalizes tokens based on whether they already appear in the text so far
- Response Format: how the LLM should format its output
- Reasoning Effort: amount of reasoning to apply (for supported models)
- Providers: comma-separated list of provider preferences
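For illustration, a configuration using these fields might look like the sketch below. The exact field names and values are assumptions based on common LLM API conventions, not the tool's canonical schema:

```json
{
  "model": "openai/gpt-4o",
  "temperature": 0.7,
  "top_p": 0.9,
  "top_logprobs": 5,
  "frequency_penalty": 0.0,
  "presence_penalty": 0.0,
  "response_format": "text",
  "reasoning_effort": "medium",
  "providers": "openai,azure"
}
```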
Computed ID
WASM validation enables real-time ID computation
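Conceptually, a content-addressed ID can be derived by hashing a canonical serialization of the definition. The sketch below assumes SHA-256 over key-sorted JSON; the actual WASM module may use a different canonicalization or hash:

```typescript
import { createHash } from "node:crypto";

// Serialize with object keys sorted so that logically identical
// configurations always produce byte-identical JSON.
function canonicalize(value: unknown): string {
  if (Array.isArray(value)) {
    return `[${value.map(canonicalize).join(",")}]`;
  }
  if (value !== null && typeof value === "object") {
    const entries = Object.entries(value as Record<string, unknown>)
      .sort(([a], [b]) => a.localeCompare(b))
      .map(([k, v]) => `${JSON.stringify(k)}:${canonicalize(v)}`);
    return `{${entries.join(",")}}`;
  }
  return JSON.stringify(value);
}

// Hypothetical ID derivation: hash the canonical form of the definition.
// The real tool computes the ID in WASM and may use a different scheme.
function computeId(definition: object): string {
  return createHash("sha256").update(canonicalize(definition)).digest("hex");
}
```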
JSON Configuration
The tool generates the JSON once a model is entered.
About Ensemble LLMs
- Content-Addressed: IDs are computed from the definition itself - identical configurations always produce identical IDs
- Immutable: Once defined, an Ensemble LLM configuration cannot be changed (changing it creates a new ID)
- No Storage Needed: Copy the JSON and use it directly in your code or save it to GitHub (see the sketch after this list)
- Top Logprobs: Set this (2-20) to enable probabilistic voting for vector completions
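As a sketch of the "No Storage Needed" point, the copied JSON can be embedded directly as a constant. The interface and field names below are hypothetical, not part of the tool's published API:

```typescript
// Hypothetical shape mirroring the generated JSON; field names are assumptions.
interface EnsembleLlmConfig {
  model: string;
  temperature?: number;
  top_p?: number;
  top_logprobs?: number;
}

// Paste the generated JSON directly into your code; no storage layer is needed.
const creativeWriter: EnsembleLlmConfig = {
  model: "openai/gpt-4o",
  temperature: 1.2,
  top_logprobs: 5, // 2-20 enables probabilistic voting for vector completions
};

// Because the configuration is immutable and content-addressed, editing any
// field yields a different computed ID rather than mutating this one.
```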