LLM providers
Every major LLM. One workflow.
Frontier models from OpenAI, Anthropic, Google, Meta, Mistral, Cohere, Amazon, and xAI — plus any open-source or self-hosted model via an OpenAI-compatible endpoint. Per-role routing across providers. The connection abstraction is project-scoped and encrypted at rest. Switch models without rearchitecting.
Native integration paths
One config-table edit away from any model.
These are the integration mechanisms; each one unlocks an entire catalog of models. Bedrock alone covers Anthropic, Meta, Mistral, Cohere, Amazon, and AI21, while the OpenAI-compatible path reaches virtually any open-source or self-hosted model.
OpenAI
openai
Native web search · image input · full frontier lineup
OpenAI-compatible
openai_compatible
Connect any open-source or self-hosted model via an OpenAI-compatible endpoint
Azure OpenAI
azure
Vision deployments supported · GovCloud-eligible
AWS Bedrock
bedrock
Recommended on AWS · workload-identity-bound · frontier and open-source models in one integration · native streaming · region-aware
Google AI (Gemini)
google
Native search grounding · multimodal input
xAI (Grok)
xai
Native live search
Ollama / vLLM / llama.cpp
openai_compatible
Local and self-hosted LLMs work out of the box via OpenAI-compatible endpoints · ideal for air-gapped and on-prem deployments
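As a concrete sketch, a local Ollama server can be registered through the openai_compatible path. The label, provider, model, and role fields mirror the routing example on this page; the endpoint field name is illustrative (check your L2H connection schema for the exact key), and the URL shown is Ollama's default OpenAI-compatible base path.

```json
{
  "connections": [
    {
      "label": "local-llama",
      "provider": "openai_compatible",
      "endpoint": "http://localhost:11434/v1",
      "model": "llama3",
      "role": "worker"
    }
  ]
}
```

The same shape works for vLLM or llama.cpp servers; only the endpoint URL and model name change.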
Models you can use today
Every frontier model. Every open-source model. One platform.
A non-exhaustive view of what runs on L2H right now. New models are typically a same-day config update — no platform upgrade required.
Per-role routing
The same physical credential can be wrapped in multiple connections to route per role: a strong model for the planner, a fast model for workers, and a strong model again for the finalizer. Configure routing at the project level.
- Planner: a strong reasoning model
- Workers: a fast, parallel-friendly model
- Finalizer: a strong model for consistency check
- Embedded assistant: a latency-sensitive model
{
"connections": [
{
"label": "planner",
"provider": "<your provider>",
"model": "<strong reasoning model>",
"role": "planner"
},
{
"label": "worker",
"provider": "<your provider>",
"model": "<fast model>",
"role": "worker"
},
{
"label": "finalizer",
"provider": "<your provider>",
"model": "<strong reasoning model>",
"role": "finalizer"
}
]
}
Recommended on AWS
AWS Bedrock — workload-identity-bound (no long-lived keys), native streaming, frontier and open-source models in one place.
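Under the schema used in the routing example above, a Bedrock-backed planner connection might look like the following. The model id is illustrative (Bedrock ids follow a vendor.model:version pattern); region and credential settings come from your workload identity rather than the connection body.

```json
{
  "label": "planner",
  "provider": "bedrock",
  "model": "anthropic.claude-3-5-sonnet-20240620-v1:0",
  "role": "planner"
}
```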
Recommended on Azure
Azure OpenAI — credentials managed via Azure Key Vault. GovCloud deployments supported.
Recommended for air-gap
vLLM or Ollama via OpenAI-compatible. Self-hosted models, no external network at runtime.
Have a specific model in mind?
We benchmark new model families against our 59-test KB suite as they ship. Request a tailored evaluation for your environment.