FAQ

Common buyer questions.

Don’t see your question? Email sales@l2h.ai.

Platform & deployment

Is this SaaS?

No. Orchestrator + Chat are customer-hosted on your own AWS, Azure, or on-prem Kubernetes (single-tenant per install). The Enterprise Agent suite (ServiceNow, VS Code) runs on AWS, Azure, Azure GovCloud, Kubernetes, or on-prem — all customer-hosted.

What about Kubernetes outside AWS?

The deployment is portable. Platform-specific pieces (workload identity, the LLM gateway, the secret store) map to their equivalents on Azure, GCP, or on-prem.

What about air-gapped?

Use the local storage driver, an OpenAI-compatible LLM (vLLM / llama.cpp / Ollama), and a self-hosted IDP. No external network calls required at runtime.
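As an illustration, an in-enclave OpenAI-compatible endpoint can be reached with nothing but the Python standard library, so no vendor SDK is required inside the air gap. The URL and model name below are hypothetical; vLLM, llama.cpp, and Ollama all expose the same OpenAI-style `/v1/chat/completions` route:

```python
import json
from urllib import request

# Hypothetical in-enclave endpoint; replace with your self-hosted server.
BASE_URL = "http://vllm.internal:8000/v1"

def build_chat_request(model: str, prompt: str) -> request.Request:
    """Build (but do not send) an OpenAI-compatible chat completion request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("llama-3-8b-instruct", "Summarize ticket INC0012345.")
# Sending is one line once the endpoint is reachable:
# body = json.loads(request.urlopen(req).read())
```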

What is the upgrade story?

Standard Kubernetes upgrade flow with idempotent migration jobs. Same deployment on dev and prod. Application version bumps are pinned per release.

Where does the data live?

In a database within your own cloud account. Nothing leaves your environment.

LLM providers

Does it lock me into one LLM?

Not even close. L2H connects to every major frontier provider (OpenAI, Anthropic Claude, Google Gemini, Meta Llama, Mistral, Cohere, Amazon Nova, xAI Grok) and to any open-source or self-hosted model that exposes an OpenAI- or Anthropic-compatible endpoint. Connections are scoped per project and per agent, with per-role routing, so the same workflow can mix a strong reasoning model as the planner, a fast model as the worker, and a self-hosted model for sensitive steps.

Can I switch LLM providers without redeploying?

Yes. For the Enterprise Agent suite, swap in a config table — effective on next message. For Orchestrator, change the connection at the project level. New models from existing providers usually need zero platform changes.

Which providers are supported?

Native integrations: OpenAI, Azure OpenAI (GovCloud-eligible), AWS Bedrock (frontier and open-source models in one integration), Google AI (Gemini), xAI (Grok), Anthropic direct, and any OpenAI- or Anthropic-compatible endpoint. That covers every major frontier model plus every open-source model that ships an OpenAI-style API.

Workflows & extensibility

Can my workflows be invoked by other AI agents?

Yes. Every workflow exposes input and output JSON Schemas and an example payload, so any AI agent or external system can discover, validate, and call them with a single API key.
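As a sketch of what a caller can do with a published input schema, the pre-flight check below validates required fields before invoking a workflow. The schema shape mirrors standard JSON Schema, but the workflow and field names here are invented for illustration; a real client would use a full JSON Schema validator:

```python
# Illustrative only: hypothetical input schema for one workflow.
input_schema = {
    "type": "object",
    "required": ["ticket_id", "priority"],
    "properties": {
        "ticket_id": {"type": "string"},
        "priority": {"type": "integer"},
    },
}

def missing_required(schema: dict, payload: dict) -> list[str]:
    """Return the required fields absent from the payload (a tiny pre-flight
    check; not a complete JSON Schema validation)."""
    return [f for f in schema.get("required", []) if f not in payload]

good = {"ticket_id": "INC0012345", "priority": 2}
bad = {"ticket_id": "INC0012345"}
```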

Do you support MCP (Model Context Protocol)?

Yes — natively, in both directions. Agents in Orchestrator and Chat can call any MCP-compatible tool server (L2H-shipped, customer-built, or third-party), and L2H workflows and platform capabilities can be exposed as MCP servers for external clients like Claude Desktop, IDEs, and other agent frameworks. Plugins ship MCP tool servers as a first-class component.
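As a sketch of the "exposed as MCP servers" direction, an MCP client such as Claude Desktop registers a server in its config file. The server name and URL below are hypothetical, and the stdio-proxy pattern (`mcp-remote`) shown here is one common way desktop clients reach a remote HTTP server:

```json
{
  "mcpServers": {
    "l2h-workflows": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://l2h.example.internal/mcp"]
    }
  }
}
```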

Do you support A2A (Agent-to-Agent)?

Yes. Workflows are agent-callable by design — typed contracts, scoped API keys, and structured discovery — so external A2A-compatible agents can find, validate, and invoke them. L2H agents can also delegate subtasks to A2A peers, which makes it easy to compose with agents running outside the platform.

How do you handle long-running steps?

Four built-in patterns: approval gates (suspend until a human acts), delays (relative or absolute time), foreach + join (parallel fan-out and rejoin), and orchestrator (planner → workers → finalizer, with sharding).
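The foreach + join pattern above can be sketched with plain Python concurrency; the worker function is a stand-in for whatever step each branch runs:

```python
from concurrent.futures import ThreadPoolExecutor

def enrich(item: str) -> str:
    """Stand-in for one workflow step executed per fan-out branch."""
    return item.upper()

items = ["alpha", "beta", "gamma"]

# foreach: fan out one branch per item.
# join: block until every branch finishes, preserving input order
# for the rejoined result.
with ThreadPoolExecutor(max_workers=3) as pool:
    joined = list(pool.map(enrich, items))
```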

Can customers extend it?

Yes. Plugins ship as a single declarative manifest that contributes MCP tool servers, workflow nodes, blueprints, and typed credentials. Upload via the dashboard; after site-admin approval, the plugin syncs into the platform. It is the same path L2H uses to ship its own plugins.
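A manifest might look like the following. Every key and name here is invented to illustrate the idea of one declarative file contributing several component types, not the actual L2H manifest format:

```yaml
# Hypothetical plugin manifest; field names are illustrative.
name: acme-crm-plugin
version: 1.2.0
components:
  mcp_servers:
    - name: crm-tools
      image: registry.example.com/acme/crm-mcp:1.2.0
  workflow_nodes:
    - id: crm.lookup_account
      input_schema: schemas/lookup_account.json
  blueprints:
    - blueprints/escalation.yaml
  credentials:
    - name: crm_api_key
      type: secret_string
```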

Security & compliance

What about audit / compliance?

Append-only audit log with broad event coverage; scoped + expiring API keys; signed outbound webhooks; workload-identity-bound cloud access; secret-store integration; hardened, non-root containers.
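For instance, HMAC-SHA256 is the usual scheme for signed outbound webhooks. A receiver would verify the signature roughly as below; the secret and payload are placeholders, not the documented L2H header format:

```python
import hmac
import hashlib

def sign(secret: bytes, body: bytes) -> str:
    """Compute the hex HMAC-SHA256 signature the sender would attach."""
    return hmac.new(secret, body, hashlib.sha256).hexdigest()

def verify(secret: bytes, body: bytes, signature: str) -> bool:
    """Constant-time comparison guards the check against timing attacks."""
    return hmac.compare_digest(sign(secret, body), signature)

secret = b"placeholder-shared-secret"
body = b'{"event":"workflow.completed","id":"wf_123"}'
sig = sign(secret, body)
```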

What about IL5 / GovCloud?

Currently operational at IL5 — production deployments today on Azure GovCloud (DoD/DISA) with private routing. Same deployment, same codebase as commercial.

Do you run on JWICS or other classified networks?

Yes. Currently operational on JWICS (TS/SCI) using the air-gapped deployment pattern: on-prem Kubernetes, self-hosted LLMs via OpenAI-compatible endpoints (vLLM, llama.cpp, Ollama), local storage adapter, and a self-hosted IDP. Zero external network dependencies at runtime. The same recipe is IL6+ ready.

How do you handle authentication?

OIDC (Auth0, Azure AD, Okta, Keycloak, Google), SAML 2.0, LDAP / Active Directory, local accounts, PKI / mTLS. One identity across Orchestrator + Chat.

Enterprise Agent (umbrella)

Which platforms does Enterprise Agent support?

Enterprise Agent for ServiceNow and Enterprise Agent for VS Code are GA today. Custom platforms (Salesforce, Jira, Slack, JetBrains, internal portals, etc.) are available on request and typically launch in weeks, not quarters.

Why an umbrella product instead of separate ones?

Because the engine is the same. Multi-LLM routing, audit, governance, deployment — all shared. The only thing that changes per implementation is the surface (ServiceNow form vs VS Code editor) and the platform-specific tools.

Can we use Enterprise Agent for VS Code as a Copilot alternative?

Yes. It's customer-hosted in your AWS or Azure account, with your choice of LLM (every major frontier provider plus any OpenAI- or Anthropic-compatible self-hosted model). Your code does not leave your environment without explicit consent.

ServiceNow-specific

How do we control AI cost?

Token budgets per role, an interactive budget modal, and per-request usage tracking.

Will it actually work on our data?

Built-in eval/benchmark framework. Test any model on customer data before going live. Our top benchmarked model hits 100% on the 59-test KB suite — repeat the test on your KB.

We have a Virtual Agent already.

Enterprise Agent complements VA — adds context-aware AI on every page and routes to your existing live-agent infrastructure.

Re-deploy for every change?

Custom system prompt, slash commands, model choice, token budgets — all config-table edits. Effective on next message.

What happens when the AI fails?

Every error has a stable, structured code, request ID, retryability flag, and triage guide.

Pricing & contracts

How is pricing structured?

Four products on one platform. The Base Platform is the required foundation (identity, policy, governance, orchestration engine) on an annual platform license. On top, you add what you need: Chat (per named user), Enterprise Agent for ServiceNow and VS Code (per named user), and Orchestrator (usage-based — pay for what you run). Typically 60–80% less than comparable enterprise AI platforms, with additional discounts on multi-year terms.

How do I buy?

Three paths: (1) direct from L2H on an annual or multi-year subscription; (2) on-contract through an authorized reseller — Optiv + ClearShark, CACI idt., or ThunderCat Technology; (3) federal contract vehicles. POC pricing for in-environment evaluation, MSA + DPA standard, and a named L2H technical resource on every account.

Do you support federal procurement vehicles?

Yes — GSA Schedule, SEWP, CIO-SP4, and STARS III paths are supported, directly or via reseller subcontracting. POC pricing available for evaluation. Capability Statement, ATO support, and GovCloud / IL5 / IL6+ readiness included. Email sales@l2h.ai.

Still have questions?