Enterprise Agent · VS Code implementation

AI that lives inside your IDE.

Codebase-aware chat in the sidebar, inline edits with diff preview, git-aware summaries, and command-palette shortcuts — all powered by the LLM you choose, running in your AWS or Azure account.

VS Code · src/services/parser.ts
Open file: parser.ts · 247 lines · branch: feat/streaming-refactor

User

Refactor the parser to use the streaming buffer pattern from utils/stream.ts.

L2H

Found the matching helper. Drafting refactor with the new streaming pattern.

- const data = await fetchAll(input);
+ const data = await stream.read(input);

Codebase · Symbol-aware retrieval

BYOLLM · Every major LLM

Inline · Diff preview before apply

Customer · Hosted in your cloud

Headline claims

What the IDE assistant gives you.

AI that lives inside VS Code.

Sidebar + inline chat + command palette — no separate app, no copy-pasting.

Codebase-aware, project-scoped.

Full repository context, file-tree awareness, symbol-level retrieval. Stays inside your project boundary.

Customer-hosted, BYOLLM.

Same backbone as Enterprise Agent for ServiceNow. Every major LLM — frontier or open-source. Your data, your account.

Core capabilities

What it does inside VS Code.

Inline & sidebar chat

Multi-turn conversation in the sidebar panel, plus inline code-aware chat grounded in the file you have open.

Codebase retrieval

Symbol-level semantic search across the project. Pulls just enough context for accurate answers.

Code generation & edits

Generate functions, refactor, write tests, fix lint errors — with a diff view before applying.

Inline documentation lookup

Pulls answers from your knowledge base, internal docs, or the open web. Cited responses.

Git-aware

Understands diffs, commits, and branches. Can summarize PRs, draft commit messages, or explain changes.

Command palette commands

Custom slash commands and prompts curated by your platform team — one keystroke away.
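
For illustration, a curated slash command might be declared in backend config along these lines (the schema and field names here are hypothetical, not the shipped format):

// Hypothetical slash-command definition. Illustrative schema only, not the shipped format.
interface SlashCommand {
  name: string;           // invoked as /name from the palette or chat
  description: string;    // shown in the command picker
  systemPrompt: string;   // prepended ahead of the user's message
  fileGlobs?: string[];   // optional: scope retrieval to matching files
}

const reviewCommand: SlashCommand = {
  name: "review",
  description: "Review the open file against team standards",
  systemPrompt: "You are a strict reviewer. Flag style, safety, and missing tests.",
  fileGlobs: ["src/**/*.ts"],
};

Because commands live in backend config, a platform team can update one for every developer without shipping a new extension build.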

Audience cuts

Built for every developer role.

Engineers

  • Project-scoped chat with codebase context
  • Inline refactor and test-writing
  • Git diff explanations and PR summaries
  • Command palette shortcuts to standard prompts

Engineering Managers

  • Standardized prompts via slash commands
  • Per-team / per-project policies
  • Token budget controls
  • Usage tracking per request and per repo

Platform / DevX Teams

  • Custom prompt + tool packaging
  • Curated KB integration
  • Multi-LLM provider control
  • Audit trail per workspace

Security / IT

  • Customer-hosted backend (your AWS/Azure)
  • No code leaves your environment without consent
  • SSO / SAML / mTLS for the chat backend
  • Air-gapped operation via self-hosted OpenAI-compatible LLMs

BYOLLM

Switch models in a config table. No code. No redeployment.

Same every-major-LLM story as Enterprise Agent for ServiceNow. Per-team routing, per-project entitlements, per-user token budgets.
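
A minimal sketch of what that routing table could look like, written as a TypeScript object (the field names and model IDs are assumptions for illustration, not the real config format):

// Hypothetical per-team routing table. Field names and model IDs are illustrative.
type Provider =
  | "bedrock" | "openai" | "anthropic" | "azure-openai"
  | "gemini" | "grok" | "self-hosted";

interface TeamRoute {
  provider: Provider;
  model: string;             // substitute your own model IDs
  dailyTokenBudget?: number; // omit for unlimited
}

const routing: Record<string, TeamRoute> = {
  "payments-team": { provider: "openai", model: "gpt-4o", dailyTokenBudget: 2_000_000 },
  "infra-team": { provider: "self-hosted", model: "llama-3.1-70b" }, // vLLM, no cap
};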

AWS Bedrock

Frontier and open-source models in one integration

OpenAI

Full frontier lineup

Anthropic (direct)

Direct API for newest model rollouts

Azure OpenAI

GovCloud-eligible · vision deployments

Google Gemini

Multimodal · native search grounding

xAI Grok

Native live search

Self-hosted / Open-source

Any model via OpenAI- or Anthropic-compatible endpoints
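
For the self-hosted path, vLLM and Ollama both expose an OpenAI-compatible chat-completions API, so the backend calls them the same way it calls the hosted providers. A minimal sketch, assuming a vLLM server at a hypothetical internal host:

// Chat-completion call against an OpenAI-compatible endpoint.
// The host and model name are assumptions; point them at your own deployment.
const res = await fetch("http://llm.internal:8000/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "meta-llama/Llama-3.1-8B-Instruct",
    messages: [{ role: "user", content: "Summarize this diff in one sentence." }],
  }),
});
const data = await res.json();
console.log(data.choices[0].message.content);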

Deployment options

Customer-hosted, every cloud.

AWS

Best for: Standard cloud customers

Infrastructure-as-code supplied

Azure

Best for: Microsoft-aligned customers

Commercial and GovCloud paths supported

Azure GovCloud

Best for: Federal & DoD developer environments

Configured for high-assurance environments

Kubernetes / on-prem

Best for: Air-gapped or fully internal IDE deployments

Container image + standard config/secrets pattern

Distribution: private extension install, internal extension marketplace, or org-only VS Code Marketplace publishing. Backend deploys in hours.

Common questions, answered.

How is this different from GitHub Copilot?

Copilot is a closed SaaS product. Enterprise Agent for VS Code is customer-hosted in your AWS/Azure account, with your choice of LLM and your own data plane. Same multi-LLM, multi-deployment story as Enterprise Agent for ServiceNow.

Will it leak our code?

Code stays in your environment. The backend runs in your cloud. The LLM provider you choose is the only third party — and you can choose self-hosted (vLLM/Ollama) for a fully air-gapped setup.

Do we need to install anything per user?

Yes — a small VS Code extension. It can be installed via your internal extension marketplace, group policy, or org-only VS Code Marketplace publishing.

Can we customize prompts and tools?

Yes. Custom system prompts, slash commands, and tool integrations are all config-driven. Effective on the next message — no extension redeploy.
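
As a sketch of what config-driven can mean for tools, a tool integration might be declared like this (the shape and names are hypothetical, not the real schema):

// Hypothetical tool definition. The shape is illustrative, not the real schema.
interface ToolConfig {
  name: string;                          // what the model calls
  description: string;                   // how the model decides when to call it
  endpoint: string;                      // backend-side HTTP target
  inputSchema: Record<string, unknown>;  // JSON Schema for the arguments
}

const jiraLookup: ToolConfig = {
  name: "jira_lookup",
  description: "Fetch a Jira issue by key and return its summary and status",
  endpoint: "https://jira.internal/api/issue",
  inputSchema: {
    type: "object",
    properties: { key: { type: "string" } },
    required: ["key"],
  },
};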

Does it work with non-VS Code IDEs?

VS Code first. Cursor (a VS Code fork) works out of the box. We can build for almost any IDE — ask us.

Technical specifications

VS Code surfaces
Sidebar panel, inline chat, command palette, status bar
Backend runtime
Customer-hosted service with a structured agent framework
Cloud targets
AWS, Azure, Azure GovCloud, Kubernetes, on-prem
Auth (extension ↔ backend)
OAuth client credentials, REST API key, mTLS, optional SSO
LLM providers
Every major frontier and open-source provider (Bedrock, OpenAI, Anthropic, Azure OpenAI, Google Gemini, xAI Grok, plus any OpenAI- or Anthropic-compatible endpoint)
Codebase context
Symbol-aware retrieval; repo embeddings cached locally; configurable file globs
Token budgets
Fully configurable per role, including unlimited
Observability
Cloud and container logs with structured error codes, request IDs, and per-chat token tracking
Distribution
Private extension install, internal marketplace, or org-only marketplace publishing
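
To make the auth row concrete, here is how a standard OAuth2 client-credentials token fetch from the extension to the backend might look (the token URL and scope are placeholders, not the real endpoints):

// Standard OAuth2 client-credentials grant. Token URL and scope are placeholders.
const tokenResponse = await fetch("https://agent-backend.example.com/oauth/token", {
  method: "POST",
  headers: { "Content-Type": "application/x-www-form-urlencoded" },
  body: new URLSearchParams({
    grant_type: "client_credentials",
    client_id: process.env.AGENT_CLIENT_ID!,
    client_secret: process.env.AGENT_CLIENT_SECRET!,
    scope: "chat",
  }),
});
const { access_token } = await tokenResponse.json();
// Subsequent chat requests carry: Authorization: Bearer <access_token>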

Enterprise Agent for VS Code.
Try it on your codebase.

Live walkthrough on a real repo, with your choice of LLM and your deployment target.

Need a different IDE or platform? See the ServiceNow implementation or the Enterprise Agent overview (custom platforms on request).