A free, open-source, editor-agnostic tool that connects LLMs to your editor through a well-defined protocol — giving you the best AI coding experience everywhere.
A protocol-first approach to AI coding — one server connecting any editor to any LLM.
For anyone who wants to stay in the loop, right inside their editor.
Three core interactions powered by LLMs, plus a rich ecosystem of configuration and extensibility.
Pair program with an AI agent that can read, write, and refactor your code. Supports rollback, context injection, and multi-turn conversations.
Select code, describe the change, accept or reject the diff. Fast, low-ceremony edits that work great with lightweight models.
Inline AI-powered code suggestions as you type — multi-line, context-aware, similar to Copilot-style predictions.
Built-in code and plan agents, with subagents for isolated parallel tasks and context window savings.
Native file/shell/editor tools, MCP server integration, and custom tool definitions with fine-grained approval control.
Attach files, directories, cursor position, MCP resources, or AGENTS.md auto-context to any prompt.
Define coding standards, conventions, and constraints the LLM must follow — globally or per project.
Structured knowledge units that teach the LLM how to handle specific tasks. Follows the agentskills.io spec.
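As a sketch of what a skill looks like under the agentskills.io spec: a skill is a directory containing a `SKILL.md` file whose YAML frontmatter names and describes it, followed by free-form instructions. The skill name and its contents below are illustrative, not from this project:

```markdown
---
name: commit-messages
description: Teaches the agent to write Conventional Commits-style messages for this repository.
---

# Commit messages

When asked to commit, write the message as `type(scope): summary`,
keep the summary under 72 characters, and explain the "why" in the body.
```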
Slash commands like /init, /compact, /resume, and custom commands you define yourself.
Before/after event callbacks to validate, notify, or trigger side effects on tool calls and prompts.
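For instance, a before-tool-call hook could run a script that vetoes edits to protected paths. The key names below are hypothetical placeholders, not this tool's actual schema:

```json
{
  "hooks": {
    "preToolCall": [
      { "match": "write_file", "run": "scripts/check-protected-paths.sh" }
    ],
    "postPrompt": [
      { "run": "notify-send 'Prompt finished'" }
    ]
  }
}
```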
Switch between configuration presets on the fly — different models, reasoning effort, or custom setups.
OpenTelemetry integration for exporting tool usage, prompt performance, and server metrics.
Everything is driven by a single JSON config. Here are a few things you can do.
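To give a feel for the shape of such a config — every key below is illustrative, not the tool's real schema — a minimal setup with a default model, project rules, an alternate profile, and an MCP server might look like:

```json
{
  "model": "gpt-4o",
  "rules": [
    "Use 2-space indentation",
    "Never commit secrets or credentials"
  ],
  "profiles": {
    "fast": { "model": "gpt-4o-mini", "reasoningEffort": "low" }
  },
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "."]
    }
  }
}
```

A single file like this keeps model choice, rules, profiles, and tool integrations versionable alongside the project.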