Neovim meets AI (CodeCompanion)
CodeCompanion
CodeCompanion enables you to code with any LLM using built-in adapters, community adapters, or by building your own.
Our CodeCompanion configuration file is here. If you want the same setup, you can copy it into your own config.
Chat buffer
The chat buffer follows Neovim's interactive buffer style, displaying all information in a single buffer rather than requiring multiple buffers for references, user input, and LLM responses.
You can enhance your chat using:
- `#`: Add context (e.g., `#buffer`, `#lsp`)
- `/`: Insert slash commands (e.g., `/file`, `/help`)
- `@`: Include AI agent roles (e.g., `@editor`, `@cmd_runner`)
Customization: All context options and slash commands are fully configurable. You can add your own custom context or slash commands to the chat buffer.
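For instance, you can register a custom slash command under `strategies.chat.slash_commands`. The sketch below is adapted from the `git_files` example in the CodeCompanion docs; the exact `chat:add_reference` signature reflects my reading of the docs and may differ between versions:

```lua
require("codecompanion").setup({
  strategies = {
    chat = {
      slash_commands = {
        ["git_files"] = {
          description = "List git files",
          callback = function(chat)
            local handle = io.popen("git ls-files")
            if handle ~= nil then
              local result = handle:read("*a")
              handle:close()
              -- Share the command output with the LLM as a reference
              chat:add_reference({ role = "user", content = result }, "git", "<git_files>")
            else
              vim.notify("No git files available", vim.log.levels.INFO, { title = "CodeCompanion" })
            end
          end,
          opts = {
            contains_code = false,
          },
        },
      },
    },
  },
})
```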
Important: References capture data at a specific point in time and don't auto-update by default. Use `gw` to watch for buffer changes and update references automatically. Use `gp` to pin a reference to each request.
Inline Assistant
The Inline Assistant provides a streamlined strategy for quick, direct code interaction. Unlike the chat buffer, it integrates AI responses directly into your current buffer, allowing the LLM to add or replace code seamlessly.
Use `ga` to accept and `gr` to reject the Inline Assistant's suggestions.
Inline == `#buffer` + `@editor`
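If you want the Inline Assistant a keystroke away, here are a couple of illustrative mappings (the `:CodeCompanion` command is the plugin's own; the keybindings and prompt text are arbitrary examples):

```lua
-- Hypothetical mappings; <leader>ci and <leader>cr are arbitrary choices.
vim.keymap.set("n", "<leader>ci", ":CodeCompanion ", { desc = "CodeCompanion inline prompt" })
-- In visual mode the '<,'> range is prefilled, so the prompt acts on the selection.
vim.keymap.set("v", "<leader>cr", ":CodeCompanion refactor this code<CR>", { desc = "Refactor the selection" })
```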
Workflow (Agent) Model
Concept: Chat + function calling = agent
CodeCompanion implements tools as context and actions shared with an LLM via system prompts. The LLM acts as an agent by requesting tools through the chat buffer, which orchestrates their use within Neovim. You can add agents and tools to the chat buffer using the `@` symbol.
Auto-approval: Use `gta` to enable automatic tool approval, allowing function calls to execute without manual confirmation. This is particularly useful with the `@full_stack_dev` context.
Agent Functionality: Agents leverage LLM responses to call specific tools. For example, `@editor` provides file editing capabilities through the editor tool. All tools are configurable to suit your workflow.
Here's an example of a custom agent configuration:
require("codecompanion").setup({
strategies = {
chat = {
tools = {
["my_tool"] = {
description = "Run a custom task",
callback = require("user.codecompanion.tools.my_tool")
opts = {
requires_approval = false, -- Set to true if you want to require user approval before executing the tool
}
},
groups = {
["my_group"] = {
description = "A custom agent combining tools",
system_prompt = "Describe what the agent should do",
tools = {
"cmd_runner",
"editor",
"my_tool", -- Include your custom tool
-- Add your own tools or reuse existing ones
},
},
},
},
},
},
})
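With this in place, typing `@my_tool` or `@my_group` in the chat buffer should expose the tool or the whole group (along with its system prompt) to the LLM, much like the built-in `@full_stack_dev` group bundles the standard tools.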
How Tools Work
When you add a tool to the chat buffer, CodeCompanion shares that tool's JSON schema with the LLM and instructs it to return structured output matching the schema. The chat buffer parses the LLM's response, detects tool usage, and triggers the agent system.
The agent initiates a sequence of events where tools are queued and processed sequentially. Each tool's output is shared back to the LLM via the chat buffer, creating a continuous workflow.
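To make this concrete, here is a rough sketch of what the `my_tool` module referenced above might contain. The OpenAI-style function-calling schema is standard; the surrounding field names (`cmds`, `schema`) follow my reading of the CodeCompanion tool docs and may differ between versions:

```lua
-- user/codecompanion/tools/my_tool.lua (the hypothetical module from the setup above)
return {
  name = "my_tool",
  cmds = {
    -- Each command receives the agent, the LLM-supplied arguments and any prior input
    function(self, args, input)
      return { status = "success", data = "ran my_tool with " .. vim.inspect(args) }
    end,
  },
  schema = {
    type = "function",
    ["function"] = {
      name = "my_tool",
      description = "Run a custom task",
      parameters = {
        type = "object",
        properties = {
          task = { type = "string", description = "The task to run" },
        },
        required = { "task" },
      },
    },
  },
}
```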
Workflow (Action Palette)
How it works
When initiated from the Action Palette, workflows attach themselves to a chat buffer through a subscription mechanism. The workflow subscribes to the conversation and data flow in the chat buffer. After the LLM sends a response, the chat is automatically unsubscribed to prevent the workflow from executing again. For details, see the CodeCompanion docs on workflows.
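As a hedged sketch (the workflow name and prompt text are made up; the `strategy = "workflow"` and `auto_submit` format follows the CodeCompanion docs), a two-step workflow in the prompt library might look like:

```lua
require("codecompanion").setup({
  prompt_library = {
    ["Refactor then test"] = { -- hypothetical workflow name
      strategy = "workflow",
      description = "Refactor the selected code, then ask for tests",
      prompts = {
        {
          -- Step 1: submitted automatically when the workflow starts
          { role = "user", content = "Refactor the selected code for readability", opts = { auto_submit = true } },
        },
        {
          -- Step 2: runs after the LLM answers step 1, then the subscription ends
          { role = "user", content = "Now write unit tests for the refactored code", opts = { auto_submit = true } },
        },
      },
    },
  },
})
```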
Summary
Core Functionality
- Chat Buffer: Interactive LLM conversations in Neovim
- Inline Assistant: AI code writing with diff preview
- Action Palette: Quick access via `:CodeCompanionActions`
- Command Generation: Custom commands with `:CodeCompanionCmd`
Context Integration
- Variables: `#buffer`, `#lsp`, `#viewport` for current state
- Slash Commands: `/buffer`, `/file`, `/fetch`, `/help`, `/symbols`, `/terminal`, `/now`
- Visual Selection: Send code for AI analysis
AI Agents & Tools
- `@editor`: Direct buffer editing
- `@cmd_runner`: Shell command execution
- `@files`: File system operations
- `@full_stack_dev`: All tools combined
Adapter Support
- Multiple LLMs: Anthropic, OpenAI, GitHub Copilot
- Flexible Config: Custom APIs, environment variables
- Copilot Integration: Works with existing installations
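A minimal sketch of wiring up adapters, assuming the `adapters.extend` pattern from the CodeCompanion docs and an `ANTHROPIC_API_KEY` in your environment:

```lua
require("codecompanion").setup({
  adapters = {
    anthropic = function()
      return require("codecompanion.adapters").extend("anthropic", {
        env = { api_key = "ANTHROPIC_API_KEY" }, -- read from the environment
      })
    end,
  },
  strategies = {
    chat = { adapter = "anthropic" },
    inline = { adapter = "copilot" }, -- reuse an existing Copilot installation
  },
})
```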
Built-in Prompts
- `/commit`: Generate commit messages
- `/explain`: Code explanations
- `/fix`: Code improvements
- `/lsp`: LSP diagnostics
A few more things
- Code Indexing: Add VectorCode to index your codebase so relevant context is added to the chat buffer automatically
- Model Context Protocol: Add mcphub.nvim to support MCP, which is similar to function calling but more powerful and flexible (see the sketch after this list)
- Web Search: Add web search capability to Chat using services like Tavily
- Create workflows: Simple workflows can chain multiple steps/tasks, which is great for complex operations; agent workflows can automate reflection
- Multi-agent collaboration: Enable collaborative workflows between multiple agents
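For the MCP integration mentioned above, the mcphub.nvim README registers it as a CodeCompanion extension roughly like this (a sketch; option names may change between versions):

```lua
require("codecompanion").setup({
  extensions = {
    mcphub = {
      callback = "mcphub.extensions.codecompanion",
      opts = {
        make_vars = true,           -- expose MCP resources as #variables
        make_slash_commands = true, -- expose MCP prompts as /slash commands
        show_result_in_chat = true, -- render MCP tool output in the chat buffer
      },
    },
  },
})
```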