Void is an open source AI code editor forked from VS Code that connects directly to any LLM without middlemen. No vendor lock-in, no data concerns – you own your data completely. Supports Agent Mode with any model including open source options, Checkpoints for version control, and local deployment via Ollama.




Every developer knows that moment of hesitation before pasting sensitive code into an AI assistant. Where does this code go? Who sees it? And then there's that locked-in feeling when you depend on a single AI provider for everything. We've all been there.
That's exactly why we built Void — an open source AI code editor built as a fork of VS Code, designed from the ground up to give you complete control over your data and your choice of AI models.
Void connects directly to your LLM provider of choice, with no intermediary server whatsoever. Your code and conversations travel straight from your editor to the AI model you select. No middleman. No private backend logging your prompts. Just you and the model.
For teams and individuals who need absolute data sovereignty, Void supports fully local deployment through Ollama and vLLM. Your code never leaves your machine. That's the level of control we believe developers deserve.
Today, our community has grown to 28.3k GitHub Stars, 2.3k Forks, and 46 Contributors who have collectively made 2,771 commits. We're proud to be Y Combinator-backed, built by Glass Devtools, Inc., with Andrew and Mathew Pareles leading development. Our entire codebase is open source under the Apache 2.0 license — every line inspectable, every feature auditable.
But we need to be transparent with you: Void has paused active development to explore new coding paradigms. The editor will continue running, but without maintenance, some existing features may stop working over time. We believe you deserve to know this upfront before building your workflow around it.
What makes Void different isn't just one feature — it's the philosophy that you should own your AI coding experience entirely. Here's how that translates into real functionality.
Tab Smart Completion lets you press Tab and instantly apply AI-generated code suggestions. Under the hood, Void uses custom Fill-in-the-Middle (FIM) model support to understand your code context and predict what comes next. It's like having an intelligent pair programmer who anticipates your next line.
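To make the idea concrete, here is a minimal sketch of Fill-in-the-Middle prompting. The sentinel tokens below follow the StarCoder-family convention; other FIM-capable models define different markers, and Void's internal prompt format is not documented here, so treat this purely as an illustration of the technique.

```typescript
// Illustrative FIM prompt construction: the model is asked to generate
// the text that belongs between the code before the cursor (prefix)
// and the code after it (suffix).
// Sentinel tokens follow the StarCoder convention; real models vary.
function buildFimPrompt(prefix: string, suffix: string): string {
  return `<fim_prefix>${prefix}<fim_suffix>${suffix}<fim_middle>`;
}

// Example: the cursor sits inside a function body.
const prompt = buildFimPrompt(
  "function add(a: number, b: number) {\n  return ",
  ";\n}"
);
```

The completion the model returns for `<fim_middle>` is what the editor offers when you press Tab.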
Quick Edit activates with Ctrl+K for inline editing of selected code. Void's FIM-prompting technology analyzes your selection, generates contextually appropriate modifications, and manages edit history so you can always step back. No need to describe your change in a chat — just select, modify, and move on.
Chat Mode offers three distinct interactions: standard conversation for asking questions, Agent Mode for full autonomous editing capabilities, and Gather Mode for read-only exploration of your codebase. You can reference specific files or entire folders using @file and @folder mentions, making it effortless to discuss code context with AI.
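As a rough sketch of how mentions can be turned into model context, the snippet below extracts file and folder references from a message. The `@file:path` / `@folder:path` syntax here is purely illustrative; Void's actual UI resolves mentions through a picker, and these helper names are our own.

```typescript
// Hypothetical mention extraction: collect every capture of a global
// regex, then split a chat message into file and folder references
// so the referenced paths can be loaded as model context.
function extractAll(re: RegExp, text: string): string[] {
  const out: string[] = [];
  let m: RegExpExecArray | null;
  while ((m = re.exec(text)) !== null) out.push(m[1]);
  return out;
}

function extractMentions(message: string): { files: string[]; folders: string[] } {
  return {
    files: extractAll(/@file:(\S+)/g, message),     // illustrative syntax
    folders: extractAll(/@folder:(\S+)/g, message), // illustrative syntax
  };
}

const mentions = extractMentions("Why does @file:src/auth.ts fail? See @folder:tests");
```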
Agent Mode is where Void truly shines. This isn't just a chatbot — it's a fully privileged AI agent that can search, create, edit, and delete files and folders in your project. It accesses your terminal, integrates with MCP tools, and can automatically fix lint errors. Here's what makes it special: Void can run Agent Mode with any model, even those that don't natively support tool calling. That means you can use open source models like DeepSeek R1, Gemma 3, or any custom model — not just the big-name providers.
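One common way to run agents on models without native tool calling is to prompt the model to emit a structured JSON object and parse it out of the reply. The sketch below shows that idea under our own assumptions; the tool names and JSON schema are illustrative, not Void's actual protocol.

```typescript
// Illustrative tool-call emulation for models without native tool support:
// the model is instructed (via the system prompt, not shown) to reply with
// a JSON object like {"tool": "...", "args": {...}} when it wants to act.
// Tool names here (read_file, etc.) are hypothetical.
type ToolCall = { tool: string; args: Record<string, string> };

function parseToolCall(modelOutput: string): ToolCall | null {
  // Models often wrap JSON in prose or code fences; grab the first
  // brace-delimited span and try to parse it.
  const match = modelOutput.match(/\{[\s\S]*\}/);
  if (!match) return null;
  try {
    const parsed = JSON.parse(match[0]);
    if (typeof parsed.tool === "string" && typeof parsed.args === "object") {
      return parsed as ToolCall;
    }
  } catch {
    // Malformed JSON: treat the reply as plain chat, not a tool call.
  }
  return null;
}

const reply = 'Opening it now:\n{"tool": "read_file", "args": {"path": "src/main.ts"}}';
const call = parseToolCall(reply);
```

When parsing fails, the agent loop can simply show the reply as chat, which is what makes this approach workable across arbitrary models.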
Checkpoints gives you visual version control for AI-generated edits. Every time Void's AI modifies your code, it creates a checkpoint you can visualize as a diff and jump back to. It's an undo system specifically designed for AI assistance — because sometimes the AI's direction isn't quite right, and you need to explore alternatives.
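The core of a checkpoint system is just a snapshot-and-rollback store. Here's a minimal sketch of that idea; Void's real implementation also renders visual diffs, and this class and its API are our own invention for illustration.

```typescript
// Minimal checkpoint sketch: snapshot file content before each AI edit
// so any earlier state can be restored. Restoring discards later
// checkpoints, mirroring "jump back and explore an alternative."
class CheckpointStore {
  private snapshots: string[] = [];

  // Record content as it was before an AI edit; returns a checkpoint id.
  save(content: string): number {
    this.snapshots.push(content);
    return this.snapshots.length - 1;
  }

  // Jump back to an earlier checkpoint, dropping everything after it.
  restore(id: number): string {
    const content = this.snapshots[id];
    this.snapshots.length = id + 1;
    return content;
  }
}

const store = new CheckpointStore();
const v0 = store.save("const x = 1;\n"); // before first AI edit
const v1 = store.save("const x = 2;\n"); // before second AI edit
const restored = store.restore(v0);
```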
Fast Apply skips the conversational summary entirely and directly outputs Search/Replace blocks. This makes editing large files — we're talking 1000+ lines — remarkably fast. You tell Void what to change, and it delivers the patch without the back-and-forth.
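Applying a Search/Replace block boils down to locating the exact searched text and splicing in the replacement, which is why it stays fast even on large files. The sketch below assumes the search and replacement texts have already been parsed out of the model's output; Void's exact block format isn't specified here.

```typescript
// Illustrative Search/Replace application: substitute the first exact
// occurrence of the searched text. Exact matching keeps the edit
// unambiguous and cheap even in 1000+ line files.
function applySearchReplace(source: string, search: string, replace: string): string {
  const at = source.indexOf(search);
  if (at === -1) {
    throw new Error("search text not found; patch cannot be applied");
  }
  return source.slice(0, at) + replace + source.slice(at + search.length);
}

const patched = applySearchReplace(
  "function greet() {\n  return 'hi';\n}\n",
  "return 'hi';",
  "return 'hello';"
);
```

Failing loudly when the search text is missing matters: it signals that the model's view of the file has drifted and the patch should be regenerated rather than applied blindly.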
MCP Support (Model Context Protocol), added in v1.4.1, connects Void to external tools and services, expanding what your AI assistant can accomplish beyond your codebase.
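MCP servers are typically declared in a JSON configuration that names each server and the command used to launch it. The fragment below follows the common `mcpServers` convention used by MCP clients; the server name and path are placeholders, and Void's exact configuration file and location may differ.

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/project"]
    }
  }
}
```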
Void attracts developers who prioritize control, privacy, and flexibility. Here are the real scenarios our community members have shared:
Privacy-First Developers are our core audience. If you've ever hesitated before pasting proprietary code into an AI tool, Void's architecture eliminates that concern. Your messages go directly to your chosen LLM provider — there's no Void backend logging, storing, or processing your data. Run Ollama locally, and your code never leaves your machine at all. One community member told us they switched to Void specifically because their company's security policy prohibited cloud-based AI coding tools. Now they develop with AI assistance while remaining fully compliant.
Model Freedom Seekers love that Void doesn't force you into a single AI provider ecosystem. Use Claude for reasoning-heavy tasks, GPT-4.1 for code generation, Gemini for multimodal capabilities, or DeepSeek for cost efficiency — switch between them based on the task. Some of our users maintain different model configs for different project types. The point is: you're in control, not us.
VS Code Power Users appreciate that Void requires zero retraining. Your muscle memory works. Your favorite extensions work (within reason). Your themes and settings carry over. Several community members told us they evaluated Cursor and other AI editors but couldn't justify the learning curve. With Void, they got AI assistance without disrupting their workflow.
Open Source Model Enthusiasts have discovered something unique in Void: the ability to run Agent Mode with models that don't officially support tool calling. Using Ollama with Llama 3, Qwen, or DeepSeek V3 locally means you can have a fully autonomous AI coding assistant running on your own hardware. For teams building with open source models, this is genuinely valuable — you get the Agent Mode experience without being forced to use proprietary models.
Enterprise Security Teams appreciate that Void is fully auditable. Every line of code is available on GitHub. You can self-host your LLM with Ollama or vLLM, run everything offline, and pass security audits because there's no opaque backend. Several enterprise users have told us they chose Void specifically because their compliance requirements demanded full source code access.
Start with cloud API keys (OpenAI or Anthropic) for the smoothest experience. Once you're comfortable, experiment with Ollama for local deployment. Privacy-sensitive work? Go fully local. Want the latest model capabilities? Use cloud APIs. Void supports both seamlessly — you can even switch between them based on what you're working on.
Getting Void running takes about 10 minutes. Here's how to go from download to your first AI-assisted edit.
Step 1: Download and Install Visit voideditor.com/download-beta and select your platform — we support Mac (Intel and ARM), Windows (x64 and ARM), and Linux. Download the installer and run through the standard setup process.
Step 2: Configure Your LLM On first launch, Void prompts you to add your model configuration. You have two paths: enter an API key for a cloud provider (OpenAI, Anthropic, Google, and others), or point Void at a local model served through Ollama or vLLM.
For first-time users, we recommend starting with Claude 3.7 or GPT-4.1 via API — they provide the most reliable Agent Mode experience while you're learning.
Step 3: Try Tab Completion Start typing code in any file. When Void suggests a completion, press Tab to accept it. The AI analyzes your context and predicts what you're likely writing next. It's that simple.
Step 4: Try Quick Edit Select a block of code you want to modify, press Ctrl+K (Cmd+K on Mac), and describe what you want to change. Void generates the modified code inline. Accept with Tab or refine your request.
Step 5: Explore Agent Mode Open the Chat panel on the left. Switch to Agent Mode and type a task like "Add error handling to this function" or "Refactor this module to use async/await." Watch as Void reads, edits, and creates files autonomously. For best results with Agent Mode, use models with strong reasoning capabilities.
Running local models via Ollama requires adequate hardware. We recommend at least 16GB RAM for smooth performance. If you experience lag, start with cloud API models before trying local deployment.
Void's architecture reflects our philosophy: transparency, extensibility, and developer control.
The foundation is a fork of VS Code v1.99.0, which means Void inherits the full VS Code ecosystem — your extensions, themes, and settings work as expected. We built our AI features on top of this solid base rather than reinventing the wheel.
Our codebase is primarily TypeScript (95.3%), with minimal Rust (0.7%) for performance-critical operations, JavaScript (1.2%), and CSS (1.4%). Every line is open source under Apache 2.0 — you can audit, fork, and contribute.
The latest release is v1.4.1 (Beta Patch #7, June 5, 2025), which added MCP support, AI commit generation, and visual diffs for the Edit tool. Check our changelog at voideditor.com/changelog for the full version history.
What makes our LLM integration unique:
Supported models span both cloud and local:
Void itself is completely free and open source. However, you'll need to provide your own LLM API key (from OpenAI, Anthropic, Google, etc.) or run local models via Ollama. Cloud API calls are billed by those providers; local models require your own compute resources.
Void is fully open source (Cursor is closed-source), meaning you can audit every line of code. Void connects directly to any LLM without an intermediary — no vendor lock-in. Your data goes straight from the editor to your chosen model. And because Void is a VS Code fork, there's no new interface to learn.
The software itself is free. But yes, AI capabilities require either paid API access (OpenAI, Anthropic, Google bill you directly) or your own hardware for local deployment. Void doesn't add any markup — you pay exactly what the model provider charges.
Yes. Void integrates with Ollama and vLLM, allowing fully local AI inference. Your code and prompts never leave your machine. This is ideal for privacy-sensitive work or organizations with security requirements.
All of them. That's Void's differentiator. We've built runtime support that lets any model — even those without native tool calling capabilities — run Agent Mode. This includes open source models like DeepSeek R1, Google Gemma 3, Llama variants, Qwen, and Mistral. You're not limited to models that officially support tools.
Void has paused active development to explore new approaches to coding. The editor will continue functioning, but without maintenance, some features may degrade over time. We are not currently reviewing issues or pull requests, though we respond to email inquiries at hello@voideditor.com. There's no timeline for resuming development.
We welcome contributions via GitHub pull requests. Void is licensed under Apache 2.0, so you're free to fork, modify, and distribute your changes. Check our GitHub repository for contribution guidelines. Even though development is paused, we appreciate community involvement.