

Void - Open source AI code editor with full data control

Void is an open source AI code editor forked from VS Code that connects directly to any LLM without middlemen. No vendor lock-in, no data concerns – you own your data completely. Supports Agent Mode with any model including open source options, Checkpoints for version control, and local deployment via Ollama.

Tags: AI Coding · Featured · Open Pricing · IDE Plugin · Code Generation · Large Language Model · Code Completion · Open Source

What is Void

Every developer knows that moment of hesitation before pasting sensitive code into an AI assistant. Where does this code go? Who sees it? And then there's the locked-in feeling of depending on a single AI provider for everything. We've all been there.

That's exactly why we built Void — an open source AI code editor built as a fork of VS Code, designed from the ground up to give you complete control over your data and your choice of AI models.

Void connects directly to your LLM provider of choice, with no intermediary server whatsoever. Your code and conversations travel straight from your editor to the AI model you select. No middleman. No private backend logging your prompts. Just you and the model.

For teams and individuals who need absolute data sovereignty, Void supports fully local deployment through Ollama and vLLM. Your code never leaves your machine. That's the level of control we believe developers deserve.
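The direct-connection model is easy to picture. As a hedged sketch (the endpoint and body shape follow Ollama's public `/api/generate` API, which listens on port 11434 by default; the exact request Void sends is internal to the editor):

```typescript
// Sketch: what a "no middleman" completion request looks like against a
// locally running Ollama instance. Nothing here leaves your machine.
interface OllamaRequest {
  model: string;   // e.g. "llama3" — must already be pulled via `ollama pull`
  prompt: string;
  stream: boolean; // false = return the full completion in one response
}

function buildLocalRequest(model: string, prompt: string): OllamaRequest {
  return { model, prompt, stream: false };
}

// The call goes straight from editor to model, with no backend in between:
//   fetch("http://localhost:11434/api/generate", {
//     method: "POST",
//     body: JSON.stringify(buildLocalRequest("llama3", "Write a binary search")),
//   });
```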

Today, our community has grown to 28.3k GitHub Stars, 2.3k Forks, and 46 Contributors who have collectively made 2,771 commits. We're proud to be Y Combinator-backed, built by Glass Devtools, Inc., with Andrew and Mathew Pareles leading development. Our entire codebase is open source under the Apache 2.0 license — every line inspectable, every feature auditable.

TL;DR
  • Forked from VS Code (v1.99.0), preserving your existing workflow, themes, and keyboard shortcuts
  • Direct connection to any LLM — OpenAI, Claude, Gemini, DeepSeek, Llama, Qwen, Mistral, Grok, and more
  • Checkpoints feature provides visual version control for AI-generated edits
  • Fully open source and self-hostable for complete data control

But we need to be transparent with you: Void has paused active development to explore new coding paradigms. The editor will continue running, but without maintenance, some existing features may stop working over time. We believe you deserve to know this upfront before building your workflow around it.


Core Features of Void

What makes Void different isn't just one feature — it's the philosophy that you should own your AI coding experience entirely. Here's how that translates into real functionality.

Tab Smart Completion lets you press Tab and instantly apply AI-generated code suggestions. Under the hood, Void uses custom Fill-in-the-Middle (FIM) model support to understand your code context and predict what comes next. It's like having an intelligent pair programmer who anticipates your next line.

Quick Edit activates with Ctrl+K for inline editing of selected code. Void's FIM-prompting technology analyzes your selection, generates contextually appropriate modifications, and manages edit history so you can always step back. No need to describe your change in a chat — just select, modify, and move on.
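Fill-in-the-Middle prompting can be sketched in a few lines. The sentinel tokens below follow the convention popularized by StarCoder-style models; real FIM token spellings vary per model family, so treat these as illustrative rather than what Void emits:

```typescript
// Illustrative FIM prompt assembly: code before the cursor becomes the
// prefix, code after it the suffix, and the model generates the middle.
function buildFimPrompt(prefix: string, suffix: string): string {
  return `<|fim_prefix|>${prefix}<|fim_suffix|>${suffix}<|fim_middle|>`;
}

const fimPrompt = buildFimPrompt(
  "function add(a: number, b: number) {\n  return ",
  ";\n}"
);
// The model's completion (e.g. "a + b") is what gets inserted at the cursor.
```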

Chat Mode offers three distinct interactions: standard conversation for asking questions, Agent Mode for full autonomous editing capabilities, and Gather Mode for read-only exploration of your codebase. You can reference specific files or entire folders using @file and @folder mentions, making it effortless to discuss code context with AI.
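Mention handling of this kind boils down to a small extraction step before the chat message is sent. This is a hypothetical sketch (the `@file:path` syntax here is invented for illustration; Void's real parser and mention syntax are internal to the editor):

```typescript
// Hypothetical sketch: pull @file and @folder references out of a chat
// message so their contents can be attached to the model's context.
interface Mention {
  kind: "file" | "folder";
  path: string;
}

function parseMentions(message: string): Mention[] {
  const re = /@(file|folder):(\S+)/g;
  const out: Mention[] = [];
  for (const m of message.matchAll(re)) {
    out.push({ kind: m[1] as "file" | "folder", path: m[2] });
  }
  return out;
}
```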

Agent Mode is where Void truly shines. This isn't just a chatbot — it's a full-privileged AI agent that can search, create, edit, and delete files and folders in your project. It accesses your terminal, integrates with MCP tools, and can automatically fix lint errors. Here's what makes it special: Void can run Agent Mode with any model, even those that don't natively support tool calling. That means you can use open source models like R1, Gemma3, or any custom model — not just the big-name providers.
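Running an agent on a model without native tool calling usually comes down to prompting the model to emit its tool calls as structured text and parsing them back out. A hedged sketch of that parsing step (the JSON shape here is an assumption for illustration, not Void's actual protocol):

```typescript
// Sketch: recover a tool call from a model that has no tool-calling API by
// asking it to answer with a JSON object and parsing the reply ourselves.
interface ToolCall {
  tool: string;
  args: Record<string, unknown>;
}

function parseToolCall(modelOutput: string): ToolCall | null {
  // Find the first {...} span in the reply; models often wrap it in prose.
  const match = modelOutput.match(/\{[\s\S]*\}/);
  if (!match) return null;
  try {
    const parsed = JSON.parse(match[0]);
    if (
      typeof parsed.tool === "string" &&
      typeof parsed.args === "object" &&
      parsed.args !== null
    ) {
      return { tool: parsed.tool, args: parsed.args };
    }
  } catch {
    // Malformed JSON: fall through and report no tool call.
  }
  return null;
}
```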

Checkpoints gives you visual version control for AI-generated edits. Every time Void's AI modifies your code, it creates a checkpoint you can visualize as a diff and jump back to. It's an undo system specifically designed for AI assistance — because sometimes the AI's direction isn't quite right, and you need to explore alternatives.
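The checkpoint idea can be modeled as a stack of snapshots taken before each AI edit. This is a toy sketch, not Void's implementation (which also renders visual diffs inside the editor):

```typescript
// Toy checkpoint store: snapshot file contents before every AI edit so any
// edit can be rolled back in last-in, first-out order.
class Checkpoints {
  private snapshots: string[] = [];

  save(content: string): void {
    this.snapshots.push(content);
  }

  // Jump back to the most recent checkpoint, or null if none remain.
  rollback(): string | null {
    return this.snapshots.pop() ?? null;
  }
}

const cp = new Checkpoints();
cp.save("const x = 1;"); // taken before the AI edit
// ...AI rewrites the file, direction turns out wrong...
const restored = cp.rollback(); // back to "const x = 1;"
```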

Fast Apply skips the conversational summary entirely and directly outputs Search/Replace blocks. This makes editing large files — we're talking 1000+ lines — remarkably fast. You tell Void what to change, and it delivers the patch without the back-and-forth.
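Search/Replace patches are fast on large files because only the changed span is generated and applied, not the whole file. A sketch of applying one such pair (Void's exact block format isn't documented here, so the function signature is illustrative):

```typescript
// Apply a single search/replace pair to file content. Emitting these pairs
// instead of rewriting the whole file is what keeps 1000+ line edits fast.
function applySearchReplace(
  content: string,
  search: string,
  replace: string
): string {
  const idx = content.indexOf(search);
  if (idx === -1) {
    throw new Error("search block not found in file");
  }
  // Replace only the first occurrence, like a patch hunk.
  return content.slice(0, idx) + replace + content.slice(idx + search.length);
}
```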

MCP Support (Model Context Protocol), added in v1.4.1, connects Void to external tools and services, expanding what your AI assistant can accomplish beyond your codebase.
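MCP is built on JSON-RPC 2.0, so a tool invocation over it is just a structured message. A minimal sketch of the message shape (the `tools/call` method and `name`/`arguments` params follow the public MCP spec; how Void wires this up internally is not shown here):

```typescript
// Sketch of an MCP "tools/call" request, per the Model Context Protocol's
// JSON-RPC 2.0 framing. Transport (stdio or HTTP) is handled by the client.
interface McpRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params: Record<string, unknown>;
}

function buildToolCall(
  id: number,
  tool: string,
  args: Record<string, unknown>
): McpRequest {
  return {
    jsonrpc: "2.0",
    id,
    method: "tools/call",
    params: { name: tool, arguments: args },
  };
}
```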

Pros

  • Completely open source: Every line of code is inspectable under Apache 2.0
  • Any model, any provider: From OpenAI to Ollama, you're not locked in
  • Zero migration pain: Forked from VS Code, so your themes and shortcuts work instantly
  • Checkpoints for AI edits: Version control specifically designed for AI-assisted changes
  • Agent Mode on any model: Run autonomous AI agents even with open source models lacking tool calling support

Cons

  • Development paused: No active maintenance, features may degrade over time
  • No official support channel: Community-driven, no guaranteed response times
  • Self-sufficiency required: You're responsible for your own LLM API keys or local deployment
  • Beta stability: Some features may be buggy or inconsistent

Who's Using Void

Void attracts developers who prioritize control, privacy, and flexibility. Here are the real scenarios our community members have shared:

Privacy-First Developers are our core audience. If you've ever hesitated before pasting proprietary code into an AI tool, Void's architecture eliminates that concern. Your messages go directly to your chosen LLM provider — there's no Void backend logging, storing, or processing your data. Run Ollama locally, and your code never leaves your machine at all. One community member told us they switched to Void specifically because their company's security policy prohibited cloud-based AI coding tools. Now they develop with AI assistance while remaining fully compliant.

Model Freedom Seekers love that Void doesn't force you into a single AI provider ecosystem. Use Claude for reasoning-heavy tasks, GPT-4.1 for code generation, Gemini for multimodal capabilities, or DeepSeek for cost efficiency — switch between them based on the task. Some of our users maintain different model configs for different project types. The point is: you're in control, not us.

VS Code Power Users appreciate that Void requires zero retraining. Your muscle memory works. Your favorite extensions work (within reason). Your themes and settings carry over. Several community members told us they evaluated Cursor and other AI editors but couldn't justify the learning curve. With Void, they got AI assistance without disrupting their workflow.

Open Source Model Enthusiasts have discovered something unique in Void: the ability to run Agent Mode with models that don't officially support tool calling. Using Ollama with Llama 3, Qwen, or DeepSeek V3 locally means you can have a fully autonomous AI coding assistant running on your own hardware. For teams building with open source models, this is genuinely valuable — you get the Agent Mode experience without being forced to use proprietary models.

Enterprise Security Teams appreciate that Void is fully auditable. Every line of code is available on GitHub. You can self-host your LLM with Ollama or vLLM, run everything offline, and pass security audits because there's no opaque backend. Several enterprise users have told us they chose Void specifically because their compliance requirements demanded full source code access.

💡 Choosing Your Setup

Start with cloud API keys (OpenAI or Anthropic) for the smoothest experience. Once you're comfortable, experiment with Ollama for local deployment. Privacy-sensitive work? Go fully local. Want the latest model capabilities? Use cloud APIs. Void supports both seamlessly — you can even switch between them based on what you're working on.


Quick Start

Getting Void running takes about 10 minutes. Here's how to go from download to your first AI-assisted edit.

Step 1: Download and Install

Visit voideditor.com/download-beta and select your platform — we support Mac (Intel and ARM), Windows (x64 and ARM), and Linux. Download the installer and run through the standard setup process.

Step 2: Configure Your LLM

On first launch, Void prompts you to add your model configuration. You have two paths:

  • Cloud APIs: Enter your API key for OpenAI, Anthropic, or Google. These services bill you directly for usage — Void itself is free.
  • Local with Ollama: Connect to a locally running Ollama instance. This is free but requires you to have Ollama installed and models pulled.

For first-time users, we recommend starting with Claude 3.7 or GPT-4.1 via API — they provide the most reliable Agent Mode experience while you're learning.

Step 3: Try Tab Completion

Start typing code in any file. When Void suggests a completion, press Tab to accept it. The AI analyzes your context and predicts what you're likely writing next. It's that simple.

Step 4: Try Quick Edit

Select a block of code you want to modify, press Ctrl+K (Cmd+K on Mac), and describe what you want to change. Void generates the modified code inline. Accept with Tab or refine your request.

Step 5: Explore Agent Mode

Open the Chat panel on the left. Switch to Agent Mode and type a task like "Add error handling to this function" or "Refactor this module to use async/await." Watch as Void reads, edits, and creates files autonomously. For best results with Agent Mode, use models with strong reasoning capabilities.

💡 Performance Note

Running local models via Ollama requires adequate hardware. We recommend at least 16GB RAM for smooth performance. If you experience lag, start with cloud API models before trying local deployment.


Technical Details

Void's architecture reflects our philosophy: transparency, extensibility, and developer control.

The foundation is a fork of VS Code v1.99.0, which means Void inherits the full VS Code ecosystem — your extensions, themes, and settings work as expected. We built our AI features on top of this solid base rather than reinventing the wheel.

Our codebase is primarily TypeScript (95.3%), with minimal Rust (0.7%) for performance-critical operations, JavaScript (1.2%), and CSS (1.4%). Every line is open source under Apache 2.0 — you can audit, fork, and contribute.

The latest release is v1.4.1 (Beta Patch #7, June 5, 2025), which added MCP support, AI commit generation, and visual diffs for the Edit tool. Check our changelog at voideditor.com/changelog for the full version history.

What makes our LLM integration unique:

  • Direct provider connection: No Void servers in the middle. Your prompts go straight from editor to model.
  • Any-model Agent Mode: We've built a runtime that lets any model execute Agent Mode, even without native tool calling support. This opens up open source models for autonomous agent workflows.
  • Dynamic Context Squashing: Void intelligently compresses context to stay within model token limits while preserving what matters.
  • Checkpoint system: Every AI edit creates a retrievable checkpoint with visual diffs.
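Context squashing of this kind can be pictured as dropping the oldest turns until the conversation fits the model's window. A simplified sketch (real implementations, presumably including Void's, summarize rather than drop, and count tokens with the model's actual tokenizer):

```typescript
// Simplified context squashing: keep the newest messages whole and discard
// the oldest until an approximate token budget is met.
function approxTokens(text: string): number {
  return Math.ceil(text.length / 4); // rough heuristic: ~4 chars per token
}

function squashContext(messages: string[], budget: number): string[] {
  const kept: string[] = [];
  let used = 0;
  // Walk from newest to oldest so recent context survives.
  for (let i = messages.length - 1; i >= 0; i--) {
    const cost = approxTokens(messages[i]);
    if (used + cost > budget) break;
    kept.unshift(messages[i]);
    used += cost;
  }
  return kept;
}
```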

Supported models span both cloud and local:

  • Cloud: OpenAI (o3, o4-mini), Anthropic (Claude 3.7, Claude 4), Google (Gemini 2.5), xAI (Grok 3)
  • Local: Ollama (Llama, Qwen, Mistral), DeepSeek V3, Google Gemma, vLLM
Pros

  • VS Code ecosystem compatibility: Extensions, themes, and settings carry over seamlessly
  • Complete data sovereignty: Direct LLM connection with optional fully local deployment
  • Unprecedented model flexibility: Run any model in Agent Mode, including open source models without tool calling
  • Fully auditable: Apache 2.0 license, entire codebase on GitHub

Cons

  • Development status: Active development is paused with no confirmed return timeline
  • Maintenance uncertainty: No regular bug fixes or security patches
  • Beta limitations: Some features may be experimental or unstable
  • Self-support model: No dedicated support team; community resources are limited

FAQ

Is Void free to use?

Void itself is completely free and open source. However, you'll need to provide your own LLM API key (from OpenAI, Anthropic, Google, etc.) or run local models via Ollama. Cloud API calls are billed by those providers; local models require your own compute resources.

How is Void different from Cursor?

Void is fully open source (Cursor is closed-source), meaning you can audit every line of code. Void connects directly to any LLM without an intermediary — no vendor lock-in. Your data goes straight from the editor to your chosen model. And because Void is a VS Code fork, there's no new interface to learn.

Do I need to pay for AI features?

The software itself is free. But yes, AI capabilities require either paid API access (OpenAI, Anthropic, Google bill you directly) or your own hardware for local deployment. Void doesn't add any markup — you pay exactly what the model provider charges.

Does Void support local deployment?

Yes. Void integrates with Ollama and vLLM, allowing fully local AI inference. Your code and prompts never leave your machine. This is ideal for privacy-sensitive work or organizations with security requirements.

Which models support Agent Mode?

All of them. That's Void's differentiator. We've built runtime support that lets any model — even those without native tool calling capabilities — run Agent Mode. This includes open source models like DeepSeek R1, Google Gemma 3, Llama variants, Qwen, and Mistral. You're not limited to models that officially support tools.

What's the current development status?

Void has paused active development to explore new approaches to coding. The editor will continue functioning, but without maintenance, some features may degrade over time. We are not currently reviewing issues or pull requests, though we respond to email inquiries at hello@voideditor.com. There's no timeline for resuming development.

How can I contribute?

We welcome contributions via GitHub pull requests. Void is licensed under Apache 2.0, so you're free to fork, modify, and distribute your changes. Check our GitHub repository for contribution guidelines. Even though development is paused, we appreciate community involvement.
