WSL 2 + Docker: Running Linux Containers Natively on Windows
For developers and sysadmins working on Windows but dealing with Linux-based workloads, there’s usually a choice: spin up a VM, SSH into a remote box, or try to simulate it locally. WSL 2 changes that equation by running a full Linux kernel directly under Windows. Layer Docker on top of that, and it gets a lot more interesting.
Instead of pretending to be Linux, the system actually behaves like it. That small difference makes things smoother than expected.
What Each Piece Handles
| Component | Function | How It Plays in the Stack |
| --- | --- | --- |
| WSL 2 | Runs a Linux distribution on top of Windows | Acts as a lightweight Linux VM, with near-native performance |
| Docker | Manages and runs containers | Uses WSL 2 as its backend for Linux container support |
Why This Combo Works
Docker used to rely on a separate Hyper-V virtual machine to run on Windows. That setup worked, but it was bulky, slow to boot, and didn’t always play nice with other tools. WSL 2 flipped that model. It gives Docker a proper Linux kernel to hook into — without the overhead of a full VM window.
Once set up, everything feels surprisingly native. Pulling images, running containers, sharing volumes — it behaves like Docker on any other Linux machine. The difference? The host is still Windows. That can be useful — or strange — depending on the task.
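The "behaves like Docker on any other Linux machine" point is easy to demonstrate. A minimal session might look like this (image names are just common examples):

```shell
# Pull a small image and run a throwaway container;
# uname reports the WSL 2 kernel, not a Windows string
docker pull alpine:latest
docker run --rm alpine:latest uname -a

# Bind-mount the current directory into a container, exactly
# as you would on a native Linux host
docker run --rm -v "$PWD":/work -w /work alpine:latest ls
```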
Setup (Tested on Windows 10/11 Pro)
Step 1: Enable WSL 2
Run this from an admin terminal:
wsl --install
Pick a distribution (Ubuntu is usually a safe bet) and let it download.
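If WSL was already present from an older setup, it may still be running as WSL 1, which Docker's backend can't use. It's worth confirming the version before moving on:

```shell
# List installed distributions and which WSL version each uses
wsl -l -v

# Move a specific distribution (here Ubuntu; adjust the name
# to match your list) onto WSL 2
wsl --set-version Ubuntu 2

# Make WSL 2 the default for any future distributions
wsl --set-default-version 2
```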
Step 2: Install Docker Desktop
Grab it from Docker’s official site. During install, choose the option to use WSL 2 as the backend. No need to enable full Hyper-V manually; the WSL 2 backend relies on the lighter Virtual Machine Platform feature, which `wsl --install` turns on for you.
After installation, Docker runs in the background. The Linux containers live inside WSL. No full VM. No separate IP. Just a working engine.
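A quick way to confirm everything is wired up, from either PowerShell or the WSL shell (both talk to the same engine):

```shell
# Client and server versions; if the server section appears,
# the WSL 2 backend is up
docker version

# Smoke test: pulls and runs a tiny container that prints
# a greeting and exits
docker run --rm hello-world

# The engine reports a Linux kernel even though the host is Windows
docker info --format '{{.OperatingSystem}} / {{.KernelVersion}}'
```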
Real Uses for This Setup
– Running Linux-based tools (like Ansible or bash scripts) from a Windows laptop
– Testing container images without spinning up a cloud VM
– Mixing Windows-based GUI apps with backend services running in containers
– Mounting local dev directories into containers without weird path issues
– Having Linux CLI tools available, without dual-booting or RDP
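On the volume-mounting point above: the smoothest results come from keeping the project under the Linux filesystem (e.g. somewhere in `~`) rather than under `/mnt/c`, since cross-OS file I/O is the slow path. A sketch, with a hypothetical project directory:

```shell
# Work from the Linux side of the filesystem for best I/O performance
cd ~/projects/demo-app    # hypothetical project path

# Bind-mount the source tree into a container; edits made on
# either side are visible immediately on the other
docker run --rm -it -v "$PWD":/app -w /app alpine:latest sh
```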
Good to Know
What it handles well:
– Fast startup — containers launch in seconds
– Shared memory and CPU — no need to assign static resources
– File access across Windows/Linux paths is surprisingly seamless
– No need for full dual-boot setups anymore
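The seamless file access works through automatic mounts: Windows drives show up under `/mnt` inside WSL, and `wslpath` (bundled with WSL) translates between the two path styles. For example, with a placeholder username:

```shell
# From inside WSL: the Windows C:\ drive is mounted at /mnt/c
ls /mnt/c/Users

# Translate a Windows path into its WSL equivalent
# (typically yields /mnt/c/Users/dev/project)
wslpath 'C:\Users\dev\project'
```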
What can be tricky:
– File I/O between Windows and Linux is slower than native
– Some networking quirks when using `localhost` across subsystems
– Not all Docker extensions or CLI flags behave the same way
– WSL processes don’t always show up in Task Manager
– Docker Desktop auto-updates can reset some WSL settings
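The `localhost` quirk mostly disappears for published container ports, which Docker Desktop forwards so they answer on `localhost` from both the Windows and WSL sides; services bound only inside the WSL distribution are the awkward case. A sketch, assuming nginx as the example image:

```shell
# Publish container port 80 on host port 8080; localhost:8080 then
# answers from Windows browsers and the WSL shell alike
docker run --rm -d -p 8080:80 --name web nginx:alpine
curl -s http://localhost:8080 >/dev/null && echo "reachable"
docker stop web

# For something bound only inside WSL, the distro's own address
# may be needed from the Windows side:
ip addr show eth0 | grep 'inet '
```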
Final Notes
This setup doesn’t feel like a workaround — it’s more like the way containers should’ve always run on Windows. No VMs to babysit, no constant switching between OS contexts. Just one system that runs both sides well enough. It’s not flawless, but once in place, it rarely gets in the way. And that, for most people, is exactly what’s needed.