puter-architecture - how puter works internally
puter is built around two host binaries and a guest agent. This page
describes the architecture and data flow.
- puter
- The unprivileged CLI. User-facing commands for VM lifecycle management,
snapshots, cloning, SSH access, and cluster operations. Communicates with
puterd over a Unix socket.
- puterd
- The privileged daemon. Manages networking: bridge setup, TAP device
creation and destruction, NAT/masquerade via nftables, and a built-in DHCP
server. Listens on a Unix socket at
~/.local/share/puter/run/daemon.sock. Requires
CAP_NET_ADMIN, CAP_NET_RAW, and CAP_NET_BIND_SERVICE
capabilities (not root).
- puter-agent
- The guest agent. Runs inside each VM as a systemd service. Listens on
vsock port 1024. Handles ping and configure requests
(hostname, MAC address, netplan) via a JSON-line protocol.
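The JSON-line protocol can be sketched as follows. The source only states that the agent handles ping and configure requests over vsock port 1024; the exact field names (`type`, `hostname`, `mac`, `netplan`) are illustrative assumptions, not puter's actual wire format.

```python
import json

def configure_request(hostname: str, mac: str, netplan_yaml: str) -> str:
    """Build one JSON-line message for the guest agent.

    Field names are assumptions for illustration; the docs only say the
    agent accepts "ping" and "configure" requests as JSON lines.
    """
    msg = {
        "type": "configure",
        "hostname": hostname,
        "mac": mac,
        "netplan": netplan_yaml,
    }
    # JSON-line framing: one JSON object per line, newline-terminated.
    return json.dumps(msg) + "\n"

line = configure_request("vm1", "52:54:00:00:00:0b", "network:\n  version: 2\n")
```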
The daemon creates a Linux bridge (puter0) on startup. Each
VM gets a TAP device (pt-&lt;name&gt;) attached to the bridge.
NAT/masquerade rules are applied via nftables so VMs can reach the
internet.
A built-in DHCP server runs on the bridge interface. VMs obtain
their IP addresses via DHCP after the guest agent writes a netplan
configuration and runs netplan apply.
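The netplan file the agent writes might look like the following sketch. The interface name (ens4) and matching by MAC address are assumptions; the source only says the agent writes a netplan configuration enabling DHCP and runs netplan apply.

```yaml
# Illustrative netplan configuration; interface name and match
# stanza are assumptions, not puter's actual output.
network:
  version: 2
  ethernets:
    ens4:
      match:
        macaddress: 52:54:00:00:00:0b
      dhcp4: true
```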
IP addresses are derived deterministically from the VM ID:
10.0.0.<id+10>. MAC addresses follow the pattern
52:54:00:00:00:<id+10>.
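The derivation above is trivial to express in code. Note one ambiguity: the docs write the last MAC octet as `<id+10>` without saying whether it is rendered in decimal or hex; this sketch assumes two hex digits, since MAC octets are conventionally hexadecimal.

```python
def vm_ip(vm_id: int) -> str:
    # IP addresses are derived deterministically: 10.0.0.<id+10>
    return f"10.0.0.{vm_id + 10}"

def vm_mac(vm_id: int) -> str:
    # MACs follow 52:54:00:00:00:<id+10>. Rendering the last octet as
    # two hex digits is an assumption; the docs do not specify the base.
    return f"52:54:00:00:00:{vm_id + 10:02x}"
```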
VMs use qcow2 copy-on-write overlays backed by OCI-derived base
images. The base image is pulled from a container registry on first use and
cached locally.
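Creating a copy-on-write overlay over a base image is the standard qcow2 backing-file pattern. Whether puter shells out to qemu-img or uses a library internally is not stated; this sketch assumes the qemu-img CLI.

```python
def overlay_create_cmd(base_image: str, overlay_path: str) -> list[str]:
    # qemu-img creates a qcow2 overlay whose backing file is the cached
    # OCI-derived base image; writes land in the overlay, the base stays
    # read-only and shared between VMs.
    return [
        "qemu-img", "create",
        "-f", "qcow2",     # overlay format
        "-b", base_image,  # backing (base) image
        "-F", "qcow2",     # backing file format
        overlay_path,
    ]

cmd = overlay_create_cmd("images/base.qcow2", "overlays/vm1.qcow2")
```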
Snapshots capture the full VM state (memory, CPU, disk) via
cloud-hypervisor's snapshot API. Clones create new qcow2 overlays backed by
snapshot disk images.
All state lives under ~/.local/share/puter/:
- state/<vm>.json
- VM metadata: name, overlay path, PID, status, MAC, IP, TAP device, vsock
CID.
- overlays/<vm>.qcow2
- Copy-on-write disk overlays backed by the OCI-derived base image.
- run/
- Unix sockets: daemon socket, cloud-hypervisor API sockets, console
sockets, vsock listeners.
- snapshot/<vm>/<timestamp>/
- Cloud-hypervisor snapshots: config.json, memory ranges, state, and a disk
overlay.
- images/
- OCI image cache. Base images are resolved automatically on first puter
start.
- vmlinuz, initrd
- Kernel and initrd installed via puter init.
When puter start <name> runs:
1. CLI creates a qcow2 overlay backed by the base image.
2. CLI requests a TAP device from puterd (HTTP POST to the daemon's Unix
   socket).
3. CLI launches cloud-hypervisor with API socket, kernel, initrd, disk,
   net, and vsock arguments.
4. CLI waits for the cloud-hypervisor API socket to become ready and saves
   VM state as JSON.
5. CLI waits for the guest agent on vsock, then sends a configure request
   with hostname and networking parameters.
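The socket-readiness waits in the later steps can be sketched as below. The helper names, paths, and argument values are illustrative assumptions; the cloud-hypervisor flag names (--api-socket, --kernel, --initramfs, --disk, --net, --vsock) match its real CLI, but the exact strings puter passes are not documented here.

```python
import socket
import subprocess
import time

def wait_for_socket(path: str, timeout: float = 10.0) -> bool:
    """Poll until a Unix socket at `path` accepts connections."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)
            return True
        except OSError:
            time.sleep(0.1)
        finally:
            s.close()
    return False

def launch_vm(api_sock, kernel, initrd, overlay, tap, cid, vsock_sock):
    # Flag names mirror cloud-hypervisor's CLI; the argument values are
    # illustrative, not puter's actual invocation.
    return subprocess.Popen([
        "cloud-hypervisor",
        "--api-socket", api_sock,
        "--kernel", kernel,
        "--initramfs", initrd,
        "--disk", f"path={overlay}",
        "--net", f"tap={tap}",
        "--vsock", f"cid={cid},socket={vsock_sock}",
    ])
```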