[[!meta title="Virtualization: NVMM Hypervisor"]]

[[!toc startlevel=2 levels=3]]
NVMM is a Type-2 hypervisor and hypervisor platform that provides support for hardware-accelerated virtualization.
A virtualization API is shipped in libnvmm(3), which allows existing emulators (e.g., QEMU) to easily create and manage virtual machines via NVMM.

NVMM can support up to 128 virtual machines, each with a maximum of 128 vCPUs and 127TB of RAM.
It works with both x86 AMD CPUs (SVM/AMD-V) and x86 Intel CPUs (VMX/VT-x).

NVMM was designed and written by Maxime Villard (m00nbsd.net), first appeared in NetBSD 9, and was ported to DragonFly 6.1 by Aaron LI (aly@) with significant help from Matt Dillon (dillon@) and Maxime.
In order to achieve hardware-accelerated virtualization, two components need to work together:

* A kernel driver, which switches the machine's CPUs into a mode where they can safely execute guest instructions.
* A userland emulator, which talks to the kernel driver to run virtual machines.

NVMM provides the infrastructure needed for both the kernel driver and the userland emulators.
The kernel NVMM driver comes as a kernel module.
It is made of a generic machine-independent frontend and several machine-dependent backends (currently only the x86 AMD SVM and x86 Intel VMX backends).
During initialization, NVMM selects the appropriate backend for the system.
The frontend handles everything that is not CPU-specific: the virtual machines, the virtual CPUs, the guest physical address spaces, and so forth.
The frontend also provides an IOCTL interface for userland emulators.
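In practice, this IOCTL interface is reached through a device node, which is also what the `nvmm` group membership described below grants access to. A quick sanity check, assuming the `/dev/nvmm` node documented in nvmm(4):

    $ ls -l /dev/nvmm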
When it comes to the userland emulator, NVMM does not provide one.
In other words, it does not re-implement a QEMU, a VirtualBox, a Bhyve (FreeBSD) or a VMD (OpenBSD).
Rather, it provides a virtualization API via the libnvmm(3) library, which makes it easy to add NVMM support to existing emulators.
This API is meant to be simple and straightforward, and is fully documented.
It has some similarities with WHPX on Windows and HVF on macOS.
The idea is to provide an easy way for applications to use NVMM to implement services, ranging from small sandboxing systems to advanced system emulators.
An overview of NVMM's unique design:<br>
[[!img NvmmDesign.png alt="NVMM Design" size=400x]]

(Credit: [https://m00nbsd.net/NvmmDesign.png](https://m00nbsd.net/NvmmDesign.png))

Read the blog post [From Zero to NVMM (by Maxime Villard)](https://blog.netbsd.org/tnf/entry/from_zero_to_nvmm) for a detailed analysis of the design.
* AMD CPU with SVM and RVI/NPT support, or Intel CPU with VT-x/VMX and EPT support
* DragonFly master (6.1) at/after [commit 11755db6](https://gitweb.dragonflybsd.org/dragonfly.git/commit/11755db6b4ad8e01f32f277b591496121e03fc5d)
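One quick way to check whether the CPU advertises the required features is to search the boot messages for the relevant flags (a heuristic only; RVI/NPT and EPT support may also need to be confirmed in the BIOS or the CPU documentation):

    $ dmesg | grep -i -e svm -e vmx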
1. Add yourself to the `nvmm` group (so you can later run examples and QEMU without using `root`):

        # pw groupmod nvmm -m $USER

2. Re-login to make it effective.
3. Load the `nvmm` kernel module:

        # kldload nvmm
    On my AMD Ryzen 3700X, the kernel messages show:
        nvmm: Kernel API version 3
        nvmm: Max machines 128
        nvmm: Max VCPUs per machine 128
        nvmm: Max RAM per machine 127T
        nvmm: Arch Mach conf 0
        nvmm: Arch VCPU conf 0x1<CPUID>
        nvmm: Guest FPU states 0x3<x87,SSE>
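To have the module loaded automatically at boot, and to inspect the hypervisor capabilities once it is loaded, something like the following should work (assuming the usual `/boot/loader.conf` module-loading knob and the `identify` subcommand of nvmmctl(8)):

    # echo 'nvmm_load="YES"' >> /boot/loader.conf
    $ nvmmctl identify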
Build and run the `calc-vm` example:

    $ cd /usr/src/test/nvmm
    $ make
    $ /tmp/calc-vm <integer1> <integer2>
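For instance, assuming the example behaves as its name and source suggest (the guest performs the addition and the program prints the result):

    $ /tmp/calc-vm 3 4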
Build and run the `toyvirt` demo, which boots the `smallkern` toy kernel:

    $ cd /usr/src/test/nvmm/demo
    $ make
    $ /tmp/toyvirt /tmp/smallkern
There are also libnvmm test cases:

    $ cd /usr/src/test/testcases/libnvmm
1. Install the `qemu` package and prepare a DragonFly installation ISO (`dfly.iso`).

2. Create a disk image for the VM:

        $ qemu-img create -f qcow2 dfly.qcow2 50G

3. Boot an ISO with NVMM acceleration:
        $ qemu-system-x86_64 \
            -machine type=q35,accel=nvmm \
            -cpu max -smp 2 -m 4G \
            -cdrom dfly.iso -boot d \
            -drive file=dfly.qcow2,if=none,id=disk0 \
            -device virtio-blk-pci,drive=disk0 \
            -netdev user,id=net0,hostfwd=tcp:127.0.0.1:6022-:22 \
            -device virtio-net-pci,netdev=net0 \
            -object rng-random,id=rng0,filename=/dev/urandom \
            -device virtio-rng-pci,rng=rng0 \
            -display curses \
            -spice addr=127.0.0.1,port=5900,ipv4=on,disable-ticketing=on,seamless-migration=on
This setup creates a VM with the following settings:

* Modern machine (Q35) with NVMM hardware acceleration
* Emulate the host processor with 2 vCPUs
* 4GB RAM
* VirtIO-BLK hard disk
* VirtIO-NET network card
* VirtIO-RNG random number generator
* Display video output via curses
* User-mode networking (TCP/UDP pass-through; no ICMP), with port forwarding (host `127.0.0.1:6022` to guest port `22`)
* [SPICE](https://www.spice-space.org/) remote desktop support (use `qxl` for more powerful graphics support), accessible via `127.0.0.1:5900`
To connect to the guest via SSH:

    $ ssh -p 6022 user@127.0.0.1
To connect to the guest via SPICE (install the `spice-gtk` package to get the `spicy` utility):

    $ spicy -h 127.0.0.1 -p 5900
By the way, the created VMs can be shown with nvmmctl(8):

    $ nvmmctl list
    Machine ID VCPUs RAM  Owner PID Creation Time
    ---------- ----- ---- --------- ------------------------
    0          2     4.1G 91101     Sat Jul 24 17:55:22 2021
The above setup uses user-mode networking, which has limitations in both performance and functionality. More advanced networking can be achieved by using a TAP device.
1. Create a bridge (`bridge0`) and configure it:

        # ifconfig bridge0 create
        # ifconfig bridge0 inet 10.66.6.1/24
        # ifconfig bridge0 up
2. Create a TAP device (`tap666`) and add it to the bridge:

        # ifconfig tap666 create
        # ifconfig bridge0 addm tap666
3. Adjust the TAP sysctls, so that a TAP device is brought up when opened and can be opened by unprivileged users:

        # sysctl net.link.tap.up_on_open=1
        # sysctl net.link.tap.user_open=1
4. Make the TAP device openable by your user:

        # chown $USER /dev/tap666

    There should be a better way to do this; devd(8) could be used.
5. Start QEMU with the option `-netdev tap,ifname=tap666,id=net0,script=no,downscript=no`, i.e.:

        $ qemu-system-x86_64 \
            ... \
            -netdev tap,ifname=tap666,id=net0,script=no,downscript=no \
            -device virtio-net-pci,netdev=net0,mac=52:54:00:34:56:66
    QEMU by default assigns the link-level address `52:54:00:12:34:56` to the guest.
    If unspecified, all guests would have the **same** MAC address.
    Specify the MAC address with `-device xxx,netdev=xxx,mac=52:54:xx:xx:xx:xx`.
6. Configure the guest IP address:

        guest# ifconfig vtnet0 inet 10.66.6.2/24 up
        guest# route add default 10.66.6.1
The guest can then communicate with the host and vice versa:

    guest# ping 10.66.6.1
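To make the guest's network configuration persistent across reboots, a sketch assuming a DragonFly or FreeBSD-style guest using rc.conf(5):

    guest# echo 'ifconfig_vtnet0="inet 10.66.6.2/24"' >> /etc/rc.conf
    guest# echo 'defaultrouter="10.66.6.1"' >> /etc/rc.conf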
With the above setup, guests can only talk to each other and the host, but can't access the external network.
One way to give a VM access to the external network is to configure a **bridged network**:
the host machine acts as a switch, and the VM appears on the LAN as just another machine (alongside the host machine).
1. Create the bridge interface in the same way as above, but there is no need to configure its address:

        # ifconfig bridge0 create
        # ifconfig bridge0 up
2. Create the TAP interface and configure it the same way as above:

        # ifconfig tap666 create
        # sysctl net.link.tap.up_on_open=1
        # sysctl net.link.tap.user_open=1
        # chown $USER /dev/tap666
3. Add both the TAP interface and the **host network interface** (e.g., `re0` in my case) to the bridge:

        # ifconfig bridge0 addm re0
        # ifconfig bridge0 addm tap666

    Adding an interface to the bridge automatically enables *promiscuous mode* for it.
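    The result can be verified by inspecting the bridge, whose output should list both member interfaces:

        $ ifconfig bridge0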
4. Start the VM the same as before, and run DHCP inside the VM, e.g.:

        guest# dhclient vtnet0
Now the VM can obtain its IP configuration from the LAN router and can access the Internet, e.g.:
    guest# ifconfig vtnet0
    vtnet0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
            options=28<VLAN_MTU,JUMBO_MTU>
            ether 52:54:00:12:34:56
            inet6 fe80::5054:ff:fe12:3456%vtnet0 prefixlen 64 scopeid 0x1
            inet 10.6.20.34 netmask 0xffffff00 broadcast 10.6.20.255
            media: Ethernet 1000baseT <full-duplex>
This method exposes the VM to the LAN and makes it accessible to all other LAN machines besides the host machine.
This can put the VM at risk and may also reveal sensitive information from the VM!
Another way to allow the VM to access the Internet is to configure NAT on the host side.
This method doesn't expose the VM beyond the host machine, and thus is regarded as more secure.
1. Enable IP forwarding:

        # sysctl net.inet.ip.forwarding=1
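    To keep IP forwarding enabled across reboots (assuming the standard sysctl.conf(5) mechanism):

        # echo 'net.inet.ip.forwarding=1' >> /etc/sysctl.conf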
2. Configure NAT with pf(4) by adding the following snippet to `/etc/pf.conf` (the `ext_if` and `br_if` macros are examples; adjust them to your interfaces):

        ext_if = "re0"     # host's external interface
        br_if  = "bridge0" # bridge the VMs attach to

        nat on $ext_if inet from $br_if:network to !$br_if:network -> ($ext_if:0)
3. Enable and start PF:

        # echo 'pf_enable=YES' >> /etc/rc.conf
        # /etc/rc.d/pf start
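    The ruleset can be syntax-checked before loading, and the loaded NAT rules inspected afterwards:

        # pfctl -nf /etc/pf.conf
        # pfctl -sn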
Now, the guest can access the external network.
A DHCP server can be run on the bridge interface to provide guests with automatic IP address configuration.
Similarly, a DNS service can be provided to the guests.
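For example, a minimal sketch using the third-party `dnsmasq` package (an assumption; any DHCP/DNS server bound to `bridge0` works, and the paths below follow the usual pkg(8) layout):

    # pkg install dnsmasq
    # echo 'interface=bridge0' >> /usr/local/etc/dnsmasq.conf
    # echo 'dhcp-range=10.66.6.10,10.66.6.100,12h' >> /usr/local/etc/dnsmasq.conf
    # echo 'dnsmasq_enable=YES' >> /etc/rc.conf
    # /usr/local/etc/rc.d/dnsmasq start

With this, dnsmasq hands out leases in the `10.66.6.0/24` bridge network and forwards DNS queries for the guests.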
* QEMU `-cpu host` doesn't work. Need to investigate the root cause.
* NVMM kernel code: [machine-independent frontend](https://gitweb.dragonflybsd.org/dragonfly.git/tree/HEAD:/sys/dev/virtual/nvmm), [machine-dependent x86 backends](https://gitweb.dragonflybsd.org/dragonfly.git/tree/HEAD:/sys/dev/virtual/nvmm/x86)
* [libnvmm API code](https://gitweb.dragonflybsd.org/dragonfly.git/tree/HEAD:/lib/libnvmm)
* [libnvmm test cases](https://gitweb.dragonflybsd.org/dragonfly.git/tree/HEAD:/test/testcases/libnvmm)
* [nvmmctl utility code](https://gitweb.dragonflybsd.org/dragonfly.git/tree/HEAD:/usr.sbin/nvmmctl)
* Examples: [calc-vm](https://gitweb.dragonflybsd.org/dragonfly.git/blob/HEAD:/test/nvmm/calc-vm.c), [demo](https://gitweb.dragonflybsd.org/dragonfly.git/tree/HEAD:/test/nvmm/demo)
* [nvmm(4) man page](https://man.dragonflybsd.org/?command=nvmm&section=4)
* [libnvmm(3) man page](https://man.dragonflybsd.org/?command=libnvmm&section=3)
* [nvmmctl(8) man page](https://man.dragonflybsd.org/?command=nvmmctl&section=8)
* [m00nbsd: NVMM](https://m00nbsd.net/4e0798b7f2620c965d0dd9d6a7a2f296.html)
* [NetBSD: From Zero to NVMM](https://blog.netbsd.org/tnf/entry/from_zero_to_nvmm)
* [NetBSD: Chapter 30. Using virtualization: QEMU and NVMM](https://netbsd.org/docs/guide/en/chap-virt.html)
* [QEMU: Networking](https://wiki.qemu.org/Documentation/Networking)
* [Gentoo: QEMU/Options](https://wiki.gentoo.org/wiki/QEMU/Options)
* [ArchWiki: QEMU](https://wiki.archlinux.org/title/QEMU)
* [SPICE: Simple Protocol for Independent Computing Environments](https://www.spice-space.org/)