.\" Copyright (c) 2006, 2007
.\" The DragonFly Project. All rights reserved.
.\" Redistribution and use in source and binary forms, with or without
.\" modification, are permitted provided that the following conditions
.\" 1. Redistributions of source code must retain the above copyright
.\" notice, this list of conditions and the following disclaimer.
.\" 2. Redistributions in binary form must reproduce the above copyright
.\" notice, this list of conditions and the following disclaimer in
.\" the documentation and/or other materials provided with the
.\" 3. Neither the name of The DragonFly Project nor the names of its
.\" contributors may be used to endorse or promote products derived
.\" from this software without specific, prior written permission.
.\" THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
.\" ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
.\" LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
.\" FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
.\" COPYRIGHT HOLDERS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
.\" INCIDENTAL, SPECIAL, EXEMPLARY OR CONSEQUENTIAL DAMAGES (INCLUDING,
.\" BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
.\" LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
.\" AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
.\" OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT
.\" OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
.Nd virtual kernel architecture
.Cd "platform vkernel64 # for 64 bit vkernels"
.Pa /var/vkernel/boot/kernel/kernel
.Op Fl e Ar name Ns = Ns Li value : Ns Ar name Ns = Ns Li value : Ns ...
.Op Fl I Ar interface Ns Op Ar :address1 Ns Oo Ar :address2 Oc Ns Oo Ar /netmask Oc Ns Oo Ar =mac Oc
.Op Fl n Ar numcpus Ns Op Ar :lbits Ns Oo Ar :cbits Oc
.Op Fl r Ar file Ns Op Ar :serno
.Op Fl R Ar file Ns Op Ar :serno
architecture allows for running
The following options are available:
.Bl -tag -width ".Fl m Ar size"
Specify a read-only CD-ROM image
to be used by the kernel, with the first
option specified on the command line will be the boot disk.
The CD9660 filesystem is assumed when booting from this medium.
Disables the hardware page table for
.It Fl e Ar name Ns = Ns Li value : Ns Ar name Ns = Ns Li value : Ns ...
Specify an environment to be used by the kernel.
This option can be specified more than once.
Shows a list of available options, each with a short description.
Specify a memory image
to be used by the virtual kernel.
option is given, the kernel will generate a name of the form
.Pa /var/vkernel/memimg.XXXXXX ,
being replaced by a sequential number, e.g.\&
.It Fl I Ar interface Ns Op Ar :address1 Ns Oo Ar :address2 Oc Ns Oo Ar /netmask Oc Ns Oo Ar =MAC Oc
Create a virtual network device, with the first
argument is the name of a
device node or the path to a
path prefix does not have to be specified and will be automatically prepended
will pick the first unused
arguments are the IP addresses of the
interface is added to the specified
address is not assigned until the interface is brought up in the guest.
argument applies to all interfaces for which an address is specified.
argument is the MAC address of the
If not specified, a pseudo-random one will be generated.
When running multiple vkernels it is often more convenient to simply
socket and let vknetd deal with the tap and/or bridge.
An example of this would be
.Pa /var/run/vknet:0.0.0.0:10.2.0.2/16 .
Specify which, if any, real CPUs to lock virtual CPUs to.
.Cm map Ns Op Ns , Ns Ar startCPU ,
does not map virtual CPUs to real CPUs.
.Cm map Ns Op Ns , Ns Ar startCPU
maps each virtual CPU to a real CPU starting with real CPU 0 or
locks all virtual CPUs to the real CPU specified by
Locking the vkernel to a set of CPUs is recommended on multi-socket systems
to improve NUMA locality of reference.
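For example, on a host with enough real CPUs, an invocation along these
lines (paths and CPU numbers purely illustrative) emulates four CPUs and
locks them to real CPUs 4 through 7:
.Bd -literal
\&./boot/kernel/kernel -m 1g -r rootimg.01 -n 4 -l map,4
.Ed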
Specify the amount of memory to be used by the kernel in bytes,
Lowercase versions of
.It Fl n Ar numcpus Ns Op Ar :lbits Ns Oo Ar :cbits Oc
specifies the number of CPUs you wish to emulate.
Up to 16 CPUs are supported with 2 being the default unless otherwise
specifies the number of bits within APICID(=CPUID) needed for representing
Controls the number of threads per core (0 bits - 1 thread, 1 bit - 2 threads).
This parameter is optional (mandatory only if
specifies the number of bits within APICID(=CPUID) needed for representing
Controls the number of cores per package (0 bits - 1 core, 1 bit - 2 cores).
This parameter is optional.
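For instance, the following invocation (an illustrative topology) emulates
8 CPUs presented as 4 cores with 2 threads each, i.e.\& 1 bit for the
threads per core and 2 bits for the cores per package:
.Bd -literal
\&./boot/kernel/kernel -m 1g -r rootimg.01 -n 8:1:2
.Ed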
Specify a pidfile in which to store the process ID.
Scripts can use this file to locate the vkernel pid for the purpose of
shutting down or killing it.
The vkernel will hold a lock on the pidfile while running.
Scripts may test for the lock to determine if the pidfile is valid or
stale so as to avoid accidentally killing a random process.
Something like
.Ql /usr/bin/lockf -ks -t 0 pidfile echo -n
may be used
A non-zero exit code indicates that the pidfile represents a running
An error is issued and the vkernel exits if this file cannot be opened for
writing or if it is already locked by an active vkernel process.
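A shutdown script along these lines (the pidfile path is illustrative)
can use that lock test to avoid killing an unrelated process:
.Bd -literal
#!/bin/sh
# lockf exits non-zero while the vkernel still holds the lock,
# i.e. while the pidfile is valid.
pidfile=/var/run/vkernel0.pid
if ! /usr/bin/lockf -ks -t 0 ${pidfile} echo -n; then
	kill $(cat ${pidfile})
fi
.Ed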
.It Fl r Ar file Ns Op Ar :serno
Specify an R/W disk image
to be used by the kernel, with the first
A serial number for the virtual disk can be specified in
option specified on the command line will be the boot disk.
.It Fl R Ar file Ns Op Ar :serno
but treats the disk image as copy-on-write.
This allows a private copy of the image to be modified but does not
modify the image file.
The image file will not be locked in this situation and multiple
vkernels can run off the same image file if desired.
Since modifications are thrown away, any data you wish
to retain across invocations needs to be exported over
the network prior to shutdown.
This gives you the flexibility to mount the disk image
either read-only or read-write depending on what is
However, keep in mind that when mounting a COW image
read-write, modifications will eat system memory and
swap space until the vkernel is shut down.
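Because the image file is not locked in this mode, several vkernels can,
for example, be started from one shared image (file and pidfile names
purely illustrative):
.Bd -literal
\&./boot/kernel/kernel -m 512m -R rootimg.master -p /var/run/vk1.pid &
\&./boot/kernel/kernel -m 512m -R rootimg.master -p /var/run/vk2.pid &
.Ed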
Boot into single-user mode.
Tell the vkernel to use a precise host timer when calculating clock values.
If the TSC isn't used, this will impose higher overhead on the vkernel as it
will have to make a system call to the real host every time it wants to get
However, the more precise timer might be necessary for your application.
By default, the vkernel uses the TSC CPU timer if possible, or an imprecise
(host-tick-resolution) timer which uses a user-mapped kernel page and does
not have any syscall overhead.
Force the vkernel to not use the TSC CPU timer.
Enable writing to kernel memory and module loading.
By default, those are disabled for security reasons.
Turn on verbose booting.
Force the vkernel's RAM to be pre-zeroed.
This is useful for benchmarking on
single-socket systems where the memory allocation does not have to be
This option is not recommended on multi-socket systems or when the
A number of virtual device drivers exist to supplement the virtual kernel.
driver allows for up to 16
The root device will be
for further information on how to prepare a root image).
driver allows for up to 16 virtual CD-ROM devices.
Basically this is a read-only
device with a block size of 2048.
.Ss Network interface
driver supports up to 16 virtual network interfaces which are associated with
device, the per-interface read-only
.Va hw.vke Ns Em X Ns Va .tap_unit
holds the unit number of the associated
By default, half of the total mbuf clusters available are distributed equally
among all the vke devices, up to 256.
This can be overridden with the tunable
.Va hw.vke.max_ringsize .
Note that the number passed will be rounded down to the nearest power of two.
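Since tunables live in the kernel environment, this one can be set with the
.Fl e
option when starting the vkernel, e.g.\& (the value is illustrative):
.Bd -literal
\&./boot/kernel/kernel -m 1g -r rootimg.01 -e hw.vke.max_ringsize=256
.Ed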
The virtual kernel only enables
while operating in regular console mode.
to the virtual kernel causes the virtual kernel to enter its internal
debugger and re-enable all other terminal signals.
to the virtual kernel triggers a clean shutdown by passing a
to the virtual kernel's
It is possible to attach gdb directly to the virtual kernel's process.
It is recommended that you do a
.Ql handle SIGSEGV noprint
to ignore page faults processed by the virtual kernel itself and
.Ql handle SIGUSR1 noprint
to ignore signals used for simulating inter-processor interrupts.
To compile a vkernel with profiling support, the
variable needs to be used to pass
make -DNO_MODULES CONFIGARGS=-p buildkernel KERNCONF=VKERNEL64
.Bl -tag -width ".It Pa /sys/config/VKERNEL64" -compact
.It Pa /sys/config/VKERNEL64
configuration file, for
.Sh CONFIGURATION FILES
Your virtual kernel is a complete
system, but you might not want to run all the services a normal kernel runs.
Here is what a typical virtual kernel's
file looks like, with some additional possibilities commented out.
network_interfaces="lo0 vke0"
.Sh BOOT DRIVE SELECTION
You can override the default boot drive selection and filesystem
using a kernel environment variable.
Note that the filesystem selected must be compiled into the vkernel
and not loaded as a module.
You need to escape some quotes around the variable data to avoid
misinterpretation of the colon in the
vfs.root.mountfrom=\\"hammer:vkd0s1d\\"
.Sh DISKLESS OPERATION
from an NFS root, a number of tunables need to be set:
.Bl -tag -width indent
IP address to be set on the vkernel interface.
.It Va boot.netif.netmask
Netmask for the IP address to be set.
.It Va boot.netif.name
Network interface name inside the vkernel.
.It Va boot.nfsroot.server
.It Va boot.nfsroot.path
Host path where the world and distribution
targets are properly installed.
See an example on how to boot a diskless
A couple of steps are necessary in order to prepare the system to build and
run a virtual kernel.
.Ss Setting up the filesystem
architecture needs a number of files which reside in
Since these files tend to get rather big and the
partition is usually of limited size, we recommend the directory to be
partition with a link to it in
mkdir -p /home/var.vkernel/boot
ln -s /home/var.vkernel /var/vkernel
Next, a filesystem image to be used by the virtual kernel has to be
created and populated (assuming world has been built previously).
If the image is created on a UFS filesystem you might want to pre-zero it.
On a HAMMER filesystem you should just truncate-extend to the image size,
as HAMMER does not re-use data blocks already present in the file.
vnconfig -c -S 2g -T vn0 /var/vkernel/rootimg.01
disklabel -r -w vn0s0 auto
disklabel -e vn0s0 # add `a' partition with fstype `4.2BSD'
mount /dev/vn0s0a /mnt
make installworld DESTDIR=/mnt
make distribution DESTDIR=/mnt
echo '/dev/vkd0s0a / ufs rw 1 1' >/mnt/etc/fstab
echo 'proc /proc procfs rw 0 0' >>/mnt/etc/fstab
entry with the following line and turn off all other gettys.
console "/usr/libexec/getty Pc" cons25 on secure
if you would like to automatically log in as root.
Then, unmount the disk.
.Ss Compiling the virtual kernel
In order to compile a virtual kernel use the
kernel configuration file residing in
(or a configuration file derived from it):
make -DNO_MODULES buildkernel KERNCONF=VKERNEL64
make -DNO_MODULES installkernel KERNCONF=VKERNEL64 DESTDIR=/var/vkernel
.Ss Enabling virtual kernel operation
.Va vm.vkernel_enable ,
must be set to enable
sysctl vm.vkernel_enable=1
.Ss Configuring the network on the host system
In order to access a network interface of the host system from the
you must add the interface to a
device which will then be passed to the
ifconfig bridge0 create
ifconfig bridge0 addm re0 # assuming re0 is the host's interface
.Ss Running the kernel
Finally, the virtual kernel can be run:
\&./boot/kernel/kernel -m 1g -r rootimg.01 -I auto:bridge0
commands from inside a virtual kernel.
After doing a clean shutdown the
command will re-exec the virtual kernel binary while the other two will
cause the virtual kernel to exit.
.Ss Diskless operation (vkernel as an NFS client)
network configuration.
The line continuation backslashes have been omitted.
For convenience and to reduce confusion, I recommend mounting the
server's remote vkernel root onto the host running the vkernel binary,
using the same path as the NFS mount.
It is assumed that a full system install has been made to
/var/vkernel/root, using KERNCONF=VKERNEL64 for the kernel build.
\&/var/vkernel/root/boot/kernel/kernel
-m 1g -n 4 -I /var/run/vknet
-e boot.netif.ip=10.100.0.2
-e boot.netif.netmask=255.255.0.0
-e boot.netif.gateway=10.100.0.1
-e boot.netif.name=vke0
-e boot.nfsroot.server=10.0.0.55
-e boot.nfsroot.path=/var/vkernel/root
In this example vknetd is assumed to have been started as shown below,
before running the vkernel, using an unbridged TAP configuration routed
through
IP forwarding must be turned on, and in this example the server resides
on a different network accessible to the host executing the vkernel but not
directly on the vkernel's subnet.
sysctl net.inet.ip.forwarding=1
vknetd -t tap0 10.100.0.1/16
You can run multiple vkernels trivially with the same NFS root as long as
you assign each one a different IP on the subnet (2, 3, 4, etc.).
You should also be careful with certain directories, particularly /var/run
and possibly also /var/db, depending on what your vkernels are going to be
This can complicate matters with /var/db/pkg.
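For example, a second vkernel sharing the same NFS root could be started
with nothing changed but the interface IP (here .3 instead of .2):
.Bd -literal
\&/var/vkernel/root/boot/kernel/kernel
    -m 1g -n 4 -I /var/run/vknet
    -e boot.netif.ip=10.100.0.3
    -e boot.netif.netmask=255.255.0.0
    -e boot.netif.gateway=10.100.0.1
    -e boot.netif.name=vke0
    -e boot.nfsroot.server=10.0.0.55
    -e boot.nfsroot.path=/var/vkernel/root
.Ed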
.Sh BUILDING THE WORLD UNDER A VKERNEL
The virtual kernel platform does not have all the header files expected
by a world build, so the easiest thing to do right now is to specify a
pc64 (in a 64-bit vkernel) target when building the world under a virtual
vkernel# make MACHINE_PLATFORM=pc64 buildworld
vkernel# make MACHINE_PLATFORM=pc64 installworld
.%A Aggelos Economopoulos
.%T "A Peek at the DragonFly Virtual Kernel"
Virtual kernels were introduced in
thought up and implemented the
architecture and wrote the
This manual page was written by