# DragonFly BSD Quick Start

This document describes the DragonFly environment one will find on a newly installed system. While you are getting started, please pay careful attention to the version or level of DragonFly that the documentation was written for.

## Some Unix and BSD Fundamentals

If you have used another Unix flavor before, you may need to spend some time learning the differences between DragonFly and the system you have experience with. If you have never used any flavor of Unix and have only used Windows or something else before, please be prepared for a lengthy period of learning.

If you already know your way around a Unix filesystem, and already know what the `/etc` directory is, how to use `vi` or `vim` or `emacs` to edit a file, how to use a shell like `tcsh` or `ksh` or `bash`, how to configure that shell or change which shell you're using, how `su` and `doas` or `sudo` work, and what a `root` account is, the rest of this page may be enough to orient you to your surroundings.

## Software/Programs and Configuration Files Location

The DragonFly default installation contains the base software/programs from the DragonFly project itself and additional software from other sources.

The base system binary programs are located in the directories

    /bin/
    /sbin/
    /usr/bin/
    /usr/sbin/

The configuration files for the base system can be found in `/etc`. Third-party programs use `/usr/local/etc`.

There are several different ways to install software, and which method you use depends on which DragonFly BSD version you have. You can compile things from source code, or you can use binary packages.

## Disk layout of a new DragonFly BSD system using the HAMMER filesystem

If you chose to install on the HAMMER file system during installation, you will be left with a system with the following disk configuration:

    # df -h
    Filesystem                Size   Used  Avail Capacity  Mounted on
    ROOT                      288G    12G   276G     4%    /
    devfs                     1.0K   1.0K     0B   100%    /dev
    /dev/serno/9VMBWDM1.s1a   756M   138M   558M    20%    /boot
    /pfs/@@-1:00001           288G    12G   276G     4%    /var
    /pfs/@@-1:00002           288G    12G   276G     4%    /tmp
    /pfs/@@-1:00003           288G    12G   276G     4%    /usr
    /pfs/@@-1:00004           288G    12G   276G     4%    /home
    /pfs/@@-1:00005           288G    12G   276G     4%    /usr/obj
    /pfs/@@-1:00006           288G    12G   276G     4%    /var/crash
    /pfs/@@-1:00007           288G    12G   276G     4%    /var/tmp
    procfs                    4.0K   4.0K     0B   100%    /proc

In this example:

* `/dev/serno/9VMBWDM1` is the hard disk specified by serial number,
* `/dev/serno/9VMBWDM1.s1` is the first slice on the hard disk.
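The device nodes under `/dev/serno` are created automatically for every disk that reports a serial number, so you can see which ones exist on your own system by simply listing that directory. A minimal sketch (the serial number shown is the example from above; yours will differ):

    # ls /dev/serno
    9VMBWDM1        9VMBWDM1.s1     9VMBWDM1.s1a    9VMBWDM1.s1b    9VMBWDM1.s1d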
The disk label looks as follows:

    # disklabel /dev/serno/9VMBWDM1.s1
    # /dev/serno/9VMBWDM1.s1:
    #
    # Informational fields calculated from the above
    # All byte equivalent offsets must be aligned
    #
    # boot space:    1044992 bytes
    # data space:  312567643 blocks # 305241.84 MB (320069266944 bytes)
    #
    # NOTE: If the partition data base looks odd it may be
    #       physically aligned instead of slice-aligned
    #
    diskid: e67030af-d2af-11df-b588-01138fad54f5
    label:
    boot2 data base:      0x000000001000
    partitions data base: 0x000000100200
    partitions data stop: 0x004a85ad7000
    backup label:         0x004a85ad7000
    total size:           0x004a85ad8200    # 305242.84 MB
    alignment: 4096
    display block size: 1024        # for partition display only

    16 partitions:
    #          size     offset    fstype   fsuuid
      a:     786432          0    4.2BSD   #    768.000MB
      b:    8388608     786432      swap   #   8192.000MB
      d:  303392600    9175040    HAMMER   # 296281.836MB
      a-stor_uuid: eb1c8aac-d2af-11df-b588-01138fad54f5
      b-stor_uuid: eb1c8aec-d2af-11df-b588-01138fad54f5
      d-stor_uuid: eb1c8b21-d2af-11df-b588-01138fad54f5

The slice has 3 partitions:

* `a` - for `/boot`
* `b` - for swap
* `d` - for `/`, a HAMMER file system labeled ROOT

When you create a HAMMER file system, you must give it a label. Here, the installer labeled it "ROOT" and mounted it as

    ROOT                      288G    12G   276G     4%    /

A PFS is a Pseudo File System inside a HAMMER file system. The HAMMER file system in which the PFSes are created is referred to as the root file system. You should not confuse the "root" file system with the label "ROOT": the label can be anything. The installer labeled it ROOT because it is mounted at `/`.

As the `df -h` output above shows, the installer created 7 PFSes inside the root HAMMER file system. Let us see how they are mounted in `/etc/fstab`:

    # cat /etc/fstab
    # Device                Mountpoint      FStype  Options         Dump    Pass#
    /dev/serno/9VMBWDM1.s1a /boot           ufs     rw              1       1
    /dev/serno/9VMBWDM1.s1b none            swap    sw              0       0
    /dev/serno/9VMBWDM1.s1d /               hammer  rw              1       1
    /pfs/var                /var            null    rw              0       0
    /pfs/tmp                /tmp            null    rw              0       0
    /pfs/usr                /usr            null    rw              0       0
    /pfs/home               /home           null    rw              0       0
    /pfs/usr.obj            /usr/obj        null    rw              0       0
    /pfs/var.crash          /var/crash      null    rw              0       0
    /pfs/var.tmp            /var/tmp        null    rw              0       0
    proc                    /proc           procfs  rw              0       0

The PFSes are mounted using a NULL mount because they are also HAMMER file systems. You can read more about NULL mounts in the [mount_null(8)](http://leaf.dragonflybsd.org/cgi/web-man?command=mount_null&section=8) manpage. (A sketch of how to create and NULL-mount a PFS of your own appears at the end of this subsection.)

You don't need to specify a size for the PFSes, as you do for logical volumes inside a volume group for LVM. All the free space in the root HAMMER file system is available to all the PFSes; it can be seen in the `df -h` output above that the free space is the same for all PFSes and the root HAMMER file system.

If you look in `/var`

    # cd /var/
    # ls
    account   backups  caps   cron  empty  log  msgs  run       spool  yp
    at        cache    crash  db    games  lib  mail  preserve  rwho   tmp

you will find the above directories.

If you look at the status of one of the PFSes, e.g. `/usr`, you will see that `/var/hammer` is the default snapshot directory:

    # hammer pfs-status /usr/
    /usr/   PFS #3 {
        sync-beg-tid=0x0000000000000001
        sync-end-tid=0x0000000117ac6270
        shared-uuid=f33e318e-d2af-11df-b588-01138fad54f5
        unique-uuid=f33e31cb-d2af-11df-b588-01138fad54f5
        label=""
        prune-min=00:00:00
        operating as a MASTER
        snapshots directory defaults to /var/hammer/
    }

Right after installation, however, there is no `hammer` directory in `/var`, because no snapshots have been taken yet.
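Before moving on to snapshots: you are not limited to the PFSes the installer created. As promised above, here is a rough sketch of how a new PFS could be created and NULL-mounted (the PFS name `/pfs/data` and mountpoint `/data` are hypothetical examples, not part of the default install):

    # hammer pfs-master /pfs/data    # create a new master PFS in the root HAMMER file system
    # mkdir /data                    # create a mountpoint for it
    # mount_null /pfs/data /data     # NULL-mount it, just like the installer's PFSes

To have it mounted at boot, a matching line would be added to `/etc/fstab`:

    /pfs/data               /data           null    rw              0       0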
You can verify that no snapshots have been taken yet by checking the snapshots available for `/usr`:

    # hammer snapls /usr
    Snapshots on /usr       PFS #3
    Transaction ID          Timestamp               Note

Snapshots will appear automatically each night as the system performs housekeeping on the HAMMER filesystem. For a new volume, an immediate snapshot can be taken by running `hammer cleanup`. Among other activities, it takes a snapshot of the filesystem:

    # sudo hammer cleanup
    cleanup /                    - HAMMER UPGRADE: Creating snapshots
        Creating snapshots in /var/hammer/root
    handle PFS #0 using /var/hammer/root
               snapshots - run
                   prune - run
               rebalance - run..
                 reblock - run....
                  recopy - run....
    cleanup /var                 - HAMMER UPGRADE: Creating snapshots
    [...]
    cleanup /tmp                 - HAMMER UPGRADE: Creating snapshots
    [...]
    cleanup /usr                 - HAMMER UPGRADE: Creating snapshots
    [...]
    cleanup /home                - HAMMER UPGRADE: Creating snapshots
    [...]
    cleanup /usr/obj             - HAMMER UPGRADE: Creating snapshots
    [...]
    cleanup /var/crash           - HAMMER UPGRADE: Creating snapshots
    [...]
    cleanup /var/tmp             - HAMMER UPGRADE: Creating snapshots
    [...]
    cleanup /var/isos            - HAMMER UPGRADE: Creating snapshots
    [...]

No snapshots were taken for `/tmp`, `/usr/obj` and `/var/tmp`, because those PFSes are flagged as `nohistory`. HAMMER tracks history for all files in a PFS; naturally, this consumes disk space until history is pruned, at which point the available disk space stabilizes. To prevent temporary files on the mentioned PFSes (e.g., object files, crash dumps) from consuming disk space, they are marked as `nohistory`.

After the nightly housekeeping has run, you will find a new directory called *hammer* in `/var`, with the following subdirectories:

    # cd hammer/
    # ls -l
    total 0
    drwxr-xr-x  1 root  wheel  0 Oct 13 11:51 home
    drwxr-xr-x  1 root  wheel  0 Oct 13 11:42 root
    drwxr-xr-x  1 root  wheel  0 Oct 13 11:43 tmp
    drwxr-xr-x  1 root  wheel  0 Oct 13 11:51 usr
    drwxr-xr-x  1 root  wheel  0 Oct 13 11:54 var

Looking inside `/var/hammer/usr`, one finds:

    # cd usr/
    # ls -l
    total 0
    drwxr-xr-x  1 root  wheel   0 Oct 13 11:54 obj
    lrwxr-xr-x  1 root  wheel  25 Oct 13 11:43 snap-20101013-1143 -> /usr/@@0x0000000117ac6cb0

We have a symlink pointing to the snapshot transaction ID shown below:

    # hammer snapls /usr
    Snapshots on /usr       PFS #3
    Transaction ID          Timestamp               Note
    0x0000000117ac6cb0      2010-10-13 11:43:04 IST -

You can read more about snapshots, prune, rebalance, reblock, recopy, etc. in [hammer(8)](http://leaf.dragonflybsd.org/cgi/web-man?command=hammer&section=8). Be especially sure to look under the heading "cleanup [filesystem ...]". You can learn more about PFS mirroring [here](http://www.dragonflybsd.org/docs/how_to_implement_hammer_pseudo_file_system__40___pfs___41___slave_mirroring_from_pfs_master/).

To correctly map hard disk sernos to device names, you can use the `devattr` command (it needs `udevd` running):

    # udevd
    # devattr -d "ad*" -p serno
    Device ad4:
            serno = Z2AD9WN4
    Device ad4s1:
    Device ad4s1d:
    Device ad5:
            serno = 9VMRFDSY
    Device ad5s1:
    Device ad5s1d:
    Device ad3:
            serno = Z2AD9WLW
    Device ad3s1:
    Device ad3s1a:
    Device ad3s1b:
    Device ad3s1d:

If your disks are `da`, change as appropriate.

## SSH Server on DragonFly

Unix is a multi-user, multi-tasking system. It is therefore possible, and in fact very common, to have many users logged on to one computer at the same time, each of them running many different jobs. Although only one user can physically sit at the computer and use the monitor, keyboard, and mouse connected to it, others can log in through the network.
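Remote logins are handled by the OpenSSH server, **sshd**. If it was not enabled during installation, it can be switched on through the standard `rc.conf` mechanism. A minimal sketch (appending with `echo` is just one way to edit the file):

    # echo 'sshd_enable="YES"' >> /etc/rc.conf   # start sshd at every boot
    # /etc/rc.d/sshd start                       # start it right away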
This section is quite detailed, so that a new user can become familiar with the environment.

If you look in **/etc/ssh**, you will find the SSH host key files:

    % ls /etc/ssh
    moduli                  ssh_host_ecdsa_key       ssh_host_rsa_key
    ssh_config              ssh_host_ecdsa_key.pub   ssh_host_rsa_key.pub
    ssh_host_dsa_key        ssh_host_ed25519_key     sshd_config
    ssh_host_dsa_key.pub    ssh_host_ed25519_key.pub

At this point, if you try to ssh to the DragonFly machine, you will get the following error:

    % ssh sgeorge@172.16.50.62
    The authenticity of host '172.16.50.62 (172.16.50.62)' can't be established.
    RSA key fingerprint is 46:77:28:c2:70:86:93:1a:23:32:5f:01:2c:80:de:de.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added '172.16.50.62' (RSA) to the list of known hosts.
    Permission denied (publickey,password,keyboard-interactive).

This is because of the following configuration option in the default **/etc/ssh/sshd_config** file:

    # To disable tunneled clear text passwords, change to no here!
    PasswordAuthentication no

Change it to:

    PasswordAuthentication yes

and reload the **sshd** configuration:

    # /etc/rc.d/sshd reload
    Reloading sshd config files.

Now you can log in to the DragonFly system as a normal user:

    % ssh sgeorge@172.16.50.62
    sgeorge@172.16.50.62's password:
    Last login: Fri Jan 12 01:47:48 2018
    DragonFly v5.0.2-RELEASE (X86_64_GENERIC) #4: Sun Dec  3 17:42:25 EST 2017

    Welcome to DragonFly!
    ...
    %

But if you try to log in via SSH as root, you will get the following error:

    % ssh root@172.16.50.62
    root@172.16.50.62's password:
    Permission denied, please try again.

If you investigate the DragonFly system's log **/var/log/auth.log**, you will find a line similar to:

    Oct 19 07:29:36 dfly-vmsrv sshd[17269]: Failed password for root from 172.16.2.0 port 56447 ssh2

even if you typed the right password for root. If you want to log in as root, change the following line in the **/etc/ssh/sshd_config** file:

    #PermitRootLogin prohibit-password

to:

    PermitRootLogin yes

and reload the **sshd** configuration:

    # /etc/rc.d/sshd reload
    Reloading sshd config files.

Now you can log in as root:

    % ssh root@172.16.50.62
    root@172.16.50.62's password:
    Last login: Fri Jan 12 02:01:22 2018
    DragonFly v5.0.2-RELEASE (X86_64_GENERIC) #4: Sun Dec  3 17:42:25 EST 2017

    Welcome to DragonFly!
    #

and in **/var/log/auth.log** you will find a line similar to:

    Oct 19 07:30:32 dfly-vmsrv sshd[17894]: Accepted password for root from 172.16.2.0 port 56468 ssh2

### WARNING

It is not advisable to allow root login with a password, especially if your system is connected to the Internet, unless you use very strong passwords. Otherwise you could become a victim of [SSH password-based brute force attacks](http://en.wikipedia.org/wiki/Password_cracking). If you are the victim of such an attack, you will find entries like the following in your **/var/log/auth.log** file:
    Oct 18 18:54:54 cross sshd[9783]: Invalid user maryse from 218.248.26.6
    Oct 18 18:54:54 cross sshd[9781]: input_userauth_request: invalid user maryse
    Oct 18 18:54:54 cross sshd[9783]: Failed password for invalid user maryse from 218.248.26.6 port 34847 ssh2
    Oct 18 18:54:54 cross sshd[9781]: Received disconnect from 218.248.26.6: 11: Bye Bye
    Oct 18 18:54:55 cross sshd[27641]: Invalid user may from 218.248.26.6
    Oct 18 18:54:55 cross sshd[3450]: input_userauth_request: invalid user may
    Oct 18 18:54:55 cross sshd[27641]: Failed password for invalid user may from 218.248.26.6 port 34876 ssh2
    Oct 18 18:54:55 cross sshd[3450]: Received disconnect from 218.248.26.6: 11: Bye Bye
    Oct 18 18:54:56 cross sshd[8423]: Invalid user admin from 218.248.26.6
    Oct 18 18:54:56 cross sshd[3131]: input_userauth_request: invalid user admin
    Oct 18 18:54:56 cross sshd[8423]: Failed password for invalid user admin from 218.248.26.6 port 34905 ssh2
    Oct 18 18:54:56 cross sshd[3131]: Received disconnect from 218.248.26.6: 11: Bye Bye
    Oct 18 18:54:57 cross sshd[7373]: Invalid user admin from 218.248.26.6
    Oct 18 18:54:57 cross sshd[28059]: input_userauth_request: invalid user admin
    Oct 18 18:54:57 cross sshd[7373]: Failed password for invalid user admin from 218.248.26.6 port 34930 ssh2
    Oct 18 18:54:57 cross sshd[28059]: Received disconnect from 218.248.26.6: 11: Bye Bye
    Oct 18 18:54:58 cross sshd[12081]: Invalid user admin from 218.248.26.6
    Oct 18 18:54:58 cross sshd[22416]: input_userauth_request: invalid user admin
    Oct 18 18:54:58 cross sshd[12081]: Failed password for invalid user admin from 218.248.26.6 port 34958 ssh2
    Oct 18 18:54:58 cross sshd[22416]: Received disconnect from 218.248.26.6: 11: Bye Bye
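A common way to sidestep such attacks entirely is to use key-based authentication instead of passwords. A minimal sketch using the stock OpenSSH tools (the user and address are the examples from above; the key type and file names are the OpenSSH defaults):

    % ssh-keygen -t ed25519              # generate a key pair on the client
    % cat ~/.ssh/id_ed25519.pub | \
        ssh sgeorge@172.16.50.62 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'

Once key logins work, `PasswordAuthentication no` can be restored in **/etc/ssh/sshd_config**, so that password guessing is no longer possible at all.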