First of all, you need to install “dkms” (Dynamic Kernel Module Support):

Log on as root:

wou@ubuntu:~$ sudo bash
[sudo] password for wou:
root@ubuntu:~# id
uid=0(root) gid=0(root) groups=0(root)

Then install “dkms”:

root@ubuntu:~# apt-get -y install dkms

ZFS installation

Add this repository:

root@ubuntu:~# add-apt-repository ppa:zfs-native/stable

Run an update:

root@ubuntu:~# apt-get -y update

and install “ubuntu-zfs”:

root@ubuntu:~# apt-get install -y ubuntu-zfs
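
If the ZFS kernel module does not get loaded automatically, you can load it and check for it by hand (a quick sanity check, nothing more):

root@ubuntu:~# modprobe zfs
root@ubuntu:~# lsmod | grep zfs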

ZFS uses “dkms”:

root@ubuntu:~# ls -l /var/lib/dkms/zfs
total 4
drwxr-xr-x 4 root root 4096 Oct 20 10:37 0.6.5.2
lrwxrwxrwx 1 root root 32 Oct 20 10:37 kernel-3.19.0-15-generic-x86_64 -> 0.6.5.2/3.19.0-15-generic/x86_64
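
You can also ask “dkms” directly which modules it manages; it should list zfs (and spl) built for your running kernel:

root@ubuntu:~# dkms status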

Create your first zpool

We already added an empty drive:

root@ubuntu:~# ls /dev/sdb
/dev/sdb
root@ubuntu:~# fdisk -l /dev/sdb
Disk /dev/sdb: 16 GiB, 17179869184 bytes, 33554432 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

We create our first zpool on it:

root@ubuntu:~# zpool create -f datapool /dev/sdb
root@ubuntu:~# zpool list
NAME       SIZE  ALLOC  FREE  EXPANDSZ   FRAG    CAP DEDUP  HEALTH  ALTROOT
datapool  15.9G    64K 15.9G         -     0%     0% 1.00x  ONLINE  -
root@ubuntu:~# zfs list
NAME       USED  AVAIL  REFER  MOUNTPOINT
datapool    55K  15.4G    19K  /datapool
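
At this stage the pool sits on a single disk with no redundancy, which “zpool status” confirms:

root@ubuntu:~# zpool status datapool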

Mirror your zpool

We added another new drive:

root@ubuntu:~# fdisk -l /dev/sdc
Disk /dev/sdc: 16 GiB, 17179869184 bytes, 33554432 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

This drive will be used for the mirror:

root@ubuntu:~# zpool attach -f datapool /dev/sdb /dev/sdc

The mirror is done:

root@ubuntu:~# zpool status datapool
 pool: datapool
 state: ONLINE
 scan: resilvered 59.5K in 0h0m with 0 errors on Tue Oct 20 11:11:11 2015
config:
       NAME        STATE    READ WRITE CKSUM
       datapool    ONLINE      0     0     0
         mirror-0  ONLINE      0     0     0
           sdb    ONLINE      0     0     0
           sdc    ONLINE      0     0     0
errors: No known data errors
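
Here the resilver was instantaneous because the pool is almost empty; on a busier pool you can follow its progress with “zpool status”, or watch I/O on both sides of the mirror with:

root@ubuntu:~# zpool iostat -v datapool 2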

Create your first ZFS file system

root@ubuntu:~# zfs create datapool/myzfs

Now I have two ZFS file systems:

root@ubuntu:~# zfs list
NAME             USED  AVAIL  REFER  MOUNTPOINT
datapool        93.5K  15.4G    19K  /datapool
datapool/myzfs    19K  15.4G    19K  /datapool/myzfs

I can see that with the legacy “df” command:

root@ubuntu:~# df -h
Filesystem                   Size Used Avail Use% Mounted on
udev                         486M    0 486M    0% /dev
tmpfs                        100M 4.8M  95M    5% /run
/dev/mapper/ubuntu--vg-root   31G 1.5G  28G    5% /
tmpfs                        497M    0 497M    0% /dev/shm 
tmpfs                        5.0M    0 5.0M    0% /run/lock
tmpfs                        497M    0 497M    0% /sys/fs/cgroup
/dev/sda1                    236M  42M 182M   19% /boot
tmpfs                        100M    0 100M    0% /run/user/1000
datapool                      16G    0  16G    0% /datapool
datapool/myzfs                16G    0  16G    0% /datapool/myzfs

Change your file system settings

We don’t want to have “/datapool” mounted:

root@ubuntu:~# zfs set mountpoint=none datapool

Nothing is mounted anymore:

root@ubuntu:~# zfs list
NAME             USED  AVAIL  REFER MOUNTPOINT
datapool         110K  15.4G    19K none
datapool/myzfs    19K  15.4G    19K none

Now mount the ZFS file system on “/data”:

root@ubuntu:~# zfs set mountpoint=/data datapool/myzfs
root@ubuntu:~# df -h /data
Filesystem      Size  Used Avail Use% Mounted on
datapool/myzfs   16G     0   16G   0% /data

Set parameters:

  • quota and reservation: 10 GB
  • no access time recorded

root@ubuntu:~# zfs set quota=10G datapool/myzfs
root@ubuntu:~# zfs set reservation=10G datapool/myzfs
root@ubuntu:~# zfs set atime=off datapool/myzfs
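
You can read these properties back at any time to check them:

root@ubuntu:~# zfs get quota,reservation,atime datapool/myzfs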

Your first file in ZFS

Generate a large random file:

root@ubuntu:~# dd if=/dev/urandom of=/data/my_gaga_file bs=1M count=4096
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB) copied, 479.921 s, 8.9 MB/s

Here is the file:

root@ubuntu:~# ls -l /data/my_gaga_file
-rw-r--r-- 1 root root 4294967296 Oct 20 14:54 /data/my_gaga_file
root@ubuntu:~# sum /data/my_gaga_file
33222 4194304

Migrate file system from mirrored zpool to RAIDZ zpool

First of all, create a RAIDZ zpool

We have 3 new virtual drives:

root@ubuntu:~# rescan-scsi-bus
/sbin/rescan-scsi-bus: line 592: [: 1.39: integer expression expected
Host adapter 0 (ata_piix) found.
Host adapter 1 (ata_piix) found.
Host adapter 2 (mptspi) found.
Scanning SCSI subsystem for new devices
Scanning host 0 for SCSI target IDs 0 1 2 3 4 5 6 7, all LUNs
Scanning host 1 for SCSI target IDs 0 1 2 3 4 5 6 7, all LUNs
Scanning for device 1 0 0 0 ...
OLD: Host: scsi1 Channel: 00 Id: 00 Lun: 00
      Vendor: NECVMWar Model: VMware IDE CDR10 Rev: 1.00
      Type: CD-ROM ANSI SCSI revision: 05
Scanning host 2 for SCSI target IDs 0 1 2 3 4 5 6 7, all LUNs
Scanning for device 2 0 0 0 ...
OLD: Host: scsi2 Channel: 00 Id: 00 Lun: 00
      Vendor: VMware Model: Virtual disk Rev: 1.0
      Type: Direct-Access ANSI SCSI revision: 02
Scanning for device 2 0 1 0 ...
OLD: Host: scsi2 Channel: 00 Id: 01 Lun: 00
      Vendor: VMware Model: Virtual disk Rev: 1.0
      Type: Direct-Access ANSI SCSI revision: 02
Scanning for device 2 0 2 0 ...
OLD: Host: scsi2 Channel: 00 Id: 02 Lun: 00
      Vendor: VMware Model: Virtual disk Rev: 1.0
      Type: Direct-Access ANSI SCSI revision: 02
Scanning for device 2 0 3 0 ...
NEW: Host: scsi2 Channel: 00 Id: 03 Lun: 00
      Vendor: VMware Model: Virtual disk Rev: 1.0
      Type: Direct-Access ANSI SCSI revision: 02
Scanning for device 2 0 4 0 ...
NEW: Host: scsi2 Channel: 00 Id: 04 Lun: 00
      Vendor: VMware Model: Virtual disk Rev: 1.0
      Type: Direct-Access ANSI SCSI revision: 02
Scanning for device 2 0 5 0 ...
NEW: Host: scsi2 Channel: 00 Id: 05 Lun: 00
      Vendor: VMware Model: Virtual disk Rev: 1.0
      Type: Direct-Access ANSI SCSI revision: 02
3 new device(s) found.
0 device(s) removed.
root@ubuntu:~# ls -1 /dev/sd{d,e,f}
/dev/sdd
/dev/sde
/dev/sdf

We create the zpool:

root@ubuntu:~# zpool create -f z1pool raidz /dev/sd{d,e,f}
root@ubuntu:~# zpool status z1pool
  pool: z1pool
 state: ONLINE
  scan: none requested
config:
        NAME        STATE   READ WRITE CKSUM
        z1pool      ONLINE     0     0     0
          raidz1-0  ONLINE     0     0     0
            sdd     ONLINE     0     0     0
            sde     ONLINE     0     0     0
            sdf     ONLINE     0     0     0
errors: No known data errors
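
Note that “zpool list” reports the raw size of a RAIDZ pool (three 16 GB drives here); roughly one drive’s worth goes to parity, so the usable space shown by “zfs list” is closer to two drives:

root@ubuntu:~# zfs list z1pool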

Is a RAIDZ zpool faster than a mirrored one?

Create a file system with the same size and parameters as previously:

root@ubuntu:~# zfs set mountpoint=none z1pool
root@ubuntu:~# zfs create z1pool/myzfs
root@ubuntu:~# zfs set mountpoint=/raidzdata z1pool/myzfs
root@ubuntu:~# zfs set quota=10G z1pool/myzfs
root@ubuntu:~# zfs set reservation=10G z1pool/myzfs
root@ubuntu:~# zfs set atime=off z1pool/myzfs

It is faster for creating the same large random file:

root@ubuntu:~# dd if=/dev/urandom of=/raidzdata/my_gaga_file bs=1M count=4096
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB) copied, 334.65 s, 12.8 MB/s

The RAIDZ file system is also faster than the mirrored one for reads:

  • mirrored:
root@ubuntu:~# time cat /data/my_gaga_file >/dev/null
real 1m24.711s
user 0m0.044s
sys 0m1.648s
  • RAIDZ:
root@ubuntu:~# time cat /raidzdata/my_gaga_file >/dev/null
real 0m56.241s
user 0m0.032s
sys 0m1.720s

Ready for migration?

We will migrate “/data”:

root@ubuntu:~# zpool list
NAME       SIZE  ALLOC    FREE  EXPANDSZ   FRAG   CAP   DEDUP HEALTH ALTROOT
datapool  15.9G  4.00G   11.9G         -    22%   25%   1.00x  ONLINE -
z1pool    47.8G  6.00G   41.7G         -     7%   12%   1.00x  ONLINE -
root@ubuntu:~# zfs list -r datapool
NAME             USED  AVAIL  REFER  MOUNTPOINT
datapool        10.0G  5.38G    19K  none
datapool/myzfs  4.00G  6.00G  4.00G  /data

Create a first snapshot:

root@ubuntu:~# zfs snapshot datapool/myzfs@snapshot1

Send this snapshot to the RAIDZ zpool (note: this operation will take a while):

root@ubuntu:~# zfs send datapool/myzfs@snapshot1 | zfs receive z1pool/mytargetzfs
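
For a long transfer like this one, an option is to install the “pv” utility (apt-get install -y pv) and insert it in the pipe to watch the transfer rate; the result is the same:

root@ubuntu:~# zfs send datapool/myzfs@snapshot1 | pv | zfs receive z1pool/mytargetzfs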

The RAIDZ zpool now contains:

root@ubuntu:~# zfs list -r z1pool
NAME                 USED  AVAIL  REFER  MOUNTPOINT
z1pool              14.0G  16.8G  24.0K  none
z1pool/mytargetzfs  4.00G  16.8G  4.00G  none
z1pool/myzfs        4.00G  6.00G  4.00G  /raidzdata

This “/data” file system is still mounted, so we can continue using it, for example by adding this sparse file:

root@ubuntu:~# dd if=/dev/zero of=/data/myparsefile bs=1 seek=1G count=0
0+0 records in
0+0 records out
0 bytes (0 B) copied, 0.000496499 s, 0.0 kB/s
root@ubuntu:~# ls -l /data
total 4197146
-rw-r--r-- 1 root root 4294967296 Oct 20 14:54 my_gaga_file
-rw-r--r-- 1 root root 1073741824 Oct 21 10:55 myparsefile

Unmount this file system and take a second snapshot (this one will contain “/data/myparsefile”):

root@ubuntu:~# zfs umount datapool/myzfs && zfs snapshot datapool/myzfs@snapshot2

Send this incremental snapshot to transfer the differences:

root@ubuntu:~# zfs send -i datapool/myzfs@snapshot1 datapool/myzfs@snapshot2 | zfs receive z1pool/mytargetzfs
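
Both snapshots should now exist on the target side as well; you can verify with:

root@ubuntu:~# zfs list -t snapshot -r z1pool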

Switch the mountpoint over to the new file system:

root@ubuntu:~# zfs set mountpoint=none datapool/myzfs && zfs set mountpoint="/data" z1pool/mytargetzfs

You will be happy to discover all your files in this new file system:

root@ubuntu:~# ls -l /data
total 4193304
-rw-r--r-- 1 root root 4294967296 Oct 20 14:54 my_gaga_file
-rw-r--r-- 1 root root 1073741824 Oct 21 10:55 myparsefile
root@ubuntu:~# sum /data/my_gaga_file
33222 4194304
root@ubuntu:~# du /data/myparsefile
1 /data/myparsefile

Now, finish the job!

Remove all snapshots (you don’t need them anymore):

root@ubuntu:~# zfs list -t snapshot
NAME                           USED  AVAIL  REFER MOUNTPOINT
datapool/myzfs@snapshot1         9K      -  4.00G -
datapool/myzfs@snapshot2          0      -  4.00G -
z1pool/mytargetzfs@snapshot1  12.0K      -  4.00G -
z1pool/mytargetzfs@snapshot2  12.0K      -  4.00G -
root@ubuntu:~# zfs destroy z1pool/mytargetzfs@snapshot1
root@ubuntu:~# zfs destroy z1pool/mytargetzfs@snapshot2
root@ubuntu:~# zfs destroy datapool/myzfs@snapshot1
root@ubuntu:~# zfs destroy datapool/myzfs@snapshot2

Remove the old file system:

root@ubuntu:~# zfs destroy datapool/myzfs

And rename the new one:

root@ubuntu:~# zfs rename z1pool/mytargetzfs z1pool/data
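
A quick look at the pool shows the renamed file system:

root@ubuntu:~# zfs list -r z1pool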

What about settings?

root@ubuntu:~# zfs get quota,reservation,atime z1pool/data
NAME         PROPERTY     VALUE  SOURCE
z1pool/data  quota        none   default
z1pool/data  reservation  none   default
z1pool/data  atime        on     default

The snapshots didn’t copy the settings (that would have required “zfs send -p”), so we have to set them again on the new file system:

root@ubuntu:~# zfs set quota=10G z1pool/data
root@ubuntu:~# zfs set reservation=10G z1pool/data
root@ubuntu:~# zfs set atime=off z1pool/data
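
A final check confirms the settings are applied:

root@ubuntu:~# zfs get quota,reservation,atime z1pool/data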

That’s the end!

 
