I just attached a new volume to my VPS. Usually I follow the instructions provided, using parted and mkfs.ext4, but this time I decided to try ZFS.
The guides I’ve found online all differ quite a bit, and I’m not sure whether I did everything correctly and whether the data will actually be safe.
What I mean is that running lsblk -o name,size,fstype,type,mountpoint shows this:
NAME     SIZE FSTYPE TYPE MOUNTPOINT
vdb      100G        disk
└─vdb1   100G ext4   part /mnt/storage
vdc      100G        disk
├─vdc1   100G        part
└─vdc9     8M        part
You can see the fstype and mountpoint of the existing ext4 volume are listed, but for the ZFS partitions they aren't.
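In case it's relevant, I also probed the new partition directly; I'm assuming blkid is supposed to recognize ZFS labels and report the partition as a ZFS member, but I'm not certain that's the right way to check:

blkid /dev/vdc1   # I expect something like TYPE="zfs_member" here if the pool labelled it correctly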
Still, I can access the ZFS pool I created without any problem, and I've already copied some test data onto it.
root@vps:~/services# zpool list
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
local-zfs  99.5G  6.88G  92.6G        -         -     0%     6%  1.00x    ONLINE  -
root@vps:~/services# zfs list
NAME        USED  AVAIL  REFER  MOUNTPOINT
local-zfs  6.88G  89.5G  6.88G  /mnt/zfs
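Related to the safety question: I was planning to run a scrub to verify what I've copied so far, since I understand that's the usual way to check data integrity on ZFS; is that enough here, or overkill for a single-disk pool?

zpool status -v local-zfs   # shows the vdev layout plus any read/write/checksum errors
zpool scrub local-zfs       # verify checksums of everything in the pool; progress shows up in zpool status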
These are the commands I ran:
parted -s /dev/vdc mklabel gpt
parted -s /dev/vdc unit mib mkpart primary 0% 100%
zpool create -o ashift=12 -O canmount=on -O atime=off -O recordsize=8k -O compression=lz4 -O mountpoint=/mnt/zfs local-zfs /dev/vdc
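One thing I noticed afterwards: I passed the whole disk (/dev/vdc) to zpool create instead of the partition I had just made, and from what I've read ZFS then writes its own GPT label (which I guess explains the small 8M vdc9 partition in the lsblk output). If that's right, the two parted commands were probably pointless. This is how I was planning to check what actually ended up on the disk:

parted -s /dev/vdc unit MiB print   # show the partition table ZFS created (I expect one large data partition plus a small reserved one)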
Does this look good? Should I do anything else (like adding an entry to fstab)?
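From what I understand, ZFS doesn't use fstab at all and mounts its datasets itself at boot via its systemd units, so maybe the only thing to verify is that those units are enabled and the pool is in the cache file? This is what I had in mind, assuming the unit names are the same on my distro:

systemctl is-enabled zfs-import-cache.service zfs-mount.service zfs.target   # import + mount at boot
zpool get cachefile local-zfs                                                # '-' should mean the default /etc/zfs/zpool.cache is used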
The list of properties is very long; are there any you'd recommend I look into for a simple server that currently only stores non-critical data?
(I already have a separate backup solution; I may look at updating it later.)
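For reference, these are the properties I've already set plus a few I was planning to read up on; if I understand zfs get correctly, the SOURCE column shows whether each value is local, inherited, or just the default:

zfs get compression,recordsize,atime,xattr,acltype,canmount local-zfs   # current values and where they come from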
Oh, for some reason I thought the reason to remove it was that it limited or broke some of ZFS's built-in sanity checks for maintaining data integrity. I do remember reading in a number of places that it should absolutely be turned off at all times, but it's been a while since I set up a new ZFS pool, so I was just going off my notes of which settings I use (and I confirmed I still use those on all my current pools).