Windows Previous Versions for ZFS-backed Samba shares

If you want ZFS snapshots to show up as Previous Versions on Windows file shares, you need a ZFS-backed Samba dataset with one or more snapshots. My ZFS dataset is tank in these examples. Snapshots can be made manually or automatically. The easy way to manage automatic snapshots is ‘zfs-auto-snapshot’, which is bundled with zfsonlinux. Example for hourly snapshots of tank:
zfs-auto-snapshot -l hourly tank
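To confirm the snapshots are being created (and to see the exact names Samba will have to match later), list them:

zfs list -H -o name -t snapshot -r tank

With the hourly label the names take the form tank@zfs-auto-snap_hourly-YYYY-MM-DD-HHMM, which is what the shadow: format line in the Samba config below expects.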
Alternatively, cron can take hourly snapshots for you (note the escaped % signs below; cron treats a bare % as a newline):
5 */1 * * * zfs snapshot tank@`date +\%F-\%H\%M`
Cron can also clean up old snapshots. (Careful with this one: run zfs list -H -o name -t snapshot -r tank | head -n 24 yourself first and verify that the output is exactly the set of snapshots you want destroyed.)
30 0 * * * zfs list -H -o name -t snapshot -r tank | head -n 24 | xargs -n1 sudo zfs destroy
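A slightly safer variant of that cleanup, sketched here and worth dry-running on your own pool first, sorts by the creation property instead of relying on name order. Prepend echo to preview what would be destroyed:

zfs list -H -o name -t snapshot -s creation -r tank | head -n 24 | xargs -n1 echo zfs destroy

If the preview looks right, drop the echo and use it as the cron entry:

30 0 * * * zfs list -H -o name -t snapshot -s creation -r tank | head -n 24 | xargs -n1 sudo zfs destroy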

Lastly, to have Samba use the ZFS snapshots, you'll need shadow: format, vfs objects, shadow: sort, and shadow: snapdir added to your Samba share. Here is an example config when using zfs-auto-snapshot hourly:
[tank]
path = /tank
comment = ZFS dataset with Previous Versions enabled
writeable = yes
public = yes
vfs objects = shadow_copy2
shadow: snapdir = .zfs/snapshot
shadow: sort = desc
shadow: format = zfs-auto-snap_hourly-%F-%H%M
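Before testing from Windows, it's worth checking the config and restarting Samba. The commands below are the common ones; the service name varies by distro (smbd on Debian/Ubuntu, smb on RHEL-style systems):

testparm
sudo service smbd restart

Then in Windows Explorer, right-click a file or folder on the share, open Properties, and the Previous Versions tab should list the snapshots.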

ZFS Notes

Things not to forget
  • ZFS can be CPU- and memory-intensive.
  • ZFS raidz1, raidz2, etc. require a lot of CPU time. My low-power dual core maxed out on an 8-disk raidz2; upgrading from an AMD 270u to an AMD X4 960T tripled my write performance.
  • pool = a ZFS collection of vdevs
  • vdev = a collection of disk drive(s): single disk, n-way mirror, raidz, raidz2, or raidz3
  • raidz = similar to RAID 5
  • Striping happens automatically between vdevs (Use multiple vdevs to increase throughput and I/O)
  • Cannot add additional disk(s) to a raidz vdev, but can add additional vdevs to a pool (see the zpool add sketch below).
  • Cannot remove vdevs from a pool. Only disks in fault-tolerant vdevs (mirror/raidz) can be removed and replaced.
  • Use whole disks, not partitions. Easier that way. (… and faster too?)
  • So far, ZFS has been smart enough that if you plug the drives into different SATA ports, the pool can still be imported. For example, I moved 2 drives off my motherboard controller to a PCIe add-on controller without issue.
  • zpool status
    shows the status of all disks in a pool
  • zpool iostat -v 5 [pool name]
    shows I/O operations and bandwidth on each disk, averaged over 5 seconds
  • zpool export pool_name / zpool import pool_name
    for moving a pool to a different machine (see the sketch below)
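A quick sketch of that last item, assuming the pool is named tank:

zpool export tank                       # on the old machine
zpool import                            # on the new machine: scan for and list importable pools
zpool import tank                       # import the pool by name
zpool import -d /dev/disk/by-id tank    # point the scan at stable device names if needed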

My current setup is 2 raidz1 vdevs with 3x3TB drives each, yielding 12TB of usable space.
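For reference, a pool shaped like mine could be created in one command; the sd* names below are placeholder devices, and /dev/disk/by-id paths are the safer choice on a real system:

zpool create tank raidz sda sdb sdc raidz sdd sde sdf

Later expansion means adding a whole new vdev, since disks cannot be added to an existing raidz:

zpool add tank raidz sdg sdh sdi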

First!

Hello World! This will be the most boring post ever. For whoever reads this: my goal with this website is to store notes, steps on how I solved problems, tips, and other computer-related things. Many times blogs have shown me the way to fix complicated computer issues, so from now on I will try to post every computer issue I run into along with its solution, in the hope that I can help someone with the same problem.