Archive for August 2011

DD over SSH

I wanted to copy my pfSense disk without taking down the machine (moving from a 3.5″ to a 2.5″ disk for increased power savings), so I’m using dd and piping it over SSH to my file server. No progress bar, but my drive is small.

dd if=/dev/sda | ssh root@freenas "dd of=/backup/pfsense.img"
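
If you do want a progress display, pv can sit in the middle of the pipe (a sketch, assuming pv is installed on the sending machine):

dd if=/dev/sda | pv | ssh root@freenas "dd of=/backup/pfsense.img"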

I found that adding ‘bs=4k’ or larger made the transfer go a little faster:

dd if=/dev/sda bs=32k | ssh root@freenas "dd of=/backup/pfsense.img"

I topped out around 200 Mbit/s, with my router’s CPU maxed out. Must be SSH’s encryption overhead.
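
A sketch of one common workaround: ask SSH for a cheaper cipher. This assumes your SSH build offers arcfour (blowfish-cbc is another light option):

dd if=/dev/sda bs=32k | ssh -c arcfour root@freenas "dd of=/backup/pfsense.img bs=32k"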

Credit – http://karlherrick.com/dev/2008/09/12/dd-backups-over-ssh/
DD Man Page – http://linux.die.net/man/1/dd

Multi-Tasking with crappy applications (MS Word)

Some applications can ‘freeze’ or lag the OS UI while processing a request. Most are single-threaded, yet I sit at a powerful multi-core desktop. My example: Mail Merge in Microsoft Word. My workaround, so I can keep working in other applications, is to make multiple user accounts. When I want to run a Word mail merge, I first switch to the ‘Jason2’ account, start the long mail merge there, and switch back to my main account to continue working with no UI lag.
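
A possible shortcut is launching Word directly under the second account with runas. A sketch, assuming the account is named Jason2 and a default Office 2010 install path (adjust both to your setup); note this runs Word inside your current session, so if the lag is session-wide the full account-switch trick may still work better:

runas /user:Jason2 "C:\Program Files\Microsoft Office\Office14\WINWORD.EXE"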

Silly Windows.

ZFS Notes

Things not to forget
  • ZFS can be CPU and memory intensive.
  • ZFS raidz1, raidz2, etc. require a lot of CPU time. My low-power dual core maxes out on an 8-disk raidz2. I upgraded from an AMD 270u to an AMD X4 960T and my write performance tripled.
  • pool = ZFS collection of vdevs
  • vdev = collection of disk drive(s): single, n-way mirror, raidz, raidz2, raidz3
  • raidz = similar to RAID 5
  • Striping happens automatically between vdevs (Use multiple vdevs to increase throughput and I/O)
  • Cannot add additional disks to a raidz vdev, but can add additional vdevs to a pool.
  • Cannot remove vdevs from pools. Only disks in fault-tolerant vdevs (mirror/raidz) can be removed and replaced.
  • Use whole disks, not partitions. Easier that way. (… and faster too?)
  • So far, ZFS is smart enough that if you plug the drives into different SATA ports the pool can still be imported. Example: I moved 2 drives from my motherboard controller to a PCIe add-on controller without issue.
  • zpool status
    shows the status of all disks in a pool
  • zpool iostat -v 5 [pool name]
    shows I/O operations and bandwidth per disk, averaged over 5 seconds
  • zpool export pool_name / zpool import pool_name
    used to move a pool to a different machine (see the sketch after this list)
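
A minimal sketch of that move, assuming a pool named ‘tank’ (use zpool import -f on the new machine if the pool wasn’t cleanly exported):

zpool export tank
# shut down, move the disks to the new machine, then:
zpool import tank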

Current setup is 2 raidz1 vdevs with 3x3TB drives each, yielding 12TB usable.
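
For reference, a sketch of the commands that would build that layout, assuming a pool named ‘tank’ and FreeBSD-style device names ada0 through ada5 (adjust to whatever your disks enumerate as):

# two 3-disk raidz1 vdevs in one pool; ZFS stripes across them automatically
zpool create tank raidz1 ada0 ada1 ada2 raidz1 ada3 ada4 ada5
zpool status tank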