Recently a nearly brand new WD MyBook 4TB died unexpectedly, so I have been reorganizing for the future.
Enter ZFS, my first choice of filesystem to move forward with, and the results are very promising so far. In particular I have had very decent results with the deduplication and compression features.
Interestingly, these features are new enough that they lack support in many common system utilities like df, so relying on those will get you inaccurate results. For instance, df cannot account for deduplication: once you start putting GB's of dedup'd files on the pool it starts lying to you and reports the disk as much bigger than it really is. Also, because a lot of the work happens in memory, you never really know exactly how fast things are going, so the throughput numbers are probably exaggerated too. The end result is that you don't really know exactly how much space you have at any given time or exactly how fast it is, yet you get the general sense that it's safe - at least I did. I was able to break and repair the zpool several times, simulate corruption and scrub it clean, and resilver new disks.
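That testing was just the standard zpool workflow - something along these lines, with placeholder device names below rather than my actual card readers:

# start a scrub and watch it verify (and repair) checksums
sudo zpool scrub MYWINPOOL
sudo zpool status -v MYWINPOOL
# simulate losing a drive, then resilver onto a replacement
sudo zpool offline MYWINPOOL <failed-disk-by-id>
sudo zpool replace MYWINPOOL <failed-disk-by-id> <new-disk-by-id>
sudo zpool status MYWINPOOL    # shows resilver progress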
Also, to be as accurate as possible with these numbers and to make sure the files weren't simply sitting in memory cache rather than written to disk, I kept checking zpool iostat -v (I saw what looked like write operations queued up and slowly dwindling down). du -sh said the full file was there, I md5sum'd it and got the same signature as the original file, and finally, to settle it with certainty, I shut down zfs-fuse itself (which quickly unmounted the filesystems), disconnected and reconnected the drives, and brought zfs-fuse back up - and sure enough, the full file was there. Seems a bit like voodoo, but it works well enough for me.
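In rough outline, that verification loop looked like this (exact paths and the zfs-fuse service name may differ on your setup):

sudo zpool iostat -v MYWINPOOL 5          # watch the queued writes drain, refreshed every 5 seconds
du -sh /MYWINPOOL/big_file_00.big_file    # size matches the source file
md5sum big_file_00.big_file /MYWINPOOL/big_file_00.big_file   # identical checksums
sudo service zfs-fuse stop                # unmounts the pool
# ...disconnect and reconnect the drives...
sudo service zfs-fuse start               # pool comes back with the file intact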
I realize the numbers I achieved are probably not 100% accurate because of all the memory caching and compression, but that doesn't really matter for my needs. What I need is to survive a drive loss and keep functioning, without becoming sluggish, until a new drive is brought online. ZFS accomplishes all that and more - deduplication, compression, scrubbing (against bitrot) and snapshots. Plus it's free, so I am very content with ZFS. Here is my setup and results:
Acer C720 (w/2GB RAM) - Chromeos Crouton/chroot'd to Ubuntu 14.04 (Trusty)
Upgraded SSD to a 128 GB MyDigital SSD w/6GB SuperCache2
1 Vantec 10 port USB 3.0 hub ($45 from NewEgg)
(Update Feb 19, 2015 looks like the price is now $60 for this)
5 USB 3.0 MicroSD SDXC Card Readers (5 x $5 AliExpress)
5 SanDisk MicroSD 128GB (5 x $13 AliExpress)
Grand Total $135
ZFS-FUSE (apt-get install zfs-fuse)
Copying a normal file with cp averages approximately 300 MB/s !!
The normal speed for these drives (formatted exFAT or FAT32) is approximately 29 MB/s, so even with the redundancy overhead of raidz the speed is roughly 10x higher. Not bad! =)
time cp big_file_00.big_file /MYWINPOOL/
‘big_file_00.big_file’ -> ‘/MYWINPOOL/big_file_00.big_file’
real 0m1.030s
user 0m0.007s
sys 0m0.360s
(trusty)cronkilla@localhost:/MYWINPOOL$ du -sh big_file_00.big_file
300M big_file_00.big_file
(Note: big_file was created via dd if=/dev/urandom)
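Something along these lines, with the count sized to match the 300M shown by du above - urandom data is handy here because it is incompressible and won't dedup, so it exercises the real write path:

dd if=/dev/urandom of=big_file_00.big_file bs=1M count=300   # ~300 MB of random data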
Sequential benchmark performance using /dev/zero (my alias bm): ranges from 490MB/s to 540MB/s
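The bm alias isn't shown above; it is essentially a dd write from /dev/zero, roughly like the sketch below (the output filename is just an example). Keep in mind that with compression=zle (zero-length encoding) a stream of zeros compresses almost entirely away, which is part of why these numbers look so good:

alias bm='dd if=/dev/zero of=/MYWINPOOL/zero.test bs=1M count=1024'   # sequential write test
bm   # dd prints the MB/s figure when it finishes; delete zero.test afterwards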
To obtain these results I upped max-arc-size in the ZFS configuration file (/etc/zfs/zfsrc) from 100 to 1000, which had a big impact. I also changed a few other parameters:
max-arc-size = 1000
fuse-mount-options = default_permissions,big_writes,allow_other
#zfs-prefetch-disable ### This was uncommented and I commented it out
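As far as I can tell, zfs-fuse only reads /etc/zfs/zfsrc when the daemon starts, so a restart is needed for these changes to take effect:

sudo service zfs-fuse restart
sudo zpool status MYWINPOOL   # confirm the pool re-imported cleanly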
(trusty)cronkilla@localhost:~$ sudo zpool history MYWINPOOL
History for 'MYWINPOOL':
2015-02-18.18:51:15 zpool create -f MYWINPOOL raidz1 /dev/disk/by-id/usb-Generic_STORAGE_DEVICE_FUNWAY091552-0:0 /dev/disk/by-id/usb-Generic_STORAGE_DEVICE_FUNWAY091125-0:0 /dev/disk/by-id/usb-Generic_STORAGE_DEVICE_FUNWAY091147-0:0 /dev/disk/by-id/usb-Generic_STORAGE_DEVICE_FUNWAY091067-0:0 /dev/disk/by-id/usb-Generic_STORAGE_DEVICE_FUNWAY090855-0:0
2015-02-18.18:53:26 zfs set compression=zle MYWINPOOL
2015-02-18.18:53:27 zfs set checksum=fletcher4 MYWINPOOL
2015-02-18.18:53:29 zfs set dedup=on MYWINPOOL
2015-02-18.18:53:30 zfs set xattr=off MYWINPOOL
2015-02-18.18:53:31 zfs set atime=off MYWINPOOL
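Since df can't be trusted here, the ZFS tools themselves are the place to check real usage and how much dedup and compression are actually saving; for example:

sudo zfs list MYWINPOOL                  # logical used/available space
sudo zpool list MYWINPOOL                # physical pool size, alloc and free
sudo zpool get dedupratio MYWINPOOL      # stays at 1.00x until duplicate data lands
sudo zfs get compressratio MYWINPOOL     # savings from compression=zle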
Here is a quick glimpse at what it looked like during a write test via zpool iostat -v:
(trusty)cronkilla@localhost:~$ sudo zpool iostat -v
capacity operations bandwidth
pool alloc free read write read write
-------------------------------------- ----- ----- ----- ----- ----- -----
MYWINPOOL 151M 596G 0 27 834 1.89M
raidz1 151M 596G 0 27 834 1.89M
disk/by-id/usb-Generic_STORAGE_DEVICE_FUNWAY091552-0:0 - - 0 9 13.7K 493K
disk/by-id/usb-Generic_STORAGE_DEVICE_FUNWAY091125-0:0 - - 0 9 13.7K 495K
disk/by-id/usb-Generic_STORAGE_DEVICE_FUNWAY091147-0:0 - - 0 10 13.7K 501K
disk/by-id/usb-Generic_STORAGE_DEVICE_FUNWAY091067-0:0 - - 0 9 14.6K 500K
disk/by-id/usb-Generic_STORAGE_DEVICE_FUNWAY090855-0:0 - - 0 11 13.7K 509K
-------------------------------------- ----- ----- ----- ----- ----- -----
Here is more detailed pool data from zdb, truncated after the first disk (the remaining disks look much the same):
(trusty)cronkilla@localhost:~$ sudo zdb
MYWINPOOL:
version: 23
name: 'MYWINPOOL'
state: 0
txg: 4
pool_guid: 17797988667815477235
hostid: 8323328
hostname: 'localhost'
vdev_children: 1
vdev_tree:
type: 'root'
id: 0
guid: 17797988667815477235
create_txg: 4
children[0]:
type: 'raidz'
id: 0
guid: 16975055505754696246
nparity: 1
metaslab_array: 23
metaslab_shift: 32
ashift: 9
asize: 644221501440
is_log: 0
create_txg: 4
children[0]:
id: 0
guid: 13728714778704373774
path: '/dev/disk/by-id/usb-Generic_STORAGE_DEVICE_FUNWAY091552-0:0'
whole_disk: 0
create_txg: 4
...
In conclusion, I found ZFS and MicroSD cards to be a particularly good pairing, since MicroSD cards are super cheap and ZFS is already a software RAID controller. ZFS provides resiliency against bitrot via scrubbing, RAID redundancy against drive failures, and snapshots, so ZFS with MicroSD cards - in my opinion - makes for a nearly perfect complementary match.
As of Feb 19, 2015 this won't work directly on Windows, since Windows doesn't support ZFS - but there is always VirtualBox with drag-and-drop / Guest Additions pass-through.. =)