How to run ZFS on Linux via FUSE

So today I decided it was time to research the mythical ZFS filesystem. My curiosity stems from my interest in building a large multi-disk Linux system in the near future.

I started by creating a new virtual machine within VirtualBox, a free virtualization application from Sun (now Oracle). I created 7 virtual disks: one 8 GB disk for the main OS and six 2 GB disks to test ZFS on. I then installed a standard stable Debian system (sans the desktop environment) on the 8 GB disk. Once Debian booted up, it was time to get ZFS installed.
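
If you would rather script the data-disk creation than click through the VirtualBox GUI, something along these lines should work. The VM name "zfs-test" and the "SATA Controller" name are placeholders for whatever your setup uses, and older VirtualBox releases use createvdi instead of createhd:

for i in 1 2 3 4 5 6; do
  VBoxManage createhd --filename zfs-disk$i.vdi --size 2048   # size is in MB, so 2 GB each
  VBoxManage storageattach "zfs-test" --storagectl "SATA Controller" --port $i --device 0 --type hdd --medium zfs-disk$i.vdi
done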

First step was to simply pull the ZFS FUSE module’s source down by doing the following:

wget http://zfs-fuse.net/releases/0.6.9/zfs-fuse-0.6.9.tar.bz2
tar -jxf zfs-fuse-0.6.9.tar.bz2
rm -rf zfs-fuse-0.6.9.tar.bz2

This provides a nice folder containing the ZFS FUSE module source code, amongst a few other things. Now, to take care of a few dependencies and required programs to build said module. I ran the following command to install glibc, zlib, fuse, aio, scons, libssl, and attr:

sudo aptitude install libc6-dev zlib1g-dev libfuse-dev libaio-dev scons libssl-dev libattr1-dev

Now that I finally had the dependencies and required programs for the module, I went about building it:

cd zfs-fuse-0.6.9/src/
scons
scons install

You can think of scons as being similar to make, so in this step I simply compiled the module and then installed it. Surprisingly simple. Make sure that you run at least the scons install command as root (or via sudo).
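
To double-check that the install actually put the tools somewhere on your PATH, here is a quick sanity check (exact locations will vary by system):

which zfs zpool zfs-fuse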

Now, the only step left is to make sure that we automatically load the FUSE module and that the ZFS FUSE daemon automatically starts & mounts our ZFS pools on boot. To do this, I went through the following commands:

cd ../contrib/
echo "fuse" >> /etc/modules
cp zfs-fuse.initd.ubuntu /etc/init.d/zfs-fuse
update-rc.d zfs-fuse defaults

Keep in mind that all of these commands, save for the first, should be run as root (or via sudo). The first command simply changes into the folder, while the second adds the fuse module to the list of modules loaded at boot. The third command copies the provided init script that starts the ZFS FUSE daemon on Ubuntu; since Ubuntu is based on Debian, I figured it would work on a Debian system as well, and it did. The final command adds the zfs-fuse init script to our boot process.
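
Rather than rebooting just to test this, you can load the module and start the daemon right away and confirm that both are up (again as root; this is purely a sanity check):

modprobe fuse
/etc/init.d/zfs-fuse start
lsmod | grep fuse       # the fuse kernel module should now be listed
pgrep -l zfs-fuse       # the userspace zfs-fuse daemon should be running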

Now we get to the meat and potatoes: creating our ZFS pool. Run the following command to make a single logical volume from the 6 disks we created earlier:

zpool create -m /tank tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg

This creates a 6-disk pool named “tank” and mounts it as /tank via the -m option (you can obviously go with almost any mount point or naming scheme you want). Notice that I used /dev/sdb and so on as my drives – these may differ depending on how you set up your virtual hardware. One special keyword you will see is raidz2: it builds the set with RAIDZ2, which spreads two disks’ worth of parity across the drives so the pool can survive the loss of any two of them. With the current version of ZFS, one can use RAIDZ1, RAIDZ2, and even RAIDZ3, the number specifying how many disks’ worth of parity (and thus how many failures can be tolerated). Additionally, there is also basic mirroring and striping support.
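
For reference, here is roughly what the other layouts would look like with the same six disks. I did not run these in this walkthrough, so treat them as sketches:

zpool create -m /tank tank raidz1 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg   # single parity
zpool create -m /tank tank raidz3 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg   # triple parity
zpool create -m /tank tank mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde mirror /dev/sdf /dev/sdg   # three 2-way mirrors
zpool create -m /tank tank /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg   # plain striping, no redundancy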

With that single command, I had a working ZFS setup! I was floored at how simple the actual creation of the ZFS volume was after installing the module. I then checked the status of my ZFS pool to see the state of each disk and the size of the logical volume:

zpool status tank

This command will show the status of each RAIDZ and disk.

zpool list tank

This command will show the size and usage of each pool. For the one I created, it displayed 11.9 GB available. I then went through a scenario: what if I had 3 disks in a RAIDZ2, then wanted to add 3 more? After a bit of research, it seems there has been some work toward letting ZFS expand an existing RAIDZ set, but currently no such feature exists. Thus, a second RAIDZ set must be added to the pool (if you are following along with the same six disks, destroy the first pool with zpool destroy tank before recreating it):

zpool create -m /tank tank raidz2 /dev/sdb /dev/sdc /dev/sdd
zpool add tank raidz2 /dev/sde /dev/sdf /dev/sdg

The first command of course creates a pool with a single 3-disk RAIDZ2 set. The second command adds another RAIDZ2 set of 3 disks to the pool. Checking the status of the pool now will show two RAIDZ2 sets of 3 disks each. Checking zpool list again, the reported size remained the same (11.9 GB) – but keep in mind that zpool list reports raw capacity, parity included; with two 3-disk RAIDZ2 sets, four of the six disks now hold parity, so the usable space is noticeably smaller than with the single 6-disk RAIDZ2.
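
To see the difference between raw and usable capacity for yourself, compare the two views of the pool:

zpool list tank     # raw capacity, parity space included
zfs list tank       # usable space as the filesystem sees it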

I then wanted to simulate a disaster: losing a disk in each RAIDZ2 set:

/etc/init.d/zfs-fuse stop
dd if=/dev/zero of=/dev/sdc bs=1M
dd if=/dev/zero of=/dev/sdf bs=1M
/etc/init.d/zfs-fuse start

This basically nukes two drives (one per RAIDZ2 set). Checking the status shows each wiped disk as “unavailable” due to corrupted data, which is expected. Now, since we know the two virtual drives are actually in working order, we can simply notify ZFS that we have replaced the “bad” drives with good ones by running the following:

zpool replace tank /dev/sdc
zpool replace tank /dev/sdf

Which will have ZFS start rebuilding (resilvering) the RAIDZ2 sets – perfect! Separately, you can tell ZFS to walk the entire pool, verifying checksums and repairing anything it can, by running the following:

zpool scrub tank
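
Either way, the status command will show resilver or scrub progress along with any errors found:

zpool status -v tank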

From these experiments, it seems ZFS is an excellent solution for software RAID – so much so that I am not sure I will be going back to MDADM anytime soon. On the other hand, XFS (typically layered on top of MDADM or LVM) is also said to handle large volumes well, but for now I can say that ZFS is simple yet powerful.
