Grow a ZFS Partition on Areca Hardware RAID

I have a server with an Areca RAID card and two RAID1 mirrors. The first mirror is made up of 2x 80GB SATA disks for the OS. The second mirror is made up of 2x 500GB disks for the data, with a single partition, /dev/da1s1, holding a ZFS filesystem mounted at /opt. This was fine for a few months, until I started getting tight on space in /opt. As many of you know, when a ZFS filesystem runs low on free space its performance gets really poor. So as soon as I received the warning from Nagios I ordered 2x 2TB disks to replace the 2x 500GB.
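
For reference, the pool's fill level is easy to check by hand as well. A minimal check, assuming the pool is named opt (as the df output at the end of this post shows):

# zpool list opt
# zfs list opt

The CAP column from zpool list shows how full the pool is.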

When the disks arrived I swapped them out one by one and let the RAID rebuild after each swap, monitoring the entire process with the Areca cli64 tool (example commands below). Once the machine was running off the new 2TB disks, I rebooted and entered the Areca RAID BIOS. The BIOS saw the new disks, but the RAID Set was still only 500GB, so I picked the `Rescue RAID Set' option and entered the following command to tell it to resize:

RESETCAPACITY Raid Set # 01
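
The rebuilds (and later the re-initialization) can also be watched from the running OS with cli64. A rough sketch; the rsf/vsf sub-commands are taken from the Areca CLI and may differ between firmware and CLI versions:

# cli64 rsf info
# cli64 vsf info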

After the RESETCAPACITY command runs and the recompute is done, the card re-initializes the array. I chose the `Foreground Initialization' option and it took upwards of 4 hours to finish. Following that I rebooted to make sure the OS saw the correct size (I do not know whether this is actually required).
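
Required or not, it is easy to confirm that the OS sees the new capacity before touching the partition table, for example:

# diskinfo -v da1

The mediasize lines should now report roughly 2TB.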

Looking at the partition with gpart I saw this:

# gpart show da1
  =>        63  3906249606  da1  MBR  (1.8T)
            63   976559157    1  freebsd  [active]  (466G)
     976559220  2929690449       - free -  (1.4T)

Then I attempted to resize the partition:

# gpart resize -i 1 da1
gpart: Device busy

So I rebooted into single-user mode and resized the partition there (the resize verb requires FreeBSD 8.2 or newer):

# gpart resize -i 1 da1

Which was successful and gave me:

# gpart show da1
  =>        63  3906249606  da1  MBR  (1.8T)
            63  3906249606    1  freebsd  [active]  (1.8T)

Now we edit the BSD label inside the slice so its partitions match the new slice size shown by `gpart show' above, using bsdlabel:

# bsdlabel -e /dev/da1s1

Set the size of the c and d partitions to the new value, then save and quit:

# /dev/da1s1:
8 partitions:
#        size   offset    fstype   [fsize bsize bps/cpg]
  c: 3906249606        0    unused        0     0         # "raw" part, don't edit
  d: 3906249606        0    4.2BSD     2048 16384 28528
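
To double-check that the label took, print it back or look inside the slice with gpart:

# bsdlabel /dev/da1s1
# gpart show da1s1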

After rebooting back into multi-user mode, df shows the new filesystem size:

# df -h /opt/
Filesystem    Size    Used   Avail Capacity  Mounted on
opt           1.8T    397G    1.4T    22%    /opt

ZFS automatically discovers the new device size on export and import, which happens at boot, and expands to use it. So that is one less thing we have to worry about.
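
If your ZFS version does not grow the pool on its own, newer versions have an autoexpand pool property and a `zpool online -e' command for exactly this. A sketch, assuming the pool's vdev is the d partition from the label above (da1s1d):

# zpool set autoexpand=on opt
# zpool online -e opt da1s1d

In my case the export/import at boot was enough, so neither was needed.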
