Tuesday, March 30, 2010

Sun Fire V120 as NAS part 4: Testing ZFS RAID-Z

I wanted to test a RAID 5 configuration too, but since my server boots from a ZFS disk, the Solaris Volume Manager (SVM) state databases were never created and I can't use it. No matter, my main project focuses on ZFS anyway. Let's start!
From the SSM software or the format utility I get the disk names (SCSI targets 1 to 3 on SCSI bus 2, the array disks):
  • c2t1d0
  • c2t2d0
  • c2t3d0

Creating a ZFS RAID-Z pool is easy, just type:
# zpool create fsshared raidz c2t1d0 c2t2d0 c2t3d0
Now I have a new filesystem mounted at /fsshared:
# ls
bin       dev       export    kernel    mnt       platform  sbin      usr
boot      devices   fsshared  lib       net       proc      system    var
cdrom     etc       home      lom2      opt       rpool     tmp       vol
# zfs list
NAME                        USED  AVAIL  REFER  MOUNTPOINT
fsshared                   89,9K  33,1G  28,0K  /fsshared
rpool                      4,67G  28,6G    97K  /rpool
rpool/ROOT                 3,67G  28,6G    21K  legacy
rpool/ROOT/s10s_u8wos_08a  3,67G  28,6G  3,67G  /
rpool/dump                  512M  28,6G   512M  -
rpool/export                 44K  28,6G    23K  /export
rpool/export/home            21K  28,6G    21K  /export/home
rpool/swap                  512M  28,8G   230M  -
In the list you can see the main boot ZFS filesystems under rpool, plus my new fsshared pool. Now it's time to check speed.
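Before the speed tests, a quick sanity check on that 33,1G AVAIL figure. RAID-Z dedicates the equivalent of one disk to parity, so usable space is roughly (n-1)/n of raw capacity. A minimal sketch, assuming three 18 GB disks (the per-disk size is my assumption, the post doesn't state it):

```shell
# RAID-Z keeps one disk's worth of parity, so usable space is
# roughly (n-1)/n of raw capacity for an n-disk group.
# Assumed values: 3 disks of 18 GB each (a guess, not confirmed above).
disks=3
disk_gb=18
usable=$(( (disks - 1) * disk_gb ))
echo "${usable} GB usable before ZFS metadata overhead"
```

36 GB is about 33.5 GiB, which after pool overhead lines up with the 33,1G that zfs list reports, so 18 GB disks are a plausible fit.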
Create a 2 GB file and watch the pool's free space decrease:
# cd /fsshared
# mkfile 2g testfile
# ls
testfile
# zfs list
NAME                        USED  AVAIL  REFER  MOUNTPOINT
fsshared                   2,00G  31,1G  2,00G  /fsshared
rpool                      4,74G  28,5G    97K  /rpool
rpool/ROOT                 3,74G  28,5G    21K  legacy
rpool/ROOT/s10s_u8wos_08a  3,74G  28,5G  3,74G  /
rpool/dump                  512M  28,5G   512M  -
rpool/export                 44K  28,5G    23K  /export
rpool/export/home            21K  28,5G    21K  /export/home
rpool/swap                  512M  28,8G   230M  -
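A side note: mkfile is Solaris-specific. On a system without it, dd builds an equivalent zero-filled test file; a small sketch with a 2 MB file (raise count to 2048 for the 2 GB file used here):

```shell
# Portable stand-in for Solaris `mkfile 2g testfile`.
# bs=1048576 is 1 MiB spelled out (works with both GNU and BSD dd);
# count=2 keeps the demo tiny.
dd if=/dev/zero of=/tmp/testfile bs=1048576 count=2 2>/dev/null
ls -l /tmp/testfile
```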
To check the transfer speed I copied testfile between the array and /, measuring the time manually:
# cp /fsshared/testfile /
2048 MB  -  161 s  -  12.8 MB/s
# rm /fsshared/testfile
# cp /testfile /fsshared
2048 MB  -  181 s  -  11.3 MB/s
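Those MB/s figures are just file size divided by elapsed time; a quick recomputation of the two timings above:

```shell
# Throughput = size / elapsed time, using the manual timings from the copies above.
size_mb=2048
for secs in 161 181; do
  awk -v s="$size_mb" -v t="$secs" \
    'BEGIN { printf "%d MB / %d s = %.1f MB/s\n", s, t, s / t }'
done
```

2048/181 matches the 11.3 MB/s above; 2048/161 actually comes out at 12.7 MB/s, so the 12.8 figure was probably timed at closer to 160 s.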
This is not a good result, and I'm starting to think the small amount of RAM in the server is one of the causes (along with the slow CPU, of course). Writing from the main disk to the array is slower than the reverse direction. This speed is not fantastic, but it's a write speed; now I need a gigabit card to check read speed. If read performance is OK, the V120 will be enough as a NAS.

A new test, copying the same file from the array to the server's second disk:
# zpool create disc2 c1t1d0
# cp /fsshared/testfile /disc2
2048 MB  -  164 s  -  12.5 MB/s
No better results here... What about copying from the main disk to the second one? And the reverse?

# rm /disc2/testfile
# cp /testfile /disc2
2048 MB  -  121 s  -  16.9 MB/s

Wow... Maybe the ZFS RAID-Z parity calculations are too heavy for this server.
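That hunch is easy to put a number on. On an n-disk RAID-Z group, every logical write also writes parity, so the disks absorb roughly n/(n-1) times the data, and the CPU computes the parity XOR in software. A rough sketch for this 3-disk pool:

```shell
# Rough write amplification for an n-disk RAID-Z group: n / (n - 1).
# (ZFS uses variable-width stripes, so this is only an approximation.)
disks=3
awk -v n="$disks" \
  'BEGIN { printf "about %.2fx raw bytes per logical byte written\n", n / (n - 1) }'
```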
OK, next up: testing NFS, FTP and SMB transfer results.
