Thursday, August 5, 2010

Zpooling continued

Now that I have a working mirror again, I will 'repair' the failed disk:

# ls -l /media/Lexar/zfs-testbed/
total 3145729
-rwxrwxrwx 1 root root 512 2010-08-04 14:46 disk1
-rwxrwxrwx 1 root root 1073741824 2010-08-05 08:35 disk2
-rwxrwxrwx 1 root root 1073741824 2010-08-05 08:35 disk3
-rwxrwxrwx 1 root root 1073741824 2010-08-04 14:01 disk4

# rm /media/Lexar/zfs-testbed/disk1

# mkfile 1g /media/Lexar/zfs-testbed/disk1

# ls -l /media/Lexar/zfs-testbed/
total 4194304
-rwxrwxrwx 1 root root 1073741824 2010-08-05 08:48 disk1
-rwxrwxrwx 1 root root 1073741824 2010-08-05 08:45 disk2
-rwxrwxrwx 1 root root 1073741824 2010-08-05 08:45 disk3
-rwxrwxrwx 1 root root 1073741824 2010-08-04 14:01 disk4
#
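At this point disk1 is just a zeroed-out file with no ZFS label on it. As a quick sketch (I didn't actually run this during the session), zdb can be used to confirm there are no labels before the file goes back into the pool:

# zdb -l /media/Lexar/zfs-testbed/disk1

On a freshly created file it should report that it can't unpack any of the four labels, which is exactly what we want for a 'new' disk.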

Now it's time to add the two 'spare' disks to the pool as a second mirror (with the mirror keyword, zpool add creates a new top-level vdev, so clifford becomes a stripe of two mirrors rather than a bigger single mirror):

# zpool add clifford mirror /media/Lexar/zfs-testbed/disk1 /media/Lexar/zfs-testbed/disk4

I'll use a different command to take a look at the pool:

# zpool iostat -v clifford
                                       capacity     operations    bandwidth
pool                                used  avail   read  write   read  write
----------------------------------  -----  -----  -----  -----  -----  -----
clifford                            150K   1.98G      0      0      1    826
  mirror                            120K   1016M      0      0      1    823
    /media/Lexar/zfs-testbed/disk2     -       -      0      0     62    878
    /media/Lexar/zfs-testbed/disk3     -       -      0      0     41    878
  mirror                           29.5K   1016M      0      0      0    565
    /media/Lexar/zfs-testbed/disk1     -       -      0      0    319  31.4K
    /media/Lexar/zfs-testbed/disk4     -       -      0      0    319  31.4K
----------------------------------  -----  -----  -----  -----  -----  -----

#
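With the second mirror in place, a scrub is a reasonable end-to-end check of the pool. I didn't run one here, but it would look something like this:

# zpool scrub clifford
# zpool status clifford

The status output should finish with a 'scrub completed ... with 0 errors' line like the one in yesterday's post.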

Wednesday, August 4, 2010

Creation of zpool

A while back I created a zpool. Since I didn't have any unused disks lying around, I used a 32 GB USB drive and created four files on it to be treated as separate disks, so that I could build a zpool and mirror the 'disks'. Although this doesn't really offer the protection of mirroring, it did let me get familiar with zpools and ZFS filesystems; previously I had only created a single ZFS filesystem on a spare disk.

The first thing I did was create the four disk files. The USB drive was mounted as /media/Lexar. I then created a sub-directory called 'zfs-testbed'. I "cd'd" to it and created the files:

mkfile 1g /media/Lexar/zfs-testbed/disk1
mkfile 1g /media/Lexar/zfs-testbed/disk2
mkfile 1g /media/Lexar/zfs-testbed/disk3
mkfile 1g /media/Lexar/zfs-testbed/disk4

That created four 1 GB files.
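mkfile is a Solaris command. On a Linux box, where mkfile isn't available, dd would do the same job (a sketch; I didn't need it here):

dd if=/dev/zero of=/media/Lexar/zfs-testbed/disk1 bs=1024k count=1024

That writes out a 1 GB file of zeros, much like mkfile does; 'truncate -s 1G' would create a sparse equivalent.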

Next I created a zpool consisting of just the one 'disk':

zpool create gregory /media/Lexar/zfs-testbed/disk1

zpool list
NAME         SIZE   USED   AVAIL   CAP  HEALTH  ALTROOT
gregory     1016M    73K   1016M    0%  ONLINE  -
rpool        148G  17.1G    131G   11%  ONLINE  -
zfs-ramos    148G  12.4G    136G    8%  ONLINE  -
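One thing zpool list doesn't show: creating the pool also creates a root ZFS filesystem, mounted at /gregory by default. I didn't capture it at the time, but it would show up with something like:

zfs list gregory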

Then I destroyed the pool, so I could create a mirror:

zpool destroy gregory

zpool create clifford mirror /media/Lexar/zfs-testbed/disk1 /media/Lexar/zfs-testbed/disk2


zpool list
NAME         SIZE   USED   AVAIL   CAP  HEALTH  ALTROOT
clifford    1016M  74.5K   1016M    0%  ONLINE  -
rpool        148G  17.1G    131G   11%  ONLINE  -
zfs-ramos    148G  12.4G    136G    8%  ONLINE  -


Now I intentionally destroyed the label on disk1:

dd if=/dev/random of=/media/Lexar/zfs-testbed/disk1 bs=512 count=1
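A side note on that dd: because it writes to a regular file without conv=notrunc, it doesn't just overwrite the first sector, it also truncates disk1 down to 512 bytes (which is why disk1 shows up as a 512-byte file in the next day's listing). A gentler version that only overwrites the first 512 bytes and leaves the file at 1 GB would be (a sketch):

dd if=/dev/random of=/media/Lexar/zfs-testbed/disk1 bs=512 count=1 conv=notrunc

though ZFS keeps redundant copies of its label on each device, so that milder version might not be enough to knock the disk out of the pool.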

zpool status
  pool: clifford
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid. Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-4J
 scrub: scrub completed after 0h0m with 0 errors on Wed Aug 4 14:47:35 2010
config:

        NAME                                STATE     READ WRITE CKSUM
        clifford                            DEGRADED     0     0     0
          mirror                            DEGRADED     0     0     0
            /media/Lexar/zfs-testbed/disk1  UNAVAIL      0     0     0  corrupted data
            /media/Lexar/zfs-testbed/disk2  ONLINE       0     0     0


Now detach the bad disk (this leaves clifford running on a single disk, with no redundancy, until a replacement is attached):

zpool detach clifford /media/Lexar/zfs-testbed/disk1


Attach a new 'disk':

zpool attach clifford /media/Lexar/zfs-testbed/disk2 /media/Lexar/zfs-testbed/disk3

# zpool status clifford
  pool: clifford
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Wed Aug 4 14:55:33 2010
config:

        NAME                                STATE     READ WRITE CKSUM
        clifford                            ONLINE       0     0     0
          mirror                            ONLINE       0     0     0
            /media/Lexar/zfs-testbed/disk2  ONLINE       0     0     0
            /media/Lexar/zfs-testbed/disk3  ONLINE       0     0     0  85K resilvered

errors: No known data errors
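The earlier status output suggested 'zpool replace'. The detach/attach pair above works, but the one-step equivalent would have been something like this (a sketch, not what I actually ran):

zpool replace clifford /media/Lexar/zfs-testbed/disk1 /media/Lexar/zfs-testbed/disk3

which swaps in the new device and resilvers it in a single command.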


Here I rebooted to complete the process. After the reboot:

zpool status clifford
  pool: clifford
 state: ONLINE
 scrub: none requested
config:

        NAME                                STATE     READ WRITE CKSUM
        clifford                            ONLINE       0     0     0
          mirror                            ONLINE       0     0     0
            /media/Lexar/zfs-testbed/disk2  ONLINE       0     0     0
            /media/Lexar/zfs-testbed/disk3  ONLINE       0     0     0

errors: No known data errors