Wednesday, February 24, 2016

Changes in 11gr2 regarding votedisk

There are a few noticeable changes in 11gR2 specific to voting disks.

As we are aware, the voting disk and OCR can now be stored in ASM. But we also know that in previous versions, CSS needed the voting disk to start.

So how does Grid Infrastructure start when the voting disk is in an ASM diskgroup?

We have created an ASM diskgroup DATA, and both the voting disk and the OCR reside in it.

SQL> select name, type, total_mb, usable_file_mb
from v$asm_diskgroup;

NAME   TYPE     TOTAL_MB USABLE_FILE_MB
------ ------ ---------- --------------
DATA   EXTERN       3072           2672

SQL> set line 200
SQL> column path format a30
SQL> select name, path, header_status from v$asm_disk;

NAME      PATH       HEADER_STATU
--------- ---------- ------------
DATA_0002 /dev/sdg   MEMBER
DATA_0000 /dev/sdi   MEMBER
DATA_0001 /dev/sdh   MEMBER

grid@db1:~> crsctl query css votedisk

##  STATE    File Universal Id                File Name   Disk group
--  -----    -----------------                ---------   ----------
 1. ONLINE   4f2e374b254e4f57bf85ac5db31a91fb (/dev/sdi)  [DATA]
Located 1 voting disk(s).
A few more things:

1. Voting disks are stored on individual disks, unlike the OCR, which is stored as a file in the ASM diskgroup. So a voting disk will not span multiple disks; it is confined to a single disk.

2. At the start of ohasd, all disks matching asm_diskstring are scanned; ohasd learns asm_diskstring from the GPnP profile.

This functionality allows CSS to start before ASM.
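The GPnP profile is an XML file on each node. As a minimal sketch of how the discovery string can be read from it, here is a parse of a simplified, hypothetical profile fragment (the real profile.xml carries more namespaces, a wallet signature, and additional elements):

```python
# Sketch: pull the ASM DiscoveryString attribute out of a GPnP-style
# profile fragment. The XML below is an illustrative stand-in, not a
# verbatim copy of a real profile.xml.
import xml.etree.ElementTree as ET

profile_xml = """
<gpnp:GPnP-Profile
    xmlns:gpnp="http://www.grid-pnp.org/2005/11/gpnp-profile"
    xmlns:orcl="http://www.oracle.com/gpnp/2005/11/gpnp-profile">
  <orcl:ASM-Profile DiscoveryString="/dev/sd*" SPFile="+DATA/db1/spfileasm.ora"/>
</gpnp:GPnP-Profile>
"""

root = ET.fromstring(profile_xml)
ns = {"orcl": "http://www.oracle.com/gpnp/2005/11/gpnp-profile"}
asm = root.find("orcl:ASM-Profile", ns)
# The pattern ohasd scans for candidate disks:
print(asm.get("DiscoveryString"))  # /dev/sd*
```

Because the discovery string lives in the local GPnP profile rather than inside ASM, ohasd can enumerate candidate disks before any ASM instance is up.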

3. How does ohasd know which disk contains the voting disk?

Following is the ASM disk header metadata (dumped using kfed). ohasd uses the markers kfdhdb.vfstart and kfdhdb.vfend. If both markers are 0, the disk does *NOT* contain a voting disk.

grid@db1:~> kfed read '/dev/sdi' | grep vf
kfdhdb.vfstart: 128 ; 0x0ec: 0x00000080
kfdhdb.vfend: 160 ; 0x0f0: 0x000000a0

grid@db1:~> kfed read '/dev/sdg' | grep vf
kfdhdb.vfstart: 0 ; 0x0ec: 0x00000000
kfdhdb.vfend: 0 ; 0x0f0: 0x00000000

grid@db1:~> kfed read '/dev/sdh' | grep vf
kfdhdb.vfstart: 0 ; 0x0ec: 0x00000000
kfdhdb.vfend: 0 ; 0x0f0: 0x00000000

So from the above output, disk /dev/sdi contains the voting disk.
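The same check can be mimicked with a few lines of script. This is only a sketch of the marker logic, assuming kfed output in the format shown above; the helper function and sample strings are illustrative, not an Oracle API:

```python
# Sketch: decide from kfed text output whether a disk holds a voting file.
# A disk carries a voting file only when the kfdhdb.vfstart / kfdhdb.vfend
# markers are non-zero. Sample lines are taken from the kfed dumps above.
def has_voting_file(kfed_output: str) -> bool:
    markers = {}
    for line in kfed_output.splitlines():
        if line.startswith(("kfdhdb.vfstart:", "kfdhdb.vfend:")):
            field, rest = line.split(":", 1)
            markers[field] = int(rest.split(";")[0])  # decimal value column
    return (markers.get("kfdhdb.vfstart", 0) != 0
            and markers.get("kfdhdb.vfend", 0) != 0)

sdi = ("kfdhdb.vfstart: 128 ; 0x0ec: 0x00000080\n"
       "kfdhdb.vfend: 160 ; 0x0f0: 0x000000a0")
sdg = ("kfdhdb.vfstart: 0 ; 0x0ec: 0x00000000\n"
       "kfdhdb.vfend: 0 ; 0x0f0: 0x00000000")

print(has_voting_file(sdi))  # True  -> /dev/sdi holds the voting disk
print(has_voting_file(sdg))  # False
```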

4. The number of voting disks is decided based on the diskgroup redundancy: 1 voting disk on external redundancy, 3 on normal, and 5 on high.
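The rule of thumb can be written down as a small lookup, keyed by the redundancy value reported in v$asm_diskgroup (a hypothetical helper for illustration, not an Oracle API):

```python
# Illustrative mapping: diskgroup redundancy -> number of voting files.
# Keys match the TYPE column of v$asm_diskgroup.
VOTING_FILES = {"EXTERN": 1, "NORMAL": 3, "HIGH": 5}

# The DATA diskgroup above is EXTERN redundancy, hence the single
# voting disk seen in 'crsctl query css votedisk'.
print(VOTING_FILES["EXTERN"])  # 1
```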

5. If we want to store a voting disk in a new diskgroup (apart from the one which already holds a voting disk), we need to create a QUORUM failure group.

From the Oracle docs: "A quorum failure group is a special type of failure group, and disks in these failure groups do not contain user data and are not considered when determining redundancy requirements."

6. Manual backup of the voting disk is no longer required. The voting disk data is automatically backed up in the OCR as part of any configuration change.
