Monday, July 29, 2019

Cluster Verification Utility (CLUVFY)


The Cluster Verification Utility (CVU) performs system checks in preparation for installation, patch updates, or other system changes. Using CVU ensures that you have completed the required system configuration and preinstallation steps so that your Oracle Grid Infrastructure or Oracle Real Application Clusters (Oracle RAC) installation, update, or patch operation completes successfully.


1. Pre-check for CRS installation


Use the cluvfy stage -pre crsinst command to check the specified nodes before installing Oracle Clusterware. CVU performs additional checks on OCR and voting disks if you specify the -c and -q options.

cluvfy stage -pre crsinst -n node1,node2 -verbose
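
For example, to include the additional OCR and voting disk checks (the device paths below are placeholders; substitute your own OCR and voting disk locations):

cluvfy stage -pre crsinst -n node1,node2 -c /dev/sdc1 -q /dev/sdd1 -verbose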

2. Post-Check for CRS Installation


Use the cluvfy stage -post crsinst command to check the specified nodes after installing Oracle Clusterware.

cluvfy stage -post crsinst -n node1,node2 -verbose

3. Post-check for hardware and operating system


-- Use the cluvfy stage -post hwos stage verification command to perform network and storage verifications on the specified nodes in the cluster before installing
   Oracle software. This command also checks for supported storage types and checks each one for sharing.

cluvfy stage -post hwos -n node_list [-s storageID_list] [-verbose]
cluvfy stage -post hwos -n node1,node2 -verbose

4. Pre-check for ACFS Configuration


-- Use the cluvfy stage -pre acfscfg command to verify that your cluster nodes are set up correctly before configuring Oracle ASM Cluster File System (Oracle ACFS).

cluvfy stage -pre acfscfg -n node_list [-asmdev asm_device_list] [-verbose]
cluvfy stage -pre acfscfg -n node1,node2 -verbose

5. Post-check for ACFS Configuration


-- Use the cluvfy stage -post acfscfg command to check an existing cluster after you configure Oracle ACFS.

cluvfy stage -post acfscfg -n node_list [-verbose]
cluvfy stage -post acfscfg -n node1,node2 -verbose

6. Pre-check for OCFS2 or OCFS


-- Use the cluvfy stage -pre cfs stage verification command to verify your cluster nodes are set up correctly before setting up OCFS2 or OCFS for Windows.

cluvfy stage -pre cfs -n node_list -s storageID_list [-verbose]
cluvfy stage -pre cfs -n node1,node2 -verbose

7. Post-check for OCFS2 or OCFS


-- Use the cluvfy stage -post cfs stage verification command to perform the appropriate checks on the specified nodes after setting up OCFS2 or OCFS for Windows.

cluvfy stage -post cfs -n node_list -f file_system [-verbose]
cluvfy stage -post cfs -n node1,node2 -verbose

8. Pre-check for database configuration


-- Use the cluvfy stage -pre dbcfg command to check the specified nodes before configuring an Oracle RAC database to verify whether your system meets all of the
   criteria for creating a database or for making a database configuration change.

cluvfy stage -pre dbcfg -n node_list -d Oracle_home [-fixup [-fixupdir fixup_dir]] [-verbose]
cluvfy stage -pre dbcfg -n node1,node2 -d Oracle_home -verbose

9. Pre-check for database installation


-- Use the cluvfy stage -pre dbinst command to check the specified nodes before installing or creating an Oracle RAC database to verify that your system meets all of
   the criteria for installing or creating an Oracle RAC database.

cluvfy stage -pre dbinst -n node_list [-r {10gR1 | 10gR2 | 11gR1 | 11gR2}] [-osdba osdba_group] [-d Oracle_home] [-fixup [-fixupdir fixup_dir]] [-verbose]
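
A typical invocation (the OSDBA group below matches the dba group created later in this post; adjust to your environment):

cluvfy stage -pre dbinst -n node1,node2 -r 11gR2 -osdba dba -verbose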

10. Pre-check for configuring Oracle Restart


-- Use the cluvfy stage -pre hacfg command to check a local node before configuring Oracle Restart.

cluvfy stage -pre hacfg [-osdba osdba_group] [-orainv orainventory_group] [-fixup [-fixupdir fixup_dir]] [-verbose]
cluvfy stage -pre hacfg -verbose

11. Post-check for configuring Oracle Restart


-- Use the cluvfy stage -post hacfg command to check the local node after configuring Oracle Restart.

cluvfy stage -post hacfg [-verbose]
cluvfy stage -post hacfg -verbose

12. Pre-check for adding a node


/*Use the cluvfy stage -pre nodeadd command to verify the specified nodes are configured correctly before adding them to your existing cluster, and to verify the integrity of the cluster before you add the nodes.

This command verifies that the system configuration, such as the operating system version, software patches, packages, and kernel parameters, for the nodes that you want to add, is compatible with the existing cluster nodes, and that the clusterware is successfully operating on the existing nodes. Run this command on any node of the existing cluster.
*/

cluvfy stage -pre nodeadd -n node_list [-vip vip_list]  [-fixup [-fixupdir fixup_dir]] [-verbose]
cluvfy stage -pre nodeadd -n node1,node2 -verbose

13. Post-check for adding a node


/*
Use the cluvfy stage -post nodeadd command to verify that the specified nodes have been successfully added to the cluster at the network, shared storage, and clusterware levels.
*/

cluvfy stage -post nodeadd -n node_list [-verbose]
cluvfy stage -post nodeadd -n node1,node2 -verbose

14. Post-check for node deletion


/*
Use the cluvfy stage -post nodedel command to verify that specific nodes have been successfully deleted from a cluster. Typically, this command verifies that the node-specific interface configuration details have been removed, the nodes are no longer a part of cluster configuration, and proper Oracle ASM cleanup has been performed.
*/

cluvfy stage -post nodedel -n node_list [-verbose]
cluvfy stage -post nodedel -n node1,node2 -verbose


15. Check ACFS integrity


-- Use the cluvfy comp acfs component verification command to check the integrity of Oracle ASM Cluster File System on all nodes in a cluster.

cluvfy comp acfs [-n [node_list] | [all]] [-f file_system] [-verbose]
cluvfy comp acfs -n node1,node2 -f /acfs/share -verbose

16. Checks user accounts and administrative permissions


/*
Use the cluvfy comp admprv command to verify user accounts and administrative permissions for installing Oracle Clusterware and Oracle RAC software, and for creating an Oracle RAC database or modifying an Oracle RAC database configuration.
*/

cluvfy comp admprv [-n node_list]
{ -o user_equiv [-sshonly] |
 -o crs_inst [-orainv orainventory_group] |
 -o db_inst [-osdba osdba_group] [-fixup [-fixupdir fixup_dir]] |
 -o db_config -d oracle_home [-fixup [-fixupdir fixup_dir]] }
 [-verbose]
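
For example, to check SSH user equivalence and Clusterware installation privileges (the oinstall group name matches the setup later in this post):

cluvfy comp admprv -n node1,node2 -o user_equiv -sshonly -verbose
cluvfy comp admprv -n node1,node2 -o crs_inst -orainv oinstall -verbose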

17. Check ASM integrity


Use the cluvfy comp asm component verification command to check the integrity of Oracle Automatic Storage Management (Oracle ASM) on all nodes in the cluster. This check ensures that the ASM instances on the specified nodes are running from the same Oracle home and that asmlib, if it exists, has a valid version and ownership.

cluvfy comp asm [-n node_list | all ] [-verbose]
cluvfy comp asm -n node1,node2 -verbose

18. Check CFS integrity


Use the cluvfy comp cfs component verification command to check the integrity of the clustered file system (OCFS for Windows or OCFS2) you provide using the -f option. CVU checks the sharing of the file system from the nodes in the node list.

cluvfy comp cfs [-n node_list] -f file_system [-verbose]
cluvfy comp cfs -n node1,node2 -f /ocfs2/share -verbose

19. Check Clock Synchronization


Use the cluvfy comp clocksync component verification command to check clock synchronization across all the nodes in the node list. CVU verifies that a time synchronization service is running (Oracle Cluster Time Synchronization Service (CTSS) or Network Time Protocol (NTP)), that each node is using the same reference server for clock synchronization, and that the time offset for each node is within permissible limits.

cluvfy comp clocksync [-noctss] [-n node_list [all]] [-verbose]
cluvfy comp clocksync -n node1,node2 -verbose

-noctss
If you specify this option, then CVU does not perform a check on CTSS. Instead, CVU checks the platform's native time synchronization service, such as NTP.

20. Check cluster integrity


Use the cluvfy comp clu component verification command to check the integrity of the cluster on all the nodes in the node list.

cluvfy comp clu [-n node_list] [-verbose]
cluvfy comp clu -n node1,node2 -verbose

21. Check cluster manager integrity


Use the cluvfy comp clumgr component verification command to check the integrity of the cluster manager subcomponent, or Oracle Cluster Synchronization Services (CSS), on all the nodes in the node list.

cluvfy comp clumgr [-n node_list] [-verbose]
cluvfy comp clumgr -n node1,node2 -verbose

22. Check CRS integrity


Run the cluvfy comp crs component verification command to check the integrity of the Cluster Ready Services (CRS) daemon on the specified nodes.

cluvfy comp crs [-n node_list] [-verbose]
cluvfy comp crs -n node1,node2 -verbose

23. Check DHCP


Starting with Oracle Database 11g release 2 (11.2.0.2), use the cluvfy comp dhcp component verification command to verify that a DHCP server exists on the network and is capable of providing the required number of IP addresses. This check also verifies the response time of the DHCP server. You must run this command as root.

# cluvfy comp dhcp -clustername cluster_name [-vipresname vip_resource_name] [-port dhcp_port] [-n node_list] [-verbose]
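
For example (the cluster name below is a placeholder):

# cluvfy comp dhcp -clustername rac-cluster -n node1,node2 -verbose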

-clustername cluster_name
The name of the cluster for which you want to check the DHCP configuration.

-vipresname vip_resource_name
The name of the VIP resource.

-port dhcp_port
The port on which DHCP listens. The default port is 67.

24. Check DNS

Starting with Oracle Database 11g release 2 (11.2.0.2), use the cluvfy comp dns component verification command to verify that the Grid Naming Service (GNS) subdomain delegation has been properly set up in the Domain Name Service (DNS) server.

Run cluvfy comp dns -server on one node of the cluster. On each node of the cluster run cluvfy comp dns -client to verify the DNS server setup for the cluster.
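
A rough sketch of both sides is shown below; the -domain, -vipaddress, and -vip option names and values are assumptions based on the 11.2 syntax, so verify them with cluvfy comp dns -help and substitute your own GNS sub-domain and GNS VIP:

cluvfy comp dns -server -domain grid.oracle.com -vipaddress 192.168.0.20 -verbose
cluvfy comp dns -client -domain grid.oracle.com -vip 192.168.0.20 -verbose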

25. Check HA integrity

Use the cluvfy comp ha component verification command to check the integrity of Oracle Restart on the local node.

cluvfy comp ha [-verbose]
cluvfy comp ha -verbose

26. Check space availability

Use the cluvfy comp freespace component verification command to check the free space available in the Oracle Clusterware home storage and ensure that there is at least 5% of the total space available. For example, if the total storage is 10GB, then the check ensures that at least 500MB of it is free.

cluvfy comp freespace [-n node_list | all]
cluvfy comp freespace -n node1,node2

27. Check GNS

Use the cluvfy comp gns component verification command to verify the integrity of the Oracle Grid Naming Service (GNS) on the cluster.

cluvfy comp gns -precrsinst -domain gns_domain -vip gns_vip [-n node_list]  [-verbose]

cluvfy comp gns -postcrsinst [-verbose]
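
For example (the GNS sub-domain and VIP below are placeholders):

cluvfy comp gns -precrsinst -domain grid.oracle.com -vip 192.168.0.20 -n node1,node2 -verbose
cluvfy comp gns -postcrsinst -verbose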

28. Check GPNP

Use the cluvfy comp gpnp component verification command to check the integrity of Grid Plug and Play on all of the nodes in a cluster.

cluvfy comp gpnp [-n node_list] [-verbose]
cluvfy comp gpnp -n node1,node2 -verbose

29. Check healthcheck

Use the cluvfy comp healthcheck component verification command to check your Oracle Clusterware and Oracle Database installations for their compliance with mandatory requirements and best practices guidelines, and to ensure that they are functioning properly.

cluvfy comp healthcheck [-collect {cluster|database}] [-db db_unique_name]
 [-bestpractice|-mandatory] [-deviations] [-html] [-save [-savedir directory_path]]
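
For example, to collect cluster-wide best-practice findings into an HTML report (the save directory below is a placeholder):

cluvfy comp healthcheck -collect cluster -bestpractice -html -save -savedir /tmp/cvu_health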

30. Checks node applications existence

Use the cluvfy comp nodeapp component verification command to check for the existence of node applications, namely VIP, NETWORK, ONS, and GSD, on all of the specified nodes.

cluvfy comp nodeapp [-n node_list] [-verbose]
cluvfy comp nodeapp -n node1,node2 -verbose

31. Check node connectivity

Use the cluvfy comp nodecon component verification command to check the connectivity among the nodes specified in the node list. If you provide an interface list, then CVU checks the connectivity using only the specified interfaces.

cluvfy comp nodecon -n node_list [-i interface_list] [-verbose]
cluvfy comp nodecon -i eth2 -n node1,node2 -verbose
cluvfy comp nodecon -i eth3 -n node1,node2 -verbose

32. Checks reachability between nodes

Use the cluvfy comp nodereach component verification command to check the reachability of specified nodes from a source node.

cluvfy comp nodereach -n node_list [-srcnode node] [-verbose]
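
For example:

cluvfy comp nodereach -n node1,node2 -srcnode node1 -verbose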

-srcnode node
The name of the source node from which CVU performs the reachability test. If you do not specify a source node, then the node on which you run the command is used as the source node.

33. Check OCR integrity

Use the cluvfy comp ocr component verification command to check the integrity of Oracle Cluster Registry (OCR) on all the specified nodes.

cluvfy comp ocr [-n node_list] [-verbose]
cluvfy comp ocr -n node1,node2 -verbose

34. Check OHASD integrity

Use the cluvfy comp ohasd component verification command to check the integrity of the Oracle High Availability Services daemon.

cluvfy comp ohasd [-n node_list] [-verbose]
cluvfy comp ohasd -n node1,node2 -verbose

35. Check OLR integrity

Use the cluvfy comp olr component verification command to check the integrity of Oracle Local Registry (OLR) on the local node.

cluvfy comp olr [-verbose]
cluvfy comp olr -verbose

36. Check node comparison and verification

Use the cluvfy comp peer component verification command to check the compatibility and properties of the specified nodes against a reference node. You can check compatibility for non-default user group names and for different releases of the Oracle software. This command compares physical attributes, such as memory and swap space, as well as user and group values, kernel settings, and installed operating system packages.

cluvfy comp peer -n node_list [-refnode node]  [-r {10gR1 | 10gR2 | 11gR1 | 11gR2}] [-orainv orainventory_group]  [-osdba osdba_group] [-verbose]
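
For example, comparing node2 against node1 as the reference node (group names match the setup later in this post):

cluvfy comp peer -n node1,node2 -refnode node1 -r 11gR2 -osdba dba -orainv oinstall -verbose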

-refnode
The node that CVU uses as a reference for checking compatibility with other nodes. If you do not specify this option, then CVU reports values for all the nodes in the node list.

37. Checks SCAN configuration

Use the cluvfy comp scan component verification command to check the Single Client Access Name (SCAN) configuration.

cluvfy comp scan -verbose

38. Checks software component verification

Use the cluvfy comp software component verification command to check the files and attributes installed with the Oracle software.

cluvfy comp software [-n node_list] [-d oracle_home] [-r {10gR1 | 10gR2 | 11gR1 | 11gR2}] [-verbose]
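
For example, using the RDBMS home configured later in this post:

cluvfy comp software -n node1,node2 -d /u01/app/oracle/product/11.2.0/db_1 -r 11gR2 -verbose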

39. Checks space availability

Use the cluvfy comp space component verification command to check for free disk space at the location you specify in the -l option on all the specified nodes.

cluvfy comp space [-n node_list] -l storage_location -z disk_space {B | K | M | G} [-verbose]

cluvfy comp space -n all -l /u01/oracle -z 2g -verbose

40. Checks shared storage accessibility

Use the cluvfy comp ssa component verification command to discover and check the sharing of the specified storage locations. CVU checks sharing for nodes in the node list.

cluvfy comp ssa [-n node_list] [-s storageID_list] [-t {software | data | ocr_vdisk}] [-verbose]

cluvfy comp ssa -n node1,node2 -verbose
cluvfy comp ssa -n node1,node2 -s /dev/sdb

41. Check minimum system requirements

Use the cluvfy comp sys component verification command to check that the minimum system requirements are met for the specified product on all the specified nodes.

cluvfy comp sys [-n node_list] -p {crs | ha | database}  [-r {10gR1 | 10gR2 | 11gR1 | 11gR2}] [-osdba osdba_group]  [-orainv orainventory_group] [-fixup [-fixupdir fixup_dir]] [-verbose]

cluvfy comp sys -n node1,node2 -p crs -verbose
cluvfy comp sys -n node1,node2 -p ha -verbose
cluvfy comp sys -n node1,node2 -p database -verbose

42. Check Voting Disk Udev settings

Use the cluvfy comp vdisk component verification command to check the voting disks configuration and the udev settings for the voting disks on all the specified nodes.

cluvfy comp vdisk [-n node_list] [-verbose]
cluvfy comp vdisk -n node1,node2 -verbose

43. Run cluvfy before doing an upgrade

runcluvfy.sh stage -pre crsinst -upgrade -n node_list -rolling -src_crshome src_crs_home -dest_crshome dest_crs_home -dest_version dest_version -verbose
runcluvfy.sh stage -pre crsinst -upgrade -n rac1,rac2 -rolling -src_crshome /u01/app/grid/11.2.0.1 -dest_crshome /u01/app/grid/11.2.0.3 -dest_version 11.2.0.4.0 -verbose

44. Strace the command

Strace the command to get more details, for example:
strace -t -f -o clu.trc cluvfy comp olr -verbose
/*
[oracle@rac1 ~]$ strace -t -f -o clu.trc cluvfy comp olr -verbose

Verifying OLR integrity

Checking OLR integrity...

Checking OLR config file...

OLR config file check successful


Checking OLR file attributes...

OLR file check successful


WARNING:
This check does not verify the integrity of the OLR contents. Execute 'ocrcheck -local' as a privileged user to verify the contents of OLR.

OLR integrity check passed

Verification of OLR integrity was successful.
[oracle@rac1 ~]$ ls -ltr clu.trc
-rw-r--r-- 1 oracle oinstall 4206376 Jun 12 01:15 clu.trc
[oracle@rac1 ~]$
*/

Oracle RAC 11gR2 Installation Steps



Basic Requirements and Assumptions:

Minimum 2 or 3 nodes
Shared storage


Node1
- Linux Machine
Partitions
-- 100g
/ - 25g
swap - 10g
/tmp - 10g
/u01 - 50g
Network card -2 NIC
Ram  - 4gb
cpu - 1


Node2
- Linux Machine
Partitions
-- 100g
/ - 25g
swap - 10g
/tmp - 10g
/u01 - 50g
Network card -2 NIC
Ram  - 4gb
cpu - 1



Node3
- Linux Machine
Partitions
-- 100g
/ - 25g
swap - 10g
/tmp - 10g
/u01 - 50g
Network card -2 NIC
Ram  - 4gb
cpu - 1

Storage

openfiler(Linux)
starwind

Openfiler
10g storage
Partitions
- / - 8g
- swap - 2g

ram - 512M
NIC - 1
cpu - 1
+ Extra storage 100g

1) OpenFiler Configuration:

http://mammqm.blogspot.com/2019/07/openfiler-installation-for-rac-setup.html

2) Discover the iSCSI targets (IQNs) from the storage server:

iscsiadm -m discovery -t st -p 147.43.0.10 (on all nodes)
service iscsi restart
service iscsi status
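
Optionally, verify that the iSCSI sessions are established and the shared LUN is visible (these verification commands are an addition to the original steps):

iscsiadm -m session
cat /proc/partitions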

3) Setting up the host file:

vi /etc/hosts

# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
##-- Public-IP
192.168.0.1    rac10.oracle.com   rac10
192.168.0.2    rac20.oracle.com   rac20
192.168.0.3    rac30.oracle.com   rac30
##-- Private-IP
147.43.0.1 rac10-priv.oracle.com rac10-priv
147.43.0.2 rac20-priv.oracle.com rac20-priv
147.43.0.3 rac30-priv.oracle.com rac30-priv
##-- Virtual-IP
192.168.0.4 rac10-vip.oracle.com rac10-vip
192.168.0.5 rac20-vip.oracle.com rac20-vip
192.168.0.6 rac30-vip.oracle.com rac30-vip
##-- SCAN IP
192.168.0.7 oracle-scan.oracle.com oracle-scan
192.168.0.8 oracle-scan.oracle.com oracle-scan
192.168.0.9 oracle-scan.oracle.com oracle-scan
##-- Storage-IP
192.168.0.10    san.oracle.com    oracle-san

4) Download and Install ASMlib

rpm -ivh oracleasmlib-2.0.4-1.el5.i386.rpm --force --nodeps
rpm -ivh oracleasmlib-2.0.4-1.el5.x86_64.rpm --force --nodeps
rpm -ivh oracleasm-support-2.1.7-1.el5.i386.rpm --force --nodeps

5) Delete the default users and groups:

userdel -r oracle
groupdel dba
groupdel oinstall

6) User and group creation:

groupadd -g 501 oinstall
groupadd -g 502 dba
groupadd -g 503 asmadmin
useradd -u 504 -g oinstall -G dba,asmadmin -m oracle
chown -R oracle:dba /u01
chmod -R 775 /u01
passwd oracle

7) Stop the ntpd services on all the nodes:

[root@rac10 ~]# mv /etc/ntp.conf /etc/ntp.conf_bkp
[root@rac10 ~]# service ntpd restart
Shutting down ntpd:                                        [FAILED]
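
Optionally (not part of the original steps), also stop ntpd and disable it at boot so that Oracle CTSS runs in active mode:

service ntpd stop
chkconfig ntpd off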

8) Disk Partitioning using fdisk

[root@rac10 u01]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

The number of cylinders for this disk is set to 12446.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): m
Command action
   a   toggle a bootable flag
   b   edit bsd disklabel
   c   toggle the dos compatibility flag
   d   delete a partition
   l   list known partition types
   m   print this menu
   n   add a new partition
   o   create a new empty DOS partition table
   p   print the partition table
   q   quit without saving changes
   s   create a new empty Sun disklabel
   t   change a partition's system id
   u   change display/entry units
   v   verify the partition table
   w   write table to disk and exit
   x   extra functionality (experts only)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-12446, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-12446, default 12446): +10g

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.

# partprobe /dev/sdb --------------on all nodes

[root@rac10 u01]# oracleasm configure -i ------ on all nodes
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done

[root@rac10 u01]# oracleasm exit ----- stopping the services
[root@rac10 u01]# oracleasm init ----- starting the services

Example:

[root@rac10 u01]# oracleasm exit
[root@rac10 u01]# oracleasm init
Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Mounting ASMlib driver filesystem: /dev/oracleasm
[root@rac10 u01]# oracleasm createdisk OCR /dev/sdb1 (only on one node)
Writing disk header: done
Instantiating disk: done

The below commands should be executed on all nodes:

#oracleasm exit
#oracleasm init
#oracleasm scandisks
#oracleasm listdisks

9) Check the newly created partitions:

[root@rac10 u01]# cat /proc/partitions
major minor  #blocks  name

   8        0  104857600 sda
   8        1   25599546 sda1
   8        2   12289725 sda2
   8        3    8193150 sda3
   8        4          1 sda4
   8        5   58773771 sda5
   8       16   99975168 sdb
   8       17    9775521 sdb1 ---- new

10) Run the cluvfy utility:

[oracle@rac10 grid]$ ./runcluvfy.sh stage -pre crsinst -n rac10,rac20,rac30  -verbose

Performing pre-checks for cluster services setup

Checking node reachability...

Check: Node reachability from node "rac10"
  Destination Node                      Reachable?           
  ------------------------------------  ------------------------
  rac30                                 yes                   
  rac20                                 yes                   
  rac10                                 yes                   
Result: Node reachability check passed from node "rac10"


Checking user equivalence...

Check: User equivalence for user "oracle"
  Node Name                             Status               
  ------------------------------------  ------------------------
  rac10                                 failed               
  rac20                                 failed               
  rac30                                 failed               
Result: PRVF-4007 : User equivalence check failed for user "oracle"

ERROR:
User equivalence unavailable on all the specified nodes
Verification cannot proceed

Pre-check for cluster services setup was unsuccessful on all the nodes.

The above error is expected because SSH user equivalence (passwordless SSH for the oracle user) has not yet been configured between the nodes; proceed with the Grid Infrastructure installation, which can configure SSH equivalence during setup.


Oracle Grid Infrastructure Installation Steps


[root@rac10 u01]# xhost +
[root@rac10 u01]# su - oracle
[oracle@rac10 ~]$ cd /u01/installation/grid
[oracle@rac10 grid]$ ./runInstaller

Installation fails at the SCAN listener checks; this is expected because we are not using DNS for SCAN resolution. Ignore it and proceed with the next step.


Oracle Database Installation Steps

Oracle Database Creation Using DBCA

Home and PATH settings for the RDBMS and ASM instances:

RDBMS:

[oracle@rac1 ~]$ cat rdbms.env
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
export PATH=$ORACLE_HOME/bin:$PATH:.

ASM:

[oracle@rac1 ~]$ cat grid.env
export ORACLE_HOME=/u01/app/11.2.0/grid
export PATH=$ORACLE_HOME/bin:$PATH:.
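
To switch a session between the two homes, source the corresponding file, for example:

[oracle@rac1 ~]$ . ~/grid.env
[oracle@rac1 ~]$ echo $ORACLE_HOME
/u01/app/11.2.0/grid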

Openfiler Installation for RAC Setup

Open the Openfiler web GUI and prepare the disks as shareable iSCSI volumes.

Discover the iSCSI targets (IQNs) from the storage server:

iscsiadm -m discovery -t st -p <san_ip_address> (on all nodes)
service iscsi restart
service iscsi status