Oracle RAC 11gR2 Installation Steps
Basic Requirements and Assumptions:
Minimum 2-3 nodes
Shared storage
Node1
- Linux machine
- Disk: 100g, partitioned as
-- / - 25g
-- swap - 10g
-- /tmp - 10g
-- /u01 - 50g
- Network: 2 NICs
- RAM: 4gb
- CPU: 1
Node2
- Linux machine
- Disk: 100g, partitioned as
-- / - 25g
-- swap - 10g
-- /tmp - 10g
-- /u01 - 50g
- Network: 2 NICs
- RAM: 4gb
- CPU: 1
Node3
- Linux machine
- Disk: 100g, partitioned as
-- / - 25g
-- swap - 10g
-- /tmp - 10g
-- /u01 - 50g
- Network: 2 NICs
- RAM: 4gb
- CPU: 1
Storage (choose one)
- Openfiler (Linux based)
- StarWind (Windows based)
Openfiler VM used here:
- Disk: 10g, partitioned as
-- / - 8g
-- swap - 2g
- RAM: 512M
- NIC: 1
- CPU: 1
- Extra shared storage: 100g
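A quick way to confirm each node matches this layout before starting (a minimal sanity check; adjust the mount points to your own build):
free -m                          ----- physical RAM and swap
df -h / /tmp /u01                ----- partition sizes
grep -c ^processor /proc/cpuinfo ----- CPU count
/sbin/ifconfig -a                ----- confirm two NICs (public + private)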
1) OpenFiler Configuration:
http://mammqm.blogspot.com/2019/07/openfiler-installation-for-rac-setup.html
2) Discover the iSCSI targets from the storage server:
iscsiadm -m discovery -t st -p 147.43.0.10 (on all nodes)
service iscsi restart
service iscsi status
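Restarting the iscsi service normally logs in to the discovered targets automatically; if the shared LUN does not show up, the login can be forced explicitly (a minimal sketch, assuming the storage IP above):
iscsiadm -m discovery -t st -p 147.43.0.10
iscsiadm -m node -l              ----- log in to all discovered targets
fdisk -l                         ----- the shared LUN should now appear (e.g. /dev/sdb)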
3) Set up the /etc/hosts file (identical on all nodes):
vi /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
##-- Public-IP
192.168.0.1 rac10.oracle.com rac10
192.168.0.2 rac20.oracle.com rac20
192.168.0.3 rac30.oracle.com rac30
##-- Private-IP
147.43.0.1 rac10-priv.oracle.com rac10-priv
147.43.0.2 rac20-priv.oracle.com rac20-priv
147.43.0.3 rac30-priv.oracle.com rac30-priv
##-- Virtual-IP
192.168.0.4 rac10-vip.oracle.com rac10-vip
192.168.0.5 rac20-vip.oracle.com rac20-vip
192.168.0.6 rac30-vip.oracle.com rac30-vip
##-- SCAN IP
192.168.0.7 oracle-scan.oracle.com oracle-scan
192.168.0.8 oracle-scan.oracle.com oracle-scan
192.168.0.9 oracle-scan.oracle.com oracle-scan
##-- Storage-IP
192.168.0.10 san.oracle.com oracle-san
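Before moving on, it is worth verifying that every node can resolve and reach the public and private names (the VIPs and SCAN will only respond once Grid Infrastructure is running):
for h in rac10 rac20 rac30 rac10-priv rac20-priv rac30-priv
do
  ping -c 2 $h
done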
4) Download and install the ASMlib RPMs (on all nodes):
rpm -ivh oracleasmlib-2.0.4-1.el5.i386.rpm --force --nodeps
rpm -ivh oracleasmlib-2.0.4-1.el5.x86_64.rpm --force --nodeps
rpm -ivh oracleasm-support-2.1.7-1.el5.i386.rpm --force --nodeps
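To confirm the packages are in place on every node (depending on the kernel, a matching oracleasm driver RPM for the running kernel may also be required):
rpm -qa | grep -i oracleasm
uname -r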
5) Delete the default users and groups (on all nodes):
userdel -r oracle
groupdel dba
groupdel oinstall
6) Create the required users and groups (on all nodes):
groupadd -g 501 oinstall
groupadd -g 502 dba
groupadd -g 503 asmadmin
useradd -u 504 -g oinstall -G dba,asmadmin -m oracle
chown -R oracle:dba /u01
chmod -R 775 /u01
passwd oracle
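A quick check that the user and group memberships came out as intended (run on all nodes):
id oracle
----- expected: uid=504(oracle) gid=501(oinstall) groups=501(oinstall),502(dba),503(asmadmin)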
7) Stop the ntpd service on all the nodes:
[root@rac10 ~]# mv /etc/ntp.conf /etc/ntp.conf_bkp
[root@rac10 ~]# service ntpd restart
Shutting down ntpd: [FAILED]
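A minimal sketch of the full sequence, so ntpd also stays disabled after a reboot and Oracle's Cluster Time Synchronization Service (CTSS) takes over:
service ntpd stop
chkconfig ntpd off
mv /etc/ntp.conf /etc/ntp.conf_bkp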
8) Disk Partitioning using fdisk
[root@rac10 u01]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.
The number of cylinders for this disk is set to 12446.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
Command (m for help): m
Command action
a toggle a bootable flag
b edit bsd disklabel
c toggle the dos compatibility flag
d delete a partition
l list known partition types
m print this menu
n add a new partition
o create a new empty DOS partition table
p print the partition table
q quit without saving changes
s create a new empty Sun disklabel
t change a partition's system id
u change display/entry units
v verify the partition table
w write table to disk and exit
x extra functionality (experts only)
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-12446, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-12446, default 12446): +10g
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
# partprobe /dev/sdb --------------on all nodes
[root@rac10 u01]# oracleasm configure -i ------ on all nodes
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: oracle
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
[root@rac10 u01]# oracleasm exit ----- stopping the services
[root@rac10 u01]# oracleasm init ----- starting the services
Example:
[root@rac10 u01]# oracleasm exit
[root@rac10 u01]# oracleasm init
Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Mounting ASMlib driver filesystem: /dev/oracleasm
[root@rac10 u01]# oracleasm createdisk OCR /dev/sdb1 (only on one node)
Writing disk header: done
Instantiating disk: done
The commands below should be executed on all nodes:
#oracleasm exit
#oracleasm init
#oracleasm scandisks
#oracleasm listdisks
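To confirm the shared disk is stamped and visible from every node (disk name OCR as created above):
oracleasm listdisks              ----- should list OCR
oracleasm querydisk /dev/sdb1    ----- should report the partition as marked for ASM
ls -l /dev/oracleasm/disks/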
9) Check the newly created partitions:
[root@rac10 u01]# cat /proc/partitions
major minor #blocks name
8 0 104857600 sda
8 1 25599546 sda1
8 2 12289725 sda2
8 3 8193150 sda3
8 4 1 sda4
8 5 58773771 sda5
8 16 99975168 sdb
8 17 9775521 sdb1 ---- new
10) Run the cluvfy utility:
[oracle@rac10 grid]$ ./runcluvfy.sh stage -pre crsinst -n rac10,rac20,rac30 -verbose
Performing pre-checks for cluster services setup
Checking node reachability...
Check: Node reachability from node "rac10"
Destination Node Reachable?
------------------------------------ ------------------------
rac30 yes
rac20 yes
rac10 yes
Result: Node reachability check passed from node "rac10"
Checking user equivalence...
Check: User equivalence for user "oracle"
Node Name Status
------------------------------------ ------------------------
rac10 failed
rac20 failed
rac30 failed
Result: PRVF-4007 : User equivalence check failed for user "oracle"
ERROR:
User equivalence unavailable on all the specified nodes
Verification cannot proceed
Pre-check for cluster services setup was unsuccessful on all the nodes.
The above error is expected because SSH user equivalence for the oracle user has not been configured yet; the 11gR2 installer can set it up during the Grid installation, so proceed.
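If you prefer to clear the user equivalence error before launching the installer, passwordless SSH can be set up manually as the oracle user; a minimal sketch for this three-node layout (generate a key on each node, then copy it to every node including itself):
ssh-keygen -t rsa                ----- accept the defaults, empty passphrase
ssh-copy-id oracle@rac10
ssh-copy-id oracle@rac20
ssh-copy-id oracle@rac30
ssh rac20 date                   ----- should return without a password prompt
ssh rac30 date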
Oracle Grid Infrastructure Installation Steps
[root@rac10 u01]# xhost +
[root@rac10 u01]# su - oracle
[oracle@rac10 ~]$ cd /u01/installation/grid
[oracle@rac10 grid]$ ./runInstaller
The installer's verification fails on the SCAN listener checks; this is expected because we are not using DNS for the SCAN name, so ignore it and proceed to the next step.
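Once the Grid installation (including root.sh on each node) completes, the clusterware can be verified with the standard 11gR2 commands from the grid home (set the grid environment first):
crsctl check crs                 ----- CRS, CSS and EVM should be online
crsctl stat res -t               ----- cluster resources across all nodes
olsnodes -n                      ----- lists the cluster nodes
srvctl status asm                ----- ASM should be running on every node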
Oracle Database Installation Steps
Oracle Database Creation Using DBCA
Home and PATH settings for the RDBMS and ASM instances:
RDBMS:
[oracle@rac1 ~]$ cat rdbms.env
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
export PATH=$ORACLE_HOME/bin:$PATH:.
ASM:
[oracle@rac1 ~]$ cat grid.env
export ORACLE_HOME=/u01/app/11.2.0/grid
export PATH=$ORACLE_HOME/bin:$PATH:.
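Example of switching between the two environments on node 1 (the instance names +ASM1 and orcl1, and the database name orcl, are assumptions; substitute your own):
. ~/grid.env
export ORACLE_SID=+ASM1          ----- assumed ASM instance name on node 1
sqlplus / as sysasm
. ~/rdbms.env
export ORACLE_SID=orcl1          ----- assumed database instance name on node 1
sqlplus / as sysdba
srvctl status database -d orcl   ----- database status across all nodes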