Saturday, March 3, 2018

Oracle RAC Administration Commands


Shutdown and Start sequence of Oracle RAC

STOP ORACLE RAC (11g)
1. emctl stop dbconsole
2. srvctl stop listener -n racnode1
3. srvctl stop database -d RACDB
4. srvctl stop asm -n racnode1 -f
5. srvctl stop asm -n racnode2 -f
6. srvctl stop nodeapps -n racnode1 -f
7. crsctl stop crs
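
The sequence above shows only racnode1 for the listener and nodeapps steps; on a two-node cluster the same commands are repeated for the second node. A minimal sketch, assuming the second node is named racnode2:

srvctl stop listener -n racnode2
srvctl stop nodeapps -n racnode2 -f

Note that crsctl stop crs (step 7) acts on the local node only and must be run as root on each node.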

START ORACLE RAC (11g)
1. crsctl start crs
2. crsctl start res ora.crsd -init
3. srvctl start nodeapps -n racnode1
4. srvctl start nodeapps -n racnode2
5. srvctl start asm -n racnode1
6. srvctl start asm -n racnode2
7. srvctl start database -d RACDB
8. srvctl start listener -n racnode1
9. emctl start dbconsole
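
Once the stack is back up, it is worth confirming that everything restarted cleanly. A quick check, reusing the RACDB name from the sequence above:

[oracle@racnode1 ~]$ crsctl check crs
[oracle@racnode1 ~]$ srvctl status database -d RACDB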

-To start and stop Oracle Clusterware (run as the superuser) :

[root@node1 ~]# crsctl stop crs
[root@node1 ~]# crsctl start crs

-To start and stop the Oracle Clusterware stack on all nodes :

[root@node1 ~]#  crsctl stop cluster -all
[root@node1 ~]#  crsctl start cluster -all
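
The cluster command can also target individual nodes instead of every node. A hedged example, assuming a second node named node2:

[root@node1 ~]# crsctl stop cluster -n node2
[root@node1 ~]# crsctl start cluster -n node2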

-To check the current status of a cluster :

[oracle@node1~]$ crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

-To check the current status of CRS :

[oracle@node1 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

-To display the status of cluster resources :

[oracle@node1 ~]$ crsctl stat res -t
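
Adding the -init flag lists the lower-stack resources managed by OHASD (ora.crsd, ora.cssd, and so on) in the same tabular layout:

[oracle@node1 ~]$ crsctl stat res -t -init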

-To check the version of Oracle Clusterware :

[oracle@node1 ~]$ crsctl query crs softwareversion
Oracle Clusterware version on node [node1] is [11.2.0.4.0]
[oracle@node1 ~]$
[oracle@node1 ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.4.0]
[oracle@node1 ~]$ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [11.2.0.4.0]

-To check the current status of the OHASD (Oracle High Availability Services) daemon :

[oracle@node1 ~]$ crsctl check has
CRS-4638: Oracle High Availability Services is online

-To forcefully delete a resource :

[oracle@node1 ~]$ crsctl delete resource testresource -f

-Enabling and disabling CRS daemons (run as the superuser) :

[root@node1 ~]# crsctl enable crs
CRS-4622: Oracle High Availability Services autostart is enabled.
[root@node1 ~]#
[root@node1 ~]# crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.

-To list the nodes in the cluster :

[oracle@node1 ~]$ olsnodes
node1
node2

-To print node name with node number :

[oracle@node1 ~]$ olsnodes -n
node1 1
node2 2

-To print private interconnect address for the local node :

[oracle@node1 ~]$ olsnodes -l -p
node1 192.168.1.101

-To print virtual IP address with node name :

[oracle@node1 ~]$ olsnodes -i
node1 node1-vip
node2 node2-vip
[oracle@node1 ~]$ olsnodes -i node1
node1 node1-vip

-To print information for the local node :

[oracle@node1 ~]$ olsnodes -l
node1

-To print node status (active or inactive) :

[oracle@node1 ~]$ olsnodes -s
node1 Active
node2 Active
[oracle@node1 ~]$ olsnodes -l -s
node1 Active

-To print node type (pinned or unpinned) :

[oracle@node1 ~]$ olsnodes -t
node1 Unpinned
node2 Unpinned
[oracle@node1 ~]$ olsnodes -l -t
node1 Unpinned

-To print the cluster name :

[oracle@node1 ~]$ olsnodes -c
rac-scan

-To display the global public and cluster_interconnect interfaces :

[oracle@node1 ~]$ oifcfg getif
eth0  192.168.100.0  global  public
eth1  192.168.1.0  global  cluster_interconnect
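
oifcfg can also register or remove interfaces globally. A hedged sketch reusing the subnets shown above (verify against your own network layout before running, as a wrong interconnect definition can prevent the cluster from starting):

[oracle@node1 ~]$ oifcfg setif -global eth1/192.168.1.0:cluster_interconnect
[oracle@node1 ~]$ oifcfg delif -global eth1/192.168.1.0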

-To display the databases registered in the OCR :

[oracle@gpp4 ~]$ srvctl config database
TESTRACDB

-To display the configuration details of the database :

[oracle@TEST4 ~]$ srvctl config database -d TESTRACDB
Database unique name: TESTRACDB
Database name: TESTRACDB
Oracle home: /home/oracle/product/11.2.0/db_home1
Oracle user: oracle
Spfile: +DATA/TESTRACDB/spfileTESTRACDB.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: TESTRACDB
Database instances: TESTRACDB1,TESTRACDB2
Disk Groups: DATA,ARCH
Mount point paths:
Services: SRV_TESTRACDB
Type: RAC
Database is administrator managed

-To change the management policy of the database from automatic to manual :

[oracle@TEST4 ~]$ srvctl modify database -d TESTRACDB -y MANUAL

-To change the startup option of the database from open to mount :

[oracle@TEST4 ~]$ srvctl modify database -d TESTRACDB -s mount

-To start RAC listener :

[oracle@TEST4 ~]$ srvctl start listener

-To display the status of the database :

[oracle@TEST4 ~]$ srvctl status database -d TESTRACDB
Instance TESTRACDB1 is running on node TEST4
Instance TESTRACDB2 is running on node TEST5

-To display the status of services running in the database :

[oracle@TEST4 ~]$ srvctl status service -d TESTRACDB
Service SRV_TESTRACDB is running on instance(s) TESTRACDB1,TESTRACDB2
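
A service can also be moved between instances without an outage, which is handy for load balancing or rolling maintenance. A hedged sketch, assuming SRV_TESTRACDB should move from TESTRACDB1 to TESTRACDB2:

[oracle@TEST4 ~]$ srvctl relocate service -d TESTRACDB -s SRV_TESTRACDB -i TESTRACDB1 -t TESTRACDB2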

-To check the nodeapps running on the cluster nodes :

[oracle@TEST4 ~]$ srvctl status nodeapps
VIP TEST4-vip is enabled
VIP TEST4-vip is running on node: TEST4
VIP TEST5-vip is enabled
VIP TEST5-vip is running on node: TEST5
Network is enabled
Network is running on node: TEST4
Network is running on node: TEST5
GSD is enabled
GSD is not running on node: TEST4
GSD is not running on node: TEST5
ONS is enabled
ONS daemon is running on node: TEST4
ONS daemon is running on node: TEST5

[oracle@TEST4 ~]$  srvctl status nodeapps -n TEST4
VIP TEST4-vip is enabled
VIP TEST4-vip is running on node: TEST4
Network is enabled
Network is running on node: TEST4
GSD is enabled
GSD is not running on node: TEST4
ONS is enabled
ONS daemon is running on node: TEST4

-To start all instances associated with a database (this also starts the services and listeners on each node) :

[oracle@TEST4 ~]$ srvctl start database -d TESTRACDB

-To shut down all instances and services (listeners are not stopped) :

[oracle@TEST4 ~]$ srvctl stop database -d TESTRACDB

You can use the -o option to specify startup/shutdown options, and the -i option to target a single instance (see the sketch after this list).
To shut down the database with the immediate option – srvctl stop database -d TESTRACDB -o immediate
To start all instances with the force option – srvctl start database -d TESTRACDB -o force
To stop a single instance – srvctl stop instance -d TESTRACDB -i TESTRACDB1
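
The -o and -i options combine as well. A hedged example, reusing the instance names reported by srvctl config database above:

[oracle@TEST4 ~]$ srvctl stop instance -d TESTRACDB -i TESTRACDB1 -o immediate
[oracle@TEST4 ~]$ srvctl start instance -d TESTRACDB -i TESTRACDB1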

-To start or stop the ASM instance on the racnode1 cluster node :

[oracle@TEST4 ~]$ srvctl start asm -n racnode1
[oracle@TEST4 ~]$ srvctl stop asm -n racnode1
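
To confirm the result, srvctl reports ASM status as well:

[oracle@TEST4 ~]$ srvctl status asm -n racnode1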

-To display the current configuration of the SCAN VIPs :

[oracle@test4 ~]$ srvctl config scan
SCAN name: vmtestdb.exo.local, Network: 1/192.168.5.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /vmtestdb.exo.local/192.168.5.100
SCAN VIP name: scan2, IP: /vmtestdb.exo.local/192.168.5.101
SCAN VIP name: scan3, IP: /vmtestdb.exo.local/192.168.5.102

-To refresh the SCAN VIPs with new IP addresses from DNS :

[oracle@test4 ~]$ srvctl modify scan -n your-scan-name.example.com

-To stop or start the SCAN listeners and the SCAN VIP resources :

[oracle@test4 ~]$ srvctl stop scan_listener
[oracle@test4 ~]$ srvctl start scan_listener
[oracle@test4 ~]$ srvctl stop scan
[oracle@test4 ~]$ srvctl start scan

-To display the status of the SCAN VIPs and SCAN listeners :

[oracle@test4 ~]$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node test4
SCAN VIP scan2 is enabled
SCAN VIP scan2 is running on node test5
SCAN VIP scan3 is enabled
SCAN VIP scan3 is running on node test5

[oracle@test4 ~]$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node test4
SCAN Listener LISTENER_SCAN2 is enabled
SCAN listener LISTENER_SCAN2 is running on node test5
SCAN Listener LISTENER_SCAN3 is enabled
SCAN listener LISTENER_SCAN3 is running on node test5

-To add/remove/modify SCAN :

[oracle@test4 ~]$ srvctl add scan -n your-scan
[oracle@test4 ~]$ srvctl remove scan
[oracle@test4 ~]$ srvctl modify scan -n new-scan

-To add/remove SCAN listener :

[oracle@test4 ~]$ srvctl add scan_listener
[oracle@test4 ~]$ srvctl remove scan_listener

-To modify the SCAN listener port :

srvctl modify scan_listener -p <port_number>
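
A port change only takes effect once the SCAN listeners are restarted. A hedged walk-through, assuming the new port is 1525:

[oracle@test4 ~]$ srvctl modify scan_listener -p 1525
[oracle@test4 ~]$ srvctl stop scan_listener
[oracle@test4 ~]$ srvctl start scan_listener
[oracle@test4 ~]$ srvctl config scan_listener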

-To start the ASM instance in mount state :

ASMCMD> startup --mount

-To shut down the ASM instance immediately (the database instances must be shut down before the ASM instance is shut down) :

ASMCMD> shutdown --immediate

-To list ongoing ASM operations, use the lsop command in ASMCMD :

ASMCMD> lsop

-To perform a quick health check of the OCR :

[oracle@test4 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
Version                  :          3
Total space (kbytes)     :     262120
Used space (kbytes)      :       3304
Available space (kbytes) :     258816
ID                       : 1555543155
Device/File Name         :      +DATA
                                    Device/File integrity check succeeded
Device/File Name         :       +OCR
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check bypassed due to non-privileged user

-To dump the contents of the OCR into an XML file :

[oracle@test4 ~]$ ocrdump testdump.xml -xml
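
ocrdump can also write to standard output, which is convenient for a quick look without creating a file:

[oracle@test4 ~]$ ocrdump -stdout -xml | more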

-To add or relocate the OCR mirror file to the specified location :

[oracle@test4 ~]$ ocrconfig -replace ocrmirror '+TESTDG'
[oracle@test4 ~]$ ocrconfig -replace +CURRENTOCRDG -replacement +NEWOCRDG

-To relocate the existing OCR file :

[oracle@test4 ~]$ ocrconfig -replace ocr '+TESTDG'

-To add a mirror disk group for the OCR :

[oracle@test4 ~]$ ocrconfig -add +TESTDG

-To remove OCR mirror :

[oracle@test4 ~]$ ocrconfig -delete +TESTDG

-To remove the OCR or the OCR mirror :

[oracle@test4 ~]$ ocrconfig -replace ocr

[oracle@test4 ~]$ ocrconfig -replace ocrmirror

-To list OCR backups :

[oracle@test4 ~]$ ocrconfig -showbackup

test5     2016/04/16 17:30:29     /home/oracle/app/11.2.0/grid/cdata/vmtestdb/backup00.ocr
test5     2016/04/16 13:30:29     /home/oracle/app/11.2.0/grid/cdata/vmtestdb/backup01.ocr
test5     2016/04/16 09:30:28     /home/oracle/app/11.2.0/grid/cdata/vmtestdb/backup02.ocr
test5     2016/04/15 13:30:26     /home/oracle/app/11.2.0/grid/cdata/vmtestdb/day.ocr
test5     2016/04/08 09:30:03     /home/oracle/app/11.2.0/grid/cdata/vmtestdb/week.ocr
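
To restore the OCR from one of these automatic backups, stop the Clusterware stack on all nodes first and run ocrconfig -restore as root. A hedged sketch using the newest backup from the listing above:

[root@test5 ~]# ocrconfig -restore /home/oracle/app/11.2.0/grid/cdata/vmtestdb/backup00.ocr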

-To perform an OCR backup manually :

[root@testdb1 ~]# ocrconfig -manualbackup

testdb1     2016/04/16 17:31:42     /votedisk/backup_20160416_173142.ocr     0 

-To change the OCR autobackup directory :

[root@testdb1 ~]# ocrconfig -backuploc /backups/ocr

-To verify OCR integrity across all cluster nodes :

[oracle@node1]$ cluvfy comp ocr -n all -verbose
Verifying OCR integrity
Checking OCR integrity...

Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations

Checking daemon liveness...

Check: Liveness for "CRS daemon"
  Node Name                             Running?               
  ------------------------------------  ------------------------
  node2                                 yes
  node1                                 yes
Result: Liveness check passed for "CRS daemon"

Checking OCR config file "/etc/oracle/ocr.loc"...
OCR config file "/etc/oracle/ocr.loc" check successful

Disk group for ocr location "+DATA/testdb-scan/OCRFILE/registry.255.903592771" is available on all the nodes
Disk group for ocr location "+CRS/testdb-scan/OCRFILE/registry.255.903735431" is available on all the nodes
Disk group for ocr location "+MULTIPLEX/testdb-scan/OCRFILE/registry.255.903735561" is available on all the nodes

Checking OCR backup location "/bkpdisk"
OCR backup location "/bkpdisk" check passed
Checking OCR dump functionality
OCR dump check passed

NOTE:
This check does not verify the integrity of the OCR contents. Execute 'ocrcheck' as a privileged user to verify the contents of OCR.
OCR integrity check passed
Verification of OCR integrity was successful. 
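
cluvfy covers far more than the OCR component. For example, a post-installation check of the entire Clusterware stack can be run the same way:

[oracle@node1]$ cluvfy stage -post crsinst -n all -verbose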
