Saturday, June 19, 2021
Cloning A Schema
Example:
SQL> @cln_user.sql
Enter user to model new user to: SCOTT -----> Existing User
Enter new user name: MQMTEST -----> New User
Enter new user's password: TEST123! -----> New User Password
create user MQMTEST identified by TEST123! default tablespace USERS temporary tablespace TEMP profile DEFAULT;
grant RESOURCE to MQMTEST;
grant CONNECT to MQMTEST;
grant UNLIMITED TABLESPACE to MQMTEST;
alter user MQMTEST default role RESOURCE;
alter user MQMTEST default role CONNECT;
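The cln_user.sql script itself is not shown above; below is a minimal sketch of such a script built on the DBA views. The prompts mirror the example run, while the exact queries and spool handling are assumptions:

```sql
-- cln_user.sql -- sketch only; spool target and view queries are assumptions
SET VERIFY OFF HEADING OFF FEEDBACK OFF
ACCEPT model_user PROMPT 'Enter user to model new user to: '
ACCEPT new_user   PROMPT 'Enter new user name: '
ACCEPT new_pass   PROMPT 'Enter new user''s password: '
SPOOL create_like_user.sql
-- CREATE USER statement copied from the model user's settings
SELECT 'create user &new_user identified by &new_pass default tablespace ' ||
       default_tablespace || ' temporary tablespace ' || temporary_tablespace ||
       ' profile ' || profile || ';'
  FROM dba_users WHERE username = UPPER('&model_user');
-- role and system privilege grants
SELECT 'grant ' || granted_role || ' to &new_user;'
  FROM dba_role_privs WHERE grantee = UPPER('&model_user');
SELECT 'grant ' || privilege || ' to &new_user;'
  FROM dba_sys_privs WHERE grantee = UPPER('&model_user');
-- default roles
SELECT 'alter user &new_user default role ' || granted_role || ';'
  FROM dba_role_privs WHERE grantee = UPPER('&model_user') AND default_role = 'YES';
SPOOL OFF
@create_like_user.sql
```

A production version would also copy object privileges (dba_tab_privs) and quotas (dba_ts_quotas).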
Health Check Script
Using the script below, we can check the database status, listener status, and mount point usage.
vi status.sh
Paste the script contents, save, and make it executable (chmod 775 status.sh).
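The contents of status.sh are not shown above; the sketch below covers the three checks mentioned. The SID, ORACLE_HOME, and mount point are assumptions taken from elsewhere in these posts:

```shell
#!/bin/bash
# status.sh -- sketch only; SID, ORACLE_HOME, and mount point are assumptions
export ORACLE_SID=GTP2PROD
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export PATH=$ORACLE_HOME/bin:$PATH

echo "===== Database status ====="
sqlplus -s / as sysdba <<EOF
set heading off feedback off
select instance_name || ' is ' || status from v\$instance;
exit
EOF

echo "===== Listener status ====="
lsnrctl status | grep -E 'STATUS of the LISTENER|Uptime'

echo "===== Mount point usage ====="
df -h /u01
```

Scheduled from cron, the same script can mail its output for daily monitoring.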
Monday, May 3, 2021
Oracle Single Instance Vs RAC Vs RAC One Node
There are three deployment options:
1.Oracle Single Instance (Single Instance)
2.Oracle Real Application Cluster (RAC)
3.Oracle RAC One Node
Below is an overview of each option:
Single Instance:
Single Instance is the simplest option for deploying an Oracle database: only one instance of the database runs on one node. It is easy to install and maintain, and it can be a good choice for small production and development environments.
Oracle RAC Database:
Oracle RAC is a clustered database: more than one instance can run across one or more hosts. To make a database RAC, an additional software layer called Oracle Grid Infrastructure (GI) must be installed; it takes care of multi-node synchronization, ASM, Oracle Restart, and more. Oracle GI is a prerequisite for Oracle RAC. In Oracle RAC, the load is balanced across multiple database instances running on different nodes, which makes it the best fit for production databases. The failure of any instance is hidden from the user, whose connections are redirected to another instance in the server pool. The number of instances can be controlled using policy-managed or admin-managed configurations.
Oracle RAC One Node:
Sometimes we need a RAC database but do not need many instances. Why? Simply to leverage the high availability of RAC. Oracle RAC One Node is a RAC database that runs on a single host. Its advantages: it saves resources, can be converted to full RAC at any time, supports cluster failover (the database instance is moved to another node in case of a host failure, known as instance relocation), and is easy to upgrade. This type of database is well suited for small production and dev/test environments.
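Conversion and relocation are both done with srvctl; the commands below use 11.2-style options, and the database name ORCL, instance name, and node name are hypothetical:

```shell
# Convert an admin-managed RAC database to RAC One Node
srvctl convert database -d ORCL -c RACONENODE -i ORCL_1

# Relocate the running instance to another node (online relocation)
srvctl relocate database -d ORCL -n node2

# Convert back to a full RAC database
srvctl convert database -d ORCL -c RAC
```

Online relocation starts the new instance before stopping the old one, so connected sessions using services can fail over transparently.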
Friday, April 30, 2021
Backup Based RMAN DUPLICATE Without Connecting To Target Database
If you are performing a backup based RMAN duplicate and using a recovery catalog as well, it is not required to connect to the source database as TARGET in RMAN.
This method is advantageous where network connections from the auxiliary host to the source database are restricted or prone to intermittent disruptions. In duplication without a TARGET connection, the source database is unaffected by the duplication.
Steps:
1.Prepare the shell script to back up the source database:
vi rmanclone.sh
chmod 775 rmanclone.sh
#!/bin/bash
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
BACKUP_LOG_PATH=/home/oracle
export ORACLE_BASE
export ORACLE_HOME
export ORACLE_SID=GTP2PROD
export BACKUP_LOG_PATH
LOG_FILE=${BACKUP_LOG_PATH}/clone_db.log
$ORACLE_HOME/bin/rman msglog=${LOG_FILE} <<EOF
connect target /
run {
allocate channel t1 type disk;
allocate channel t2 type disk;
backup database format '/u01/rman_bkp/database_%d_%u_%s';
release channel t1;
release channel t2;
}
sql 'alter system archive log current';
run {
allocate channel a1 type disk;
allocate channel a2 type disk;
backup archivelog all format '/u01/rman_bkp/arch_%d_%u_%s';
release channel a1;
release channel a2;
}
run {
allocate channel c1 type disk;
backup current controlfile format '/u01/rman_bkp/control_%d_%u_%s';
release channel c1;
}
exit;
EOF
nohup ./rmanclone.sh &
jobs -l
2.Verify the backups and transfer them to the destination server:
[oracle@host01 rman_bkp]$ ls -ltrh
total 1.2G
-rw-r----- 1 oracle oinstall 405M Apr 29 22:51 database_GTP2PROD_0ivtht5d_18
-rw-r----- 1 oracle oinstall 9.7M Apr 29 22:52 database_GTP2PROD_0jvtht8e_19
-rw-r----- 1 oracle oinstall 96K Apr 29 22:52 database_GTP2PROD_0kvtht8m_20
-rw-r----- 1 oracle oinstall 600M Apr 29 22:52 database_GTP2PROD_0hvtht5d_17
-rw-r----- 1 oracle oinstall 45M Apr 29 22:52 arch_GTP2PROD_0mvtht8v_22
-rw-r----- 1 oracle oinstall 52M Apr 29 22:52 arch_GTP2PROD_0lvtht8v_21
-rw-r----- 1 oracle oinstall 6.6M Apr 29 22:52 arch_GTP2PROD_0nvtht9f_23
-rw-r----- 1 oracle oinstall 9.7M Apr 29 22:52 control_GTP2PROD_0ovtht9i_24
scp -p * oracle@host02:/u01/oracle/rman_backups
3.Create the required directories on the destination side:
mkdir -p /u01/app/oracle/admin/MQMARC/adump
mkdir -p /u01/app/oracle/oradata/MQMARC
mkdir -p /u01/app/oracle/fast_recovery_area/MQMARC
mkdir -p /u01/app/oracle/fast_recovery_area
4.Prepare the pfile and start the database in NOMOUNT:
vi initMQMARC.ora
MQMARC.__db_cache_size=2415919104
MQMARC.__java_pool_size=16777216
MQMARC.__large_pool_size=16777216
MQMARC.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
MQMARC.__pga_aggregate_target=1040187392
MQMARC.__sga_target=3103784960
MQMARC.__shared_io_pool_size=0
MQMARC.__shared_pool_size=603979776
MQMARC.__streams_pool_size=16777216
*.audit_file_dest='/u01/app/oracle/admin/MQMARC/adump'
*.audit_trail='none'
*.compatible='11.2.0.0.0'
*.control_files='/u01/app/oracle/oradata/MQMARC/control01.ctl','/u01/app/oracle/fast_recovery_area/MQMARC/control02.ctl'
*.db_block_size=8192
*.db_domain=''
*.db_name='MQMARC'
*.db_recovery_file_dest='/u01/app/oracle/fast_recovery_area'
*.db_recovery_file_dest_size=4322230272
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=MQMARCXDB)'
*.open_cursors=300
*.pga_aggregate_target=1032847360
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=3098542080
*.undo_tablespace='UNDOTBS1'
*.db_flashback_retention_target=4320
*.global_names=TRUE
#*.local_listener='MQMARC'
*.log_file_name_convert='/u01/app/oracle/oradata/GTP2PROD/','/u01/app/oracle/oradata/MQMARC/'
*.db_file_name_convert='/u01/app/oracle/oradata/GTP2PROD/','/u01/app/oracle/oradata/MQMARC/'
SQL> startup nomount pfile='/u01/app/oracle/product/11.2.0/dbhome_1/dbs/initMQMARC.ora';
5.Run the RMAN duplicate command:
vi clone_MQMARC_db.sh
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
BACKUP_LOG_PATH=/home/oracle
export ORACLE_BASE
export ORACLE_HOME
export ORACLE_SID=MQMARC
export BACKUP_LOG_PATH
LOG_FILE=${BACKUP_LOG_PATH}/clone_mqmarc_db.log
/u01/app/oracle/product/11.2.0/dbhome_1/bin/rman msglog=${LOG_FILE} <<EOF
connect auxiliary /
run {
allocate auxiliary channel t1 type disk;
allocate auxiliary channel t2 type disk;
duplicate target database to MQMARC backup location '/u01/oracle/rman_backups';
}
exit
EOF
chmod 775 clone_MQMARC_db.sh
nohup ./clone_MQMARC_db.sh &
6.Connect to the database and check its status:
SQL> select name, open_mode, status from v$database,v$instance;
NAME OPEN_MODE STATUS
--------- -------------------- ------------
MQMARC READ WRITE OPEN
Note:
-RMAN internally changes the DB name during the duplicate command execution.
-It opens the database with the RESETLOGS option.
-Flashback is enabled automatically by the duplicate command.
Tuesday, April 13, 2021
RAC Manual Patching Steps On 12c
On NODE 1:
1. Stop the database resources from DB home.
[oracle@test01 ~]$ ps -ef|grep pmon
oracle 15294 1 0 00:35 ? 00:00:00 asm_pmon_+ASM1
oracle 18171 1 0 00:39 ? 00:00:00 ora_pmon_RUMPROD1
oracle 19097 9655 0 00:40 pts/2 00:00:00 grep pmon
SQL> shut immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
[oracle@test01 ~]$ ps -ef|grep pmon
oracle 15294 1 0 00:35 ? 00:00:00 asm_pmon_+ASM1
oracle 28749 9655 0 01:16 pts/2 00:00:00 grep pmon
2. From the GI home, as root (or via sudo), execute:
[root@test01 install]# ps -ef|grep pmon
oracle 15294 1 0 00:35 ? 00:00:00 asm_pmon_+ASM1
root 29836 6253 0 01:20 pts/1 00:00:00 grep pmon
[root@test01 install]# $ORACLE_HOME/crs/install/rootcrs.pl -prepatch
Using configuration parameter file: /u01/app/12.1.0/grid/crs/install/crsconfig_params
Oracle Clusterware active version on the cluster is [12.1.0.2.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [0].
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'test01'
CRS-2673: Attempting to stop 'ora.crsd' on 'test01'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'test01'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'test01'
CRS-2673: Attempting to stop 'ora.OCR.dg' on 'test01'
CRS-2677: Stop of 'ora.OCR.dg' on 'test01' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'test01'
CRS-2677: Stop of 'ora.asm' on 'test01' succeeded
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'test01' succeeded
CRS-2673: Attempting to stop 'ora.test01.vip' on 'test01'
CRS-2677: Stop of 'ora.test01.vip' on 'test01' succeeded
CRS-2672: Attempting to start 'ora.test01.vip' on 'test02'
CRS-2676: Start of 'ora.test01.vip' on 'test02' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'test01'
CRS-2677: Stop of 'ora.ons' on 'test01' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'test01'
CRS-2677: Stop of 'ora.net1.network' on 'test01' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'test01' has completed
CRS-2677: Stop of 'ora.crsd' on 'test01' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'test01'
CRS-2673: Attempting to stop 'ora.evmd' on 'test01'
CRS-2673: Attempting to stop 'ora.storage' on 'test01'
CRS-2673: Attempting to stop 'ora.crf' on 'test01'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'test01'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'test01'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'test01'
CRS-2677: Stop of 'ora.storage' on 'test01' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'test01'
CRS-2677: Stop of 'ora.drivers.acfs' on 'test01' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'test01' succeeded
CRS-2677: Stop of 'ora.evmd' on 'test01' succeeded
CRS-2677: Stop of 'ora.crf' on 'test01' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'test01' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'test01' succeeded
CRS-2677: Stop of 'ora.asm' on 'test01' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'test01'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'test01' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'test01'
CRS-2677: Stop of 'ora.cssd' on 'test01' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'test01'
CRS-2677: Stop of 'ora.gipcd' on 'test01' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'test01' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2021/04/14 01:22:01 CLSRSC-4012: Shutting down Oracle Trace File Analyzer (TFA) Collector.
2021/04/14 01:22:37 CLSRSC-4013: Successfully shut down Oracle Trace File Analyzer (TFA) Collector.
2021/04/14 01:22:37 CLSRSC-347: Successfully unlock /u01/app/12.1.0/grid
[root@test01 install]# ps -ef|grep pmon
root 3037 6253 0 01:27 pts/1 00:00:00 grep pmon
3. Check for patch conflicts.
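Conflict detection is done with OPatch from the unzipped patch directory; <PATCH_TOP> below stands for the unzipped patch location:

```shell
# Run as the owner of the home being checked (GI or DB home)
cd <PATCH_TOP>
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir ./

# List patches already installed in the home
$ORACLE_HOME/OPatch/opatch lsinventory
```

Any reported conflicts must be resolved (typically by requesting a merge patch) before proceeding.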
4. Patch the GI home (as the GI home owner).
Go to the patch directory and apply the patch:
opatch apply
5. Run the post scripts (as root or via sudo):
[root@test01 install]# $ORACLE_HOME/rdbms/install/rootadd_rdbms.sh
[oracle@test01 ~]$ ps -ef|grep pmon
oracle 21867 9655 0 01:30 pts/2 00:00:00 grep pmon
[root@test01 install]# $ORACLE_HOME/crs/install/rootcrs.pl -postpatch
Using configuration parameter file: /u01/app/12.1.0/grid/crs/install/crsconfig_params
2021/04/14 01:30:33 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2021/04/14 01:30:49 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'test01'
CRS-2672: Attempting to start 'ora.evmd' on 'test01'
CRS-2676: Start of 'ora.mdnsd' on 'test01' succeeded
CRS-2676: Start of 'ora.evmd' on 'test01' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'test01'
CRS-2676: Start of 'ora.gpnpd' on 'test01' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'test01'
CRS-2676: Start of 'ora.gipcd' on 'test01' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'test01'
CRS-2676: Start of 'ora.cssdmonitor' on 'test01' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'test01'
CRS-2672: Attempting to start 'ora.diskmon' on 'test01'
CRS-2676: Start of 'ora.diskmon' on 'test01' succeeded
CRS-2676: Start of 'ora.cssd' on 'test01' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'test01'
CRS-2672: Attempting to start 'ora.ctssd' on 'test01'
CRS-2676: Start of 'ora.ctssd' on 'test01' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'test01' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'test01'
CRS-2676: Start of 'ora.asm' on 'test01' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'test01'
CRS-2676: Start of 'ora.storage' on 'test01' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'test01'
CRS-5013: Agent "ORAROOTAGENT" failed to start process "/u01/app/12.1.0/grid/bin/osysmond" for action "start": details at "(:CLSN00008:)" in "/u01/app/oracle/diag/crs/test01/crs/trace/ohasd_orarootagent_root.trc"
CRS-2674: Start of 'ora.crf' on 'test01' failed
CRS-2679: Attempting to clean 'ora.crf' on 'test01'
CRS-2681: Clean of 'ora.crf' on 'test01' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'test01'
CRS-2676: Start of 'ora.crsd' on 'test01' succeeded
CRS-6017: Processing resource auto-start for servers: test01
CRS-2672: Attempting to start 'ora.net1.network' on 'test01'
CRS-2676: Start of 'ora.net1.network' on 'test01' succeeded
CRS-2672: Attempting to start 'ora.ons' on 'test01'
CRS-2673: Attempting to stop 'ora.test01.vip' on 'test02'
CRS-2677: Stop of 'ora.test01.vip' on 'test02' succeeded
CRS-2672: Attempting to start 'ora.test01.vip' on 'test01'
CRS-2676: Start of 'ora.test01.vip' on 'test01' succeeded
CRS-2672: Attempting to start 'ora.LISTENER.lsnr' on 'test01'
CRS-2676: Start of 'ora.ons' on 'test01' succeeded
CRS-2676: Start of 'ora.LISTENER.lsnr' on 'test01' succeeded
CRS-6016: Resource auto-start has completed for server test01
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
Oracle Clusterware active version on the cluster is [12.1.0.2.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [0].
PRCC-1010 : _mgmtdb was already enabled
PRCR-1002 : Resource ora.mgmtdb is already enabled
[oracle@test01 ~]$ ps -ef|grep pmon
oracle 1099 9655 0 01:36 pts/2 00:00:00 grep pmon
oracle 1502 1 0 01:32 ? 00:00:00 asm_pmon_+ASM1
6. Start the database resources from DB home.
[oracle@test01 ~]$ ps -ef|grep pmon
oracle 1099 9655 0 01:36 pts/2 00:00:00 grep pmon
oracle 1502 1 0 01:32 ? 00:00:00 asm_pmon_+ASM1
SQL> startup;
ORACLE instance started.
Total System Global Area 754974720 bytes
Fixed Size 2928968 bytes
Variable Size 385879736 bytes
Database Buffers 360710144 bytes
Redo Buffers 5455872 bytes
Database mounted.
Database opened.
[oracle@test01 ~]$ ps -ef|grep pmon
oracle 1502 1 0 01:32 ? 00:00:00 asm_pmon_+ASM1
oracle 16180 1 0 01:37 ? 00:00:00 ora_pmon_RUMPROD1
oracle 19651 19507 0 01:38 pts/2 00:00:00 grep pmon
Perform all the above steps in the same manner on NODE 2.
Note:
Always read the patch README to understand the patching instructions and the general procedure, including the database prepatch and postpatch steps.
7. Run datapatch after patching (in RAC, datapatch needs to be run on only one node):
[oracle@test01 ~]$ cd $ORACLE_HOME/OPatch
[oracle@test01 OPatch]$ ./datapatch -verbose
SQL Patching tool version 12.2.0.0.0 on Wed Apr 14 01:41:32 2021
Copyright (c) 2014, Oracle. All rights reserved.
Connecting to database...OK
Determining current state...done
Current state of SQL patches:
Adding patches to installation queue and performing prereq checks...
Installation queue:
Nothing to roll back
Nothing to apply
SQL Patching tool complete on Wed Apr 14 01:43:17 2021
Monday, April 12, 2021
Switchover and Switchback Using Dataguard Broker For Oracle 11.2.0.3.0 Database
---Primary---
[oracle@host01 ~]$ dgmgrl
DGMGRL for Linux: Version 11.2.0.3.0 - 64bit Production
Copyright (c) 2000, 2009, Oracle. All rights reserved.
Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys/Welcome1
Connected.
DGMGRL> show configuration
Configuration - gtp2prod
Protection Mode: MaxPerformance
Databases:
gtp2_live - Primary database
gtp2_stdy - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS
To check the PRIMARY status:
DGMGRL> show database verbose gtp2_live
Database - gtp2_live
Role: PRIMARY
Intended State: TRANSPORT-ON
Instance(s):
GTP2PROD
Properties:
DGConnectIdentifier = 'gtp2_live'
ObserverConnectIdentifier = ''
LogXptMode = 'ASYNC'
DelayMins = '0'
Binding = 'optional'
MaxFailure = '0'
MaxConnections = '1'
ReopenSecs = '300'
NetTimeout = '30'
RedoCompression = 'DISABLE'
LogShipping = 'ON'
PreferredApplyInstance = ''
ApplyInstanceTimeout = '0'
ApplyParallel = 'AUTO'
StandbyFileManagement = 'AUTO'
ArchiveLagTarget = '0'
LogArchiveMaxProcesses = '30'
LogArchiveMinSucceedDest = '1'
DbFileNameConvert = 'GTP2_STDY, GTP2_LIVE'
LogFileNameConvert = 'GTP2_STDY, GTP2_LIVE'
FastStartFailoverTarget = ''
InconsistentProperties = '(monitor)'
InconsistentLogXptProps = '(monitor)'
SendQEntries = '(monitor)'
LogXptStatus = '(monitor)'
RecvQEntries = '(monitor)'
SidName = 'GTP2PROD'
StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.7)(PORT=1523))(CONNECT_DATA=(SERVICE_NAME=GTP2_LIVE_DGMGRL)(INSTANCE_NAME=GTP2PROD)(SERVER=DEDICATED)))'
StandbyArchiveLocation = '/u01/app/oracle/arch'
AlternateLocation = ''
LogArchiveTrace = '0'
LogArchiveFormat = 'arch%s%t%r.arc'
TopWaitEvents = '(monitor)'
Database Status:
SUCCESS
To check the STANDBY status:
DGMGRL> show database verbose gtp2_stdy
Database - gtp2_stdy
Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: 0 seconds
Apply Lag: 0 seconds
Real Time Query: ON
Instance(s):
GTP2PROD
Properties:
DGConnectIdentifier = 'gtp2_stdy'
ObserverConnectIdentifier = ''
LogXptMode = 'ASYNC'
DelayMins = '0'
Binding = 'OPTIONAL'
MaxFailure = '0'
MaxConnections = '1'
ReopenSecs = '300'
NetTimeout = '30'
RedoCompression = 'DISABLE'
LogShipping = 'ON'
PreferredApplyInstance = ''
ApplyInstanceTimeout = '0'
ApplyParallel = 'AUTO'
StandbyFileManagement = 'AUTO'
ArchiveLagTarget = '0'
LogArchiveMaxProcesses = '30'
LogArchiveMinSucceedDest = '1'
DbFileNameConvert = 'GTP2_LIVE, GTP2_STDY'
LogFileNameConvert = 'GTP2_LIVE, GTP2_STDY'
FastStartFailoverTarget = ''
InconsistentProperties = '(monitor)'
InconsistentLogXptProps = '(monitor)'
SendQEntries = '(monitor)'
LogXptStatus = '(monitor)'
RecvQEntries = '(monitor)'
SidName = 'GTP2PROD'
StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.8)(PORT=1524))(CONNECT_DATA=(SERVICE_NAME=GTP2_STDY_DGMGRL)(INSTANCE_NAME=GTP2PROD)(SERVER=DEDICATED)))'
StandbyArchiveLocation = '/u01/app/oracle/arch'
AlternateLocation = ''
LogArchiveTrace = '0'
LogArchiveFormat = 'arch%s%t%r.arc'
TopWaitEvents = '(monitor)'
Database Status:
SUCCESS
Check the current status:
DGMGRL> show configuration
Configuration - gtp2prod
Protection Mode: MaxPerformance
Databases:
gtp2_live - Primary database
gtp2_stdy - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS
Switchover
It is always advisable to monitor the alert logs of both the PRIMARY and STANDBY databases during the switchover.
Issue the below command:
DGMGRL> switchover to gtp2_stdy
Performing switchover NOW, please wait...
New primary database "gtp2_stdy" is opening...
Operation requires shutdown of instance "GTP2PROD" on database "gtp2_live"
Shutting down instance "GTP2PROD"...
ORACLE instance shut down.
Operation requires startup of instance "GTP2PROD" on database "gtp2_live"
Starting instance "GTP2PROD"...
ORACLE instance started.
Database mounted.
Database opened.
Switchover succeeded, new primary is "gtp2_stdy"
Check the current status:
DGMGRL> show configuration
Configuration - gtp2prod
Protection Mode: MaxPerformance
Databases:
gtp2_stdy - Primary database
gtp2_live - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS
Switchback
To revert (switch back) to the previous situation, perform the same action; note that the primary is now the previous STANDBY, and the standby is the previous PRIMARY.
Issue the below command:
DGMGRL> switchover to gtp2_live
Performing switchover NOW, please wait...
New primary database "gtp2_live" is opening...
Operation requires shutdown of instance "GTP2PROD" on database "gtp2_stdy"
Shutting down instance "GTP2PROD"...
ORACLE instance shut down.
Operation requires startup of instance "GTP2PROD" on database "gtp2_stdy"
Starting instance "GTP2PROD"...
ORACLE instance started.
Database mounted.
Database opened.
Switchover succeeded, new primary is "gtp2_live"
Check the current status:
DGMGRL> show configuration
Configuration - gtp2prod
Protection Mode: MaxPerformance
Databases:
gtp2_live - Primary database
gtp2_stdy - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS
Wednesday, February 17, 2021
Check Whether Physical Standby Is In Sync With The Primary Or NOT
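A minimal way to verify synchronization is to compare the last sequence archived on the primary with the last sequence applied on the standby; the queries below use standard dynamic performance views:

```sql
-- On the primary: last archived sequence per thread
SELECT thread#, MAX(sequence#) "Last Archived"
  FROM v$archived_log
 GROUP BY thread#;

-- On the standby: last applied sequence per thread
SELECT thread#, MAX(sequence#) "Last Applied"
  FROM v$archived_log
 WHERE applied = 'YES'
 GROUP BY thread#;

-- On the standby: transport and apply lag
SELECT name, value
  FROM v$dataguard_stats
 WHERE name IN ('transport lag', 'apply lag');
```

If the applied sequence trails the archived sequence by more than a log or two, check the MRP process and the archive destinations.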
Friday, January 29, 2021
About SQL Hints
* A hint is an instruction to the optimizer to follow an ‘application developer desired’ execution plan
* Hints let you make decisions for the optimizer when you have more in-depth knowledge about data distribution patterns
* In test or development environments, hints are useful for testing the performance of a specific access path, e.g. forcing the use of a specific index when several indexes are defined on the same table
* Changes in the database or host environment can make hints obsolete and even have negative consequences
* It is recommended to use tools such as SQL Tuning Advisor, SQL plan management, and SQL Performance Analyzer to address performance problems not solved by the optimizer before considering hints
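As an illustration, the INDEX hint below forces a specific access path; the table and index names are hypothetical:

```sql
-- Force use of index EMP_ENAME_IDX; without the hint the optimizer decides
SELECT /*+ INDEX(e emp_ename_idx) */ e.empno, e.ename
  FROM emp e
 WHERE e.ename = 'SCOTT';

-- Force a full table scan instead
SELECT /*+ FULL(e) */ e.empno, e.ename
  FROM emp e
 WHERE e.ename = 'SCOTT';
```

The hint must reference the table alias (e), not the table name, or it is silently ignored.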
How do we gather optimizer statistics?
* By default, Oracle Database uses automatic optimizer statistics collection
* Database automatically runs DBMS_STATS to collect optimizer statistics for all schema objects for which statistics are missing or stale
* You can also update and manage optimizer statistics by manually executing DBMS_STATS
* Also, when optimizer statistics are missing, stale, or insufficient, the database automatically gathers dynamic statistics during a parse (depends on level of dynamic sampling)
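Manual collection uses the DBMS_STATS package; the schema and table names below are hypothetical:

```sql
-- Gather statistics for one table (cascade => TRUE includes its indexes)
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SCOTT', tabname => 'EMP', cascade => TRUE);

-- Gather statistics for a whole schema
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'SCOTT');

-- Check when statistics were last gathered
SELECT table_name, last_analyzed FROM dba_tables WHERE owner = 'SCOTT';
```

When estimate_percent is not specified, DBMS_STATS uses AUTO_SAMPLE_SIZE, which is the recommended default.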
What Is An Optimizer?
* Generates the optimal execution plan
* Determines the most efficient method for a SQL statement to access requested data
* The optimizer chooses the plan with the lowest cost among all considered candidate plans
* The optimizer uses available statistics to calculate cost
* Cost computation accounts for query execution factors such as I/O, CPU, network, and memory
Reasons For Changing Execution Plans?
* Stale or missing statistics
* Fresh statistics have been gathered
* Optimizer engine has changed after a database upgrade
* Index has been added or dropped or Table structure has changed
* Database optimizer related init.ora parameters have been changed
* Parsed representation of the SQL statement is not in the Library Cache – statements are aged out
* Reparsing the same SQL statement now leads to generation of a new plan
Difference Between Execution Plan And Explain Plan?
Execution Plan?
* The execution plan for a SQL statement is a set of instructions.
* It tells the database how to access the data and join it together.
Explain Plan?
* An explain plan predicts how Oracle will process your query
* An execution plan describes the steps it actually took
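The difference can be seen directly from SQL*Plus; the query below is hypothetical:

```sql
-- Predicted plan: EXPLAIN PLAN does not run the statement
EXPLAIN PLAN FOR
  SELECT * FROM emp WHERE deptno = 10;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- Actual plan: run the statement, then display the plan of the last cursor
SELECT * FROM emp WHERE deptno = 10;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR);
```

The two plans can differ, for example when bind peeking or adaptive features change the plan at execution time.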