Sunday, August 18, 2019

What is UNDO DATA?


UNDO data is the copy of pre-modified data that is captured for every transaction that changes data.

Why is UNDO required?

* To roll back transactions
* To support read consistency
* To support flashback operations
* To recover from failed transactions
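All three everyday uses can be seen from a SQL*Plus session. A sketch (the emp table, empno value, and timestamp are hypothetical):

```sql
-- Undo at work (emp is a hypothetical table)
UPDATE emp SET sal = sal * 2 WHERE empno = 7369;
-- Until this commits, other sessions still see the old SAL,
-- reconstructed from undo (read consistency).
ROLLBACK;          -- undo restores the pre-modified value

-- Flashback query also reads undo to rebuild a past version of the row:
SELECT sal FROM emp AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '10' MINUTE)
 WHERE empno = 7369;
```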

How long does UNDO data stay?

It is retained until one of the following happens:

* The user commits the transaction
* The user rolls back the transaction
* The user executes a DDL statement (CREATE, DROP, ALTER, RENAME), which commits implicitly
* The user session terminates abnormally (the transaction is rolled back)
* The user session terminates normally with an exit (the transaction is committed)

Where does UNDO data stay?

* Stored in undo segments within the undo tablespace
* Only one undo tablespace can be active per instance
* Undo segments are owned by the user SYS
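The active undo configuration can be inspected from a SYSDBA session with standard parameters and dictionary views, for example:

```sql
-- Which undo tablespace is active, and how long undo is retained
SHOW PARAMETER undo_tablespace;
SHOW PARAMETER undo_retention;

-- Undo tablespaces known to the database
SELECT tablespace_name, status
  FROM dba_tablespaces
 WHERE contents = 'UNDO';

-- Undo segments and their owner (SYS)
SELECT owner, segment_name, tablespace_name
  FROM dba_rollback_segs;
```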

Starting and Stopping OEM Cloud Control 13c


Starting OEM 13c Components

1) Start the listener.

[oracle@MQM ~]$ $ORACLE_HOME/bin/lsnrctl start

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 02-JUN-2016 16:44:07

Copyright (c) 1991, 2014, Oracle.  All rights reserved.

Starting /u01/app/oracle/product/12.1.0/db/bin/tnslsnr: please wait...

TNSLSNR for Linux: Version 12.1.0.2.0 - Production
System parameter file is /u01/app/oracle/product/12.1.0/db/network/admin/listener.ora
Log messages written to /u01/app/oracle/diag/tnslsnr/MQM/listener/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=MQM.ORACLE.com)(PORT=1521)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=MQM.ORACLE.com)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date                02-JUN-2016 16:44:07
Uptime                    0 days 0 hr. 0 min. 0 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/12.1.0/db/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/MQM/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=MQM.ORACLE.com)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
The listener supports no services
The command completed successfully

2) Start the repository database.

[oracle@MQM ~]$ $ORACLE_HOME/bin/sqlplus /nolog

SQL*Plus: Release 12.1.0.2.0 Production on Thu Jun 2 16:48:18 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

SQL> conn sys as sysdba
Enter password:
Connected to an idle instance.
SQL> startup
ORACLE instance started.

Total System Global Area 3003121664 bytes
Fixed Size                  2928920 bytes
Variable Size             771755752 bytes
Database Buffers         2214592512 bytes
Redo Buffers               13844480 bytes
Database mounted.
Database opened.

3) Start the OMS

[oracle@MQM ~]$ $OMS_HOME/bin/emctl start oms
Oracle Enterprise Manager Cloud Control 13c Release 1
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
Starting Oracle Management Server...
WebTier Successfully Started
Oracle Management Server Successfully Started
Oracle Management Server is Up
JVMD Engine is Up
Starting BI Publisher Server ...
BI Publisher Server Successfully Started
BI Publisher Server is Up

4) Start the agent

[oracle@MQM ~]$ $AGENT_HOME/bin/emctl start agent
Oracle Enterprise Manager Cloud Control 13c Release 1
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
Starting agent .................................................started.

Stopping OEM 13c Components

1) Stop the OMS

[oracle@MQM ~]$ $OMS_HOME/bin/emctl stop oms -all
Oracle Enterprise Manager Cloud Control 13c Release 1
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
Stopping Oracle Management Server...
WebTier Successfully Stopped
Oracle Management Server Successfully Stopped
Oracle Management Server is Down
JVMD Engine is Down
Stopping BI Publisher Server...
BI Publisher Server Successfully Stopped
AdminServer Successfully Stopped
BI Publisher Server is Down

2) Stop the agent.

[oracle@MQM ~]$ $AGENT_HOME/bin/emctl stop agent
Oracle Enterprise Manager Cloud Control 13c Release 1
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
Stopping agent ... stopped.

3) Stop repository database

[oracle@MQM ~]$ $ORACLE_HOME/bin/sqlplus /nolog

SQL*Plus: Release 12.1.0.2.0 Production on Thu Jun 2 16:41:31 2016
Copyright (c) 1982, 2014, Oracle.  All rights reserved.
SQL> conn sys as sysdba
Enter password:
Connected.
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

4) Stop the listener

[oracle@MQM ~]$ $ORACLE_HOME/bin/lsnrctl stop

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 02-JUN-2016 16:43:15

Copyright (c) 1991, 2014, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=MQM.ORACLE.com)(PORT=1521)))
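The startup ordering above can be wrapped in a small script. This is only a sketch: the *_HOME variables are assumed to be set as in the transcripts, the repository database startup via sqlplus is omitted, and by default the script just prints each command instead of running it.

```shell
#!/bin/sh
# Sketch: start OEM 13c components in the order shown above.
# By default (DRYRUN=1) each command is only printed; set DRYRUN=0
# on a real OEM host, with ORACLE_HOME/OMS_HOME/AGENT_HOME exported,
# to actually execute them. Start the repository database with
# sqlplus between the first two steps.
: "${DRYRUN:=1}"

run() {
  if [ "$DRYRUN" = "1" ]; then
    echo "+ $*"       # dry run: show the command
  else
    "$@"              # real run: execute it
  fi
}

run "$ORACLE_HOME/bin/lsnrctl" start
run "$OMS_HOME/bin/emctl" start oms
run "$AGENT_HOME/bin/emctl" start agent
```

The stop sequence is the same pattern in reverse order (oms, agent, database, listener).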

Friday, August 16, 2019

Convert Single Instance to RAC using RCONFIG


In the $ORACLE_HOME/assistants/rconfig/sampleXMLs directory there are two templates, ConvertToRAC_AdminManaged.xml and ConvertToRAC_PolicyManaged.xml, which are used to convert a single-instance database to an admin-managed or a policy-managed RAC database respectively.

Below are the prerequisites that should be completed.

1-> Ensure the Clusterware is configured, up, and running on all nodes.
2-> The Oracle RDBMS binaries for the RAC database must be installed.
3-> An active ASM instance must be running on each node.
4-> Create and mount the required ASM diskgroups.
5-> The standalone database must be running on the local node.
6-> The standalone database and the RAC home must be the same database version.

All the steps below are performed from the RAC1 node (a cluster node).
Note: It is assumed that the ORACLE_HOME for the single instance (i.e. dbhome_1) also exists on RAC1.

1-> # su - oracle
2-> $ . oraenv
3-> $ export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
4-> $ export PATH=$ORACLE_HOME/bin:$PATH
5-> $ dbca    => create the single-instance database, if it does not already exist
6-> $ export ORACLE_HOME=/u01/home/oracle/product/11.2.0/dbhome_1
7-> $ export PATH=$ORACLE_HOME/bin:$PATH
8-> $ cp $ORACLE_HOME/assistants/rconfig/sampleXMLs/ConvertToRAC_AdminManaged.xml /convert1.xml

Convert parameters:

– SourceDBHome: the ORACLE_HOME of the non-RAC (source) database.
– TargetDBHome: the ORACLE_HOME where the RAC database should be configured.
– SourceDBInfo: the SID of the source database.
– The list of nodes that should run RAC instances for the admin-managed cluster database; the local node should be listed first.
– The InstancePrefix tag is optional starting with 11.2.
– The type of storage the RAC database will use (ASM or cluster file system).
– The Database Area location to be configured for the RAC database.
– The Fast Recovery Area to be configured for the RAC database.

Example:

9-> $vi convert1.xml
:se nu
7=> verify="ONLY" (for verify and after give NO for convert)
9=> /u01/app/oracle/product/11.2.0/db_1   (source db home location)
11=>/u01/app/oracle/product/11.2.0/db_1  (target db home location)
13=> SID="nonrac" (sid of non-rac db)
16=>password=manager
22=> name="rac1"    (cluster(rac) nodes)
23=> name="rac2"
28=> prefix>nonrac<    (after conversion, rconfig creates the nonrac1 and nonrac2 instances)
31=> type="ASM"      (storage type (only two are there (ASM|CFS)))
33=> +DATA (storage location)
35=> +DATA (FRA location)
:wq!
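For orientation, the fields edited above sit in elements that look roughly like the sketch below (element names paraphrased from the 11.2 sample file; always edit the template shipped with your own release rather than retyping this):

```xml
<!-- Sketch of the edited fields; not a complete template -->
<n:Convert verify="ONLY">
  <n:SourceDBHome>/u01/app/oracle/product/11.2.0/db_1</n:SourceDBHome>
  <n:TargetDBHome>/u01/app/oracle/product/11.2.0/db_1</n:TargetDBHome>
  <n:SourceDBInfo SID="nonrac">
    <n:Credentials>
      <n:User>sys</n:User>
      <n:Password>manager</n:Password>
      <n:Role>sysdba</n:Role>
    </n:Credentials>
  </n:SourceDBInfo>
  <n:NodeList>
    <n:Node name="rac1"/>
    <n:Node name="rac2"/>
  </n:NodeList>
  <n:InstancePrefix>nonrac</n:InstancePrefix>
  <n:SharedStorage type="ASM">
    <n:TargetDatabaseArea>+DATA</n:TargetDatabaseArea>
    <n:TargetFlashRecoveryArea>+DATA</n:TargetFlashRecoveryArea>
  </n:SharedStorage>
</n:Convert>
```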

10-> $ rconfig convert1.xml    (with verify="ONLY" this only checks whether the conversion would succeed)

11-> Change verify="ONLY" to verify="NO" in convert1.xml and run the same command again:

$ rconfig convert1.xml    (this time the actual conversion is performed)

Note:

The Convert verify option in the ConvertToRAC.xml file has three options:

Convert verify="YES": rconfig performs checks to ensure that the prerequisites for single-instance to Oracle RAC conversion have been met before it starts the conversion.
Convert verify="NO": rconfig does not perform prerequisite checks, and starts the conversion.
Convert verify="ONLY": rconfig only performs prerequisite checks; it does not start the conversion after completing them.

Post conversion steps:

1-> Adjust the SGA for the RAC database, at least 15% for buffer cache and shared pool.
2-> Create SERVICE and apply TAF & Load balancing options, if required.
3-> Backup the RAC database.
4-> Backup OCR.
5-> Remove single-instance database.

How to resume from a failed conversion:

1-> Simply rerun the command if rconfig exits in the middle of the conversion for any fatal reason.
2-> Perform clean-up operations on converted instance before reinitializing the conversion procedure.
3-> You can manually add the resources to CRS, in case rconfig utility fails to perform the final steps.

Logs to monitor:

1-> Single instance database alert log file.
2-> RAC database alert log file.
3-> $ORACLE_BASE/cfgtoollogs/rconfig (e.g. /u01/app/oracle/cfgtoollogs/rconfig) [records the entire operation step by step]
4-> For RMAN log and SQL log, check the respective directories under /u01/app/oracle/cfgtoollogs/rconfig/db


RCONFIG FAQs:

What is the best scenario for using RCONFIG?

RCONFIG invokes RMAN internally to back up the database when converting non-ASM storage to ASM. This is a time-consuming task if the database is terabytes in size, and it causes a longer outage.
Hence this is a good option only for small databases, and only when the customer allows a significant amount of downtime.

How do we improve the conversion time with RCONFIG?

We can increase the number of RMAN backup channels to reduce the backup time; how far depends on the number of CPUs in your node. Here is the command to adjust the parallelism. The channel count can be tuned to your server's capacity.

RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 6;

What does RCONFIG do during the conversion?

* Migrates the database to ASM storage, if specified in the XML file
* Creates database instances on all nodes in the cluster
* Configures listener and net service entries
* Configures and registers CRS resources
* Starts the instances on all nodes in the cluster

What are the prerequisites for the RCONFIG utility?

* The cluster must be installed, with all cluster processes up and running on all nodes.
* The Oracle RAC RDBMS must be installed on all nodes.
* The ASM disks must be mounted across all nodes if migrating from file system to ASM.
* The source and target must be in the same home and on the same OS version.
* Make sure the ASM diskgroup has enough storage for all the data files.

How do we test RCONFIG before we do the real conversion?

The Convert verify parameter in the XML file has the options below. Choose ONLY for a test run.

Convert verify="YES"
Runs the verification checks to ensure that the prerequisites for single-instance to RAC conversion have been met before it actually starts the conversion.

Convert verify="NO"
Bypasses the prerequisite verification checks and starts the conversion straight away.

Convert verify="ONLY"
Does not perform the actual conversion; it only runs the prerequisite verification checks and ends.

What can we do if it fails in the middle of the conversion?

Don't panic. Rerunning rconfig will perform the clean-up operation on the converted instance, or delete the files created by the earlier run.

How do we monitor conversion process during the migration?

There are a few logs we can tail (tail -f) during the process: the single-instance alert log, the RAC database alert log, the rconfig log, and the RMAN logs. The log information gives a good picture of what is going on.

How do we convert using RCONFIG if the stand alone DB is in different host?

You would accomplish this by first restoring the single-instance database on one of the RAC nodes, and then using RCONFIG to convert it to RAC.

Thursday, August 8, 2019

Steps to Start and Stop Oracle Standby Database


Syntax to Start an Oracle Standby Database

startup nomount;
ORACLE instance started.
Total System Global Area 3206836224 bytes
Fixed Size                  2257520 bytes
Variable Size            1191185808 bytes
Database Buffers         1979711488 bytes
Redo Buffers               33681408 bytes

alter database mount standby database;
Database altered.

alter database recover managed standby database disconnect from session;
Database altered.

Syntax to Stop a running Oracle Standby Database

alter database recover managed standby database cancel;
Database altered.
shut immediate;

These commands stop the recovery process, identified as the MRP0 process, followed by "shutdown immediate" — a clean shutdown of the database.
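At either point the state of the standby can be verified with standard v$ views (output will vary per site):

```sql
-- Confirm the database role and open mode
SELECT database_role, open_mode FROM v$database;

-- Check whether managed recovery (MRP0) is running
SELECT process, status, sequence#
  FROM v$managed_standby
 WHERE process LIKE 'MRP%';
```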

Wednesday, August 7, 2019

How Update Statement Works


When Oracle receives a SQL UPDATE statement, it must run some pre-tasks before it can actually execute the query.

During parsing, the database validates the syntax of the statement, i.e. whether the query is well formed.

The database then validates the semantics of the statement, checking whether it is meaningful (for example, that the referenced table and columns exist).

If the syntax and semantic checks pass, the server process continues with the query. It goes to the library cache and searches from the MRU (most recently used) end to the LRU (least recently used) end for a match for the SQL statement, using a hash algorithm that returns a hash value. If the hash value of our query matches that of a query already in the library cache, the server process need not generate an execution plan (a soft parse); if no match is found, the server process must generate an execution plan (a hard parse).

As mentioned above, the semantic check is done in the data dictionary cache by the server process. The server process checks the definition of the object; if it is already available in the data dictionary cache, the check proceeds there. If not, the server process retrieves the required information from the SYSTEM tablespace.

If the submitted SQL statement matches a reusable SQL statement in the shared pool, Oracle Database reuses the existing code. This is called a soft parse.
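Soft-parse reuse is visible in v$sql: repeated executions of the same text increase PARSE_CALLS and EXECUTIONS while LOADS (hard parses of the cursor) stays low. For example (the emp table is hypothetical):

```sql
SELECT sql_text, parse_calls, loads, executions
  FROM v$sql
 WHERE sql_text LIKE 'UPDATE emp SET sal%';
```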

In the case of a hard parse, the server process hands the statement to the optimizer, which reads the SQL statement and generates execution plans; the optimizer considers multiple plans during parsing.

Once the plans have been generated, the server process picks the best, most cost-effective execution plan and goes back to the library cache.

The server process stores the execution plan, along with the original SQL text, at the MRU (most recently used) end of the library cache. Here parsing ends; the plan is then picked up and execution of the update job begins.

The server process brings the required blocks, from the datafiles of the tablespace containing the table to be updated, into the database buffer cache. These blocks contain the original table data.

The server process also acquires undo blocks from the undo tablespace. It copies the pre-modified data, together with the addresses of the actual data blocks in the user datafiles, into these undo blocks, creating a before image.

Once the filter operation is complete, the selected rows are updated with the new content. This continues until all the required user data blocks have been checked and updated.

Once the update job completes, DBWR (the database writer) will eventually write the modified blocks back to the respective datafiles.
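The undo generated by an in-flight update can be observed before committing, via the standard v$transaction view (the emp table is hypothetical):

```sql
UPDATE emp SET sal = sal + 100;

-- Undo blocks and undo records generated by the current transaction
SELECT used_ublk, used_urec FROM v$transaction;

ROLLBACK;   -- applies the before image held in undo
```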

How Delete Statement Works


When Oracle receives a SQL DELETE statement, it must run some pre-tasks before it can actually execute the query.

During parsing, the database validates the syntax of the statement, i.e. whether the query is well formed.

The database then validates the semantics of the statement, checking whether it is meaningful (for example, that the referenced table and columns exist).

If the syntax and semantic checks pass, the server process continues with the query. It goes to the library cache and searches from the MRU (most recently used) end to the LRU (least recently used) end for a match for the SQL statement, using a hash algorithm that returns a hash value. If the hash value of our query matches that of a query already in the library cache, the server process need not generate an execution plan (a soft parse); if no match is found, the server process must generate an execution plan (a hard parse).

As mentioned above, the semantic check is done in the data dictionary cache by the server process. The server process checks the definition of the object; if it is already available in the data dictionary cache, the check proceeds there. If not, the server process retrieves the required information from the SYSTEM tablespace.

If the submitted SQL statement matches a reusable SQL statement in the shared pool, Oracle Database reuses the existing code. This is called a soft parse.

In the case of a hard parse, the server process hands the statement to the optimizer, which reads the SQL statement and generates execution plans; the optimizer considers multiple plans during parsing.

Once the plans have been generated, the server process picks the best, most cost-effective execution plan and goes back to the library cache.

The server process stores the execution plan, along with the original SQL text, at the MRU (most recently used) end of the library cache. Here parsing ends; the plan is then picked up and execution of the delete job begins.

The server process brings the required blocks, from the datafiles of the tablespace containing the table whose rows must be deleted, into the database buffer cache. These blocks contain the original table data.

The server process also acquires undo blocks from the undo tablespace. It copies the pre-modified data, together with the addresses of the actual data blocks in the user datafiles, into these undo blocks, creating a before image.

Once the filter operation is complete, the selected rows are deleted, i.e. the data is removed from the original data blocks. This continues until all the required user data blocks have been checked and processed.

Once the delete job completes, DBWR (the database writer) will eventually write the modified blocks back to the respective datafiles.