Wednesday, August 7, 2019

How Insert Statement Works


When Oracle receives an INSERT statement, it must perform some pre-tasks before it can actually run the query. The combination of these tasks is called parsing.

During parsing, the database validates the syntax of the statement, checking whether the query is written in valid SQL.

The database also validates the semantics of the statement: it checks whether the statement is meaningful, for example whether the referenced objects and columns exist.

If the syntax and semantic checks pass, the server process continues with execution of the query. The server process goes to the library cache and searches from the MRU (Most Recently Used) end to the LRU (Least Recently Used) end for a match for the SQL statement, using a hash algorithm that returns a hash value. If the hash value of the query we have written matches that of a query already in the library cache, the server process does not need to generate an execution plan (a soft parse); if no match is found, it must proceed with generating one (a hard parse).

As mentioned above, the semantic check is performed by the server process against the data dictionary cache. If the object's definition is already available in the data dictionary cache, the server process completes the check from there. If not, the server process retrieves the required information from the SYSTEM tablespace.

If the submitted SQL statement is the same as a reusable SQL statement in the shared pool, Oracle Database reuses the existing code. This is called a soft parse.
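The soft/hard parse decision described above can be sketched as a hash-keyed cache with MRU/LRU ordering. This is a toy model in Python, not Oracle's actual implementation; the class and plan names are made up for illustration:

```python
from collections import OrderedDict

# Toy sketch of the library-cache lookup: the cache maps a hash of the SQL
# text to a previously built "execution plan". An OrderedDict models the
# MRU/LRU ordering, with most recently used entries kept at the front.

class LibraryCache:
    def __init__(self, capacity=3):
        self.capacity = capacity
        self.plans = OrderedDict()  # hash(sql) -> plan

    def parse(self, sql):
        key = hash(sql)  # stand-in for Oracle's hash of the SQL text
        if key in self.plans:
            self.plans.move_to_end(key, last=False)  # refresh MRU position
            return ("soft parse", self.plans[key])
        plan = f"PLAN({sql})"  # pretend the optimizer built a plan here
        if len(self.plans) >= self.capacity:
            self.plans.popitem(last=True)  # age out the LRU entry
        self.plans[key] = plan
        self.plans.move_to_end(key, last=False)
        return ("hard parse", plan)

cache = LibraryCache()
kind1, _ = cache.parse("INSERT INTO emp VALUES (1, 'RAM')")
kind2, _ = cache.parse("INSERT INTO emp VALUES (1, 'RAM')")  # identical text
print(kind1, "then", kind2)  # hard parse then soft parse
```

Note that only an identical SQL text hashes to the same value, which is why statements that differ only in literal values each trigger a hard parse, and why bind variables help.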

In the case of a hard parse, the server process hands the statement to the optimizer, which reads the SQL statement and generates execution plans for the query. The optimizer typically generates multiple candidate plans during parsing.

Once the optimizer has generated the plans, the server process picks the best, most cost-effective execution plan and goes to the library cache.

The server process stores the chosen plan along with the original SQL text in the library cache, placing it at the MRU (Most Recently Used) end. Here parsing ends; the plan is then picked up and execution of the insert begins.

The server process brings empty blocks from the respective datafile of the tablespace containing the target table into the database buffer cache. These blocks do not yet contain any data.

The server process also brings the same number of empty blocks from the rollback/undo tablespace and copies the addresses of the actual data blocks of the user datafiles into these undo blocks.

The server process then fills the user data blocks with the rows supplied by the INSERT statement.

After the insert completes in the database buffer cache, the database writer (DBWn) eventually writes the modified blocks back to the respective datafiles.
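The steps above can be sketched as a minimal model: the change lands in the buffer cache first (with a before-image recorded in undo), and only later does the database writer flush it to the datafile. All names here are illustrative; this is a conceptual sketch, not Oracle internals:

```python
# Conceptual model of the insert flow: buffer cache, undo, and delayed
# write-back by the database writer.

datafile = {}        # block_no -> row data, on "disk"
undo = []            # undo records: (block_no, before-image)
buffer_cache = {}    # block_no -> row data, in memory (dirty blocks)

def insert_row(block_no, row):
    undo.append((block_no, datafile.get(block_no)))  # record the before-image
    buffer_cache[block_no] = row                     # change happens in the cache

def dbwr_flush():
    datafile.update(buffer_cache)   # DBWn writes dirty blocks to the datafile
    buffer_cache.clear()

insert_row(1, ("RAM",))
print(datafile)       # {} -- nothing on disk yet
dbwr_flush()
print(datafile)       # {1: ('RAM',)}
```

The point of the sketch is the ordering: the datafile is unchanged until the flush, which is exactly why undo and redo exist in the real database.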

How Select Statement Works


When Oracle receives a SQL query, it must perform some pre-tasks before it can actually run the query. The combination of these tasks is called parsing.

During parsing, the following operations are performed.

The database validates the syntax of the statement, checking whether the query is valid SQL. For example, the following statement fails because the keyword FROM is missing:
Select employee where name='RAM';
Select employee where name='RAM'
*
ERROR at line 1:
ORA-00923: FROM keyword not found where expected

The database validates the semantics of the statement: it checks whether the statement is meaningful, for example whether the objects and columns in the statement exist. A statement can be syntactically correct and still fail the semantic check:
SELECT * FROM exist;
SELECT * FROM exist
*
ERROR at line 1:
ORA-00942: table or view does not exist

If the syntax and semantic checks pass, the server process continues with execution of the query. The server process goes to the library cache and searches from the MRU (Most Recently Used) end to the LRU (Least Recently Used) end for a match for the SQL statement, using a hash algorithm that returns a hash value. If the hash value of the query we have written matches that of a query already in the library cache, the server process does not need to generate an execution plan (a soft parse); if no match is found, it must proceed with generating one (a hard parse).

Hard parse:
If Oracle Database cannot reuse existing code, then it must build a new executable version of the application code. This operation is known as a hard parse.

Soft parse:
If the submitted statement is the same as a reusable SQL statement in the shared pool, Oracle Database reuses the existing code. This is called a soft parse.

In the case of a hard parse, the server process hands the statement to the optimizer, which reads the SQL statement and generates execution plans for the query. The optimizer generates multiple candidate plans during parsing.
After the optimizer has generated the plans, the server process takes the best and most cost-effective execution plan and goes to the library cache.
In the library cache the server process stores the execution plan along with the original SQL text.
At this point parsing ends and execution of the SQL statement begins.
The server process then goes to the database buffer cache and checks whether the data required by the query is already in the cache.
If it is, the data is returned to the client from the cache; otherwise the server process first reads the data from the database files into the cache.

Row Source Generation:
The row source generator is software that receives the optimal execution plan from the optimizer and produces an iterative execution plan usable by the rest of the database. Each step of this iterative plan is executed by the SQL engine to produce the result set.
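The iterative "row source" idea can be illustrated with a few chained iterators, where each plan step pulls rows from its child (the classic iterator/volcano model). This is a hedged sketch; the table, column names, and query are invented for illustration:

```python
# Sketch of iterative row-source execution: each plan step is an iterator
# that pulls rows from its child step on demand.

def table_scan(rows):
    for row in rows:
        yield row

def filter_rows(child, predicate):
    for row in child:
        if predicate(row):
            yield row

def project(child, columns):
    for row in child:
        yield tuple(row[c] for c in columns)

employees = [
    {"name": "RAM", "dept": 10},
    {"name": "SITA", "dept": 20},
]

# Roughly: SELECT name FROM employees WHERE dept = 10
plan = project(filter_rows(table_scan(employees), lambda r: r["dept"] == 10),
               ["name"])
print(list(plan))  # [('RAM',)]
```

Because every step is an iterator, rows stream through the plan one at a time rather than each step materializing its full output, which is the same design motivation behind row sources in the database.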

Tuesday, August 6, 2019

RMAN Encrypt Backup in Oracle


Configure RMAN Backups With Password Protection

rman target /
RMAN> set encryption on identified by 'yourpassword' only;

You can then back up with the usual BACKUP DATABASE command; no backup scripts need to change.

You can change the encryption algorithm in the RMAN configuration.

Show Encryption Algorithm

SQL> select algorithm_id, algorithm_name, algorithm_description, is_default from v$rman_encryption_algorithms;

Change Encryption Algorithm

RMAN> show encryption algorithm;
RMAN> configure encryption algorithm 'AES256';

If you want to restore from an encrypted backup, set the decryption password first.

Decrypt RMAN Backup

RMAN> set decryption identified by 'yourpassword';

Important situations of Oracle Dataguard


* Nologging or unrecoverable operations are not applied to the standby, so be cautious in using those options. Alarming data block corruption errors are reported if a user attempts to access any nologging or unrecoverable data blocks.

* 'Drop tablespace', 'drop datafile', 'rename datafile', and add/drop online redo log file operations need to be done on both nodes manually.

* Initialization parameter changes need to be done on both nodes manually.

* For database upgrades, the software in the new ORACLE_HOME must be separately installed on both nodes. The database part of the upgrade is carried out only on the primary database.

* Often Data Guard replication is suspended for a significant period of time during and following the primary database upgrade. That is done with DGMGRL> edit database 'mydb_sby' set state='APPLY-OFF';. The standby server is only upgraded, and replication resumed, once the primary upgrade has been determined to be a success.

* For patches, the software/binaries in the ORACLE_HOME must be separately patched on both nodes. The database part of the patch (if any) is carried out only on the primary database.

Dataguard start/stop or restart redo apply


To start or stop redo apply without the Data Guard broker (DGMGRL).
These commands are compatible with Oracle 18c, 12c, 11g, 10g, and 9i.

Starting Redo Apply on standby database
Managed recovery can be started as a background or foreground process, though in practice hardly anyone runs Data Guard in foreground mode today.
The "disconnect from session" option lets a background process perform the managed recovery. It starts the MRP (Managed Recovery Process) on the standby.

To start Redo Apply in the background, issue the following SQL statement.

Without Real Time Apply (RTA) on standby database
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;


With Real Time Apply (RTA)
If you configured your standby redo logs, you can start real-time apply using the following command:

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;


Stopping Redo Apply on standby database
To stop Redo Apply, issue the following SQL statement.

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;


Monitoring Redo Apply on Physical Standby Databases

Last sequence received and applied
You can use this (important) SQL to check whether your physical standby is in Sync with the Primary:

SELECT ARCH.THREAD# "Thread", ARCH.SEQUENCE# "Last Sequence Received", APPL.SEQUENCE# "Last Sequence Applied", (ARCH.SEQUENCE# - APPL.SEQUENCE#) "Difference"
FROM
(SELECT THREAD# ,SEQUENCE# FROM V$ARCHIVED_LOG WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#)) ARCH,
(SELECT THREAD# ,SEQUENCE# FROM V$LOG_HISTORY WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$LOG_HISTORY GROUP BY THREAD#)) APPL
WHERE
ARCH.THREAD# = APPL.THREAD#;


Standby database process status
select distinct process, status, thread#, sequence#, block#, blocks from v$managed_standby ;


If using real time apply
select TYPE, ITEM, to_char(TIMESTAMP, 'DD-MON-YYYY HH24:MI:SS') from v$recovery_progress where ITEM='Last Applied Redo';
or

select recovery_mode from v$archive_dest_status where dest_id=1;

DATA GUARD TROUBLESHOOTING


Determine if archive logs of your DG environment are successfully being transferred to the standby
Run the following query:

select dest_id, status, error from v$archive_dest
where target='STANDBY';

If all remote destinations have a status of VALID, proceed to the next step.
Otherwise proceed to Troubleshooting Log Transport Services.

How many archives need to be transferred from the primary?
Connected on primary:

select thread#, min(sequence#), min(min_date), count(*) from (
select thread#, sequence#, count(*), max(first_time) min_date
from v$archived_log
where first_time > sysdate - 2
group by thread#, sequence#
having count(*) <2
order by 1,2
)
group by thread#;

Determine if the standby is a Physical standby or a Logical Standby
To determine the standby type run the following query on the standby:

select database_role from v$database;

If the standby is a physical standby then proceed to Troubleshooting Redo
Apply. Else proceed to Troubleshooting Logical Apply.

Troubleshooting Log transport services
Verify that the primary database is in archive log mode and has automatic archiving enabled
select log_mode from v$database;

or in SQL*Plus

archive log list

Verify that sufficient space exist in all archive destinations
The following query can be used to determine all local and mandatory destinations that need to be checked:

select dest_id, destination from v$archive_dest
where schedule='ACTIVE'
and (binding='MANDATORY' or target='PRIMARY');

Determine if the last log switch to any remote destinations resulted in an error
select dest_id, status, error from v$archive_dest
where target='STANDBY';

Address any errors that are returned in the error column. Perform a log
switch and re-query to determine if the issue has been resolved.

Determine if any error conditions have been reached
Query the v$dataguard_status view:

select message, to_char(timestamp,'HH:MI:SS') timestamp
from v$dataguard_status
where severity in ('Error','Fatal')
order by timestamp;

Gather information about how the remote destinations are performing the archival
select dest_id, archiver, transmit_mode, affirm, net_timeout, delay_mins, async_blocks
from v$archive_dest where target='STANDBY';

Determine the current sequence number, the last sequence archived, and the last sequence applied to a standby

Perhaps the most important query to troubleshoot a standby configuration:

select ads.dest_id,
max(sequence#) "Current Sequence",
max(log_sequence) "Last Archived",
max(applied_seq#) "Last Sequence Applied"
from v$archived_log al, v$archive_dest ad, v$archive_dest_status ads
where ad.dest_id=al.dest_id
and al.dest_id=ads.dest_id
group by ads.dest_id;

If you are remotely archiving using the LGWR process then the archived
sequence should be one higher than the current sequence. If remotely
archiving using the ARCH process then the archived sequence should be equal to the current sequence. The applied sequence information is updated at
log switch time.
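The rule above reduces to a simple sanity check. Here is a minimal sketch in Python with made-up sequence numbers, just to make the expected relationship explicit:

```python
# With LGWR transport, the archived sequence should run one ahead of the
# current sequence; with ARCH transport, the two should be equal.

def transport_ok(mode, current_seq, archived_seq):
    expected = current_seq + 1 if mode == "LGWR" else current_seq
    return archived_seq == expected

print(transport_ok("LGWR", 100, 101))  # True
print(transport_ok("ARCH", 100, 100))  # True
```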

Troubleshooting Redo Apply services
Verify the last sequence# received and the last sequence# applied on the standby database
select max(al.sequence#) "Last Seq Received",
max(lh.sequence#) "Last Seq Applied"
from v$archived_log al, v$log_history lh;

If the two numbers are the same then the standby has applied all redo sent by the primary. If the numbers differ by more than 1, proceed to the next step.

Verify that the standby is in the mounted state
select open_mode from v$database;

Determine if there is an archive gap on your physical standby database
Query the V$ARCHIVE_GAP view:

select * from v$archive_gap;

The V$ARCHIVE_GAP fixed view on a physical standby database only returns the next gap that is currently blocking redo apply from continuing.

After resolving the identified gap and starting redo apply, query the
V$ARCHIVE_GAP fixed view again on the physical standby database to
determine the next gap sequence, if there is one. Repeat this process
until there are no more gaps.
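The "resolve one gap, re-query, repeat" procedure above is just a loop. A hedged sketch in Python, where query_archive_gap() and restore_logs() are hypothetical stand-ins for querying V$ARCHIVE_GAP and restoring/registering the missing logs:

```python
# V$ARCHIVE_GAP only ever reports the single gap currently blocking redo
# apply, so gaps must be resolved one at a time until the view is empty.

pending_gaps = [(1, 57, 59), (1, 63, 63)]  # (thread#, low_seq#, high_seq#)

def query_archive_gap():
    # stand-in for: select * from v$archive_gap;
    return pending_gaps[0] if pending_gaps else None

def restore_logs(gap):
    pending_gaps.remove(gap)  # pretend the missing logs were restored

resolved = []
while (gap := query_archive_gap()) is not None:
    restore_logs(gap)
    resolved.append(gap)

print(resolved)  # both gaps handled, one at a time
```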

If the V$ARCHIVE_GAP view does not exist:

WITH prod as (
  select max(sequence#) as seq
  from v_$archived_log
  where RESETLOGS_TIME = (select RESETLOGS_TIME from v_$database)
),
stby as (
  select max(sequence#) as seq, dest_id
  from v_$archived_log
  where first_change# > (select resetlogs_change# from v_$database)
  and applied = 'YES'
  and dest_id in (1,2)
  group by dest_id
)
select prod.seq - stby.seq, stby.dest_id
from prod, stby;

Verify that managed recovery is running
select process,status from v$managed_standby;

When managed recovery is running you will see an MRP process. If you do not see an MRP process then start managed recovery by issuing the following command:

recover managed standby database disconnect;

Some possible statuses for the MRP are listed below:

ERROR – This means that the process has failed. See the alert log or v$dataguard_status for further information.

WAIT_FOR_LOG – Process is waiting for the archived redo log to be completed. Switch an archive log on the primary and query v$managed_standby to see if the status changes to APPLYING_LOG.

WAIT_FOR_GAP – Process is waiting for the archive gap to be resolved. Review the alert log to see if FAL_SERVER has been called to resolve the gap.

APPLYING_LOG – Process is applying the archived redo log to the standby database.

Troubleshooting SQL Apply services
Verify that log apply services on the standby are currently running.
To verify that logical apply is currently available to apply changes perform the following query:

SELECT PID, TYPE, STATUS, HIGH_SCN  FROM V$LOGSTDBY;

When querying the V$LOGSTDBY view, pay special attention to the HIGH_SCN column. This is an activity indicator. As long as it is changing each time you query the V$LOGSTDBY view, progress is being made. The STATUS column gives a text description of the current activity.

If the query against V$LOGSTDBY returns no rows then logical apply is not running. Start logical apply by issuing the following statement:

SQL> alter database start logical standby apply;

If the query against V$LOGSTDBY continues to return no rows then proceed to the next step.

Determine if there is an archive gap in your dataguard configuration
Query the DBA_LOGSTDBY_LOG view on the logical standby database.

SELECT SUBSTR(FILE_NAME,1,25) FILE_NAME,
SUBSTR(SEQUENCE#,1,4) "SEQ#",
FIRST_CHANGE#, NEXT_CHANGE#,
TO_CHAR(TIMESTAMP, 'HH:MI:SS') TIMESTAMP,
DICT_BEGIN BEG, DICT_END END,
SUBSTR(THREAD#,1,4) "THR#"
FROM DBA_LOGSTDBY_LOG ORDER BY SEQUENCE#;

Copy the missing logs to the logical standby system and register them using the ALTER DATABASE REGISTER LOGICAL LOGFILE statement on your logical standby database. For example:

SQL> ALTER DATABASE REGISTER LOGICAL LOGFILE '/u01/oradata/arch/myarc_57.arc';

After you register these logs on the logical standby database, you can restart log apply services. The DBA_LOGSTDBY_LOG view on a logical standby database only returns the next gap that is currently blocking SQL apply operations from continuing.

After resolving the identified gap and starting log apply services, query the DBA_LOGSTDBY_LOG view again on the logical standby database to determine the next gap sequence, if there is one. Repeat this process until there are no more gaps.

Verify if logical apply is receiving errors while performing apply operations.
Log apply services cannot apply unsupported DML statements, DDL statements and Oracle supplied packages to a logical standby database in SQL apply mode. When an unsupported statement or package is encountered, SQL apply operations stop. To determine if SQL apply has stopped due to errors you should query the DBA_LOGSTDBY_EVENTS view. When querying the view, select the columns in order by EVENT_TIME. This ordering ensures that a shutdown
failure appears last in the view. For example:

SQL> SELECT XIDUSN, XIDSLT, XIDSQN, STATUS, STATUS_CODE FROM DBA_LOGSTDBY_EVENTS WHERE EVENT_TIME = (SELECT MAX(EVENT_TIME) FROM DBA_LOGSTDBY_EVENTS);

If an error requiring database management occurred (such as adding a tablespace, datafile, or running out of space in a tablespace), then you can fix the problem manually and resume SQL apply.

If an error occurred because a SQL statement was entered incorrectly, conflicted with an existing object, or violated a constraint then enter the correct SQL statement and use the DBMS_LOGSTDBY.SKIP_TRANSACTION procedure to ensure that the incorrect statement is ignored the next time SQL apply operations are run.

Query DBA_LOGSTDBY_PROGRESS to verify that log apply services is progressing
The DBA_LOGSTDBY_PROGRESS view describes the progress of SQL apply operations on the logical standby databases. For example:

SQL> SELECT APPLIED_SCN, APPLIED_TIME, READ_SCN, READ_TIME, NEWEST_SCN, NEWEST_TIME FROM DBA_LOGSTDBY_PROGRESS;

The APPLIED_SCN indicates that committed transactions at or below that SCN have been applied. The NEWEST_SCN is the maximum SCN to which data could be applied if no more logs were received. This is usually MAX(NEXT_CHANGE#)-1 from DBA_LOGSTDBY_LOG when there are no gaps in the list. When NEWEST_SCN and APPLIED_SCN are equal, all available changes have been applied. If APPLIED_SCN is below NEWEST_SCN and increasing, SQL apply is currently processing changes.
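The interpretation above can be captured as a small helper; a sketch in Python with invented SCN values:

```python
# How to read APPLIED_SCN against NEWEST_SCN from DBA_LOGSTDBY_PROGRESS.

def apply_status(applied_scn, newest_scn):
    if applied_scn == newest_scn:
        return "all available changes applied"
    if applied_scn < newest_scn:
        return "SQL apply is processing changes"
    return "unexpected: applied ahead of newest"

print(apply_status(5000, 5000))
print(apply_status(4800, 5000))
```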

Verify that the table that is not receiving rows is not listed in DBA_LOGSTDBY_UNSUPPORTED.
The DBA_LOGSTDBY_UNSUPPORTED view lists all of the tables that contain datatypes not supported by logical standby databases in the current release. These tables are not maintained (will not have DML applied) by the logical standby database. Query this view on the primary database to ensure that the tables necessary for critical applications are not in this list. If the primary database includes unsupported tables that are critical, consider using a physical standby database.