Tuesday, March 24, 2020

Difference Between UNDO & REDO



Check the default undo tablespace.
show parameter undo

How to create a new undo tablespace?
create undo tablespace UNDOTBS2 datafile '/u01/app/oracle/oradata/PRIM/undotbs2.dbf' size 400m;

Now create a test tablespace and a test table:
1.create tablespace test datafile '/u01/app/oracle/oradata/PRIM/test001.dbf' size 400m;
2.create table test_tab (txt char(1000)) tablespace test;
3.insert into test_tab values ('teststring1');
4.insert into test_tab values ('teststring2');
commit;

alter system checkpoint;

How to check which redo log file is currently being used by my database?

set linesize 1000
col member for a60
select f.member, l.status from v$log l, v$logfile f where l.group# = f.group# and l.status = 'CURRENT';

Check the redolog files status:
strings /u01/redo3.log | grep teststring

Check the undo tablespace status:
strings /u01/undotbs01.log | grep teststring

Note:
An insert generates very little undo (only the rowid needed to delete the row on rollback); update and delete statements generate more undo because the before-image of the modified data must be stored.
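
To see this for yourself, run a DML statement in one session without committing, then from another session check how much undo the active transaction is holding (a simple sketch joining the standard v$transaction and v$session views):

select s.sid, s.username, t.used_ublk undo_blocks, t.used_urec undo_records
from v$transaction t, v$session s
where s.taddr = t.addr;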

Check the datafile status:
strings /u01/test01.dbf | grep teststring (because we used alter system checkpoint)

How to change the default undo tablespace?
alter system set undo_tablespace='UNDOTBS2'; (dynamic parameter)

Note:
We can create multiple undo tablespaces in a database, but only one undo tablespace can be active at a time.

alter system switch logfile;

set linesize 1000
col member for a60
select f.member, l.status from v$log l, v$logfile f where l.group# = f.group# and l.status = 'CURRENT';

update test_tab set txt= 'teststring_uncommit' where txt = 'teststring1';
Do not commit; open another session and run another update statement.

update test_tab set txt = 'teststring_committed' where txt = 'teststring2';
commit;

Now we have two types of data in the test_tab table: committed and uncommitted.

Check the redolog files status:
strings /u01/redo3.log | grep teststring

Note:
The current redo log file contains both redo and undo records.
Redo log files can contain committed and uncommitted data.

Check the undo tablespace status:
strings /u01/undotbs01.log | grep teststring

Note: LGWR writes very fast compared to the DBWR process; it writes on every commit and at least every 3 seconds.

alter system checkpoint;
All the dirty buffers are pushed/flushed from memory into the datafiles.

When will CKPT write?
On alter system checkpoint, on an online redo log switch, and on the alter database begin backup command.

Check the undo tablespace status:
strings /u01/undotbs01.log | grep teststring

Now it will contain the old (before-image) data.

Check the datafile status:
strings /u01/test01.dbf | grep teststring (because we used alter system checkpoint)

Now it will store committed and uncommitted data.

Note:
LGWR writes both undo and redo for every DML statement in a transaction; the transaction's data can be committed or uncommitted, irrespective of whether a checkpoint has taken place.

Working on Different Database Issues




1.Working on Agent/Listener/DB down Incident Ticket
------------------------------------------------------------------:
Login to the server and DB where the actual issue is. You can find the server name and DB name from the ticket.
ps -ef | grep emagent --- Check the agent availability. If it is running then fine, otherwise you need to start the agent.
echo $AGENT_HOME     --- You can find the agent location; in general the agent location is the same as the ORACLE_HOME location.
emctl start agent    --- From the agent location run this command.
ps -ef | grep tns    --- Check the listener availability. You can also use the LSNRCTL>status command to check the listener. If it is running then fine, otherwise you need to start the listener.
LSNRCTL>start        --- From the LISTENER prompt run this command.
tnsping dbname       --- Check that the DB is reachable. Log in to the database and check the instance status and database open_mode (see the queries after this list). If the instance is OPEN and the database is in READ WRITE mode then fine, otherwise you need to start up the database.
uptime               --- From the oracle user run this command to check the server uptime.
last reboot          --- From the oracle user run this command to check the server reboot time.
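
One way to confirm the instance status and open mode mentioned above, using the standard dynamic views:

select instance_name, status, database_status from v$instance;
select name, open_mode from v$database;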

2.Working on Metrics Level High Incident Ticket
------------------------------------------------------------:
Login to the server and DB where the actual issue is. Check the current_utilization of processes; if it is less than the threshold value then fine (calculate current_utilization as a percentage of limit_value), otherwise you need to increase the limit_value (see the example after the session checks below).
Select resource_name, current_utilization,  max_utilization,  limit_value from v$resource_limit
where resource_name in ('sessions', 'processes');

Note: When the number of sessions increases (activity is high) the metrics level suddenly increases; after the sessions finish their jobs the metrics level goes down, so perform some more checks before increasing the limit_value.

###INACTIVE SESSIONS COUNT WITH USERNAME  AND STATUS
select s.username,s.status, count(s.program) Count
from gv$session s,v$process p
where p.addr=s.paddr AND s.status='INACTIVE'
group by s.username, s.status order by 2 desc;

select  count(s.status) "INACTIVE SESSIONS > 1HOUR"
from gv$session s, v$process p
where p.addr=s.paddr and s.last_call_et > 3600 and s.status='INACTIVE';
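
If these checks confirm that the limit genuinely needs to be raised, the parameter can be increased. A minimal sketch, assuming a new value of 500; processes is a static parameter, so the change goes into the spfile and only takes effect after an instance restart:

alter system set processes=500 scope=spfile;
shutdown immediate
startup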

3.Working on Tablespace full Incident Ticket
-------------------------------------------------------:
Login to the server and database and check the tablespace usage; if the usage is above the threshold (> 85%) then add a datafile to the tablespace. Before adding a datafile you must check the filesystem/DG usage, as the datafile may be autoextensible.

select tablespace_name, used_percent from dba_tablespace_usage_metrics where used_percent > 85;

select df.tablespace_name "Tablespace", totalusedspace "Used MB",
(df.totalspace - tu.totalusedspace) "Free MB",df.totalspace "Total MB",
round(100 * ( (df.totalspace - tu.totalusedspace)/ df.totalspace)) "Pct. Free"
from (select tablespace_name,round(sum(bytes) / 1048576) TotalSpace
from dba_data_files group by tablespace_name) df,
(select round(sum(bytes)/(1024*1024)) totalusedspace, tablespace_name from dba_segments
group by tablespace_name) tu where df.tablespace_name = tu.tablespace_name ;

ALTER TABLESPACE xyz ADD DATAFILE '/u01/oradata/xyz/xyz01.dbf' SIZE 30G;   ### in case of normal file system.

From the query below you can get the datafile size and location of a particular tablespace.
COLUMN tablespace_name FORMAT A20
COLUMN file_id FORMAT A5
COLUMN file_name FORMAT A40
select substr(tablespace_name,1,15) as Tablespace, substr(file_id,1,5) as File_id, substr(file_name,1,40) as File_Name,
bytes/1024/1024/1024 as Size_GB from dba_data_files
where tablespace_name = 'XYZ';

ALTER TABLESPACE xxxxx ADD DATAFILE SIZE 30G;   ### in case of OMF file system.

### in case of Big file tablespace
You need to resize the tablespace. First check the tablespace size information from the query above, then calculate the resize value (a worked example follows the resize command below).
Resize value = Total_Size + Expected size to increase

SELECT TABLESPACE_NAME, BIGFILE FROM DBA_TABLESPACES where tablespace_name = 'XYZ';

Alter Tablespace XYZ resize <resize_value>; ### resize value must be greater than Total_size
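
For example (hypothetical numbers): if the bigfile tablespace is currently 500G in total and you want 50G more, the resize value is 500G + 50G = 550G:

Alter Tablespace XYZ resize 550G;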

### in case of Containers and Pluggable database
Find the name of the pluggable database (container) and switch your session into that container before using the alter tablespace command.
SELECT NAME, CON_ID, DBID, CON_UID, GUID FROM V$CONTAINERS ORDER BY CON_ID;
ALTER SESSION SET CONTAINER = pdb1;
ALTER TABLESPACE xxxxx ADD DATAFILE SIZE 30G;   ### in case of OMF file system.

### in case of Temp tablespace full alert
Check the TEMP tablespace usage details. If the TEMP tablespace usage is above the threshold value then perform some more checks to confirm who is using the temp segments.

select a.tablespace_name tablespace,d.TEMP_TOTAL_MB,sum (a.used_blocks * d.block_size) / 1024 / 1024 TEMP_USED_MB,
d.TEMP_TOTAL_MB - sum (a.used_blocks * d.block_size) / 1024 / 1024 TEMP_FREE_MB,
round(100 * ( (d.TEMP_TOTAL_MB - sum (a.used_blocks * d.block_size) / 1024 / 1024)/ d.TEMP_TOTAL_MB)) "Pct. Free"
from v$sort_segment a, (Select   b.name, c.block_size, sum (c.bytes) / 1024 / 1024 TEMP_TOTAL_MB
from v$tablespace b, v$tempfile c where b.ts#= c.ts#
group by b.name, c.block_size) d where a.tablespace_name = d.name
group by a.tablespace_name, d.TEMP_TOTAL_MB;

###Sort space used by the session
select S.username, S.osuser, P.spid, s.program, SUM(T.blocks)*TBS.block_size/1024/1024 mb_used, T.tablespace, Count(*) sort_ops
from v$sort_usage T, v$session S, dba_tablespaces TBS, v$process P
where T.session_addr = S.saddr and S.paddr = P.addr and T.tablespace = TBS.tablespace_name
Group by S.username, S.osuser, P.spid, S.program, TBS.block_size, T.tablespace order by P.spid;

Note: Temp space is used by sessions; once the sessions finish their jobs the temp space is automatically reclaimed.

4.Archive Area Full or FLASH drive full incident ticket
--------------------------------------------------------------------:
Login to the server and validate the FLASH mount point usage; if the usage is below the threshold (85%) then fine, otherwise you need to take an archivelog backup with the delete input option, which removes all archives that have already been backed up from the archive area.

rman target / catalog catalog/catalog@rman
RMAN> backup archivelog all delete input;

5.FILESYSTEM Full Alert (DATA/TEMP/FLASH/ASMFS)
-----------------------------------------------------------------------:
Login to the server where the actual issue is. Validate the /u01 mount point; if the usage is below the threshold (85%) then fine, otherwise you need to perform some cleanup, ideally by moving older logs to another location.

In case of an ASM file system, you need to add an ASM disk to the diskgroup to increase the diskgroup size. Reach out to the storage team (or create a ticket for them) to provide LUNs of the same size, then create ASM disks of the same size. Finally add these disks to the diskgroup.
asmcmd> lsdg   --To check the diskgroup size information
asmcmd> lsdsk  --To check the ASM disks. Same check you can perform from ASM instance prompt.
/etc/init.d/oracleasm listdisks   ### to check the ASM all disk present on the server.

SQL> SELECT GROUP_NUMBER, NAME, SECTOR_SIZE, BLOCK_SIZE, ALLOCATION_UNIT_SIZE, STATE, TYPE, TOTAL_MB, FREE_MB FROM V$ASM_DISKGROUP;
SQL> SELECT name, free_mb, total_mb, free_mb/total_mb*100 as percentage FROM v$asm_diskgroup;
SQL> select mount_status,header_status,mode_status,state,total_mb,free_mb,name,path,label from v$asm_disk where group_number = 1 order by name;

SQL> connect / as sysasm
ALTER DISKGROUP DATA ADD DISK 'ORCL:DISK074';

If this were a RAC installation, the disks would only be stamped by one node. The other nodes would just scan for the disks to make sure the new disk is accessible to all nodes.
/etc/init.d/oracleasm scandisks

ALTER DISKGROUP DATA ADD DISK 'ORCL:DATA_0001';  ### to add new disk to diskgroup

ALTER DISKGROUP DATA RESIZE ALL SIZE 100G;  ### to resize the existing disk size.

6.Working on DB User Unlock Incident
-------------------------------------------------:
Login to the server and database where the actual issue is. Check the current status (and last lock date) of the user, then unlock the user; a quick status query is shown below.
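
A minimal status check before unlocking; 'SCOTT' here is only a placeholder for the user named in the ticket:

select username, account_status, lock_date, profile from dba_users where username = 'SCOTT';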

ALTER USER username ACCOUNT unlock;

In case you need to reset the user password, get the user and password information from the dba_users view first.

ALTER user username identified by *****; ### paste the same password obtained from dba_users in case you do not want to change the password.

7.Working on Flash recovery area full incident
---------------------------------------------------------:
Login to the server and DB where the actual issue is and check the flash recovery area usage.

select name, ceil( space_limit / 1024 / 1024) size_mb,
ceil( space_used / 1024 / 1024) used_mb,decode( nvl( space_used, 0),0, 0, ceil ( ( space_used / space_limit) * 100) ) pct_used
from v$recovery_file_dest order by name desc;

Perform some more checks to confirm what is using the flash recovery area before increasing its size.

select * from v$flash_recovery_area_usage;
show parameter db_recovery_file_dest_size;

ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE = 6000G SCOPE=BOTH;

8.Working on database block corruption incident
-----------------------------------------------------------:
Logon to the server and database where the actual issue is and check for block corruption.

select file#, BLOCK#, CORRUPTION_CHANGE# , CORRUPTION_TYPE from V$DATABASE_BLOCK_CORRUPTION;

SELECT DISTINCT owner, segment_name FROM v$database_block_corruption dbc
JOIN dba_extents e ON dbc.file# = e.file_id AND dbc.block# BETWEEN e.block_id and e.block_id+e.blocks-1
ORDER BY 1,2;

Note: Block corruption can be automatically repaired from the standby database; otherwise you need to perform recovery of that particular block.

RMAN> recover datafile <file#> block <block#>;
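
A sketch of the same recovery from the RMAN prompt; the file and block numbers are placeholders taken from the V$DATABASE_BLOCK_CORRUPTION output above, and RECOVER CORRUPTION LIST repairs every block currently listed in that view:

RMAN> recover datafile 4 block 123;
RMAN> recover corruption list;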

Friday, March 20, 2020

Check Long Running operations




select v$session_longops.sid || ' : ' || v$session_longops.serial# "session",
       V$session.username || ' : ' || V$session.osuser "DB / OS USER",
       v$process.program,
       V$session.machine || ' | ' || V$session.terminal Machine,
       v$session_longops.opname || ' | ' || v$session_longops.target OPNAME_TARGET,
       v$session_longops.elapsed_seconds || ' | ' ||
       v$session_longops.time_remaining Progress,
       v$process.spid,
       round(v$process.pga_used_mem / 1024 / 1024) || '  |  ' ||
       round(v$process.pga_alloc_mem / 1024 / 1024) "PGA USE | ALLOC",
       v$sql.sql_text,
       v$sql.sorts,
       v$sql.executions,
       v$sql.disk_reads,
       v$sql.parse_calls,
       v$sql.service,
       v$sql.module
  from v$session_longops, V$process, V$sql, V$session
 where v$session.sid = v$session_longops.sid
   and v$process.addr = v$session.paddr
   and v$session.sql_id = v$sql.sql_id
   and V$session_longops.time_remaining > 0;

Scripts For Database Details



Database default information:

Select username,profile,default_tablespace,temporary_tablespace from dba_users;

Database Structure information:

SELECT /*+ ordered */ d.tablespace_name tablespace, d.file_name filename
, d.bytes filesize, d.autoextensible autoextensible, d.increment_by * e.value increment_by, d.maxbytes maxbytes
FROM sys.dba_data_files d, v$datafile v
, (SELECT value FROM v$parameter WHERE name = 'db_block_size') e
WHERE (d.file_name = v.name)
UNION
SELECT d.tablespace_name tablespace, d.file_name filename, d.bytes filesize, d.autoextensible autoextensible
, d.increment_by * e.value increment_by, d.maxbytes maxbytes
FROM sys.dba_temp_files d, (SELECT value
FROM v$parameter
WHERE name = 'db_block_size') e
UNION
SELECT '[ ONLINE REDO LOG ]', a.member, b.bytes, null, TO_NUMBER(null), TO_NUMBER(null)
FROM v$logfile a, v$log b
WHERE a.group# = b.group#
UNION
SELECT '[ CONTROL FILE ]', a.name, TO_NUMBER(null), null, TO_NUMBER(null), TO_NUMBER(null)
FROM v$controlfile a
ORDER BY 1,2;

Database Character Set Information:

Select * from nls_database_parameters;

Database Segment Management Information:

Select TABLESPACE_NAME, BLOCK_SIZE, EXTENT_MANAGEMENT, SEGMENT_SPACE_MANAGEMENT from dba_tablespaces;

Database Object Information:

Select owner,object_type,count(*) from dba_objects Where owner not IN ('SYS','MDSYS','CTXSYS','HR','ORDSYS','OE','ODM_MTR','WMSYS','XDB','QS_WS',
'RMAN','SCOTT','QS_ADM','QS_CBADM', 'ORDSYS', 'OUTLN', 'PM', 'QS_OS', 'QS_ES', 'ODM', 'OLAPSYS','WKSYS','SH','SYSTEM','ORDPLUGINS','QS','QS_CS')
Group by owner,object_type order by owner;

Find the last record from a table:

select * from employees where rowid in(select max(rowid) from employees);
select * from employees minus select * from employees where rownum < (select count(*) from employees);

Last SQL fired by the User on Database:
Select S.USERNAME||'('||s.sid||')-'||s.osuser UNAME  ,s.program||'-'||s.terminal||'('||s.machine||')' PROG ,s.sid||'/'||s.serial# sid,s.status "Status",p.spid,sql_text sqltext
from v$sqltext_with_newlines t,V$SESSION s , v$process p
where t.address =s.sql_address and p.addr=s.paddr(+) and t.hash_value = s.sql_hash_value
order by s.sid,t.piece;

Find Oracle timestamp from current SCN:

Select to_char(CURRENT_SCN) from v$database;  -- oracle Ver. 10g or above
Select current_scn, dbms_flashback.get_system_change_number from v$database;  --standby case
SQL> select scn_to_timestamp(8843525) from dual;

Find UNDO information:

select to_char(begin_time,'hh24:mi:ss'),to_char(end_time,'hh24:mi:ss')
, maxquerylen,ssolderrcnt,nospaceerrcnt,undoblks,txncount from v$undostat
order by undoblks ;

Shared Pool Information:

select to_number(value) shared_pool_size, sum_obj_size, sum_sql_size, sum_user_size,
(sum_obj_size + sum_sql_size+sum_user_size)* 1.3 min_shared_pool
  from (select sum(sharable_mem) sum_obj_size
  from v$db_object_cache where type <> 'CURSOR'),
 (select sum(sharable_mem) sum_sql_size from v$sqlarea),
 (select sum(250 * users_opening) sum_user_size from v$sqlarea), v$parameter
 where name = 'shared_pool_size';

How to determine whether the datafiles are synchronized or not:

select status, checkpoint_change#, to_char(checkpoint_time, 'DD-MON-YYYY HH24:MI:SS') as checkpoint_time, count(*) from v$datafile_header
group by status, checkpoint_change#, checkpoint_time
order by status, checkpoint_change#, checkpoint_time;

Long Query Progress in database:

SELECT a.sid, a.serial#, b.username , opname OPERATION, target OBJECT,
TRUNC(elapsed_seconds, 5) "ET (s)", TO_CHAR(start_time, 'HH24:MI:SS') start_time,
ROUND((sofar/totalwork)*100, 2) "COMPLETE (%)"
FROM v$session_longops a, v$session b
WHERE a.sid = b.sid AND b.username not IN ('SYS', 'SYSTEM') AND totalwork > 0
ORDER BY elapsed_seconds;

How can we see the oldest flashback available:

You can use the following query to see the flashback data available.
SELECT to_char(sysdate,'YYYY-MM-DD HH24:MI') current_time, to_char(f.oldest_flashback_time, 'YYYY-MM-DD HH24:MI')OLDEST_FLASHBACK_TIME, (sysdate - f.oldest_flashback_time)*24*60 HIST_MIN FROM v$database d, V$FLASHBACK_DATABASE_LOG f;

How to get current session id, process id, client process id:

select b.sid, b.serial#, a.spid processid, b.process clientpid from v$process a, v$session b
where a.addr = b.paddr and b.audsid = userenv('sessionid');
V$SESSION.SID and V$SESSION.SERIAL# are database process id
V$PROCESS.SPID – Shadow process id on the database server
V$SESSION.PROCESS – Client process id, on windows it is “:” separated the first # is the process id on the client and 2nd one is the thread id.

How to find running jobs in oracle database:

select sid, job, instance from dba_jobs_running;
select sid, serial#, machine, status, osuser, username from v$session where username is not null; --all active users
select owner, job_name from DBA_SCHEDULER_RUNNING_JOBS; --for oracle 10g

How to find long running jobs in oracle database:

select username,to_char(start_time, 'hh24:mi:ss dd/mm/yy') started, time_remaining remaining, message from v$session_longops
where time_remaining > 0 order by time_remaining desc;

Report Longest Rman Backup Job:

Select username, to_char(start_time, 'hh24:mi:ss dd/mm/yy') started, TOTALWORK, SOFAR COMPLETED, time_remaining remaining, ELAPSED_SECONDS, message from v$session_longops where time_remaining = 0 and message like 'RMAN%' order by  ELAPSED_SECONDS DESC;

Last SQL Fired from particular Schema or Table:

Select CREATED, TIMESTAMP, last_ddl_time from all_objects
WHERE OWNER='HRMS' AND OBJECT_TYPE='TABLE' AND OBJECT_NAME='PAYROLL_MAIN_FILE';

Display Log on Information of database:

Select SYSDATE-logon_time "Days", (SYSDATE-logon_time)*24 "Hours"
from  sys.v_$session where  sid=1;
Note: The above query displays how many days and hours the database has been up, based on the logon time of SID 1, which is the PMON background process.

How do you find whether the instance was started with pfile or spfile

SELECT name, value FROM v$parameter WHERE name = 'spfile';

This query will return NULL if you are using PFILE

SELECT COUNT(*) FROM v$spparameter WHERE value IS NOT NULL;

If the count is non-zero then the instance is using a spfile, and if the count is zero then it is using a pfile:

SELECT DECODE(value, NULL, 'PFILE', 'SPFILE') "Init File Type" FROM sys.v_$parameter WHERE name = 'spfile';

How can you check which user has which Role:

Select * from DBA_ROLE_PRIVS order by grantee;

How to detect Migrated and chained row in a Table
You must execute the script UTLCHAIN.SQL (which creates the CHAINED_ROWS table) before running the query below.
ANALYZE TABLE scott.emp LIST CHAINED ROWS;
SELECT * FROM chained_rows;
You can also detect migrated and chained rows by checking the ‘table fetch continued row’ statistic in the v$sysstat view.
SQL> SELECT name, value FROM v$sysstat WHERE name = 'table fetch continued row';

How to Get Database Version:

SELECT * from v$version;
SELECT VALUE  FROM v$system_parameter  WHERE name = 'compatible';

Find the Size of Schema:

SELECT SUM (bytes / 1024 / 1024) "size"
FROM dba_segments WHERE owner = '&owner';

Oracle SQL queries over v$session that show the actual Oracle connections:

SELECT osuser, username, machine, program
FROM v$session ORDER BY osuser; 
SELECT program application, COUNT (program) Numero_Sesiones
FROM v$session GROUP BY program
ORDER BY Numero_Sesiones DESC;

Getting Current Schema:

SELECT SYS_CONTEXT ('userenv', 'current_schema') FROM DUAL;

How to find the last time a session performed any activity:

select username, floor(last_call_et / 60) "Minutes", status from v$session
where username is not null order by last_call_et;

How to find parameters that will take into effect for new sessions:

SELECT name FROM v$parameter WHERE issys_modifiable = 'DEFERRED';

How to find tables that have a specific column name:

SELECT owner, table_name, column_name
FROM dba_tab_columns WHERE column_name like 'AMOUNT' ORDER by table_name;

Display database Recovery status:

SELECT * FROM   v$backup;
SELECT * FROM   v$recovery_status;
SELECT * FROM   v$recover_file;
SELECT * FROM   v$recovery_file_status;
SELECT * FROM   v$recovery_log;

How to Monitor RMAN Backup through SQL Query



Script to check RMAN Configuration:

SELECT  name "PARAMETERS", value "RMAN CONFIGURATION"
FROM  v$rman_configuration
ORDER BY name;

Script to List RMAN Backup Piece:

SELECT bs.recid,
DECODE(   bp.status, 'A', 'Available', 'D', 'Deleted', 'X', 'Expired') status , bp.handle
handle, TO_CHAR(bp.start_time, 'dd/mm/yyyy HH24:MI:SS')start_time
, TO_CHAR(bp.completion_time, 'dd/mm/yyyy HH24:MI:SS') completion_time, bp.elapsed_seconds  "ELAPSED"
FROM
    v$backup_set bs JOIN v$backup_piece bp USING (set_stamp,set_count)
WHERE
    bp.status IN ('A', 'X') AND bp.completion_time > sysdate-1
ORDER BY bp.completion_time desc, bs.recid, bp.piece#;

Script to List RMAN Backup Set:

SELECT  bs.recid, DECODE(backup_type, 'L', 'Archived Logs', 'D', 'Datafile Full', 'I', 'Incremental') backup_type,
device_type "type", DECODE(bs.controlfile_included, 'NO', null, bs.controlfile_included) controlfile,
sp.spfile_included spfile, bs.incremental_level L,TO_CHAR(bs.start_time, 'dd/mm/yyyy HH24:MI:SS') start_time
  , TO_CHAR(bs.completion_time, 'dd/mm/yyyy HH24:MI:SS')  completion_time, bs.elapsed_seconds "ELAPSED", bp.tag, bs.block_size "BLOCK"
  FROM  v$backup_set  bs, (select distinct set_stamp, set_count, tag, device_type from v$backup_piece where status in ('A', 'X'))  bp,
   (select distinct set_stamp, set_count, 'YES' spfile_included from v$backup_spfile) sp
WHERE completion_time > sysdate -1
  AND bs.set_stamp = bp.set_stamp
  AND bs.set_count = bp.set_count
  AND bs.set_stamp = sp.set_stamp (+)
  AND bs.set_count = sp.set_count (+)
ORDER BY  completion_time desc, bs.recid;

Script to List RMAN Job done:

select SID, serial#, opname, START_TIME,TOTALWORK, sofar, (sofar/totalwork) * 100 done,
sysdate + TIME_REMAINING/3600/24 end_at, elapsed_seconds elapsed, username
from v$session_longops
Where totalwork = sofar
AND opname NOT LIKE '%aggregate%'
AND start_time > sysdate -1
AND opname like 'RMAN%';

Note: If you need to query running status of rman backup then you have to set totalwork > sofar in where condition.

Script to Monitor RMAN Job in Progress:

SELECT sid, serial#, sofar, totalwork, opname, round(sofar/totalwork*100,2) AS pct_complete
FROM v$session_longops
WHERE opname LIKE 'RMAN%'
AND opname NOT LIKE '%aggregate%' AND totalwork != 0
AND sofar <> totalwork;

Script to list RMAN spfile & Controlfile Backup:

SELECT bs.recid, sp.spfile_included spfile
, TO_CHAR(bs.completion_time, 'dd/mm/yyyy HH24:MI:SS') completion_time
, DECODE(status, 'A', 'Available', 'D', 'Deleted', 'X', 'Expired') status, handle
FROM v$backup_set  bs, v$backup_piece  bp, (select distinct set_stamp, set_count, 'YES' spfile_included
 from v$backup_spfile) sp
WHERE bs.set_stamp = bp.set_stamp
  AND bs.completion_time > sysdate -1
  AND bs.set_count = bp.set_count
  AND bp.status IN ('A', 'X')
  AND bs.set_stamp = sp.set_stamp
  AND bs.set_count = sp.set_count
ORDER BY  bs.completion_time desc, bs.recid, piece#;

Script to find size of Datafile and Archivelog Backup:

select trunc(completion_time) "BACKUP_DATE", sum(blocks*block_size)/1024/1024 "DBF_BACKUP",
(SELECT sum(blocks*block_size)/1024/1024  from v$backup_redolog
WHERE first_time > sysdate-1) "ARC_BACKUP"
from v$backup_datafile
WHERE completion_time > sysdate - 1
group by trunc(completion_time)
order by 1 DESC;

You can also use V$SESSION_LONGOPS to estimate when a job will complete for an RMAN backup or restore operation, because the total amount of work, the time the job began, and the amount of work left are known values.

SELECT sid, serial#, opname, ROUND(sofar/totalwork*100,2) AS pct_complete,
TO_CHAR(start_time,'dd-mon-yy hh24:mi') start_time, (sysdate-start_time)*24 hours_running,
((sysdate-start_time)*24*60)/(sofar/totalwork)-(sysdate-start_time)
*24*60 minutes_left, TO_CHAR((sysdate-start_time)/(sofar/totalwork)+start_time,'dd-mon-yy hh24:mi') est_comp_time
FROM v$session_longops
WHERE opname LIKE 'RMAN%'
AND opname NOT LIKE '%aggregate%'
AND totalwork != 0
AND sofar <> totalwork;                                             

Script to Report RMAN full, incremental and archivelog backups:

select SESSION_KEY, INPUT_TYPE, STATUS,
to_char(START_TIME,'mm/dd/yy hh24:mi') start_time,
to_char(END_TIME,'mm/dd/yy hh24:mi')   end_time,
elapsed_seconds/3600 hrs
from V$RMAN_BACKUP_JOB_DETAILS
order by session_key;

Script to Report RMAN full, incremental without archivelog backups:

col STATUS format a9
col hrs format 999.99
select
SESSION_KEY, INPUT_TYPE, STATUS,
to_char(START_TIME,'mm/dd/yy hh24:mi') start_time,
to_char(END_TIME,'mm/dd/yy hh24:mi')   end_time,
elapsed_seconds/3600                   hrs
from V$RMAN_BACKUP_JOB_DETAILS
where input_type='DB INCR'
order by session_key;

Tracking Alert Log file by x$dbgalertext



Instead of reviewing the whole alert log file for a particular piece of information, we can search for a keyword in the X$DBGALERTEXT table.

Grep the phrase "Starting up" from the alert log via the instance:

col ORIGINATING_TIMESTAMP for a40
col MESSAGE_TEXT for a80
set linesize 500
SELECT originating_timestamp, message_text FROM sys.x$dbgalertext WHERE message_text LIKE '%Starting up%';


Scripts To Monitor Tablespaces and datafiles




To check the tablespace in detail.

set linesize 160
set pagesize 300
select distinct
a.tablespace_name,
SUM(a.bytes)/1024/1024 "Used Size MB",
SUM(decode(b.maxextend, null, A.BYTES/1024/1024, b.maxextend*8192/1024/1024)) "Max Size Mb",
(SUM(a.bytes)/1024/1024 - round(c."Free"/1024/1024)) "Total Used MB",
(SUM(decode(b.maxextend, null, A.BYTES/1024/1024, b.maxextend*8192/1024/1024)) - (SUM(a.bytes)/1024/1024 - round(c."Free"/1024/1024))) "Total Free MB",
round(100*(SUM(a.bytes)/1024/1024 - round(c."Free"/1024/1024))/(SUM(decode(b.maxextend, null, A.BYTES/1024/1024, b.maxextend*8192/1024/1024)))) "Used Percentage"
from
dba_data_files a,
sys.filext$ b,
(SELECT d.tablespace_name , sum(nvl(c.bytes,0)) "Free" FROM dba_tablespaces d,DBA_FREE_SPACE c where d.tablespace_name = c.tablespace_name(+) group by d.tablespace_name) c
where a.file_id = b.file#(+)
and a.tablespace_name = c.tablespace_name
GROUP by a.tablespace_name, c."Free"/1024
order by round(100*(SUM(a.bytes)/1024/1024 - round(c."Free"/1024/1024))/(SUM(decode(b.maxextend, null, A.BYTES/1024/1024, b.maxextend*8192/1024/1024)))) desc;

Note:
The max size and free values take autoextension into account when the datafiles have AUTOEXTENSIBLE enabled.

Another script to check the tablespace in detail.

set pages 999
set lines 400
SELECT df.tablespace_name tablespace_name,
 max(df.autoextensible) auto_ext,
 round(df.maxbytes / (1024 * 1024), 2) max_ts_size,
 round((df.bytes - sum(fs.bytes)) / (df.maxbytes) * 100, 2) max_ts_pct_used,
 round(df.bytes / (1024 * 1024), 2) curr_ts_size,
 round((df.bytes - sum(fs.bytes)) / (1024 * 1024), 2) used_ts_size,
 round((df.bytes-sum(fs.bytes)) * 100 / df.bytes, 2) ts_pct_used,
 round(sum(fs.bytes) / (1024 * 1024), 2) free_ts_size,
 nvl(round(sum(fs.bytes) * 100 / df.bytes), 2) ts_pct_free
FROM dba_free_space fs,
 (select tablespace_name,
 sum(bytes) bytes,
 sum(decode(maxbytes, 0, bytes, maxbytes)) maxbytes,
 max(autoextensible) autoextensible
 from dba_data_files
 group by tablespace_name) df
WHERE fs.tablespace_name (+) = df.tablespace_name
GROUP BY df.tablespace_name, df.bytes, df.maxbytes
UNION ALL
SELECT df.tablespace_name tablespace_name,
 max(df.autoextensible) auto_ext,
 round(df.maxbytes / (1024 * 1024), 2) max_ts_size,
 round((df.bytes - sum(fs.bytes)) / (df.maxbytes) * 100, 2) max_ts_pct_used,
 round(df.bytes / (1024 * 1024), 2) curr_ts_size,
 round((df.bytes - sum(fs.bytes)) / (1024 * 1024), 2) used_ts_size,
 round((df.bytes-sum(fs.bytes)) * 100 / df.bytes, 2) ts_pct_used,
 round(sum(fs.bytes) / (1024 * 1024), 2) free_ts_size,
 nvl(round(sum(fs.bytes) * 100 / df.bytes), 2) ts_pct_free
FROM (select tablespace_name, bytes_used bytes
 from V$temp_space_header
 group by tablespace_name, bytes_free, bytes_used) fs,
 (select tablespace_name,
 sum(bytes) bytes,
 sum(decode(maxbytes, 0, bytes, maxbytes)) maxbytes,
 max(autoextensible) autoextensible
 from dba_temp_files
 group by tablespace_name) df
WHERE fs.tablespace_name (+) = df.tablespace_name
GROUP BY df.tablespace_name, df.bytes, df.maxbytes
ORDER BY 4 DESC;

To check the free and used space:

SELECT /* + RULE */  df.tablespace_name "Tablespace",
df.bytes / (1024 * 1024) "Size (MB)", SUM(fs.bytes) / (1024 * 1024) "Free (MB)",
Nvl(Round(SUM(fs.bytes) * 100 / df.bytes),1) "% Free",
Round((df.bytes - SUM(fs.bytes)) * 100 / df.bytes) "% Used"
FROM dba_free_space fs,
(SELECT tablespace_name,SUM(bytes) bytes
FROM dba_data_files
GROUP BY tablespace_name) df
WHERE fs.tablespace_name (+)  = df.tablespace_name
GROUP BY df.tablespace_name,df.bytes
UNION ALL
SELECT /* + RULE */ df.tablespace_name tspace,
fs.bytes / (1024 * 1024), SUM(df.bytes_free) / (1024 * 1024),
Nvl(Round((SUM(fs.bytes) - df.bytes_used) * 100 / fs.bytes), 1),
Round((SUM(fs.bytes) - df.bytes_free) * 100 / fs.bytes)
FROM dba_temp_files fs,
(SELECT tablespace_name,bytes_free,bytes_used
 FROM v$temp_space_header
GROUP BY tablespace_name,bytes_free,bytes_used) df
 WHERE fs.tablespace_name (+)  = df.tablespace_name
 GROUP BY df.tablespace_name,fs.bytes,df.bytes_free,df.bytes_used
 ORDER BY 4 DESC;

To check the tablespace usage (current, max, used and free sizes in MB):

set linesize 100
set pagesize 100
col file_name for a60

select
    a.tablespace_name,
    round(SUM(a.bytes)/(1024*1024)) CURRENT_MB,
    round(SUM(decode(a.AUTOEXTENSIBLE, 'NO', A.BYTES/(1024*1024), GREATEST (a.MAXBYTES/(1024*1024),A.BYTES/(1024*1024))))) MAX_MB,
    round((SUM(a.bytes)/(1024*1024) - c.Free/1024/1024)) USED_MB,
    round((SUM(decode(a.AUTOEXTENSIBLE, 'NO', A.BYTES/(1024*1024), GREATEST (a.MAXBYTES/(1024*1024),A.BYTES/(1024*1024)))) - (SUM(a.bytes)/(1024*1024) - round(c.Free/1024/1024))),2) FREE_MB,
    round(100*(SUM(a.bytes)/(1024*1024) - round(c.Free/1024/1024))/(SUM(decode(a.AUTOEXTENSIBLE, 'NO', A.BYTES/(1024*1024),GREATEST (a.MAXBYTES/(1024*1024),A.BYTES/(1024*1024)))))) USED_PCT
from
    dba_data_files a,
    (
        SELECT
            d.tablespace_name ,sum(nvl(c.bytes,0)) Free
        FROM
            dba_tablespaces d,
            DBA_FREE_SPACE c
        WHERE
            d.tablespace_name = c.tablespace_name(+)
            --AND d.contents='PERMANENT'
            --AND d.status='ONLINE'
            group by d.tablespace_name
    ) c
WHERE
    a.tablespace_name = c.tablespace_name
GROUP BY a.tablespace_name, c.Free/1024
order by 6;


To check Growth rate of Tablespaces

SELECT TO_CHAR (sp.begin_interval_time,'DD-MM-YYYY') days,
 ts.tsname , max(round((tsu.tablespace_size* dt.block_size )/(1024*1024),2) ) cur_size_MB,
 max(round((tsu.tablespace_usedsize* dt.block_size )/(1024*1024),2)) usedsize_MB
 FROM DBA_HIST_TBSPC_SPACE_USAGE tsu, DBA_HIST_TABLESPACE_STAT ts, DBA_HIST_SNAPSHOT sp,
 DBA_TABLESPACES dt
 WHERE tsu.tablespace_id= ts.ts# AND tsu.snap_id = sp.snap_id
 AND ts.tsname = dt.tablespace_name AND ts.tsname NOT IN ('SYSAUX','SYSTEM')
 GROUP BY TO_CHAR (sp.begin_interval_time,'DD-MM-YYYY'), ts.tsname
 ORDER BY ts.tsname, days;


Which schemas are taking how much space:

Select obj.owner "Owner", obj_cnt "Objects", decode(seg_size, NULL, 0, seg_size) "size MB"
from (select owner, count(*) obj_cnt from dba_objects group by owner) obj,
 (select owner, ceil(sum(bytes)/1024/1024) seg_size  from dba_segments group by owner) seg
  where obj.owner  = seg.owner(+)
  order    by 3 desc ,2 desc, 1;


To Check Default Temporary Tablespace Name:

Select * from database_properties where PROPERTY_NAME like '%DEFAULT%';


To know the default and temporary tablespace for a particular user:

Select username,temporary_tablespace,default_tablespace from dba_users where username='HRMS';


To know Default Tablespace for All User:

Select default_tablespace,temporary_tablespace,username from dba_users;

To Check Datafiles used and Free Space:

SELECT SUBSTR (df.NAME, 1, 40) file_name,dfs.tablespace_name, df.bytes / 1024 / 1024 allocated_mb,
((df.bytes / 1024 / 1024) -  NVL (SUM (dfs.bytes) / 1024 / 1024, 0)) used_mb,
NVL (SUM (dfs.bytes) / 1024 / 1024, 0) free_space_mb
FROM v$datafile df, dba_free_space dfs
WHERE df.file# = dfs.file_id(+)
GROUP BY dfs.file_id, df.NAME, df.file#, df.bytes,dfs.tablespace_name
ORDER BY file_name;


To check Used free space in Temporary Tablespace:

SELECT tablespace_name, SUM(bytes_used/1024/1024) USED, SUM(bytes_free/1024/1024) FREE
FROM   V$temp_space_header GROUP  BY tablespace_name;
SELECT   A.tablespace_name tablespace, D.mb_total,
         SUM (A.used_blocks * D.block_size) / 1024 / 1024 mb_used,
         D.mb_total - SUM (A.used_blocks * D.block_size) / 1024 / 1024 mb_free
FROM     v$sort_segment A,
         ( SELECT   B.name, C.block_size, SUM (C.bytes) / 1024 / 1024 mb_total
         FROM     v$tablespace B, v$tempfile C
         WHERE    B.ts#= C.ts#
         GROUP BY B.name, C.block_size
         ) D
WHERE    A.tablespace_name = D.name
GROUP by A.tablespace_name, D.mb_total;

Who is using which UNDO or TEMP segment:

SELECT TO_CHAR(s.sid)||','||TO_CHAR(s.serial#) sid_serial,
NVL(s.username, 'None') orauser,s.program, r.name undoseg,
t.used_ublk * TO_NUMBER(x.value)/1024||'K' "Undo"
FROM sys.v_$rollname r, sys.v_$session s, sys.v_$transaction t, sys.v_$parameter   x
WHERE s.taddr = t.addr AND r.usn   = t.xidusn(+) AND x.name  = 'db_block_size';

Who is using the Temp Segment:

SELECT b.tablespace, ROUND(((b.blocks*p.value)/1024/1024),2)||'M' "SIZE",
a.sid||','||a.serial# SID_SERIAL, a.username, a.program
FROM sys.v_$session a,
sys.v_$sort_usage b, sys.v_$parameter p
WHERE p.name  = 'db_block_size' AND a.saddr = b.session_addr
ORDER BY b.tablespace, b.blocks;

Total Size and Free Size of Database:

Select round(sum(used.bytes) / 1024 / 1024/1024 ) || ' GB' "Database Size",
round(free.p / 1024 / 1024/1024) || ' GB' "Free space"
from (select bytes from v$datafile
      union all
      select bytes from v$tempfile
      union all
      select bytes from v$log) used,
(select sum(bytes) as p from dba_free_space) free
group by free.p;


To find used space of datafiles:

SELECT SUM(bytes)/1024/1024/1024 "GB" FROM dba_segments;

IO status of all of the datafiles in database:

WITH total_io AS
     (SELECT SUM (phyrds + phywrts) sum_io
        FROM v$filestat)
SELECT   NAME, phyrds, phywrts, ((phyrds + phywrts) / c.sum_io) * 100 PERCENT,
         phyblkrd, (phyblkrd / GREATEST (phyrds, 1)) ratio
    FROM SYS.v_$filestat a, SYS.v_$dbfile b, total_io c
   WHERE a.file# = b.file#
ORDER BY a.file#;

Displays the smallest size the datafiles can shrink to without a reorganization:

SELECT a.tablespace_name, a.file_name, a.bytes AS current_bytes, a.bytes - b.resize_to AS shrink_by_bytes, b.resize_to AS resize_to_bytes
FROM   dba_data_files a, (SELECT file_id, MAX((block_id+blocks-1)*&v_block_size) AS resize_to
        FROM   dba_extents
        GROUP by file_id) b
        WHERE  a.file_id = b.file_id
        ORDER BY a.tablespace_name, a.file_name;

Scripts to Find datafiles increment details:

Select  SUBSTR(fn.name,1,DECODE(INSTR(fn.name,'/',2),0,INSTR(fn.name,':',1),INSTR(fn.name,'/',2))) mount_point,tn.name   tabsp_name,fn.name   file_name,
ddf.bytes/1024/1024 cur_size, decode(fex.maxextend,
NULL,ddf.bytes/1024/1024,fex.maxextend*tn.blocksize/1024/1024) max_size,
nvl(fex.maxextend,0)*tn.blocksize/1024/1024 - decode(fex.maxextend,NULL,0,ddf.bytes/1024/1024)   unallocated,nvl(fex.inc,0)*tn.blocksize/1024/1024  inc_by
from  sys.v_$dbfile fn,    sys.ts$  tn,    sys.filext$ fex,    sys.file$  ft,    dba_data_files ddf
where    fn.file# = ft.file# and  fn.file# = ddf.file_id
and    tn.ts# = ft.ts# and    fn.file# = fex.file#(+)
order by 1;


Saturday, March 14, 2020

Lag in an Active Data Guard Configuration



Oracle 11g introduces a new Data Guard feature called Active Data Guard. With Active Data Guard we can run real-time queries on the standby database and gain benefits by offloading read-only workload to it.

To know more about Active Data Guard, see Oracle Active Data Guard Overview and Architecture.

However, a standby database configured with real-time apply can lag behind the primary database due to hardware or network issues.

There could be the following reasons:

1.Insufficient CPU capacity: The standby database is not able to apply redo data as quickly as it receives it from the primary database.

2.High network latency: The network link between the standby and primary databases suffers from high latency.

3.Limited bandwidth: This may prevent the primary database from shipping redo as quickly as it is generated, particularly during periods of peak workload.


Oracle Database 11g Release 2 (11.2) includes features to enable you to determine the lag time and take appropriate action.

We can enable a tolerance level for data staleness by configuring a maximum value for apply lag. Query results are returned to the application if the lag is within the acceptable tolerance level, otherwise, an error results.
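
A minimal sketch of setting that tolerance on an Active Data Guard standby (the 30-second value is only an example); sessions querying the standby then receive an error instead of stale results whenever the apply lag exceeds the limit, and setting the parameter to NONE removes the check:

SQL> ALTER SESSION SET STANDBY_MAX_DATA_DELAY=30;
SQL> ALTER SESSION SET STANDBY_MAX_DATA_DELAY=NONE;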

SQL> SELECT NAME,VALUE,DATUM_TIME,TIME_COMPUTED FROM V$DATAGUARD_STATS WHERE UPPER(NAME)='APPLY LAG';

NAME VALUE   DATUM_TIME             TIME_COMPUTED
-------------------------------- ------------------------ ------------------------  ---------------------
apply lag +00 00:00:00   05/06/2018 22:46:06     05/06/2018 22:46:08

Apply Lag: This is the difference, in elapsed time, between when the last applied change became visible on the
standby and when that same change was first visible on the primary.

In our case the reported apply lag is zero; the 2-second gap between DATUM_TIME and TIME_COMPUTED only indicates how old the lag computation itself is.

Note: This lag should be less than 30 seconds.

We can monitor apply lag using the V$STANDBY_EVENT_HISTOGRAM view, which shows a histogram of the apply lag on a physical standby database.

SQL> SELECT * FROM V$STANDBY_EVENT_HISTOGRAM WHERE UPPER(NAME)='APPLY LAG';

NAME        TIME UNIT   COUNT     LAST_TIME_UPDATED
----------------- ----------------------------   ---------- ----------------
apply lag   0 seconds    5874     05/06/2018 22:53:53
apply lag           1 seconds       1     05/06/2018 21:37:08
apply lag           2 seconds       1     05/06/2018 22:50:23

Use the histogram to focus on periods of time when the apply lag exceeds desired levels. Determine the cause of the lag during those time periods and take steps to resolve the excessive lag.


Dataguard commands and SQL scripts




PHYSICAL STANDBY COMMANDS:


To start redo apply in foreground:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE;

To stop redo apply process on the Standby database (to stop MRP):

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

To start real-time redo apply:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;

To start redo apply in background:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
or
SQL> RECOVER MANAGED STANDBY DATABASE DISCONNECT;

To check redo apply and Media recovery service status:

SQL> SELECT PROCESS,STATUS, THREAD#,SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY ;

If managed standby recovery is not running or not started with real-time apply, restart managed recovery with real-time apply enabled:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;

To gather Data Guard configuration information (standby):

SQL> SELECT DATABASE_ROLE,OPEN_MODE, PROTECTION_MODE FROM V$DATABASE;

DATABASE_ROLE OPEN_MODE PROTECTION_MODE
---------------- -------------------- --------------------
PHYSICAL STANDBY MOUNTED MAXIMUM PERFORMANCE

SQL> SELECT RECOVERY_MODE FROM V$ARCHIVE_DEST_STATUS WHERE RECOVERY_MODE!='IDLE';

RECOVERY_MODE
-----------------------
MANAGED REAL TIME APPLY

To calculate the Redo bytes per second

SQL> SELECT SUM (BLOCKS * BLOCK_SIZE) / 1024 / 1024 / 60 / 60 / 30 REDO_MB_PER_SEC
 FROM GV$ARCHIVED_LOG
 WHERE FIRST_TIME BETWEEN TO_DATE ('01.05.2016', 'DD.MM.YYYY')
 AND TO_DATE ('01.06.2016', 'DD.MM.YYYY');

To check status of Data Guard synchronization(standby):

SQL> SELECT NAME, VALUE FROM V$DATAGUARD_STATS;

NAME VALUE
--------------------- -------------------------------
transport lag          +00 00:00:00
apply lag              +00 00:00:00
apply finish time      +00 00:00:00.000
estimated startup time 32

To verify there is no log file gap between the primary and the standby database:

SQL> SELECT STATUS, GAP_STATUS FROM V$ARCHIVE_DEST_STATUS WHERE DEST_ID = 3;

STATUS GAP_STATUS
--------- ------------------------
VALID NO GAP

To verify that the primary database can be switched to the standby role:

A value of TO STANDBY or SESSIONS ACTIVE indicates that the primary database can be switched to the standby role. If neither of these values is returned, a switchover is not possible because redo transport is either misconfigured or is not functioning properly.

SQL> SELECT SWITCHOVER_STATUS FROM V$DATABASE;

SWITCHOVER_STATUS
--------------------
TO STANDBY

You can use the VERIFY option to verify the switchover.

This command writes warnings to the alert log file which you can check before the switchover.

SQL> ALTER DATABASE SWITCHOVER to STBY_DB_SID VERIFY;

To convert the primary database into a physical standby :

Before the switchover, the current control file is backed up to the current SQL session trace file, so it is possible to reconstruct a current control file if necessary.

SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

To verify Managed Recovery is running on the standby :

SQL> SELECT PROCESS FROM V$MANAGED_STANDBY WHERE PROCESS LIKE 'MRP%';

PROCESS
---------
MRP0

To show information about the protection mode, the protection level, the role of the database, and switchover status:

SQL> SELECT DATABASE_ROLE, DB_UNIQUE_NAME INSTANCE, OPEN_MODE, PROTECTION_MODE, PROTECTION_LEVEL, SWITCHOVER_STATUS FROM V$DATABASE;

DATABASE_ROLE     INSTANCE    OPEN_MODE    PROTECTION_MODE     PROTECTION_LEVEL     SWITCHOVER_STATUS
---------------- ---------- ------------ -------------------- -------------------- -------------------- --------------------
PRIMARY           TESTCDB    READ WRITE    MAXIMUM PERFORMANCE MAXIMUM PERFORMANCE   TO STANDBY

On the standby database, query the V$ARCHIVED_LOG view to identify existing files in the archived redo log.

SQL> SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;
Or
SQL> SELECT THREAD#, MAX(SEQUENCE#) AS "LAST_APPLIED_LOG" FROM V$LOG_HISTORY GROUP BY THREAD#;

On the standby database, query the V$ARCHIVED_LOG view to verify the archived redo log files were applied.

SELECT SEQUENCE#,APPLIED FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;

To determine which log files were not received by the standby site.

SQL> SELECT LOCAL.THREAD#, LOCAL.SEQUENCE#
FROM (SELECT THREAD#, SEQUENCE#
FROM V$ARCHIVED_LOG
WHERE DEST_ID = 1) LOCAL
WHERE LOCAL.SEQUENCE# NOT IN (SELECT SEQUENCE#
FROM V$ARCHIVED_LOG
WHERE DEST_ID = 2 AND THREAD# = LOCAL.THREAD#);


Archivelog difference: Run this on the primary database. (not for real-time apply):

ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YYYY HH24:MI:SS';

SELECT A.THREAD#,
 B.LAST_SEQ,
 A.APPLIED_SEQ,
 A.LAST_APP_TIMESTAMP,
 B.LAST_SEQ - A.APPLIED_SEQ ARC_DIFF
 FROM ( SELECT THREAD#,
 MAX (SEQUENCE#) APPLIED_SEQ,
 MAX (NEXT_TIME) LAST_APP_TIMESTAMP
 FROM GV$ARCHIVED_LOG
 WHERE APPLIED = 'YES'
 GROUP BY THREAD#) A,
 ( SELECT THREAD#, MAX (SEQUENCE#) LAST_SEQ
 FROM GV$ARCHIVED_LOG
 GROUP BY THREAD#) B
 WHERE A.THREAD# = B.THREAD#;

 THREAD#   LAST_SEQ    APPLIED_SEQ  LAST_APP_TIMESTAMP   ARC_DIFF
---------- ---------- -----------   --------------------- ----------
 1         21282      21281         09-IYUL-2016 12:06:5     1
 2         23747      23746         09-IYUL-2016 12:16:13    1

To check archive log apply on primary database:

SQL> SET LINESIZE 150
SET PAGESIZE 999
COL NAME FORMAT A60
COL DEST_TYPE FORMAT A10
COL ARCHIVED FORMAT A10
COL APPLIED FORMAT A10

SELECT SEQUENCE#,
NAME,
DEST_ID ,
CASE WHEN STANDBY_DEST = 'YES' THEN 'Standby' ELSE 'Local' END
AS DEST_TYPE ,
ARCHIVED ,
APPLIED
FROM V$ARCHIVED_LOG
WHERE SEQUENCE# > (SELECT MAX (SEQUENCE#)
FROM V$ARCHIVED_LOG
WHERE STANDBY_DEST = 'YES' AND APPLIED = 'YES')
ORDER BY SEQUENCE# , DEST_ID ;


 SEQUENCE#  NAME                                                          DEST_ID  DEST_TYPE  ARCHIVED APPLIED
---------- -------------------------------------------------------------- -------  ---------- -------- --------
 23748      +FRA/TESTCDB/ARCHIVELOG/2016_07_09/thread_2_seq_23748.10041.9   1      Local        YES       NO
 23748      +DATA/TESTCDB/ARCHIVELOG/2016_07_09/thread_2_seq_23748.10062.   2      Local        YES       NO
 23748      TESTSTB                                                         3      Standby      YES       NO



LOGICAL STANDBY COMMANDS:


To Restart SQL apply on logical standby

SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;

To Stop SQL apply on logical standby

SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;

Run the following SQL against the logical standby to start real-time SQL apply if the SQL apply failed with an error, and you are 100% certain that the transaction is safe to skip

SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE SKIP FAILED TRANSACTION;

To see unsupported tables for logical standby:

SQL> SELECT * FROM DBA_LOGSTDBY_UNSUPPORTED_TABLE ORDER BY OWNER, TABLE_NAME;

To know which archive log sequences are at what stage for the logical standby:

SQL> SELECT 'RESTART' "TYPE",
 P.RESTART_SCN "SCN",
 TO_CHAR (P.RESTART_TIME, 'yyyy/mm/dd hh24:mi:ss') "TIME",
 L.SEQUENCE# "SEQ#"
 FROM V$LOGSTDBY_PROGRESS P, DBA_LOGSTDBY_LOG L
 WHERE P.RESTART_SCN >= L.FIRST_CHANGE# AND P.RESTART_SCN < L.NEXT_CHANGE#
UNION
SELECT 'RESTART',
 P.RESTART_SCN,
 TO_CHAR (P.RESTART_TIME, 'yyyy/mm/dd hh24:mi:ss'),
 L.SEQUENCE#
 FROM V$LOGSTDBY_PROGRESS P, V$STANDBY_LOG L
 WHERE P.RESTART_SCN >= L.FIRST_CHANGE# AND P.LATEST_SCN <= L.LAST_CHANGE#
UNION
SELECT 'APPLIED',
 P.APPLIED_SCN,
 TO_CHAR (P.APPLIED_TIME, 'yyyy/mm/dd hh24:mi:ss'),
 L.SEQUENCE#
 FROM V$LOGSTDBY_PROGRESS P, DBA_LOGSTDBY_LOG L
 WHERE P.APPLIED_SCN >= L.FIRST_CHANGE# AND P.APPLIED_SCN < L.NEXT_CHANGE#
UNION
SELECT 'APPLIED',
 P.APPLIED_SCN,
 TO_CHAR (P.APPLIED_TIME, 'yyyy/mm/dd hh24:mi:ss'),
 L.SEQUENCE#
 FROM V$LOGSTDBY_PROGRESS P, V$STANDBY_LOG L
 WHERE P.APPLIED_SCN >= L.FIRST_CHANGE# AND P.LATEST_SCN <= L.LAST_CHANGE#
UNION
SELECT 'MINING',
 P.MINING_SCN,
 TO_CHAR (P.MINING_TIME, 'yyyy/mm/dd hh24:mi:ss'),
 L.SEQUENCE#
 FROM V$LOGSTDBY_PROGRESS P, DBA_LOGSTDBY_LOG L
 WHERE P.MINING_SCN >= L.FIRST_CHANGE# AND P.MINING_SCN < L.NEXT_CHANGE#
UNION
SELECT 'MINING',
 P.MINING_SCN,
 TO_CHAR (P.MINING_TIME, 'yyyy/mm/dd hh24:mi:ss'),
 L.SEQUENCE#
 FROM V$LOGSTDBY_PROGRESS P, V$STANDBY_LOG L
 WHERE P.MINING_SCN >= L.FIRST_CHANGE# AND P.LATEST_SCN <= L.LAST_CHANGE#
UNION
SELECT 'SHIPPED',
 P.LATEST_SCN,
 TO_CHAR (P.LATEST_TIME, 'yyyy/mm/dd hh24:mi:ss'),
 L.SEQUENCE#
 FROM V$LOGSTDBY_PROGRESS P, DBA_LOGSTDBY_LOG L
 WHERE P.LATEST_SCN >= L.FIRST_CHANGE# AND P.LATEST_SCN < L.NEXT_CHANGE#
UNION
SELECT 'SHIPPED',
 P.LATEST_SCN,
 TO_CHAR (P.LATEST_TIME, 'yyyy/mm/dd hh24:mi:ss'),
 L.SEQUENCE#
 FROM V$LOGSTDBY_PROGRESS P, V$STANDBY_LOG L
 WHERE P.LATEST_SCN >= L.FIRST_CHANGE# AND P.LATEST_SCN <= L.LAST_CHANGE#;

To know whether SQL Apply is up to date:

SQL> SELECT TO_CHAR(LATEST_TIME,'yyyy/mm/dd hh24:mi:ss') "LATEST_TIME",
TO_CHAR(APPLIED_TIME,'yyyy/mm/dd hh24:mi:ss') "APPLIED_TIME",
APPLIED_SCN, LATEST_SCN
FROM V$LOGSTDBY_PROGRESS;

To know whether the logical standby is applying changes, run the following SQL against the logical standby database:

SQL> SELECT REALTIME_APPLY, STATE FROM V$LOGSTDBY_STATE;

If the value of STATE is NULL or "SQL APPLY NOT ON" then SQL Apply is not running. The value of REALTIME_APPLY should be Y to allow real-time apply from the standby redo logs. To know what major SQL Apply events have occurred, run the following SQL against the logical standby database:

SQL> SELECT TO_CHAR (EVENT_TIME, 'YYYY/MM/DD HH24:MI:SS') "EVENT_TIME",
STATUS, EVENT
FROM DBA_LOGSTDBY_EVENTS
ORDER BY EVENT_TIME;

To know what major Dataguard events have occurred, run the following SQL against the Logical standby database:

SQL> SELECT TO_CHAR (TIMESTAMP, 'yyyy/mm/dd hh24:mi:ss') "TIME",
ERROR_CODE "ERROR", DEST_ID "DEST", MESSAGE
FROM V$DATAGUARD_STATUS
WHERE timestamp > TRUNC (SYSDATE + 6 / 24)
ORDER BY timestamp DESC;

To know where the archive logs are going and whether there are any archiving issues, run the following SQL against either the logical standby or the primary database:

SQL> SELECT DEST_ID "DID",
STATUS, DESTINATION, ARCHIVER, VALID_NOW, VALID_TYPE, VALID_ROLE, ERROR
FROM V$ARCHIVE_DEST
WHERE STATUS <> 'INACTIVE';

How to Enable Active Dataguard In Physical Standby Database



In Active Data Guard the standby database is open in read-only mode, even while redo logs are being applied in real time.

Below are the benefits of using Active Data Guard.

-Reporting queries can be offloaded to the standby database.
-Physical block corruption is repaired automatically at either the primary or the physical standby database.
-RMAN backups can be initiated from the standby instead of the primary, which reduces CPU load on the primary.

NOTE: To use Active Data Guard, you need an additional license from Oracle.

1. Check the database status PHYSICAL STANDBY.

SQL> select name, open_mode, database_role, INSTANCE_NAME from v$database,v$instance;

NAME      OPEN_MODE            DATABASE_ROLE    INSTANCE_NAME
--------- -------------------- ---------------- ----------------
ORCLNEW   MOUNTED              PHYSICAL STANDBY ORCLNEWQ

2. Cancel the media recovery on PHYSICAL STANDBY.

SQL> alter database recover managed standby database cancel;

3. Open the database PHYSICAL STANDBY.

SQL> alter database open;

4. Start media recovery with real-time log apply PHYSICAL STANDBY.

SQL> alter database recover managed standby database using current logfile disconnect from session;

Database altered.

5. Check the database status PHYSICAL STANDBY

SQL> select name, open_mode, database_role, INSTANCE_NAME from v$database,v$instance;

NAME      OPEN_MODE            DATABASE_ROLE    INSTANCE_NAME
--------- -------------------- ---------------- ----------------
ORCLNEW   READ ONLY WITH APPLY PHYSICAL STANDBY ORCLNEWQ



Monday, March 2, 2020

How to generate SET NEWNAME commands for all datafiles


select 'set newname for datafile '||file#||' to NEW;' from v$datafile;
-- select 'set newname for datafile '||file#||' to /newpath/NEW;' from v$datafile;
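
The generated lines are meant to be pasted into an RMAN RUN block; a minimal sketch of a backup-based restore, shown here only to illustrate where the SET NEWNAME commands belong:

run {
  set newname for datafile 1 to new;
  set newname for datafile 2 to new;
  # ... one line per datafile, as generated by the query above
  restore database;
  switch datafile all;
  recover database;
}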

Check Last Incarnation Restore Time of Oracle Database




SQL> set linesize 1000
SQL> set pagesize 1000
SQL> SELECT incarnation#, resetlogs_change#, TO_CHAR(resetlogs_time, 'dd-mon-yyyy hh24:mi:ss') db_restored_time, scn_to_timestamp(resetlogs_change#) db_recovery_till_time
     FROM v$database_incarnation
     WHERE resetlogs_change# != (SELECT MIN(resetlogs_change#) FROM v$database_incarnation);

INCARNATION# RESETLOGS_CHANGE# DB_RESTORED_TIME     DB_RECOVERY_TILL_TIME
------------ ----------------- -------------------- ---------------------------------------------------------------------------
2           1082432 03-mar-2020 02:09:27 03-MAR-20 12.05.55.000000000 AM


SQL> select incarnation#, resetlogs_change# from v$database_incarnation;

INCARNATION# RESETLOGS_CHANGE#
------------ -----------------
1           1069685
2           1082432


To check from RMAN level:

[oracle@rac2 dbs]$ rman target /
Recovery Manager: Release 11.2.0.3.0 - Production on Tue Mar 3 02:16:17 2020
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
connected to target database: QTEST (DBID=2573381335)

RMAN> list incarnation of database;
using target database control file instead of recovery catalog

List of Database Incarnations
DB Key  Inc Key DB Name  DB ID            STATUS  Reset SCN  Reset Time
------- ------- -------- ---------------- --- ---------- ----------
1       1       QTEST    2573381335       PARENT  1069685    28-FEB-20
2       2       QTEST    2573381335       CURRENT 1082432    03-MAR-20

Backup-Based Duplication Database with RMAN



RMAN creates the duplicate database by using pre-existing RMAN backups and copies. RMAN can perform backup-based duplication with or without either of the following connections:

* Target
* Recovery catalog

How RMAN duplicates a database:

As a part of the duplicating operation, RMAN automates the following steps:-

1) Creates a default server parameter file for the auxiliary instance.
2) Mounts the restored or copied backup control file from the active database.
3) Uses the RMAN repository to select the backups for restoring the data files to the auxiliary instance.
4) Restores and copies the duplicate data files and recovers them with incremental backups and archived redo log files to a noncurrent point in time.
5) Shuts down and restarts the database instance in NOMOUNT mode.
6) Creates a new control file, which then creates and stores the new DBID in the data files.
7) Opens the duplicate database with the RESETLOGS option and creates the online redo log for the new database.

Actual Steps:

1) Create a backup of the source database:

$ rman target /

RMAN> backup AS COMPRESSED BACKUPSET full database tag ORCL_FULL format '/u01/backup/full%u' ;
RMAN> backup current controlfile format '/u01/backup/ctl%u';
RMAN> backup tag ORCL_ARCHIVE format '/u01/backup/%d_%T_%s_%p_ARCHIVE' archivelog all;
RMAN> backup spfile format '/u01/backup/spfile%u';

2) Transfer the backup from source to target:

scp -r * rac2:/u01/backup

3) Create a password file for the duplicate instance (on both the source and the target side).

cd $ORACLE_HOME/dbs
orapwd file=orapw$ORACLE_SID password=oracle force=y

Note: The Oracle orapwd command-line utility creates the password file, which allows SYSDBA and SYSOPER privileges to be granted to other users.

4) Prepare a pfile and add only two parameters:

vi initQTEST.ora

db_name=QTEST
audit_file_dest=/u01/app/oracle/admin/QPROD/adump/

5) Startup the database in nomount with a pfile.

SQL> startup nomount pfile='initQTEST.ora';
ORACLE instance started.
Total System Global Area  238034944 bytes
Fixed Size                  2227136 bytes
Variable Size             180356160 bytes
Database Buffers           50331648 bytes
Redo Buffers                5120000 bytes
SQL> host

6) Connect to RMAN with the auxiliary connection.

[oracle@rac2 dbs]$ rman auxiliary /
Recovery Manager: Release 11.2.0.3.0 - Production on Tue Mar 3 00:22:39 2020
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
connected to auxiliary database: QTEST (not mounted)

RMAN> DUPLICATE DATABASE TO QTEST
SPFILE set control_files='/u01/QTEST/c1.ctl' set db_file_name_convert='/u09/QPROD','/u01/QTEST' set log_file_name_convert='/u09/QPROD','/u01/QTEST'
BACKUP LOCATION '/u01/backup';
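
The same command laid out clause by clause, which may be easier to read; if it is run from an RMAN command file, # comments like the ones below are accepted (paths as in this example):

# duplicate.rman -- run as: rman auxiliary / cmdfile=duplicate.rman
DUPLICATE DATABASE TO QTEST
  SPFILE
    # control file location for the clone
    SET control_files='/u01/QTEST/c1.ctl'
    # translate source datafile paths into the clone's paths
    SET db_file_name_convert='/u09/QPROD','/u01/QTEST'
    # translate source online redo log paths into the clone's paths
    SET log_file_name_convert='/u09/QPROD','/u01/QTEST'
  # no target or catalog connection, so point RMAN at the copied backup pieces
  BACKUP LOCATION '/u01/backup';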

RMAN Duplicate Output:

Starting Duplicate Db at 03-MAR-20
contents of Memory Script:
{
   restore clone spfile to  '/u01/app/oracle/product/11.2.0/dbhome_1/dbs/spfileQTEST.ora' from
 '/u01/backup/spfile1iuq48k6';
   sql clone "alter system set spfile= ''/u01/app/oracle/product/11.2.0/dbhome_1/dbs/spfileQTEST.ora''";
}
executing Memory Script
Starting restore at 03-MAR-20
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=20 device type=DISK
channel ORA_AUX_DISK_1: restoring spfile from AUTOBACKUP /u01/backup/spfile1iuq48k6
channel ORA_AUX_DISK_1: SPFILE restore from AUTOBACKUP complete
Finished restore at 03-MAR-20

sql statement: alter system set spfile= ''/u01/app/oracle/product/11.2.0/dbhome_1/dbs/spfileQTEST.ora''

contents of Memory Script:
{
   sql clone "alter system set  db_name =
 ''QTEST'' comment=
 ''duplicate'' scope=spfile";
   sql clone "alter system set  control_files =
 ''/u01/QTEST/c1.ctl'' comment=
 '''' scope=spfile";
   sql clone "alter system set  db_file_name_convert =
 ''/u09/QPROD'', ''/u01/QTEST'' comment=
 '''' scope=spfile";
   sql clone "alter system set  log_file_name_convert =
 ''/u09/QPROD'', ''/u01/QTEST'' comment=
 '''' scope=spfile";
   shutdown clone immediate;
   startup clone nomount;
}
executing Memory Script
sql statement: alter system set  db_name =  ''QTEST'' comment= ''duplicate'' scope=spfile
sql statement: alter system set  control_files =  ''/u01/QTEST/c1.ctl'' comment= '''' scope=spfile
sql statement: alter system set  db_file_name_convert =  ''/u09/QPROD'', ''/u01/QTEST'' comment= '''' scope=spfile
sql statement: alter system set  log_file_name_convert =  ''/u09/QPROD'', ''/u01/QTEST'' comment= '''' scope=spfile
Oracle instance shut down
connected to auxiliary database (not started)
Oracle instance started
Total System Global Area    2689060864 bytes
Fixed Size                     2231392 bytes
Variable Size                587203488 bytes
Database Buffers            2097152000 bytes
Redo Buffers                   2473984 bytes
contents of Memory Script:
{
   sql clone "alter system set  db_name =
 ''QPROD'' comment=
 ''Modified by RMAN duplicate'' scope=spfile";
   sql clone "alter system set  db_unique_name =
 ''QTEST'' comment=
 ''Modified by RMAN duplicate'' scope=spfile";
   shutdown clone immediate;
   startup clone force nomount
   restore clone primary controlfile from  '/u01/backup/ctl1fuq48c8';
   alter clone database mount;
}
executing Memory Script
sql statement: alter system set  db_name =  ''QPROD'' comment= ''Modified by RMAN duplicate'' scope=spfile
sql statement: alter system set  db_unique_name =  ''QTEST'' comment= ''Modified by RMAN duplicate'' scope=spfile
Oracle instance shut down
Oracle instance started
Total System Global Area    2689060864 bytes
Fixed Size                     2231392 bytes
Variable Size                587203488 bytes
Database Buffers            2097152000 bytes
Redo Buffers                   2473984 bytes

Starting restore at 03-MAR-20
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=19 device type=DISK
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/u01/QTEST/c1.ctl
Finished restore at 03-MAR-20

database mounted
released channel: ORA_AUX_DISK_1
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=19 device type=DISK

contents of Memory Script:
{
   set until scn  1082431;
   set newname for datafile  1 to
 "/u01/QTEST/system01.dbf";
   set newname for datafile  2 to
 "/u01/QTEST/sysaux01.dbf";
   set newname for datafile  3 to
 "/u01/QTEST/undotbs01.dbf";
   set newname for datafile  4 to
 "/u01/QTEST/users01.dbf";
   restore
   clone database
   ;
}
executing Memory Script
executing command: SET until clause
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
Starting restore at 03-MAR-20
using channel ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to /u01/QTEST/system01.dbf
channel ORA_AUX_DISK_1: restoring datafile 00002 to /u01/QTEST/sysaux01.dbf
channel ORA_AUX_DISK_1: restoring datafile 00003 to /u01/QTEST/undotbs01.dbf
channel ORA_AUX_DISK_1: restoring datafile 00004 to /u01/QTEST/users01.dbf
channel ORA_AUX_DISK_1: reading from backup piece /u01/backup/full1euq48b5
channel ORA_AUX_DISK_1: piece handle=/u01/backup/full1euq48b5 tag=ORCL_FULL
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:55
Finished restore at 03-MAR-20
contents of Memory Script:
{
   switch clone datafile all;
}
executing Memory Script
datafile 1 switched to datafile copy
input datafile copy RECID=5 STAMP=1034036763 file name=/u01/QTEST/system01.dbf
datafile 2 switched to datafile copy
input datafile copy RECID=6 STAMP=1034036763 file name=/u01/QTEST/sysaux01.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=7 STAMP=1034036763 file name=/u01/QTEST/undotbs01.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=8 STAMP=1034036763 file name=/u01/QTEST/users01.dbf
contents of Memory Script:
{
   set until scn  1082431;
   recover
   clone database
    delete archivelog
   ;
}
executing Memory Script
executing command: SET until clause
Starting recover at 03-MAR-20
using channel ORA_AUX_DISK_1
starting media recovery
channel ORA_AUX_DISK_1: starting archived log restore to default destination
channel ORA_AUX_DISK_1: restoring archived log
archived log thread=1 sequence=9
channel ORA_AUX_DISK_1: reading from backup piece /u01/backup/QPROD_20200303_48_1_ARCHIVE
channel ORA_AUX_DISK_1: piece handle=/u01/backup/QPROD_20200303_48_1_ARCHIVE tag=ORCL_ARCHIVE
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
archived log file name=/u01/app/oracle/fast_recovery_area/QTEST/archivelog/2020_03_03/o1_mf_1_9_h5top40c_.arc thread=1 sequence=9
channel clone_default: deleting archived log(s)
archived log file name=/u01/app/oracle/fast_recovery_area/QTEST/archivelog/2020_03_03/o1_mf_1_9_h5top40c_.arc RECID=1 STAMP=1034036764
media recovery complete, elapsed time: 00:00:00
Finished recover at 03-MAR-20
Oracle instance started
Total System Global Area    2689060864 bytes
Fixed Size                     2231392 bytes
Variable Size                587203488 bytes
Database Buffers            2097152000 bytes
Redo Buffers                   2473984 bytes
contents of Memory Script:
{
   sql clone "alter system set  db_name =
 ''QTEST'' comment=
 ''Reset to original value by RMAN'' scope=spfile";
   sql clone "alter system reset  db_unique_name scope=spfile";
   shutdown clone immediate;
   startup clone nomount;
}
executing Memory Script
sql statement: alter system set  db_name =  ''QTEST'' comment= ''Reset to original value by RMAN'' scope=spfile
sql statement: alter system reset  db_unique_name scope=spfile
Oracle instance shut down
connected to auxiliary database (not started)
Oracle instance started
Total System Global Area    2689060864 bytes
Fixed Size                     2231392 bytes
Variable Size                587203488 bytes
Database Buffers            2097152000 bytes
Redo Buffers                   2473984 bytes
sql statement: CREATE CONTROLFILE REUSE SET DATABASE "QTEST" RESETLOGS ARCHIVELOG
  MAXLOGFILES     16
  MAXLOGMEMBERS      3
  MAXDATAFILES      100
  MAXINSTANCES     8
  MAXLOGHISTORY      292
 LOGFILE
  GROUP   1 ( '/u01/QTEST/redo01.log' ) SIZE 50 M  REUSE,
  GROUP   2 ( '/u01/QTEST/redo02.log' ) SIZE 50 M  REUSE,
  GROUP   3 ( '/u01/QTEST/redo03.log' ) SIZE 50 M  REUSE
 DATAFILE
  '/u01/QTEST/system01.dbf'
 CHARACTER SET WE8MSWIN1252
contents of Memory Script:
{
   set newname for tempfile  1 to
 "/u01/QTEST/temp01.dbf";
   switch clone tempfile all;
   catalog clone datafilecopy  "/u01/QTEST/sysaux01.dbf",
 "/u01/QTEST/undotbs01.dbf",
 "/u01/QTEST/users01.dbf";
   switch clone datafile all;
}
executing Memory Script
executing command: SET NEWNAME
renamed tempfile 1 to /u01/QTEST/temp01.dbf in control file
cataloged datafile copy
datafile copy file name=/u01/QTEST/sysaux01.dbf RECID=1 STAMP=1034036771
cataloged datafile copy
datafile copy file name=/u01/QTEST/undotbs01.dbf RECID=2 STAMP=1034036771
cataloged datafile copy
datafile copy file name=/u01/QTEST/users01.dbf RECID=3 STAMP=1034036771

datafile 2 switched to datafile copy
input datafile copy RECID=1 STAMP=1034036771 file name=/u01/QTEST/sysaux01.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=2 STAMP=1034036771 file name=/u01/QTEST/undotbs01.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=3 STAMP=1034036771 file name=/u01/QTEST/users01.dbf

contents of Memory Script:
{
   Alter clone database open resetlogs;
}
executing Memory Script
database opened
Finished Duplicate Db at 03-MAR-20
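
Once the duplicate finishes, a minimal verification from the new instance; the DBID should differ from the source because RMAN created a new control file for the clone:

export ORACLE_SID=QTEST
sqlplus / as sysdba

SQL> select name, dbid, open_mode, resetlogs_change# from v$database;
SQL> select name from v$datafile;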