
Tuesday, December 31, 2019

Gather Statistics Commands




1. Gather dictionary stats:


EXEC DBMS_STATS.gather_dictionary_stats;
It gathers statistics for dictionary schemas 'SYS', 'SYSTEM' and other internal schemas.


2. Gather fixed object stats:


EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
Fixed objects are the X$ structures underlying the V$ and GV$ dynamic performance views.
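Fixed object stats are not gathered by the automatic stats job, so it can be useful to check whether they have ever been collected; a quick check (assuming DBA privileges):

```sql
-- Count fixed tables that already have stats; zero suggests
-- GATHER_FIXED_OBJECTS_STATS has never been run on this database.
SELECT COUNT(*)
FROM   dba_tab_statistics
WHERE  object_type = 'FIXED TABLE'
AND    last_analyzed IS NOT NULL;
```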


3. Gather full database stats:


EXEC DBMS_STATS.gather_database_stats;

-- With estimate_percent set to 15 (or another value) when the database is very large.

EXEC DBMS_STATS.gather_database_stats(estimate_percent => 15);
EXEC DBMS_STATS.gather_database_stats(estimate_percent => 15, cascade => TRUE);

-- With auto sample size and parallel degree

EXEC DBMS_STATS.gather_database_stats(estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,degree => 8);


4. Gather schema statistics:


EXEC DBMS_STATS.gather_schema_stats('QRUM');

EXEC DBMS_STATS.gather_schema_stats('QRUM', estimate_percent => 25);

EXEC DBMS_STATS.gather_schema_stats('QRUM', estimate_percent => 100, cascade => TRUE);

-- Stats with auto sample size and degree 8

exec dbms_stats.gather_schema_stats( ownname => 'QRUM',method_opt => 'FOR ALL COLUMNS SIZE 1',
granularity => 'ALL', degree => 8, cascade => TRUE,
estimate_percent=>dbms_stats.auto_sample_size);


5. Gather table statistics: 


EXEC DBMS_STATS.gather_table_stats('DBACLASS', 'EMP');
EXEC DBMS_STATS.gather_table_stats('DBACLASS', 'EMP', estimate_percent => 15);
EXEC DBMS_STATS.gather_table_stats('DBACLASS', 'EMP', estimate_percent => 15, cascade => TRUE);

exec DBMS_STATS.GATHER_TABLE_STATS  (ownname => 'DBACLASS' , tabname => 'EMP',cascade => true,
method_opt=>'for all indexed columns size 1', granularity => 'ALL', degree => 8);

exec DBMS_STATS.GATHER_TABLE_STATS (ownname => 'DBACLASS' , tabname => 'EMP',
cascade => true, method_opt=>'FOR ALL COLUMNS SIZE 1', granularity => 'ALL', degree => 8);


6. Gather stats for single partition of a table:


BEGIN
DBMS_STATS.GATHER_TABLE_STATS (
ownname => 'SCOTT',
tabname => 'TEST', --- TABLE NAME
partname => 'TEST_JAN2016', --- PARTITION NAME
method_opt=>'for all indexed columns size 1',
GRANULARITY => 'APPROX_GLOBAL AND PARTITION',
degree => 8);
END;
/


7. Lock/unlock statistics:


-- Lock stats of a schema:
EXEC DBMS_STATS.lock_schema_stats('DBACLASS');

-- Lock stats of a table:
EXEC DBMS_STATS.lock_table_stats('DBACLASS', 'EMP');

-- Lock stats of a partition:
EXEC DBMS_STATS.lock_partition_stats('DBACLASS', 'EMP', 'TEST_JAN2016');

-- unlock stats of a schema:

EXEC DBMS_STATS.unlock_schema_stats('DBACLASS');
-- unlock stats of a table:

EXEC DBMS_STATS.unlock_table_stats('DBACLASS', 'EMP');
--unlock stats of a partition:

EXEC DBMS_STATS.unlock_partition_stats('DBACLASS', 'EMP', 'TEST_JAN2016');

--- check stats status:

SELECT stattype_locked FROM dba_tab_statistics WHERE table_name = 'TEST' and owner = 'SCOTT';


8. Delete statistics:


-- Delete complete db statistics:
EXEC DBMS_STATS.delete_database_stats;

-- Delete schema statistics:
EXEC DBMS_STATS.delete_schema_stats('DBACLASS');

-- Delete table statistics:
EXEC DBMS_STATS.delete_table_stats('DBACLASS', 'EMP');

-- Delete column statistics:
EXEC DBMS_STATS.delete_column_stats('DBACLASS', 'EMP', 'EMPNO');

-- Delete index statistics:

EXEC DBMS_STATS.delete_index_stats('DBACLASS', 'EMP_PK');

-- Delete dictionary statistics:
EXEC DBMS_STATS.delete_dictionary_stats;


-- Delete fixed object statistics:

exec dbms_stats.delete_fixed_objects_stats;

-- Delete system statistics:

exec dbms_stats.delete_system_stats('STAT_TAB');


9. Setting statistics preferences:


-- View preference details for the database:

SELECT dbms_stats.get_prefs('PUBLISH') publish_pref FROM dual;


-- View Publish preference for schema:

select dbms_stats.get_prefs('PUBLISH', 'SCOTT') from dual;

-- View preference details for table

select dbms_stats.get_prefs(ownname=>'DBACLASS',tabname=>'EMP',pname=>'PUBLISH') FROM DUAL;
select  DBMS_STATS.get_prefs(ownname=>'DBACLASS',tabname=>'EMP',pname=>'INCREMENTAL') FROM DUAL;
select  DBMS_STATS.get_prefs(ownname=>'DBACLASS',tabname=>'EMP',pname=>'GRANULARITY') FROM DUAL;
select  DBMS_STATS.get_prefs(ownname=>'DBACLASS',tabname=>'EMP',pname=>'STALE_PERCENT')  FROM DUAL;
select   DBMS_STATS.get_prefs(ownname=>'DBACLASS',tabname=>'EMP',pname=>'ESTIMATE_PERCENT')  FROM DUAL;
select   DBMS_STATS.get_prefs(ownname=>'DBACLASS',tabname=>'EMP',pname=>'DEGREE')  FROM DUAL;

-- Set table  preferences

exec dbms_stats.set_table_prefs('DBACLASS','EMP','PUBLISH','FALSE');
exec dbms_stats.set_table_prefs('DBACLASS','EMP','ESTIMATE_PERCENT','20');
exec dbms_stats.set_table_prefs('DBACLASS','EMP','DEGREE','8');

-- Set schema preferences:

exec dbms_stats.SET_SCHEMA_PREFS('DBATEST','PUBLISH','FALSE');
exec dbms_stats.SET_SCHEMA_PREFS('DBATEST','ESTIMATE_PERCENT','20');
exec dbms_stats.SET_SCHEMA_PREFS('DBATEST','CASCADE','TRUE');

-- Set database preference:

exec dbms_stats.set_database_prefs('PUBLISH', 'TRUE');
exec dbms_stats.set_database_prefs('DEGREE', '16');

-- Set global preference:

exec dbms_stats.set_global_prefs('PUBLISH', 'TRUE');
exec dbms_stats.set_global_prefs('DEGREE', '16');


10. Deleting preferences:


-- Deleting schema preference:

exec dbms_stats.delete_schema_prefs('DBACLASS', 'DEGREE');
exec dbms_stats.delete_schema_prefs('DBACLASS', 'CASCADE');


-- Delete database preference:
exec dbms_stats.delete_database_prefs('ESTIMATE_PERCENT', FALSE);
exec dbms_stats.delete_database_prefs('DEGREE', FALSE);


11. Publish pending statistics:


-- For schema DBACLASS
exec dbms_stats.publish_pending_stats('DBACLASS',null);

-- For table DBACLASS.EMP
EXEC DBMS_STATS.PUBLISH_PENDING_STATS ('DBACLASS','EMP');


12. Delete pending statistics:


-- for table DBACLASS.EMP
exec dbms_stats.delete_pending_stats('DBACLASS', 'EMP');

-- For schema DBACLASS
exec dbms_stats.delete_pending_stats('DBACLASS', null);


13. Upgrade stats table:


----- If we are importing a stats table exported from a lower version into a higher-version database,
we need to upgrade the stats table before importing it.


EXECUTE DBMS_STATS.UPGRADE_STAT_TABLE(OWNNAME =>'RAJ',STATTAB =>'STAT_TEST');


14. View/modify statistics retention period:


-- View current stats retention

select dbms_stats.get_stats_history_retention from dual;

-- Modify the stats retention

exec DBMS_STATS.ALTER_STATS_HISTORY_RETENTION(60);


15. Create stats table:


--- Create staging table to store the statistics data

exec dbms_stats.create_stat_table(ownname => 'SCOTT', stattab => 'STAT_BACKUP',tblspace=>'USERS');


16. Export stats data:


-- Export full database stats to a table SCOTT.STAT_BACKUP

exec dbms_stats.export_database_stats(statown => 'SCOTT' ,stattab=>'STAT_BACKUP');

-- Export stats for table DBACLASS.EMP to a stats table SCOTT.STAT_BACKUP

exec dbms_stats.export_table_stats(ownname=>'DBACLASS', tabname=>'EMP', statown =>'SCOTT',stattab=>'STAT_BACKUP', cascade=>true);

-- Export stats for schema DBACLASS to a stats table SCOTT.STAT_BACKUP

exec dbms_stats.export_schema_stats(ownname=>'DBACLASS', statown =>'SCOTT' , stattab=>'STAT_BACKUP');

-- Export fixed object stats to table SCOTT.STAT_BACKUP

exec dbms_stats.export_fixed_objects_stats(statown => 'SCOTT' ,stattab=>'STAT_BACKUP');

-- Export dictionary stats to table SCOTT.STAT_BACKUP

exec dbms_stats.export_dictionary_stats(statown => 'SCOTT' ,stattab=>'STAT_BACKUP');

-- Export stats for index DBACLAS.EMP_UK1 to SCOTT.STAT_BACKUP table

exec dbms_stats.export_index_stats(ownname=>'DBACLASS', indname=>'EMP_UK1', statown =>'SCOTT',stattab=>'STAT_BACKUP');


17. Import stats table data:


-- Import full database stats from stats table SCOTT.STAT_BACKUP

exec dbms_stats.import_database_stats(statown => 'SCOTT' ,stattab=>'STAT_BACKUP');

-- Import stats for table DBACLASS.EMP from stats table SCOTT.STAT_BACKUP

exec dbms_stats.import_table_stats(ownname=>'DBACLASS', tabname=>'EMP', statown =>'SCOTT',stattab=>'STAT_BACKUP', cascade=>true);

-- Import stats for schema DBACLASS from  stats table SCOTT.STAT_BACKUP

exec dbms_stats.import_schema_stats(ownname=>'DBACLASS', statown =>'SCOTT' , stattab=>'STAT_BACKUP');

-- Import fixed object stats from stats table SCOTT.STAT_BACKUP

exec dbms_stats.import_fixed_objects_stats(statown => 'SCOTT' ,stattab=>'STAT_BACKUP');

-- Import dictionary stats from  table SCOTT.STAT_BACKUP

exec dbms_stats.import_dictionary_stats(statown => 'SCOTT' ,stattab=>'STAT_BACKUP');

-- Import stats for index DBACLAS.EMP_UK1 from  SCOTT.STAT_BACKUP table

exec dbms_stats.import_index_stats(ownname=>'DBACLASS', indname=>'EMP_UK1', statown =>'SCOTT',stattab=>'STAT_BACKUP');


18. Some stats-related SQL queries:


-- Check stale stats for table:

select owner,table_name,STALE_STATS from dba_tab_statistics where owner='&SCHEMA_NAME' and table_name='&TABLE_NAME';

--Check stale stats for index:

select owner,index_name,table_name,stale_stats from dba_ind_statistics where owner='&SCHEMA_NAME' and index_name='&INDEX_NAME';

-- Get the history of table statistics:
set lines 200
col owner for a12
col table_name for a21
select owner,TABLE_NAME,STATS_UPDATE_TIME from dba_tab_stats_history where table_name='&TABLE_NAME';

-- Space used to store statistic data in SYSAUX tablespace:

select occupant_desc, space_usage_kbytes from v$sysaux_occupants where occupant_desc like '%Statistics%';

-- Check whether table stats locked or not:

select owner, table_name, stattype_locked from dba_tab_statistics where stattype_locked is not null and owner not in ('SYS','SYSTEM');
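If the SYSAUX usage reported above grows too large, older statistics history can be purged manually; a sketch (the 30-day cutoff is an arbitrary example):

```sql
-- Purge optimizer statistics history older than 30 days
EXEC DBMS_STATS.PURGE_STATS(SYSDATE - 30);
```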

Monday, December 30, 2019

Oracle Enterprise Manager 13c Default Ports





Enable archivelog mode in RAC



1. Stop the database service.
srvctl stop database -d MAPROD

2. Start the database in mount state.
srvctl start database -d MAPROD -o mount

3. Enable archivelog mode (from any one instance, via sqlplus).
alter database archivelog;

4. Restart the database service (using srvctl).
srvctl stop database -d MAPROD
srvctl start database -d MAPROD
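Once the database is back up, it is worth confirming the new log mode (run from sqlplus as sysdba on any instance):

```sql
-- Confirm the database is now in archivelog mode
select name, log_mode from v$database;
```

LOG_MODE should now show ARCHIVELOG.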






Sunday, December 22, 2019

Database Corruptions - Physical & Logical Corruptions




--logical corruptions (table level):

SQL> ANALYZE TABLE scott.emp VALIDATE STRUCTURE;

--physical and logical corruptions through RMAN:

RMAN> validate database;

--logical corruptions inside the database:

RMAN> BACKUP VALIDATE CHECK LOGICAL DATABASE ARCHIVELOG ALL;

--logical corruption for datafile

RMAN> validate datafile 4; (block level)

--physical corruptions at os level

[oracle@OEL-11g ~]$ dbv

DBVERIFY: Release 11.2.0.3.0 - Production on Thu Oct 8 09:33:10 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Keyword     Description                    (Default)
----------------------------------------------------
FILE        File to Verify                 (NONE)
START       Start Block                    (First Block of File)
END         End Block                      (Last Block of File)
BLOCKSIZE   Logical Block Size             (8192)
LOGFILE     Output Log                     (NONE)
FEEDBACK    Display Progress               (0)
PARFILE     Parameter File                 (NONE)
USERID      Username/Password              (NONE)
SEGMENT_ID  Segment ID (tsn.relfile.block) (NONE)
HIGH_SCN    Highest Block SCN To Verify    (NONE)
            (scn_wrap.scn_base OR scn)         
[oracle@OEL-11g ~]$ dbv file=/oradata/ORADB/users01.dbf

DBVERIFY: Release 11.2.0.3.0 - Production on Thu Oct 8 09:33:32 2015
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
DBVERIFY - Verification starting : FILE = /oradata/ORADB/users01.dbf
DBVERIFY - Verification complete

Total Pages Examined         : 640
Total Pages Processed (Data) : 14
Total Pages Failing   (Data) : 0
Total Pages Processed (Index): 13
Total Pages Failing   (Index): 0
Total Pages Processed (Other): 596
Total Pages Processed (Seg)  : 0
Total Pages Failing   (Seg)  : 0
Total Pages Empty            : 17
Total Pages Marked Corrupt   : 0
Total Pages Influx           : 0
Total Pages Encrypted        : 0
Highest block SCN            : 1022004 (0.1022004)

RMAN provides a Data Recovery Advisor tool for logical and physical corruptions:

RMAN> list failure;
RMAN> advise failure;
RMAN> repair failure;

RMAN> list failure;

List of Database Failures
=========================

Failure ID Priority Status    Time Detected Summary
---------- -------- --------- ------------- -------
502        HIGH     OPEN      08-OCT-15     One or more non-system datafiles are corrupt

RMAN>

RMAN> advise failure;

List of Database Failures
=========================

Failure ID Priority Status    Time Detected Summary
---------- -------- --------- ------------- -------
502        HIGH     OPEN      08-OCT-15     One or more non-system datafiles are corrupt

analyzing automatic repair options; this may take some time
using channel ORA_DISK_1
analyzing automatic repair options complete

Mandatory Manual Actions
========================
no manual actions available

Optional Manual Actions
=======================
no manual actions available

Automated Repair Options
========================
Option Repair Description
------ ------------------
1      Restore and recover datafile 4
  Strategy: The repair includes complete media recovery with no data loss
  Repair script: /oracle/base/diag/rdbms/oradb/ORADB/hm/reco_1379785849.hm

RMAN> repair failure;

Strategy: The repair includes complete media recovery with no data loss
Repair script: /oracle/base/diag/rdbms/oradb/ORADB/hm/reco_1379785849.hm

contents of repair script:
   # restore and recover datafile
   sql 'alter database datafile 4 offline';
   restore datafile 4;
   recover datafile 4;
   sql 'alter database datafile 4 online';

Do you really want to execute the above repair (enter YES or NO)? yes
executing repair script

sql statement: alter database datafile 4 offline

Starting restore at 08-OCT-15
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00004 to /oradata/ORADB/users01.dbf
channel ORA_DISK_1: reading from backup piece /oradata/fra/ORADB/backupset/2015_10_07/o1_mf_nnnd0_TAG20151007T093119_c1968k0o_.bkp
channel ORA_DISK_1: piece handle=/oradata/fra/ORADB/backupset/2015_10_07/o1_mf_nnnd0_TAG20151007T093119_c1968k0o_.bkp tag=TAG20151007T093119
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
Finished restore at 08-OCT-15

Starting recover at 08-OCT-15
using channel ORA_DISK_1

starting media recovery

archived log for thread 1 with sequence 4 is already on disk as file /oradata/fra/arch/1_4_892373901.dbf
archived log for thread 1 with sequence 5 is already on disk as file /oradata/fra/arch/1_5_892373901.dbf
channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=3
channel ORA_DISK_1: reading from backup piece /oradata/fra/ORADB/backupset/2015_10_07/o1_mf_annnn_TAG20151007T093251_c196ccmm_.bkp
channel ORA_DISK_1: piece handle=/oradata/fra/ORADB/backupset/2015_10_07/o1_mf_annnn_TAG20151007T093251_c196ccmm_.bkp tag=TAG20151007T093251
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
archived log file name=/oradata/fra/arch/1_3_892373901.dbf thread=1 sequence=3
media recovery complete, elapsed time: 00:00:01
Finished recover at 08-OCT-15

sql statement: alter database datafile 4 online
repair failure complete

Query to check corruptions:


SQL> SELECT DISTINCT owner, segment_name FROM  v$database_block_corruption dbc JOIN dba_extents e ON dbc.file# = e.file_id AND dbc.block# BETWEEN e.block_id and e.block_id+e.blocks-1 ORDER BY 1,2;


Monitoring a session activity



Scenario:

There is an application with 100 users, all of whom use the same tab/page. After clicking the tab, 99 users get their results quickly, but for 1 user the page takes a long time to load every time he clicks it. Since the tab works fine for everyone else, that user raises an issue with the support team, and it eventually reaches the database team.

Now you have to analyze what is actually happening in the database whenever that one user clicks the tab.

SQL> select sid,serial# from v$session where username='SCOTT';

SID    SERIAL#
---------- ----------
24    15

SQL> select s.sid,s.sql_id,p.spid from v$session s, v$process p where s.sid=24 and s.paddr=p.addr; 

       SID SQL_ID               SPID
---------- ------------- ------------
        24                      2472

SQL> !ps -ef | grep 2472
oracle    2472     1  0 09:20 ?        00:00:00 oracleORADB (LOCAL=NO)
oracle    2490  2257  0 09:27 pts/0    00:00:00 /bin/bash -c ps -ef | grep 2472
oracle    2492  2490  0 09:27 pts/0    00:00:00 grep 2472

SQL> EXEC DBMS_SYSTEM.set_sql_trace_in_session(sid=>24, serial#=>15, sql_trace=>TRUE); --- To enable trace

SQL> select * from emp;
SQL> select * from dept;
SQL> select * from salgrade;

SQL> EXEC DBMS_SYSTEM.set_sql_trace_in_session(sid=>24, serial#=>15, sql_trace=>FALSE); --- To disable trace

SQL> show parameter user

NAME     TYPE VALUE
------------------------------------ ----------- ------------------------------
license_max_users      integer 0
parallel_adaptive_multi_user    boolean TRUE
redo_transport_user      string
user_dump_dest      string /oracle/base/diag/rdbms/oradb/QADB/trace

[oracle@OEL-11g trace]$ ls -ltr *2472*
-rw-r----- 1 oracle oinstall   315 Oct  2 09:33 QADB_ora_2472.trm
-rw-r----- 1 oracle oinstall 23582 Oct  2 09:33 QADB_ora_2472.trc

[oracle@OEL-11g trace]$ tkprof ORADB_ora_2472.trc /home/oracle/sql_trace.out sys=no

Note: with sys=no, SYS-recursive operations are not included in the report.

-rw-r--r-- 1 oracle oinstall 5902 Oct  1 09:38 ctl.bkp
-rw-r--r-- 1 oracle oinstall 5943 Oct  2 09:37 sql_trace.out
[oracle@OEL-11g ~]$
[oracle@OEL-11g ~]$ more sql_trace.out

TKPROF: Release 11.2.0.3.0 - Development on Fri Oct 2 09:37:59 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Trace file: ORADB_ora_2472.trc
Sort options: default

********************************************************************************
count    = number of times OCI procedure was executed
cpu      = cpu time in seconds executing
elapsed  = elapsed time in seconds executing
disk     = number of physical reads of buffers from disk
query    = number of buffers gotten for consistent read
current  = number of buffers gotten in current mode (usually for update)
rows     = number of rows processed by the fetch or execute call
********************************************************************************

SQL ID: 3154rqzb8xudy Plan Hash: 3383998547

select * from emp;


--More--(13%)
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.05          4         58          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.01          2          3          0           4
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.01       0.07          6         61          0           4

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 85 
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         4          4          4  TABLE ACCESS FULL DEPT (cr=3 pr=2 pw=0 time=13922 us cost=3 size=80 c
ard=4)

********************************************************************************

SQL ID: a2dk8bdn0ujx7 Plan Hash: 3956160932

select * from dept;


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0         73          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          6          3          0          14
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.00       0.00          6         76          0          14

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 85 
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
        14         14         14  TABLE ACCESS FULL EMP (cr=3 pr=6 pw=0 time=2150 us cost=3 size=532 ca
rd=14)

********************************************************************************

SQL ID: 30p00zk91247c Plan Hash: 2489195056

select * from salgrade;

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          6          3          0           5
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.00       0.00          6          3          0           5

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 85 
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         5          5          5  TABLE ACCESS FULL SALGRADE (cr=3 pr=6 pw=0 time=2214 us cost=3 size=5
0 card=5)

********************************************************************************

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        3      0.01       0.06          4        131          0           0
Execute      3      0.00       0.00          0          0          0           0
Fetch        6      0.00       0.01         14          9          0          23
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       12      0.01       0.08         18        140          0          23

Misses in library cache during parse: 3


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        0      0.00       0.00          0          0          0           0
Execute     51      0.00       0.00          0          0          0           0
Fetch       73      0.00       0.02          4        165          0          51
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      124      0.00       0.02          4        165          0          51


Misses in library cache during parse: 0

    3  user  SQL statements in session.
   13  internal SQL statements in session.
   16  SQL statements in session.
********************************************************************************
Trace file: ORADB_ora_2472.trc
Trace file compatibility: 11.1.0.7
Sort options: default

       1  session in tracefile.
       3  user  SQL statements in trace file.
      13  internal SQL statements in trace file.
      16  SQL statements in trace file.
      16  unique SQL statements in trace file.
     283  lines in trace file.
       8  elapsed seconds in trace file.

Friday, December 20, 2019

Database triggers for different notifications



--shutdown trigger
--startup trigger
--logon trigger
--logoff trigger
--server error trigger


Create a table to store the trigger information:
create table trigger_table (database_name varchar2(30), event_name varchar2(20), event_time date, triggered_by_user varchar2(30));

Set the date format a session level:
alter session set nls_date_format='DD-MON-YYYY HH24:MI:SS';

--shutdown

create or replace trigger log_shutdown
before shutdown on database
begin
insert into trigger_table (database_name, event_name, event_time, triggered_by_user)
values ('QADER', 'SHUTDOWN INITIATED', sysdate, user);
commit;
end;
/

--startup

create or replace trigger log_startup
after startup on database
begin
insert into trigger_table (database_name, event_name, event_time, triggered_by_user)
values ('QADER','STARTUP INITIATED',sysdate,user);
commit;
end;
/

--logon

create table LOGON_table (login_date date, user_name varchar2(10), status varchar2(10));

create or replace trigger logon_trigger
after logon on database
begin
insert into logon_table
values
(SYSDATE, USER, 'logged_in');
commit;
end logon_trigger;
/

--logout

create or replace trigger log_out
before logoff on database
begin
insert into logon_table
values
(SYSDATE, USER, 'logged_out');
commit;
end log_out;
/

--logon_failure

create or replace trigger logon_failures
after servererror on database
begin
if (IS_SERVERERROR(1017)) THEN
INSERT INTO logon_table
(login_date, user_name, status)
values
(sysdate, sys_context('USERENV','AUTHENTICATED_IDENTITY'),'ORA-01017');
END IF;
COMMIT;
END logon_failures;
/

Saturday, December 14, 2019

Datapump Activities



-Refresh within the same database into the same schema
-Refresh within the same database into a different schema
-Refresh within the same database into a different schema with a different table name
-Take a backup and restore the same table
-Refresh to a remote database, into the same schema with a different name
-Back up multiple tables from multiple schemas
-Restore a specific set of tables from the multi-table backup

select segment_name, bytes/1024, owner from dba_segments where segment_name='DEPT' and owner='SCOTT';
select * from dba_directories;
mkdir -p /u01/oradata/dpbkps
create or replace directory DPBKPS as '/u01/oradata/dpbkps';

--Take the table dept backup from scott schema and refresh to the same schema when requested later.

expdp dumpfile=exp_scott.dept.dmp logfile=scott.dept.log directory=DPBKPS tables=scott.dept compression=all

Note: compression adds extra CPU load, and the export takes more time compared to a normal export.

You can drop the existing table first, or use the table_exists_action parameter.

impdp directory=DPBKPS dumpfile=exp_scott.dept.dmp logfile=impdp_scott.dept.log table_exists_action=replace

(table_exists_action accepts replace, truncate, or append)

--Table refresh in the same database but different schema

impdp directory=DPBKPS dumpfile=exp_scott.dept.dmp logfile=impdp_scott.dept.log remap_schema=SCOTT:SYSTEM

--Take refresh in the same database into different schema with different name

select table_name from dba_tables where table_name='DEPT_TEMP';

impdp system/manager directory=DPBKPS dumpfile=exp_scott.dept.dmp logfile=impdp_scott.dept.log remap_schema=SCOTT:SYSTEM remap_table=DEPT:DEPT_TEMP

--Take refresh to the remote database, to the same schema with different name.

scp the export dumpfile to the target server, then:

impdp system/manager directory=DPBKPS dumpfile=exp_scott.dept.dmp logfile=impdp_scott.dept.log table_exists_action=replace

--Multiple table backups from multiple schemas.

check the size of all tables.

select sum(bytes/1024) from dba_segments where segment_name in ('DEPT','DEPT_TEMP','EMP') and owner in ('SCOTT','SYSTEM');

expdp directory=DPBKPS dumpfile=expdp_multi_tables.dmp logfile=expdp_multi_tables.log tables=scott.dept,scott.emp,system.dept,system.dept_temp

impdp directory=DPBKPS dumpfile=expdp_multi_tables.dmp logfile=impdp_multi_tables.log tables=system.dept table_exists_action=replace

-A schema backup when required
-Restore the same schema on the same database when requested
-A schema refresh from PROD to a NON-PROD environment
-A schema backup on request, restoring only a set of tables from the backup dump
-A backup of multiple schemas, restoring only one schema, or only one table of one schema

--A schema backup when required
Take the backup of SCOTT schema on MYDB database.

check the schema size:
select sum(bytes/1024/1024) from dba_segments where owner='SCOTT';

check the directory location and its space:
select * from dba_directories;

expdp directory=dpbkps dumpfile=expdp_scott.dmp logfile=expdp_scott.log compression=all schemas=SCOTT

Note: compression option will take longer time to export the backup.

expdp directory=dpbkps dumpfile=expdp_scott_%U.dmp logfile=expdp_scott.log compression=all schemas=SCOTT parallel=2

--Restore the same schema on the same database when requested

drop the user if it is already available using cascade option.

impdp directory=dpbkps dumpfile=expdp_scott_%U.dmp logfile=impdp_scott.log parallel=2

--How to take DDL's backup only ?

expdp directory=dpbkps dumpfile=expdp_scott.dmp logfile=expdp_scott.log compression=all schemas=SCOTT content=metadata_only

--A schema backup when requested and restore only set of tables from the backup dump.

impdp directory=dpbkps dumpfile=expdp_scott_%U.dmp logfile=impdp_scott.dept.log parallel=2 tables=scott.dept table_exists_action=replace

--Take a backup of multiple schemas and restore only one schema, or only one table of one schema.

check the schema size:
select sum(bytes/1024/1024) from dba_segments where owner in('SCOTT','OUTLN');

expdp directory=dpbkps dumpfile=expdp_multi_%U.dmp logfile=expdp_multi.log compression=all parallel=2 schemas=scott,outln

impdp directory=dpbkps dumpfile=expdp_multi_%U.dmp logfile=impdp_multi.log parallel=2 tables=outln.ol$ table_exists_action=replace


Flashback Table In Oracle



-We can recover a table to a specific timestamp without downtime.
-We can flash back the original table in place.
-We can flash back into a new table.
-We can run a flashback query to check whether the undo segments still hold the old data.

Flashback Table is enabled by default.
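Flashback Drop (TO BEFORE DROP, shown below) additionally depends on the recyclebin, which is ON by default; a quick way to verify:

```sql
-- Recyclebin must be enabled for FLASHBACK TABLE ... TO BEFORE DROP
show parameter recyclebin

-- List dropped objects still recoverable from the recyclebin
select object_name, original_name, droptime from user_recyclebin;
```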

example:

insert into scott.dept values(&deptno,'&dname','&loc');
commit;
alter system checkpoint;

query to check:

select count(*) from scott.dept as of timestamp TO_TIMESTAMP('27-11-2019 09:18:00','DD-MM-YYYY HH24:MI:SS');

If the above query returns data, we can recover the table to that point in time.

1.flashback to a different table.

create table scott.dept_temp as select * from scott.dept as of timestamp TO_TIMESTAMP('27-11-2019 09:18:00','DD-MM-YYYY HH24:MI:SS');
select * from scott.dept_temp;

2.flashback to a original table.

alter table scott.dept enable row movement;
flashback table scott.dept to timestamp TO_TIMESTAMP('27-11-2019 09:18:00','DD-MM-YYYY HH24:MI:SS');
alter table scott.dept disable row movement;

If user drops a table.
drop table scott.dept_temp;

flashback table scott.dept_temp to before drop;

Note: if a user truncates the table, flashback table cannot be used.

Flashback database:

Flashback database is not enabled by default.

To check:
select flashback_on from v$database;

Important parameters of flashback database:

show parameter db_recovery
db_recovery_file_dest
db_recovery_file_dest_size
show parameter flash
db_flashback_retention_target ---by-default 1440 mins

How to enable flashback:

Until 10g, flashback could only be enabled in mount state; from 11g it can be enabled while the database is open.
alter database flashback on;

Note: when you enable flashback, a directory named "flashback" is created automatically in the FRA location; setting the FRA location is a prerequisite before enabling flashback.

insert into scott.dept values(&dept,'&ename','&loc');
commit;

Note: to perform a flashback of the database, it must be in MOUNT state.

shutdown immediate;
startup mount;
rman target /
flashback database to time "TO_TIMESTAMP('27-11-2019 10:01:00','DD-MM-YYYY HH24:MI:SS')";

alter database open resetlogs;

Restore point:

Restore points are typically created before:
-an application release
-a database upgrade
-an application upgrade
-an ETL run

create restore point A;
shutdown immediate;
startup mount;

rman target /
flashback database to restore point A;
alter database open resetlogs;

Note: a normal restore point is retained only within the flashback retention period.

create restore point B guarantee flashback database;

Note: a guaranteed restore point does not depend on the retention period; it is kept until you drop it (useful for longer operations).

shutdown immediate;
startup mount;
rman target /
flashback database to restore point B;
alter database open resetlogs;
select name, guarantee_flashback_database from v$restore_point;
drop restore point B;

Note: even after you purge the recycle bin, Flashback Database can still recover the dropped data, subject to your retention period.

Monday, November 25, 2019

Understanding Lag in an Active Data Guard Configuration



• A standby database configured with real-time apply can lag behind the primary database as a result of:
– Insufficient CPU capacity
– High network latency
– Limited bandwidth

• Queries on the standby database need to return current results and/or be within an established service level.
• Ways to “manage” the standby database lag and take necessary action:
– Configure the Data Guard configuration with a maximum data lag that triggers an error when it is exceeded.
– Monitor the redo apply lag and take action when the lag is unacceptable

Monitoring Apply Lag: V$DATAGUARD_STATS

• Apply lag: This is the difference, in elapsed time, between when the last applied change became visible on the standby and when that same change was first visible on the primary.
• The apply lag row of the V$DATAGUARD_STATS view reflects statistics that are computed periodically and to the nearest second.

Standby database:
SQL> SELECT name, value, datum_time, time_computed FROM v$dataguard_stats WHERE name like 'apply lag';

Monitoring Apply Lag: V$STANDBY_EVENT_HISTOGRAM

• View histogram of apply lag on a physical standby database.
• Use to assess value for STANDBY_MAX_DATA_DELAY.
• Use to focus on periods of time when the apply lag exceeds desired levels so that issue can be resolved.
SQL> SELECT * FROM V$STANDBY_EVENT_HISTOGRAM WHERE NAME = 'apply lag' AND COUNT > 0;

Configuring Zero Lag Between the Primary and Standby Databases

• Certain applications have zero tolerance for any lag.
• Query on the standby database must return the same result as though it were executed on the primary database.
• Enforce by setting STANDBY_MAX_DATA_DELAY to 0.
• The standby database must have advanced to a value equal to that of the current SCN on the primary database at the time the query was issued.
• Results are guaranteed to be the same as the primary database, else ORA-3172 error is returned to the query.
• The primary database must operate in maximum availability or maximum protection mode.
• SYNC must be specified for redo transport.
• Real-time query must be enabled.
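As a sketch, a session on the real-time query standby could enforce this as below (the table name scott.emp is just an example; if the standby cannot advance to the primary's current SCN in time, the query fails with ORA-3172):

alter session set standby_max_data_delay=0;
select count(*) from scott.emp;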

Setting STANDBY_MAX_DATA_DELAY by Using an AFTER LOGON Trigger

Create an AFTER LOGON trigger that:
• Is database role aware
– It uses DATABASE_ROLE, a new attribute in the USERENV context.
– SQL and PL/SQL clients can retrieve the database role programmatically using the SYS_CONTEXT function.
– It enables you to write role-specific triggers.
• Sets STANDBY_MAX_DATA_DELAY when the application logs on to a real-time query–enabled standby database
• Allows for configuration of a maximum data delay without changing the application source code

CREATE OR REPLACE TRIGGER sla_logon_trigger
AFTER LOGON
ON APP.SCHEMA
BEGIN
IF (SYS_CONTEXT('USERENV', 'DATABASE_ROLE')
IN ('PHYSICAL STANDBY'))
THEN execute immediate
'alter session set standby_max_data_delay=5';
END IF;
END;
/
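The database role can also be checked interactively with the same SYS_CONTEXT call used by the trigger:

select sys_context('USERENV','DATABASE_ROLE') from dual;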

Physical Standby Dataguard Health Checks



Check Data Guard status for errors:

select message,to_char(timestamp,'DD-MM-YY HH24:MI'),error_code from v$dataguard_status where error_code != 0;

SELECT DESTINATION, STATUS, ERROR FROM V$ARCHIVE_DEST WHERE DEST_ID=2;

Check transport status:

primary> select * from v$archive_processes where status <> 'STOPPED';

primary> select status,error from v$archive_dest where status <>'INACTIVE';

Background process status in DG:

standby or primary> select process,status,client_process,sequence#,block#,active_agents,known_agents from v$managed_standby;

To know LOG status:

standby> select 'Last Applied : ' Logs, to_char(next_time,'DD-MON-YY:HH24:MI:SS') Time from v$archived_log where sequence# = (select max(sequence#) from v$archived_log where applied='YES')
union select 'Last Received : ' Logs, to_char(next_time,'DD-MON-YY:HH24:MI:SS') Time from v$archived_log where sequence# = (select max(sequence#) from v$archived_log);

To know last sequence released and applied:

Standby> select al.thrd "Thread", almax "Last Seq Received", lhmax "Last Seq Applied" from (select thread# thrd, max(sequence#) almax from v$archived_log
where resetlogs_change#=(select resetlogs_change# from v$database) group by thread#) al, (select thread# thrd, max(sequence#) lhmax from v$log_history where first_time=(select max(first_time) from v$log_history)
group by thread#) lh where al.thrd = lh.thrd;

To know transport lag time, apply lag time and apply finish time:

standby>
set lines 180
col name for a40
col value for a40
col unit for a40
select NAME, VALUE, UNIT from v$dataguard_stats union select null,null,' ' from dual union select null,null,'Time Computed: '||MIN(TIME_COMPUTED) from v$dataguard_stats;

Ensure everything is ok from PRIMARY:

select status,error from v$archive_dest where status <>'INACTIVE';
select * from v$archive_processes where status <> 'STOPPED';
show parameter dest_2
show parameter state_2
alter system set log_archive_dest_state_2=enable scope=both sid='*';
alter system switch logfile;
alter system switch logfile;

Friday, November 22, 2019

12c Multitenant Activities



1.Backup and Recovery of CDB & PDB databases in 12c.


Backup:

How to take a backup of the container and pluggable databases with one command?
rman target /
backup database plus archivelog;
It backs up the root container first, then the pluggable databases, and then the seed database.

How to take the backup of root container database?
rman>backup database root;

How to take the backup of an individual pluggable database?
rman>backup pluggable database pdb1;
or
rman target=sys@pdb1
rman>backup database;

Recovery:

To do a full recovery of a PDB:
rman target=/
run{
alter pluggable database pdb1 close;
restore pluggable database pdb1;
recover pluggable database pdb1;
alter pluggable database pdb1 open;
}

PITR in pluggable database ?

rman target=/
run{
alter pluggable database pdb1 close;
set until time "TO_DATE('24-10-2019 10:00:00','DD-MM-YYYY HH24:MI:SS')"; (we can give either TIME, SCN or SEQUENCE)
restore pluggable database pdb1;
recover pluggable database pdb1;
alter pluggable database pdb1 open;
}


2.How to clone a local pluggable database (PDB) ?


-connect to root container database
conn / as sysdba

-check the pdb's
select name, open_mode from v$pdbs order by name;

-switch the source PDB to read-only mode.
alter pluggable database pdb1 close;
alter pluggable database pdb1 open read only;

-clone the PDB
-if you are not using OMF, convert the paths
sql>create pluggable database pdb2 from pdb1
file_name_convert=('/u01/app/oradata/cdb1/pdb1/','/u01/app/oradata/cdb1/pdb2/');
alter pluggable database pdb2 open;

-check the pdbs status
select name, open_mode from v$pdbs order by name; (here pdb1 is read only and pdb2 is read write)

-switch the source PDB to read-write mode.
alter pluggable database pdb1 close;
alter pluggable database pdb1 open;

-check the pdbs status
select name, open_mode from v$pdbs order by name;

How to remove the clone PDB's ?

alter pluggable database pdb2 close;
drop pluggable database pdb2 including datafiles;



How to clone a remote pluggable database (PDB) ?


-connect to remote PDB
conn sys/password@pdb5 as sysdba

-check database name
select name from v$database;

-create & grant a user we can point a database link to
create user remote_clone_user identified by remote_clone_user;
grant create session, create pluggable database to remote_clone_user;

-switch the remote PDB to read-only mode
alter pluggable database pdb5 close;
alter pluggable database pdb5 open read only;

-connect to root container in the local CDB
conn sys/password@CDB1 as sysdba

-check the database name
select name from v$database;

-create a database link to the remote PDB
create database link clone_link connect to remote_clone_user identified by remote_clone_user using 'pdb5';

-clone the remote database over the database link
create pluggable database pdb5new from pdb5@clone_link file_name_convert=('/u01/app/oradata/cdb3/pdb5/','/u01/app/oradata/cdb1/pdb5new/');

alter pluggable database pdb5new open;

-check the pdb's
select name, open_mode from v$pdbs order by name;

-open the remote PDB in read-write mode again
alter pluggable database pdb5 close;
alter pluggable database pdb5 open;


How to connect to PDB & CDB databases?


-connect to CDB using OS authentication
conn / as sysdba

-connect to CDB using a local connection
conn system/manager

-check available services
select name, pdb from v$services order by name;
All services are also registered with the listener.

-connect to CDB using EZCONNECT & tnsnames.ora
conn system/manager@//localhost:1521/cdb1
conn system/manager@cdb1

-display the container name and ID using SQL*Plus
show con_name
show con_id

-switch to a user created pluggable database
alter session set container=pdb1;
show con_name

-switch back to root container
alter session set container=cdb$root;

-connect to pdb using tnsnames.ora
conn system/password@pdb1


How to create an empty pluggable database?


-connect to root container
conn / as sysdba

-create a pdb based on seed
create pluggable database pdb2 admin user pdb_adm identified by password1 file_name_convert=('/u01/app/oradata/cdb1/pdbseed/','/u01/app/oradata/cdb1/pdb2/');
(or)
-set file_name_convert at session level
alter session set pdb_file_name_convert='/u01/app/oradata/cdb1/pdbseed/','/u01/app/oradata/cdb1/pdb4/';
create pluggable database pdb4 admin user pdb_adm identified by password1;

The alert log shows the message below:

create pluggable database pdb4 admin user pdb_adm identified by *
Tue Dec 31 02:02:26 2019
Opatch XML is skipped for PDB PDB$SEED (conid=2)
 APEX_040200.APEX$_WS_NOTES (CONTENT) - CLOB populated
Tue Dec 31 02:03:17 2019
****************************************************************
Pluggable Database PDB4 with pdb id - 5 is created as UNUSABLE.
If any errors are encountered before the pdb is marked as NEW,
then the pdb must be dropped
****************************************************************
Database Characterset for PDB4 is WE8MSWIN1252
Deleting old file#5 from file$
Deleting old file#7 from file$
Adding new file#13 to file$(old file#5)
Adding new file#14 to file$(old file#7)
Successfully created internal service pdb4 at open
ALTER SYSTEM: Flushing buffer cache inst=0 container=5 local
****************************************************************
Post plug operations are now complete.
Pluggable database PDB4 with pdb id - 5 is now marked as NEW.
****************************************************************
Completed: create pluggable database pdb4 admin user pdb_adm identified by *


-check pdb views
select pdb_name, status from dba_pdbs;
The newly created PDB's status shows as NEW.
select name, open_mode from v$pdbs;

-finish pdb creation by opening the PDB
alter pluggable database pdb2 open read write;

-check pdb views
select pdb_name, status from dba_pdbs;
The newly created PDB's status shows as NORMAL.

To drop a newly created PDB:
alter pluggable database pdb3 close;
drop pluggable database pdb3 including datafiles;


How to manage tablespaces in a CDB & PDB ?


connect to root container
conn / as sysdba

show con_name
CDB$ROOT

create a tablespace for the CDB:
create tablespace dummy datafile '/u01/app/oradata/dummy01.dbf' size 10m autoextend on next 10m;

adding a datafile to it:
alter tablespace dummy add datafile '/u01/app/oradata/dummy02.dbf' size 10m autoextend on next 10m;

remove the tablespaces:
drop tablespace dummy including contents and datafiles;

create a tablespace for PDB:
alter session set container=pdb1;
(or)
conn sys/password@pdb1 as sysdba

show con_name
PDB1

create a tablespace for the PDB:
create tablespace dummy datafile '/u01/app/oradata/dummy01.dbf' size 10m autoextend on next 10m;

adding a datafile to it:
alter tablespace dummy add datafile '/u01/app/oradata/dummy02.dbf' size 10m autoextend on next 10m;

remove the tablespaces:
drop tablespace dummy including contents and datafiles;

Note:

A PDB doesn't have its own undo tablespace, so it is not visible from the PDB.
The undo datafile belongs to the CDB and can be seen in v$datafile.

create temporary tablespace in the PDB:
create temporary tablespace temp2 tempfile '/u01/app/oradata/temp02.dbf' size 10m autoextend on next 10m;

remove a temporary tablespace:
drop tablespace temp2 including contents and datafiles;

There are two options for setting the default tablespaces in a PDB:

alter database default tablespace users;
alter database default temporary tablespace temp;
(or)
use PDB syntax for clarity
alter pluggable database default tablespace users;
alter pluggable database default temporary tablespace temp;


How to manage users and roles for CDBs and PDBs?


connect to root container database

creating a common user:
create user c##test_user1 identified by password1 container=all;
grant create session to c##test_user1 container=all;

container=all is the default when issued from the root container:

create user c##test_user2 identified by password1;
grant create session to c##test_user2;

Switch containers while connected as a common user (alter session set container=pdb1;) or connect directly to the PDB:
conn sys/password1@pdb1 as sysdba

create the local user using the container clause:
create user test_user3 identified by password1 container=current;
grant create session to test_user3 container=current;

container=current is the default when issued from a non-root container:
create user test_user4 identified by password1;
grant create session to test_user4;

Grant the PDB_DBA role to a local user:
grant PDB_DBA to test_user4;

create a common role
conn / as sysdba
create role c##test_role1;
grant it to a common user
grant c##test_role1 to c##test_user1 container=all;

grant it to a local user
conn sys/password1@pdb1
grant c##test_role1 to test_user3;

create local role:
create role test_role1;
grant create session to test_role1;


How to unplug a pluggable database (PDB) into an archive file?


connect to CDB1 instance
conn sys/password1@cdb1 as sysdba

display the current PDB's:

show pdbs

pdb1 read write

unplug:

alter pluggable database pdb1 close immediate;
alter pluggable database pdb1 unplug into '/u01/mqm/pdb1.pdb';

We can safely drop the PDB after archiving:
drop pluggable database pdb1 including datafiles;

We can check the contents of the PDB archive file using the unzip -l command; it lists the .dbf files plus the .xml manifest.
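For example, using the archive path from the unplug step above:

unzip -l /u01/mqm/pdb1.pdb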


plug-in:

Check that it is safe to plug in:

set serveroutput on
DECLARE
  l_result BOOLEAN;
BEGIN
  l_result := DBMS_PDB.check_plug_compatibility(
                pdb_descr_file => '/tmp/pdb1.pdb',
                pdb_name       => 'pdb1');
  IF l_result THEN
    DBMS_OUTPUT.PUT_LINE('compatible');
  ELSE
    DBMS_OUTPUT.PUT_LINE('incompatible');
  END IF;
END;
/

Plug in from the archive file:
create pluggable database pdb1 using '/u01/mqm/pdb1.pdb';
alter pluggable database pdb1 open read write;

show pdbs


How to refresh a pluggable database (PDB)?


connect to CDB1 instance
conn sys/password1@cdb1 as sysdba

display current pdb's

show pdbs

pdb5 read write

Create a common user for the database link:
create user c##remote_clone_user identified by remote_clone_user container=all;
grant create session, create pluggable database to c##remote_clone_user container=all;

connect to CDB1 and create database link.
conn sys/password1@cdb1 as sysdba
create database link clone_link connect to c##remote_clone_user identified by remote_clone_user using 'cdb3';

create refreshable read-only PDB
create pluggable database pdb5_ro from pdb5@clone_link refresh mode manual;
alter pluggable database pdb5_ro open read only;

Make changes to the source PDB.
conn sys/password1@cdb3 as sysdba
alter session set container=pdb5;
create tablespace test_tbs datafile size 10m autoextend on next 10m;

check for the TEST_TBS tablespace in the refreshable PDB.
conn sys/password1@cdb1 as sysdba
alter session set container=pdb5_ro;
select tablespace_name from dba_tablespaces;
It will not show the test_tbs tablespace.

Refresh the read-only PDB.
alter session set container=pdb5_ro;
alter pluggable database close immediate;
alter pluggable database refresh;
alter pluggable database open read only;
select tablespace_name from dba_tablespaces;
Now it will show the test_tbs tablespace.

alter the refreshable PDB refresh mode.
alter pluggable database pdb5_ro refresh mode every 120 minutes;
select pdb_id, pdb_name, refresh_mode, refresh_interval from dba_pdbs order by 1;


How to start up & shut down PDBs & CDBs?


CDB shutdown:
conn  / as sysdba
shutdown immediate;

CDB startup:
conn / as sysdba
startup

Using the ALTER PLUGGABLE DATABASE command (for one PDB, a list of PDBs, or all):
alter pluggable database pdb1 open;
alter pluggable database pdb1 close immediate;
alter pluggable database all open;

How to unplug and plug in a pluggable database (PDB)?

Check the available PDBs by connecting to the root container:
conn / as sysdba
select name, open_mode from v$pdbs order by name;

create a new PDB to test the unplug/plugin.
create pluggable database pdb2 admin user pdb_adm identified by password1 file_name_convert=('/u01/app/oradata/cdb1/pdbseed/','/u01/app/oradata/cdb1/pdb2/');
alter pluggable database pdb2 open read write;

unplug the PDB:
alter pluggable database pdb2 close;
alter pluggable database pdb2 unplug into '/u01/app/mqm/oradata/cdb1/pdb2/pdb2.xml';

check the pdbs are present.
select name, open_mode from v$pdbs order by name;
Here pdb2 should be in MOUNTED state.

Drop the PDB, but keep the files.
drop pluggable database pdb2 keep datafiles;

check again the pdb status.
select name, open_mode from v$pdbs order by name;
PDB2 no longer appears in the list.

Check the compatibility (using DBMS_PDB.check_plug_compatibility as shown earlier).

plugin and rename the PDB:
create pluggable database pdb5 using '/u01/app/mqm/oradata/cdb1/pdb2/pdb2.xml' file_name_convert=('/u01/app/oradata/cdb1/pdb2/','/u01/app/oradata/cdb1/pdb5/');
alter pluggable database pdb5 open;

check the status
select name, open_mode from v$pdbs order by name;

Drop the new PDB:
alter pluggable database pdb5 close;
drop pluggable database pdb5 including datafiles;

plugin the database using the same name and file position.
create pluggable database pdb2 using '/u01/app/mqm/oradata/cdb1/pdb2/pdb2.xml'
nocopy
tempfile reuse;
alter pluggable database pdb2 open read write;

check the status
select name, open_mode from v$pdbs order by name;











The Multitenant Architecture



The Multitenant architecture allows a single super container ("Container", alias CDB) to shelter multiple other containers ("Pluggable", alias PDB). From the server subsystem's point of view there is only one database, the container database, and therefore only one instance (and one SID), while each application sees only the specific PDB to which it connects. No application code change is required to connect to a PDB. Each PDB runs as a service within the CDB but ensures complete security and isolation amongst them.

Traditionally, an Oracle database consists of its instance and its file structure. An Oracle database instance is made up of memory and background processes. The file structure comprises the datafiles (.dbf), control file, spfile, password file, redo logs, archived logs, undo segments and temp tablespace. The Oracle 12c Multitenant architecture simply reorganizes these components and categorizes them as "sharable" components and "application" components. The "sharable" components are the instance-level pieces and the system pieces of a database, such as the SPFILE (parameter file), control file, SYSTEM and SYSAUX tablespaces, redo logs, archived log files, UNDO and TEMP. These sharable components exist at the root container (CDB) level. The user or application data for each database is the application component, and therefore each PDB holds just its own user data. Each PDB plugs into the CDB with its application data only, while sharing the memory, redo logs, archived logs, UNDO segment, and control file information with the other pluggable databases in the same container.

A multitenant container database has three types of containers:

*The Oracle supplied container is called the root container (CDB$ROOT) and consists of just Oracle metadata (and maybe a little bit of user data) and common users. Each CDB has one root.

*The seed container is named PDB$SEED, and there is one of these per CDB. The purpose of the seed container isn’t to store user data—it’s there for you as a template to create PDBs.

*The user container , which is actually called a pluggable database (or PDB), consists of user metadata and user data.

Each of these (the root, the seed, and the PDBs) is called a container, and each container has a unique container ID (CON_ID) and container name (CON_NAME). Each PDB also has a globally unique identifier (GUID). The idea behind the concept of a container is to separate Oracle metadata and user data by placing the two types of data into separate containers; that is, the system and user data are separated. There's a SYSTEM tablespace in both the root container and the PDB containers; however, they contain different types of data. The root container's SYSTEM tablespace holds Oracle metadata, whereas the PDB container's SYSTEM tablespace contains just user metadata. The Oracle metadata isn't duplicated by storing it in each PDB; it's stored in a central location for use by all the PDBs that are part of that CDB. The PDBs contain pointers to the Oracle metadata in the root container, thus allowing the PDBs to access these system objects without duplicating them. A CDB has similar background processes and files as a normal non-CDB database. However, some of the processes and files are common to both a CDB and its member PDBs, and some aren't.
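The containers of a CDB can be listed from the root with v$containers (CON_ID 1 is CDB$ROOT and CON_ID 2 is PDB$SEED):

select con_id, name, open_mode from v$containers order by con_id;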

Common Entities between CDB and PDBs

*Background processes There’s a single set of background processes for the CDB. The PDBs don’t have any background processes attached to them.

*Redo log files These are common for the entire CDB, with Oracle annotating the redo data with the identity of the specific PDB associated with the change. There’s one active online redo log for a single-instance CDB or one active online redo log for each instance of an Oracle RAC CDB. A CDB also has a single set of archived redo log files.

*Memory You allocate memory only to the CDB, because that’s the only instance you need in a multitenant database.

*Control files These are common for the entire CDB and will contain information that reflects the changes in each PDB.

*Oracle metadata All Oracle-supplied packages and related objects are shared.

*Temporary tablespace There’s a common temporary tablespace for an entire CDB. Both the root and all the PDBs can use this temporary tablespace. This common tablespace acts as the default TEMP tablespace. In addition, each PDB can also have a separate temporary tablespace for its local users.

*Undo tablespace All PDBs use the same undo tablespace. There’s one active undo tablespace for a single-instance CDB or one active undo tablespace for each instance of an Oracle RAC CDB.

A CDB contains a set of system data files for each container and a set of user-created data files for each PDB. A CDB also contains a CDB resource manager plan that allows resource management among the PDBs in that CDB.

Entities Exclusive for PDBs

*Tablespaces for the application tables and indexes These application tablespaces that you'll create are specific to a PDB and aren't shared with other PDBs, or the central CDB. The data files that are part of these tablespaces constitute the primary physical difference between a CDB and a non-CDB. Each data file is associated with a specific container.

*Local temporary tablespaces Although the temporary tablespace for the CDB is common to all containers, each PDB can also create and use its own temporary tablespaces for its local users.

*Local users and local roles Local users can connect only to the PDB where the users were created. A common user can connect to all the PDBs that are part of a CDB.

*Local metadata The local metadata is specific to each application running in a PDB and therefore isn’t shared with other PDBs.

*PDB Resource Manager Plan These plans allow resource management within a specific PDB. There is separate resource management at the CDB level.

The PDB containers have their own SYSTEM and SYSAUX tablespaces. However, they store only user metadata in the SYSTEM tablespace and not Oracle metadata.

Data files are associated with a specific container. A permanent tablespace can be associated with only one container. When you create a tablespace in a container, that tablespace will always be associated with that container.

Tuesday, November 5, 2019

Oracle GoldenGate Basics



What is Oracle GoldenGate ?

Oracle GoldenGate is a software product that allows you to replicate, filter, and transform data from one database to another.

What You Can do By using Oracle GoldenGate ?

-> Using Oracle GoldenGate, you can move committed transactions across multiple heterogeneous systems in your enterprise.
-> It enables you to replicate data from Oracle databases to other supported heterogeneous databases, and between heterogeneous databases.
-> You can replicate to Java Messaging Queues, flat files, and to Big Data targets in combination with Oracle GoldenGate for Big Data.

Oracle GoldenGate Purposes ?

-> A static extraction of data records from one database and the loading of those records to another database.
-> Continuous extraction and replication of transactional Data Manipulation Language (DML) operations and data definition language (DDL) changes (for supported databases) to keep source and target data consistent.
-> Extraction from a database and replication to a file outside the database.

Oracle GoldenGate Key Features

-> Data movement is in real-time, reducing latency.
-> Only committed transactions are moved, enabling consistency and improving performance.
-> Different versions and releases of Oracle Database are supported along with a wide range of heterogeneous databases running on a variety of operating systems.
-> You can replicate data from an Oracle Database to a different heterogeneous database.
-> Simple architecture and easy configuration.
-> High performance with minimal overhead on the underlying databases and infrastructure.

GoldenGate supported Heterogeneous Environments ?

-> Different platforms (Linux, Windows, etc.)
-> Different databases (Oracle, MySQL, SQLServer, etc.)
-> Different database versions

Oracle GoldenGate Components

-> Extract
-> Data pump
-> Replicat
-> Trails or extract files
-> Checkpoints
-> Manager
-> Collector
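A minimal sketch of checking and starting these components from the GGSCI prompt (the process names ext1, pmp1 and rep1 are illustrative, not from the original):

GGSCI> info all
GGSCI> start manager
GGSCI> start extract ext1
GGSCI> start extract pmp1    -- the data pump is itself an extract process
GGSCI> start replicat rep1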

Oracle GoldenGate Logical Architecture ?




When Do You Use Oracle GoldenGate ?

The most common use cases are
-> Business continuity and high availability
-> Initial load and database migration and upgrades with zero downtime
-> Data integration
-> Decision support and data warehousing.

How Do You Use Oracle GoldenGate ?

There are many different architectures that can be configured, ranging from a simple uni-directional setup to the more complex peer-to-peer. Whatever the architecture, Oracle GoldenGate keeps the configuration model consistent across them, making administration easier.

Oracle Golden Gate Supported Topologies ?



Saturday, October 26, 2019

How To Enable Flash Recovery Area In Oracle Database



The flash recovery area (FRA) is an Oracle-managed destination (either file system or ASM) for centralized backup and recovery files. It simplifies backup management.

The following recovery-related files are stored in the flash recovery area:
— Current control file
— Online redo logs
— Archived redo logs
— Flashback logs
— Control file auto backups
— Datafile and control file copies
— Backup pieces
— Foreign archived redo log

Below are the steps for enabling flash recovery area.

DB_RECOVERY_FILE_DEST_SIZE and DB_RECOVERY_FILE_DEST initial parameters are required for enabling FRA.

DB_RECOVERY_FILE_DEST_SIZE -> It is the disk quota size for the flash recovery area.
DB_RECOVERY_FILE_DEST -> This initialization parameter sets the destination for the Flash Recovery Area. It can be a directory, file system, or ASM disk group.

NOTE : DB_RECOVERY_FILE_DEST_SIZE must be set before DB_RECOVERY_FILE_DEST.

1. Check whether FRA is enabled or not.

SQL> show parameter db_recovery_file

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest                string
db_recovery_file_dest_size           big integer 0

SQL> select * from V$RECOVERY_FILE_DEST;

no rows selected

2. Enable FRA.

SQL> alter system set db_recovery_file_dest_size=20G scope=both;
SQL> alter system set db_recovery_file_dest='/u01/oradata/FRA' scope=both;

SQL> show parameter db_recovery_file

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest                string      /u01/oradata/FRA
db_recovery_file_dest_size           big integer 20G


select * from V$RECOVERY_FILE_DEST;
NAME                                   SPACE_LIMIT SPACE_USED SPACE_RECLAIMABLE NUMBER_OF_FILES     CON_ID
-------------------------------------- ----------- ---------- ----------------- --------------- ----------
/u01/oradata/FRA               2.1475E+10          0                 0               0          0

Now FRA has been enabled.

How to make archivelog destination same as flash recovery area:

SQL> alter system set log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST' scope=both;

System altered.

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     816
Next log sequence to archive   818
Current log sequence           818
SQL> alter system switch logfile;

select * from v$flash_recovery_area_usage;

FILE_TYPE               PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES     CON_ID
----------------------- ------------------ ------------------------- --------------- ----------
CONTROL FILE                             0                         0               0          0
REDO LOG                                 0                         0               0          0
ARCHIVED LOG                           .01                         0               1          0
BACKUP PIECE                             0                         0               0          0
IMAGE COPY                               0                         0               0          0
FLASHBACK LOG                          .49                         0               2          0
FOREIGN ARCHIVED LOG                     0                         0               0          0
AUXILIARY DATAFILE COPY                  0                         0               0          0

FOR RAC:

For a RAC database, the DB_RECOVERY_FILE_DEST_SIZE and DB_RECOVERY_FILE_DEST parameters must be the same across all instances. So it is recommended to put the FRA on ASM disks.

SQL> alter system set db_recovery_file_dest_size=20G scope=both sid='*';
SQL> alter system set db_recovery_file_dest='+FRADG' scope=both sid='*';

   

Sunday, October 20, 2019

Oracle RAC Upgrade From 11.2.0.4 to 12.1.0.2

Oracle RAC Upgrade From 11.2.0.4 to 12.1.0.2


Main steps:

GRID_HOME:

1.check that all the services of the GRID home (11gR2) are up and running
crsctl check crs
crsctl status resource -t
2.check the GRID infrastructure software version
crsctl query crs activeversion
3.check database status and configuration
srvctl status database -d PROD
srvctl config database -d PROD
4.perform the local backup of OCR, GRID and DATABASE.
$GRID_HOME/bin/ocrconfig -export /u01/ocrmanulbkp
use tar command for GRID and DATABASE backup
5.Run the cluvfy utility from 12cR1 to pre-check any errors
./runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome <11g loc> -dest_crshome <12c loc> -dest_version 12.1.0.2 -verbose (-fixup optionally)
6.stop the running 11g database
srvctl stop database -d PROD
7.Unzip the 12c software, change the ownership and permissions
8.start the 12cR1 upgrade by executing ./runInstaller
-select upgrade GRID infrastructure with ASM
-check the existing grid homes
-provide the location of the 12c grid home
9.execute the rootupgrade.sh script on both nodes as the root user
10.check the clusterware upgrade version
crsctl query crs activeversion
11.start the 11g database
srvctl start database -d PROD
srvctl status database -d PROD
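
The OCR export and home backups from step 4 above can be sketched as follows; the /backup paths are only examples, not fixed locations:

# export the OCR (run as root)
$GRID_HOME/bin/ocrconfig -export /u01/ocrmanulbkp

# cold backups of the GRID and DATABASE homes using tar (example target paths)
tar -czvf /backup/grid_home_11g.tar.gz $GRID_HOME
tar -czvf /backup/oracle_home_11g.tar.gz $ORACLE_HOME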

RDBMS_HOME:

1.backup the database before the upgrade (important)
take a level 0 backup or a cold backup
2.database pre-checks
-run runcluvfy.sh stage -pre dbinst -upgrade -src_dbhome <11g> -dest_dbhome <12c> -dest_version 12.1.0.2.0
-unzip the database software
-execute runInstaller
-select the RAC cluster database
-provide the 12c binaries home location
-run the root.sh script on both the nodes with root user
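
After the software-only 12c install, the new home's inventory can be verified with OPatch (standard commands; output depends on the environment):

export ORACLE_HOME=/u01/app/oracle/product/12.1.0.2
$ORACLE_HOME/OPatch/opatch lsinventory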

DATABASE:

1.perform the pre-checks
-check the invalid objects
select count(*) from dba_objects where status='INVALID';
-check for duplicate objects owned by SYS and SYSTEM
a small number of duplicates (typically 4) is expected; anything beyond that needs cleanup
2.run the preupgrade tool
copy preupgrd.sql and utluppkg.sql from the 12c home and run them against the 11g database
preupgrd.sql will generate 3 files: preupgrade.log, preupgrade_fixups.sql and postupgrade_fixups.sql
3.execute the preupgrade_fixups.sql
it will set a few parameters like processes and job_queue_processes, gather dictionary statistics and purge the recyclebin
4.check the timezone version, for 12c database timezone is 18 and 11g is 14
select version from v$timezone_file;
5.make sure no files are in begin backup mode and no files need media recovery
select * from v$recover_file;
6.disable cron jobs and dbms_scheduler jobs
exec dbms_scheduler.disable('<job_name>');
7.remove the EM repository because DB Control is superseded in 12c
emctl stop dbcontrol
@?/rdbms/admin/emremove.sql
8.enable flashback by setting 2 parameters
db_recovery_file_dest_size and db_recovery_file_dest
alter database flashback on;
9.create restore point
create restore point bef_upgrade guarantee flashback database;
10.stop the 11g listener and database.
lsnrctl stop PROD
shut immediate;
11.update the ORACLE_HOME paths to point to the 12c home
export ORACLE_HOME=/u01/app/oracle/product/12.1.0.2
export PATH=$ORACLE_HOME/bin:$PATH
export ORACLE_SID=PROD
12.move spfile and password file from 11g home dbs directory to 12c home dbs directory
13.start the 12c database with startup upgrade option
startup upgrade;
14.run the catupgrd script from the OS level with parallel degree 6
cd $ORACLE_HOME/rdbms/admin
$ORACLE_HOME/perl/bin/perl catctl.pl -n 6 -l $ORACLE_HOME/diagnostics catupgrd.sql
check the logfiles to diagnose the upgrade process
15.run the post upgrade status tool
startup
@utlu121s.sql
16.run the catuppst.sql script
$ORACLE_HOME/rdbms/admin/catuppst.sql
17.upgrade the DST timezone
download the DST upgrade script from My Oracle Support
18.run the postupgrade_fixups.sql script
19.change the ORACLE_HOME to 12c in the listener.ora file
20.uncomment the crontab entries and enable the dbms_scheduler jobs
21.run utlrp.sql to compile the invalid objects
22.update the compatible parameter and bounce the db
alter system set compatible='12.1.0.2' scope=spfile;
Once compatible is set to the higher version, downgrade is not possible
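
The pre-checks above (invalid objects, duplicate objects, guaranteed restore point) can be sketched as SQL, run as SYS; the duplicate-object query is the commonly used comparison of SYS and SYSTEM:

-- invalid objects
select count(*) from dba_objects where status='INVALID';

-- duplicate objects owned by both SYS and SYSTEM
select object_name, object_type
from   dba_objects
where  object_name||object_type in
       (select object_name||object_type from dba_objects where owner='SYS')
and    owner='SYSTEM';

-- confirm the guaranteed restore point exists before starting
select name, guarantee_flashback_database from v$restore_point;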

IF UPGRADE FAILS:

If you have done the pre-checks properly, the chances of the upgrade failing are very low. But if it does fail for some other reason, such as a server crash during the upgrade, follow the steps below to revert to the 11g version.

1.shutdown immediate;
2.set ORACLE_HOME to 11g
3.startup mount (with the 11g spfile)
4.select * from v$restore_point;
5.flashback database to restore point bef_upgrade (this restore point was created before upgrade)
6.alter database open resetlogs;

Oracle RAC Patching Steps In 11gr2

Oracle RAC Patching Steps In 11gr2


A patch is a piece of software code delivered to:

-fix bugs
-correct defects in the software
-deliver new features/tools

Disadvantages:

-Downtime

Classifications of patches:

-Security patches (CPU/SPU)
-PSU

Pre-requisites before patching (PPA):

-check the invalid objects
-patch conflicts
-patch dependencies
-patch status (applied or not)
-check the OPatch version (patch 6880880 provides the latest OPatch)
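
The OPatch pre-checks can be run as the Oracle software owner from the unzipped patch directory, for example:

$ORACLE_HOME/OPatch/opatch version
$ORACLE_HOME/OPatch/opatch lsinventory
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -ph ./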

GRID patching:

While applying patches on the GRID home, note that the GRID home is locked by default.

cd $GRID_HOME/crs/install

#./rootcrs.pl -unlock

-it will stop the grid services.
-unlock the GRID home.

$ opatch apply -local
after applying the patch:

cd $GRID_HOME/crs/install
#rootcrs.pl -patch

-it will start the GRID services
-it will lock the GRID home.

Apply the GRID patch on node 2 with the same steps above.

Patching Steps:

GRID_HOME patching:

-export PATH=$ORACLE_HOME/OPatch:$PATH:.
-check the OPatch version
-check the patch conflicts
-check the invalid objects
-Run rootcrs.pl -unlock script which is located in $GRID_HOME/crs/install with root privileges
-finally apply with OPatch
-opatch apply -local

RDBMS_HOME patching:

-set the paths
-srvctl stop instance -d PROD -i PROD1
-opatch apply -local
-srvctl start instance -d PROD -i PROD1
-Run the catbundle.sql script

Note:
The catbundle.sql script records the patch in the dba_registry_history view.
rootcrs.pl is located in $GRID_HOME/crs/install
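
Whether the patch was recorded can be verified afterwards with a query like the following (column list trimmed for readability):

select action_time, action, version, comments
from   dba_registry_history
order  by action_time;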



Wednesday, October 16, 2019

Oracle Enterprise Manager (OEM) 13c

Oracle Enterprise Manager (OEM) 13c


Oracle Enterprise Manager is Oracle’s solution to manage/monitor various database/applications which may be running on Cloud infrastructure or on-premise.





1) AGENTS
An agent, or management agent, is a piece of software that runs on every host that you want to monitor. Management agents work in conjunction with plug-ins to monitor the target server. All hosts where you install the agents are then termed "Managed Hosts".

Agents can be of two types:

CENTRAL AGENT : When you first install Oracle Management Service (OMS), by default you receive a management agent called the Central Agent. It is used for monitoring the OMS host, the OMS, and the other targets running on this OMS host.

STANDALONE TARGET AGENT : To monitor other hosts and the targets running on those hosts, you must install a separate Standalone Management Agent on each of those hosts.

2) ORACLE MANAGEMENT SERVICE

OMS is the actual brain of OEM. It is deployed on Oracle WebLogic Server, is a web-based application, and performs the following broad-level activities:

-Works with the Management Agents and the plug-ins to discover targets.
-Monitor and manage the agents
-Store the collected information in a repository for future reference and analysis
-Renders the user interface for Enterprise Manager Cloud Control.

3) ORACLE MANAGEMENT REPOSITORY
Oracle Management Repository is simply a database/database schema where all the information collected by the Management Agent gets stored. The Management Repository then organizes the data so that it can be retrieved by the OMS and displayed in the Enterprise Manager Cloud Control console.

4) PLUG-INS
Plug-ins, as the name suggests, are pluggable entities that offer special management capabilities customized to suit specific target types. For example, if your target type is Oracle EBS, you will need specific plug-ins to monitor Oracle EBS. Plug-ins are deployed to the OMS as well as the Management Agent in OEM 13c.

A good thing about plug-ins is that they have independent release cycles, so every time a new version of an Oracle product is released, a new version of the plug-in is released to support monitoring of that product version in Enterprise Manager Cloud Control.

Default Plugins that OEM 13c will install are:

-Oracle Database
-Oracle Fusion Middleware
-Oracle Exadata
-Oracle Cloud Framework
-Oracle System Infrastructure

In addition to these plug-ins, you can optionally install other plug-ins available in the software kit.

5) BI PUBLISHER
Oracle Business Intelligence (BI) Publisher is Oracle’s primary reporting tool for authoring, managing, and delivering all your highly formatted documents. Starting with Oracle Enterprise Manager 13c, Oracle BI Publisher is installed and configured by default on the OMS.

6) CONSOLE
The Console is the GUI front end of the OEM application. With the help of the Console, you can monitor and administer your entire computing environment from one location. All systems and services, including enterprise application systems, databases, hosts, middleware application servers, listeners etc., are visible through the Console.

7) EM CLI
The Enterprise Manager Command Line Interface (EM CLI) is a command-line tool that is accessible through classic programming language constructs, enabling tasks to be created and run either from the command line or programmatically.
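
A minimal EM CLI session looks like the following (sysman is the default super administrator account; the target list depends on your environment):

emcli login -username=sysman
emcli sync
emcli get_targets
emcli logout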

8) TARGETS
A target instance can be defined as any entity that can be monitored within an enterprise. Managed targets are the entities that Enterprise Manager can monitor and manage. Examples of targets include hosts, databases, application servers, applications, and listeners. As your environment changes, you can add and remove targets from Enterprise Manager as required.

9) CONNECTORS
A connector is a very specialized piece of software whose job is to act as an intermediary between OEM and a third-party application, such as the BMC Remedy ticket generation system. Connectors make your life easier in the sense that they give you a ready-made solution for connecting your OEM system to other popular third-party applications.

10) JVMD ENGINE
The Java Virtual Machine Diagnostics (JVMD) Engine enables you to diagnose performance problems in Java applications. Starting with Oracle Enterprise Manager 13c, as part of the Oracle Fusion Middleware Plug-in deployment, one JVMD Engine is installed and configured by default on the OMS. You will also need JVMD Agents to be manually deployed on the targeted JVMs.