
Saturday, April 29, 2017

How Redo Logs Work



The online redo log of a database consists of two or more online redo log files. Oracle needs a minimum of two files to guarantee that one is always available for writing while the other is being archived. The Oracle server sequentially records all changes made to the database in the redo log buffer. The redo entries are written from the redo log buffer to one of the online redo log groups, called the current online redo log group, by the LGWR process.

LGWR writes under the following situations:

-When a transaction commits
-When the redo log buffer becomes one-third full
-When there is more than a megabyte of changed records in the redo log buffer
-Before the DBWn writes modified blocks in the database buffer cache to the data files

LGWR writes to the online redo log files in a circular fashion. Each redo log file group is identified by a log sequence number, which is overwritten each time the group is reused. LGWR writes to the online redo log files continuously: when the current online redo log group is filled, LGWR starts writing to the next group, and when the last online redo log group is filled, LGWR returns to the first online redo log group and starts writing again.
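For reference, the current state of the groups can be checked from the data dictionary. The following is a simple sketch (it must be run by a privileged user) that lists each group, its members, its log sequence number and its status; the optional ALTER SYSTEM command just forces a switch to the next group:

select a.group#, a.member, b.sequence#, b.status, b.archived
from v$logfile a, v$log b
where a.group# = b.group#
order by a.group#;

-- optional: force LGWR to move on to the next group
alter system switch logfile;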

Administrative Privileges



Administrative privileges that are required for an administrator to perform basic database operations are granted through two special system privileges, SYSDBA and SYSOPER. You must have one of these privileges granted to you, depending upon the level of authorization you require.
The SYSDBA and SYSOPER system privileges allow access to a database instance even when the database is not open. Control of these privileges is totally outside of the database itself.
The SYSDBA and SYSOPER privileges can also be thought of as types of connections that enable you to perform certain database operations for which privileges cannot be granted in any other fashion. For example, if you have the SYSDBA privilege, you can connect to the database by specifying CONNECT AS SYSDBA.
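A minimal SQL*Plus sketch of the two connection types (the slash form assumes operating system authentication through the OSDBA/OSOPER groups; otherwise supply a username and password):

CONNECT / AS SYSDBA
SHOW USER       -- reports SYS
CONNECT / AS SYSOPER
SHOW USER       -- reports PUBLIC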

SYSDBA and SYSOPER

-SYSDBA            

Perform STARTUP and SHUTDOWN operations
ALTER DATABASE: open, mount, back up, or change character set
CREATE DATABASE
DROP DATABASE
CREATE SPFILE
ALTER DATABASE ARCHIVELOG
ALTER DATABASE RECOVER
Includes the RESTRICTED SESSION privilege

-SYSOPER          

Perform STARTUP and SHUTDOWN operations
CREATE SPFILE
ALTER DATABASE OPEN/MOUNT/BACKUP
ALTER DATABASE ARCHIVELOG
ALTER DATABASE RECOVER (Complete recovery only. Any form of incomplete recovery, such as UNTIL TIME|CHANGE|CANCEL|CONTROLFILE, requires connecting as SYSDBA.)
Includes the RESTRICTED SESSION privilege

Row Chaining, Row Migrating, PCTFREE, PCTUSED



Row Chaining

A row that is too large to fit into an empty data block is known as a chained row. Oracle stores the data for the row in a chain of data blocks (one or more) reserved for that segment. If you have a 2K block size but your row size is 4K, Oracle will use 3 blocks to store that row in pieces. Any table with a LONG or LONG RAW column will have chained rows. Row chaining most often occurs with large rows, such as rows that contain a column of datatype LONG or LONG RAW, and in these cases it cannot be avoided. Any table whose row size exceeds the block size will have chained rows, and any table with more than 255 columns will have chained rows. You can see whether you have chained rows by looking at the system-wide statistic:

select * from v$sysstat where name like 'table fetch cont%';

Row Migrating

A row that is moved to another block, because an update makes it too large to fit in its original block along with the other rows there, is known as a migrated row. Oracle tries to find another block with enough free space to hold the entire row. If such a block is available, Oracle moves the entire row to the new block and keeps the original row piece of the migrated row to point to the new block containing the actual row. The ROWID of a migrated row does not change, and indexes are not updated; they still point to the original row location.

Migration and chaining have negative effects on performance. INSERT and UPDATE statements that cause migration and chaining perform poorly due to the additional processing, and queries that use an index to select migrated or chained rows must perform additional I/Os.
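To find out which rows of a particular table are chained or migrated, one common approach is ANALYZE ... LIST CHAINED ROWS (a sketch; the table name is illustrative, and the CHAINED_ROWS table is created by the utlchain.sql script shipped with the database):

-- create the CHAINED_ROWS table once
@?/rdbms/admin/utlchain.sql

-- record the rowids of chained/migrated rows of a sample table
ANALYZE TABLE scott.emp LIST CHAINED ROWS INTO chained_rows;

-- inspect the result
SELECT owner_name, table_name, head_rowid
FROM   chained_rows
WHERE  table_name = 'EMP';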

PCTFREE Parameter

The PCTFREE parameter specifies the minimum percentage of a data block to be reserved as free space. If you use PCTFREE 20, at least 20% of each data block is reserved as free space. For example, with a data block size of 8K (DB_BLOCK_SIZE=8192), PCTFREE 20 reserves about 1638 bytes of free space in each data block. This reserved space is used for updates to rows that already exist in the block. You can specify this parameter when issuing the CREATE TABLE statement.
 


PCTUSED Parameter

This parameter specifies the minimum percentage of a block that can be used for row data plus overhead(data block header, table directory, and row directory) before new rows are added to the block. After a data block is filled to the limit determined by PCTFREE, Oracle considers the block unavailable for the insertion of new rows until the percentage of that block falls below the parameter PCTUSED.

If we set PCTUSED 40 in the CREATE TABLE statement, then a data block used for this table's data segment is considered unavailable for the insertion of any new rows until the amount of used space in the block falls to 39% or less (assuming the block's used space has previously reached the limit set by PCTFREE).
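A hypothetical CREATE TABLE using the values discussed above might look like this (the table and tablespace names are illustrative; note that PCTUSED is honored only in tablespaces with manual segment space management):

CREATE TABLE hr.emp_history (
  emp_id    NUMBER,
  change_dt DATE,
  old_sal   NUMBER(10,2)
)
PCTFREE 20   -- keep 20% of each block free for future updates
PCTUSED 40   -- block becomes available for inserts again below 40% usage
TABLESPACE users;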

Shared Pool



The shared pool is used to store the most recently executed SQL statements and the most recently used data definitions. It caches various types of program data and is used when parsing, interpreting, and executing all SQL statements, and it also holds data dictionary information. The shared pool includes the following components:
-The library cache
-The dictionary cache
-Reserved Pool

The total size of the shared pool is determined by the initialization parameter SHARED_POOL_SIZE. Increasing the value of this parameter increases the amount of memory reserved for the shared pool.

Library Cache



The library cache stores information about the most recently used SQL and PL/SQL statements. The library cache is a very important part of the Oracle shared pool, which controls the execution of SQL statements and is divided into the data dictionary cache and the library cache. In a dedicated server configuration the private SQL area is created in the PGA of the server process. Shared SQL areas are accessible to all users, so the library cache is contained in the shared pool within the SGA.

Shared SQL Area

The Shared SQL Area contains the parse tree and execution plan of SQL cursors and PL/SQL programs, so the executable form of SQL statements is available here and can be reused. When a query is submitted to the Oracle server for execution, Oracle checks whether the same query has been executed previously. If the parsed execution plan is found, this event is known as a library cache hit, or soft parsing. If the parsed form of the statement is not found in the shared pool, the new statement is parsed and its parsed version is stored in the Shared SQL Area. This is known as a hard parse.

Oracle allocates memory from the shared pool when a new statement is submitted. If required, Oracle may deallocate memory from previous statements; as a result, those deallocated statements will require hard parsing when they are re-submitted. More resources are used to perform a hard parse, so it is very important to keep the shared pool large enough to avoid excessive hard parsing.
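A rough way to see how much hard parsing an instance is doing is to compare the parse statistics in V$SYSSTAT (a simple sketch; the values are cumulative since instance startup):

select name, value
from   v$sysstat
where  name in ('parse count (total)', 'parse count (hard)');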

Because the library cache is kept inside the shared pool, use the SHARED_POOL_SIZE initialization parameter to increase the size of the shared pool. This indirectly increases the memory available for the Shared SQL Area.

Private SQL area

Each session issuing a SQL statement has a private SQL area in its PGA (see "Private SQL Area"). Each user that submits the same statement has a private SQL area pointing to the same shared SQL area. Thus, many private SQL areas in separate PGAs can be associated with the same shared SQL area. The database automatically determines when applications submit similar SQL statements. The database considers both SQL statements issued directly by users and applications and recursive SQL statements issued internally by other statements. The location of the private SQL area depends on the connection established for the session. If a session is connected through a shared server, then part of the private SQL area is kept in the SGA.



DBhtml.sql (SQL*Plus health-check script that spools an HTML report)

SET MARKUP HTML ON ENTMAP ON SPOOL ON PREFORMAT OFF
set pagesize 1500
set linesize 1000
set trimspool on
set feedback off
set verify off
set underline off
set colsep ','
set echo off
spool /u01/healthcheck/DB_NAME-PROD.HTML
select ' ** Database General Information ** '  from dual;
SELECT DBID "DATABASE_ID", NAME "DB_NAME", LOG_MODE, OPEN_MODE, RESETLOGS_TIME  FROM V$DATABASE;
SELECT instance_name, status, to_char(startup_time,'DD-MON-YYYY HH24:MI:SS') "DB Startup Time"
FROM   sys.v_$instance;
column "Host Name" format a15;
column "Host Address" format a15;
SELECT UTL_INADDR.GET_HOST_ADDRESS "Host Address", UTL_INADDR.GET_HOST_NAME "Host Name" FROM DUAL;
SELECT BANNER "VERSION" FROM V$VERSION;
col "Database Size" format a15;
col "Free space" format a15;
select round(sum(used.bytes) / 1024 / 1024/1024 ) || ' GB' "Database Size",
round(free.p / 1024 / 1024/1024) || ' GB' "Free space"
from (select bytes from v$datafile
union all select bytes from v$tempfile
union all select bytes from v$log) used,
(select sum(bytes) as p from dba_free_space) free
group by free.p;


select ' ** Database Physical Size **' from dual;
select sum(bytes/1024/1024/1024) "DB Physical Size(GB)" from dba_data_files;


select ' ** Database Logical Size **' from dual;
select sum(bytes/1024/1024/1024) "DB Actual Size(GB)" from dba_segments;

select ' ** Database Users who are expiring in 15 days **' from dual;
select username, ACCOUNT_STATUS, EXPIRY_DATE from dba_users where EXPIRY_DATE between sysdate and sysdate + 15;

select ' ** Database SGA Component Size **' from dual;
set line 200;
select pool, m_bytes
from (select pool, to_char(trunc(sum(bytes)/1024/1024, 2), '99999.99') as m_bytes
        from v$sgastat
       where pool is not null
       group by pool
      union
      select name as pool, to_char(trunc(bytes/1024/1024, 3), '99999.99') as m_bytes
        from v$sgastat
       where pool is null
       order by 2 desc)
union all
select 'TOTAL' as pool, to_char(trunc(sum(bytes)/1024/1024, 3), '99999.99') from v$sgastat;

select ' ** DB Characterset Information **' from dual;
Select * from nls_database_parameters;
col name  format A60 heading "Control Files";
select name from   sys.v_$controlfile;
col member  format A40 heading "Redolog Files";
set line 200;
col archived format a15;
col status format a10;
col first_time format a20;
select a.group#, a.member, b.archived, b.status, b.first_time from v$logfile a, v$log b
where a.group# = b.group# order by a.group#;
select ' ** DB Profile and Default Information ** ' from dual;

set line 200;
col username format a25;
col profile format a20;
col default_tablespace format a25;
col temporary_tablespace format a25;
col EXPIRY_DATE format a20;
col lock_date format a20;
Select username, profile, default_tablespace, temporary_tablespace , EXPIRY_DATE,LOCK_DATE from dba_users;


select ' ** Users Log on Information  ** '  from dual;
set line 200;
col OSUSER format a40;
col STATUS format a15
col MACHINE format a35;
Select to_char(logon_time,'dd/mm/yyyy hh24:mi:ss') "Logon_Time",osuser,status,machine from v$session where type !='BACKGROUND';

select ' ** Monitoring Schema Growth Rate **'  from dual;
select    obj.owner "Owner",  obj_cnt "Objects",  decode(seg_size, NULL, 0, seg_size) "Size in MB"
from (select owner, count(*) obj_cnt from dba_objects group by owner) obj,
(select owner, ceil(sum(bytes)/1024/1024) seg_size
    from dba_segments group by owner) seg
where obj.owner  = seg.owner(+)
order by 3 desc ,2 desc, 1;

select ' ** Largest object in Database ** '   from dual;
SET LINE 200;
col SEGMENT_NAME format a30;
col SEGMENT_TYPE format a30;
col BYTES format a30;
col TABLESPACE_NAME FORMAT A30;
SELECT * FROM (select SEGMENT_NAME, SEGMENT_TYPE, BYTES/1024/1024/1024 GB, TABLESPACE_NAME from dba_segments order by 3 desc ) WHERE ROWNUM <= 5;

select ' ** Monitoring Most resource using SQL statements ** ' from dual;
set line 200;
SELECT * FROM   (SELECT Substr(a.sql_text,1,50) sql_text,
               Trunc(a.disk_reads/Decode(a.executions,0,1,a.executions)) reads_per_execution,
               a.buffer_gets, a.disk_reads, a.executions, a.sorts, a.address
        FROM   v$sqlarea a
        ORDER BY 2 DESC)
WHERE  rownum <= 5;

select ' ** Monitoring Objects Created within 7 days ** ' from dual;
select count(1) from dba_objects where CREATED >= sysdate - 7;

select ' ** Counting Invalid object in Database ** '   from dual;
Select owner, object_type, count(*) from dba_objects where status='INVALID' group by owner, object_type;

select ' ** Monitoring Current Running Long Job in DB ** '  from dual;
SELECT SID, SERIAL#, opname, SOFAR, TOTALWORK,
ROUND(SOFAR/TOTALWORK*100,2) COMPLETE
FROM   V$SESSION_LONGOPS
WHERE TOTALWORK != 0 AND    SOFAR != TOTALWORK order by 1;

select ' ** Monitoring DML Lock **'   from dual;
set line 200;
col username format a30;
col lock_type format a20;
col osuser format a30;
col owner format a25;
col object_name format a50;
SELECT s.sid, s. serial#, s.username, l.lock_type, s.osuser, s.machine,
    o.owner, o.object_name, ROUND(w.seconds_in_wait/60, 2) "Wait_Time"
FROM
     v$session s, dba_locks l, dba_objects o, v$session_wait  w
WHERE   s.sid = l.session_id
  AND l.lock_type IN ('DML','DDL')
  AND l.lock_id1 = o.object_id
  AND l.session_id = w.sid
ORDER BY   s.sid;

select ' ** Track Redolog Generation ** '  from dual;
select trunc(completion_time) logdate, count(*) logswitch, round((sum(blocks*block_size) / 1024 / 1024)) "REDO PER DAY(MB)"
from v$archived_log
group by trunc(completion_time)
order by 1;

select ' ** Monitor DB Corruption or Need of Recovery ** '  from dual;
set line 200;
SELECT r.FILE# AS df#, d.NAME AS df_name, t.NAME AS tbsp_name, d.STATUS,
    r.ERROR, r.CHANGE#, r.TIME FROM V$RECOVER_FILE r, V$DATAFILE d, V$TABLESPACE t
    WHERE t.TS# = d.TS# AND d.FILE# = r.FILE#;

select ' **  Tablespace Information  **'  from dual;
col tablespace_name format a15 heading "Tablespace Name"
SELECT Total.name "Tablespace Name",
       nvl(Free_space, 0) Free_space,
       nvl(total_space-Free_space, 0) Used_space,
       total_space
FROM
  (select tablespace_name, sum(bytes/1024/1024) Free_Space
     from sys.dba_free_space
    group by tablespace_name
  ) Free,
  (select b.name,  sum(bytes/1024/1024) TOTAL_SPACE
     from sys.v_$datafile a, sys.v_$tablespace B
    where a.ts# = b.ts#
    group by b.name
  ) Total
WHERE Free.Tablespace_name(+) = Total.name
ORDER BY Total.name
/
select ' ** Shows Used/Free Space Per Datafile **' from dual;

set linesize 200
col file_name format a50 heading "Datafile Name"


SELECT SUBSTR (df.NAME, 1, 40) file_name, df.bytes / 1024 / 1024 allocated_mb,
         ((df.bytes / 1024 / 1024) - NVL (SUM (dfs.bytes) / 1024 / 1024, 0))
               used_mb,
         NVL (SUM (dfs.bytes) / 1024 / 1024, 0) free_space_mb
    FROM v$datafile df, dba_free_space dfs
   WHERE df.file# = dfs.file_id(+)
GROUP BY dfs.file_id, df.NAME, df.file#, df.bytes
ORDER BY file_name;

TTI off

select ' ** Report Tablespace < 10% free space **' from dual;
set pagesize 300;
set linesize 100;
column tablespace_name format a15 heading Tablespace;
column sumb format 999,999,999;
column extents format 9999;
column bytes format 999,999,999,999;
column largest format 999,999,999,999;
column Tot_Size format 999,999 Heading "Total Size(Mb)";
column Tot_Free format 999,999,999 heading "Total Free(Kb)";
column Pct_Free format 999.99 heading "% Free";
column Max_Free format 999,999,999 heading "Max Free(Kb)";
column Min_Add format 999,999,999 heading "Min space add (MB)";
select a.tablespace_name,sum(a.tots/1048576) Tot_Size,
sum(a.sumb/1024) Tot_Free, sum(a.sumb)*100/sum(a.tots) Pct_Free,
ceil((((sum(a.tots) * 15) - (sum(a.sumb)*100))/85 )/1048576) Min_Add
from (select tablespace_name,0 tots,sum(bytes) sumb
from sys.dba_free_space a
group by tablespace_name
union
select tablespace_name,sum(bytes) tots,0 from
sys.dba_data_files
group by tablespace_name) a
group by a.tablespace_name
having sum(a.sumb)*100/sum(a.tots) < 10
order by pct_free;
-- Audited Failed Logins ;
--select ' ** Audited Failed Logins ** ' from dual;
--SELECT *  FROM SYS.DBA_AUDIT_SESSION WHERE returncode != 0  and     TIMESTAMP >=  Sysdate-1  ORDER BY extended_timestamp desc;
--SELECT USERNAME, OS_USERNAME, USERHOST, EXTENDED_TIMESTAMP FROM SYS.DBA_AUDIT_SESSION WHERE returncode != 0 ORDER BY extended_timestamp desc;
--select ' ** Audited Privileges ** ' from dual;
--Audited Privileges
--SELECT PRIV_USED, USERNAME, ACTION_NAME, TIMESTAMP FROM  SYS.DBA_AUDIT_TRAIL dat, SYS.DBA_PRIV_AUDIT_OPTS dpao WHERE
-- dat.PRIV_USED = dpao.PRIVILEGE  and dat.USERNAME = dpao.USER_NAME  and to_char(extended_timestamp AT TIME ZONE DBTIMEZONE, 'YYYY-MM-DD HH24:MI:SS TZD') > TO_CHAR((SYSDATE-1),'YYYY-MM-DD HH24:MI:SS')
--ORDER BY extended_timestamp desc;
--select ' ** Audited Objects ** ' from dual;
--Audited Objects
--SELECT OWNER, OBJ_NAME, USERNAME, ACTION_NAME, TIMESTAMP FROM SYS.DBA_AUDIT_OBJECT   where   TIMESTAMP >= Sysdate-1 ORDER BY extended_timestamp  desc;
select ' ** File I/O statistics ** ' from dual;
prompt
set linesize 150
col name format a50 heading "Datafile Name"
select name,PHYRDS "Physical Reads",PHYWRTS "Physical Writes",READTIM "Read Time(ms)",WRITETIM "Write Time(ms)",AVGIOTIM "Avg Time" from v$filestat, v$datafile where v$filestat.file#=v$datafile.file#;
set feedback on
prompt

select ' ** Total DB Applied Patches List ** '  from dual;
  SELECT    'Patch '
         || PATCH_ID
         || '     '
         || ': applied on '
         || ACTION_TIME
         || CHR (10)
         || 'Unique Patch ID: '
         || PATCH_UID
         || CHR (10)
         || 'Patch description: "'
         || DESCRIPTION
         || '"'    PATCH_DETAIL
    FROM dba_registry_sqlpatch
ORDER BY action_time
/

select ' ** Monitor ASM Diskgroup Sizes ** '  from dual;
SET LINESIZE  145
SET PAGESIZE  9999
SET VERIFY    off
COLUMN group_name             FORMAT a20           HEAD 'Disk Group|Name'
COLUMN sector_size            FORMAT 99,999        HEAD 'Sector|Size'
COLUMN block_size             FORMAT 99,999        HEAD 'Block|Size'
COLUMN allocation_unit_size   FORMAT 999,999,999   HEAD 'Allocation|Unit Size'
COLUMN state                  FORMAT a11           HEAD 'State'
COLUMN type                   FORMAT a6            HEAD 'Type'
COLUMN total_mb               FORMAT 999,999,999   HEAD 'Total Size (MB)'
COLUMN used_mb                FORMAT 999,999,999   HEAD 'Used Size (MB)'
COLUMN pct_used               FORMAT 999.99        HEAD 'Pct. Used'

break on report on disk_group_name skip 1
compute sum label "Grand Total: " of total_mb used_mb on report

SELECT
    name                                     group_name
  , sector_size                              sector_size
  , block_size                               block_size
  , allocation_unit_size                     allocation_unit_size
  , state                                    state
  , type                                     type
  , total_mb                                 total_mb
  , (total_mb - free_mb)                     used_mb
  , ROUND((1- (free_mb / total_mb))*100, 2)  pct_used
FROM
    v$asm_diskgroup
ORDER BY
    name
/

select ' ** Monitor RMAN Backup JOB Details ** '  from dual;
select START_TIME, END_TIME, INPUT_BYTES, STATUS, INPUT_TYPE from v$rman_backup_job_details
where END_TIME > (sysdate-7);


select ' ** Monitor Flash Recovery Area Usage ** '  from dual;
select name,
       round(space_limit / 1024 / 1024) space_limit_mb,
       round(space_used / 1024 / 1024) space_used_mb,
       percent_space_used,
       percent_space_reclaimable,
       percent_space_not_reclaimable
from v$recovery_file_dest,
     (select sum(percent_space_reclaimable) percent_space_reclaimable,
             sum(percent_space_used) percent_space_used,
             sum(percent_space_used - percent_space_reclaimable) percent_space_not_reclaimable
      from v$recovery_area_usage)
order by name;

select ' ** Monitor Standby Database Sequence ** '  from dual;
set line 150
col NAME for a50
set lines 110
select p.prim_max_sequence,
         d.dr_max_archived_sequence,
         ad.dr_max_applied_sequence,
         p.prim_max_sequence-d.dr_max_archived_sequence archive_lag_in_dr,
         p.prim_max_sequence-ad.dr_max_applied_sequence applied_lag_in_dr
from
(select max(sequence#) prim_max_sequence from v$archived_log where dest_id=1 and archived='YES') p,
(select max(sequence#) dr_max_archived_sequence from v$archived_log where standby_dest='YES' and archived='YES') d,
(select max(sequence#) dr_max_applied_sequence from v$archived_log where standby_dest='YES' and archived='YES' and applied='YES') ad;

select ' ** Monitor DB Active Sessions ** '  from dual;
select s.status,count(1),s.username from v$process p,v$session s where paddr(+)=addr group by s.status,s.username order by 1;

select ' ** Monitor Total Open Cursors ** '  from dual;
set lines 333;
select sum(a.value) total_cur, avg(a.value) avg_cur, max(a.value) max_cur,
s.username, s.machine
from v$sesstat a, v$statname b, v$session s
where a.statistic# = b.statistic#  and s.sid=a.sid
and b.name = 'opened cursors current'
group by s.username, s.machine
order by 1 desc;

select ' ** Monitor Archivelog Generation Details ** '  from dual;
SELECT TO_CHAR (COMPLETION_TIME, 'DD/MM/YYYY') DAY,
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '00', 1, NULL))
            "00-01",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '01', 1, NULL))
            "01-02",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '02', 1, NULL))
            "02-03",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '03', 1, NULL))
            "03-04",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '04', 1, NULL))
            "04-05",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '05', 1, NULL))
            "05-06",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '06', 1, NULL))
            "06-07",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '07', 1, NULL))
            "07-08",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '08', 1, NULL))
            "08-09",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '09', 1, NULL))
            "09-10",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '10', 1, NULL))
            "10-11",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '11', 1, NULL))
            "11-12",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '12', 1, NULL))
            "12-13",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '13', 1, NULL))
            "13-14",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '14', 1, NULL))
           "14-15",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '15', 1, NULL))
            "15-16",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '16', 1, NULL))
            "16-17",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '17', 1, NULL))
            "17-18",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '18', 1, NULL))
            "18-19",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '19', 1, NULL))
            "19-20",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '20', 1, NULL))
            "20-21",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '21', 1, NULL))
            "21-22",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '22', 1, NULL))
            "22-23",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '23', 1, NULL))
            "23-00",
         COUNT (*) TOTAL
    FROM V$ARCHIVED_LOG
WHERE ARCHIVED='YES' and completion_time > (sysdate-7)
GROUP BY TO_CHAR (COMPLETION_TIME, 'DD/MM/YYYY')
ORDER BY TO_DATE (DAY, 'DD/MM/YYYY');

select ' ** Monitor DB Block Session Details ** '  from dual;
SELECT a.inst_id, a.sid "BLOCKING ID", d.spid "BLOCKING SPID", b.sid "WAITING ID", c.sql_id "BLOCKING_SQLID" FROM gv$lock a, gv$lock b, gv$session c, gv$process d
WHERE a.BLOCK = 1 AND b.request > 0 AND a.id1 = b.id1 AND a.id2 = b.id2 and a.sid=c.sid and c.paddr=d.addr
/

select ' ** Monitor Standby Database GAP ** '  from dual;
Col APPLIED_TIME format a20
Col destination format a20
Col Status format a20
SELECT DB_NAME,destination,  APPLIED_TIME, LOG_APPLIED,LOG_ARCHIVED,
(
  CASE
    WHEN ((APPLIED_TIME            IS NOT NULL    AND (LOG_ARCHIVED-LOG_APPLIED) IS NULL)
    OR (APPLIED_TIME               IS NULL    AND (LOG_ARCHIVED-LOG_APPLIED) IS NOT NULL)
    OR ((LOG_ARCHIVED-LOG_APPLIED)  > 1))
    THEN 'Error! Log Gap is '
    ELSE 'OK!'
  END) Status,
   LOG_ARCHIVED-LOG_APPLIED LOG_GAP
FROM
  ( SELECT INSTANCE_NAME DB_NAME FROM GV$INSTANCE WHERE INST_ID = 1 ),
  (SELECT MAX(SEQUENCE#) LOG_ARCHIVED   FROM V$ARCHIVED_LOG    WHERE DEST_ID=1 ),
   (select applied_seq# as LOG_APPLIED,destination as destination  from v$archive_dest_status WHERE DEST_ID=3 ),
  (SELECT TO_CHAR(MAX(COMPLETION_TIME),'DD-MON/HH24:MI') APPLIED_TIME  FROM V$ARCHIVED_LOG  WHERE DEST_ID=1 );

select ' ** Monitor DB Corruptions ** '  from dual;
SELECT distinct 'Data Block# '|| block# || ' of Data File '
          || name || ' is corrupted.'
   FROM   v$backup_corruption a, v$datafile b
   WHERE  a.file# = b.file#;


select ' ** Monitor DB Incidents ** '  from dual;
set linesize 170
col RECENT_PROBLEMS_1_WEEK_BACK for a45
select PROBLEM_KEY RECENT_PROBLEMS_1_WEEK_BACK, to_char(FIRSTINC_TIME,'DD-MON-YY HH24:mi:ss') FIRST_OCCURRENCE, to_char(LASTINC_TIME,'DD-MON-YY HH24:mi:ss') LAST_OCCURRENCE
FROM V$DIAG_PROBLEM WHERE LASTINC_TIME > SYSDATE - 7;

select ' ** Monitor DB Alertlogfile ** '  from dual;
SELECT distinct to_char(originating_timestamp,'DD.MM.YYYY HH24:MI:SS'), message_text FROM x$dbgalertext
where originating_timestamp > sysdate-2 and message_text LIKE 'ORA-%';

select ' ** Current DB Alertlogfile Location ** '  from dual;
col NAME for a15
col VALUE for a60
select name, value from v$diag_info where name = 'Diag Trace';

!df -kh

!uptime

spool off
exit

Stream Pool



This is a new area in Oracle Database 10g that is used to provide buffer areas for the streams components of Oracle.  To configure the Streams pool explicitly, specify the size of the pool in bytes using the streams_pool_size initialization parameter. If the size of the Streams pool is greater than zero, then any SGA memory used by Streams is allocated from the Streams pool. If the size of the Streams Pool is zero or not specified, then the memory used by Streams is allocated from the shared pool and may use up to 10% of the shared pool.
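For example, assuming an spfile is in use (and the 64M value is just an illustration), the Streams pool can be sized explicitly and then checked in V$SGASTAT:

ALTER SYSTEM SET streams_pool_size = 64M SCOPE = BOTH;

select pool, round(sum(bytes)/1024/1024) mb
from   v$sgastat
where  pool = 'streams pool'
group  by pool;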

Database Buffer Cache



The database buffer cache is one of the most important components of the System Global Area (SGA). It is the place where data blocks are copied from datafiles to perform SQL operations. The buffer cache is a shared memory structure and is concurrently accessed by all server processes. The buffers in the cache are organized in two lists: the write list and the least recently used (LRU) list. The write list holds dirty buffers, which contain data that has been modified but has not yet been written to disk.

The LRU list holds pinned, clean, free and dirty buffers that have not yet been moved to the write list. Free buffers do not contain any useful data and are available for use. Pinned buffers are currently being accessed. When an Oracle process accesses a buffer, the process moves the buffer to the most recently used (MRU) end of the LRU list; as more buffers are continually moved to the MRU end, dirty buffers age toward the LRU end of the LRU list.

When an Oracle server process requires a specific data block, it first searches for it in the buffer cache. If it finds the required block, the block is accessed directly and this event is known as a cache hit. If the search in the buffer cache fails, the block is read from the datafile on disk and the event is called a cache miss. If the required block is not found in the buffer cache, the process needs a free buffer into which to read the data from disk, and it starts searching for one from the least recently used end of the LRU list. While searching, if the process finds dirty buffers in the LRU list it moves them to the write list. If the process cannot find a free buffer within a certain amount of time, it signals DBWn to write dirty buffers to disk.

When the user process is performing a full table scan, it reads the blocks of the table into buffers and puts them on the LRU end (instead of the MRU end) of the LRU list. This is because a fully scanned table usually is needed only briefly, so the blocks should be moved out quickly to leave more frequently used blocks in the cache.
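The effectiveness of the buffer cache is often summarized by the (approximate) cache hit ratio, which can be derived from instance-wide statistics; a sketch (the values are cumulative since startup, so interpret them with care):

select round((1 - phy.value / (db.value + con.value)) * 100, 2) as buffer_cache_hit_pct
from   v$sysstat phy, v$sysstat db, v$sysstat con
where  phy.name = 'physical reads'
and    db.name  = 'db block gets'
and    con.name = 'consistent gets';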


Size of the Database Buffer Cache

Oracle allows different block sizes for different tablespaces. A standard block size is defined by the DB_BLOCK_SIZE initialization parameter, and the SYSTEM tablespace uses the standard block size. The DB_CACHE_SIZE parameter is used to define the size of the database buffer cache for the standard block size.
You can also set the size of two additional buffer pools, KEEP and RECYCLE, by setting DB_KEEP_CACHE_SIZE and DB_RECYCLE_CACHE_SIZE. These three parameters are independent of one another.

Buffer Pools



A buffer pool is a collection of buffers. The database buffer cache is divided into one or more buffer pools.

You can manually configure separate buffer pools that either keep data in the buffer cache or make the buffers available for new data immediately after using the data blocks. You can then assign specific schema objects to the appropriate buffer pool to control how blocks age out of the cache.

The possible buffer pools are as follows:

Default pool

This pool is the location where blocks are normally cached. Unless you manually configure separate pools, the default pool is the only buffer pool.

Keep pool

This pool is intended for blocks that were accessed frequently, but which aged out of the default pool because of lack of space. The goal of the keep buffer pool is to retain objects in memory, thus avoiding I/O operations.

Recycle pool

This pool is intended for blocks that are used infrequently. A recycle pool prevents these objects from consuming unnecessary space in the cache.

You can create a tablespace with a block size that differs from the standard size. Each nondefault block size has its own pool, and Oracle Database manages the blocks in these pools in the same way as in the default pool. For example, with a default block size of 8 KB, the buffer cache might contain default, keep, and recycle pools plus separate pools for tablespaces that use the nonstandard block sizes of 2 KB, 4 KB, and 16 KB.
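As an illustration (the table name and sizes are hypothetical, and SCOPE = BOTH assumes an spfile), a frequently read lookup table could be pinned in the keep pool like this:

ALTER SYSTEM SET db_keep_cache_size = 128M SCOPE = BOTH;

ALTER TABLE hr.countries STORAGE (BUFFER_POOL KEEP);

-- verify which buffer pool a segment is assigned to
select owner, segment_name, buffer_pool
from   dba_segments
where  owner = 'HR' and segment_name = 'COUNTRIES';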

Redo Log Buffer



Oracle Database keeps a record of changes made to data. Every time a user performs a DML, DDL or DCL operation, redo entries for it are also created. Redo entries contain the information necessary to reconstruct, or redo, changes made to the database by INSERT, UPDATE, DELETE, CREATE, ALTER, or DROP operations. Redo entries are used for database recovery, if necessary. Oracle Database processes copy redo entries from the user memory space to the redo log buffer in the SGA. The redo entries take up continuous, sequential space in the buffer, and the background process log writer (LGWR) writes the redo log buffer to the active online redo log group on disk.
If the database is running in archive log mode, archived log files are created from the redo log files. These archived log files are used in various database recovery techniques.

The LOG_BUFFER initialization parameter is used to set the size of the redo log buffer.
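The current setting and the actual allocation can be checked as follows (LOG_BUFFER is a static parameter, so changing it requires an instance restart):

select name, value from v$parameter where name = 'log_buffer';

select name, bytes from v$sgastat where name = 'log_buffer';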

Server Process



A server process is a program that directly interacts with the Oracle server. Once a user has established a connection, a server process is started to handle the user process's requests. A server process can be either a dedicated server process or a shared server process. In a dedicated server environment, the server process handles the requests of a single user process, and once the user process disconnects, the server process is terminated. In a shared server environment, the server process handles the requests of several user processes. The server process communicates with the Oracle server using the Oracle Program Interface. Server processes (or the server portion of combined user/server processes) created on behalf of each user's application can perform one or more of the following:

-Parse and run SQL statements issued through the application
-Read necessary data blocks from datafiles on disk into the shared database buffers of the SGA, if the blocks are not already present in the SGA
-Return results in such a way that the application can process the information

Background Process



An Oracle instance is the combination of the background processes and memory structures. The relationship between the physical and memory structures is maintained and enforced by Oracle's background processes. Background processes perform functions on behalf of the invoking process. They consolidate functions that would otherwise be handled by multiple Oracle programs running for each user. The background processes perform input/output (I/O) and monitor other Oracle processes to provide increased parallelism for better performance and reliability. Many other processes exist that are used by other options within Oracle, such as Advanced Queuing, Real Application Clusters, Shared Server, Advanced Replication, and so on. The background processes in an Oracle instance can include the following:

Database Writer Process (DBWn)

Log Writer Process (LGWR)

Checkpoint Process (CKPT)

System Monitor Process (SMON)

Process Monitor Process (PMON)


Optional Background Process:

Recoverer Process (RECO)

Archiver Processes (ARCn)

Queue Monitor Processes (QMNn)

User Process




When a user runs an Oracle tool, Oracle creates a user process to run the user's application. A database user who needs to request information from the database must first make a connection with the Oracle server. The connection is requested using a database interface tool, such as SQL*Plus, which begins the user process. The user process does not interact directly with the Oracle server; rather, it generates calls through the user program interface (UPI), which creates a session and starts a server process.

Connections and Sessions

The terms "connection" and "session" are closely related to the term "user process", but are very different in meaning. A connection is a communication pathway between a user process and an Oracle instance. A communication pathway is established using available inter-process communication mechanisms (on a computer that executes both the user process and Oracle) or network software (when different computers execute the database application and Oracle, and communicate via a network).  A session is a specific connection of a user to an Oracle instance via a user process. For example, when a user starts SQL*Plus, the user must provide a valid username and password and then a session is established for that user. A session lasts from the time the user connects until the time the user disconnects or exits the database application.

Introduction To Processes




All connected Oracle users must execute two modules of code to access an Oracle database instance:

-Application or Oracle tool
-Oracle server code

These code modules are executed by processes. A process is a "thread of control" or a mechanism in an operating system that can execute a series of steps. (Some operating systems use the terms job or task.) A process normally has its own private memory area in which it runs.
The process structure varies for different Oracle configurations, depending on the operating system and the choice of Oracle options. The main process types are:

-User Process
-Server Process
-Background Process 

Instance Memory Structure



The sizes of Oracle's memory structures affect the performance of the Oracle database server and are controlled by initialization parameters, which can be categorized as memory parameters. When a database is created with DBCA, the memory parameters are automatically set to optimal values based on your specification of the database workload. However, as your database usage expands, you might find it necessary to alter the settings of the memory parameters. Oracle provides alerts and advisors to identify memory sizing problems and to help you determine appropriate values for memory parameters. Oracle's memory structure consists of two memory areas known as:

• System Global Area (SGA): Allocated at instance startup, and is a fundamental component of an Oracle Instance

• Program Global Area (PGA): Allocated when the server process is started

Program Global Area (PGA)



The Program Global Area or Process Global Area (PGA) is a memory region that contains data and control information for a single server process or a single background process. The PGA is allocated when a process is created and deallocated when the process is terminated. In contrast to the SGA, which is shared by several processes, the PGA is an area that is used by only one process.

Content of the PGA

The content of the PGA memory varies, depending on whether the instance is running the shared server option. But generally speaking, the PGA memory can be classified as follows.

Private SQL Area

A private SQL area contains data such as bind information and runtime memory structures. Each session that issues a SQL statement has a private SQL area. Each user that submits the same SQL statement has his or her own private SQL area that uses a single shared SQL area. Thus, many private SQL areas can be associated with the same shared SQL area. The private SQL area of a cursor is itself divided into two areas whose lifetimes are different: The persistent area, which contains, for example, bind information. It is freed only when the cursor is closed. The run-time area, which is freed when the execution is terminated.

Cursors and SQL Areas

The application developer of an Oracle precompiler program or OCI program can explicitly open cursors, or handles to specific private SQL areas, and use them as a named resource throughout the execution of the program. Recursive cursors that Oracle issues implicitly for some SQL statements also use shared SQL areas. The management of private SQL areas is the responsibility of the user process. The allocation and deallocation of private SQL areas depends largely on which application tool you are using, although the number of private SQL areas that a user process can allocate is always limited by the initialization parameter OPEN_CURSORS. The default value of this parameter is 50

PGA Memory Management for Dedicated Mode

You can automatically and globally manage the size of SQL work areas. The database administrator simply needs to specify the total size dedicated to PGA memory for the Oracle instance by setting the initialization parameter PGA_AGGREGATE_TARGET. The specified number (for example, 2G) is a global target for the Oracle instance, and Oracle tries to ensure that the total amount of PGA memory allocated across all database server processes never exceeds this target.
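For example (the 2G value is illustrative, and SCOPE = BOTH assumes an spfile is in use), the target can be set and its effect observed in V$PGASTAT:

ALTER SYSTEM SET pga_aggregate_target = 2G SCOPE = BOTH;

select name, round(value/1024/1024) mb
from   v$pgastat
where  name in ('aggregate PGA target parameter',
                'total PGA allocated',
                'total PGA inuse');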

System Global Area (SGA)



The System Global Area (SGA) is a group of shared memory areas that are dedicated to an Oracle instance. All Oracle processes use the SGA to hold information. The SGA is used to store incoming data and internal control information that is needed by the database. You control the amount of memory to be allocated to the SGA by setting some of the Oracle initialization parameters.

An SGA and Oracle processes constitute an Oracle instance. Oracle automatically allocates memory for an SGA when you start an instance, and the operating system reclaims the memory when you shut down the instance. Each instance has its own SGA. The SGA is read/write. All users connected to a multiple-process database instance can read information contained within the instance's SGA, and several processes write to the SGA during execution of Oracle.

The SGA contains the following data structures:

Database buffer cache
Redo log buffer
Shared pool
Java pool
Large pool
Data dictionary cache

Part of the SGA contains general information about the state of the database and the instance, which the background processes need to access; this is called the fixed SGA. 
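A quick way to see the main SGA components and their current sizes (V$SGAINFO is available in 10g and later):

select name, round(bytes/1024/1024) size_mb
from   v$sgainfo
order  by bytes desc;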

RMAN Configurations



-CONFIGURE RETENTION POLICY

It is used with two different options: Recovery Window or Redundancy.

Redundancy: CONFIGURE RETENTION POLICY TO REDUNDANCY 1;

If you set Redundancy to 1, RMAN keeps one backup. If you take a second backup, RMAN marks the previous backup as obsolete. You can delete obsolete backups with the "delete obsolete" command.

Recovery Window: CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 3 DAYS;

If you set the recovery window to 3 days, RMAN marks backups older than 3 days as obsolete.
For example, we have 4 backups, taken on 16 May, 18 May, 21 May and 23 May.

If we set the recovery window to 3 days, we might expect RMAN to mark the 16 May and 18 May backups as obsolete (23 May - 3 days = 20 May). But RMAN marks only the 16 May backup as obsolete, because the 18 May backup is still needed to recover the database to a point in time on 20 May.

-CONFIGURE DEFAULT DEVICE TYPE

Backups can be written to two different device types: disk and tape.

For backing up to disk: CONFIGURE DEFAULT DEVICE TYPE TO DISK;

For backing up to tape: CONFIGURE DEFAULT DEVICE TYPE TO SBT;

-CONFIGURE CONTROLFILE AUTOBACKUP ON/OFF

Automatically backs up the control file to the FRA. (If you do not set the FRA, the control file backup goes to $ORACLE_HOME/dbs by default.)

-CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'

By default, RMAN automatically names control file backups and saves them in the FRA. %F generates a unique filename like c-IIIIIIIIII-YYYYMMDD-QQ, where IIIIIIIIII is the DBID, YYYYMMDD is the date, and QQ is a hexadecimal sequence number. You can also configure RMAN to write the control file autobackup to a specific location. For example:

-CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/u01/oradata/cf_%F';

You can also reset configuration.

-CONFIGURE CONTROLFILE AUTOBACKUP FOR DEVICE TYPE DISK CLEAR;

-To change  the location of RMAN backups:

RMAN> configure channel device type disk format '/u01/app/oracle/oradata/orcl/backup/%U' maxpiecesize 8G;


You can also write in RUN script.

RMAN> SET CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE sbt TO 'controlfile_%F';
RMAN> BACKUP AS COPY DATABASE;
RMAN> RUN {
SET CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/tmp/%F.bck';
BACKUP AS BACKUPSET DEVICE TYPE DISK DATABASE;
}

Format Description

%a Specifies the activation id number of the database.
%A Specifies the activation id number of the database, zero-padded
%c Specifies the copy number of the backup piece within a set of duplexed backup pieces. The maximum value is 256.
%d Specifies the name of the database
%D Specifies the current day of the month in format DD
%e Specifies the archived log sequence number
%f Specifies the absolute file number
%F Combines the DBID, day, month, year, and sequence into a unique and repeatable generated name
%h Specifies the archived redo log thread number
%I Specifies the DBID
%M Specifies the month in format MM
%n Specifies the name of the database, padded on the right with x characters to a total length of eight characters
%N Specifies the tablespace name.
%p Specifies the piece number within the backup set
%r Resetlogs ID
%s Specifies the backup set number
%S Specifies the backup set number, zero-padded
%t Specifies the backup set time stamp.
%T Specifies the year, month, and day in format YYYYMMDD
%u Specifies an 8-character name constituted by compressed representations of the backup set or image copy number.
%U Specifies a system-generated unique file name.
%Y Specifies the year in this format: YYYY.

-CONFIGURE BACKUP OPTIMIZATION OFF/ON

If you set this configuration to ON, the BACKUP command skips backing up files when an identical file has already been backed up. RMAN uses several criteria to determine whether a file is identical to a file that has already been backed up.

-CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET
Specifies the number of channels, which determines whether RMAN reads or writes in parallel. You must also specify DEVICE TYPE.

-CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 3

The "datafile backup copies" configuration is used to specify how many copies of each backup piece should be created on the specified device type for datafiles. In the example above, RMAN will create 3 copies of each backup piece; this is known as a duplexed (mirrored) backup. You can specify 3 different locations with the FORMAT option.

BACKUP DEVICE TYPE DISK COPIES 3 DATAFILE 7 FORMAT
'/tmp/%U', '?/oradata/%U', '?/%U';

-CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1
The "archivelog backup copies" configuration is used to specify how many copies of each backup piece should be created on the specified device type for archived log files. You can specify 2 different locations with the FORMAT option.

-CONFIGURE MAXSETSIZE TO UNLIMITED

You can use the MAXSETSIZE parameter on the BACKUP and CONFIGURE commands to set a limit for the size of backup sets. If you set MAXSETSIZE to a value smaller than the size of a datafile being backed up, you will get an error. The default value of this configuration is UNLIMITED.

-CONFIGURE ENCRYPTION FOR DATABASE OFF/ON

Encrypted backups cannot be read if they are obtained by unauthorized users. This configuration specifies whether encryption will be used or not.

-CONFIGURE ENCRYPTION ALGORITHM 'AES128'

-CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE

There are two compression methods, ZLIB and BZIP2. ZLIB consumes less CPU but its compression ratio is lower; BZIP2 consumes more CPU but its compression ratio is higher.

-CONFIGURE COMPRESSION ALGORITHM 'ZLIB';

-CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO COMPRESSED BACKUPSET;

There are three compression levels: LOW, MEDIUM and HIGH. The HIGH level is not recommended in most cases because it is best suited to backups over slower networks; the MEDIUM level is recommended.

-CONFIGURE ARCHIVELOG DELETION POLICY TO NONE

Specifies the archive log deletion policy.

In 10g you can set;

-CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO {ARCHIVERETENTION};

In 11g you can set;

-CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO DISK;

The archived log deletion policy also has options specific to Data Guard. For example, if you set the archived log deletion policy to APPLIED ON STANDBY, RMAN can delete logs after they have been applied at all mandatory remote destinations.

-CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;

-CONFIGURE CHANNEL DEVICE TYPE DISK MAXPIECESIZE

You can use maxpiecesize channel parameter to set limits on the size of backup pieces. In my example below, I limit the backup piece size to 2G.

-CONFIGURE CHANNEL DEVICE TYPE DISK MAXPIECESIZE = 2G;

-CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/oracle/ora10g/dbs/snapcf_test11g.f'

RMAN needs a snapshot control file when resynchronizing with the recovery catalog or taking a backup of the current control file. The default location is platform specific and depends on the Oracle home of each target database; in a Linux environment it is located in the $ORACLE_HOME/dbs directory. You can change its location with this configuration.
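All of the settings above can also be checked from SQL*Plus; a small sketch is shown below (settings that are still at their defaults do not appear in this view). Within the RMAN client itself, SHOW ALL lists the full configuration.

select conf#, name, value
from   v$rman_configuration
order  by conf#;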

Install Oracle 11G Release 2



Oracle software that was verified

Oracle Database 11g Release 2 (11.2.0.1.0) for Linux x86-64

linux.x64_11gR2_database_1of2.zip (1,239,269,270 bytes) (cksum - 3152418844)
linux.x64_11gR2_database_2of2.zip (1,111,416,131 bytes) (cksum - 3669256139)

OS configuration and preparation

OS configuration is executed as root. To log in as root, just execute the following command in a terminal.

su - root

Add groups

groupadd -g 54321 oinstall
groupadd -g 54322 dba
groupadd -g 54323 oper
groupadd -g 54329 asmadmin

Add user Oracle

useradd -u 54321 -g oinstall -G dba,oper,asmadmin oracle

Change password for user

passwd oracle

Add kernel parameters to /etc/sysctl.conf

kernel.shmmni = 4096
kernel.shmmax = 4398046511104
kernel.shmall = 1073741824
kernel.sem = 250 32000 100 128
fs.aio-max-nr = 1048576
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048586

Apply kernel parameters

/sbin/sysctl -p

Add following lines to set shell limits for user oracle in file /etc/security/limits.conf

oracle   soft   nproc    131072
oracle   hard   nproc    131072
oracle   soft   nofile   131072
oracle   hard   nofile   131072
oracle   soft   core     unlimited
oracle   hard   core     unlimited
oracle   soft   memlock  50000000
oracle   hard   memlock  50000000

The /etc/hosts file must contain a fully qualified name for the server.

<IP-address>  <fully-qualified-machine-name>  <machine-name>

For example.

127.0.0.1 linux linux.dbaora.com localhost localhost.localdomain

Modify .bash_profile for user oracle in his home directory and add following lines

# Oracle Settings
export TMP=/tmp

export ORACLE_HOSTNAME=linux.dbaora.com
export ORACLE_UNQNAME=ORA11G
export ORACLE_BASE=/ora01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export ORACLE_SID=ORA11G

PATH=/usr/sbin:$PATH:$ORACLE_HOME/bin

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib;
export CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib;

alias cdob='cd $ORACLE_BASE'
alias cdoh='cd $ORACLE_HOME'
alias tns='cd $ORACLE_HOME/network/admin'
alias envo='env | grep ORACLE'

umask 022

Packages

Check which packages are installed and which are missing

rpm -q --qf '%{NAME}-%{VERSION}-%{RELEASE}(%{ARCH})\n' \
  binutils \
  compat-libstdc++-33 \
  elfutils-libelf \
  elfutils-libelf-devel \
  gcc \
  gcc-c++ \
  glibc \
  glibc-common \
  glibc-devel \
  glibc-headers \
  ksh \
  libaio \
  libaio-devel \
  libgcc \
  libstdc++ \
  libstdc++-devel \
  make \
  sysstat \
  unixODBC \
  unixODBC-devel \
  numactl \
  numactl-devel

You can install missing packages from dvd. Just mount it and install missing packages using rpm -Uvh command from directory <mount dvd>/Packages or by using yum install command.

First option, from the DVD:
rpm -Uvh libaio-devel*.x86_64.rpm
rpm -Uvh ksh*.x86_64.rpm

Second option, using the yum install command (requires internet access):
yum install libaio-devel*.x86_64
yum install ksh*.x86_64

Stop and disable firewall

systemctl stop firewalld
systemctl disable firewalld

Edit the "/etc/redhat-release" file, replacing the current release information (for example "linux release 23 (Twenty Three)") with the following.

redhat release 6
Add the following line to the “/etc/pam.d/login” file, if it does not already exist.

session    required     pam_limits.so

Create directory structure

ORACLE_BASE – /ora01/app/oracle
ORACLE_HOME – /ora01/app/oracle/product/11.2.0/db_1

mkdir -p /ora01/app/oracle/product/11.2.0/db_1

chown -R oracle:oinstall /ora01

Secure Linux

Disable secure linux by editing the “/etc/selinux/config” file, making sure the SELINUX  flag is set as follows. It requires REBOOT to be effective

SELINUX=permissive

Install database software

Issue the following command to avoid problems with starting the binary installation in graphical mode

xhost +<machine-name>

--in my case it's
xhost +linux.dbaora.com

Let’s start with database software installation as oracle user.

su - oracle

--unzip the 11gR2 database software
unzip linux.x64_11gR2_database_1of2.zip
unzip linux.x64_11gR2_database_2of2.zip

--I defined 4 aliases in .bash_profile of user oracle to make
--administration easier :)

[oracle@linux]$ alias envo cdob cdoh tns
alias envo='env | grep ORACLE'
alias cdob='cd $ORACLE_BASE'
alias cdoh='cd $ORACLE_HOME'
alias tns='cd $ORACLE_HOME/network/admin'

--run alias command envo to display environment settings

envo
ORACLE_UNQNAME=ORA11G
ORACLE_SID=ORA11G
ORACLE_BASE=/ora01/app/oracle
ORACLE_HOSTNAME=linux.dbaora.com
ORACLE_HOME=/ora01/app/oracle/product/11.2.0/db_1

--run alias command cdob and cdoh to check ORACLE_BASE, ORACLE_HOME
[oracle@linux]$ cdob
[oracle@linux oracle]$ pwd
/ora01/app/oracle

[oracle@linux db_1]$ cdoh
[oracle@linux db_1]$ pwd
/ora01/app/oracle/product/11.2.0/db_1

--run installation

./runInstaller

Post Installation Steps

Edit the “/etc/redhat-release” file to restore original release information.

linux release 23 (Twenty Three)

Edit the “/etc/oratab” file to set restart flag for ORA11G to ‘Y’.

ORA11G:/ora01/app/oracle/product/11.2.0/db_1:Y




Thursday, April 27, 2017

Voting Disk



What does it contain, who updates it, how is it used, where is it stored and so on?


The voting disk manages information about node membership. Each voting disk must be accessible by all nodes in the cluster. If any node is not passing a heartbeat to the other nodes or to the voting disk, then that node will be evicted; hence the voting disk is a key component of the clusterware, and its failure can lead to inoperability of the cluster.

In a RAC, at any point in time the clusterware must know which nodes are members of the cluster, so that

- it can perform load balancing

- In case a node fails, it can perform failover of resources as defined in the resource profiles

- If a node joins, it can start resources on it as defined in OCR/OLR

- If a node joins, it can assign VIP to it in case GNS is in use

- If a node fails, it can execute callouts if defined and so on

Hence, there must be a way by which clusterware can find out about the node membership.

That is where the voting disk comes into the picture. It is the place where nodes mark their attendance. Consider an analogy where a manager wants to find out which of his subordinates are present: he can just check the attendance register and assign them their tasks accordingly. Similarly, the CSSD process on every node makes entries in the voting disk to ascertain the membership of that node.

The voting disk records node membership information. If it ever fails, the entire clustered environment for Oracle 11g RAC will be adversely affected and a possible outage may result if the voting disks are lost. Also, in a cluster, communication between the various nodes is of paramount importance. Nodes which can't communicate with other nodes should be evicted from the cluster. While marking their own presence, all the nodes also register information about their communicability with other nodes in the voting disk. This is called the network heartbeat. The CSSD process on each RAC node maintains its heartbeat in a block of size 1 OS block, in a hot block of the voting disk at a specific offset. The written block has a header area with the node name.

The heartbeat counter increments every second on every write call. Thus the heartbeats of the various nodes are recorded at different offsets in the voting disk. In addition to maintaining its own disk block, each CSSD process also monitors the disk blocks maintained by the CSSD processes running on the other cluster nodes. Healthy nodes will have continuous network and disk heartbeats exchanged between the nodes.

A break in the heartbeat indicates a possible error scenario. If the disk block is not updated within a short timeout period, that node is considered unhealthy and may be rebooted to protect the database information. In this case, a message to this effect is written in the kill block of the node. Each node reads its kill block once per second; if the kill block has been overwritten, the node commits suicide (reboots itself).

During a reconfiguration (a node joining or leaving), CSSD monitors all nodes and determines whether a node has a disk heartbeat, including those with no network heartbeat. If no disk heartbeat is detected, then the node is declared dead.

What is stored in voting disk?

Voting disks contain static and dynamic data.

Static data : Info about nodes in the cluster

Dynamic data : Disk heartbeat logging

It maintains and consists of important details about cluster node membership, such as:

- which node is part of the cluster,

- who (node) is joining the cluster, and

- who (node) is leaving the cluster.


Why is the voting disk needed?

The voting disk files are used by Oracle Clusterware as part of its health checks:

- by CSS to determine which nodes are currently members of the cluster

- in concert with other cluster components such as CRS to shut down, fence, or reboot either a single node or multiple nodes whenever network communication is lost between any nodes within the cluster, in order to prevent the dreaded split-brain condition in which two or more instances attempt to control the RAC database. It thus protects the database information.

- by the CSS daemon to arbitrate with peers that it cannot see over the private interconnect in the event of an outage, allowing it to salvage the largest fully connected subcluster for further operation. CSS checks the voting disk to determine if there is a failure on any other node in the cluster. During this operation, NM (Node Monitor) makes an entry in the voting disk to record its vote on availability. Similar operations are performed by the other instances in the cluster. The three voting disks configured also provide a method to determine who in the cluster should survive. For example, if eviction of one of the nodes is necessitated by an unresponsive action, then the node that has access to two of the three voting disks will start evicting the other node. NM alternates its action between the heartbeat and the voting disk to determine the availability of other nodes in the cluster.

The voting disk is the key communication mechanism within Oracle Clusterware: all nodes in the cluster read and write heartbeat information to it. The CSSD processes (Cluster Services Synchronization Daemon) monitor the health of RAC nodes employing two distinct heartbeats: the network heartbeat and the disk heartbeat. Healthy nodes will have continuous network and disk heartbeats exchanged between the nodes. A break in a heartbeat indicates a possible error scenario.

There are a few different scenarios possible with missing heartbeats:

1. The network heartbeat is successful, but the disk heartbeat is missed.

2. The disk heartbeat is successful, but the network heartbeat is missed.

3. Both heartbeats fail.

In addition, with numerous nodes, there are other possible scenarios too. A few possible scenarios:

1. The nodes have split into N sets, communicating within their own set but not with members of the other sets.

2. Just one node is unhealthy.

Nodes with quorum will maintain active membership of the cluster and other node(s) will be fenced/rebooted.

A node must be able to access more than half of the voting disks at any time.

For example, consider a two-node cluster with an even number of voting disks, say 2. Suppose Node1 can access only voting disk1 and Node2 can access only voting disk2. This means there is no common file where the clusterware can check the heartbeat of both nodes.

Hence, if we have 2 voting disks, all the nodes in the cluster should be able to access both voting disks. If we have 3 voting disks and both nodes are able to access more than half, i.e. 2 voting disks, there will be at least one disk accessible by both nodes; the clusterware can use that disk to check the heartbeat of both nodes. Hence, each node should be able to access more than half the number of voting disks. A node not able to do so will have to be evicted from the cluster to maintain the integrity of the cluster. After the cause of the failure has been corrected and access to the voting disks has been restored, you can instruct Oracle Clusterware to recover the failed node and restore it to the cluster.


Loss of more than half of your voting disks will cause the entire cluster to fail!!

Where is voting disk stored?

The voting disk is a shared disk that will be accessed by all member nodes in the cluster during cluster operation. Hence, the voting disks must reside on shared, accessible storage.

- You should plan on allocating 280MB for each voting disk file.

- Prior to 11g R2 RAC, it could be placed on a raw device or on a clustered filesystem supported by Oracle RAC, such as OCFS, Sun Cluster, or Veritas Cluster Filesystem.

- As of 11g R2 RAC, it can be placed on ASM disks. This simplifies management and improves performance, but it brought up a puzzle too: for a node to join the cluster, it must be able to access the voting disk, yet the voting disk is on ASM and ASM can't be up until the node is up. To resolve this, Oracle ASM reserves several blocks at a fixed location on every Oracle ASM disk used for storing the voting disk. As a result, Oracle Clusterware can access the voting disks present in ASM even if the ASM instance is down, and CSS can continue to maintain the Oracle cluster even if the ASM instance has failed. The physical location of the voting files on the ASM disks is fixed, i.e. the cluster stack does not rely on a running ASM instance to access the files. The location of the file is visible in the ASM disk header (dumping the file out of ASM with dd is quite easy):
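As a rough illustration of this fixed location, the kfed utility shipped with Grid Infrastructure can read the ASM disk header directly, even when the ASM instance is down; the kfdhdb.vfstart and kfdhdb.vfend fields mark the allocation-unit range holding the voting file, and a non-zero range means that disk carries a copy of it. The disk path below is illustrative:

grid@host01$ kfed read /dev/oracleasm/disks/ASMDISK01 | grep -E 'vfstart|vfend'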


– The voting disk is not striped but put as a whole on ASM Disks

– In the event that the disk containing the voting disk fails, Oracle ASM will choose another disk on which to store this data.

– It eliminates the need for using a third-party cluster volume manager.

– You can reduce the complexity of managing disk partitions for voting disks during Oracle Clusterware installations.

– The voting disk needs to be mirrored; should it become unavailable, the cluster will come down. Hence, you should maintain multiple copies of the voting disks on separate disk LUNs so that you eliminate a Single Point of Failure (SPOF) in your Oracle 11g RAC configuration.

– If the voting disk is stored on ASM, the multiplexing level of the voting disk is decided by the redundancy of the diskgroup:

- If the voting disk is on a diskgroup with external redundancy, one copy of the voting file will be stored on one disk in the diskgroup.

- If we store the voting disk on a diskgroup with normal redundancy, we should be able to tolerate the loss of one disk, i.e. even if we lose one disk, we should still have a sufficient number of voting disks for the clusterware to continue. If the diskgroup has 2 disks (the minimum required for normal redundancy), we can store only 2 copies of the voting disk on it. If we lose one disk, only one copy of the voting disk will be left and the clusterware won't be able to continue, because to continue the clusterware must be able to access more than half the number of voting disks, i.e. more than 2*1/2 = 1, i.e. at least 2. Hence, to be able to tolerate the loss of one disk, we should have 3 copies of the voting disk on a diskgroup with normal redundancy. So, a normal redundancy diskgroup holding the voting disk should have a minimum of 3 disks in it.

– Similarly, if we store the voting disk on a diskgroup with high redundancy, 5 voting files are placed, each on one ASM disk, i.e. a high redundancy diskgroup should have at least 5 disks so that even if we lose 2 disks, the clusterware can continue.

 – Ensure that all the nodes participating in the cluster have read/write permissions on disks.

 – You can have up to a maximum of 15 voting disks. However, Oracle recommends that you do not go beyond five voting disks.
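For completeness, a few typical 11g R2 commands for managing voting disks; the diskgroup name, device path, and File Universal Id below are placeholders only:

– To move all voting files into an ASM diskgroup (the diskgroup's redundancy then decides how many copies are kept):

grid@host01$ crsctl replace votedisk +DATA

– On non-ASM shared storage, individual voting disks can be added or removed:

grid@host01$ crsctl add css votedisk /dev/raw/raw3
grid@host01$ crsctl delete css votedisk <File Universal Id>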

Backing up voting disk:

In previous versions of Oracle Clusterware you needed to back up the voting disks with the dd command. Starting with Oracle Clusterware 11g Release 2 you no longer need to back up the voting disks: they are automatically backed up as a part of the OCR. In fact, Oracle explicitly indicates that you should not use a backup tool like dd to back up or restore voting disks; doing so can lead to the loss of the voting disk.
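For reference, a pre-11g R2 manual backup looked roughly like the following; the raw device and destination paths are purely illustrative, and this approach should not be used on 11g R2 or later:

oracle@host01$ dd if=/dev/raw/raw1 of=/backup/votedisk1.bak bs=4k

From 11g Release 2 onwards the voting disk data is backed up automatically along with the OCR, and ocrconfig -showbackup lists the available OCR backups.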

Although the voting disk contents do not change frequently, you will need to back up the voting disk file:

- every time you add or remove a node from the cluster, or

- immediately after you configure or upgrade a cluster.

A node in the cluster must be able to access more than half of the voting disks at any time; to be able to tolerate the failure of n voting disks, you therefore need at least 2n+1 of them. Hence, it is strongly recommended that you configure an odd number of voting disks, such as 3, 5, and so on.

– Check the location of the voting disk:

grid@host01$ crsctl query css votedisk
##  STATE    File Universal Id                    File Name          Disk group
--  -----    -----------------                    ---------          ----------
 1. ONLINE   243ec3b2a3cf4fbbbfed6f20a1ef4319     (ORCL:ASMDISK01)   [DATA]
Located 1 voting disk(s).

– We can see that only one copy of the voting disk is there, on the DATA diskgroup, which has external redundancy. As I mentioned earlier, Oracle writes the voting devices to the underlying disks at pre-designated locations so that it can get the contents of these files when the cluster starts up.