
Friday, March 29, 2019

Large Tables In Oracle database



Creating a table with 10 million records

ORACLE> sqlplus '/as sysdba'

SQL> alter session set workarea_size_policy=manual;
SQL> alter session set sort_area_size=1000000000;

SQL> create table QADER_T1 as
     select rownum as id,
            'Just Some Text' as textcol,
            mod(rownum,5) as numcol1,
            mod(rownum,1000) as numcol2,
            5000 as numcol3,
            to_date('01.' || lpad(to_char(mod(rownum,12)+1),2,'0') || '.2018', 'dd.mm.yyyy') as time_id
     from dual
     connect by level <= 1e7;

Insert another 10 million records into the table

ORACLE> sqlplus '/as sysdba'

SQL> insert into QADER_T1 select rownum as id, 'Just Some Text' as textcol, mod(rownum,5) as numcol1, mod(rownum,1000) as numcol2, 5000 as numcol3,
to_date ('01.' || lpad(to_char(mod(rownum,12)+1),2,'0') || '.2018' ,'dd.mm.yyyy') as time_id from dual connect by level<=1e7;
SQL> COMMIT;
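To sanity-check the result, a quick count (an illustrative session; assumes both loads above completed) should show 20 million rows:

```sql
-- Verify the total row count after the CTAS plus the second insert
SQL> select count(*) from QADER_T1;

-- Optionally check the distribution of the generated month buckets
SQL> select to_char(time_id, 'MM') as month, count(*)
     from QADER_T1
     group by to_char(time_id, 'MM')
     order by 1;
```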

Tuesday, March 12, 2019

What is Parallel Concurrent Processing?



Parallel concurrent processing allows you to distribute concurrent managers across multiple nodes in a cluster, massively parallel, or networked environment. Instead of operating concurrent processing on a single node while other nodes are idle, you can spread concurrent processing across all available nodes, fully utilizing hardware resources.

Benefits of Parallel Concurrent Processing

Parallel concurrent processing provides Oracle E-Business Suite users with the following benefits:

* High performance - the ability to run concurrent processes on multiple nodes to improve concurrent processing throughput.

* Fault Tolerance - the ability to continue running concurrent processes on available nodes even when one or more nodes fail.

* Adaptability - the ability to integrate with platform-specific batch queue and load-balancing systems to maximize concurrent processing performance on a particular platform.

* Single Point of Control - the ability to administer concurrent managers running on multiple nodes from any node in a cluster, massively parallel, or networked environment.

Parallel Concurrent Processing Environments

Parallel concurrent processing runs in multi-node environments, such as cluster, massively parallel, and networked environments. In these environments, each node consists of one or more processors (CPUs) and their associated memory. Each node has its own memory that is not shared with other nodes, and each node operates independently of other nodes, except when sharing a resource such as a disk.

Important Roles in PCP:

Role of the ICM in PCP

1. The Internal Concurrent Manager (ICM) monitors, activates, and deactivates all managers.
2. The ICM migrates managers during node and/or instance failures and must be active for failover/failback to work.
3. The ICM uses the Service Manager (FNDSM) to spawn and terminate all concurrent manager processes, and to manage GSM services such as the Workflow Mailer, Output Post Processor, etc.
4. The ICM contacts the APPS TNS Listener on each local and remote concurrent processing node to start the Service Manager on that node.
5. The ICM will not attempt to start a Service Manager if it is unable to TNS ping the APPS TNS Listener.
6. One Service Manager is defined for each application node registered in FND_NODES.
7. Each service/manager may have a primary and a secondary node. Initially, a concurrent manager is started on its primary node. In case of node failure, all concurrent managers on that node migrate to their respective secondary nodes.
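The application nodes the ICM knows about can be checked with a query like the following (illustrative; columns as in the standard FND_NODES table):

```sql
-- List application nodes registered for concurrent processing
SQL> select node_name, status, support_cp, support_admin
     from apps.fnd_nodes;
```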

Role of the Service Manager in PCP

1. The Service Manager (FNDSM process) manages services/managers on each concurrent node. It is a requirement in all concurrent processing environments and is therefore an integral part of PCP; PCP cannot be implemented without the Service Manager.
2. The Service Manager is spawned from the APPS TNS Listener.
3. The APPS TNS Listener must be started on every application node in the system, by the same user that starts the ICM (e.g. applmgr).
4. The TNS Listener spawns the Service Manager to run as an agent of the ICM for the local node.
5. The Service Manager is started by the ICM on demand. If no management actions are needed on a node, the Service Manager is not started until necessary. When the ICM exits, its Service Managers exit as well.
6. The Service Manager environment is set by gsmstart.sh and APPSORA.env, as defined in listener.ora.

Role of Internal Monitors in PCP

1. The only function of the Internal Monitor (FNDIMON process) is to check whether the ICM is running and to restart a failed ICM on the local node.
2. Internal Monitors are seeded on every registered node by default by AutoConfig.
3. Activate the Internal Monitor on each concurrent node where the ICM should be able to start in case of a failure. By default, the Internal Monitor is deactivated.
4. If the ICM goes down, the Internal Monitor will attempt to start a new ICM on the local node.
5. If multiple ICMs are started, only the first will stay active. The others will gracefully exit.


Thursday, February 28, 2019

Basic Performance Tuning in Oracle Database



Method.1


How will you find performance issues?

Mostly, a user reports something like:
"This query is running slowly."
"This query usually retrieves data in 1 minute, but today it has been running for 2 hours."
"The load average is high."
"The CPU utilization of a process is high."
"Users are experiencing slowness."

a. Get the top CPU-consuming PIDs (works in almost all Linux-related operating systems). On Windows, look in Task Manager.

ps -eo pcpu,pid,user,args | sort -k 1 -r |head -10

b. Now, pick the SQL_ID to proceed further. You will be prompted for the PID you picked in the above command.

set linesize 2000;
select s.sid,s.serial#, s.inst_id,p.spid, s.SQL_ID, t.SQL_TEXT, s.machine from gv$process p, gv$session s, gv$sqltext t where s.paddr = p.addr and p.spid=&processid and s.SQL_HASH_VALUE = t.HASH_VALUE;

Now we have two ways:

1. Using the sql_id, pick the tables involved in it. We are using explain plans.
select * from table(dbms_xplan.display_cursor(sql_id => '&SQL_ID', format => '+ALLSTATS'));

2. Run the below Oracle-provided script to get details. This is the SQL Tuning Report; it will give all the available recommendations.
@?/rdbms/admin/sqltrpt.sql

Now you will have the list of tables. Check when each table was last analyzed.

SELECT OWNER, TABLE_NAME, NUM_ROWS, BLOCKS, AVG_ROW_LEN, TO_CHAR(LAST_ANALYZED, 'MM/DD/YYYY HH24:MI:SS') FROM DBA_TABLES WHERE TABLE_NAME ='&TABLE_NAME';

What is the use of the last_analyzed column?
It tells you when the statistics were last gathered.

If the stats are not up-to-date, we need to gather stats.

Find whether the table is a partitioned table or a normal table.

select table_name, subpartition_name, global_stats, last_analyzed, num_rows from dba_tab_subpartitions where table_name='&Tablename' and table_owner='&owner' order by 1, 2, 4 desc nulls last;

select TABLE_OWNER, TABLE_NAME, PARTITION_NAME, LAST_ANALYZED from DBA_TAB_PARTITIONS where TABLE_OWNER='&owner' and TABLE_NAME='&Tablename' order by LAST_ANALYZED;

If the table has NO partitions and stats need to be gathered, use this:

EXEC dbms_stats.gather_table_stats('&Owner','&Table',cascade=>TRUE);

If the table has partitions, we can gather stats on the whole table, but it is better to gather stats only for the most recently created partitions. Use the below:

begin
SYS.DBMS_STATS.gather_table_stats
(ownname => '&Table_Owner',
tabname => '&Table_Name',
PARTNAME => '&partition',
estimate_percent => 0.001,
DEGREE => 24,
granularity => 'PARTITION',
CASCADE => TRUE,
stattab => NULL,
statid => NULL,
statown => NULL
);
END;
/

We can also run the SQL Tuning Advisor from OEM:

1. Go to Home.
2. Click Advisor Central.
3. Click SQL Advisor.
4. Click SQL Tuning Advisor.
5. Click Top Activity.
6. Choose the SQL_ID you want to tune.
7. In the Actions drop-down, click Go.
8. Click the Submit button.


Method.2


How to tune a SELECT query ?

Step.1
Find the SQL_ID of the slow running query

There could be two possibilities:
1) Query is still running: If the slow query is still running then we can find the sql_id of the query by using v$session view.

2) Query has completed: The query may have completed, but the application team found out later that it was slow and did not finish in its usual time. Since the query is no longer running, we cannot query v$session for its information.

So for this we use the AWR/ASH reports. We ask the application team at what time the query was running, and we generate the AWR report for that duration. In the ASH report we find all kinds of information on the top SQL statements. In the AWR report, we mainly look at the SQL Statistics section, where "SQL ordered by Elapsed Time" matters most to us. If the query reported by the application team is present in this section, we note down its sql_id; otherwise we generate an ASH report for that particular time window and get the sql_id from there.
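Both reports can be generated from the standard scripts shipped with the database (snapshot IDs and the time window are prompted for interactively):

```sql
-- Generate an AWR report for a snapshot interval covering the slow run
SQL> @?/rdbms/admin/awrrpt.sql

-- Generate an ASH report for a specific time window (e.g. the last 30 minutes)
SQL> @?/rdbms/admin/ashrpt.sql
```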

Step.2
Run the SQL Tuning advisor for that SQL_ID
After finding the sql_id we can run sql tuning advisor on this sql_id.

Run @$ORACLE_HOME/rdbms/admin/sqltrpt.sql, providing the SQL_ID as input, to generate the findings and recommendations. The SQL Tuning Advisor mainly performs logical optimization by checking your SQL structure and statistics.
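The same advisor can also be driven manually through the DBMS_SQLTUNE package (a sketch only; the task name is arbitrary and &SQL_ID is a placeholder):

```sql
-- Create and execute a tuning task for a SQL_ID from the cursor cache
DECLARE
  l_task VARCHAR2(64);
BEGIN
  l_task := DBMS_SQLTUNE.create_tuning_task(
              sql_id     => '&SQL_ID',
              task_name  => 'tune_slow_sql_1',  -- hypothetical task name
              time_limit => 600);               -- seconds
  DBMS_SQLTUNE.execute_tuning_task(task_name => l_task);
END;
/

-- Review the findings and recommendations
SET LONG 1000000 LONGCHUNKSIZE 1000000 LINESIZE 200
SELECT DBMS_SQLTUNE.report_tuning_task('tune_slow_sql_1') FROM dual;
```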

Based on the tuning advisor recommendations we have to take corrective actions. These recommendations could include:

1) Gather Statistics
2) Create Index
3) Drop Index
4) Join orders
5) Accept sql profile
6) Create a baseline

Before applying any corrective action in production, we need to test it in a DEV/QA/UAT/TEST environment, or consult a tuning expert about the recommendation. Only after this analysis should we apply it in the production database.

After taking the corrective action from the tuning advisor, run the SQL again and check the improvement.

Step.3
Use the top command to check CPU usage by various processes:

The top command is also useful for performance tuning. Often, multiple databases run on a single server, and one database may consume more server resources than the others. So we have to find out which Oracle process is consuming more resources and which database it belongs to. For this we use the top command. If the CPU used by an Oracle process is very high, that is something to worry about.

If it is a DML statement, then we have to check for locking in the database.

Step.4
Find the locking in the database:

The very first step is to find out whether there is any locking in the database. Sometimes, due to locking, a session does not get the required resources and becomes slow.

We can use the below query to check for locking in the database:

sql> select s1.username || '@' || s1.machine || ' ( SID=' || s1.sid || ') is blocking '
|| s2.username || '@' || s2.machine || ' ( SID=' || s2.sid || ')' from
v$lock l1, v$session s1, v$lock l2, v$session s2
where s1.sid=l1.sid and s2.sid=l2.sid
and l1.block=1 and l2.request > 0
and l1.id1=l2.id1
and l1.id2=l2.id2;

Query Output:

S1.USERNAME||'@'||S1.MACHINE||'(SID='||S1.SID||')ISBLOCKING'||S2.USERNAME||'@'||
----------------------------------------------------------------------------------------
SYS@host1.abc.com ( SID=229) is blocking SYS@host1.abc.com ( SID=226)

If the above query shows locking, we can simply inform the user/application team. If they agree to kill the blocking session, killing it can relieve the slowness.
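Killing the blocker (only after approval) takes the SID and SERIAL# of the blocking session; the values below are placeholders based on the sample output:

```sql
-- Look up the serial# of the blocking session (SID=229 in the example)
SQL> select serial# from v$session where sid = 229;

-- Terminate the blocking session
SQL> alter system kill session '229,12345' immediate;
```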


Step.5
Check for the wait events:

There could be some wait events on the database. Check for the particular user and session.

Query for displaying sessions, session state, and wait details

col "Description" format a50
select sid,
        decode(state, 'WAITING','Waiting',
                'Working') state,
        decode(state,
                'WAITING',
                'So far '||seconds_in_wait,
                'Last waited '||
                wait_time/100)||
        ' secs for '||event
        "Description"
from v$session
where username = 'ARUP';

Output:

SID   STATE       Description
————— ——————————  ———————————————————————————————————————————————————————
2832  Working     Last waited 2029 secs for SQL*Net message from client
3346  Waiting     So far 743 secs for enq: TX - row lock contention
4208  Waiting     So far 5498 secs for SQL*Net message from client

It clearly shows the state of the sessions: whether they are working or waiting; if they are working, what they were waiting for earlier and for how long; and if they are waiting, what for and for how long.

In many troubleshooting situations, just knowing the SID of each session is not enough. We may need to know other details, such as the client machine the session is connecting from, the user (of both the database and the operating system), and the service name. All of this information is also readily available in the same V$SESSION view we have been using. Let’s briefly examine the columns that provide that information, by running the below query

select SID, osuser, machine, terminal, service_name,
       logon_time, last_call_et
from v$session
where username = 'ARUP';

SID   OSUSER  MACHINE   TERMINAL  SERVICE_NAME  LOGON_TIME LAST_CALL_ET
————— ——————  ———————   ————————  ————————————  —————————— ————————————
3346  oradb   prodb1    pts/5     SYS$USERS     05-FEB-12          6848
2832  oradb   prodb1    pts/6     SERV1         05-FEB-12          7616
4408  ANANDA  ANLAP     ANLAP     ADHOC         05-FEB-12             0

OSUSER. The operating system user as which the client is connected. The output indicates that session 4408 is connected from the ANLAP machine, where a Windows user, ANANDA, has logged in.
MACHINE. The name of the machine where the client is running. This could be the database server itself. For two of the sessions, the machine name shows up as “prodb1.” Session 4408 runs on a different machine—ANLAP—presumably a laptop.
TERMINAL. If the session is connected from a UNIX server, this is the terminal where it runs.
LOGON_TIME. This shows when the session was first connected to the Oracle Database instance.

More:

1. Run the top command in Linux to check CPU usage.
2. Run vmstat, sar, or prstat to get more information on CPU and memory usage and possible blocking.
3. Enable tracing before running your queries, then process the trace file with tkprof to create an output file. According to the execution plan, check the elapsed time for each query, then tune them accordingly.
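A minimal tracing session might look like this (the trace file identifier is an arbitrary label; paths are environment-specific):

```sql
-- Tag the trace file so it is easy to find, then enable SQL trace
SQL> alter session set tracefile_identifier = 'slow_query';
SQL> alter session set sql_trace = true;

SQL> -- run the slow statement here

SQL> alter session set sql_trace = false;
```

The resulting trace file (under the diagnostic trace directory) is then formatted from the OS shell with, for example, `tkprof <tracefile> tkprof_out.txt sys=no sort=exeela`.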

What is the use of iostat/vmstat/netstat command in Linux?

iostat – reports on terminal, disk and tape I/O activity.
vmstat – reports on virtual memory statistics for processes, disk, tape and CPU activity.
netstat – reports on the contents of network data structures.

When a user issues a query



When a user issues a query, a server process (SP) is created. The related objects are checked in the data dictionary cache; based on those objects, the statement is parsed and an execution plan is created in the library cache. Following the execution plan, the server process checks for the data in the database buffer cache: if found, it returns the result to the user via the SP; if not, it retrieves the data from disk into the buffer cache and then returns it.

When a user issues an insert, update, or delete statement

In this case the data is still in the SGA database buffer cache.

The data is updated in the buffer cache and the buffer is marked dirty; the statement's redo is immediately placed into the redo log buffer, and a "row updated" message is returned to the user.

After issuing a commit:

1. A new SCN is generated for the transaction.

2. The commit record is placed into the redo log buffer.

3. LGWR writes the redo log buffer contents to the redo log files and clears them from the buffer.

4. The control file is updated with the new SCN.

Tuesday, August 7, 2018

What Does FND Stand For?



FND stands for Foundation. The foundation tables are a combination of AOL (Application Object Library) / System Administrator module tables, placed under FND_TOP. This module is standard, and any customization may sometimes lead to disruption of Oracle Applications.

Sunday, June 24, 2018

Difference between crsctl start crs & crsctl start cluster while starting the cluster



crsctl start crs: starts the entire Oracle Clusterware stack on a node, including the OHASD process, which is responsible for starting up all other clusterware processes. This command is to be used only on the local node.

crsctl start cluster: starts the Oracle Clusterware stack on the local node (or on named or all nodes with -n/-all). It does not start the OHASD process, which must already be running.

Tuesday, May 29, 2018

What is iDB Protocol



The iDB (Intelligent Database) protocol is the heart of Exadata. It is used for communication between the database nodes and the cell storage, transferring data between them. It also provides interconnect bandwidth aggregation and failover.



Saturday, May 26, 2018

Oracle Memory Structure



Oracle’s memory structure consists of two memory areas known as:

• System Global Area (SGA): Allocated at instance startup, and is a fundamental component of an Oracle Instance
• Program Global Area (PGA): Allocated when the server process is started

System Global Area

The SGA is also called the shared global area. It is used to store database information that is shared by database processes. It contains data and control information for the Oracle server and is allocated in the virtual memory of the computer where Oracle resides.

The SGA consists of several memory structures:
– Shared pool
– Database buffer cache
– Redo log buffer
– Other structures (e.g. lock and latch management, statistical data)
• There are two optional memory structures that can be configured within the SGA:
– Large pool
– Java pool

Shared Pool

The shared pool is used to store the most recently executed SQL statements and the most recently used data definitions.
• It consists of two key performance-related memory structures:
– Library cache
– Data dictionary cache
• Sized by the parameter SHARED_POOL_SIZE.

Database Buffer Cache

The database buffer cache stores copies of data blocks that have been retrieved from the data files.
• It enables great performance gains when you obtain and update data.
• It is managed through a least recently used (LRU) algorithm.
• DB_BLOCK_SIZE determines the primary block size.
Consists of independent sub-caches:
– DB_CACHE_SIZE
– DB_KEEP_CACHE_SIZE
– DB_RECYCLE_CACHE_SIZE
• Database buffer cache can be dynamically resized to grow or shrink using ALTER SYSTEM.
• DB_CACHE_ADVICE can be set to gather statistics for predicting different cache size behavior.
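For example, the default buffer cache can be resized online and the advisory enabled with commands like these (the size is illustrative; growth requires free SGA or SGA_TARGET headroom):

```sql
-- Grow the default buffer cache dynamically
SQL> alter system set db_cache_size = 512M;

-- Enable cache-size advisory statistics (populates v$db_cache_advice)
SQL> alter system set db_cache_advice = on;
```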

Redo Log Buffer Cache

The redo log buffer cache records all changes made to the database data blocks.
• Its primary purpose is recovery.
• Changes recorded within are called redo entries.
• Redo entries contain information to reconstruct or redo changes.
• Size is defined by LOG_BUFFER.

Large Pool

The large pool is an optional area of memory in the SGA configured only in a shared server environment.
• It relieves the burden placed on the shared pool.
• This configured memory area is used for session memory (UGA), I/O slaves, and backup and restore operations.
• Unlike the shared pool, the large pool does not use an LRU list.
• Sized by LARGE_POOL_SIZE.
ALTER SYSTEM SET LARGE_POOL_SIZE = 64M;

Java Pool

The Java pool services the parsing requirements for Java commands.
• Required if installing and using Java.
• It is stored much the same way as PL/SQL in database tables.
• It is sized by the JAVA_POOL_SIZE parameter.

Library Cache

The library cache stores information about the most recently used SQL and PL/SQL statements.
The library cache:
• Enables the sharing of commonly used statements
• Is managed by a least recently used (LRU) algorithm
• Consists of two structures:
– Shared SQL area
– Shared PL/SQL area
• Has its size determined by the shared pool sizing.

Data Dictionary Cache

The data dictionary cache is a collection of the most recently used definitions in the database.
• It includes information about database files, tables, indexes, columns, users, privileges, and other database objects.
• During the parse phase, the server process looks at the data dictionary for information to resolve object names and validate access.
• Caching the data dictionary information into memory improves response time on queries.
• Size is determined by the shared pool sizing.

Program Global Area Components

The Program Global Area or Process Global Area (PGA) is a memory region that contains data and control information for a single server process or a single background process. The PGA is allocated when a process is created and deallocated when the process is terminated. In contrast to the SGA, which is shared by several processes, the PGA is an area that is used by only one process. In a dedicated server configuration, the PGA includes these components:
• Sort area: Used for any sorts that may be required to process the SQL statement
• Session information: Includes user privileges and performance statistics for the session
• Cursor state: Indicates the stage in the processing of the SQL statements that are currently used by the session
• Stack space: Contains other session variables

Process Structure

An Oracle process is a program that depending on its type can request information, execute a series of steps, or perform a specific task. Oracle takes advantage of various types of processes:
• User process: Started at the time a database user requests connection to the Oracle server.
• Server process: Connects to the Oracle Instance and is started when a user establishes a session.
• Background process: Available when an Oracle instance is started.

User Process

A user process is a program that requests interaction with the Oracle server.
• It must first establish a connection.
• It does not interact directly with the Oracle server.

Server Process

A server process is a program that directly interacts with the Oracle server.
• It fulfills the calls generated by user processes and returns results.
• It can be a dedicated or shared server process.

Background Processes

The relationship between the physical and memory structures is maintained by Oracle’s background processes.
• Mandatory background processes
DBWn  PMON CKPT LGWR SMON RECO
• Optional background processes
ARCn LMON Snnn QMNn LMDn CJQ0 Pnnn LCKn Dnnn

Database Writer

The server process records changes to rollback and data blocks in the buffer cache. Database Writer (DBWn) writes the dirty buffers from the database buffer cache to the data files. It ensures that a sufficient number of free buffers—buffers that can be overwritten when server processes need to read in blocks from the data files—are available in the database buffer cache. Database performance is improved because server processes make changes only in the buffer cache.

DBWn defers writing to the data files until one of the following events occurs:
• An incremental or normal checkpoint
• The number of dirty buffers reaches a threshold value
• A process scans a specified number of blocks when looking for free buffers and cannot find any
• A timeout occurs
• A ping request in a Real Application Clusters environment
• A normal or temporary tablespace is placed offline
• A tablespace is placed in read-only mode
• A table is dropped or truncated
• ALTER TABLESPACE tablespace_name BEGIN BACKUP

LOG Writer

LGWR performs sequential writes from the redo log buffer cache to the redo log file in the following situations:
• When a transaction commits
• When the redo log buffer cache is one-third full
• When there is more than a megabyte of change records in the redo log buffer cache
• Before DBWn writes modified blocks in the database buffer cache to the data files
• Every 3 seconds
Because the redo is needed for recovery, LGWR confirms the commit only after the redo is written to disk.
LGWR can also call on DBWn to write to the data files.
Note: DBWn does not write to the online redo logs.

System Monitor

If the Oracle instance fails, any information in the SGA that has not been written to disk is lost. For example, the failure of the operating system causes an instance failure. After the loss of the instance, the background process SMON automatically performs instance recovery when the database is reopened. Instance recovery consists of the following steps:
1. Rolling forward to recover data that has not been recorded in the data files but that has been recorded in the online redo log. This data has not been written to disk because of the loss of the SGA during instance failure. During this process, SMON reads the redo log files and applies the changes recorded in the redo log to the data blocks. Because all committed transactions have been written to the redo logs, this process completely recovers these transactions.
2. Opening the database so that users can log on. Any data that is not locked by unrecovered transactions is immediately available.
3. Rolling back uncommitted transactions. They are rolled back by SMON or by the individual server processes as they access locked data.
SMON also performs some space maintenance functions:
• It combines, or coalesces, adjacent areas of free space in the data files.
• It deallocates temporary segments to return them as free space in data files. Temporary segments are used to store data during SQL statement processing.

Process Monitor

The background process PMON cleans up after failed processes by:
• Rolling back the user’s current transaction
• Releasing all currently held table or row locks
• Freeing other resources currently reserved by the user
• Restarting dead dispatchers

Checkpoint

An event called a checkpoint occurs when the Oracle background process DBWn writes all the modified database buffers in the SGA, including both committed and uncommitted data, to the data files.

The checkpoint process (CKPT) is responsible for:
• Signalling DBWn at checkpoints
• Updating data file headers with checkpoint information
• Updating control files with checkpoint information

Archiver (ARCn)

• Optional background process
• Automatically archives online redo logs when ARCHIVELOG mode is set
• Preserves the record of all changes made to the database.

Logical Structure

A logical structure hierarchy exists as follows:
• An Oracle database is a group of tablespaces.
• A tablespace may consist of one or more segments.
• A segment is made up of extents.
• An extent is made up of logical blocks.
• A block is the smallest unit for read and write operations.

Control File Contents

The information in the control file includes:
• Database name is taken from either the name specified by the initialization parameter DB_NAME or the name used in the CREATE DATABASE statement.
• Database identifier is recorded when the database is created.
• Time stamp of database creation is also recorded at database creation.
• Names and locations of associated data files and online redo log files are updated when a data file or redo log is added to, renamed in, or dropped from the database.
• Tablespace information is updated as tablespaces are added or dropped.
• Redo log history is recorded during log switches.
• Location and status of archived logs are recorded when archiving occurs.
• Location and status of backups are recorded by the Recovery Manager utility.
• Current log sequence number is recorded when log switches occur.
• Checkpoint information is recorded as checkpoints are made.
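Some of this metadata can be inspected directly with queries such as (illustrative):

```sql
-- Locations of the control files
SQL> select name from v$controlfile;

-- Database name, ID, creation time, and last checkpoint SCN
SQL> select dbid, name, created, checkpoint_change# from v$database;
```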

Redo Log Files

Redo log files record all changes made to data and provide a recovery mechanism in case of a system or media failure.
• Redo log files are organized into groups.
• An Oracle database requires at least two groups.
• Each redo log within a group is called a member.

Structure of the Redo Log Files

The database administrator can set up the Oracle database to maintain copies of online redo log files to avoid losing database information due to a single point of failure.
Online Redo Log Groups
• A set of identical copies of online redo log files is called an online redo log group.
• The LGWR background process concurrently writes the same information to all online redo log files in a group.
• The Oracle server needs a minimum of two online redo log file groups for the normal operation of a database.
Online Redo Log Members
• Each online redo log file in a group is called a member.
• Each member in a group has identical log sequence numbers and the same size. The log sequence number is assigned each time the Oracle server starts writing to a log group to identify each redo log file uniquely. The current log sequence number is stored in the control file and in the header of all data files.
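Groups, members, and the current log sequence number can be checked with queries like (illustrative):

```sql
-- One row per redo log group, with its sequence number and status
SQL> select group#, sequence#, members, bytes/1024/1024 as mb, status
     from v$log;

-- One row per member file
SQL> select group#, member from v$logfile order by group#;
```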

Archived Redo Log Files

• Archiving redo log files is accomplished by ARCn (Archiver) or manually through SQL statements.
• An entry is made in the control file recording the archived log name, log sequence number, and high and low SCN numbers whenever a redo log is successfully archived.
• A filled redo log file cannot be reused until a checkpoint has taken place and the redo log file has been backed up by the ARCn process.
• Archived redo log files can be multiplexed.
• Archived redo log files must be maintained by the DBA.

A database runs in either ARCHIVELOG mode or NOARCHIVELOG mode.

NOARCHIVELOG

In NOARCHIVELOG mode, the online redo log files are overwritten each time an online redo log file is filled, and log switches occur. LGWR does not overwrite a redo log group until the checkpoint for that group is completed.

ARCHIVELOG

If the database is configured to run in ARCHIVELOG mode, inactive groups of filled online redo log files must be archived. Because all changes made to the database are recorded in the online redo log files, the database administrator can use the physical backup and the archived online redo log files to recover the database without losing any committed data.
There are two ways in which online redo log files can be archived:
• Manually
• Automatically: the recommended method
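The current mode can be checked, and ARCHIVELOG mode enabled, roughly as follows (a sketch only; switching modes requires a clean shutdown and a mounted, not open, database):

```sql
-- Check the current logging mode
SQL> archive log list
SQL> select log_mode from v$database;

-- Enable ARCHIVELOG mode
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
```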

Sunday, May 20, 2018

Important Port Numbers In IDAM




Weblogic admin server   7001
Obiee server            9704
Oam_server              14100
Oam_policy_mgr          14150
Oim_server              14000
Omsm_server             14180
Soa_server              8001

Testing URLs
------------------
WebLogic Admin Server
http://prodidm:7001/console

Fusion Middleware Control
http://prodidm:7001/em

Oracle Access Manager Console
http://prodidm:7001/oamconsole

Oracle Access Manager Server
http://prodidm:14100/oam

OIM Server
http://prodidm:14000/oim
xelsysadm/*******

SOA Suite
http://prodidm:8001/soa-infra
weblogic/*******

OIM Sysconsole
http://HOSTNAME:PORT/sysadmin/
Example:
http://localhost:14000/sysadmin/
or
http://iam.myhostname.com:14000/sysadmin/
Username for system administrator is : xelsysadm

OIM user console
http://HOSTNAME:PORT/oim/
Example:
http://localhost:14000/oim/
or
http://iam.myhostname.com:14000/oim/

OAM console
http://WLS_HOSTNAME:PORT/oamconsole
Example:
http://localhost:7001/oamconsole
or
http://iam.myhostname.com:7001/oamconsole

WLS console
http://WLS_HOSTNAME:PORT/console
Example:
http://localhost:7001/console
or
http://iam.myhostname.com:7001/console


What is Oracle IDAM ?



Identity management is a collection of processes that a company uses to manage the security life cycle of resources for its users, organizations and entities.

Importance:
A company needs an identity management solution to:
- Manage its users, organizations, and resources.
- Regulate access rights to its resources.
- Provide security for its resources.
- Audit, monitor, and log its resources.
- Pass correspondence between resources.

EBS Technology Codelevel Checker (ETCC)



This patch provides two scripts, together referred to as ETCC, that help ensure you have the required database and middle tier bugfixes installed for your Oracle E-Business Suite Release 12.2 system.

The scripts are:
- checkDBpatch.sh (checkDBpatch.cmd on Windows). This is the Database EBS Technology Codelevel Checker (DB-ETCC), which determines if all the needed bugfixes exist in the specified database ORACLE_HOME.

- checkMTpatch.sh (checkMTpatch.cmd on Windows). This is the Middle Tier EBS Technology Codelevel Checker (MT-ETCC), which determines if all the needed bugfixes exist in the middle tier file system. 
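A typical invocation looks like the sketch below. The unzip location is an assumption, and the exact prompts vary by ETCC version; the general pattern is to run DB-ETCC on the database tier and MT-ETCC on the application tier.

```shell
# On the database tier, with the database ORACLE_HOME environment set:
cd /u01/etcc            # assumed location where the ETCC patch was unzipped
./checkDBpatch.sh       # prompts for inputs such as the database context file

# On the application (middle) tier, with the run filesystem environment sourced:
./checkMTpatch.sh       # reports any missing middle tier bugfixes
```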

Saturday, May 19, 2018

Database Upgrade High Level Steps From 11g to 12c

Database Upgrade High Level Steps From 11g to 12c.


The manual upgrade process is divided into 3 steps.

Step.1 pre-upgrade
Step.2 actual database upgrade
Step.3 post-upgrade

The pre-upgrade steps do not require downtime, but they must be finished before the downtime window starts. Before starting the upgrade process, complete the 12c software installation on the preferred mount point.

Source version: 11.2.0.3
Target version: 12.1.0.2

1) Before starting the upgrade, make sure the Oracle 12c database binaries are already installed on the database server.
-Choose the software-only option while installing the 12c home.

2) Check the integrity of the database by running the dbupgdiag.sql script.
-Download the Oracle Database Upgrade Diagnostic Utility (dbupgdiag.sql) from support.oracle.com.
-Check for invalid objects.
-Run utlrp.sql to validate invalid objects before the upgrade process.
-Check for duplicate objects owned by SYS and SYSTEM.
-Purge the recyclebin.
-Verify installed components' status and versions.
-Check for any corruption in the data dictionary.

3) Run the Pre-Upgrade Information Tool.
-Copy the Pre-Upgrade Information Tool script preupgrd.sql, along with utluppkg.sql, from the Oracle Database 12c home.
-Run preupgrd.sql.
-It will generate the files below:
*preupgrade.log
*preupgrade_fixups.sql
*postupgrade_fixups.sql

4) Review the preupgrade.log output.

5) Fix the warnings reported by preupgrade.log file.

6) Execute preupgrade_fixups.sql.
-The preupgrade_fixups.sql script attempts to resolve issues reported by the pre-upgrade process.
-Issues that cannot be resolved automatically by a fixup script are flagged with ** USER ACTION REQUIRED **.
-Disable cron jobs.
-Check for any files in media recovery/backup mode.
-Check the timezone version.
-Gather dictionary statistics.
-Purge the database recyclebin before upgrading.
-Remove the EM repository.
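The fixup-related actions in step 6 can be sketched as below. The paths are assumptions; the fixup script location depends on where preupgrd.sql wrote its output, and emremove.sql must come from the 12c home.

```shell
sqlplus / as sysdba <<'EOF'
-- Run the generated fixup script (path is an assumption):
@/u01/app/oracle/cfgtoollogs/PROD/preupgrade/preupgrade_fixups.sql

-- Manual actions commonly needed alongside it:
PURGE DBA_RECYCLEBIN;
EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;

-- Remove the EM DB Control repository, using emremove.sql
-- copied from the 12c home's rdbms/admin (assumed copy location):
@/tmp/emremove.sql
EOF
```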

7) Stopping the 11g database components.

8) Back up the 11g home (preferably a cold backup).

9) Set the 12c ENV variables and start the database in upgrade mode.
-Start with the startup upgrade option.
-Run the catupgrd script from the OS level with parallel=6, as below:
cd $ORACLE_HOME/rdbms/admin
$ORACLE_HOME/perl/bin/perl catctl.pl -n 6 -l $ORACLE_HOME/diagnostics catupgrd.sql

10) Run the Post-Upgrade Status Tool.
-Execute the post-upgrade script utlu121s.sql, which provides a summary of the upgrade.
-Execute the catuppst.sql script.
-Download the DST upgrade script from Oracle.
-Once the DST upgrade is successful, validate the time_zone (it should be 18).
-Run postupgrade_fixups.sql; this script is created to address issues that can be fixed after the database has been upgraded.
-Change the ORACLE_HOME to 12c in the listener.ora file.
-Run utlrp.sql.
-Update the compatible parameter and restart the database.

Oracle Enterprise Manager Comes In 3 Flavors

Oracle Enterprise Manager Comes In 3 Flavors


-Database control
-Grid control
-Cloud control

Database Control: Using Database Control you can manage only one database. If you want to administer more than one database, you must configure a separate Database Control (OEM) for each one; otherwise, go for a Grid/Cloud Control configuration.

Grid Control: Using Grid Control you can monitor your whole infrastructure; i.e., you can manage more than one database. Grid Control is responsible for monitoring all targets, and it collects information on Oracle databases, application servers, listeners, hardware, operating systems and third-party software. Grid Control is now obsolete.

Cloud Control: Cloud Control is the latest release for managing and monitoring multiple hosts; it provides much better management features for both Oracle and non-Oracle databases.


Major Differences Between DBCA & EMCA

Major Differences Between DBCA & EMCA


DBCA does not provide any options to re-configure DB Control.
But EMCA can perform all possible customizations for an existing DB Control,
e.g. drop dbconsole, recreate dbconsole, upgrade, etc.

Automated Maintenance Task

Automated Maintenance Task


In 11g, there are 3 main automated maintenance tasks enabled by default in every database:

-Automatic Optimizer Statistics Collection – Gathers stale or missing statistics for all schema objects. The task name is ‘auto optimizer stats collection’.
-Automatic Segment Advisor – Identifies segments that could be reorganized to save space. The task name is ‘auto space advisor’.
-Automatic SQL Tuning Advisor – Identifies high-load SQL. The task name is ‘sql tuning advisor’.

These tasks can run for up to 20 hours over Saturday and Sunday, utilizing a good portion of your hardware.
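If the weekend runs are too heavy, the tasks can be inspected and selectively disabled. A minimal sketch, assuming 11g or later and SYSDBA access:

```shell
sqlplus / as sysdba <<'EOF'
-- List the three autotask clients and whether they are enabled:
SELECT client_name, status FROM dba_autotask_client;

-- Example: disable only the SQL Tuning Advisor task across all windows:
BEGIN
  DBMS_AUTO_TASK_ADMIN.DISABLE(
    client_name => 'sql tuning advisor',
    operation   => NULL,
    window_name => NULL);
END;
/
EOF
```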

Clone From Production To Development.

Clone From Production To Development


-Make sure the preclone scripts have already been run; if not, first run preclone on the dbTier and the appsTier.
-Take a backup of the application binaries as well as a database backup.
-Mount the backups on the target side; if they cannot be mounted, copy them across with scp.
-Create blackouts on the target side, and send an outage email to business users announcing that this instance is going to be cloned.
-Shut down the application services, shut down the database, start it in restricted mode, and drop the database; take a backup first if required (export/import, pfile, context_file and other config files; it varies from project to project).
-On the target side, untar the database binaries and prepare the pfile. Keep all the existing parameters and add two more (db_file_name_convert and log_file_name_convert), which convert the file system paths from source to target; then do startup nomount (which starts the instance and the background processes).
-Now run the RMAN duplicate.
-Connect RMAN to the auxiliary instance and allocate channels.
-Run DUPLICATE to the name of the target database, pointing at the backup location, and then release the channels.
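The RMAN part of the clone can be sketched roughly as follows. The database name, backup location and channel count are assumptions; this is backup-based duplication without a target connection (11g+ syntax), not the only way to run a duplicate.

```shell
rman auxiliary / <<'EOF'
RUN {
  ALLOCATE AUXILIARY CHANNEL ch1 DEVICE TYPE DISK;
  ALLOCATE AUXILIARY CHANNEL ch2 DEVICE TYPE DISK;
  DUPLICATE DATABASE TO DEV
    BACKUP LOCATION '/backup/PROD'
    NOFILENAMECHECK;
  RELEASE CHANNEL ch1;
  RELEASE CHANNEL ch2;
}
EOF
```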

What does RMAN duplicate do internally?

*it will restore the controlfile from the backup.
*it will mount the database internally (alter database mount).
*it will restore the database.
*it will recover the database.
*and then it will open the database with resetlogs option.

When you open the database with resetlogs, a new incarnation of the database is created, which means:

*A new log sequence number is assigned, and it is reset to 1.
*The redo log files are given a new timestamp.
*A new SCN is assigned, and the datafiles and controlfile are then in sync.

-Untar the binaries on the application side, then go to $COMMON_TOP/clone/bin and run perl adcfgclone.pl appsTier.
-Run perl adcfgclone.pl appsTier context_file=<cloned instance context file>

Post Clone Steps:
* Change the APPS password.
* Cancel the scheduled requests.
* Change the java color and banner.
* Update the workflow mailer status.
* Clean up the nodes table by running:
  EXEC FND_CONC_CLONE.SETUP_CLEAN;
* Run autoconfig on both the tiers.
* Drop softlinks that point to production.
* Do a sanity check and release the instance for user access.
* Etc.
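A few of the post-clone steps above can be sketched as commands. The passwords are placeholders kept elided, and the FNDCPASS argument order follows standard EBS usage:

```shell
# Change the APPS password (placeholders for the old/new passwords):
FNDCPASS apps/<oldapps> 0 Y system/<systempwd> SYSTEM APPLSYS <newappspwd>

# Clean up the nodes table after the clone:
sqlplus apps/<newappspwd> <<'EOF'
EXEC FND_CONC_CLONE.SETUP_CLEAN;
COMMIT;
EOF
# Then run autoconfig on both the database and application tiers.
```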

Why Database Upgrade

Why Database Upgrade


Several bugs that cause outages and/or business impact are fixed.

-Upgrade your database because numerous bugs have been fixed.
-Several new features are delivered with higher versions.
-Performance and scalability are improved in the latest versions compared with previous versions.
-Thousands of customers have already upgraded their databases from lower versions to recent versions.
-Extended support fees can be avoided; there are no bug fixes unless you pay an extra fee for extended support.

Upgrading means changing the data dictionary contents of a database to reflect a particular release.

Upgrading from 10.2.x to 11.2.x is a major release upgrade
Ex: 10.2.0.5.0 to 11.2.0.1.0

Upgrading from 10.2.0.x to 10.2.0.x is a patch set upgrade
Ex: 10.2.0.1 to 10.2.0.5
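The distinction can be expressed with a toy shell helper (an illustration only, not an Oracle tool): if the first two components of the version match, it is a patch set upgrade; otherwise it is a major release upgrade.

```shell
# Classify an Oracle upgrade by comparing the first two version components.
classify_upgrade() {
  src_major=$(echo "$1" | cut -d. -f1-2)
  tgt_major=$(echo "$2" | cut -d. -f1-2)
  if [ "$src_major" = "$tgt_major" ]; then
    echo "patch set upgrade"
  else
    echo "major release upgrade"
  fi
}

classify_upgrade 10.2.0.5.0 11.2.0.1.0   # major release upgrade
classify_upgrade 10.2.0.1 10.2.0.5       # patch set upgrade
```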

Database Upgrade High Level Steps From 10g to 11g.

Database Upgrade High Level Steps From 10g to 11g.


DBUA - Database Upgrade Assistant - Recommended
MANUAL - Using SQL Scripts & Utilities

The manual upgrade process is divided into 3 steps.
Step.1 pre-upgrade
Step.2 actual database upgrade
Step.3 post-upgrade

The pre-upgrade steps do not require downtime, but they must be finished before the downtime window starts. Before starting the upgrade process, complete the 11g software installation on the preferred mount point.

Source version:10.2.0.5
Target version: 11.2.0.1

1) Before starting the upgrade, make sure the 11g database binaries are already installed on the database server.
-Install only the 11g software, in a separate location.

2) Check the integrity of the source database and perform sanity operations.
-Find duplicate objects in SYS and SYSTEM.
-Purge the recyclebin.
-Verify installed components' status and versions.
-Fix invalid objects before the upgrade process.
-Check for any files in media recovery/backup mode.
-Gather dictionary statistics.
-Check for any corruption in the data dictionary.

3) Run the Pre-Upgrade Information Tool.
-Copy the pre-upgrade script utlu112i.sql from the 11g home and run it against the 10g database.
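Step 3 can be sketched as follows; the /tmp copy location and spool file name are assumptions:

```shell
# utlu112i.sql copied from the 11g home's rdbms/admin into /tmp;
# run it while connected to the 10g database:
sqlplus / as sysdba <<'EOF'
SPOOL /tmp/utlu112i.log
@/tmp/utlu112i.sql
SPOOL OFF
EOF
# Review /tmp/utlu112i.log for warnings before proceeding.
```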

4) Review pre-upgrade script output.

5) Fix the warnings reported by the pre-upgrade tool.

6) Create a pfile from the spfile and prepare the pfile for the 11g home database.
-Remove deprecated parameters.
-Change the compatibility parameter.
-Set the 11g $ORACLE_BASE.
-Copy the newly created pfile from the 10g home to the 11g home dbs directory.
-Configure the ENV file for the 11g database.
-Network configuration for 11g:
*Add a new listener with a different port number in the tnsnames.ora and listener.ora files.

7) Stopping the 10g database components

8) Back up the 10g home (preferably a cold backup, because we have downtime).

9) Set the 11g ENV variables and start the database.
-Start with the startup upgrade option.
-Run the catupgrd.sql script.
-The database shuts down automatically after catupgrd.sql completes.

10) startup the database and check version.

11) Execute the post-upgrade script utlu112s.sql.
-Verify the upgraded components' status and versions.

12) Perform upgrade actions that change the database from upgrade mode to normal mode.
-Run catuppst.sql.
-Find the invalid objects count.
-Run the utlrp.sql script.
-Verify whether the invalid objects were fixed.
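The step 12 actions can be sketched as one SQL*Plus session; the invalid-object counts before and after utlrp.sql show whether recompilation helped:

```shell
sqlplus / as sysdba <<'EOF'
@?/rdbms/admin/catuppst.sql   -- post-upgrade actions
SELECT COUNT(*) FROM dba_objects WHERE status = 'INVALID';
@?/rdbms/admin/utlrp.sql      -- recompile invalid objects
SELECT COUNT(*) FROM dba_objects WHERE status = 'INVALID';
EOF
```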

13) Upgrade timezone file version

14) Perform post upgrade tasks



Thursday, May 17, 2018

Reasons for RMAN Database Backup Failure

Reasons for RMAN Database Backup Failure


-Network issue
-Space issues in mount
-Tablespace full
-Directory permission issue

What is your backup strategy for the database

What is your backup strategy for the database?


-RMAN backups are scheduled through crontab.
-We have a full backup policy on Saturday; on every other day we take incremental backups.
-Archive logs are backed up every two hours from the archive destination to tape.
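The schedule above could be expressed in crontab roughly as follows; the script names and log paths are hypothetical placeholders for whatever wrapper scripts your site uses:

```shell
# m h dom mon dow  command
# Full (level 0) backup, Saturday 22:00:
0 22 * * 6   /home/oracle/scripts/rman_full.sh >> /home/oracle/logs/rman_full.log 2>&1
# Incremental (level 1) backup, Sunday through Friday 22:00:
0 22 * * 0-5 /home/oracle/scripts/rman_incr.sh >> /home/oracle/logs/rman_incr.log 2>&1
# Archive log backup to tape, every two hours:
0 */2 * * *  /home/oracle/scripts/rman_arch.sh >> /home/oracle/logs/rman_arch.log 2>&1
```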