Thursday, February 28, 2019

Basic Performance Tuning in Oracle Database


Method.1


How will you find the performance issues ?

Typically, a user reports something like:
"This query is running slowly."
"Daily this query retrieves data in 1 minute, but today it has been running for 2 hours."
"Load average is high."
"CPU utilization of a process is high."
"Users are experiencing slowness."

a. Get the top CPU-consuming PIDs (works on almost all Linux-related operating systems). On Windows, use Task Manager.

ps -eo pcpu,pid,user,args | sort -k 1 -r |head -10

b.  Now pick the SQL_ID to proceed further. You will be prompted for the PID you picked in the above command.

set linesize 2000;
select s.sid,s.serial#, s.inst_id,p.spid, s.SQL_ID, t.SQL_TEXT, s.machine from gv$process p, gv$session s, gv$sqltext t where s.paddr = p.addr and p.spid=&processid and s.SQL_HASH_VALUE = t.HASH_VALUE;

Now we have two ways:

1. Using the SQL_ID, identify the tables involved by checking the execution plan:
select * from table(dbms_xplan.display_cursor(sql_id => '&SQL_ID', format => '+ALLSTATS'));

2. Run the below Oracle-provided script, which generates a SQL tuning report with all available recommendations.
@?/rdbms/admin/sqltrpt.sql

Now you have the list of tables. Check when each table was last analyzed:

SELECT OWNER, TABLE_NAME, NUM_ROWS, BLOCKS, AVG_ROW_LEN, TO_CHAR(LAST_ANALYZED, 'MM/DD/YYYY HH24:MI:SS') FROM DBA_TABLES WHERE TABLE_NAME ='&TABLE_NAME';

What is the use of the LAST_ANALYZED column?
It tells you when statistics were last gathered for the table.

If the stats are not up-to-date, we need to gather stats.
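
As a quick cross-check (a minimal sketch; the STALE_STATS column of DBA_TAB_STATISTICS is available in recent releases), you can ask the dictionary directly whether Oracle itself considers the statistics stale:

SELECT OWNER, TABLE_NAME, STALE_STATS, LAST_ANALYZED
FROM DBA_TAB_STATISTICS
WHERE TABLE_NAME = '&TABLE_NAME';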

Find whether the table is partitioned table or normal table.

select table_name, subpartition_name, global_stats, last_analyzed, num_rows from dba_tab_subpartitions where table_name='&Tablename' and table_owner='&owner' order by 1, 2, 4 desc nulls last;

select TABLE_OWNER, TABLE_NAME, PARTITION_NAME, LAST_ANALYZED from DBA_TAB_PARTITIONS where TABLE_OWNER='&owner' and TABLE_NAME='&Tablename' order by LAST_ANALYZED;

If the table has no partitions and stats need to be gathered, use this:

EXEC dbms_stats.gather_table_stats('&Owner','&Table',cascade=>TRUE);

If the table is partitioned, we can gather stats on the whole table, but it is usually better to gather stats only for the recently created partitions:

begin
  SYS.DBMS_STATS.gather_table_stats
  ( ownname          => '&Table_Owner',
    tabname          => '&Table_Name',
    partname         => '&partition',
    estimate_percent => 0.001,
    degree           => 24,
    granularity      => 'PARTITION',
    cascade          => TRUE,
    stattab          => NULL,
    statid           => NULL,
    statown          => NULL );
end;
/

We can also run the SQL Tuning Advisor from OEM:

1.Go to Home
2.Click on Advisor Central
3.Click on SQL Advisor
4.Click on SQL Tuning Advisor
5.Click on Top Activity
6.Choose the SQL_ID you want to tune
7.Choose the action and click the Go button
8.Click the Submit button
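
The same advisor can also be driven entirely from SQL*Plus with the DBMS_SQLTUNE package. A minimal sketch (the task name my_tuning_task is only an illustration):

DECLARE
  l_task VARCHAR2(128);
BEGIN
  -- Create and run an advisor task for the problem statement
  l_task := DBMS_SQLTUNE.create_tuning_task(sql_id => '&SQL_ID', task_name => 'my_tuning_task');
  DBMS_SQLTUNE.execute_tuning_task(task_name => 'my_tuning_task');
END;
/
SET LONG 1000000 LINESIZE 200
SELECT DBMS_SQLTUNE.report_tuning_task('my_tuning_task') FROM dual;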


Method.2


How to tune a SELECT query ?

Step.1
Find the SQL_ID of the slow running query

There could be two possibilities:
1) Query is still running: If the slow query is still running then we can find the sql_id of the query by using v$session view.

2) Query is completed: It may be that the query has already finished, but the application team learned later that it was slow and did not complete in its usual time. Since this query is not running right now, we can't query v$session to get its information.

So for this we use the AWR/ASH reports. We ask the application team at what time the query was running, and for that duration we generate an AWR report. The SQL Statistics section, in particular SQL Ordered by Elapsed Time, matters most to us. If the query reported by the application team is present in this section, we note down its SQL_ID. Otherwise we generate an ASH report for that particular time window and get the SQL_ID from there.
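
Both reports are generated with the standard scripts shipped under $ORACLE_HOME/rdbms/admin; each prompts for the report format and the snapshot range or time window:

SQL> @?/rdbms/admin/awrrpt.sql
SQL> @?/rdbms/admin/ashrpt.sql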

Step.2
Run the SQL Tuning advisor for that SQL_ID
After finding the sql_id we can run sql tuning advisor on this sql_id.

Run @$ORACLE_HOME/rdbms/admin/sqltrpt.sql, providing the SQL_ID as the input, to generate the findings and recommendations. SQL Tuning Advisor mainly performs logical optimization, checking your SQL structure and statistics.

Based on the tuning advisor recommendations we have to take corrective actions. These recommendations could include:

1) Gather Statistics
2) Create Index
3) Drop Index
4) Join orders
5) Accept a SQL profile (see the sketch after this list)
6) Create a SQL plan baseline, among other recommendations
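
For instance, accepting a recommended SQL profile (recommendation 5) is normally done with DBMS_SQLTUNE.ACCEPT_SQL_PROFILE. A hedged sketch, assuming the advisor task from the earlier sketch was named my_tuning_task:

DECLARE
  l_profile_name VARCHAR2(128);
BEGIN
  -- Accept the profile recommended by the tuning task
  l_profile_name := DBMS_SQLTUNE.accept_sql_profile(task_name => 'my_tuning_task', force_match => TRUE);
END;
/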

Before applying any corrective action on production, test it in a DEV/QA/UAT/TEST environment, or review the recommendation with a tuning expert. Only after that analysis should it be applied to the production database.

After applying the corrective action from the tuning advisor, run the SQL again and verify the improvement.

Step.3
Use the top command to check CPU usage by various processes:

The top command is also useful for performance tuning. Often, multiple databases run on a single server, and one database may consume more server resources than the others. We have to find out which Oracle process is consuming the most resources and which database it belongs to; for this we use top. If the CPU used by an Oracle process is very high, it is a matter worth investigating.

If it is a DML statement, then we have to check for locking in the database.

Step.4
Find the locking in the database:

The very first step is to find out if there is any locking in the database. Sometimes, due to locking, a session does not get the required resources and slows down.

We can use the below query to check for locking in the database:

SQL> select s1.username || '@' || s1.machine || ' ( SID=' || s1.sid || ' ) is blocking '
|| s2.username || '@' || s2.machine || ' ( SID=' || s2.sid || ' )' blocking_info
from v$lock l1, v$session s1, v$lock l2, v$session s2
where s1.sid = l1.sid and s2.sid = l2.sid
and l1.block = 1 and l2.request > 0
and l1.id1 = l2.id1
and l1.id2 = l2.id2;

Query Output:

BLOCKING_INFO
----------------------------------------------------------------------------------------
SYS@host1.abc.com ( SID=229 ) is blocking SYS@host1.abc.com ( SID=226 )

If the above query reports locking, we can simply inform the user/application team. If they agree to kill the blocking session, killing it can get rid of the slowness.
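
Killing the blocker uses the SID and SERIAL# of that session (SERIAL# comes from V$SESSION; the values below are substitution variables, not real numbers):

SQL> ALTER SYSTEM KILL SESSION '&sid,&serial#' IMMEDIATE;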


Step.5
Check for the wait events:

There could be some wait events on the database. Check for the particular user and session.

Query for displaying sessions, session state, and wait details

col "Description" format a50
select sid,
        decode(state, 'WAITING','Waiting',
                'Working') state,
        decode(state,
                'WAITING',
                'So far '||seconds_in_wait,
                'Last waited '||
                wait_time/100)||
        ' secs for '||event
        "Description"
from v$session
where username = 'ARUP';

Output:

SID   STATE       Description
————— ——————————  ———————————————————————————————————————————————————————
2832  Working     Last waited 2029 secs for SQL*Net message from client
3346  Waiting     So far 743 secs for enq: TX - row lock contention
4208  Waiting     So far 5498 secs for SQL*Net message from client

It clearly shows the state of the sessions: whether they are working or waiting; if they are working, what they were waiting for earlier and for how long; and if they are waiting, what for and for how long.

In many troubleshooting situations, just knowing the SID of each session is not enough. We may need to know other details, such as the client machine the session is connecting from, the user (of both the database and the operating system), and the service name. All of this information is also readily available in the same V$SESSION view we have been using. Let’s briefly examine the columns that provide that information, by running the below query

select SID, osuser, machine, terminal, service_name,
       logon_time, last_call_et
from v$session
where username = 'ARUP';

SID   OSUSER  MACHINE   TERMINAL  SERVICE_NAME  LOGON_TIME LAST_CALL_ET
————— ——————  ———————   ————————  ————————————  —————————— ————————————
3346  oradb   prodb1    pts/5     SYS$USERS     05-FEB-12          6848
2832  oradb   prodb1    pts/6     SERV1         05-FEB-12          7616
4408  ANANDA  ANLAP     ANLAP     ADHOC         05-FEB-12             0

OSUSER. The operating system user as which the client is connected. The output indicates that session 4408 is connected from the ANLAP machine, where a Windows user, ANANDA, has logged in.
MACHINE. The name of the machine where the client is running. This could be the database server itself. For two of the sessions, the machine name shows up as “prodb1.” Session 4408 runs on a different machine—ANLAP—presumably a laptop.
TERMINAL. If the session is connected from a UNIX server, this is the terminal where it runs.
LOGON_TIME. This shows when the session was first connected to the Oracle Database instance.

More:

1.Run the top command in Linux to check CPU usage.
2.Run vmstat, sar, or prstat to get more information on CPU, memory usage, and possible blocking.
3.Enable SQL trace before running your queries (one approach is shown below), then format the trace file with tkprof to create an output file. Check the elapsed time for each query in the resulting report, then tune accordingly.
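
For point 3, one common way to switch tracing on for another session (a minimal sketch; SID and SERIAL# come from V$SESSION) is the DBMS_MONITOR package:

SQL> EXEC DBMS_MONITOR.session_trace_enable(session_id => &sid, serial_num => &serial, waits => TRUE, binds => TRUE);
-- run or wait for the slow workload, then:
SQL> EXEC DBMS_MONITOR.session_trace_disable(session_id => &sid, serial_num => &serial);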

What is the use of iostat/vmstat/netstat command in Linux?

iostat – reports on terminal, disk and tape I/O activity.
vmstat – reports on virtual memory statistics for processes, disk, tape and CPU activity.
netstat – reports on the contents of network data structures.

When a user issues a query?


A server process (SP) is created, and the referenced objects are checked in the data dictionary cache. Based on those objects, the statement is parsed and an execution plan is created in the library cache. Following the execution plan, the server process looks for the data in the database buffer cache; if found, it returns the result to the user, and if not, it first retrieves the data from disk into the buffer cache.

When a user issues an INSERT, UPDATE, or DELETE statement?

In this case the data is still in the database buffer cache in the SGA.

The data is updated in the buffer cache and marked as a dirty buffer, the change is immediately placed into the redo log buffer, and a "row updated" message is returned to the user.

After issuing commit:

1.A new SCN is generated.

2.The commit record is placed into the redo buffer.

3.LGWR writes the redo buffer contents to the redo log files and removes them from the redo buffer.

4.The control file is updated with the new SCN.

What is RCU?


Repository Creation Utility (RCU) is used to create and manage Oracle Fusion Middleware database schemas. RCU creates custom schemas and tablespaces, and it supports Oracle databases only.

Note:
Oracle Internet Directory schema names cannot be customized. Other Identity Management schemas names, like OIF (Oracle Identity Federation), can be customized.

For example:

Component                     Schema Owner
Oracle Internet Directory     ODS
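
In Fusion Middleware 12c, RCU ships inside the middleware home (in 11g it is a separate download). Assuming a 12c installation, it is launched from oracle_common/bin:

$ cd $ORACLE_HOME/oracle_common/bin
$ ./rcu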

Monday, February 25, 2019

WebLogic server upgrade from 10.1.3.4 to 10.3.6


Important points:

Different WebLogic Server versions and supported Java versions:

Oracle WebLogic 10.3.5.0 supports Sun JDK 1.6.0_24.
Oracle WebLogic 10.3.6.0 supports Oracle JDK 1.7.0.
Oracle WebLogic 12.1.3 supports Oracle JDK 1.7.0_55+ (1.7.0_67).
Oracle WebLogic 12.2.1 (12cR2) supports JDK 1.8.

Upgrade WebLogic Server from 10.1.3.4 to 10.3.6 by applying patch 13529623.

step.1

Shut down all WebLogic processes you want to upgrade such as:

Node Manager
Admin Server
All Managed Servers

step.2

Download and install a new version of JDK 1.7+.
After installing, verify the JDK version:
java -version

Note:
A plus sign '+' after the version number indicates that this and its subsequent versions are supported.

step.3

Download this patch set for Oracle WebLogic 10.3.6.0: Patch 13529623 (p13529623_1036_Generic.zip)
Unzip the file in a temporary location and confirm the extracted contents include this file:
wls1036_upgrade_generic.jar

step.4

Running OUI to Upgrade an Existing WebLogic Server to 10.3.6
java -jar wls1036_upgrade_generic.jar

step.5

Upon execution, the installer starts preparing the OUI install program.

1.On Choose Middleware Home Directory, select the existing Middleware home you wish to upgrade.
2.Click the Next button.
3.On Register for Security Updates, complete the Email address and/or the My Oracle Support Password fields as applicable.
4.Click the Next button.
5.On Choose Products and Components, verify the components.

Note:
The OUI installer automatically selects the Oracle Coherence component. You can choose to select or deselect this component, keeping in mind that this server type has not yet been verified with Oracle JD Edwards EnterpriseOne.

6.Click the Next button.
7.On Choose Product Installation Directories, verify the directory locations for the previously selected products and components.

Note:
A new version of Oracle Coherence_3.7 will be installed.

8.Click the Next button.
OUI begins copying the files and performs the upgrade.
9.On Installation Complete, click the check box for Run Quickstart to continue with the upgrade of the Oracle WebLogic domains.
10.Click the Done button to complete the installation and exit OUI.
The Quickstart configuration screen appears.
11.On the QuickStart links panel, select this link:
Upgrade domains to version 10.3.6
An Upgrade Wizard is launched.

12.On the Welcome panel of the Upgrade Wizard, review and complete the tasks listed in the Prerequisites section of the above screen.
13.When the Prerequisite tasks are complete, click the Next button.
14.On Select WebLogic Version, select this radio button:
9.0 or higher

15.Click the Next button.
16.On Select a Domain to Upgrade, drill down through the Oracle\Middleware\user_projects\domains directory structure and select the Oracle JD Edwards domain. For example:
E1_Apps

17.Click the Next button.
18.On Inspect Domain, review the upgrade configuration selections.
19.Click the Next button.
20.On Select Upgrade Options, select this check box:
Back up current domain (recommended)

Caution:
The wizard advises you that if you choose the check box for Add log files to backup zip, the resultant zip file can be extremely large.

21.On Domain Backup, review the message.
22.Click the Next button.
23.On Select Directory for Domain Backup, you can accept or change location and filename of the backup zip file.
24.The wizard shows the progress of the domain backup.
25.When the backup is complete, click the Next button.
26.On Finalize Domain Upgrade, review the message.
27.Click the Next button to begin the Upgrade.
28.On Upgrade Complete, click the Done button to exit OUI.

Note:
As a result of this domain upgrade, you do not need to individually upgrade any Managed Server.

29.Start the WebLogic NodeManager.
30.Start the WebLogic Administration Console.
31.Start the existing Managed Server such as the Oracle JD Edwards EnterpriseOne HTML server.
32.Test and verify the upgrade.

Source:

https://docs.oracle.com/cd/E24902_01/doc.91/e18840/upgrade_1036.htm#EOHWL191

Wednesday, February 13, 2019

DMZ Configuration on R12


* OS verification
* Backup the source (production instance)
* Copy the source application tier to the DMZ machine
* Create user and group for the DMZ machine
* Create directories for the target DMZ and give them the required grant and ownership
* Configure the DMZ
* Run autoconfig on both application tiers and the database tier
* Start and check the services

--> OS Verification

i)   Create OS user

ii)  Host file format
The /etc/hosts file should use the following format, and the hostname length should be less than 255 characters.
IP Address   Full Qualified Hostname    alias
Example:
10.10.19.39   externalnode.oracle.com    externalnode
10.10.11.31   internalnode.oracle.com    internalnode

iii) Mount Point / Storage Details
All disks in a disk group should have the same I/O density (Megabytes per second of transfer rate per Gigabyte of capacity)

iv)  Software Requirement
ar, ld, make, X display server

v)   Login into the Source and shut the application services

vi)  Take a backup of the application directories and transfer it to the target server

--> Configuring DMZ

vii)  login as: appldmz

viii) Check the below settings

echo $PATH
which make
which ld

ix)  Navigate to the location where the files have been transferred.

x)  Then navigate to the clone/bin directory and run adcfgclone.pl with the appsTier option:
pwd
/oracle/appldmz
apps  inst
cd apps/apps_st/comn/clone/bin

perl adcfgclone.pl appsTier

It will ask the instance specific details:

Target System Hostname (virtual or normal) [externalnode] :
Target System Database SID : RAC1
Target System Database Server Node [irecruit] : internal
Target System Base Directory : /oracle/appldmz
Target System Forms ORACLE_HOME Directory [/oracle/appldmz/apps/tech_st/10.1.2] :
Target System Web ORACLE_HOME Directory [/oracle/appldmz/apps/tech_st/10.1.3] :
Target System APPL_TOP Mountpoint [/oracle/appldmz/apps/apps_st/appl] :
Target System COMMON_TOP Directory [/oracle/appldmz/apps/apps_st/comn] :
Target System Instance Home Directory [/oracle/appldmz/inst] :
Username for the Applications File System Owner [appldmz] :
Target System Root Service [enabled] :
Target System Web Entry Point Services [enabled] :
Target System Web Application Services [enabled] :
Target System Batch Processing Services [enabled] :
Target System Other Services [disabled] :
Do you want to preserve the Display [internal:0.0] (y/n) ? : n
Target System Display [irecruit:0.0] :
Do you want the target system to have the same port values as the source system (y/n) [y] ? : n
Target System Port Pool [0-99] : 10

xi)  Once this process completes successfully, shut down the services and change the following parameters in the context (.xml) file:

s_isWeb                           ----required value=YES
s_isWebDev                     ----required value=YES
s_http_listen_parameter    ----New Port for the http listener
s_https_listen_parameter  ----New Port for the https listener
s_webentryurlprotocol      ----Set the value to the web entry protocol (http/https)
s_webentryhost                ----Set the value to the webentry host
s_webentrydomain            ----Set the value to the webentry domain
s_active_webport              ----Set the value to the active port
s_login_page                    ----Set the value to point to the new webentry configuration
s_server_ip_address          ----Set the value of this variable to the IP address of the external facing network interface

xii)  Run the node clean package as the apps user:
sqlplus apps/********
exec fnd_conc_clone.setup_clean;
commit;
exit
Run autoconfig in the series as stated below
a)  database tier
b)  primary apps tier (production)
c)  DMZ (externalnode)

xiii)  After the completion of the autoconfig run the following script as apps from the primary node
sqlplus apps/welcome @$FND_TOP/patch/115/sql/txkChangeProfH.sql SERVRESP

This will change the profile option hierarchy type values to SERVRESP. Run autoconfig again on the primary node, as seen in the following output:

@txkChangeProfH.sql SERVRESP
Changing the hierarchy type for the  Profile APPS_WEB_AGENT
Profile APPS_WEB_AGENT hierarchy type has been
successfully changed to SERVRESP
Changing the hierarchy type for the  Profile APPS_SERVLET_AGENT
Profile APPS_SERVLET_AGENT hierarchy type has been
successfully changed to SERVRESP
Changing the hierarchy type for the  Profile APPS_JSP_AGENT
Profile APPS_JSP_AGENT hierarchy type has been
successfully changed to SERVRESP
Changing the hierarchy type for the  Profile APPS_FRAMEWORK_AGENT
Profile APPS_FRAMEWORK_AGENT hierarchy type has been
successfully changed to SERVRESP
Changing the hierarchy type for the  Profile ICX_FORMS_LAUNCHER
Profile ICX_FORMS_LAUNCHER hierarchy type has been
successfully changed to SERVRESP
Changing the hierarchy type for the  Profile ICX_DISCOVERER_LAUNCHER
Profile ICX_DISCOVERER_LAUNCHER hierarchy type has been
successfully changed to SERVRESP
Changing the hierarchy type for the  Profile ICX_DISCOVERER_VIEWER_LAUNCHER
Profile ICX_DISCOVERER_VIEWER_LAUNCHER hierarchy type has been
successfully changed to SERVRESP
Changing the hierarchy type for the  Profile HELP_WEB_AGENT
Profile HELP_WEB_AGENT hierarchy type has been
successfully changed to SERVRESP
Changing the hierarchy type for the  Profile APPS_PORTAL
Profile APPS_PORTAL hierarchy type has been
successfully changed to SERVRESP
Changing the hierarchy type for the  Profile CZ_UIMGR_URL
Profile CZ_UIMGR_URL hierarchy type has been
successfully changed to SERVRESP
Changing the hierarchy type for the  Profile QP_PRICING_ENGINE_URL
Profile QP_PRICING_ENGINE_URL hierarchy type has been
successfully changed to SERVRESP
Changing the hierarchy type for the  Profile TCF:HOST
Profile TCF:HOST hierarchy type has been
successfully changed to SERVRESP
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit
Production
With the Partitioning, OLAP and Data Mining options

xiv)  After the script completes, start the primary instance, log in as SYSADMIN, and change the following profile at the server level:
Server  : externalnode <server_name>
Profile : Node Trust Level <profile_name>
Set the Node Trust Level to External at the server level for the external node.

This Completes the DMZ Configuration !!!
xv)  Expose the responsibility to the external tier as per the requirement.

xvi) Once the above process is completed, the DMZ node can be started with the sequence given below.

xvii)  Startup and shutdown of DMZ Applications

We can start the externalnode(DMZ) node with the following sequence

adopmnctl.sh start
adoafmctl.sh start
adformsctl.sh start
adoacorectl.sh start
adapcctl.sh start

To stop the DMZ follow the sequence stated below

adapcctl.sh stop
adoacorectl.sh stop
adformsctl.sh stop
adoafmctl.sh stop
adopmnctl.sh stop

URL :  http://externalnode.oracle.com:8010/OA_HTML/IrcVisitor.jsp

Note:

1.Check SERVER_ADDRESS in FND_NODES for the external tier and make sure it points to the external hostname. If the node is not showing, add it to FND_NODES using Sysadmin –> Install –> Nodes.
Check the below columns:
*NODE_NAME                   
*SERVER_ADDRESS               
*HOST                   
*DOMAIN
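
A quick check from SQL*Plus (a minimal sketch against the standard FND_NODES table):

SQL> select node_name, server_address, host, domain from apps.fnd_nodes;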

2.Enable Oracle E-Business Suite Application Server Security

1.Set the value of Application Server Security Authentication (s_appserverid_authentication) to SECURE, in the CONTEXT_FILE on all the nodes.
2.Run AutoConfig on each Applications middle tier to complete the configuration.
3.After AutoConfig completes successfully, restart Oracle HTTP Server and OC4J processes


Oracle Applications Cloning Steps For 12.2.x


High-level steps of the cloning process with AD and TXK Delta 6
=============================================

1.Execute adpreclone.pl on the dbTier and appsTier
2.Copy the database and application files (run file system) from source to target
3.Configure the database tier on the target server
4.Configure the application tier on the run edition
5.Copy the run edition to the patch edition
6.Configure the application tier with the patch edition

High-level steps of the cloning process with AD and TXK Delta 7
=============================================

1.Execute adpreclone.pl on the dbTier and appsTier.
2.Copy the database and application files (run file system) from source to target.
3.Configure the database tier on the target server.
4.Configure the application tier on the run edition with the dualfs option.
5.Copy the run edition to the patch edition.
6.Configure the application tier with the patch edition.

Note:
When using the dualfs option, you don't have to copy the patch file system from the run file system. Rapid clone will copy the patch file system and configure both file systems.

What happens when you run adpreclone and adcfgclone?

What happens when you run the command adpreclone.pl dbTier?
This runs in two steps: techstack and database.

1.Techstack:
It creates the following directories in ORACLE_HOME/appsutil/clone:
jlib, db, and data, where jlib holds the libraries, db contains the techstack information, and data contains the datafile information required for cloning.
Creates the driver file at ORACLE_HOME/appsutil/driver/instconf.drv
Converts the inventory from binary to XML; the XML file is located at $ORACLE_HOME/appsutil/clone/context/db/<SID>_context.xml

2.Prepare database for cloning:
This includes creating the database control file script (adcrdbclone.sql) and the datafile location information file (dbfinfo.lst) at $ORACLE_HOME/appsutil/template.
Generates the database creation driver file (data.drv) at ORACLE_HOME/appsutil/clone/data/driver.
Copies the JDBC libraries to ORACLE_HOME/appsutil/clone/jlib/classes12.jar and appsoui.

What happens when you run adpreclone.pl appsTier?
This creates a stage directory at $COMMON_TOP/clone. This also runs in two steps.

1.Techstack:
Creates template files for
Oracle_iAS_Home/appsutil/template
Oracle_806_Home/appsutil/template

Creates Techstack driver files for
IAS_ORACLE_HOME/appsutil/driver/instconf.drv
806_ORACLE_HOME/appsutil/driver/instconf.drv

2.APPL_TOP preparation:
-It will create application top driver file $COMMON_TOP/clone/appl/driver/appl.drv
-Copy JDBC libraries $COMMON_TOP/clone/jlib/classes111.zip

Configuring the target system
Once this is done, run adcfgclone.pl for the appsTier and dbTier as shown below.

On database side:
cd $ORACLE_HOME/appsutil/clone/bin
perl adcfgclone.pl dbTier pwd=apps
This will use the templates and driver files those were created while running adpreclone.pl on source system and has been copied to target system.

Following scripts are run by adcfgclone.pl dbTier for configuring techstack

* adchkutl.sh — This will check the system for ld, ar, cc, and make versions.
* adclonectx.pl — This will clone the context file, creating a new context file as per the details of this instance.
* runInstallConfigDriver — located in $Oracle_Home/appsutil/driver/instconf.drv
* Relinking $Oracle_Home/appsutil/install/adlnkoh.sh — This will relink ORACLE_HOME

1.On the database side, the following are run:
Driver file $Oracle_Home/appsutil/clone/context/data/driver/data.drv
Database creation (adcrdb.zip)
AutoConfig
Control file creation (adcrdbclone.sql)

2.On Application Side:
cd $COMMON_TOP/clone/bin
perl adcfgclone.pl appsTier pwd=apps
Following scripts are run by adcfgclone.pl
Creates context file for target adclonectx.pl

Run driver files
$ORACLE_HOME/appsutil/driver/instconf.drv
$IAS_ORACLE_HOME/appsutil/driver/instconf.drv

Relinking of Oracle Home $ORACLE_HOME/bin/adlnk806.sh $IAS_ORACLE_HOME/bin/adlnkiAS.sh
At the end it will run the driver file $COMMON_TOP/clone/appl/driver/appl.drv and then runs autoconfig.

What is the dualfs option in 12.2.x?


Oracle Applications R12.2.5 New Features: dualfs in cloning

Starting with Oracle Applications Release 12.2.5, we have the dualfs option for cloning the applications tier. We can use this feature on earlier 12.2.x versions as well if the AD-TXK Delta 7 patch is applied. Before AD-TXK Delta 7, adcfgclone had to be run twice, once on the run and once on the patch file system. Using the dualfs option, we can create both the run and patch file systems in one pass.

syntax:

-bash$ perl adcfgclone.pl appsTier dualfs

Difference Between SQL TRACE, EXPLAIN PLAN and TKPROF


Overview Of SQL TRACE
-------------------------------
The diagnostic tool 'sql trace' provides performance information about individual SQL statements and generates the following statistics for each statement:

* parse, execute, and fetch counts
* CPU and elapsed times
* physical reads and logical reads
* number of rows processed
* misses on the library cache

This information is written to a trace (.trc) file, and SQL trace can be enabled/disabled for a session or for the instance.

Setting Initialization Parameters:

1.SQL_TRACE
Enable/Disable SQL Trace for the instance.(TRUE/FALSE)

2.TIMED_STATISTICS
Enable/Disable the collection of timed statistics, such as CPU and elapsed times.(TRUE/FALSE)

3.USER_DUMP_DEST
Specifies the destination for the trace file.

Enabling/Disabling SQL Trace:

ALTER SESSION SET SQL_TRACE = TRUE;
ALTER SESSION SET SQL_TRACE = FALSE;

Enabling/Disabling TIMED_STATISTICS:

ALTER SYSTEM SET TIMED_STATISTICS = TRUE;
ALTER SESSION SET TIMED_STATISTICS = FALSE;

Trace Files
-----------
Oracle will generate trace (.trc) files for every session where the value of SQL_TRACE = TRUE and write them to the USER_DUMP_DEST destination. If tracing has been enabled for the instance then individual trace files will be generated for each session, unless otherwise disabled (see above). Note, that the generated files may be owned by an operating system user other than your own so you may have to get this user to grant you access before you can use TKPROF to format them.

Using TKPROF
-------------------
The TKPROF facility accepts as input an SQL trace file and produces a formatted output file.

Simple Example
-------------------
This example shows TKPROF being run to format a trace file named "dsdb2_ora_18468.trc" and writing it to a formatted output file named "dsdb2_trace.out".

$ TKPROF dsdb2_ora_18468.trc dsdb2_trace.out SYS=NO EXPLAIN=SCOTT/TIGER

The EXPLAIN PLAN Command
----------------------------------------
The EXPLAIN PLAN command displays the execution plan chosen by the Oracle optimizer for SELECT, UPDATE, INSERT, and DELETE statements. A statement's execution plan is the sequence of operations that Oracle performs to execute the statement. By examining the execution plan, you can see exactly how Oracle executes your SQL statement. This information can help you determine whether the SQL statement you have written takes advantage of the indexes available.

Creating the Output Table
-------------------------------
Before you can issue an EXPLAIN PLAN statement, a table must exist to hold its output; do either of the following:

* Run the SQL script "UTLXPLAN.SQL" to create a sample output table called PLAN_TABLE in your schema.

* Issue a CREATE TABLE statement to create an output with any name you choose.  You can then issue an EXPLAIN PLAN statement and direct its output to this table.  Any table used to store the output of the EXPLAIN PLAN command must have the same column names and datatypes as the PLAN_TABLE
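
A minimal usage sketch (EMP is only an illustrative table; DBMS_XPLAN.DISPLAY reads PLAN_TABLE by default):

SQL> EXPLAIN PLAN FOR SELECT * FROM emp WHERE deptno = 10;
SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);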

SQL Trace Facility Statistics
----------------------------------
TKPROF lists the statistics for a SQL statement returned by the SQL trace facility in rows and columns.  Each row corresponds to one of
three steps of SQL statement processing:

* PARSE
This step translates the SQL statement into an execution plan. This includes checks for proper security authorization and checks for the existence of tables, columns, and other referenced objects.

* EXECUTE
This step is the actual execution of the statement by Oracle. For INSERT, UPDATE, and DELETE statements, this step modifies the data.  For SELECT statements, the step identifies the selected rows.

* FETCH
This step retrieves rows returned by a query. Fetches are only performed for SELECT statements.

How To Collect 10046 Trace (SQL_TRACE) Diagnostics for Performance Issues


Gathering 10046 trace

The 10046 trace is the equivalent of setting SQL_TRACE=TRUE. The advantage of using the event is that additional details may be written to the trace file depending on the level specified with the event. Oracle Support often requests this event to be set when gathering information for specific SQL issues, particularly when investigating and/or diagnosing performance issues. Event 10046 is the standard method of gathering extended sql_trace information for Oracle sessions.

Trace Location

11g R1 and above:

With the introduction of the new diagnosability infrastructure introduced in Oracle Database 11g Release 1, traces and core files are placed in a location controlled by the DIAGNOSTIC_DEST initialization parameter.
To show the location of the DIAGNOSTIC_DEST, the following command can be used:
SQL> show parameter diagnostic_dest

Pre 11g R1:

Event 10046 tracing will produce a trace file in the <Parameter user_dump_dest> for user processes and <Parameter background_dump_dest> for background processes.
To show the location of the user_dump_dest, the following command can be used:

SQL> show parameter user_dump_dest

Note: Some examples include setting a 'tracefile_identifier' to assist with finding the resultant trace output.

Session Tracing

This tracing can be used where the session is accessible to the user prior to the start of the statement(s) to be traced.
To gather 10046 trace at the session level:
alter session set tracefile_identifier='10046';

alter session set timed_statistics = true;
alter session set statistics_level=all;
alter session set max_dump_file_size = unlimited;

alter session set events '10046 trace name context forever,level 12';

-- Execute the queries or operations to be traced here --

select * from dual;
exit;

If the session is not exited then the trace can be disabled using:

alter session set events '10046 trace name context off';

Note that if the session is not closed cleanly and tracing is disabled, then important trace information may be missing from the trace file.

Note: "statistics_level=all" is set here so as to gather some level of statistics in cases where the parameter has been changed from the default and recommended level of "TYPICAL" (to BASIC). In order to diagnose performance issues, some level of statistics is required. A value of "ALL" may not be absolutely necessary but has been chosen over "TYPICAL" so as to have the most comprehensive information for diagnosis.

Tracing a process after it has started

If trace from an existing session is required then oradebug can be used to attach to the session and initiate 10046 tracing.
The first step is to identify the session to be traced by some means:

For example, in SQL*Plus, start a session to find the OS process id (spid) for the target session:
select p.PID,p.SPID,s.SID
from v$process p,v$session s
where s.paddr = p.addr
and s.sid = &SESSION_ID
/

SPID is the operating system Process identifier (os pid)
PID is the Oracle Process identifier (ora pid)

If you do not know the Session ID then you can use a select similar to the following to help you identify the target session:

column line format a79
set heading off
select 'ospid: ' || p.spid || ' # ''' ||s.sid||','||s.serial#||''' '||
  s.osuser || ' ' ||s.machine ||' '||s.username ||' '||s.program line
from v$session s , v$process p
where p.addr = s.paddr
and s.username <> ' ';

For multi-threaded processes in 12c, use the newly added column STID from V$PROCESS to find the specific thread, as Oracle combines many processes into a single ospid. To find the specific thread, use the following syntax:

oradebug setospid <spid> <stid>

Once the OS process id for the process has been determined then the trace can be initialised as follows:

Lets assume that the process to be traced has an os pid of 9834.
Login to SQL*Plus as a dba and execute the following:
connect / as sysdba
oradebug setospid 9834
oradebug unlimit
oradebug event 10046 trace name context forever,level 12
Remember to replace the example '9834' value with the actual os pid.

Note that it is also possible to attach to a session via oradebug using the 'setorapid'.

In this case the PID (Oracle Process identifier ) would be used (rather than the 'SPID') and the oradebug text would change to:
connect / as sysdba
oradebug setorapid 9834
oradebug unlimit
oradebug event 10046 trace name context forever,level 12
Remember to replace the example '9834' value with the actual ora pid.

To disable oradebug tracing once tracing is finished:
oradebug event 10046 trace name context off
For multi-threaded processes in 12c, use the newly added column STID from V$PROCESS to find the specific thread, as Oracle combines many processes into a single ospid. To find the specific thread, use the following syntax:
oradebug setospid <spid> <stid>
oradebug unlimit
The tracefile name will be something like <instance><spid>_<stid>.trc.

Instance wide tracing
Note: Please be cautious when setting system wide, as this will impact performance due to every session being traced.

This setting will trace every session that is created after the parameter is set. Existing sessions will not be traced.

Setting system-wide 10046 tracing can be useful for scenarios where a problem session is known to occur but cannot be identified in advance.
In this situation, tracing can be enabled for a short period of time, the problem can then be reproduced and tracing disabled and the resultant traces searched for evidence of the problem.

System-wide tracing can be enabled as follows:
alter system set events '10046 trace name context forever,level 12';

The setting can be disabled in all sessions by using the following command:
alter system set events '10046 trace name context off';

Initialisation parameter setting

This setting will trace every session in the instance when it is restarted.
event="10046 trace name context forever,level 12"

The setting can be disabled by removing the parameter and restarting the instance or by using an alter system command as follows:
alter system set events '10046 trace name context off'

Monday, February 11, 2019

WebLogic Server Life Cycle States

[Image: WebLogic Server life cycle state diagram]

Backup & Recovery FAQs


Why and when should I backup my database?

Backup and recovery is one of the most important aspects of a DBA's job. If you lose your company's data, you could very well lose your job. Hardware and software can always be replaced, but your data may be irreplaceable!

Normally one would schedule a hierarchy of daily, weekly and monthly backups, however consult with your users before deciding on a backup schedule. Backup frequency normally depends on the following factors:

* Rate of data change/ transaction rate
* Database availability/ Can you shutdown for cold backups?
* Criticality of the data/ Value of the data to the company
* Read-only tablespace needs backing up just once right after you make it read-only
* If you are running in archivelog mode you can backup parts of a database over an extended cycle of days
* If archive logging is enabled, one needs to back up the archived log files promptly to prevent the archive destination from filling up and freezing the database
* Etc.
Carefully plan backup retention periods. Ensure enough backup media (tapes) are available and that old backups are expired in time to make media available for new backups. Off-site vaulting is also highly recommended.
Frequently test your ability to recover and document all possible scenarios. Remember, it's the little things that will get you. Most failed recoveries are a result of organizational errors and miscommunication.

What strategies are available for backing-up an Oracle database?

The following methods are valid for backing-up an Oracle database:

Export/Import - Exports are "logical" database backups in that they extract logical definitions and data from the database to a file. See the Import/ Export FAQ for more details.
Cold or Off-line Backups - shut the database down and backup up ALL data, log, and control files.
Hot or On-line Backups - If the database is available and in ARCHIVELOG mode, set the tablespaces into backup mode and backup their files. Also remember to backup the control files and archived redo log files.
RMAN Backups - while the database is off-line or on-line, use the "rman" utility to backup the database.
It is advisable to use more than one of these methods to backup your database. For example, if you choose to do on-line database backups, also cover yourself by doing database exports. Also test ALL backup and recovery scenarios carefully. It is better to be safe than sorry.

Regardless of your strategy, also remember to backup all required software libraries, parameter files, password files, etc. If your database is in ARCHIVELOG mode, you also need to backup archived log files.

What is the difference between online and offline backups?

A hot (or on-line) backup is a backup performed while the database is open and available for use (read and write activity). Except for Oracle exports, one can only do on-line backups when the database is in ARCHIVELOG mode.
A cold (or off-line) backup is a backup performed while the database is off-line and unavailable to its users. Cold backups can be taken regardless if the database is in ARCHIVELOG or NOARCHIVELOG mode.
It is easier to restore from off-line backups as no recovery (from archived logs) would be required to make the database consistent. Nevertheless, on-line backups are less disruptive and don't require database downtime.
Point-in-time recovery (regardless if you do on-line or off-line backups) is only available when the database is in ARCHIVELOG mode.

What is the difference between restoring and recovering?

Restoring involves copying backup files from secondary storage (backup media) to disk. This can be done to replace damaged files or to copy/move a database to a new location.

Recovery is the process of applying redo logs to the database to roll it forward. One can roll-forward until a specific point-in-time (before the disaster occurred), or roll-forward until the last transaction recorded in the log files.

How does one put a database into ARCHIVELOG mode?

The main reason for running in archivelog mode is that one can provide 24-hour availability and guarantee complete data recoverability. It is also necessary to enable ARCHIVELOG mode before one can start to use on-line database backups.

Issue the following commands to put a database into ARCHIVELOG mode:
SQL> CONNECT sys AS SYSDBA
SQL> STARTUP MOUNT EXCLUSIVE;
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ARCHIVE LOG START;
SQL> ALTER DATABASE OPEN;
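
From Oracle 10g onward the ARCHIVE LOG START step is no longer required, as automatic archiving starts whenever the database is in ARCHIVELOG mode. Either way, the mode can be verified with:

SQL> ARCHIVE LOG LIST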

Does Oracle write to data files in begin/hot backup mode?

When a tablespace is in backup mode, Oracle will stop updating its file headers, but will continue to write to the data files.
When in backup mode, Oracle will write complete changed blocks to the redo log files. Normally only deltas (change vectors) are logged to the redo logs. This is done to enable reconstruction of a block if only half of it was backed up (split blocks). Because of this, one should notice increased log activity and archiving during on-line backups.
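
The corresponding commands for a manual hot backup of a single tablespace look like this (the datafiles are copied with OS tools between the two statements):

SQL> ALTER TABLESPACE users BEGIN BACKUP;
-- copy the tablespace's datafiles at the OS level --
SQL> ALTER TABLESPACE users END BACKUP;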


How does one backup and restore a database using RMAN?

The biggest advantage of RMAN is that it only backs up used space in the database. RMAN doesn't put tablespaces in backup mode, saving on redo generation overhead. RMAN will re-read database blocks until it gets a consistent image of them. Look at this simple backup example.

rman target sys/*** nocatalog
run {
  allocate channel t1 type disk;
  backup
    format '/app/oracle/backup/%d_t%t_s%s_p%p'
      (database);
   release channel t1;
}
Example RMAN restore:

rman target sys/*** nocatalog
run {
  allocate channel t1 type disk;
  # set until time 'Aug 07 2000 :51';
  restore tablespace users;
  recover tablespace users;
  release channel t1;
}

The examples above are extremely simplistic and only useful for illustrating basic concepts. By default Oracle uses the database controlfiles to store information about backups. Normally one would rather setup a RMAN catalog database to store RMAN metadata in. Read the Oracle Backup and Recovery Guide before implementing any RMAN backups.
Note: RMAN cannot write image copies directly to tape. One needs to use a third-party media manager that integrates with RMAN to backup directly to tape. Alternatively one can backup to disk and then manually copy the backups to tape.

Can one restore RMAN backups without a CONTROLFILE and RECOVERY CATALOG?

Details of RMAN backups are stored in the database control files and optionally a Recovery Catalog. If both these are gone, RMAN cannot restore the database. In such a situation one must extract a control file (or other files) from the backup pieces written out when the last backup was taken.
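
If controlfile autobackups were configured, a typical recovery sequence looks like the sketch below (the DBID must be known beforehand, e.g. from an old RMAN log; 1234567890 is only a placeholder):

$ rman target /
RMAN> set dbid 1234567890;
RMAN> startup nomount;
RMAN> restore controlfile from autobackup;
RMAN> alter database mount;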

Friday, February 8, 2019

weblogic.nodemanager.NMException / java.rmi.NoSuchObjectException While Starting AdminServer


Issue:

javax.naming.ServiceUnavailableException [Root exception is java.rmi.NoSuchObjectException: The object identified by: '31' could not be found. Either it was has not been exported or it has been collected by the distributed garbage collector.]
at weblogic.jndi.internal.ExceptionTranslator.toNamingException(ExceptionTranslator.java:70)
weblogic.management.provider.MSIService.registerForReconnectToAdminServer(MSIService.java:174)
at weblogic.management.mbeanservers.compatibility.internal.CompatibilityMBeanServerService.start(CompatibilityMBeanServerService.java:154)
at weblogic.t3.srvr.SubsystemRequest.run(SubsystemRequest.java:64)
at weblogic.work.ExecuteThread.execute(ExecuteThread.java:256)
at weblogic.work.ExecuteThread.run(ExecuteThread.java:221)
Caused by: java.rmi.NoSuchObjectException: The object identified by: '31' could not be found. Either it was has not been exported or it has been collected by the distributed garbage collector.
at weblogic.rmi.internal.OIDManager.getServerReference(OIDManager.java:251)
at weblogic.rmi.extensions.server.ServerHelper.getLocalInitialReference(ServerHelper.java:455)
at weblogic.rmi.extensions.StubFactory.getStub(StubFactory.java:131)
at weblogic.rmi.extensions.StubFactory.getStub(StubFactory.java:124)
at weblogic.jndi.WLInitialContextFactoryDelegate.getInitialReference(WLInitialContextFactoryDelegate.java:427)

Cause:

Under the DOMAIN_HOME/servers/AdminServer/nodemanager directory, there is a file called startup-copy.properties.

This file has an ADMIN_URL parameter. The AdminServer does not need the ADMIN_URL parameter defined in its startup properties; if it is present, it results in the error above.

Solution:

Make sure nodemanager.properties has StartScriptEnabled=true and StartScriptName=startWebLogic.sh
Under DOMAIN_HOME/servers/AdminServer/nodemanager directory, make sure none of the files have ADMIN_URL parameter.

Reference: My Oracle Support Doc ID 2335354.1

Solaris : Screen commands


To start screen session:
$ screen

To detach the screen session and go back to the normal SSH session, press:
Ctrl+a d

To list screen sessions
$ screen -ls

To reattach SCREEN session
$ screen -r

To reattach specific SCREEN session
$ screen -r <session-id>

To start screen session and assign session name to it
$ screen -S <session-name>

Difference between Oracle EBS 12.1.3 and Oracle EBS R12.2.0?


1)In R12.2 we have two kinds of file systems, fs1 (run file system) and fs2 (patch file system), whereas in R12.1.3 we deal with only one application file system.

2)In R12.2 the application servers are replaced by WebLogic Server to manage the technology stack.
The 10.1.3 Home is replaced by the FMW (Fusion Middleware) Home, i.e. $FMW_HOME.

3)The major change in R12.2 is the involvement of WebLogic Server to manage all the forms and oacore servers, whereas in R12.1.3 we had Application Server 10.1.3 to manage the web home or Java home.

4)The adpatch (patching) utility in R12.1.3 is replaced by adop (online patching) in R12.2.

5)The adop utility involves 5 phases to apply a standard patch in Oracle EBS R12.2 (see the phase sequence after this list).

6)While installing Oracle EBS 12.2.0, it will ask for two port pools, one for the run file system and another for the patch file system.

7)In Oracle EBS 12.2.0, while starting and stopping all services, it will ask for the WebLogic password in addition to the apps password.
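
The 5 adop phases mentioned in point 5 run in this sequence:

$ adop phase=prepare
$ adop phase=apply patches=<patch_number>
$ adop phase=finalize
$ adop phase=cutover
$ adop phase=cleanup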

What is the OS Watcher Utility and How to Use It for Database Troubleshooting?


OS Watcher (OSWatcher) is a tool that helps DBAs troubleshoot database performance, cluster reboot, node eviction, DB server reboot, and DB instance crash related issues, among many others.

As we know, OS stats from tools like top, mpstat, and netstat play an important role in database troubleshooting, but on their own there is no way to keep historical data for these stats. Here OS Watcher comes to the rescue. Suppose there was a performance issue on the database node yesterday, but you were not aware of it at the time, and by the time you learned of it the issue had already resolved itself.

A DBA can get database-related stats for the previous day from AWR reports, but not OS-related stats. To overcome this challenge Oracle introduced the OS Watcher utility, which collects OS stats at a regular interval and retains them for a configurable window (30-second samples kept for 48 hours by default, as in Example 1 below). So the DBA need not worry about historical OS stats.

To troubleshoot database performance related issues, AWR, ADDM, and OS Watcher logs are the first place to start for a remote DBA, whereas for cluster reboot, node eviction, and DB server reboot, the alert log files, OS Watcher, and system messages (/var/log/messages) play an important role.


Example 1: This would start the tool and collect data at default 30 second intervals and log the last 48 hours of data to archive files.

./startOSWbb.sh

Example 2: This would start the tool and collect data at 60 second intervals and log the last 10 hours of data to archive files and automatically compress the files.
./startOSWbb.sh 60 10 gzip

Example 3: This would start the tool and collect data at 60 second intervals and log the last 10 hours of data to archive files, compress the files and set the archive directory to a non-default location.

./startOSWbb.sh 60 10 gzip /u02/tools/oswbb/archive

Example 4: This would start the tool and collect data at 60 second intervals and log the last 48 hours of data to archive files, NOT compress the files and set the archive directory to a non-default location.

./startOSWbb.sh 60 48 NONE /u02/tools/oswbb/archive

Example 5: This would start the tool, put the process in the background, enable to the tool to continue running after the session has been terminated, collect data at 60 second intervals, and log the last 10 hours of data to archive files.

nohup ./startOSWbb.sh 60 10 &

Note:
OSW will keep running until it is stopped or crashes, and will keep data for the last 72 hours only. Data is automatically compressed after 72 hours.

Below files are created:

[root@mqm ~]# ls -ltr /u01/share/oswbb/archive
total 40
drwxr-xr-x 2 root root 4096 Aug 12 02:49 oswvmstat
drwxr-xr-x 2 root root 4096 Aug 12 02:49 oswtop
drwxr-xr-x 2 root root 4096 Aug 12 02:49 oswslabinfo
drwxr-xr-x 2 root root 4096 Aug 12 02:49 oswps
drwxr-xr-x 2 root root 4096 Aug 12 02:49 oswprvtnet
drwxr-xr-x 2 root root 4096 Aug 12 02:49 oswnetstat
drwxr-xr-x 2 root root 4096 Aug 12 02:49 oswmpstat
drwxr-xr-x 2 root root 4096 Aug 12 02:49 oswmeminfo
drwxr-xr-x 2 root root 4096 Aug 12 02:49 oswiostat
drwxr-xr-x 2 root root 4096 Aug 12 02:49 oswifconfig

Control, Redo & Datafiles


The most important files in Oracle are
1. control file
2. Redo files
3. Data files

CONTROL FILE:
Number of datafiles (DBFs)
Redo log files
File names
File timestamps
SCN (System Change Number)
Tablespace details and any backup info
Critical information about the database, including whether the database was closed gracefully last time or not.
Without the control file, even if the other files (DBFs, redo logs) are present, the database cannot be opened.
We require a minimum of one control file, but it is advisable to have 2 or 3 mirrored copies on different drives, so that even if one drive fails the database can be opened using another copy.
The control file is a very tiny file (roughly 500 KB to 1 MB) as it contains only details about the database, not any data such as tables.

REDOLOG FILE:
These files contain the latest changes performed by end users against the tables.
We require a minimum of two redo log groups.
At any point in time Oracle writes to one redo log file, which is said to be in the "CURRENT" status; once the file fills up, Oracle performs a "LOG SWITCH" and starts writing to the second file, which now becomes "CURRENT". This is a cyclic process.
A background process called "LOG WRITER" (LGWR) is responsible for writing to the redo log files.
We can have more log groups to avoid a "pause" state.
Redo log files are usually anywhere between 500 KB and 3 MB depending on the number of transactions (in Oracle 10g the minimum redo log size is 4 MB).
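
The CURRENT group and a manual log switch can be observed from SQL*Plus:

SQL> select group#, status from v$log;
SQL> alter system switch logfile;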

DATA FILE
These are database files which represent tablespace.
One database file can represent only one tablespace.
Each DBF is made up of Oracle blocks of 2^n KB (n ranges from 1 to 5).
The supported block sizes in oracle are 2k, 4k, 8k, 16k, and 32k.
All our tables, indexes are ultimately created in DBF’s
The very first block in every DBF has most important info called SCN.

What are DDL, DML & DCL?


Data Definition Language(DDL) are statements used for defining the database structure or schema.

CREATE statement- to create objects in the database
ALTER statement- to alter the objects in the database
DROP – to delete the objects in the database

Data Manipulation language(DML) are statements used for managing data within schema objects.

SELECT statement- to retrieve data from a database
INSERT statement- to insert data into a table
UPDATE statement- to update existing data

Data Control Language (DCL) statements are used for controlling access to data.

GRANT statement- to give access privileges to database
REVOKE statement- to withdraw access privileges

Conclusion:

DDL statements auto-commit, which means you can't roll them back,
while DML statements do not auto-commit, which means you can roll back the operations.
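
A small demonstration of the difference (T is just a scratch table):

SQL> create table t (n number);   -- DDL: committed immediately
SQL> insert into t values (1);    -- DML: can still be rolled back
SQL> rollback;
SQL> select count(*) from t;      -- returns 0, but the table itself still exists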

What happens while creating the database?


1.The Oracle engine creates the control file. The location and name of the file are taken from the parameter file.

After the creation of the controlfile,

2.Oracle mounts the database and then creates the SYSTEM tablespace. In the SYSTEM tablespace, a rollback segment named SYSTEM is created. This rollback segment exists for the lifetime of the database and takes care of all transactions performed against the SYSTEM tablespace.

3.The Controlfile records all the physical files as entries.

4.Now, Oracle has to create the base tables in the system tablespace,
base tables like TAB$, SYN$, USER$, VIEW$, etc.
For the creation of the base tables, Oracle needs their definitions. These definitions are pre-defined by Oracle in a file called sql.bsq.
When we execute the CREATE DATABASE command, this file is invoked.
After execution, the database is opened.

5.Execution of catalog.sql – $ORACLE_HOME/rdbms/admin/catalog.sql

This script creates most of the views, synonyms, and public synonyms that are used by the DBA and programmers.

catproc.sql :- Creates most of the packages used for Oracle Programming.
The location of the script is $ORACLE_HOME/rdbms/admin/catproc.sql

pupbld.sql :- Product user profile, This script will create certain tables and roles in the system user that will be used by the DBA to restrict particular user from executing particular command.
The location of the script is $ORACLE_HOME/sqlplus/admin/pupbld.sql.

background_dump_dest = /ora1/oradata/sandy/bdump
If any error is encountered in any of the background processes. Then, those errors would be recorded in this directory. It also contains the alert log file

core_dump_dest=/ora1/oradata/sandy/cdump
It will contain the trace files associated with any memory leaks that may occur within the database. Memory leaks usually occur when a programmer  does not close a pl/sql cursor that had been opened previously. Any errors related to SGA will be recorded to CDUMP directory.

user_dump_dest=/ora1/oradata/sandy/udump
It contains the files that have been traced by the users or DBA.

Process Monitor (PMON):-

If a user updates or performs some DML operation on a set of rows but does not commit, that set of rows is locked, so other users cannot perform DML on those rows.
Meanwhile, if the client system from which the DML operation was performed gets rebooted or disconnected from the Oracle server, PMON comes into the picture and cleans up after the dead user process: it rolls back the uncommitted transaction and releases the locks that had been placed.

ALERT_LOG_FILE :-

This is the most important file for the DBA. The location is /ora1/oradata/sandy/bdump. This file records all physical changes made to the database and also logs every significant action taken by the database, including:

Startup.
Shutdown.
Startup of background processes.
Opening the files.
Any database related errors.
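To watch it live, something like the following works (the file name alert_<SID>.log is the usual convention; the SID sandy is taken from the example paths above):

$ tail -f /ora1/oradata/sandy/bdump/alert_sandy.log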

RAC Basic FAQs


What is OHASD?
OHASD stands for Oracle High Availability Services Daemon. OHASD spawns three levels of agents at the cluster level:

Level 1: cssdagent
Level 2: orarootagent (respawns cssd, crsd, ctssd, diskmon, acfs)
Level 3: oraagent (respawns mdnsd, gipcd, gpnpd, evmd, asm), cssdmonitor

Useful Commands:
1. crsctl enable has –> Start HAS services automatically after reboot.
2. crsctl disable has –> Do not start HAS services automatically after reboot.
3. crsctl config has –> Check whether autostart is enabled or not.
4. cat /etc/oracle/scls_scr/<node_name>/root/ohasdstr –> Check whether autostart is enabled or not.
5. cat /etc/oracle/scls_scr/<node_name>/root/ohasdrun –> Check whether restart is enabled if the node fails.

What is OCR? How and why OLR is used? Where is the location of OCR & OLR?
OCR stands for Oracle Cluster Registry. It holds information such as node membership (which nodes are part of the cluster), the software version, the location of the voting disk, and the status of RAC databases, listeners, instances & services. OCR is placed in ASM or OCFS.

ASM can be brought up only if we have access to OCR. But OCR is accessible only after ASM is up. In this case, how will the CRS services come up?

For this, the OLR (Oracle Local Registry) exists. It is a node-local copy of the OCR information needed to start the stack, placed on the local file system.

OLR holds information such as CRS_HOME, GPnP details, active version, localhost version, the latest OCR backup (with time & location), node name, etc.
Location Of OCR & OLR:

#cat /etc/oracle/ocr.loc –> OCR file Details.
ocrconfig_loc=<+ASM_Location>
local_only=FALSE

# cat /etc/oracle/olr.loc –> OLR file Details.
olrconfig_loc=<file_name_with_location.olr>
crs_home=<CRS_HOME_Location>

Useful Commands:

NOTE: Some commands like restore need bounce of services. Please verify before taking any action.

ocrconfig -showbackup –> OCR file backup location
ocrconfig -export <File_Name_with_Full_Location.ocr> –> Back up OCR
ocrconfig -restore <File_Name_with_Full_Location.ocr> –> Restore OCR
ocrconfig -import <File_Name_With_Full_Location.dmp> –> Import metadata specifically for OCR
ocrcheck -details –> Gives the OCR info in detail
ocrcheck -local –> Gives the OLR info in detail
ocrdump -local <File_Name_with_Full_Location.olr> –> Take a dump of OLR
ocrdump <File_Name_with_Full_Location.ocr> –> Take a dump of OCR

What is the Voting Disk and how is this Used?
The voting disk comes into the picture when a node joins the cluster, when a node fails (it may be evicted), and when a VIP needs to be assigned in case GNS is configured. The voting disk saves the information about which nodes are part of the cluster. While starting the CRS services, with the help of OCR, each node votes in the voting disk (nothing but marking its attendance in the cluster).

We need not back up the voting disk periodically like our cron jobs. We are supposed to take a backup only after cluster configuration changes, such as adding or deleting a node.

There are two different jobs done by voting disk.

Dynamic – Heart beat information
Static – Node information in the cluster
Useful Commands:

dd if=Name_Of_Voting_Disk of=Name_Of_Voting_Disk_Backup –> Take a backup of the voting disk
crsctl query css votedisk –> Check voting disk details
crsctl add css votedisk <path_to_voting_disk> –> Add a voting disk
crsctl add css votedisk -force –> If the cluster is down
crsctl delete css votedisk <voting_disk_GUID_or_path> –> Delete a voting disk
crsctl delete css votedisk -force –> If the cluster is down
crsctl replace votedisk <+ASM_Disk_Group> –> Replace the voting disk

What is CRS?
CRS stands for Cluster Ready Services. Its main daemon, CRSD (Cluster Ready Services Daemon), is the process responsible for monitoring, stopping, starting & failing over the resources. This process maintains OCR and is responsible for restarting resources when a failover takes place.

Useful Commands:

crs_stat -t -v –> Check crs resources
crsctl stat res -t –> Check in a more detailed view. BEST ONE.
crsctl enable crs –> Enable automatic start of CRS services after reboot
crsctl check crs –> Check CRS services
crsctl disable crs –> Disable automatic start of CRS services after reboot
crsctl stop crs –> Stop the CRS services on the node where we are executing
crsctl stop crs -f –> Stop the CRS services forcefully
crsctl start crs –> Start the CRS services on the respective node
crsctl start crs -excl –> Start the CRS services in exclusive mode when you have lost the voting disk.
You need to replace the voting disk after you start the CSS.
crsctl stop cluster -all –> Stop the CRS services on all the cluster nodes
crsctl start cluster -all –> Start the CRS services on all the cluster nodes
olsnodes –> Find all the nodes in the cluster
oclumon manage -get master –> Get the master node information
cat $CRS_HOME/crs/init/<node_name>.pid –> Find the PID under which CRS is running

What is CSSD?
CSSD stands for Cluster Synchronization Services Daemon. It is responsible for communication between the nodes and monitors the heartbeat messages from all the nodes.

Example:

We have a 2-node RAC cluster. Until one hour back, CSSD was monitoring both nodes, and they were able to communicate with each other. Now, if one of the nodes goes down, CRS should know that the node is down. This information is provided by the CSSD process.

Simple Scenario:

Suppose both nodes are up & running, but due to a failure of one of the communication channels, the CSSD process gets the information that the other node is down. In this case, new transactions cannot be assigned to that node: node eviction will be done, and the node which is still running takes ownership as the master node.

This sample scenario was given for better understanding ONLY.

Useful Commands:
crsctl stop css –> Stop the CSS
crsctl disable css –> Disabling automatic startup after reboot.

What is CTSSD?
CTSSD stands for Cluster Time Synchronization Services Daemon. By default this service runs in observer mode: if a time difference exists between the nodes, it takes no action. To run this service in active mode, we need to disable all other time synchronization services, such as NTP (Network Time Protocol). But as per my knowledge it is recommended to keep this service in observer mode, because if the service is in active mode and the time difference is huge, the ctssd process may terminate, and sometimes crsd fails to start up due to the time difference.

Useful Commands:
cluvfy comp clocksync -n all -verbose –> Check the clock synchronization across all the nodes
crsctl check ctss –> Check the service status & time offset in msecs

What is VIP?
VIP stands for Virtual IP Address. Oracle uses VIPs for database-level access. Basically, when a connection comes from the application end, it connects using this IP address. If the node behind a plain IP address is down, the client has to wait for the protocol timeout, about 90 seconds, before getting a response. This is the scenario where the VIP comes into the picture: if a node is down, its VIP fails over and connections are routed to an active node. The VIP must be on the same subnet as the public IP address. The VIP is used for RAC failover and RAC management.

Useful Commands:
srvctl start vip -n <node_name> -i <VIP_name> –> Start the VIP
srvctl stop vip -n <node_name> -i <VIP_name> –> Stop the VIP
srvctl enable vip -i <VIP_name> –> Enable the VIP
srvctl disable vip -i <VIP_name> –> Disable the VIP
srvctl status nodeapps -n <node_name> –> Status of nodeapps
srvctl status vip -n <node_name> –> Status of the VIP on a node

What is SCAN IP & Listener?
SCAN stands for Single Client Access Name. The SCAN IPs must be on the same subnet as the public network. Three SCAN IPs are the recommended count; they redirect user sessions to the SCAN listeners. Load balancing on the SCAN listener is done by a least-recently-loaded algorithm.

SCAN Listener: when a connection is initiated from the application end, the SCAN listener checks the load-balancing information, and once it has it, it hands the connection over to the node listener, and the user can carry out the transaction.

The main benefit is that we need not change the connect string on the application servers when changes are made to the cluster, such as adding a node, deleting a node, or other modifications based on requirements.

Useful Commands:
srvctl config scan –> Retrieve the SCAN VIP configuration
srvctl config scan_listener –> List of SCAN listeners with port numbers
srvctl add scan -n <scan_name> –> Add the SCAN VIPs to the cluster
srvctl add scan_listener -p <desired_port_number> –> Add a SCAN listener on a specific port
SQL> SHOW PARAMETER REMOTE_LISTENER; –> Find the SCAN listeners the instance registers with
srvctl stop scan –> Stop all SCAN VIPs when used without the -i option
srvctl stop scan_listener –> Stop one or more SCAN listeners
srvctl start scan –> Start the SCAN VIP
srvctl start scan_listener –> Start the SCAN listener
srvctl status scan –> Verify SCAN VIP status
srvctl status scan_listener –> Verify SCAN listener status
srvctl modify scan_listener –> Modify the SCAN listener
srvctl relocate scan_listener -i <ordinal_number> -n <node_name> –> Relocate the SCAN listener to another node
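For example, a client tnsnames.ora entry that points at the SCAN (the host name myclu-scan.example.com and service name orcl are hypothetical) keeps working even when nodes are added or removed:

ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = myclu-scan.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = orcl))
  )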

What is ologgerd?
ologgerd stands for the cluster logger service daemon; it is otherwise called the cluster logger service. The logger service writes the data on the master node and chooses another node as standby. If a network issue occurs between the nodes and the master cannot be contacted, another node takes ownership & chooses a node as standby. The master manages the operating-system metric database in the CHM repository.

Useful Commands:
oclumon manage -get master –> Find which node is the master
oclumon manage -get reppath –> Get the path of the repository logs
oclumon manage -get repsize –> Get the limitations on the repository size
oclumon showobjects –> Find which nodes are connected to the logger daemon
oclumon dumpnodeview –> Gives a detailed view including system, top consumers, processes, devices, nics, filesystems status and protocol errors
oclumon dumpnodeview -n <node_1 node_2 node_3> -last "HH:MM:SS" –> View those details for the given nodes from the specified time onwards
oclumon dumpnodeview allnodes -last "HH:MM:SS" –> The same info from all the nodes

What is sysmon?
This process runs on every node; it collects information on the local node and sends that data to the master ologgerd. It sends info such as CPU, memory usage, OS-level info, disk info, processes and file system info.

What is evmd?
EVMD stands for Event Manager Daemon. It handles event messaging for the processes: it sends and receives actions regarding resource state changes to and from all other nodes in the cluster. It takes the help of ONS (Oracle Notification Services).

Useful Commands:

evmwatch -A -t "@timestamp @@" –> Get events generated in evmd
evmpost -u "<message here>" -h <node_name> –> Post a message in the evmd log on the mentioned node

What is mdnsd?
mdnsd stands for Multicast Domain Name Service daemon. This process is used by gpnpd to locate profiles in the cluster, and by GNS to perform name resolution. mdnsd updates its pid file in the init directory.

What is ONS?
ONS stands for Oracle Notification Service. ONS allows users to send SMS messages, e-mails, voice messages and faxes in an easy way. ONS sends the state of the database and instances; this state information is used for load balancing. ONS also communicates with the daemons on other nodes to inform them of the state of the database.

ONS is started by CRS as part of nodeapps and runs as a node application; every node has its own ONS configured.

Useful Commands:

srvctl status nodeapps –> Status of nodeapps
cat $ORACLE_HOME/opmn/conf/ons.config –> Check ons configuration.
$ORACLE_HOME/opmn/logs –> ONS logs will be in this location.

what is OPROCD ?
OPROCD stands for Oracle Process Monitor Daemon. oprocd monitors the system state of cluster nodes and fences a hung node via STONITH, which is nothing but power-cycling the node: simply, powering the server off & on using the reboot command. The main change from 11gR2 onwards is that OPROCD's job is done by the cssdagent.

Useful Commands:

CRS_HOME/oprocd stop –> Stop the process on a single node

What is FAN?
FAN stands for Fast Application Notification. If any state change occurs in the cluster/instance/node, an event is triggered by the event manager and propagated by ONS; this event is known as a FAN event. It is a feature introduced in Oracle 10g for immediate notification, and it uses ONS for notifying.

Useful Commands:

onsctl ping –> Check whether ONS is running or not.
onsctl debug –> Get a detailed view of ONS.
onsctl start –> Start the daemon.
onsctl stop –> Stop the daemon.

What is TAF?
TAF stands for Transparent Application Failover. When a RAC node goes down, in-flight SELECT statements fail over to an active node. INSERT, DELETE, UPDATE and ALTER SESSION statements are not supported by TAF, and temporary objects & PL/SQL package states are lost during the failover.

There are two types of failover methods used in TAF.

Basic failover: connects to a single node. No overhead is incurred, but the end user experiences a delay in completing the transaction on failover.
Preconnect failover: connects to the primary & backup node at a time. This offers faster failover, but an overhead is incurred, since the backup connection must be kept ready to complete the transaction with minimal delay.
Useful Commands:

1. Add a service:
srvctl add service -d <database_name> -s <service_name> -r <instance_names> -P <policy>
Policy specification – NONE, BASIC, PRECONNECT

2. Check TAF status:
SELECT machine, failover_type, failover_method, failed_over, COUNT(*) FROM gv$session GROUP BY machine, failover_type, failover_method, failed_over;
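For reference, a minimal client-side TAF entry in tnsnames.ora (the host and service names are hypothetical), using SELECT failover with the BASIC method:

TAF_SRV =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = myclu-scan.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = taf_srv)
      (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 30)(DELAY = 5))
    )
  )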

What is FCF?

FCF stands for Fast Connection Failover. It is an application-level failover process. It automatically subscribes to FAN events, which helps it react immediately to up & down events from the database cluster. All failed connections are cleaned up immediately so that the application receives a failure message, and after cleanup, any new connection reaches an active node through load balancing. As said, this is an application-level process, so I am not discussing it much.

What is GCS(LMSn)?

GCS stands for Global Cache Service. GCS caches the information about data blocks and the access privileges of the various instances. Integrity is maintained through global access control. GCS is responsible for transferring blocks from one instance to another when needed.

Clear understanding: blocks of table "A" were retrieved by a connection on the second node. Now, if the first node requests blocks of this table, the service need not pick the data from the datafiles; the blocks can be shipped from the other instance's cache. This is the main use of GCS.

What is GES(LMD)?

GES stands for Global Enqueue Service. GES controls library and dictionary caches on all the nodes. GES manages transaction locks, table locks, library cache locks, dictionary cache locks, database mount lock.

What is GRD?

GRD stands for Global Resource Directory. It records the information about resources and enqueues. As the name suggests, it stores all of this information: data block identifiers, the mode in which each data block is held (shared, exclusive, null), and which buffer caches have access.

What is GPNPD?

GPNPD stands for Grid Plug aNd Play Daemon. A file located at CRS_HOME/gpnp/<node_name>/profile/peer/profile.xml is known as the GPnP profile. This profile contains the cluster name, hostname, network profiles with IP addresses, and OCR details. If we make any modification to the voting disk, the profile is updated.

Useful Commands:

gpnptool ver –> Check the version of the tool
gpnptool lfind –> Find the local gpnpd server (i.e., check that the daemon is running on the local node)
gpnptool get –> Read the profile
gpnptool check -p=<CRS_HOME>/gpnp/<node_name>/profile/peer/profile.xml –> Check whether the configuration is valid
What is Diskmon?

The disk monitor daemon runs continuously once ocssd starts. It monitors and performs I/O fencing for Exadata storage servers (such a server is termed a "cell" in Exadata). This process runs from the time ocssd starts, because an Exadata cell can be added to any cluster at any time.

Useful Commands:

crsctl stat res ora.diskmon –> To check diskmon status

Export and Import, with advantages


What happens when we export data from a database?
Before going into the topic, you should note something: export means using SELECT statements against the source (backup), and import means using INSERT statements against the target (restore).

The export utility simply prepares a set of SQL statements and executes them against the source database.
The data returned by the SELECT operations (the definitions and rows of the objects) is written to a file known as the dump file. The extension of the dump file is ".dmp".
This dump file is platform independent and can be carried to any other Oracle site.
The DBA can import the objects from the dump file into a database when required.
The import utility first reads the definitions from the dump file and creates the objects.
It then executes the INSERT statements from the dump file against the database and loads the data back into the objects it just created.
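A minimal sketch with the classic utilities (file names are illustrative; exp/imp prompt for the SYSTEM password):

$ exp system file=full_db.dmp full=y log=exp_full.log
$ imp system file=full_db.dmp full=y log=imp_full.log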

Advantages of export and import:

We can back up and restore a particular object (table).
Export/import can be used to copy objects from one Oracle schema to another.
For moving an Oracle database from one operating system to another (like Windows to Linux), logical backups via the dump file are the only way to do it.
We can export (backup) and import (restore) objects without shutting down the database.
Whenever we upgrade across Oracle versions, like 8i to 9i, Oracle automatically upgrades the database. But we still need to export the full database before we begin the upgrade process, because if the upgrade fails, the export will be helpful.
Export helps in performing a database re-org. After a database has been used for a while, it gets fragmented, with some segments spanning many extents. This results in poor I/O performance. To fix this, we follow these steps:

1. Full Database Export.
2. Drop the Database.
3. Create a Brand New Database.
4. Full Database Import

At the end of the import, each segment is back within a single large extent and we get rid of all the fragmentation.
Exports can be done at table level, schema level, tablespace level or full-database level.
While exporting a table we can mention the QUERY option from 8i onwards, for example exporting the emp table where deptno=10 (see the sketch after this list).
If the database is too large, the dump file can run into gigabytes, and that much space may not be available in a single file system; we can therefore export the data into multiple files spanning several file systems.
The size of an export dump file is usually 6 to 10 times smaller than the size of the database.
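A minimal sketch of that QUERY option (the scott schema and file name are illustrative; the quotes usually need shell escaping as shown):

$ exp scott tables=emp query=\"where deptno=10\" file=emp10.dmp log=emp10.log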

Why is the dump file so much smaller than the database?

1. Control files and redo log files are not exported.
2. Export does not carry any base tables (objects owned by the SYS user).
3. The temporary segments are ignored.
4. Undo and rollback segments are ignored, as we are interested only in committed data.
5. Only table data goes into the dump file, along with the table definitions.

The time taken for exp:imp is usually a 1:6 ratio, as export is only a SELECT operation, whereas import is a combination of DML + DDL operations, and each row goes through the undo or rollback segments and also into the log files.
The import operation therefore causes a lot of redo activity, and hence log switching will occur.

Differences Between traditional exp/imp and datapump


1) Impdp/Expdp are self-tuning utilities. Tuning parameters that were used in original Export and Import, such as BUFFER and RECORDLENGTH, are neither required nor supported by Data Pump Export and Import.

2) Data Pump represents metadata in the dump file set as XML documents rather than as DDL commands.

3)Impdp/Expdp use parallel execution rather than a single stream of execution, for improved performance.

4) In Data Pump, expdp full=y followed by impdp schemas=prod is the same as expdp schemas=prod followed by impdp full=y, whereas original export/import does not always exhibit this behavior.

5)Expdp/Impdp access files on the server rather than on the client.

6)Expdp/Impdp operate on a group of files called a dump file set rather than on a single sequential dump file.

7) Sequential media, such as tapes and pipes, are not supported by Oracle Data Pump, whereas with original export/import we could directly compress the dump by using pipes.

8)The Data Pump method for moving data between different database versions is different than the method used by original Export/Import.

9)When you are importing data into an existing table using either APPEND or TRUNCATE, if any row violates an active constraint, the load is discontinued and no data is loaded. This is different from original Import, which logs any rows that are in violation and continues with the load.

10)Expdp/Impdp consume more undo tablespace than original Export and Import.

11) If a table has compression enabled, Data Pump Import attempts to compress the data being loaded, whereas the original Import utility loaded data in such a way that even if a table had compression enabled, the data was not compressed upon import.

12)Data Pump supports character set conversion for both direct path and external tables. Most of the restrictions that exist for character set conversions in the original Import utility do not apply to Data Pump. The one case in which character set conversions are not supported under the Data Pump is when using transportable tablespaces.

13)There is no option to merge extents when you re-create tables. In original Import, this was provided by the COMPRESS parameter. Instead, extents are reallocated according to storage parameters for the target table.
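For comparison, a minimal Data Pump sketch (the prod schema is illustrative; the directory object DATA_PUMP_DIR exists by default, and %U numbers the files of the dump file set):

$ expdp system schemas=prod directory=DATA_PUMP_DIR dumpfile=prod%U.dmp parallel=4 logfile=expdp_prod.log
$ impdp system full=y directory=DATA_PUMP_DIR dumpfile=prod%U.dmp parallel=4 logfile=impdp_prod.log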

Basic DBA FAQs



What is Log Switch?
The point at which ORACLE ends writing to one online redo log file and begins writing to another is called a log switch.

What is On-line Redo Log?
The on-line redo log is a set of two or more on-line redo log files that record all changes made to the database. Whenever a transaction is committed, the corresponding redo entries temporarily stored in the redo log buffer of the SGA are written to an on-line redo log file by the background process LGWR. The on-line redo log files are used in a cyclical fashion.
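A small sketch for observing and forcing this cycle (requires DBA privileges):

SQL> SELECT group#, sequence#, status FROM v$log;
SQL> ALTER SYSTEM SWITCH LOGFILE;   -- forces a log switch to the next group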

Which parameter specified in the DEFAULT STORAGE clause of CREATE TABLESPACE cannot be altered after creating the tablespace?
All the default storage parameters defined for the tablespace can be changed using the ALTER TABLESPACE command. However, once objects are created, their INITIAL and MINEXTENTS values cannot be changed.

What are the steps involved in Database Startup?
Start an instance, Mount the Database and Open the Database.

What are the steps involved in Instance Recovery?
1. Rolling forward to recover data that has not been recorded in the data files but has been recorded in the on-line redo log, including the contents of rollback segments.
2. Rolling back transactions that have been explicitly rolled back or have not been committed, as indicated by the rollback segments regenerated in step 1.
3. Releasing any resources (locks) held by transactions in process at the time of the failure.
4. Resolving any pending distributed transactions undergoing a two-phase commit at the time of the instance failure.

Can Full Backup be performed when the database is open?
No.

What are the different modes of mounting a Database with the Parallel Server?
Exclusive Mode – if the first instance that mounts a database does so in exclusive mode, only that instance can mount the database.
Parallel Mode – if the first instance that mounts a database is started in parallel mode, other instances that are started in parallel mode can also mount the database.

What are the advantages of operating a database in ARCHIVELOG mode over operating it in NO ARCHIVELOG mode?
Complete database recovery from disk failure is possible only in ARCHIVELOG mode. Online database backup is possible only in ARCHIVELOG mode.
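For reference, a minimal sketch of switching a database into ARCHIVELOG mode (run as SYSDBA; the database must be mounted but not open):

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;
SQL> ARCHIVE LOG LIST   -- confirms the new log mode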

What are the steps involved in Database Shutdown?
Close the Database, Dismount the Database and Shutdown the Instance.

What is Archived Redo Log?
The archived redo log consists of redo log files that have been archived before being reused.

What is Restricted Mode of Instance Startup?
An instance can be started in (or later altered to be in) restricted mode, so that when the database is open, connections are limited to users whose accounts have been granted the RESTRICTED SESSION system privilege.

What is Partial Backup?
A Partial Backup is any operating system backup short of a full backup, taken while the database is open or shut down.

What is Mirrored on-line Redo Log?
A mirrored on-line redo log consists of copies of on-line redo log files physically located on separate disks; changes made to one member of the group are made to all members.

What is Full Backup?
A full backup is an operating system backup of all the data files, on-line redo log files and the control file that constitute an Oracle database, plus the parameter file.

Can a View be based on another View?
Yes.

Can a Tablespace hold objects from different Schemas?
Yes.

Can objects of the same Schema reside in different tablespaces?
Yes.

What is the use of Control File?
When an instance of an ORACLE database is started, its control file is used to identify the database and redo log files that must be opened for database operation to proceed. It is also used in database recovery.

Do Views contain Data?
Views do not contain or store data.

What are the Referential actions supported by FOREIGN KEY integrity constraint?
UPDATE and DELETE Restrict – a referential integrity rule that disallows the update or deletion of referenced data. DELETE Cascade – when a referenced row is deleted, all associated dependent rows are deleted.
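A small illustration with hypothetical dept/emp tables:

CREATE TABLE dept (
  deptno NUMBER PRIMARY KEY,
  dname  VARCHAR2(30)
);

CREATE TABLE emp (
  empno  NUMBER PRIMARY KEY,
  deptno NUMBER REFERENCES dept (deptno) ON DELETE CASCADE
);

-- Deleting a department now also removes its employees;
-- without ON DELETE CASCADE, the delete would be restricted.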

What are the type of Synonyms?
There are two types of synonyms: private and public.

What is a Redo Log?
The set of redo log files for a database is collectively known as the database's redo log.

What can a CHECK constraint NOT reference?
A CHECK constraint cannot reference the SYSDATE, UID, USER or USERENV SQL functions, or the pseudo columns LEVEL or ROWNUM.

What is an Index Segment?
Each Index has an Index segment that stores all of its data.

Explain the relationship among Database, Tablespace and Data file.
Each database is logically divided into one or more tablespaces, and one or more data files are explicitly created for each tablespace.

What are the different type of Segments?
Data Segment, Index Segment, Rollback Segment and Temporary Segment.

What are Clusters?
Clusters are groups of one or more tables physically stored together because they share common columns and are often used together.

What is an Integrity Constraint?
An integrity constraint is a declarative way to define a business rule for a column of a table.

What is an Index?
An index is an optional structure associated with a table that provides direct access to rows; it can be created on one or more columns of a table to increase the performance of data retrieval.

What is an Extent?
An Extent is a specific number of contiguous data blocks, obtained in a single allocation, and used to store a specific type of information.

What is a View?
A view is a virtual table. Every view has a Query attached to it. (The Query is a SELECT statement that identifies the columns and rows of the table(s) the view uses.)

What is Table?
A table is the basic unit of data storage in an ORACLE database. The tables of a database hold all of the user accessible data. Table data is stored in rows and columns.

What is a Synonym?
A synonym is an alias for a table, view, sequence or program unit.

What is a Sequence?
A sequence generates a serial list of unique numbers for numerical columns of a database and its tables.
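A quick illustration (the sequence name emp_seq is hypothetical):

SQL> CREATE SEQUENCE emp_seq START WITH 1 INCREMENT BY 1;
SQL> SELECT emp_seq.NEXTVAL FROM dual;   -- returns 1, then 2, and so on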

What is a Segment?
A segment is a set of extents allocated for a certain logical structure.

What is schema?
A schema is a collection of the database objects of a user.

Describe Referential Integrity?
A rule defined on a column (or set of columns) in one table that allows the insert or update of a row only if the value for the column or set of columns (the dependent value) matches a value in a column of a related table (the referenced value). It also specifies the type of data manipulation allowed on referenced data and the action to be performed on dependent data as a result of any action on referenced data.

What is Hash Cluster?
A row is stored in a hash cluster based on the result of applying a hash function to the row's cluster key value. All rows with the same hash key value are stored together on disk.

What is a Private Synonyms?
A private synonym can be accessed only by its owner.

What is Database Link?
A database link is a schema object in one database that enables you to access objects in another database.
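A brief illustration (the link name, credentials and TNS alias are hypothetical placeholders):

SQL> CREATE DATABASE LINK remote_db
       CONNECT TO scott IDENTIFIED BY <password>
       USING 'remote_tns_alias';
SQL> SELECT COUNT(*) FROM emp@remote_db;   -- queries the remote table through the link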

What is a Tablespace?
A database is divided into logical storage units called tablespaces. A tablespace is used to group related logical structures together.

What is Rollback Segment?
A Database contains one or more Rollback Segments to temporarily store “undo” information.

What are the Characteristics of Data Files?
A data file can be associated with only one database. Once created, a data file can't change size (unless it is explicitly resized or set to autoextend). One or more data files form a logical unit of database storage called a tablespace.

How to define Data Block size?
A data block size is specified for each Oracle database when the database is created. A database uses and allocates free database space in Oracle data blocks. The block size is specified in the INIT.ORA file and can't be changed later.

What does a Control file Contain?
A control file records the physical structure of the database. It contains the following information: the database name; the names and locations of the database's datafiles and redo log files; the timestamp of database creation; whether a hot backup is in progress; and RMAN configuration details.

What is difference between UNIQUE constraint and PRIMARY KEY constraint?
A column defined as UNIQUE can contain Nulls while a column defined as PRIMARY KEY cannot contain Nulls.

What is Index Cluster?
A cluster with an index on the cluster key.

When does a Transaction end?
When it is committed or rolled back.

What are the factors that affect OPTIMIZER in choosing an Optimization approach?
The OPTIMIZER_MODE initialization parameter, statistics in the data dictionary, the OPTIMIZER_GOAL parameter of the ALTER SESSION command, and hints in the statement.

What is the effect of setting the value ALL_ROWS for OPTIMIZER_GOAL, parameter of the ALTER SESSION Command?
The optimizer chooses the cost-based approach and optimizes with the goal of best throughput if statistics exist in the data dictionary for at least one of the tables accessed by the SQL statement. Otherwise the optimizer chooses the rule-based approach.

What is the function of Optimizer?
The goal of the optimizer is to choose the most efficient way to execute a SQL statement.

What is Execution Plan?
The combination of steps the optimizer chooses to execute a statement is called an execution plan.
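A small sketch for displaying a plan (the emp table is illustrative):

SQL> EXPLAIN PLAN FOR SELECT * FROM emp WHERE deptno = 10;
SQL> SELECT * FROM TABLE(dbms_xplan.display);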

What are the different approaches used by Optimizer in choosing an execution plan?
Rule-based and Cost-based.