Showing posts with label Oracle Core DBA.

Saturday, June 20, 2020

What is Going on Inside My Database?


The simplest query to get a quick view of performance at the database level is to query v$session_wait and take it from there.

sqlplus '/as sysdba'
SQL> select event, state, count (*) from v$session_wait group by event, state order by 3 desc;


It uses the Oracle wait interface to report what all database sessions are currently waiting on, or whether they are busy on CPU.

Whenever there is an issue on the database system, like extremely slow log file writes, this query gives a good hint towards the cause of the problem. Of course, just running a couple of queries against the wait interface doesn't give you the full picture, but if you want an overview of instance session states, this is the simplest query you can use.

Interpreting this query's output should be combined with reading the output of some OS performance tool (like vmstat or perfmon) to determine whether the problem is induced by CPU overload. For example, if someone is running a parallel backup compression job on the server which is eating all CPU time, some of these waits may be just a side-effect of CPU overload.

Sometimes you might want to exclude the background processes and idle sessions from the picture. In that scenario, use the SQL script provided below:

sqlplus '/as sysdba'
SET LINESIZE 200;
COL SW_EVENT FORMAT A90;
SQL> select count(*),
            CASE WHEN state != 'WAITING' THEN 'WORKING' ELSE 'WAITING' END AS state,
            CASE WHEN state != 'WAITING' THEN 'On CPU / runqueue' ELSE event END AS sw_event
     FROM v$session
     WHERE type = 'USER' AND status = 'ACTIVE'
     GROUP BY CASE WHEN state != 'WAITING' THEN 'WORKING' ELSE 'WAITING' END,
              CASE WHEN state != 'WAITING' THEN 'On CPU / runqueue' ELSE event END
     ORDER BY 1 DESC, 2 DESC;


You get something similar from ASH as well: the instance performance graph shows you the instance wait summary. ASH nicely puts the server's CPU count into the graph too, so that you can put the number of "On CPU" sessions into perspective.

If you wish to include the background processes and idle sessions, use the following query:

SQL> select count(*),
            CASE WHEN state != 'WAITING' THEN 'WORKING' ELSE 'WAITING' END AS state,
            CASE WHEN state != 'WAITING' THEN 'On CPU / runqueue' ELSE event END AS sw_event
     FROM v$session_wait
     GROUP BY CASE WHEN state != 'WAITING' THEN 'WORKING' ELSE 'WAITING' END,
              CASE WHEN state != 'WAITING' THEN 'On CPU / runqueue' ELSE event END
     ORDER BY 1 DESC, 2 DESC;

You can use a similar technique to view instance activity from other perspectives and dimensions as well, such as which SQL is being executed.

SQL> select sql_hash_value, count(*) from v$session where status = 'ACTIVE' group by sql_hash_value order by 2 desc;
SQL> select sql_text,users_executing from v$sql where hash_value = <HashValue-PreviousCommand>;



Saturday, May 9, 2020

How to check installed components and options in Oracle


To check which components are installed in a database:
SELECT * FROM DBA_REGISTRY;

To check which options are enabled:
SELECT * FROM V$OPTION;

To check which features are used:
SELECT * FROM DBA_FEATURE_USAGE_STATISTICS;

Sunday, February 23, 2020

Oracle Networking Between Server & Client


Server Side:

- Set the IP address and host name on the server.
- Check the IP using # ipconfig.
- Create the listener.ora by using $ netmgr.
- Ping the client machine from the server side.


Client Side:

- Set the IP address and host name on the client.
- Check the IP using # ipconfig.
- Create the listener.ora by using $ netmgr.
- Ping the server machine from the client side.
- Create the tnsnames.ora file on the client side using $ netca.


Client side database:

netmgr --> listener --> expand the listener --> delete the old listener --> add listener --> add address (hostname and ip) --> choose database services (sid and global sid)

netca --> naming methods configuration --> local naming --> next --> next --> local net service name configuration --> add --> <service_name_server_side> --> <hostname> --> next --> perform a test --> system/manager --> next --> net service name --> click no and choose finish.
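
For reference, the entry that netca writes into tnsnames.ora on the client looks roughly like the sketch below. The alias PRODDB, host name, port, and service name here are placeholders, not values from this setup:

PRODDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = server-hostname)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = proddb)
    )
  )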

Thursday, August 22, 2019

How to Change Database Name


BY RE-CREATING CONTROLFILE

STEP1:- CREATE CONTROLFILE CREATION SCRIPT.

SQL> alter database backup controlfile to trace as 'D:\q.sql';
Database altered.

STEP2:- EDIT/MODIFY CONTROLFILE CREATION SCRIPT.

CREATE CONTROLFILE SET DATABASE "YE" RESETLOGS  ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 100
    MAXINSTANCES 8
    MAXLOGHISTORY 292
LOGFILE
  GROUP 1 'C:\ORACLEXE\APP\ORACLE\FAST_RECOVERY_AREA\XE\ONLINELOG\O1_MF_1_G9Z48JWN_.LOG'  SIZE 50M BLOCKSIZE 512,
  GROUP 2 'C:\ORACLEXE\APP\ORACLE\FAST_RECOVERY_AREA\XE\ONLINELOG\O1_MF_2_G9Z48LD1_.LOG'  SIZE 50M BLOCKSIZE 512
-- STANDBY LOGFILE
DATAFILE
  'C:\ORACLEXE\APP\ORACLE\ORADATA\XE\SYSTEM.DBF',
  'C:\ORACLEXE\APP\ORACLE\ORADATA\XE\SYSAUX.DBF',
  'C:\ORACLEXE\APP\ORACLE\ORADATA\XE\UNDOTBS1.DBF',
  'C:\ORACLEXE\APP\ORACLE\ORADATA\XE\USERS.DBF',
  'C:\ORACLEXE\APP\ORACLE\ORADATA\XE\MKTBS.DBF',
  'C:\ORACLEXE\APP\ORACLE\ORADATA\XE\R.DBF'
CHARACTER SET AL32UTF8
;

STEP3: CHANGE DB_NAME PARAMETER IN SPFILE/PFILE.

SQL> alter system set db_name=YE scope=spfile;
System altered.

STEP4:- NOW REMOVE OR MOVE OLD CONTROL FILES.

Before removing the old controlfiles, take a backup of all of them.

SQL> show parameter control
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
control_file_record_keep_time        integer     7
control_files                        string      C:\ORACLEXE\APP\ORACLE\ORADATA\XE\CONTROL.DBF
control_management_pack_access       string      NONE

STEP5: SHUT DOWN THE DATABASE AND STARTUP IN NOMOUNT MODE TO REFLECT DB_NAME PARAMETER CHANGES.

SQL> startup nomount;

STEP6: NOW EXECUTE NEWLY CREATED CONTROLFILE SCRIPT.

SQL>@D:\control_db.sql
Control file created.

STEP7: OPEN DATABASE WITH RESETLOGS OPTION.

SQL> alter database open resetlogs;
Database altered.

SQL> select name from v$database;


USING NID(DBNEWID UTILITY)

Step-1. We will change both the db_name (to XE) and the dbid of the cloned database.

Check the dbid and db_name of the new environment before the operation.

SQL> select dbid,name from v$database;

DBID NAME
---------- ---------
6753825411 YE

Step-2. Startup instance with mount state.

export ORACLE_SID=CLONEDB
sqlplus / as sysdba
shutdown immediate;
startup mount;

nid target=/ dbname=XE logfile=/D:/nid.log
Database name changed to XE.
Modify parameter file and generate a new password file before restarting.
Database ID for database XE changed to 2898066260.
All previous backups and archived redo logs for this database are unusable.
Database is not aware of previous backups and archived logs in Recovery Area.
Database has been shutdown, open database with RESETLOGS option.
Succesfully changed database name and ID.
DBNEWID - Completed succesfully.

After DBNEWID completed successfully, the instance was also shut down automatically.

Step-3. Start the instance in nomount mode and change the db_name parameter to XE.

Then shut down and start the instance in mount mode again to activate the new db_name. Finally, open the database with the resetlogs option.
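
The commands for this step would look roughly like the following sketch (the exact sequence is assumed from the description above):

sqlplus / as sysdba
SQL> startup nomount;
SQL> alter system set db_name=XE scope=spfile;
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database open resetlogs;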

Step-4. Create a new password file for the new environment if you need one.

cd $ORACLE_HOME/dbs
orapwd file=orapwXE password=clone entries=3 

Thursday, July 25, 2019

How to make a table read only in Oracle


Oracle 11g allows tables to be marked as read-only using the ALTER TABLE command.

ALTER TABLE table_name READ ONLY;
ALTER TABLE table_name READ WRITE;

Let’s create a table and make it read-only.

CREATE TABLE Example (id  NUMBER);
INSERT INTO Example VALUES (1);
ALTER TABLE Example READ ONLY;

Any DML statements that affect the table data results in an ORA-12081 error message.

SQL> INSERT INTO Example VALUES (10);
INSERT INTO Example VALUES (10);
*
ERROR at line 1:
ORA-12081: update operation not allowed on table "SCOTT"."Example"

DML and DDL operations return to normal once the table is switched back to read-write mode.

SQL> ALTER TABLE Example READ WRITE;
Table altered.

SQL> DELETE FROM Example;
1 row deleted.


The read-only status of tables is displayed in the READ_ONLY column of the [DBA|ALL|USER]_TABLES views.

Saturday, March 2, 2019

How to refresh TEST database schemas from PRODUCTION database


Source Database Side
--------------------------:

Step.1
Check the Schema size in the source database.
SQL> select owner, sum(bytes)/1024/1024/1024 "GB" from dba_segments where owner='EXAMPLE' group by owner;

Step.2
Check the tablespace assigned to EXAMPLE schema.
SQL>select default_tablespace from dba_users where username='EXAMPLE';

Step.3
Check the size of the tablespace assigned to the EXAMPLE schema.
SQL> select tablespace_name, sum(bytes)/1024/1024 MB from dba_segments group by tablespace_name;

Step.4
Take the count of schema objects; this will be useful after the refresh to compare the source and target schema objects.
SQL> select object_type, count(*) from dba_objects where owner='EXAMPLE' group by object_type;

Step.5
Before the export, first check the size of the mount point where we're going to store the export dump files; if the mount point doesn't have sufficient space, the export job will fail. So make sure we have enough space in the mount point.

Step.6
Create a data pump directory at the database level for the export. If we do not create a directory, the default directory DATA_PUMP_DIR will be used, so if we are using this default directory we need to make sure that it is mapped to the correct path.
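
As a sketch, creating and verifying a data pump directory might look like this (the directory name EXP_DIR and the path are examples, not values from this environment):

SQL> create or replace directory EXP_DIR as '/u01/export';
SQL> grant read, write on directory EXP_DIR to system;
SQL> select directory_name, directory_path from dba_directories;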

Step.7
Now take the export of schema or schemas.

$ nohup expdp \'/ as sysdba\' directory=DATA_PUMP_DIR dumpfile=proddb1_8August_%U.dmp logfile=proddb1_8August.log schemas=EXAMPLE parallel=8 &

Step.8
Copy the dump files from the source server to the destination server. We basically use the scp command, but other copy commands work as well. It is always better to copy the dump files to the location which will be used in the impdp directory path; otherwise we will have to copy them again to the impdp directory path.

We have completed almost all steps in source database side, now we are moving to target database side:

Target Database side
-------------------------:

Step.1
Check if we have enough space available in the data pump directory path to hold the dump files coming from the source database server.

Step.2
Check if we have enough space available in the tablespace of the user which is going to be refreshed. For this, it is always better to check the tablespace size of that particular user in the prod database and add that much space before the refresh starts.
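
One way to check the free space, assuming the user's default tablespace is called EXAMPLE_TBS (a placeholder name):

SQL> select tablespace_name, sum(bytes)/1024/1024 free_mb
     from dba_free_space
     where tablespace_name = 'EXAMPLE_TBS'
     group by tablespace_name;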

Step.3
It is always recommended to take an export backup on the target side as well of the schema which we are going to refresh.

$ nohup expdp \'/ as sysdba\' directory=DATA_PUMP_DIR dumpfile=devdb1_8August_%U.dmp logfile=devdb1_8August.log schemas=EXAMPLE parallel=8 &

Step.4
Drop the schema objects only on the target side, i.e. on the development database. Double check it before dropping anything in the database. Make sure we only drop the schema objects, not the complete schema. Sometimes people also drop the complete schema, but that is not recommended.
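
One common approach is to generate the DROP statements from dba_objects and review them before running anything. This is only a sketch; the object type list is an example, and the generated statements should be checked carefully before execution:

SQL> select 'drop ' || object_type || ' ' || owner || '.' || object_name ||
            case object_type when 'TABLE' then ' cascade constraints purge' end || ';'
     from dba_objects
     where owner = 'EXAMPLE'
       and object_type in ('TABLE','VIEW','SEQUENCE','SYNONYM','PROCEDURE','FUNCTION','PACKAGE');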

Step.5
Import the data into the target schema (schema refresh).

$ nohup impdp \'/ as sysdba\' directory=DATA_PUMP_DIR dumpfile=proddb1_8August_%U.dmp logfile=imp_8August.log schemas=EXAMPLE parallel=8 &
     
Note: While the import is running, it is always better to monitor the alert log file of the database and also the import log file. Sometimes an error is reported in the alert log but not captured in the import log, so monitor both logs.

Step.6
Gather statistics on the schema which we just refreshed.
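
For example, using dbms_stats (the degree value here is just an example):

SQL> exec dbms_stats.gather_schema_stats(ownname => 'EXAMPLE', cascade => TRUE, degree => 4);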

Step.7
Count the objects and match them against the production database. Make sure everything was imported properly before confirming to anyone or to the application team.
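
The same count query used on the source side works here; run it on both databases and compare the results:

SQL> select object_type, count(*) from dba_objects where owner='EXAMPLE' group by object_type order by 1;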

Saturday, June 23, 2018

How to resize temporary tablespace


step.1

The following query gives you the tablespace name and tempfile name along with the path of that file.
sql> select file_name, tablespace_name from dba_temp_files;

step.2

The following statement creates a temporary tablespace named 'temp2' of 800 MB, with autoextend on and maxsize unlimited.
sql> create temporary tablespace temp2 tempfile  '/u01/oradata/temp_02.dbf' size 800m autoextend on next 10m maxsize unlimited;

step.3

The following statement makes the newly created temp tablespace (i.e. TEMP2) the database default temporary tablespace.
sql> alter database default temporary tablespace temp2;

step.4

Retrieve the SID and SERIAL# of existing live sessions which are using the old temporary tablespace (i.e. TEMP), and kill them.
sql> select b.tablespace,b.segfile#,b.segblk#,b.blocks,a.sid,a.serial#, a.username,a.osuser, a.status from v$session a,v$sort_usage b WHERE a.saddr = b.session_addr;

Provide the above inputs to the following statement, and kill the sessions.
sql> alter system kill session 'sid, serial#';
For example:
sql> alter system kill session '59,57391';

step.5

Now we can drop the old temporary tablespace without any trouble:
sql> drop tablespace temp including contents and datafiles;

If you wish to continue with the old temporary tablespace name, i.e. TEMP, then follow the same steps mentioned above to recreate the temp tablespace with the old name.

Tuesday, March 20, 2018

How to Resize the Undo Tablespace


1. To Shrink the Undo Tablespace Size.
2. Add Space to the Undo Tablespace.

Solution:

1. To Shrink the Undo Tablespace Size.
Undo space, once allocated, is available for reuse but is not deallocated to the OS. The best way to shrink an undo tablespace is to switch to a new undo tablespace and drop the old one. The steps are:

a) Create a new undo tablespace of the same size (larger or smaller) depending on your database requirements.

SQL> create undo tablespace undotbs02 datafile '/u01/oracle/prod/undotbs02.dbf' size 5000M;

b) Switch to the new Undo tablespace

ALTER SYSTEM SET UNDO_TABLESPACE = undotbs02 SCOPE=BOTH;

c) Check the status of the undo segments and determine if all the segments in the old undo tablespace are offline.

select tablespace_name , status , count(*) from dba_rollback_segs group by tablespace_name , status;

If there are undo segments with a status other than OFFLINE in the tablespace to be dropped, we need to wait until they become OFFLINE. You may have to wait for the duration of tuned_undoretention (from v$undostat) to ensure all undo segments have become OFFLINE.

select status,segment_name from dba_rollback_segs where status not in ('OFFLINE') and tablespace_name='<undo tablespace to be dropped>';

eg:

select status,segment_name from dba_rollback_segs where status not in ('OFFLINE') and tablespace_name='UNDOTBS1';

d) If all the undo segments in the old undo tablespace to be dropped have status OFFLINE, then drop the tablespace.

select tablespace_name , status , count(*) from dba_rollback_segs group by tablespace_name , status;

Verify and then drop:

SQL>Drop tablespace <tablespace_name> including contents and datafiles;

eg:

SQL>Drop tablespace UNDOTBS1 including contents and datafiles;

2. Add Space to the Undo Tablespace.

To increase / resize the undo tablespace there are two options:
a) Resize the existing undo datafile
b) Add new undo datafile to the tablespace.

a) To resize the existing undo datafile:

select tablespace_name T_NAME,file_name, bytes/1024/1024 MB from dba_data_files where tablespace_name =(SELECT UPPER(value) FROM v$parameter WHERE name = 'undo_tablespace') order by file_name;

alter database datafile '<COMPLETE_PATH_OF_UNDO_DBF_FILE>' resize <SIZE>M;

Example:
alter database datafile '/u01/oracle/prod/undotbs02.dbf' resize 1500M;
 

b) Step to add a new datafile:

alter tablespace <UNDO tbs name> ADD DATAFILE '<COMPLETE_PATH_OF_UNDO_DBF_FILE>' size 20M;

Example:
alter tablespace UNDOTBS1 ADD DATAFILE '/u01/oracle/prod/undotbs02.dbf' size 20M;

How to resize redolog files in oracle


step.1

First see the size of the current  redo logs.           
SQL> select group#, bytes, status from v$log;

step.2

Retrieve all the log member names for the groups.                                         
SQL> select group#, member from v$logfile;       

step.3

Create 3 new log groups, numbered 4, 5, and 6, with the new size.
SQL> alter database add logfile group 4 '/oracle/dbs/log4PROD.dbf' size 10M;                               
SQL> alter database add logfile group 5 '/oracle/dbs/log5PROD.dbf' size 10M;   
SQL> alter database add logfile group 6 '/oracle/dbs/log6PROD.dbf' size 10M; 

step.4

Run a query to view the v$log status.                                 
SQL> select group#, status from v$log;                                     

step.5

Drop the old redolog groups; make sure their status is INACTIVE first.
SQL> alter database drop logfile group 1;                                   
SQL> alter database drop logfile group 2;                                   
SQL> alter database drop logfile group 3;   

step.6

Verify the groups were dropped, and the new groups' sizes are correct.
SQL> select group#, bytes, status from v$log; 

step.7

Take a backup of the database, and delete the files associated with old redolog groups as they are no longer needed.
rm /usr/oracle/dbs/log1PROD.dbf
rm /usr/oracle/dbs/log2PROD.dbf 
rm /usr/oracle/dbs/log3PROD.dbf       

step.8

Monitor the alert.log for the times of redo log switches. Due to the increased redo log size, the groups should not switch as frequently under the same load conditions.
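
To see how often the logs have been switching, you can also query v$log_history, for example per hour:

SQL> select to_char(first_time,'YYYY-MM-DD HH24') hour, count(*) switches
     from v$log_history
     group by to_char(first_time,'YYYY-MM-DD HH24')
     order by 1 desc;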

Note:

You cannot drop the CURRENT and ACTIVE redolog groups.

Sunday, October 1, 2017

How To Find Whether The Parameter Is Static or Dynamic?


The ISSYS_MODIFIABLE column of v$parameter displays values such as FALSE and IMMEDIATE.

FALSE: the parameter is static; a database bounce is required for a change to take effect.
IMMEDIATE: the parameter is dynamic; no database bounce is required.

(The column can also show DEFERRED, meaning a change takes effect only for subsequent sessions.)

Example:

SQL> select ISSYS_MODIFIABLE from v$parameter where name like '%control_files%';

ISSYS_MOD
---------
FALSE

SQL> select ISSYS_MODIFIABLE from v$parameter where name like '%open_cursors%';

ISSYS_MOD
---------
IMMEDIATE

SQL> select ISSYS_MODIFIABLE from v$parameter where name like '%undo_tablespace%';

ISSYS_MOD
---------
IMMEDIATE

SQL> select ISSYS_MODIFIABLE from v$parameter where name like '%log_archive_dest%';

ISSYS_MOD
---------
IMMEDIATE

Thursday, September 28, 2017

Moving/Renaming Of Datafiles In Oracle

Example:

/home/oracle/OraHome1/databases/ora9 (actual location)
/home/oracle/databases/ora9 (changed location)

SQL> select tablespace_name, substr(file_name,1,70) from dba_data_files;

TABLESPACE_NAME                SUBSTR(FILE_NAME,1,70)
------------------------------ ----------------------------------------------------------------------
SYSTEM                         /home/oracle/OraHome1/databases/ora9/system.dbf
UNDO                           /home/oracle/OraHome1/databases/ora9/undo.dbf
DATA                           /home/oracle/OraHome1/databases/ora9/data.dbf

SQL> select member from v$logfile;

MEMBER
--------------------------------------------------------------------------------
/home/oracle/OraHome1/databases/ora9/redo1.ora
/home/oracle/OraHome1/databases/ora9/redo2.ora
/home/oracle/OraHome1/databases/ora9/redo3.ora

SQL> select name from v$controlfile;

NAME
--------------------------------------------------------------------------------
/home/oracle/OraHome1/databases/ora9/ctl_1.ora
/home/oracle/OraHome1/databases/ora9/ctl_2.ora
/home/oracle/OraHome1/databases/ora9/ctl_3.ora

Now, the database can be shut down:

SQL> shutdown immediate;

The files can be copied to their destination:
$ cp /home/oracle/OraHome1/databases/ora9/system.dbf  /home/oracle/databases/ora9/system.dbf
$ cp /home/oracle/OraHome1/databases/ora9/undo.dbf    /home/oracle/databases/ora9/undo.dbf
$ cp /home/oracle/OraHome1/databases/ora9/data.dbf    /home/oracle/databases/ora9/data.dbf

SQL> startup mount;

SQL> alter database rename file '/home/oracle/OraHome1/databases/ora9/system.dbf' to '/home/oracle/databases/ora9/system.dbf';
SQL> alter database rename file '/home/oracle/OraHome1/databases/ora9/undo.dbf'   to '/home/oracle/databases/ora9/undo.dbf';
SQL> alter database rename file '/home/oracle/OraHome1/databases/ora9/data.dbf'   to '/home/oracle/databases/ora9/data.dbf';

SQL> shutdown immediate;

SQL> startup mount;

SQL> alter database open;

Sunday, June 11, 2017

Oracle Shutdown Immediate Waiting On Active Process


Usually, whenever we try to shut down an Oracle database with the immediate option, it waits for active processes to terminate. This may take a long time, depending on the number of processes.

Use the command below to find the active sessions on the database:


bash-3.00$ ps -ef | grep LOCAL=NO
oracle 17862     1   0   Jun 13 ?           0:00 oracletest (LOCAL=NO)
oracle 17882     1   0   Jun 13 ?           0:00 oracletest (LOCAL=NO)
oracle 17854     1   0   Jun 13 ?           0:00 oracletest (LOCAL=NO)
oracle 17890     1   0   Jun 13 ?           0:00 oracletest (LOCAL=NO)
oracle 17906     1   0   Jun 13 ?           0:00 oracletest (LOCAL=NO)
oracle 17848     1   0   Jun 13 ?           0:00 oracletest (LOCAL=NO)
oracle 17978     1   0   Jun 13 ?           0:03 oracletest (LOCAL=NO)
oracle 17908     1   0   Jun 13 ?           0:00 oracletest (LOCAL=NO)
oracle 17918     1   0   Jun 13 ?           0:00 oracletest (LOCAL=NO)
oracle 17910     1   0   Jun 13 ?           0:00 oracletest (LOCAL=NO)
oracle 17852     1   0   Jun 13 ?           0:00 oracletest (LOCAL=NO)
oracle 17866     1   0   Jun 13 ?           0:00 oracletest (LOCAL=NO)
oracle 17930     1   0   Jun 13 ?           0:00 oracletest (LOCAL=NO)
oracle 17914     1   0   Jun 13 ?           0:00 oracletest (LOCAL=NO)
oracle 17874     1   0   Jun 13 ?           0:00 oracletest (LOCAL=NO)
oracle 17932     1   0   Jun 13 ?           0:00 oracletest (LOCAL=NO)
oracle 17986     1   0   Jun 13 ?           0:03 oracletest (LOCAL=NO)
oracle 17952     1   0   Jun 13 ?           0:00 oracletest (LOCAL=NO)
oracle 17936     1   0   Jun 13 ?           0:00 oracletest (LOCAL=NO)
oracle 17836     1   0   Jun 13 ?           0:00 oracletest (LOCAL=NO)
oracle 17944     1   0   Jun 13 ?           0:00 oracletest (LOCAL=NO)
oracle 17974     1   0   Jun 13 ?           0:00 oracletest (LOCAL=NO)
oracle 17948     1   0   Jun 13 ?           0:05 oracletest (LOCAL=NO)
oracle 17898     1   0   Jun 13 ?           0:00 oracletest (LOCAL=NO)
oracle 17878     1   0   Jun 13 ?           0:00 oracletest (LOCAL=NO)
oracle 17876     1   0   Jun 13 ?           0:00 oracletest (LOCAL=NO)
oracle 17900     1   0   Jun 13 ?           0:00 oracletest (LOCAL=NO)
oracle 17924     1   0   Jun 13 ?           0:00 oracletest (LOCAL=NO)
oracle 17902     1   0   Jun 13 ?           0:00 oracletest (LOCAL=NO)
oracle 17892     1   0   Jun 13 ?           0:00 oracletest (LOCAL=NO)
oracle 17894     1   0   Jun 13 ?           0:00 oracletest (LOCAL=NO)
oracle 17956     1   0   Jun 13 ?           0:00 oracletest (LOCAL=NO)
oracle 17966     1   0   Jun 13 ?           0:00 oracletest (LOCAL=NO)
oracle 17940     1   0   Jun 13 ?           0:00 oracletest (LOCAL=NO)
oracle 17834     1   0   Jun 13 ?           0:00 oracletest (LOCAL=NO)
oracle 17960     1   0   Jun 13 ?           0:00 oracletest (LOCAL=NO)
oracle 17934     1   0   Jun 13 ?           0:00 oracletest (LOCAL=NO)

We need to kill only the processes with the option LOCAL=NO.

Below is a handy UNIX command which can be used to kill these idle/active sessions that are preventing the database from shutting down with the immediate option.


ps -ef|grep 'oracletest (LOCAL=NO)'|grep -v grep|awk '{print$2}'|xargs -i kill -9 {}

Now verify the processes again with ps -ef | grep LOCAL=NO; it should not return any rows. Once these processes are killed, the database will shut down.

Sunday, August 28, 2016

How to check NLS parameter values of an Oracle Database


SQL> select * from nls_database_parameters;

PARAMETER                      VALUE
------------------------------ ----------------------------------------
NLS_LANGUAGE                   AMERICAN
NLS_TERRITORY                  AMERICA
NLS_CURRENCY                   $
NLS_ISO_CURRENCY               AMERICA
NLS_NUMERIC_CHARACTERS         .,
NLS_CHARACTERSET               AR8ISO8859P6
NLS_CALENDAR                   GREGORIAN
NLS_DATE_FORMAT                DD-MON-RR
NLS_DATE_LANGUAGE              AMERICAN
NLS_SORT                       BINARY
NLS_TIME_FORMAT                HH.MI.SSXFF AM

PARAMETER                      VALUE
------------------------------ ----------------------------------------
NLS_TIMESTAMP_FORMAT           DD-MON-RR HH.MI.SSXFF AM
NLS_NCHAR_CHARACTERSET         AL16UTF16
NLS_TIME_TZ_FORMAT             HH.MI.SSXFF AM TZH:TZM
NLS_TIMESTAMP_TZ_FORMAT        DD-MON-RR HH.MI.SSXFF AM TZH:TZM
NLS_DUAL_CURRENCY              $
NLS_COMP                       BINARY
NLS_LENGTH_SEMANTICS           BYTE
NLS_NCHAR_CONV_EXCP            FALSE
NLS_RDBMS_VERSION              11.1.0.7.0

20 rows selected.

Thursday, August 25, 2016

What is Oracle Database?


It is a very common question. Before getting into Oracle Database architecture, you should know what exactly a database server is.
I have come across a lot of answers and examples. Combining everything, I am sharing this. Hope you will find it useful...

Let's imagine a scenario first. Consider YOURSELF as a person who has some special capabilities.
You can listen very fast, you can see anything very easily, you can interact with others very quickly.
Of course, as a human being you have a couple of hands, eyes, ears, legs and so on. You have also got a certain amount of MEMORY inside your BRAIN.
But you have also got a big PROBLEM: you CANNOT remember those things for a long time.

So one of your well-wishers gives you some NOTEBOOKS to write down whatever you listen to or see.

NOW, if you relate the above-mentioned things to ORACLE, then YOU are basically an Oracle INSTANCE, consisting of some MEMORY (RAM), which is like your brain, and some BACKGROUND PROCESSES, like your eyes, hands, ears etc., using which YOU can do something.
And those NOTEBOOKS in which you write everything: basically those are DATAFILES.

So all together, this Oracle instance and the data files are known as the ORACLE DATABASE SERVER.

Wednesday, August 10, 2016

Suspending MMON slave action kewrmafsa_ for 82800 seconds


In one of our database environments we noticed that the AWR report was not available for a particular time interval. On further investigation we found an MMON-related error in the alert log.

Error:

Suspending MMON slave action kewrmafsa_ for 82800 seconds

Cause:

While MMON activity is important for maintaining information in the database, if CPU utilization and database activity exceed the CPU or run-time policies, MMON activity can be temporarily suspended. This is done so that more important activity can continue. The "Suspending MMON ... for 82800 seconds" warning messages indicate that you should look at the overall performance of your system to check if anything could be causing MMON activities to exceed their resource allocation such that they are suspended.

Solution:

Increase the server capacity: add RAM and more CPUs.

Tuesday, August 9, 2016

ORA-12012: error on auto execute of job "SYS"."BSLN_MAINTAIN_STATS_JOB"

ORA-06550: line 1, column 807:
PLS-00201: identifier 'DBSNMP.BSLN_INTERNAL' must be declared


Cause:

You might get this error when the database was created using a template from an old database, or by cloning from another database. In that case, the table DBSNMP.BSLN_BASELINES contains inconsistent information that causes the job to fail.



Solution:

Deleting the inconsistent rows from the table DBSNMP.BSLN_BASELINES will fix the problem, but the easiest fix is to follow the guidelines below.

SQL> select log_date,status from dba_scheduler_job_run_details where job_name='BSLN_MAINTAIN_STATS_JOB';

LOG_DATE                                                                    STATUS
--------------------------------------------------------------------------- ------------------------------
13-AUG-14 09.45.36.831010 AM +02:00                                         FAILED
17-AUG-14 09.00.00.980275 AM +02:00                                         FAILED

$ sqlplus / as sysdba

-- Drop the DBSNMP user by executing catnsnmp.sql script.

SQL> @$ORACLE_HOME/rdbms/admin/catnsnmp.sql

-- Create the DBSNMP user by executing catsnmp.sql

SQL> @$ORACLE_HOME/rdbms/admin/catsnmp.sql

-- Re-execute the job

SQL> exec dbms_scheduler.run_job('BSLN_MAINTAIN_STATS_JOB',false);

PL/SQL procedure successfully completed.

SQL> select log_date,status from dba_scheduler_job_run_details where job_name='BSLN_MAINTAIN_STATS_JOB';

LOG_DATE                                                                    STATUS
--------------------------------------------------------------------------- ------------------------------
13-AUG-14 09.45.36.831010 AM +02:00                                         FAILED
17-AUG-14 09.00.00.980275 AM +02:00                                         FAILED
20-AUG-14 02.50.36.799841 PM +02:00                                         SUCCEEDED

Reference metalink Doc ID 1413756.1

Sunday, June 12, 2016

SP2-0734: unknown command beginning when recreating the controlfile

Create a backup of the controlfile:

SQL> alter database backup controlfile to trace as 'd:\control\test.trc';

Database altered.

Recreate the controlfile: edit the test.trc file. When I try to recreate the control file, it shows SP2-0734:

SQL> SHUTDOWN ABORT
ORACLE instance shut down.
SQL> @D:\control\CONTROL.TRC;
ORACLE instance started.

Total System Global Area  535662592 bytes
Fixed Size                  1375792 bytes
Variable Size             331350480 bytes
Database Buffers          197132288 bytes
Redo Buffers                5804032 bytes
SP2-0042: unknown command "DATAFILE" - rest of line ignored.
SP2-0734: unknown command beginning "'D:\CONTRO..." - rest of line ignored.
SP2-0734: unknown command beginning "'D:\CONTRO..." - rest of line ignored.
SP2-0734: unknown command beginning "'D:\CONTRO..." - rest of line ignored.

This happened due to a blank line before the DATAFILE clause and after -- STANDBY LOGFILE. Remove that blank line as well as the line -- STANDBY LOGFILE, and re-run the script.

STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "AZAR" NORESETLOGS  ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 8
    MAXINSTANCES 8
    MAXLOGHISTORY 292
LOGFILE
  GROUP 1 'C:\DATAFILES\AZAR\REDO01.LOG'  SIZE 50M BLOCKSIZE 512,
  GROUP 2 'C:\DATAFILES\AZAR\REDO02.LOG'  SIZE 50M BLOCKSIZE 512,
  GROUP 3 'C:\DATAFILES\AZAR\REDO03.LOG'  SIZE 50M BLOCKSIZE 512
-- STANDBY LOGFILE                                                             [ REMOVE THIS LINE AND THE BLANK LINE BELOW ]

DATAFILE
  'C:\DATAFILES\AZAR\SYSTEM01.DBF',
  'C:\DATAFILES\AZAR\SYSAUX01.DBF',
  'C:\DATAFILES\AZAR\UNDOTBS01.DBF',
  'C:\DATAFILES\AZAR\USERS01.DBF',
  'C:\TEST01.DBF'
CHARACTER SET AR8ISO8859P6
;

After removing the blank line and the -- STANDBY LOGFILE comment:

STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "AZAR" NORESETLOGS  ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 8
    MAXINSTANCES 8
    MAXLOGHISTORY 292
LOGFILE
  GROUP 1 'C:\DATAFILES\AZAR\REDO01.LOG'  SIZE 50M BLOCKSIZE 512,
  GROUP 2 'C:\DATAFILES\AZAR\REDO02.LOG'  SIZE 50M BLOCKSIZE 512,
  GROUP 3 'C:\DATAFILES\AZAR\REDO03.LOG'  SIZE 50M BLOCKSIZE 512
DATAFILE
  'C:\DATAFILES\AZAR\SYSTEM01.DBF',
  'C:\DATAFILES\AZAR\SYSAUX01.DBF',
  'C:\DATAFILES\AZAR\UNDOTBS01.DBF',
  'C:\DATAFILES\AZAR\USERS01.DBF',
  'C:\TEST01.DBF'
CHARACTER SET AR8ISO8859P6
;

SQL> shutdown abort
ORACLE instance shut down.
SQL> @D:\control\CONTROL.TRC;
ORACLE instance started.

Total System Global Area  535662592 bytes
Fixed Size                  1375792 bytes
Variable Size             331350480 bytes
Database Buffers          197132288 bytes
Redo Buffers                5804032 bytes

Control file created.

SQL> select status from v$instance;

STATUS
------------
MOUNTED

SQL> alter database open;

Database altered.

SQL> select status from v$instance;

STATUS
------------
OPEN 
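After the controlfile is recreated and the database is open, it is worth a quick sanity check that every datafile is online and nothing still needs recovery (a generic check, not specific to this example):

```sql
-- All datafiles should be ONLINE (the SYSTEM tablespace's file shows as SYSTEM)
SELECT file#, status, name FROM v$datafile;

-- Any datafile still needing media recovery is listed here (ideally no rows)
SELECT * FROM v$recover_file;
```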

Wednesday, June 1, 2016

About Alert Log File In Oracle


The alert log file is a chronological log of messages and errors written out by an Oracle database. Typical messages found in this file are: database startup, shutdown, log switches, space errors, etc. This file should be monitored constantly to detect unexpected messages and corruptions. Oracle will automatically create a new alert log file whenever the old one is deleted.

When an internal error is detected by a process, it dumps information about the error to its trace file. Some of the information written to a trace file is intended for the database administrator, while other information is for Oracle Worldwide Support. Trace file information is also used to tune applications and instances.

The alert log of a database includes the following information :

1) All internal errors (ORA-00600), block corruption errors (ORA-01578), and deadlock errors (ORA-00060) that occur.

2) Administrative operations, such as CREATE, ALTER, and DROP statements and STARTUP,
SHUTDOWN, and ARCHIVELOG statements.

3) Messages and errors relating to the functions of shared server and dispatcher processes.

4) Errors occurring during the automatic refresh of a materialized view.

5) The values of all initialization parameters that had non-default values at the time of database and instance startup.

which process writes to alert log file:

Not "one" but all of the background processes can write to it. The archiver writes to it. The log writer can write to it (if log_checkpoints_to_alert is set). When a background process detects that another has died, the former writes to the alert log before panicking the instance and killing it. Similarly, an ALTER SYSTEM command issued by the server process for our database session will also write to the alert log.

To find the location of alert log file we can find by below command

SQL> select value from v$parameter where name = 'background_dump_dest';
SQL> show parameter background
SQL> select * from v$diag_info;

If the background_dump_dest parameter is not set, Oracle writes the alert log under $ORACLE_HOME/rdbms/log. From 11g onwards the Automatic Diagnostic Repository (ADR) is used instead: the text alert log lives in the 'Diag Trace' directory shown by v$diag_info (typically $ORACLE_BASE/diag/rdbms/<dbname>/<instance>/trace).
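On 11g and later, v$diag_info gives the diagnostic locations directly, and on recent releases the alert log can even be queried with SQL (hedged: V$DIAG_ALERT_EXT appeared around 12.2; availability varies by release):

```sql
-- ADR locations; 'Diag Trace' holds the text alert_<SID>.log
SELECT name, value FROM v$diag_info WHERE name IN ('Diag Trace', 'Diag Alert');

-- 12.2+: read alert log entries via SQL, e.g. recent ORA- errors
SELECT originating_timestamp, message_text
FROM   v$diag_alert_ext
WHERE  message_text LIKE '%ORA-%'
ORDER  BY originating_timestamp DESC;
```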

Tuesday, May 31, 2016

About Redo Block Address (RBA)


An RBA (Redo Block Address) points to a specific physical location within a redo logfile. The "tail of the log" is the RBA of the most recent redo entry written to the redo log file. An RBA is ten bytes long and has three components:

the log file sequence number (4 bytes)
the log file block number (4 bytes)
the byte offset into the block at which the redo record starts (2 bytes)

For example, RBA [0x775.2.10] maps to log sequence 0x775, block number 2, byte offset 10.

There are different types of RBA tracked in the SGA, as follows:

Low RBA : the address of the first redo change made to a dirty buffer. It can be checked from x$bh.

High RBA : the address of the last (most recent) redo change made to a dirty buffer. It can also be checked from x$bh.

Checkpoint RBA : while incremental checkpointing is enabled, DBWR writes buffers from the checkpoint queue, and the point it has written up to is the checkpoint RBA. This RBA is copied into the controlfile's checkpoint progress record; when instance recovery occurs, it starts from the checkpoint RBA recorded in the controlfile. It can be checked from x$targetrba (sometimes from x$kccrt).

On-disk RBA : the RBA up to which redo has been flushed into the online redo log file on disk. It is recorded in the controlfile record section and can be checked from x$kcccp (sometimes from x$targetrba).
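As a sketch of the fixed views mentioned above (x$ structures are undocumented internals, so the column names below are assumptions observed on 10g-12c and may differ between versions):

```sql
-- Low and high RBA of dirty buffers (log sequence# / block#)
SELECT lrba_seq, lrba_bno, hrba_seq, hrba_bno
FROM   x$bh
WHERE  lrba_seq > 0;

-- Checkpoint progress record, including the on-disk RBA fields
SELECT cpodr_seq, cpodr_bno, cpodr_bof FROM x$kcccp;
```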

How the RBA comes into the picture:
CKPT records checkpoint information to the controlfile to maintain bookkeeping information such as checkpoint progress. Each instance checkpoint refers to some RBA (the checkpoint RBA); all redo prior to this RBA has been written to disk. Hence recovery time is the difference between the checkpoint RBA and the end of the redo log.

Given a checkpoint RBA, DBWR writes buffers from the head of the checkpoint queue until the low RBA of the buffer at the head of the queue is greater than the checkpoint RBA. At this point, CKPT can record this checkpoint progress record in the controlfile (phase 3).

PHASE (1): the process initiating the checkpoint marks the checkpoint RBA (the current RBA, i.e. the RBA of the last change made to a buffer) at the time the request is initiated.

PHASE (2): DBWR writes all required buffers, i.e. all buffers that have been modified at RBAs less than or equal to the checkpoint RBA.

PHASE (3): after all required buffers have been written, the CKPT process records the completion of the checkpoint in the controlfile.

The checkpoint RBA is copied into the checkpoint progress record of the controlfile by the checkpoint heartbeat once every 3 seconds. Instance recovery, when needed, begins from the checkpoint RBA recorded in the controlfile. The target RBA is the point up to which DBWn should seek to advance the checkpoint RBA to satisfy instance recovery objectives.

The term sync RBA is sometimes used to refer to the point up to which LGWR is required to sync the thread. However, this is not a full RBA -- only a redo block number is used at this point.
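The redo between the checkpoint RBA and the tail of the log is exactly what instance recovery would have to replay, and v$instance_recovery exposes this gap without touching any x$ views:

```sql
-- actual_redo_blks: redo blocks between the checkpoint RBA and the tail of the log
-- target_redo_blks: where the checkpoint RBA should be to meet recovery targets
SELECT actual_redo_blks, target_redo_blks,
       estimated_mttr, target_mttr
FROM   v$instance_recovery;
```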

Monday, May 30, 2016

Post Activities After Database Cloning


Here is a post-cloning checklist for the cloned database environment.

1) Check Database Name and DBID :
After successfully cloning to the same or a different server, we should change the database name and DBID (a clone on the same server already has a different name). Changing the DBID of the clone database is recommended because Recovery Manager (RMAN) distinguishes databases by DBID.

2) Edit the listener.ora and tnsnames.ora files :
Edit the listener file to include an entry for the cloned database, then invoke the lsnrctl utility and reload the listener. Similarly, check the service name in the tnsnames.ora file and make sure connections go to the cloned database.

3) Check all the scheduled jobs :
If any scripts are scheduled, make sure that all the jobs are enabled for the cloned database.

4) Verify the parameter file :
Make sure the locations mentioned in the pfile of the cloned database are correct and that all parameter values are valid.

5) Check tempfiles :
After the successful creation of the clone database, check the tempfiles. If there is no tempfile, add a new tempfile to the clone database.
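A minimal sketch of that check (the tempfile path and size below are illustrative only):

```sql
-- Does the clone have any tempfiles?
SELECT tablespace_name, file_name FROM dba_temp_files;

-- If not, add one (adjust the path and sizes for your environment)
ALTER TABLESPACE temp ADD TEMPFILE '/u01/oradata/CLONE/temp01.dbf'
  SIZE 2G AUTOEXTEND ON NEXT 100M MAXSIZE 8G;
```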

6) Check the archivelog mode :
Generally archivelog mode is disabled for UAT/cloned databases. If the production database is in archivelog mode, make sure that archiving is disabled on the cloned database.
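Checking and disabling archiving on the clone can be done as follows (requires a clean restart to MOUNT):

```sql
SELECT log_mode FROM v$database;     -- shows ARCHIVELOG or NOARCHIVELOG

SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE NOARCHIVELOG;
ALTER DATABASE OPEN;
```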

7) Check for database links :
Check the database links present in the cloned environment. Ensure that these are select-only dblinks and will not perform any DML in production databases. If you find any, either drop them, recreate them to point to a UAT environment, or simply remove/hash out the tnsnames entries for those hosts. Also check for any hard-coded IP addresses in the HOST column of DBA_DB_LINKS.
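A sketch of the dblink check (the link name in the DROP statements is illustrative):

```sql
-- List database links and where they point
SELECT owner, db_link, username, host FROM dba_db_links;

-- Drop an unwanted link (as the owning user; as a DBA for public links)
DROP DATABASE LINK prod_link;
DROP PUBLIC DATABASE LINK prod_link;
```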