Friday, March 29, 2019

Large Tables In Oracle database


creating a table with 10 million records

$ sqlplus / as sysdba

SQL> alter session set workarea_size_policy=manual;
SQL> alter session set sort_area_size=1000000000;

SQL> create table QADER_T1 as
     select rownum as id, 'Just Some Text' as textcol, mod(rownum,5) as numcol1,
            mod(rownum,1000) as numcol2, 5000 as numcol3,
            to_date('01.' || lpad(to_char(mod(rownum,12)+1),2,'0') || '.2018', 'dd.mm.yyyy') as time_id
     from dual connect by level<=1e7;

Insert Another 10 Million Records to the Table

$ sqlplus / as sysdba

SQL> insert into QADER_T1 select rownum as id, 'Just Some Text' as textcol, mod(rownum,5) as numcol1, mod(rownum,1000) as numcol2, 5000 as numcol3,
to_date ('01.' || lpad(to_char(mod(rownum,12)+1),2,'0') || '.2018' ,'dd.mm.yyyy') as time_id from dual connect by level<=1e7;
SQL> COMMIT;
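A quick sanity check after both loads, using the table created above, should now report 20 million rows:

```sql
-- Verify the row count: 10 million from the CTAS plus 10 million inserted
SQL> select count(*) from QADER_T1;

  COUNT(*)
----------
  20000000
```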

Sunday, March 24, 2019

Backup of a Container Database (CDB) and a Pluggable Database (PDB) in Oracle Database 12c


1. Container Database (CDB) Backup

Backup of a Container Database (CDB) is essentially the same as a non-Container Database. The main thing to remember is, by doing a full backup of the CDB you are also doing a full backup of all PDBs.

Connect to RMAN using OS authentication and take a full backup using the following command. This means you are connecting to the root container with "AS SYSDBA" privilege.

$ rman target=/

RMAN> BACKUP DATABASE PLUS ARCHIVELOG;

Note:

The datafiles associated with the CDB (cdb1) and all the PDBs (pdb1, pdb2, pdb$seed) are included in the backup.

2. Root Container Backup

A backup of the root container is a backup of the CDB, excluding any of the PDBs.

Connect to RMAN using OS authentication and backup the root container using the following command. This means you are connecting to the root container with "AS SYSDBA" privilege.

$ rman target=/

RMAN> BACKUP DATABASE ROOT;

Note:

The datafiles associated with the CDB (cdb1) are included, but the PDBs (pdb1, pdb2, pdb$seed) are not included in the backup.

3. Pluggable Database (PDB) Backup

There are two ways to back up pluggable databases. When connected to RMAN as the root container, you can back up one or more PDBs using the following command.

$ rman target=/

RMAN> BACKUP PLUGGABLE DATABASE pdb1, pdb2;

Note:
You can see this includes the datafiles for both referenced PDBs.

Alternatively, connect to a specific PDB and issue the following command.

$ rman target=sys@pdb1

RMAN> BACKUP DATABASE;

4. Tablespace and Datafile Backups

Multiple PDBs in the same CDB can have tablespaces with the same name, for example SYSTEM, SYSAUX and USERS. One way to remove that ambiguity is to connect to the appropriate PDB. Once RMAN is connected to the PDB, the tablespace backup command is unchanged compared to previous versions.

$ rman target=sys@pdb1

RMAN> BACKUP TABLESPACE system, sysaux, users;

Alternatively, you can remove the ambiguity by qualifying the tablespace name with the PDB name when connected to the root container.

$ rman target=sys@cdb1

RMAN> BACKUP TABLESPACE pdb1:system, pdb1:sysaux, pdb1:users, pdb2:system;

Datafiles have unique file numbers and fully qualified names, so they can be backed up from the root container or the individual PDB.

$ rman target=/

# Or

$ rman target=sys@pdb1

RMAN> BACKUP DATAFILE 8, 9, 10;

If you are connected to a PDB, only the files belonging to that PDB can be backed up. So, for example, when connected to PDB1, we get an error if we try to back up the SYSTEM datafile from the root container.

RMAN> BACKUP DATAFILE 1;

Starting backup at 23-DEC-13
using channel ORA_DISK_1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============


Wednesday, March 13, 2019

12c Pluggable Databases FAQ's


1. Can point-in-time recovery be conducted at Pluggable Database (PDB) level?
Yes, point-in-time recovery can be performed for an individual PDB.

2. Can database services be created inside a Pluggable Database (PDB)?
Of course, you can have many services within a PDB. Each service denotes the PDB in which it is defined as its initial current container. Use the normal methods to create, maintain, and drop additional services in a PDB. A PDB's default service must not be dropped. Of course, service names must be unique across all the databases registered with a particular listener.
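As a sketch of the normal method, using DBMS_SERVICE from inside the PDB (the service name hr_app and PDB name pdb1 are hypothetical):

```sql
-- Connect to the PDB, then create and start an additional service
SQL> alter session set container = pdb1;
SQL> exec DBMS_SERVICE.CREATE_SERVICE(service_name => 'hr_app', network_name => 'hr_app');
SQL> exec DBMS_SERVICE.START_SERVICE('hr_app');
```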

3. Can I have a local TEMPORARY tablespace at Pluggable Database (PDB) level?
Although only the SYSTEM and SYSAUX tablespaces are created during PDB creation, you can create a local temporary tablespace later for each PDB. The UNDO tablespace cannot be local and stays at the CDB level.
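For example (the tablespace name, PDB name and path are illustrative only):

```sql
-- Create a local temporary tablespace inside a PDB
SQL> alter session set container = pdb1;
SQL> create temporary tablespace pdb_temp
     tempfile '/u02/app/oracle/oradata/orcl/pdb1/pdb_temp01.dbf' size 100m;
```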

4. Will the conventional database mode still be available in 12c and later versions?
Yes, in Oracle 12c you have two options: create a usual old-fashioned Oracle database (non-CDB), or create a Container Database (CDB) that holds all the Pluggable Databases (PDBs) you create or plug in later. However, based on my information, non-CDB mode will be available only in 12c Release 1 for compatibility reasons and will be discontinued in 12cR2.

5. Will an Oracle 12c database have only one container?
Yes and no! In fact, Oracle considers the CDB as well as all PDBs to be containers. Every container has its own name. For example, a Container Database (CDB) has the container name CDB$Root, and every pluggable database has a container name matching the PDB name. At CDB level your current container is CDB$Root by default, but you can switch between PDB containers as well. At CDB level you can change the current container with the following SQL:

alter session set container = my_pdb;

or verify current container with below SQL:

select Sys_Context('Userenv', 'Con_Name') "current container" from dual;

6. Is cloning a PDB from one in a different CDB supported?
I have not seen it in 12c Beta 2, but Oracle intends to support it in the final version. You identify the source CDB by creating a database link to it from the CDB in which you will create the clone.

7. Is online PDB cloning supported?
Not in 12c Beta 2, but Oracle intends to remove this restriction in the final version using the same approach that allows online RMAN backup: reconstructing any blocks that change during the time it takes to copy the source PDB's datafiles.

8. Why does switching PDB Open_Mode/Restricted modes require two SQL commands?
Not sure why, but in the final 12c release it will be possible to go from any one Open_Mode/Restricted combination to another with a single use of the alter pluggable database statement.

9. Will it be possible to allocate/restrict resources (CPU/memory) of PDBs?
Yes, it will be possible using the new functionality of Oracle 12c Resource Manager.
Using the 12c Resource Manager you will be able to control CPU, Exadata I/O, sessions and parallel servers. A new 12c CDB Resource Manager plan uses so-called "shares" (resource allocations) to specify how CPU is distributed between PDBs. A CDB resource plan can also use "utilization limits" to cap the CPU usage of a PDB. With a default directive, you do not need to modify the resource plan for each PDB plug and unplug. However, I am not aware of a way to restrict the memory of individual PDBs, since they share one single SGA.
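A minimal sketch of a CDB resource plan with shares and a utilization limit (the plan and PDB names are made up for illustration; run from the root container):

```sql
begin
  dbms_resource_manager.create_pending_area;
  dbms_resource_manager.create_cdb_plan(
    plan    => 'cdb_plan1',
    comment => 'CPU shares between PDBs');
  dbms_resource_manager.create_cdb_plan_directive(
    plan               => 'cdb_plan1',
    pluggable_database => 'pdb1',
    shares             => 3,      -- 3 shares of CPU
    utilization_limit  => 70);    -- cap pdb1 at 70% CPU
  dbms_resource_manager.validate_pending_area;
  dbms_resource_manager.submit_pending_area;
end;
/
-- Activate the plan for the CDB
alter system set resource_manager_plan = 'cdb_plan1';
```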

Startup and Shutdown Container Databases (CDB) and Pluggable Databases (PDB)


1. Container Database (CDB)
Startup and shutdown of the container database is the same as it has always been for regular instances. The SQL*Plus STARTUP and SHUTDOWN commands are available when connected to the CDB as a privileged user. Some typical values are shown below.

STARTUP [NOMOUNT | MOUNT | RESTRICT | UPGRADE | FORCE | READ ONLY]
SHUTDOWN [IMMEDIATE | ABORT]

2. Pluggable Database (PDB)
Pluggable databases can be started and stopped using SQL*Plus commands or the ALTER PLUGGABLE DATABASE command.

SQL*Plus Commands
The following SQL*Plus commands are available to start and stop a pluggable database, when connected to that pluggable database as a privileged user.
STARTUP FORCE;
STARTUP OPEN READ WRITE [RESTRICT];
STARTUP OPEN READ ONLY [RESTRICT];
STARTUP UPGRADE;
SHUTDOWN [IMMEDIATE];
Some examples are shown below.

STARTUP FORCE;
SHUTDOWN IMMEDIATE;

STARTUP OPEN READ WRITE RESTRICT;
SHUTDOWN;

STARTUP;
SHUTDOWN IMMEDIATE;
ALTER PLUGGABLE DATABASE
The ALTER PLUGGABLE DATABASE command can be used from the CDB or the PDB.

The following commands are available to open and close the current PDB when connected to the PDB as a privileged user.

ALTER PLUGGABLE DATABASE OPEN READ WRITE [RESTRICTED] [FORCE];
ALTER PLUGGABLE DATABASE OPEN READ ONLY [RESTRICTED] [FORCE];
ALTER PLUGGABLE DATABASE OPEN UPGRADE [RESTRICTED];
ALTER PLUGGABLE DATABASE CLOSE [IMMEDIATE];
Some examples are shown below.

ALTER PLUGGABLE DATABASE OPEN READ ONLY FORCE;
ALTER PLUGGABLE DATABASE CLOSE IMMEDIATE;

ALTER PLUGGABLE DATABASE OPEN READ WRITE;
ALTER PLUGGABLE DATABASE CLOSE IMMEDIATE;
The following commands are available to open and close one or more PDBs when connected to the CDB as a privileged user.

ALTER PLUGGABLE DATABASE <pdb-name-clause> OPEN READ WRITE [RESTRICTED] [FORCE];
ALTER PLUGGABLE DATABASE <pdb-name-clause> OPEN READ ONLY [RESTRICTED] [FORCE];
ALTER PLUGGABLE DATABASE <pdb-name-clause> OPEN UPGRADE [RESTRICTED];
ALTER PLUGGABLE DATABASE <pdb-name-clause> CLOSE [IMMEDIATE];
The <pdb-name-clause> clause can be any of the following:

One or more PDB names, specified as a comma-separated list.
The ALL keyword to indicate all PDBs.
The ALL EXCEPT keywords, followed by one or more PDB names in a comma-separate list, to indicate a subset of PDBs.

Some examples are shown below.

ALTER PLUGGABLE DATABASE pdb1, pdb2 OPEN READ ONLY FORCE;
ALTER PLUGGABLE DATABASE pdb1, pdb2 CLOSE IMMEDIATE;

ALTER PLUGGABLE DATABASE ALL OPEN;
ALTER PLUGGABLE DATABASE ALL CLOSE IMMEDIATE;

ALTER PLUGGABLE DATABASE ALL EXCEPT pdb1 OPEN;
ALTER PLUGGABLE DATABASE ALL EXCEPT pdb1 CLOSE IMMEDIATE;

12c Database Commands


1. Create and open a new Oracle 12c Pluggable database (PDB)

Create a pdb using the seed:

Now we will create and open a new pluggable database (PDB) called My_PDB.
Every CDB has a standard PDB template named PDB$Seed. We actually create a new PDB by cloning this seed. See the SQL example below:

Prerequisites:
* Connect to the root with the CREATE PLUGGABLE DATABASE privilege.
* Configure a storage location (and ensure disk space) for the new database's files.
* Select a unique name for the new database.

create pluggable database My_PDB admin user App_Admin identified by pass file_name_convert = ('/pdbseed/', '/my_pdb/');

Ex:
create pluggable database lub admin user pdbmqm identified by oracle file_name_convert=('/u02/app/oracle/oradata/orcl/pdbseed', '/u02/app/oracle/oradata/orcl/lub');

The file_name_convert clause specifies how the names of the new files are derived from those of the seed, using the same scheme you know from RMAN. During PDB creation Oracle copies just two database files, for the System and Sysaux tablespaces, to the new location of our PDB. The rest of the database files, for undo, redo, and so on, are global for the whole CDB and belong to the special container called CDB$Root.

The admin user clause is mandatory. In an extended form, it lets you grant privileges and roles to the new user. This user is able to start a new session only within My_PDB.

On completion of the create pluggable database command, the new PDB will be in MOUNTED mode. Before you can start a session in the new PDB, it must be opened. So, let's open our PDB using the SQL below:

alter pluggable database My_PDB open;

2. Check Container Database (CDB) and Pluggable database (PDB) files

select con_id, tablespace_name, file_name from cdb_data_files where file_Name like '%/cdb1/pdbseed/%' or file_Name like '%/cdb1/my_pdb/%' order by 1, 2;

CON_ID TABLESPACE_NAME FILE_NAME
------ --------------- ---------------------------------------------
     2 SYSAUX          /opt/oracle/oradata/cdb1/pdbseed/sysaux01.dbf
     2 SYSTEM          /opt/oracle/oradata/cdb1/pdbseed/system01.dbf
     3 SYSAUX          /opt/oracle/oradata/cdb1/My_PDB/sysaux01.dbf
     3 SYSTEM          /opt/oracle/oradata/cdb1/My_PDB/system01.dbf

3. Open all Oracle 12c Pluggable Databases (PDB)

Each PDB within a CDB has its own Open_Mode and Restricted status in each instance of a RAC. The possible values of Open_Mode are MOUNTED, READ ONLY, and READ WRITE; the possible values of the Restricted status are YES and NO while the PDB is open, and null otherwise.

Starting an instance (which opens the entire CDB) does not cause the PDBs to be opened. The alter pluggable database statement is used to set the Open_Mode; in the SQL you give either the name of a particular PDB or the keyword all, thus:

alter pluggable database all open;

4. Close all Oracle 12c Pluggable Databases (PDB) in CDB
This SQL statement closes all the PDBs in the CDB:

alter pluggable database all close;

Show the open time of PDBs

select substr(name,1,10) name, con_id, open_mode, to_char(open_time,'DD-MON-YYYY') open_time, total_size from v$pdbs;

SQL> select con_id,name,open_mode from v$pdbs;

    CON_ID NAME                           OPEN_MODE
---------- ------------------------------ ----------
         2 PDB$SEED                       READ ONLY
         3 ORCL                           READ WRITE
         4 PROD                           READ WRITE

or

show pdbs

How to switch to a PDB:

SQL> alter session set container=PROD;
SQL> show con_name

CON_NAME
------------------------------
PROD
SQL> select file_name from dba_data_files;

FILE_NAME
--------------------------------------------------------------------------------
/home/oracle/app/oracle/oradata/cdb1/prod/system01.dbf
/home/oracle/app/oracle/oradata/cdb1/prod/sysaux01.dbf

How to close the PDB database:
alter pluggable database devpdb1 close immediate;

How to set the container database:
alter session set container = <db_name>;

How to check the container database:
show con_name

How to drop the PDB database:
drop pluggable database My_PDB keep datafiles;

How to Tell If the Database Is a CDB
select name, cdb, con_id from v$database;

Viewing Information About Containers
select name, con_id, db_id from v$containers;

Viewing Information About the PDBs
select pdb_id, pdb_name, status from cdb_pdbs;

Finding the Open Mode of a PDB
select name, open_mode, restricted, open_time from v$pdbs;

Viewing the History of PDBs
select db_name, con_id, pdb_name, operation, cloned_from_pdb_name from cdb_pdb_property;

Viewing all the tablespaces in a CDB
select tablespace_name, con_id from cdb_tablespaces;

Note:

1. After creating a PDB, open it at least once in read-write mode to complete the integration with the CDB.

2. Memory and processes are required at container database level only.

Components of the CDB are as follows


* One or more PDBs

* CDB$ROOT, the keeper of all the PDBs that are part of the collection

* PDB$SEED, a template from which new PDBs can be created

Types of Databases in 12c


1. Container Database (CDB): The primary database that contains multiple plugged-in databases. Many operations can be performed at the container level to reduce management costs. A database is created as either a CDB or a non-CDB.

2. Pluggable Database (PDB): A set of schemas, objects, and non-schema objects that can be plugged into and unplugged from a container database. The PDB appears to Oracle Net and end users as a database in its own right but is actually managed within a container that may hold many PDBs.

3. Seed Database (Seed PDB): A default PDB that the system uses as a template to quickly provision other user-created PDBs. Internally, it is called PDB$SEED.

12c Pluggable Database feature capability


* You can have many Pluggable Databases (PDBs) inside a single container database (CDB)
* PDB is backwards compatible with an ordinary pre-12.1 database;
* PDB is transparent to applications: you don't change the client code or your database objects
* Each instance in a RAC opens CDB as a whole (so the CDB and all the PDBs in it are at the same Oracle Database version)
* A session sees only the single PDB it connects to
* You can unplug a PDB from one CDB and plug it into another CDB
* You can clone a PDB, both within the same CDB or from one CDB to another one
* Resource Manager is extended with new PDB capabilities
* The operations on PDBs as entities (creating, unplugging, plugging in, cloning, dropping, and setting the Open_Mode) are implemented as SQL statements
* CDB’s administrator executes these operations when connected to its so-called root
* Pluggable Databases functionality is fully interoperable with all the database options
* All PDBs can be backed up at once, but recovered separately

12c Pluggable Database feature benefits


* Consolidate many PDBs onto a single platform.
* Fast provisioning of a new PDB or of a clone of an existing PDB;
* Fast redeployment, by unplug and plug, of an existing database to a new platform
* Patch or upgrade the Oracle Database version for many PDBs quickly by doing it just once
* Patch or upgrade a single PDB by unplugging it and plugging it into a different CDB at a later version
* Separation of the content of one PDB from that of peer PDBs in the same CDB
* Separation of the duties of the application administrators of these PDBs

Oracle 12c Pluggable Database features


* Each PDB has its own private data dictionary for customer-created database objects, while the CDB as a whole has the data dictionary for the Oracle-supplied system; each data dictionary defines its own namespace. In other words, there is a global data dictionary (CDB level) and a local one (PDB level).
* There is a new split data dictionary architecture that allows a PDB to be quickly unplugged from one CDB and plugged into a different CDB
* Each PDB sees a read-only definition of the Oracle-supplied system
* There are global database initialisation parameters at CDB level and local ones at PDB level. PDB parameters belong only to your particular PDB and persist even after you unplug the PDB.
* Database users can be global (CDB) or local (PDB only). The SYS and SYSTEM users exist in both right from the beginning. If you create a new common user in the CDB, you will see it in the PDBs as well; a user created at PDB level stays local.
* Temporary tablespaces can be global or local
* Redo logs and the undo tablespace are global only (at CDB level)
* Data Guard acts on the CDB as a whole; scheduled RMAN backups are done for the CDB as a whole; you can back up a selected PDB whenever you want to; you can do point-in-time recovery of an individual PDB
* An application connects, with no code changes, to a PDB; the system administrator connects to the CDB; the service named in the connect string specifies the destination PDB
* A PDB allows a clear declarative definition of an application; one PDB knows nothing of the other PDBs within the same CDB; each PDB is a hermetically sealed container. That ensures a new level of database independence and robust security

Tuesday, March 12, 2019

RMAN Interview Questions


1. Difference between catalog and nocatalog?
With nocatalog, RMAN records its repository only in the target database's control file. A recovery catalog is central and can hold the information of many databases.

2. Difference between using recovery catalog and control file?
When new incarnation happens, the old backup information in control file will be lost. It will be preserved in recovery catalog.
In recovery catalog, we can store scripts.
Recovery catalog is central and can have information of many databases.

3. Can we use the same target database as catalog?
No. The recovery catalog should not reside in the target database (the database to be backed up), because if the target database were lost, the catalog stored inside it would be lost with it, and the catalog cannot be queried while that database is down for restore.

4. How do you know how much of an RMAN task has been completed?
By querying v$rman_status or v$session_longops.
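For example, a common progress query against v$session_longops:

```sql
-- Percentage complete for in-flight RMAN operations
SQL> select sid, serial#, opname,
            round(sofar/totalwork*100, 1) as pct_done
     from   v$session_longops
     where  opname like 'RMAN%'
     and    totalwork <> 0
     and    sofar <> totalwork;
```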

5. From where do the LIST and REPORT commands get their input?
Both commands query the V$ and recovery catalog (RC) views: V$BACKUP_FILES, or recovery catalog views such as RC_DATAFILE_COPY or RC_ARCHIVED_LOG.

6. Command to delete archive logs older than 7 days?
RMAN> delete archivelog all completed before 'sysdate-7';

8. What is the use of the crosscheck command in RMAN?
Crosscheck is used to check whether the backup information recorded in the catalog matches what actually exists at OS level (on disk or tape).

9. What are the differences between crosscheck and validate commands?
Validate command is to examine a backup set and report whether it can be restored. RMAN scans all of the backup pieces in the specified backup sets and looks at the checksum to verify that the contents are intact so that backup can be successfully restored if necessary.

Crosscheck command is to verify the status of backups and copies recorded in the RMAN repository against media such as disk or tape. The crosscheck command only processes files created on the same device type as the channel running crosscheck.

10. Which one is better, a differential (incremental) backup or a cumulative (incremental) backup?
A differential backup backs up all blocks changed after the most recent incremental backup at level 1 or 0; a cumulative backup backs up all blocks changed after the most recent level 0 incremental backup. Differential backups are smaller, while cumulative backups speed up recovery because fewer incremental backups need to be applied.

11. What is Level 0, Level 1 backup?
A level 0 incremental backup, which is the base for subsequent incremental backups, copies all blocks containing data, backing the datafile up into a backup set just as a full backup would. A level 1 incremental backup can be either of the following types:
A differential backup, which backs up all blocks changed after the most recent incremental backup at level 1 or 0
A cumulative backup, which backs up all blocks changed after the most recent incremental backup at level 0
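The two level 1 flavors map onto the following RMAN commands (differential is the default when CUMULATIVE is not specified):

```sql
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;              # base backup
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;              # differential
RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;   # cumulative
```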

12. Can we perform level 1 backup without level 0 backup?
If no level 0 backup is available, then the behavior depends upon the compatibility mode setting. If compatibility < 10.0.0, RMAN generates a level 0 backup of the file contents at the time of the backup. If compatibility is >= 10.0.0, RMAN copies all blocks changed since the file was created, and stores the results as a level 1 backup. In other words, the SCN at the time the incremental backup is taken is the file creation SCN.

14. What is snapshot control file?
The snapshot control file is a copy of a database control file created in an operating system-specific location by RMAN. RMAN creates the snapshot control file so that it has a consistent version of a control file to use when either resynchronizing the recovery catalog or backing up the control file.
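You can display or change its location with the following commands (the path below is illustrative):

```sql
RMAN> SHOW SNAPSHOT CONTROLFILE NAME;
RMAN> CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u02/app/oracle/snapcf_cdb1.f';
```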

15. What is the difference between backup set and backup piece?
Backup set is logical and backup piece is physical.

16. RMAN command to backup for creating standby database?
RMAN> duplicate target database to standby database ....

17. How to do cloning by using RMAN?
RMAN> duplicate target database …

18. Suppose you lost one datafile and the DB is running in ARCHIVELOG mode. You have a full database backup that is one week/day old and no backup of this (newly created) datafile. How do you restore/recover the file?
Create the datafile and then recover it:
SQL> alter database create datafile '…path..' size n;
RMAN> recover datafile file_id;

19. What is obsolete backup and expired backup?
A status of "expired" means that the backup piece or backup set is not found in the backup destination.
A status of "obsolete" means the backup piece is still available, but it is no longer needed. The backup piece is no longer needed since RMAN has been configured to no longer need this piece after so many days have elapsed, or so many backups have been performed.
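The two states are typically managed with the following commands:

```sql
RMAN> crosscheck backup;       # marks pieces missing from the media as EXPIRED
RMAN> delete expired backup;   # removes the EXPIRED records from the repository
RMAN> report obsolete;         # lists backups no longer needed per the retention policy
RMAN> delete obsolete;         # deletes them
```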

20. What is the difference between hot backup and RMAN backup?
For a hot backup, we have to put the database in begin backup mode and then take the backup.
RMAN does not put the database in backup mode.
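For comparison, a user-managed hot backup looks roughly like this:

```sql
SQL> alter database begin backup;
-- copy the datafiles at OS level (e.g. with cp), then:
SQL> alter database end backup;
```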

21. How to put manual/user-managed backup in RMAN (recovery catalog)?
By using catalog command.
RMAN> CATALOG START WITH '/tmp/backup.ctl';

25. What is the use of NOFILENAMECHECK in RMAN?
When NOFILENAMECHECK is used with the DUPLICATE command, RMAN does not validate the filenames during restoration. The DUPLICATE NOFILENAMECHECK option should not be used if the primary database and the standby database are on the same host.

26. What is RMAN ?
Recovery Manager (RMAN) is a utility that can manage your entire Oracle backup and recovery activities.
Which Files must be backed up?
Database Files (with RMAN)
Control Files (with RMAN)
Offline Redolog Files (with RMAN)
INIT.ORA (manually)
Password Files (manually)

27. When you take a hot backup putting Tablespace in begin backup mode, Oracle records SCN # from header of a database file.  What happens when you issue hot backup database in RMAN at block level backup? How does RMAN mark the record that the block has been backed up ?  How does RMAN know what blocks were backed up so that it doesn't have to scan them again?
In 10g and later there is the Oracle Block Change Tracking feature. Once enabled, it records the blocks modified since the last backup and stores that log in a block change tracking file. During backups RMAN uses the file to identify the specific blocks that must be backed up. This improves RMAN's performance, as it does not have to scan whole datafiles to detect changed blocks.
Logging of changed blocks is performed by the CTWR (change tracking writer) process, which is responsible for writing data to the block change tracking file. RMAN uses SCNs at the block level and the archived redo logs to resolve any inconsistencies in the datafiles from a hot backup. What RMAN does not require is putting the tablespace in BACKUP mode, thus freezing the SCN in the header. Rather, RMAN keeps this information in either your control file or the RMAN repository (i.e., recovery catalog).
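Enabling block change tracking and checking its status can be sketched as follows (the file path is illustrative):

```sql
SQL> alter database enable block change tracking
     using file '/u02/app/oracle/oradata/orcl/bct.chg';
SQL> select status, filename from v$block_change_tracking;
```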

28.  What are the Architectural components of RMAN?
1. RMAN executable
2. Server processes
3. Channels
4. Target database
5. Recovery catalog database (optional)
6. Media management layer (optional)
7. Backups, backup sets, and backup pieces

29.  What are Channels?
A channel is an RMAN server process started when there is a need to communicate with an I/O device, such as a disk or a tape. A channel is what reads and writes RMAN backup files. It is through the allocation of channels that you govern I/O characteristics such as:
Type of I/O device being read or written to, either a disk or an sbt_tape
Number of processes simultaneously accessing an I/O device
Maximum size of files created on I/O devices
Maximum rate at which database files are read
Maximum number of files open at a time

30.  Why is the catalog optional?
Because RMAN manages backup and recovery operations, it requires a place to store the necessary information about the database. RMAN always stores this information in the target database control file. You can also store RMAN metadata in a recovery catalog schema contained in a separate database. The recovery catalog schema must be stored in a database other than the target database.

31.  What does complete RMAN backup consist of ?
A backup of all or part of your database. This results from issuing an RMAN backup command. A backup consists of one or more backup sets.

32.  What is a Backup set?
A logical grouping of backup files -- the backup pieces -- that are created when you issue an RMAN backup command. A backup set is RMAN's name for a collection of files associated with a backup. A backup set is composed of one or more backup pieces.

33.  What is a Backup piece?
A physical binary file created by RMAN during a backup. Backup pieces are written to your backup medium, whether to disk or tape. They contain blocks from the target database's datafiles, archived redo log files, and control files. When RMAN constructs a backup piece from datafiles, there are several rules that it follows:
*  A datafile cannot span backup sets
*  A datafile can span backup pieces as long as it stays within one backup set
*  Datafiles and control files can coexist in the same backup sets
*  Archived redo log files are never in the same backup set as datafiles or control files
RMAN is the only tool that can operate on backup pieces. If you need to restore a file from an RMAN backup, you must use RMAN to do it. There is no way to manually reconstruct database files from the backup pieces; you must use RMAN to restore files from a backup piece.

34.  What are the benefits of using RMAN?
1. Incremental backups that only copy data blocks that have changed since the last backup.
2. Tablespaces are not put in backup mode, thus there is no extra redo log generation during online backups.
3. Detection of corrupt blocks during backups.
4. Parallelization of I/O operations.
5. Automatic logging of all backup and recovery operations.
6. Built-in reporting and listing commands.

35. RMAN Restore Preview
The PREVIEW option of the RESTORE command allows you to identify the backups required to complete a specific restore operation. The output generated by the command is in the same format as the LIST command. In addition the PREVIEW SUMMARY command can be used to produce a summary report with the same format as the LIST SUMMARY command. The following examples show how these commands are used:
# Spool output to a log file
SPOOL LOG TO c:\oracle\rmancmd\restorepreview.lst;
# Show what files will be used to restore the SYSTEM tablespace’s datafile
RESTORE DATAFILE 2 PREVIEW;
# Show what files will be used to restore a specific tablespace
RESTORE TABLESPACE users PREVIEW;
# Show a summary for a full database restore
RESTORE DATABASE PREVIEW SUMMARY;
# Close the log file
SPOOL LOG OFF;

36. Where should the catalog be created?
The recovery catalog used by RMAN should be created in a separate database, not in the target database, the reason being that the target database will be shut down while datafiles are restored.

37. How many times does oracle ask before dropping a catalog?
Twice: once for the actual command and once for confirmation.

38. How do you view the current RMAN configuration defaults for the database?
RMAN> show all;
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 3 DAYS;
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP OFF; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u02/app/oracle/product/10.1.0/db_1/dbs/snapcf_test.f'; # default

39. Backup the database.
run
{
backup incremental level $level ${level_keyword}
tag INC${target_db}_$level database include current controlfile;
backup archivelog all not backed up 1 times delete input;
}

40. What are the various reports available with RMAN
rman>list backup;
 rman> list archive;
17. What does backup incremental level=0 database do?
Backup database level=0 is a full backup of the database. rman>>backup incremental level=0 database;
You can also use backup full database; which means the same thing as level=0;
18. What is the difference between DELETE INPUT and DELETE ALL in a backup command?
LOG_ARCHIVE_DEST_n generally points to two (or more) disk locations where the archived logs are written. When RMAN is asked to back up archived logs, it reads them from one of those locations. With DELETE INPUT, only the copies in the location that was backed up are deleted; with DELETE ALL, the archived logs are deleted from all enabled LOG_ARCHIVE_DEST_n locations.
DELETE ALL applies only to archived logs.
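As a sketch (the destinations involved are whatever LOG_ARCHIVE_DEST_n points to in your environment), the difference shows up like this:

```sql
-- Back up archived logs, then delete only the copies that were read
-- from the destination RMAN used for the backup:
RMAN> backup archivelog all delete input;

-- Back up archived logs, then delete the logs from ALL enabled
-- LOG_ARCHIVE_DEST_n destinations:
RMAN> backup archivelog all delete all input;
```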

41. How do I back up archived logs?
To back up the archived logs to tape and then remove copies that are no longer needed:
run
{
allocate channel t1 type 'SBT_TAPE';
backup archivelog all not backed up 1 times;
delete noprompt archivelog until time 'sysdate-3/24';
delete noprompt obsolete;
release channel t1;
}

42. How do I do an incremental backup after a base backup?
Using a shell-script template, where $level, ${level_keyword} and ${target_db} are substituted by the calling script:
run
{
backup incremental level $level ${level_keyword}
tag INC${target_db}_$level database include current controlfile;
backup archivelog all not backed up 1 times delete input;
}

43. In the catalog database, if some of the blocks are corrupted due to a system crash, how will you recover?
Use the RMAN BLOCKRECOVER command.

44. You have taken a manual backup of a datafile using the OS. How will RMAN know about it?
You have to catalog that manual backup in RMAN's repository with the command:
RMAN> catalog datafilecopy '/DB01/BACKUP/users01.dbf';
Restrictions:
> The file must be accessible on disk
> It must be a complete image copy of a single file

45. Where does RMAN keep information about backups if you are using RMAN without a catalog?
RMAN keeps information about backups in the control file.
CATALOG vs NOCATALOG:
The difference is only in who maintains the backup records (when the last successful backup was taken, incremental vs. differential, and so on).
In CATALOG mode a separate catalog database stores all the information.
In NOCATALOG mode the control file of the target database is responsible.

46. How do you see information about backups in RMAN?
RMAN> list backup;
To monitor backup progress, use this SQL, substituting the SID of the RMAN session once the backup starts:
SQL> SELECT sid, totalwork, sofar FROM v$session_longops WHERE sid = 153;

47. How does RMAN improve backup time?
An RMAN backup typically takes much less time than a regular online backup because RMAN copies only used blocks (and, for incrementals, only modified blocks).

48. What is the advantage of the RMAN utility?
Central repository
Incremental backup
Corruption detection
Advantages over traditional backup methods:
1) Copies only filled blocks: even if 1000 blocks are allocated to a datafile but only 500 contain data, RMAN backs up only those 500 blocks.
2) Incremental and cumulative backups.
3) Catalog and nocatalog options.
4) Detection of corrupted blocks during backup.
5) Can create and store backup and recovery scripts.
6) Better performance through automatic parallelization (allocating channels) and less redo generation.

49. List the encryption options available with RMAN?
RMAN offers three encryption modes: transparent mode, password mode, and dual mode.
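A sketch of configuring each mode (the Oracle Encryption Wallet must already be set up for transparent and dual modes; the password below is a placeholder):

```sql
-- Transparent mode: uses the Oracle Encryption Wallet only
RMAN> configure encryption for database on;

-- Password mode: for the current session only (ONLY keyword)
RMAN> set encryption on identified by "MyBackupPwd" only;

-- Dual mode: wallet plus password (drop the ONLY keyword)
RMAN> set encryption on identified by "MyBackupPwd";
```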

50. What is the significance of the incarnation and the DBID in RMAN backups?
When you have multiple databases, you have to set your DBID (database ID), which is unique to each database, before you do any restore operation from RMAN.
It is also possible that the recorded incarnation differs from your database's current one, so it is advised to reset it to match the current incarnation. If you run the RMAN command ALTER DATABASE OPEN RESETLOGS, then RMAN resets the target database automatically so that you do not have to run RESET DATABASE. By resetting the database, RMAN considers the new incarnation to be the current incarnation of the database.
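For example, a restore of a lost controlfile typically starts by setting the DBID, and incarnations can be listed and reset explicitly (the DBID and incarnation key below are made-up values):

```sql
RMAN> set dbid 1090770270;             -- identify the database before restore
RMAN> restore controlfile from autobackup;
RMAN> list incarnation;                -- show all known incarnations
RMAN> reset database to incarnation 2; -- make an older incarnation current
```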

51. List at least 6 advantages of RMAN backups compare to traditional hot backups?
RMAN has the following advantages over Traditional backups:
1. Ability to perform INCREMENTAL backups
2. Ability to Recover one block of datafile
3. Ability to automatically backup CONTROLFILE and SPFILE
4. Ability to delete older archived redo log files automatically once they have been backed up.
5. Ability to perform backup and restore with parallelism.
6. Ability to report the files needed for the backup.
7. Ability to RESTART the failed backup, without starting from beginning.
8. Much faster when compared to other TRADITIONAL backup strategies.

52. How do you enable the autobackup of the controlfile using RMAN?
Issue this command at the RMAN prompt:
RMAN> configure controlfile autobackup on;
We can also configure the controlfile autobackup format:
RMAN> configure controlfile autobackup format for device type disk to
2> '$HOME/BACKUP/RMAN/%F.bkp';
$HOME/BACKUP/RMAN/ can be any desired location.

53. How do you identify all the target databases that are being backed up with RMAN?
There is no single view that tells you whether a database is backed up. The only option is to connect to each target database and run LIST BACKUP, which shows the backup information with dates and times.

54. What is the difference between cumulative incremental and differential incremental backups?
Differential backup: This is the default type of incremental backup which backs up all blocks changed after the most recent backup at level n or lower.
Cumulative backup: backs up all blocks changed after the most recent backup at level n-1 or lower.
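A minimal sketch of both incremental types against a level 0 base:

```sql
RMAN> backup incremental level 0 database;            -- base (level 0) backup

RMAN> backup incremental level 1 database;            -- differential (default):
                                                      -- blocks changed since the
                                                      -- most recent level 1 or 0

RMAN> backup incremental level 1 cumulative database; -- cumulative: blocks changed
                                                      -- since the last level 0
```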

55. How do you identify block corruption in an RMAN database? How do you fix it?
Use the V$DATABASE_BLOCK_CORRUPTION view to find which blocks are corrupted:
SQL> select file#, block# from v$database_block_corruption;
FILE#  BLOCK#
2      507
The block above is corrupted, so connect to RMAN and recover it:
RMAN> blockrecover datafile 2 block 507;
This recovers block 507. To recover every block listed in V$DATABASE_BLOCK_CORRUPTION:
RMAN> blockrecover corruption list;

56. How do you clone a database using RMAN? When do you use the CROSSCHECK command?
Two commands are available in RMAN to clone a database:
1) DUPLICATE
2) RESTORE (to a new location)
Use CROSSCHECK to check whether backup pieces, proxy copies, or disk copies recorded in the repository still exist on the backup media.

57. What is the difference between obsolete RMAN backups and expired RMAN backups?
The term obsolete does not mean the same as expired. In short, obsolete means "not needed," whereas expired means "not found."
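The two states are managed with different commands, for example:

```sql
-- Obsolete: no longer needed under the configured retention policy
RMAN> report obsolete;
RMAN> delete obsolete;

-- Expired: recorded in the repository but no longer found on media
RMAN> crosscheck backup;
RMAN> delete expired backup;
```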

58. List some of the RMAN catalog view names which contain the catalog information?
RC_DATABASE_INCARNATION
RC_BACKUP_COPY_DETAILS
RC_BACKUP_CORRUPTION
RC_BACKUP_DATAFILE_SUMMARY, to name a few.

59. What is DB_RECOVERY_FILE_DEST? When do you need to set this value?
DB_RECOVERY_FILE_DEST specifies the location of the flash recovery area. Set it (together with DB_RECOVERY_FILE_DEST_SIZE) when you want to use the flash recovery area, for example when the Flashback Database option is on.

60. How do you set up RMAN tape backups?
$ rman target /
run
{
allocate channel ch1 device type sbt_tape maxpiecesize 4g
format '%d_%U_%T_%t';
sql 'alter system switch logfile';
backup database;
backup archivelog from time 'sysdate-7';
backup format '%d_CTLFILE_P_%U_%T_%t' current controlfile;
release channel ch1;
}
This is a backup script for a Tivoli backup server.

61. How do you install the RMAN recovery catalog?
Steps to be followed:
1) Create a connect string (TNS alias) for the catalog database.
2) In the catalog database, create a new user (or use an existing one) and grant it the RECOVERY_CATALOG_OWNER role.
3) Log in to RMAN, connecting to both the target and the catalog:
a) export ORACLE_SID
b) rman target / catalog <catalog user>@<connect string>
4) RMAN> create catalog;
5) RMAN> register database;
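The steps above can be sketched as a session (the user name rcat, its password, and the connect string CATDB are placeholders):

```sql
-- In the catalog database:
SQL> create user rcat identified by rcat default tablespace users quota unlimited on users;
SQL> grant recovery_catalog_owner to rcat;

-- From the target host, connect to target and catalog, then create and register:
-- $ rman target / catalog rcat/rcat@CATDB
RMAN> create catalog;
RMAN> register database;
```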

62. When do you recommend hot backup? What are the prerequisites?
The database must be in ARCHIVELOG mode.
The archive destination must be set, and LOG_ARCHIVE_START must be TRUE (in versions before 10g).
If you go through RMAN, a typical configuration looks like this:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/u01/oracle/autobackup/%F';
CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO
'/u01/app/oracle/product/10.2.0/db_2/dbs/snapcf_dba.f'; # default

63. What is the difference between physical and logical backups?
In Oracle, a logical backup is one taken with traditional Export/Import or the newer Data Pump, whereas a physical backup is a copy of the physical operating-system files that make up the database.

64. What is RAID? What is RAID0? What is RAID1? What is RAID 10?
RAID: redundant array of independent disks.
RAID 0: striping (and concatenation).
RAID 1: mirroring.
RAID 10: mirrored sets that are then striped (a combination of RAID 1 and RAID 0).

65. What things play a major role in designing a backup strategy?
A good backup strategy is not simply a backup; it is also a contingency plan. Consider the following:
1. How long is the allowable downtime during recovery? If short, consider using Data Guard.
2. How long is the backup window? If short, use RMAN instead of user-managed backups.
3. If disk space for backups is limited, never use user-managed backups.
4. If the database is large, consider full RMAN backups on the weekend and incremental backups on weekdays.
5. Schedule backups for the time of least database activity, to avoid resource hogging.
6. Backup scripts should be automated via scheduled jobs, so operators never miss a backup window.
7. Consider the retention period: try to keep at least two full backups (current and previous).
Cold backup: shut down the database and copy the datafiles with O.S. commands; this is simply copying the datafiles like any other file.
Hot backup: the backup is taken while the database is running. The process is:
1) SQL> alter database begin backup;
2) Copy the datafiles.
3) After copying:
SQL> alter database end backup;
The BEGIN BACKUP clause checkpoints the datafiles and freezes the checkpoint SCN recorded in their headers; this is what keeps the backup consistent. During a restore, the datafiles are brought back to that SCN from the backup, and the remaining changes are recovered from the archived logs.

66. What is a hot backup and what is a cold backup?
A hot backup is taken while the database is online; a cold backup is taken while the database is shut down.

67. How do you test that your recovery was successful?
SQL> SELECT count(*) FROM flashback_table;

68. How do you backup the Flash Recovery Area?
RMAN> BACKUP RECOVERY FILES;
The files on disk that have not previously been backed up will be backed up. They are full and incremental backup sets, control file auto-backups, archive logs and datafile copies.

69. How to enable Fast Incremental Backup to backup only those data blocks that have changed?
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING;

70. How do you set the flash recovery area?
SQL> ALTER SYSTEM SET db_recovery_file_dest_size = 100G;
SQL> ALTER SYSTEM SET db_recovery_file_dest = '/u10/oradata/school';

71. How can you use the CURRENT_SCN column in the V$DATABASE view to obtain the current SCN?
SQL> SELECT current_scn FROM v$database;

75. How do you identify expired, active, and obsolete backups? Which RMAN commands do you use?
RMAN> crosscheck backup;
RMAN> crosscheck archivelog all;
RMAN> list backup;
RMAN> list archivelog all;
RMAN> report obsolete;

79. Explain how to set up a physical standby database with RMAN?
$ export ORACLE_SID=TEST
$ rman target /
RMAN> show all;
using target database controlfile instead of recovery catalog
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 1 DAYS;
CONFIGURE BACKUP OPTIMIZATION

80. What is an auxiliary channel in RMAN? When do you need it?
An auxiliary channel is a channel connected to the auxiliary instance. If you do not have automatic channels configured, then before issuing the DUPLICATE command you must manually allocate at least one auxiliary channel within the same RUN block.

81. What is a backup set?
RMAN can store its backups in an RMAN-exclusive format called a backup set. A backup set is a collection of backup pieces, each of which may contain backups of one or more datafiles.
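To see the distinction, compare a backup set with an image copy (the datafile number here is arbitrary):

```sql
RMAN> backup as backupset database;  -- RMAN-format backup pieces
RMAN> backup as copy datafile 3;     -- bit-for-bit image copy, usable directly
RMAN> list backup summary;           -- inspect the recorded backups
```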

82. What is RMAN and how does one use it?
Recovery Manager (RMAN) is an Oracle-provided utility for backing up, restoring, and recovering Oracle databases. RMAN ships with the database server and doesn't require a separate installation. The RMAN executable is located in your ORACLE_HOME/bin directory.

83. What kinds of backup are supported by RMAN?
Backup sets, datafile (image) copies, and OS backups cataloged with RMAN.

What is the Flash Recovery Area?
It is a unified storage location for all recovery-related files and activities in an Oracle database. It includes the control file, archived log files, flashback logs, control file autobackups, datafiles, and RMAN files.

84. How do you define a Flash Recovery Area?
To define a Flash Recovery Area set the following Oracle Initialization Parameters.
SQL> ALTER SYSTEM SET db_recovery_file_dest_size = 100G;
SQL> ALTER SYSTEM SET db_recovery_file_dest = '/u10/oradata/school';

85. How do you use the V$RECOVERY_FILE_DEST view to display information regarding the flash recovery area?
SQL> SELECT name, space_limit, space_used, space_reclaimable, number_of_files FROM v$recovery_file_dest;

86. How can you display warning messages?
SQL> SELECT object_type, message_type, message_level, reason, suggested_action FROM dba_outstanding_alerts;

66. How do you use the best practice of Oracle Managed Files (OMF) to let the Oracle database create and manage the underlying operating system files of a database?
SQL> ALTER SYSTEM SET db_create_file_dest = '/u03/oradata/school';
SQL> ALTER SYSTEM SET db_create_online_log_dest_1 = '/u04/oradata/school';

68. How do you monitor block change tracking?
SQL> SELECT filename, status, bytes FROM v$block_change_tracking;
It shows where the block change-tracking file is located, the status of it and the size.

69. How do you use the V$BACKUP_DATAFILE view to display how effective block change tracking is in minimizing the incremental backup I/O?
SQL> SELECT file#, AVG(datafile_blocks), AVG(blocks_read), AVG(blocks_read/datafile_blocks), AVG(blocks) FROM v$backup_datafile WHERE used_change_tracking = 'YES' AND incremental_level > 0 GROUP BY file#;
If the AVG(blocks_read/datafile_blocks) value is high, you may have to decrease the time between incremental backups.

70. How do you backup the entire database?
RMAN> BACKUP DATABASE;

71. How do you back up an individual tablespace?
RMAN> CONFIGURE DEFAULT DEVICE TYPE TO DISK;
RMAN> BACKUP TABLESPACE system;

72. How do you backup datafiles and control files?
RMAN> BACKUP DATAFILE 3;
RMAN> BACKUP CURRENT CONTROLFILE;

73. How do you perform a fast recovery that uses the backup copies in place, without restoring them to the locations recorded in the controlfile?
RMAN> SWITCH DATABASE TO COPY;

74. What are RTO, MTBF, and MTTR?
RTO (Recovery Time Objective): the maximum amount of time that the database can be unavailable and still satisfy the SLAs.
MTBF (Mean Time Between Failures): the average time the system runs between failures.
MTTR (Mean Time To Recover): the average time needed to recover; minimized by fast recovery solutions.

75. How do you enable the encryption for RMAN backups?
If you wish to modify your existing backup environment so that all RMAN backups are encrypted, perform the following steps:
· Set up the Oracle Encryption Wallet
· Issue the following RMAN command:

RMAN> CONFIGURE ENCRYPTION ALGORITHM 'AES256'; -- use 256 bit encryption
RMAN> CONFIGURE ENCRYPTION FOR DATABASE ON; -- encrypt backups

76. What is the difference between restoring and recovering?
Restoring involves copying backup files from secondary storage (backup media) to disk. This can be done to replace damaged files or to copy/move a database to a new location.
Recovery is the process of applying redo logs to the database to roll it forward. One can roll-forward until a specific point-in-time (before the disaster occurred), or roll-forward until the last transaction recorded in the log files.
SQL> connect SYS as SYSDBA
SQL> RECOVER DATABASE UNTIL TIME '2001-03-06:16:00:00' USING BACKUP CONTROLFILE;
RMAN> run {
set until time "to_date('04-Aug-2004 00:00:00', 'DD-MON-YYYY HH24:MI:SS')";
restore database;
recover database;
}


Different JDK Versions ?

Different JDK Versions ?


Java 1.1

* Language changes:
inner classes
* Library changes:
AWT event changes
JDBC, RMI
reflection

Java 1.2

* Language changes:
the strictfp keyword
* Library changes:
a unified collections system
Swing as a new UI-System on top of AWT
* Platform changes
a real JIT, greatly improving speed

Java 1.3

Mostly minor improvements:
* Platform changes:
HotSpot JVM: improvement over the original JIT

Java 1.4

JSR 59
* Language changes:
the assert keyword

* Library changes:
regular expressions support
NIO
integrated XML handling

Java 5 (a.k.a 1.5)

JSR 176, features and enhancements
* Language Changes:
generics (that's the big one)
annotations
enum types
varargs, enhanced for loops (for-each)
* Library changes:
concurrency utilities in java.util.concurrent

Java 6 (a.k.a 1.6)

JSR 270. features and enhancements
Mostly incremental improvements to existing libraries, no new language features (except for the @Override snafu).

Java 7 (a.k.a 1.7)

JSR 336, features and enhancements
* Language changes:
Project Coin (small changes)
switch on Strings
try-with-resources
diamond operator

* Library changes:
new abstracted file-system API (NIO.2) (with support for virtual filesystems)
improved concurrency libraries
elliptic curve encryption
more incremental upgrades

* Platform changes:
support for dynamic languages

Java 8 (a.k.a 1.8)

JSR 337, what's new
* Language changes:
lambda expressions (JSR 335, includes method handles)
continuation of Project Coin (small language improvements)
annotations on Java types
* Library changes:
Improved Date and Time API

What is Parallel Concurrent Processing?

What is Parallel Concurrent Processing?


Parallel concurrent processing allows you to distribute concurrent managers across multiple nodes in a cluster, massively parallel, or networked environment. Instead of operating concurrent processing on a single node while other nodes are idle, you can spread concurrent processing across all available nodes, fully utilizing hardware resources.

Benefits of Parallel Concurrent Processing ?

Parallel concurrent processing provides Oracle E-Business Suite users with the following benefits:

* High performance - the ability to run concurrent processes on multiple nodes to improve concurrent processing throughput.

* Fault Tolerance - the ability to continue running concurrent processes on available nodes even when one or more nodes fails.

* Adaptability - the ability to integrate with platform-specific batch queue and load-balancing systems to maximize concurrent processing performance on a particular platform.

* Single Point of Control - the ability to administer concurrent managers running on multiple nodes from any node in a cluster, massively parallel, or networked environment.

Parallel Concurrent Processing Environments ?

Parallel concurrent processing runs in multi-node environments, such as cluster, massively parallel, and networked environments. In these environments, each node consists of one or more processors (CPUs) and their associated memory. Each node has its own memory that is not shared with other nodes And each node operates independently of other nodes, except when sharing a resource such as a disk.

Important Roles in PCP:

Role of ICM in PCP ?

1.The Internal Concurrent Manager (ICM) monitors, activates and deactivates all managers.
2.ICM migrates managers during node and/or instance failures and needs to be active for failover/failback to work.
3.ICM uses the Service Manager (FNDSM) to spawn and terminate all concurrent manager processes, and to manage GSM services like Workflow mailer, Output Post Processor, etc.
4.ICM will contact the APPS TNS Listener on each local and remote concurrent processing node to start the Service Manager on that node.
5.ICM will not attempt to start a Service Manager if it is unable to TNS ping the APPS TNS Listener
6.One Service Manager is defined for each application node registered in FND_NODES.
7.Each service/manager may have a primary and a secondary node. Initially, a concurrent manager is started on its primary node. In case of node failure, all concurrent managers on that node migrate to their respective secondary nodes.

Role of Service Manager in PCP ?

1.Service manager (FNDSM process) is used to manage services/managers on each concurrent node. It is a requirement in all concurrent processing environments and is therefore an integral part of PCP. PCP cannot be implemented without Service manager.
2.The Service Manager is spawned from the APPS TNS Listener.
3.The APPS TNS Listener must be started on every application node in the system, and started by the user that starts ICM (e.g. applmgr)
4.TNS Listener spawns Service Manager to run as agent of ICM for the local node
5.The Service Manager is started by ICM on demand when needed. If no management actions are needed on a node, Service Manager will not be started by ICM until necessary. When ICM exits its Service Managers exit as well.
6.The Service Manager environment is set by gsmstart.sh and APPSORA.env as defined in listener.ora

Role of Internal Monitors in PCP ?

1.The only function of Internal Monitor (FNDIMON process) is to check if ICM is running and restart failed ICM on local node.
2.Internal Monitors are seeded on every registered node by default by autoconfig.
3.Activate the Internal Monitor on each concurrent node where the ICM can start in case of a failure. By default, the Internal Monitor is deactivated.
4.If the ICM goes down, the Internal Monitor will attempt to start a new ICM on the local node.
5.If multiple ICMs are started, only the first will stay active. The others will gracefully exit.


Troubleshooting Application Slowness Issue In Weblogic

Troubleshooting Application Slowness Issue In Weblogic


Linux performance related commands

top - to check cpu and memory usage

top - 03:32:44 up 1 day, 23:09,  6 users,  load average: 0.00, 0.05, 0.32
Tasks: 157 total,   1 running, 156 sleeping,   0 stopped,   0 zombie
Cpu(s):  1.1%us,  0.3%sy,  0.0%ni, 97.7%id,  0.8%wa,  0.0%hi,  0.1%si,  0.0%st
Mem:   1011508k total,  1001500k used,    10008k free,     3528k buffers
Swap:  2097148k total,   333960k used,  1763188k free,    69648k cached

PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
10 root      20   0     0    0    0 S  3.7  0.0   0:19.80 rcu_sched
13415 oracle    20   0 1286m 293m 4116 S  1.8 29.7   0:29.02 java
1 root      20   0 19408  376  200 S  0.0  0.0   0:04.57 init

System activity report
sar -r 2 5
sar -r 2

[oracle@OEL6 ~]$ sar -r 2 2
Linux 3.8.13-98.2.2.el6uek.x86_64 (OEL6)        02/10/2016      _x86_64_        (1 CPU)

03:33:57 AM kbmemfree kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit
03:33:59 AM     11020   1000488     98.91      2692     69448   2032144     65.37
03:34:01 AM     11020   1000488     98.91      2692     69448   2032144     65.37
Average:        11020   1000488     98.91      2692     69448   2032144     65.37

Input and output statistics

iostat 2 5
iostat 2

[oracle@OEL6 ~]$ iostat 2 1
Linux 3.8.13-98.2.2.el6uek.x86_64 (OEL6)        02/10/2016      _x86_64_        (1 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          1.12    0.00    0.37    0.85    0.00   97.66

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
scd0              0.00         0.00         0.00        544          0
sda               4.91       160.92        43.61   27073268    7336356
fd0               0.00         0.00         0.00         16          0

[oracle@OEL6 ~]$ free
             total       used       free     shared    buffers     cached
Mem:       1011508     999864      11644       2328       3064      60936
-/+ buffers/cache:     935864      75644
Swap:      2097148     334344    1762804

df -h - Disk usage

[oracle@OEL6 ~]$ df -h /home/oracle/
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3        57G   23G   32G  42% /

jmap -heap displays a snapshot of heap usage

[oracle@OEL6 ~]$ /home/oracle/jdk1.7.0_21/bin/jmap -heap 12659
Attaching to process ID 12659, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 23.21-b01

using thread-local object allocation.
Mark Sweep Compact GC

Heap Configuration:
   MinHeapFreeRatio = 40
   MaxHeapFreeRatio = 70
   MaxHeapSize      = 268435456 (256.0MB)
   NewSize          = 1310720 (1.25MB)
   MaxNewSize       = 17592186044415 MB
   OldSize          = 5439488 (5.1875MB)
   NewRatio         = 2
   SurvivorRatio    = 8
   PermSize         = 21757952 (20.75MB)
   MaxPermSize      = 268435456 (256.0MB)
   G1HeapRegionSize = 0 (0.0MB)

Heap Usage:
New Generation (Eden + 1 Survivor Space):
   capacity = 80543744 (76.8125MB)
   used     = 75933440 (72.415771484375MB)
   free     = 4610304 (4.396728515625MB)
   94.27602471521563% used
Eden Space:
   capacity = 71630848 (68.3125MB)
   used     = 71377800 (68.07117462158203MB)
   free     = 253048 (0.24132537841796875MB)
   99.6467332063415% used
From Space:
   capacity = 8912896 (8.5MB)
   used     = 4555640 (4.344596862792969MB)
   free     = 4357256 (4.155403137207031MB)
   51.112904268152576% used
To Space:
   capacity = 8912896 (8.5MB)
   used     = 0 (0.0MB)
   free     = 8912896 (8.5MB)
   0.0% used
tenured generation:
   capacity = 178978816 (170.6875MB)
   used     = 101403672 (96.7060775756836MB)
   free     = 77575144 (73.9814224243164MB)
   56.65680121607241% used
Perm Generation:
   capacity = 130940928 (124.875MB)
   used     = 130640608 (124.58859252929688MB)
   free     = 300320 (0.286407470703125MB)
   99.7706446681056% used

69269 interned Strings occupying 8985680 bytes.

Validation

WebLogic server check
Check the WebLogic server status
Number of free vs. active threads, and stuck threads
Timeouts in logs (application logs and server logs)
Check the Java heap memory size (free memory)

Unix server check
Check I/O statistics - iostat
Check for memory/swap bottlenecks - free
Check top processes' CPU utilization - top
Check the disk space - df -h

DB server check
Check for hung sessions
Check for long-running queries
Check CPU usage; if beyond a certain threshold, take action
Check for stale statistics

Weblogic Troubleshooting Issues

Weblogic Troubleshooting Issues


Deployment issues:

Code issues: we send the error log to the application team for modification, e.g.:
Caused By: weblogic.utils.ErrorCollectionException:
There are 1 nested errors:
weblogic.j2ee.dd.xml.AnnotationProcessException: Duplicate ejb name 'BDAccountEjbBean' found: annotation 'Stateless' on bean
Deployment failed due to a connection pool issue: we fix the connection pool issue and then redeploy the application.

Out of memory during deployment:
error: java.lang.OutOfMemoryError: PermGen space
This error occurs when the permanent generation runs out of space. In setDomainEnv.sh:
-XX:PermSize=64m
-XX:MaxPermSize=128m
We set the initial PermSize equal to MaxPermSize, restarted the servers, and redeployed the application.
If one or two applications failed when triggered through a script, we fix the issue and deploy them using the console.

jdbc issues:

1) DB down (raise a ticket to the DB team)
2) Incorrect hostname or port number (raise a ticket to the network team)
3) Database connection lost (telnet ipaddress port)
4) Database user account locked (raise a ticket to the DB team to unlock the account)
5) Invalid package error (raise a ticket to the DB team)
6) TNS listener error (raise a ticket to the DB team)
7) Schema does not exist (raise a ticket to the DB team)
8) Cannot allocate resource error:
Initial capacity: 5
Max capacity: 15; increase max to 25
9) Connection leaks (send the error to the application team)
10) Connection timeout (raise a ticket to the DB team for long-running queries)

jms issues:

Stuck message issues:
Check whether the destination queue is available, check the message format, and check the queue name.
Rolling message issues (messages loop continuously):
Delete those messages from the queue.

diskspace issues:

If the disk space usage is 95%-100% then we will delete old log files
[root@localhost ~]# df -kh
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2             3.8G  1.9G  1.8G  52% /
/dev/sda1              46M  9.2M   35M  22% /boot
tmpfs                 506M     0  506M   0% /dev/shm
/dev/sda3              14G  14G    0G  100% /home

du -kh (disk usage)

[root@localhost ~]# du -sh /home
1.8G    /home

[root@localhost bea10.3]# du -sh *
181M    jdk160_05
211M    jrockit_160_05
28K     logs
100M    modules
24K     registry.dat
8.0K    registry.xml
19M     user_projects
556K    utils
429M    wlserver_10.3

delete old log files
/home/bea10.3/user_projects/domains/sherkhan/servers/AdminServer/logs

rm -rf Adminserver.log00001  Adminserver.log00002 Adminserver.log00003
rm -rf Adminserver.out00001 Adminserver.out00002 Adminserver.out00003
rm -rf access.log00001 access.log00002 access.log00003

/home/bea10.3/user_projects/domains/sherkhan/servers/ms1/logs

rm -rf ms1.log00001
rm -rf ms1.out00001
or zip the log files
/home/bea10.3/user_projects/domains/sherkhan/servers/AdminServer/logs
gzip -r *
/home/bea10.3/user_projects/domains/sherkhan/servers/AdminServer
gzip -r logs


High cpu utilization:

top (linux)
prstat (solaris)

top - 07:45:22 up  3:03,  3 users,  load average: 0.16, 0.33, 0.17
Tasks: 113 total,   2 running, 109 sleeping,   0 stopped,   2 zombie
Cpu(s):  0.0%us,  0.7%sy,  0.0%ni, 99.3%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   1035400k total,  1020348k used,    15052k free,    77688k buffers
Swap:  2040212k total,        0k used,  2040212k free,   483724k cached

ps -ef | grep 9523

If the zombie process count is greater than 50, raise a ticket with the Solaris admins.
If any Java process is consuming 95-100% CPU, check the log files for continuously looping messages or JDBC transaction timeouts.
Fix the problem, kill the managed server using kill -9 <pid>, and restart the service instance.
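Before resorting to kill -9, it helps to record which threads were busy; a sketch (PID 9523 matches the grep example above and is hypothetical):

```shell
# One-shot view of the top CPU consumers (non-interactive alternative to top/prstat)
ps -eo pid,pcpu,comm --sort=-pcpu | head -5

# For a Java process, kill -3 makes the JVM write a thread dump to the
# managed server's .out file without stopping it; take 2-3 dumps a few
# seconds apart before killing the process
# kill -3 9523
```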

Application log files not rotating:

Check the disk space; if it is full, delete old logs.
Check whether the log4j properties file is set in the classpath.

404 error:

404 error: page can't be displayed.
10.4.5 404 Not Found

The server has not found anything matching the Request-URI. No indication is given of whether the condition is temporary or permanent.

sol:

1) Check whether the correct URL is being used.
2) Check whether the Apache server is running ( ps -ef | grep httpd) ( ps -ef | grep -i apache)

Apache2.2/bin

httpd -k start
httpd -k stop

apachectl -k start
apachectl -k stop

3) Check the disk space on the Apache server; if it is full, delete old log files (df -kh).

goto Apache2.2/logs

delete old logs

4) Check whether the deployed application is in the Active state.
5) If the deployment has failed, fix the issue and redeploy the application.

500 error:

Internal Server Error. (A "service unavailable" message is strictly HTTP 503, but both usually mean the back-end server is down.)
Check the Apache and WebLogic service instances; if a server is down, start it.

403 error:

Access forbidden.
Check whether the proxy mapping is correct.
Check whether the grants and synonyms ran properly on the database side.
Check whether the user has access to the application.

Issue: replicas.prop file corrupted

<BEA-000386> Server subsystem failed. Reason: java.lang.NumberFormatException: null
java.lang.NumberFormatException: null
        at java.lang.Integer.parseInt(Integer.java:417)
        at java.lang.Integer.parseInt(Integer.java:499)
        at weblogic.ldap.EmbeddedLDAP.validateVDEDirectories(EmbeddedLDAP.java:1097)
        at weblogic.ldap.EmbeddedLDAP.start(EmbeddedLDAP.java:242)
        at weblogic.t3.srvr.SubsystemRequest.run(SubsystemRequest.java:64)
        at weblogic.work.ExecuteThread.execute(ExecuteThread.java:207)
        at weblogic.work.ExecuteThread.run(ExecuteThread.java:176)

This mostly happens when the LDAP files under the ../domain-name/servers/AdminServer/data/ldap/ directory are corrupted. A common cause is the disk filling up: when the associated volume is 100% full, WebLogic can corrupt these files.

sol:
rm -rf /home/bea10.3/user_projects/domains/sherkhan/servers/AdminServer/data/ldap/conf/replicas.prop

To fix the above error, remove the ../domain-name/servers/AdminServer/data/ldap/conf/replicas.prop file and restart the Admin server.

Error: unable to obtain lock file

<May 13, 2012 8:34:54 PM IST> <Critical> <WebLogicServer> <BEA-000362> <Server failed. Reason:

There are 1 nested errors:
weblogic.management.ManagementException: Unable to obtain lock on C:\bea10.3\user_projects\domains\sherkhan\servers\AdminServer\tmp\AdminServer.lok. Server may already be running
        at weblogic.management.internal.ServerLocks.getServerLock(ServerLocks.java:159)
        at weblogic.management.internal.ServerLocks.getServerLock(ServerLocks.java:58)
        at weblogic.management.internal.DomainDirectoryService.start(DomainDirectoryService.java:73)
        at weblogic.t3.srvr.ServerServicesManager.startService(ServerServicesManager.java:459)
        at weblogic.t3.srvr.ServerServicesManager.startInStandbyState(ServerServicesManager.java:164)
        at weblogic.t3.srvr.T3Srvr.initializeStandby(T3Srvr.java:711)
        at weblogic.t3.srvr.T3Srvr.startup(T3Srvr.java:482)
        at weblogic.t3.srvr.T3Srvr.run(T3Srvr.java:440)
        at weblogic.Server.main(Server.java:67)
<May 13, 2012 8:34:54 PM IST> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to FAILED>
<May 13, 2012 8:34:54 PM IST> <Error> <WebLogicServer> <BEA-000383> <A critical service failed. The server will shut itself down>
<May 13, 2012 8:34:54 PM IST> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to FORCE_SHUTTING_DOWN>

rm -rf /home/bea10.3/user_projects/domains/sherkhan/servers/AdminServer/tmp/AdminServer.lok

If the server is already running, ignore this error.
If you are unable to start the server, delete the .lok file and restart the server.
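Before deleting the .lok file, confirm that no running process actually holds it; a sketch (fuser is from the psmisc package, and the path matches the domain used in this post):

```shell
# Is an AdminServer java process still running?
ps -ef | grep -i AdminServer | grep -v grep

# If fuser prints a PID, that process holds the lock; do NOT delete the file
# fuser /home/bea10.3/user_projects/domains/sherkhan/servers/AdminServer/tmp/AdminServer.lok
```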


Error: Users sometimes get a 404 error and are sometimes able to access the application.

1) Check whether all managed servers are in the RUNNING state.
If one of the managed servers is in the SHUTDOWN state, bring it up.
2) Check the HTTP requests in the access.log file of each managed server.
If one managed server's access.log shows 404 errors, check that server's log for errors.
In this case the server log showed:
java.net.BindException: Address already in use
netstat -anp | grep 8002
If the port is already in use by another instance, restart the managed server.
If the issue still persists, raise a request with the network team.

Servers running in ADMIN mode:

A server runs in ADMIN mode due to a deployment or connection pool issue.

Fix the deployment or JDBC problem and resume the servers.

FAILED_NOT_RESTARTABLE mode:

If the disk space is full, servers can go into FAILED_NOT_RESTARTABLE mode.

Stack overflow error:

If you get a StackOverflowError in the log file,
restart the server or increase the thread stack size with the -Xss JVM option (e.g. -Xss1024k or -Xss2048k).

Saturday, March 2, 2019

How to refresh TEST database schemas from PRODUCTION database


Source Database Side
--------------------------:

Step.1
Check the Schema size in the source database.
SQL> select owner, sum(bytes)/1024/1024/1024 "GB" from dba_segments where owner='EXAMPLE' group by owner;

Step.2
Check the tablespace assigned to EXAMPLE schema.
SQL>select default_tablespace from dba_users where username='EXAMPLE';

Step.3
Check the tablespace size used by the EXAMPLE schema.
SQL> select tablespace_name, sum(bytes)/1024/1024 MB from dba_segments where owner='EXAMPLE' group by tablespace_name;

Step.4
Take the count of schema objects, this will be useful after refresh to compare both source and target schema objects.
SQL> select object_type, count(*) from dba_objects where owner='EXAMPLE' group by object_type;

Step.5
Before the export, check the size of the mount point where the export dump files will be stored; if the mount point does not have sufficient space, the export job will fail. So make sure we have enough space in the mount point.

Step.6
Create a data pump directory at the database level for the export. If we do not create one, the default DATA_PUMP_DIR directory is used, so if we are using the default directory, make sure it is mapped to the correct path.
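A sketch of creating and verifying a custom data pump directory (the directory name exp_dir and the OS path are examples, not from this post):

```sql
-- Create a directory object and grant access (the OS path must already exist)
CREATE OR REPLACE DIRECTORY exp_dir AS '/u01/export';
GRANT READ, WRITE ON DIRECTORY exp_dir TO system;

-- Verify where DATA_PUMP_DIR (or the new directory) points
SELECT directory_name, directory_path FROM dba_directories;
```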

Step.7
Now take the export of schema or schemas.

$ nohup expdp \'/ as sysdba\' directory=DATA_PUMP_DIR dumpfile=proddb1_8August_%U.dmp logfile=proddb1_8August.log schemas=EXAMPLE parallel=8 &

Step.8
Copy the dump file from source server to destination server. We basically use scp command but we can also use other server copy command for this. It is always better to copy the dump files in the location which will be used in the impdp directory path. Otherwise we will have to again copy this to the impdp directory path.

We have completed almost all steps in source database side, now we are moving to target database side:

Target Database side
-------------------------:

Step.1
Check if we have enough space available in the data pump director path to hold the dump file coming from the source database server.

Step.2
Check if we have enough space available in the tablespace of the user which is going to be refreshed. For this, it is always better to check the tablespace size of that particular user in the prod database and add that much space before the refresh starts.

Step.3
It is always recommended to take the export backup of the schema on target side as well which we are going to refresh.

$ nohup expdp \'/ as sysdba\' directory=DATA_PUMP_DIR dumpfile=devdb1_8August_%U.dmp logfile=devdb1_8August.log schemas=EXAMPLE parallel=8 &

Step.4
Drop the schema objects only on the target side, i.e. on the development database. Double-check before dropping anything in the database. Make sure we drop only the schema objects, not the complete schema. Sometimes people drop the complete schema as well, but that is not recommended.
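To drop only the objects and not the schema itself, the DROP statements can be generated from dba_objects; a sketch to review before executing (the object-type list and the PURGE/CASCADE clauses are assumptions, adjust per object type):

```sql
-- Generate DROP statements for the EXAMPLE schema's objects; review the
-- output before running it, and never drop the user itself
SELECT 'DROP ' || object_type || ' ' || owner || '.' || object_name
       || CASE object_type WHEN 'TABLE' THEN ' CASCADE CONSTRAINTS PURGE' END
       || ';' AS ddl
FROM   dba_objects
WHERE  owner = 'EXAMPLE'
AND    object_type IN ('TABLE','VIEW','SEQUENCE','PROCEDURE','FUNCTION','PACKAGE','SYNONYM');
```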

Step.5
Import the data into the target schema (schema refresh).

$ nohup impdp \'/ as sysdba\' directory=DATA_PUMP_DIR dumpfile=proddb1_8August_%U.dmp logfile=imp_8August.log schemas=EXAMPLE parallel=8 &
     
Note: While the import is running, it is always better to monitor both the database alert log and the import log. Sometimes an error is reported in the alert log but not captured in the import log, so monitor both.

Step.6
Gather the statistics on the schema which we just refreshed.
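Statistics can be gathered with the DBMS_STATS package; a minimal sketch (the parallel degree is an example value):

```sql
-- Gather fresh optimizer statistics on the refreshed schema
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'EXAMPLE', cascade => TRUE, degree => 4);
```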

Step.7
Count the objects and match them against the production database. Make sure everything is imported properly before confirming completion to the application team.
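The count from Step 4 on the source side can be repeated on the target and any invalid objects recompiled; a sketch:

```sql
-- Compare object counts and status against the source-side counts from Step 4
SELECT object_type, status, COUNT(*)
FROM   dba_objects
WHERE  owner = 'EXAMPLE'
GROUP  BY object_type, status;

-- Recompile any objects left INVALID by the import
EXEC UTL_RECOMP.RECOMP_SERIAL('EXAMPLE');
```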