Monday, August 19, 2019

Datapump Interview Question - Export/Import

Datapump Export/Import Modes:

1.Full Export Mode
expdp scott/tiger directory=data_pump_dir dumpfile=qadar.dmp FULL=y
It requires the EXP_FULL_DATABASE role.

2.Schema Mode
expdp scott/tiger directory=data_pump_dir dumpfile=qadar.dmp schemas=qadar, shoaib, rumaysa
The EXP_FULL_DATABASE role is required to export schemas other than your own.

3.Table Mode
expdp scott/tiger directory=data_pump_dir dumpfile=qadar.dmp tables=test1, test2
The EXP_FULL_DATABASE role is required to export tables that belong to other schemas.

4.Tablespace Mode
expdp scott/tiger directory=data_pump_dir dumpfile=qadar.dmp tablespaces=users, apps_ts_data

5.Transportable Tablespace Mode
expdp scott/tiger directory=data_pump_dir dumpfile=qadar.dmp transport_tablespaces=users, apps_ts_data
It requires the EXP_FULL_DATABASE role.

What pre-checks should you perform before running expdp commands?

Create a directory object, or check the existing directories if you want to use the default one:
select * from dba_directories;

create or replace directory data_pump_dir AS 'c:\temp';
-- default is $ORACLE_BASE/admin/<database_name>/dpdump

grant exp_full_database to scott;

Note: Verify that the environment variables ORACLE_HOME and ORACLE_SID are set properly in your shell. If they are not set then you must set them at the command line to proceed.
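As a minimal sketch, the pre-checks above could look like this (the directory path, directory name, and the user scott are only illustrative):

SQL> create or replace directory data_pump_dir as 'c:\temp';
SQL> grant read, write on directory data_pump_dir to scott;
SQL> grant exp_full_database to scott;
SQL> select directory_name, directory_path from dba_directories;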


How do you check whether Data Pump export/import jobs are running or not?

1.Using the datapump client (expdp & impdp) STATUS command:-

When the export or import job is running, press CTRL+C to get to the respective Data Pump client prompt, or start another Data Pump client session, attach to the running job using the ATTACH clause, and then issue the STATUS command:
Export> status

2.Querying V$SESSION_LONGOPS & V$SESSION views (see the sample query after this list)
3.Querying V$SESSION_LONGOPS & V$DATAPUMP_JOB views
4.Querying DBA_RESUMABLE view.
5.Querying V$SESSION_WAIT view.
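For example, a sample query against V$SESSION_LONGOPS (item 2 above) to track the progress of a running job; the OPNAME filter is only illustrative:

SQL> select sid, serial#, opname, sofar, totalwork,
            round(sofar/totalwork*100,2) pct_done
     from v$session_longops
     where opname like '%EXPORT%'   -- use '%IMPORT%' for import jobs
     and totalwork > 0
     and sofar <> totalwork;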

Oracle introduced Data Pump in which version?
Oracle Data Pump was introduced in 10g Release 1 as a replacement for the classic export/import utilities.


What are the two important packages of Data Pump?
Oracle added two new packages, DBMS_DATAPUMP and DBMS_METADATA, which form the base for Data Pump export/import operations.



What is the important table in Data Pump?
Master table.
The master table structure is created during each job; for example, for a schema export job:
SQL> desc "ADMIN"."SYS_EXPORT_SCHEMA_01"
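To see which master tables (and hence which jobs) currently exist, a simple check against the default job-name pattern can be used, for example:

SQL> select owner, table_name
     from dba_tables
     where table_name like 'SYS_EXPORT%' or table_name like 'SYS_IMPORT%';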

What are the important processes in datapump ?

Client Process
Shadow Process
Master Control Process
The master control process (MCP), which is created by the shadow process upon a START_JOB request from the Data Pump API, is the master process that controls the execution and sequencing of the Data Pump job, using information fetched from the master table. The MCP has a process name of the form <Instance Name>_DMnn_<pid>.



Worker process
The job of worker process is to unload/load the data and metadata from/to the database and also to update the master table. Worker process has a process name as <Instance Name>_DWnn_<pid>.

Parallel Query process
For parallel processing, especially where table partitioning is involved, the worker process itself can create parallel query (PQ) processes, hand the unloading/loading work over to these PQ processes, and act as the query coordinator.

Command and control queue

All the Data Pump processes described above subscribe to this queue. The command and control queue is used for interprocess communication between all the Data Pump processes, to coordinate API commands, loading/unloading requests and responses, file requests, logging, and so on. This queue is named KUPC$C_<job-unique>_<timestamp>.

Status queue

The shadow process subscribes to and reads from this queue, which is used to receive error messages from the MCP, the only process that can write to this queue. This queue is named KUPC$S_<job-unique>_<timestamp>.

What are the data movement methods in Data Pump?

Data Pump has three options for moving data out of/into the database; based on several performance-related factors and user-specified arguments, the MCP chooses the fastest option for that particular operation.

The available options are:-
Direct Path Loads and Unloads.
External Tables.
Insert as select over a network link.

How will you take a consistent export backup? What is the use of FLASHBACK_SCN?
Ans: To take a consistent export backup, you can use the method below:
SQL> select to_char(current_scn) from v$database;
Expdp parfile content:

directory=OH_EXP_DIR
dumpfile=exporahow_4apr_<yyyymmdd>.dmp
logfile=exporahow_4apr_<yyyymmdd>.log
schemas=ORAHOW
flashback_scn=<<current_scn>>

What is the use of the CONSISTENT option in exp?

When you export a table, you are guaranteed that the contents of that table will be consistent with the time that the export of that table was started. This means that if you start exporting the table at 12:00 and someone makes changes to the data in the table at 12:05 and your export of this table finishes at 12:10, then the export will not contain any of the changes made between 12:00 and 12:10. You cannot change this behavior with Oracle's export utility.

The CONSISTENT parameter controls whether or not the entire export is consistent, even between tables. If CONSISTENT=N (the default), then the export of a table will be consistent, but changes can occur between tables. If CONSISTENT=Y, then the entire dump file is consistent with the point in time that you started the export.
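For example, a consistent classic export of the SCOTT schema (file and log names are only illustrative):

exp scott/tiger owner=scott file=scott_consistent.dmp log=scott_consistent.log consistent=y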

What is the use of the DIRECT=Y option in exp?

Normally, export follows the conventional SELECT path: data is read from disk into the buffer cache, passed through the SQL evaluation layer, and then written to the dump file. When we use direct path by specifying DIRECT=Y in the export command, Oracle copies data directly from disk into the PGA, bypassing the buffer cache and the SQL command-processing layer, and writes it from there to the dump file.
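A sketch of a direct-path export (file names are only illustrative); RECORDLENGTH is commonly raised together with DIRECT=Y:

exp scott/tiger owner=scott file=scott_direct.dmp log=scott_direct.log direct=y recordlength=65535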

Which are the common IMP/EXP problems?

ORA-00001: Unique constraint ... violated - Perhaps you are importing duplicate rows. Use IGNORE=N to skip tables that already exist (imp will give an error if the object is re-created), or drop/truncate and re-import the table if you need to do a table refresh.
IMP-00015: Statement failed ... object already exists... - Use the IGNORE=Y import parameter to ignore these errors, but be careful as you might end up with duplicate rows.
ORA-01555: Snapshot too old - Ask your users to STOP working while you are exporting or use parameter CONSISTENT=NO (However this option could create possible referential problems, because the tables are not exported from one snapshot in time).
ORA-01562: Failed to extend rollback segment - Create bigger rollback segments or set parameter COMMIT=Y (with an appropriate BUFFER parameter) while importing.

What are the benefits of the Data Pump technology?
The older export/import technology was client-based. The Data Pump technology is purely server-based: all dump, log, and other files are created on the server by default. Data Pump technology offers several benefits over the traditional export and import utilities.

The following are the main benefits of the Data Pump technology:
Improved performance: The performance benefits are significant if you are transferring huge amounts of data.
Ability to restart jobs: You can easily restart jobs that have stalled due to lack of space or have failed for other reasons. You may also manually stop and restart jobs.
Parallel execution capabilities: By specifying a value for the PARALLEL parameter, you can choose the number of active execution threads for a Data Pump Export or Data Pump Import job.
Ability to attach to running jobs: You can attach to a running Data Pump job and interact with it from a different screen or location. This enables you to monitor jobs, as well as to modify certain parameters interactively. Data Pump is an integral part of the Oracle database server, and as such, it doesn’t need a client to run once it starts a job.
Ability to estimate space requirements: You can easily estimate the space requirements for your export jobs by using the default BLOCKS method or the ESTIMATES method, before running an actual export job (see the “Data Pump Export Parameters” section later in this chapter for details).
Network mode of operation: Once you create database links between two databases, you can perform exports from a remote database straight to a dump file set. You can also perform direct imports via the network using database links, without using any dump files. The network mode is a means of transferring data from one database directly into another database with the help of database links and without the need to stage it on disk.
Fine-grained data import capability: Oracle9i offered only the QUERY parameter, which enabled you to specify that the export utility extract a specified portion of a table’s rows. With Data Pump, you have access to a vastly improved fine-grained options arsenal, thanks to new parameters like INCLUDE and EXCLUDE.
Remapping capabilities: During a Data Pump import, you can remap schemas and tablespaces, as well as filenames, by using the new REMAP_ * parameters. Remapping capabilities enable you to modify objects during the process of importing data by changing old attributes to new values. For example, the REMAP_SCHEMA parameter enables you to map all of user HR’s schema to a new user, OE. The REMAP_SCHEMA parameter is similar to the TOUSER parameter in the old import utility

How to improve exp performance?

1. Set the BUFFER parameter to a high value. Default is 256KB.
2. Stop unnecessary applications to free the resources.
3. If you are running multiple sessions, make sure they write to different disks.
4. Do not export to NFS (Network File System). Exporting to local disk is faster.
5. Set the RECORDLENGTH parameter to a high value.
6. Use DIRECT=yes (direct mode export).

How to improve imp performance?

1. Place the file to be imported in separate disk from datafiles.
2. Increase the DB_CACHE_SIZE.
3. Set LOG_BUFFER to big size.
4. Stop redolog archiving, if possible.
5. Use COMMIT=n, if possible.
6. Set the BUFFER parameter to a high value. Default is 256KB.
7. It's advisable to drop indexes before importing to speed up the import process or set INDEXES=N and building indexes later on after the import. Indexes can easily be recreated after the data was successfully imported.
8. Use STATISTICS=NONE
9. Disable the INSERT triggers, as they fire during import.
10. Set the parameter COMMIT_WRITE=NOWAIT (in Oracle 10g) or COMMIT_WAIT=NOWAIT (in Oracle 11g) during import.


What is the COMPRESSION parameter in expdp?

The COMPRESSION parameter enables the user to specify which data to compress before writing the export data to a dump file. By default, all metadata is compressed before it's written out to an export dump file. You can disable compression by specifying a value of NONE for the COMPRESSION parameter, as shown here:
$ expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr_comp.dmp COMPRESSION=NONE
The COMPRESSION parameter can take any of the following four values:
ALL: Enables compression for the entire operation.
DATA_ONLY: Specifies that all data should be written to the dump file in a compressed format.
METADATA_ONLY: Specifies all metadata be written to the dump file in a compressed format. This is the default value.
NONE: Disables compression of all types.
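For example, to compress both data and metadata (schema and file names are only illustrative; COMPRESSION=ALL and DATA_ONLY require the Advanced Compression option in 11g and later):

$ expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr_all_comp.dmp SCHEMAS=hr COMPRESSION=ALL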

What are Export Filtering Parameters in expdp?
Data Pump contains several parameters related to export filtering. Some of them are substitutes for old export parameters; others offer new functionality.

CONTENT
By using the CONTENT parameter, you can filter what goes into the export dump file. The CONTENT parameter can take three values:
• ALL exports both table data and table and other object definitions (metadata).
• DATA_ONLY exports only table rows.
• METADATA_ONLY exports only metadata.

EXCLUDE and INCLUDE

The EXCLUDE and INCLUDE parameters are two mutually exclusive parameters that you can use to perform what is known as metadata filtering. Metadata filtering enables you to selectively leave out or include certain types of objects during a Data Pump Export or Import job. In the old export utility, you used the CONSTRAINTS, GRANTS, and INDEXES parameters to specify whether you wanted to export those objects. Using the EXCLUDE and INCLUDE parameters, you now can include or exclude many other kinds of objects besides the four objects you could filter previously. For example, if you don’t wish to export any packages during the export, you can specify this with the help of the EXCLUDE parameter.
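For example, to skip all packages during a schema export (names are only illustrative):

$ expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr_nopkg.dmp SCHEMAS=hr EXCLUDE=PACKAGE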

QUERY

The QUERY parameter serves the same function as it does in the traditional export utility: it lets you selectively export table row data with the help of a SQL statement. The QUERY parameter permits you to qualify the SQL statement with a table name, so that it applies only to a particular table. Here’s an example:

QUERY=OE.ORDERS:"WHERE order_id > 100000"

In this example, only those rows in the orders table (owned by user OE) where the order_id is greater than 100,000 are exported.
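Because the WHERE clause contains quotes that most shells will mangle, QUERY is usually placed in a parameter file; a sketch with illustrative names:

$ cat exp_orders.par
DIRECTORY=dpump_dir1
DUMPFILE=orders_filtered.dmp
TABLES=oe.orders
QUERY=oe.orders:"WHERE order_id > 100000"
$ expdp oe/oe PARFILE=exp_orders.par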

What is the use of the INDEXFILE option in imp?
It writes the DDL of the objects in the dump file into the specified file.

What is the use of the IGNORE option in imp?
It will ignore object-creation errors during the import and continue the import.

What are the differences between expdp and exp (Data Pump or normal exp/imp)?

Data Pump is server centric (files will be at server).
Data Pump has APIs, from procedures we can run Data Pump jobs.
In Data Pump, we can stop and restart the jobs.
Data Pump will do parallel execution.
Tapes & pipes are not supported in Data Pump.
Data Pump consumes more undo tablespace.
Data Pump import will create the user, if user doesn’t exist.

Why expdp is faster than exp (or) why Data Pump is faster than conventional export/import?

Data Pump is block mode, exp is byte mode.
Data Pump will do parallel execution.
Data Pump uses direct path API.

How to improve expdp performance?
Use the PARALLEL option, which increases the number of worker processes. It should be set based on the number of CPUs.

How to improve impdp performance?
Use the PARALLEL option, which increases the number of worker processes. It should be set based on the number of CPUs (see the example below).
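For example, a parallel full export (names are only illustrative); the %U wildcard generates one dump file per worker, and PARALLEL=4 assumes roughly four available CPUs:

expdp system/manager DIRECTORY=data_pump_dir DUMPFILE=full_%U.dmp LOGFILE=full_exp.log FULL=y PARALLEL=4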

In Data Pump, where is the job information stored? (Or: if you restart a Data Pump job, how does it know from where to resume?)

Whenever a Data Pump export or import is running, Oracle creates a table named after the JOB_NAME (the master table), which is dropped once the job completes. From this table, Oracle finds out how much of the job has been completed and from where to continue.
Default export job name will be SYS_EXPORT_XXXX_01, where XXXX can be FULL or SCHEMA or TABLE.
Default import job name will be SYS_IMPORT_XXXX_01, where XXXX can be FULL or SCHEMA or TABLE.
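The jobs (and therefore the master tables) can also be listed from the DBA_DATAPUMP_JOBS dictionary view, for example:

SQL> select owner_name, job_name, operation, job_mode, state
     from dba_datapump_jobs;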

What is the order of importing objects in impdp?

 Tablespaces
 Users
 Roles
 Database links
 Sequences
 Directories
 Synonyms
 Types
 Tables/Partitions
 Views
 Comments
 Packages/Procedures/Functions
 Materialized views

How to import only metadata?

CONTENT= METADATA_ONLY
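For example (names are only illustrative):

impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp SCHEMAS=hr CONTENT=METADATA_ONLY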

How to import into different user/tablespace/datafile/table?

REMAP_SCHEMA
REMAP_TABLESPACE
REMAP_DATAFILE
REMAP_TABLE
REMAP_DATA
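A sketch combining two of these (the source and target schema/tablespace names are only illustrative):

impdp system/manager DIRECTORY=data_pump_dir DUMPFILE=hr.dmp REMAP_SCHEMA=hr:hr_test REMAP_TABLESPACE=users:users2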

You are getting an undo tablespace error during import; how will you avoid it?
We can use the COMMIT=Y option.

Can we import an 11g dump file into a 10g database using Data Pump?
Yes, provided the export is taken with the VERSION parameter set to the target release (for example, VERSION=10.2); the resulting dump file can then be imported into 10g.
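For example, taking the export on the 11g source so that a 10g impdp can read the dump file (names are only illustrative):

expdp system/manager DIRECTORY=data_pump_dir DUMPFILE=hr_v102.dmp SCHEMAS=hr VERSION=10.2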

How to determine the Schemas inside an Oracle Data Pump Export file ?
strings dumpfile.dmp | grep SCHEMA_LIST
(or)
$ strings myfile.dmp|more

How to suspend and resume export jobs (attaching and re-attaching to jobs)?
Start the export with a job name, e.g. the parameter JOB_NAME=myfullJob.

Press CTRL+C to enter interactive mode; you will then get the Export> prompt, where you can type interactive commands. To re-attach from another session:
expdp \"/ as sysdba\" attach=myfullJob
Export> status
Export> stop_job
Export> start_job
Export> kill_job
Export> continue_client

What are the processes involved in expdp/impdp at the back end?
Client process: This process is initiated by the client utility (expdp/impdp) and makes calls to the Data Pump API. Once the Data Pump job is initiated, this process is no longer necessary for the job.
Shadow process: When the client logs in to the database, a foreground (shadow) process is created. It services the client's Data Pump API requests, creates the master table, and creates the advanced queues used for communication. Once the client process ends, the shadow process also goes away.
Master control process: The MCP controls the execution of the Data Pump job; there is one MCP per job. The MCP divides the Data Pump job into various metadata and data load or unload tasks and hands them over to the worker processes.
Worker processes: The MCP creates worker processes based on the value of the PARALLEL parameter. The worker processes perform the tasks requested by the MCP.

How to get an estimated dump file size before exporting?
Run expdp with ESTIMATE_ONLY=Y and ESTIMATE=BLOCKS (or ESTIMATE=STATISTICS).
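For example (the schema name is only illustrative); note that no DUMPFILE is specified for an estimate-only run:

expdp system/manager DIRECTORY=data_pump_dir SCHEMAS=hr ESTIMATE_ONLY=y ESTIMATE=blocks LOGFILE=estimate.log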

What would be the compression ratio/percentage when we use the COMPRESSION parameter?
It would be around 80 percent when compared with dumps taken without compression.

What will you do when an export is running slow?
Skip exporting indexes and use the BUFFER and DIRECT parameters.

The import process is running slow; what can you do to make it faster?
We can skip importing indexes, as index creation is the most time-consuming part of an import.

How to reorganize a schema?
We can use the DBMS_REDEFINITION package for online reorganization of schema objects. Alternatively, you can recreate or reorganize the schema using the export/import or Data Pump utilities.

How can we improve EXP performance?
1.Set the BUFFER parameter to a high value (e.g. 2M)
2.If you run multiple export sessions, ensure they write to different physical disks.

How can we find the elapsed time for particular objects during impdp or expdp?
Data Pump has an undocumented METRICS parameter; with METRICS=Y the log shows how long it took to process the different object types.
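For example (names are only illustrative); with METRICS=y the log file shows the number of objects and the elapsed time for each object type:

expdp system/manager DIRECTORY=data_pump_dir DUMPFILE=hr.dmp SCHEMAS=hr METRICS=y LOGFILE=hr_exp.log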

Is it possible to export two tablespaces at a time and import them into a single tablespace?
You can export two tablespaces in a single job, but you cannot import them into a single tablespace in one step.

What exactly does the dump file contain? What happens if we specify DIRECT=Y, and what if we don't?
If you specify DIRECT=Y, the export bypasses the buffer cache and the SQL command-processing layer.

What content will be in the dump file?
Run impdp with the SQLFILE parameter, e.g. impdp ... dumpfile=<dumpfile> sqlfile=content.sql; this writes the DDL of all the objects in the dump file (including the schema/CREATE USER statements) to content.sql.

How can we check whether a Data Pump dump file is corrupted or not?
Use the SQLFILE parameter with an import run to detect corruption.
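For example (names are only illustrative), extracting the DDL without actually importing anything; if the dump file is unreadable, this run will fail with an error:

impdp system/manager DIRECTORY=data_pump_dir DUMPFILE=hr.dmp SQLFILE=hr_ddl.sql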

If we miss one row while importing, how do we get that particular row?
Re-import it from the backup table, or re-import with the QUERY option to import only the particular rows.

What is the meaning of TABLE_EXISTS_ACTION=TRUNCATE in impdp?
TRUNCATE deletes the existing rows and then loads rows from the source; in other words, it truncates the existing table's rows, leaves the table definition in place, and replaces the rows with those from the dump file being imported.
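For example (names are only illustrative):

impdp system/manager DIRECTORY=data_pump_dir DUMPFILE=hr.dmp TABLES=hr.employees TABLE_EXISTS_ACTION=truncate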
