Thursday, October 29, 2020

Oracle SQL Developer Shortcuts




  

Saturday, October 17, 2020

Troubleshooting Database Performance Flow






Friday, October 16, 2020

Administering the DDL Log Files in 12c


The DDL log is created only if the ENABLE_DDL_LOGGING initialization parameter is set to TRUE. When this parameter is set to FALSE, DDL statements are not included in any log. A subset of executed DDL statements is written to the DDL log.


How to administer the DDL Log?

--> Enable the capture of certain DDL statements to a DDL log file by setting ENABLE_DDL_LOGGING to TRUE.

--> The DDL log contains one log record for each DDL statement.

--> There are two DDL logs containing the same information:

    --> XML DDL log: named log.xml

    --> Text DDL log: named ddl_<sid>.log


When ENABLE_DDL_LOGGING is set to true, the following DDL statements are written to the log:

ALTER/CREATE/DROP/TRUNCATE CLUSTER

ALTER/CREATE/DROP FUNCTION

ALTER/CREATE/DROP INDEX

ALTER/CREATE/DROP OUTLINE

ALTER/CREATE/DROP PACKAGE

ALTER/CREATE/DROP PACKAGE BODY

ALTER/CREATE/DROP PROCEDURE

ALTER/CREATE/DROP PROFILE

ALTER/CREATE/DROP SEQUENCE

CREATE/DROP SYNONYM

ALTER/CREATE/DROP/RENAME/TRUNCATE TABLE

ALTER/CREATE/DROP TRIGGER

ALTER/CREATE/DROP TYPE

ALTER/CREATE/DROP TYPE BODY

DROP USER

ALTER/CREATE/DROP VIEW


Example

$ more ddl_orcl.log

Thu Nov 15 08:35:47 2012

diag_adl:drop user app_user


Locate the DDL Log File

$ pwd

/u01/app/oracle/diag/rdbms/orcl/orcl/log

$ ls

ddl ddl_orcl.log debug test

$ cd ddl

$ ls

log.xml


Notes: 

- Setting the ENABLE_DDL_LOGGING parameter to TRUE requires licensing the Database Lifecycle Management Pack.

- This parameter is dynamic, so you can turn it on and off at any time:

- alter system set ENABLE_DDL_LOGGING=true/false;


How to set Oracle path in Linux



[oracle@test04 ~]$vi MQTEST.env 

export ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1

export PATH=$ORACLE_HOME/bin:$PATH:.

export ORACLE_SID=MQTEST

export ALERT=/u01/app/oracle/diag/rdbms/mqprod/MQTEST/trace

:wq!
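Once the file exists, it must be sourced (with `.` or `source`) rather than executed, so the variables land in the current shell. A minimal sketch using the same example paths as above:

```shell
# Create the environment file (paths are the example values from above;
# adjust them to your own installation)
cat > MQTEST.env <<'EOF'
export ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1
export PATH=$ORACLE_HOME/bin:$PATH:.
export ORACLE_SID=MQTEST
export ALERT=/u01/app/oracle/diag/rdbms/mqprod/MQTEST/trace
EOF

# Source it in the current shell; running it as ./MQTEST.env would set the
# variables in a subshell and they would be lost when it exits
. ./MQTEST.env

echo "$ORACLE_SID"
```

Sourcing the file from ~/.bash_profile makes the settings permanent for the oracle user.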

Different types of Shared Memory available in Solaris



1) Shared Memory (SHM)

-basic anonymous shared memory

-also called Pageable Shared Memory

-pages can be swapped out during memory shortages which affects the application performance


2) Intimate Shared Memory (ISM)

-is used by default in Solaris

-cannot be dynamically resized

-if you want to modify (e.g. resize) a segment you must restart the processes

-does not need swap space

-segments are automatically locked by the Solaris kernel


3) Dynamic Intimate Shared Memory (DISM)

-allows shared memory segments to be adjusted dynamically (resized or relocated) (e.g. SGA can be resized) - no restart of the processes necessary

-shared memory segments can be locked by the applications by using mlock(3C)

-kernel virtual-to-physical memory address translation structures are shared between processes that attach to the DISM segment, saving kernel memory and CPU time

-needs swap space as it makes reservations on it


4) Optimized Shared Memory (OSM)

-the recommended shared memory type for database workloads

-provides similar benefits as DISM but without some of the requirements of DISM


Reference: My Oracle Support Doc ID 1010818.1


ggsci fails with error 'symbol lookup error' on new installation



Symptoms:

Installing OGG 12.3.0.2 and then invoking ggsci produces the error below:

./ggsci: symbol lookup error: /optware/software/gg12_3/libggparam.so: undefined symbol: _ZZ11ggGetConfigE4pcfg

 

Cause:

OGG version 12.3 is not compatible with Linux 5.

 ++ Certification matrix link

https://mosemp.us.oracle.com/epmos/faces/CertifyResults?searchCtx=st%5EANY%7Cpa%5Epi%5E922_Oracle+GoldenGate%7Evi%5E435776%7Epln%5EAny%7E%7C&_afrLoop=246908612827636


Solution:

OGG 12.3 is not compatible with Linux 5.

To confirm, check OS version

> uname -a

You may use OGG 12.2 versions, which support Linux 5.


Reference: My Oracle Support Doc ID 2352254.1

./ggsci: error while loading shared libraries: libnnz12.so: cannot open shared object file: No such file or directory



Error:

[oracle@test04 oracle]$ ./ggsci

./ggsci: error while loading shared libraries: libnnz12.so: cannot open shared object file: No such file or directory


Solution:

[oracle@test04 oracle]$ echo $ORACLE_HOME

/u01/oracle/product/12.2.0/db_1

[oracle@test04 oracle]$ export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib


[oracle@test04 oracle]$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle

Version 12.1.2.0.0 17185003 OGGCORE_12.1.2.0.0_PLATFORMS_130924.1316_FBO

Linux, x64, 64bit (optimized), Oracle 12c on Sep 25 2013 02:33:54

Operating system character set identified as UTF-8.

Copyright (C) 1995, 2013, Oracle and/or its affiliates. All rights reserved.

GGSCI (test04.oracle.com) 1> 
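The diagnosis behind this class of error can be sketched generically with ldd, which shows how the loader resolves a binary's shared libraries (/bin/ls is used here as a stand-in, since ggsci is not assumed to be installed):

```shell
# ldd prints which shared libraries a binary needs and where they resolve from;
# /bin/ls stands in here for ggsci
ldd /bin/ls

# Any line reading "not found" names the library the loader cannot locate.
# Pointing LD_LIBRARY_PATH at the directory containing it (as done above with
# $ORACLE_HOME/lib) fixes the lookup for the current shell:
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
```

To make the fix permanent, add the export to the environment file of the OS user that runs GoldenGate.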

Tuesday, October 13, 2020

Oracle SOA Suite Backup & Recovery



Performing a  Full Offline Backup

To perform a full offline backup, you copy the directories that contain Oracle Fusion Middleware files. Archive and compress the source Middleware home using your preferred archiving tool, and ensure that the tool preserves file permissions. For example, for online backups on Windows, use copy; for offline backups on Windows, use copy, xcopy, or jar. Do not use WinZip, because it does not work with long filenames or extensions.

For Linux and UNIX, use tar.


1. Stop all the processes.

2. Back up the Middleware home (MW_HOME) on all hosts. For example:
tar -cf mw_home_backup.tar $MW_HOME/*

3. If the domain is not located within the Middleware home, back up the Administration Server domain separately. For Example
tar -cf domain_home_backup.tar $MW_HOME/user_projects/domains/domain_name/*

In most cases, you do not need to back up the Managed Server domain directories separately, because the Administration Server domain contains information about the Managed Servers in its domain.

4. If the Oracle instance home is not located within the Middleware home, back up the Oracle instance home. The Oracle instance home contains configuration information about system components, such as Oracle HTTP Server or Oracle Internet Directory.

For Example:
tar -cf sc_home_backup.tar $ORACLE_INSTANCE/*

5. If a Managed Server is not located within the domain, back up the Managed Server directory. 
For example:
tar -cf mg1_home_backup.tar
$MW_HOME/user_projects/domains/domain_name/servers/server_name/*

6. Back up the OraInventory directory. For example:
tar -cf Inven_home_backup.tar /u01/app/oracle/OraInventory

7. Back up OraInst.loc and oratab files, which are located in the /etc directory.

8. Back up the database repositories using the Oracle Recovery Manager (RMAN).

Note: This completes the backup of Oracle SOA Suite.
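The tar-based steps above can be rehearsed end to end against a throwaway directory tree (the paths below are dummies for illustration, not a real installation):

```shell
# Build a dummy Middleware home to stand in for $MW_HOME
MW_HOME=/tmp/demo_mw_home
mkdir -p $MW_HOME/user_projects/domains/soa_domain/config
echo '<domain/>' > $MW_HOME/user_projects/domains/soa_domain/config/config.xml

# -p preserves permissions, as the backup tool is required to do
tar -cpf /tmp/mw_home_backup.tar -C / "${MW_HOME#/}"

# Always verify the archive before relying on it
tar -tf /tmp/mw_home_backup.tar | grep config.xml
```

The same pattern applies to the domain home, instance home, and OraInventory archives in the steps above.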


Recovery Of Oracle SOA Suite

Note: Stop all the relevant processes that are running before recovery.

Recovering a Middleware Home

1. Recover the Middleware home directory from backup. For example:
cd $MW_HOME
tar -xf mw_home_backup.tar

Recovering an Oracle WebLogic Server Domain

1. Recover the domain directory from backup:
cd DOMAIN_HOME
tar -xf domain_backup.tar
Note: If you want to recover only the Administration Server configuration, restore the domain home backup to a temporary location, then restore the config directory to $DOMAIN_HOME/config.


Recovering an Oracle Instance Home

1. Recover the Oracle instance home directory from a backup file. For example:
cd ORACLE_INSTANCE
tar -xf sc_home_backup.tar

Note: If you are recovering an Oracle instance home that was deregistered from the domain, also register the instance again as shown below.

opmnctl registerinstance -adminHost admin_server_host -adminPort admin_server_port -adminUsername username -adminPassword password -oracleInstance ORACLE_INSTANCE_dir -oracleHome ORACLE_HOME_dir -instanceName Instance_name -wlserverHome Middleware_Home

Recovering A Managed Server

RECOVERING A MANAGED SERVER WHEN IT CANNOT BE STARTED

In this scenario, the Managed Server does not operate properly or cannot be started, because the configuration has been deleted or corrupted, or because the configuration was mistakenly changed and you cannot ascertain what was changed.

1. Recover the Middleware home from the backup, if required.
tar -xf mw_home_backup.tar

2. Create a domain template jar file for the Administration Server, using the pack utility. For
example:
pack.sh -domain=$MW_HOME/user_projects/domains/domain_name -template=/tmp/temp.jar -template_name=test_install -template_author=myname -log=/tmp/logs/my.log -managed=true

Note: Specifying the -managed=true option packs up only the Managed Servers. If you want to pack the entire domain, omit this option.

3. Unpack the domain template jar file, using the unpack utility.

unpack.sh -template=/tmp/temp.jar -domain=$MW_HOME/user_projects/domains/domain_name -log=/tmp/logs/new.log

4. Start the Managed Server. The Managed Server connects to the Administration Server and updates its configuration.

RECOVERING A MANAGED SERVER THAT HAS A SEPARATE DIRECTORY

When Oracle SOA Suite is configured in a domain and no Managed Servers share the domain directory with the Administration Server, you must restore the Managed Server directory. For example, a domain contains two Managed Servers, one of which contains Oracle SOA Suite, but neither of the Managed Servers' directories is in the same directory structure as the Administration Server.

1. Restore the Managed Server from backup:
cd ManagedServer_Home
tar -xf managed_server_backup.tar


2. Restart the Managed Server.
Note: This Hands-on completes with Backup and Recovery of Oracle SOA Suite.


Oracle WebLogic Backup & Recovery



What is a Backup ?

The process of copying or archiving data so that it can be restored in the event of a failure.

Things to backup

- Database Tier

- Application Tier

- Connector Server

- Operating System Files


Type of Backups in WebLogic ?

- Offline as Cold Backup

- Online as Hot Backup (For DB , enable archive log)

- Size of Backup

- Full

- Incremental

- Time of backup

- After big change or patch

- Daily, Weekly, Monthly, Yearly


What are Offline & Online Backups ?

Offline Backup

- Environment is down and service is not available to user before backing up files.

- Mainly done after big patches or upgrades


Online Backup

- Environment is available during the backup

- Database must be in archive log mode for online backup

- Generally : Weekly full and incremental daily


What is Full vs Partial Backup ?

Full Backup:

- All directories of Weblogic Server


Partial Backup:

- Few files before manual change

- LDAP backup of subset of OID entity


Backup Considerations in Weblogic ?

- Backup before any big change

- Ensure database is in archive log mode and you can recover point in time

- Take backup before & after any patch

- Take regular backup of Domain & after config change


WebLogic Directory/Component to Backup ?

- Middleware Home

- Instance Home for System Components

- Domain Home for Java Components

- Cluster folder if outside DOMAIN_HOME

- oraInventory

- $HOME/beahomelist

- /etc (Linux, IBM AIX), /var/opt/oracle (Other Unix)

- Windows (Registry)

- HKEY_LOCAL_MACHINE\Software\oracle

- HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services

- Oracle Database (RMAN or Manual)


What are Backup Tools ?

- For Application Tier

- O.S. Copy commands like cp, copy, xcopy, tar, jar

- Take care of

- Symbolic Links if any

- Support for long filenames

- Preserve permissions and ownership

- For Database Tier

- Oracle Home (O.S. Commands)

- Oracle Database (RMAN, Manual)


What is App Tier Backup?

- Directory to Backup

- Individual Files

- Domain Home (regular)

- MW_HOME (after patching)

- Tools for Backup

- cp –r , tar (Unix Utility)

- Backup tools


What is DB Tier Backup?

- Ask DBAs to backup

- Tools for Backup

- RMAN

- Copy data files in hot backup


Recommended Backup Strategy For WebLogic

- Perform a full offline backup

- Perform an online backup of run-time artifacts

- Perform a full or incremental backup of your databases
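A date-stamped domain backup with a restore rehearsal can be sketched as follows (demo paths only; tar's -p flag preserves the permissions and ownership called out above):

```shell
# Dummy domain directory standing in for $DOMAIN_HOME
DOMAIN_HOME=/tmp/demo_domain
BACKUP_DIR=/tmp/backups
mkdir -p "$DOMAIN_HOME/config" "$BACKUP_DIR"
echo '<domain/>' > "$DOMAIN_HOME/config/config.xml"
chmod 600 "$DOMAIN_HOME/config/config.xml"

# Date-stamped archive; -p preserves permissions
STAMP=$(date +%Y%m%d_%H%M%S)
tar -cpf "$BACKUP_DIR/domain_${STAMP}.tar" -C /tmp demo_domain

# Rehearse the restore into a scratch area and confirm permissions survived
mkdir -p /tmp/restore
tar -xpf "$BACKUP_DIR/domain_${STAMP}.tar" -C /tmp/restore
stat -c %a /tmp/restore/demo_domain/config/config.xml
```

Rehearsing the restore is part of the strategy: a backup that has never been restored is not yet a backup.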


What is Recovery ?

- Process of restoring from backup or archiving

- Recover the entire OIM or just the files that need recovery

- Only Offline recovery is possible

- Ensure that the file system and database are in sync if patches applied on FMW also updated the DB

- If restoring from a hot backup on the apps tier, there could be stale lock and PID files; remove them before starting services


What is App Tier Restore ?

- Directory to Restore

- Individual Files

- Domain Home

- MW_HOME

- Tools for Restore

- cp –r , untar (Unix Utility)

- Other Restore tools


What is DB Tier Restore ?

- Ask DBAs to restore

- Tools for Backup

- RMAN RESTORE

- Manual point in time recovery


WebLogic Backup Recommendations

- Before and after making configuration changes to a component or cluster

- Backup: All of the runtime backup artifacts

- Mode: Online


- Prior to deploying a custom pure Java EE application to a Managed Server or cluster

- Backup: All of the runtime backup artifacts

- Mode: Online


- After any major architectural changes to deployment architecture (such as scale out, creation of servers, or creation of clusters)

- Backup: All of the runtime backup artifacts

- Mode: Online


- Before and after product binary files (such as the WebLogic Home) are patched or upgraded

- Backup: All of the backup artifacts

- Mode: Offline


- Before and after patching or upgrading (which may impact BEA home and database)

- Backup: All of the backup artifacts

- Mode: Offline





Creating a Services In Oracle RAC



To check the configuration:

srvctl config database -d RUMPROD


To create a new service:

srvctl add service -d RUMPROD -s QSHO -r RUMPROD1,RUMPROD2


To start a service on one node:

srvctl start service -d RUMPROD -s QRUM -n test02 -o open


To start a service on all nodes:

srvctl start service -d RUMPROD -s QSHO -o open


To stop service in one node/ instance:

srvctl stop service -d RUMPROD -s QSHO -n test01


To stop service in all nodes/ instances:

srvctl stop service -d RUMPROD -s QSHO


To remove a service:

Oracle best practice is to first stop the service, then remove it.

srvctl stop service -d RUMPROD -s QSHO

srvctl remove service -d RUMPROD -s QSHO


To check status of database and list services:

srvctl status database -d db_unique_name -v

Or using View :

SQL> select * from gv$active_services;






Saturday, October 10, 2020

Soft Parse Analysis in AWR



Before: (AWR report screenshot)

After: (AWR report screenshot)
Note:

Oracle recommends a soft parse ratio of ideally 99% or higher; at a minimum, target the recommended value of 90%.
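Outside of AWR, the same ratio can be computed directly from v$sysstat; a sketch (requires SELECT privilege on the dynamic performance views):

```sql
-- Soft parse ratio: the share of parses that did NOT require a hard parse
SELECT ROUND(100 * (1 - hard.value / total.value), 2) AS soft_parse_ratio
FROM   v$sysstat hard,
       v$sysstat total
WHERE  hard.name  = 'parse count (hard)'
AND    total.name = 'parse count (total)';
```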

Friday, October 9, 2020

Network Wait: SQL*Net more data from client in AWR report








Resize Operation Completed For File#



Symptoms:

File extension messages are seen in the alert log, although no explicit file resize DDL was issued.

Resize operation completed for file# 45, old size 26M, new size 28M

Resize operation completed for file# 45, old size 28M, new size 30M

Resize operation completed for file# 45, old size 30M, new size 32M

Resize operation completed for file# 36, old size 24M, new size 26M


Changes:

NONE


Cause:

These file extension messages are the result of a diagnostic enhancement, delivered through an unpublished bug, that records automatic datafile resize operations in the alert log with a message of the form:

"File NN has auto extended from x bytes to y bytes"

This can be useful when diagnosing problems which may be impacted by a file resize. 


Solution:

In busy systems, the alert log can be completely flooded with file extension messages. A new hidden parameter, "_disable_file_resize_logging", has been introduced through bug 18603375 to stop these messages from being logged to the alert log.

(Unpublished) Bug 18603375 - EXCESSIVE FILE EXTENSION MESSAGE IN ALERT LOG 

Set the below parameter along with the fix.

SQL> alter system set "_disable_file_resize_logging"=TRUE; (its default value is FALSE)

The bug fix 18603375 is included in 12.1.0.2 onwards.


Reference: My Oracle Support Doc ID 1982901.1

Monday, October 5, 2020

Starting and Stopping Pluggable Databases In 12c



Starting the pdbs:

Check the status of the PDBs:

sqlplus '/ as sysdba'

SQL> select name, open_mode from v$pdbs;

SQL> show pdbs

Note: All the PDBs are in MOUNT state.


Start a single PDB

show pdbs

alter pluggable database lub1 open;


Start all PDBs with a single command

show pdbs

alter pluggable database all open;

Note: While the PDBs are being opened, the database character set and pluggable database status information are written to the alert log.


Stopping the pdbs:

Check the status of the PDBs:

sqlplus '/ as sysdba'

SQL> select name, open_mode from v$pdbs;

SQL> show pdbs

Note: All the PDBs are in READ WRITE state.


Stop a single PDB

show pdbs

alter pluggable database lub1 close immediate;


Stop all PDBs with a single command

show pdbs

alter pluggable database all close immediate;

Note: When a PDB is closed, its buffers are flushed from the buffer cache.


Create a trigger to open all pluggable databases.

sqlplus '/ as sysdba'

CREATE OR REPLACE TRIGGER pdb_startup AFTER STARTUP ON DATABASE

BEGIN

EXECUTE IMMEDIATE 'alter pluggable database all open';

END pdb_startup;

/
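As an alternative to the trigger, 12.1.0.2 and later can remember each PDB's open mode across CDB restarts using saved state; a sketch:

```sql
-- Open the PDBs, then preserve their current state across CDB restarts
ALTER PLUGGABLE DATABASE ALL OPEN;
ALTER PLUGGABLE DATABASE ALL SAVE STATE;
```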







Increasing Load On The Database Server



Creating a table:

create table t (id number, sometext varchar2(50),my_date date) tablespace test;

Now we will create a simple procedure to load bulk data:

create or replace procedure manyinserts as
  v_m number;
begin
  for i in 1..10000000 loop
    select round(dbms_random.value() * 44444444444) + 1 into v_m from dual;
    insert /*+ new2 */ into t values (v_m, 'DOES THIS'||dbms_random.value(), sysdate);
    commit;
  end loop;
end;
/

This insert will now be executed in 10 parallel sessions using dbms_job, which will artificially increase the load on the database:

create or replace procedure manysessions as
  v_jobno number := 0;
begin
  for i in 1..10 loop
    dbms_job.submit(v_jobno, 'manyinserts;', sysdate);
  end loop;
  commit;
end;
/

Now execute manysessions, which will spawn 10 parallel sessions:

exec manysessions;


Check the table size:

select bytes/1024/1024/1024  from dba_segments where segment_name='T';
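To stop the artificial load afterwards, the submitted jobs can be removed; a sketch, assuming the jobs were submitted by the current user (a job that is already running finishes its current execution before disappearing):

```sql
-- Remove the jobs submitted by manysessions to stop the load
BEGIN
  FOR j IN (SELECT job FROM user_jobs WHERE what = 'manyinserts;') LOOP
    dbms_job.remove(j.job);
  END LOOP;
  COMMIT;
END;
/
```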

ORA-01940: Cannot Drop A User that is Currently Connected



While dropping a user in an Oracle database, you may hit the error below (ORA-01940).

Problem:

SQL> drop user SCOTT cascade

2 /

drop user SCOTT cascade

*

ERROR at line 1:

ORA-01940: cannot drop a user that is currently connected


Solution:

1. Find the sessions running from this userid:

SQL> SELECT SID,SERIAL#,STATUS from v$session where username='SCOTT';

SID SERIAL# STATUS

---------- ---------- --------

44 56381 INACTIVE

323 22973 INACTIVE

2. Kill the sessions:

SQL> ALTER SYSTEM KILL SESSION '44,56381' immediate;

System altered.

SQL> ALTER SYSTEM KILL SESSION '323,22973' immediate;

System altered.

SQL> SELECT SID,SERIAL#,STATUS from v$session where username='SCOTT';

no rows selected

3. Now drop the user:

SQL> drop user SCOTT cascade;

User dropped.
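When the user has many sessions, the kill statements can be generated instead of typed by hand; a sketch:

```sql
-- Generate one kill statement per session of the user to be dropped
SELECT 'ALTER SYSTEM KILL SESSION ''' || sid || ',' || serial# || ''' IMMEDIATE;'
       AS kill_cmd
FROM   v$session
WHERE  username = 'SCOTT';
```

Spool the output to a script and run it, then re-check v$session before dropping the user.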


Saturday, October 3, 2020

Parsing In Oracle


Parsing is the first phase of SQL execution. It is followed by the Execute and Fetch stages.


PARSING BASICS:

Whenever a SQL statement is executed, the Oracle engine performs the below actions on it:

1. Validate the syntax

2. Validate the objects being referenced in the statement

3. Check the privileges assigned to the user executing the statement

4. Verify whether the statement is already available in the shared pool: the Oracle engine calculates a hash value for the SQL statement and looks it up in the shared pool

5. If the statement is not already present, allocate shared memory and create a cursor in the shared pool

6. Generate the execution plan


TYPES OF PARSES

-HARD parse: The statement is not available in shared memory, or it is a brand-new statement that the user is trying to execute. A hard parse may also be required when the shared pool is small and the old statement has aged out of it. All of the steps mentioned above must be performed in this situation. A hard parse requires extra system resources. This is also known as a 'Library Cache Miss'.

-SOFT parse: The statement was executed earlier, has already been parsed, and is available in memory, so Oracle needs to perform only steps 1-3 above; the rest of the work was already done. It is like working hard once and reaping the benefits multiple times. This is also known as a 'Library Cache Hit', since the parsed statement is found and reused from the library cache.

 

Why hard parses should be avoided:

There are two key reasons why hard parses should be kept to bare minimum required:

-Generation of an execution plan is a very CPU-intensive operation.

-Memory in the shared pool is limited, and memory operations are serialized. These operations are protected by shared pool latches, so when many hard parses are happening, other processes in the database must wait in a queue for the shared pool latch. Hard parsing therefore puts pressure on both the shared pool latch and the library cache latch.
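The soft-parse benefit is easiest to see with bind variables; a sketch using the SCOTT demo schema (the emp table and empno column are assumptions from that sample schema):

```sql
-- Each distinct literal is a new statement text, so each gets a hard parse:
SELECT ename FROM emp WHERE empno = 7369;
SELECT ename FROM emp WHERE empno = 7499;

-- With a bind variable the text is identical across executions, so after the
-- first hard parse every execution is a soft parse of the same shared cursor:
VARIABLE v_empno NUMBER
EXEC :v_empno := 7369
SELECT ename FROM emp WHERE empno = :v_empno;
```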

Analyzing Basic Performance Issues



Whenever there is a performance issue, it is important that DBAs and developers work together on the database/application problem.

When a performance issue is reported to DBAs, the first step is to gather as much information as possible related to the issue.


You can ask the below questions of users/developers to collect the first level of information.

 1. What operations/programs are executed?

 2. Is it an Oracle seeded or custom program?

 3. How much time did it take earlier?

 4. Has the run time increased gradually, or did it increase suddenly?

 5. Was there any recent code change/migration?

 6. Is it always slow, or only at certain times of the day?

 7. Is it slow for all parameters or only for some specific parameters?

 8. How much data is being processed?

 9. What is the frequency of job execution? Was there any change in frequency?

 10. Does the problem happen on both the test and production systems?


Asking above kind of questions will help you in deciding what part of system you should target.

-Target the whole system

-Target a single session

-Target a single SQL statement

 

Your goal should be to answer below three questions:

Where is time spent?

You need to identify where the time is being consumed. Is it the code, network latency, disk I/O bottlenecks, or a CPU/memory crunch?

How is time spent?

You need to find out how the time is spent and how much time is spent in each layer: database, application, network, disk, etc.

How to reduce the time spent?

Finally, based on above information see where the major chunk of time is spent and how to reduce it.


 --> High level steps for System level performance issues

Use vmstat and top/prstat to identify system-wide CPU/memory consumption.

Use iostat to verify whether the disks are the bottleneck.

Use netstat, tnsping, etc. to verify whether the network is the issue.

Verify whether any other resource-intensive processes are running on the server.

Verify filesystem space.

Check alert logs, application logs, traces etc.

Check database locks

Generate AWR reports to see what is eating up resources.

Check if increasing application/database memory/redo/undo/temp/SGA will help.
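The first few system-level checks can be combined into a quick first pass; a sketch for Linux using the tools named in the checklist (on Solaris, substitute prstat for the ps step):

```shell
# One-pass triage sketch for a Linux database server
uptime                                              # load averages
df -h /tmp                                          # filesystem space (adjust mount points)
ps -eo pid,pcpu,pmem,comm --sort=-pcpu | head -5    # top CPU consumers
```

From here, drill into the database layer with AWR/ASH once the OS-level picture is clear.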

 

--> High level steps for Session level performance issues

Find the Top SQLs executing under session.

Apply SQL optimization techniques on top SQLs.

Verify locking at session level

Generating AWR/ASH for that duration may help in providing useful information.

 

--> High level steps for SQL level performance issues

Apply trace, generate a TKPROF report, and analyze the top resource-consuming SQL.

Avoid full table scans on large tables

Possible indexes on columns contained in the WHERE clause

Use AWR/ASH reports to get collective information

Also use SQLTRPT/SQLT

Verify that statistics are current

Verify if indexes are valid and are adequate.

Verify whether parallelism/materialized views/Baselines/SQL Profiles will help

Monitor V$SESSION_LONGOPS to detect long running operations

Decide on using Hints if those are helping.

Consider table partitioning as an option, based on the kind and size of the tables.