Friday, August 20, 2010

What happens during oracle database hot backup


As you probably know from the Oracle documentation, during an Oracle tablespace hot backup a script, program, or command puts a tablespace into backup mode, copies the datafiles to disk or tape, and then takes the tablespace out of backup mode. We can check the backup mode from the V$BACKUP view. In a user managed backup, the backup window starts when you issue ALTER TABLESPACE tbs_name BEGIN BACKUP; or ALTER DATABASE BEGIN BACKUP; and ends when you issue ALTER TABLESPACE tbs_name END BACKUP; or ALTER DATABASE END BACKUP;.
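
For example, you can see which datafiles are currently in backup mode with a quick query like the following (a minimal sketch; the columns are standard V$ columns):

SELECT d.file#, d.name, b.status
FROM   v$datafile d, v$backup b
WHERE  d.file# = b.file#;

The STATUS column of V$BACKUP shows ACTIVE for files whose tablespace is in backup mode and NOT ACTIVE otherwise.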

Although the process is clear and well understood, there are many misconceptions around hot backup. They start with what is actually done during hot backup: does the datafile remain writable during the backup, or are changes stored somewhere else (the SGA, the redologs, the rollback/undo segments, or some combination thereof) and then written back into the datafile when the tablespace is taken out of backup mode?

Beside the writability question, there is another common misconception: "During the hot backup process a huge amount of redo data is generated, which dramatically slows down the database if it is in archivelog mode."

Now let's see what actually happens during hot backup. The hot backup steps are:

1)The corresponding tablespace is checkpointed.

2)The checkpoint SCN marker in the datafile headers ceases to increment with checkpoints.

3)Full images of changed DB blocks are written to the redologs.

Whenever you issue,

ALTER TABLESPACE tbs_name BEGIN BACKUP;

command, a checkpoint is performed against the target tablespace and the datafile header is frozen, so no more updates are allowed on it (the datafile header). This is how the database knows the last time the tablespace had a consistent image of the data.

But during the backup, the datafiles in the tablespace allow normal read/write operations; that is, I/O activity is not frozen.

As for redo log generation, each block is recorded into the redo log files in full the first time the block is changed. So if a row inside a data block is modified for the first time since the hot backup started, the complete block image is recorded in the redo log files, but subsequent transactions on the block record only the change, just as normal.

The above three steps are required to guarantee consistency when the file is restored and recovered. By freezing the checkpoint SCN in the file headers, any subsequent recovery on that backup copy of the file knows that it must commence at that SCN. Having an old SCN in the file header tells recovery that the file is an old one, and that it should look for the redolog file containing that SCN and apply recovery starting there. Note that checkpoints to datafiles in hot backup mode are not suppressed during the backup, only the incrementing of the main checkpoint SCN flag. A "hot backup checkpoint" SCN marker in the file header continues to increment as periodic or incremental checkpoints progress normally.
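
You can observe the frozen checkpoint SCN yourself by querying the datafile headers before and during backup mode (a small sketch; I am assuming a tablespace named USERS here):

SELECT file#, checkpoint_change#, fuzzy
FROM   v$datafile_header
WHERE  tablespace_name = 'USERS';

While the tablespace is in backup mode, CHECKPOINT_CHANGE# stays frozen at the SCN recorded at BEGIN BACKUP, and the FUZZY column shows YES.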

By initially checkpointing the datafiles that comprise the tablespace and logging full block images to redo, Oracle guarantees that any blocks changed in the datafile while in hot backup mode will also be available in the redologs in case they are ever used for a recovery.

Many people claim that the hot backup process generates far more redo than normal operation. How much extra redo is generated actually depends on how many blocks are changed during the backup, because only the first change to each block logs the full block image to the redo logs. Normally, Oracle logs an entry in the redologs for every change in the database, but it does not log the whole image of the database block. By logging full images of changed DB blocks to the redologs during hot backup, Oracle eliminates the possibility of the backup containing irresolvable split blocks. To understand this reasoning, you must first understand what a split block is.

Typically, Oracle database blocks are a multiple of O/S blocks. For instance, most windows filesystems have a default block size of 512 bytes and many unix filesystems have a default block size of 2k, while Oracle's default block size is 8k. This means that the filesystem stores data in 512 byte or 2k chunks, while Oracle performs reads and writes in 8k chunks, or multiples thereof. While backing up a datafile, your backup script makes a copy of the datafile from the filesystem, using O/S utilities such as copy, dd, cpio, or OCOPY. As it is making this copy, it is reading in O/S-block sized increments. If the database writer happens to be writing a DB block into the datafile at the same time that your script is reading that block's constituent O/S blocks, your backup copy of the DB block could contain some O/S blocks from before the database performed the write, and some from after. This would be a split block.

By logging the full block image of the changed block to the redologs, it guarantees that in the event of a recovery, any split blocks that might be in the backup copy of the datafile will be resolved by overlaying them with the full legitimate image of the block from the redologs. Upon completion of a recovery, any blocks that got copied in a split state into the backup will have been resolved through application of full block images from the redologs.
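
If you want to measure the extra redo yourself, one rough approach is to watch your own session's 'redo size' statistic around an identical update, once in normal mode and once in backup mode (a sketch, assuming you have select access on v$mystat and v$statname):

SELECT s.name, m.value
FROM   v$mystat m, v$statname s
WHERE  m.statistic# = s.statistic#
AND    s.name = 'redo size';

Run the query before and after the same UPDATE in both modes; the delta should be noticeably larger in backup mode because the first change to each block logs the whole block image.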

How to know which tablespace a row of a table belongs to


If someone asks you how to determine which tablespace a table belongs to, the answer is easy: query dba_segments/user_segments/all_segments. Say I want to know which tablespace table T belongs to. Then query like below.
SQL> select tablespace_name from dba_segments where segment_name='T';

TABLESPACE_NAME
------------------------------
USERS

So table T belongs to the USERS tablespace.

If we want to know which datafile a table belongs to, we can determine that by querying dba_data_files and dba_extents as below.
SQL> select file_name from dba_data_files where file_id in
2 (select file_id from dba_extents where segment_name='T');
FILE_NAME
-----------------------------------------------------------------------
D:\APP\ARJU\ORADATA\ARJU\USERS01.DBF

But if someone asks you to determine which tablespace a particular row of a table resides in, you might have to think for a moment about how to do it. Following is an example of how to do it.

1)Create a partition table named test.
SQL> create table test (col1 number)
2      partition by range (col1)
3      (
4      partition p1 values less than (100) tablespace users,
5      partition p2 values less than (200) tablespace users02,
6      partition p3 values less than (maxvalue) tablespace users03
7      )
8  /
Table created.

2)Insert some rows so that they span into multiple tablespaces.
SQL> insert into test values (1);

1 row created.

SQL> insert into test values (111);

1 row created.

SQL> insert into test values (1000);

1 row created.

3)Query to select row and corresponding tablespace.
SQL> select col1, tablespace_name
2      from test, dba_data_files
3      where dbms_rowid.rowid_to_absolute_fno(test.rowid,user,'TEST') = file_id
4      /
COL1 TABLESPACE_NAME
---------- ------------------------------
1 USERS
111 USERS02
1000 USERS03

4)If you were asked to find the row holding the value 1 and to determine its tablespace_name and file_name, then query as,
SQL> set lines 100
SQL> col file_name for a50
SQL> select col1, tablespace_name, file_name
2          from test, dba_data_files
3          where dbms_rowid.rowid_to_absolute_fno(test.rowid,user,'TEST') = file_id
4          and col1=1;
COL1 TABLESPACE_NAME                FILE_NAME
---------- ------------------------------ -------------------------------------------
1 USERS                          D:\APP\ARJU\ORADATA\ARJU\USERS01.DBF

How to know which objects are being accessed by a user


You can track what users are currently doing in the database. You can get the object name as well as the operating system user name, the host name, the types of objects they are accessing, and more.

Below is the query. You can select more columns if you like. To run the query you need access to the v$access and v$session views.

In the example I have shown what users TEST1 and TEST2 are doing in the database.
SQL> SET LINESIZE 200
SQL> SET VERIFY OFF
SQL> COL object FOR A30
SQL> SELECT a.object,
 2         a.type,
 3         a.sid,
 4         b.username,
 5         b.osuser,
 6         b.program
 7  FROM   v$access a
 8  JOIN   v$session b ON (a.sid = b.sid)
 9  AND    a.owner = UPPER('&SCHEMA') ORDER BY a.object;
Enter value for schema: TEST1


OBJECT                         TYPE             SID USERNAME     OSUSER             PROGRAM
------------------------------ ---------        ---- ---------- ------------------ ------------
DBMS_APPLICATION_INFO          CURSOR           149 TEST2        ARJUPC\Arju       sqlplus.exe
DUAL                           CURSOR           159 TEST1        ARJUPC\Arju       sqlplus.exe
DUAL                           CURSOR           149 TEST2        ARJUPC\Arju       sqlplus.exe
TAB1                           TABLE            159 TEST1        ARJUPC\Arju       sqlplus.exe

SQL> /
Enter value for schema: TEST2


OBJECT                         TYPE          SID USERNAME     OSUSER             PROGRAM
------------------------------ ----------- ---------- ------------ ------------------ ------------
T                              TRIGGER       159 TEST1        ARJUPC\Arju       sqlplus.exe
TAB1                           TABLE         159 TEST1        ARJUPC\Arju       sqlplus.exe

SQL>

Different types of standby database in oracle data guard


In an oracle data guard configuration, you set up one or more additional databases beside the primary database. These additional databases are called standby databases. Up to nine standby databases can be created for one primary database.

Using a backup of the primary database you can set up a standby database and then make it part of the data guard configuration. Once the standby database is configured, data guard automatically maintains it by transmitting redo from the primary database and then applying that redo to the standby database.

A standby database can be of three types.

1)Physical Standby Database: A physical standby database is an identical copy of the primary database; the disk structures are also identical to the primary database. It is kept synchronized with the primary database by Redo Apply, which means redo data is received from the primary database and then applied to the physical standby database.

Note that as of Oracle Database 11g release 1 (11.1), a physical standby database can receive and apply redo while it is open for read-only access. So you can use a physical standby database for query and reporting purposes along with data protection.
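
For reference, Redo Apply is started and stopped on the physical standby with managed recovery commands like the following (a sketch; the available options vary by version):

-- Start Redo Apply in the background.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

-- Stop Redo Apply.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;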

2)Logical Standby Database: A logical standby database contains the same logical information as the primary database; the physical structures need not be the same. It is kept synchronized with the primary database by SQL Apply, which means redo data is received from the primary database, transformed into SQL statements, and those SQL statements are then executed on the standby database.

You can use a logical standby database for query and reporting purposes along with data protection. With a logical standby database you also have the facility to upgrade the oracle database software and patch sets while keeping data protected.

3)Snapshot Standby Database: A snapshot standby database is a convertible copy of a physical standby database. The difference from a physical or logical standby database is that the redo data it receives is not applied. The redo is applied only when it is converted back into a physical standby database. So you can play with (update) a snapshot standby database, and those local updates are discarded when it is converted back to a physical standby database.

Note that in case of snapshot standby database, the time needed to perform a role transition is directly proportional to the amount of redo data that needs to be applied.
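
For reference, in 11g the conversions are single commands issued on the standby (a sketch; the database must be mounted and Redo Apply stopped before converting):

-- Physical standby to snapshot standby.
ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;

-- Back to physical standby, discarding the local updates.
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;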

What is oracle data guard


Oracle data guard is a term (you can call it a feature too) used by oracle, and the feature is integrated with the oracle RDBMS. Data guard creates, maintains, manages, and monitors one or more extra databases, called standby databases, beside the production database. The goal is to let the production data survive disasters, failures, and data corruptions.

At first, an exact replica of the production database is created, which is later referred to as the standby database. Data guard then maintains these standby databases and keeps them up to date with the production database. If the production database becomes unavailable, data guard can switch a standby database to the production role and thus minimize the downtime without any data loss.

Beside availability of the production data, data guard gives other advantages: we can run real-time queries on the standby database, or take backups from it, and thus reduce the load on the production database.

There is no restriction on the location of the standby databases. They can be located thousands of miles away from the production database or within the same room, provided they can communicate with each other.

More features, configurations, and advantages of oracle data guard will be discussed gradually.

Thursday, August 19, 2010

Database cloning with hot backup (if directory structure is same)


Posted by Pavan Kumar on March 26, 2010
Here are the steps for performing database cloning to a different server using hot backup. I am assuming you are using the same directory structure on the target server.
Assumed Oracle version : 10.2.0.4, OS version : Linux 5
Step 1 : Take database hot backup as follows
sql> alter system switch logfile;
sql> alter database begin backup;
$ cp *.dbf to backup location (as it is hot backup, we will not take backup of redolog files)
sql> alter database end backup;
sql> alter system switch logfile;
$ cp *.ctl to backup location
Note: If you are using 9i database, use “tablespace begin backup/end backup” clauses
step 2 : Take backup of spfile or pfile of source database and also archives
step 3 : Install oracle software in target server (select “software only” option in OUI)
step 4 : copy the files to target server either using FTP or any methods
step 5 : place pfile or spfile in dbs directory
step 6 : copy all files (datafiles, controlfiles and archives) to respective locations
step 7 : do the following
sql> startup nomount
sql> alter database mount;
sql> recover database using backup controlfile until cancel;
Here it will prompt you to apply archives, suggesting file names and paths. Apply the required archives.
step 8 : finally, open your database with resetlogs option
sql> alter database open resetlogs;
Sometimes you may get the following error while doing hot backup cloning:
ORA-01194: file 1 needs more recovery to be consistent
In such cases, do a log switch on the source server, then copy and apply that archived logfile on the target server.

Database cloning using hot backup (if directory structure is different)

The steps below help you perform database cloning using hot backup.
Assumptions:
1. directory structure is different in both source and target servers
2. Oracle version : 10.2.0.4
3. OS version : Linux 5
4. target database name is same as source database name
step 1 :  Take the hot backup of source database
sql> alter database begin backup;
$ copy datafiles to backup location
sql> alter database end backup;
step 2 : Take controlfile trace and pfile or spfile (that was using by the source database)
step 3 : Install Oracle software on another machine (choose “Install only” option in OUI). Don’t create any database
step 4 : Copy all the files (including trace file and pfile or spfile) from source server to target server either using FTP or rcp
Note: It is not necessary to copy the control files and redo log files.
step 5 : Place pfile or spfile in “dbs” directory on target
step 6 : Copy the remaining files to their respective locations (If any directories are missing, do create them)
step 7 : Connect as sysdba and Startup the database in nomount stage
step 8 : Edit the trace file (that was copied) and generate a create controlfile script from it. Modify the script and specify the new locations of  the files.
step 9 : Execute the controlfile script, which will create controlfiles in the locations specified in the CONTROL_FILES parameter. Once the control files are created, the database will be brought to the MOUNT state.
sql> @create_controlfile.sql
step 10 : Finally, Open the database with resetlogs option
sql> alter database open resetlogs;
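
For steps 8 and 9, the edited script typically looks something like the following (a minimal sketch only; the database name, sizes, and all paths are hypothetical and must be replaced with your target layout):

-- create_controlfile.sql (hypothetical names and paths)
CREATE CONTROLFILE REUSE DATABASE "PROD" RESETLOGS ARCHIVELOG
    MAXLOGFILES 16
    MAXDATAFILES 100
LOGFILE
    GROUP 1 '/u02/oradata/PROD/redo01.log' SIZE 50M,
    GROUP 2 '/u02/oradata/PROD/redo02.log' SIZE 50M
DATAFILE
    '/u02/oradata/PROD/system01.dbf',
    '/u02/oradata/PROD/sysaux01.dbf',
    '/u02/oradata/PROD/undotbs01.dbf',
    '/u02/oradata/PROD/users01.dbf'
CHARACTER SET AL32UTF8;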

How to know when a table/object's ddl/code was last modified


From the dba_objects/user_objects/all_objects views you can find out when an object (table, index, function, etc.) was created, when the last DDL operation was performed against it, and when it was last compiled.

As with other oracle views:
DBA_OBJECTS contains all database objects.
USER_OBJECTS contains all objects owned by the current user.
ALL_OBJECTS contains all objects on which the current user has access.

Note that OBJ is a synonym for the USER_OBJECTS view.

The CREATED column of the view records when an object was created.

The LAST_DDL_TIME column of the view records when the object was last modified by a DDL statement. Note that this value also reflects any grant or revoke issued against the object. Similarly, for a procedure, function, or trigger, compiling the object updates only LAST_DDL_TIME.

The TIMESTAMP column of the view records the last DDL time excluding any grants, revokes, or recompilations.

Before proceeding, let's set nls_date_format in sql*plus so that we can see the time portion of date values.
SQL> alter session set nls_date_format='DD-MON-YY HH24:MI:SS';

Session altered.

SQL> select created, last_ddl_time, timestamp, status from obj where object_name='TEST' 
and object_type='TABLE';

CREATED            LAST_DDL_TIME      TIMESTAMP           STATUS
------------------ ------------------ ------------------- -------
21-SEP-08 00:43:11 21-SEP-08 00:49:16 2008-09-21:00:43:11 VALID

Now I add a column to the table. After adding the column, see that the LAST_DDL_TIME
and TIMESTAMP values have changed.
SQL> alter table test add col2 number;
Table altered.

SQL>  select created, last_ddl_time, timestamp, status from obj where object_name='TEST'
and object_type='TABLE';

CREATED            LAST_DDL_TIME      TIMESTAMP           STATUS
------------------ ------------------ ------------------- -------
21-SEP-08 00:43:11 20-FEB-09 11:02:29 2009-02-20:11:02:29 VALID

SQL> select sysdate from dual;

SYSDATE
------------------
20-FEB-09 11:02:47

Now I grant select on the test table to user arju. After the grant, note that LAST_DDL_TIME has changed but the TIMESTAMP value has not.
SQL> grant select on test to arju;

Grant succeeded.

SQL>  select created, last_ddl_time, timestamp, status from obj where object_name='TEST' 
and object_type='TABLE';

CREATED            LAST_DDL_TIME      TIMESTAMP           STATUS
------------------ ------------------ ------------------- -------
21-SEP-08 00:43:11 20-FEB-09 11:12:33 2009-02-20:11:02:29 VALID

Now let's look at the procedure P_TEST; compiling it updates LAST_DDL_TIME but not TIMESTAMP.
SQL>  select created, last_ddl_time, timestamp, status from obj where object_name='P_TEST'
and object_type='PROCEDURE';

CREATED            LAST_DDL_TIME      TIMESTAMP           STATUS
------------------ ------------------ ------------------- -------
20-FEB-09 10:42:11 20-FEB-09 10:42:11 2009-02-20:10:42:11 VALID

SQL> alter procedure p_test compile;

Procedure altered.

SQL>  select created, last_ddl_time, timestamp, status from obj where object_name='P_TEST' 
and object_type='PROCEDURE';

CREATED            LAST_DDL_TIME      TIMESTAMP           STATUS
------------------ ------------------ ------------------- -------
20-FEB-09 10:42:11 20-FEB-09 11:18:41 2009-02-20:10:42:11 VALID

How to do data pump import from higher to lower version database


You sometimes might face difficulties when exporting data from a higher version database with data pump export and then importing the dumpfile into a lower version database. It is more appropriate to say: exporting data from a database that has a higher compatibility setting and then importing the dumpfile into a database that has a lower compatibility setting.


Whenever you perform a data pump operation from a higher compatibility level database to a lower compatibility level database (i.e. expdp from the higher compatibility level database and then impdp into the lower compatibility level database), always add the VERSION parameter to the data pump export, and make its value match the compatibility level of the database into which you will perform the data pump import.

Below is an example which takes a dump of an 11.1.0.6 database using the 11g data pump export client and then imports that dump into a 10.2.0.1 database using the 10g data pump client.

In the example I have taken a data pump dumpfile of table test from schema arju of the 11g database and then imported it into the 10g database under the same schema.

Step 01: Connect to the 11.1.0.6 database and create the directory.
C:\>sqlplus arju/a@san

SQL*Plus: Release 11.1.0.6.0 - Production on Fri Feb 6 22:58:09 2009

Copyright (c) 1982, 2007, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> create directory e as 'E:\';

Directory created.

The directory e points to the E:\ drive on my windows computer. I wanted to put my data pump dumpfile on the E:\ drive.

Step 02: Take a dump of the 11g database using the 11.1.0.6 export data pump client, including the VERSION parameter so that the dumpfile can be imported into the 10g database.

SQL> host expdp arju/a tables=test dumpfile=test_table.dmp logfile=test_table.log directory=e VERSION=10.2

Export: Release 11.1.0.6.0 - Production on Friday, 06 February, 2009 23:00:33

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "ARJU"."SYS_EXPORT_TABLE_01": arju/******** tables=test dumpfile=test_table.dmp logfile=test_table.log directory=e VERSI
ON=10.2
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "ARJU"."TEST" 5.007 KB 1 rows
Master table "ARJU"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for ARJU.SYS_EXPORT_TABLE_01 is:
E:\TEST_TABLE.DMP
Job "ARJU"."SYS_EXPORT_TABLE_01" successfully completed at 23:01:33

Step 03: Transfer the dumpfile to the D:\ drive of the machine where the 10.2 database resides. On windows you can simply share the E:\ drive of your 11g machine and copy the dumpfile from the other machine. There are many other ways to transfer a file over the network. If the two computers have no network connection, you can transfer the file to the 10.2 machine using a pen drive.

Step 04: Connect to the 10.2 database and create a directory d.

C:\>sqlplus arju/a@orcl

SQL*Plus: Release 11.1.0.6.0 - Production on Fri Feb 6 23:05:54 2009

Copyright (c) 1982, 2007, Oracle. All rights reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

SQL> create directory d as 'D:';

Directory created.

Step 05: Using the 10.2 import data pump client, import the dumpfile into the 10.2 database.

SQL> host J:\oracle\product\10.2.0\db_1\BIN\impdp arju/a@orcl tables=test dumpfile=test_table.dmp logfile=test_table.log directory=d

Import: Release 10.2.0.1.0 - Production on Friday, 06 February, 2009 23:09:57

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Master table "ARJU"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "ARJU"."SYS_IMPORT_TABLE_01": arju/********@orcl tables=test dumpfile=test_table.dmp logfile=test_table.log directory=d

Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "ARJU"."TEST" 5.007 KB 1 rows
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "ARJU"."SYS_IMPORT_TABLE_01" successfully completed at 23:10:16

Copy files between Unix and Windows with rcp


There are many third party tools with which you can copy files between windows and unix machines; common ones are winscp, SSHSecureShellClient, etc. Instead of using a third party tool, you can use the original unix and windows built-in tools to do the task.

RCP (Remote CoPy) is a standard UNIX tool and a Microsoft Windows 2000 tool that copies files/folders to and from a computer running the RCP service.

Before you can use the RCP tool from a Windows-based computer, you must prepare your UNIX computer: turn on the RCP service and update the security files to allow the Windows-based computer to connect to your unix machine over this service.

The following steps will help you.
Step 01: Turn on RCP service on UNIX Host

Turn on the RCP service on unix machine by using inetd:

1. Log on as root.
2. Edit the file /etc/inetd.conf.
3. Uncomment the lines that start with shell and that start with exec.
4. Save the file.
5. Use ps -ef | grep inetd to determine the inetd service pid. Send the HUP (hangup) signal to inetd by using kill or by using pkill.
6. The HUP signal causes inetd to reread its configuration and start the service.

To turn on the RCP service by using xinetd:

1. Log on as root.
2. Edit the file /etc/xinetd.d/rsh. The file name may differ based on your linux distribution.
3. Change the line disable to no.
4. Save the file.
5. Use ps -ef | grep xinetd to determine the xinetd service pid. Send the HUP (hangup) signal to xinetd by using kill or by using pkill.
6. The HUP signal causes xinetd to reread its configuration and start the service.

Step 02: Set the security permissions
After turning on RCP services you must set the security permissions to allow the Windows host to connect to your computer.

1. Edit the file /etc/hosts.equiv. Based on your linux distribution this file may vary; for example, it may be /etc/hosts.allow.
2. In the file add a line that contains the name of your Windows host.
3. Add a second line that contains the name of your Windows host and the name of a user who can access the directory that you want to transfer. Separate the two elements with a tab character (see the example after these steps).
4. Save the file.
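
For example, if the Windows host is named winhost and the Windows user is arju, /etc/hosts.equiv could contain (hypothetical names; the two fields on the second line are tab-separated):

winhost
winhost	arju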

Step 03: Transfer the files or directory

From a windows computer you can transfer files or folders with rcp using the following syntax.
RCP [-a | -b] [-h] [-r] [host][.user:]source [host][.user:] path\destination

For example, to copy the file index.html from the path /home/oracle on the computer unixhost as the user oracle, issue from the windows computer,
rcp unixhost.oracle:/home/oracle/index.html index.html

In order to copy a whole directory you need to use -r (recursive) with the rcp command.
To copy the test directory under /home/oracle on unix, issue from the windows computer,
rcp -r unixhost.oracle:/home/oracle/test test

Changing a DBA user to a normal user in oracle


Many times you may have granted a user the DBA super role instead of granting individual privileges. Later, when you want to revoke the DBA role, you need to take care of which privileges the user still needs.

Before the example, let's take an overview of some views related to privileges and roles in oracle.

1)DBA_SYS_PRIVS describes system privileges granted to users and roles.
SQL> desc dba_sys_privs
Name                                      Null?    Type
----------------------------------------- -------- ----------------------------
GRANTEE                                   NOT NULL VARCHAR2(30)
PRIVILEGE                                 NOT NULL VARCHAR2(40)
ADMIN_OPTION                                       VARCHAR2(3)

2)USER_SYS_PRIVS describes system privileges granted to the current user.
SQL> desc user_sys_privs
Name                                      Null?    Type
----------------------------------------- -------- ----------------------------
USERNAME                                           VARCHAR2(30)
PRIVILEGE                                 NOT NULL VARCHAR2(40)
ADMIN_OPTION                                       VARCHAR2(3)

3)DBA_ROLE_PRIVS describes the roles granted to all users and roles in the database.
SQL> desc dba_role_privs
Name                                      Null?    Type
----------------------------------------- -------- ----------------------------
GRANTEE                                            VARCHAR2(30)
GRANTED_ROLE                              NOT NULL VARCHAR2(30)
ADMIN_OPTION                                       VARCHAR2(3)
DEFAULT_ROLE                                       VARCHAR2(3)

4)DBA_TAB_PRIVS describes all object grants in the database. Note that in the table the column TABLE_NAME does not display only table rather it displays any object, including tables, packages, indexes, sequences, and so on.
SQL> desc dba_tab_privs
Name                                      Null?    Type
----------------------------------------- -------- ----------------------------
GRANTEE                                   NOT NULL VARCHAR2(30)
OWNER                                     NOT NULL VARCHAR2(30)
TABLE_NAME                                NOT NULL VARCHAR2(30)
GRANTOR                                   NOT NULL VARCHAR2(30)
PRIVILEGE                                 NOT NULL VARCHAR2(40)
GRANTABLE                                          VARCHAR2(3)
HIERARCHY                                          VARCHAR2(3)

In this example, we want to change a DBA user named "OMS" to a normal user.
Let's see which roles have been granted to the user OMS.
SQL> select * from dba_role_privs where grantee='OMS';

GRANTEE                        GRANTED_ROLE                   ADM DEF
------------------------------ ------------------------------ --- ---
OMS                            RESOURCE                       NO  YES
OMS                            JAVAUSERPRIV                   NO  YES
OMS                            DBA                            NO  YES

These roles may contain many privileges. For example the role RESOURCE contains the following privileges.
SQL> select * from dba_sys_privs where grantee='RESOURCE';

GRANTEE                        PRIVILEGE                                ADM
------------------------------ ---------------------------------------- ---
RESOURCE                       CREATE TRIGGER                           NO
RESOURCE                       CREATE SEQUENCE                          NO
RESOURCE                       CREATE TYPE                              NO
RESOURCE                       CREATE PROCEDURE                         NO
RESOURCE                       CREATE CLUSTER                           NO
RESOURCE                       CREATE OPERATOR                          NO
RESOURCE                       CREATE INDEXTYPE                         NO
RESOURCE                       CREATE TABLE                             NO

8 rows selected.

Let's see the privileges assigned directly to user OMS.
SQL> select * from dba_sys_privs where grantee='OMS';

GRANTEE                        PRIVILEGE                                ADM
------------------------------ ---------------------------------------- ---
OMS                            UNLIMITED TABLESPACE                     NO

Now let's see which tablespaces contain objects owned by the user OMS. We need to assign quota on those tablespaces before revoking the DBA role.

The tablespaces containing objects of user OMS:

SQL> DEFINE owner='OMS'
SQL> select distinct 'Alter user &OWNER quota unlimited on '|| tablespace_name ||';'
   from dba_tables where owner='&OWNER'
   UNION select distinct 'Alter user &OWNER quota unlimited on '|| tablespace_name
||';' from dba_indexes
   where owner='&OWNER'
   UNION select distinct 'Alter user &OWNER quota unlimited on '|| tablespace_name
||';' from dba_tab_partitions
   where table_owner='&OWNER'
   UNION select distinct 'Alter user &OWNER quota unlimited on '|| tablespace_name
||';' from dba_ind_partitions
   where index_owner='&OWNER';
old   1: select distinct 'Alter user &OWNER quota unlimited on '|| tablespace_name ||';'
new   1: select distinct 'Alter user OMS quota unlimited on '|| tablespace_name ||';'
old   2: from dba_tables where owner='&OWNER'
new   2: from dba_tables where owner='OMS'
old   3: UNION select distinct 'Alter user &OWNER quota unlimited on '|| tablespace_name ||';' from dba_indexes
new   3: UNION select distinct 'Alter user OMS quota unlimited on '|| tablespace_name ||';' from dba_indexes
old   4: where owner='&OWNER'
new   4: where owner='OMS'
old   5: UNION select distinct 'Alter user &OWNER quota unlimited on '|| tablespace_name ||';' from dba_tab_partitions
new   5: UNION select distinct 'Alter user OMS quota unlimited on '|| tablespace_name ||';' from dba_tab_partitions
old   6: where table_owner='&OWNER'
new   6: where table_owner='OMS'
old   7: UNION select distinct 'Alter user &OWNER quota unlimited on '|| tablespace_name ||';' from dba_ind_partitions
new   7: UNION select distinct 'Alter user OMS quota unlimited on '|| tablespace_name ||';' from dba_ind_partitions
old   8: where index_owner='&OWNER'
new   8: where index_owner='OMS'

'ALTERUSEROMSQUOTAUNLIMITEDON'||TABLESPACE_NAME||';'
-----------------------------------------------------------------
Alter user OMS quota unlimited on OMS_INDX_SPC;
Alter user OMS quota unlimited on OMS_SPC;

Let's see if any object privileges are granted to user OMS.

SQL> select * from dba_tab_privs where grantee='OMS';

no rows selected

Now we grant the privileges and assign quota to user OMS, and then revoke the DBA role.

Assigning the privileges by,
GRANT CREATE SESSION, CREATE TRIGGER, CREATE SEQUENCE, CREATE TYPE, CREATE PROCEDURE,
CREATE CLUSTER, CREATE OPERATOR, CREATE INDEXTYPE, CREATE TABLE TO OMS;


Giving quota on the tablespaces by,

ALTER USER OMS QUOTA UNLIMITED on OMS_SPC;
ALTER USER OMS QUOTA UNLIMITED on OMS_INDX_SPC;


Now revoke the DBA role by,
REVOKE DBA FROM OMS;
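
Finally, verify the result by re-running the role query; OMS should no longer have the DBA role.

SELECT granted_role FROM dba_role_privs WHERE grantee = 'OMS';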

Tuesday, August 17, 2010

Back up commands in UNIX

ntbackup

Under WindowsNT and Windows2000, filesystem backups are done using the ntbackup program. This is a GUI tool with easy to use wizards to get you started, but it is also accessible from the command line. The command line parameters differ between WindowsNT and Windows2000. Under WindowsNT a typical backup command would look like:
ntbackup backup c:\ /d "Daily Backup" /hc:on /l "C:\backup.log" /e /t normal /v

c:\                : The drive to backup.
/d "Daily Backup"  : The name of the backup set.
/hc:on             : Hardware compression on.
/l "C:\backup.log" : Location of the logfile.
/e                 : Log exceptions only.
/t normal          : Backup type normal.
/v                 : Verify backup.
Under Windows2000 a similar command would look like:
ntbackup backup c:\ /D "Daily Backup" /HC:on /L:s /M normal /P DLT /V:yes /UM

c:\                : The drive to backup.
/D "Daily Backup"  : The name of the backup set.
/HC:on             : Hardware compression on.
/L:s               : Summary data only in log.
/M normal          : Backup type normal.
/P DLT             : Media pool assignment (Backup/DLT).
/V:yes             : Verify backup.
/UM                : Unmanaged.
The Windows2000 backup logs always appear in:

C:\Documents and Settings\user-name\Local Settings\Application Data\Microsoft\Windows NT\NTbackup\Data

Where user-name is the user who ran ntbackup.

Windows2000 is rather troublesome regarding the reuse of tapes. Using the following procedure should alleviate most of these problems:
  • From the task bar click "Start" -> "Run..."
  • Enter "ntbackup" and click "OK".
  • From the ntbackup menu select "Tools" -> "Options" and click on the "General" tab.
  • Check the "Always move new import mediato the Backup media pool" option and click "OK".
  • Exit ntbackup.
If there are still issues you must right-click the appropriate tape drive in "Computer Management" and select the "Mark as clean" option. At this point the tape should be reused properly.

tar

The tar command can be used to backup and restore files to another filesystem or an offline storage device:
# Create archive.
cd /u01/app/oracle
tar -cvf /tmp/admin.tar admin

# Restore archive.
cd /tmp
tar -xvf admin.tar
If a full path is used during the archive creation the extract locations are fixed rather than relative. The process is similar when accessing a tape device except the destination is the mounted device:
# Mount and rewind the tape.
mt -f /dev/rmt/2m rew

# Create archive.
tar -cvf /dev/rmt/2m /u01/*

# Restore archive.
tar -xvf /dev/rmt/2m

dd

The dd command is similar to the tar command, except that it copies a single file or raw device rather than a directory tree:
# Mount and rewind the tape.
mt -f /dev/rmt/2m rew

# Create archive.
dd if=/u01/app/oracle/myfile of=/dev/rmt/2m bs=32k

# Restore archive.
dd if=/dev/rmt/2m of=/u01/app/oracle/myfile bs=32k

cpio

The cpio command deals with the standard input so filesystem paths must be piped to it:
# Create archive.
cd /u01/app/oracle
find admin | cpio -oc > /tmp/admin.cpio

# Restore archive.
cd /tmp
cpio -idmv < admin.cpio
If a full path is used during the archive creation the extract locations are fixed rather than relative:
find /u01/app/oracle/admin | cpio -oc > /tmp/admin.cpio

vdump, rvdump, vrestore and rvrestore

Full level 0 backup of a local filesystem (/u01) to a local device (/dev/tape/tape1_d6):
/sbin/vdump -0 -u -f /dev/tape/tape1_d6 /u01
Full level 0 backup of a local filesystem (/u01) to a remote device (server2:/dev/tape/tape1_d6):
/sbin/rvdump -0 -u -f server2:/dev/tape/tape1_d6 /u01
Restore a vdump or rvdump archive from a local device (/dev/tape/tape1_d6) to a local filesystem (/u01):
/sbin/vrestore -xf /dev/tape/tape1_d6 -D /u01
Restore a vdump or rvdump archive from a remote device (server2:/dev/tape/tape1_d6) to a local filesystem (/u01):
/sbin/rvrestore -xf server2:/dev/tape/tape1_d6 -D /u01

UNIX commands for DBA's

Basic File Navigation

The "pwd" command displays the current directory:
root> pwd
/u01/app/oracle/product/9.2.0.1.0
The "ls" command lists all files and directories in the specified directory. If no location is defined it acts on the current directory:
root> ls
root> ls /u01
root> ls -al
The "-a" flag lists hidden "." files. The "-l" flag lists file details.

The "cd" command is used to change directories:
root> cd /u01/app/oracle
The "touch" command is used to create a new empty file with the default permissions:
root> touch my.log
The "rm" command is used to delete files and directories:
root> rm my.log
root> rm -R /archive
The "-R" flag tells the command to recurse through subdirectories.

The "mv" command is used to move or rename files and directories:
root> mv [from] [to]
root> mv my.log my1.log
root> mv * /archive
root> mv /archive/* .
The "." represents the current directory

The "cp" command is used to copy files and directories:
root> cp [from] [to]
root> cp my.log my1.log
root> cp * /archive
root> cp /archive/* .
The "mkdir" command is used to create new directories:
root> mkdir archive
The "rmdir" command is used to delete directories:
root> rmdir archive
The "find" command can be used to find the location of specific files:
root> find / -name dbmspool.sql
root> find / -print | grep -i dbmspool.sql
The "/" flag represents the staring directory for the search. Wildcards such as "dbms*" can be used for the filename.

The "which" command can be used to find the location of an executable you are using:
oracle> which sqlplus
The "which" command searches your PATH setting for occurences of the specified executable.

File Permissions

The "umask" command can be used to read or set default file permissions for the current user:
root> umask 022
The umask value is subtracted from the default permissions (666) to give the final permission:
666 : Default permission
022 : - umask value
644 : final permission
The "chmod" command is used to alter file permissions after the file has been created:
root> chmod 777 *.log

Owner      Group      World      Permission
=========  =========  =========  ======================
7 (u+rwx)  7 (g+rwx)  7 (o+rwx)  read + write + execute
6 (u+rw)   6 (g+rw)   6 (o+rw)   read + write
5 (u+rx)   5 (g+rx)   5 (o+rx)   read + execute
4 (u+r)    4 (g+r)    4 (o+r)    read only
2 (u+w)    2 (g+w)    2 (o+w)    write only
1 (u+x)    1 (g+x)    1 (o+x)    execute only
Character equivalents can be used in the chmod command:
root> chmod o+rwx *.log
root> chmod g+r   *.log
root> chmod u-x   *.log
The "chown" command is used to reset the ownership of files after creation:
root> chown -R oinstall.dba *
The "-R" flag causes the command ro recurse through any subdirectories.

OS Users Management

The "useradd" command is used to add OS users:
root> useradd -G oinstall -g dba -d /usr/users/my_user -m -s /bin/ksh my_user
  • The "-G" flag specifies the primary group.
  • The "-g" flag specifies the secondary group.
  • The "-d" flag specifies the default directory.
  • The "-m" flag creates the default directory.
  • The "-s" flag specifies the default shell.
The "usermod" command is used to modify the user settings after a user has been created:
root> usermod -s /bin/csh my_user
The "userdel" command is used to delete existing users:
root> userdel -r my_user
The "-r" flag removes the default directory.

The "passwd" command is used to set, or reset, the users login password:
root> passwd my_user
The "who" command can be used to list all users who have OS connections:
root> who
root> who | head -5
root> who | tail -5
root> who | grep -i ora
root> who | wc -l
  • The "head -5" command restricts the output to the first 5 lines of the who command.
  • The "tail -5" command restricts the output to the last 5 lines of the who command.
  • The "grep -i ora" command restricts the output to lines containing "ora".
  • The "wc -l" command returns the number of lines from "who", and hence the number of connected users.

Process Management

The "ps" command lists current process information:
root> ps
root> ps -ef | grep -i ora
Specific processes can be killed by specifying the process id in the kill command:
root> kill -9 12345

uname and hostname

The "uname" and "hostname" commands can be used to get information about the host:
root> uname -a
OSF1 oradb01.lynx.co.uk V5.1 2650 alpha

root> uname -a | awk '{ print $2 }'
oradb01.lynx.co.uk

root> hostname
oradb01.lynx.co.uk

Error Lines in Files

You can return the error lines in a file using:
root> cat alert_LIN1.log | grep -i ORA-
The "grep -i ORA-" command limits the output to lines containing "ORA-". The "-i" flag makes the comparison case insensitive. A count of the error lines can be returned using the "wc" command. This normally give a word count, but the "-l" flag alteres it to give a line count:
root> cat alert_LIN1.log | grep -i ORA- | wc -l

File Exists Check

The Korn shell allows you to check for the presence of a file using the "test -s" command. In the following script a backup log is renamed and moved if it is present:
#!/bin/ksh
if test -s /backup/daily_backup.log
then
  DATE_SUFFIX=`date +"%y""%m""%d""%H""%M"`
  mv /backup/daily_backup.log /backup/archive/daily_backup$DATE_SUFFIX.log
fi

Remove Old Files

The find command can be used to supply a list of files to the rm command:
find /backup/logs/ -name 'daily_backup*' -mtime +21 -exec rm -f {} \;

Remove DOS CR/LFs (^M)

Remove DOS style CR/LF characters (^M) from UNIX files using:
sed -e 's/^M$//' filename > tempfile
The newly created tempfile should have the ^M character removed.

Run Commands As Oracle User From Root

The following script shows how a number of commands can be run as the "oracle" user from the "root" user:
#!/bin/ksh
su - oracle <<EOF
ORACLE_SID=LIN1; export ORACLE_SID
rman catalog=rman/rman@w2k1 target=/ cmdfile=my_cmdfile log=my_logfile append 
EOF
This is often necessary where CRON jobs are run from the root user rather than the oracle user.

Compress Files

In order to save space on the filesystem you may wish to compress files such as archived redo logs. This can be done using either the gzip or the compress command. The gzip command results in a compressed copy of the original file with a ".gz" extension. The gunzip command reverses this process:
gzip myfile
gunzip myfile.gz
The compress command results in a compressed copy of the original file with a ".Z" extension. The uncompress command reverses this process:
compress myfile
uncompress myfile

General Performance

vmstat

$ vmstat 5 3
Displays system statistics (5 seconds apart; 3 times):
procs memory page disk faults cpu
r b w swap free re mf pi po fr de sr s0 s1 s2 s3 in sy cs us sy id
0 0 0 28872 8792 8 5 172 142 210 0 24 3 11 17 2 289 1081 201 14 6 80
0 0 0 102920 1936 1 95 193 6 302 1264 235 12 1 0 3 240 459 211 0 2 97
0 0 0 102800 1960 0 0 0 0 0 464 0 0 0 0 0 107 146 29 0 0 100

Having any processes in the b or w columns is a sign of a problem system.
Having an id of 0 is a sign that the cpu is overburdened.
Having high values in pi and po shows excessive paging.
  • procs (Reports the number of processes in each of the following states)
    • r : in run queue
    • b : blocked for resources (I/O, paging etc.)
    • w : runnable but swapped
  • memory (Reports on usage of virtual and real memory)
    • swap : swap space currently available (Kbytes)
    • free : size of free list (Kbytes)
  • page (Reports information about page faults and paging activity (units per second)
    • re : page reclaims
    • mf : minor faults
    • pi : Kbytes paged in
    • po : Kbytes paged out
    • fr : Kbytes freed
    • de : anticipated short-term memory shortfall (Kbytes)
    • sr : pages scanned by clock algorithm
  • disk (Reports the number of disk operations per second for up to 4 disks)
  • faults (Reports the trap/interrupt rates (per second))
    • in : (non clock) device interrupts
    • sy : system calls
    • cs : CPU context switches
  • cpu (Reports the breakdown of percentage usage of CPU time (averaged across all CPUs))
    • us : user time
    • sy : system time
    • id : idle time

CPU Usage

sar

$ sar -u 10 8
Reports CPU Utilization (10 seconds apart; 8 times):
Time %usr %sys %wio %idle
11:57:31 72 28 0 0
11:57:41 70 30 0 0
11:57:51 70 30 0 0
11:58:01 68 32 0 0
11:58:11 67 33 0 0
11:58:21 65 28 0 7
11:58:31 73 27 0 0
11:58:41 69 31 0 0
Average 69 30 0 1

%usr: Percent of CPU in user mode
%sys: Percent of CPU in system mode
%wio: Percent of CPU running idle with a process waiting for block I/O
%idle: Percent of CPU that is idle

mpstat

$ mpstat 10 2
Reports per-processor statistics on Sun Solaris (10 seconds apart; 2 times):
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 6 8 0 438 237 246 85 0 0 21 8542 23 9 9 59
0 0 29 0 744 544 494 206 0 0 95 110911 65 29 6 0

ps

$ ps -e -o pcpu -o pid -o user -o args | sort -k 1 | tail -21r
Displays the top 20 CPU users on the system.
%CPU PID USER COMMAND
78.1 4789 oracle ora_dbwr_DDDS2
8.5 4793 oracle ora_lgwr_DDDS2
2.4 6206 oracle oracleDDDS2 (LOCAL=NO)
0.1 4797 oracle ora_smon_DDDS2
0.1 6207 oracle oracleDDDS2 (LOCAL=NO)
etc. etc. etc. etc.

The PID column can then be matched with the SPID column on the V$PROCESS view to provide more information on the process:
SELECT a.username, 
       a.osuser, 
       a.program, 
       spid, 
       sid, 
       a.serial#
FROM   v$session a,
       v$process b
WHERE  a.paddr = b.addr
AND    spid = '&pid';

Automatic Startup Scripts on Linux

Create a file in the /etc/init.d/ directory, in this case the file is called myservice, containing the commands you wish to run at startup and/or shutdown.

Use the chmod command to set the privileges to 750:
chmod 750 /etc/init.d/myservice
Link the file into the appropriate run-level script directories:
ln -s /etc/init.d/myservice /etc/rc0.d/K10myservice
ln -s /etc/init.d/myservice /etc/rc3.d/S99myservice
Associate the myservice service with the appropriate run levels:
chkconfig --level 345 myservice on
The script should now be automatically run at startup and shutdown (with "start" or "stop" as a commandline parameter) like other service initialization scripts.

CRON

There are two methods of editing the crontab file. First you can use the "crontab -l > filename" option to list the contents and pipe them to a file. Once you've edited the file you can then apply it using "crontab filename":
  • Login as root
  • crontab -l > newcron
  • Edit newcron file.
  • crontab newcron
Alternatively you can use the "crontab -e" option to edit the crontab file directly.

The entries have the following elements:
field          allowed values
-----          --------------
minute         0-59
hour           0-23
day of month   1-31
month          1-12
day of week    0-7 (both 0 and 7 are Sunday)
user           Valid OS user
command        Valid command or script.
The first 5 fields can be specified using the following rules:
*       - All available values or "first-last".
3-4     - A single range representing each possible value from the start to the end of the range inclusive.
1,2,5,6 - A specific list of values.
1-3,5-8 - A specific list of ranges.
0-23/2  - Every other value in the specified range.
The following entry runs a cleanup script at 01:00 each Sunday. Any output or errors from the script are piped to /dev/null to prevent a buildup of mails to root:
0 1 * * 0 /u01/app/oracle/dba/weekly_cleanup > /dev/null 2>&1

Cluster Wide CRON Jobs On Tru64

On clustered systems cron is node-specific. If you need a job to fire once per cluster, rather than once per node you need an alternative approach to the standard cron job. One approach is put forward in the HP best practices document (Using cron in a TruCluster Server Cluster), but in my opinion a more elegant solution is proposed by Jason Orendorf of HP Tru64 Unix Enterprise Team (TruCluster Clustercron).

In his solution Jason creates a file called /bin/cronrun with the following contents:
#!/bin/ksh
set -- $(/usr/sbin/cfsmgr -F raw /)
shift 12
[[ "$1" = "$(/bin/hostname -s)" ]] && exit 0
exit 1
This script returns TRUE (0) only on the node which is the CFS serving cluster_root.

All cluster wide jobs should have a crontab entry on each node of the cluster like:
5 * * * * /bin/cronrun && /usr/local/bin/myjob
Although the cron jobs fire on all nodes, the "/bin/cronrun &&" part of the entry prevents the script from running on any node except the current CFS serving cluster_root.

Useful Profile Settings

The following .profile settings rely on the default shell for the user being set to the Korn shell (/bin/ksh).

The backspace key can be configured by adding the following entry:
stty erase "^H"
The command line history can be accessed using [Esc][k] by adding the following entry:
set -o vi
Auto completion of paths using a double strike of the [Esc] key can be configured by adding the following entry:
set filec

Useful Files

Here are some files that may be of use:

Path               Contents
/etc/passwd        User settings.
/etc/group         Group settings for users.
/etc/hosts         Hostname lookup information.
/etc/system        Kernel parameters for Solaris.
/etc/sysconfigtab  Kernel parameters for Tru64.

Monday, August 16, 2010

converting numbers to words

To convert a date into numbers and words

select to_char(sysdate,'J') "WORDS" from dual;
select to_date(to_char(sysdate,'J'),'J') "WORDS" from dual;
select to_char(to_date( to_char(sysdate,'J') ,'J'), 'JSP') "WORDS" from dual;

----*--------------------------------*-----------------------
To convert numbers into words

select to_char(to_date(873,'J'), 'JSP') as converted_form from dual;

------*-------------------------------------*------------------------
To convert words into numbers


SELECT LEVEL wordasint FROM dual
WHERE TO_CHAR(TO_DATE(LEVEL,'J'), 'JSP') = 'ONE HUNDRED TWENTY-THREE'
CONNECT BY TO_CHAR(TO_DATE(LEVEL-1,'J'),'JSP') != 'ONE HUNDRED TWENTY-THREE'
AND LEVEL < 124; -- for 10g and above
SELECT sp, n
FROM (SELECT 'EIGHTY-SIX THOUSAND THREE HUNDRED NINETY-NINE' sp FROM dual)
MODEL DIMENSION BY (1 dim)
MEASURES (0 n, sp) RULES ITERATE (86400) UNTIL (TO_CHAR(DATE '0001-01-01' +
(ITERATION_NUMBER/86400),'SSSSSSP')=sp[1])
(n[1]=ITERATION_NUMBER);

------------------------------------------------------------------------

Unknown Functions in Oracle

A word of caution: the risk of using Oracle undocumented functions or procedures is that Oracle may remove them without any notice, and there will be no support from Oracle for them.

To get similar characters among the two fields (from 9i onwards)

SELECT merge$actions('lNvl', 'pppl') FROM dual;
SELECT replace(merge$actions(1204, 1260),'BB','') FROM dual;

------------------------------------------------------------------------------
To view blob details (from 10g onwards)

SELECT sys_op_cl2c(ad_finaltext),s.product_id FROM pm.print_media s

-----------------------------------------------------------------------------
To get the block numbers used by a particular table (from 9i onwards)

select dbms_rowid.rowid_block_number(rowid) from dual;

SELECT COUNT(DISTINCT dbms_rowid.rowid_block_number(rowid)),
COUNT( dbms_rowid.rowid_block_number(rowid))FROM pm.print_media;

----------------------------------------------------------------------
To convert into HEX (from 9i onwards)

SELECT sys_op_descend('9') FROM dual;

-----------------------------------------------------------------------------
To generate a null column (from 9i onwards)

SELECT sys_op_lvl(1,21,1,1,1,1,2) FROM dual;

-------------------------------------------------------------------------------
To generate and return a globally unique identifier of 16 bytes; the returned data type is RAW(16) (from 9i onwards)

SELECT sys_guid() FROM dual;

-----------------------------------------------------------------------
To get comma separated values row-wise (from 10g onwards)

select sys.odcivarchar2list('Football','Rugby') from dual;

select column_value from table(
select sys.odcivarchar2list('Football','Rugby') from dual);
-----------------------------------------------------------------------
To get data in ascii characters (from 10g onwards)


SELECT sys_op_c2c(9068986) FROM dual;
-------------------------------------------------------------------------
To get the number representation of a rowid (from 9i onwards)

SELECT rowid, sys_op_rpb(rowid),dbms_rowid.rowid_block_number(rowid) FROM scott.emp
-------------------------------------------------------------------------
Similar to null (from 10g onwards)

SELECT sys_fbt_insdel, decode(sys_fbt_insdel,null,1,0) FROM dual;
-------------------------------------------------------------------------
To convert hex to num (from 9i onwards)


SELECT sys_op_rawtonum('000000FF'),UTL_RAW.CAST_FROM_BINARY_INTEGER(255) FROM dual;
-------------------------------------------------------------------------
To check whether a number is even or odd (from 9i onwards)

Return the value of the bit at position N
The return value is 0 or 1

SELECT sys_op_vecbit('255',0),sys_op_vecbit('255',1),sys_op_vecbit('22',1),sys_op_vecbit('22',0) FROM dual;
-------------------------------------------------------------------------
Return the binary AND of two hex values (from 9i onwards)

SELECT sys_op_vecand('FC','FD') from dual;
SELECT sys_op_vecand('FF','FD') from dual;
-----------------------------------------------------------------------
Return the binary OR of two hex values (from 9i onwards)


SELECT sys_op_vecor(('FC'),('FE')) from dual;
-----------------------------------------------------------------------
Return the binary XOR of two hex values (from 9i onwards)

SELECT sys_op_vecxor(('FF'),('FE')) from dual;
-----------------------------------------------------------------------
To get ref value visible (from 9i onwards)

SELECT sys_op_r2o(CUSTOMER_REF),CUSTOMER_REF FROM oe.oc_orders WHERE rownum = 1;

-----------------------------------------------------------------------
To compare null (from 9i onwards)

SELECT 'hi there' FROM DUAL WHERE NULL = NULL;

SELECT 'hi there' FROM DUAL WHERE sys_op_map_nonnull (NULL) = sys_op_map_nonnull (NULL);
SELECT 'hi there' FROM DUAL WHERE nvl(null,'FF')='FF';
select sys_op_map_nonnull(null) from dual; --similar to nvl(null,'FF')
select * from dual where sys_op_map_nonnull(null) = 'FF';
SELECT 'hi there' FROM DUAL WHERE to_char(1||NULL) = to_char(1||NULL);
-----------------------------------------------------------------------

About Tkprof

tkprof is one of the most useful utilities available to DBAs for diagnosing performance issues.  It essentially formats a trace file into a more readable format for performance analysis.  The DBA can then identify and resolve performance issues such as poor SQL, indexing, and wait events. 

tkprof has been historically difficult to use for many reasons.  First, the entire process of enabling tracing, finding trace files, and executing the utility against them is a burdensome task.  Once the DBA finally has the trace file output the typical response is “Now what do I do”?  Second, even though tkprof formats the data, it lacks any additional insight needed to remedy the problems revealed.  In fact, problems are not even highlighted, thereby putting more work on the DBA to analyze the output, assess the problems, and determine what to do. 
The DBA will use tkprof and session tracing when the database or a particular session is having performance problems.  tkprof will generally be used infrequently, when researching a very particular performance issue.  A user may complain that the response time for a session is abysmal compared to the prior week.  Session tracing and tkprof can be used to see exactly what is happening on the database, enabling the DBA to take corrective action.

The utility can also be used to view SQL that is being executed for an application.  In some situations, this will be the only mechanism a DBA will have to view SQL.  These situations include the execution of encrypted PL/SQL code on the database or submission of SQL statements from third party applications.
Analyzing tkprof Results
So what should DBAs be looking for?  Here’s a small checklist of items to watch for in tkprof formatted files: 

·     Compare the number of parses to number of executions.  A well-tuned system will have one parse per n executions of a statement and will eliminate the re-parsing of the same statement.
·     Search for SQL statements that do not use bind variables (:variable).  These statements should be modified to use bind variables; a short sketch follows this list.
·     Identify those statements that perform full table scans, multiple disk reads, and high CPU consumption.  These performance benchmarks are defined by the DBA and need to be tailored to each database.  What may be considered a high number of disk reads for an OLTP application may not even be minimal for a data warehouse implementation.
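
As a sketch, using the employee table from the trace examples later in this section (the bind variable name v_emp_id is arbitrary), the first pair of statements below forces a separate parse for each literal value, while the bind variable version is parsed once and re-executed:

-- Hard parsed separately: a new statement for each literal
select * from employee where emp_id = 87933;
select * from employee where emp_id = 87934;

-- Parsed once, then re-executed with a different bind value (SQL*Plus)
variable v_emp_id number
exec :v_emp_id := 87933
select * from employee where emp_id = :v_emp_id;
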
The tkprof process will be explained in six easy steps.
Step 1: Check the Environment
Before tracing can be enabled, the environment must first be configured by performing the following steps:

·     Enable Timed Statistics – This parameter enables the collection of certain vital statistics such as CPU execution time, wait events, and elapsed times.  The resulting trace output is more meaningful with these statistics.  The command to enable timed statistics is:
ALTER SYSTEM SET TIMED_STATISTICS = TRUE;  
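
Timed statistics can also be enabled for just the current session, which avoids changing the instance-wide setting:

ALTER SESSION SET TIMED_STATISTICS = TRUE;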

·     Check the User Dump Destination Directory – The trace files generated by Oracle can be numerous and large.  These files are placed by Oracle in user_dump_dest directory as specified in the init.ora.  The user dump destination can also be specified for a single session using the alter session command.  Make sure that enough space exists on the device to support the number of trace files that you expect to generate.

SQL> select value
         from v$parameter
         where name = 'user_dump_dest';

VALUE
---------------------------------
C:\oracle9i\admin\ORCL92\udump

Once the directory name is obtained, the corresponding space command (OS dependent) will report the amount of available space.  Delete unwanted trace files before starting a new trace to free up the disk space. 
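
For example, with a hypothetical Unix dump destination (the exact command is OS dependent):

df -k /u01/app/oracle/admin/ORCL92/udump     (Unix/Linux)
dir C:\oracle9i\admin\ORCL92\udump           (Windows)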
Step 2: Turn Tracing On
The next step in the process is to enable tracing.  By default, tracing is disabled due to the burden (5-10%) it places on the database.  Tracing can be defined at the session level:

ALTER SESSION SET SQL_TRACE = TRUE;

DBMS_SESSION.SET_SQL_TRACE(TRUE);

A DBA may enable tracing for another user’s session by:

DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(sid,serial#,true);

where the sid (Session ID) and serial# can be obtained from the v$session view.  This package is owned by the SYS user, and therefore the executor must be SYS or be granted the EXECUTE privilege by the SYS user.
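
For example, to look up the sid and serial# for a particular user's sessions:

SELECT sid, serial#, username
  FROM v$session
 WHERE username = 'SCOTT';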

Once tracing is enabled, Oracle generates and stores the statistics in the trace file.    The trace file name is version specific.   Table 5.1 below contains the version naming conventions for foreground processes.

Version   Naming Convention      Example
-------   --------------------   ----------------------
7.3.4     snnn_pid.trc           s000_4714.trc
8.0.5     ora_pid.trc            ora_2345.trc
8.1.7     ora_pid_instance.trc   ora_13737_asgard81.trc
9.x       instance_ora_pid.trc   asgard91_ora_15313.trc

Table 5.1 - Oracle Trace File Naming Conventions
Supplied with this information, the DBA can construct a query that will return the trace file for a given session or for all sessions attached to the database.  The query below (Users_Trace_Files.sql) will show the trace file name for each process.

users_trace_files.sql

column username format a10
column trace_file format a70
select b.username, c.value || '\' || lower(d.value) || '_ora_' ||
       to_char(a.spid, 'fm00000') || '.trc' "TRACE_FILE"
from v$process a, v$session b, v$parameter c, v$parameter d
where a.addr   = b.paddr
and c.name     = 'user_dump_dest'
and d.name     = 'db_name'
and b.username is not null;

USERNAME   TRACE_FILE
---------- --------------------------------------------------------
SYS        C:\oracle9i\admin\ORCL92\udump\ORCL92_ora_03164.trc
SCOTT      C:\oracle9i\admin\ORCL92\udump\ORCL92_ora_02264.trc
DAVE       C:\oracle9i\admin\ORCL92\udump\ORCL92_ora_03578.trc

Notice that the trace files are for each session and not for each named user.  Given that the SYS user has two connections to the database, the commands for each session would be in separate trace files.

The query can be modified to return the file name for the currently connected session. The script below will return the file name for the current session.

select c.value || '\' || lower(d.value) || '_ora_' ||
       to_char(a.spid, 'fm00000') || '.trc' "TRACE FILE"
from v$process a, v$session b, v$parameter c, v$parameter d
where a.addr = b.paddr
and b.audsid = userenv('sessionid')
and c.name   = 'user_dump_dest'
and d.name   = 'db_name';

TRACE FILE
---------------------------------------------------------------
C:\oracle9i\admin\ORCL92\udump\ORCL92_ora_03164.trc

Both queries above generate the trace file names (with Oracle9i on Windows XP) that would exist if the session were to be traced.  However, there is no indication in any V$ view that a session is currently being traced.  The only way to really know if tracing is being performed is to inspect the file names and dates in user_dump_dest directory.  For this reason, a DBA should not trace a session indefinitely, as it will continue to consume both performance resources and file system resources.  If the user_dump_dest directory fills, the Oracle database will come to a screeching halt.

When the DBA determines that enough data has been gathered, the next step is to disable tracing.
Step 3: Turn Tracing Off
The same options that we use to enable tracing are used to disable it.  These include:

ALTER SESSION SET SQL_TRACE = FALSE;

DBMS_SESSION.SET_SQL_TRACE(FALSE);

To disable tracing for another user’s session use:

DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(sid,serial#,false);

This process is a perfect candidate for automation.  The code below (start_trace.sql) creates a stored procedure that automates all the steps discussed so far.  It can also serve as a wrapper for the standard methods of enabling tracing.  Start_trace.sql accepts the sid and serial# for the session that needs tracing.  It requires that a time interval, in seconds, be set to run the trace so that it doesn't run perpetually and bog down the session.  When the time has elapsed, it will disable tracing for the session and display the relevant trace information: user, time, and trace file name.



create or replace procedure start_trace
 (
   v_sid     in number,
   v_serial# in number,
   seconds   in number)

---------------------------------------------
-- 2003 - Oracle Utilities
-- D. Moore
--
-- This procedure serves as a wrapper to 
-- session tracing.  It accepts 
-- a sid and serial#, along with the amount of
-- time in seconds that the trace should last.
-- The trace will be stopped when that time
-- period expires.  After tracing is turned
-- off, the name of the trace file will be
-- displayed.
---------------------------------------------

IS
   v_user           varchar2 (32); 
   stop_trace_cmd   varchar2 (200);
   duration         number;
   v_spid           number;
   dump_dest        varchar2 (200);
   db_name          varchar2 (32);
   v_version        varchar2 (32);
   v_compatible     varchar2 (32);
   file_name        varchar2 (32);
   no_session_found exception;

BEGIN
    begin
      select a.username, b.spid into v_user,v_spid 
         from v$session a, v$process b 
         where a.sid     = v_sid  and
               a.serial# = v_serial# and
               a.paddr   = b.addr;
      exception
      when NO_DATA_FOUND then
         raise no_session_found;
     
    end; 

    dbms_system.set_sql_trace_in_session(v_sid,v_serial#,true);
    dbms_output.put_line('Tracing Started for User: '
                 || v_user);
    dbms_output.put_line('Tracing Start Time: '
                 || TO_CHAR(SYSDATE, 'MM-DD-YYYY HH24:MI:SS'));  
         

    ---------------------------------------------------
    -- Sleep for the amount of seconds specified as 
    -- seconds input parameter.  When complete, stop
    -- the tracing and display the resulting trace file
    -- name
    ---------------------------------------------------

    if seconds is null then
       duration := 60;
    else
       duration := seconds;
    end if;

    dbms_lock.sleep(duration);

    -- the time allotted has now expired. Disable
    -- tracing and output the trace file information
     
    dbms_system.set_sql_trace_in_session(v_sid,v_serial#,false);
    dbms_output.put_line ('Tracing Stop Time: '
                 || TO_CHAR(SYSDATE, 'MM-DD-YYYY HH24:MI:SS'));  
   
    -- get all of the data needed to format the trace file name

    select value into dump_dest
       from v$parameter
       where name = 'user_dump_dest';

    select value into db_name
       from v$parameter
       where name = 'db_name'; 

    -- we need the version of the database in order to determine
    -- the naming scheme for the trace file

    dbms_utility.db_version(v_version, v_compatible);
      
    if substr(v_version,1,1) = '9' then
       file_name := db_name || '_ora_' || v_spid || '.trc';
    elsif substr(v_version,1,3) = '8.1' then
       file_name := 'ora_' || v_spid || '_' || db_name || '.trc';
    elsif substr(v_version,1,3) = '8.0' then
       file_name := 'ora_' || v_spid || '.trc';
    end if;

    dbms_output.put_line('Trace Directory: ' || dump_dest);
    dbms_output.put_line('Trace Filename: ' || file_name);

    exception
      when no_session_found then
         dbms_output.put_line('No session found for sid and serial# specified');

END start_trace;

The output from start_trace.sql is displayed below.  The time interval specified was 30 and we can see the elapsed time of the trace in the timestamps below.

SQL> exec start_trace(17, 6157, 30);

Tracing Started for User: SCOTT
Tracing Start Time: 12-26-2002 14:55:12
Tracing Stop Time: 12-26-2002 14:55:42
Trace Directory: C:\oracle9i\admin\ORCL92\udump
Trace Filename: ORCL92_ora_5472.trc

The next step is to run tkprof against the trace file.  
Step 4: Locate Trace File and Execute tkprof
Locating the file is easy because the script above gives us the file name.  tkprof will format the raw trace file, although the file is somewhat readable without tkprof.

Raw Trace File

PARSING IN CURSOR #1 len=44 dep=0 uid=59 oct=3 lid=59 tim=535446373886 hv=159129
656 ad='12cbbe70'
 select * from employee where emp_id = 87933
END OF STMT
PARSE #1:c=0,e=37469,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=4,tim=535446373874
EXEC #1:c=0,e=71,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=535446375834
FETCH #1:c=31250,e=42564,p=10,cr=416,cu=0,mis=0,r=1,dep=0,og=4,tim=535446418910
FETCH #1:c=0,e=3852,p=0,cr=57,cu=0,mis=0,r=0,dep=0,og=4,tim=535446424026
STAT #1 id=1 cnt=1 pid=0 pos=1 obj=30497 op='TABLE ACCESS FULL EMPLOYEE '
=====================
PARSING IN CURSOR #1 len=44 dep=0 uid=59 oct=3 lid=59 tim=535448474894 hv=159129
656 ad='12cbbe70'
 select * from employee where emp_id = 87933
END OF STMT
PARSE #1:c=0,e=146,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=535448474882
EXEC #1:c=0,e=76,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=535448476767
FETCH #1:c=31250,e=30553,p=12,cr=416,cu=0,mis=0,r=1,dep=0,og=4,tim=535448507870
FETCH #1:c=15625,e=3832,p=0,cr=57,cu=0,mis=0,r=0,dep=0,og=4,tim=535448512927
STAT #1 id=1 cnt=1 pid=0 pos=1 obj=30497 op='TABLE ACCESS FULL EMPLOYEE '
=====================
PARSING IN CURSOR #1 len=44 dep=0 uid=59 oct=3 lid=59 tim=535449209407 hv=159129
656 ad='12cbbe70'
 select * from employee where emp_id = 87933
END OF STMT
PARSE #1:c=0,e=111,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=535449209395
EXEC #1:c=0,e=74,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=535449211302
FETCH #1:c=31250,e=32623,p=8,cr=416,cu=0,mis=0,r=1,dep=0,og=4,tim=535449244513
FETCH #1:c=15625,e=3918,p=0,cr=57,cu=0,mis=0,r=0,dep=0,og=4,tim=535449249648
STAT #1 id=1 cnt=1 pid=0 pos=1 obj=30497 op='TABLE ACCESS FULL EMPLOYEE '
=====================
PARSING IN CURSOR #1 len=44 dep=0 uid=59 oct=3 lid=59 tim=535449801444 hv=159129
656 ad='12cbbe70'
 select * from employee where emp_id = 87933
END OF STMT
PARSE #1:c=0,e=102,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=535449801433
EXEC #1:c=0,e=74,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=535449803310
FETCH #1:c=31250,e=31503,p=7,cr=416,cu=0,mis=0,r=1,dep=0,og=4,tim=535449835358
FETCH #1:c=15625,e=4039,p=0,cr=57,cu=0,mis=0,r=0,dep=0,og=4,tim=535449840721
STAT #1 id=1 cnt=1 pid=0 pos=1 obj=30497 op='TABLE ACCESS FULL EMPLOYEE '
=====================
PARSING IN CURSOR #1 len=44 dep=0 uid=59 oct=3 lid=59 tim=535450369301 hv=159129
656 ad='12cbbe70'
 select * from employee where emp_id = 87933
END OF STMT
PARSE #1:c=0,e=101,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=535450369290
EXEC #1:c=0,e=76,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=535450371203
FETCH #1:c=15625,e=28362,p=5,cr=416,cu=0,mis=0,r=1,dep=0,og=4,tim=535450400245
FETCH #1:c=15625,e=4333,p=0,cr=57,cu=0,mis=0,r=0,dep=0,og=4,tim=535450405578
STAT #1 id=1 cnt=1 pid=0 pos=1 obj=30497 op='TABLE ACCESS FULL EMPLOYEE '
=====================

With minimal effort, a programmer could create a trace file parser and formatter similar to tkprof that provides the trace data in a format even more suitable for analysis.

The tkprof command can now be executed from the operating system prompt.

C:\oracle9i\admin\ORCL92\udump>tkprof ORCL92_ora_3064.trc output.txt insert=tkprof.sql record=Allsql.sql

tkprof: Release 9.2.0.1.0 - Production on Thu Dec 26 13:22:29 2002

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.

Based on the command above, tkprof will process the file ORCL92_ora_3064.trc and format the results in the file output.txt.  Two other files were also created (tkprof.sql and Allsql.sql) that will be discussed later.
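
For reference, the general form of the command is shown below; each option is described under the command line options later in this section:

tkprof tracefile outputfile [explain=user/password] [table=schema.table]
       [print=n] [aggregate=yes|no] [insert=filename] [sys=yes|no]
       [record=filename] [sort=option]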
Step 5: Analyze tkprof Output
This is the most difficult step in the process.  Each tkprof output file contains a header, body, and summary section. The header simply displays the trace file name, definitions, and sort options selected.  The body contains the performance metrics for SQL statements.  The summary section contains an aggregate of performance statistics for all SQL statements in the file. 

tkprof Output

tkprof: Release 9.2.0.1.0 - Production on Tue Dec 24 15:32:43 2002

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.

Trace file: ORCL92_ora_3064.trc
Sort options: default

********************************************************************************

count    = number of times OCI procedure was executed
cpu      = cpu time in seconds executing
elapsed  = elapsed time in seconds executing
disk     = number of physical reads of buffers from disk
query    = number of buffers gotten for consistent read
current  = number of buffers gotten in current mode (usually for update)
rows     = number of rows processed by the fetch or execute call
********************************************************************************


select *
from
 employee where emp_id = 87933


call     count       cpu    elapsed       disk      query    current        rows

------- ------  -------- ---------- ---------- ---------- ----------  ----------

Parse       10      0.00       0.03          0          0          0           0

Execute     10      0.00       0.00          0          0          0           0

Fetch       20      0.34       0.35         72       4730          0          10

------- ------  -------- ---------- ---------- ---------- ----------  ----------

total       40      0.34       0.39         72       4730          0          10


Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 59

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  TABLE ACCESS FULL EMPLOYEE

********************************************************************************


The output displays a table of performance metrics after each unique SQL statement.  Each row in the table corresponds to each of the three steps required in SQL processing.

1.   Parse – The translation of the SQL into an execution plan.  This step includes syntax checks, permissions, and all object dependencies.
2.   Execute – The actual execution of the statement.
3.   Fetch – The number of rows returned for a SELECT statement.
The table columns include the following:

·     Count – The number of times a statement was parsed, executed, or fetched.
·     CPU – The total CPU time in seconds for all parse, execute, or fetch calls.
·     Elapsed – Total elapsed time in seconds for all parse, execute, or fetch calls.
·     Disk – The number of physical disk reads from the datafiles for all parse, execute, or fetch calls.
·     Query – The number of buffers retrieved for all parse, execute, or fetch calls.
·     Current – The number of buffers retrieved in current mode (INSERT, UPDATE, or DELETE statements).
·     Rows – The number of rows processed by the fetch or execute call.
Observe from the tkprof output above that the SQL statement performed a TABLE ACCESS FULL, meaning a full-table scan.  Full-table scans can degrade performance, especially when accessing a small subset of the data in a table. In this case, the query is selecting one row, yet all 100,000 rows in the table are scanned.  This is a perfect situation to add an index on the EMP_ID column of the EMPLOYEE table:

SQL> CREATE INDEX emp_idx1 ON employee (emp_id);

Index created.

Let’s examine the performance of this query again, this time with the index enabled. 

select *
from
 employee where emp_id = 87933


call     count       cpu    elapsed       disk      query    current        rows

------- ------  -------- ---------- ---------- ---------- ----------  ----------

Parse        1      0.03       0.05          1          1          0           0

Execute      1      0.00       0.00          0          0          0           0

Fetch        2      0.00       0.03          3          4          0           1

------- ------  -------- ---------- ---------- ---------- ----------  ----------

total        4      0.03       0.09          4          5          0           1


Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 59

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  TABLE ACCESS BY INDEX ROWID EMPLOYEE
      1   INDEX RANGE SCAN EMP_IDX1 (object id 30498)

********************************************************************************

CPU time improved by a factor of roughly 11 (0.03 vs. 0.34 seconds) compared to the benchmark before the index was added.
Step 6: Load tkprof Results into Tables
Loading tkprof data into the database is optional, but it can be worthwhile for those DBAs that want historical data or the ability to access data via SQL queries to generate reports.  The command used earlier specified insert=tkprof.sql, which generated the following SQL in tkprof.sql.

CREATE TABLE  tkprof_table
(
 date_of_insert                       DATE
,cursor_num                           NUMBER
,depth                                NUMBER
,user_id                              NUMBER
,parse_cnt                            NUMBER
,parse_cpu                            NUMBER
,parse_elap                           NUMBER
,parse_disk                           NUMBER
,parse_query                          NUMBER
,parse_current                        NUMBER
,parse_miss                           NUMBER
,exe_count                            NUMBER
,exe_cpu                              NUMBER
,exe_elap                             NUMBER
,exe_disk                             NUMBER
,exe_query                            NUMBER
,exe_current                          NUMBER
,exe_miss                             NUMBER
,exe_rows                             NUMBER
,fetch_count                          NUMBER
,fetch_cpu                            NUMBER
,fetch_elap                           NUMBER
,fetch_disk                           NUMBER
,fetch_query                          NUMBER
,fetch_current                        NUMBER
,fetch_rows                           NUMBER
,ticks                                NUMBER
,sql_statement                        LONG
);
INSERT INTO tkprof_table values
(
  SYSDATE, 1, 0, 59, 0, 0, 0, 0, 0, 0, 0
, 1, 0, 192, 0, 0, 0, 1, 0
, 0, 0, 0, 0, 0, 0, 0, 4294966155
, 'alter session set sql_trace=true
');
INSERT INTO tkprof_table VALUES
(
  SYSDATE, 2, 1, 0, 1, 0, 1232, 0, 0, 0, 1
, 1, 0, 745, 0, 0, 0, 0, 0
, 1, 0, 115, 0, 3, 0, 1, 17866289
, 'select obj#,type#,ctime,mtime,stime,status,dataobj#,flags,oid$, spare1, spare
2 from obj$ where owner#=:1 and name=:2 and namespace=:3 and(remoteowner=:4 or r
emoteowner is null and :4 is null)and(linkname=:5 or linkname is null and :5 is
null)and(subname=:6 or subname is null and :6 is null)
');
INSERT INTO tkprof_table VALUES
(
  SYSDATE, 3, 1, 0, 1, 0, 1400, 0, 0, 0, 1
, 1, 0, 658, 0, 0, 0, 0, 0
, 1, 0, 131, 0, 3, 0, 1, 5463
, 'select ts#,file#,block#,nvl(bobj#,0),nvl(tab#,0),intcols,nvl(clucols,0),audit
$,flags,pctfree$,pctused$,initrans,maxtrans,rowcnt,blkcnt,empcnt,avgspc,chncnt,a
vgrln,analyzetime, samplesize,cols,property,nvl(degree,1),nvl(instances,1),avgsp
c_flb,flbcnt,kernelcols,nvl(trigflag, 0),nvl(spare1,0),nvl(spare2,0),spare4,spar
e6 from tab$ where obj#=:1
');
INSERT INTO tkprof_table VALUES
(
  SYSDATE, 4, 1, 0, 2, 0, 1110, 0, 0, 0, 1
, 2, 15625, 757, 0, 0, 0, 0, 0
, 2, 0, 221, 0, 6, 0, 2, 8966
, 'select type#,blocks,extents,minexts,maxexts,extsize,extpct,user#,iniexts,NVL(
lists,65535),NVL(groups,65535),cachehint,hwmincr, NVL(spare1,0) from seg$ where
ts#=:1 and file#=:2 and block#=:3
');
INSERT INTO tkprof_table VALUES
(
  SYSDATE, 4, 1, 0, 1, 0, 1802, 0, 0, 0, 1
, 1, 0, 1089, 0, 0, 0, 0, 0
, 2, 0, 489, 0, 5, 0, 1, 23441
, 'select i.obj#,i.ts#,i.file#,i.block#,i.intcols,i.type#,i.flags, i.property,i.
pctfree$,i.initrans,i.maxtrans,i.blevel,i.leafcnt,i.distkey, i.lblkkey,i.dblkkey
,i.clufac,i.cols,i.analyzetime,i.samplesize,i.dataobj#, nvl(i.degree,1),nvl(i.in
stances,1),i.rowcnt,mod(i.pctthres$,256),i.indmethod#,i.trunccnt,nvl(c.unicols,0
),nvl(c.deferrable#+c.valid#,0), nvl(i.spare1,i.intcols),i.spare4,spare2,spare6,
 decode(i.pctthres$,null,null, mod(trunc(i.pctthres$/256),256)) from ind$ i, (se
lect enabled, min(cols) unicols, min(to_number(bitand(defer,1))) deferrable#, mi
n(to_number(bitand(defer,4))) valid# from cdef$ where obj#=:1 and enabled > 1 gr
oup by enabled) c where i.obj#=c.enabled(+) and i.bo#=:1
');
INSERT INTO tkprof_table VALUES
(
  SYSDATE, 5, 1, 0, 1, 0, 910, 0, 0, 0, 1
, 1, 0, 573, 0, 0, 0, 0, 0
, 2, 0, 147, 0, 3, 0, 1, 5409
, 'select pos#,intcol#,col#,spare1,bo#,spare2 from icol$ where obj#=:1
');
INSERT INTO tkprof_table VALUES
(
  SYSDATE, 6, 1, 0, 1, 15625, 1426, 0, 0, 0, 1
, 1, 0, 775, 0, 0, 0, 0, 0
, 6, 0, 1744, 0, 3, 0, 5, 10773
, 'select name,intcol#,segcol#,type#,length,nvl(precision#,0),decode(type#,2,nvl
(scale,-127/*MAXSB1MINAL*/),178,scale,179,scale,180,scale,181,scale,182,scale,18
3,scale,231,scale,0),null$,fixedstorage,nvl(deflength,0),default$,rowid,col#,pro
perty, nvl(charsetid,0),nvl(charsetform,0),spare1,spare2,nvl(spare3,0) from col$
 where obj#=:1 order by intcol#
');
INSERT INTO tkprof_table VALUES
(
  SYSDATE, 8, 1, 0, 1, 0, 831, 0, 0, 0, 1
, 1, 0, 597, 0, 0, 0, 0, 0
, 1, 0, 59, 0, 1, 0, 0, 5736
, 'select con#,obj#,rcon#,enabled,nvl(defer,0) from cdef$ where robj#=:1
');
INSERT INTO tkprof_table VALUES
(
  SYSDATE, 9, 1, 0, 1, 0, 973, 0, 0, 0, 1
, 1, 0, 650, 0, 0, 0, 0, 0
, 1, 0, 43, 0, 2, 0, 0, 5050
, 'select con#,type#,condlength,intcols,robj#,rcon#,match#,refact,nvl(enabled,0)
,rowid,cols,nvl(defer,0),mtime,nvl(spare1,0) from cdef$ where obj#=:1
');
INSERT INTO tkprof_table VALUES
(
  SYSDATE, 1, 0, 59, 1, 31250, 58068, 1, 1, 0, 1
, 1, 0, 85, 0, 0, 0, 0, 0
, 2, 0, 37301, 3, 4, 0, 1, 39511
, ' select * from employee where emp_id = 87933
');
INSERT INTO tkprof_table VALUES
(
  SYSDATE, 2, 1, 0, 2, 0, 1122, 0, 0, 0, 1
, 2, 0, 672, 0, 0, 0, 0, 0
, 2, 0, 178, 0, 6, 0, 2, 12416444
, 'select o.owner#,o.name,o.namespace,o.remoteowner,o.linkname,o.subname,o.datao
bj#,o.flags from obj$ o where o.obj#=:1
');
INSERT INTO tkprof_table VALUES
(
  SYSDATE, 1, 0, 59, 1, 0, 353, 0, 0, 0, 1
, 1, 0, 148, 0, 0, 0, 0, 0
, 0, 0, 0, 0, 0, 0, 0, 1160
, 'alter session set sql_trace=false
');

This file contains the DDL to create the table as well as the data to load.  If the table already exists, the error will be ignored when it tries to create the table again.
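
Once loaded, the statistics can be queried like any other table.  For instance, a quick sketch that lists statements by physical reads during fetch:

SELECT fetch_disk, fetch_cpu, exe_cpu, sql_statement
  FROM tkprof_table
 ORDER BY fetch_disk DESC;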
tkprof Command Line Options
tkprof provides many useful command line options that provide additional functionality for the DBA.

·     print – Lists only the first n SQL statements in the output file.  If nothing is specified, all statements will be listed.  Use this option when the list needs to be limited to the “Top n” statements.  This is useful when combined with a sorting option to enable the top n statements by CPU, or disk reads, or parses, etc.
·     aggregate – When “Yes”, tkprof will combine the statistics from multiple user executions of the same SQL statement.  When “No”, the statistics will be listed each time the statement is executed.
·     insert – Creates a file that will load the statistics into a table in the database for further processing.  Choose this option if you want to perform any advanced analysis of the tkprof output.
·     sys – Enables or disables the inclusion of SQL statements executed by the SYS user, including recursive SQL statements.  The default is to enable. 
·     table – Used in the Explain Plan command (if specified) for Oracle to load data temporarily into an Oracle table.  The user must specify the schema and table name for the plan table.  If the table exists, all rows will be deleted; otherwise tkprof will create the table and use it.
·     record - creates a SQL script with the specified filename that contains all non-recursive SQL statements from the trace file.  For DBAs wanting to log the SQL statements in a separate file, this is the option to use.  In the example earlier, the contents of the Allsql.sql file include:
alter session set sql_trace=true ;
select * from employee where emp_id = 87933 ;
alter session set sql_trace=false ;

·     explain – Executes an Explain Plan for each statement in the trace file and displays the output.  Explain Plan is less useful when used in conjunction with tkprof than it is when used alone.  Explain Plan provides the predicted optimizer execution path without actually executing the statement.  tkprof shows you the actual execution path and statistics after the statement is executed.  In addition, running Explain Plan against SQL statements that were captured and saved is always problematic given dependencies and changes in the database environment.
·     sort – Sorts the SQL statements in the trace file by the criteria deemed most important by the DBA.  This option allows the DBA to view the SQL statements that consume the most resources at the top of the file, rather than searching the entire file contents for the poor performers.  The following are the data elements available for sorting:
·     prscnt – The number of times the SQL was parsed.
·     prscpu – The CPU time spent parsing.
·     prsela – The elapsed time spent parsing the SQL.
·     prsdsk – The number of physical reads required for the parse.
·     prsqry – The number of consistent block reads required for the parse.
·     prsmis – The number of library cache misses during the parse.
·     prscu – The number of current block reads required for the parse.
·     execnt – The number of times the SQL statement was executed. 
·     execpu – The CPU time spent executing the SQL. 
·     exeela – The elapsed time spent executing the SQL.
·     exedsk – The number of physical reads during execution.
·     exeqry – The number of consistent block reads during execution.
·     execu – The number of current block reads during execution.
·     exerow – The number of rows processed during execution.
·     exemis – The number of library cache misses during execution.
·     fchcnt – The number of fetches performed.
·     fchcpu – The CPU time spent fetching rows.
·     fchela – The elapsed time spent fetching rows.
·     fchdsk – The number of physical disk reads during the fetch.
·     fchqry – The number of consistent block reads during the fetch.
·     fchcu – The number of current block reads during the fetch.
·     fchrow – The number of rows fetched for the query.
Many sort options exist; however, some are more useful than others.  execnt, execpu, exedsk, and prscnt are the most useful sort parameters when formatting trace output with tkprof, because they are more indicative of most SQL performance issues.  The execution counts are most indicative of performance issues and therefore should bubble to the top.  In particular, this is true of the SQL statement that used the most CPU – execpu.  The prscnt parameter is important because it shows the SQL statements that are parsed most, usually a result of not using bind variables.
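
For example, a run that reports only the ten statements that consumed the most CPU during execution:

tkprof ORCL92_ora_3064.trc output.txt sort=execpu print=10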
SQL Execution Statistics in 9i
The SQL tuning process prior to 9.2 involved executing SQL commands, then OS commands, and then SQL commands again.  This is a very time-consuming and burdensome process.  In 9.2, Oracle decided to retain the SQL metrics for each statement in the SGA (library cache) while the statement remains cached.  The DBA could then diagnose SQL issues at a SQL prompt and leave tkprof alone.  This is a vast improvement over prior versions.

Oracle9.2 contains the following views that enable the DBA to identify SQL issues directly from a SQL prompt. These views should be used to periodically check SQL statistics and full-table scans, alerting the DBA to problem areas requiring corrective action.

·     v$sql_plan  - This view shows the same information as shown by Explain Plan except it is the actual execution plan and not the predicted one – just like tkprof and even better than Explain Plan.
·     v$sql_plan_statistics  - This view contains the execution statistics for each operation (step) in the v$sql_plan. Queries should access this view and look for poor SQL operations including TABLE ACCESS FULL – full-table scans.
·     v$sql_plan_statistics_all  - This view combines data from v$sql_plan, v$sql_plan_statistics, and v$sql_workarea.

Neither v$sql_plan_statistics nor v$sql_plan_statistics_all is populated by default; the parameter statistics_level=all must be set.
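
Once statistics_level is set, problem operations can be spotted straight from a SQL prompt.  A sketch against v$sql_plan (in 9.2, statements are identified by hash_value rather than by the sql_id of later releases):

ALTER SESSION SET statistics_level = ALL;

SELECT hash_value, operation, options, object_name
  FROM v$sql_plan
 WHERE operation = 'TABLE ACCESS'
   AND options = 'FULL';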
Enable tracing only on those sessions that are having problems.  Be selective to minimize the performance burden on the sessions and to retain more free space in the user dump destination directory.

Rename trace files after tracing is disabled.  The new file name should be something more meaningful that will be recognizable at a later date.  The name employee_index.trc is much more meaningful than ORCL92_ora_3172.trc.
Delete trace files that are no longer needed to reduce clutter and free disk space.

Explain Plan is not as useful when used in conjunction with tkprof since the trace file contains the actual execution path of the SQL statement.  Use Explain Plan when anticipated execution statistics are desired without actually executing the statement.

When tracing a session, remember that nothing in v$session indicates that a session is being traced.  Therefore, trace with caution and remember to disable tracing after an adequate amount of trace data has been generated.

tkprof does not control the contents of a trace file; it simply formats them.  Oracle provides multiple ways to actually generate the trace file.  tkprof is valuable for detailed trace file analysis.  For those DBAs that prefer a simpler tracing mechanism with instant feedback, the autotrace utility should be used.
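
As a quick illustration, autotrace is enabled directly in SQL*Plus (this assumes the PLUSTRACE role or DBA privileges), and the execution plan and statistics print immediately after each statement:

SQL> SET AUTOTRACE ON
SQL> SELECT * FROM employee WHERE emp_id = 87933;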