Sunday, August 15, 2010

EXPLAIN PLAN Usage

When an SQL statement is passed to the server, the Cost Based Optimizer (CBO) uses database statistics to create an execution plan, which it then uses to navigate through the data. Once you've identified a problem query, the first thing you should do is EXPLAIN the statement to check the execution plan that the CBO has created. This will often reveal that the query is not using the relevant indexes, or that indexes to support the query are missing. Interpretation of the execution plan is beyond the scope of this article.

Plan Table

The explain plan process stores data in the PLAN_TABLE. This table can be located in the current schema or a shared schema and is created in SQL*Plus as follows:
SQL> CONN sys/password AS SYSDBA
Connected
SQL> @$ORACLE_HOME/rdbms/admin/utlxplan.sql
SQL> GRANT ALL ON sys.plan_table TO public;
SQL> CREATE PUBLIC SYNONYM plan_table FOR sys.plan_table;

AUTOTRACE - The Easy Option?

Switching on the AUTOTRACE setting in SQL*Plus causes an execution plan and statistics to be displayed for every query.
SQL> SET AUTOTRACE ON
SQL> SELECT *
  2  FROM   emp e, dept d
  3  WHERE  e.deptno = d.deptno
  4  AND    e.ename  = 'SMITH';

     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM
---------- ---------- --------- ---------- --------- ---------- ----------
    DEPTNO     DEPTNO DNAME          LOC
---------- ---------- -------------- -------------
      7369 SMITH      CLERK           7902 17-DEC-80        800
        20         20 RESEARCH       DALLAS



Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE
   1    0   NESTED LOOPS
   2    1     TABLE ACCESS (FULL) OF 'EMP'
   3    1     TABLE ACCESS (BY INDEX ROWID) OF 'DEPT'
   4    3       INDEX (UNIQUE SCAN) OF 'PK_DEPT' (UNIQUE)




Statistics
----------------------------------------------------------
         81  recursive calls
          4  db block gets
         27  consistent gets
          0  physical reads
          0  redo size
        941  bytes sent via SQL*Net to client
        425  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL>
This is a relatively easy way to get the execution plan but there is an issue. In order to get the execution plan the statement must be run to completion. If the query is particularly inefficient and/or returns many rows, this may take a considerable time. At first glance, using the TRACEONLY option of AUTOTRACE seems to remove this issue, but this option merely suppresses the output of the query data, it doesn't prevent the statement being run. As such, long running queries will still take a long time to complete, but they will not present their data. The following example shows this in practice.
CREATE OR REPLACE FUNCTION pause_for_secs(p_seconds IN NUMBER) RETURN NUMBER AS
BEGIN
  DBMS_LOCK.sleep(p_seconds);
  RETURN p_seconds;
END;
/

Function created.

SQL> SET TIMING ON
SQL> SET AUTOTRACE ON
SQL> SELECT pause_for_secs(10) FROM DUAL;

PAUSE_FOR_SECS(10)
------------------
                10

1 row selected.

Elapsed: 00:00:10.28

Execution Plan
----------------------------------------------------------
Plan hash value: 1550022268

-------------------------------------------------------------------------
| Id  | Operation        | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT |      |     1 |       |     2   (0)| 00:00:01 |
|   1 |  FAST DUAL       |      |     1 |       |     2   (0)| 00:00:01 |
-------------------------------------------------------------------------


Statistics
----------------------------------------------------------
        189  recursive calls
          0  db block gets
        102  consistent gets
          0  physical reads
          0  redo size
        331  bytes sent via SQL*Net to client
        332  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
         13  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL> SET AUTOTRACE TRACEONLY
SQL> SELECT pause_for_secs(10) FROM DUAL;

1 row selected.

Elapsed: 00:00:10.26

Execution Plan
----------------------------------------------------------
Plan hash value: 1550022268

-------------------------------------------------------------------------
| Id  | Operation        | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT |      |     1 |       |     2   (0)| 00:00:01 |
|   1 |  FAST DUAL       |      |     1 |       |     2   (0)| 00:00:01 |
-------------------------------------------------------------------------


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          0  consistent gets
          0  physical reads
          0  redo size
        331  bytes sent via SQL*Net to client
        332  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL>
The query takes the same time to return (about 10 seconds) whether the TRACEONLY option is used or not. If the TRACEONLY option prevented the query running, you would expect it to return instantly, like an EXPLAIN PLAN.

The solution to this is to use the TRACEONLY EXPLAIN option, which only performs the EXPLAIN PLAN, rather than running the statement.
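For example (a minimal sketch, not part of the original session, reusing the pause_for_secs function created above):

SQL> SET AUTOTRACE TRACEONLY EXPLAIN
SQL> SELECT pause_for_secs(10) FROM DUAL;

Because the statement is only explained and never executed, it returns almost immediately and displays just the execution plan.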

EXPLAIN PLAN

The EXPLAIN PLAN method doesn't require the query to be run, greatly reducing the time it takes to get an execution plan for long-running queries compared to AUTOTRACE. First the query must be explained:
SQL> EXPLAIN PLAN FOR
  2  SELECT *
  3  FROM   emp e, dept d
  4  WHERE  e.deptno = d.deptno
  5  AND    e.ename  = 'SMITH';

Explained.

SQL>
Then the execution plan is displayed:
SQL> @$ORACLE_HOME/rdbms/admin/utlxpls.sql

Plan Table
--------------------------------------------------------------------------------
| Operation                 |  Name    |  Rows | Bytes|  Cost  | Pstart| Pstop |
--------------------------------------------------------------------------------
| SELECT STATEMENT          |          |       |      |        |       |       |
|  NESTED LOOPS             |          |       |      |        |       |       |
|   TABLE ACCESS FULL       |EMP       |       |      |        |       |       |
|   TABLE ACCESS BY INDEX RO|DEPT      |       |      |        |       |       |
|    INDEX UNIQUE SCAN      |PK_DEPT   |       |      |        |       |       |
--------------------------------------------------------------------------------

8 rows selected.

SQL>
For parallel queries use the utlxplp.sql script instead of utlxpls.sql.
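As an aside (not covered in the original text): on Oracle 9i Release 2 and later you can also display the most recently explained statement with the DBMS_XPLAN package instead of the utlxpls.sql script:

SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);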

Statement ID

If multiple people are accessing the same plan table, or you would like to keep a history of the execution plans, you should use a STATEMENT_ID. This associates a user-specified ID with each plan, which can be used when retrieving the data.
SQL> EXPLAIN PLAN SET STATEMENT_ID = 'TIM' FOR
  2  SELECT *
  3  FROM   emp e, dept d
  4  WHERE  e.deptno = d.deptno
  5  AND    e.ename  = 'SMITH';

Explained.

SQL> @explain.sql TIM

PLAN                                   OBJECT_NAME     OBJECT_TYPE     BYTES  COST PARTITION_START PARTITION_STOP
-------------------------------------- --------------- --------------- ----- ----- --------------- ---------------
Select Statement                                                          57     4
  1.1 Nested Loops                                                        57     4
    2.1 Table Access (Full)            EMP             TABLE              37     3
    2.2 Table Access (By Index Rowid)  DEPT            TABLE              20     1
      3.1 Index (Unique Scan)          PK_DEPT         INDEX (UNIQUE)            0

5 rows selected.

SQL>
By default the Oracle scripts do not accept a statement_id parameter. You can easily modify the scripts, or you can use the script listed under DBA Scripts on this site.
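If you prefer not to modify the scripts, a simple hierarchical query against PLAN_TABLE does the same job. This is only a sketch of the idea, not the explain.sql script used above:

SQL> SELECT LPAD(' ', 2*(LEVEL-1)) || operation || ' ' || options AS plan_line,
  2         object_name
  3  FROM   plan_table
  4  START WITH id = 0 AND statement_id = 'TIM'
  5  CONNECT BY PRIOR id = parent_id AND statement_id = 'TIM'
  6  ORDER SIBLINGS BY id;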

How can we monitor how fast a table is imported?

If you need to monitor how fast rows are imported from a running import job, try one of the following methods:
Method 1:
select substr(sql_text,instr(sql_text,'INTO "'),30) table_name,
rows_processed,
round((sysdate-to_date(first_load_time,'yyyy-mm-dd hh24:mi:ss'))*24*60,1) minutes,
trunc(rows_processed/((sysdate-to_date(first_load_time,'yyyy-mm-dd hh24:mi:ss'))*24*60)) rows_per_min
from sys.v_$sqlarea
where sql_text like 'INSERT %INTO "%'
and command_type = 2
and open_versions > 0;
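(In V$SQLAREA, COMMAND_TYPE = 2 identifies INSERT statements, which is what a conventional-path import issues, so the rows_per_min figure is an average since the insert into the current table began.)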
For this to work one needs to be on Oracle 7.3 or higher (7.2 might also be OK). If the import has more than one table, this statement will only show information about the current table being imported.
Contributed by Osvaldo Ancarola, Bs. As. Argentina.
Method 2:
Use the FEEDBACK=n import parameter. This parameter tells IMP to display a dot for every n rows imported, as shown in the example below.
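For example (a hypothetical invocation; the user, dump file and table names are made up):

imp scott/tiger file=emp.dmp tables=emp feedback=10000

This prints one dot for every 10,000 rows loaded into EMP, giving a rough visual progress indicator.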

What is STATSPACK and how does one use it?

Statspack is a set of performance monitoring and reporting utilities provided by Oracle from Oracle8i and above. Statspack provides improved BSTAT/ESTAT functionality, though the old BSTAT/ESTAT scripts are still available. For more information about STATSPACK, read the documentation in file $ORACLE_HOME/rdbms/admin/spdoc.txt.
Install Statspack:
cd $ORACLE_HOME/rdbms/admin
sqlplus "/ as sysdba" @spdrop.sql -- Install Statspack -
sqlplus "/ as sysdba" @spcreate.sql-- Enter tablespace names when prompted
Use Statspack:
sqlplus perfstat/perfstat
exec statspack.snap;   -- Take a performance snapshot
exec statspack.snap;   -- Take a second snapshot some time later
-- Get a list of snapshots
select SNAP_ID, SNAP_TIME from STATS$SNAPSHOT;
@spreport.sql          -- Enter two snapshot id's for difference report
Other Statspack Scripts:
* sppurge.sql - Purge (delete) a range of Snapshot Ids between the specified begin and end Snap Ids
* spauto.sql - Schedule a dbms_job to automate the collection of STATSPACK statistics (see the sketch after this list)
* spcreate.sql - Installs the STATSPACK user, tables and package on a database (run as SYS)
* spdrop.sql - Deinstall STATSPACK from the database (run as SYS)
* spreport.sql - Report on differences between values recorded in two snapshots
* sptrunc.sql - Truncates all data in Statspack tables
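The spauto.sql script essentially submits a DBMS_JOB that takes a snapshot every hour. A rough, hand-written equivalent (only a sketch; run as PERFSTAT with job_queue_processes > 0, and check spauto.sql itself for the exact job definition):

VARIABLE jobno NUMBER
BEGIN
  -- Take a Statspack snapshot now, then every hour
  DBMS_JOB.submit(:jobno, 'statspack.snap;', SYSDATE, 'SYSDATE + 1/24');
  COMMIT;
END;
/
PRINT jobno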

Saturday, August 14, 2010

How to get the time difference between two date columns?

Oracle allows two date values to be subtracted from each other, returning a numeric value indicating the number of days between the two dates (which may be a fraction). This example will show how to relate it back to a time value.

Let's investigate some solutions. Test data:

SQL> CREATE TABLE dates (date1 DATE, date2 DATE);
Table created.
SQL>
SQL> INSERT INTO dates VALUES (SYSDATE, SYSDATE-1);
1 row created.
SQL> INSERT INTO dates VALUES (SYSDATE, SYSDATE-1/24);
1 row created.
SQL> INSERT INTO dates VALUES (SYSDATE, SYSDATE-1/60/24);
1 row created.
SQL> SELECT (date1 - date2) FROM dates;
DATE1-DATE2
-----------
1
.041666667
.000694444
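The three differences correspond to one day, one hour (1/24 ≈ .041666667 of a day) and one minute (1/1440 ≈ .000694444 of a day).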

Solution 1:
---------------------------------------------

SQL> SELECT floor(((date1-date2)*24*60*60)/3600)
  2         || ' HOURS ' ||
  3         floor((((date1-date2)*24*60*60) -
  4         floor(((date1-date2)*24*60*60)/3600)*3600)/60)
  5         || ' MINUTES ' ||
  6         round((((date1-date2)*24*60*60) -
  7         floor(((date1-date2)*24*60*60)/3600)*3600 -
  8         (floor((((date1-date2)*24*60*60) -
  9         floor(((date1-date2)*24*60*60)/3600)*3600)/60)*60) ))
 10         || ' SECS ' time_difference
 11  FROM   dates;

TIME_DIFFERENCE
--------------------------------------------------------------------------------
24 HOURS 0 MINUTES 0 SECS
1 HOURS 0 MINUTES 0 SECS
0 HOURS 1 MINUTES 0 SECS


Solution 2:
-----------------------


If you don't want to go through the floor and ceiling maths, try this method.

SQL> SELECT to_number( to_char(to_date('1','J') +
  2         (date1 - date2), 'J') - 1) days,
  3         to_char(to_date('00:00:00','HH24:MI:SS') +
  4         (date1 - date2), 'HH24:MI:SS') time
  5  FROM   dates;

DAYS TIME
---------- --------
1 00:00:00
0 01:00:00
0 00:01:00
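A third option (an addition to the two solutions above, assuming Oracle 9i or later) is to let Oracle do the formatting by converting the day difference into an INTERVAL DAY TO SECOND value with NUMTODSINTERVAL; each result is then displayed as a number of days followed by hours:minutes:seconds.

SQL> SELECT NUMTODSINTERVAL(date1 - date2, 'DAY') AS time_difference
  2  FROM   dates;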

Friday, August 13, 2010

User Managed Recovery of a Datafile when a Backup is not Available

If a datafile is damaged and you don't have any backup of it, you can still recover the datafile if the following conditions are met (a quick way to check the first two is sketched after the list).

1) You have all archived redo logs available since the datafile was created.

2) The control file contains the name of the lost file. That means the control file is either current or was taken after the datafile was created.

3) The datafile does not belong to the SYSTEM tablespace.
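Checking the first two conditions is straightforward (a sketch added here, not part of the original walkthrough): V$DATABASE shows the log mode, and V$DATAFILE is read from the control file, so the lost file must still appear there along with its creation time.

SQL> SELECT log_mode FROM v$database;

SQL> SELECT file#, name, creation_time FROM v$datafile;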

To illustrate the scenario I will demonstrate the whole procedure with an example.

A)Add a datafile to a tablespace.

SQL> ALTER TABLESPACE USER_TBS ADD DATAFILE '/oradata2/data1/dbase/datafile03.dbf' SIZE 1M;
Tablespace altered.

B)Drop the datafile.

SQL> !rm /oradata2/data1/dbase/datafile03.dbf

C) Take the affected datafile offline if the database is open. If you try to re-create the datafile while it is still online, the operation fails:

SQL> ALTER DATABASE CREATE DATAFILE '/oradata2/data1/dbase/datafile03.dbf' AS '/oradata1/arju/created_new.dbf';
ALTER DATABASE CREATE DATAFILE '/oradata2/data1/dbase/datafile03.dbf' AS '/oradata1/arju/created_new.dbf'
*
ERROR at line 1:
ORA-01182: cannot create database file 7 - file is in use or recovery
ORA-01110: data file 7: '/oradata2/data1/dbase/datafile03.dbf'

The file cannot be re-created while the database is still using it, so take it offline first.

SQL> alter database datafile '/oradata2/data1/dbase/datafile03.dbf' offline;
Database altered.

D) Create a new, empty datafile to replace the damaged datafile that has no corresponding backup. Here I re-created the damaged file 7 at the new location '/oradata1/arju/created_new.dbf'.

SQL> ALTER DATABASE CREATE DATAFILE '/oradata2/data1/dbase/datafile03.dbf' AS '/oradata1/arju/created_new.dbf';
Database altered.

E)Perform media recovery on the empty datafile.
SQL> RECOVER DATAFILE '/oradata1/arju/created_new.dbf';
Media recovery complete.

F)Make the datafile online.

SQL> alter database datafile '/oradata1/arju/created_new.dbf' ONLINE;
Database altered.

Now we will look at the scenario where the database cannot find a datafile while shutting down and starting up.

SQL> !rm /oradata1/arju/created_new.dbf

SHUTDOWN IMMEDIATE will fail as it can't find one of the datafiles.

SQL> shutdown immediate;
ORA-01116: error in opening database file 7
ORA-01110: data file 7: '/oradata1/arju/created_new.dbf'
ORA-27041: unable to open file
Solaris-AMD64 Error: 2: No such file or directory
Additional information: 3

SQL> shutdown abort
ORACLE instance shut down.

SQL> startup
ORACLE instance started.

Total System Global Area 167772160 bytes
Fixed Size 2019288 bytes
Variable Size 88080424 bytes
Database Buffers 71303168 bytes
Redo Buffers 6369280 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 7 - see DBWR trace file
ORA-01110: data file 7: '/oradata1/arju/created_new.dbf'


SQL> ALTER DATABASE CREATE DATAFILE '/oradata1/arju/created_new.dbf' AS '/oradata2/data1/dbase/datafile03.dbf';

Database altered.

SQL> RECOVER DATAFILE 7;
Media recovery complete.

SQL> ALTER DATABASE OPEN;
Database altered.

How to Restore spfile from backup

1.Connect to target Database with RMAN.

i) If you have lost your spfile while your database is still running, then:

RMAN>CONNECT TARGET /

ii) If your database is not up and you don't use a recovery catalog, then use:
RMAN>CONNECT TARGET /
RMAN>SET DBID 3386862614
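The DBID shown above is just the one from this example. You can look up your own database's DBID ahead of time (while the database is still mountable) with:

SQL> SELECT dbid, name FROM v$database;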

2) Start the instance with a dummy parameter file.
RMAN>STARTUP FORCE NOMOUNT

3) Restore the server parameter file.
To restore it to the default location:
RMAN> RESTORE SPFILE FROM AUTOBACKUP;
To restore it to another location:
RMAN> RESTORE SPFILE TO 'new_location' FROM AUTOBACKUP;

If you want to restore to a pfile, then use:
RMAN> RESTORE SPFILE TO PFILE '/tmp/initTEMP.ora';

4)Start the instance.
RMAN>STARTUP;

Recover a Lost Datafile without backup.

In my database I have added one datafile, but I have not backed it up. Now someone has accidentally dropped that datafile. Can I get my datafile back? The answer is yes, provided the database runs in ARCHIVELOG mode and all the archived logs since the creation of the datafile are available.

With an example I will demonstrate the procedure.

1) I create tablespace DATA with one datafile and after some time add another datafile. Neither datafile has a backup.

SQL> SELECT LOG_MODE FROM V$DATABASE;
LOG_MODE
------------
ARCHIVELOG

SQL> create tablespace data datafile '/oradata2/data.dbf' size 2M;
Tablespace created.

SQL> alter tablespace data add datafile '/oradata2/data1/data02.dbf' size 2M;
Tablespace altered.

SQL> select file_name from dba_data_files;
FILE_NAME
--------------------------------------------------------------------------------
/oradata2/data1/dbase/users01.dbf
/oradata2/data1/dbase/sysaux01.dbf
/oradata2/data1/dbase/undotbs01.dbf
/oradata2/data1/dbase/system01.dbf
/oradata2/data.dbf
/oradata2/data1/data02.dbf
6 rows selected.

2) Create a table to populate the datafiles.

SQL> create table before_delete tablespace data as select level a1 from dual connect by level <9999;

Table created.

SQL> insert into before_delete select level a1 from dual connect by level <9999;

9998 rows created.

SQL> commit;
Commit complete.

SQL> select file_name from dba_data_files where file_id in (select file_id from dba_extents where segment_name='BEFORE_DELETE');

FILE_NAME
--------------------------------------------------------------------------------
/oradata2/data.dbf
/oradata2/data1/data02.dbf

3) Now delete both datafiles using OS commands.

SQL> !rm /oradata2/data.dbf

SQL> !rm /oradata2/data1/data02.dbf

4) Now I want to get both datafiles back. Connect to RMAN and take the affected tablespace offline (IMMEDIATE).

SQL> !rman target /
RMAN> sql'ALTER TABLESPACE DATA OFFLINE IMMEDIATE';

5) Perform recovery of the tablespace.
RMAN> RECOVER TABLESPACE DATA;

6) Bring the tablespace back online.
RMAN> sql'ALTER TABLESPACE DATA ONLINE';

In this case Oracle first creates empty datafiles and then applies all archived redo logs and online redo logs to the tablespace, bringing it up to the current time.
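To verify that the recovery really brought the data back (a quick check I've added; it is not part of the original transcript), query the table that was created before the datafiles were deleted:

SQL> SELECT COUNT(*) FROM before_delete;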