Friday, July 16, 2010

Recovering a datafile which has no backup using RMAN

Step 1: Confirm Database Name and Identify Tablespace Name to be used for DR Test.

SQL> select INSTANCE_NAME, VERSION from v$instance;

INSTANCE_NAME VERSION
---------------- -----------------
opsdba 10.2.0.2.0


SQL> select name from v$tablespace;

NAME
------------------------------
SYSTEM
UNDOTBS1
SYSAUX
USERS
TEMP1

SQL> select file_name from dba_data_files;

FILE_NAME
----------------------------------------------------------------------
----------
/u02/ORACLE/opsdba/users01.dbf
/u02/ORACLE/opsdba/sysaux01.dbf
/u02/ORACLE/opsdba/undotbs01.dbf
/u02/ORACLE/opsdba/system01.dbf
/u02/ORACLE/opsdba/users05.dbf
/u02/ORACLE/opsdba/users02.dbf
/u02/ORACLE/opsdba/users03.dbf
/u02/ORACLE/opsdba/users06.dbf
/u02/ORACLE/opsdba/users07.dbf
/u02/ORACLE/opsdba/users04.dbf

10 rows selected.

Step 2: Create a new tablespace with one datafile, which will be used for the recovery exercise.

SQL> create tablespace drtbs datafile '/u02/ORACLE/opsdba/drtbs1.dbf'
size 100M extent management local;

Tablespace created.

SQL> select name from v$tablespace;

NAME
------------------------------
SYSTEM
UNDOTBS1
SYSAUX
USERS
DRTBS
TEMP1

6 rows selected.

SQL> select file_name from dba_data_files where tablespace_name=
'DRTBS';

FILE_NAME
--------------------------------------------------------------
/u02/ORACLE/opsdba/drtbs1.dbf

SQL> exit

Step 3: Take a full backup of the database and archived logs.

RMAN> backup database plus archivelog;


Starting backup at 28-JAN-07
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=154 devtype=DISK
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=1 recid=362 stamp=612901138
input archive log thread=1 sequence=2 recid=363 stamp=612901141
input archive log thread=1 sequence=3 recid=364 stamp=612901146
input archive log thread=1 sequence=4 recid=365 stamp=612943256
input archive log thread=1 sequence=5 recid=366 stamp=612976032
input archive log thread=1 sequence=6 recid=367 stamp=612976036
input archive log thread=1 sequence=7 recid=368 stamp=613049876
input archive log thread=1 sequence=8 recid=369 stamp=613049878
input archive log thread=1 sequence=9 recid=370 stamp=613049879
input archive log thread=1 sequence=10 recid=371 stamp=613049880
channel ORA_DISK_1: starting piece 1 at 28-JAN-07
channel ORA_DISK_1: finished piece 1 at 28-JAN-07
piece handle=/opt/oracle/backup/opsdba/OPSDBA.20070128.145.1.1.
613089429 tag=TAG20070128T223709 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:16
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=1 recid=372 stamp=613049882
input archive log thread=1 sequence=2 recid=373 stamp=613049884
input archive log thread=1 sequence=3 recid=374 stamp=613049885
input archive log thread=1 sequence=4 recid=375 stamp=613049887
input archive log thread=1 sequence=5 recid=376 stamp=613049888
input archive log thread=1 sequence=6 recid=377 stamp=613049889
input archive log thread=1 sequence=7 recid=378 stamp=613049890
channel ORA_DISK_1: starting piece 1 at 28-JAN-07
channel ORA_DISK_1: finished piece 1 at 28-JAN-07
piece handle=/opt/oracle/backup/opsdba/OPSDBA.20070128.146.1.1.
613089445 tag=TAG20070128T223709 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:08
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=1 recid=379 stamp=613089428
channel ORA_DISK_1: starting piece 1 at 28-JAN-07
channel ORA_DISK_1: finished piece 1 at 28-JAN-07
piece handle=/opt/oracle/backup/opsdba/OPSDBA.20070128.147.1.1.
613089453 tag=TAG20070128T223709 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
Finished backup at 28-JAN-07

Starting backup at 28-JAN-07
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00001 name=/u02/ORACLE/opsdba/system01.dbf
input datafile fno=00003 name=/u02/ORACLE/opsdba/sysaux01.dbf
input datafile fno=00002 name=/u02/ORACLE/opsdba/undotbs01.dbf
input datafile fno=00011 name=/u02/ORACLE/opsdba/drtbs1.dbf
input datafile fno=00004 name=/u02/ORACLE/opsdba/users01.dbf
input datafile fno=00005 name=/u02/ORACLE/opsdba/users02.dbf
input datafile fno=00006 name=/u02/ORACLE/opsdba/users03.dbf
input datafile fno=00007 name=/u02/ORACLE/opsdba/users05.dbf
input datafile fno=00010 name=/u02/ORACLE/opsdba/users04.dbf
input datafile fno=00008 name=/u02/ORACLE/opsdba/users06.dbf
input datafile fno=00009 name=/u02/ORACLE/opsdba/users07.dbf
channel ORA_DISK_1: starting piece 1 at 28-JAN-07
channel ORA_DISK_1: finished piece 1 at 28-JAN-07
piece handle=/opt/oracle/backup/opsdba/OPSDBA.20070128.148.1.1.
613089455 tag=TAG20070128T223735 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:25
Finished backup at 28-JAN-07

Starting backup at 28-JAN-07
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=2 recid=380 stamp=613089480
channel ORA_DISK_1: starting piece 1 at 28-JAN-07
channel ORA_DISK_1: finished piece 1 at 28-JAN-07
piece handle=/opt/oracle/backup/opsdba/OPSDBA.20070128.149.1.1.
613089480 tag=TAG20070128T223800 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
Finished backup at 28-JAN-07

Starting Control File and SPFILE Autobackup at 28-JAN-07
piece handle=/opt/oracle/product10gpr2/dbs/c-1493612009-20070128-03
comment=NONE
Finished Control File and SPFILE Autobackup at 28-JAN-07
RMAN>exit

Step 4: Add a new datafile to that tablespace and verify that the new file is now a member of it. Also switch a few log files for confirmation.

SQL> select file_name from dba_data_files where tablespace_name=
'DRTBS';

FILE_NAME
----------------------------------------------------------------------
----------
/u02/ORACLE/opsdba/drtbs1.dbf

SQL> alter tablespace drtbs add datafile '/u02/ORACLE/opsdba/drtbs2.dbf' size 100m;

Tablespace altered.

SQL> select file_name from dba_data_files where tablespace_name=
'DRTBS';

FILE_NAME
----------------------------------------------------------------------
----------
/u02/ORACLE/opsdba/drtbs1.dbf
/u02/ORACLE/opsdba/drtbs2.dbf

SQL> alter system switch logfile;

System altered.

SQL> /

System altered.

Step 5: Create a new table in that tablespace and perform some DML operations. After the DML, switch a few log files.

SQL> create table t1(col1 number(10)) tablespace DRTBS;

Table created.

SQL> insert into t1 values (&a);
Enter value for a: 1
old 1: insert into t1 values(&a)
new 1: insert into t1 values(1)

1 row created.

SQL> /
Enter value for a: 2
old 1: insert into t1 values(&a)
new 1: insert into t1 values(2)

1 row created.

SQL> /
Enter value for a: 3
old 1: insert into t1 values(&a)
new 1: insert into t1 values(3)

1 row created.

SQL> /
Enter value for a: 4
old 1: insert into t1 values(&a)
new 1: insert into t1 values(4)

1 row created.

SQL> commit;
Commit complete.

SQL> select * from t1;

COL1
----------
1
2
3
4

SQL> alter system switch logfile;

System altered.

Step 6: At the OS level, remove all files of that tablespace, including the newly added one (for which no backup exists).

opsdba:/opt/oracle>cd /u02/ORACLE/opsdba/
opsdba:/u02/ORACLE/opsdba>ls -lrt drtbs*.dbf
total 1441496
-rw-r----- 1 oracle dba 104865792 Jan 28 22:38 drtbs1.dbf
-rw-r----- 1 oracle dba 104865792 Jan 28 23:08 drtbs2.dbf
opsdba:/u02/ORACLE/opsdba>rm -r drtbs*.dbf
opsdba:/u02/ORACLE/opsdba>ls -lrt drtbs*.dbf
ls: drtbs*.dbf: No such file or directory
opsdba:/u02/ORACLE/opsdba>

Step 7: Try to bring the tablespace offline; we get the following error message.

opsdba:/u02/ORACLE/opsdba>sql

SQL> alter tablespace drtbs offline;
alter tablespace drtbs offline
*
ERROR at line 1:
ORA-01116: error in opening database file 11
ORA-01110: data file 11: '/u02/ORACLE/opsdba/drtbs1.dbf'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3

Step 8: Now bring the tablespace offline with the IMMEDIATE option and confirm.

SQL> alter tablespace drtbs offline immediate;

Tablespace altered.

SQL> select TABLESPACE_NAME,STATUS from dba_tablespaces;

TABLESPACE_NAME STATUS
------------------------------ ---------
SYSTEM ONLINE
UNDOTBS1 ONLINE
SYSAUX ONLINE
USERS ONLINE
TEMP1 ONLINE
DRTBS OFFLINE
6 rows selected.

Step 9: Now connect to RMAN and confirm that no backup exists for the newly added datafile. Then restore the tablespace; RMAN creates the newly added datafile as part of the restore process. This is a new feature in 10g.

SQL> select file_id, file_name from dba_data_files where
tablespace_name='DRTBS';

FILE_ID
----------
FILE_NAME
----------------------------------------------------------------------
----------
11
/u02/ORACLE/opsdba/drtbs1.dbf

12
/u02/ORACLE/opsdba/drtbs2.dbf


SQL> exit;

opsdba:/u02/ORACLE/opsdba>rman target /

Recovery Manager: Release 10.2.0.2.0 - Production on Sun Jan 28 23:18:09 2007

Copyright (c) 1982, 2005, Oracle. All rights reserved.

connected to target database: OPSDBA (DBID=1493612009)

RMAN> list backup of datafile 11;

using target database control file instead of recovery catalog

List of Backup Sets
===================

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
129 Full 669.09M DISK 00:00:15 28-JAN-07
BP Key: 129 Status: AVAILABLE Compressed: NO Tag: TAG
20070128T223735
Piece Name: /opt/oracle/backup/opsdba/OPSDBA.20070128.148.1.1.
613089455
List of Datafiles in backup set 129
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
11 Full 2747296 28-JAN-07 /u02/ORACLE/opsdba/drtbs1.dbf

RMAN> list backup of datafile 12;

No output: datafile 12 has no backup.

RMAN> restore tablespace drtbs;

Starting restore at 28-JAN-07
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=144 devtype=DISK

creating datafile fno=12 name=/u02/ORACLE/opsdba/drtbs2.dbf
channel ORA_DISK_1: starting datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00011 to /u02/ORACLE/opsdba/drtbs1.dbf
channel ORA_DISK_1: reading from backup piece /opt/oracle/backup/
opsdba/OPSDBA.20070128.148.1.1.613089455
channel ORA_DISK_1: restored backup piece 1
piece handle=/opt/oracle/backup/opsdba/OPSDBA.20070128.148.1.1.
613089455 tag=TAG20070128T223735
channel ORA_DISK_1: restore complete, elapsed time: 00:00:04
Finished restore at 28-JAN-07





Step 10: Start Recovery of that Tablespace.

opsdba:/u02/ORACLE/opsdba>rman target /

Recovery Manager: Release 10.2.0.2.0 - Production on Sun Jan 28 23:49:33 2007

Copyright (c) 1982, 2005, Oracle. All rights reserved.

connected to target database: OPSDBA (DBID=1493612009)

RMAN> recover tablespace drtbs;

Starting recover at 28-JAN-07
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=154 devtype=DISK

starting media recovery
Sun Jan 28 23:22:36 2007
alter database recover logfile '/u02/ORACLE/opsdba/arch/arch_1_2_613052894.dbf'
Sun Jan 28 23:22:36 2007
Media Recovery Log /u02/ORACLE/opsdba/arch/arch_1_2_613052894.dbf
Sun Jan 28 23:22:36 2007
Recovery of Online Redo Log: Thread 1 Group 3 Seq 3 Reading mem 0
Mem# 0 errs 0: /u02/ORACLE/opsdba/redo03.log
Sun Jan 28 23:22:36 2007
Recovery of Online Redo Log: Thread 1 Group 2 Seq 4 Reading mem 0
Mem# 0 errs 0: /u02/ORACLE/opsdba/redo02.log
Sun Jan 28 23:22:36 2007
Recovery of Online Redo Log: Thread 1 Group 1 Seq 5 Reading mem 0
Mem# 0 errs 0: /u02/ORACLE/opsdba/redo01.log
Sun Jan 28 23:22:36 2007
Media Recovery Complete (opsdba)
Completed: alter database recover logfile '/u02/ORACLE/opsdba/arch/arch_1_2_613052894.dbf'
Sun Jan 28 23:22:52 2007
media recovery complete, elapsed time: 00:00:00

Finished recover at 28-JAN-07

RMAN> exit


Recovery Manager complete.

Step 11: Bring the tablespace online and confirm.
SQL> alter tablespace drtbs online;

Tablespace altered.

SQL> select TABLESPACE_NAME,STATUS from dba_tablespaces;

TABLESPACE_NAME STATUS
------------------------------ ---------
SYSTEM ONLINE
UNDOTBS1 ONLINE
SYSAUX ONLINE
USERS ONLINE
TEMP1 ONLINE
DRTBS ONLINE

6 rows selected.

SQL> alter system switch logfile;

System altered.

SQL> /

System altered.

SQL> select * from t1;

COL1
----------
1
2
3
4

SQL> select file_name from dba_data_files where tablespace_name=
'DRTBS';

FILE_NAME
----------------------------------------------------------------------
----------
/u02/ORACLE/opsdba/drtbs1.dbf
/u02/ORACLE/opsdba/drtbs2.dbf

SQL> exit


Step 12: As standard practice, take a FULL DATABASE BACKUP immediately after the recovery.

opsdba:/u02/ORACLE/opsdba>rman target /

Recovery Manager: Release 10.2.0.2.0 - Production on Sun Jan 28 23:25:01 2007

Copyright (c) 1982, 2005, Oracle. All rights reserved.

connected to target database: OPSDBA (DBID=1493612009)

RMAN> backup database plus archivelog;


Starting backup at 28-JAN-07
current log archived
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=144 devtype=DISK

skipping archive log file /u02/ORACLE/opsdba/arch/arch_1_1_613052894.dbf; already backed up 1 time(s)
skipping archive log file /u02/ORACLE/opsdba/arch/arch_1_2_613052894.dbf; already backed up 1 time(s)
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=3 recid=381 stamp=613091353
input archive log thread=1 sequence=4 recid=382 stamp=613091355
input archive log thread=1 sequence=5 recid=383 stamp=613092208
input archive log thread=1 sequence=6 recid=384 stamp=613092210
input archive log thread=1 sequence=7 recid=385 stamp=613092318
channel ORA_DISK_1: starting piece 1 at 28-JAN-07
channel ORA_DISK_1: finished piece 1 at 28-JAN-07
piece handle=/opt/oracle/backup/opsdba/OPSDBA.20070128.154.1.1.
613092318 tag=TAG20070128T232518 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
Finished backup at 28-JAN-07

Starting backup at 28-JAN-07
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00001 name=/u02/ORACLE/opsdba/system01.dbf
input datafile fno=00003 name=/u02/ORACLE/opsdba/sysaux01.dbf
input datafile fno=00002 name=/u02/ORACLE/opsdba/undotbs01.dbf
input datafile fno=00011 name=/u02/ORACLE/opsdba/drtbs1.dbf
input datafile fno=00012 name=/u02/ORACLE/opsdba/drtbs2.dbf
input datafile fno=00004 name=/u02/ORACLE/opsdba/users01.dbf
input datafile fno=00005 name=/u02/ORACLE/opsdba/users02.dbf
input datafile fno=00006 name=/u02/ORACLE/opsdba/users03.dbf
input datafile fno=00007 name=/u02/ORACLE/opsdba/users05.dbf
input datafile fno=00010 name=/u02/ORACLE/opsdba/users04.dbf
input datafile fno=00008 name=/u02/ORACLE/opsdba/users06.dbf
input datafile fno=00009 name=/u02/ORACLE/opsdba/users07.dbf
channel ORA_DISK_1: starting piece 1 at 28-JAN-07
channel ORA_DISK_1: finished piece 1 at 28-JAN-07
piece handle=/opt/oracle/backup/opsdba/OPSDBA.20070128.155.1.1.
613092320 tag=TAG20070128T232520 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:25
Finished backup at 28-JAN-07

Starting backup at 28-JAN-07
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=8 recid=386 stamp=613092345
channel ORA_DISK_1: starting piece 1 at 28-JAN-07
channel ORA_DISK_1: finished piece 1 at 28-JAN-07
piece handle=/opt/oracle/backup/opsdba/OPSDBA.20070128.156.1.1.
613092346 tag=TAG20070128T232545 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
Finished backup at 28-JAN-07

Starting Control File and SPFILE Autobackup at 28-JAN-07
piece handle=/opt/oracle/product10gpr2/dbs/c-1493612009-20070128-07
comment=NONE
Finished Control File and SPFILE Autobackup at 28-JAN-07

RMAN> list backup of datafile 11;


List of Backup Sets
===================

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
129 Full 669.09M DISK 00:00:15 28-JAN-07
BP Key: 129 Status: AVAILABLE Compressed: NO Tag: TAG
20070128T223735
Piece Name: /opt/oracle/backup/opsdba/OPSDBA.20070128.148.1.1.
613089455
List of Datafiles in backup set 129
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
11 Full 2747296 28-JAN-07 /u02/ORACLE/opsdba/drtbs1.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
136 Full 669.73M DISK 00:00:21 28-JAN-07
BP Key: 136 Status: AVAILABLE Compressed: NO Tag: TAG
20070128T232520
Piece Name: /opt/oracle/backup/opsdba/OPSDBA.20070128.155.1.1.
613092320
List of Datafiles in backup set 136
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
11 Full 2748771 28-JAN-07 /u02/ORACLE/opsdba/drtbs1.dbf

RMAN> list backup of datafile 12;


List of Backup Sets
===================

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
136 Full 669.73M DISK 00:00:21 28-JAN-07
BP Key: 136 Status: AVAILABLE Compressed: NO Tag: TAG
20070128T232520
Piece Name: /opt/oracle/backup/opsdba/OPSDBA.20070128.155.1.1.
613092320
List of Datafiles in backup set 136
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
12 Full 2748771 28-JAN-07 /u02/ORACLE/opsdba/drtbs2.dbf

RMAN>

Thursday, July 15, 2010

UNIX FOR DBA

How to kill all similar processes with single command (in this case opmn)
ps -ef | grep opmn | grep -v grep | awk '{print $2}' | xargs -i kill -9 {}

Locating Files under a particular directory
find . -print |grep -i test.sql
Using AWK in UNIX

To extract a specific column of output from a UNIX command, for example the UNIX process IDs for all Oracle processes on the server (second column)
ps -ef |grep -i oracle |awk '{ print $2 }'

Changing the standard prompt for Oracle Users
Edit the .profile for the oracle user
PS1="`hostname`*$ORACLE_SID:$PWD>"
Display top 10 CPU consumers using the ps command
/usr/ucb/ps auxgw | head -11

Show number of active Oracle dedicated connection users for a particular ORACLE_SID
ps -ef | grep $ORACLE_SID|grep -v grep|grep -v ora_|wc -l

Total number of semaphores held by all instances on server
ipcs -as | awk '{sum += $9} END {print sum}'

View allocated RAM memory segments
ipcs -pmb

Manually deallocate shared memory segments
ipcrm -m <shmid>

Show mount points for a disk in AIX
lspv -l hdisk13

Display amount of occupied space (in KB) for a file or collection of files in a directory or sub-directory
du -ks * | sort -n| tail

Display total file space in a directory
du -ks .

Cleanup any unwanted trace files more than seven days old
find . -name "*.trc" -mtime +7 -exec rm {} \;

Locate Oracle files that contain certain strings
find . -print | xargs grep rollback

Locate recently created UNIX files (in the past one day)
find . -mtime -1 -print

Finding large files on the server (more than 100MB in size)
find . -size +102400 -print

Crontab :
To submit a task every Tuesday (day 2) at 2:45PM (the day-of-week is the fifth field)
45 14 * * 2 /opt/oracle/scripts/tr_listener.sh > /dev/null 2>&1
To submit a task to run every 15 minutes on weekdays (days 1-5)
0,15,30,45 * * * 1-5 /opt/oracle/scripts/tr_listener.sh > /dev/null 2>&1
To submit a task to run every hour at 15 minutes past the hour on weekends (days 6 and 0)
15 * * * 0,6 /opt/oracle/scripts/tr_listener.sh > /dev/null 2>&1

Difference Between LMTS and DMTS

Locally Managed Tablespaces:
A tablespace that manages its own extents maintains a bitmap in each datafile to keep track of the free or used status of blocks in that datafile. Each bit in the bitmap corresponds to a group of blocks. When an extent is allocated or freed for reuse, Oracle changes the bitmap values to show the new status of the blocks. These changes do not generate rollback information because they do not update tables (like sys.uet$, sys.fet$) in the data dictionary (except for special cases such as tablespace quota information).
When you create a locally managed tablespace, header bitmaps are created for each datafile. If more datafiles are added, new header bitmaps are created for each added file.
Local management of extents automatically tracks adjacent free space, eliminating the need to coalesce free extents. The sizes of extents that are managed locally can be determined automatically by the system. Alternatively, all extents can have the same size in a locally managed tablespace.
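
A quick way to see how each tablespace manages its extents is to query DBA_TABLESPACES (a minimal sketch; EXTENT_MANAGEMENT and ALLOCATION_TYPE are the relevant dictionary columns):

SQL> select tablespace_name, extent_management, allocation_type from dba_tablespaces;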

Dictionary Managed Tablespaces:
In DMT, to keep track of the free or used status of blocks, Oracle uses data dictionary tables. When an extent is allocated or freed for reuse, free space is recorded in the SYS.FET$ table, and used space in the SYS.UET$ table. Whenever space is required in one of these tablespaces, the ST (space transaction) enqueue must be obtained to do inserts and deletes against these tables. As only one process can acquire the ST enqueue at a given time, this often leads to contention. These changes generate rollback information because they update tables (like sys.uet$, sys.fet$) in the data dictionary.

Advantages of Locally Managed Tablespaces(LMT) over Dictionary Managed Tablespaces(DMT):

1. Reduced recursive space management
2. Reduced contention on data dictionary tables
3. No rollback generated
4. No coalescing required

Converting DMT to LMT:

SQL> exec dbms_space_admin.Tablespace_Migrate_TO_Local('ts1');
PL/SQL procedure successfully completed.

Converting LMT to DMT:

SQL> exec dbms_space_admin.Tablespace_Migrate_FROM_Local('ts2');
PL/SQL procedure successfully completed.

Important Points:
1. LMTs can be created as (see the sketch after this list):
a) AUTOALLOCATE: specifies that the tablespace is system managed. Users cannot specify an extent size.
b) UNIFORM: specifies that the tablespace is managed with uniform extents of SIZE bytes. The default SIZE is 1 megabyte.
2. One cannot create a locally managed SYSTEM tablespace in 8i.
3. This is possible with in 9.2.0.X, where SYSTEM tablespace is created by DBCA as locally managed by default. With a locally managed SYSTEM tablespace, the rest of the tablespaces in such database have to be locally managed as well.
4. Locally managed temporary tablespaces cannot be of type "permanent".
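
To illustrate point 1, sample DDL for both variants (a sketch; the tablespace names and datafile paths here are made up for illustration):

SQL> create tablespace lmt_auto datafile '/u02/ORACLE/opsdba/lmt_auto01.dbf' size 100M extent management local autoallocate;
SQL> create tablespace lmt_uni datafile '/u02/ORACLE/opsdba/lmt_uni01.dbf' size 100M extent management local uniform size 1M;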

Transportable tablespace EXP/IMP

Transportable tablespace EXP/IMP with OS files
=============================================


* Using RMAN CONVERT

1. Prepare for export of the tablespace.

* Check that the tablespace set is self-contained:
SQL> execute sys.dbms_tts.transport_set_check('TBS1,TBS2', true);
SQL> select * from sys.transport_set_violations;
==>These violations must be resolved before the tablespaces
can be transported

* The tablespaces need to be in READ ONLY mode in order to successfully
run a transport tablespace export.

SQL> ALTER TABLESPACE TBS1 READ ONLY;
SQL> ALTER TABLESPACE TBS2 READ ONLY;



2. Export the metadata.
* Using the original export utility

exp userid=\'sys/sys as sysdba\' file=tbs_exp.dmp log=tbs_exp.log
transport_tablespace=y tablespaces=TBS1,TBS2

* Using datapump exp

CREATE OR REPLACE DIRECTORY dpump_dir AS '/tmp/subdir' ;
GRANT READ,WRITE ON DIRECTORY dpump_dir TO system;
expdp system/password DUMPFILE=expdat.dmp DIRECTORY=dpump_dir
TRANSPORT_TABLESPACES = TBS1,TBS2

If you want to perform a transport tablespace operation with a strict
containment check, use the TRANSPORT_FULL_CHECK parameter:

expdp system/password DUMPFILE=expdat.dmp DIRECTORY = dpump_dir
TRANSPORT_TABLESPACES= TBS1,TBS2 TRANSPORT_FULL_CHECK=Y

If the tablespace set being transported is not self-contained then the
export will fail.

3. Use V$TRANSPORTABLE_PLATFORM to determine the endianness of each platform.
You can execute the following query on each platform instance:

SELECT tp.platform_id, substr(d.PLATFORM_NAME,1,30), ENDIAN_FORMAT
FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d
WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;


4. If the endian formats are different, then a conversion is necessary
to transport the tablespace set.

RMAN> convert tablespace TBS1
to platform="Linux IA (32-bit)" FORMAT '/tmp/%U';

RMAN> convert tablespace TBS2
to platform="Linux IA (32-bit)" FORMAT '/tmp/%U';

Then copy the datafiles to the target server with ftp or cp.

5. Import the transportable tablespace

* Using the original import utility

imp userid=\'sys/sys as sysdba\' file=tbs_exp.dmp log=tbs_imp.log
transport_tablespace=y datafiles='/tmp/....','/tmp/...'

* Using datapump

CREATE OR REPLACE DIRECTORY dpump_dir AS '/tmp/subdir' ;
GRANT READ,WRITE ON DIRECTORY dpump_dir TO system;

impdp system/password DUMPFILE=expdat.dmp DIRECTORY=dpump_dir
TRANSPORT_DATAFILES='/tmp/....','/tmp/...'
REMAP_SCHEMA=(source:target)
REMAP_SCHEMA=(source_sch2:target_schema_sch2)

You can use REMAP_SCHEMA if you want to change the ownership of the
transported database objects.

6. Put the tablespaces in read/write mode:

SQL> ALTER TABLESPACE TBS1 READ WRITE;
SQL> ALTER TABLESPACE TBS2 READ WRITE;


* Using DBMS_FILE_TRANSFER

You can also use DBMS_FILE_TRANSFER to copy datafiles to another host. You
need to follow the same steps specified above for ASM files.
But if the endian formats are different, then you must use the RMAN convert
after transferring the files.

This is an example of usage:

RMAN> CONVERT DATAFILE
'/hq/finance/work/tru/tbs_31.f',
'/hq/finance/work/tru/tbs_32.f',
'/hq/finance/work/tru/tbs_41.f'
TO PLATFORM="Solaris[tm] OE (32-bit)"
FROM PLATFORM="HP TRu64 UNIX"
DB_FILE_NAME_CONVERT=
"/hq/finance/work/tru/", "/hq/finance/dbs/tru"
PARALLELISM=5;

Then the same example showing destination being an +ASM diskgroup:

RMAN> CONVERT DATAFILE
'/hq/finance/work/tru/tbs_31.f',
'/hq/finance/work/tru/tbs_32.f',
'/hq/finance/work/tru/tbs_41.f'
TO PLATFORM="Solaris[tm] OE (32-bit)"
FROM PLATFORM="HP TRu64 UNIX"
DB_FILE_NAME_CONVERT=
"/hq/finance/work/tru/", "+diskgroup"
PARALLELISM=5;

RMAN Diagnostics

RMAN Performance Tuning Diagnostics
RMAN Performance Tuning Diagnostics [ID 311068.1]



PURPOSE
-------

Provide diagnostic reference for collecting RMAN backup
performance diagnostic information.

SCOPE & APPLICATION
-------------------

You should have some skills in database performance evaluation.

RMAN Performance Tuning Diagnostics
-----------------------------------

When collecting performance information for RMAN backup problems, keep in
mind that bottlenecks come from somewhere. Not all tracing shown here is
required; it is up to you to determine which will help you better define and
isolate your performance issue. Collecting the right information can help to
quickly identify and resolve any Oracle bottlenecks, and provides evidence
when it is a third-party issue.

What should be collected?

1. RMAN debug output
2. Channel debug=5 trace=1 or 5 output trace files.
3. STATSPACK reports
4. If the CPU and slow sql (rather than IO) is the issue, the 10046 trace
and tkprof.
5. The resulting trace can be used with tkprof to get the explain plan
6. Collecting data from v$backup_async_io
7. Backup to tape is slow, how to determine if it's an Oracle or a media issue?

Prerequisites:

Make sure timed_statistics = true for all performance testing.
Always set NLS_DATE_FORMAT = 'DD-MON-YYYY HH24:MI:SS' to get precise
timestamps in the RMAN output.
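
For example, on a bourne-style shell (a sketch; RMAN picks the date format up from the environment):

$ export NLS_DATE_FORMAT='DD-MON-YYYY HH24:MI:SS'
$ rman target /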


1. RMAN debug output
This will always provide information to help the issue if RMAN is at fault.
Oracle will produce a useful error as sqlcode= or rc= in the debug. If you
do not see any errors and the channel released due to errors it is likely
the failure occurred outside of RMAN and failed in Oracle (channels) or the
device output used for backup.
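
A typical way to capture the debug output is to enable it on the RMAN command line (a sketch; the log and trace file names are arbitrary):

$ rman target / debug trace=/tmp/rman.trc log=/tmp/rman.log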

2. Channel debug=5 trace=5 output trace files.
trace 1 - general trace info for skg and sbt
trace 5 - verbose channel tracing of skg and sbt calls

CONFIGURE CHANNEL DEVICE TYPE DISK DEBUG=5 TRACE 1;
or
CONFIGURE CHANNEL DEVICE TYPE SBT PARMS "ENV=(....)" DEBUG=5 TRACE 5;
Trace is only useful for tape backup and restore tracing.

Example debug=5 output:
-----------------------

krbbpc: 04/29/2005 18:03:02: backup piece 1 started,
outfname=test2_ts_STXDB01Q_2agj3kam_T20050429_s74_s74_p1

-- cut to reduce output --

krbbpc: 04/29/2005 18:03:12: backup piece 1 finished,
handle=test2_ts_STXDB01Q_2agj3kam_T20050429_s74_s74_p1
elapsed time: 0:10
performance statistics:
count of waits for each file:
38: /odb/stx02/oradata07/stxdb01q/ts_impwrctr1_data_f1.dbf
output file throughput graph:
samples taken at 1 second intervals
bytes/second: min=131072, max=13107200, mean=4915200, stddev=4586958


3. STATSPACK reports
Statspack reports should be generated for the time during the backups to
capture the top wait events, the top sql, and pl/sql consuming the
resources. You can quickly see if the RMAN sql is taking the system resources
or is competing for resources. The reports should be at level 10 and split
into 15 minute intervals. This example shows a backup spending 80% of the
time waiting for sbtwrite2. So Oracle is waiting on tape IO.

Example output:
--------------

Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) Ela Time
-------------------------------------------- ------------ ----------- --------
sbtwrite2 1,002 22 80.68
DFS lock handle 121 1 4.00
CPU time 1 3.60
control file sequential read 1,290 1 3.26
process startup 2 1 3.07

More can be determined from statspack and/or AWR output reports. The first main
indicator to look at for bottlenecks is the top wait events.

Common Issue:
-------------

sbtwrite2 --> waiting for tape
sbtbackup --> waiting for tape
CPU Time --> sql or pl/sql to compile or runtime loop issue
IO Done --> waiting for IO slaves


4. If the CPU and slow sql (rather than IO) is the issue, the 10046 event trace
and tkprof. In the RMAN script add the sql 10046 event:

RMAN> sql "alter session set events ''10046 trace name context forever, level 12''";

Usually, slower performance sql will be related to the catalog, the 10046 trace
is best performed on the catalog database unless you are specifically looking
at sql performance in the target database. If there is a catalog in use, make
sure to review note:247611.1 "Known RMAN Performance Problems". This
provides new indexes and other workarounds that can be used to remedy known
issues for a variety of performance problems.


5. The resulting trace can be used with tkprof to get the explain plan
and cost of the sql execution.

- tkprof tracefile outfile [explain=user/password] [options...]
- TKPROF filename_source filename_output EXPLAIN=[username/password]
SYS=[YES/No] TABLE=[tablename]
$ tkprof ora_xxxx.trc tkprof.out explain=user/pwd sys=yes table=table_name

We always want sys=yes because the target is backed up by SYS AS SYSDBA so all
commands will run under the sys schema.

TABLE
Specifies the schema and name of the table into which TKPROF temporarily places
execution plans before writing them to the output file. If the specified table
already exists, TKProf deletes its rows then uses it for the EXPLAIN PLAN
command. If this table does not exist, TKProf creates, uses, then drops it.
This parameter is ignored if the EXPLAIN parameter isn't used.

6. Collecting data from V$BACKUP_ASYNC_IO

Determine how much time Oracle took to open the file to backup then close. In
this example the backup of datafile "/data17/mds_sm_odr_apr2004_tab01.dbf"
took 1 hour and 28 minutes. The archivelogs from sequence# 45411 until sequence#
45420 were all placed in a single backup that completed in 11 seconds. See Note:237083.1.

Example output
--------------

V$BACKUP_ASYNC_IO:

FNAME OPEN CLOSE H:MM
------------------------------------ -------------------- -------------------- ----
/data17/mds_sm_odr_apr2004_tab01.dbf 23-MAR-2005 02:53:52 23-MAR-2005 04:21:49 1:28
/arch/mdsp_arch1_45415_518895446.log 23-MAR-2005 04:01:19 23-MAR-2005 04:09:53 :08
/arch/mdsp_arch1_45417_518895446.log 23-MAR-2005 04:01:19 23-MAR-2005 04:09:53 :08
/arch/mdsp_arch1_45412_518895446.log 23-MAR-2005 04:01:19 23-MAR-2005 04:09:53 :08
/arch/mdsp_arch1_45418_518895446.log 23-MAR-2005 04:01:19 23-MAR-2005 04:09:53 :08
/arch/mdsp_arch1_45420_518895446.log 23-MAR-2005 04:01:19 23-MAR-2005 04:09:53 :08
/arch/mdsp_arch1_45419_518895446.log 23-MAR-2005 04:01:21 23-MAR-2005 04:09:53 :08
/arch/mdsp_arch1_45416_518895446.log 23-MAR-2005 04:01:21 23-MAR-2005 04:09:53 :08
/arch/mdsp_arch1_45414_518895446.log 23-MAR-2005 04:01:21 23-MAR-2005 04:09:53 :08
/arch/mdsp_arch1_45411_518895446.log 23-MAR-2005 04:09:53 23-MAR-2005 04:11:58 :02
/arch/mdsp_arch1_45413_518895446.log 23-MAR-2005 04:09:53 23-MAR-2005 04:11:58 :02
rigg1rac_1_1 23-MAR-2005 04:01:19 23-MAR-2005 04:12:05 :11
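
Output such as the above can be produced with a query along these lines (a sketch; note that ELAPSED_TIME in V$BACKUP_ASYNC_IO is expressed in hundredths of a second):

select filename fname,
to_char(open_time,'DD-MON-YYYY HH24:MI:SS') "OPEN",
to_char(close_time,'DD-MON-YYYY HH24:MI:SS') "CLOSE",
round(elapsed_time/100/60) "MINUTES"
from v$backup_async_io
order by close_time;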


7. Backup to tape is slow, how to determine if it's an Oracle or a media issue?
Backup a tablespace that represents a good amount of data. But make sure
there is enough room to write to local disk also.
a. backup tablespace to tape # Time to tape?
RMAN> run {
allocate channel t1 device type 'sbt_tape'
parms 'SBT_LIBRARY='
debug=5 TRACE 2;
backup tablespace ;
}

b. backup tablespace to disk api # Time to disk?
RMAN> run {
allocate channel t1 device type 'sbt_tape'
parms 'SBT_LIBRARY=oracle.disksbt,ENV=(BACKUP_DIR=/path/backup)'
debug=5 TRACE 2;
backup tablespace ;
}

c. backup validate tablespace # Time to just read the data no output
RMAN> backup validate tablespace ;

With the debugging on the channels you can compare the difference in time it
takes to read the data during validate and to backup the same data to disk and
tape using the same sbt calls that are used by RMAN in either case.

For Oracle9i:

CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS
'SBT_LIBRARY=oracle.disksbt,ENV=(BACKUP_DIR=/path/backup)' debug=5 TRACE 2;

allocate channel t1 device type 'sbt_tape' parms
'SBT_LIBRARY=oracle.disksbt,ENV=(BACKUP_DIR=/path/backup)' debug=5 TRACE 2;

REFERENCES
==========

Note 94224.1 FAQ - Statspack Complete Reference
Note 237083.1 Using V$BACKUP_ASYNC_IO / V$BACKUP_SYNC_IO to Monitor RMAN Performance
Note 247611.1 Known RMAN Performance Problems
Transport the tablespace to a different platform
------------------------------------------------------------------------------------------------
How to move tablespaces across platforms using Transportable Tablespaces with RMAN [ID 371556.1]
-------------------------------------------------------------------------------------------------



Starting with Oracle Database 10g, you can transport tablespaces across
platforms. In this note there is a step by step guide about how to do it
with ASM datafiles and with OS filesystem datafiles.


Supported platforms
====================

You can query the V$TRANSPORTABLE_PLATFORM view to see the platforms that are
supported and to determine each platform's endian format (byte ordering).

SQL> select name, platform_id,platform_name from v$database;

NAME PLATFORM_ID PLATFORM_NAME
--------- ----------- ------------------------------
JK-ASSOC 13 Linux 64-bit for AMD


-------------------------------------------------------------------------------------------------
col platform_name for a30
set pages 100
SELECT PLATFORM_ID, PLATFORM_NAME, ENDIAN_FORMAT FROM V$TRANSPORTABLE_PLATFORM;
-------------------------------------------------------------------------------------------------

SQL> COLUMN PLATFORM_NAME FORMAT A32
SQL> SELECT * FROM V$TRANSPORTABLE_PLATFORM;

PLATFORM_ID PLATFORM_NAME ENDIAN_FORMAT
----------- -------------------------------- --------------
1 Solaris[tm] OE (32-bit) Big
2 Solaris[tm] OE (64-bit) Big
7 Microsoft Windows IA (32-bit) Little
10 Linux IA (32-bit) Little
6 AIX-Based Systems (64-bit) Big
3 HP-UX (64-bit) Big
5 HP Tru64 UNIX Little
4 HP-UX IA (64-bit) Big
11 Linux IA (64-bit) Little
15 HP Open VMS Little
8 Microsoft Windows IA (64-bit) Little
9 IBM zSeries Based Linux Big
13 Linux 64-bit for AMD Little
16 Apple Mac OS Big
12 Microsoft Windows 64-bit for AMD Little
17 Solaris Operating System (x86) Little


If the source platform and the target platform are of different endianness,
then an additional step must be done on either the source or target platform
to convert the tablespace being transported to the target format.
If they are of the same endianness, then no conversion is necessary and
tablespaces can be transported as if they were on the same platform.


Limitations on Transportable Tablespace Use
==============================================

1. The source and target database must use the same character set and national
character set.

2. You cannot transport a tablespace to a target database in which a tablespace
with the same name already exists. However, you can rename either the
tablespace to be transported or the destination tablespace before the
transport operation.

3. Objects with underlying objects (such as materialized views) or contained
objects (such as partitioned tables) are not transportable unless all of
the underlying or contained objects are in the tablespace set.
* Review Table "Objects Exported and Imported in Each Mode" from the
Oracle Database Utilities documentation, there are several object
types that are not exported in tablespace mode.

* If you use spatial indexes, then:
- be aware that TTS across different endian platforms are not supported
for spatial indexes in 10gR1 and 10gR2; such a limitation has been
released in 11g
- specific Spatial packages must be run before exporting and after
transportation, please see Oracle Spatial documentation.

4. Beginning with Oracle Database 10g Release 2, you can transport tablespaces
that contain XMLTypes, but you must use the IMP and EXP utilities,
not Data Pump. When using EXP, ensure that the CONSTRAINTS and TRIGGERS
parameters are set to Y (the default).

The following query returns a list of tablespaces that contain XMLTypes:

select distinct p.tablespace_name
from dba_tablespaces p, dba_xml_tables x, dba_users u, all_all_tables t
where t.table_name=x.table_name and
t.tablespace_name=p.tablespace_name and
x.owner=u.username

Transporting tablespaces with XMLTypes has the following limitations:

a.The target database must have XML DB installed.
b.Schemas referenced by XMLType tables cannot be the XML DB standard
schemas.
c.Schemas referenced by XMLType tables cannot have cyclic dependencies.
d.Any row level security on XMLType tables is lost upon import.
e.If the schema for a transported XMLType table is not present in the
target database, it is imported and registered.
If the schema already exists in the target database, an error is returned
unless the ignore=y option is set.

5. Advanced Queues: Transportable tablespaces do not support 8.0-compatible
advanced queues with multiple recipients.

6. You cannot transport the SYSTEM tablespace or objects owned by the user SYS.

7. Opaque Types (such as RAW, BFILE, and the AnyTypes) can be transported,
but they are not converted as part of the cross-platform transport operation.
Their actual structure is known only to the application, so the application
must address any endianness issues after these types are moved to the new
platform.

8. Floating-Point Numbers: BINARY_FLOAT and BINARY_DOUBLE types are transportable
using Data Pump but not the original export utility, EXP.



Transportable tablespace EXP/IMP of ASM files
=============================================

* Using RMAN CONVERT
....................

There is no direct way to exp/imp ASM files as a transportable tablespace.
However, the functionality can be achieved via RMAN.

You must follow these steps:

1. Prepare for exporting the tablespace.

* Check that the tablespace set is self-contained:
SQL>execute sys.dbms_tts.transport_set_check('TBS1,TBS2', true);
SQL> select * from sys.transport_set_violations;
==>These violations must be resolved before the tablespaces
can be transported

* The tablespaces need to be in READ ONLY mode in order to successfully
run a transport tablespace export.

SQL> ALTER TABLESPACE TBS1 READ ONLY;
SQL> ALTER TABLESPACE TBS2 READ ONLY;


2. Export the metadata.

* Using the original export utility
exp userid=\'sys/sys as sysdba\' file=tbs_exp.dmp log=tbs_exp.log
transport_tablespace=y tablespaces=TBS1,TBS2

* Using datapump exp

CREATE OR REPLACE DIRECTORY dpump_dir AS '/tmp/subdir' ;
GRANT READ,WRITE ON DIRECTORY dpump_dir TO system;

expdp system/password DUMPFILE=expdat.dmp DIRECTORY=dpump_dir
TRANSPORT_TABLESPACES = TBS1,TBS2

If you want to perform a transport tablespace operation with a strict
containment check, use the TRANSPORT_FULL_CHECK parameter:

expdp system/password DUMPFILE=expdat.dmp DIRECTORY = dpump_dir
TRANSPORT_TABLESPACES= TBS1,TBS2 TRANSPORT_FULL_CHECK=Y

If the tablespace set being transported is not self-contained, then the
export will fail.



3. Use V$TRANSPORTABLE_PLATFORM to find the exact platform name of
target database. You can execute the following query on target platform
instance:

SELECT tp.platform_id, substr(d.PLATFORM_NAME,1,30), ENDIAN_FORMAT
FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d
WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;


4. Generate an OS file from the ASM, with target platform format

RMAN> CONVERT TABLESPACE TBS1
TO PLATFORM 'HP-UX (64-bit)' FORMAT '/tmp/%U';
RMAN> CONVERT TABLESPACE TBS2
TO PLATFORM 'HP-UX (64-bit)' FORMAT '/tmp/%U';

5. Copy the generated file to target server if different from source, with
ftp or cp


6. Import the transportable tablespace

* Using the original import utility

imp userid=\'sys/sys as sysdba\' file=tbs_exp.dmp log=tbs_imp.log
transport_tablespace=y datafiles='/tmp/....','/tmp/...'

* Using datapump

CREATE OR REPLACE DIRECTORY dpump_dir AS '/tmp/subdir' ;
GRANT READ,WRITE ON DIRECTORY dpump_dir TO system;

impdp system/password DUMPFILE=expdat.dmp DIRECTORY=dpump_dir
TRANSPORT_DATAFILES='/tmp/....','/tmp/...'
REMAP_SCHEMA=(source:target) REMAP_SCHEMA=(source_sch2:target_schema_sch2)

You can use REMAP_SCHEMA if you want to change the ownership of the
transported database objects.

7. Put the tablespaces in read/write mode:

SQL> ALTER TABLESPACE TBS1 READ WRITE;
SQL> ALTER TABLESPACE TBS2 READ WRITE;

If you want to transport the datafiles from ASM area to filesystem, you have
finished after the above steps. But if you want to transport tablespaces
between two ASM areas you must continue.

8. Copy the datafile '/tmp/....dbf' into the ASM area using rman:

rman nocatalog target /
RMAN> backup as copy datafile '/tmp/....dbf' format '+DGROUPA';

# where +DGROUPA is the name of the ASM diskgroup

9. Switch the datafile to the copy.
If the 10g database is open, you need to take the datafile offline first:

SQL> alter database datafile '/tmp/....dbf' offline;
Switch to the copy:
rman nocatalog target /
RMAN> switch datafile '/tmp/....dbf' to copy;

# Note down the name of the copy created in the +DGROUPA diskgroup
# ex. '+DGROUPA/s101/datafile/tts.270.5'

10. Put the datafile online again; it must be recovered first:

SQL> recover datafile '+DGROUPA/s101/datafile/tts.270.5';
SQL> alter database datafile '+DGROUPA/s101/datafile/tts.270.5' online;

11. Check if datafile is indeed part of the ASM area and online:

SQL> select name, status from v$datafile;

Output should be:

+DGROUPA/s101/datafile/tts.270.5 ONLINE


* Using DBMS_FILE_TRANSFER
..........................

You can also use DBMS_FILE_TRANSFER to copy datafiles from one ASM disk group
to another, even on another host. Starting with 10g Release 2, you can also use
DBMS_FILE_TRANSFER to copy datafiles from ASM to filesystem and from
filesystem to ASM.

We can use the PUT_FILE Procedure

This procedure reads a local file or ASM file and contacts a remote database
to create a copy of the file in the remote file system. The file that
is copied is the source file, and the new file that results from the copy
is the destination file.
The destination file is not closed until the procedure completes successfully.

Syntax:

DBMS_FILE_TRANSFER.PUT_FILE(
source_directory_object IN VARCHAR2,
source_file_name IN VARCHAR2,
destination_directory_object IN VARCHAR2,
destination_file_name IN VARCHAR2,
destination_database IN VARCHAR2);

Where:
source_directory_object ->The directory object from which the file is copied
at the local source site. This directory object must
exist at the source site.
source_file_name ->The name of the file that is copied from the local
file system. This file must exist in the local file
system in the directory associated with the source
directory object.
destination_directory_object -> The directory object into which the file is
placed at the destination site. This directory object
must exist in the remote file system.
destination_file_name ->The name of the file placed in the remote file system
A file with the same name must not exist in the
destination directory in the remote file system.
destination_database ->The name of a database link to the remote database
to which the file is copied.


If we want to use DBMS_FILE_TRANSFER.PUT_FILE to transfer the file from source
to destination host, steps 3, 4, and 5 should be replaced by the following.

1) Create a directory at target database host, and give permissions to
local user.
This is the directory object into which the file is placed at the
destination site, it must exist in the remote file system.

CREATE OR REPLACE DIRECTORY target_dir AS '+DGROUPA' ;
GRANT WRITE ON DIRECTORY target_dir TO "USER";

2) Create a directory at source database host. The directory object from which
the file is copied at the local source site. This directory object must
exist at the source site.

CREATE OR REPLACE DIRECTORY source_dir AS '+DGROUPS/subdir' ;
GRANT READ,WRITE ON DIRECTORY source_dir TO "USER";
CREATE OR REPLACE DIRECTORY source_dir_1 AS '+DGROUPS/subdir/subdir_2' ;

3) Create a dblink to connect to target database host:

CREATE DATABASE LINK DBS2 CONNECT TO <user> IDENTIFIED BY <password>
USING 'target_connect';

Where target_connect is the connect string for target database and USER
is the user that we are going to use to transfer the datafiles.

4) Connect to source instance:

dbs1 => Connect string to source database
dbs2 => dblink to target database
a1.dat => Filename at source database
a4.dat => Filename at target database


CONNECT user/password@dbs1

-- put a1.dat to a4.dat (using dbs2 dblink)
-- level 2 sub dir to parent dir
-- user has read privs on source_dir_1 at dbs1 and write on target_dir
-- in dbs2
BEGIN
DBMS_FILE_TRANSFER.PUT_FILE ( 'source_dir_1' , 'a1.dat' ,
'target_dir' , 'a4.dat' ,
'dbs2' ) ;
END ;
/

General Issues

Regular scripts for Oracle DBA
select prev_hash_value from v$session where sid='&sid';
SQL> select prev_hash_value from v$session where sid='466';
select sql_text from v$sqlarea where hash_value='&hash_value';
SQL> select sql_text from v$sqlarea where hash_value='590983';
Now kill the holder's session. To find the blockers:
select * from v$lock where block=1;
select username, status, logon_time, process, osuser, sid, serial#, sql_hash_value from v$session where status='ACTIVE' order by logon_time;

+++++++++++++++++Session WAIT's :+++++++++++++++++
set pages 1000
set linesize 180
col username for a10
col module for a10
col event for a30
select a.sid, b.username, substr(b.module,1,10) Module, a.event, a.p1, substr(a.p1raw,1,25) p1raw, a.p2, a.p3, a.state, a.wait_time from v$session_wait a, v$session b where a.event not in ('SQL*Net message from client','wakeup time manager','pipe get','rdbms ipc message','smon timer','pmon timer','null event','jobq slave wait')
and a.sid=b.sid;
++++++++++++++++++++SQL TEXT for SID :++++++++++++++++++++
select sql_text, HASH_VALUE from v$sqltext t, v$session s where t.address = s.sql_address and s.sid = &1 order by piece
/
CLEAR COLUMNS
+++++++++++++++++++LOCK DETAIL :+++++++++++++++++++
select * from v$lock where block=1;
+++++++++++++++++LOCK Details :+++++++++++++++++
col ORACLE_USERNAME for a13
col OS_USER_NAME for a10
col object_name for a40
PROMPT "PRESSS ENTER FOR ALL SIDs"
select a.ORACLE_USERNAME,a.OS_USER_NAME,a.SESSION_ID,b.object_name,c.NAME "RBS NAME",decode(a.LOCKED_MODE,1,'No Lock',2,'Row Share',3,'Row Excl',4,'Share',5,'Shr Row Excl',6,'Exclusive',null) "LOCK MODE",d.modulefrom v$locked_object a,dba_objects b,v$rollname c,v$session d where a.OBJECT_ID=b.object_idand a.XIDUSN=c.USNand a.session_id=d.sidand a.SESSION_ID like '%&sid%'order by session_id
+++++++++++++++++++++++long running query:+++++++++++++++++++++++
SELECT to_char(start_time,'hh24:mi:ss') stime, message, (sofar/totalwork)*100 percent FROM v$session_longops where sid='&1';
SELECT sid, SQL_ID, to_char(start_time,'hh24:mi:ss') stime, message, (sofar/totalwork)*100 percent FROM v$session_longops where message like '%WIP%';
For RMAN :
===========
SELECT sid, serial#, context, sofar, totalwork, round(sofar/totalwork*100,2) "% Complete" FROM v$session_longops WHERE opname LIKE 'RMAN%' AND opname NOT LIKE '%aggregate%' AND totalwork != 0 AND sofar <> totalwork;

For particular SID :
=====================
SELECT sid, serial#, context, sofar, totalwork, round(sofar/totalwork*100,2) "% Complete" FROM v$session_longops WHERE sid='&1' AND totalwork != 0 AND sofar <> totalwork

For All Sessions :
=============
SELECT sid, serial#, context, sofar, totalwork, round(sofar/totalwork*100,2) "% Complete" FROM v$session_longops WHERE totalwork != 0 AND sofar <> totalwork

+++++++++++++++++++++++++++
Rollback Segments Usage :
+++++++++++++++++++++++++++
select a.sid, a.username, a.osuser, b.USED_UBLK "UNDO BLOCKS", c.Name "UNDO NAME" from v$session a, v$transaction b, v$rollname c where a.saddr = b.ses_addr and b.XIDUSN = c.USN;
++++++++++++++++++++++++++++++++
Find Unix Process ID from SID:
++++++++++++++++++++++++++++++++
set pages 1000
set linesize 190
col username for a10
select a.PID "Ora PID", a.SPID "Server PID", a.LATCHWAIT, b.Program, b.Username, b.Osuser, b.Process "Client Process" from v$process a, v$session b where a.addr=b.paddr and b.sid=&1;

++++++++++++++++++++++++++++++++
Find SID from Unix Process ID :
++++++++++++++++++++++++++++++++
undefine unixprocessid
set pages 5000
set verify off
set feed on
set long 20000
set pages 5000
SELECT b.osuser,
b.username,
a.pid,
a.spid,
c.sql_text,
b.saddr,
b.sid,
b.serial#,
to_char(b.logon_time,'DD-MON-YY HH24:MI:SS') "LOGON_TIME",
b.username,
b.program,
b.process client,
a.spid server
FROM v$sqlarea c,v$session b,v$process a
WHERE a.spid = '&1'
AND b.paddr = a.addr
AND b.sql_hash_value = c.hash_value(+)
AND b.sql_address = c.address(+) ;

ps aux | head
ps -ef | sort +3 | tail -10

++++++++++++++++++++++
SORT Segment Usage :
++++++++++++++++++++++
select rpad(sid,5,' ') as sid, username, rpad(program,15,' ') as prgm, rpad(blocks*16384/(1024*1024)||' MB ',10,' ') as tempsize, segtype from v$session, v$sort_usage where saddr=session_addr order by blocks;

SELECT s.sid, s.serial#, s.username, u.tablespace, u.contents, u.extents, u.blocks FROM v$session s, v$sort_usage u WHERE s.saddr=u.session_addr
/
CLEAR COLUMNS

column TABLESPACE_NAME format a10
column USERNAME format a10
prompt Total Temp space
prompt =======================================
select sum(bytes/1024/1024) Tot_MB, tablespace_name from v$temp_extent_map group by tablespace_name order by tablespace_name
/
prompt Used Temp space
prompt =======================================
select (bytes_used/1024/1024) USED_MB, tablespace_name from v$temp_extent_pool order by tablespace_name
/
prompt Actual user who is using sort segment
prompt =====================================
SELECT s.username, s.sid, u.tablespace, u.contents, u.extents, u.blocks, u.segtype FROM v$session s, v$sort_usage u WHERE s.saddr=u.session_addr;

prompt Actual usage of sort segment
prompt ============================
select TABLESPACE_NAME,TOTAL_BLOCKS,USED_BLOCKS,FREE_BLOCKS from v$sort_segment;

prompt extent info for sort segement
prompt =============================
SELECT tablespace_name, extent_size, total_extents, used_extents, free_extents, max_used_size FROM v$sort_segment;

prompt sort ratio
prompt ==========
select ((mem.value)/(mem.value+disk.value)) "Sort Ratio" from v$sysstat mem, v$sysstat disk where mem.name = 'sorts (memory)' and disk.name = 'sorts (disk)';

prompt temp tablepsace actual usage ratio
prompt ==================================
select (b.value/a.value)*100 temp_use_ratio from (select sum(bytes) value from v$temp_extent_map) a, (select sum(BYTES_USED) value from v$temp_extent_pool) b;

+++++++++++++++++++++++++++++++++++++
MVIEW refresh Status on MASTER Site :
+++++++++++++++++++++++++++++++++++++
set pages 1000
set linesize 180
col SNAPSHOT_SITE for a30
select r.name,
r.snapshot_site,
l.current_snapshots
from dba_registered_snapshots r, dba_snapshot_logs l
where r.snapshot_id = l.snapshot_id
and
l.master in (select master from dba_snapshot_logs) ;
CLEAR COLUMNS
++++++++++++++++++++++++
LOCK Session Details :
++++++++++++++++++++++++
SELECT
DECODE(request,0,'Holder: ','Waiter: ')||sid sess,
id1,
id2,
lmode,
request,
type
FROM V$LOCK
WHERE (id1, id2, type) IN (SELECT id1, id2, type FROM V$LOCK WHERE request>0) ORDER BY id1, request;

TX type, lmode 6 ==> this is a transaction exclusive lock
(due to an update where another transaction is waiting on the update of the same row)

++++++++++++++++++++++++++++++++++++++++++++
Query To Get the SID FROM UNIX PROCESS ID
++++++++++++++++++++++++++++++++++++++++++++
select sid, serial# from v$session where process='&1';
column schemaname format a10 heading "Oracle User"
column osuser format a15 heading "Osuser"
column module format a18 heading "Module"
column terminal format a10 heading "Terminal"
column status format a10 heading "Status"
column sid format 99999 heading "Sid"
column serial# format 9999999 heading "Serial#"
column program format a20 heading "Program"
col username format a8
SELECT p.spid,
s.osuser,
s.username,
s.sid,
s.serial#,
s.module,
s.status,
to_char(s.logon_time, 'DD-MON-YY HH24:MI:SS') logontime,
s.program,
s.LAST_CALL_ET/60 Since_Updated
FROM v$process p, v$session s WHERE s.sid= '&1' AND p.addr = s.paddr;

++++++++++++++++++++++++++++++++++
Query To Check The LogOn Time
+++++++++++++++++++++++++++++++++++
select 'Sid, Serial# '||s.sid||' , '||s.serial#||chr(10)||
'Form User : '||fu.user_name||chr(10)||
'Machine - Terminal : '||s.machine||' - '||s.terminal||chr(10)||
'OS Process Ids : '||s.process||' (Client) '||p.spid||' (Server)'||' (Since) '||to_char(s.logon_time,'DD-MON-YYYY HH24:MI:SS')||chr(10)||
'Client Program Name : '||s.program||chr(10)||
'Action / Module : '||s.action||' / '||s.module||chr(10)||
'User Description : '||fu.description||chr(10)||
'-----------------------------------------------------------------'
from gv$process p, gv$session s, apps.fnd_logins f, apps.fnd_user fu
where p.addr = s.paddr and p.inst_id = s.inst_id and f.spid (+) = s.process and p.spid = '&1' and fu.user_id (+) = f.user_id and f.end_time is null
order by f.start_time;
How to take backup of dblinks & synonym etc
For dropping objects:
select 'drop '||object_type||' '||owner||'.'||object_name||';' from dba_objects where owner='BPA2' and object_type!='INDEX';
select 'drop '||object_type||' '||owner||'.'||object_name||' cascade constraint;' from dba_objects where owner='BPA2' and object_type!='INDEX';
Taking the backup of synonyms:
select 'create synonym '||synonym_name||' for '||table_owner||'.'||table_name||'@'||db_link||';' Create_Synonym_Script from dba_synonyms where owner=upper('&owner');
Taking the backup of database links:
select 'conn '||owner||'/&&'||owner||chr(10)||'create database link '||rtrim(ltrim(db_link))||' connect to '||rtrim(ltrim(username))||' identified by '||rtrim(ltrim(password))||' using '''||rtrim(ltrim(b.host))||''';' from dba_db_links A, sys.link$ B where A.db_link = B.name and B.ctime = A.created and owner like '&Owner' order by owner;


Object status summary:
select count(*), object_type, status from user_objects group by object_type, status;
Blocking session issues and how to resolve them
***************** blocking session checking & blocking sql ************
select count(*) from v$Lock where block=1;
col host_name for a20
select instance_name, status , host_name from v$instance;
select count(*) from v$Lock where block=1;
select prev_hash_value from v$session where sid='&sid';
select sql_text from v$sqlarea where hash_value='&hash_value';
************************* blocking session picture **************************
SQL> select sid from v$lock where block=1;
The output gives you the sid that is blocking.
SQL> select prev_hash_value from v$session where sid='&sid';
Paste the sid from the previous step; this returns the hash_value.
SQL> select sql_text from v$sqlarea where hash_value='&hash_value';
Paste the hash_value to get the SQL that is blocking the session.
*************************************************************************

Oracle Datapump

INTRODUCTION

Oracle Data Pump is the replacement for the original Export and Import utilities.
Available starting in Oracle Database 10g, Oracle Data Pump enables very high-speed
movement of data and metadata from one database to another.

The new Data Pump Export and Import utilities have a similar look and feel to the
original utilities, but they are much more efficient and give you greater control and
management of your import and export jobs.

This paper gives users of the original Export and Import utilities a primer on how to move
up to the faster, more powerful, and more flexible Data Pump Export and Import utilities.
See the Oracle Database 10g Utilities Guide for more comprehensive information about
Oracle Data Pump.
NEW CONCEPTS IN ORACLE DATA PUMP

There are two new concepts in Oracle Data Pump that are different from original Export
and Import.

DIRECTORY OBJECTS
Data Pump differs from original Export and Import in that all jobs run primarily on the
server using server processes. These server processes access files for the Data Pump
jobs using directory objects that identify the location of the files. The directory objects
enforce a security model that can be used by DBAs to control access to these files.

INTERACTIVE COMMAND-LINE MODE
Besides regular operating system command-line mode, there is now a very powerful
interactive command-line mode which allows the user to monitor and control Data Pump
Export and Import operations.
CHANGING FROM ORIGINAL EXPORT/IMPORT TO ORACLE DATA PUMP

CREATING DIRECTORY OBJECTS

In order to use Data Pump, the database administrator must create a directory object
and grant privileges to the user on that directory object. If a directory object is not
specified, a default directory object called data_pump_dir is provided. The default
data_pump_dir is available only to privileged users unless access is granted by the DBA.
In the following example, the following SQL statement creates a directory object named
dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

You would login to SQL*Plus as system and enter the following SQL command to
create a directory.

1. SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

After a directory is created, you need to grant READ and WRITE permission on the
directory to other users. For example, to allow the Oracle database to read and to write
to files on behalf of user scott in the directory named by dpump_dir1, you must execute
the following command:

2. SQL> GRANT READ,WRITE ON DIRECTORY dpump_dir1 TO scott;

Note that READ or WRITE permission to a directory object means only that the Oracle
database will read or write that file on your behalf. You are not given direct access to
those files outside of the Oracle database unless you have the appropriate operating
system privileges. Similarly, the Oracle database requires permission from the operating
system to read and write files in the directories.

Once the directory access is granted, the user scott can export his database objects with
command arguments:

3. >expdp username/password DIRECTORY=dpump_dir1 dumpfile=scott.dmp

COMPARISON OF COMMAND-LINE PARAMETERS FROM ORIGINAL EXPORT AND IMPORT
TO DATA PUMP

Data Pump commands have a similar look and feel to the Original Export and Import
commands, but are different. Below are a few examples that demonstrate some of
these differences.

1) Example import of tables from scott’s account to jim’s account

Original Import:
> imp username/password FILE=scott.dmp FROMUSER=scott TOUSER=jim
TABLES=(*)

Data Pump Import:
> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp
TABLES=scott.emp REMAP_SCHEMA=scott:jim

Note how the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA
option.

2) Example export of an entire database to a dump file with all GRANTS, INDEXES,
and data

> exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y

> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX
DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Data Pump offers much greater metadata filtering than Original Export and Import. The
INCLUDE parameter allows you to specify which object (and its dependent objects) you
want to keep in the export job. The EXCLUDE parameter allows you to specify which
object (and its dependent objects) you want to keep out of the export job. You cannot
mix the two parameters in one job. Both parameters work with Data Pump Import as
well, and you can use different INCLUDE and EXCLUDE options for different operations
on the same dump file.
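
For example, a full export that filters out grants might look like this (a minimal sketch reusing the directory object created earlier; the dump file name is illustrative):

> expdp username/password FULL=y EXCLUDE=GRANT DIRECTORY=dpump_dir1 DUMPFILE=dba_nogrants.dmp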

3) Tuning Parameters

Unlike Original Export and Import, which used the BUFFER, COMMIT, COMPRESS,
CONSISTENT, DIRECT, and RECORDLENGTH parameters, Data Pump needs no
tuning to achieve maximum performance. Data Pump chooses the best method to
ensure that data and metadata are exported and imported in the most efficient manner.
Initialization parameters should be sufficient upon installation.

4) Moving data between versions

The Data Pump method for moving data between different database versions is different
from the method used by original Export and Import. With original Export, you had to run
an older version of Export to produce a dump file that was compatible with an older
database version. With Data Pump, you use the current Export version and simply use
the VERSION parameter to specify the target database version. You cannot specify
versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).

Example:
> expdp username/password TABLES=hr.employees VERSION=10.1
DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp

Data Pump Import can always read dump file sets created by older versions of Data
Pump Export.

Note that Data Pump Import cannot read dump files produced by original Export.
MAXIMIZING THE POWER OF ORACLE DATA PUMP

Data Pump works great with default parameters, but once you are comfortable with Data
Pump, there are new capabilities that you will want to explore.

PARALLELISM

Data Pump Export and Import operations are processed in the database as a Data
Pump job, which is much more efficient than the client-side execution of original Export
and Import. Data Pump operations can also take advantage of the server’s parallel
processes to read or write multiple data streams simultaneously (PARALLEL is only
available in the Enterprise Edition of Oracle Database 10g).

The number of parallel processes can be changed on the fly using Data Pump’s
interactive command-line mode. You may have a certain number of processes running
during the day and decide to change that number if more system resources become
available at night (or vice versa). For best performance, you should do the following:

• Make sure your system is well balanced across CPU, memory, and I/O.

• Have at least one dump file for each degree of parallelism. If there aren’t enough
dump files, performance will not be optimal because multiple threads of
execution will be trying to access the same dump file.

• Put files that are members of a dump file set on separate disks so that they will
be written and read in parallel.

• For export operations, use the %U variable in the DUMPFILE parameter so
multiple dump files can be automatically generated.

Example:
> expdp username/password DIRECTORY=dpump_dir1 JOB_NAME=hr
DUMPFILE=par_exp%u.dmp PARALLEL=4

REMAP

• REMAP_TABLESPACE – This allows you to easily import a table into a tablespace
different from the one it was originally exported from. Both databases have to be 10.1
or later.

Example:
> impdp username/password REMAP_TABLESPACE=tbs_1:tbs_6
DIRECTORY=dpumpdir1 DUMPFILE=employees.dmp

• REMAP_DATAFILES – This is a very useful feature when you move databases
between platforms that have different file naming conventions. This parameter
changes the source datafile name to the target datafile name in all SQL
statements where the source datafile is referenced. Because the
REMAP_DATAFILE value uses quotation marks, it’s best to specify the
parameter within a parameter file.

Example:
The parameter file, payroll.par, has the following content:
DIRECTORY=dpump_dir1
FULL=Y
DUMPFILE=db_full.dmp
REMAP_DATAFILE="'C:\DB1\HRDATA\PAYROLL\tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"
You can then issue the following command:
> impdp username/password PARFILE=payroll.par

EVEN MORE ADVANCED FEATURES OF ORACLE DATA PUMP

Beyond the command-line and performance features of Oracle Data Pump are new
capabilities that DBAs will find invaluable. A couple of prominent features are described
here.

INTERACTIVE COMMAND-LINE MODE

You have much more control in monitoring and controlling Data Pump jobs with
interactive command-line mode. Because Data Pump jobs run entirely on the server,
you can start an export or import job, detach from it, and later reconnect to the job to
monitor its progress. Here are some of the things you can do while in this mode (an example session follows the list):

• See the status of the job. All of the information needed to monitor the job’s
execution is available.

• Add more dump files if there is insufficient disk space for an export file.

• Change the default size of the dump files.

• Stop the job (perhaps it is consuming too many resources) and later restart it
(when more resources become available).

• Restart the job. If a job was stopped for any reason (system failure, power
outage), you can attach to the job and then restart it.

• Increase or decrease the number of active worker processes for the job.
(Enterprise Edition only.)

• Attach to a job from a remote site (such as from home) to monitor status.
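
A minimal sketch of such an interactive session, assuming an export was started with JOB_NAME=hr (as in the parallelism example above):

> expdp username/password ATTACH=hr
Export> STATUS
Export> STOP_JOB=IMMEDIATE
> expdp username/password ATTACH=hr
Export> START_JOB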

NETWORK MODE

Data Pump gives you the ability to pass data between two databases over a network (via
a database link), without creating a dump file on disk. This is very useful if you’re
moving data between databases, like data marts to data warehouses, and disk space is
not readily available. Note that if you are moving large volumes of data, Network mode
is probably going to be slower than file mode.
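
A minimal sketch, assuming a database link named source_db already exists and points to the source database:

> impdp username/password DIRECTORY=dpump_dir1 NETWORK_LINK=source_db
TABLES=scott.emp REMAP_SCHEMA=scott:jim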

Network export creates the dump file set on the instance where the Data Pump job is
running and extracts the metadata and data from the remote instance.

Network export gives you the ability to export read-only databases. (Data Pump Export
cannot run locally on a read-only instance because the job requires write operations on
the instance.) This is useful when there is a need to export data from a standby
database.

GENERATING SQLFILES

In original Import, the INDEXFILE parameter generated a text file which contained the
SQL commands necessary to recreate tables and indexes that you could then edit to get
a workable DDL script.

With Data Pump, it’s a lot easier to get a workable DDL script. When you run Data
Pump Import and specify the SQLFILE parameter, a text file is generated that has the
necessary DDL (Data Definition Language) in it to recreate all object types, not just
tables and indexes. Although this output file is ready for execution, the DDL statements
are not actually executed, so the target system will not be changed.

SQLFILEs can be particularly useful when pre-creating tables and objects in a new
database. Note that the INCLUDE and EXCLUDE parameters can be used for tailoring
sqlfile output. For example, if you want to create a database that contains all the tables
and indexes of the source database, but that does not include the same constraints,
grants, and other metadata, you would issue a command as follows:

>impdp username/password DIRECTORY=dpumpdir1 DUMPFILE=expfull.dmp
SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX

The SQL file named expfull.sql is written to dpump_dir2 and would include SQL DDL
that could be executed in another database to create the tables and indexes as desired.

FREQUENTLY ASKED QUESTIONS

Why are directory objects needed?
They are needed to ensure data security and integrity. Otherwise, users would be able
to read data that they should not have access to and perform unwarranted operations on
the server.

What makes Data Pump faster than original Export and Import?

There are three main reasons that Data Pump is faster than original Export and Import.
First, the Direct Path data access method (which permits the server to bypass SQL and
go right to the data blocks on disk) has been rewritten to be much more efficient and
now supports Data Pump Import and Export. Second, because Data Pump does its
processing on the server rather than in the client, much less data has to be moved
between client and server. Finally, Data Pump was designed from the ground up to take
advantage of modern hardware and operating system architectures in ways that original
Export and Import cannot. These factors combine to produce significant performance
improvements for Data Pump over original Export and Import.

How much faster is Data Pump than the original Export and Import utilities?

For a single stream, Data Pump Export is approximately 2 times faster than original
Export and Data Pump Import is approximately 15 to 40 times faster than original Import.
Speed can be dramatically improved using the PARALLEL parameter.

Why is Data Pump slower on small jobs?

Data Pump was designed for big jobs with lots of data. Each Data Pump job has a
master table that has all the information about the job and is needed for restartability.
The overhead of creating this master table makes small jobs take longer, but the speed
in processing large amounts of data gives Data Pump a significant advantage in medium
and larger jobs.

Are original Export and Import going away?

Original Export is being deprecated with the Oracle Database 11g release. Original
Import will always be supported so that dump files from earlier releases (release 5.0 and
later) will be able to be imported. Original and Data Pump dump file formats are not
compatible.

Are Data Pump dump files and original Export and Import dump files compatible?

No, the dump files are not compatible or interchangeable. If you have original Export
dump files, you must use original Import to load them.

How can I monitor my Data Pump jobs to see what is going on?

In interactive mode, you can get a lot of detail through the STATUS command. In
SQL, you can query the following views (a sample query follows the list):

• DBA_DATAPUMP_JOBS - all active Data Pump jobs and the state of each
job
• USER_DATAPUMP_JOBS – summary of the user’s active Data Pump jobs
• DBA_DATAPUMP_SESSIONS – all active user sessions that are attached to
a Data Pump job
• V$SESSION_LONGOPS – shows all progress on each active Data Pump job
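
For example, a quick check on active jobs might look like this (a minimal sketch):

SELECT owner_name, job_name, operation, job_mode, state FROM dba_datapump_jobs;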

Can I use Oracle Enterprise Manager with Data Pump?

Yes, OEM supports a fully functional interface to Data Pump.

Can I use gzip with Data Pump?

You can do a Data Pump Export and then run the dump files through gzip to compress
them. You cannot compress the dump files prior to writing them. (The new
COMPRESS parameter can be used to compress metadata, and metadata compression
is enabled by default in Oracle Database 10g Release 2.)
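
For example (a minimal sketch; the dump file name is illustrative and assumes dpump_dir1 maps to /usr/apps/datafiles as above):

> expdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp
$ gzip /usr/apps/datafiles/scott.dmp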
CONCLUSION

Data Pump is fast and flexible. It replaces original Export and Import starting in Oracle
Database 10g. Moving to Data Pump is easy, and opens up a world of new options and
features.

HOW TO APPLY ORACLE PATCH

HOW TO APPLY ORACLE PATCH (OPATCH) WITHOUT DOWNTIME.
====================================================

This is a commonly asked question in interviews.

To apply a patch with OPatch, both the database and the listener must normally be
down, because OPatch updates the current ORACLE_HOME with the patch. So for a
single-instance database, patching without downtime is not possible; for a RAC
database it is possible, because there are separate ORACLE_HOMEs with one
instance running on each, and the nodes can be patched one at a time.

use this command:

opatch napply -skip_subset -skip_duplicate -local -oh $ORACLE_HOME


Using the -local parameter together with -oh $ORACLE_HOME means the patch session will apply the patch only to the currently sourced ORACLE_HOME.


steps before applying patch:
----------------------------

1) check the database status.
wch_db.sql
-----------
select name,
open_mode,
database_name,
created,
log_mode,
platform_name
from v$database;



2) Check for invalid objects.

user_inv.sql
============
SELECT owner,
COUNT(*)
FROM dba_objects
WHERE status = 'INVALID'
GROUP BY owner;

count_inv.sql
-------------
select count(*)
from dba_objects
WHERE status ='INVALID';

3) Take a backup of the invalid objects.

create table bk_inv_ as select * from dba_objects
where status='INVALID';
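
For example, with a date-suffixed table name (the suffix here is just an illustrative convention, not a fixed naming rule):

create table bk_inv_20100716 as select * from dba_objects
where status='INVALID';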

4) Check the OPatch version using
opatch version
If the OPatch version is not compatible, check the readme file, then
download the latest version and uncompress it
in $ORACLE_HOME.

5) Check whether the oraInst.loc file points to the inventory for your current $ORACLE_HOME.
cat /etc/oraInst.loc

inventory_loc=/u01/app/oracle/10.2.0/GlobalOraInv
inst_group=dba

If your server has more than one $ORACLE_HOME, comment out the other
$ORACLE_HOME entries and uncomment the current one:
the inventory must point to the $ORACLE_HOME that is being patched.

6) check free space on $ORACLE_HOME
df -h $ORACLE_HOME

7) Check that the required build utilities are present, as per the readme file:
which ld
which ar
which make

8) unzip the patch
unzip -d /loc_2_unzip p.zip

9) Go the patch directory
cd /loc_2_unzip/patch_number

10) Bring down the listener.
cd $ORACLE_HOME/bin
lsnrctl stop

11) Bring down the database
Shutdown immediate.

12) Put OPatch in the PATH
export PATH=$PATH:$HOME:$ORACLE_HOME/OPatch:/bin

13) Start the patch
opatch napply -skip_subset -skip_duplicate

For a RAC database, the database can stay up overall: since there is more than one
instance, you can bring down one instance and its listener, apply the patch on that
node, open it again, and then do the same on the other node. This way the database
stays available and users see no outage.

to apply opatch in RAC instance
opatch napply -skip_subset -skip_duplicate -local -oh $ORACLE_HOME

When using the -local parameter with -oh $ORACLE_HOME, the patch session
applies the patch to the current ORACLE_HOME only.

--------------------------------------------------------
. All-Node Patch

. Shutdown all Oracle instances on all nodes
. Apply the patch to all nodes
. Bring all nodes up

. Minimum downtime

. Shutdown the Oracle instance on node 1
. Apply the patch to the Oracle instance on node 1
. Shutdown the Oracle instance on node 2
. Apply the patch to the Oracle instance on node 2
. Shutdown the Oracle instance on node 3
. At this point, instances on nodes 1 and 2 can be brought up
. Apply the patch to the Oracle instance on node 3
. Startup the Oracle instance on node 3

. (no downtime)

. Shutdown the Oracle instance on node 1
. Apply the patch to the Oracle instance on node 1
. Start the Oracle instance on node 1
. Shutdown the Oracle instance on node 2
. Apply the patch to the Oracle instance on node 2
. Start the Oracle instance on node 2
. Shutdown the Oracle instance on node 3
. Apply the patch to the Oracle instance on node 3
. Start the Oracle instance on node 3
-------------------------------------------------------------

14) Once the patch installation is complete, do the post-patching steps.

a) start up the instance
startup
b) Loading modified sqlfiles into the database.
@$ORACLE_HOME/rdbms/admin/catbundle.sql cpu apply

to check the logs generated
catbundle_CPU__APPLY_.log
catbundle_CPU__GENERATE_.log

c) Recompiling Views in the Database
shutdown immediate
startup upgrade
@$ORACLE_HOME/cpu/view_recompile/view_recompile_jan2008cpu.sql
shutdown immediate
startup

If it is a RAC instance.
shutdown
startup nomount
alter system set cluster_database=false scope=spfile;
shutdown
startup upgrade
@?/cpu/view_recompile/view_recompile_jan2008cpu.sql
shutdown
startup
alter system set cluster_database=true scope=spfile;

restart the database.
cd $CRS_HOME/bin
srvctl start database -d
15) If any invalid objects were reported, run the utlrp.sql script as follows



user_inv.sql
============
SELECT owner,
COUNT(*)
FROM dba_objects
WHERE status = 'INVALID'
GROUP BY owner;

count_inv.sql
-------------
select count(*)
from dba_objects
WHERE status ='INVALID';

If any new invalid objects are seen, take a backup of the invalid objects again and compile them.

create table bk_inv_ as select * from dba_objects
where status='INVALID';

@?/rdbms/admin/utlrp.sql --- to compile the invalid objects.
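
A quick before/after comparison can then be run (a minimal sketch, assuming the illustrative backup table name bk_inv_20100716 from step 3):

select (select count(*) from bk_inv_20100716) invalid_before,
       (select count(*) from dba_objects where status='INVALID') invalid_after
from dual;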


16) Confirm at the database level that the patch has been applied successfully.


post_patch.sql
--------------
col action_time for a40
col action for a15
col namespace for a15
col version for a15
col comments for a40
set pages 1000
set lines 170
select * from registry$history ;
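
On recent 10.2 patch levels the same history is also exposed through a documented view (a minimal check, assuming the view exists in your release):

select action_time, action, version, comments from dba_registry_history;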

Installed Components in Oracle

Tuesday, March 9, 2010
INSTALL DATABASE COMPONENTS

To check which components are presently installed run this query.

SELECT comp_name || ' : '||version|| ' : '|| status FROM dba_registry;


SQL> SELECT comp_name || ' : '||version|| ' : '|| status FROM dba_registry;



COMP_NAME||':'||VERSION||':'||STATUS

--------------------------------------------------------------------------------

Oracle Database Catalog Views : 10.2.0.4.0 : VALID

Oracle Database Packages and Types : 10.2.0.4.0 : VALID

JServer JAVA Virtual Machine : 10.2.0.4.0 : VALID

Oracle XDK : 10.2.0.4.0 : VALID

Oracle Database Java Packages : 10.2.0.4.0 : VALID

Oracle Expression Filter : 10.2.0.4.0 : VALID

Oracle XML Database : 10.2.0.4.0 : VALID





----------------------------------------------------------------------------------------------------------------------------

install components missing:

===========================



https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=472937.1




@?/javavm/install/initjvm.sql;

http://www.adp-gmbh.ch/ora/xml_db/install.html

Oracle XML Database (http://www.lorentzcenter.nl/awcourse/oracle/appdev.920/a96620/appaman.htm#637391)

===================

@?/rdbms/admin/catqm.sql change_on_install XDB TEMP

Reconnect to SYS again and run the following:

catxdbj.sql #Load xdb java library



@?/xdk/admin/initxml.sql;

@?/xdk/admin/xmlja.sql;

@?/rdbms/admin/catjava.sql;

@?/rdbms/admin/catexf.sql;



for installing [Oracle XML Database]

1) create tablespace XDB

2) @?/rdbms/admin/catqm xml_password xdb temp




create tablespace XDB

datafile '/ora22/u01/oracle/str2/data/xdb.dbf' size 500M;



@?/rdbms/admin/catqm xml_passwd XDB temp



SQL> select username , password from dba_users where username='XDB';



USERNAME PASSWORD

------------------------------ ------------------------------

XDB FD6C945857807E3C





alter user xdb identified by values 'FD6C945857807E3C';





----------------

@?/javavm/install/initjvm.sql;

@?/xdk/admin/initxml.sql;

@?/xdk/admin/xmlja.sql;

@?/rdbms/admin/catjava.sql;

@?/rdbms/admin/catexf.sql;



@?/rdbms/admin/catalog.sql;

@?/rdbms/admin/catblock.sql;

@?/rdbms/admin/catproc.sql;

@?/rdbms/admin/catoctk.sql;

@?/rdbms/admin/owminst.plb;

@?/sqlplus/admin/pupbld.sql;

@?/sqlplus/admin/help/hlpbld.sql helpus.sql



$ORACLE_HOME/dm/admin/dmuserld.sql



for installing ORACLE TEXT:

---------------------------

https://support.oracle.com/CSP/main/article?cmd=show&id=280713.1&type=NOT



SQL> connect SYS/password@tns_ as SYSDBA

SQL> spool text_install.txt

SQL> @?/ctx/admin/catctx.sql CTXSYS SYSAUX TEMP NOLOCK





for manually installing ORACLE DATA MINING:

-------------------------------------------

https://support.oracle.com/CSP/main/article?cmd=show&id=420791.1&type=NOT



For RDBMS 10.2.x

1. Set ORACLE_HOME and ORACLE_SID.

2. START SQLPlus and connect with SYS user as SYSDBA.

3. Execute the following commands:



Unix - Linux



SQL> run $ORACLE_HOME/rdbms/admin/dminst.sql SYSAUX TEMP

SQL> run $ORACLE_HOME/rdbms/admin/odmpatch.sql

SQL> run $ORACLE_HOME/rdbms/admin/utlrp.sql





Windows



SQL> start %ORACLE_HOME%\rdbms\admin\dminst.sql SYSAUX TEMP

SQL> start %ORACLE_HOME%\rdbms\admin\odmpatch.sql

SQL> start %ORACLE_HOME%\rdbms\admin\utlrp.sql



4. Ensure 'Oracle Data Mining' is at Valid status in dba_registry by executing the following query:



SQL> select COMP_NAME,VERSION,STATUS from dba_registry where COMP_NAME='Oracle Data Mining';







SQL> SELECT comp_name || ' : '||version|| ' : '|| status FROM dba_registry;



COMP_NAME||':'||VERSION||':'||STATUS

--------------------------------------------------------------------------------

Oracle Text : 10.2.0.3.0 : VALID

Oracle Workspace Manager : 10.2.0.1.0 : VALID

Oracle Data Mining : 10.2.0.3.0 : VALID

Oracle XML Database : 10.2.0.3.0 : VALID

Oracle Expression Filter : 10.2.0.3.0 : VALID

Oracle Database Catalog Views : 10.2.0.3.0 : VALID

Oracle Database Packages and Types : 10.2.0.3.0 : VALID

JServer JAVA Virtual Machine : 10.2.0.3.0 : VALID

Oracle XDK : 10.2.0.3.0 : VALID

Oracle Database Java Packages : 10.2.0.3.0 : VALID



10 rows selected.

Some of the interesting notes on RMAN

1.1. Where should the catalog be created?

The recovery catalog used by RMAN should be created in a separate database, not in the target database, the reason being that the target database will be shut down while datafiles are restored.

1.2. How do I create a catalog for rman?

First create the user rman.

CREATE USER rman IDENTIFIED BY rman
TEMPORARY TABLESPACE temp
DEFAULT TABLESPACE tools
QUOTA UNLIMITED ON tools;

GRANT connect, resource, recovery_catalog_owner TO rman;
exit

Then create the recovery catalog:-

rman catalog=rman/rman
create catalog tablespace tools;
exit

Then register the database

oracle@debian:~$ rman target=/ catalog=rman/rman@newdb

Recovery Manager: Release 10.1.0.2.0 – Production

Copyright (c) 1995, 2004, Oracle. All rights reserved.

connected to target database: TEST (DBID=1843143191)
connected to recovery catalog database

RMAN> register database;

database registered in recovery catalog
starting full resync of recovery catalog
full resync complete

Note
If you run rman catalog=rman/rman (connecting to the catalog only, without a target) and try to register the database, it will not work.

Note
We have two databases here: newdb, which is solely for the catalog, and TEST, which is the database on which we want to perform all rman operations.

1.3. How many times does oracle ask before dropping a catalog?

The default is two times: once for the actual command, and once more for confirmation.

1.4. How to view the current defaults for the database.

rman> show all;

RMAN> show all
2> ;

RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 3 DAYS;
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP OFF; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u02/app/oracle/product/10.1.0/db_1/dbs/snapcf_test.f'; # default
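
Any of these defaults can be changed with another CONFIGURE command; for example, to turn control file autobackup on:

RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;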

1.5. Backup the database.

RMAN> run{
configure retention policy to recovery window of 2 days;
backup database plus archivelog;
delete noprompt obsolete;
}
Starting backup at 04-JUL-05
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=256 devtype=DISK
channel ORA_DISK_1: starting archive log backupset
……………

1.6. How to resolve the ora-19804 error

Basically this error occurs because the flash recovery area is full. One way to solve it is to increase the space available for the flash recovery area:

sql>ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE=5G; -- the value can be set in K, M or G
rman>backup database;
……………….
channel ORA_DISK_1: specifying datafile(s) in backupset
including current controlfile in backupset
including current SPFILE in backupset
channel ORA_DISK_1: starting piece 1 at 04-JUL-05
channel ORA_DISK_1: finished piece 1 at 04-JUL-05
piece handle=/u02/app/oracle/flash_recovery_area/TEST/backupset/2005_07_04/o1_mf_ncsnf_TAG20050704T205840_1dmy15cr_.bkp comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 04-JUL-05


After taking a backup, resync the recovery catalog.

Restoring the whole database.

run {
shutdown immediate;
startup mount;
restore database;
recover database;
alter database open;
}

1.7. What are the various reports available with RMAN

rman>list backup; rman> list archive;

1.8. What does backup incremental level=0 database do?

Backup incremental level=0 is a full backup of the database: rman> backup incremental level=0 database;

You can also use backup full database; which means the same thing as level=0;

1.9. What is the difference between DELETE INPUT and DELETE ALL command in backup?

Generally speaking, the LOG_ARCHIVE_DEST_n parameters can point to more than one disk location where we archive the files. When a command is issued through rman to back up archivelogs, it uses one of those locations to back up the data. With DELETE INPUT, only the copies that were actually backed up get deleted; with DELETE ALL INPUT, the copies in all log_archive_dest_n locations get deleted.

DELETE ALL applies only to archived logs: delete expired archivelog all;
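
For example (both forms are standard RMAN syntax):

RMAN> backup archivelog all delete input;     # deletes only the copies that were backed up
RMAN> backup archivelog all delete all input; # deletes the logs from all log_archive_dest_n locations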

Chapter 2. Recovery

Recovery involves placing the datafiles in the appropriate state for the type of recovery you are performing. If recovering all datafiles, then mount the database; if recovering a single tablespace or datafile, then you can keep the database open and take the tablespace or datafile offline. Perform the required recovery and put them back online.

Put the commands in an RMAN script (.rcv) file such as myrman.rcv:

run
{
# shutdown immediate; # use abort if this fails
startup mount;
#SET UNTIL TIME 'Nov 15 2001 09:00:00';
# SET UNTIL SCN 1000; # alternatively, you can specify SCN
SET UNTIL SEQUENCE 9923; # alternatively, you can specify log sequence number
restore database;
recover database;
alter database open;
}

Run the myrman.rcv file as :- rman target / @myrman.rcv

After successful restore & recovery immediately backup your database, because the database is in a new incarnation.

ALTER DATABASE open resetlogs; -- this command creates a new incarnation of the database, i.e. a database with a new stream of log sequence numbers starting with sequence 1.

Before running RESETLOGS it is good practice to open the database in read-only mode and examine the data contents.
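
A minimal sketch of that check, run from SQL*Plus once the restore and recover above have completed:

SQL> alter database open read only;
-- examine the data, then:
SQL> shutdown immediate
SQL> startup mount
SQL> alter database open resetlogs;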

2.1. Simulating media failure.

2.1.1. How to simulate media failure and recover a tablespace in the database ?
2.1.2. What is the difference between alter database recover and sql*plus recover command?

2.1.1. How to simulate media failure and recover a tablespace in the database ?

Firstly create the table in the required tablespace.

CREATE TABLE mytest ( id number(10));

Then run insert into mytest values(100); a couple of times, but do not commit the results.

Take the tablespace offline, this is possible only if the database is in archivelog mode.

Now commit the transaction by issuing commit.

Now try to bring the tablespace online, at this point you will get the error that datafile 4 needs media recovery.

Issue the following command to recover the tablespace; note that the database itself can remain open.

SQL>recover tablespace users;
media recovery completed.

now bring the tablespace online.

SQL>alter tablespace users online;
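
The whole exercise, end to end, looks roughly like this (a minimal sketch, assuming mytest lives in the USERS tablespace and the database is in archivelog mode):

SQL> create table mytest (id number(10)) tablespace users;
SQL> insert into mytest values (100);
-- repeat the insert a few times; do not commit yet
SQL> alter tablespace users offline immediate;
SQL> commit;
SQL> alter tablespace users online;
-- fails: the datafile needs media recovery
SQL> recover tablespace users;
SQL> alter tablespace users online;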

2.1.2. What is the difference between alter database recover and sql*plus recover command?

ALTER DATABASE recover is useful when you as a user want to control the recovery. SQL*PLUS recover command is useful when we prefer automated recovery.

Chapter 3. Duplicate database with control file

What are the steps required to duplicate a database with control file?

Copy initSID.ora to the new initXXX.ora file. i.e.,

cp $ORACLE_HOME/dbs/inittest.ora $ORACLE_HOME/dbs/initDUP.ora

Edit the parameters that are specific to location and instance:-

db_name = dup
instance_name = dup
control_files = change the location to point to dup
background_dump_dest = change the location to point to dup/bdump
core_dump_dest = change the location to point to dup/cdump
user_dump_dest = change the location to point to dup/udump
log_archive_dest_1 = dup/archive
db_file_name_convert = (test, dup)
log_file_name_convert = (test, dup)
remote_login_passwordfile = exclusive

Actual settings :-

*.background_dump_dest='/u02/app/oracle/admin/DUP/bdump'
*.compatible='10.1.0.2.0'
*.control_files='/u02/app/oracle/oradata/DUP/control01.ctl','/u02/app/oracle/oradata/DUP/control02.ctl','/u02/app/oracle/oradata/DUP/control03.ctl'
*.core_dump_dest='/u02/app/oracle/admin/DUP/cdump'
*.db_block_size=8192
*.db_cache_size=25165824
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='DUP'
*.db_recovery_file_dest='/u02/app/oracle/flash_recovery_area'
*.db_recovery_file_dest_size=2147483648
*.dispatchers='(PROTOCOL=TCP) (SERVICE=DUPXDB)'
*.java_pool_size=50331648
*.job_queue_processes=10
*.large_pool_size=8388608
*.log_archive_dest_1='LOCATION=/u02/app/oracle/oradata/payroll MANDATORY'
*.open_cursors=300
*.pga_aggregate_target=25165824
*.processes=250
*.shared_pool_size=99614720
*.sort_area_size=65536
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='/u02/app/oracle/admin/DUP/udump'
*.remote_login_passwordfile=exclusive
*.db_file_name_convert=(test,dup)
*.log_file_name_convert=(test,dup)

Make the directories for the dump destination:-

oracle@debian:/u02/app/oracle/admin/DUP$ mkdir bdump
oracle@debian:/u02/app/oracle/admin/DUP$ mkdir cdump
oracle@debian:/u02/app/oracle/admin/DUP$ mkdir udump

Make a directory to hold control files, datafiles and such:-

oracle@debian:/u02/app/oracle/oradata/PRD$ cd ..
oracle@debian:/u02/app/oracle/oradata$ mkdir DUP

Ensure that ORACLE_SID points to the right database. Create a password file so that other users can connect too.

export ORACLE_SID=DUP
$orapwd file=$ORACLE_HOME/dbs/orapw$ORACLE_SID password=easypass
sqlplus / as sysdba
sql>startup nomount;

oracle@debian:/u02/app/oracle/product/10.1.0/db_1/dbs$ sqlplus / as sysdba;

SQL*Plus: Release 10.1.0.2.0 – Production on Wed Aug 24 21:05:26 2005

Copyright (c) 1982, 2004, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup nomount;
ORACLE instance started.

Total System Global Area 188743680 bytes
Fixed Size 778036 bytes
Variable Size 162537676 bytes
Database Buffers 25165824 bytes
Redo Buffers 262144 bytes
SQL>

Check connectivity with sqlplus sys/easypass@dup; if that goes through successfully, exit. The idea is to check SQL*Net connectivity.

If you get ORA-12154: TNS:could not resolve the connect identifier specified, then more work needs to be done. Add an entry like this to tnsnames.ora:
DUP =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = debian)(PORT = 1521))
(CONNECT_DATA =
(SERVICE_NAME = dup)
)
)

$tnsping dup
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = debian)(PORT = 1521)) (CONNECT_DATA = (SERVICE_NAME = dup)))
OK (0 msec)

Even with this, if you are getting ORA-12528: TNS:listener: all appropriate instances are blocking new connections, then connect to the auxiliary (the database to be duplicated) as / and to the target database (the source) with user/pass@test, and start duplicating the database.

export ORACLE_SID=DUP

rman target sys/easypass@test auxiliary /
run{
allocate auxiliary channel ch1 type disk;
duplicate target database to dup;
}

oracle@debian:/u02/app/oracle/product/10.1.0/db_1/network/admin$ rman target sys/kernel@test auxiliary /

Recovery Manager: Release 10.1.0.2.0 – Production

Copyright (c) 1995, 2004, Oracle. All rights reserved.

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00554: initialization of internal recovery manager package failed
RMAN-04005: error from target database:
ORA-01017: invalid username/password; logon denied

The workaround is to create a user with the SYSDBA privilege and connect through that user's id.

$export ORACLE_SID=test
SQL>grant sysdba to mhg;

oracle@debian:/u02/app/oracle/product/10.1.0/db_1/network/admin$ rman target mhg/mhg@test auxiliary /

Recovery Manager: Release 10.1.0.2.0 – Production

Copyright (c) 1995, 2004, Oracle. All rights reserved.

connected to target database: TEST (DBID=1843143191)
connected to auxiliary database: DUP (not mounted)

oracle@debian:~$ rman target mhg/mhg@test auxiliary / @run.rcv

Recovery Manager: Release 10.1.0.2.0 – Production

Copyright (c) 1995, 2004, Oracle. All rights reserved.

connected to target database: TEST (DBID=1843143191)
connected to auxiliary database: DUP (not mounted)

RMAN> run{
2> allocate auxiliary channel c1 type disk;
3> duplicate target database to dup;
4> }
5>
using target database controlfile instead of recovery catalog
allocated channel: c1
channel c1: sid=270 devtype=DISK

Starting Duplicate Db at 24-AUG-05
released channel: c1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 08/24/2005 21:13:09
RMAN-05501: aborting duplication of target database
RMAN-05001: auxiliary filename /u02/app/oracle/oradata/test/users01.dbf conflicts with a file used by the target database

This error occurs primarily because the files of the test database are already present; we have to use db_file_name_convert and log_file_name_convert to overcome these errors.

This is the final run output:-

oracle@debian:~$ rman target mhg/mhg@test auxiliary /

Recovery Manager: Release 10.1.0.2.0 – Production

Copyright (c) 1995, 2004, Oracle. All rights reserved.

connected to target database: TEST (DBID=1843143191)
connected to auxiliary database: DUP (not mounted)

RMAN> @run.rcv

RMAN> run{
2> allocate auxiliary channel c1 type disk;
3> duplicate target database to dup;
4> }
using target database controlfile instead of recovery catalog
allocated channel: c1
channel c1: sid=270 devtype=DISK

Starting Duplicate Db at 24-AUG-05

contents of Memory Script:
{
set until scn 2150046;
set newname for datafile 1 to
“/u02/app/oracle/oradata/DUP/system2.dbf”;
set newname for datafile 2 to
“/u02/app/oracle/oradata/DUP/undotbs01.dbf”;
set newname for datafile 3 to
“/u02/app/oracle/oradata/DUP/sysaux01.dbf”;
set newname for datafile 4 to
“/u02/app/oracle/oradata/DUP/users01.dbf”;
restore
check readonly
clone database
;
}
executing Memory Script

executing command: SET until clause

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting restore at 24-AUG-05

channel c1: starting datafile backupset restore
channel c1: specifying datafile(s) to restore from backup set
restoring datafile 00001 to /u02/app/oracle/oradata/DUP/system2.dbf
restoring datafile 00002 to /u02/app/oracle/oradata/DUP/undotbs01.dbf
…………..
datafile copy filename=/u02/app/oracle/oradata/DUP/sysaux01.dbf recid=2 stamp=567206656

cataloged datafile copy
datafile copy filename=/u02/app/oracle/oradata/DUP/users01.dbf recid=3 stamp=567206656

datafile 2 switched to datafile copy
input datafilecopy recid=1 stamp=567206656 filename=/u02/app/oracle/oradata/DUP/undotbs01.dbf
datafile 3 switched to datafile copy
input datafilecopy recid=2 stamp=567206656 filename=/u02/app/oracle/oradata/DUP/sysaux01.dbf
datafile 4 switched to datafile copy
input datafilecopy recid=3 stamp=567206656 filename=/u02/app/oracle/oradata/DUP/users01.dbf

contents of Memory Script:
{
Alter clone database open resetlogs;
}
executing Memory Script

database opened
Finished Duplicate Db at 24-AUG-05

RMAN> **end-of-file**

This ends a successful duplication of a database using the target's control file (without a recovery catalog).

Chapter 4. Using rman to check logical and physical block corruption

To generate block corruption you can use the dd unix utility
(caution: it will really corrupt your block(s)):
$dd if=/dev/zero of=/u02/oradata/myrac/anyfile.dbf bs=8192 conv=notrunc seek=10 count=1
seek=10 writes at block 10; count=1 writes to only that block.
Now you can run dbv to verify that the blocks are actually corrupt,
and then recover the datafile by using oracle's blockrecover command.

export ORACLE_SID=test
rman target /
run {
allocate channel d1 type disk;
backup check logical validate database;
release channel d1;
}
To validate a specific datafile (or datafiles):-
run {
allocate channel d1 type disk;
backup check logical validate datafile 1,2;
release channel d1;
}

During this command every block is read into memory and then subsequently rewritten to another portion of memory; during this memory-to-memory write every block is checked for corruption.
RMAN's backup command with the validate and check logical clauses allows you to quickly validate for both physical and logical corruption.

Chapter 5. Checking for datafile corruption

A corrupted block used to require dropping and recreating the affected object. The message identifies the block in error by file number and block number. The cure has always been to run a query such as:

SELECT owner, segment_name, segment_type
FROM dba_extents
WHERE file_id = <file#>
AND <block#> BETWEEN block_id AND block_id + blocks - 1;

where <file#> and <block#> are the numbers from the error message. This query indicates which object contains the corrupted block. Then, depending on the object type, recovery is either straightforward (for indexes and temporary segments), messy (for tables), or very messy (for active rollback segments and parts of the data dictionary).

In Oracle 9i Enterprise Edition, however, a new Recovery Manager (RMAN) command, BLOCKRECOVER, can repair the block in place without dropping and recreating the object involved. After logging into RMAN and connecting to the target database, type:

BLOCKRECOVER DATAFILE <file#> BLOCK <block#>;

A new view, V$DATABASE_BLOCK_CORRUPTION, gets updated during RMAN backups, and a block must be listed as corrupt for a BLOCKRECOVER to be performed. To recover all blocks that have been marked corrupt, the following RMAN sequence can be used:

BACKUP VALIDATE DATABASE;
BLOCKRECOVER CORRUPTION LIST;

This approach is efficient if only a few blocks need recovery. For large-scale corruption, it's more efficient to restore a prior image of the datafile and recover the entire datafile, as before. As with any new feature, test it carefully before using it on a production database.

run {
allocate channel ch1 type disk;
blockrecover datafile <file#> block <block#>;
}

1. What are the steps to start the database from a text control file?
1.1. What are the steps required to start a database from text based control file?
1.2. Give a complete scenario of backup, delete and restore.
1.3. How do I backup archive log?
1.4. How do I do a incremental backup after a base backup?
1.5. What is the ORA-00204 error?
1.6. What Information is Required for RMAN TAR?
1.7. How To turn Debug Feature on in rman?

1.1. What are the steps required to start a database from text based control file?

ALTER DATABASE BACKUP CONTROLFILE TO TRACE AS '/oracle/backup/cf.sql' REUSE; -- or any file name on the OS. With this command you will get a text based version of your control file (a CREATE CONTROLFILE script). The REUSE clause tells Oracle to overwrite the file if it already exists; if we omit it, Oracle will not overwrite a file that is already present.

Start the database in nomount mode. If you have 3 control file entries in pfile / spfile you will get 3 new control files.

Now run the control file script to create your control files.

recover database using backup controlfile until cancel
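
Putting the steps together (a minimal sketch, assuming the trace script from step 1 was written to /oracle/backup/cf.sql and edited down to its CREATE CONTROLFILE statement):

SQL> startup nomount
SQL> @/oracle/backup/cf.sql
-- the CREATE CONTROLFILE statement leaves the database mounted
SQL> recover database using backup controlfile until cancel;
SQL> alter database open resetlogs;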

1.2. Give a complete scenario of backup, delete and restore.

Given that you want to take a base level backup, simulate complete failure by removing controlfile, datafile, redo log, archive log, these are the steps to be followed.

First take a base level backup of the database.

backup incremental level=0 database;

Simulate media failure by removing the control files and a datafile:
sqlplus / as sysdba
shutdown immediate;
exit
rm control* system*

When we don't have a control file the problem becomes quite complex, the reason being that the rman backup information is stored in the control file. So when we don't have the control file, we don't have the information about backups either. The first step should be to restore the control file. Fortunately we can do a listing in our flash recovery area and guess which backup piece has the information about our control file. On my box the following is the listing of the flash recovery area:-

/u02/app/oracle/flash_recovery_area/TEST/backupset/2005_07_30/
o1_mf_ncnn0_TAG20050730T130722_1gqn4jy2_.bkp
o1_mf_nnnd0_TAG20050730T130722_1gqmzdjz_.bkp
Here I am assuming the piece with 'ncnn0' in its name has the control file information in it.

We have to use a nifty pl/sql program to recover our control file; once that is done successfully we can go on our merry way using rman to recover the rest of the database.

DECLARE
v_devtype VARCHAR2(100);
v_done BOOLEAN;
v_maxPieces NUMBER;
TYPE t_pieceName IS TABLE OF varchar2(255) INDEX BY binary_integer;
v_pieceName t_pieceName;
BEGIN
-- Define the backup pieces... (names from the RMAN log file)
v_pieceName(1) :=
'/u02/app/oracle/flash_recovery_area/TEST/backupset/2005_07_30/o1_mf_ncnn0_TAG20050730T130722_1gqn4jy2_.bkp';
v_maxPieces := 1;
-- Allocate a channel... (use type=>null for DISK, type=>'sbt_tape' for TAPE)
v_devtype := DBMS_BACKUP_RESTORE.deviceAllocate(type=>NULL, ident=>'d1');
-- Restore the first control file...
DBMS_BACKUP_RESTORE.restoreSetDataFile;
-- CFNAME must be the exact path and filename of a controlfile that was backed up
DBMS_BACKUP_RESTORE.restoreControlFileTo(cfname=>'/u02/app/oracle/oradata/test/control01.ctl');
dbms_output.put_line('Start restoring '||v_maxPieces||' pieces.');
FOR i IN 1..v_maxPieces LOOP
dbms_output.put_line('Restoring from piece '||v_pieceName(i));
DBMS_BACKUP_RESTORE.restoreBackupPiece(handle=>v_pieceName(i), done=>v_done, params=>null);
exit when v_done;
END LOOP;
-- Deallocate the channel...
DBMS_BACKUP_RESTORE.deviceDeAllocate('d1');
EXCEPTION
WHEN OTHERS THEN
DBMS_BACKUP_RESTORE.deviceDeAllocate;
RAISE;
END;
/

PL/SQL completed successfully. I had 3 control files; the above command restores only one control file, so I do an operating system copy to restore the rest:
cp control01.ctl control02.ctl
cp control01.ctl control03.ctl

After the control file is restored, launch rman, mount the database and list all the backup information:

rman target /
rman>sql 'alter database mount';
rman>list backup;
BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
21 Incr 0 2G DISK 00:02:39 30-JUL-05
BP Key: 21 Status: AVAILABLE Compressed: NO Tag: TAG20050730T130722
Piece Name: /u02/app/oracle/flash_recovery_area/TEST/backupset/2005_07_30/o1_mf_nnnd0_TAG20050730T130722_1gqmzdjz_.bkp
List of Datafiles in backup set 21
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
1 0 Incr 1723296 30-JUL-05 /u02/app/oracle/oradata/test/system2.dbf
2 0 Incr 1723296 30-JUL-05 /u02/app/oracle/oradata/test/undotbs01.dbf
3 0 Incr 1723296 30-JUL-05 /u02/app/oracle/oradata/test/sysaux01.dbf
4 0 Incr 1723296 30-JUL-05 /u02/app/oracle/oradata/test/users01.dbf

The above output shows the backup set key (21), along with its tag and completion time.

Connect to rman:
rman target /
restore database;
recover database;
exit;

sqlplus / as sysdba
startup
Now the database should have been recovered to the current SCN at the time we encountered the complete media failure.

1.3. How do I backup archive log?

In order to backup archivelog we have to do the following:-

run {
backup
(archivelog all delete input);
}

If you want to back up the archive logs while skipping those that are inaccessible, and delete them after backup, you have to use (archivelog all skip inaccessible delete input);

1.4. How do I do a incremental backup after a base backup?

RMAN> backup incremental level=1 database plus archivelog delete all input;

This will take an incremental backup of the database plus the archive logs, and delete all archived log copies that were backed up.

1.5. What is the ORA-00204 error?

A disk I/O failure was detected on reading the controlfile.

Basically, check whether the control file is available, that the permissions
on the control file are right, and that the spfile/init.ora points to the right
location. If all checks pass and you are still getting the error, then overlay
the corrupted control file with one of the multiplexed copies: say you have
three control files control01.ctl, control02.ctl and control03.ctl and you are
getting errors on control03.ctl, then just cp control01.ctl over to
control03.ctl and you should be all set.
In order to issue
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
the database should be mounted, and in our case it is not mounted, so the only
other options available are to restore the control file from backup or cp a
multiplexed control file over the bad one.

1.6. What Information is Required for RMAN TAR?

Hardware Configuration
* The name of the node that hosts the database
* The make and model of the production machine
* The version and patch of the operating system
* The disk capacity of the host
* The number of disks and disk controllers
* The disk capacity and free space
* The media management vendor (if you use a third-party media manager)
* The type and number of media management devices
Software Configuration
* The name of the database instance (SID)
* The database identifier (DBID)
* The version and patch release of the Oracle database server
* The version and patch release of the networking software
* The method (RMAN or user-managed) and frequency of database backups
* The method of restore and recovery (RMAN or user-managed)
* The datafile mount points

You should keep this information in both electronic and hardcopy form. For example, if you save this information only in a text file on the network or in an email message, then if the entire system goes down, you may not have this data available when you need it.

1.7. How To turn Debug Feature on in rman?

run {
allocate channel c1 type disk;
debug on;
}
rman>list backup of database;

You will see a output similar to
DBGMISC: ENTERED krmkdftr [18:35:11.291]

DBGSQL: EXEC SQL AT TARGET begin dbms_rcvman . translateDataFile (
:fno ) ; end ; [18:35:11.291]
DBGSQL: sqlcode=0 [18:35:11.300]
DBGSQL: :b1 = 1
DBGMISC: ENTERED krmkgdf [18:35:11.301]
DBGMISC: ENTERED krmkgbh [18:35:11.315]
DBGMISC: EXITED krmkgbh with status Not required – no flags
[18:35:11.315] elapsed time [00:00:00:00.000]
DBGMISC: EXITED krmkgdf [18:35:11.315] elapsed time [00:00:00:00.014]
DBGMISC: EXITED krmkdftr [18:35:11.315] elapsed time [00:00:00:00.024]
DBGMISC: EXITED krmknmtr with status DF [18:35:11.315] elapsed time
[00:00:00:00.024]
DBGMISC: EXITED krmknmtr with status DFILE [18:35:11.315] elapsed time
[00:00:00:00.024]
DBGMISC: EXITED krmknmtr with status backup [18:35:11.315] elapsed time
[00:00:00:00.024]
DBGMISC: krmknmtr: the parse tree after name translation is:
[18:35:11.315]
DBGMISC: EXITED krmknmtr with status list [18:35:11.316] elapsed time
[00:00:00:00.078]
DBGMISC: krmkdps: this_reset_scn=1573357 [18:35:11.316]
DBGMISC: krmkdps: this_reset_time=19-AUG-06 [18:35:11.316]
DBGMISC: krmkdps: untilSCN= [18:35:11.317]

You can always turn debug off by issuing

rman>debug off;