Thursday, September 16, 2010

Oracle added about 482 new features in the Oracle Database 11g Release 1.
New Datatypes The new datatypes brought in Oracle 11g are:
  • Binary XML type - up to 15 times faster than XML LOBs.
  • DICOM (Digital Imaging and Communications in Medicine) medical images.
  • 3D spatial support.
  • RFID tag datatypes.
New background processes
  • ACMS - Atomic Controlfile to Memory Server
  • DBRM - Database Resource Manager
  • DIA0 - Diagnosability process 0
  • DIAG - Diagnosability process
  • FBDA - Flashback Data Archiver
  • GTX0 - Global Transaction Process 0
  • KATE - Konductor (Conductor) of ASM Temporary Errands
  • MARK - Mark Allocation unit for Resync Koordinator (coordinator)
  • SMCO - Space Management Coordinator
  • VKTM - Virtual Keeper of TiMe process
  • W000 - Space Management Worker Processes
SQL*Plus
  • SQL*Plus can display BLOB and BFILE columns in a SELECT query.
  • The errors while executing a script/SQL can be logged on to a table (SPERRORLOG, by default).
    SQL> set errorlogging on --->> errors will be logged onto SPERRORLOG.
    SQL> set errorlogging on table scott.error_log --->> errors will be logged onto user defined table.
    SQL> set errorlogging on truncate --->> will truncate all the rows in the table.
    SQL> set errorlogging on identifier identifier-name --->> useful to query the logging table
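    For example, a script run can be tagged with an identifier and the log table queried afterwards (a minimal sketch; the identifier name my_batch is just an illustration):
    SQL> set errorlogging on identifier my_batch
    SQL> select timestamp, username, statement, message from sperrorlog where identifier = 'my_batch';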
SQL
  • Automatic SQL tuning with self-learning capabilities.
  • Tables can have virtual columns (calculated from other columns).
    SQL> CREATE TABLE table-name ( ... , virtual-col-name [datatype] GENERATED ALWAYS AS (expression) VIRTUAL);
  • Indexes on virtual columns(VC) and partitioning on virtual columns.
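    A small illustration combining a virtual column with an index on it (table and column names are made up):
    SQL> create table emp_vc (empno number, sal number, comm number, total_comp number generated always as (sal + nvl(comm, 0)) virtual);
    SQL> create index emp_vc_total_idx on emp_vc(total_comp);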
  • Fast "alter table ... add column" with default values.
  • Online rebuilding of indexes with no pause on DML activity. Online table and index redefinition.
    SQL> alter index index-name rebuild online;
  • Ability to mark a table as read only.
    SQL> alter table table-name read only;
  • New PIVOT (to create a crosstab report on any relational table) and UNPIVOT (to convert any crosstab report to be stored as a relational table) operations. Pivot can produce the output in text or XML.
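    For instance, a crosstab of total salary per job by department might look like this (assuming the familiar SCOTT.EMP demo table):
    SQL> select * from (select deptno, job, sal from emp) pivot (sum(sal) for job in ('CLERK', 'MANAGER', 'ANALYST'));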
  • Table compression occurs on all DML activities. The blocks will be compressed, not the rows.
    SQL> create table table-name ... compress for all operations;
PL/SQL
  • Native compilation no longer requires a C compiler, but the PLSQL_CODE_TYPE parameter must be set to NATIVE.
    SQL> alter session set plsql_code_type = native;
  • New SIMPLE_INTEGER datatype - a subtype of PLS_INTEGER that is always NOT NULL, wraps instead of overflowing, and is faster than PLS_INTEGER, especially under native compilation.
  • SQL and PL/SQL result caching (in SGA).
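    As a sketch, a PL/SQL function can be marked RESULT_CACHE so repeated calls with the same argument are served from the cache (the DEPT demo table is assumed):
    create or replace function get_dname(p_deptno number) return varchar2
      result_cache relies_on (dept)
    is
      v_dname dept.dname%type;
    begin
      select dname into v_dname from dept where deptno = p_deptno;
      return v_dname;
    end;
    /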
  • You can create triggers as disabled.
  • Can specify trigger firing order (FOLLOWS clause).
    SQL> create trigger T2 ... follows T1;
  • Compound triggers - a trigger can be before statement, after statement, before row, after row and all in one.
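    A skeleton of a compound trigger (all names are illustrative), showing several timing sections in one object:
    create or replace trigger emp_compound_trg
      for insert or update on emp
      compound trigger
      before statement is
      begin
        null;  -- e.g. initialize a collection
      end before statement;
      after each row is
      begin
        null;  -- e.g. buffer :new values
      end after each row;
      after statement is
      begin
        null;  -- e.g. flush the buffered rows to an audit table
      end after statement;
    end emp_compound_trg;
    /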
  • DML triggers are up to 25% faster - in particular, row level triggers doing updates against other tables.
  • New CONTINUE statement - starts the next iteration of the loop.
  • Finer grained dependency tracking - when a parent object undergoes structural changes, dependent objects are invalidated only if they are actually affected, instead of being invalidated wholesale.
  • Dynamic SQL enhancements.
  • Ability to reference sequences directly in PL/SQL expressions (no need to SELECT seq.nextval INTO a variable FROM dual); we can simply write v := seq.nextval;
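    A minimal PL/SQL sketch (the sequence emp_seq is assumed to exist):
    declare
      v_id number;
    begin
      v_id := emp_seq.nextval;  -- no SELECT ... FROM dual needed from 11g
    end;
    /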
  • Dynamic cursor can be converted to ref cursor and vice versa.
  • Adaptive Cursors - if a cursor has bind variables, the database observes the cursor for a while to see what kind of values are passed to the variables and whether the execution plan needs recalculation. Adaptive cursors are activated and used automatically.
ASM
  • Support for rolling upgrades.
  • We can maintain version compatibilities at diskgroup level.
    SQL> alter diskgroup dg-name set attribute 'compatible.rdbms'='11.1';
    SQL> alter diskgroup dg-name set attribute 'compatible.asm'='11.1';
  • ASM drops disks if they remain offline for more than 3.6 hours. The diskgroup's default time limit can be changed via the DISK_REPAIR_TIME attribute, with a unit of minutes (M/m) or hours (H/h).
    SQL> alter diskgroup dg-name set attribute 'disk_repair_time'='4.5h';
  • Automatic bad block detection and repair.
  • Supports variable extent (allocation unit) sizes. This significantly reduces the number of extent maps in the shared pool and improves performance.
    SQL> create diskgroup ... attribute 'au_size' = 'number-of-bytes';
  • New SYSASM role (like SYSDBA, SYSOPER) & OSASM OS group (like OSDBA, OSOPER) to manage ASM instance only. This will separate storage administration from database administration.
    $ sqlplus "/as sysasm" or $ asmcmd -a sysasm
  • ASM Preferred Mirror Read or Preferred Read Failure Groups - ASM_PREFERRED_READ_FAILURE_GROUPS parameter is set to the preferred failure groups for each node.
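    For example, on a two-site stretch cluster each ASM instance can be pointed at its local failure group (diskgroup and failure group names are hypothetical):
    SQL> alter system set asm_preferred_read_failure_groups = 'DATA.SITE1_FG' scope=both sid='+ASM1';
    SQL> alter system set asm_preferred_read_failure_groups = 'DATA.SITE2_FG' scope=both sid='+ASM2';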
  • Faster Mirror Resync - Fast mirror resync after temporary connectivity lost.
  • We can drop a diskgroup forcefully.
    SQL> drop diskgroup dg-name force including contents;
  • Can mount a diskgroup in restricted mode, so that rebalancing completes faster.
    SQL> alter diskgroup dg-name mount restricted;
  • New commands in ASMCMD.
    • cp - to copy between ASM and local or remote destination.
    • md_backup - to backup metadata.
    • md_restore - to restore metadata.
    • lsdsk - to list(check) disks.
    • remap - to repair a range of physical blocks on disk.
Datapump
  • New options in Datapump export.
    DATA_OPTIONS, ENCRYPTION, ENCRYPTION_ALGORITHM, ENCRYPTION_MODE, REMAP_DATA, REUSE_DUMPFILES, TRANSPORTABLE
  • New options in Datapump import.
    DATA_OPTIONS, PARTITION_OPTIONS, REMAP_DATA, REMAP_TABLE, TRANSPORTABLE
  • New option in Datapump export interactive mode - REUSE_DUMPFILES.
  • In datapump import, we can specify how the partitions should transform by using PARTITION_OPTIONS.
  • Dump files can be compressed. In 10g, only metadata could be compressed; from 11g, both data and metadata can be compressed. The dump file is decompressed automatically during import.
  • Encryption: The dumpfile can be encrypted while creating. This encryption occurs on the entire dumpfile, not just on the encrypted columns as it was in the Oracle Database 10g.
  • Masking: when you import data from production into test or development instances, sensitive data such as credit card details must be obfuscated/remapped (altered so that it is not identifiable). From 11g, Data Pump enables you to do that by creating a masking function and then using it during import.
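    A minimal sketch of such a masking function and its use during import (package, schema, table and column names are made up):
    create or replace package mask_pkg as
      function mask_card(p_card varchar2) return varchar2;
    end mask_pkg;
    /
    create or replace package body mask_pkg as
      function mask_card(p_card varchar2) return varchar2 is
      begin
        return 'XXXX-XXXX-XXXX-' || substr(p_card, -4);  -- keep only the last 4 digits
      end mask_card;
    end mask_pkg;
    /
    $ impdp system DIRECTORY=dp_dir DUMPFILE=prod.dmp REMAP_DATA=hr.customers.card_no:system.mask_pkg.mask_card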
RMAN
  • Multisection backups of same file - RMAN can backup or restore a single file in parallel by dividing the work among multiple channels. Each channel backs up one file section, which is a contiguous range of blocks. This speeds up overall backup and restore performance, and particularly for bigfile tablespaces, in which a datafile can be sized upwards of several hundred GB to TB's.
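    A hedged example, splitting a bigfile tablespace backup into 25GB sections across the allocated channels (tablespace name and section size are illustrative):
    RMAN> backup section size 25g tablespace big_data_ts;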
  • Recovery will make use of flashback logs in FRA (Flash/Fast Recovery Area).
  • Fast Backup Compression - in addition to the Oracle Database 10g backup compression algorithm (BZIP2), RMAN now supports the ZLIB algorithm, which offers 40% better performance, with a trade-off of no more than 20% lower compression ratio, versus BZIP2.
    RMAN> configure compression algorithm 'ZLIB' ;
  • Will backup uncommitted undo only, not committed undo.
  • Data Recovery Advisor (DRA) - quickly identify the root cause of failures; auto fix or present recovery options to the DBA.
  • Virtual Private Catalog - a recovery catalog administrator can grant visibility of a subset of registered databases in the catalog to specific RMAN users.
    RMAN> grant catalog for database db-name to user-name;
  • Catalogs can be merged/moved/imported from one database to another.
  • New commands in RMAN
    • RMAN> list failure;
    • RMAN> list failure errnumber detail;
    • RMAN> advise failure;
    • RMAN> repair failure;
    • RMAN> repair failure preview;
    • RMAN> validate database; -- checks for corrupted blocks
    • RMAN> create virtual catalog;
Partitioning
  • Partition advisor - figure out what partitions to create.
  • Automated partitioning by interval (new partitions are added automatically; see the example after this list).
  • Automated reference partitioning by Parent/Child reference (as partitions are created, partitions are created in tables that reference them).
  • Partitioning by virtual columns.
  • New composite partitioning types: Range-Range, List-Range, List-Hash, List-List, Interval-Range, Interval-List and Interval-Interval.
  • System partitioning is introduced.
  • Support for transportable partitions - for moving partitions between different operating systems.
  • Staleness checking in partitions - only outdated partitions will be refreshed when we run dbms_mview.refresh().
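    A minimal sketch of the interval partitioning mentioned above (table and column names are made up); Oracle creates new monthly partitions automatically as rows arrive:
    SQL> create table sales_i (sale_id number, sale_date date, amount number)
           partition by range (sale_date)
           interval (numtoyminterval(1, 'MONTH'))
           (partition p_before_2010 values less than (to_date('01-JAN-2010', 'DD-MON-YYYY')));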
Compression
  • Support compression on INSERT, UPDATE and DELETE operations. 10g only supported compression for bulk data-loading operations.
  • Advanced compression allows for a 2-3 X compression rate of structured and unstructured data.
Performance improvements
  • RAC - 70% faster (ADDM has a better global view of the RAC cluster).
  • Streams - 30-50% faster.
  • Optimizer stats collection - 10x faster.
  • OLAP (Online Analytic Processing) based materialized views for fast OLAP cube building. Cube-organized MView supports automatic query rewrite and automatic refresh of the cube.
  • SQL Result Cache - new memory area in SGA for storing SQL query results, PL/SQL function results and OCI call results. When you execute a query with the hint result_cache, the results are stored in the SQL Result Cache. Query results caching is 25% faster. The size of the cache is determined by result_cache_max_size, result_cache_max_result, result_cache_mode, result_cache_remote_expiration.
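    For example, a query can be cached explicitly with the result_cache hint (the SCOTT.EMP demo table is assumed):
    SQL> select /*+ result_cache */ deptno, count(*) from emp group by deptno;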
  • Invisible indexes - the index is maintained but ignored by the optimizer, which is handy for testing the effect of dropping an index without actually dropping it. To make it visible again, alter it back to VISIBLE.
    SQL> alter index index-name invisible;
  • Oracle secure files - 5x faster than normal file systems.
Availability improvements
  • Ability to apply many patches on-line without downtime (RAC and single instance databases).
  • XA transactions spanning multiple servers.
  • Improved runtime connection load balancing.
  • Flashback Transaction/Oracle Total Recall.
Security improvements
  • Support for case sensitive and multi-byte passwords (disabled by setting SEC_CASE_SENSITIVE_LOGON parameter to FALSE).
  • Transparent Data Encryption - support for tablespace level encryption.
  • Hardware based master key protection.
  • Encrypt backups.
  • Kerberos authentication - strong passwords.
  • Multi-factor DBA controls with Oracle Database Vault.
Manageability improvements
  • New MEMORY_TARGET and MEMORY_MAX_TARGET parameters. When you set MEMORY_TARGET, Oracle dynamically distributes memory between the SGA and PGA as needed, i.e. MEMORY_TARGET covers SGA_TARGET + PGA_AGGREGATE_TARGET. Related new views are v$memory_dynamic_components and v$memory_resize_ops.
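    A minimal sketch (the sizes are arbitrary; MEMORY_MAX_TARGET is static, so an instance restart is needed for it to take effect):
    SQL> alter system set memory_max_target=2G scope=spfile;
    SQL> alter system set memory_target=1536M scope=spfile;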
  • From 11g, SID clause in "alter system reset" command is optional.
    SQL> alter system [SID=instance-name] reset parameter-name;
  • New DIAGNOSTIC_DEST parameter as replacement for BACKGROUND_DUMP_DEST, CORE_DUMP_DEST and USER_DUMP_DEST. It defaults to $ORACLE_BASE/diag/.
  • From 11g, we have two alert logs: the traditional alert_SID.log (in DIAGNOSTIC_DEST/trace) and an XML version, log.xml (in DIAGNOSTIC_DEST/alert). The XML file carries more information than the traditional alert log. DDL operations can also be logged to the alert logs. When log.xml reaches 10MB, it is renamed and a new file is created. log.xml can be accessed from the ADR command line.
    ADRCI> show alert
  • Logging of DDL operations to the alert logs is not enabled by default; you must set the new ENABLE_DDL_LOGGING parameter to TRUE.
    SQL> ALTER SYSTEM SET enable_ddl_logging=TRUE SCOPE=BOTH;
  • Parameter(p) file & server parameter(sp) file can be created from memory.
    SQL> create pfile[=location] from memory;
    SQL> create spfile[=location] from memory;
  • DDL wait option - instead of failing immediately, DDL statements will wait up to the specified number of seconds to acquire the required locks.
    SQL> ALTER SYSTEM/SESSION SET DDL_LOCK_TIMEOUT = n;
  • We can mark newly gathered statistics as pending, which means they are not published or used by the optimizer, giving us an opportunity to test the new statistics before publishing them (see the sketch below).
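    A sketch of the pending-statistics workflow (schema and table names are illustrative; the ALTER SESSION lets you test against the pending statistics before publishing):
    SQL> exec dbms_stats.set_table_prefs('HR', 'EMP', 'PUBLISH', 'FALSE');
    SQL> exec dbms_stats.gather_table_stats('HR', 'EMP');
    SQL> alter session set optimizer_use_pending_statistics = true;
    SQL> exec dbms_stats.publish_pending_stats('HR', 'EMP');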
  • From Oracle Database 11g, we can create extended statistics on
    (i) expressions of values, not only on columns
    (ii) on multiple columns (column group), not only on single column.
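    For example (the HR.EMPLOYEES demo table is assumed), a column group and an expression statistic can be created like this:
    SQL> select dbms_stats.create_extended_stats('HR', 'EMPLOYEES', '(JOB_ID, DEPARTMENT_ID)') from dual;
    SQL> select dbms_stats.create_extended_stats('HR', 'EMPLOYEES', '(LOWER(LAST_NAME))') from dual;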
  • Table level control of CBO statistics refresh threshold.
    SQL> exec dbms_stats.set_table_prefs('HR', 'EMP', 'STALE_PERCENT', '20');
  • Flashback Data Archive - historical row versions are kept in a flashback archive explicitly created for the table (a designated tablespace with a retention period), so flashback queries against it do not rely on undo. Flashback data archives can be defined for any table. The archive is written by a dedicated background process called FBDA, so there is little impact on foreground performance, and old data is purged automatically at regular intervals.
  • Analytic Workspace Manager (AWM) - a tool to manage OLAP objects in the database.
  • Users with default passwords can be found in DBA_USERS_WITH_DEFPWD.
  • The password hash column in DBA_USERS (and ALL_USERS, USER_USERS) is now blank. If you want to see the hash, query SYS.USER$.
  • Default value for audit_trail is DB, not NULL. By default some system privileges will be audited.
  • LogMiner can be accessed from Oracle Enterprise Manager.
Data Guard improvements
  • Standby databases can now be open read-only while redo apply continues, so they can be used for running reports 24x7.
  • Online upgrades: Test on standby and roll to production/primary.
  • Snapshot standby - physical standby database can be temporarily converted into an updateable one called snapshot standby database.
  • Creation of a physical standby has become easier.
  • Incremental backups can be taken on a read-only (readable) physical standby.
  • Offload: Complete database and fast incremental backups.
  • Logical standby databases now support XML and CLOB datatypes as well as transparent data encryption.
  • We can compress the redo data that goes to the standby server by setting compression=enable.
  • When transferring redo data to the standby, if the standby does not respond within the specified timeout (set by net_timeout=n), the log transfer/shipping service gives up (see the example after this list).
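    A hedged example setting both attributes on the primary (the service name and DB_UNIQUE_NAME stby1 are hypothetical; redo compression requires the Advanced Compression option):
    SQL> alter system set log_archive_dest_2 = 'SERVICE=stby1 ASYNC NET_TIMEOUT=30 COMPRESSION=ENABLE VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stby1';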
SecureFiles
SecureFiles provide faster access to unstructured data than normal file systems and combine the benefits of LOBs and external files. For example, write access to SecureFiles is faster than to a standard Linux file system, while read access is about the same. SecureFiles can be encrypted for security, de-duplicated and compressed for more efficient storage, cached (or not) for faster access (or to save buffer cache space), and logged at several levels to reduce the mean time to recover (MTTR) after a crash.

create table table-name ( ... lob-column lob-type ... ) lob (lob-column) store as securefile ([deduplicate] [compress high/low] [encrypt using 'encryption-algorithm'] [cache/nocache] [logging/nologging]);
To create SecureFiles:
(i) The initialization parameter db_securefile should be set to PERMITTED (the default value).
(ii) The tablespace where you are creating the securefile should be Automatic Segment Space Management (ASSM) enabled (default mode in Oracle Database 11g).
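A concrete, illustrative example (the ASSM tablespace assm_ts and all object names are assumptions):
SQL> create table documents (doc_id number, body clob) lob (body) store as securefile (tablespace assm_ts compress high deduplicate cache);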
Real Application Testing (RAT)
Real Application Testing makes decision making easier for migrations, upgrades, patching, initialization parameter changes, object changes, hardware replacements, operating system changes, and moving to a RAC environment. RAT consists of two components:
  • Database Replay - capture production workload and replay on different (standby/test/development) environment. Capture the activities from source database in the form of capture files in capture directory. Transfer these files to target box. Replay the process on target database.
  • SQL Performance Analyzer (SPA) - identifies SQL execution plan changes and performance regressions. SPA allows us to get results of some specific SQL or entire SQL workload against various types of changes such as initialization parameter changes, optimizer statistics refresh, and database upgrades, and then produces a comparison report to help you assess their impact. Accessible through Oracle Enterprise Manager or dbms_sqlpa package.
Other features
  • Temporary tablespaces can be shrunk (down to a specified size).
    SQL> alter tablespace temp-tbs shrink space;
    SQL> alter tablespace temp-tbs shrink space keep n{K|M|G|T|P|E};
  • Online application upgrades and hot patching. Feature-based patching is also available.
  • Real-time SQL Monitoring, allows us to see the different metrics of the SQL being executed in real time. The stats are exposed through V$SQL_MONITOR, which is refreshed every second.
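    For example (the SQL_ID below is only a placeholder), recent statements can be listed from V$SQL_MONITOR and a detailed report produced with DBMS_SQLTUNE:
    SQL> select sql_id, status, elapsed_time, cpu_time, buffer_gets from v$sql_monitor;
    SQL> select dbms_sqltune.report_sql_monitor(sql_id => 'b6usrg82hwsa3', type => 'TEXT') from dual;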
  • "duality" between SQL and XML - users can embed XML within PL/SQL and vice versa.
  • New binary XML datatype, a new XML index & better XQuery support.
  • Query rewriting will occur more frequently and for remote tables also.
  • Automatic Diagnostic Repository (ADR)- automated capture of fault diagnostics for faster fault resolution. The location of the files depends on DIAGNOSTIC_DEST parameter. This can be managed from Database control or command line. For command line, execute $ ./adrci
  • Repair advisers to guide DBAs through the fault diagnosis and resolution process.
  • SQL Developer is installed with the database server software (all editions). The Windows SQL*Plus GUI is deprecated.
  • APEX (Oracle Application Express), formerly known as HTML DB, shipped with the DB.
  • Checkers - DB Structure Integrity Checker, Data Block Integrity Checker, Redo Integrity Checker, Undo Segment Integrity Checker, Transaction Integrity Checker, Dictionary Integrity Checker.
  • 11g SQL Access Advisor provides recommendations with respect to the entire workload, including considering the cost of creation and maintaining access structure.
  • hangman Utility – hangman(Hang Manager) utility to detect database bottlenecks.
  • Health Monitor (HM) utility - an automation of the dbms_repair corruption detection utility.
  • The dbms_stats package has several new procedures to aid in supplementing histogram data, and the state of these extended histograms can be seen in the user_tab_col_statistics view:
    dbms_stats.create_extended_stats
    dbms_stats.show_extended_stats_name
    dbms_stats.drop_extended_stats
  • New package DBMS_ADDM introduced in 11g.
  • New parameters have been added to enhance the default security of the database.
    * SEC_RETURN_SERVER_RELEASE_BANNER
    * SEC_PROTOCOL_ERROR_FURTHER_ACTION
    * SEC_PROTOCOL_ERROR_TRACE_ACTION
    * SEC_MAX_FAILED_LOGIN_ATTEMPTS
    * SEC_DISABLE_OLDER_ORACLE_RPCS
Desupported features
The following features are desupported/deprecated in Oracle Database 11g Release 1 (11.1.0):
  • Oracle export utility (exp). The original import utility (imp) is still supported for backward compatibility.
  • Windows SQL*Plus GUI & iSQLPlus will not be shipped anymore. Use SQL Developer instead.
  • Oracle Enterprise Manager Java console.
  • copy command is deprecated.

RMAN new features

RMAN new features in Oracle 10g
  • Managing recovery related files with flash recovery area.
  • Optimized incremental backups using block change tracking (Faster incremental backups) using a file (named block change tracking file). CTWR (Change Tracking Writer) is the background process responsible for tracking the blocks.
  • Reducing the time and overhead of full backups with incrementally updated backups.
  • Comprehensive backup job tracking and administration with Enterprise Manager.
  • Backup set binary compression.
  • New compression algorithm BZIP2 brought in.
  • Automated Tablespace Point-in-Time Recovery.
  • Automatic channel failover on backup & restore.
  • Cross-Platform tablespace conversion.
  • Ability to preview the backups required to perform a restore operation.
    RMAN> restore database preview [summary];
    RMAN> restore tablespace tbs1 preview;
RMAN new features in Oracle 11g Release 1
  • Multisection backups of same file - RMAN can backup or restore a single file in parallel by dividing the work among multiple channels. Each channel backs up one file section, which is a contiguous range of blocks. This speeds up overall backup and restore performance, and particularly for bigfile tablespaces, in which a datafile can be sized upwards of several hundred GB to TB's.
  • Recovery will make use of flashback logs in FRA (Flash Recovery Area).
  • Fast Backup Compression - in addition to the Oracle Database 10g backup compression algorithm (BZIP2), RMAN now supports the ZLIB algorithm, which offers 40% better performance, with a trade-off of no more than 20% lower compression ratio, versus BZIP2.
    RMAN> configure compression algorithm 'ZLIB';
  • Will backup uncommitted undo only, not committed undo.
  • Data Recovery Advisor (DRA) - quickly identify the root cause of failures; auto fix or present recovery options to the DBA.
  • Virtual Private Catalog - a recovery catalog administrator can grant visibility of a subset of registered databases in the catalog to specific RMAN users.
    RMAN> grant catalog for database db-name to user-name;
  • Catalogs can be merged/moved/imported from one database to another.
  • New commands in RMAN
    • RMAN> list failure;
    • RMAN> list failure errnumber detail;
    • RMAN> advise failure;
    • RMAN> repair failure;
    • RMAN> repair failure preview;
    • RMAN> validate database; -- checks for corrupted blocks
    • RMAN> create virtual catalog;
RMAN new features in Oracle 11g Release2
  • The SET NEWNAME command has new clauses and format options. A single SET NEWNAME command can now be applied to all files in a database or tablespace:
    SET NEWNAME FOR DATABASE TO format;
    SET NEWNAME FOR TABLESPACE tsname TO format;
  • New format identifiers:
    • %U - unique identifier, e.g. data_D-%d_I-%I_TS-%N_FNO-%f
    • %b - UNIX base name of the original datafile name. For example, if the original datafile name was $ORACLE_HOME/data/tbs_01.f, then %b is tbs_01.f.
RMAN related views
Control File V$ View Recovery Catalog View Describes
V$ARCHIVED_LOG RC_ARCHIVED_LOG Archived and unarchived redo logs
V$BACKUP_DATAFILE RC_BACKUP_CONTROLFILE Control files in backup sets
V$BACKUP_CORRUPTION RC_BACKUP_CORRUPTION Corrupt block ranges in datafile backups
V$BACKUP_DATAFILE RC_BACKUP_DATAFILE Datafiles in backup sets
V$BACKUP_FILES RC_BACKUP_FILES RMAN backups and copies in the repository
V$BACKUP_PIECE RC_BACKUP_PIECE Backup pieces
V$BACKUP_REDOLOG RC_BACKUP_REDOLOG Archived logs in backups
V$BACKUP_SET RC_BACKUP_SET Backup sets
V$BACKUP_SPFILE RC_BACKUP_SPFILE Server parameter files in backup sets
V$DATAFILE_COPY RC_CONTROLFILE_COPY Control file copies on disk
V$COPY_CORRUPTION RC_COPY_CORRUPTION Information about datafile copy corruptions
V$DATABASE RC_DATABASE Databases registered in the recovery catalog (RC_DATABASE) or information about the currently mounted database (V$DATABASE)
V$DATABASE_BLOCK_CORRUPTION RC_DATABASE_BLOCK_CORRUPTION Database blocks marked as corrupt in the most recent RMAN backup or copy
V$DATABASE_INCARNATION RC_DATABASE_INCARNATION All database incarnations registered in the catalog
V$DATAFILE RC_DATAFILE All datafiles registered in the recovery catalog
V$DATAFILE_COPY RC_DATAFILE_COPY Datafile image copies
V$LOG_HISTORY RC_LOG_HISTORY Historical information about online redo logs
V$OFFLINE_RANGE RC_OFFLINE_RANGE Offline ranges for datafiles
V$PROXY_ARCHIVEDLOG RC_PROXY_ARCHIVEDLOG Archived log backups created by proxy copy
V$PROXY_CONTROLFILE RC_PROXY_CONTROLFILE Control file backups created by proxy copy
V$PROXY_DATAFILE RC_PROXY_DATAFILE Datafile backups created by proxy copy
V$LOG and V$LOGFILE RC_REDO_LOG Online redo logs for all incarnations of the database since the last catalog resynchronization
V$THREAD RC_REDO_THREAD All redo threads for all incarnations of the database since the last catalog resynchronization
V$RESTORE_POINT RC_RESTORE_POINT All restore points for all incarnations of the database since the last catalog resynchronization
- RC_RESYNC Recovery catalog resynchronizations
V$RMAN_CONFIGURATION RC_RMAN_CONFIGURATION RMAN persistent configuration settings
V$RMAN_OUTPUT RC_RMAN_OUTPUT Output from RMAN commands for use in Enterprise Manager

V$RMAN_STATUS RC_RMAN_STATUS Historical status information about RMAN operations
V$TABLESPACE RC_TABLESPACE All tablespaces registered in the recovery catalog, all dropped tablespaces, and tablespaces that belong to old incarnations
V$TEMPFILE RC_TEMPFILE All tempfiles registered in the recovery catalog

RMAN related Packages
DBMS_RCVCAT
DBMS_RCVMAN
DBMS_BACKUP_RESTORE

Data Pump export/import utility

The Data Pump export utility provides a mechanism for transferring data objects between Oracle databases.
With datapump, we can do all exp/imp activities, except incremental backups.

Format: expdp KEYWORD=value or KEYWORD=(value1,value2,...,valueN)

USERID must be the first parameter on the command line. This user must have read & write permissions on DIRECTORY.
$ expdp help=y
Keyword Description (Default)
ATTACH Attach to an existing job, e.g. ATTACH [=job name].
COMPRESSION Reduce the size of a dumpfile. Valid keyword values are: ALL, (METADATA_ONLY), DATA_ONLY and NONE.
CONTENT Specifies data to unload. Valid keyword values are: (ALL), DATA_ONLY, and METADATA_ONLY.
DATA_OPTIONS Data layer flags. Valid value is: XML_CLOBS - write XML datatype in CLOB format.
DIRECTORY Directory object to be used for dumpfiles and logfiles. (DATA_PUMP_DIR)
e.g. create directory extdir as '/path/';
DUMPFILE List of destination dump files (EXPDAT.DMP),
e.g. DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.
ENCRYPTION Encrypt part or all of a dump file. Valid keyword values are: ALL, DATA_ONLY, METADATA_ONLY, ENCRYPTED_COLUMNS_ONLY, or NONE.
ENCRYPTION_ALGORITHM Specify how encryption should be done. Valid keyword values are: (AES128), AES192, and AES256.
ENCRYPTION_MODE Method of generating encryption key. Valid keyword values are: DUAL, PASSWORD, and (TRANSPARENT).
ENCRYPTION_PASSWORD Password key for creating encrypted data within a dump file.
ESTIMATE Calculate job estimates. Valid keyword values are: (BLOCKS) and STATISTICS.
ESTIMATE_ONLY Calculate job estimates without performing the export.
EXCLUDE Exclude specific object types. e.g. EXCLUDE=TABLE:EMP
FILESIZE Specify the size of each dumpfile in units of bytes.
FLASHBACK_SCN SCN used to reset session snapshot.
FLASHBACK_TIME Time used to find the closest corresponding SCN value.
FULL Export entire database (N). To use this option user must have EXP_FULL_DATABASE role.
HELP Display help messages (N).
INCLUDE Include specific object types. e.g. INCLUDE=TABLE_DATA.
JOB_NAME Name of export job (default name will be SYS_EXPORT_XXXX_01, where XXXX can be FULL or SCHEMA or TABLE).
LOGFILE Specify log file name (EXPORT.LOG).
NETWORK_LINK Name of remote database link to the source system.
NOLOGFILE Do not write logfile (N).
PARALLEL Change the number of active workers for current job.
PARFILE Specify parameter file name.
QUERY Predicate clause used to export a subset of a table. e.g. QUERY=emp:"WHERE dept_id > 10".
REMAP_DATA Specify a data conversion function. e.g. REMAP_DATA=EMP.EMPNO:SCOTT.EMPNO
REUSE_DUMPFILES Overwrite destination dump file if it exists (N).
SAMPLE Percentage of data to be exported.
SCHEMAS List of schemas to export (login schema).
SOURCE_EDITION Edition to be used for extracting metadata (from 11g release2).
STATUS Frequency (secs) job status is to be monitored where the default (0) will show new status when available.
TABLES Identifies a list of tables to export. e.g. TABLES=HR.EMP,SH.SALES:SALES_1995.
TABLESPACES Identifies a list of tablespaces to export.
TRANSPORTABLE Specify whether transportable method can be used. Valid keyword values are: ALWAYS, (NEVER).
TRANSPORT_FULL_CHECK Verify storage segments of all tables (N).
TRANSPORT_TABLESPACES List of tablespaces from which metadata will be unloaded.
VERSION Version of objects to export. Valid keywords are: (COMPATIBLE), LATEST, or any valid database version.

The following commands are valid while in interactive mode.
Note: abbreviations are allowed
Datapump Export interactive mode
While the export is running, press Control-C to go to interactive mode; this stops the display of messages on the screen, but not the export process itself.
Export> [[here you can use the below interactive commands]]
Command Description
ADD_FILE Add dumpfile to dumpfile set.
CONTINUE_CLIENT Return to logging mode. Job will be re-started if idle.
EXIT_CLIENT Quit client session and leave job running.
FILESIZE Default filesize (bytes) for subsequent ADD_FILE commands.
HELP Summarize interactive commands.
KILL_JOB Detach and delete job.
PARALLEL Change the number of active workers for current job. PARALLEL=number of workers
REUSE_DUMPFILES Overwrite destination dump file if it exists (N).
START_JOB Start/resume current job. Valid value is: SKIP_CURRENT.
STATUS Frequency (secs) job status is to be monitored where the default (0) will show new status when available. STATUS[=interval]
STOP_JOB Orderly shutdown of job execution and exits the client. STOP_JOB=IMMEDIATE performs an immediate shutdown of Datapump job.

Note: values within parentheses are the default values.
The options in sky blue color are the enhancements in 11g Release1.

The options in blue color are the enhancements in 11g Release2.
Whenever a Data Pump export or import is running, Oracle creates a master table named after the JOB_NAME; it is dropped once the job completes. From this table, Oracle keeps track of how much of the job has completed and where to resume from.
Datapump Export Examples
SQL> CREATE DIRECTORY dp_dir AS '/u02/dpdata';
SQL> GRANT READ, WRITE ON DIRECTORY dp_dir TO user_name;
==> creating an external directory and granting privileges.

$ expdp DUMPFILE=liv_full.dmp LOGFILE=liv_full.log FULL=y PARALLEL=4
==> exporting whole database, with the help of 4 processes.

$ expdp DUMPFILE=master.dmp LOGFILE=master.log SCHEMAS=satya
(or)
$ expdp system/manager SCHEMAS=hr DIRECTORY=data_pump_dir LOGFILE=example1.log FILESIZE=300000 DUMPFILE=example1.dmp JOB_NAME=example1
==> exporting all the objects of a schema.

$ expdp ATTACH=EXAMPLE1
==> continuing or attaching job to background process.

$ expdp DUMPFILE=search.dmp LOGFILE=search.log SCHEMAS=search,own,tester
==> exporting all the objects of multiple schemas.

$ expdp anand/coffee TABLES=kick DIRECTORY=ext_dir DUMPFILE=expkick_%U.dmp PARALLEL=4 JOB_NAME=kick_export
==> exporting all the rows of a table.

$ expdp DUMPFILE=t5.dmp LOGFILE=t5.log SCHEMAS=ym ESTIMATE_ONLY=Y
(or)
$ expdp LOGFILE=t5.log SCHEMAS=manage ESTIMATE_ONLY=Y
==> estimating export time and size.

$ expdp DUMPFILE=extdir:avail.dmp LOGFILE=extdir:avail.log
==> exporting without specifying DIRECTORY option and specifying the external directory name within the file names.

$ expdp SCHEMAS=u1,u6 .... COMPRESSION=metadata_only
==> exporting two schemas and compressing the metadata.

$ expdp SCHEMAS=cpp,java .... COMPRESSION=all
==> exporting two schemas and compressing the data (valid in 11g or later).

$ expdp username/password FULL=y DUMPFILE=dba.dmp DIRECTORY=dpump_dir INCLUDE=GRANT INCLUDE=INDEX CONTENT=ALL
==> exporting an entire database to a dump file with all GRANTS, INDEXES and data

$ expdp DUMPFILE=dba.dmp DIRECTORY=dpump_dir INCLUDE=PROCEDURE
==> exporting all the procedures.

$ expdp username/password DUMPFILE=dba.dmp DIRECTORY=dpump_dir INCLUDE=PROCEDURE:\"=\'PROC1\'\",FUNCTION:\"=\'FUNC1\'\"
==> exporting procedure PROC1 and function FUNC1.

$ expdp username/password DUMPFILE=dba.dmp DIRECTORY=dpump_dir
INCLUDE=TABLE:"LIKE 'TAB%'"
(or)
$ expdp username/password DUMPFILE=dba.dmp DIRECTORY=dpump_dir EXCLUDE=TABLE:"NOT LIKE 'TAB%'"
==> exporting only those tables whose name start with TAB.

$ expdp TABLES=hr.employees VERSION=10.1 DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp
==> exporting data with version. Datapump Import can always read dump file sets created by older versions of Data Pump Export.

$ expdp TABLES=holder,activity REMAP_DATA=holder.cardno:hidedata.newcc
REMAP_DATA=activity.cardno:hidedata.newcc DIRECTORY=dpump_dir DUMPFILE=hremp2.dmp
==> exporting and remapping of data.

Exporting using Datapump API (DBMS_DATAPUMP package)
declare
  handle number;
begin
  -- open a schema-mode export job
  handle := dbms_datapump.open('EXPORT', 'SCHEMA');
  -- dump file name and directory object to write to
  dbms_datapump.add_file(handle, 'scott.dmp', 'EXTDIR');
  -- export only the SCOTT schema
  dbms_datapump.metadata_filter(handle, 'SCHEMA_EXPR', '=''SCOTT''');
  -- use four parallel workers
  dbms_datapump.set_parallel(handle, 4);
  dbms_datapump.start_job(handle);
  dbms_datapump.detach(handle);
exception
  when others then
    dbms_output.put_line(substr(sqlerrm, 1, 254));
end;
/

impdp utility

The Data Pump Import utility provides a mechanism for transferring data objects between Oracle databases.

Format: impdp KEYWORD=value or KEYWORD=(value1,value2,...,valueN)

USERID must be the first parameter on the command line. This user must have read & write permissions on DIRECTORY.
$ impdp help=y

Keyword Description (Default)
ATTACH Attach to an existing job, e.g. ATTACH [=job name].
CONTENT Specifies data to load. Valid keywords are:(ALL), DATA_ONLY, and METADATA_ONLY.
DATA_OPTIONS Data layer flags. Valid value is: SKIP_CONSTRAINT_ERRORS-constraint errors are not fatal.
DIRECTORY Directory object to be used for dump, log, and sql files. (DATA_PUMP_DIR)
DUMPFILE List of dumpfiles to import from (EXPDAT.DMP), e.g. DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.
ENCRYPTION_PASSWORD Password key for accessing encrypted data within a dump file. Not valid for network import jobs.
ESTIMATE Calculate job estimates. Valid keywords are:(BLOCKS) and STATISTICS.
EXCLUDE Exclude specific object types. e.g. EXCLUDE=TABLE:EMP
FLASHBACK_SCN SCN used to reset session snapshot.
FLASHBACK_TIME Time used to find the closest corresponding SCN value.
FULL Import everything from source (Y). To use this option (full import of the database) the user must have IMP_FULL_DATABASE role.
HELP Display help messages (N).
INCLUDE Include specific object types. e.g. INCLUDE=TABLE_DATA.
JOB_NAME Name of import job (default name will be SYS_IMPORT_XXXX_01, where XXXX can be FULL or SCHEMA or TABLE).
LOGFILE Log file name (IMPORT.LOG).
NETWORK_LINK Name of remote database link to the source system.
NOLOGFILE Do not write logfile.
PARALLEL Change the number of active workers for current job.
PARFILE Specify parameter file name.
PARTITION_OPTIONS Specify how partitions should be transformed. Valid keywords are: DEPARTITION, MERGE and (NONE).
QUERY Predicate clause used to import a subset of a table. e.g. QUERY=emp:"WHERE dept_id > 10".
REMAP_DATA Specify a data conversion function. e.g. REMAP_DATA=EMP.EMPNO:SCOTT.EMPNO
REMAP_DATAFILE Redefine datafile references in all DDL statements.
REMAP_SCHEMA Objects from one schema are loaded into another schema.
REMAP_TABLE Table names are remapped to another table. e.g. REMAP_TABLE=EMP.EMPNO:SCOTT.EMPNO.
REMAP_TABLESPACE Tablespace object are remapped to another tablespace.
REUSE_DATAFILES Tablespace will be initialized if it already exists(N).
SCHEMAS List of schemas to import.
SKIP_UNUSABLE_INDEXES Skip indexes that were set to the Index Unusable state.
SOURCE_EDITION Edition to be used for extracting metadata (from 11g release2).
SQLFILE Write all the SQL DDL to a specified file.
STATUS Frequency (secs) job status is to be monitored where the default (0) will show new status when available.
STREAMS_CONFIGURATION Enable the loading of streams metadata.
TABLE_EXISTS_ACTION Action to take if imported object already exists. Valid keywords: (SKIP), APPEND, REPLACE and TRUNCATE.
TABLES Identifies a list of tables to import. e.g. TABLES=HR.EMP,SH.SALES:SALES_1995.
TABLESPACES Identifies a list of tablespaces to import.
TARGET_EDITION Edition to be used for loading metadata (from 11g release2).
TRANSFORM Metadata transform to apply to applicable objects. Valid keywords: SEGMENT_ATTRIBUTES, STORAGE, OID and PCTSPACE.
TRANSPORTABLE Options for choosing transportable data movement. Valid keywords: ALWAYS and (NEVER). Only valid in NETWORK_LINK mode import operations.

TRANSPORT_DATAFILES List of datafiles to be imported by transportable mode.
TRANSPORT_FULL_CHECK Verify storage segments of all tables (N).
TRANSPORT_TABLESPACES List of tablespaces from which metadata will be loaded. Only valid in NETWORK_LINK mode import operations.
VERSION Version of objects to export. Valid keywords are:(COMPATIBLE), LATEST, or any valid database version. Only valid for NETWORK_LINK and SQLFILE.

The following commands are valid while in interactive mode.
Note: abbreviations are allowed
Datapump Import interactive mode
While importing is going on, press Control-C to go to interactive mode.
Import> [[here you can use the below interactive commands]]

Command Description (Default)
CONTINUE_CLIENT Return to logging mode. Job will be re-started if idle.
EXIT_CLIENT Quit client session and leave job running.
HELP Summarize interactive commands.
KILL_JOB Detach and delete job.
PARALLEL Change the number of active workers for current job. PARALLEL=number of workers
START_JOB Start/resume current job. START_JOB=SKIP_CURRENT will start the job after skipping any action which was in progress when job was stopped.
STATUS Frequency (secs) job status is to be monitored where the default (0) will show new status when available. STATUS[=interval]
STOP_JOB Orderly shutdown of job execution and exits the client. STOP_JOB=IMMEDIATE performs an immediate shutdown of Datapump job.

Note: values within parentheses are the default values.
The options in sky blue color are the enhancements in 11g Release1.
The options in blue color are the enhancements in 11g Release2.

The order of importing objects is:
Tablespaces
Users
Roles
Database links
Sequences
Directories
Synonyms
Types
Tables/Partitions
Views
Comments
Packages/Procedures/Functions

Datapump Import Examples
$ impdp DUMPFILE=aslv_full.dmp LOGFILE=aslv_full.log PARALLEL=4
==> importing all the exported data, with the help of 4 processes.

$ impdp system/manager DUMPFILE=testdb_emp.dmp LOGFILE=testdb_emp_imp.log TABLES=tester.employee
==> importing all the records of table (employee table records in tester schema).

$ impdp DUMPFILE=visi.dmp LOGFILE=ref1imp.log TABLES=(brand, sfms)
==> importing all the records of a couple of tables.

$ impdp system DUMPFILE=example2.dmp REMAP_TABLESPACE=system:example2 LOGFILE=example2imp.log JOB_NAME=example2
==> importing data of one tablespace into another tablespace.

$ impdp DUMPFILE=prod.dmp LOGFILE=prod.log REMAP_TABLESPACE=FRI:WED TABLE_EXISTS_ACTION=REPLACE PARALLEL=4
==> importing data and replacing already existing tables.

$ impdp user1/user1 DUMPFILE=btw:avail.dmp INCLUDE=PROCEDURE
==> importing only procedures from the dump file.

$ impdp username/password DIRECTORY=dpump_dir DUMPFILE=scott.dmp TABLES=scott.emp REMAP_SCHEMA=scott:jim
==> importing tables from scott's schema into jim's schema.

$ impdp DIRECTORY=dpump_dir FULL=Y DUMPFILE=db_full.dmp
REMAP_DATAFILE="'C:\DB1\HR\PAYROLL\tbs6.dbf':'/db1/hr/payroll/tbs6.dbf'"
==> importing data by remapping one datafile to another.

$ impdp username/password DIRECTORY=dpump DUMPFILE=expfull.dmp SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX
==> will create sqlfile with DDL that could be executed in another database/schema to create the tables and indexes.

$ impdp DIRECTORY=dpump_dir DUMPFILE=emps.dmp REMAP_DATA=emp.empno:fixusers.newempid REMAP_DATA=card.empno:fixusers.newempid TABLE_EXISTS_ACTION=append
==> importing and remapping of data.

Importing using Datapump API (DBMS_DATAPUMP package)
declare
  handle number;
begin
  -- open a schema-mode import job
  handle := dbms_datapump.open('IMPORT', 'SCHEMA');
  -- dump file name and directory object to read from
  dbms_datapump.add_file(handle, 'scott.dmp', 'EXTDIR');
  -- replace tables that already exist in the target schema
  dbms_datapump.set_parameter(handle, 'TABLE_EXISTS_ACTION', 'REPLACE');
  -- use four parallel workers
  dbms_datapump.set_parallel(handle, 4);
  dbms_datapump.start_job(handle);
  dbms_datapump.detach(handle);
exception
  when others then
    dbms_output.put_line(substr(sqlerrm, 1, 254));
end;
/

Here is a general guideline for using the PARALLEL parameter:
- Set the degree of parallelism to two times the number of CPUs, then tune from there.
- For Data Pump Export, the PARALLEL parameter value should be less than or equal to the number of dump files.
- For Data Pump Import, the PARALLEL parameter value should not be much larger than the number of files in the dump file set.
- A PARALLEL greater than one is only available in Enterprise Edition.

Original Export is desupported from 10g release 2.

Original Import will be maintained and shipped forever, so that Oracle Version 5.0 through Oracle9i dump files will be able to be loaded into Oracle 10g and later. Datapump Import can only read Oracle Database 10g (and later) Datapump Export dump files. Oracle recommends that customers convert to Oracle Data Pump.