Monday, April 11, 2011

Restore and Recovery Scenarios


Full Database Restore

$ORACLE_HOME/bin/rman target / nocatalog
RMAN> shutdown abort;
RMAN> startup mount;
RMAN> restore database;
RMAN> recover database;
RMAN> alter database open;
database opened

Tablespace Restore (online)

$ORACLE_HOME/bin/rman target / nocatalog
RMAN> sql 'alter tablespace users offline';
RMAN> restore tablespace users;
RMAN> recover tablespace users;
RMAN> sql 'alter tablespace users online';
* The SYSTEM tablespace cannot be taken offline and recovered while the database is open; use the offline procedure below instead.

Tablespace Restore (offline)

$ORACLE_HOME/bin/rman target / nocatalog
RMAN> shutdown abort;
RMAN> startup mount;
RMAN> restore tablespace users;
RMAN> recover tablespace users;
RMAN> alter database open;
database opened

Restoring a Specific Datafile

$ORACLE_HOME/bin/rman target / nocatalog
RMAN> shutdown abort;
RMAN> startup mount;
RMAN> restore datafile '/oradata/DB1/dbf/users01.dbf';
RMAN> recover datafile '/oradata/DB1/dbf/users01.dbf';
RMAN> alter database open;
database opened

Control File Restoration

Prerequisite: In your RMAN backup directory, determine the latest control file autobackup.
Default format: c-nnnnnnnnnn-nnnnnnnn-nn (c-<dbid>-<date>-<sequence>)
$ORACLE_HOME/bin/rman target / nocatalog
RMAN> shutdown abort;
RMAN> startup nomount;
RMAN> set dbid = 1184749195;
RMAN> restore controlfile from '/oradata/DB1/rman/c-1184749195-20060626-02';
RMAN> alter database mount;
RMAN> restore database;
RMAN> recover database;
RMAN> alter database open resetlogs;
database opened

Database Point-In-Time-Recovery (PITR)

Also known as time-based incomplete recovery.
$ORACLE_HOME/bin/rman target / nocatalog
RMAN> shutdown abort;
RMAN> startup mount;
RMAN> restore database until time "to_date('09/03/07 13:00:00', 'MM/DD/YY HH24:MI:SS')";
RMAN> recover database until time "to_date('09/03/07 13:00:00', 'MM/DD/YY HH24:MI:SS')";
RMAN> alter database open resetlogs;
database opened
* Make sure you perform a full backup after this operation!
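Besides time-based recovery, RMAN also supports incomplete recovery to a specific SCN or log sequence. A sketch, where the SCN and sequence values are placeholders:

```sql
-- SCN-based incomplete recovery (1234567 is illustrative)
RMAN> restore database until scn 1234567;
RMAN> recover database until scn 1234567;
RMAN> alter database open resetlogs;

-- Log-sequence-based incomplete recovery (sequence 100 is illustrative)
RMAN> restore database until sequence 100 thread 1;
RMAN> recover database until sequence 100 thread 1;
RMAN> alter database open resetlogs;
```

As with time-based recovery, take a full backup after opening with RESETLOGS.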

Restore to Another System

Prerequisites
  • Ideally, ensure the destination system is configured exactly like the source:
    • Same OS version and patch level.
    • Same drives (C:, D:, S:, etc.).
    • CPU and RAM the same or better.
  • The same version of Oracle is installed on the target system as on the source.
  • Ensure the ORACLE_HOME and ORACLE_SID environment variables are set.
  • Ensure the listener is running.
  • Copy the RMAN backupset files to the destination system's rman directory.
  • If Windows:
    1. Create the password file.
      orapwd file=orapwDB1 password=mypassword
      Creates the file %ORACLE_HOME%\dbs\orapwDB1
    2. Copy %ORACLE_HOME%\dbs\orapwDB1 to %ORACLE_HOME%\database.
      In some restores of this kind Oracle may look for the password file here.
    3. Create or start the Windows database instance service.
      oradim -new -sid DB1 -intpwd mypassword -startmode MANUAL
      Creates the file: %ORACLE_HOME%\database\PWDDB1.ORA
  • Ensure the drive\path to the admin (adump, bdump, cdump, udump), data and redo directories are identical on the source and destination systems.
    Example:
    Admin Dump Directories
    mkdir C:\oracle\product\10.2.0\admin 
    mkdir C:\oracle\product\10.2.0\admin\DB1 
    mkdir C:\oracle\product\10.2.0\admin\DB1\adump 
    mkdir C:\oracle\product\10.2.0\admin\DB1\bdump 
    mkdir C:\oracle\product\10.2.0\admin\DB1\cdump 
    mkdir C:\oracle\product\10.2.0\admin\DB1\udump 
    
    Data Directories
    mkdir D:\oradata
    mkdir D:\oradata\DB1
    
    Redo and Archive Log Directories
    mkdir D:\oradata\DB1\recovery1
    mkdir D:\oradata\DB1\recovery2

Procedure
Restore SPFILE and Control File
%ORACLE_HOME%\bin\rman target / nocatalog 
RMAN> set dbid 161080442 
RMAN> startup nomount; 
      Creates the file: %ORACLE_HOME%\database\hc_db1.dat
RMAN> restore spfile from 'R:\rman\C-161080442-20080313-00'; 
      Creates the file: %ORACLE_HOME%\database\SPFILEDB1.ORA
RMAN> startup force nomount
RMAN> restore controlfile from 'R:\rman\C-161080442-20080313-00'; 
RMAN> shutdown immediate 
RMAN> exit 

Restore and Recover the Data
%ORACLE_HOME%\bin\rman target / nocatalog 
RMAN> startup mount; 
RMAN> restore database; 
      For a large database this step may take some time.
RMAN> recover database; 
      If you do not have/need the very last log(s), you can disregard error messages such as:
      ORA-00310: archived log contains sequence 100; sequence 101 required...
RMAN> alter database open resetlogs; 
      database opened
* Make sure you perform a full backup after this operation!

Tuesday, January 4, 2011

Applying CPU Patch


1) Capture the output of the commands below.
select name from v$database;
select * from registry$history;
select * from v$version;
select * from dba_registry_history;
select count(1) from dba_objects where status like 'I%';
SELECT OBJECT_NAME,OBJECT_TYPE,owner FROM DBA_OBJECTS WHERE STATUS= 'INVALID';
cd $ORACLE_HOME/OPatch
opatch version
opatch lsinventory

2) Shut down the database & stop the listener.

3) Check the inventory location (oraInst.loc) and make sure it points to the correct inventory.

4) Use the commands below to back up the Oracle home and the inventories.
cd /oracle10g/PRDRCD1/product/10.2

tar cvf - .|gzip -c > /oradb/PRDRCD1/oradata1/back_home/home_prdrcd1_`hostname`_`date +%Y%m%d`.tar.gz

cd /oracle10g/PRDRCD1/product/10.2/inventory

tar cvf - .|gzip -c > /oradb/PRDRCD1/oradata1/back_home/oraInvent_prdrcd1_`hostname`_`date +%Y%m%d`.tar.gz

cd /oracle10g/oraInventory

tar cvf - .|gzip -c > /oradb/PRDRCD1/oradata1/back_home/oracle10g_oraInventory_`hostname`_`date +%Y%m%d`.tar.gz

5)take the OPatch backup.
cd $ORACLE_HOME
cp -r OPatch OPatch_bak

6) Unzip p6880880_102000_SOLARIS64.zip (the latest OPatch) under $ORACLE_HOME.

7) go to CPU patch directory and apply the CPU patch
cd /opt/oracle/July2010/9655017
export PATH=$PATH:/usr/ccs/bin
export PATH=$ORACLE_HOME/OPatch:$PATH:.
opatch version
opatch napply -skip_subset -skip_duplicate

8) Run catbundle.sql.
cd $ORACLE_HOME/rdbms/admin
sqlplus / as sysdba
startup
@catbundle.sql cpu apply

9) Run utlrp.sql to recompile invalid objects.

10) Execute the statement below.
SELECT * FROM registry$history WHERE ID = '6452863';
If that statement returns no rows, execute the following commands:
cd $ORACLE_HOME/cpu/view_recompile
sqlplus / as sysdba
@recompile_precheck_jan2008cpu.sql
SQL> shutdown immediate
startup upgrade
@view_recompile_jan2008cpu.sql
shut immediate
startup
@utlrp.sql

11) Repeat step 1 to capture the post-patch information and compare it with the pre-patch output.

Monday, January 3, 2011

Oracle 11g features for DBA's

Oracle 11g DBA new features

  • Enhanced ILM - Information Lifecycle Management (ILM) has been around for decades, but Oracle has made a push to codify the approach in 11g.  Read more about Oracle 11g ILM here:  Inside Oracle 11g ILM - Information lifecycle management.
     
  • Table-level control of CBO statistics refresh threshold - (source Lutz Hartmann) When Oracle automatically enables statistics collection, the default "staleness" threshold of 10% can now be changed with the dbms_stats.set_table_prefs procedure:
exec dbms_stats.set_table_prefs('HR', 'EMPS', 'STALE_PERCENT', '15')
 There are three new arguments to the set_table_prefs procedure, designed to allow the DBA more control over the freshness of their statistics:
stale_percent - overrides the one-size-fits-all value of 10%
incremental - Incremental statistics gathering for partitions
publish - Allows the DBA to test new statistics before publishing them to the data dictionary
This is an important 11g new feature because the DBA can now control the quality of optimizer statistics at the table level, thereby improving the behavior of the SQL optimizer to always choose the “best” execution plan for any query.
  • File Group Repository - Oracle introduced an exciting new feature in 10gr2 dubbed the Oracle File Group Repository (FGR).  The FGR allows the DBA to define a logically-related group of files and build a version control infrastructure.  The Oracle file group repository was created to support Oracle Streams, and it mimics the functionality of an IBM mainframe generation data group (GDG), in that you can specify relative incarnations of the file sets (e.g. generation 0, generation -3).
     
  • Interval partitioning for tables - This is a new 11g partitioning scheme that automatically creates time-based partitions as new data is added. Source: Mark Rittman  This is a marvelous one ! You can now partition by date, one partition per month for example, with automatic partition creation. Source: Laurent Schneider
     
  • New load balancing utilities -There are several new load balancing utilities in 11g (first introduced in 10gr2):
  • Web server load balancing - The web cache component includes Apache extension to load-balance transactions to the least-highly-loaded Oracle HTTP server (OHS).
     
  • RAC instance load balancing - Starting in Oracle 10g release 2, Oracle JDBC and ODP.NET provide connection pool load balancing facilities through integration with the new “load balancing advisory” tool.  This replaces the more-cumbersome listener-based load balancing technique.
     
  • Automated Storage Load balancing - Oracle’s Automatic Storage Management (ASM) now enables a single storage pool to be shared by multiple databases for optimal load balancing.  Shared disk storage resources can alternatively be assigned to individual databases and easily moved from one database to another as processing requirements change.
     
  • Data Guard Load Balancing – Oracle Data Guard allows for load balancing between standby databases.
     
  • Listener Load Balancing - If advanced features such as load balancing and automatic failover are desired, there are optional sections of the listener.ora file that must be present
  • New table Data Type "simple_integer" - A new 11g datatype dubbed simple_integer is introduced. The simple_integer data type is always NOT NULL, wraps instead of overflows and is faster than PLS_INTEGER. Source: Lewis Cunningham
     
  • Improved table/index compression - Segment compression now works for all DML, not just direct-path loads, so you can create tables compressed and use them for regular OLTP work. Also supports column add/drop. Mark Rittman
     
  • Faster DML triggers - DML triggers are up to 25% faster. This especially impacts row level triggers doing updates against other tables (think Audit trigger). Source: Lewis Cunningham
     
  • Improved NFS data file management - Kevin Closson has some great notes on Oracle 11g improvement in Networked Attached Storage (NAS). "I’ve already blogged that 11g “might” have an Oracle-provided NFS client. Why is this? It’s because Oracle knows full well that taking dozens of commodity servers and saddling them up with multi-protocol connectivity is a mess.
     
  • Server-side connection pooling - In 11g, server-side connection pooling adds a layer on top of the shared server to enable faster [actually to bypass] session creation. Source: Laurent Schneider  Server-side connection pooling allows multiple Oracle clients to share a server-side pool of sessions (USERIDs must match). Clients can connect and disconnect (think PHP applications) at will without the cost of creating a new server session - shared server removes the process creation cost but not the session creation cost. Mark Rittman
     
  • RMAN UNDO bypass - RMAN backup can bypass undo. Undo tablespaces are getting huge, but contain lots of useless information. Now RMAN can bypass those types of tablespace. Great for exporting a tablespace from backup. Source: Laurent Schneider
     
  • Capture/replay database workloads - Sounds appealing. You can capture the workload in prod and apply it in development. Oracle is moving toward more workload-based optimization, adjusting SQL execution plans based on existing server-side stress.  This can be very useful for Oracle regression testing. Source: Laurent Schneider
     
  • Scalability Enhancements - The features in 11g focused on scalability and performance can be grouped into four areas: Scalable execution, scalable storage, scalable availability and scalable management. Mark Rittman
     
  • Virtual columns - Oracle 11g virtual columns are table columns that are actually functions ("create table t1 (c1 number, c2 number, c3 as (c1+c2) virtual)"), and similarly, virtual indexes can be based on functions.  Also see Oracle 11g function-based virtual columns. Source: Mark Rittman
     
  • REF partitioning - The 11g REF partitioning allows you to partition a table based on the values of columns within other tables. Source: Mark Rittman
     
  • A "super" object-oriented DDL keyword - This is used with OO Oracle when instantiating a derivative type (overloading), to refer to the superclass from whence the class was derived.
     
  • Oracle 11g XML data storage - Starting in 11g, you can store XML either as a CLOB or a binary data type, adding flexibility.  Oracle11g will support query mechanisms for XML including XQuery and SQL XML, emerging standards for querying XML data stored inside tables. 
     
  • New Trigger features - A new type of "compound" trigger will have sections for BEFORE, ROW and AFTER processing, very helpful for avoiding errors, and maintaining states between each section. 
     
  • Partitioning - partitioning by logical object and automated partition creation.
     
  • LOB's - New high-performance LOB features.
     
  • Automatic Diagnostic Repository (ADR) - When critical errors are detected, they automatically create an “incident”. Information relating to the incident is automatically captured, the DBA is notified and certain health checks are run automatically. This information can be packaged to be sent to Oracle support (see following). Source: Dr. Tim Hall  The ADR can be accessed via OEM or a command-line interface.
     
  • Hangman Utility – The Hang Manager (hangman) utility is a new 11g tool to detect database bottlenecks.  An extension of the dba_waiters and dba_blockers views, the hangman tables have a “hang chain” that allow the DBA to find the source of “hangs”, such as the “deadly embrace” where mutually blocking locks or latches hang a process.   In 11g, the hangman utility is installed on all RAC nodes by default, allowing for easier inter-node hang diagnostics.
     
  • Health Monitor (HM) utility - The Health Monitor utility is an automation of the dbms_repair corruption detection utility.  When a corruption-like problem happens, the HM utility checks for possible corruption within database blocks, redo log blocks, undo segments, or dictionary table blocks.
     
  • Incident Packaging Service (IPS) - This wraps up all information about an incident, requests further tests and information if necessary, and allows you to send the whole package to Oracle Support. Source: Dr. Tim Hall
     
  • Feature Based Patching - All one-off patches will be classified as to which feature they affect. This allows you to easily identify which patches are necessary for the features you are using. EM will allow you to subscribe to a feature based patching service, so EM automatically scans for available patches for the features you are using. Source: Dr. Tim Hall
     
  • New Oracle11g Advisors - New 11g Oracle Streams Performance Advisor and Partitioning Advisor. Source: Mark Rittman
     
  • Enhanced Read only tables - A table can now be marked read-only with ALTER TABLE ... READ ONLY (and back with READ WRITE).
     
  • Table trigger firing order - Oracle 11g PL/SQL allows you to specify trigger firing order (the FOLLOWS clause).
     
  • Invisible indexes - Rich Niemiec claims that the new 11g "invisible indexes" are a great new feature.  It appears that the invisible indexes will still exist, that they can just be marked as "invisible" so that they cannot be considered by the SQL optimizer. With the overhead of maintaining the index intact, I don't see why this is very useful.  Also see 11g Function-based columns.
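The invisible index syntax, for reference (the table and index names are hypothetical):

```sql
-- Create an index the optimizer will not consider by default
SQL> create index emp_sal_ix on emp(sal) invisible;

-- Make it visible again, or let one session test it first
SQL> alter index emp_sal_ix visible;
SQL> alter session set optimizer_use_invisible_indexes = true;
```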
     

Saturday, December 11, 2010

Flash/Fast Recovery Area (FRA)



The flash recovery area is one of the most powerful tools introduced in Oracle 10g, playing a vital role in database backup & recovery operations. From 11g Release 2 onward, the flash recovery area is called the fast recovery area.

The Flash Recovery Area can be defined as a single, centralized, unified storage area that keeps all the database backup & recovery related files and supports those activities in Oracle databases.

Unified Backup Files Storage, all backup components can be stored in one consolidated spot. The flash recovery area is managed via Oracle Managed Files (OMF), and it can utilize disk resources managed by Automatic Storage Management (ASM). Flash recovery area can be configured for use by multiple database instances.

Automated Disk-Based Backup and Recovery, once the flash recovery area is configured, all backup components are managed automatically by Oracle.

Automatic Deletion of Backup Components, once backup components have been successfully created, RMAN can be configured to automatically clean up files that are no longer needed (thus reducing risk of insufficient disk space for backups).

Disk Cache for Tape Copies, if your disaster recovery (DR) plan involves backing up to alternate media, the flash recovery area can act as a disk cache area for those backup components that are eventually copied to tape.

Flashback Logs, the FRA is also used to store and manage flashback logs, which are used during flashback backup operations to quickly restore a database to a prior desired state.

You can designate the FRA as the location for one of the control files and redo log members to limit the exposure in case of disk failure.

In case of a media failure or a logical error, the flash recovery area is used to retrieve all the files needed to recover the database.

The following entities can serve as the FRA:

File System:
1. A single directory
2. An entire file system

Raw Devices:
1. Automatic storage management (ASM)

FRA Components

The flash/fast recovery area can contain the following:
  • Control files: During database creation, a copy of the control file is created in the flash recovery area.
  • Online redologs: Online redologs can be kept in FRA.
  • Archived log files: During the configuration of the FRA, the LOG_ARCHIVE_DEST_10 parameter in init.ora file is automatically set to the flash recovery area location. Archived log files are created by ARCn processes in the flash recovery area location and the location defined by LOG_ARCHIVE_DEST_n.
  • Flashback logs: Flashback logs are kept in the flash recovery area when flashback database is enabled.
  • Control file and SPFILE backups: The flash recovery area also keeps the control file and SPFILE backups, which are automatically generated by Recovery Manager (RMAN) if RMAN has been configured for control file autobackup.
  • Datafile copies: The flash recovery area also keeps the datafile copies.
  • RMAN backup sets: The default destination of backup sets and image copies generated by RMAN is the flash recovery area.
Notes:
  • The FRA can be shared among databases in order to optimize the usage of disk space for database recovery operations.
  • Before any backup and recovery activity can take place, the Flash Recovery Area must be set up. The flash recovery area is a specific area of disk storage that is set aside exclusively for retention of backup components such as datafile image copies, archived redo logs, and control file auto backup copies.
  • RMAN also transfers the restored archive files from tape to the flash recovery area in order to perform recovery operations.
Configuring FRA

Following are the three initialization parameters that should be defined in order to set up the flash recovery area:
  • DB_RECOVERY_FILE_DEST_SIZE
  • DB_RECOVERY_FILE_DEST
  • DB_FLASHBACK_RETENTION_TARGET
DB_RECOVERY_FILE_DEST_SIZE specifies the total size of all files that can be stored in the Flash Recovery Area. The size of the flash recovery area should be large enough to hold a copy of all data files, all incremental backups, online redo logs, archived redo logs not yet backed up to tape, control files, and control file autobackups.

SQL> ALTER SYSTEM SET db_recovery_file_dest_size = 10g SCOPE = BOTH;

DB_RECOVERY_FILE_DEST parameter is to specify the physical location where all the flash recovery files are to be stored. Oracle recommends that this be a separate location from the datafiles, control files, and redo logs.


SQL> ALTER SYSTEM SET db_recovery_file_dest = '/OFR1' SCOPE = BOTH;

If the database is using the Automatic Storage Management (ASM) feature, then the shared disk area that ASM manages can be targeted for the Flash Recovery Area.

SQL> ALTER SYSTEM SET db_recovery_file_dest = '+dgroup1'
SCOPE = BOTH;

DB_RECOVERY_FILE_DEST_SIZE and DB_RECOVERY_FILE_DEST are dynamic parameters, i.e. the flash recovery area can be made usable without shutting down and restarting the database instance.


SQL> ALTER SYSTEM SET db_flashback_retention_target = 1440 SCOPE = BOTH;
Notes:
  • DB_RECOVERY_FILE_DEST_SIZE must be defined before DB_RECOVERY_FILE_DEST in order to define the size of the flash recovery area.
  • If the value of the DB_RECOVERY_FILE_DEST parameter is cleared, the flash recovery area is disabled.
  • The DB_RECOVERY_FILE_DEST_SIZE parameter cannot be cleared before the DB_RECOVERY_FILE_DEST parameter.
The flash recovery area can be created and maintained using Oracle Enterprise Manager Database Control.

Enabling Flashback
SQL> alter database flashback on;

The database must be in archive log mode to enable flashback.
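Putting the prerequisite and the command together, a typical sequence looks like this (a sketch; archivelog mode is switched on while mounted):

```sql
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;      -- flashback requires archivelog mode
SQL> alter database flashback on;
SQL> alter database open;
```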
Configuring Online Redolog Creation in Flash Recovery Area
To store online redologs in FRA, you have to set DB_CREATE_ONLINE_LOG_DEST_1 (OMF init parameter) to FRA location and create the online log groups/members.

The initialization parameters that determine where online redolog files are created are DB_CREATE_ONLINE_LOG_DEST_n, DB_RECOVERY_FILE_DEST and DB_CREATE_FILE_DEST.
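A sketch of the approach described above, reusing the '/OFR1' FRA location from the earlier example (the group number and size are illustrative):

```sql
-- Point OMF online log creation at the FRA location, then add a group;
-- the member filename is generated automatically by OMF
SQL> alter system set db_create_online_log_dest_1 = '/OFR1' scope = both;
SQL> alter database add logfile group 4 size 50m;
```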

Configuring Control File Creation in Flash Recovery Area
To store control file in FRA, you have to set CONTROL_FILES parameter to FRA location.
The initialization parameters CONTROL_FILES, DB_CREATE_ONLINE_LOG_DEST_n, DB_RECOVERY_FILE_DEST and DB_CREATE_FILE_DEST all interact to determine the location where the database control files are created.
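For example (a sketch; the OMF-style filename shown is hypothetical, and use of an spfile is assumed):

```sql
-- CONTROL_FILES is static: change it in the spfile, then restart
SQL> alter system set control_files =
       '/oradata/DB1/control01.ctl', '/OFR1/DB1/controlfile/o1_mf_01.ctl'
       scope = spfile;
SQL> shutdown immediate;
-- copy the existing control file to the new location at the OS level, then:
SQL> startup;
```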

Configuring Archived Redolog Creation in Flash Recovery Area
If Archive log mode is enabled and LOG_ARCHIVE_DEST & DB_RECOVERY_FILE_DEST are not set, then the archive logs will be generated in $ORACLE_HOME/dbs directory.

If LOG_ARCHIVE_DEST is set & DB_RECOVERY_FILE_DEST is not set, then the archive logs will be generated at LOG_ARCHIVE_DEST path.

If you enable FRA (DB_RECOVERY_FILE_DEST is set), then the archive log files will be generated in FRA, and it will ignore the LOG_ARCHIVE_DEST and LOG_ARCHIVE_FORMAT i.e. FRA will follow its own naming convention. The generated filenames for the archived redologs in the flash recovery area are Oracle Managed Filenames and are not determined by LOG_ARCHIVE_FORMAT.

It is recommended to use flash recovery area as an archived log location because the archived logs are automatically managed by the database. Whatever archiving scheme you choose, it is always advisable to create multiple copies of archived logs.
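To route archiving explicitly to the FRA, one of the LOG_ARCHIVE_DEST_n parameters can point at it. A sketch:

```sql
SQL> alter system set log_archive_dest_1 =
       'location=USE_DB_RECOVERY_FILE_DEST' scope = both;
SQL> archive log list;   -- verify the archive destination
```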

You can always define a different location for archive redo logs, if you use a different location, then you can’t just erase the values of the parameters for LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST in order to specify the location of the FRA.

To place your log files somewhere else other than the FRA you should use a different parameter to specify the archived redo log locations: use LOG_ARCHIVE_DEST_1 instead of LOG_ARCHIVE_DEST.

Suppose log_archive_dest was set to '+arc_disk3'; you can use LOG_ARCHIVE_DEST_1 to specify the same location for the archived redologs.
Query the parameter to verify its current value:
SQL> show parameter log_archive_dest
SQL> show parameter log_archive_dest_1
SQL> alter system set log_archive_dest_1='location=+arc_disk3' scope=both;
SQL> alter system set log_archive_dest='' scope=both;

Managing Flash/Fast Recovery Area


The DB_RECOVERY_FILE_DEST_SIZE parameter limits the space available for the flash recovery area. When that space is not enough for all flash recovery files, Oracle keeps track of which files are no longer required on disk. These unneeded files are then deleted to relieve the space pressure in the flash recovery area.

Whenever a file is deleted from the flash recovery area, a message is written in the alert log.

Messages are also written to the alert log in other circumstances:
1. When no files can be deleted.
2. When the used space in the FRA reaches 85 percent (a warning).
3. When the used space in the FRA reaches 97 percent (a critical warning).
The warning messages issued can be viewed in the DBA_OUTSTANDING_ALERTS data dictionary view and are also available in the OEM Database Control main window.
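The space-pressure warnings can also be checked from SQL; a sketch (the OBJECT_TYPE filter value is an assumption):

```sql
SQL> select reason, object_type, suggested_action
       from dba_outstanding_alerts
      where object_type = 'RECOVERY AREA';
```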

To recover from these alerts, a number of steps can be taken as remedial options:
1. Adjust the retention policy to keep fewer copies of data files.
2. Reduce the number of days in the recovery window.
If the retention policy is appropriate as it stands, the steps taken to recover from the alerts are instead:
  • Add more disk space.
  • Back up some of the flash recovery files to another destination, such as another disk or a tape drive.

RMAN files creation in the Flash Recovery Area

This section describes RMAN commands or implicit actions (such as control file auto backup) that can create files in the flash recovery area, and how to control whether a specific command creates files there or in some other destination. The assumption in all cases is that a flash recovery area has already been configured for your database. The commands are:

· BACKUP
If you do not specify a FORMAT option to the BACKUP command, and have not configured a FORMAT option for disk backups, RMAN creates backup pieces and image copies in the flash recovery area, with names in Oracle Managed Files name format.

· CONTROLFILE AUTOBACKUP
RMAN can create control file autobackups in the flash recovery area. Use the RMAN command CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK CLEAR to clear any configured format option for the control file autobackup location on disk. Control file autobackups will be placed in the flash recovery area when no other destination is configured.
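A sketch of the configuration described above:

```sql
RMAN> configure controlfile autobackup on;
-- clear any disk format so autobackups default to the flash recovery area:
RMAN> configure controlfile autobackup format for device type disk clear;
RMAN> show controlfile autobackup format;
```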

· RESTORE ARCHIVELOG
Explicitly or implicitly set one of the LOG_ARCHIVE_DEST_n parameters to 'LOCATION=USE_DB_RECOVERY_FILE_DEST'. If you do not specify SET ARCHIVELOG DESTINATION to override this behavior, then restored archived redo log files will be stored in the flash recovery area.

· RECOVER DATABASE or TABLESPACE, BLOCKRECOVER, and FLASHBACK DATABASE
These commands restore archived redo logs from backup for use during media recovery, as required by the command. RMAN restores any redo log files needed during these operations to the flash recovery area, and deletes them once they are applied during media recovery.

To direct the restored archived redo logs to the flash recovery area, set one of the LOG_ARCHIVE_DEST_n parameters to 'LOCATION=USE_DB_RECOVERY_FILE_DEST', and make sure you are not using SET ARCHIVELOG DESTINATION to direct restored archived logs to some other destination.

You can use RMAN to remove old archivelogs:
$ rman target=/
RMAN> delete noprompt archivelog all;
RMAN> delete noprompt backup of database;
RMAN> delete noprompt copy of database;

Resolving full Flash Recovery Area

You have a number of choices on how to resolve a full flash/fast recovery area when there are no files eligible for deletion:
  • Make more disk space available, and increase DB_RECOVERY_FILE_DEST_SIZE to reflect the new space.
  • Move backups from the flash recovery area to a tertiary device such as tape. One convenient way to back up all of your flash recovery area files to tape at once is the BACKUP RECOVERY AREA command.
After you transfer backups from the flash recovery area to tape, you can resolve the full recovery area condition by deleting files from the flash recovery area, using forms of the RMAN DELETE command.

Note:
  • Flashback logs cannot be backed up outside the flash recovery area. Therefore, in a BACKUP RECOVERY AREA operation the flashback logs are not backed up to tape.
  • Flashback logs are deleted automatically to satisfy the need for space for other files in the flash recovery area. However, a guaranteed restore point can force the retention of flashback logs required to perform Flashback Database to the restore point SCN.
  • Delete unnecessary files from the flash recovery area using the RMAN DELETE command. (Note that if you use host operating system commands to delete files, then the database will not be aware of the resulting free space. You can run the RMAN CROSSCHECK command to have RMAN re-check the contents of the flash recovery area and identify expired files, and then use the DELETE EXPIRED command to remove missing files from the RMAN repository.)
You may also need to consider changing your backup retention policy and, if using Data Guard, consider changing your archivelog deletion policy.
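Both policies are set in RMAN; a sketch (the window length is illustrative, and the standby deletion policy assumes a Data Guard configuration):

```sql
RMAN> configure retention policy to recovery window of 7 days;
-- with Data Guard, allow archivelogs to be deleted once applied on the standby:
RMAN> configure archivelog deletion policy to applied on standby;
```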

Changing the Flash Recovery Area to a new location

If you need to move the flash recovery area of your database to a new location, you can follow this procedure:

1. Change the DB_RECOVERY_FILE_DEST initialization parameter.
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST='+disk1' SCOPE=BOTH SID='*';
2. After you change this parameter, all new flash recovery area files will be created in the new location.
3. The permanent files (control files and online redo log files), flashback logs and transient files can be left in the old flash recovery area location. The database will delete the transient files from the old flash recovery area location as they become eligible for deletion.


In Oracle Database 11g, a new feature was introduced: Flashback Data Archive. It keeps history for designated tables in a dedicated archive rather than relying on undo. Flashback data archives can be defined on any table/tablespace, are written by a dedicated background process called FBDA so there is less impact on performance, and can be purged automatically at regular intervals.
Related views
V$RECOVERY_FILE_DEST
V$FLASH_RECOVERY_AREA_USAGE
DBA_OUTSTANDING_ALERTS
V$FLASHBACK_DATABASE_LOGFILE
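The first two views above show FRA sizing and per-file-type usage; for example:

```sql
-- Overall quota, consumption and reclaimable space
SQL> select name, space_limit, space_used, space_reclaimable, number_of_files
       from v$recovery_file_dest;

-- Breakdown by file type (archived logs, backup pieces, flashback logs, ...)
SQL> select file_type, percent_space_used, percent_space_reclaimable
       from v$flash_recovery_area_usage;
```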

Automatic Workload Repository (AWR)

AWR collects and stores database statistics relating to problem detection and tuning. AWR is a replacement for the Statspack utility, which helped gather database performance statistics. AWR generates snapshots of key performance data, such as system and session statistics, segment-usage statistics, time model statistics and high-load statistics, and stores them in the SYSAUX tablespace.

AWR provides statistics in 2 formats:
·         Temporary – in-memory collection of statistics in the SGA, accessible via the V$ views.
·         Persistent – performance data in the form of regular AWR snapshots, which you access via the DBA_HIST_* views.
The MMON process is responsible for collecting the statistics in the SGA and saving them to the SYSAUX tablespace.

The AWR is used to collect performance statistics including:

  • Wait events used to identify performance problems.
  • Time model statistics indicating the amount of DB time associated with a process from the V$SESS_TIME_MODEL and V$SYS_TIME_MODEL views.
  • Active Session History (ASH) statistics from the V$ACTIVE_SESSION_HISTORY view.
  • Some system and session statistics from the V$SYSSTAT and V$SESSTAT views.
  • Object usage statistics.
  • Resource intensive SQL statements.
Some of the important views and tables relating to AWR

The following workload repository views are available:
  • V$ACTIVE_SESSION_HISTORY - Displays the active session history (ASH) sampled every second.
  • V$METRIC - Displays metric information.
  • V$METRICNAME - Displays the metrics associated with each metric group.
  • V$METRIC_HISTORY - Displays historical metrics.
  • V$METRICGROUP - Displays all metrics groups.
  • V$SYS_TIME_MODEL - Displays time model statistics (DB time, java execution time, PL/SQL execution time, etc.).
  • V$OSSTAT - Displays operating system statistics (busy ticks, idle ticks, etc.).
  • V$SERVICE_STATS - Displays service-level statistics (DB CPU, application wait time, user commits, etc.).
  • V$SESSTAT - Displays session statistics.
  • V$SYSSTAT - Displays system statistics.
  • DBA_HIST_ACTIVE_SESS_HISTORY - Displays the history contents of the active session history.
  • DBA_HIST_BASELINE - Displays baseline information.
  • DBA_HIST_DATABASE_INSTANCE - Displays database environment information.
  • DBA_HIST_SNAPSHOT - Displays snapshot information.
  • DBA_HIST_SQL_PLAN - Displays SQL execution plans.
  • DBA_HIST_WR_CONTROL - Displays AWR settings.
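A typical first step with these views is to list the snapshots currently held in the repository, for example:

SELECT snap_id, begin_interval_time, end_interval_time
FROM   dba_hist_snapshot
ORDER  BY snap_id;

The snap ids returned here are the values you later feed to the AWR report scripts and to DROP_SNAPSHOT_RANGE.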



Generating, Modifying And Dropping Snapshots

Creating Snapshots

Extra snapshots, beyond the automatic hourly ones, can be taken manually using:
EXEC DBMS_WORKLOAD_REPOSITORY.create_snapshot;
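CREATE_SNAPSHOT is also overloaded as a function that returns the id of the new snapshot, which is handy in scripts (a small sketch):

SET SERVEROUTPUT ON
DECLARE
  l_snap_id NUMBER;
BEGIN
  l_snap_id := DBMS_WORKLOAD_REPOSITORY.create_snapshot;
  DBMS_OUTPUT.put_line('Created snapshot ' || l_snap_id);
END;
/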

Modifying an existing snapshot

By default snapshots of the relevant data are taken every hour and retained for 7 days. The default values for these settings can be altered using:
BEGIN
  DBMS_WORKLOAD_REPOSITORY.modify_snapshot_settings(
    retention => 43200,        -- Minutes (= 30 Days). Current value retained if NULL.
    interval  => 30);          -- Minutes. Current value retained if NULL.
END;
/
The changes to the settings are reflected in the DBA_HIST_WR_CONTROL view. Typically the retention period should capture at least one complete workload cycle; if your system has monthly archive and load jobs, a one-month retention would be more useful than the default 7 days. An interval of "0" switches off snapshot collection, which in turn stops much of the self-tuning functionality, so this is not recommended. Automatic collection is only possible if the STATISTICS_LEVEL parameter is set to TYPICAL or ALL. If it is set to BASIC, manual snapshots can still be taken, but they will be missing some statistics.
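The current interval and retention can be verified with:

SELECT snap_interval, retention
FROM   dba_hist_wr_control;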

Dropping Snapshots

BEGIN
  DBMS_WORKLOAD_REPOSITORY.drop_snapshot_range (
    low_snap_id  => 22,
    high_snap_id => 32);
END;
/

Workload Repository Reports

Oracle provides two scripts to produce workload repository reports (awrrpt.sql and awrrpti.sql). They are similar in format to the Statspack reports and offer a choice of HTML or plain-text output. The two scripts produce essentially the same report, but awrrpti.sql allows you to select a single instance. The reports can be generated as follows:

@$ORACLE_HOME/rdbms/admin/awrrpt.sql
@$ORACLE_HOME/rdbms/admin/awrrpti.sql

The scripts prompt you for the report format (html or text), the start snapshot id, the end snapshot id and the report filename. The resulting report can be opened in a browser or text editor accordingly.
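If you prefer not to answer the interactive prompts, the same text report can be generated directly from the DBMS_WORKLOAD_REPOSITORY table function (a sketch; the dbid and snapshot ids below are illustrative values taken from the sample report and must be replaced with your own, e.g. from V$DATABASE and DBA_HIST_SNAPSHOT):

SELECT output
FROM   TABLE(DBMS_WORKLOAD_REPOSITORY.awr_report_text(
         37933856,   -- dbid (illustrative; query V$DATABASE.DBID)
         1,          -- instance number
         2900,       -- begin snap id
         2911));     -- end snap id

AWR_REPORT_HTML takes the same arguments and returns the HTML version of the report.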

Sample output for AWR reports

OUTPUT
-----------------------------------------------------------------
WORKLOAD REPOSITORY report for

DB Name         DB Id    Instance     Inst Num Release     Cluster Host
------------ ----------- ------------ -------- ----------- ------- ------
DBDABR          37933856 dbdabr              1 10.1.0.2.0  NO      Host1

              Snap Id      Snap Time      Sessions Curs/Sess
            --------- ------------------- -------- ---------
Begin Snap:      2900 19-Aug-04 11:00:29        18       5.2
  End Snap:      2911 19-Aug-04 22:00:16        18       4.6
   Elapsed:              659.78 (mins)
   DB Time:               10.08 (mins)

Cache Sizes (end)
~~~~~~~~~~~~~~~~~
Buffer Cache:      48M      Std Block Size:         8K
Shared Pool Size:  56M          Log Buffer:       256K

Load Profile
~~~~~~~~~~~~           Per Second       Per Transaction
                   --------------       ---------------
        Redo size:       1,766.20             18,526.31
    Logical reads:          39.21                411.30
    Block changes:          11.11                116.54
   Physical reads:           0.38                  3.95
  Physical writes:           0.38                  3.96
       User calls:           0.06                  0.64
           Parses:           2.04                 21.37
      Hard parses:           0.14                  1.45
            Sorts:           1.02                 10.72
           Logons:           0.02                  0.21
         Executes:           4.19                 43.91

About Load Profile in AWR Reports

This section gives a glimpse of the database workload activity that occurred within the snapshot interval. For example, the load profile below shows that an average transaction generates about 18K of redo data, and the database produces about 1.8K redo per second.

Load Profile
~~~~~~~~~~~~           Per Second       Per Transaction
                   --------------       ---------------
        Redo size:       1,766.20             18,526.31
    Logical reads:          39.21                411.30
    Block changes:          11.11                116.54
   Physical reads:           0.38                  3.95
  Physical writes:           0.38                  3.96
       User calls:           0.06                  0.64
           Parses:           2.04                 21.37
      Hard parses:           0.14                  1.45
            Sorts:           1.02                 10.72
           Logons:           0.02                  0.21
         Executes:           4.19                 43.91

The above statistics give an idea of the workload the database experienced during the period observed. However, they do not by themselves indicate what in the database is not working properly. For example, a high number of physical reads per second does not necessarily mean that the SQL is poorly tuned; perhaps this AWR report was built for a time period when large DSS batch jobs ran on the database. This workload information is intended to be used along with the other sections of the AWR report in order to learn the details about the nature of the applications running on the system. The goal is to get a correct picture of database performance.

The following list includes detailed descriptions for particular statistics:
Redo size: The amount of redo generated during this report.
Logical Reads: Calculated as (Consistent Gets + DB Block Gets = Logical Reads).
Block changes: The number of blocks modified during the sample interval.
Physical Reads: The number of requests for a block that caused a physical I/O operation.
Physical Writes: Number of physical writes performed.
User Calls: Number of calls (such as logins, parses and executes) issued by user processes.
Parses: The total of all parses; both hard and soft.
Hard Parses: The parses requiring a completely new parse of the SQL statement. These consume both latches and shared pool area.
Soft Parses: Soft parses are not listed but derived by subtracting the hard parses from parses. A soft parse reuses a previous hard parse; hence it consumes far fewer resources.

Sorts, Logons, Executes and Transactions: All self-explanatory.

Parse activity statistics should be checked carefully because they can immediately indicate a problem within the application. For example, if a database has been running for several days with a fixed set of applications, it should, over time, have parsed most of the SQL issued by those applications, and these statistics should be near zero.
If the Soft Parses statistic, or especially the Hard Parses statistic, shows high values, this should be taken as an indication that the applications make little use of bind variables and produce large numbers of unique SQL statements. However, if the database serves development purposes, high values of these statistics are not necessarily bad.
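The soft parse ratio implied by these counters can be computed at any time from V$SYSSTAT (a minimal sketch using the standard statistic names):

SELECT ROUND(100 * (1 - hard.value / NULLIF(total.value, 0)), 2) AS soft_parse_pct
FROM   v$sysstat total, v$sysstat hard
WHERE  total.name = 'parse count (total)'
AND    hard.name  = 'parse count (hard)';

On an OLTP system using bind variables properly, this percentage is typically well above 90.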

The following information is also available in the workload section:
  % Blocks changed per Read:    4.85    Recursive Call %:    89.89
 Rollback per transaction %:    8.56       Rows per Sort:    13.39

The % Blocks changed per Read statistic indicates that only 4.85 percent of all blocks retrieved were changed, and in this example the Recursive Call % statistic is extremely high, at about 90 percent. However, this does not necessarily mean that nearly all SQL statements executed by the database stem from parsing activity, data dictionary management, space management, and so on.
Remember, Oracle considers all SQL statements executed within PL/SQL programs to be recursive.  If there are applications making use of a large number of stored PL/SQL programs, this is good for performance. However, applications that do not widely use PL/SQL may indicate the need to further investigate the cause of this high recursive activity.
It is also useful to check the value of the Rollback per transaction % statistic. This statistic reports the percent of transactions rolled back. In a production system, this value should be low. If the output indicates a high percentage of transactions rolled back, the database expends a considerable amount of work to roll back changes made.  This should be further investigated in order to see why the applications roll back so often.
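The current rollback percentage can also be checked outside the report (a sketch using the standard V$SYSSTAT counters):

SELECT ROUND(100 * rb.value / NULLIF(cm.value + rb.value, 0), 2) AS rollback_per_tx_pct
FROM   v$sysstat cm, v$sysstat rb
WHERE  cm.name = 'user commits'
AND    rb.name = 'user rollbacks';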

About Cache Sizes in AWR Reports

This report section shows the cache sizes at the end of the snapshot period. The information is valid only for the end snapshot of the interval observed, because Automatic Memory Management (or the DBA, manually) may have resized the caches during the snapshot interval.

Cache Sizes (end)
~~~~~~~~~~~~~~~~~
Buffer Cache:      48M      Std Block Size:         8K
Shared Pool Size:  56M          Log Buffer:       256K
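The current sizes of these components can be confirmed between snapshots with V$SGAINFO (available in 10g and later):

SELECT name, ROUND(bytes/1024/1024) AS mb, resizeable
FROM   v$sgainfo;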

One interesting feature in AWR is the use of begin values and end values for those parameters that are changed dynamically or via the Automatic Memory Management (AMM) facility in Oracle 10g:

PARAMETER NAME         BEGIN VALUE           END VALUE (IF DIFFERENT)
--------------------   -------------------   ------------------------
background_dump_dest   u01/app/DBDABR/DUMP
compatible             10.1.0.2.0
db_block_size          8192
db_cache_size          364904448             455210668


Differences between AWR and STATSPACK
http://vinu3012dba.blogspot.com/2010/08/difference-or-advantage-between-awr-and.html