Saturday, October 8, 2011

How to read AWR reports


Let's start analysing AWR.


This is an example of an AWR report. I have tried to discuss the findings of a sample AWR report I generated.


Report Header

This section is self explanatory. It provides the database name, DB id, instance name (and instance number if RAC), platform information and the snap interval (the database workload window under review).
This report is for instance number 2 of my RAC environment. If you are analysing a RAC environment, you need to do the analysis separately for each instance in the RAC to see whether all the instances are balanced the way they should be.
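As a side note, if you still need to generate the report itself, the standard scripts shipped with the database can be run from SQL*Plus; on RAC, awrrpti.sql additionally prompts for the instance number:

SQL> @?/rdbms/admin/awrrpt.sql   -- AWR report for the current instance
SQL> @?/rdbms/admin/awrrpti.sql  -- AWR report for a chosen instance (RAC)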



DB Name  DB Id       Instance  Inst num  Startup Time     Release     RAC
TestRAC  3626203793  TestRac2  2         17-Aug-11 19:08  11.1.0.6.0  YES


Host Name  Platform              CPUs  Cores  Sockets  Memory (GB)
TestRAC    Linux 64-bit for AMD  8     8      2        31.44



             Snap Id  Snap Time           Sessions  Cursors/Session
Begin Snap:  28566    27-Sep-11 01:00:21  130       4.8
End Snap:    28567    27-Sep-11 02:00:43  135       4.5
Elapsed:     60.35 (mins)
DB Time:     15.07 (mins)



                   Begin   End
Buffer Cache:      5,888M  5,888M    Std Block Size:  8K
Shared Pool Size:  8,704M  8,704M    Log Buffer:      138,328K




Load Profile
This section provides a snapshot of the database workload that occurred during the snapshot interval.



                   Per Second   Per Transaction  Per Exec  Per Call
DB Time(s):        0.3          0.1              0.00      0.00
DB CPU(s):         0.3          0.1              0.00      0.00
Redo size:         48,933.6     19,916.2
Logical reads:     1,124.4      457.7
Block changes:     195.9        79.7
Physical reads:    80.5         32.8
Physical writes:   4.3          1.8
User calls:        141.4        57.6
Parses:            123.2        50.2
Hard parses:       2.2          0.9
W/A MB processed:  1,940,807.0  789,918.9
Logons:            4.3          1.7
Executes:          127.6        51.9
Rollbacks:         0.0          0.0
Transactions:      2.5







DB time(s):
The amount of time Oracle has spent performing database user calls. Note that it does not include background processes.
DB CPU(s):
The amount of CPU time spent on user calls. Like DB time, it does not include background processes. The value is in microseconds.
Redo size:
The amount of redo generated. For example, the table above shows that an average transaction generates about 19,000 bytes of redo, and around 48,000 bytes of redo are generated per second.
Logical reads:
Logical reads = Consistent Gets + DB Block Gets (see the query sketch after this list).
Block Changes:
The number of blocks modified during the sample interval.
Physical reads:
The number of block requests that caused an I/O operation.
Physical writes:
The number of physical writes performed.
User calls:
The number of user calls generated.
Parses:
The total of all parses, both hard and soft.
Hard Parses:
Parses requiring a completely new parse of the SQL statement. These consume both latches and shared pool area.
Soft Parses:
Soft parses are not listed but can be derived by subtracting hard parses from total parses. A soft parse reuses a previous hard parse and hence consumes far fewer resources.
Sorts:
The number of sorts performed.
Logons:
The number of logons during the interval.
Executes:
The number of SQL executions.
Transactions:
The number of transactions per second.
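As a quick cross-check of these load-profile figures, here is a minimal sketch that pulls the raw counters behind some of them from v$sysstat. Note that v$sysstat values are cumulative since instance startup, whereas AWR reports the delta between the two snapshots:

select name, value
from v$sysstat
where name in ('db block gets', 'consistent gets',
               'physical reads', 'redo size');
-- Logical reads = 'db block gets' + 'consistent gets'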

Instance Efficiency Percentages (Target 100%)

All instance efficiency percentages should be close to 100%.

Buffer Nowait %:              99.99   Redo NoWait %:     100.00
Buffer Hit %:                 93.06   In-memory Sort %:  100.00
Library Hit %:                98.67   Soft Parse %:      98.20
Execute to Parse %:           3.40    Latch Hit %:       99.98
Parse CPU to Parse Elapsd %:  0.01    % Non-Parse CPU:   96.21


Execute to Parse % and Parse CPU to Parse Elapsd %:

If these values are low, as in the case above (3.40 and 0.01), there could be a parsing problem. You may need to look at bind variable issues or shared pool sizing issues.
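To dig into a suspected parsing problem outside of AWR, you can compare the cumulative parse and execute counters yourself; a minimal sketch (values are since instance startup):

select name, value
from v$sysstat
where name in ('parse count (total)', 'parse count (hard)', 'execute count');
-- Execute to Parse % = 100 * (1 - parse count (total) / execute count)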

Redo NoWait%:

Usually this statistic is 99% or greater.

In-memory Sort %:
This tells you how well sized your sort_area_size, hash_area_size or pga_aggregate_target are. If these sort, hash and PGA parameters are not sized adequately, your in-memory sort percentage will go down.

Soft parse %:
A soft parse rate of 98.20% means that about 1.8% (100 - soft parse %) of parses are hard parses. You might want to look at your bind variable usage.

Latch Hit %:
Should be close to 100%.

% Non-Parse CPU:
Most of our statements were already parsed, so we were not doing a lot of reparsing. Reparsing is CPU-intensive and should be avoided.

Shared Pool Statistics





                            Begin  End
Memory Usage %:             73.86  75.42
% SQL with executions>1:    92.61  93.44
% Memory for SQL w/exec>1:  94.33  94.98





Memory Usage % is the shared pool usage. Here we have used 73.86 per cent of our shared pool, and out of that almost 94 per cent is being re-used. If Memory Usage % is very high, say 90%, it could mean your shared pool is too small; if the percentage is around 50, it could mean your shared pool is too large.
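You can watch the same figure live between snapshots by checking the shared pool free memory; a minimal sketch against v$sgastat:

select pool, name, round(bytes/1048576, 2) free_mb
from v$sgastat
where pool = 'shared pool'
and name = 'free memory';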


Top 5 Timed Foreground Events


Event                    Waits    Time(s)  Avg wait (ms)  % DB time  Wait Class
DB CPU                            1,019                   112.73
log file sync            25,642   43       2              4.73       Commit
db file scattered read   3,064    40       13             4.43       User I/O
library cache pin        136,267  27       0              2.98       Concurrency
db file sequential read  7,608    24       3              2.71       User I/O


It is critical to look into this section. If the timed_statistics parameter is turned off, the Time(s) column will not appear. Wait analysis should be done with respect to Time(s): there could be millions of waits, but if they add up to only a second or so, who cares? Time is therefore the most important component.
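The same time-based ranking of waits is available at any moment from v$system_event; a minimal sketch (time_waited is in centiseconds and cumulative since startup, so it is not snapshot-scoped like AWR):

select event, total_waits, time_waited, average_wait
from v$system_event
where wait_class <> 'Idle'
order by time_waited desc;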


There are several different types of waits, and you may see different ones in your AWR report. Let's discuss the most common waits.



  • db file type waits:



db file sequential read:
This wait comes from the physical side of the database. It is related to memory starvation and to non-selective index use. A sequential read is an index read followed by a table read, because the index lookup tells Oracle exactly which block to go to.
db file scattered read:
Caused by full table scans, perhaps because of insufficient indexes or the unavailability of up-to-date statistics.
direct path writes:
You won't see these unless you are doing appends or data loads.
direct path reads:
Could happen if you are doing a lot of parallel query activity.
db file parallel writes / reads:
If you are doing a lot of partition activity, expect to see this wait event. It could be a table or an index partition.
db file single write:
If you see this event, you probably have a lot of data files in your database.
direct path read temp or direct path write temp:
This wait event shows temp file activity (sorts, hashes, temp tables, bitmaps).
Check the PGA, sort area or hash area parameters. You might want to increase them (see the sketch after this list).
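Before increasing those parameters, it is worth checking how the PGA is currently doing; a minimal sketch against v$pgastat:

select name, value, unit
from v$pgastat
where name in ('aggregate PGA target parameter',
               'total PGA allocated',
               'cache hit percentage');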


  • buffer type waits

These tell you what is going on in your memory.
latch: cache buffers chains:
Check for hot objects.
free buffer waits:
Insufficient buffers, processes holding buffers too long, or an overloaded I/O subsystem. Also check whether your DB writer is getting clogged up.
buffer busy waits:
See what is causing them further along in the report (and in the sketch after this list); most of the time they are data block related.
gc buffer busy:
A RAC event, possibly caused by not enough memory on your nodes or an overloaded interconnect. Also look at the RAC-specific section of the report.
latch: cache buffers lru chain – Freelist issues, hot blocks
latch: cache buffer handles – Freelist issues, hot blocks
buffer busy – See what is causing them further along in report
no free buffers – Insufficient buffers, DBWR contention
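As mentioned for buffer busy waits above, v$waitstat breaks these waits down by block class, which tells you whether data blocks, undo blocks or segment headers are the problem; a minimal sketch:

select class, count, time
from v$waitstat
order by count desc;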


  • Log Type Waits
log file parallel write – Look for log file contention
log buffer space – Look at increasing the log buffer size
log file switch (checkpoint incomplete) – May indicate excessive db files or a slow I/O subsystem
log file switch (archiving needed) – Indicates archive files are written too slowly
log file switch completion – May need more log files per group (see the query after this list)
log file sync – Could indicate excessive commits
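For the log file switch events, a good first check is how frequently the logs actually switch; a common sketch counts switches per hour from v$log_history:

select to_char(first_time, 'YYYY-MM-DD HH24') hr, count(*) switches
from v$log_history
group by to_char(first_time, 'YYYY-MM-DD HH24')
order by 1;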



  • GC Events
gccr multi block request – Full table or index scans
gc current multi block request – Full table or index scans
gccr block 2-way – Blocks are busy in another instance, check for block level contention or hot blocks
gccr block 3-way – Blocks are busy in another instance, check for block level contention or hot blocks
gccr block busy – Blocks are busy in another instance, check for block level contention or hot blocks
gccr block congested – cr block congestion, check for hot blocks or busy interconnect
gccr block lost – Indicates interconnect issues and contention
gc current block 2-way – Blocks are busy in another instance, check for block level contention or hot blocks
gc current block 3-way – Blocks are busy in another instance, check for block level contention or hot blocks
gc current block busy – Block is already involved in GC operation, shows hot blocks or congestion
gc current block congested – current block congestion, check for hot blocks or busy interconnect
gc current block lost - Indicates interconnect issues and contention
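On RAC you can rank these global cache events per instance directly from gv$system_event (cumulative since startup); a minimal sketch:

select inst_id, event, total_waits, time_waited
from gv$system_event
where event like 'gc%'
order by time_waited desc;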



  • Undo Events
undo segment extension – If excessive, tune undo
latch: In memory undo latch – If excessive, it could be a bug; check for your version, you may have to turn off in-memory undo
wait for a undo record – Usually seen only during recovery of large transactions; look at turning off parallel undo recovery (see the query after this list)
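Before tuning undo, v$undostat is worth a look; it keeps one row per 10-minute interval of undo usage, so you can see whether undo consumption or long-running queries are the issue:

select begin_time, end_time, undoblks, txncount, maxquerylen
from v$undostat
order by begin_time desc;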

General queries for DBAs

To get a picture of all the full or incremental backups taken, along with the time each took, use the script below.


select decode(BACKUP_TYPE, 'L', 'ARCH', 'D', 'DB', 'I', 'INC',
              'Unknown type='||BACKUP_TYPE) TYPE,
       to_char(a.start_time, 'DDMON HH24:MI') start_time,
       to_char(a.elapsed_seconds/60, '99.9')||' Min' DURATION,
       substr(handle, -35) handle,
       nvl(d.file#, l.sequence#) file#, nvl(d.blocks, l.blocks) blocks
from   SYS.V_$BACKUP_SET a, SYS.V_$BACKUP_PIECE b,
       SYS.V_$BACKUP_DATAFILE d, SYS.V_$BACKUP_REDOLOG l
where  a.start_time between sysdate-1 and sysdate
  and  a.SET_STAMP = b.SET_STAMP
  and  a.SET_STAMP = d.SET_STAMP(+)
  and  a.SET_STAMP = l.SET_STAMP(+)
order  by start_time, file#



This shows the backup of all datafiles and archive log files
 
 


To see if archived logs got backed up in the last 24 hours, and how many are still sitting on disk, use the script below.

SELECT backedup||' out of  '||archived||' archive logs backed up'  "Archlog files backed up",
       ondisk "Archlog files on disk"
  FROM (select count(*) archived
          from v$archived_log where completion_time > sysdate - 1),
       (select count(*) backedup from v$archived_log
         where backup_count > 0
           and completion_time > sysdate - 1),
       (select count(*) ondisk from v$archived_log
         where archived = 'YES' and deleted  = 'NO')


To find out how many datafiles out of the total got backed up in the last 24 hours, use the script below.


SQL> SELECT dbfiles||' out of '||numfiles||' datafiles backed up' "Datafiles backed up",
            cfiles "Control Files backed up", spfiles "SPFiles backed up"
     FROM (select count(*) numfiles from v$datafile),
          (select count(*) dbfiles from v$backup_datafile a, v$datafile b
           where a.file# = b.file# and a.completion_time > sysdate - 1),
          (select count(*) cfiles from v$backup_datafile
           where file# = 0 and completion_time > sysdate - 1),
          (select count(*) spfiles from v$backup_spfile
           where completion_time > sysdate - 1);


Monday, April 18, 2011

Locally vs. Dictionary Managed Tablespaces


When Oracle allocates space to a segment (like a table or index), a group of contiguous free blocks, called an extent, is added to the segment. Metadata regarding allocated and unallocated extents is stored either in the data dictionary or in the tablespace itself. Tablespaces that record extent allocation in the dictionary are called dictionary managed tablespaces, and tablespaces that record extent allocation in the tablespace header are called locally managed tablespaces.

SQL> select tablespace_name, extent_management, allocation_type from dba_tablespaces;

TABLESPACE_NAME                EXTENT_MAN ALLOCATIO
------------------------------ ---------- ---------
SYSTEM                         DICTIONARY USER
UNDOTBS1                       LOCAL      SYSTEM
TEMP                           LOCAL      UNIFORM
USERS                          LOCAL      SYSTEM

Dictionary Managed Tablespaces (DMT):

Oracle uses the data dictionary (tables in the SYS schema) to track allocated and free extents for tablespaces that are in "dictionary managed" mode. Free space is recorded in the SYS.FET$ table, and used space in the SYS.UET$ table. Whenever space is required in one of these tablespaces, the ST (space transaction) enqueue must be obtained to do inserts and deletes against these tables. As only one process can acquire the ST enqueue at a given time, this often leads to contention.
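You can check whether the ST enqueue is actually being waited on with a query like this sketch against v$enqueue_stat:

select eq_type, total_req#, total_wait#, cum_wait_time
from v$enqueue_stat
where eq_type = 'ST';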

Execute the following statement to create a dictionary managed tablespace:

SQL> CREATE TABLESPACE tblspc1 DATAFILE '/oradata/tblspc1_01.dbf' SIZE 50M
EXTENT MANAGEMENT DICTIONARY
DEFAULT STORAGE ( INITIAL 50K NEXT 50K MINEXTENTS 2 MAXEXTENTS 50 PCTINCREASE 0);

Locally Managed Tablespaces (LMT):

Using LMT, each tablespace manages its own free and used space within a bitmap structure stored in one of the tablespace's data files. Each bit corresponds to a database block or group of blocks. Execute one of the following statements to create a locally managed tablespace:

SQL> CREATE TABLESPACE tblspc2 DATAFILE '/oradata/tblspc2_01.dbf' SIZE 50M
EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

SQL> CREATE TABLESPACE tblspc3 DATAFILE '/oradata/tblspc3_01.dbf' SIZE 50M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K;

Note the difference between AUTOALLOCATE and UNIFORM SIZE:

AUTOALLOCATE specifies that extent sizes are system managed. Oracle will choose "optimal" next extent sizes starting with 64KB. As the segment grows larger extent sizes will increase to 1MB, 8MB, and eventually to 64MB. This is the recommended option for a low or unmanaged environment.

UNIFORM specifies that the tablespace is managed with uniform extents of SIZE bytes (use K or M to specify the extent size in kilobytes or megabytes). The default size is 1M. The uniform extent size of a locally managed tablespace cannot be overridden when a schema object, such as a table or an index, is created.
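To verify the uniform sizing, you can inspect the extents after creating a segment in the tablespace; a sketch using the tblspc3 example above (assuming some table has already been created in it):

select segment_name, extent_id, bytes
from dba_extents
where tablespace_name = 'TBLSPC3'
order by segment_name, extent_id;
-- with UNIFORM SIZE 128K, every row should show BYTES = 131072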

Also note: if you specify LOCAL, you cannot specify DEFAULT STORAGE, MINIMUM EXTENT or TEMPORARY.

Advantages of Locally Managed Tablespaces:
Eliminates the need for recursive SQL operations against the data dictionary (UET$ and FET$ tables)
Reduces contention on data dictionary tables (single ST enqueue)
Locally managed tablespaces eliminate the need to periodically coalesce free space (automatically tracks adjacent free space)
Changes to the extent bitmaps do not generate rollback information

Locally Managed SYSTEM Tablespace:

From Oracle9i release 9.2 one can change the SYSTEM tablespace to locally managed. Further, if you create a database with DBCA (Database Configuration Assistant), it will have a locally managed SYSTEM tablespace by default. The following restrictions apply:

No dictionary-managed tablespace in the database can be READ WRITE.
You cannot create new dictionary managed tablespaces
You cannot convert any dictionary managed tablespaces to local

Thus, it is best only to convert the SYSTEM tablespace to LMT after all other tablespaces are migrated to LMT.

Segment Space Management in LMT:

From Oracle 9i, one can not only have bitmap managed tablespaces, but also bitmap managed segments when setting Segment Space Management to AUTO for a tablespace. Look at this example:

SQL> CREATE TABLESPACE tblspc4 DATAFILE '/oradata/tblspc4_01.dbf' SIZE 50M
EXTENT MANAGEMENT LOCAL
SEGMENT SPACE MANAGEMENT AUTO;

Segment Space Management eliminates the need to specify and tune the PCTUSED, FREELISTS, and FREELIST GROUPS storage parameters for schema objects. The Automatic Segment Space Management feature significantly improves the performance of concurrent DML operations, since different parts of the bitmap can be used simultaneously, eliminating serialization of free space lookups against the FREELISTS. This is of particular importance when using RAC, or if "buffer busy waits" are detected.
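To confirm which of your tablespaces already use automatic segment space management:

select tablespace_name, extent_management, segment_space_management
from dba_tablespaces;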

Convert between LMT and DMT:

The DBMS_SPACE_ADMIN package allows DBAs to quickly and easily convert between LMT and DMT mode. Look at these examples:

SQL> exec dbms_space_admin.Tablespace_Migrate_TO_Local('tblspc1');
PL/SQL procedure successfully completed.

SQL> exec dbms_space_admin.Tablespace_Migrate_FROM_Local('tblspc2');
PL/SQL procedure successfully completed.

Cool Scripts for daily DBA activities

I will be adding scripts to this post whenever I come across a good one.

Note: Test the scripts before using them on a production database.
Format the SQL statements neatly before use.

Calculate the Database Size

COLUMN "Total Mb" FORMAT 999,999,999.0
COLUMN "Redo Mb" FORMAT 999,999,999.0
COLUMN "Temp Mb" FORMAT 999,999,999.0
COLUMN "Data Mb" FORMAT 999,999,999.0

Prompt
Prompt "Database Size"

select (select sum(bytes/1048576) from dba_data_files) "Data Mb",
(select NVL(sum(bytes/1048576),0) from dba_temp_files) "Temp Mb",
(select sum(bytes/1048576)*max(members) from v$log) "Redo Mb",
(select sum(bytes/1048576) from dba_data_files) +
(select NVL(sum(bytes/1048576),0) from dba_temp_files) +
(select sum(bytes/1048576)*max(members) from v$log) "Total Mb"
from dual;
=========================================================
Heavy CPU SQL

This lists the top SQL statements by buffer gets, which usually correlates with heavy CPU usage.

set termout on
set feedback on
set pagesize 132

-- spool cpusql.lis

SELECT username,address, hash_value,
buffer_gets, executions, buffer_gets/executions "Gets/Exec",sql_text
FROM v$sqlarea,dba_users
WHERE buffer_gets > 50000
and executions > 0
and v$sqlarea.parsing_user_id = dba_users.user_id
order by 4 desc;

-- spool off
=========================================================
Calculate the Table Size

SELECT Segment_Name Table_Name,
SUM(Bytes) / (1024 * 1024) Table_Size_Meg
FROM dba_Extents
WHERE Owner = 'SCOTT'
AND Segment_Name = 'DEPT'
AND Segment_Type = 'TABLE'
GROUP BY Segment_Name
/

=============================================================================
Determine Tablespace Usage

SELECT a.TableSpace_Name,
Round(a.Bytes_Alloc / 1024 / 1024,2) Megs_Alloc,
Round(Nvl(b.Bytes_Free,0) / 1024 / 1024,2) Megs_Free,
Round((a.Bytes_Alloc - Nvl(b.Bytes_Free,0)) / 1024 / 1024,
2) Megs_Used,
Round((Nvl(b.Bytes_Free,0) / a.Bytes_Alloc) * 100,
2) pct_Free,
100 - Round((Nvl(b.Bytes_Free,0) / a.Bytes_Alloc) * 100,
2) pct_Used,
Round(MaxBytes / 1048576,2) MAX
FROM (SELECT f.TableSpace_Name,
SUM(f.Bytes) Bytes_Alloc,
SUM(DECODE(f.AutoexTensible,'YES',f.MaxBytes,
'NO',f.Bytes)) MaxBytes
FROM dba_Data_Files f
GROUP BY TableSpace_Name) a,
(SELECT f.TableSpace_Name,
SUM(f.Bytes) Bytes_Free
FROM dba_Free_Space f
GROUP BY TableSpace_Name) b
WHERE a.TableSpace_Name = b.TableSpace_Name (+)
UNION ALL
SELECT h.TableSpace_Name,
Round(SUM(h.Bytes_Free + h.Bytes_Used) / 1048576,
2) Megs_Alloc,
Round(SUM((h.Bytes_Free + h.Bytes_Used) - Nvl(p.Bytes_Used,0)) / 1048576,
2) Megs_Free,
Round(SUM(Nvl(p.Bytes_Used,0)) / 1048576,2) Megs_Used,
Round((SUM((h.Bytes_Free + h.Bytes_Used) - Nvl(p.Bytes_Used,0)) / SUM(h.Bytes_Used + h.Bytes_Free)) * 100,
2) pct_Free,
100 - Round((SUM((h.Bytes_Free + h.Bytes_Used) - Nvl(p.Bytes_Used,0)) / SUM(h.Bytes_Used + h.Bytes_Free)) * 100,
2) pct_Used,
Round(SUM(f.MaxBytes) / 1048576,2) MAX
FROM sys.v_$temp_Space_Header h,
sys.v_$temp_Extent_Pool p,
dba_temp_Files f
WHERE p.File_Id (+) = h.File_Id
AND p.TableSpace_Name (+) = h.TableSpace_Name
AND f.File_Id = h.File_Id
AND f.TableSpace_Name = h.TableSpace_Name
GROUP BY h.TableSpace_Name
ORDER BY 1
-------------------------------------------------------------------------
SELECT TableSpace_Name "Tablespace",
COUNT(Bytes) "Pieces",
MIN(Bytes) "Min",
Round(Avg(Bytes)) "Average",
MAX(Bytes) "Max",
SUM(Bytes) "Total"
FROM sys.dba_Free_Space
GROUP BY TableSpace_Name

==========================================================================
Track your import process:

SELECT Substr(sql_Text,Instr(sql_Text,'INTO "'),30) Table_Name,
Rows_Processed,
Round((SYSDATE - To_date(First_Load_Time,'yyyy-mm-dd hh24:mi:ss')) * 24 * 60,
1) Minutes,
Trunc(Rows_Processed / ((SYSDATE - To_date(First_Load_Time,'yyyy-mm-dd hh24:mi:ss')) * 24 * 60)) Rows_Per_Minute
FROM sys.v_$sqlArea
WHERE sql_Text LIKE 'INSERT %INTO "%'
AND Command_Type = 2
AND Open_Versions > 0;
===============================================================================
Details of parameters for SPfile modifications

SELECT NAME,
Isses_modIfiAble,
Issys_modIfiAble,
IsInstance_modIfiAble
FROM v$Parameter
ORDER BY NAME;
==============================================================================
Query to find the number of days between two dates, omitting weekends (holidays would need a separate calendar table).


SELECT (To_date('31/01/2008','dd/mm/rrrr') - To_date('01/01/2008','dd/mm/rrrr') + 1)
       - (SELECT COUNT(Days)
            FROM (SELECT To_char(To_date('01/01/2008','dd/mm/rrrr') + LEVEL - 1,'D') Days
                    FROM Dual CONNECT BY LEVEL <= 31)
           WHERE Days IN ('7','1')) DAY
  FROM Dual;
-- note: LEVEL - 1 keeps the generated days inside the range being measured,
-- and the 'D' day numbers (here 1 = Sunday, 7 = Saturday) depend on NLS_TERRITORY
===============================================================================

If Alert log file is lost


One of my friends asked me: what happens if the alert log file is lost?

The answer is that it is automatically re-created the next time an entry needs to be written to the alert log.
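In 11g the alert log lives under the ADR, so if you are not sure where to look for it, v$diag_info will tell you:

select name, value
from v$diag_info
where name in ('Diag Trace', 'Diag Alert');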

Let's see a practical demonstration.

[oracle@oracle11gr1 ~]$ sqlplus /nolog

SQL*Plus: Release 11.1.0.6.0 - Production on Sat Apr 19 15:35:05 2008

Copyright (c) 1982, 2007, Oracle. All rights reserved.

SQL> conn /as sysdba
Connected.
SQL>
SQL>!

[oracle@oracle11gr1]$ cd /u01/app/oracle/diag/rdbms/orcl/orcl/trace
[oracle@oracle11gr1 trace]$
[oracle@oracle11gr1 trace]$ ll alert_orcl.log
-rw-r----- 1 oracle oinstall 144278 Apr 19 15:26 alert_orcl.log

[oracle@oracle11gr1 trace]$ pwd
/u01/app/oracle/diag/rdbms/orcl/orcl/trace

[oracle@oracle11gr1 trace]$
[oracle@oracle11gr1 trace]$ mv alert_orcl.log alert_orcl_bak.log

[oracle@oracle11gr1 trace]$ll alert_orcl.log
ls: alert_orcl.log: No such file or directory

[oracle@oracle11gr1 trace]$exit

SQL>
SQL> CREATE TABLESPACE EDR_VECTOR_SPIND DATAFILE
  2  '/u01/app/oracle/oradata/orcl/user.dbf' SIZE 4M AUTOEXTEND ON
  3  MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;

Tablespace created.

SQL>
SQL>
SQL> !
[oracle@oracle11gr1 ~]$
[oracle@oracle11gr1 ~]$
[oracle@oracle11gr1 ~]$ cd /u01/app/oracle/diag/rdbms/orcl/orcl/trace
[oracle@oracle11gr1 trace]$
[oracle@oracle11gr1 trace]$
[oracle@oracle11gr1 trace]$ ll alert_orcl.log
-rw-r----- 1 oracle oinstall 392 Apr 19 15:33 alert_orcl.log

[oracle@oracle11gr1 trace]$
[oracle@oracle11gr1 trace]$ more alert_orcl.log
Sat Apr 19 15:33:15 2008
CREATE TABLESPACE EDR_VECTOR_SPIND DATAFILE
'/u01/app/oracle/oradata/orcl/user.dbf' SIZE 4M AUTOEXTEND ON
MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO
Completed: CREATE TABLESPACE EDR_VECTOR_SPIND DATAFILE
'/u01/app/oracle/oradata/orcl/user.dbf' SIZE 4M AUTOEXTEND ON
MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO

[oracle@oracle11gr1 trace]$

Tom Kyte's Challenge


While surfing, I came across an article on a blog that is simple (though not as simple as it seems):
you have to correctly provide ALL of the versions in which the following features were added to Oracle.

I thought it was interesting, so I blogged it here.

Happy reading, fun along with knowledge... just go on

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

ops$tkyte%ORA10GR2> select distinct version from features order by version;

VERSION
--------------------
10.1
10.2
11.1
2
3
4
5
6
7.0
7.1
7.2
7.3
8.0
8.1.5
8.1.6
8.1.7
9.0
9.2

18 rows selected.

So, here are the features:

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select rownum, txt
2 from (select txt from features order by rnd)
3 /

ROWNUM TXT
---------- ----------------------------------------------------------------------
1 Real Application Testing
2 Read only Replication
3 Distributed Query
4 Drop column
5 Client-Server (where the client could be elsewhere in the network)
6 Object Relational Features
7 Ability to return result sets from stored procedures (ref cursors)
8 Commit and Rollback (transactions)
9 Triggers
10 Function based indexes
11 Materialized Views
12 Rman
13 Audit SYSDBA/SYSOPER activity
14 Automatic Undo Management
15 Resumable Operations
16 Automatic Storage Management (ASM)
17 Streams
18 Bitmap Indexes
19 csscan - Character Set Scanner utility
20 Flashback Query
21 Case statement (IN SQL, instead of decode)
22 Parallel Query
23 Transparent column level encryption
24 Tablespace encryption
25 PL/SQL
26 Partitioning
27 Row Level Locking
28 Read Consistency (my favorite feature!)
29 2 Phase Commit
30 Sorted Hash Clusters
31 Conditional compilation for PL/SQL
32 Connect By Queries (select ename, level from emp connect by prior....)
33 Update anywhere Replication

33 rows selected.

=======================================
Check your answers:


select rownum, version, txt
2 from (select version, txt from features order by rnd)
3 /

or

col txt for a40
select * from features order by
to_number(substr(version,1,decode(instr(version,'.'),0,1,instr(version,'.'))))
/



ROWNUM VERSION TXT
---------- -------------------- ----------------------------------------
1 8.1.5 Materialized Views
2 10.1 Sorted Hash Clusters
3 8.0 Rman
4 8.1.6 Case statement
5 7.0 2 Phase Commit
6 11.1 Real Application Testing
7 2 Connect By Queries (select ename, level
from emp connect by prior....)

8 9.0 Automatic Undo Management
9 3 Commit and Rollback (transactions)
10 7.2 Ability to return result sets from store
d procedures (ref cursors)

11 8.1.5 Function based indexes
12 11.1 Tablespace encryption
13 8.1.6 csscan - Character Set Scanner utility
14 4 Read Consistency (my favorite feature!)
15 7.0 Triggers
16 10.1 Automatic Storage Management (ASM)
17 7.1 Update anywhere Replication
18 5 Distributed Query
19 7.1 Parallel Query
20 5 Client-Server (where the client could be
elsewhere in the network)

21 10.2 Transparent column level encryption
22 10.2 Conditional compilation for PL/SQL
23 8.1.5 Drop column
24 9.2 Streams
25 8.0 Object Relational Features
26 9.0 Flashback Query
27 9.2 Audit SYSDBA/SYSOPER activity
28 7.3 Bitmap Indexes
29 6 PL/SQL
30 7.0 Read only Replication
31 6 Row Level Locking
32 8.0 Partitioning
33 9.0 Resumable Operations
================================================================================

Want to know how exactly Tom created this table?

Here is the script:

/*

drop table features;
create table features( rnd number, version varchar2(20), txt varchar2(100) );

insert into features values ( dbms_random.random, '2', 'Connect By Queries (select ename, level
from emp connect by prior....)');
insert into features values ( dbms_random.random, '3', 'Commit and Rollback (transactions)');
insert into features values ( dbms_random.random, '4', 'Read Consistency (my favorite feature!)');
insert into features values ( dbms_random.random, '5', 'Client-Server (where the client could be
elsewhere in the network)');
insert into features values ( dbms_random.random, '5', 'Distributed Query');
insert into features values ( dbms_random.random, '6', 'Row Level Locking');
insert into features values ( dbms_random.random, '6', 'PL/SQL');
insert into features values ( dbms_random.random, '7.0', '2 Phase Commit');
insert into features values ( dbms_random.random, '7.0', 'Triggers');
insert into features values ( dbms_random.random, '7.0', 'Read only Replication');
insert into features values ( dbms_random.random, '7.1', 'Update anywhere Replication');
insert into features values ( dbms_random.random, '7.1', 'Parallel Query');
insert into features values ( dbms_random.random, '7.2', 'Ability to return result sets from stored
procedures (ref cursors)');
insert into features values ( dbms_random.random, '7.3', 'Bitmap Indexes');
insert into features values ( dbms_random.random, '8.0', 'Object Relational Features');
insert into features values ( dbms_random.random, '8.0', 'Partitioning');
insert into features values ( dbms_random.random, '8.0', 'Rman');
insert into features values ( dbms_random.random, '8.1.5', 'Materialized Views');
insert into features values ( dbms_random.random, '8.1.5', 'Function based indexes');
insert into features values ( dbms_random.random, '8.1.5', 'Drop column');
insert into features values ( dbms_random.random, '8.1.6', 'Case statement');
insert into features values ( dbms_random.random, '8.1.6', 'csscan - Character Set Scanner
utility');
insert into features values ( dbms_random.random, '9.0', 'Automatic Undo Management');
insert into features values ( dbms_random.random, '9.0', 'Resumable Operations');
insert into features values ( dbms_random.random, '9.0', 'Flashback Query');
insert into features values ( dbms_random.random, '9.2', 'Streams');
insert into features values ( dbms_random.random, '9.2', 'Audit SYSDBA/SYSOPER activity');
insert into features values ( dbms_random.random, '10.1', 'Automatic Storage Management (ASM)');
insert into features values ( dbms_random.random, '10.1', 'Sorted Hash Clusters');
insert into features values ( dbms_random.random, '10.2', 'Conditional compilation for PL/SQL');
insert into features values ( dbms_random.random, '10.2', 'Transparent column level encryption');
insert into features values ( dbms_random.random, '11.1', 'Tablespace encryption');
insert into features values ( dbms_random.random, '11.1', 'Real Application Testing');

select distinct version from features order by version;
*/

===================================================================