Oracle Tips and Tricks — David Fitzjarrell

February 23, 2015

Merge, Right?

Filed under: General,Performance — dfitzjarrell @ 15:33

"There are no wrong roads to anywhere."
 - Norton Juster, The Phantom Tollbooth 

Oracle can do some, well, strange things when fixing issues caused by the optimizer. For 10g releases prior to 10.2.0.x Oracle chose to silently ignore an error message and eliminate an outer join on the MERGE statement under certain conditions. Let’s examine this and see exactly what Oracle implements.
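
The examples that follow merge into a small table named t1_merge. Its DDL isn't reproduced in this post, but a hypothetical layout consistent with the column references in the MERGE statement would be:


--
-- Hypothetical setup for the examples below; the column names come from
-- the MERGE statement itself, the data types are assumptions
--
create table t1_merge(
	merge_key	number,
	more_txt	varchar2(40),
	xtra_stuff	varchar2(40));

insert into t1_merge
values (1, '1', lpad('x', 21, 'x'));

commit;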

Occasionally, in a 10046 trace file generated from 10g releases prior to 10.2.0.x and processed by the tkprof utility, you may see something similar to this:


SELECT 1 FROM DUAL WHERE <some condition here>
 
Error encountered: ORA-00904

where <some condition here> is the ‘ON’ condition from a MERGE statement executed by that session. That seems odd, I know, and Oracle silently ignores that ORA-00904 condition as it is never displayed when executing the MERGE. Why would Oracle do this? Looking at the 10053 trace file generated from the following example:


SQL> merge into t1_merge t1
  2          using (select 1 merged_key, lpad('y', 21, 'y') xtra  from dual) t2
  3          on (t1.merge_key = t2.merged_key)
  4  when matched then update set t1.xtra_stuff = t2.xtra
  5  when not matched then insert (merge_key, more_txt, xtra_stuff) values (t2.merged_key, t2.merged_key, t2.xtra);

1 row merged.

SQL>

we see the following:


PARSE ERROR #5:len=36 dep=1 uid=60 oct=3 lid=60 tim=1266258229533447 err=904
SELECT 1 FROM DUAL WHERE t1.merge_key = t2.merged_key
CLOSE #5:c=0,e=4,dep=1,type=0,tim=1266258229533578
Registered qb: MRG$1 0x84379c (PARSER)
---------------------
QUERY BLOCK SIGNATURE
---------------------
...
====================== END SQL Statement Dump ======================
=====================
PARSING IN CURSOR #1 len=302 dep=0 uid=1158 oct=189 lid=1158 tim=33189168479100 hv=1190985157 ad='1ec71da8'
merge into t1_merge t1
        using (select 1 merged_key, lpad('y', 21, 'y') xtra  from dual) t2
        on (t1.merge_key = t2.merged_key)
when matched then update set t1.xtra_stuff = t2.xtra
when not matched then insert (merge_key, more_txt, xtra_stuff) values (t2.merged_key, t2.merged_key, t2.xtra)
END OF STMT
PARSE #1:c=40000,e=51569,p=4,cr=29,cu=0,mis=1,r=0,dep=0,og=2,tim=33189168479094
BINDS #1:
EXEC #1:c=10000,e=558,p=0,cr=7,cu=3,mis=0,r=1,dep=0,og=2,tim=33189168479778
WAIT #1: nam='SQL*Net message to client' ela= 1 driver id=1650815232 #bytes=1 p3=0 obj#=2135111 tim=33189168479862
WAIT #1: nam='SQL*Net message from client' ela= 419 driver id=1650815232 #bytes=1 p3=0 obj#=2135111 tim=33189168480324
STAT #1 id=1 cnt=2 pid=0 pos=1 obj=0 op='MERGE  T1_MERGE (cr=7 pr=0 pw=0 time=425 us)'
STAT #1 id=2 cnt=1 pid=1 pos=1 obj=0 op='VIEW  (cr=7 pr=0 pw=0 time=178 us)'
STAT #1 id=3 cnt=1 pid=2 pos=1 obj=0 op='NESTED LOOPS OUTER (cr=7 pr=0 pw=0 time=175 us)'
STAT #1 id=4 cnt=1 pid=3 pos=1 obj=0 op='FAST DUAL  (cr=0 pr=0 pw=0 time=1 us)'
STAT #1 id=5 cnt=1 pid=3 pos=2 obj=2135111 op='TABLE ACCESS FULL T1_MERGE (cr=7 pr=0 pw=0 time=170 us)'

Notice there is no outer join in the plan steps. Additionally if we look at the outline data we see:


  /*+
    BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('10.2.0.4')
      FIRST_ROWS(10)
      OUTLINE_LEAF(@"SEL$5428C7F1")
      MERGE(@"SEL$2")
      MERGE(@"SEL$3")
      OUTLINE_LEAF(@"SEL$4")
      OUTLINE_LEAF(@"MRG$1")
      OUTLINE(@"SEL$1")
      OUTLINE(@"SEL$2")
      NO_ACCESS(@"MRG$1" "from$_subquery$_008"@"MRG$1")
      NO_ACCESS(@"MRG$1" "T2"@"MRG$1")
      FULL(@"MRG$1" "T1"@"MRG$1")
      LEADING(@"MRG$1" "from$_subquery$_008"@"MRG$1" "T2"@"MRG$1" "T1"@"MRG$1")
      USE_MERGE_CARTESIAN(@"MRG$1" "T2"@"MRG$1")
      USE_MERGE_CARTESIAN(@"MRG$1" "T1"@"MRG$1")
    END_OUTLINE_DATA
  */

The outline data in these cases shows the outer join isn’t used as Oracle re-writes the query into this:


merge into t1
using (select 1 merged_key, lpad('y', 21, 'y') xtra from dual) t2
on (1 = 0)
when matched then update set t1.xtra = t2.xtra
when not matched then insert (merge_key, more_txt, xtra_stuff) values (t2.merged_key, t2.merged_key, t2.xtra);

Interestingly, the reason for this behavior is available in the online documentation:


To insert all of the source rows into the table, you can use a constant filter predicate in the ON clause condition. An example of a 
constant filter predicate is ON (0=1). Oracle Database recognizes such a predicate and makes an unconditional insert of all source 
rows into the table. This approach is different from omitting the merge_update_clause. In that case, the database still must perform 
a join. With constant filter predicate, no join is performed.

This appears to be fixed in 10.2.0.x and later releases, as the ORA-00904 error does not occur in either of the trace files. MOS document id 244055.1 lists this modification as an ‘enhancement’ in 10g. The document also states that the bug requiring this work-around is fixed in 10.1.0.1.0. Even though 10g in all its forms is an old release, there still may be sites using it.

In 11.2.0.4 this same construct generates no ORA-00904 error, nor does the ‘dummy’ query appear. Notice that the join is restored as evidenced by the execution plan:


----- Current SQL Statement for this session (sql_id=5zv0cdsdp0bf4) -----
merge into t1_merge t1
	using (select 1 merged_key, lpad('y', 21, 'y') xtra  from dual) t2
	on (t1.merge_key = t2.merged_key)
when matched then update set t1.xtra_stuff = t2.xtra
when not matched then insert (merge_key, more_txt, xtra_stuff) values (t2.merged_key, t2.merged_key, t2.xtra)
sql_text_length=289
sql=merge into t1_merge t1
	using (select 1 merged_key, lpad('y', 21, 'y') xtra  from dual) t2
	on (t1.merge_key = t2.merged_key)
when matched then update set t1.xtra_stuff = t2.xtra
when not matched then insert (merge_key, more_txt, xtra_stuff) values (t2.mer
sql=ged_key, t2.merged_key, t2.xtra)
----- Explain Plan Dump -----
----- Plan Table -----
 
============
Plan Table
============
----------------------------------------+-----------------------------------+
| Id  | Operation             | Name    | Rows  | Bytes | Cost  | Time      |
----------------------------------------+-----------------------------------+
| 0   | MERGE STATEMENT       |         |       |       |     6 |           |
| 1   |  MERGE                | T1_MERGE|       |       |       |           |
| 2   |   VIEW                |         |       |       |       |           |
| 3   |    NESTED LOOPS OUTER |         |     1 |    31 |     6 |  00:00:01 |
| 4   |     TABLE ACCESS FULL | DUAL    |     1 |     2 |     2 |  00:00:01 |
| 5   |     TABLE ACCESS FULL | T1_MERGE|     1 |    29 |     4 |  00:00:01 |
----------------------------------------+-----------------------------------+
Predicate Information:
----------------------
5 - filter("T1"."MERGE_KEY"=CASE  WHEN (ROWID IS NOT NULL) THEN 1 ELSE 1 END )

The outline data where this is fixed shows the join is used:


  /*+
    BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('10.2.0.4')
      DB_VERSION('11.2.0.4')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$5428C7F1")
      MERGE(@"SEL$2")
      MERGE(@"SEL$3")
      OUTLINE_LEAF(@"SEL$4")
      OUTLINE_LEAF(@"MRG$1")
      OUTLINE(@"SEL$1")
      OUTLINE(@"SEL$2")
      OUTLINE(@"SEL$3")
      NO_ACCESS(@"MRG$1" "from$_subquery$_008"@"MRG$1")
      NO_ACCESS(@"MRG$1" "T2"@"MRG$1")
      FULL(@"MRG$1" "T1"@"MRG$1")
      LEADING(@"MRG$1" "from$_subquery$_008"@"MRG$1" "T2"@"MRG$1" "T1"@"MRG$1")
      USE_MERGE_CARTESIAN(@"MRG$1" "T2"@"MRG$1")
      USE_MERGE_CARTESIAN(@"MRG$1" "T1"@"MRG$1")
      PQ_DISTRIBUTE(@"MRG$1" "T2"@"MRG$1" NONE NONE)
      PQ_DISTRIBUTE(@"MRG$1" "T1"@"MRG$1" NONE NONE)
      FULL(@"SEL$5428C7F1" "DUAL"@"SEL$3")
      FULL(@"SEL$5428C7F1" "T1"@"SEL$2")
      LEADING(@"SEL$5428C7F1" "DUAL"@"SEL$3" "T1"@"SEL$2")
      USE_NL(@"SEL$5428C7F1" "T1"@"SEL$2")
    END_OUTLINE_DATA
  */

Oracle has, apparently, seen the ‘error of its ways’ with this ‘enhancement’ and re-thought how MERGE should be executed. Of course you may be stuck running an older version of 10g and could be affected by this ‘enhancement’. Simply migrating to 10.2.0.4 or a later release eliminates this strange query from your MERGE processing. Know that this isn’t really a performance problem, but it can be disconcerting to see such essentially useless query text in the trace files you generate. Knowing where it comes from will hopefully make it easier to assess performance issues requiring 10046 and 10053 trace files.
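
If you want to generate the traces referenced here for your own MERGE statements, one way to do it is to set the events at the session level; a minimal sketch (the tracefile identifier is arbitrary and only makes the resulting files easier to find):


--
-- Enable 10046 (extended SQL trace) and 10053 (optimizer) tracing for
-- the current session, run the MERGE, then turn the events off
--
alter session set tracefile_identifier = 'merge_test';
alter session set events '10046 trace name context forever, level 12';
alter session set events '10053 trace name context forever, level 1';

-- ... execute the MERGE statement here ...

alter session set events '10053 trace name context off';
alter session set events '10046 trace name context off';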

It really helps to pay close attention when merging.

February 3, 2015

How Can I Compress Thee

Filed under: Exadata,General,Performance — dfitzjarrell @ 10:26

“You can swim all day in the Sea of Knowledge and not get wet.” 
― Norton Juster, The Phantom Tollbooth 

In previous posts compression options have been discussed, and now it’s time to see how Oracle performs basic compression. It isn’t really compression, it’s de-duplication, but it does result in space savings for data that won’t be modified after it’s ‘compressed’. Let’s look at how Oracle saves space with your data.

Oracle de-duplicates the data by finding common strings, tokenizing them and using the token identifier in the string to reduce the row length. So, what does that mean? Looking at an example might help; a table is built and populated as follows:


--
-- Create and populate the table
--
create table comptst(
	tstcol1	varchar2(4),
	tstcol2 varchar2(6),
	tstcol3	number(8,2),
	tstcol4	varchar2(10));

insert into comptst
values ('ZZXZ', 'bbddff', 199.44, 'PENDING');

insert into comptst
values ('ZZXZ', 'ghijkl', 43.08, 'PENDING');

insert into comptst
values ('ZZXZ', 'bbddff', 881.02, 'PENDING');

insert into comptst
values ('ZZXZ', 'bbddff', 54.97, 'PENDING');

commit;

insert into comptst select * From comptst;
insert into comptst select * From comptst;
insert into comptst select * From comptst;
insert into comptst select * From comptst;
insert into comptst select * From comptst;
insert into comptst select * From comptst;
insert into comptst select * From comptst;
insert into comptst select * From comptst;
insert into comptst select * From comptst;
insert into comptst select * From comptst;
insert into comptst select * From comptst;
insert into comptst select * From comptst;
insert into comptst select * From comptst;
insert into comptst select * From comptst;
insert into comptst select * From comptst;
insert into comptst select * From comptst;
insert into comptst select * From comptst;
insert into comptst select * From comptst;
insert into comptst select * From comptst;
insert into comptst select * From comptst;
insert into comptst select * From comptst;
insert into comptst select * From comptst;
insert into comptst select * From comptst;
insert into comptst select * From comptst;
insert into comptst select * From comptst;

commit;
				--
-- Compress the table with BASIC compression
--
alter table comptst compress basic;
alter table comptst move;

[The table was compressed after the data was inserted, which required two steps: the first to set the compression level and the second, a table move in place, to actually compress the data. Had the table been built as compressed and direct path inserts used, the data would have been compressed without further action.] Since the initial 4-row insert was re-inserted multiple times there is a lot of duplication in the data, and since Oracle de-duplicates rows to produce the effect of compression there should be a lot of data in a block dump indicating this. There is, and the first piece of that data is the following line:


  perm_9ir2[4]={ 0 2 3 1 }

Oracle builds a token table for each data block; this provides a reference for each data string that occurs more than once in the block. Additionally, Oracle can re-arrange the column values in that token table so that multiple column values can be turned into a single token and, thus, a single reference. The line shown above indicates how the column values map to the table positions in the token table for this block; in this case column 0 maps to the data in table column 0, column 1 maps to the data in table column 2, column 2 maps to data column 3 and column 3 maps to data column 1. Let’s look at the unique data that was inserted:


('ZZXZ', 'bbddff', 199.44, 'PENDING');
('ZZXZ', 'ghijkl', 43.08, 'PENDING');
('ZZXZ', 'bbddff', 881.02, 'PENDING');
('ZZXZ', 'bbddff', 54.97, 'PENDING');

Since these data rows are duplicated in each block every column is a potential compression token. Two values occur in every row, ‘ZZXZ’ and ‘PENDING’, so it should be expected that tokens for those values will be found in each of the compressed data rows. As mentioned previously Oracle builds a token table in each block, so there are two tables in this block: the first, starting at offset 0, is the token table, which has 7 rows; the second, starting at offset 7, is the actual table data, which has 721 rows:


0x24:pti[0]	nrow=7		offs=0
0x28:pti[1]	nrow=721	offs=7

Oracle is fairly clever with this implementation of compression and can create a token that includes both a data value and another token from the same token table, reducing the row length even further. The examples provided here won’t demonstrate that, but know that it is possible. Now let’s look at the first row in this block for the data table:


tab 1, row 0, @0x1f31
tl: 5 fb: --H-FL-- lb: 0x0  cc: 4
col  0: [ 4]  5a 5a 58 5a
col  1: [ 7]  50 45 4e 44 49 4e 47
col  2: [ 6]  62 62 64 64 66 66
col  3: [ 3]  c1 37 62
bindmp: 2c 00 01 04 02

The actual column lengths are supplied between the square brackets for each column; the total length should be the sum of those values plus 7 bytes of overhead, 4 of those for the column length bytes, one for the lock byte, one for the flag byte and one for the column count. Using that information the total length should be 27 bytes (4 + 7 + 6 + 3 data bytes plus the 7 bytes of overhead); the block dump provides a very different total length of 5, as reported by the tl entry. There is a line at the end of the row dump labeled bindmp (a binary dump of the row contents) revealing the actual contents of those 5 bytes. As expected there is the flag byte (0x2c), the lock byte (0x00), the count of column entries stored at this location (0x01) and two bytes representing the token, reporting that 4 columns are in this token and that the reference row in the token table is row 2. So, let’s look at table 0, row 2:


tab 0, row 2, @0x1f5c
tl: 10 fb: --H-FL-- lb: 0x0  cc: 4
col  0: [ 4]  5a 5a 58 5a
col  1: [ 7]  50 45 4e 44 49 4e 47
col  2: [ 6]  62 62 64 64 66 66
col  3: [ 3]  c1 37 62
bindmp: 00 b3 04 04 05 06 cb c1 37 62

It looks almost like the data row, but the total token length is 10 bytes. Looking at the bindmp the first two bytes indicate this token is used 179 times in this block, the third byte indicates that 4 columns are in this token, the two bytes after that report that the first two columns are also tokens, 0x04 and 0x05. Going back to the token table we see that those tokens are:


tab 0, row 4, @0x1f66
tl: 7 fb: --H-FL-- lb: 0x0  cc: 1
col  0: [ 4]  5a 5a 58 5a
bindmp: 00 04 cc 5a 5a 58 5a
tab 0, row 5, @0x1f76
tl: 10 fb: --H-FL-- lb: 0x0  cc: 1
col  0: [ 7]  50 45 4e 44 49 4e 47
bindmp: 00 04 cf 50 45 4e 44 49 4e 47

These are single-column tokens, and each is used 4 times in this block. This is how Oracle reduced the row length from 27 bytes to 5 to save block space. Working through the block dump it’s now possible to re-construct the 27 bytes of data the row originally contained even though it now occupies only 5 bytes.
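
Should you want to reproduce a dump like the ones shown above, the block holding a given row can be located through its rowid and dumped to a trace file; a minimal sketch (the file and block numbers in the ALTER SYSTEM command are placeholders to be replaced with the values the query returns):


--
-- Locate a file and block number for one of the compressed rows
--
select dbms_rowid.rowid_relative_fno(rowid) rel_fno,
       dbms_rowid.rowid_block_number(rowid) block_no
from comptst
where rownum = 1;

--
-- Dump that block; the dump is written to a trace file in the
-- diagnostic trace directory
--
alter system dump datafile 4 block 1795;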


"The way you see things depends a great deal on where you look at them from."
 - Norton Juster, The Phantom Tollbooth 

We see that Oracle doesn’t actually compress data; it replaces duplicate values with tokens and, through those tokens, reconstructs the data at query time by using the row directory and the actual row pieces in each block. Depending on the select list some tokens won’t be accessed if that data isn’t required. Of course all of this re-constructing can be expensive at the CPU level, and for full table scans of large tables performance can be an issue, especially with the “cache buffers chains” latch because Oracle is performing fewer “consistent gets – examination”. This is because Oracle has to pin blocks for a longer period due to the reconstruction. On the plus side the number of physical reads can decrease since the data occupies a smaller number of blocks and can stay in the cache longer. Using basic compression is a trade-off between size and performance, and for extremely large tables, or in cases where the compression savings are quite large (meaning the data is compressed more), queries may become CPU-intensive rather than I/O-intensive. The good and the bad need to be weighed carefully when making the decision to use compression; choose wisely. Space is relatively inexpensive when compared to end-user satisfaction. The DBA’s idea of performance and the end users’ idea of performance use different criteria, and it’s really the end users’ idea that should take precedence.
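
To put a number on the space side of that trade-off, the segment size can be checked before and after the compression and move; a simple check against the example table:


select segment_name, blocks, bytes/1048576 mb
from user_segments
where segment_name = 'COMPTST';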

Anyone up for a swim?

September 8, 2014

Blooms

Filed under: Exadata,General,Performance — dfitzjarrell @ 09:41

In Oracle releases 10.2.0.x and later join processing can be made more efficient by the use of Bloom filters, primarily to reduce traffic between parallel query slaves. What is a Bloom filter? Named after Burton Howard Bloom, who came up with the concept in 1970, it’s an efficient data structure used to quickly determine if an element has a high probability of being a member of a given set. It’s based on a bit array that allows for rapid searches and returns one of two results: either the element is probably in the set (which can produce false positives) or the element is definitely not in the set. The filter cannot produce false negatives, and the incidence of false positives is relatively rare. Another advantage of Bloom filters is their small size relative to other data structures used for similar purposes (self-balancing binary search trees, hash tables, or linked lists). The possibility of false positives necessitates the addition of another filter to eliminate them from the results, yet such a filter doesn’t add appreciably to the processing time and, therefore, goes relatively unnoticed.
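
To make the mechanism a bit more concrete, here is a toy, single-session Bloom filter written in PL/SQL. It is only a sketch of the concept (the array size, the two ‘hash functions’ built on dbms_utility.get_hash_value and the test values are all arbitrary) and bears no relation to Oracle’s internal implementation:


set serveroutput on

declare
	type bit_tab is table of boolean index by pls_integer;
	bits	bit_tab;
	filt_sz	constant pls_integer := 1024;

	-- set the two bits chosen by two different hash functions
	procedure add_val(p_val in varchar2) is
	begin
		bits(dbms_utility.get_hash_value(p_val, 1, filt_sz)) := true;
		bits(dbms_utility.get_hash_value(p_val, 7, filt_sz)) := true;
	end;

	-- TRUE means 'probably in the set' (false positives are possible),
	-- FALSE means 'definitely not in the set' (no false negatives)
	function probably_present(p_val in varchar2) return boolean is
	begin
		return bits.exists(dbms_utility.get_hash_value(p_val, 1, filt_sz))
		   and bits.exists(dbms_utility.get_hash_value(p_val, 7, filt_sz));
	end;
begin
	add_val('PROCUREMENT');
	dbms_output.put_line('PROCUREMENT: '||case when probably_present('PROCUREMENT') then 'probably in the set' else 'definitely not' end);
	dbms_output.put_line('SALES:       '||case when probably_present('SALES') then 'probably in the set' else 'definitely not' end);
end;
/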

When Bloom filters are used for a query on an Exadata system, the filter predicate and the storage predicate will list the SYS_OP_BLOOM_FILTER function as being called. This function includes the additional filter to eliminate any false positives that could be returned. It’s the storage predicate that provides the real power of the Bloom filter on Exadata: using a Bloom filter to pre-join the tables at the storage server level reduces the volume of data the database servers need to process and can significantly reduce the execution time of the given query.

An example of Bloom filters in action follows; a script named Bloom_fltr_ex.sql was executed to generate this output on an Exadata system; the script can be emailed on request:


SQL> --
SQL> -- Create sample tables
SQL> --
SQL> -- Create them parallel, necessary 
SQL> -- to get a Smart Scan on these tables
SQL> --
SQL> create table emp(
  2  	empid   number,
  3  	empnm   varchar2(40),
  4  	empsal  number,
  5  	empssn  varchar2(12),
  6  	constraint emp_pk primary key (empid)
  7  ) parallel 4;

Table created.

SQL>
SQL> create table emp_dept(
  2  	empid   number,
  3  	empdept number,
  4  	emploc  varchar2(60),
  5    constraint emp_dept_pk primary key(empid)
  6  ) parallel 4;

Table created.

SQL>
SQL> create table dept_info(
  2  	deptnum number,
  3  	deptnm  varchar2(25),
  4    constraint dept_info_pk primary key(deptnum)
  5  ) parallel 4;

Table created.

SQL>
SQL> --
SQL> -- Load sample tables with data
SQL> --
SQL> begin
  2  	     for i in 1..2000000 loop
  3  		     insert into emp
  4  		     values(i, 'Fnarm'||i, (mod(i, 7)+1)*1000, mod(i,10)||mod(i,10)||mod(i,10)||'-'||mod(i,10)||mod(i,10)||'-'||mod(i,10)||mod(i,10)||mod(i,10)||mod(i,10));
  5  		     insert into emp_dept
  6  		values(i, (mod(i,8)+1)*10, 'Zanzwalla'||(mod(i,8)+1)*10);
  7  		     commit;
  8  	     end loop;
  9  	     insert into dept_info
 10  	     select distinct empdept, case when empdept = 10 then 'SALES'
 11  					   when empdept = 20 then 'PROCUREMENT'
 12  					   when empdept = 30 then 'HR'
 13  					   when empdept = 40 then 'RESEARCH'
 14  					   when empdept = 50 then 'DEVELOPMENT'
 15  					   when empdept = 60 then 'EMPLOYEE RELATIONS'
 16  					   when empdept = 70 then 'FACILITIES'
 17  					   when empdept = 80 then 'FINANCE' end
 18  	     from emp_dept;
 19  
 20  end;
 21  /

PL/SQL procedure successfully completed.

SQL>
SQL> --
SQL> -- Run join query using Bloom filter
SQL> --
SQL> -- Generate execution plan to prove Bloom
SQL> -- filter usage
SQL> --
SQL> -- Also report query execution time
SQL> --
SQL> set autotrace on
SQL> set timing on
SQL>
SQL> select /*+ Bloom join 2 parallel 2 use_hash(emp emp_dept) */ e.empid, e.empnm, d.deptnm, e.empsal, count(*) emps
  2  from emp e join emp_dept ed on (ed.empid = e.empid) join dept_info d on (ed.empdept = d.deptnum)
  3  where ed.empdept = 20
  4  group by e.empid, e.empnm, d.deptnm, e.empsal;

     EMPID EMPNM                  DEPTNM                        EMPSAL       EMPS
---------- ---------------------- ------------------------- ---------- ----------
    904505 Fnarm904505            PROCUREMENT                     1000          1
    907769 Fnarm907769            PROCUREMENT                     3000          1
    909241 Fnarm909241            PROCUREMENT                     5000          1
    909505 Fnarm909505            PROCUREMENT                     3000          1
    909641 Fnarm909641            PROCUREMENT                     6000          1
    910145 Fnarm910145            PROCUREMENT                     6000          1
...
    155833 Fnarm155833            PROCUREMENT                     7000          1
    155905 Fnarm155905            PROCUREMENT                     2000          1
    151081 Fnarm151081            PROCUREMENT                     1000          1
    151145 Fnarm151145            PROCUREMENT                     2000          1

250000 rows selected.

Elapsed: 00:00:14.27

Execution Plan
----------------------------------------------------------
Plan hash value: 2313925751

------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                | Name         | Rows | Bytes | Cost (%CPU)| Time     |   TQ  |IN-OUT| PQ Distrib |
------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                         |              |  218K|    21M|  1378   (1)| 00:00:01 |       |      |            |
|   1 |  PX COORDINATOR                          |              |      |       |            |          |       |      |            |
|   2 |   PX SEND QC (RANDOM)                    | :TQ10003     |  218K|    21M|  1378   (1)| 00:00:01 | Q1,03 | P->S | QC (RAND)  |
|   3 |    HASH GROUP BY                         |              |  218K|    21M|  1378   (1)| 00:00:01 | Q1,03 | PCWP |            |
|   4 |     PX RECEIVE                           |              |  218K|    21M|  1378   (1)| 00:00:01 | Q1,03 | PCWP |            |
|   5 |      PX SEND HASH                        | :TQ10002     |  218K|    21M|  1378   (1)| 00:00:01 | Q1,02 | P->P | HASH       |
|   6 |       HASH GROUP BY                      |              |  218K|    21M|  1378   (1)| 00:00:01 | Q1,02 | PCWP |            |
|*  7 |        HASH JOIN                         |              |  218K|    21M|  1376   (1)| 00:00:01 | Q1,02 | PCWP |            |
|   8 |         PX RECEIVE                       |              |  218K|    11M|   535   (1)| 00:00:01 | Q1,02 | PCWP |            |
|   9 |          PX SEND BROADCAST               | :TQ10001     |  218K|    11M|   535   (1)| 00:00:01 | Q1,01 | P->P | BROADCAST  |
|  10 |           NESTED LOOPS                   |              |  218K|    11M|   535   (1)| 00:00:01 | Q1,01 | PCWP |            |
|  11 |            BUFFER SORT                   |              |      |       |            |          | Q1,01 | PCWC |            |
|  12 |             PX RECEIVE                   |              |      |       |            |          | Q1,01 | PCWP |            |
|  13 |              PX SEND BROADCAST           | :TQ10000     |      |       |            |          |       | S->P | BROADCAST  |
|  14 |               TABLE ACCESS BY INDEX ROWID| DEPT_INFO    |    1 |    27 |     1   (0)| 00:00:01 |       |      |            |
|* 15 |                INDEX UNIQUE SCAN         | DEPT_INFO_PK |    1 |       |     1   (0)| 00:00:01 |       |      |            |
|  16 |            PX BLOCK ITERATOR             |              |  218K|  5556K|   534   (1)| 00:00:01 | Q1,01 | PCWC |            |
|* 17 |             TABLE ACCESS STORAGE FULL    | EMP_DEPT     |  218K|  5556K|   534   (1)| 00:00:01 | Q1,01 | PCWP |            |
|  18 |         PX BLOCK ITERATOR                |              | 1657K|    75M|   839   (1)| 00:00:01 | Q1,02 | PCWC |            |
|* 19 |          TABLE ACCESS STORAGE FULL       | EMP          | 1657K|    75M|   839   (1)| 00:00:01 | Q1,02 | PCWP |            |
------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   7 - access("ED"."EMPID"="E"."EMPID")
  15 - access("D"."DEPTNUM"=20)
  17 - storage("ED"."EMPDEPT"=20)
       filter("ED"."EMPDEPT"=20)
  19 - storage(SYS_OP_BLOOM_FILTER(:BF0000,"E"."EMPID"))
       filter(SYS_OP_BLOOM_FILTER(:BF0000,"E"."EMPID"))

Note
-----
   - dynamic sampling used for this statement (level=2)


Statistics
----------------------------------------------------------
     60  recursive calls
    174  db block gets
  40753  consistent gets
  17710  physical reads
   2128  redo size
9437983  bytes sent via SQL*Net to client
 183850  bytes received via SQL*Net from client
  16668  SQL*Net roundtrips to/from client
      6  sorts (memory)
      0  sorts (disk)
 250000  rows processed

SQL>

In less than 15 seconds 250,000 rows were returned from a three-table join of over 4 million rows; the Bloom filter made a dramatic difference in how this query was processed and provided exceptional performance given the volume of data queried.

Without storage-level execution of the Bloom filter, the query execution time increased by 2.33 seconds, a 16.3 percent increase. For longer execution times this difference can be significant. It isn’t the Bloom filter that gives Exadata such power with joins; it’s the fact that Exadata can execute it not only at the database level but also at the storage level, something commodity hardware configurations can’t do.
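
A comparison of that kind can be made by disabling Smart Scan offload for the session and re-running the query; this is an assumption about how the 2.33-second figure was obtained, but it is a simple way to test the effect on your own Exadata system:


--
-- Disable storage offload for this session, re-run the query,
-- then re-enable offload
--
alter session set cell_offload_processing = false;

-- ... re-run the Bloom filter query here ...

alter session set cell_offload_processing = true;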

Of course Bloom filters are not exclusive to Exadata, as the following run of the same script on a non-Exadata system shows (only the abbreviated output section is reproduced here):


SQL> select /*+ Bloom join 2 parallel 2 use_hash(emp emp_dept) */ e.empid, e.empnm, d.deptnm, e.empsal	  --, count(*) emps
  2  from emp e join emp_dept ed on (ed.empid = e.empid) join dept_info d on (ed.empdept = d.deptnum)
  3  where ed.empdept = 20;

     EMPID EMPNM                                    DEPTNM                        EMPSAL                                                              
---------- ---------------------------------------- ------------------------- ----------                                                              
   1670281 Fnarm1670281                             PROCUREMENT                     5000                                                              
   1670289 Fnarm1670289                             PROCUREMENT                     6000                                                              
   1670297 Fnarm1670297                             PROCUREMENT                     7000                                                              
   1670305 Fnarm1670305                             PROCUREMENT                     1000                                                              
   1670313 Fnarm1670313                             PROCUREMENT                     2000                                                              
   1670321 Fnarm1670321                             PROCUREMENT                     3000                                                              
   1670329 Fnarm1670329                             PROCUREMENT                     4000                                                              
   1670337 Fnarm1670337                             PROCUREMENT                     5000                                                              
   1670345 Fnarm1670345                             PROCUREMENT                     6000                                                              
   1670353 Fnarm1670353                             PROCUREMENT                     7000                                                              
   1670361 Fnarm1670361                             PROCUREMENT                     1000                                                              
...
   1857369 Fnarm1857369                             PROCUREMENT                     4000                                                              
   1857377 Fnarm1857377                             PROCUREMENT                     5000                                                              
   1857385 Fnarm1857385                             PROCUREMENT                     6000                                                              

250000 rows selected.

Elapsed: 00:00:54.86

Execution Plan
----------------------------------------------------------                                                                                            
Plan hash value: 2643012915                                                                                                                           
                                                                                                                                                      
----------------------------------------------------------------------------------------------------------------------------------                    
| Id  | Operation                            | Name         | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |                    
----------------------------------------------------------------------------------------------------------------------------------                    
|   0 | SELECT STATEMENT                     |              |   273K|    26M|  1378   (1)| 00:00:01 |        |      |            |                    
|   1 |  PX COORDINATOR                      |              |       |       |            |          |        |      |            |                    
|   2 |   PX SEND QC (RANDOM)                | :TQ10002     |   273K|    26M|  1378   (1)| 00:00:01 |  Q1,02 | P->S | QC (RAND)  |                    
|*  3 |    HASH JOIN                         |              |   273K|    26M|  1378   (1)| 00:00:01 |  Q1,02 | PCWP |            |                    
|   4 |     PX RECEIVE                       |              |   273K|    13M|   536   (1)| 00:00:01 |  Q1,02 | PCWP |            |                    
|   5 |      PX SEND BROADCAST               | :TQ10001     |   273K|    13M|   536   (1)| 00:00:01 |  Q1,01 | P->P | BROADCAST  |                    
|   6 |       NESTED LOOPS                   |              |   273K|    13M|   536   (1)| 00:00:01 |  Q1,01 | PCWP |            |                    
|   7 |        BUFFER SORT                   |              |       |       |            |          |  Q1,01 | PCWC |            |                    
|   8 |         PX RECEIVE                   |              |       |       |            |          |  Q1,01 | PCWP |            |                    
|   9 |          PX SEND BROADCAST           | :TQ10000     |       |       |            |          |        | S->P | BROADCAST  |                    
|  10 |           TABLE ACCESS BY INDEX ROWID| DEPT_INFO    |     1 |    27 |     1   (0)| 00:00:01 |        |      |            |                    
|* 11 |            INDEX UNIQUE SCAN         | DEPT_INFO_PK |     1 |       |     1   (0)| 00:00:01 |        |      |            |                    
|  12 |        PX BLOCK ITERATOR             |              |   273K|  6947K|   535   (1)| 00:00:01 |  Q1,01 | PCWC |            |                    
|* 13 |         TABLE ACCESS FULL            | EMP_DEPT     |   273K|  6947K|   535   (1)| 00:00:01 |  Q1,01 | PCWP |            |                    
|  14 |     PX BLOCK ITERATOR                |              |  2099K|    96M|   840   (1)| 00:00:01 |  Q1,02 | PCWC |            |                    
|* 15 |      TABLE ACCESS FULL               | EMP          |  2099K|    96M|   840   (1)| 00:00:01 |  Q1,02 | PCWP |            |                    
----------------------------------------------------------------------------------------------------------------------------------                    
                                                                                                                                                      
Predicate Information (identified by operation id):                                                                                                   
---------------------------------------------------                                                                                                   
                                                                                                                                                      
   3 - access("ED"."EMPID"="E"."EMPID")                                                                                                               
  11 - access("D"."DEPTNUM"=20)                                                                                                                       
  13 - filter("ED"."EMPDEPT"=20)                                                                                                                      
  15 - filter(SYS_OP_BLOOM_FILTER(:BF0000,"E"."EMPID"))
                                                                                                                                                      
Note                                                                                                                                                  
-----                                                                                                                                                 
   - dynamic sampling used for this statement (level=2)                                                                                               


Statistics
----------------------------------------------------------                                                                                            
         33  recursive calls                                                                                                                          
        139  db block gets                                                                                                                            
      36224  consistent gets                                                                                                                          
      17657  physical reads                                                                                                                           
          0  redo size                                                                                                                                
    9526012  bytes sent via SQL*Net to client                                                                                                         
     183846  bytes received via SQL*Net from client                                                                                                   
      16668  SQL*Net roundtrips to/from client                                                                                                        
          5  sorts (memory)                                                                                                                           
          0  sorts (disk)                                                                                                                             
     250000  rows processed                                                                                                                           

SQL>

The only differences between the two plans involve the storage component of Exadata. Notice also the vast difference in elapsed times between Exadata and non-Exadata systems; Exadata ran the query returning 250,000 rows in just over 14 seconds and the non-Exadata system returned the same volume of data in just under 55 seconds. Both times are substantial improvements over not using a Bloom filter so any hardware running 11.2.0.2 or later will benefit from Bloom filters.

Of course there is a ‘gotcha’ with Bloom filters, and Oracle recognizes this as a bug in releases older than 12.1.0.2: hash left outer joins, hash right outer joins and hash full outer joins do not use a Bloom filter. For those interested, the unpublished bug number is 17716301. MOS document 1919508.1 reports details on this behavior, including an example. As reported in that document the patch can be backported to earlier releases when an SR is filed requesting it.

Bloom filters are a welcome addition to both Exadata and non-Exadata installations as they can improve query performance with parallel queries, as evidenced here. Now that you know how to spot when they are used you may find they are utilized more often than you expected.

It’s a bloomin’ miracle.

July 22, 2014

That’s … Huge!

Filed under: Exadata,General,Performance — dfitzjarrell @ 14:27

Recently I’ve noticed the occasional thread in Oracle newsgroups and lists asking about hugepages support in Linux, including ‘best practices’ for hugepages configuration. This information is out on that ‘world-wide web’ in various places; I’d rather put a lot of that information in this post to provide an easier way to get to it. I’ll cover what hugepages are, what they do, what they can’t do and how best to allocate them for your particular installation. Let’s get started.

“Normal” memory pages in Linux are 4 KB in size and are allocated as needed where the memory map will allow, so they are likely not contiguous. Hugepages, in comparison, are 2 MB pages locked in memory and allocated in a contiguous ‘chunk’; these are allocated at boot time using a parameter in the /etc/sysctl.conf file named vm.nr_hugepages for RedHat Linux kernel 2.6 and a parameter named vm.hugetlb_pool for RedHat Linux kernel 2.4. You must remember that each page is 2 MB in size, as this affects how many hugepages you need to allocate to cover all of the SGAs of running Oracle databases. Set this too high and your system might not boot; set this too low and you won’t get the benefit of using hugepages in one or more Oracle instances. [When use_large_pages is set to ONLY and insufficient hugepages are available the database won’t start; changing use_large_pages to TRUE allows the database to start but it won’t be using hugepages, which results in more paging activity, lowering performance.] Since hugepages are contiguous it stands to reason that performance will improve since Oracle won’t need to access memory segments across the entire range of installed memory. Hugepages also reduce the overall size of the ‘page tables’ needed for each connection to the database. Page tables are part of the Linux memory management, created and used to map a virtual address to a physical memory location. For SGAs of 100 MB and greater using hugepages is probably a good idea, and that idea gets better as the SGA size increases. Kevin Closson reported back in 2009 on the page table overhead for an Oracle database with an 8000 MB SGA running 500 simultaneous connections. The page table overhead reported was startling: almost 6.8 GB of memory used for page tables when hugepages were not configured. He ran the same test again on the same Oracle database, only that test was run after hugepages were allocated and used by the Oracle database. The page table overhead was reduced to slightly less than 23 MB. That is a dramatic change in memory usage, one that can definitely improve performance.

Hugepages are available for IPC (Inter-Process Communication) shared memory; this is the ‘standard’ shared memory model (starting with UNIX System V) allowing multiple processes to access the same shared memory segment. There is also another form of shared memory segment, the memory-mapped file, and currently such segments are not supported by hugepages. Oracle, on Linux, gives you a choice of using hugepages or memory-mapped files and you implement that choice by selecting to use (or not use) Automatic Memory Management (AMM). Choosing AMM disallows hugepages since AMM uses memory-mapped files located in /dev/shm; you cannot use AMM and hugepages together. For databases with SGAs up to 100 MB using AMM is probably not a good idea; a script to determine how many hugepages to allocate won’t consider SGAs of less than 100 MB. For Oracle databases with larger SGAs hugepages is, in my opinion, the choice to make as you can significantly reduce shared memory overhead where page tables are concerned.
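
A quick way to see whether AMM is in play (and therefore whether hugepages can be used at all) is to check the relevant memory parameters; a minimal sketch:


select name, display_value
from v$parameter
where name in ('memory_target', 'memory_max_target',
               'sga_target', 'sga_max_size', 'use_large_pages')
order by name;


If memory_target and memory_max_target report 0 the instance is not using AMM and hugepages can be configured.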

Calculating how many hugepages to allocate has been a difficult task in the past; Oracle supplies a script that can calculate that value for you, provided all of the databases you want using hugepages are running; the script won’t consider databases that aren’t up and available, as it uses the ipcs -m command to return the allocated shared memory segments. The script is available from My Oracle Support in Doc ID 401749.1. It generates a list of IPC shared memory segment sizes owned by Oracle and computes the total number of hugepages required, outputting the result to be used in setting the vm.nr_hugepages parameter in /etc/sysctl.conf. Once that setting is modified the system has to be rebooted before the change takes effect. Sample output from that script is shown below:


This script is provided by Doc ID 401749.1 from My Oracle Support
(http://support.oracle.com) where it is intended to compute values for
the recommended HugePages/HugeTLB configuration for the current shared
memory segments. Before proceeding with the execution please note following:
 * For ASM instance, it needs to configure ASMM instead of AMM.
 * The 'pga_aggregate_target' is outside the SGA and
   you should accommodate this while calculating SGA size.
 * In case you changes the DB SGA size,
   as the new SGA will not fit in the previous HugePages configuration,
   it had better disable the whole HugePages,
   start the DB with new SGA size and run the script again.
And make sure that:
 * Oracle Database instance(s) are up and running
 * Oracle Database 11g Automatic Memory Management (AMM) is not setup
   (See Doc ID 749851.1)
 * The shared memory segments can be listed by command:
     # ipcs -m


Press Enter to proceed...

Recommended setting: vm.nr_hugepages = 25240

For Linux RedHat kernel 2.6 this is the number of hugepages, not the overall size of the memory allocated for those hugepages. For RedHat kernel 2.4 a different parameter, vm.hugetlb_pool, is set to the actual size that pool needs to be. Fortunately the script knows which kernel is running and reports the correct parameter and value to set.
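
For a rough idea of what the script computes, the SGA of a single running instance can be divided by the 2 MB page size; this is only an approximation of the script’s logic (the script works from ipcs -m across all running instances and its output is the value you should actually use):


--
-- Approximate hugepages needed for this instance's SGA only
--
select ceil(sum(value) / (2 * 1024 * 1024)) estimated_hugepages
from v$sga;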

After you have hugepages allocated you need to tell Oracle to use them exclusively. An init parameter, use_large_pages, should be set to ONLY to keep Oracle from allocating conventionally managed memory segments for the SGA. If your hugepages calculations are correct then all of the databases on your database server will start without error. Allocate too few hugepages and databases with use_large_pages set to ONLY won’t start after all of the existing hugepages are used. Setting use_large_pages to TRUE will allow a database to start without using hugepages but that’s not what you want to get better performance from Oracle.
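
use_large_pages is a static parameter, so a minimal sketch of setting it looks like this; the instance must be restarted for the change to take effect, and the sid clause shown covers all instances when an spfile is shared across a RAC cluster:


alter system set use_large_pages = 'ONLY' scope = spfile sid = '*';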

Earlier I mentioned that allocating too many hugepages can be worse than allocating too few. Since each hugepage consumes 2 MB of space, multiplying the vm.nr_hugepages setting by 2097152 shows how much memory is occupied by hugepages; the 25240 pages recommended in the sample output above, for example, would lock roughly 49 GB of memory. It is possible to try to allocate all of the system memory as hugepages, resulting in a system that will not boot in multi-user mode. The kernel itself isn’t configured to use hugepages, so the more hugepages that are allocated the less conventionally managed memory is available; take it all away and there is no place for the operating system to start necessary processes after the kernel is loaded at boot time. Using single-user mode will allow the system administrator to modify the offending entry and change it to a more reasonable value so the system can be rebooted. Of course you don’t want to end up in that situation; manually calculating such a value can result in errors, so use the Oracle-supplied script to calculate the hugepages value you actually need.

Exadata systems run best when hugepages are allocated and used exclusively. The exachk script checks for such settings in each running database on the system and reports those that do not have use_large_pages set to ONLY. It also checks if the currently configured hugepages setting meets or exceeds the estimated number of hugepages required by all running instances on the given database (compute) node. If this check fails it most likely will fail for all available nodes on the system. Failing this check isn’t the end of the world but it does mean that the failing databases aren’t using memory as efficiently as they could. For each node run the script, make the reported changes, then reboot. Before starting any of the databases ensure that all have use_large_pages set to ONLY; start the databases across the cluster using srvctl. The next run of exachk should report that the hugepages check passed.

RAC databases need to be configured the same way Exadata databases are configured (since Exadata databases are RAC databases). Modify kernel parameters on every database server in the cluster, then reboot. Modify the spfile for each database in the cluster then use srvctl to start those databases cluster-wide. Presuming the hugepages settings are correct across the cluster there should be no error when starting the databases.

Using hugepages for Oracle databases on Linux is, to me, the only way to go if you want to increase database performance. Of course, if you have loads of memory to spare you may not see the need. Most systems I’ve worked on don’t have that luxury, so being as ‘frugal’ with memory as you can is good. Making the most of available resources is key, and hugepages on Linux can definitely help with respect to Oracle.

Size does matter.

July 1, 2014

What A Performance!

Filed under: General,Performance,stats — dfitzjarrell @ 10:09

Performance is one of the big watchwords in the IT industry; nowhere is it more often used than in the database arena, and it’s usually not a good sign. End users know nothing, really, of the inner workings of the database yet they are the first to point the ‘finger of blame’ at the database when processing slows or queries take longer than they would like to return data. [For end users, performance is time, and the longer a process takes the more dismal performance seems to be. Given that perspective their complaints seem justified.] Yes, there ARE times when the database is at fault: ‘missing’ indexes or stale statistics can cause execution plans to change, as can inserts and updates to heavily transacted tables (which often goes back to stale statistics, since they no longer accurately reflect the data and its distribution in the table). Bugs in the database software can also cause issues by causing the gathered statistics to miscalculate values or generate ‘useless’ frequency histograms, such as on single-valued NOT NULL columns. Such histograms can direct the optimizer to choose a path it wouldn’t normally choose were the statistics and histograms to contain useful data.
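
A quick check for the kind of ‘useless’ frequency histogram mentioned above (a histogram on a single-valued NOT NULL column) can be run against the data dictionary; a minimal sketch, assuming access to the DBA views:


select owner, table_name, column_name
from dba_tab_col_statistics
where histogram = 'FREQUENCY'
and num_distinct = 1
and num_nulls = 0;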

Here is where the problem can get complicated: it may not be the database at fault when performance is down. Yes, it’s the database server you’re getting data from. Yes, things were running just fine until today. By ‘logical’ deduction it’s the database at fault, right? Not necessarily. The database also has to deal with the operating system, the storage system, the available memory and the network to get its work done. Problems in any of those areas are likely to adversely affect database performance. So it’s not necessarily the database causing the problem. How do you know where the problem originates? Oracle does provide some tools to help in assessing where a problem is likely to be, but you may also need to involve System Administrators, Network Administrators and/or Storage Administrators to fully investigate some issues. Let’s take a look at the Oracle supplied tools, along with some operating system utilities that can be used to diagnose performance issues.

At the first sign of trouble many look to the DBA to investigate the database for bottlenecks or other performance-robbing possibilities that surely must exist. Oracle provides at least two tools in releases 10.1 and later to investigate the database for performance problems: the AWR report, a general overall ‘picture’ of the database ‘health’ that can give direction to such a quest, and the ASH report, a more detailed ‘picture’ for a given window of activity that provides more detailed wait information. These utilities should be used at the first report of performance issues to determine if the problem is database-related or caused by another area, such as the operating system or network.

Starting with the AWR (Automatic Workload Repository) report, the two sections with which to begin an investigation, in my estimation, are Wait Events Statistics and SQL Statistics. The first provides a system-wide view of the waits encountered and the times those waits consumed. The Background Wait Events and Wait Event Histogram sections can provide some insight into what was occurring during the report window; waits to watch are the I/O related waits (reads, writes) which can signal issues with a query plan or issues with the storage (issues not addressable by the DBA). The SQL Statistics section lists SQL statements ordered by several criteria, including overall elapsed time, consumed CPU time, I/O wait time and physical reads. The SQL statistics can usually point to ‘bad’ plans, making performance problems a database issue when such plans are the result of bad or stale statistics. In 11.2 and later releases dynamic sampling can mitigate missing statistics by automatically sampling the data prior to generating a query execution plan.

We won’t get into the details of using these reports to effect database tuning procedures as that’s a topic for another article. The intent here is to provide a means to determine IF the problem is a database issue or if it’s caused by outside ‘forces’. For releases earlier than 10.1 Oracle provides Statspack, the precursor to AWR and the next step after the original database snapshot utilities utlbstat and utlestat. This utility can provide a very good snapshot of database activity, although not quite as robust as the AWR report, and it is still available in 11.2 if you would like to install it. AWR and ASH are utilities which are part of the Diagnostic and Tuning Pack for OEM, and thus need to be licensed if they are to be used; Statspack doesn’t require such a license.
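
For reference, the reports discussed here are generated from SQL*Plus with scripts shipped under $ORACLE_HOME/rdbms/admin; the AWR and ASH scripts prompt for the snapshot range or time window, and the Diagnostic Pack licensing mentioned above applies:


@?/rdbms/admin/awrrpt.sql        -- AWR report
@?/rdbms/admin/ashrpt.sql        -- ASH report
@?/rdbms/admin/spreport.sql      -- Statspack report (if Statspack is installed)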

The “Top 5 Timed Foreground Events” section gives an overview of the wait events which consumed the most time during the report window:

Top 5 Timed Foreground Events

Event                      Waits  Time(s)  Avg wait (ms)  % DB time  Wait Class
DB CPU                                  0                     82.42
log file sync                  8        0             11      30.72  Commit
Disk file operations I/O       1        0              1       0.33  User I/O
os thread startup              0        0                      0.00  Concurrency
log file single write          0        0                      0.00  System I/O

Host CPU (CPUs: 4 Cores: 2 Sockets: 1)

This provides a quick look at the ‘worst offenders’ in the wait event category (although they may not be performance-robbing events). Start here then work down to the Operating System Statistics section:

Operating System Statistics – Detail

Snap Time        Load  %busy  %user  %sys  %idle  %iowait
20-Jun 07:04:42  0.00
20-Jun 08:00:45  0.00   7.12   4.65  2.48  92.88     0.00

The Wait Event Histogram Detail can be very informative:

Wait Event Histogram Detail (64 msec to 2 sec)

  • Units for Total Waits column: K is 1000, M is 1000000, G is 1000000000
  • Units for % of Total Waits: ms is milliseconds s is 1024 milliseconds (approximately 1 second)
  • % of Total Waits: total waits for all wait classes, including Idle
  • % of Total Waits: value of .0 indicates value was <.05%; value of null is truly 0
  • Ordered by Event (only non-idle events are displayed)
% of Total Waits
Event Waits 64ms to 2s <32ms <64ms <1/8s <1/4s <1/2s <1s <2s >=2s
ADR block file read 3 81.3 12.5 6.3
Disk file operations I/O 1 98.2 1.8
control file parallel write 15 98.7 .8 .3 .1
control file sequential read 139 94.0 4.7 .8 .3 .1 .0
db file parallel read 2 50.0 50.0
db file parallel write 185 91.4 5.7 1.8 .8 .1 .2
db file scattered read 63 93.5 5.0 1.2 .3
db file sequential read 142 94.2 4.5 1.2 .1 .0
direct path read 1 100.0
enq: CR – block range reuse ckpt 16 82.2 15.6 1.1 1.1
log file parallel write 82 92.5 4.6 2.6 .3 .1
log file switch completion 2 50.0 50.0
log file sync 1 90.0 10.0

Wait Event Histogram Detail (4 sec to 2 min)

No data exists for this section of the report.

Wait Event Histogram Detail (4 min to 1 hr)

No data exists for this section of the report.

Service Statistics

  • ordered by DB Time
Service Name     DB Time (s)  DB CPU (s)  Physical Reads (K)  Logical Reads (K)
SYS$USERS                  0           0                   0                  2
SYS$BACKGROUND             0           0                   8                436
smedley                    0           0                   0                  0
smedleyXDB                 0           0                   0                  0

Such histograms can show where the preponderance of waits lie, making it easier to determine if they are problematic.

Next on the list is the SQL Statistics report. There are several sections; the one shown here is ordered by elapsed time:

SQL ordered by Elapsed Time

  • Resources reported for PL/SQL code includes the resources used by all SQL statements called by the code.
  • % Total DB Time is the Elapsed Time of the SQL statement divided into the Total Database Time multiplied by 100
  • %Total – Elapsed Time as a percentage of Total DB time
  • %CPU – CPU Time as a percentage of Elapsed Time
  • %IO – User I/O Time as a percentage of Elapsed Time
  • Captured SQL account for 1.1E+04% of Total DB Time (s): 0
  • Captured PL/SQL account for 2.3E+03% of Total DB Time (s): 0
Elapsed Time (s) Executions Elapsed Time per Exec (s) %Total %CPU %IO SQL Id SQL Module SQL Text
5.88 1 5.88 2070.31 15.66 79.60 acc988uzvjmmt delete from WRH$_SYSMETRIC_HIS…
4.60 22 0.21 1618.80 27.16 64.39 70vs1d7ywk5m0 MMON_SLAVE begin dbms_stats.copy_table_st…
2.49 44 0.06 876.07 10.66 83.39 2mp99nzd9u1qp delete from histgrm$ where obj…
2.14 769 0.00 753.49 4.38 96.01 db78fxqxwxt7r select /*+ rule */ bucket, end…
1.57 212 0.01 551.64 21.91 74.49 43ruajmwv9fus insert into histgrm$(obj#, int…
1.42 1 1.42 501.54 7.67 86.00 0sfsrf0qzvsjv delete from WRH$_SYSMETRIC_SUM…
1.19 22 0.05 418.37 2.63 93.41 d89c1mh5pvbkz MMON_SLAVE SELECT /*+ rule */ U.NAME, OT….
0.83 1 0.83 291.84 13.18 77.73 2whm2vvjb98k7 delete from WRH$_SQL_PLAN tab …
0.80 22 0.04 281.90 25.34 67.82 8sd37ztuwkyyc MMON_SLAVE begin dbms_swrf_internal.massa…
0.66 1 0.66 231.35 2.38 98.81 28bm5y53nujvz delete from WRH$_RESOURCE_LIMI…
0.54 1 0.54 188.64 0.00 91.89 4y3yvmbav7uwb delete /*+ UNNEST(@SEL$2) UNNE…
0.53 22 0.02 185.34 0.00 96.67 63kf3an2j0pkc insert into sys.wri$_optstat_h…
0.51 1 0.51 179.21 9.20 75.31 96w2cbx7ntmtv delete from WRH$_BG_EVENT_SUMM…
0.50 1 0.50 175.27 6.27 94.54 31sdu97k3baq1 delete from WRH$_SHARED_POOL_A…
0.49 22 0.02 171.58 12.81 66.96 480utgnq0t6mf MMON_SLAVE begin dbms_stats.unlock_partit…
0.44 1 0.44 156.14 17.60 80.45 1h7zt6jks80pa delete from WRH$_ENQUEUE_STAT …
0.42 1 0.42 146.27 15.03 94.38 bzscyq07w79ab delete /*+ dynamic_sampling(4)…
0.38 2,361 0.00 133.20 24.75 80.65 96g93hntrzjtr select /*+ rule */ bucket_cnt,…
0.35 1 0.35 123.71 4.44 94.32 9v9n97qj8z1dg delete /*+ dynamic_sampling(4)…
0.35 22 0.02 122.04 27.02 66.45 2rn08n06qr261 update histgrm$ set obj# = dec…
0.34 1 0.34 121.06 0.00 96.73 6ajkhukk78nsr begin prvt_hdm.auto_execute( :…
0.31 1 0.31 107.67 0.00 97.51 48ffykkmk7sgc delete /*+ dynamic_sampling(4)…
0.28 1 0.28 99.65 0.00 79.94 350myuyx0t1d6 insert into wrh$_tablespace_st…
0.27 375 0.00 94.36 17.47 87.22 bvn085xhxfn9f insert into sys.wri$_optstat_h…
0.26 161 0.00 90.72 6.06 90.14 74anujtt8zw4h select o.owner#, o.name, o.nam…
0.25 1 0.25 86.79 6.33 95.67 47x8f8z2hx1a2 delete from WRH$_MUTEX_SLEEP t…
0.24 1 0.24 85.22 6.45 92.64 az8cj7dfnpp45 delete from WRH$_MEM_DYNAMIC_C…
0.23 1 0.23 81.60 6.73 41.10 3mqvkt9as1phq merge /*+ dynamic_sampling(mm…
0.22 1 0.22 76.17 0.00 87.96 du2bqfpfj9au2 delete from WRH$_LIBRARYCACHE …
0.21 1 0.21 73.67 14.92 93.36 b7ba0zt64wtc7 delete from WRH$_RSRC_CONSUMER…
0.20 44 0.00 71.59 7.68 71.51 as3uq6ggb3gx6 delete from hist_head$ where o…
0.20 89 0.00 71.52 30.73 92.39 3ktacv9r56b51 select owner#, name, namespace…
0.20 10 0.02 71.14 0.00 99.04 c6awqs517jpj0 select /*+ index(idl_char$ i_i…
0.20 1,098 0.00 70.92 23.24 58.47 3c1kubcdjnppq update sys.col_usage$ set equa…
0.19 22 0.01 68.12 56.47 8.79 9dy457uf4uxzd MMON_SLAVE SELECT /*+ all_rows */ ‘”‘||UI…
0.19 1 0.19 67.57 0.00 0.00 f318xdxdn0pdc insert into wrh$_log (snap_id,…
0.19 113 0.00 67.01 0.00 84.34 59vjj34vugaav delete from obj$ where obj# = …
0.18 10 0.02 64.60 0.00 96.35 cvn54b7yz0s8u select /*+ index(idl_ub1$ i_id…
0.18 375 0.00 63.34 43.38 53.87 95mpkn5xz9001 insert into hist_head$(obj#, i…
0.18 22 0.01 62.59 0.00 71.91 dr277b6yv83uy MMON_SLAVE SELECT COLUMN_NAME FROM DBA_PA…
0.18 671 0.00 62.17 17.68 0.00 5ms6rbzdnq16t select job, nvl2(last_date, 1,…
0.18 1 0.18 61.81 8.89 96.19 7f8bfqbjxdssr delete from WRH$_LOG tab where…
0.17 1 0.17 61.43 17.89 95.77 7g732rx16j8jc insert into WRH$_SERVICE_STAT …
0.16 48 0.00 56.37 48.74 48.27 4h8danbxj5nz6 begin dbms_utility.validate(:b…
0.16 1 0.16 54.88 80.11 0.00 dayq182sk41ks insert into wrh$_memory_target…
0.15 252 0.00 54.58 30.21 39.65 7ng34ruy5awxq select i.obj#, i.ts#, i.file#,…
0.15 1 0.15 54.51 80.65 0.00 bm2pwrpcr8ru6 select sga_size s, sga_size_fa…
0.15 1 0.15 51.42 106.87 0.00 bunssq950snhf insert into wrh$_sga_target_ad…
0.14 1 0.14 49.59 0.00 93.41 084z6qq3wwkdc delete from WRH$_SQL_WORKAREA_…
0.14 1 0.14 48.68 0.00 90.79 17g3cqghntwb6 delete from WRH$_PGA_TARGET_AD…
0.13 22 0.01 44.60 0.00 35.38 7kc5p5bw4uz55 update tabpart$ set dataobj# =…
0.12 1 0.12 41.02 26.79 91.44 83hpfc7nhjpvg delete from WRH$_IOSTAT_FILETY…
0.12 22 0.01 40.72 121.45 16.37 267s83vd49cfa MMON_SLAVE SELECT PARTITION_NAME FROM DBA…
0.11 66 0.00 40.29 13.64 87.09 9fg7fjbwmv7v8 insert into sys.wri$_optstat_t…
0.11 1 0.11 39.34 13.97 65.25 9xcfxn794wp5h select 1 from WRH$_LATCH_CHILD…
0.11 1 0.11 37.29 0.00 98.77 3hy3nssu9shxj select /*+ FIRST_ROWS(1) PARAL…
0.10 6 0.02 36.11 0.00 63.61 9wncfacx0nj9h insert into smon_scn_time (thr…
0.10 2 0.05 35.13 0.00 1.98 934ur8r7tqbjx SELECT DBID FROM V$DATABASE
0.10 1 0.10 33.93 0.00 74.29 1v44r7vam2wbt delete from WRH$_IOSTAT_FUNCTI…
0.10 1 0.10 33.68 0.00 96.85 cs78htw2dhb0x delete from WRH$_IC_DEVICE_STA…
0.09 66 0.00 33.17 0.00 92.58 gc7b0drtzbyc6 select max(intcol#) from hist_…
0.09 89 0.00 33.07 0.00 92.64 8swypbbr0m372 select order#, columns, types …
0.09 1 0.09 32.97 16.66 86.94 8h77nkt8bnukh delete from WRH$_IOSTAT_DETAIL…
0.09 1 0.09 31.42 17.49 83.52 7gfhbhxrxx8wf delete /*+ dynamic_sampling(4)…

This section can help determine which queries may be creating performance issues; using the sql_id and text you can retrieve the plans to see if a rewrite may improve the situation. It can also show where statistics may not be current, although it would also be a good idea to run a 10053 trace against each suspect query to report what statistics the optimizer is using.
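
Since the statements above were captured by AWR, their execution plans can be pulled from the repository with DBMS_XPLAN; the sql_id below is taken from the listing above, and, as with any AWR access, this sketch assumes you are licensed for the Diagnostic and Tuning packs:


SQL> select * from table(dbms_xplan.display_awr('acc988uzvjmmt'));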

The ASH, or Active Session History, report digs deeper into important wait events and timings, providing a better picture of the wait activity during the requested window; a section of this report is shown below:

SQL ID Planhash Sampled # of Executions % Activity Event % Event Top Row Source % RwSrc SQL Text
572fbaj0fdw2b 1648993636 1 7.69 CPU + Wait for CPU 5.77 SELECT STATEMENT 5.77 select output from table(dbms_…
572fbaj0fdw2b 1648993636 1 7.69 db file sequential read 1.92 ** Row Source Not Available ** 1.92
6ajkhukk78nsr 1 3.85 CPU + Wait for CPU 3.85 ** Row Source Not Available ** 3.85 begin prvt_hdm.auto_execute( :…
1bx8mgs8by25x 2223429142 1 1.92 CPU + Wait for CPU 1.92 HASH JOIN 1.92 select coalesce( p1s.parameter…
2asgk01xtb6p0 3702291220 1 1.92 control file sequential read 1.92 FIXED TABLE – FULL 1.92 select d.dbid dbid , d.name db…
317v5hnvvd49h 1656119837 1 1.92 CPU + Wait for CPU 1.92 UPDATE 1.92 UPDATE wrh$_seg_stat_obj ob S…

Again, use this report to initially determine if the problem is database-related; if that is the case then this report should provide a wealth of data aimed at directing your database/query tuning efforts. If there is no indication of database problems then other areas are likely at fault. Waits to examine from both reports include SQL*Net more data to client, Disk file operations I/O and os thread startup. The first wait in that list can be an indicator of network problems, the second can point to storage issues (array problems, pending disk failures), and the third to CPU issues (excessive time for O/S thread startup can indicate over-taxed CPUs, which may be caused by other, non-database processes).
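
A rough way to see the average time per wait for those events since instance startup is to query V$SYSTEM_EVENT; this is only a sketch and the event list can be adjusted to suit:


select event,
       total_waits,
       round(time_waited_micro/greatest(total_waits,1)/1000, 2) avg_wait_ms   -- average milliseconds per wait
from v$system_event
where event in ('SQL*Net more data to client', 'Disk file operations I/O', 'os thread startup')
order by avg_wait_ms desc;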

If, after examining the reports generated by AWR, ASH or Statspack, you find the database is not the source of the performance issue you may be able to discern which area outside of the database is the likely suspect. SQL*Net more data to client waits, if the time per wait is relatively high, can indicate problems in the network layer. Utilities such as netstat can show configured network interfaces and network traffic data on those interfaces. Example output is shown below:

$ netstat -ginw
Kernel Interface table
Iface       MTU   Met      RX-OK RX-ERR RX-DRP RX-OVR      TX-OK TX-ERR TX-DRP TX-OVR Flg
bondeth0     1500   0 5084620269      0      0      0 3923352780      0      0      0 BMmRU
bondeth0:1   1500   0      - no statistics available -                                BMmRU
bondeth0:3   1500   0      - no statistics available -                                BMmRU
bondeth0:4   1500   0      - no statistics available -                                BMmRU
eth0         1500   0   42182099      0      0      0   21397203      0      0      0 BMRU
eth4         1500   0     305021      0      0      0     281488      0      0      0 BMsRU
eth5         1500   0 5084315248      0      0      0 3923071292      0      0      0 BMsRU
ib0         64000   0 2419856775      0      1      0 2178061942      3    271      0 BMsRU
ib1         64000   0   21305668      0      0      0          0      0      0      0 BMsRU
lo          16436   0 4495752311      0      0      0 4495752311      0      0      0 LRU
$

The utility reports transmit and receive statistics, including dropped packets and packet errors. Running netstat -ginw may show which local interface is having issues; send such output to the Network Administrator so he or she can interpret the results and take appropriate action.

What if the problem isn’t the database, and it isn’t the network? The next area that can adversely affect performance is the storage tier. Statspack and ASH/AWR reports also provide disk read and write wait information. Again, if the wait time for an individual read or write wait is high (for example, a single wait over 300 milliseconds) it may be the storage that’s at fault. As disks age their performance can deteriorate as the alignment the platter had when new is compromised by wear and tear around the spindle. Firmware can also be an issue, especially when disks are upgraded and the firmware is not. This is not an area for the DBA to assess, but it is one he or she can report to the Storage Administrator for further investigation.
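
To see how many individual read waits are landing in the longer buckets you can query V$EVENT_HISTOGRAM; the sketch below is a starting point, and counts in the 256 millisecond and higher buckets are worth passing along to the Storage Administrator:


select event,
       wait_time_milli,   -- upper bound of the bucket, in milliseconds
       wait_count         -- number of waits that fell into that bucket
from v$event_histogram
where event in ('db file sequential read', 'db file scattered read', 'direct path read')
order by event, wait_time_milli;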

Network-related issues may also be a problem with either the local network interface card (NIC) or the cabling. This is, again, a task for the Network Administrator to assess and address, but you may be able to ‘point’ him or her in the right direction with output from the ping utility. Provided you have two or more systems that can ‘see’ the same endpoint a ping comparison may show network problems local to the database server. For example if server A can ping server C with response times less than 20 milliseconds and server B, using the same subnet, reports ping times in the 100 millisecond or greater range this could indicate either cabling issues or a NIC that may be misconfigured or failing. This isn’t likely to be valid if the two servers are on different subnets which is why it’s necessary to get the Network Administrator involved as soon as you suspect a network issue. He or she can provide needed configuration data and can better analyze and diagnose such problems.
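
A simple comparison might look like the following, run from both servers against the same endpoint (the host name here is purely hypothetical); compare the rtt min/avg/max summary that ping prints on each server and pass any large differences along to the Network Administrator:


$ ping -c 10 appserver01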

And yet another area where performance can suffer is at the operating system level. A system starved for CPU, memory or both because of a temporarily high load can severely slow down database processes along with many other non-database-related services. The System Administrator may need to be called in on the issue to report what processes were running during a particular period when response time was slow. Remember that the end users equate performance with response time; a large, resource-intensive report that runs once a month can seriously slow down other database activity during that same period. System backups can also consume resources normally available for user and database processes. Knowing the period of poor performance can greatly assist the System Administrator investigating the issue. If it’s a defined window, for example from 8 PM to midnight on Tuesdays, the System Administrator can track down the process causing the slowdown by monitoring the system activity during that period. It may be a process that can’t be stopped or rescheduled, but at least the end users will know what is causing their normally fast transactions to run much slower than usual. As mentioned before, when you know it’s not a database issue get the Administrators of the suspected systems involved, because the earlier you get more eyes on the problem the faster the cause can be found and a solution put in place.

The DBA is the ‘go-to’ person when the database is to blame for performance issues (long run times for specific queries, as an example). But, when the DBA finds nothing awry in the database, other parties may need to get involved. Again, just because it seems to be the database at fault doesn’t mean that the database IS at fault; it could be other, related areas affecting performance. Knowing which database reports can be of use is the first step in being able to assess a situation so other areas, such as network and storage, can be brought in if need be. Knowing who to involve is just as important as knowing that a performance issue exists. Communication is key in these situations; a network or storage administrator may ask for additional information regarding the problem, and this is where the AWR and/or Statspack reports may be helpful. Learn how to generate these reports now (if you don’t already know) so that when you need them you can concentrate on the issues at hand rather than on how to generate a report.

Performance is a term that can have a broad interpretation, depending upon who is doing the interpreting. End users may more often report performance issues when a report that ran fine yesterday now takes much longer to complete because all they may see is their process running slowly. Knowing when the problem is the database and when it isn’t is key to getting such issues diagnosed and resolved. Hopefully this article has given you some tools to help in that regard; use them wisely and communicate well so that you can be part of the solution by knowing who else to involve when it isn’t the database that is at fault.

Now, on with the show!

June 6, 2014

“A Ponderous Chain”

Filed under: General,Performance — dfitzjarrell @ 11:59

Chained and migrated rows in an Oracle database can seriously impact I/O performance, especially with Exadata. In a ‘normal’ Oracle database [read that as a “non-Exadata, non-ASM” database] chained rows in small numbers, usually meaning 1% or less of the total rows in a table, are generally just a nuisance. The performance hit they generate in such a situation is small and may not even be noticed by the end users. Add more chained/migrated rows to that mix and the increase in I/O work starts becoming noticeable. How do these rows get created, what problems do they cause and how can you mitigate the issue? Let’s look at what chained/migrated rows are, what can cause them and how to deal with them.

Chained and migrated rows are a unique lot — they exist in two or more data blocks and are linked by the “head” rowid, usually the rowid of the original row. The chained piece or pieces come into being when a row is updated and the data block containing it can’t store the entire row any more. A migrated row is one that is moved entirely to a new block with a ‘pointer’ left behind associated with the original rowid. Let’s try to draw a rather simplified picture to illustrate that concept:

Original row -- Data fits into the available space in the block

       ------------------------------------------------------
000001:|Bonsai Bob|14734|23-NOV-2013|343.27|Duluth|Dept 300 |
       ------------------------------------------------------

Update is made -- Data now takes more space than block has available

       ------------------------------------------------------
000001:|Samurai Sam The Singing Elf|14734|23-NOV-2013|343.27|Duluth|Dept 300 |
       ------------------------------------------------------

Data no longer 'fits' in one block, it's divided between blocks --

       ------------------------------------------------------
000001:|Samurai Sam The Singing Elf|14734|23-NOV-2013|000017|
       ------------------------------------------------------
000017:|343.27|Duluth|Dept 300                              |
       ------------------------------------------------------

Notice there is a ‘pointer’ to the next piece of the row at the end of the original row. This occurs for every ‘split’ this row takes to get the data to ‘fit’ in the table. It’s also possible for a row to be migrated, in its entirety, to an empty block to minimize the trips Oracle has to take to retrieve the data; this occurs when the original row was small and occupied minimal space in a data block. Updates cause it to expand in length and make Oracle relocate the data to another block or blocks with more available space:

Original row -- Data occupies available space

       ------------------------------------------------------
000009:||Bonsai Bob|14734|23-NOV-2013|
       ------------------------------------------------------

Update is made -- Data no longer fits

       ------------------------------------------------------
000009:|Samurai Sam The Singing Elf|14734|23-NOV-2013|343.27|Duluth|Dept 300 |
       ------------------------------------------------------

Data is divided between blocks --

       ------------------------------------------------------
000009:||000019|
       ------------------------------------------------------
000019:|Samurai Sam The Singing Elf|14734|23-NOV-2013|000037|
       ------------------------------------------------------
000037:|343.27|Duluth|Dept 300                              |
       ------------------------------------------------------

The original rowid now has a pointer to the new location of the modified data. That migrated location may also be chained to an additional block because the length of the modified data eventually exceeds the data block size.

What does this mean for performance? It depends, really, on the overall number of rows in the table or schema that are chained or migrated. For a database not on Exadata, or one not using ASM, each chained/migrated row requires at least two fetches to return the data, which at least doubles the work Oracle has to do to process that row. For one, ten or one hundred rows this may be a very small decrease in performance. When 10% or more of the data in a table is chained/migrated, I/O processing noticeably slows down, and this can happen for tables where the row length can exceed the block size. Without intervention Oracle usually sets the block size to 8192, or 8K, bytes. After taking into consideration any ‘overhead’ introduced by Oracle the maximum available space in an 8K block is roughly 7900 to 8000 bytes. Any table row longer than that will create a chained row even for an empty block. This can be changed at database creation or by creating tablespaces using a larger block size. [Smaller block sizes can also be used but would not benefit any process to reduce row chaining.] The largest block size supported by Oracle is 32K, so if a table row exceeds that length when fully populated chained rows are guaranteed to exist. The more ‘passes’ Oracle must make to fetch a row the longer the I/O waits can be, resulting in diminished performance, so it’s best to minimize their occurrence.
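
As a sketch of the larger-block-size approach (the cache size, tablespace name and datafile specification are examples only), a buffer cache for the non-default block size must exist before such a tablespace can be created:


SQL> alter system set db_16k_cache_size = 64M scope=both;

System altered.

SQL> create tablespace data16k datafile '/u02/oradata/smedley/data16k01.dbf' size 512m blocksize 16384;

Tablespace created.

SQL>

Tables can then be relocated with ‘alter table … move tablespace data16k;’ if the larger block size is warranted.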

Exadata and ASM present a different scenario. Since a diskgroup is striped across all available disks associated with that group the pieces of a chained row can exist on different disks, and, for Exadata, different storage cells. For a ‘regular’ database using ASM performance can be slower because not only does Oracle need to visit different blocks to return the data, it may also have to visit different disks, introducing additional latency. On top of that, for an Exadata database chained rows, if not on the same disk or accessed by the same storage cell, cause Oracle to revert to regular block I/O, eliminating any chance of using a Smart Scan and any of the optimizations a Smart Scan would provide.

The question to answer is “How do I eliminate, or at least minimize, chained and migrated rows?” The first, and most effective, way is to design tables with the database block size in mind, taking care to not exceed the available space in the block when a row is fully populated and all variable-length columns contain data sized at the maximum defined length. It is also possible to influence how the datablock space is allocated by setting two parameters, PCTFREE and PCTUSED. PCTFREE reserves the declared percentage of the block for updates to existing rows; once free space in the block falls to that threshold the block is taken off of the ‘available’ list for new inserts. PCTUSED governs how far the used space must fall before the block is placed back on that list. (Note that PCTUSED is ignored in tablespaces using Automatic Segment Space Management.) If rows are being constantly updated, such as in a financial system, it might be best to set PCTUSED to a smaller value than the default (which is 40) and PCTFREE to 100-PCTUSED, to ensure more space is available for updates to existing data. I have seen at least one system where PCTUSED was set to 1 and PCTFREE set to 99, allowing 1 row per block with plenty of room for updates. In that system the existence of chained rows dropped significantly. Please note that making such a change to an existing table using ‘alter table … ‘ won’t affect the currently populated data blocks. To do that in Oracle releases 9.0.1.x and later you’ll need to perform a move operation, to rebuild the table with the new PCTUSED and PCTFREE settings:


SQL> alter table empdrop move pctfree 99 pctused 1;

Table altered.

SQL>

Unfortunately such a move can increase the table size considerably since the number of rows per block has now been reduced to a much smaller value. [If you’re still in the ‘dark ages’, using Oracle 8.1 or earlier, you’ll need to export the table, pre-create it with a script modified to set the PCTUSED and PCTFREE ‘properly’, then import the data ignoring the table create error.] This is not a task to be taken lightly, nor one for ‘canned’ scripts and ‘best practice’ values, as each implementation is different. Simply because one table benefitted from such drastic settings for PCTUSED and PCTFREE doesn’t mean EVERY table will need such a dramatic change. Base your changes on the incidence of chained rows and the maximum row length as compared to the database block size.
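
To get a starting point for that comparison you can look at the optimizer statistics; the sketch below lists tables by average row length (keep in mind AVG_ROW_LEN is only an average, so rows with variable-length columns populated to their maximum can still be far longer):


select table_name,
       avg_row_len,      -- average row length from the most recent statistics gathering
       pct_free,
       pct_used,
       num_rows
from dba_tables
where owner = upper('&owner')
order by avg_row_len desc;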

“How do I know if I have chained rows?”, you ask. Oracle keeps a system statistic, ‘table fetch continued row’, that increments each time a continued/migrated row is fetched. It doesn’t tell you WHERE those rows are; however, the ANALYZE TABLE command is still available to provide that information. It can take a while to run on a large table but by default it will populate a table named, brilliantly enough, CHAINED_ROWS, designed to contain all of the information you could need to identify which rows are chained in a table. There is no way to gather this information at the schema level (DBMS_STATS doesn’t have any procedure or function to gather that data) but you can create a script to do the deed. There is some preparation involved, namely creating the CHAINED_ROWS table, but Oracle has conveniently provided a script for that named utlchain.sql, located in $ORACLE_HOME/rdbms/admin:


rem
rem $Header: utlchain.sql 07-may-96.19:40:01 sbasu Exp $
rem
Rem Copyright (c) 1990, 1995, 1996, 1998 by Oracle Corporation
Rem NAME
REM    UTLCHAIN.SQL
Rem  FUNCTION
Rem    Creates the default table for storing the output of the
Rem    analyze list chained rows command
Rem  NOTES
Rem  MODIFIED
Rem     syeung     06/17/98  - add subpartition_name
Rem     mmonajje   05/21/96 -  Replace timestamp col name with analyze_timestam
Rem     sbasu      05/07/96 -  Remove echo setting
Rem     ssamu      08/14/95 -  merge PTI with Objects
Rem     ssamu      07/24/95 -  add field for partition name
Rem     glumpkin   10/19/92 -  Renamed from CHAINROW.SQL
Rem     ggatlin    03/09/92 -  add set echo on
Rem     rlim       04/29/91 -         change char to varchar2
Rem   Klein      01/10/91 - add owner name for chained rows
Rem   Klein      12/04/90 - Creation
Rem

create table CHAINED_ROWS (
  owner_name         varchar2(30),
  table_name         varchar2(30),
  cluster_name       varchar2(30),
  partition_name     varchar2(30),
  subpartition_name  varchar2(30),
  head_rowid         rowid,
  analyze_timestamp  date
);

Oracle also provides another version of that table in the $ORACLE_HOME/rdbms/admin/utlchn1.sql script, which uses the UROWID datatype, supporting both Oracle and non-Oracle database rowid values:


Rem
Rem $Header: utlchn1.sql 24-jun-99.07:57:57 echong Exp $
Rem
Rem utlchn1.sql
Rem
Rem  Copyright (c) Oracle Corporation 1998, 1999. All Rights Reserved.
Rem
Rem    NAME
Rem      utlchn1.sql - 
Rem
Rem    DESCRIPTION
Rem      
Rem
Rem    NOTES
Rem      
Rem
Rem    MODIFIED   (MM/DD/YY)
Rem    echong      06/24/99 - rename
Rem    syeung      06/22/98 - add subpartition_name
Rem    echong      06/05/98 - chained rows table with urowid type
Rem    echong      06/05/98 - Created
Rem

create table CHAINED_ROWS (
  owner_name         varchar2(30),
  table_name         varchar2(30),
  cluster_name       varchar2(30),
  partition_name     varchar2(30),
  subpartition_name  varchar2(30),
  head_rowid         urowid,
  analyze_timestamp  date
);

Generally speaking you probably won’t need the table using the UROWID data type but either table can be used to report chained rows in Oracle databases.

Once the CHAINED_ROWS table is created it’s a simple task to generate a script to analyze all of the tables in a given schema for chained rows:


select 'analyze table '||owner||'.'||table_name||' list chained rows;'
from dba_tables
where owner = upper('&1');

Spool the output from that query to a file, then execute it from the SQL> prompt. Remember that by default the command inserts data into the CHAINED_ROWS table. You can, if you’re feeling adventurous, create a similar table with a different name and have ANALYZE TABLE insert the chained row data into your table by telling Oracle where you want the data to go. For example, if you decide to create a table named BAD_JUJU with the same structure as the CHAINED_ROWS table you can do this:


select 'analyze table '||owner||'.'||table_name||' list chained rows into bad_juju;'
from dba_tables
where owner = upper('&1');

You can then query your BAD_JUJU table for chained row information.

If the ‘table fetch continued row’ statistic reports 0 then you have nothing to worry about and nothing to ‘fix’:


SQL> select value from v$sysstat where name = 'table fetch continued row';

     VALUE
----------
         0

SQL>

In reality that isn’t likely to happen as some of the SYS-owned tables can, and do, contain chained rows. This is the result from a small, personal database I have on my laptop:


SQL> select value from v$sysstat where name = 'table fetch continued row';

     VALUE
----------
       778

SQL>

Let’s see where those chained rows are:


SQL> select owner_name, table_name, count(*) chain_ct
  2  from bad_juju
  3  group by owner_name, table_name;

OWNER_NAME                     TABLE_NAME                       CHAIN_CT
------------------------------ ------------------------------ ----------
SYS                            SMON_SCN_TIME                           1
SYS                            TABPART$                                1
SYS                            WRI$_DBU_FEATURE_USAGE                  5
APEX_030200                    WWV_FLOW_STEPS                         10
SYS                            IDL_CHAR$                             165
SYS                            METASTYLESHEET                        113
APEX_030200                    WWV_FLOW_TEMPLATES                     18
MDSYS                          SDO_DATUMS                              2
SYS                            COLLECTION$                             2
SYS                            IDL_UB1$                             6683
APEX_030200                    WWV_FLOW_STEP_PROCESSING               57
MDSYS                          SDO_ELLIPSOIDS                          8
SYS                            HIST_HEAD$                            265
SYS                            USER$                                   1
MDSYS                          SDO_COORD_OPS                          45
SYS                            IDL_SB4$                               31
SYS                            IND$                                    4
SYS                            METHOD$                                 3
SYS                            RESULT$                                 3
SYS                            TRIGGER$                                1
APEX_030200                    WWV_FLOW_PAGE_PLUG_TEMPLATES            1
SYS                            PARAMETER$                             13
SYS                            TAB$                                    2
SYS                            VIEW$                                  67
APEX_030200                    WWV_FLOW_PAGE_PLUGS                    36
MDSYS                          SDO_COORD_OP_PARAM_VALS               116
MDSYS                          SDO_COORD_OP_PATHS                     53
SYS                            ATTRIBUTE$                             29
SYS                            IDL_UB2$                              837
SYS                            JAVA$MC$                              124

30 rows selected.

SQL>

Notice that no user tables or vendor-supplied application schemas have chained rows, only Oracle-supplied tables, so do not be confused if you analyze the tables in a given schema and find there are no chained rows to address. Is it a good idea to change PCTFREE and PCTUSED on those Oracle-supplied tables? It’s your decision to make; if recursive sql calls to these tables are consuming large amounts of time it may be beneficial to change those values. I really suspect it’s not that big of a performance hit and you’d be better off addressing chained rows, should they exist, in any application schemas present in your database.

With Exadata chained rows do more than simply increase the I/O workload, they can shift it from the storage cells to the database server by reverting to regular block I/O rather than using a Smart Scan. Since Smart Scans can’t process chained rows that span disks and/or storage cells this eliminates any use of Storage Indexes, Predicate Filtering and Column Projection. There may be chained rows that can be processed by a Smart Scan (where the row is chained on the same disk accessed by the same storage cell) but those are very likely rare occurrences. Exadata provides three statistics regarding chained rows and their processing — ‘chained rows processed by cell’, ‘chained rows rejected by cell’ and ‘chained rows skipped by cell’. These can report the number of rows in each ‘category’ from the time the system was started. Each statistic, and what it reports, is listed below:


chained rows processed by cell

This statistic, chained rows processed by cell, reports on the chained rows that were processed within a storage cell, that is, where the row pieces
(or at least the ones Oracle needed to return results) are located within the storage accessed by a given storage cell.  This includes intra-block
chaining, where the next row piece is located in the same data block as the head row.  That is a special case of row chaining for rows with more than 255
columns; the next piece is in the same block as the head and can be fetched with no additional effort.

chained rows rejected by cell

This counter records the chained rows where the next piece is not in the same block or cell, as described in the previous section.  This 
statistic is incremented in, apparently, special cases as it is not incremented very often in the systems I have dealt with.  When this is 
incremented the Smart Scan fell back to regular block I/O to process the row.

chained rows skipped by cell

This is the statistic most often incremented when a Smart Scan needs to revert back to regular block I/O.  Again it's incremented when a chained row 
is found and the remaining pieces reside across the entire storage stripe, spanning storage cells in the process.  It isn't clear when the previous 
counter or this counter should be incremented in such situations; I have found that this counter is the one most often incremented when Smart Scans 
revert to regular block I/O to process chained rows.
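
The current values of those counters can be checked from V$SYSSTAT; a quick sketch is shown below (on non-Exadata systems these statistics, where present, will simply remain at 0):


select name, value
from v$sysstat
where name like 'chained rows%by cell'   -- the three cell chained-row counters listed above
order by name;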

The same methods for non-Exadata databases can be applied to those on Exadata; since ASM stripes the diskgroup across all available disks it may not be possible to correct every chained row that exists. That does not mean the chained/migrated row situation should not be addressed, but remember that the storage volume on Exadata systems is oftentimes far larger than on conventional databases using commodity hardware, and that fact can make any process intended to reduce or eliminate chained rows take longer. If the performance hit from the chained rows is large enough, any effort to reduce that I/O workload is worth the time and trouble it may take to implement.

Chained and migrated rows can be a bother, and can introduce performance issues when dealing with large volumes of them. Sometimes it’s not possible to eliminate them because the row length exceeds even the largest available database block size. But there are ways to mitigate their creation for tables where the row length easily fits within the database block, either by altering how the block space is allocated using PCTUSED and PCTFREE, moving the table or tables to a tablespace having a larger block size or by moving the table, in place, to relocate the rows and reduce, or eliminate, the existing chaining. This isn’t a quick operation, and it certainly should not be done online especially in production databases, but the problem, in many cases, can be successfully addressed.

It’s a situation worth pondering.

January 30, 2014

You Take Too Long

Filed under: General,Performance — dfitzjarrell @ 12:09

Sometimes statements take longer than you’d like to complete. This can happen for a number of reasons including data changes, table growth, stale statistics and the occasional optimizer bug you haven’t encountered before. When it’s a SQL query that is misbehaving it’s pretty easy to find the cause and, in many cases, fix it. When it’s PL/SQL it isn’t as easy to do. There are views available from Oracle to assist in this endeavor and, provided you’re licensed, these views can make the troubleshooting process much easier.

Finding long-running statements used to be a task for the V$SESSION_LONGOPS view, but it doesn’t record every long-running operation, so some problem statements could be missed. With Oracle 10.2 and later releases AWR provides an extended window into performance statistics, and in this particular case the views you want are DBA_HIST_SQLSTAT and DBA_HIST_SQLTEXT. [The AWR views require being licensed for the Diagnostic and Tuning pack so you don’t run afoul of Oracle licensing.] The following statements can be used to generate listings of long-running statements, both historically and from current sessions; put them into a single script and you can generate troubleshooting data to assist you in your efforts:


set linesize 200 trimspool on pagesize 60 verify off
column begin_interval_time format a35
column end_interval_time format a35

break on sql_id skip 1 on instance_number

column sdate new_value sdt noprint

select to_char(sysdate, 'YYYYMMDDHHMI') sdate from dual;

spool &sdt._elapsed_time_report.log

prompt 
prompt  Historic
prompt 
prompt  Elapsed by exec
prompt 

select distinct x.instance_number, x.sql_id, x.time_per_exec, x.elapsed_time_total, s.begin_interval_time, s.end_interval_time
from
(select instance_number, sql_id, snap_id,
       round((elapsed_time_total/1000000)/(case when executions_total = 0 then 1 else executions_total end),4) time_per_exec,
       round(elapsed_time_total/1000000, 4) elapsed_time_total
from dba_hist_sqlstat) x, (select snap_id, 
                                  max(begin_interval_time) begin_interval_time, 
                                  max(end_interval_time) end_interval_time
                           from dba_hist_snapshot 
                           group by snap_id) s
where s.snap_id = x.snap_id 
and x.time_per_exec > &&1
and x.time_per_exec <> x.elapsed_time_total
order by 2 asc, 3 desc, 6 desc
/

clear breaks

prompt 
prompt  Elapsed time total, current statements
prompt 

select inst_id,
       sql_id,
       executions,
       round(elapsed_time/1000000, 6) elapsed_sec,
       round((elapsed_time/1000000)/(case when executions = 0 then 1 else executions end), 6) elapsed_per_exec,
       last_active_time
from gv$sqlstats
where elapsed_time/1000000 > &&1
order by 4 desc
/

prompt 
prompt  Elapsed per exec, current statements
prompt 

select inst_id,
       sql_id,
       executions,
       round(elapsed_time/1000000, 6) elapsed_sec,
       round((elapsed_time/1000000)/(case when executions = 0 then 1 else executions end), 6) elapsed_per_exec,
       last_active_time
from gv$sqlstats
where elapsed_time/1000000 > &&1
order by 5 desc
/

prompt
prompt SQL text for current statements
prompt

set long 1000000 linesize 200 trimspool on
column sql_text format a132

select sql_id, sql_text
from dba_hist_sqltext
where sql_id in
(
select sql_id
from gv$sqlstats
where elapsed_time/1000000 > &&1
);

set linesize 80 trimspool off

spool off

Calling such a script is straightforward: simply provide the minimum length of time, in seconds, the script should consider as a starting point for the report. Notice that there is a historic section and a current statement section, so don’t expect the results from the first (historic) query to match those of the remaining queries. As an example let’s look at statements running for 15 seconds or longer:


SQL> @query_elapsed_per_exec 15





Historic

Elapsed by exec


INSTANCE_NUMBER SQL_ID        TIME_PER_EXEC ELAPSED_TIME_TOTAL BEGIN_INTERVAL_TIME                 END_INTERVAL_TIME
--------------- ------------- ------------- ------------------ ----------------------------------- -----------------------------------
              1 0nazp4jx2k26t       33.9633            67.9265 30-JAN-14 02.00.05.958 AM           30-JAN-14 03.00.43.716 AM

              1 6ajkhukk78nsr       20.0365            40.0729 28-JAN-14 12.00.01.801 PM           28-JAN-14 01.00.06.031 PM

              1 6kk1p64v5qhvx       43.0353           258.2121 22-JAN-14 11.00.07.315 AM           22-JAN-14 12.00.14.834 PM

              1 b6usrg82hwsa3       63.9262           127.8525 26-JAN-14 05.00.29.524 PM           26-JAN-14 06.00.29.893 PM
                                    44.3428           133.0283 26-JAN-14 09.00.42.077 PM           26-JAN-14 10.00.46.559 PM
                                    35.0675           140.2702 27-JAN-14 01.00.53.979 AM           27-JAN-14 02.01.00.542 AM


6 rows selected.


Elapsed time total, current statements


   INST_ID SQL_ID        EXECUTIONS ELAPSED_SEC ELAPSED_PER_EXEC LAST_ACTI
---------- ------------- ---------- ----------- ---------------- ---------
         1 f6cz4n8y72xdc          1   17.983987        17.983987 30-JAN-14
         1 02577v815yp77          1   17.408648        17.408648 30-JAN-14


Elapsed per exec, current statements


   INST_ID SQL_ID        EXECUTIONS ELAPSED_SEC ELAPSED_PER_EXEC LAST_ACTI
---------- ------------- ---------- ----------- ---------------- ---------
         1 f6cz4n8y72xdc          1   17.983987        17.983987 30-JAN-14
         1 02577v815yp77          1   17.408648        17.408648 30-JAN-14


SQL text for current statements


SQL_ID        SQL_TEXT
------------- ------------------------------------------------------------------------------------------------------------------------------------
f6cz4n8y72xdc SELECT space_usage_kbytes  FROM  v$sysaux_occupants  WHERE occupant_name = 'SQL_MANAGEMENT_BASE'
02577v815yp77 BEGIN :success := dbms_ha_alerts_prvt.post_instance_up; END;

SQL>

The ‘offending’ SQL text is now available so you can take that and, using UNIX/Linux utilities like grep, find the source code that may need to be tuned. If you don’t have grep (you’re on Windows and haven’t installed any of the Windows-ported UNIX/Linux utilities) you can query either the DBA_SOURCE or USER_SOURCE view to find the package or procedure that contains the errant code; a query sketch appears after the output below. Presumably your enterprise is using some form of change control to manage code modifications, which will make the source easy to modify. However, if for some strange reason code is simply written and deployed (a ghastly occurrence), using either DBA_SOURCE or USER_SOURCE should allow you to retrieve the problem code and get it corrected. As you can see the 15 second window doesn’t supply much data from my personal database so let’s use a shorter length of time and get more output:


SQL> @query_elapsed_per_exec 4





Historic

Elapsed by exec


INSTANCE_NUMBER SQL_ID        TIME_PER_EXEC ELAPSED_TIME_TOTAL BEGIN_INTERVAL_TIME                 END_INTERVAL_TIME
--------------- ------------- ------------- ------------------ ----------------------------------- -----------------------------------
              1 0nazp4jx2k26t       33.9633            67.9265 30-JAN-14 02.00.05.958 AM           30-JAN-14 03.00.43.716 AM

              1 1cd2krbdzrhvq        7.9052             79.052 30-JAN-14 02.00.05.958 AM           30-JAN-14 03.00.43.716 AM

              1 1cq3qr774cu45        6.6394            66.3944 30-JAN-14 02.00.05.958 AM           30-JAN-14 03.00.43.716 AM

              1 2nszajb0qbyvp        4.2614             8.5229 29-JAN-14 08.20.57.000 AM           29-JAN-14 08.32.46.414 AM
                                     4.1071             8.2142 28-JAN-14 11.36.50.000 PM           28-JAN-14 11.48.05.036 PM

              1 350myuyx0t1d6        4.6085            46.0851 30-JAN-14 02.00.05.958 AM           30-JAN-14 03.00.43.716 AM

              1 35y2u24f4rxqm        6.9261            34.6305 29-JAN-14 12.00.00.587 PM           29-JAN-14 01.00.07.187 PM
                                     5.7718             34.631 29-JAN-14 01.00.07.187 PM           29-JAN-14 02.00.11.450 PM
                                     4.9473            34.6314 29-JAN-14 02.00.11.450 PM           29-JAN-14 03.00.16.164 PM

              1 63n9pwutt8yzw        8.2877            16.5754 26-JAN-14 05.00.29.524 PM           26-JAN-14 06.00.29.893 PM
                                     6.4604            19.3812 26-JAN-14 09.00.42.077 PM           26-JAN-14 10.00.46.559 PM
                                     5.0272            20.1089 27-JAN-14 01.00.53.979 AM           27-JAN-14 02.01.00.542 AM

              1 6ajkhukk78nsr       20.0365            40.0729 28-JAN-14 12.00.01.801 PM           28-JAN-14 01.00.06.031 PM
                                     13.464            40.3919 28-JAN-14 01.00.06.031 PM           28-JAN-14 02.00.09.913 PM
                                    10.1444            40.5776 28-JAN-14 02.00.09.913 PM           28-JAN-14 03.00.14.264 PM
                                     9.9596            29.8787 23-JAN-14 11.00.11.607 AM           23-JAN-14 12.00.17.236 PM
                                     9.1878            36.7513 23-JAN-14 12.00.17.236 PM           23-JAN-14 01.00.23.702 PM
                                     8.3653            41.8264 28-JAN-14 03.00.14.264 PM           28-JAN-14 04.00.18.883 PM
                                     8.2677            16.5355 23-JAN-14 10.00.06.894 AM           23-JAN-14 11.00.11.607 AM
                                     8.0056            16.0111 24-JAN-14 09.00.15.036 AM           24-JAN-14 10.00.21.294 AM
                                     7.4062            37.0311 23-JAN-14 01.00.23.702 PM           23-JAN-14 02.00.27.111 PM
                                     6.8399            13.6798 22-JAN-14 09.00.56.596 AM           22-JAN-14 10.00.02.354 AM
                                     6.3572             31.786 22-JAN-14 12.00.14.834 PM           22-JAN-14 01.00.20.445 PM
                                      6.198            37.1877 23-JAN-14 02.00.27.111 PM           23-JAN-14 03.00.31.177 PM
                                     5.9719            35.8313 22-JAN-14 01.00.20.445 PM           22-JAN-14 02.00.27.213 PM
                                     5.7639            17.2917 24-JAN-14 10.00.21.294 AM           24-JAN-14 11.00.26.060 AM
                                     5.5205            38.6435 22-JAN-14 02.00.27.213 PM           22-JAN-14 03.00.32.344 PM
                                     4.7033            14.1099 22-JAN-14 10.00.02.354 AM           22-JAN-14 11.00.07.315 AM
                                     4.5685            18.2742 24-JAN-14 11.00.26.060 AM           24-JAN-14 12.00.31.071 PM
                                     4.0313            20.1563 24-JAN-14 12.00.31.071 PM           24-JAN-14 01.00.36.467 PM

              1 6c9u8s2zwmthf        5.8052            17.4155 27-JAN-14 10.00.16.082 AM           27-JAN-14 11.00.23.152 AM

              1 6kk1p64v5qhvx       43.0353           258.2121 22-JAN-14 11.00.07.315 AM           22-JAN-14 12.00.14.834 PM

              1 6mcpb06rctk0x        7.2119            14.4237 26-JAN-14 05.00.29.524 PM           26-JAN-14 06.00.29.893 PM
                                     4.9693            14.9079 26-JAN-14 09.00.42.077 PM           26-JAN-14 10.00.46.559 PM

              1 7ydcybv1szhrj        5.1018            10.2037 29-JAN-14 08.32.46.414 AM           29-JAN-14 10.00.52.345 AM

              1 9ctt1scmwbmbg        6.2727            12.5454 29-JAN-14 08.32.46.414 AM           29-JAN-14 10.00.52.345 AM
                                     5.4683            10.9365 28-JAN-14 12.00.01.801 PM           28-JAN-14 01.00.06.031 PM
                                     4.4863             8.9726 30-JAN-14 08.02.30.500 AM           30-JAN-14 09.00.33.806 AM
                                     4.3223            12.9669 29-JAN-14 10.00.52.345 AM           29-JAN-14 11.00.56.416 AM

              1 b3abwkm67yg8r        4.0078             8.0156 29-JAN-14 06.08.45.886 PM           29-JAN-14 07.00.30.792 PM

              1 b6usrg82hwsa3       63.9262           127.8525 26-JAN-14 05.00.29.524 PM           26-JAN-14 06.00.29.893 PM
                                    44.3428           133.0283 26-JAN-14 09.00.42.077 PM           26-JAN-14 10.00.46.559 PM
                                    35.0675           140.2702 27-JAN-14 01.00.53.979 AM           27-JAN-14 02.01.00.542 AM

43 rows selected.


Elapsed time total, current statements


   INST_ID SQL_ID        EXECUTIONS ELAPSED_SEC ELAPSED_PER_EXEC LAST_ACTI
---------- ------------- ---------- ----------- ---------------- ---------
         1 f6cz4n8y72xdc          1   17.983987        17.983987 30-JAN-14
         1 02577v815yp77          1   17.408648        17.408648 30-JAN-14
         1 12a2xbmwn5v6z          1   13.154622        13.154622 30-JAN-14
         1 8mz0wa11tacp0          1   10.168939        10.168939 30-JAN-14
         1 3ktacv9r56b51        163    9.548086          .058577 30-JAN-14
         1 c2p32r5mzv8hb          1    9.054776         9.054776 30-JAN-14
         1 9ctt1scmwbmbg          3    9.015966         3.005322 30-JAN-14
         1 6ajkhukk78nsr          3    8.864179         2.954726 30-JAN-14
         1 db78fxqxwxt7r        807    8.675157           .01075 30-JAN-14
         1 01uy9sb7w8a9g          1    8.557564         8.557564 30-JAN-14
         1 c0agatqzq2jzr          3    7.937203         2.645734 30-JAN-14
         1 cvn54b7yz0s8u         99    7.722906          .078009 30-JAN-14
         1 g0jvz8csyrtcf          1    6.608542         6.608542 30-JAN-14
         1 ga9j9xk5cy9s0         99    5.745133          .058032 30-JAN-14
         1 b3abwkm67yg8r          3    5.286708         1.762236 30-JAN-14
         1 cnq31548hb8un          1    5.123489         5.123489 30-JAN-14
         1 96g93hntrzjtr       3002    4.909722          .001635 30-JAN-14
         1 3nkd3g3ju5ph1        910    4.892754          .005377 30-JAN-14
         1 2nszajb0qbyvp         14    4.570901          .326493 30-JAN-14
         1 832kkz790r75k          1     4.17935          4.17935 30-JAN-14

20 rows selected.


Elapsed per exec, current statements


   INST_ID SQL_ID        EXECUTIONS ELAPSED_SEC ELAPSED_PER_EXEC LAST_ACTI
---------- ------------- ---------- ----------- ---------------- ---------
         1 f6cz4n8y72xdc          1   17.983987        17.983987 30-JAN-14
         1 02577v815yp77          1   17.408648        17.408648 30-JAN-14
         1 12a2xbmwn5v6z          1   13.154622        13.154622 30-JAN-14
         1 8mz0wa11tacp0          1   10.168939        10.168939 30-JAN-14
         1 c2p32r5mzv8hb          1    9.054776         9.054776 30-JAN-14
         1 01uy9sb7w8a9g          1    8.557564         8.557564 30-JAN-14
         1 g0jvz8csyrtcf          1    6.608542         6.608542 30-JAN-14
         1 cnq31548hb8un          1    5.123489         5.123489 30-JAN-14
         1 832kkz790r75k          1     4.17935          4.17935 30-JAN-14
         1 9ctt1scmwbmbg          3    9.015966         3.005322 30-JAN-14
         1 6ajkhukk78nsr          3    8.864179         2.954726 30-JAN-14
         1 c0agatqzq2jzr          3    7.937203         2.645734 30-JAN-14
         1 b3abwkm67yg8r          3    5.286708         1.762236 30-JAN-14
         1 2nszajb0qbyvp         14    4.570901          .326493 30-JAN-14
         1 cvn54b7yz0s8u         99    7.722906          .078009 30-JAN-14
         1 3ktacv9r56b51        163    9.548086          .058577 30-JAN-14
         1 ga9j9xk5cy9s0         99    5.745133          .058032 30-JAN-14
         1 db78fxqxwxt7r        807    8.675157           .01075 30-JAN-14
         1 3nkd3g3ju5ph1        910    4.892754          .005377 30-JAN-14
         1 96g93hntrzjtr       3002    4.909722          .001635 30-JAN-14

20 rows selected.


SQL text for current statements


SQL_ID        SQL_TEXT
------------- ------------------------------------------------------------------------------------------------------------------------------------
12a2xbmwn5v6z select owner, segment_name, blocks from dba_segments where tablespace_name = :tsname
cnq31548hb8un BEGIN    :c := dbms_spm_internal.auto_purge_sql_plan_baseline;  END;
2nszajb0qbyvp DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate;  broken BOOLEAN := FALSE; BEGIN wwv_flow_mail.push_queue(wwv_flow_pla
              tform.get_preference('SMTP_HOST_ADDRESS'),wwv_flow_platform.get_preference('SMTP_HOST_PORT')); :mydate := next_date; IF broken THEN
              :b := 1; ELSE :b := 0; END IF; END;

f6cz4n8y72xdc SELECT space_usage_kbytes  FROM  v$sysaux_occupants  WHERE occupant_name = 'SQL_MANAGEMENT_BASE'
db78fxqxwxt7r select /*+ rule */ bucket, endpoint, col#, epvalue from histgrm$ where obj#=:1 and intcol#=:2 and row#=:3 order by bucket
9ctt1scmwbmbg begin dbsnmp.bsln_internal.maintain_thresholds; end;
3ktacv9r56b51 select owner#,name,namespace,remoteowner,linkname,p_timestamp,p_obj#, nvl(property,0),subname,type#,d_attrs from dependency$ d, obj$
               o where d_obj#=:1 and p_obj#=obj#(+) order by order#

6ajkhukk78nsr begin prvt_hdm.auto_execute( :dbid, :inst_num , :end_snap_id ); end;
96g93hntrzjtr select /*+ rule */ bucket_cnt, row_cnt, cache_cnt, null_cnt, timestamp#, sample_size, minimum, maximum, distcnt, lowval, hival, dens
              ity, col#, spare1, spare2, avgcln from hist_head$ where obj#=:1 and intcol#=:2

3nkd3g3ju5ph1 select obj#,type#,ctime,mtime,stime, status, dataobj#, flags, oid$, spare1, spare2 from obj$ where owner#=:1 and name=:2 and namespa
              ce=:3 and remoteowner is null and linkname is null and subname is null

cvn54b7yz0s8u select /*+ index(idl_ub1$ i_idl_ub11) +*/ piece#,length,piece from idl_ub1$ where obj#=:1 and part=:2 and version=:3 order by piece#
02577v815yp77 BEGIN :success := dbms_ha_alerts_prvt.post_instance_up; END;
ga9j9xk5cy9s0 select /*+ index(idl_sb4$ i_idl_sb41) +*/ piece#,length,piece from idl_sb4$ where obj#=:1 and part=:2 and version=:3 order by piece#
832kkz790r75k SELECT c.capture#, c.capture_name, c.flags FROM sys.streams$_capture_process c, dba_queues q, dba_queue_tables qt  ,v$instance i WHE
              RE c.queue_owner  = q.owner   and c.queue_name   = q.name   and qt.queue_table = q.queue_table   and qt.owner       = q.owner   and
              ((qt.owner_instance = i.instance_number and  (bitand(c.flags, 1572864) = 0 or   bitand(c.flags, 8388608) > 0))  or (bitand(c.flags,
              1572864) > 0 and  bitand(c.flags, 8388608) = 0))   and bitand(c.flags, 512) != 512   and c.status       = :1 ORDER BY c.capture#

b3abwkm67yg8r DECLARE job BINARY_INTEGER := :job;  next_date TIMESTAMP WITH TIME ZONE := :mydate;  broken BOOLEAN := FALSE;  job_name VARCHAR2(30)
               := :job_name;  job_subname VARCHAR2(30) := :job_subname;  job_owner VARCHAR2(30) := :job_owner;  job_start TIMESTAMP WITH TIME ZONE
               := :job_start;  job_scheduled_start TIMESTAMP WITH TIME ZONE := :job_scheduled_start;  window_start TIMESTAMP WITH TIME ZONE := :wi
              ndow_start;  window_end TIMESTAMP WITH TIME ZONE := :window_end;  chain_id VARCHAR2(14) :=  :chainid;  credential_owner varchar2(30)
               := :credown;  credential_name  varchar2(30) := :crednam;  destination_owner varchar2(30) := :destown;  destination_name varchar2(30
              ) := :destnam;  job_dest_id varchar2(14) := :jdestid;  log_id number := :log_id;  BEGIN  begin dbms_rlmgr_dr.execschdactions('RLM$SC
              HDNEGACTION'); end;  :mydate := next_date; IF broken THEN :b := 1; ELSE :b := 0; END IF; END;

8mz0wa11tacp0 select table_objno, primary_instance, secondary_instance, owner_instance from sys.aq$_queue_table_affinities a  where  a.owner_insta
              nce <> :1 and dbms_aqadm_syscalls.get_owner_instance(a.primary_instance,       a.secondary_instance,       a.owner_instance) = :2  o
              rder by table_objno

c2p32r5mzv8hb BEGIN    prvt_advisor.delete_expired_tasks;  END;
01uy9sb7w8a9g  begin      dbms_aqadm_sys.remove_all_nondurablesub(:1, :2);   end;
c0agatqzq2jzr insert into "SYS"."ALERT_QT"  (q_name, msgid, corrid, priority, state, delay, expiration,   time_manager_info, local_order_no, chain
              _no, enq_time, step_no, enq_uid,   enq_tid, retry_count, exception_qschema, exception_queue, recipient_key,   dequeue_msgid, user_da
              ta, sender_name, sender_address, sender_protocol,   user_prop, cscn, dscn)   values (:1, :2, :3, :4, :5, :6, :7, :8, :9, :10, :11, :
              12, :13, :14, 0, :15,         :16, :17, :18, :19, :20, :21, :22, :23, :24, :25)

g0jvz8csyrtcf SELECT T.TASK_ID FROM SYS.DBA_ADVISOR_TASKS T, WRI$_ADV_PARAMETERS P WHERE T.TASK_ID = P.TASK_ID AND T.STATUS# <> :B2 AND T.READ_ONL
              Y = 'FALSE' AND P.NAME = 'DAYS_TO_EXPIRE' AND PRVT_ADVISOR.CHECK_EXPIRATION_COMP(:B1 , T.LAST_MODIFIED, P.VALUE) = 1


20 rows selected.

SQL>
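
As mentioned above, when grep isn’t an option the data dictionary can be searched for the code containing a problem statement; the sketch below uses a purely hypothetical search string, so substitute a distinctive fragment of the SQL text you are chasing:


select owner, name, type, line, text
from dba_source
where upper(text) like '%MY_SLOW_PROC%'   -- hypothetical search string; use a fragment of the captured SQL text
order by owner, name, type, line;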

It’s likely that longer minimum execution times will produce a much longer report in an active production database; it’s also likely that full blocks of PL/SQL will be reported to aid you in your search for the misbehaving code. Once you get the code and the sql_id values it’s a simple task to return plan data for the sql_ids of interest:


SQL> @plan_query_hist 12a2xbmwn5v6z

I   Operation                                Object Name                        # Rows      BYTES    COST PStart PStop
--- ---------------------------------------- ------------------------------ ---------- ---------- ---------- ------ ------
1   SELECT STATEMENT                                                                                 1117
2   VIEW                                     SYS_DBA_SEGS                          474      85320    1117
3     UNION-ALL
4       NESTED LOOPS (OUTER)                                                         1        121    1082
5         NESTED LOOPS                                                               1        103    1081
6           HASH JOIN                                                               33       2112    1015
7             NESTED LOO                                                          1736      72912      20
8               NESTED L                                                             6         96       2
9                 TABLE  (BY INDEX ROWID)    TS$                                     1         10       1
10                  INDE (UNIQUE SCAN)       I_TS1                                   1                  0
11                INDEX (FULL SCAN)          I_FILE2                                 6         36       1
12              TABLE AC (CLUSTER)           SEG$                                  289       7514       8
13                INDEX (RANGE SCAN)         I_FILE#_BLOCK#                          1                  2
14            VIEW                           SYS_OBJECTS                          5842     128524     995
15              UNION-AL
16                TABLE  (FULL)              TAB$                                 1826      40172     296
17                TABLE  (FULL)              TABPART$                              110       1760       2
18                TABLE  (FULL)              CLU$                                   10        140     296
19                TABLE  (FULL)              IND$                                 3713      66834     296
20                TABLE  (FULL)              INDPART$                              127       2032       3
21                TABLE  (FULL)              LOB$                                   22        440     296
22                TABLE  (FULL)              TABSUBPART$                            32        448       2
23                TABLE  (FULL)              INDSUBPART$                             1         52       2
24                TABLE  (FULL)              LOBFRAG$                                1         17       2
25          TABLE ACCESS (BY INDEX ROWID)    OBJ$                                    1         39       2
26            INDEX (RANGE SCAN)             I_OBJ1                                  1                  1
27        TABLE ACCESS (CLUSTER)             USER$                                   1         18       1
28          INDEX (UNIQUE SCAN)              I_USER#                                 1                  0
29      NESTED LOOPS (OUTER)                                                         2        198      10
30        NESTED LOOPS                                                               2        162       8
31          NESTED LOOPS                                                             2        150       8
32            NESTED LOO                                                            11        506       3
33              TABLE AC (BY INDEX ROWID)    TS$                                     1         10       1
34                INDEX (UNIQUE SCAN)        I_TS1                                   1                  0
35              TABLE AC (FULL)              UNDO$                                  11        396       2
36            TABLE ACCE (CLUSTER)           SEG$                                    1         29       1
37              INDEX (UNIQUE SCAN)          I_FILE#_BLOCK#                          1                  0
38          INDEX (UNIQUE SCAN)              I_FILE2                                 1          6       0
39        TABLE ACCESS (CLUSTER)             USER$                                   1         18       1
40          INDEX (UNIQUE SCAN)              I_USER#                                 1                  0
41      HASH JOIN (RIGHT OUTER)                                                    472      33512      25
42        TABLE ACCESS (FULL)                USER$                                  63       1134       3
43        NESTED LOOPS                                                             472      25016      21
44          NESTED LOOPS                                                             6        114       3
45            TABLE ACCE (BY INDEX ROWID)    TS$                                     1         10       1
46              INDEX (UNIQUE SCAN)          I_TS1                                   1                  0
47            TABLE ACCE (FULL)              FILE$                                   6         54       2
48          TABLE ACCESS (CLUSTER)           SEG$                                   79       2686       8
49            INDEX (RANGE SCAN)             I_FILE#_BLOCK#                          1                  2

49 rows selected.


Access Predicates .....


no rows selected


Filter predicates .....


no rows selected

SQL>
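
The plan_query_hist script itself isn’t reproduced here; as a rough equivalent, a minimal sketch against DBA_HIST_SQL_PLAN (assuming the plan has been captured by AWR and the sql_id is passed as the first argument; your own script may format things differently) could look like this:


-- Rough stand-in for plan_query_hist; columns and formatting are illustrative only
select id,
       lpad(' ', depth)||operation||' '||options operation,
       object_name,
       cardinality "# Rows",
       bytes,
       cost,
       partition_start pstart,
       partition_stop pstop
from   dba_hist_sql_plan
where  sql_id = '&&1'
order by plan_hash_value, id;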

The statement for that plan runs for a little over 13 seconds per execution; it uses data dictionary tables and views that can’t be modified or indexed any further, so other means will need to be used to tune it. One option might be to gather data dictionary statistics to ensure they are current, although I have rarely found stale dictionary statistics to be the cause. Since these examples were run on a Windows system, in a fairly small database with a single disk, the most likely culprit is the file configuration. This can also happen on enterprise systems, depending upon how LUNs are created and presented, so it would be a good idea to examine file placement to see if redo logs or control files are located on the same spindles as data and index files. Examining a current AWR report should shed some light on how I/O is behaving and which wait events are consuming the most resources.
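
As a quick first look before pulling the full AWR report, a simple sketch like the one below (using V$SYSTEM_EVENT, so it only shows cumulative totals since instance startup rather than the interval detail AWR provides) can show which I/O-related wait events have accumulated the most time:


-- Cumulative I/O wait time since instance startup; AWR/ASH give per-interval detail
select event,
       total_waits,
       round(time_waited_micro/1000000, 2) seconds_waited
from   v$system_event
where  wait_class in ('User I/O', 'System I/O')
order by time_waited_micro desc;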

Although not quite as straightforward as SQL statement tuning, it is possible to find ‘offending’ PL/SQL code and work to improve its performance. The DBMS_PROFILER package, available since Oracle 10.2 was released, can be used to provide performance information for PL/SQL packages and procedures. To install the objects the dbms_profiler package requires, the ?/rdbms/admin/proftab.sql script needs to be run. Once these objects are created (each user who wants to use the profiler needs to install them) it’s simply a matter of starting the profiler, running the problem code and then stopping the profiler run. Using the profdemo.sql script from Oracle you can see the following calls before and after the PL/SQL is executed:


SQL> Rem  Start profiling.
SQL>
SQL> declare
  2    status number;
  3  begin
  4    status := dbms_profiler.start_profiler('PROFILER DEMO');
  5  end;
  6  /

PL/SQL procedure successfully completed.

SQL>
SQL> execute profdemo_pack.profdemo_p1;

PL/SQL procedure successfully completed.

SQL>
SQL> Rem  Stop profiling data.
SQL>
SQL> declare
  2    status number;
  3  begin
  4    status := dbms_profiler.stop_profiler;
  5  end;
  6  /

PL/SQL procedure successfully completed.

SQL>

Reports were generated using the profsum.sql script, again from Oracle; the profrep.sql script needs to be run first to create the packages and procedures that profsum.sql calls:


GRAND_TOTA
----------
       .00


     RUNID RUN_COMMENT                       SECONDS
---------- ------------------------------ ----------
         1 PROFILER DEMO                        .037
         2 PROFILER DEMO                        .003
         3 PROFILER DEMO                        .003
         4 PROFILER DEMO                        .051


     RUNID RUN_COMMENT          UNIT_OWNER                       UNIT_NAME            SECONDS   PERCEN
---------- -------------------- -------------------------------- -------------------- --------- ------
         1 PROFILER DEMO        SYS                              PROFDEMO_PACK              .00    1.4
         2 PROFILER DEMO        SYS                              PROFDEMO_PACK              .00    9.6
         2 PROFILER DEMO                                              .00    1.5
         2 PROFILER DEMO                                              .00    1.2
         3 PROFILER DEMO        SYS                              PROFDEMO_PACK              .00   10.7
         3 PROFILER DEMO                                              .00    2.1
         3 PROFILER DEMO                                              .00    1.4
         3 PROFILER DEMO        SYS                              DBMS_OUTPUT                .00    1.1

8 rows selected.


UNIT_OWNER                       UNIT_NAME                 SECONDS   PERCENTAG
-------------------------------- ------------------------- --------- ---------
SYS                              PROFDEMO_PACK                   .00     66.44
                                           .00     23.00
SYS                              DBMS_OUTPUT                     .00      5.35
SYS                              DBMS_PROFILER                   .00      4.38
SYS                              PROFDEMO_TYPE                   .00       .83

    to_char(p1.total_time/(p1.total_occur*p1.min_time),'99999.99')as "Ave/min",
                         *
ERROR at line 10:
ORA-01476: divisor is equal to zero



     RUNID SECONDS  OWNER                UNIT_NAME                 LINE# TEXT
---------- -------- -------------------- -------------------- ---------- -------------------------------------------------------------
         1       .0 SYS                  PROFDEMO_PACK                 9        insert into profdemo_tab_1 values (d3);
         4       .0 SYS                  PROFDEMO_PACK                 9        insert into profdemo_tab_1 values (d3);
         3       .0 SYS                  PROFDEMO_PACK                 9        insert into profdemo_tab_1 values (d3);
         2       .0 SYS                  PROFDEMO_PACK                 9        insert into profdemo_tab_1 values (d3);
         1       .0 SYS                  PROFDEMO_PACK                15        insert into profdemo_tab_1 values (d2.atr);
         4       .0 SYS                  PROFDEMO_PACK                15        insert into profdemo_tab_1 values (d2.atr);
         3       .0 SYS                  PROFDEMO_PACK                15        insert into profdemo_tab_1 values (d2.atr);
         2       .0 SYS                  PROFDEMO_PACK                15        insert into profdemo_tab_1 values (d2.atr);

8 rows selected.


SECONDS  UNIT_OWNER           UNIT_NAME                 LINE# TEXT
-------- -------------------- -------------------- ---------- -------------------------------------------------------------------------
      .0 SYS                  PROFDEMO_PACK                 9        insert into profdemo_tab_1 values (d3);
      .0 SYS                  PROFDEMO_PACK                15        insert into profdemo_tab_1 values (d2.atr);
      .0 SYS                  PROFDEMO_PACK                 8        d3 := d1.profdemo_type_regular_method()+i;
      .0 SYS                  PROFDEMO_PACK                18      raise value_error;
      .0 SYS                  PROFDEMO_PACK                 2    procedure profdemo_p1 is


PL/SQL procedure successfully completed.


UNIT_OWNER                       UNIT_NAME                        LINES_EXECUTED
-------------------------------- -------------------------------- --------------
SYS                              PROFDEMO_PACK                                11
SYS                              PROFDEMO_TYPE                                 4
SYS                              DBMS_OUTPUT                                  17
SYS                              DBMS_PROFILER                                 7


UNIT_OWNER                       UNIT_NAME                        LINES_PRESENT
-------------------------------- -------------------------------- -------------
SYS                              PROFDEMO_PACK                               13
SYS                              PROFDEMO_TYPE                                9
SYS                              DBMS_OUTPUT                                106
SYS                              DBMS_PROFILER                              112


LINES_EXECUTED
--------------
            39


LINES_PRESENT
-------------
          240

=================================trace info=================================
===========================Results for run #1 made on 30-JAN-14 11:25:14 =========================
(PROFILER DEMO) Run total time:       .04 seconds
Unit #1: SYS.DBMS_PROFILER - Total time:       .00 seconds
Unit #2: . - Total time:       .00 seconds
Unit #3: . - Total time:       .00 seconds
Unit #4: SYS.DBMS_OUTPUT - Total time:       .00 seconds
Unit #5: . - Total time:       .00 seconds
Unit #6: SYS.PROFDEMO_PACK - Total time:       .00 seconds
1                                                package body profdemo_pack is
2           0   .00000795                           procedure profdemo_p1 is
3           1   .00000198  .00000198                  d1 profdemo_type := profdemo_type(earliest_date);
4           1   .00000497  .00000497                  d2 profdemo_type := profdemo_type(earliest_date+1);
5                                                     d3 date;
6                                                   begin
7           6   .00000099  .00000016                  for i in 1..5 loop
8           5   .00001789  .00000357                    d3 := d1.profdemo_type_regular_method()+i;
9           5   .00041947  .00008389                    insert into profdemo_tab_1 values (d3);
10                                                     end loop;
11
12           1   .00000099  .00000099                  if d1 > d2 then
13           0   0                                       insert into profdemo_tab_1 values (d1.atr);
14                                                     else
15           1   .00004274  .00004274                    insert into profdemo_tab_1 values (d2.atr);
16                                                     end if;
17
18           1   .00001093  .00001093                  raise value_error;
19
20           1   0  0                                  exception when value_error then
21           1   .00000099  .00000099                          NULL;
22           1   .00000099  .00000099                end;
23                                                end profdemo_pack
Unit #7: SYS.PROFDEMO_TYPE - Total time:       .00 seconds
1                                                type body profdemo_type is
2           0   0                                    static function profdemo_type_static_method  return date is
3                                                    begin
4           0   0                                         return (sysdate);
5           0   0                                    end;
6
7           0   .00000198                            member function profdemo_type_regular_method return date is
8                                                    begin
9           5   .00000099  .00000019                      return (atr);
10           5   .00000099  .00000019                 end;
11
12           0   0                                    map member function profdemo_type_map_method return date is
13                                                    begin
14           2   .00000099  .00000049                      return (atr);
15           2   0  0                                 end;
16                                                end
Unit #8: . - Total time:       .00 seconds
Unit #9: . - Total time:       .00 seconds
===========================Results for run #2 made on 30-JAN-14 11:27:07 =========================
(PROFILER DEMO) Run total time:       .00 seconds
Unit #1: SYS.DBMS_PROFILER - Total time:       .00 seconds
Unit #2: . - Total time:       .00 seconds
Unit #3: . - Total time:       .00 seconds
Unit #4: SYS.DBMS_OUTPUT - Total time:       .00 seconds
Unit #5: . - Total time:       .00 seconds
Unit #6: SYS.PROFDEMO_PACK - Total time:       .00 seconds
1                                                package body profdemo_pack is
2           0   .00000537                           procedure profdemo_p1 is
3           1   .00000179  .00000179                  d1 profdemo_type := profdemo_type(earliest_date);
4           1   .00000447  .00000447                  d2 profdemo_type := profdemo_type(earliest_date+1);
5                                                     d3 date;
6                                                   begin
7           6   0  0                                  for i in 1..5 loop
8           5   .00001432  .00000286                    d3 := d1.profdemo_type_regular_method()+i;
9           5   .00021223  .00004244                    insert into profdemo_tab_1 values (d3);
10                                                     end loop;
11
12           1   .00000268  .00000268                  if d1 > d2 then
13           0   0                                       insert into profdemo_tab_1 values (d1.atr);
14                                                     else
15           1   .00003671  .00003671                    insert into profdemo_tab_1 values (d2.atr);
16                                                     end if;
17
18           1   .00000985  .00000985                  raise value_error;
19
20           1   0  0                                  exception when value_error then
21           1   .00000089  .00000089                          NULL;
22           1   .00000089  .00000089                end;
23                                                end profdemo_pack
Unit #7: SYS.PROFDEMO_TYPE - Total time:       .00 seconds
1                                                type body profdemo_type is
2           0   0                                    static function profdemo_type_static_method  return date is
3                                                    begin
4           0   0                                         return (sysdate);
5           0   0                                    end;
6
7           0   .00000089                            member function profdemo_type_regular_method return date is
8                                                    begin
9           5   .00000179  .00000035                      return (atr);
10           5   .00000179  .00000035                 end;
11
12           0   0                                    map member function profdemo_type_map_method return date is
13                                                    begin
14           2   .00000179  .00000089                      return (atr);
15           2   0  0                                 end;
16                                                end
Unit #8: . - Total time:       .00 seconds
Unit #9: . - Total time:       .00 seconds
===========================Results for run #3 made on 30-JAN-14 11:29:21 =========================
(PROFILER DEMO) Run total time:       .00 seconds
Unit #1: SYS.DBMS_PROFILER - Total time:       .00 seconds
Unit #2: . - Total time:       .00 seconds
Unit #3: . - Total time:       .00 seconds
Unit #4: SYS.DBMS_OUTPUT - Total time:       .00 seconds
Unit #5: . - Total time:       .00 seconds
Unit #6: SYS.PROFDEMO_PACK - Total time:       .00 seconds
1                                                package body profdemo_pack is
2           0   .00000625                           procedure profdemo_p1 is
3           1   .00000267  .00000267                  d1 profdemo_type := profdemo_type(earliest_date);
4           1   .00000535  .00000535                  d2 profdemo_type := profdemo_type(earliest_date+1);
5                                                     d3 date;
6                                                   begin
7           6   .00000267  .00000044                  for i in 1..5 loop
8           5   .00001339  .00000267                    d3 := d1.profdemo_type_regular_method()+i;
9           5   .00023757  .00004751                    insert into profdemo_tab_1 values (d3);
10                                                     end loop;
11
12           1   .00000357  .00000357                  if d1 > d2 then
13           0   0                                       insert into profdemo_tab_1 values (d1.atr);
14                                                     else
15           1   .00003840  .00003840                    insert into profdemo_tab_1 values (d2.atr);
16                                                     end if;
17
18           1   .00000982  .00000982                  raise value_error;
19
20           1   0  0                                  exception when value_error then
21           1   0  0                                          NULL;
22           1   .00000089  .00000089                end;
23                                                end profdemo_pack
Unit #7: SYS.PROFDEMO_TYPE - Total time:       .00 seconds
1                                                type body profdemo_type is
2           0   0                                    static function profdemo_type_static_method  return date is
3                                                    begin
4           0   0                                         return (sysdate);
5           0   0                                    end;
6
7           0   .00000178                            member function profdemo_type_regular_method return date is
8                                                    begin
9           5   .00000089  .00000017                      return (atr);
10           5   .00000178  .00000035                 end;
11
12           0   0                                    map member function profdemo_type_map_method return date is
13                                                    begin
14           2   .00000089  .00000044                      return (atr);
15           2   0  0                                 end;
16                                                end
Unit #8: . - Total time:       .00 seconds
Unit #9: . - Total time:       .00 seconds
===========================Results for run #4 made on 30-JAN-14 11:30:41 =========================
(PROFILER DEMO) Run total time:       .05 seconds
Unit #1: SYS.DBMS_PROFILER - Total time:       .00 seconds
Unit #2: . - Total time:       .00 seconds
Unit #3: . - Total time:       .00 seconds
Unit #4: SYS.DBMS_OUTPUT - Total time:       .00 seconds
Unit #5: . - Total time:       .00 seconds
Unit #6: SYS.PROFDEMO_PACK - Total time:       .00 seconds
1                                                package body profdemo_pack is
2           0   .00000393                           procedure profdemo_p1 is
3           1   .00000295  .00000295                  d1 profdemo_type := profdemo_type(earliest_date);
4           1   .00000295  .00000295                  d2 profdemo_type := profdemo_type(earliest_date+1);
5                                                     d3 date;
6                                                   begin
7           6   .00000098  .00000016                  for i in 1..5 loop
8           5   .00001968  .00000393                    d3 := d1.profdemo_type_regular_method()+i;
9           5   .00027756  .00005551                    insert into profdemo_tab_1 values (d3);
10                                                     end loop;
11
12           1   .00000098  .00000098                  if d1 > d2 then
13           0   0                                       insert into profdemo_tab_1 values (d1.atr);
14                                                     else
15           1   .00004232  .00004232                    insert into profdemo_tab_1 values (d2.atr);
16                                                     end if;
17
18           1   .00001082  .00001082                  raise value_error;
19
20           1   0  0                                  exception when value_error then
21           1   .00000098  .00000098                          NULL;
22           1   .00000098  .00000098                end;
23                                                end profdemo_pack
Unit #7: SYS.PROFDEMO_TYPE - Total time:       .00 seconds
1                                                type body profdemo_type is
2           0   0                                    static function profdemo_type_static_method  return date is
3                                                    begin
4           0   0                                         return (sysdate);
5           0   0                                    end;
6
7           0   0                                    member function profdemo_type_regular_method return date is
8                                                    begin
9           5   .00000098  .00000019                      return (atr);
10           5   0  0                                 end;
11
12           0   0                                    map member function profdemo_type_map_method return date is
13                                                    begin
14           2   .00000098  .00000049                      return (atr);
15           2   0  0                                 end;
16                                                end
Unit #8: . - Total time:       .00 seconds
Unit #9: . - Total time:       .00 seconds
============================================================================

PL/SQL procedure successfully completed.

================== Profiler report - all runs rolled up ===================
Unit .:
Unit SYS.DBMS_OUTPUT:
Unit SYS.DBMS_PROFILER:
Unit SYS.PROFDEMO_PACK:
1                                                package body profdemo_pack is
2           0   .00002351                           procedure profdemo_p1 is
3           4   .00000941  .00000235                  d1 profdemo_type := profdemo_type(earliest_date);
4           4   .00001775  .00000443                  d2 profdemo_type := profdemo_type(earliest_date+1);
5                                                     d3 date;
6                                                   begin
7          24   .00000465  .00000019                  for i in 1..5 loop
8          20   .00006530  .00000326                    d3 := d1.profdemo_type_regular_method()+i;
9          20   .00114684  .00005734                    insert into profdemo_tab_1 values (d3);
10                                                     end loop;
11
12           4   .00000823  .00000205                  if d1 > d2 then
13           0   0                                       insert into profdemo_tab_1 values (d1.atr);
14                                                     else
15           4   .00016018  .00004004                    insert into profdemo_tab_1 values (d2.atr);
16                                                     end if;
17
18           4   .00004143  .00001035                  raise value_error;
19
20           4   0  0                                  exception when value_error then
21           4   .00000287  .00000071                          NULL;
22           4   .00000376  .00000094                end;
23                                                end profdemo_pack
Unit SYS.PROFDEMO_TYPE:
1                                                type body profdemo_type is
2           0   0                                    static function profdemo_type_static_method  return date is
3                                                    begin
4           0   0                                         return (sysdate);
5           0   0                                    end;
6
7           0   .00000466                            member function profdemo_type_regular_method return date is
8                                                    begin
9          20   .00000466  .00000023                      return (atr);
10          20   .00000457  .00000022                 end;
11
12           0   0                                    map member function profdemo_type_map_method return date is
13                                                    begin
14           8   .00000466  .00000058                      return (atr);
15           8   0  0                                 end;
16                                                end
============================================================================

PL/SQL procedure successfully completed.
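
If you prefer to query the profiler tables directly rather than rely on the supplied reporting scripts, a minimal sketch along these lines (assuming the proftab.sql tables exist in the schema that ran the profiler, and that the stored times are in nanoseconds) ranks the program units in a given run by elapsed time:


-- Ad-hoc rollup by program unit; &&runid comes from plsql_profiler_runs
select u.unit_owner,
       u.unit_name,
       round(sum(d.total_time)/1000000000, 4) seconds
from   plsql_profiler_units u,
       plsql_profiler_data  d
where  d.runid       = u.runid
and    d.unit_number = u.unit_number
and    u.runid       = &&runid
group by u.unit_owner, u.unit_name
order by 3 desc;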

The profiler provides a wealth of performance data; tuning the PL/SQL should then be a matter of working through the long-running sections and addressing the issues causing the slow performance. And all of this was made available by running the set of queries provided at the start of this post to find the problem statements in the first place.

When things start running slower, and it’s PL/SQL performance that’s impeding progress, it’s good to know how to find the problem statements so you can run the profiler (on 10.2 and later releases) to find the root causes and address them. It may take some time to correct the issues you find but it’s definitely worth the effort.

Hurry up, I haven’t got all day.

August 26, 2011

Parallel Universe

Filed under: General,Performance — dfitzjarrell @ 00:09

An oft-used (and consequently oft-abused) execution path is parallel execution, usually ‘instigated’ by some sort of parallel hint. Developers, albeit with the best intentions, misuse this mechanism because of faulty logic, that ‘logic’ being that if one process executes in X amount of time then Y parallel processes will execute in X/Y amount of time; in many cases nothing could be further from the truth. Yes, there are opportunities to use parallelism to speed up processing, but in most of the cases I’ve seen it does more harm than good.

To illustrate this in a ‘real world’ situation say you’re in a restaurant and the service appears to be slow; you think ‘if I had three waitresses instead of one I’d get my food faster’, but let’s look at that from a different point of view. Three waitresses for one table means that all three waitresses need to communicate with each other to avoid repeating work done by the others (this, of course, takes time). Each waitress needs to be assigned specific duties but also must be available to take over for another if something happens (one broke her toe, for instance); such an occurrence requires reassigning duties and adjusting the schedule to accommodate the change. Eventually you get your order, but it will take MORE time than if a single waitress performed all of the necessary tasks.

Parallel processing does more than simply ‘divide and conquer’ as it requires several steps most developers may not know about or may ignore entirely. DSS and DW systems, running with large numbers of CPUs, can benefit from parallel processing as the load can be distributed among the CPUs reducing the load on a single processor. OLTP systems, on the other hand, usually involve operations which are quick to begin with and the overhead of implementing parallel processing is quite large compared to the overall execution time; additionally it may take longer to complete the parallel execution than it would to properly tune the query for single-threaded processing and eliminate the parallelism altogether.

So what does parallel processing bring to the table? Obviously the possibility, but not the guarantee, of reduced execution time. What does it cost? That depends on the system, but for OLTP systems it usually creates more work than it saves: a parallel query coordinator process is spawned along with some number of parallel slaves, all ‘talking’ to that coordinator, and the coordinator must then perform a parallel-to-serial operation to assemble the final results. Many OLTP systems aren’t designed for parallel processing (and rightly so) as parallel execution is meant to utilize resources (CPU, memory) that would otherwise remain idle and that exist in sufficient quantity to warrant the additional overhead. Queries and statements should be tuned properly (there, I’ve said it again) for speedy execution, and slapping a parallel hint on them isn’t the correct way to go about the tuning process. Looking at an example of the differences between parallel and serial execution might help clear the air.

  
SQL> select id, txtval, status
  2  from para_tst pt
  3  where id between 9001 and 34001;

        ID TXTVAL                          STATUS
---------- ------------------------------ -------
      9389 ALL_SCHEDULER_RUNNING_JOBS       VALID
      9390 USER_SCHEDULER_RUNNING_JOBS      VALID
      9391 USER_SCHEDULER_RUNNING_JOBS      VALID
[...]
     20772 /130d52e2_JDK2Sorter             VALID
     20773 /130d52e2_JDK2Sorter             VALID
     20774 /4c28cb16_ToolLogOptions         VALID
     20775 /4c28cb16_ToolLogOptions         VALID
     20776 /cbd9a55f_AbortException         VALID
     20777 /cbd9a55f_AbortException         VALID

591960 rows selected.

Elapsed: 00:02:03.68

Execution Plan
----------------------------------------------------------
Plan hash value: 350193380

---------------------------------------------------------------------------------------------------
| Id | Operation                | Name     | Rows | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
---------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT         |          |  593K|    20M|  1698   (2)| 00:00:21 |       |       |
|  1 |  PARTITION RANGE ITERATOR|          |  593K|    20M|  1698   (2)| 00:00:21 |    19 |    21 |
|* 2 |   TABLE ACCESS FULL      | PARA_TST |  593K|    20M|  1698   (2)| 00:00:21 |    19 |    21 |
---------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("ID">=9001 AND "ID"<=34001)

SQL> select /*+ parallel(pt 2) pq_distribute(pt partition) */
  2        id, txtval, status
  3  from para_tst pt
  4  where id between 9001 and 34001;

        ID TXTVAL                           STATUS
---------- ------------------------------- -------
     18404 /79bc64f9_JvmMemoryMeta           VALID
     18405 /b80019c2_EnumJvmThreadContent    VALID
     18406 /b80019c2_EnumJvmThreadContent    VALID
     18407 /bcfa29b5_EnumJvmThreadCpuTime    VALID
     18408 /bcfa29b5_EnumJvmThreadCpuTime    VALID
     17997 /b3bd73eb_CommonClassObject       VALID
     17998 /b3bd73eb_CommonClassObject       VALID
     17999 /c5a69e17_ServerSchemaObject1     VALID
[...]
     20724 /4bd3ef8d_KnownOptions4           VALID
     20725 /4bd3ef8d_KnownOptions4           VALID
     20726 /75d2b0ba_KnownOptions5           VALID
     20727 /75d2b0ba_KnownOptions5           VALID
     20728 /75e9b2d4_KnownOptions            VALID
     20729 /75e9b2d4_KnownOptions            VALID

591960 rows selected.

Elapsed: 00:02:23.59

Execution Plan
----------------------------------------------------------
Plan hash value: 1393746857

-------------------------------------------------------------------------------------------------------------------------
| Id | Operation            | Name     | Rows | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ |IN-OUT| PQ Distr |
-------------------------------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT     |          |  593K|    20M|   942   (1)| 00:00:12 |       |       |       |      |          |
|  1 |  PX COORDINATOR      |          |      |       |            |          |       |       |       |      |          |
|  2 |   PX SEND QC (RANDOM)| :TQ10000 |  593K|    20M|   942   (1)| 00:00:12 |       |       | Q1,00 | P->S | QC (RAND)|
|  3 |    PX BLOCK ITERATOR |          |  593K|    20M|   942   (1)| 00:00:12 |    19 |    21 | Q1,00 | PCWC |          |
|* 4 |     TABLE ACCESS FULL| PARA_TST |  593K|    20M|   942   (1)| 00:00:12 |    19 |    21 | Q1,00 | PCWP |          |
-------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------

4 - filter("ID">=9001 AND "ID"<=34001)
 

On a system running 11.2.0.2 with absolutely no load the parallel execution takes almost twenty seconds longer; on a heavily used OLTP system this difference will likely be greater as the overhead to run in parallel is greater than that for the ‘straight’ query. Resources allocated in OLTP configurations are intended for multiple users performing concurrent processing, so there usually aren’t ‘extra’ CPU cycles or memory to toss at parallel execution. Since the parallel query slaves will be waiting for resources (just like their user session counterparts) this will increase the response time and delay processing until such resources are freed. Notice also the execution path, which includes the parallel-to-serial operation that consolidates the outputs from the parallel query slaves into the final result set. Depending upon the server configuration there could be many more parallel slaves than shown here, and ‘funneling’ all of that data into a single ‘pipe’ consumes time and resources you most likely won’t have in an OLTP environment.

To be fair, Oracle 11.2.0.2 and later releases offer a possibly better mechanism for implementing parallel execution, consisting of automatic degree of parallelism (auto DOP), in-memory parallel execution and parallel statement queuing. A few bits and pieces need to be configured first, but these features are available on any system running Oracle 11.2.0.2 or later. What does auto DOP do? Oracle computes the degree of parallelism ‘on the fly’ before the statement executes, determining the available resources and whether or not the statement will run faster, overall, in parallel. Once the DOP has been set it cannot be changed while the statement is running, but the mechanism does manage resources better and helps avoid overloading the system with parallel statements.
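
One way to verify whether auto DOP was actually applied to a given statement is to check the Note section of its cursor plan; a minimal sketch, assuming the statement is still in the shared pool and you have its sql_id, follows:


-- Replace the sql_id with the statement of interest; the Note section reports
-- whether a degree of parallelism was computed automatically
select * from table(dbms_xplan.display_cursor('&&sql_id', null, 'BASIC +NOTE'));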

That’s great, I hear you say, but what if the statement would run better in parallel but can’t at the time the execution begins? This is where parallel statement queuing takes over; the statement is placed in the parallel run queue until the required resources are available at which time the statement starts execution. Confusing? Possibly, so let’s set up an 11.2.0.3 database to enable auto DOP and parallel statement queuing and run an example.

First and foremost it is absolutely necessary to calibrate the I/O subsystem the database uses. This is accomplished with a packaged procedure, DBMS_RESOURCE_MANAGER.CALIBRATE_IO, that takes two input values and three output variables into which to collect the calculated results. As inputs you need to supply the total number of disks associated with the database and a reasonable estimate of the disk latency; the three output parameters will hold the maximum calculated I/O per second (IOPS), the maximum MB per second and the calculated latency. The script, a calibration run and the generated output follow:

 
$ cat calibrate_io.sql 
Set serveroutput on size 1000000
Declare
	Calc_lat	number;
	Calc_iops	number;
	Calc_mbps	number;
Begin
	Dbms_resource_manager.calibrate_io(&dsks,&maxlat, Calc_iops, Calc_mbps, Calc_lat);

	Dbms_output.put_line('Max IOPS          : '||Calc_iops);
	Dbms_output.put_line('Max mbytes-per-sec: '||Calc_mbps);
	Dbms_output.put_line('Calc. latency     : '||Calc_lat);
End;
/
$

...

SQL> @calibrate_io
Enter value for dsks: 36
Enter value for maxlat: 10
old   7:    dbms_resource_manager.calibrate_io (&dsks, &maxlat, Calc_iops, Calc_mbps, Calc_lat);
new   7:    dbms_resource_manager.calibrate_io (36, 10, Calc_iops, Calc_mbps, Calc_lat);
Max IOPS          : 1022
Max mbytes-per-sec: 2885
Calc. latency     : 11

PL/SQL procedure successfully completed.

Elapsed: 00:07:10.57
SQL>
SQL> select num_physical_disks, max_iops, max_mbps, max_pmbps, latency
  2  from dba_rsrc_io_calibrate;

NUM_PHYSICAL_DISKS   MAX_IOPS   MAX_MBPS  MAX_PMBPS    LATENCY
------------------ ---------- ---------- ---------- ----------
                36       1022       2885        289         11

SQL>

[For 36 total disks on an Exadata system this calibration took over 7 minutes to complete, with no load on the system. This is a resource-intensive operation and should never be performed during the normal work day.]

I/O calibration isn’t the only factor in enabling Auto DOP as the parameter parallel_degree_policy must be set to AUTO or LIMITED. By default it is set to MANUAL which disables the three new parallel features of Oracle Release 11.2: Auto DOP, Parallel Statement Queuing and In-memory Parallel Execution. The three available settings, and their effects on these features, are listed below:

 

Setting         Effect
--------------- ---------------------------------------------------------
MANUAL          Disables all three new parallel query features.
                Parallel processing reverts to the behavior of prior
                releases, parallelizing statements only if they are
                hinted or the object being queried is created with or
                altered to have a DEGREE greater than the default of 1.

LIMITED         Only Auto DOP is enabled with this setting; the remaining
                two new parallel features are disabled.  With this
                setting only queries against objects associated with a 
                DEGREE of DEFAULT will be considered.

AUTO            All three new parallel features are enabled.  Queries
                and statements will be evaluated for parallel execution
                regardless of the DEGREE setting on the object or objects
                accessed.

Another parameter that affects one of the new features, this one ‘hidden’, is _parallel_statement_queueing. When set to TRUE, the default, queuing is enabled, which allows Oracle to decide if a parallel statement can be executed at run time or if it needs to wait in the queue until sufficient resources are available. And yet another ‘hidden’ parameter, _parallel_cluster_cache_policy, controls whether In-memory parallel execution is available or not. A setting of cached enables this feature.
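
Enabling the new behavior is straightforward once the I/O calibration is in place; a minimal sketch of the parameter change (parallel_degree_policy can also be set at the session level for testing) might look like this:


-- Enable auto DOP, parallel statement queuing and in-memory parallel execution
alter system set parallel_degree_policy = AUTO scope=both;

-- Verify the setting
show parameter parallel_degree_policy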

So you have all of this configured and set; what does it do for you? Let’s look at parallel statement queuing first. After building a huge version of the EMP table, twelve simultaneous sessions were used to select the average salary. Querying V$SQL_MONITOR to see how many of those sessions ended up in the queue produced the following output:

 
SQL> select sid, sql_id, sql_exec_id, sql_text
  2  from v$sql_monitor
  3  where status = 'QUEUED'
  4  order by 3;

       SID SQL_ID        SQL_EXEC_ID SQL_TEXT
---------- ------------- ----------- ------------------------------------
      1532 5du23va3p3ad0    16777216 select avg(sal) from emp
      1059 5du23va3p3ad0    16777217 select avg(sal) from emp
      1628 5du23va3p3ad0    16777218 select avg(sal) from emp
       865 5du23va3p3ad0    16777219 select avg(sal) from emp
       205 5du23va3p3ad0    16777220 select avg(sal) from emp
      2199 5du23va3p3ad0    16777221 select avg(sal) from emp
      1542 5du23va3p3ad0    16777222 select avg(sal) from emp
       159 5du23va3p3ad0    16777223 select avg(sal) from emp
      1888 5du23va3p3ad0    16777224 select avg(sal) from emp
      1234 5du23va3p3ad0    16777225 select avg(sal) from emp
       705 5du23va3p3ad0    16777226 select avg(sal) from emp

11 rows selected.

SQL>

The sql_exec_id is, basically, a run number indicating the statement’s position in the queue. It is also possible for the sql_exec_id to be the same for all queued statements (indicating simultaneous execution) if the sum of the parallel resources required for execution doesn’t exceed the maximum available for the database.

Having auto DOP configured can help performance by parallelizing statements that can benefit from it. Unlike Oracle’s earlier attempts at implementing parallelism, this mechanism takes much of the guesswork out of parallel processing. Yes, parameters like parallel_min_servers, parallel_max_servers and parallel_servers_target still need to be set by you, and if they are given very large values you could make performance worse instead of better. Careful consideration in setting those parameters is key to successfully implementing these parallel features. To help in setting parallel_servers_target the following formula can be used:

((4*cpu_count)*parallel_threads_per_cpu)*active number of instances

Given a system with the following settings:

cpu_count 24
parallel_threads_per_cpu 4

and two active instances (this works just as well for RAC as it does stand-alone databases) the default setting would be computed as follows:

((4*24)*4)*2 = 768

You can also use this to provide a guide for setting parallel_max_servers. At that point you can then determine how best to set parallel_min_servers. Even with these improved features it’s likely that Oracle won’t execute most statements in parallel, which is a good thing.
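
The arithmetic is easy enough to script; a small sketch that pulls cpu_count and parallel_threads_per_cpu from the instance and applies the formula above (the number of active instances is supplied manually here) might look like:


-- Suggested parallel_servers_target per the formula; &&instances is the number
-- of active instances (1 for a single-instance database)
select ((4 * to_number(c.value)) * to_number(p.value)) * &&instances suggested_target
from   v$parameter c,
       v$parameter p
where  c.name = 'cpu_count'
and    p.name = 'parallel_threads_per_cpu';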

When is it good practice to use parallel processing? One situation which comes to mind is creating or populating tables using complex queries that may return faster in parallel than with standard serial processing. An application I worked on was populating a table with a rather benign join but it was taking far too long to complete the load — the elapsed time exceeded the batch processing window. Using parallelism (along with regularly updated statistics) dramatically reduced the response time from over an hour to less than two minutes, well within the batch window, allowing the rest of the processing to continue. [Note that this was a batch process, run outside of the normal business day, which freed resources normally allocated to user sessions.] There are other such situations outside of data warehousing applications, but they’re exceptions and not the rule.
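
For that kind of batch load, remember that parallel DML has to be enabled at the session level before an insert will actually run in parallel; a minimal sketch (the table names and degree are illustrative, not the actual application code) looks like this:


-- Parallel DML is disabled by default and must be enabled per session
alter session enable parallel dml;

insert /*+ append parallel(tgt, 4) */ into stage_load tgt
select /*+ parallel(src, 4) */ *
from   source_data src;

commit;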

So, we’ve learned that even though parallelism would appear to make things go faster, in reality that’s often not the case due to the extra overhead of managing additional processes and consolidating the individual results into the final result set. We’ve also learned that simply slapping a /*+ parallel */ hint into a query doesn’t constitute tuning, and doing so can make performance worse instead of better. The correct choice is to properly tune the query or queries in question using resources such as AWR and ASH reports, execution plans and wait statistics to pinpoint the problem area or areas to address. Parallelism isn’t a silver bullet, wasn’t intended to be, and should be used sparingly, if at all.

Now, where’s my sandwich?

November 19, 2008

‘Magical’ Indexes

Filed under: Indexes,Performance — dfitzjarrell @ 15:32

In a forum I frequent the following question was presented (I have reposted the question in its entirety here):

Suppose I have a table structure like T1(C1,C2,C3,C4....). 
I  creates a index indx1 on (C1,C2,C3). I issue three select statements like : 

1.select * from T1 where C1= 
2.select * from T1 where C2= 
3.select * from T1 where C3= 
4.select * from T1 where C1=  and  C2=  and C3 = 


How the optimizer will use the index for each statements. 


Regards, 
Sanjoy 

Of course the only way to begin to answer that, since the original poster did not provide any DDL or sample data, is to create an example and execute it (we’ll also ignore the fact that the ‘three’ queries are numbered 1 through 4). I’ve posted the results below, run on an Oracle 11.1.0.6 instance; the statistics reported by autotrace have been removed as they provided no additional value to the example. Notice how the plans change as the data ‘topography’ changes, illustrating that the original question was ambiguous, at best:

SQL>
SQL> --
SQL> -- Suppose I have a table structure like T1(C1,C2,C3,C4....).
SQL> -- I  creates a index indx1 on (C1,C2,C3). I issue three select
SQL> -- statements like :
SQL> --
SQL> -- 1.select * from T1 where C1=
SQL> -- 2.select * from T1 where C2=
SQL> -- 3.select * from T1 where C3=
SQL> -- 4.select * from T1 where C1=  and  C2=  and C3 =
SQL> --
SQL> --
SQL> -- How the optimizer will use the index for each statements.
SQL> --
SQL> --
SQL> -- Regards,
SQL> -- Sanjoy
SQL> --
SQL>
SQL> --
SQL> -- Create the table in question
SQL> --
SQL> create table t1(
  2        c1 number,
  3        c2 varchar2(20),
  4        c3 date,
  5        c4 varchar2(10),
  6        c5 number,
  7        c6 number
  8  );

Table created.

SQL>
SQL> --
SQL> -- Create the index specified
SQL> --
SQL> create index indx1
  2  on t1(c1,c2,c3);

Index created.

SQL>
SQL> --
SQL> -- Load test data
SQL> --
SQL>
SQL> --
SQL> -- Data with unique C1, C2 and C3 values
SQL> --
SQL> begin
  2        for i in 1..10101 loop
  3         insert into t1
  4         values (i, 'Testing record '||i, trunc(sysdate+i), 'Filler', mod(i, 734), mod(i, 963));
  5        end loop;
  6  end;
  7  /

PL/SQL procedure successfully completed.

SQL>
SQL> commit;

Commit complete.

SQL>
SQL> --
SQL> -- 'Standard' statistics
SQL> --
SQL> exec dbms_stats.gather_schema_stats(ownname=>'BING');

PL/SQL procedure successfully completed.

SQL>
SQL> set autotrace on linesize 132
SQL>
SQL> select * From t1 where c1 =433;

        C1 C2                   C3        C4                 C5         C6
---------- -------------------- --------- ---------- ---------- ----------
       433 Testing record 433   26-JAN-10 Filler            433        433


Execution Plan
----------------------------------------------------------
Plan hash value: 552572096

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    46 |     3   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    46 |     3   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | INDX1 |     1 |       |     2   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C1"=433)

SQL>
SQL> select * From t1 where c2 = 'Testing record 7748';

        C1 C2                   C3        C4                 C5         C6
---------- -------------------- --------- ---------- ---------- ----------
      7748 Testing record 7748  05-FEB-30 Filler            408         44


Execution Plan
----------------------------------------------------------
Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |    46 |    22   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |     1 |    46 |    22   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("C2"='Testing record 7748')

SQL>
SQL> select * from t1 where c3 = trunc(sysdate+9);

        C1 C2                   C3        C4                 C5         C6
---------- -------------------- --------- ---------- ---------- ----------
         9 Testing record 9     28-NOV-08 Filler              9          9


Execution Plan
----------------------------------------------------------
Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |    46 |    22   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |     1 |    46 |    22   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("C3"=TRUNC(SYSDATE@!+9))

SQL>
SQL> select * From t1 where c1 = 9993 and c2 = 'Testing record 9993' and c3 = trunc(sysdate+9993);

        C1 C2                   C3        C4                 C5         C6
---------- -------------------- --------- ---------- ---------- ----------
      9993 Testing record 9993  30-MAR-36 Filler            451        363


Execution Plan
----------------------------------------------------------
Plan hash value: 552572096

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    46 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    46 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | INDX1 |     1 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C1"=9993 AND "C2"='Testing record 9993' AND
              "C3"=TRUNC(SYSDATE@!+9993))

SQL>
SQL> set autotrace off
SQL>
SQL> --
SQL> -- Statistics with auto-sized histograms on indexed columns
SQL> --
SQL> exec dbms_stats.gather_schema_stats(ownname=>'BING', method_opt => 'FOR ALL INDEXED COLUMNS SIZE AUTO');

PL/SQL procedure successfully completed.

SQL>
SQL> set autotrace on linesize 132
SQL>
SQL> select * From t1 where c1 =433;

        C1 C2                   C3        C4                 C5         C6
---------- -------------------- --------- ---------- ---------- ----------
       433 Testing record 433   26-JAN-10 Filler            433        433


Execution Plan
----------------------------------------------------------
Plan hash value: 552572096

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    46 |     3   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    46 |     3   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | INDX1 |     1 |       |     2   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C1"=433)

SQL>
SQL> select * From t1 where c2 = 'Testing record 7748';

        C1 C2                   C3        C4                 C5         C6
---------- -------------------- --------- ---------- ---------- ----------
      7748 Testing record 7748  05-FEB-30 Filler            408         44


Execution Plan
----------------------------------------------------------
Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |    46 |    22   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |     1 |    46 |    22   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("C2"='Testing record 7748')

SQL>
SQL> select * from t1 where c3 = trunc(sysdate+9);

        C1 C2                   C3        C4                 C5         C6
---------- -------------------- --------- ---------- ---------- ----------
         9 Testing record 9     28-NOV-08 Filler              9          9


Execution Plan
----------------------------------------------------------
Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |    46 |    22   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |     1 |    46 |    22   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("C3"=TRUNC(SYSDATE@!+9))

SQL>
SQL> select * From t1 where c1 = 9993 and c2 = 'Testing record 9993' and c3 = trunc(sysdate+9993);

        C1 C2                   C3        C4                 C5         C6
---------- -------------------- --------- ---------- ---------- ----------
      9993 Testing record 9993  30-MAR-36 Filler            451        363


Execution Plan
----------------------------------------------------------
Plan hash value: 552572096

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    46 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    46 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | INDX1 |     1 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C1"=9993 AND "C2"='Testing record 9993' AND
              "C3"=TRUNC(SYSDATE@!+9993))

SQL>
SQL> set autotrace off
SQL>
SQL> truncate table t1;

Table truncated.

SQL>
SQL> --
SQL> -- Data with unique C2 and C3 values, and a reasonably selective C1
SQL> --
SQL> begin
  2        for i in 1..10101 loop
  3         insert into t1
  4         values (mod(i, 43), 'Testing record '||i, trunc(sysdate+i), 'Filler', mod(i, 734), mod(i, 963));
  5        end loop;
  6  end;
  7  /

PL/SQL procedure successfully completed.

SQL>
SQL> commit;

Commit complete.

SQL>
SQL> --
SQL> -- 'Standard' statistics
SQL> --
SQL> exec dbms_stats.gather_schema_stats(ownname=>'BING');

PL/SQL procedure successfully completed.

SQL>
SQL> set autotrace on linesize 132
SQL>
SQL> select * From t1 where c1 =433;

no rows selected


Execution Plan
----------------------------------------------------------
Plan hash value: 552572096

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    45 |     3   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    45 |     3   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | INDX1 |     1 |       |     2   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C1"=433)

SQL>
SQL> select * From t1 where c2 = 'Testing record 7748';

        C1 C2                   C3        C4                 C5         C6
---------- -------------------- --------- ---------- ---------- ----------
         8 Testing record 7748  05-FEB-30 Filler            408         44


Execution Plan
----------------------------------------------------------
Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |    45 |    22   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |     1 |    45 |    22   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("C2"='Testing record 7748')

SQL>
SQL> select * from t1 where c3 = trunc(sysdate+9);

        C1 C2                   C3        C4                 C5         C6
---------- -------------------- --------- ---------- ---------- ----------
         9 Testing record 9     28-NOV-08 Filler              9          9


Execution Plan
----------------------------------------------------------
Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |    45 |    22   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |     1 |    45 |    22   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("C3"=TRUNC(SYSDATE@!+9))

SQL>
SQL> select * From t1 where c1 = 9993 and c2 = 'Testing record 9993' and c3 = trunc(sysdate+9993);

no rows selected


Execution Plan
----------------------------------------------------------
Plan hash value: 552572096

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    45 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    45 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | INDX1 |     1 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C1"=9993 AND "C2"='Testing record 9993' AND
              "C3"=TRUNC(SYSDATE@!+9993))

SQL>
SQL> set autotrace off
SQL>
SQL> --
SQL> -- Statistics with auto-sized histograms on indexed columns
SQL> --
SQL> exec dbms_stats.gather_schema_stats(ownname=>'BING', method_opt => 'FOR ALL INDEXED COLUMNS SIZE AUTO');

PL/SQL procedure successfully completed.

SQL>
SQL> set autotrace on linesize 132
SQL>
SQL> select * From t1 where c1 =433;

no rows selected


Execution Plan
----------------------------------------------------------
Plan hash value: 552572096

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    45 |     3   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    45 |     3   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | INDX1 |     1 |       |     2   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C1"=433)

SQL>
SQL> select * From t1 where c2 = 'Testing record 7748';

        C1 C2                   C3        C4                 C5         C6
---------- -------------------- --------- ---------- ---------- ----------
         8 Testing record 7748  05-FEB-30 Filler            408         44


Execution Plan
----------------------------------------------------------
Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |    45 |    22   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |     1 |    45 |    22   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("C2"='Testing record 7748')

SQL>
SQL> select * from t1 where c3 = trunc(sysdate+9);

        C1 C2                   C3        C4                 C5         C6
---------- -------------------- --------- ---------- ---------- ----------
         9 Testing record 9     28-NOV-08 Filler              9          9


Execution Plan
----------------------------------------------------------
Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |    45 |    22   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |     1 |    45 |    22   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("C3"=TRUNC(SYSDATE@!+9))

SQL>
SQL> select * From t1 where c1 = 9993 and c2 = 'Testing record 9993' and c3 = trunc(sysdate+9993);

no rows selected


Execution Plan
----------------------------------------------------------
Plan hash value: 552572096

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    45 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    45 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | INDX1 |     1 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C1"=9993 AND "C2"='Testing record 9993' AND
              "C3"=TRUNC(SYSDATE@!+9993))

SQL>
SQL> set autotrace off
SQL>
SQL> truncate table t1;

Table truncated.

SQL>
SQL> --
SQL> -- Data with unique C2 and C3 values, with a cycling C1 set of values
SQL> --
SQL> begin
  2        for i in 1..10101 loop
  3         insert into t1
  4         values (mod(i, 3), 'Testing record '||i, trunc(sysdate+i), 'Filler', mod(i, 734), mod(i, 963));
  5        end loop;
  6  end;
  7  /

PL/SQL procedure successfully completed.

SQL>
SQL> commit;

Commit complete.

SQL>
SQL> --
SQL> -- 'Standard' statistics
SQL> --
SQL> exec dbms_stats.gather_schema_stats(ownname=>'BING');

PL/SQL procedure successfully completed.

SQL>
SQL> set autotrace on linesize 132
SQL>
SQL> select * From t1 where c1 =433;

no rows selected


Execution Plan
----------------------------------------------------------
Plan hash value: 552572096

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    45 |     3   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    45 |     3   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | INDX1 |     1 |       |     2   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C1"=433)

SQL>
SQL> select * From t1 where c2 = 'Testing record 7748';

        C1 C2                   C3        C4                 C5         C6
---------- -------------------- --------- ---------- ---------- ----------
         2 Testing record 7748  05-FEB-30 Filler            408         44


Execution Plan
----------------------------------------------------------
Plan hash value: 3325179317

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    45 |     5   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    45 |     5   (0)| 00:00:01 |
|*  2 |   INDEX SKIP SCAN           | INDX1 |     1 |       |     4   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C2"='Testing record 7748')
       filter("C2"='Testing record 7748')

SQL>
SQL> select * from t1 where c3 = trunc(sysdate+9);

        C1 C2                   C3        C4                 C5         C6
---------- -------------------- --------- ---------- ---------- ----------
         0 Testing record 9     28-NOV-08 Filler              9          9


Execution Plan
----------------------------------------------------------
Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |    45 |    22   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |     1 |    45 |    22   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("C3"=TRUNC(SYSDATE@!+9))

SQL>
SQL> select * From t1 where c1 = 9993 and c2 = 'Testing record 9993' and c3 = trunc(sysdate+9993);

no rows selected


Execution Plan
----------------------------------------------------------
Plan hash value: 552572096

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    45 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    45 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | INDX1 |     1 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C1"=9993 AND "C2"='Testing record 9993' AND
              "C3"=TRUNC(SYSDATE@!+9993))

SQL>
SQL> set autotrace off
SQL>
SQL> --
SQL> -- Statistics with auto-sized histograms on indexed columns
SQL> --
SQL> exec dbms_stats.gather_schema_stats(ownname=>'BING', method_opt => 'FOR ALL INDEXED COLUMNS SIZE AUTO');

PL/SQL procedure successfully completed.

SQL>
SQL> set autotrace on linesize 132
SQL>
SQL> select * From t1 where c1 =433;

no rows selected


Execution Plan
----------------------------------------------------------
Plan hash value: 552572096

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    45 |     3   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    45 |     3   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | INDX1 |     1 |       |     2   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C1"=433)

SQL>
SQL> select * From t1 where c2 = 'Testing record 7748';

        C1 C2                   C3        C4                 C5         C6
---------- -------------------- --------- ---------- ---------- ----------
         2 Testing record 7748  05-FEB-30 Filler            408         44


Execution Plan
----------------------------------------------------------
Plan hash value: 3325179317

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    45 |     5   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    45 |     5   (0)| 00:00:01 |
|*  2 |   INDEX SKIP SCAN           | INDX1 |     1 |       |     4   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C2"='Testing record 7748')
       filter("C2"='Testing record 7748')

SQL>
SQL> select * from t1 where c3 = trunc(sysdate+9);

        C1 C2                   C3        C4                 C5         C6
---------- -------------------- --------- ---------- ---------- ----------
         0 Testing record 9     28-NOV-08 Filler              9          9


Execution Plan
----------------------------------------------------------
Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |    45 |    22   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |     1 |    45 |    22   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("C3"=TRUNC(SYSDATE@!+9))

SQL>
SQL> select * From t1 where c1 = 9993 and c2 = 'Testing record 9993' and c3 = trunc(sysdate+9993);

no rows selected


Execution Plan
----------------------------------------------------------
Plan hash value: 552572096

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    45 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    45 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | INDX1 |     1 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C1"=9993 AND "C2"='Testing record 9993' AND
              "C3"=TRUNC(SYSDATE@!+9993))

SQL>
SQL> set autotrace off
SQL>
SQL> truncate table t1;

Table truncated.

SQL>
SQL> --
SQL> -- Data with cycling C1 and C2 values and unique C3 values
SQL> --
SQL> begin
  2        for i in 1..10101 loop
  3         insert into t1
  4         values (mod(i, 3), 'Testing record '||mod(i,3), trunc(sysdate+i), 'Filler', mod(i, 734), mod(i, 963));
  5        end loop;
  6  end;
  7  /

PL/SQL procedure successfully completed.

SQL>
SQL> commit;

Commit complete.

SQL>
SQL> --
SQL> -- 'Standard' statistics
SQL> --
SQL> exec dbms_stats.gather_schema_stats(ownname=>'BING');

PL/SQL procedure successfully completed.

SQL>
SQL> set autotrace on linesize 132
SQL>
SQL> select * From t1 where c1 =433;

no rows selected


Execution Plan
----------------------------------------------------------
Plan hash value: 552572096

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    42 |     3   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    42 |     3   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | INDX1 |     1 |       |     2   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C1"=433)

SQL>
SQL> select * From t1 where c2 = 'Testing record 7748';

no rows selected


Execution Plan
----------------------------------------------------------
Plan hash value: 3325179317

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    42 |     5   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    42 |     5   (0)| 00:00:01 |
|*  2 |   INDEX SKIP SCAN           | INDX1 |     1 |       |     4   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C2"='Testing record 7748')
       filter("C2"='Testing record 7748')

SQL>
SQL> select * from t1 where c3 = trunc(sysdate+9);

        C1 C2                   C3        C4                 C5         C6
---------- -------------------- --------- ---------- ---------- ----------
         0 Testing record 0     28-NOV-08 Filler              9          9


Execution Plan
----------------------------------------------------------
Plan hash value: 3325179317

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    42 |     5   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    42 |     5   (0)| 00:00:01 |
|*  2 |   INDEX SKIP SCAN           | INDX1 |     1 |       |     4   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C3"=TRUNC(SYSDATE@!+9))
       filter("C3"=TRUNC(SYSDATE@!+9))

SQL>
SQL> select * From t1 where c1 = 9993 and c2 = 'Testing record 9993' and c3 = trunc(sysdate+9993);

no rows selected


Execution Plan
----------------------------------------------------------
Plan hash value: 552572096

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    42 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    42 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | INDX1 |     1 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C1"=9993 AND "C2"='Testing record 9993' AND
              "C3"=TRUNC(SYSDATE@!+9993))

SQL>
SQL> set autotrace off
SQL>
SQL> --
SQL> -- Statistics with auto-sized histograms on indexed columns
SQL> --
SQL> exec dbms_stats.gather_schema_stats(ownname=>'BING', method_opt => 'FOR ALL INDEXED COLUMNS SIZE AUTO');

PL/SQL procedure successfully completed.

SQL>
SQL> set autotrace on linesize 132
SQL>
SQL> select * From t1 where c1 =433;

no rows selected


Execution Plan
----------------------------------------------------------
Plan hash value: 552572096

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    42 |     3   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    42 |     3   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | INDX1 |     1 |       |     2   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C1"=433)

SQL>
SQL> select * From t1 where c2 = 'Testing record 7748';

no rows selected


Execution Plan
----------------------------------------------------------
Plan hash value: 3325179317

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    42 |     5   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    42 |     5   (0)| 00:00:01 |
|*  2 |   INDEX SKIP SCAN           | INDX1 |     1 |       |     4   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C2"='Testing record 7748')
       filter("C2"='Testing record 7748')

SQL>
SQL> select * from t1 where c3 = trunc(sysdate+9);

        C1 C2                   C3        C4                 C5         C6
---------- -------------------- --------- ---------- ---------- ----------
         0 Testing record 0     28-NOV-08 Filler              9          9


Execution Plan
----------------------------------------------------------
Plan hash value: 3325179317

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    42 |     5   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    42 |     5   (0)| 00:00:01 |
|*  2 |   INDEX SKIP SCAN           | INDX1 |     1 |       |     4   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C3"=TRUNC(SYSDATE@!+9))
       filter("C3"=TRUNC(SYSDATE@!+9))

SQL>
SQL> select * From t1 where c1 = 9993 and c2 = 'Testing record 9993' and c3 = trunc(sysdate+9993);

no rows selected


Execution Plan
----------------------------------------------------------
Plan hash value: 552572096

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    42 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    42 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | INDX1 |     1 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C1"=9993 AND "C2"='Testing record 9993' AND
              "C3"=TRUNC(SYSDATE@!+9993))

SQL>
SQL> set autotrace off
SQL>

Note that as the leading index columns (C1 and C2) become less selective (fewer distinct values), the plans change from TABLE ACCESS FULL to INDEX SKIP SCAN for queries that do not include column C1 in the where clause. Note also that the queries themselves did NOT change, and that gathering histograms on the indexed columns made no difference to the plans.
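That behaviour makes sense when you remember that an index skip scan is, in effect, one index range scan per distinct value (or combination of values) of the skipped leading columns, so the fewer distinct values C1 (and C2) contain the cheaper a skip scan looks to the optimizer. A quick way to see the figures the optimizer is working from (a minimal sketch, assuming the T1 table and INDX1 index created above and freshly gathered statistics) is to query the dictionary:

-- sketch: distinct-value counts behind the skip-scan costing
select column_name, num_distinct, histogram
from   user_tab_col_statistics
where  table_name = 'T1'
and    column_name in ('C1', 'C2', 'C3')
order  by column_name;

With the mod(i, 3) data C1 reports only three distinct values, which is why the skip scan over INDX1 is costed so cheaply compared to a full scan of the table.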

This changes for 11.2.0.2 (for the curious, the results remain the same in version 12.2.0.1).
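Before looking at that run, note that if a second database isn't available one rough way to compare optimizer behaviour across versions is the optimizer_features_enable parameter. It is only an approximation (it does not undo every internal change between releases), so treat the following as a minimal sketch rather than a faithful reproduction of an older release:

-- sketch: approximate the older optimizer behaviour in the same session,
-- then switch back and compare the autotrace plans
alter session set optimizer_features_enable = '10.2.0.4';
set autotrace on linesize 132
select * From t1 where c2 = 'Testing record 7748';
alter session set optimizer_features_enable = '11.2.0.2';
select * From t1 where c2 = 'Testing record 7748';
set autotrace off

With that caveat noted, here is the 11.2.0.2 run: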

SQL>
SQL> 
SQL> --
SQL> -- I create an index indx1 on (C1,C2,C3) and issue four select
SQL> -- statements like:
SQL> --
SQL> -- 1.select * from T1 where C1=
SQL> -- 2.select * from T1 where C2=
SQL> -- 3.select * from T1 where C3=
SQL> -- 4.select * from T1 where C1=  and  C2=  and C3 =
SQL> --
SQL> --
SQL> --
SQL> --
SQL> -- Regards,
SQL> -- Sanjoy
SQL> --
SQL> 
SQL> --
SQL> -- Create the table in question
SQL> --
SQL> create table t1(
  2      c1 number,
  3      c2 varchar2(20),
  4      c3 date,
  5      c4 varchar2(10),
  6      c5 number,
  7      c6 number
  8  );

Table created.

SQL> 
SQL> 
SQL> --
SQL> -- Create the index specified
SQL> --
SQL> create index indx1
  2  on t1(c1,c2,c3);

Index created.

SQL> 
SQL> 
SQL> --
SQL> -- Load test data
SQL> --
SQL> 
SQL> --
SQL> -- Data with unique C1, C2 and C3 values
SQL> --
SQL> begin
  2      for i in 1..10101 loop
  3       insert into t1
  4       values (i, 'Testing record '||i, trunc(sysdate+i), 'Filler', mod(i, 734), mod(i, 963));
  5      end loop;
  6  end;
  7  /

PL/SQL procedure successfully completed.

SQL> 
SQL> 
SQL> commit;

Commit complete.

SQL> 
SQL> 
SQL> --
SQL> -- 'Standard' statistics
SQL> --
SQL> exec dbms_stats.gather_schema_stats(ownname=>'BING');

PL/SQL procedure successfully completed.

SQL> 
SQL> 
SQL> 
SQL> set autotrace on linesize 132
SQL> 
SQL> select * From t1 where c1 =433;

        C1 C2                   C3        C4                 C5         C6
---------- -------------------- --------- ---------- ---------- ----------
       433 Testing record 433   07-MAY-13 Filler            433        433


Execution Plan
----------------------------------------------------------
Plan hash value: 552572096

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    47 |     1   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    47 |     1   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | INDX1 |     1 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C1"=433)




SQL> 
SQL> select * From t1 where c2 = 'Testing record 7748';

        C1 C2                   C3        C4                 C5         C6
---------- -------------------- --------- ---------- ---------- ----------
      7748 Testing record 7748  17-MAY-33 Filler            408         44


Execution Plan
----------------------------------------------------------
Plan hash value: 3325179317

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    47 |    11   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    47 |    11   (0)| 00:00:01 |
|*  2 |   INDEX SKIP SCAN           | INDX1 |     1 |       |    11   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C2"='Testing record 7748')
       filter("C2"='Testing record 7748')




SQL> 
SQL> select * from t1 where c3 = trunc(sysdate+9);

        C1 C2                   C3        C4                 C5         C6
---------- -------------------- --------- ---------- ---------- ----------
         9 Testing record 9     09-MAR-12 Filler              9          9


Execution Plan
----------------------------------------------------------
Plan hash value: 3325179317

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    47 |    11   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    47 |    11   (0)| 00:00:01 |
|*  2 |   INDEX SKIP SCAN           | INDX1 |     1 |       |    11   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C3"=TRUNC(SYSDATE@!+9))
       filter("C3"=TRUNC(SYSDATE@!+9))



SQL> 
SQL> select * From t1 where c1 = 9993 and c2 = 'Testing record 9993' and c3 = trunc(sysdate+9993);

        C1 C2                   C3        C4                 C5         C6
---------- -------------------- --------- ---------- ---------- ----------
      9993 Testing record 9993  10-JUL-39 Filler            451        363


Execution Plan
----------------------------------------------------------
Plan hash value: 552572096

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    47 |     1   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    47 |     1   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | INDX1 |     1 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C1"=9993 AND "C2"='Testing record 9993' AND
              "C3"=TRUNC(SYSDATE@!+9993))




SQL> 
SQL> set autotrace off
SQL> 
SQL> --
SQL> -- Statistics with auto-sized histograms on indexed columns
SQL> --
SQL> exec dbms_stats.gather_schema_stats(ownname=>'BING', method_opt => 'FOR ALL INDEXED COLUMNS SIZE AUTO');

PL/SQL procedure successfully completed.

SQL> 
SQL> set autotrace on linesize 132
SQL> 
SQL> select * From t1 where c1 =433;

        C1 C2                   C3        C4                 C5         C6
---------- -------------------- --------- ---------- ---------- ----------
       433 Testing record 433   07-MAY-13 Filler            433        433


Execution Plan
----------------------------------------------------------
Plan hash value: 552572096

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    47 |     1   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    47 |     1   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | INDX1 |     1 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C1"=433)



SQL> 
SQL> select * From t1 where c2 = 'Testing record 7748';

        C1 C2                   C3        C4                 C5         C6
---------- -------------------- --------- ---------- ---------- ----------
      7748 Testing record 7748  17-MAY-33 Filler            408         44


Execution Plan
----------------------------------------------------------
Plan hash value: 3325179317

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    47 |    11   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    47 |    11   (0)| 00:00:01 |
|*  2 |   INDEX SKIP SCAN           | INDX1 |     1 |       |    11   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C2"='Testing record 7748')
       filter("C2"='Testing record 7748')




SQL> 
SQL> select * from t1 where c3 = trunc(sysdate+9);

        C1 C2                   C3        C4                 C5         C6
---------- -------------------- --------- ---------- ---------- ----------
         9 Testing record 9     09-MAR-12 Filler              9          9


Execution Plan
----------------------------------------------------------
Plan hash value: 3325179317

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    47 |    11   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    47 |    11   (0)| 00:00:01 |
|*  2 |   INDEX SKIP SCAN           | INDX1 |     1 |       |    11   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C3"=TRUNC(SYSDATE@!+9))
       filter("C3"=TRUNC(SYSDATE@!+9))



SQL> 
SQL> select * From t1 where c1 = 9993 and c2 = 'Testing record 9993' and c3 = trunc(sysdate+9993);

        C1 C2                   C3        C4                 C5         C6
---------- -------------------- --------- ---------- ---------- ----------
      9993 Testing record 9993  10-JUL-39 Filler            451        363


Execution Plan
----------------------------------------------------------
Plan hash value: 552572096

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    47 |     1   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    47 |     1   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | INDX1 |     1 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C1"=9993 AND "C2"='Testing record 9993' AND
              "C3"=TRUNC(SYSDATE@!+9993))




SQL> 
SQL> set autotrace off
SQL> 
SQL> truncate table t1;

Table truncated.

SQL> 
SQL> --
SQL> -- Data with unique C2 and C3 values, and a reasonably selective C1
SQL> --
SQL> begin
  2      for i in 1..10101 loop
  3       insert into t1
  4       values (mod(i, 43), 'Testing record '||i, trunc(sysdate+i), 'Filler', mod(i, 734), mod(i, 963));
  5      end loop;
  6  end;
  7  /

PL/SQL procedure successfully completed.

SQL> 
SQL> commit;

Commit complete.

SQL> 
SQL> --
SQL> -- 'Standard' statistics
SQL> --
SQL> exec dbms_stats.gather_schema_stats(ownname=>'BING');

PL/SQL procedure successfully completed.

SQL> 
SQL> set autotrace on linesize 132
SQL> 
SQL> select * From t1 where c1 =433;

no rows selected


Execution Plan
----------------------------------------------------------
Plan hash value: 552572096

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    46 |     1   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    46 |     1   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | INDX1 |     1 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C1"=433)


SQL> 
SQL> select * From t1 where c2 = 'Testing record 7748';

        C1 C2                   C3        C4                 C5         C6
---------- -------------------- --------- ---------- ---------- ----------
         8 Testing record 7748  17-MAY-33 Filler            408         44


Execution Plan
----------------------------------------------------------
Plan hash value: 3325179317

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    46 |     9   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    46 |     9   (0)| 00:00:01 |
|*  2 |   INDEX SKIP SCAN           | INDX1 |     1 |       |     9   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C2"='Testing record 7748')
       filter("C2"='Testing record 7748')



SQL> 
SQL> select * from t1 where c3 = trunc(sysdate+9);

        C1 C2                   C3        C4                 C5         C6
---------- -------------------- --------- ---------- ---------- ----------
         9 Testing record 9     09-MAR-12 Filler              9          9


Execution Plan
----------------------------------------------------------
Plan hash value: 3325179317

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    46 |    16   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    46 |    16   (0)| 00:00:01 |
|*  2 |   INDEX SKIP SCAN           | INDX1 |     1 |       |    16   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C3"=TRUNC(SYSDATE@!+9))
       filter("C3"=TRUNC(SYSDATE@!+9))




SQL> 
SQL> select * From t1 where c1 = 9993 and c2 = 'Testing record 9993' and c3 = trunc(sysdate+9993);

no rows selected


Execution Plan
----------------------------------------------------------
Plan hash value: 552572096

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    46 |     1   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    46 |     1   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | INDX1 |     1 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C1"=9993 AND "C2"='Testing record 9993' AND
              "C3"=TRUNC(SYSDATE@!+9993))




SQL> 
SQL> set autotrace off
SQL> 
SQL> --
SQL> -- Statistics with auto-sized histograms on indexed columns
SQL> --
SQL> exec dbms_stats.gather_schema_stats(ownname=>'BING', method_opt => 'FOR ALL INDEXED COLUMNS SIZE AUTO');

PL/SQL procedure successfully completed.

SQL> 
SQL> set autotrace on linesize 132
SQL> 
SQL> select * From t1 where c1 =433;

no rows selected


Execution Plan
----------------------------------------------------------
Plan hash value: 552572096

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    46 |     1   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    46 |     1   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | INDX1 |     1 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C1"=433)




SQL> 
SQL> select * From t1 where c2 = 'Testing record 7748';

        C1 C2                   C3        C4                 C5         C6
---------- -------------------- --------- ---------- ---------- ----------
         8 Testing record 7748  17-MAY-33 Filler            408         44


Execution Plan
----------------------------------------------------------
Plan hash value: 3325179317

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    46 |     9   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    46 |     9   (0)| 00:00:01 |
|*  2 |   INDEX SKIP SCAN           | INDX1 |     1 |       |     9   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C2"='Testing record 7748')
       filter("C2"='Testing record 7748')




SQL> 
SQL> select * from t1 where c3 = trunc(sysdate+9);

        C1 C2                   C3        C4                 C5         C6
---------- -------------------- --------- ---------- ---------- ----------
         9 Testing record 9     09-MAR-12 Filler              9          9


Execution Plan
----------------------------------------------------------
Plan hash value: 3325179317

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    46 |    16   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    46 |    16   (0)| 00:00:01 |
|*  2 |   INDEX SKIP SCAN           | INDX1 |     1 |       |    16   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C3"=TRUNC(SYSDATE@!+9))
       filter("C3"=TRUNC(SYSDATE@!+9))




SQL> 
SQL> select * From t1 where c1 = 9993 and c2 = 'Testing record 9993' and c3 = trunc(sysdate+9993);

no rows selected


Execution Plan
----------------------------------------------------------
Plan hash value: 552572096

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    46 |     1   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    46 |     1   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | INDX1 |     1 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C1"=9993 AND "C2"='Testing record 9993' AND
              "C3"=TRUNC(SYSDATE@!+9993))



SQL> 
SQL> set autotrace off
SQL> 
SQL> truncate table t1;

Table truncated.

SQL> 
SQL> --
SQL> -- Data with unique C2 and C3 values, with a cycling C1 set of values
SQL> --
SQL> begin
  2      for i in 1..10101 loop
  3       insert into t1
  4       values (mod(i, 3), 'Testing record '||i, trunc(sysdate+i), 'Filler', mod(i, 734), mod(i, 963));
  5      end loop;
  6  end;
  7  /

PL/SQL procedure successfully completed.

SQL> 
SQL> commit;

Commit complete.

SQL> 
SQL> --
SQL> -- 'Standard' statistics
SQL> --
SQL> exec dbms_stats.gather_schema_stats(ownname=>'BING');

PL/SQL procedure successfully completed.

SQL> 
SQL> set autotrace on linesize 132
SQL> 
SQL> select * From t1 where c1 =433;

no rows selected


Execution Plan
----------------------------------------------------------
Plan hash value: 552572096

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    46 |     1   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    46 |     1   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | INDX1 |     1 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C1"=433)



SQL> 
SQL> select * From t1 where c2 = 'Testing record 7748';

        C1 C2                   C3        C4                 C5         C6
---------- -------------------- --------- ---------- ---------- ----------
         2 Testing record 7748  17-MAY-33 Filler            408         44


Execution Plan
----------------------------------------------------------
Plan hash value: 3325179317

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    46 |     1   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    46 |     1   (0)| 00:00:01 |
|*  2 |   INDEX SKIP SCAN           | INDX1 |     1 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C2"='Testing record 7748')
       filter("C2"='Testing record 7748')



SQL> 
SQL> select * from t1 where c3 = trunc(sysdate+9);

        C1 C2                   C3        C4                 C5         C6
---------- -------------------- --------- ---------- ---------- ----------
         0 Testing record 9     09-MAR-12 Filler              9          9


Execution Plan
----------------------------------------------------------
Plan hash value: 3325179317

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    46 |    20   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    46 |    20   (0)| 00:00:01 |
|*  2 |   INDEX SKIP SCAN           | INDX1 |     1 |       |    19   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C3"=TRUNC(SYSDATE@!+9))
       filter("C3"=TRUNC(SYSDATE@!+9))




SQL> 
SQL> select * From t1 where c1 = 9993 and c2 = 'Testing record 9993' and c3 = trunc(sysdate+9993);

no rows selected


Execution Plan
----------------------------------------------------------
Plan hash value: 552572096

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    46 |     1   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    46 |     1   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | INDX1 |     1 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C1"=9993 AND "C2"='Testing record 9993' AND
              "C3"=TRUNC(SYSDATE@!+9993))




SQL> 
SQL> set autotrace off
SQL> 
SQL> --
SQL> -- Statistics with auto-sized histograms on indexed columns
SQL> --
SQL> exec dbms_stats.gather_schema_stats(ownname=>'BING', method_opt => 'FOR ALL INDEXED COLUMNS SIZE AUTO');

PL/SQL procedure successfully completed.

SQL> 
SQL> set autotrace on linesize 132
SQL> 
SQL> select * From t1 where c1 =433;

no rows selected


Execution Plan
----------------------------------------------------------
Plan hash value: 552572096

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    46 |     1   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    46 |     1   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | INDX1 |     1 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C1"=433)




SQL> 
SQL> select * From t1 where c2 = 'Testing record 7748';

        C1 C2                   C3        C4                 C5         C6
---------- -------------------- --------- ---------- ---------- ----------
         2 Testing record 7748  17-MAY-33 Filler            408         44


Execution Plan
----------------------------------------------------------
Plan hash value: 3325179317

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    46 |     1   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    46 |     1   (0)| 00:00:01 |
|*  2 |   INDEX SKIP SCAN           | INDX1 |     1 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C2"='Testing record 7748')
       filter("C2"='Testing record 7748')



SQL> 
SQL> select * from t1 where c3 = trunc(sysdate+9);

        C1 C2                   C3        C4                 C5         C6
---------- -------------------- --------- ---------- ---------- ----------
         0 Testing record 9     09-MAR-12 Filler              9          9


Execution Plan
----------------------------------------------------------
Plan hash value: 3325179317

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    46 |    20   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    46 |    20   (0)| 00:00:01 |
|*  2 |   INDEX SKIP SCAN           | INDX1 |     1 |       |    19   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C3"=TRUNC(SYSDATE@!+9))
       filter("C3"=TRUNC(SYSDATE@!+9))




SQL> 
SQL> select * From t1 where c1 = 9993 and c2 = 'Testing record 9993' and c3 = trunc(sysdate+9993);

no rows selected


Execution Plan
----------------------------------------------------------
Plan hash value: 552572096

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    46 |     1   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    46 |     1   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | INDX1 |     1 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C1"=9993 AND "C2"='Testing record 9993' AND
              "C3"=TRUNC(SYSDATE@!+9993))




SQL> 
SQL> set autotrace off
SQL> 
SQL> truncate table t1;

Table truncated.

SQL> 
SQL> --
SQL> -- Data with cycling C1 and C2 values and unique C3 values
SQL> --
SQL> begin
  2      for i in 1..10101 loop
  3       insert into t1
  4       values (mod(i, 3), 'Testing record '||mod(i,3), trunc(sysdate+i), 'Filler', mod(i, 734), mod(i, 963));
  5      end loop;
  6  end;
  7  /

PL/SQL procedure successfully completed.

SQL> 
SQL> commit;

Commit complete.

SQL> 
SQL> --
SQL> -- 'Standard' statistics
SQL> --
SQL> exec dbms_stats.gather_schema_stats(ownname=>'BING');

PL/SQL procedure successfully completed.

SQL> 
SQL> set autotrace on linesize 132
SQL> 
SQL> select * From t1 where c1 =433;

no rows selected


Execution Plan
----------------------------------------------------------
Plan hash value: 552572096

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    43 |     1   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    43 |     1   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | INDX1 |     1 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C1"=433)




SQL> 
SQL> select * From t1 where c2 = 'Testing record 7748';

no rows selected


Execution Plan
----------------------------------------------------------
Plan hash value: 3325179317

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    43 |     1   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    43 |     1   (0)| 00:00:01 |
|*  2 |   INDEX SKIP SCAN           | INDX1 |     1 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C2"='Testing record 7748')
       filter("C2"='Testing record 7748')



SQL> 
SQL> select * from t1 where c3 = trunc(sysdate+9);

        C1 C2                   C3        C4                 C5         C6
---------- -------------------- --------- ---------- ---------- ----------
         0 Testing record 0     09-MAR-12 Filler              9          9


Execution Plan
----------------------------------------------------------
Plan hash value: 3325179317

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    43 |     1   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    43 |     1   (0)| 00:00:01 |
|*  2 |   INDEX SKIP SCAN           | INDX1 |     1 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C3"=TRUNC(SYSDATE@!+9))
       filter("C3"=TRUNC(SYSDATE@!+9))



SQL> 
SQL> select * From t1 where c1 = 9993 and c2 = 'Testing record 9993' and c3 = trunc(sysdate+9993);

no rows selected


Execution Plan
----------------------------------------------------------
Plan hash value: 552572096

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    43 |     1   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    43 |     1   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | INDX1 |     1 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C1"=9993 AND "C2"='Testing record 9993' AND
              "C3"=TRUNC(SYSDATE@!+9993))




SQL> 
SQL> set autotrace off
SQL> 
SQL> --
SQL> -- Statistics with auto-sized histograms on indexed columns
SQL> --
SQL> exec dbms_stats.gather_schema_stats(ownname=>'BING', method_opt => 'FOR ALL INDEXED COLUMNS SIZE AUTO');

PL/SQL procedure successfully completed.

SQL> 
SQL> set autotrace on linesize 132
SQL> 
SQL> select * From t1 where c1 =433;

no rows selected


Execution Plan
----------------------------------------------------------
Plan hash value: 552572096

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    43 |     1   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    43 |     1   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | INDX1 |     1 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C1"=433)




SQL> 
SQL> select * From t1 where c2 = 'Testing record 7748';

no rows selected


Execution Plan
----------------------------------------------------------
Plan hash value: 3325179317

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    43 |     1   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    43 |     1   (0)| 00:00:01 |
|*  2 |   INDEX SKIP SCAN           | INDX1 |     1 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C2"='Testing record 7748')
       filter("C2"='Testing record 7748')



SQL> 
SQL> select * from t1 where c3 = trunc(sysdate+9);

        C1 C2                   C3        C4                 C5         C6
---------- -------------------- --------- ---------- ---------- ----------
         0 Testing record 0     09-MAR-12 Filler              9          9


Execution Plan
----------------------------------------------------------
Plan hash value: 3325179317

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    43 |     1   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    43 |     1   (0)| 00:00:01 |
|*  2 |   INDEX SKIP SCAN           | INDX1 |     1 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C3"=TRUNC(SYSDATE@!+9))
       filter("C3"=TRUNC(SYSDATE@!+9))



SQL> 
SQL> select * From t1 where c1 = 9993 and c2 = 'Testing record 9993' and c3 = trunc(sysdate+9993);

no rows selected


Execution Plan
----------------------------------------------------------
Plan hash value: 552572096

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    43 |     1   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    43 |     1   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | INDX1 |     1 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("C1"=9993 AND "C2"='Testing record 9993' AND
              "C3"=TRUNC(SYSDATE@!+9993))



SQL> 
SQL> set autotrace off
SQL> 
SQL>

Notice that in the later release INDEX SKIP SCAN is chosen by the optimizer when any indexed column other than the leading column is used in the where clause.

So how does Oracle treat a ‘select * from t1 where c3 = trunc(sysdate+9)’ query when the index is built on columns (c1,c2,c3)? That depends entirely upon the data and how skewed (or not) it may be. [It also depends upon the Oracle version in use, as releases after 8.1.7.4 implemented changes in the available query plans and in how indexes can be used. Again, note the differences between how 11.1.0.6 and 11.2.0.2 behave.] Because of deletes and inserts a plan can change even though the query has not, so there is no ‘definitive’ answer to the question as written; the conditions are simply too vague to produce repeatable results. The quickest way to see which behavior to expect for a given data set is to look at the leading column’s cardinality, as shown below.
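Whether a skip scan is even plausible comes down to how many distinct values the leading column holds, so a quick check of c1’s cardinality and distribution tells you what to expect. This is only a sketch against the t1 test table used in the examples above:

-- How many distinct leading-column values exist relative to the row count?
select count(distinct c1) as distinct_c1,
       count(*)           as total_rows
from   t1;

-- And the actual distribution, to spot skew in the leading column.
select   c1, count(*) as row_count
from     t1
group by c1
order by c1;

The fewer distinct c1 values there are, the fewer index subtrees a skip scan has to probe and the cheaper it becomes; with many distinct values the same scan must probe far more of the index, which shows up as a higher cost in the plans above.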

And that’s a definite maybe.

April 17, 2008

To "b" or not to "b"

Filed under: Indexes,Performance — dfitzjarrell @ 13:12

With the plethora of database-centric applications available today, and the performance problems they can generate, it can be a worthwhile effort to determine which vendor-created indexes are, and are not, being used. (This is especially helpful if you’re working closely with the application vendor to improve their product.) Of course, one way to do this is to set event 10046 at level 8 or 12 and let the trace files fly so they can be analyzed later to see which indexes the application queries actually use. That could be a long and tedious process; one would think there is a better way to accomplish this.
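(For the record, ‘letting the trace files fly’ for a single session means something like the following sketch; level 8 adds wait events, level 12 adds waits and bind values.)

-- Enable extended SQL trace (event 10046) for the current session.
alter session set events '10046 trace name context forever, level 8';

-- ... run the application workload to be analyzed ...

-- Turn the trace back off when finished.
alter session set events '10046 trace name context off';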

There is.

Oh, I suppose you’d like to know this better way … it’s really rather simple:

Let Oracle do the work for you.

So let’s see how we tell Oracle to do this task for us so our job is much easier.

Oracle has provided a mechanism (since at least Oracle 8.1.6) to monitor an index for usage using

alter index  monitoring usage;

The results of that monitoring are found in the V$OBJECT_USAGE view, in a column named, strangely enough, USED. This isn’t a long, boring thesis on how, when, where, who and why the index in question was used; it reports only that the index either is or is not used. The ‘window’ spans the time period starting with the execution of the above command and ends when the following is issued:

alter index <index_name> nomonitoring usage;

The data remains in the V$OBJECT_USAGE view until another monitoring ‘window’ is started.

So, let’s see an example of how this works. We’ll use the EMP table from the SCOTT/TIGER demonstration schema:

SQL>
SQL> --
SQL> -- Create an index on the EMPNO column
SQL> -- of the EMP table
SQL> --
SQL> create index emp_eno_idx
  2  on emp(empno);

Index created.

SQL>
SQL> --
SQL> -- Let's monitor the index to see if
SQL> -- it's being used
SQL> --
SQL> alter index emp_eno_idx monitoring usage;

Index altered.

SQL>
SQL> --
SQL> -- Now, let's run some queries
SQL> --
SQL> -- First, let's get everything from the
SQL> -- EMP table
SQL> --
SQL> select * from emp;

     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
      7369 SMITH      CLERK           7902 17-DEC-80        800                    20
      7499 ALLEN      SALESMAN        7698 20-FEB-81       1600        300         30
      7521 WARD       SALESMAN        7698 22-FEB-81       1250        500         30
      7566 JONES      MANAGER         7839 02-APR-81       2975                    20
      7654 MARTIN     SALESMAN        7698 28-SEP-81       1250       1400         30
      7698 BLAKE      MANAGER         7839 01-MAY-81       2850                    30
      7782 CLARK      MANAGER         7839 09-JUN-81       2450                    10
      7788 SCOTT      ANALYST         7566 09-DEC-82       3000                    20
      7839 KING       PRESIDENT            17-NOV-81       5000                    10
      7844 TURNER     SALESMAN        7698 08-SEP-81       1500          0         30
      7876 ADAMS      CLERK           7788 12-JAN-83       1100                    20
      7900 JAMES      CLERK           7698 03-DEC-81        950                    30
      7902 FORD       ANALYST         7566 03-DEC-81       3000                    20
      7934 MILLER     CLERK           7782 23-JAN-82       1300                    10

14 rows selected.

SQL>
SQL> --
SQL> -- Obviously the index hasn't yet been
SQL> -- used
SQL> --
SQL> select index_name, table_name, used from v$object_usage;

INDEX_NAME                     TABLE_NAME                     USE
------------------------------ ------------------------------ ---
EMP_ENO_IDX                    EMP                            NO

1 row selected.

SQL>
SQL> --
SQL> -- So let's run a qualified query and
SQL> -- see if things change
SQL> --
SQL> -- Since the DEPTNO column isn't indexed
SQL> -- the monitored index still shouldn't be
SQL> -- used
SQL> --
SQL> select * from emp where deptno = 30;

     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
      7499 ALLEN      SALESMAN        7698 20-FEB-81       1600        300         30
      7521 WARD       SALESMAN        7698 22-FEB-81       1250        500         30
      7654 MARTIN     SALESMAN        7698 28-SEP-81       1250       1400         30
      7698 BLAKE      MANAGER         7839 01-MAY-81       2850                    30
      7844 TURNER     SALESMAN        7698 08-SEP-81       1500          0         30
      7900 JAMES      CLERK           7698 03-DEC-81        950                    30

6 rows selected.

SQL>
SQL> --
SQL> -- And we see it isn't
SQL> --
SQL> select index_name, table_name, used from v$object_usage;

INDEX_NAME                     TABLE_NAME                     USE
------------------------------ ------------------------------ ---
EMP_ENO_IDX                    EMP                            NO

1 row selected.

SQL>
SQL> --
SQL> -- Yet another qualified query, this time
SQL> -- using the indexed column
SQL> --
SQL>  select * from emp where empno > 7000;

     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
      7369 SMITH      CLERK           7902 17-DEC-80        800          0         20
      7499 ALLEN      SALESMAN        7698 20-FEB-81       1600        300         30
      7521 WARD       SALESMAN        7698 22-FEB-81       1250        500         30
      7566 JONES      MANAGER         7839 02-APR-81       2975          0         20
      7654 MARTIN     SALESMAN        7698 28-SEP-81       1250       1400         30
      7698 BLAKE      MANAGER         7839 01-MAY-81       2850          0         30
      7782 CLARK      MANAGER         7839 09-JUN-81       2450          0         10
      7788 SCOTT      ANALYST         7566 19-APR-87       3000          0         20
      7839 KING       PRESIDENT            17-NOV-81       5000          0         10
      7844 TURNER     SALESMAN        7698 08-SEP-81       1500          0         30
      7876 ADAMS      CLERK           7788 23-MAY-87       1100          0         20

     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
      7900 JAMES      CLERK           7698 03-DEC-81        950          0         30
      7902 FORD       ANALYST         7566 03-DEC-81       3000          0         20
      7934 MILLER     CLERK           7782 23-JAN-82       1300          0         10

14 rows selected.

SQL> -- We see the index is now being used, or at
SQL> -- least it was for that last query
SQL> --
SQL> select index_name, table_name, used from v$object_usage;

INDEX_NAME                     TABLE_NAME                     USE
------------------------------ ------------------------------ ---
EMP_ENO_IDX                    EMP                            YES

1 row selected.

SQL>
SQL> --
SQL> -- We'll try another query using that column
SQL> --
SQL> -- Let's set autotrace on to see if the index
SQL> -- is being used in this example
SQL> --
SQL> set autotrace on
SQL> select * From emp where empno is null;

no rows selected


Execution Plan
----------------------------------------------------------
Plan hash value: 3712041407

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |    87 |     0   (0)|          |
|*  1 |  FILTER            |      |       |       |            |          |
|   2 |   TABLE ACCESS FULL| EMP  |    14 |  1218 |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(NULL IS NOT NULL)

Note
-----
   - dynamic sampling used for this statement


Statistics
----------------------------------------------------------
          4  recursive calls
          0  db block gets
          8  consistent gets
          0  physical reads
          0  redo size
        353  bytes sent via SQL*Net to client
        239  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          0  rows processed

SQL> set autotrace off
SQL>
SQL> --
SQL> -- Since the index has been marked as used
SQL> -- it remains in the USED state for this
SQL> -- monitoring window even though the last
SQL> -- query didn't use the index at all
SQL> --
SQL> select index_name, table_name, used from v$object_usage;

INDEX_NAME                     TABLE_NAME                     USE
------------------------------ ------------------------------ ---
EMP_ENO_IDX                    EMP                            YES

1 row selected.

SQL>
SQL> --
SQL> -- Turn off the usage monitoring
SQL> --
SQL> alter index emp_eno_idx nomonitoring usage;

Index altered.

SQL>
SQL> --
SQL> -- And the last generated data remains
SQL> --
SQL> select index_name, table_name, used from v$object_usage;

INDEX_NAME                     TABLE_NAME                     USE
------------------------------ ------------------------------ ---
EMP_ENO_IDX                    EMP                            YES

1 row selected.

SQL> 

Letting Oracle monitor index usage is much easier than traipsing through piles of event 10046 trace files looking for index scans, and I’m happy they’ve provided such a tool. But you may run across an index which is used yet won’t be marked as such in V$OBJECT_USAGE. How can this be? Oracle can use the statistics from the index in determining the best query plan, and when those statistics are gone (as when the index has been dropped) performance can suffer; the optimizer generates a decision tree when each query is hard parsed, and missing index statistics may direct the optimizer down a path it might not have taken when the statistics existed. Oracle is, in one respect, correct in that the index in question hasn’t been read; however, the optimizer did use its statistics to perform path elimination. So, before heading straight for the ‘drop index’ command it would be prudent to verify the index in question really isn’t being used in any way (this is why we have test systems, correct?). Dropping the index on a test database and then verifying that no performance degradation occurs is, in my mind, a good idea. If, after the tests indicate an index may truly be unused, performance problems arise because that index is missing, it can be recreated to restore the application to its original lustre.
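A sketch of that drop-and-test approach, keeping the index definition on hand so the ‘undo’ is trivial (the index and schema names are the ones from this example; DBMS_METADATA does the transcription work):

-- Capture the index DDL before dropping it so it can be recreated quickly if performance suffers.
set long 10000
select dbms_metadata.get_ddl('INDEX', 'EMP_ENO_IDX', 'SCOTT') from dual;

-- Drop the index on the test system only.
drop index emp_eno_idx;

-- ... run the representative workload and compare timings and plans ...
-- If anything degrades, recreate the index from the saved DDL.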

Some suggest that simply setting an index to UNUSABLE would provide the same conditions as dropping it, but disabling an index in that fashion doesn’t remove the statistics generated on that index. If a query or process is using those statistics while not actually accessing the index, the same conditions don’t exist, and one could be lulled into a false sense of security that the index in question is truly unused. Yes, actual access to the index is no longer allowed, but since the index wasn’t being read to begin with (only its statistics were used by the CBO for cost calculations) I can’t accept that the same run-time conditions exist. Eventually the statistics will become outdated and will no longer be used, but that could take a week, a month or longer (depending upon system activity). For those DBAs in a hurry (and, face it, sometimes management IS in a hurry for results) setting an index to UNUSABLE may not be a valid way to discover whether it’s actually used or not.
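If you do want to see what the UNUSABLE route leaves behind, a quick look at the dictionary makes the point. This is only a sketch; in my experience the index statistics remain visible until they are re-gathered:

-- Mark the index unusable instead of dropping it ...
alter index emp_eno_idx unusable;

-- ... then check what the dictionary still reports; the optimizer statistics
-- (num_rows, distinct_keys, clustering_factor) are typically still present.
select index_name, status, num_rows, distinct_keys, clustering_factor, last_analyzed
  from user_indexes
 where index_name = 'EMP_ENO_IDX';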

There is a way in Oracle 11.1 and later releases: set the index to INVISIBLE. This prevents the optimizer from using the index, and its associated statistics, without having to drop it:


SQL> create index emp_empno_idx
  2  on emp(empno)
  3  invisible;

Index created.

SQL> 
SQL> set autotrace on
SQL> 
SQL> select
  2  	     *
  3  from
  4  	     emp
  5  where
  6  	     empno < 7400;

     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO                                               
---------- ---------- --------- ---------- --------- ---------- ---------- ----------                                               
      7369 SMITH      CLERK           7902 17-DEC-80        800                    20                                               


Execution Plan
----------------------------------------------------------                                                                          
Plan hash value: 3956160932                                                                                                         
                                                                                                                                    
--------------------------------------------------------------------------                                                          
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |                                                          
--------------------------------------------------------------------------                                                          
|   0 | SELECT STATEMENT  |      |     1 |    87 |     3   (0)| 00:00:01 |                                                          
|*  1 |  TABLE ACCESS FULL| EMP  |     1 |    87 |     3   (0)| 00:00:01 |                                                          
--------------------------------------------------------------------------                                                          
                                                                                                                                    
Predicate Information (identified by operation id):                                                                                 
---------------------------------------------------                                                                                 
                                                                                                                                    
   1 - filter("EMPNO" 
SQL> alter session set optimizer_use_invisible_indexes=true;

Session altered.

SQL> set autotrace off
SQL> select LOADED_VERSIONS,executions,INVALIDATIONS,ROWS_PROCESSED from v$sql where sql_id='axr7gct6p1g3y';

LOADED_VERSIONS EXECUTIONS INVALIDATIONS ROWS_PROCESSED                                                                             
--------------- ---------- ------------- --------------                                                                             
              1          1             2              1                                                                             
              1          1             1              1                                                                             

SQL> set autotrace on
SQL> select
  2  	     *
  3  from
  4  	     emp
  5  where
  6  	     empno < 7400;

     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO                                               
---------- ---------- --------- ---------- --------- ---------- ---------- ----------                                               
      7369 SMITH      CLERK           7902 17-DEC-80        800                    20                                               


Execution Plan
----------------------------------------------------------                                                                          
Plan hash value: 576302360                                                                                                          
                                                                                                                                    
---------------------------------------------------------------------------------------------                                       
| Id  | Operation                   | Name          | Rows  | Bytes | Cost (%CPU)| Time     |                                       
---------------------------------------------------------------------------------------------                                       
|   0 | SELECT STATEMENT            |               |     1 |    87 |     2   (0)| 00:00:01 |                                       
|   1 |  TABLE ACCESS BY INDEX ROWID| EMP           |     1 |    87 |     2   (0)| 00:00:01 |                                       
|*  2 |   INDEX RANGE SCAN          | EMP_EMPNO_IDX |     1 |       |     1   (0)| 00:00:01 |                                       
---------------------------------------------------------------------------------------------                                       
                                                                                                                                    
Predicate Information (identified by operation id):                                                                                 
---------------------------------------------------                                                                                 
                                                                                                                                    
   2 - access("EMPNO" 
SQL> alter session set optimizer_use_invisible_indexes=false;

Session altered.

SQL> set autotrace off
SQL> select LOADED_VERSIONS,EXECUTIONS,INVALIDATIONS,ROWS_PROCESSED from v$sql where sql_id='axr7gct6p1g3y';

LOADED_VERSIONS EXECUTIONS INVALIDATIONS ROWS_PROCESSED                                                                             
--------------- ---------- ------------- --------------                                                                             
              1          1             2              1                                                                             
              1          1             1              1                                                                             

SQL> set autotrace on 
SQL> select
  2  	     *
  3  from
  4  	     emp
  5  where
  6  	     empno < 7400;

     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO                                               
---------- ---------- --------- ---------- --------- ---------- ---------- ----------                                               
      7369 SMITH      CLERK           7902 17-DEC-80        800                    20                                               


Execution Plan
----------------------------------------------------------                                                                          
Plan hash value: 3956160932                                                                                                         
                                                                                                                                    
--------------------------------------------------------------------------                                                          
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |                                                          
--------------------------------------------------------------------------                                                          
|   0 | SELECT STATEMENT  |      |     1 |    87 |     3   (0)| 00:00:01 |                                                          
|*  1 |  TABLE ACCESS FULL| EMP  |     1 |    87 |     3   (0)| 00:00:01 |                                                          
--------------------------------------------------------------------------                                                          
                                                                                                                                    
Predicate Information (identified by operation id):                                                                                 
---------------------------------------------------                                                                                 
                                                                                                                                    
   1 - filter("EMPNO" 
SQL> alter index emp_empno_idx visible;

Index altered.

SQL> set autotrace off
SQL> select LOADED_VERSIONS,EXECUTIONS,INVALIDATIONS,ROWS_PROCESSED from v$sql where sql_id='axr7gct6p1g3y';

LOADED_VERSIONS EXECUTIONS INVALIDATIONS ROWS_PROCESSED                                                                             
--------------- ---------- ------------- --------------                                                                             
              1          2             3              2                                                                             
              1          1             2              1                                                                             


SQL> set autotrace on
SQL> select
  2  	     *
  3  from
  4  	     emp
  5  where
  6  	     empno < 7400;

     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO                                               
---------- ---------- --------- ---------- --------- ---------- ---------- ----------                                               
      7369 SMITH      CLERK           7902 17-DEC-80        800                    20                                               


Execution Plan
----------------------------------------------------------                                                                          
Plan hash value: 576302360                                                                                                          
                                                                                                                                    
---------------------------------------------------------------------------------------------                                       
| Id  | Operation                   | Name          | Rows  | Bytes | Cost (%CPU)| Time     |                                       
---------------------------------------------------------------------------------------------                                       
|   0 | SELECT STATEMENT            |               |     1 |    87 |     2   (0)| 00:00:01 |                                       
|   1 |  TABLE ACCESS BY INDEX ROWID| EMP           |     1 |    87 |     2   (0)| 00:00:01 |                                       
|*  2 |   INDEX RANGE SCAN          | EMP_EMPNO_IDX |     1 |       |     1   (0)| 00:00:01 |                                       
---------------------------------------------------------------------------------------------                                       
                                                                                                                                    
Predicate Information (identified by operation id):                                                                                 
---------------------------------------------------                                                                                 
                                                                                                                                    
   2 - access("EMPNO" 

Note the change in LOADED_VERSIONS and INVALIDATIONS when the index is made visible or invisible. There is also a setting, optimizer_use_invisible_indexes, which can be set at the session level. It’s FALSE by default; setting it to TRUE produces the same results as the example shown above without making the index visible to every user who has access to the objects in the affected schema, keeping the effect local to the current session. By setting the index to INVISIBLE you can assess whether that index is truly used or not, regardless of what Oracle’s index monitoring tells you, and you will then be better prepared to decide whether the index can be safely dropped.
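Pulling the pieces together, the whole assessment for an existing index is short; this is a sketch using the index from this example:

-- Hide the existing index from the optimizer; no segment is dropped and no rebuild is needed.
alter index emp_empno_idx invisible;

-- ... run the representative workload and watch for plan or timing changes ...

-- If nothing degraded, the index is a reasonable candidate for dropping;
-- otherwise simply make it visible again.
alter index emp_empno_idx visible;

-- Visibility can be confirmed from the dictionary.
select index_name, visibility from user_indexes where index_name = 'EMP_EMPNO_IDX';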

Of course database administration cannot be ruled by rash acts, and relying upon a sole source of information (such as V$OBJECT_USAGE) can result in problems down the line. So, careful attention to detail is necessary, especially when managing the existence (or not) of an index [or table, or view, or …]. I like to follow a few simple rules:

1 — Test, test, test.
2 — Keep testing.
3 — Never do anything you can’t undo.

Keeping to that methodology usually ensures I’m not in trouble later. And, it keeps the end-users happier.

I like happy end-users.
