Oracle Tips and Tricks — David Fitzjarrell

June 29, 2007

Table Scans, Histograms and Scarlett O’Hara

Filed under: General,histograms,Indexes,Performance,stats — dfitzjarrell @ 16:43

Table scans aren’t necessarily bad.

Now that I have your attention, let me repeat that.

Table scans aren’t necessarily bad.


Well, it depends upon the data and how much of it you need to return. Sometimes we WANT a table scan over an index scan [and that may fly in the face of some tuning “experts’” recommendations, but I’m not dealing in myth here, but in fact]; let’s look at a situation where this is the case.

Oracle, by default, expects your data to be uniformly distributed in your tables. Yes, that’s very likely wishful thinking on Oracle’s part, but that’s how the biscuit is buttered. We can influence that behaviour, however, with a neat aspect of statistics gathering known as the histogram, which is basically a ‘graph’ of how your data adheres to (or ignores) Oracle’s initial premise.

Computing histograms can improve query execution plans by eliminating the ‘guesswork’ the optimizer does with respect to data distribution. In one running shot, let’s state how Oracle ‘optimizes’ a query path (this could get deep, but it will be ‘sliced and diced’ into more manageable pieces as we go on):

In the ‘absence’ of histograms [which aren’t really absent — that will be discussed in a bit], optimizer plans are based upon the assumption that data is distributed evenly across the desired keys. Such assumptions may work well for many installations; however, these same assumptions may wreak havoc on data distributed such that clusters and gaps appear, as the optimizer will assume, wrongly, that a given range of values will be located in close proximity to one another. Should the data returned exceed 30% of the total data in the table, a full-table scan may be chosen when, in actuality, an index scan would be more appropriate.

Histograms illustrate the relative distribution of the data keys and provide the optimizer with better information with which to formulate an access plan. Histograms are created using ‘buckets’, or ranges, to group key values. Such groupings can be seen in the USER_TAB_HISTOGRAMS view; the endpoint number for each group is listed, as well as the normalized value for that group.

By default, DBMS_STATS computes histograms with one ‘bucket’ for each indexed column if the SIZE parameter is not specified; with fairly evenly distributed data such histograms are acceptable. With skewed data a better histogram is necessary, and with 9i and later releases Oracle provides the SIZE AUTO specification, allowing Oracle to determine how many ‘buckets’ will be necessary.

[Oracle release 10.2 in all of its forms has problems generating proper histograms on partitioned tables where single-valued non-null columns are present. Rather than skip the histogram altogether, it chooses in some cases to create a frequency histogram using the sample size as an endpoint value. Another issue with 10.2 histograms: the endpoints on frequency histograms ONLY reflect the sample size, NOT an adjusted value for the entire table. 11.2 fixes this issue.]
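As an aside, the histogram metadata described above can be inspected directly from the data dictionary. A sketch (the HISTOGRAM column of USER_TAB_COLS is available in 10g and later; earlier releases expose only NUM_BUCKETS):

```sql
-- Which columns have histograms, and how many buckets each one uses
SELECT column_name, num_distinct, num_buckets, histogram
  FROM user_tab_cols
 WHERE table_name = 'TEST_TAB';
```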

Okay, let’s take a DEEP breath. Now another. Good, now we’re ready to run an example which may explain how this works a bit more clearly and helps explain all of the glorious text in the paragraph above.

Let’s create a table, TEST_TAB, with the following structure

Name                            Null? Type
------------------------------- ----- ---------
A                                     NUMBER(6)
B                                     NUMBER(6)

I realise the column names aren’t exceptionally descriptive, but they serve the intended purpose. Let’s also load the table with skewed data; Column A contains unique values from 1 to 10000 (yes, I can count that high), while Column B contains 10 distinct values: 1, 2, 3, 4, 5, 9996, 9997, 9998, 9999 and 10000. The value 5 occurs 9991 times in the data; all other values occur only once. [Okay, so I like the number 5.]
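The exact load used here isn’t shown, so as a sketch, one way to build the table with the same skew is a CONNECT BY row generator:

```sql
CREATE TABLE test_tab (
  a NUMBER(6),
  b NUMBER(6)
);

-- A is unique 1..10000; B is 1..4 and 9996..10000 once each, and 5 the
-- remaining 9991 times
INSERT INTO test_tab
SELECT level,
       CASE
         WHEN level <= 4    THEN level   -- values 1 through 4, once each
         WHEN level >  9995 THEN level   -- values 9996 through 10000, once each
         ELSE 5                          -- value 5, 9991 times
       END
  FROM dual
CONNECT BY level <= 10000;

COMMIT;
```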

We’ll run the following queries against this data:

(1) select * from test_tab where b=5;
(2) select * from test_tab where b=3;

As expected both of the above queries execute a full table scan (no surprise there) as no other access method is available. So, let’s be generous and provide another execution path possibility.

We’ll create an index on column B and see how that changes the execution plans. As we (hopefully) expected, both queries now execute an index range scan in place of the full table scan. That’s good, right? I mean, any time we can use an index we should, right? Well, not really.

With an index present it would be preferable to see an index range scan for query (2), where a small percentage of rows satisfy the condition, BUT a full table scan for query (1), where the majority of the table data would be returned. Why? It’s a matter of I/O. For a small subset of the data an index scan, even though each row requires two I/O operations, generates less I/O than a full table scan. But if we’re returning most of the table data, doubling that I/O is foolish and wasteful, and a full table scan (one read per row) is preferable.

So, what do we do now? Well, we could try MySQL and see if it’s any better (it isn’t), or we could create a better set of histograms. Better? We have histograms now? Yes, but as we will see there is only one ‘bucket’ available; we really need more.
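The index-creation step described above might look like this (the index name is illustrative); AUTOTRACE is one convenient way to compare the plans in SQL*Plus:

```sql
-- Give the optimizer an alternative access path on column B
CREATE INDEX test_tab_b_idx ON test_tab(b);

-- Show the execution plans without fetching the rows
SET AUTOTRACE TRACEONLY EXPLAIN
SELECT * FROM test_tab WHERE b = 5;
SELECT * FROM test_tab WHERE b = 3;
SET AUTOTRACE OFF
```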

The table is analyzed using the dbms_stats.gather_table_stats() procedure:

SQL> exec dbms_stats.gather_table_stats(ownname=>NULL, tabname=>'TEST_TAB', method_opt => 'for all columns', estimate_percent => null);

The computed statistics from dba_tables are shown below:

  NUM_ROWS     BLOCKS EMPTY_BLOCKS  AVG_SPACE  CHAIN_CNT AVG_ROW_LEN
---------- ---------- ------------ ---------- ---------- -----------
     10000         64            0         86          0          10

And the statistics from dba_tab_columns :

NUM_DISTINCT LOW  HIGH   DENSITY  NUM_NULLS NUM_BUCKETS LAST_ANALYZ SAMPLE_SIZE
------------ ---- ---- --------- ---------- ----------- ----------- -----------
       10000 Full Full     .0001          0           1 30-JUN-1999       10000
          10 Full Full        .1          0           1 30-JUN-1999       10000

From the USER_TAB_HISTOGRAMS view the following information is available:

TABLE_NAME COLUMN_NAME ENDPOINT_NUMBER ENDPOINT_VALUE
---------- ----------- --------------- --------------
TEST_TAB   A                         0              1
TEST_TAB   A                         1          10000
TEST_TAB   B                         0              1
TEST_TAB   B                         1          10000

So, DBMS_STATS.GATHER_TABLE_STATS has created one-bucket histograms for each column, meaning all values for a column fall into the same bucket. The ENDPOINT_NUMBER represents the bucket number and ENDPOINT_VALUE represents the last column value in that bucket.

Now, unfortunately, both query (1) and query (2) again execute a full table scan. Drat, drat and double drat. Simply having statistics about the table and columns does not help the optimizer distinguish how many rows have each value. The reason both queries execute a full table scan is the one-bucket histogram: any value selected falls into that single bucket, and within it the optimizer assumes an even distribution. (Yes, this reflects the general data layout if all we consider is whether the data is in the table or not, but that doesn’t help matters any.) This, of course, is not the case. To quote Scarlett O’Hara, “Where shall I go? What shall I do?” I can’t answer the first question, but in response to the second we create a more accurate histogram so the optimizer knows how many rows exist for each column value. Let’s look at the test queries again:

Query (1): select * from test_tab where b=5;

which should execute a full table scan as the preponderance of rows in the table have b=5 (it’s far cheaper to ‘throw away’ 9 rows than it is to double the I/O with an index scan for that much data).

Query (2): select * from test_tab where b=3;

which should execute an index range scan, as only 1 of the 10,000 values has b=3 (2 I/O operations are cheaper than 10,000). But, hope springs eternal; we’ll try to correct this situation by executing dbms_stats.gather_table_stats() again, this time specifying ten buckets for the histogram and restricting the action to only column B:

SQL> exec dbms_stats.gather_table_stats(ownname=>NULL, tabname=>'TEST_TAB', method_opt => 'for all indexed columns size 10', estimate_percent => null);

Querying USER_TAB_HISTOGRAMS the new, improved histogram is:

TABLE_NAME COLUMN_NAME ENDPOINT_NUMBER ENDPOINT_VALUE
---------- ----------- --------------- --------------
TEST_TAB   B                         1              1
TEST_TAB   B                         2              2
TEST_TAB   B                         3              3
TEST_TAB   B                         4              4
TEST_TAB   B                      9995              5
TEST_TAB   B                      9996           9996
TEST_TAB   B                      9997           9997
TEST_TAB   B                      9998           9998
TEST_TAB   B                      9999           9999
TEST_TAB   B                     10000          10000
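Before we walk through the arithmetic, note that the cumulative endpoint numbers in this view can be turned into per-value row counts with an analytic query; a sketch using the standard LAG function:

```sql
-- For a frequency histogram, each value's row count is the difference
-- between its ENDPOINT_NUMBER and the previous one
SELECT endpoint_value  AS b_value,
       endpoint_number
         - LAG(endpoint_number, 1, 0)
             OVER (ORDER BY endpoint_number) AS row_count
  FROM user_tab_histograms
 WHERE table_name  = 'TEST_TAB'
   AND column_name = 'B'
 ORDER BY endpoint_number;
```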

The ENDPOINT_VALUE shows the column value and the ENDPOINT_NUMBER shows the cumulative number of rows up to and including that value (yeah, that can be confusing). To explain it better: for ENDPOINT_VALUE 2 there is an ENDPOINT_NUMBER of 2; the previous ENDPOINT_NUMBER is 1, hence the number of rows with value 2 is the current ENDPOINT_NUMBER minus the previous ENDPOINT_NUMBER: 2 – 1 = 1 row having b=2. For ENDPOINT_VALUE 5, the ENDPOINT_NUMBER is 9995; the previous ENDPOINT_NUMBER is 4, so 9995 – 4 = 9991 rows where b=5.

Boy, that’s a lot of work. But that work is worth the effort, as the histogram now accurately reflects the data distribution in the table and is more likely to provide the correct (read that ‘desired’) execution plans for the two example queries. This is proven (Hallelujah!) by the execution plans for both queries, shown below:

SQL> select * from test_tab where b=5
SQL> /

Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS     (Cost=10 Card=9991 Bytes=99910)
1 0 TABLE ACCESS (FULL) OF 'TEST_TAB'     (Cost=10 Card=9991 Bytes=99910)

SQL> select * from test_tab where b=3
SQL> /

Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS      (Cost=6 Card=500 Bytes=5000)
1 0 TABLE ACCESS (BY ROWID) OF 'TEST_TAB'  (Cost=6 Card=500 Bytes=5000)

Woo-hoo, we’re cooking with gas now! So, we’ve learned something, but what? We’ve learned that for low cardinality data, histograms having one bucket for each distinct value are just peachy. However, there may be a need to create histograms on data with a larger number of distinct values, where one bucket per value would create far too much overhead. So, Scarlett haunts us again with the “What shall I do?” question. [Rhett told her where she could go, so she’s stopped asking that one.] Well, Scarlett, we create histograms with fewer buckets, resulting in height-balanced buckets, with the exception of the last one, which may have fewer values than the rest. If the histogram created in the example above used eight buckets instead of ten:

TABLE_NAME COLUMN_NAME ENDPOINT_NUMBER ENDPOINT_VALUE
---------- ----------- --------------- --------------
TEST_TAB   B                         0              1
TEST_TAB   B                         1              5
TEST_TAB   B                         2              5
TEST_TAB   B                         3              5
TEST_TAB   B                         4              5
TEST_TAB   B                         5              5
TEST_TAB   B                         6              5
TEST_TAB   B                         7              5
TEST_TAB   B                         8          10000

Oracle creates the requested number of buckets by placing the same number of values into each bucket. Here the ENDPOINT_NUMBER is the actual bucket number and ENDPOINT_VALUE is the endpoint value of the bucket, determined by the column value. Bucket 0 holds the lowest value for the column, which is 1. Buckets 1 through 7 contain values up to and including 5, with bucket 8 containing values from 5 to 10000. Buckets 1 through 7 are height-balanced, so all have the same number of values, leaving bucket 8 to contain fewer values (it gets the leftovers, so to speak).

For this particular data set such a histogram would be a poor choice, but for tables with a large number of distinct values, spread over an even larger dataset, such a histogram could prove quite useful, as it could provide a better distribution map than Oracle would normally assume exists. Don’t, however, go overboard with histograms, as you could end up with ‘too much of a good thing’ and send Oracle off on query plans it has no business choosing. Moderation in all things is good.
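The eight-bucket histogram shown above would be gathered with a call like the following (restricting the histogram to column B, since that is the column of interest):

```sql
BEGIN
  dbms_stats.gather_table_stats(
    ownname          => NULL,
    tabname          => 'TEST_TAB',
    method_opt       => 'FOR COLUMNS B SIZE 8',  -- eight buckets on B only
    estimate_percent => NULL);                   -- compute, don't sample
END;
/
```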

I think Scarlett would approve. Wherever she may be.
