Oracle Tips and Tricks — David Fitzjarrell

November 8, 2012

I Can’t Find My Keys

Filed under: Exadata — dfitzjarrell @ 10:32

Exadata offers many features to improve performance — Smart Scans, offloading, the InfiniBand internal network — but one feature not often mentioned is the storage index. Designed to prevent Exadata from reading data blocks unnecessarily, its purpose is sometimes not clearly explained, leading to confusion. So what is a storage index, where is it found and what does it do? Let’s see if we can answer those questions, and possibly a few more.

Exadata storage indexes are … indexes … but not in the usual bitmap/b-tree sense. You can’t use one to pinpoint the exact location of a particular record and you can’t scan one for a particular range of values. You can’t build one with DDL like ‘create storage index …’ because they are dynamically created in memory based on predicates in a where clause. How, then, are they built, what do they contain and what good purpose do they serve?

Exadata divides the storage into 1 MB sections on a per-table basis; for a quarter rack configured with high capacity disks using the out-of-the-box 80/20 data-to-recovery distribution there is roughly 30 TB of usable storage to be divided into 1 MB ‘chunks’, making approximately 31 million storage segments, each with its own storage index. That’s not all, as each storage index can contain data for up to eight columns referenced in where clauses of queries that satisfy the conditions for a Smart Scan. The set of ‘indexed’ columns can differ from segment to segment: segment 1 may track one group of eight columns while segment 2 tracks a different group, with perhaps only three columns, ORDER.DT, INVOICE_NO and INVOICE_AMT, occurring in both.

This can go on for the other 31 million storage segments, a fact that can make discussing storage indexes confusing. Remember, though, that it’s not really what is in the storage index that makes the difference; it’s what ISN’T in the storage index that provides the performance. For each column it references a storage index records the minimum and maximum value for that column within that 1 MB segment and whether or not that column contains NULLs. That information is what gives the storage index its power: during a Smart Scan, as tasks are offloaded to the storage cells, Exadata scans these storage indexes for columns and values of interest. If the storage index contains the desired column and the recorded minimum and maximum show the desired range cannot fall within that segment, the 1 MB segment is skipped. If you’re looking for a rather narrow range of values such segment skipping can reduce the volume of data scanned by gigabytes, or even terabytes, and that is the power of the storage index. It tells Exadata which data blocks NOT to read, so some people have described storage indexes as ‘reverse indexes’, which is, I suppose, an apt description of their behaviour.
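To make the skipping concrete, here is a sketch of the kind of query that benefits; the ORDERS table and its columns are hypothetical names used only for illustration:

```sql
-- A full scan with a narrow range predicate is the classic
-- storage index beneficiary (hypothetical table and columns).
select invoice_no, invoice_amt
from   orders
where  order_dt >= date '2012-10-01'
and    order_dt <  date '2012-11-01';

-- During a Smart Scan, any 1 MB segment whose recorded [min, max]
-- for ORDER_DT lies entirely outside October 2012 is never read.
```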

Do storage indexes always help? No, there are conditions under which a storage index can’t provide any benefit, such as when the columns of interest are not among those the storage index tracks or when the query in question doesn’t qualify for a Smart Scan. Smart Scans are necessary to reap the benefits of a storage index because storage indexes exist at the storage cell level and are consulted (when available) as part of the offloading process. It’s important to remember the conditions that trigger Smart Scans:

Full table scans and/or fast full index scans coupled with direct path reads, at least one predicate and simple comparison operators (such as =, <, >, <=, >= and BETWEEN)

– or –

Parallel query is in use, again with at least one predicate and using the above mentioned simple comparison operators

Notice the first set of conditions: just because a full table scan or fast full index scan is in use there is no guarantee that direct path reads will be used, so not every such scan qualifies for a Smart Scan. Parallel query, by design, does use direct path reads. [Oracle has supplied a hidden parameter, _serial_direct_read, in 11.2 which, when set to TRUE (the default is auto), will cause Oracle to favor direct path reads in serial access mode. Notice I said ‘favor’; it doesn’t guarantee serial direct path reads will be executed, it simply increases the likelihood that they will occur. Setting this parameter can also introduce performance problems as Oracle attempts to use serial direct path reads, thus it’s best to test this on a non-production system.]
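As the note above cautions, _serial_direct_read is a hidden parameter and belongs on a test system only; a session-level experiment might look like the following sketch (the ORDERS table is a hypothetical name):

```sql
-- Test system only: encourage direct path reads for serial full
-- scans so they can qualify for a Smart Scan.
alter session set "_serial_direct_read" = true;

-- Alternatively, parallel query uses direct path reads by design.
select /*+ full(o) parallel(o 4) */ count(*)
from   orders o
where  invoice_amt > 10000;
```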

Storage indexes cannot be modified or tuned like traditional b-tree indexes, and they are not physical objects in the database. They are in-memory ‘tables’, if you will, built ‘on-the-fly’, and they must be rebuilt if the cellsrv program (the operating system for the storage cells) is restarted, so after a reboot the storage indexes created are not likely to be the same as those that existed before cellsrv was restarted. Because they can’t be tuned or modified there are no hints to ensure their use; the optimizer doesn’t know about them, and there are no wait events that monitor storage index usage, so AWR and ASH won’t be able to report whether they were used or not. That is not to say there isn’t SOME way of knowing if a storage index has been used: a single statistic, cell physical IO bytes saved by storage index, reports, cumulatively by session, how many bytes were saved using storage indexes. Since it’s a cumulative statistic it is necessary to query that statistic, using the v$sesstat view, before running a candidate query, execute the statement, then query the statistic again afterwards. Subtracting the former value from the latter provides the bytes saved by storage indexes for that query.
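The before-and-after measurement can be sketched as follows, using v$mystat (the current session’s slice of v$sesstat) joined to v$statname:

```sql
-- Before the candidate query: capture the cumulative bytes saved.
select s.value
from   v$mystat  s
join   v$statname n on n.statistic# = s.statistic#
where  n.name = 'cell physical IO bytes saved by storage index';

-- ... run the candidate query here ...

-- Afterwards: run the same statistic query again. The second value
-- minus the first is the bytes the storage index saved for the
-- candidate query.
```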

Do storage indexes work with Hybrid Columnar Compressed (HCC) tables? Yes; in fact storage indexes can work with compressed tables using any valid level of compression (HCC or OLTP compression). They also work with bind variables, as those values are passed to the storage cells during the offload process. Partitioned tables can also make use of storage indexes, so for qualifying queries partition pruning and storage indexes can combine to dramatically reduce I/O. Joins and multi-column predicates (again the limit is 8 columns per 1 MB segment) will also make use of storage indexes should a Smart Scan be executed. They will even benefit queries using subqueries, as the value returned from the subquery is compared to the minimum/maximum values for the given column.
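Since bind values travel to the cells with the offloaded predicate, a bound query can be skipped-for just like a literal one; a SQL*Plus-style sketch (hypothetical names again):

```sql
variable lo number
variable hi number
exec :lo := 1000
exec :hi := 2000

-- The bind values are shipped with the offloaded predicate, so
-- segments whose INVOICE_NO [min, max] misses [:lo, :hi] are skipped.
select invoice_no, invoice_amt
from   orders
where  invoice_no between :lo and :hi;
```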

Are there any other limitations to storage indexes? Yes: a storage index won’t be used for CLOB columns, as CLOBs are not indexed, and predicates using wildcards (‘%’, ‘_’) cannot be satisfied by one. Index-organized tables also won’t qualify for Smart Scans; with an IOT the row source cannot be offloaded, so a Smart Scan won’t occur. Index range scans won’t trigger Smart Scans, nor will queries using reverse-key indexes. As with other, traditional indexes, a function applied to a column disables the storage index for that column, such as using trunc() on a date column. Implicit conversions used to disable storage indexes on occasion, especially when the optimizer didn’t recognize the string as a convertible value (such as date strings); recent releases of the Exadata storage server software have corrected most of these issues.
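The trunc() pitfall has the same cure here as with b-tree indexes: rewrite the predicate as a range so the column is compared bare (ORDERS and ORDER_DT remain hypothetical names):

```sql
-- Disables the storage index: the function hides ORDER_DT's raw
-- value, so the per-segment min/max can't be compared at the cell.
select count(*) from orders
where  trunc(order_dt) = date '2012-11-08';

-- Storage-index friendly: the bare column is compared to a range
-- covering the same day.
select count(*) from orders
where  order_dt >= date '2012-11-08'
and    order_dt <  date '2012-11-09';
```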

Storage indexes are part of the ‘secret sauce’ of Exadata, and can provide dramatic results for qualifying queries. Knowing when they can be used and when they can’t should help when tuning performance issues you may experience with an Exadata configuration.

Sometimes it’s good to lose your keys.
