Oracle Tips and Tricks — David Fitzjarrell

November 3, 2017

Go Big …

Filed under: General — dfitzjarrell @ 12:15

"You see, to tall men I'm a midget, and to short men I'm a giant;
to the skinny ones I'm a fat man, and to the fat ones I'm a thin man."
-- Norton Juster, The Phantom Tollbooth

For years DBAs (and users) have been searching for the elusive ‘go_faster’ initialization parameter in Oracle, only to be disappointed at every turn since no such setting exists, at least not with that name. Oracle has, though, moved in that direction with the 12.x releases by providing a way to speed up access to ‘big tables’. So what IS a ‘big table’? That depends on how the “_small_table_threshold” parameter is set in your database. Let’s look at how Oracle has decided to make ‘big table’ access faster.

Oracle offers, in releases 12.1.0.2 and later, a new ‘twist’ that may improve the performance of ‘big’ table access, called, appropriately enough, Automatic Big Table Caching. This allows Oracle to automatically cache such tables in an area of the buffer cache reserved specifically for this purpose. The big table cache won’t ‘work’ for direct path reads; in the example that follows those have been ‘turned off’ for the session to ensure that the table is loaded into the configured cache. Let’s look at how it’s configured and what it can do for you when it’s active.

Unlike the Database In-Memory option, Automatic Big Table Caching is a no-additional-cost feature; no extra licensing is required to use it. It does require a change to memory settings, which may mean increasing the shared pool/SGA/memory parameters and could require that additional memory be installed (if the database is heavily used and memory can’t be reallocated to the big table cache from the existing SGA/memory settings). That said, configuring Automatic Big Table Caching is fairly simple, as only one parameter, db_big_table_cache_percent_target, needs to be set. Depending upon the percentage configured this could create a situation where the current SGA or memory settings need to be increased; that will depend upon the size of the big table or tables Oracle will be caching. [If a table is too large for the cache it may end up being read from disk anyway, and the usage query will reflect that.] Let’s look at a possible configuration and see how it performs.
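
For reference, a minimal sketch of enabling the feature might look like the following; the 80 percent figure matches the target used later in this example, the parameter is assumed to be dynamically modifiable on your release, and parallel_degree_policy only matters if parallel queries are also meant to use the cache:

-- Reserve 80 percent of the buffer cache for big-table scans;
-- setting the target back to 0 disables the feature.
alter system set db_big_table_cache_percent_target = 80;

-- Only needed if parallel queries should also use the big table cache
-- (the example below sets this at the session level instead).
alter session set parallel_degree_policy = auto;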

In this example the cache percent is set to 80 (a rather large number, true, but this is a small, personal database and the table will be quite large in comparison to the available SGA). Let’s create and populate a table to run the example against:


BING @ quanghoo > 
BING @ quanghoo > create table bigly_bigly_table(
  2  smorgun	     number,
  3  fleester	     varchar2(800),
  4  yupqast	     date);

Table created.

BING @ quanghoo > 
BING @ quanghoo > begin
  2  	     for i in 1..1000000 loop
  3  		     insert into bigly_bigly_table
  4  		     values(i, 'Altoona'||lpad(i, 773,'0'), sysdate+mod(i,991));
  5  	     end loop;
  6  
  7  	     commit;
  8  end;
  9  /

PL/SQL procedure successfully completed.

BING @ quanghoo > 
BING @ quanghoo > exec dbms_stats.gather_table_stats(user, 'BIGLY_BIGLY_TABLE', cascade=>true)

PL/SQL procedure successfully completed.

BING @ quanghoo > 

Let’s check on what the database considers a ‘small’ table:


BING @ quanghoo > 
BING @ quanghoo > connect sys/########### as sysdba
Connected.
SYS @ quanghoo > 
SYS @ quanghoo > SELECT a.ksppinm  "Parameter",  c.ksppstvl "Value"
  2  	   FROM   x$ksppi a, x$ksppcv b, x$ksppsv c, v$parameter p
  3  	   WHERE  a.indx = b.indx AND a.indx = c.indx
  4  	    AND   p.name(+) = a.ksppinm
  5  	    AND   UPPER(a.ksppinm) IN ( UPPER('_db_block_buffers'),
  6  					UPPER('_small_table_threshold'));

Parameter                                             Value
----------------------------------------------------- ---------------------------------
_db_block_buffers                                     208502         
_small_table_threshold                                4170    
                                                                                                                                                      
SYS @ quanghoo > 

The value is in blocks and, by default, works out to roughly 2 percent of the buffer cache (4170 is 2 percent of the 208502 buffers reported above), so any table larger than 4170 blocks in this particular database will be considered a big table. Let’s now check the number of blocks our example table occupies:


SYS @ quanghoo > connect bing/##########
Connected.
BING @ quanghoo > set echo on linesize 150 pagesize 100
BING @ quanghoo > column table_name format a35
BING @ quanghoo > alter session set parallel_degree_policy=auto;

Session altered.

BING @ quanghoo > alter session set "_serial_direct_read"=never;

Session altered.

BING @ quanghoo > select blocks from dba_tables where table_name = 'BIGLY_BIGLY_TABLE';

    BLOCKS                                                                                                                                            
----------                                                                                                                                            
    112097                                                                                                                                            

BING @ quanghoo > 

We verify the percent target that was set earlier:


BING @ quanghoo > show parameter DB_BIG_TABLE_CACHE_PERCENT_TARGET

NAME                                 TYPE        VALUE                                                                                                
------------------------------------ ----------- ------------------------------                                                                       
db_big_table_cache_percent_target    string      80                                                                                                   
BING @ quanghoo > 

Now we check to see what the cache shows as being used; since this example has been run more than once the object count is greater than 1:


BING @ quanghoo > select BT_CACHE_TARGET, OBJECT_COUNT, MEMORY_BUF_ALLOC, MIN_CACHED_TEMP from v$bt_scan_cache;

BT_CACHE_TARGET OBJECT_COUNT MEMORY_BUF_ALLOC MIN_CACHED_TEMP                                                                                         
--------------- ------------ ---------------- ---------------                                                                                         
             80            2           170037            1000                                                                                         

BING @ quanghoo > 

Let’s run a count against the table (there are no indexes on the table) and exercise that cache a bit:


BING @ quanghoo > select count(*) from BIGLY_BIGLY_TABLE;

  COUNT(*)                                                                                                                                            
----------                                                                                                                                            
   1000000                                                                                                                                            

BING @ quanghoo > 
BING @ quanghoo > select BT_CACHE_TARGET, OBJECT_COUNT, MEMORY_BUF_ALLOC, MIN_CACHED_TEMP from v$bt_scan_cache;

BT_CACHE_TARGET OBJECT_COUNT MEMORY_BUF_ALLOC MIN_CACHED_TEMP                                                                                         
--------------- ------------ ---------------- ---------------                                                                                         
             80            2           170037            1000                                                                                         

BING @ quanghoo > 

That the object count did not increase indicates that the object is already cached and that Oracle is using the cache to provide the results we requested. Repeated queries against the table (using full table scans and NOT using direct reads) will provide the same results. Checking the query statistics we see essentially nothing but consistent gets (a single physical read aside), which is what is to be expected when the big table cache is servicing a cached object:


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
     111360  consistent gets
          1  physical reads
          0  redo size
        542  bytes sent via SQL*Net to client
        552  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
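
For reference, the listing above is the sort of statistics report SQL*Plus autotrace produces; a minimal sketch of capturing it for this query (with the session settings shown earlier already in place) might be:

-- Report execution statistics without displaying the query output
set autotrace traceonly statistics
select count(*) from bigly_bigly_table;
set autotrace off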

Running the same query again produces the same results (the statistics didn’t change, so those numbers are not repeated):


BING @ quanghoo > 
BING @ quanghoo > select count(*) from BIGLY_BIGLY_TABLE;

  COUNT(*)                                                                                                                                            
----------                                                                                                                                            
   1000000                                                                                                                                            

BING @ quanghoo > 
BING @ quanghoo > select BT_CACHE_TARGET, OBJECT_COUNT, MEMORY_BUF_ALLOC, MIN_CACHED_TEMP from v$bt_scan_cache;

BT_CACHE_TARGET OBJECT_COUNT MEMORY_BUF_ALLOC MIN_CACHED_TEMP                                                                                         
--------------- ------------ ---------------- ---------------                                                                                         
             80            2           170037            1000                                                                                         

BING @ quanghoo > 

Should another ‘big’ table be accessed (by conventional reads) it would also go into the cache, provided there is enough space remaining. Again, Oracle does this automatically once the big table cache has been configured.
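
To see which objects the cache is currently tracking and how much of each is actually cached in memory, a query along these lines can be used (a sketch only, assuming the v$bt_scan_obj_temps columns present in 12.1; the join to dba_objects merely resolves the object name):

-- Per-object view of the big table cache: size, 'temperature' (how often
-- the object is scanned), caching policy and blocks cached in memory
select o.owner, o.object_name, t.size_in_blks, t.temperature, t.policy, t.cached_in_mem
  from v$bt_scan_obj_temps t
  join dba_objects o
    on o.data_object_id = t.dataobj#
 order by t.temperature desc;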

Automatic Big Table Caching can be a performance improvement when large tables are the norm rather than the exception. That it’s also a feature that doesn’t require additional licensing makes it all the more desirable to configure and use.

Of course, it’s all in how you see things.
