Oracle Tips and Tricks — David Fitzjarrell

October 26, 2015

Results, What Results?

Filed under: General — dfitzjarrell @ 11:55

"For instance," said the boy again, "if Christmas trees were people and people were Christmas trees,
we'd all be chopped down, put up in the living room, and covered in tinsel, while the trees opened our presents."
"What does that have to do with it?" asked Milo.
"Nothing at all," he answered, "but it's an interesting possibility, don't you think?"
Norton Juster, The Phantom Tollbooth 

Oracle offers a results cache in the database (from 11.2 onwards) and in PL/SQL (again, from 11.2 onwards) which can greatly reduce the execution time of repeated statements when insert/update/delete activity is not heavy. The PL/SQL mechanism, however, may not work as expected with global temporary tables, a restriction mentioned in the documentation but easily overlooked until its effects appear in a running system. Let's look at the behavior you might see with this option, using a slightly modified example from Jonathan Lewis.
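For context, the SQL side of the feature is driven by the RESULT_CACHE hint (or by setting RESULT_CACHE_MODE to FORCE). A minimal sketch, with a made-up SALES table purely for illustration, looks like this:


-- First execution computes and caches the result; subsequent executions
-- can be answered from the cache until committed DML against SALES
-- invalidates the entry. With result_cache_mode = MANUAL (the default)
-- only hinted queries are considered for caching.
select /*+ result_cache */
       region, sum(amount) total_amt
from   sales
group  by region;

The PL/SQL variant, which is what this example exercises, uses the RESULT_CACHE clause in the function definition instead, as you'll see below.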

This starts with flushing the shared pool, creating and populating a global temporary table, and creating a result-cached function that returns the maximum value of FNUM it can see in that table. First the table is created and populated with a single row:


SQL> set serveroutput on size 1000000
SQL> alter system flush shared_pool;

System altered.

SQL>
SQL> truncate table glob_tmp;

Table truncated.

SQL> drop table glob_tmp;

Table dropped.

SQL> create global temporary table glob_tmp (fnum number)
  2  on commit preserve rows
  3  ;

Table created.

SQL>
SQL> insert into glob_tmp values(1);

1 row created.

SQL> commit;

Commit complete.

SQL>

Compute garden-variety stats on the global temporary table (no ‘special’ options in use):


SQL> execute dbms_stats.gather_table_stats(user,'glob_tmp');

PL/SQL procedure successfully completed.

SQL>

Create the result-cached function, which returns the largest value of FNUM visible in GLOB_TMP:


SQL>
SQL> create or replace function f_cache
  2  return number
  3  result_cache
  4
  5  is
  6          m_ret number;
  7  begin
  8          select  max(fnum)
  9          into    f_cache.m_ret
 10          from    glob_tmp
 11          ;
 12
 13          return f_cache.m_ret;
 14  end;
 15  /

Function created.

SQL>
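As a quick sanity check (assuming access to the USER_PROCEDURES view and its RESULT_CACHE column), you can confirm the function was compiled with the result cache option:


-- Should report RESULT_CACHE = 'YES' for F_CACHE
select object_name, result_cache
from   user_procedures
where  object_name = 'F_CACHE';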

Execute the function for the current session; all seems well since we’ve inserted and committed a row, but this is about to change:


SQL> execute dbms_output.put_line(f_cache)
1

PL/SQL procedure successfully completed.

SQL>
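At this point the cache contents can be inspected, given access to the V$ views; a query along these lines against V$RESULT_CACHE_OBJECTS should show a 'Result' entry for the function along with its dependency rows:


select name, type, status, scan_count, invalidations
from   v$result_cache_objects
order  by creation_timestamp;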

Now create a second session and do nothing except run the function:


SQL> execute dbms_output.put_line(f_cache);
1

PL/SQL procedure successfully completed.

SQL>

From the second session insert a row into the global temporary table and run the function again:


SQL> insert into glob_tmp values(0);

1 row created.

SQL> execute dbms_output.put_line(f_cache);
0

PL/SQL procedure successfully completed.

SQL>

This is curious behavior. After Session 1 had inserted a row into the global temporary table and issued a commit, both sessions could 'see' it; yet when Session 2 inserts a row, but doesn't commit, that original answer 'goes away' and the cache now reflects Session 2's uncommitted data, which Oracle happily serves as the correct result. Let's do something really destructive and truncate the global temporary table in Session 1:


SQL> truncate table glob_tmp;

Table truncated.

SQL> execute dbms_output.put_line(f_cache);
1

PL/SQL procedure successfully completed.

SQL>

The table data has been truncated in Session 1, yet the function STILL returns 1 from the results cache as if that row were present. Note that Session 2 has not issued a commit for its insert. Let's do that now:


SQL> commit;

Commit complete.

SQL>

Back to Session 1, let’s execute the function again:


SQL> execute dbms_output.put_line(f_cache);

PL/SQL procedure successfully completed.

SQL>

And from Session 2, where nothing else has occurred other than the commit, we see the same results:


SQL> execute dbms_output.put_line(f_cache);

PL/SQL procedure successfully completed.

SQL>

Yet the truncate in Session 1 only affected the data inserted by Session 1. Going back to Session 2, let's check that session's contents of GLOB_TMP:



SQL> select count(*) from glob_tmp;

  COUNT(*)
----------
         1

SQL>

It would be expected, I would think, that the function would return 0 for Session 2 (the value of its committed row) and NULL for Session 1 (whose copy of the table is now empty), but that's not the case, as evidenced here. Oracle reports several bugs in relation to the PL/SQL results cache, listed below:

Bug 21905592 : PL/SQL RESULT CACHE MUST BE BYPASSSED WHEN SESSION SCN NOT CURRENT
Bug 21905695 : BAD X-SESSION RESULTS W/ RESULT-CACHED PL/SQL FN AND “SET TXN READ ONLY”
Bug 21907155 : PL/SQL RC FN SHOWS ONE SESSION THE CONTENTS OF ANOTHER SESSION’S GTT
Bug 21885173 : THE LNPLS SECTION “RESTRICTIONS ON RESULT-CACHED FUNCTIONS” NEEDS RE-WRITE

So we see that the PL/SQL results cache can produce inconsistent results and unexpected behavior across sessions with global temporary tables. Take care when using the PL/SQL results cache with temporary tables, since you may see 'answers' that aren't correct even though Oracle thinks they are. Looking in the documentation, Oracle notes that global temporary tables don't meet the criteria for result caching, and this example shows a good reason to believe that restriction. I've found that the documentation may not be consulted until a 'problem', like the one illustrated here, surfaces.
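If a function like this is already in production and can't be rewritten immediately, one possible stopgap (a sketch only, not a fix suggested by the bug notes) is to bypass the result cache for the affected sessions using DBMS_RESULT_CACHE; the longer-term fix is to recreate any function that reads a global temporary table without the RESULT_CACHE clause:


begin
        -- First parameter turns bypass on, second limits it to this session,
        -- so cached (and possibly stale) results are neither used nor created.
        dbms_result_cache.bypass(true, true);
end;
/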

When properly used the PL/SQL results cache can be of great help to application programmers, once the known restrictions have been taken into account. For such features it’s best to read the documentation before attempting to use them, so that you won’t be surprised later.

Sometimes a change in perspective can be very helpful.


October 21, 2015

Hakan You Do That?

Filed under: General — dfitzjarrell @ 13:49

"Everyone understands numbers. No matter what language you speak,
they always mean the same thing. A seven is a seven anywhere in the world." 
Norton Juster, The Phantom Tollbooth 

An interesting problem submitted to Oracle Support involves a possibly little-known aspect of a table called the Hakan Factor. The problem description, in brief, states that the Hakan Factor isn't preserved for a partitioned table when a single partition is compressed. Why is this a problem? The Hakan Factor is used by bitmap indexes to map rows to index entries, and it's currently one less than the number of rows a database block can contain, due to adjustments to the calculation made necessary by unwanted behavior in Oracle releases 9.2.0.8 and earlier.

To provide a 'real-world' example, consider a parking garage. Each level of the garage can hold X number of vehicles, and if the garage is extended upwards, by adding more levels, this doesn't affect the vehicle-to-level mapping since each new level matches the previous levels. Now consider this same parking garage being modified to increase the number of vehicles each level can contain, keeping the number of levels the same. The original garage mapping no longer applies, so the garage map is invalid. Something similar happens when the Hakan Factor changes for a partition and no longer matches the table you wanted to use for partition exchange: the partition mapping doesn't match the table mapping, the bitmap indexes can't find the new data, and an error is thrown. Since the Hakan Factor is critical for bitmap indexes let's look at it in a bit more depth.

The Hakan Factor is set by using 'alter table … minimize records_per_block;' or by creating a table using the familiar 'create table … as select … from …;' mechanism; the latter sets the Hakan Factor of the new table to that of the source table. The Hakan Factor becomes a problem when dealing with a table that has bitmap indexes on it, notably a partitioned table where a partition exchange is being executed. If the Hakan Factor of the partition and the table it's being exchanged with don't match, and bitmap indexes are present, the exchange will fail with the following 'informative' error:


ORA-14642: Bitmap index mismatch

This can happen for a number of reasons, including adding columns to a partition or table definition, minimizing the records_per_block (mentioned above) and, according to the bug report, compressing a partition. [The bug number is 18115378.] The bug has not been reported as fixed; the workaround is to simply execute 'alter table … minimize records_per_block;'. Of course, if bitmap indexes are not in place the error won't be thrown, so another way around this problem is to drop the bitmap indexes on the partitioned table, perform the exchange, then recreate the bitmap indexes (a procedure recommended by Oracle Support). The bug report concerns the building of bitmap indexes after a partition has been compressed; the error thrown is:


ORA-28604: table too fragmented to build bitmap index (4625956,272,272)

even though it really isn’t. The problem stems from differing Hakan Factors for the compressed and uncompressed partitions, a ‘problem’ Oracle should be able to build a bitmap index around.
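When the ORA-14642 flavor of the problem strikes during a partition exchange, the drop/exchange/recreate approach mentioned above can be scripted roughly as follows; every object name here is hypothetical and the steps should be adapted to your own bitmap indexes:


-- 1. Drop the bitmap index(es) on the partitioned table
drop index sales_region_bix;

-- 2. Perform the exchange; with no bitmap indexes there is no mismatch to detect
alter table sales
      exchange partition p_oct_2015 with table sales_stage
      without validation;

-- 3. Recreate the bitmap index(es)
create bitmap index sales_region_bix on sales(region) local;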

How can you know the current Hakan Factor for a given table? Jonathan Lewis has constructed an interesting procedure to report the Hakan Factor, shown below (thank you, Jonathan):


create or replace procedure show_hakan(
    i_table     in  varchar2,
    i_owner     in  varchar2 default user
) as
    m_obj       number(8,0);
    m_flags     varchar2(12);
    m_hakan     number(8,0);
begin
 
    /* created by show_hakan.sql    */
    select
        obj#,
/*
        case
            when    (spare1 > 5 * power(2,15))
                then    (spare1 - 5 * power(2,15))
            when    (spare1 > power(2,17))
                then    (spare1 - power(2,17))
            when    (spare1 > power(2,15))
                then    (spare1 - power(2,15))
                else    spare1
        end                 hakan
*/
    to_char(
        bitand(
            spare1, to_number('ffff8000','xxxxxxxx')
        ),
        'xxxxxxxx'
    )                   flags,
    bitand(spare1, 32767)           hakan   -- 0x7fff
    into
        m_obj,
        m_flags,
        m_hakan
    from
        tab$
    where   obj# in (
            select  object_id
            from    dba_objects
            where   object_name = upper(i_table)
            and object_type = 'TABLE'
            and owner = upper(i_owner)
        )
    ;
 
    dbms_output.put_line(
        'Hakan factor - 1 (see bug history) for object ' ||
        m_obj   || ' (' ||
        i_owner || '.' ||
        i_table || ') is ' ||
        m_hakan || ' with flags ' ||
        m_flags
    );
end;
/
 
drop public synonym show_hakan;
create public synonym show_hakan for show_hakan;
grant execute on show_hakan to public;

The procedure needs to be created by SYS (it reads TAB$ directly), thus the creation of the public synonym and the grant to PUBLIC. Once it's created, using it is fairly simple, as the following example illustrates:


SQL>
SQL> create table df1 (
  2  	     snord   number,
  3  	     snerd   varchar2(20),
  4  	     snard   date);

Table created.

SQL>
SQL> begin
  2  	     for i in 1..10000 loop
  3  		     insert into df1
  4  		     values(i, 'Hakan test '||i, sysdate+mod(i, 19));
  5  	     end loop;
  6  
  7  	     commit;
  8  end;
  9  /

PL/SQL procedure successfully completed.

SQL>
SQL> exec show_hakan('df1');
Hakan factor - 1 (see bug history) for object 95173 (BING.df1) is 736 with flags         0

PL/SQL procedure successfully completed.

SQL>

The table MUST be populated to set the Hakan Factor; this happens automatically when the table is loaded with data, either by a series of inserts or by using SQL*Loader. This example was run on Oracle 12.1.0.2 but it has been tested and runs on 11.2 without issues. [Jonathan Lewis tested this on 9.2.0.8 with an earlier version of the procedure and it calculated the Hakan Factor correctly; however, the code posted above will need to be modified (possibly by trial-and-error) to produce results there.] If a new table is created using 'create table .. as select ..' the Hakan Factor is replicated to the new table, as mentioned above:


SQL> exec show_hakan('EMP');
Hakan factor - 1 (see bug history) for object 95181 (BING.EMP) is 736 with flags         0

PL/SQL procedure successfully completed.

SQL>
SQL> create table pme as select * from emp;

Table created.

SQL>
SQL> exec show_hakan('PME');
Hakan factor - 1 (see bug history) for object 95187 (BING.PME) is 736 with flags         0

PL/SQL procedure successfully completed.

SQL>

Altering the table to minimize records_per_block changes the Hakan Factor:


SQL> alter table df1 minimize records_per_block;

Table altered.

SQL> 
SQL> exec show_hakan('df1');
Hakan factor - 1 (see bug history) for object 95173 (BING.df1) is 232 with flags      8000                                          

PL/SQL procedure successfully completed.

SQL> 

Compressing a table can also alter the Hakan Factor according to the bug report, so that should be tested:


SQL> alter table pme compress for oltp;

Table altered.

SQL>
SQL> exec show_hakan('PME');
Hakan factor - 1 (see bug history) for object 95194 (BING.PME) is 32759 with flags     20000

PL/SQL procedure successfully completed.

SQL>

This shouldn't be a surprise, since compression alters how rows are stored in a block and, with Hybrid Columnar Compression, alters how the blocks themselves are configured. I can see why Oracle considers this a bug; compressed partitions are marked as such, and creating a bitmap index on a partitioned table with 'mixed' partitions shouldn't throw an error. Advanced Compression options can leave a compressed table or partition in a state where some blocks are compressed and others are not, but the Hakan Factor is the maximum number of rows a block can have for a given partition, and since there should always be fully compressed blocks in a partition using Advanced Compression, Oracle should be able to build a bitmap index on a table containing both compressed and uncompressed partitions. Generating the Hakan Factor for a table using OLTP compression before and after a large update produces the following results:


SQL> begin
  2          for i in 1..19 loop
  3                  insert into pme select * from pme;
  4          end loop;
  5
  6          commit;
  7  end;
  8  /

PL/SQL procedure successfully completed.

SQL>
SQL> alter table pme compress for oltp;

Table altered.

SQL>
SQL> exec show_hakan('PME');
Hakan factor - 1 (see bug history) for object 95208 (BING.PME) is 32759 with flags     20000

PL/SQL procedure successfully completed.

SQL>
SQL> update pme set job='CLORK' where job = 'CLERK';

2097152 rows updated.

SQL> commit;

Commit complete.

SQL>
SQL> exec show_hakan('PME');
Hakan factor - 1 (see bug history) for object 95208 (BING.PME) is 32759 with flags     20000

PL/SQL procedure successfully completed.

SQL>

The Hakan Factor for the compressed table didn't change even after a large update occurred that left some blocks uncompressed. Since Oracle knows the partition is compressed, it shouldn't complain about 'fragmentation'; it should take the compression into account and continue to build the bitmap index.

In Oracle 9.2.0.8 and earlier releases the Hakan Factor was equal to the number of rows a block would contain; unfortunately it would decrease by 1 each time it was regenerated using 'alter table … minimize records_per_block;', so it was possible to reduce the Hakan Factor to 1 and end up with an extremely large table containing a great deal of wasted space. Also in 9.2.0.8 and earlier releases, direct-path inserts behaved differently with respect to the Hakan Factor than garden-variety inserts, so large direct-path loads during batch jobs could also result in an overly large table with a lot of wasted space. An attempt was made in 10g to make normal inserts and direct-path inserts behave consistently; the change resulted in the Hakan Factor being one less than the actual number of rows in a block, and since the Hakan Factor cannot be 0, the wasted space issue was 'resolved' at the expense of an 'incorrect' Hakan Factor.

The Hakan Factor is (or, hopefully, was) a little-known nugget of table knowledge, critical for the implementation and performance of bitmap indexes. Hopefully understanding what it is and what it affects will make the DBA’s job easier when such errors, as reported here, are encountered.

Remember, Oracle has your number.
