Oracle Tips and Tricks — David Fitzjarrell

October 22, 2014

Memory Improvement

Filed under: General — dfitzjarrell @ 09:37

“Besides, being lost is never a matter of not knowing where you are; it’s a matter of 
not knowing where you aren’t—and I don’t care at all about where I’m not.” 
― Norton Juster, The Phantom Tollbooth 

Oracle 12c (version 12.1.0.2) offers the option of using in-memory processing to speed things along. Called the In-memory option, it is installed along with the 12.1.0.2 software. Using it can make a considerable difference in processing speed, provided you have sufficient resources (RAM) available. Let’s revisit an older example, on Bloom filters, and see if Oracle processes things any faster in-memory.

Looking again at the Bloom filter example, run under 11.2.0.3 on a Windows installation, it’s evident that the Bloom filter provides an increased level of performance (only the abbreviated output section is reproduced here):


SQL> select /*+ Bloom join 2 parallel 2 use_hash(emp emp_dept) */ e.empid, e.empnm, d.deptnm, e.empsal	  --, count(*) emps
  2  from emp e join emp_dept ed on (ed.empid = e.empid) join dept_info d on (ed.empdept = d.deptnum)
  3  where ed.empdept = 20;

     EMPID EMPNM                                    DEPTNM                        EMPSAL                                                              
---------- ---------------------------------------- ------------------------- ----------                                                              
   1670281 Fnarm1670281                             PROCUREMENT                     5000                                                              
   1670289 Fnarm1670289                             PROCUREMENT                     6000                                                              
   1670297 Fnarm1670297                             PROCUREMENT                     7000                                                              
   1670305 Fnarm1670305                             PROCUREMENT                     1000                                                              
   1670313 Fnarm1670313                             PROCUREMENT                     2000                                                              
   1670321 Fnarm1670321                             PROCUREMENT                     3000                                                              
   1670329 Fnarm1670329                             PROCUREMENT                     4000                                                              
   1670337 Fnarm1670337                             PROCUREMENT                     5000                                                              
   1670345 Fnarm1670345                             PROCUREMENT                     6000                                                              
   1670353 Fnarm1670353                             PROCUREMENT                     7000                                                              
   1670361 Fnarm1670361                             PROCUREMENT                     1000                                                              
...
   1857369 Fnarm1857369                             PROCUREMENT                     4000                                                              
   1857377 Fnarm1857377                             PROCUREMENT                     5000                                                              
   1857385 Fnarm1857385                             PROCUREMENT                     6000                                                              

250000 rows selected.

Elapsed: 00:00:54.86

Execution Plan
----------------------------------------------------------                                                                                            
Plan hash value: 2643012915                                                                                                                           
                                                                                                                                                      
----------------------------------------------------------------------------------------------------------------------------------                    
| Id  | Operation                            | Name         | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |                    
----------------------------------------------------------------------------------------------------------------------------------                    
|   0 | SELECT STATEMENT                     |              |   273K|    26M|  1378   (1)| 00:00:01 |        |      |            |                    
|   1 |  PX COORDINATOR                      |              |       |       |            |          |        |      |            |                    
|   2 |   PX SEND QC (RANDOM)                | :TQ10002     |   273K|    26M|  1378   (1)| 00:00:01 |  Q1,02 | P->S | QC (RAND)  |                    
|*  3 |    HASH JOIN                         |              |   273K|    26M|  1378   (1)| 00:00:01 |  Q1,02 | PCWP |            |                    
|   4 |     PX RECEIVE                       |              |   273K|    13M|   536   (1)| 00:00:01 |  Q1,02 | PCWP |            |                    
|   5 |      PX SEND BROADCAST               | :TQ10001     |   273K|    13M|   536   (1)| 00:00:01 |  Q1,01 | P->P | BROADCAST  |                    
|   6 |       NESTED LOOPS                   |              |   273K|    13M|   536   (1)| 00:00:01 |  Q1,01 | PCWP |            |                    
|   7 |        BUFFER SORT                   |              |       |       |            |          |  Q1,01 | PCWC |            |                    
|   8 |         PX RECEIVE                   |              |       |       |            |          |  Q1,01 | PCWP |            |                    
|   9 |          PX SEND BROADCAST           | :TQ10000     |       |       |            |          |        | S->P | BROADCAST  |                    
|  10 |           TABLE ACCESS BY INDEX ROWID| DEPT_INFO    |     1 |    27 |     1   (0)| 00:00:01 |        |      |            |                    
|* 11 |            INDEX UNIQUE SCAN         | DEPT_INFO_PK |     1 |       |     1   (0)| 00:00:01 |        |      |            |                    
|  12 |        PX BLOCK ITERATOR             |              |   273K|  6947K|   535   (1)| 00:00:01 |  Q1,01 | PCWC |            |                    
|* 13 |         TABLE ACCESS FULL            | EMP_DEPT     |   273K|  6947K|   535   (1)| 00:00:01 |  Q1,01 | PCWP |            |                    
|  14 |     PX BLOCK ITERATOR                |              |  2099K|    96M|   840   (1)| 00:00:01 |  Q1,02 | PCWC |            |                    
|* 15 |      TABLE ACCESS FULL               | EMP          |  2099K|    96M|   840   (1)| 00:00:01 |  Q1,02 | PCWP |            |                    
----------------------------------------------------------------------------------------------------------------------------------                    
                                                                                                                                                      
Predicate Information (identified by operation id):                                                                                                   
---------------------------------------------------                                                                                                   
                                                                                                                                                      
   3 - access("ED"."EMPID"="E"."EMPID")                                                                                                               
  11 - access("D"."DEPTNUM"=20)                                                                                                                       
  13 - filter("ED"."EMPDEPT"=20)                                                                                                                      
  15 - filter(SYS_OP_Bloom_FILTER(:BF0000,"E"."EMPID"))                                                                                               
                                                                                                                                                      
Note                                                                                                                                                  
-----                                                                                                                                                 
   - dynamic sampling used for this statement (level=2)                                                                                               


Statistics
----------------------------------------------------------                                                                                            
         33  recursive calls                                                                                                                          
        139  db block gets                                                                                                                            
      36224  consistent gets                                                                                                                          
      17657  physical reads                                                                                                                           
          0  redo size                                                                                                                                
    9526012  bytes sent via SQL*Net to client                                                                                                         
     183846  bytes received via SQL*Net from client                                                                                                   
      16668  SQL*Net roundtrips to/from client                                                                                                        
          5  sorts (memory)                                                                                                                           
          0  sorts (disk)                                                                                                                             
     250000  rows processed                                                                                                                           

SQL>

Now let’s get Oracle’s In-memory option working. By default it’s disabled (the initialization parameter inmemory_size is set to 0). To get In-memory configured and working, three steps need to be performed:


1. Set inmemory_size to a non-zero value; in the database used for this example it's been set to 576M.  Since this parameter cannot be modified dynamically
   it will need to be changed using scope=spfile.
2. If desired, increase sga_max_size by the value assigned to inmemory_size.  This is not a requirement, but remember that any memory allocated
   to the IM column store is taken from the SGA allocation.
3. Restart the database to effect the changes.
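The steps above can be sketched as follows; the 576M figure is the one used in this example, and the sga_max_size value shown is purely illustrative:

```sql
-- Reserve 576M for the IM column store; this parameter cannot be changed
-- dynamically, so the change goes to the spfile and takes effect at restart.
ALTER SYSTEM SET inmemory_size = 576M SCOPE=SPFILE;

-- Optionally grow the SGA to compensate, since the IM column store
-- is carved out of the SGA allocation (value here is illustrative).
ALTER SYSTEM SET sga_max_size = 4672M SCOPE=SPFILE;

-- Restart the database to effect the changes.
SHUTDOWN IMMEDIATE
STARTUP
```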

One caveat: calculating how much memory will be needed to provide a workable In-memory configuration. The task is similar to configuring the KEEP pool in earlier releases, as you should allocate sufficient memory to contain all of the tables that will be used, to prevent delays in loading the data into the IM column store. With that done you have the In-memory option configured and ready to use. Alter a table to use inmemory and you can start seeing the possible benefits of this configuration. By default the following in-memory options are set:


  PRIORITY NONE -- Oracle controls when table data is populated in the IM column store; population may be delayed if memory is needed for other processes.
                   Other values are LOW, MEDIUM, HIGH and CRITICAL.  As expected, a higher PRIORITY loads before any lower priority.
  MEMCOMPRESS FOR QUERY LOW -- This option provides the best query performance and the least compression.
  DISTRIBUTE AUTO -- Used for RAC, this controls how the data is distributed between nodes.  BY ROWID RANGE, BY PARTITION and BY SUBPARTITION are other options.
  NO DUPLICATE -- Another RAC setting, this controls how many nodes will have duplicated data.  DUPLICATE duplicates the data on one additional node, for a total
                  of two (so in a two-node RAC configuration DUPLICATE and DUPLICATE ALL provide the same duplication); DUPLICATE ALL duplicates the table data
                  across all available RAC nodes.
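These defaults can also be spelled out explicitly. The following sketch, using the emp table from this example, is equivalent to a bare "ALTER TABLE emp INMEMORY;":

```sql
-- Place an existing table in the IM column store with the default
-- in-memory options written out explicitly.
ALTER TABLE emp INMEMORY
  MEMCOMPRESS FOR QUERY LOW
  PRIORITY NONE
  DISTRIBUTE AUTO
  NO DUPLICATE;

-- Remove the table from the IM column store again.
ALTER TABLE emp NO INMEMORY;
```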

Re-writing the Bloom filter example to take advantage of this new feature allows us to see how it will perform:


SQL>
SQL> --
SQL> -- Create sample tables
SQL> --
SQL> create table emp(
  2  	     empid   number,
  3  	     empnm   varchar2(40),
  4  	     empsal  number,
  5  	     empssn  varchar2(12),
  6  	     constraint emp_pk primary key (empid)
  7  ) inmemory;

Table created.

SQL>
SQL> create table emp_dept(
  2  	     empid   number,
  3  	     empdept number,
  4  	     emploc  varchar2(60),
  5  	     constraint emp_dept_pk primary key(empid)
  6  ) inmemory;

Table created.

SQL>
SQL> create table dept_info(
  2  	     deptnum number,
  3  	     deptnm  varchar2(25),
  4  	     constraint dept_info_pk primary key(deptnum)
  5  ) inmemory;

Table created.

SQL>
SQL> --
SQL> -- Load sample tables with data
SQL> --
SQL> begin
  2  	     for i in 1..2000000 loop
  3  		     insert into emp
  4  		     values(i, 'Fnarm'||i, (mod(i, 7)+1)*1000, mod(i,10)||mod(i,10)||mod(i,10)||'-'||mod(i,10)||mod(i,10)||'-'||mod(i,10)||mod(i,10)||mod(i,10)||mod(i,10));
  5  		     insert into emp_dept
  6  		     values(i, (mod(i,8)+1)*10, 'Zanzwalla'||(mod(i,8)+1)*10);
  7  		     commit;
  8  	     end loop;
  9  	     insert into dept_info
 10  	     select distinct empdept, case when empdept = 10 then 'SALES'
 11  					   when empdept = 20 then 'PROCUREMENT'
 12  					   when empdept = 30 then 'HR'
 13  					   when empdept = 40 then 'RESEARCH'
 14  					   when empdept = 50 then 'DEVELOPMENT'
 15  					   when empdept = 60 then 'EMPLOYEE RELATIONS'
 16  					   when empdept = 70 then 'FACILITIES'
 17  					   when empdept = 80 then 'FINANCE' end
 18  	     from emp_dept;
 19
 20  end;
 21  /

PL/SQL procedure successfully completed.

SQL>
SQL> --
SQL> -- Gather statistics
SQL> --
SQL> exec dbms_stats.gather_schema_stats('BING')

PL/SQL procedure successfully completed.

SQL>
SQL> --
SQL> -- Run join query using bloom filter
SQL> --

SQL> set autotrace on
SQL> set timing on
SQL>
SQL> select /*+ bloom join 2 parallel 2 use_hash(emp emp_dept) */ e.empid, e.empnm, d.deptnm, e.empsal	  --, count(*) emps
  2  from emp e join emp_dept ed on (ed.empid = e.empid) join dept_info d on (ed.empdept = d.deptnum)
  3  where ed.empdept = 20;

     EMPID EMPNM                                    DEPTNM                        EMPSAL
---------- ---------------------------------------- ------------------------- ----------
   1548633 Fnarm1548633                             PROCUREMENT                     3000
   1548641 Fnarm1548641                             PROCUREMENT                     4000
   1548649 Fnarm1548649                             PROCUREMENT                     5000
 ...
    897969 Fnarm897969                              PROCUREMENT                     3000
    897977 Fnarm897977                              PROCUREMENT                     4000
    897985 Fnarm897985                              PROCUREMENT                     5000
    897993 Fnarm897993                              PROCUREMENT                     6000
    898001 Fnarm898001                              PROCUREMENT                     7000

250000 rows selected.

Elapsed: 00:00:25.67

Execution Plan
----------------------------------------------------------
Plan hash value: 2613779428

---------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name      | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |           |   250K|    10M|       |  3863   (2)| 00:00:01 |
|*  1 |  HASH JOIN                    |           |   250K|    10M|  8304K|  3863   (2)| 00:00:01 |
|   2 |   JOIN FILTER CREATE          | :BF0000   |   250K|    10M|       |  3863   (2)| 00:00:01 |
|*  3 |    HASH JOIN                  |           |   250K|  5371K|       |    94  (23)| 00:00:01 |
|*  4 |     TABLE ACCESS INMEMORY FULL| DEPT_INFO |     1 |    13 |       |     1   (0)| 00:00:01 |
|*  5 |     TABLE ACCESS INMEMORY FULL| EMP_DEPT  |   250K|  2197K|       |    92  (22)| 00:00:01 |
|   6 |   JOIN FILTER USE             | :BF0000   |  2000K|    41M|       |   137  (17)| 00:00:01 |
|*  7 |    TABLE ACCESS INMEMORY FULL | EMP       |  2000K|    41M|       |   137  (17)| 00:00:01 |
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("ED"."EMPID"="E"."EMPID")
   3 - access("ED"."EMPDEPT"="D"."DEPTNUM")
   4 - inmemory("D"."DEPTNUM"=20)
       filter("D"."DEPTNUM"=20)
   5 - inmemory("ED"."EMPDEPT"=20)
       filter("ED"."EMPDEPT"=20)
   7 - inmemory(SYS_OP_BLOOM_FILTER(:BF0000,"E"."EMPID"))
       filter(SYS_OP_BLOOM_FILTER(:BF0000,"E"."EMPID"))

Note
-----
   - this is an adaptive plan


Statistics
----------------------------------------------------------
         14  recursive calls
          0  db block gets
         26  consistent gets
          0  physical reads
          0  redo size
    9659722  bytes sent via SQL*Net to client
     183877  bytes received via SQL*Net from client
      16668  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
     250000  rows processed

SQL>

Notice that the Bloom filter is still used and that, in conjunction with the In-memory option, the execution time was considerably shorter than in the original 11.2.0.3 Windows server example. Notice also that, with the tables populated in-memory, the consistent gets and physical reads dropped dramatically and the sorts fell to 0, since all of the processing was done in-memory. If you have the resources available, using the In-memory option in Oracle 12.1.0.2 might improve performance significantly.
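If the tables haven’t finished populating, queries quietly fall back to reading from disk, so it’s worth confirming population before timing anything. A quick check against the V$IM_SEGMENTS view (a sketch, using the table names from this example):

```sql
-- Verify the tables actually made it into the IM column store;
-- BYTES_NOT_POPULATED should be 0 once population is complete.
SELECT segment_name,
       populate_status,
       inmemory_size,
       bytes_not_populated
FROM   v$im_segments
WHERE  segment_name IN ('EMP', 'EMP_DEPT', 'DEPT_INFO');
```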

Bloom filters plus the In-memory database option with Oracle 12.1.0.2 can improve query performance dramatically, and that can make for very happy end users. It’s a bit of a chore to get In-memory configured (the hardest part being the available memory calculations) but it appears to be worth every bit of time spent to get it working.

We could all use a little memory improvement now and then.


October 7, 2014

To Skip, Or Not To Skip

Filed under: General — dfitzjarrell @ 14:02

An interesting issue presented itself just recently with a logical standby database I manage. The database is used for generating reports and the client wanted to skip all DML activity for a given schema as it wasn’t necessary for reporting purposes. I had done this in version 10.2.0.x; it was a simple procedure on a low-traffic database:


alter database stop logical standby apply;
exec dbms_logstdby.skip('DML','','%')
alter database start logical standby apply;

Pretty simple stuff, really. In the 10.2.0.x standby configuration I had been lucky: no DDL changes for the specified schema had occurred, so missing a step wasn’t obvious since it didn’t impact the apply process. For a while the 11.2 standby was happily applying transactions and skipping DML for the specified schema. Overnight, however, the apply process shut down for no immediately apparent reason. The schema in question had all of its DML ignored, so why did apply stop? The alert log provided the following set of errors:


ORA-26808: Apply process AS01 died unexpectedly.
ORA-02149: Specified partition does not exist
LOGSTDBY Apply process AS03 server id=3 pid=85 OS id=27432 stopped

This required further investigation. Checking the primary database I found 594 partitions for tables owned by the schema that was supposed to be skipped; on the logical standby that same schema had 43 fewer partitions. So the actions I had taken the day before, skipping all DML activity for this schema, weren’t enough. Pondering the issue I went to the Oracle online documentation and eventually found the step I had missed: skipping the schema’s DML wasn’t enough, as the DDL also had to be ignored, and doing that required the following set of instructions:


ALTER DATABASE ABORT LOGICAL STANDBY APPLY;
EXECUTE DBMS_LOGSTDBY.SKIP(stmt => 'SCHEMA_DDL',-
     schema_name => '', -
     object_name => '%');
EXECUTE DBMS_LOGSTDBY.SKIP ('DML', '', '%');
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;  

Using the abort command does two things: it stops the apply process and it rolls back any pending transactions. This is necessary so that the ‘offending’ transactions, the ones that throw the listed error, get rolled back; this ‘resets’ the transaction ‘pointer’ for LogMiner so that when the apply process is restarted it can skip past them. Notice the second step in the skip process:


EXECUTE DBMS_LOGSTDBY.SKIP(stmt => 'SCHEMA_DDL',-
     schema_name => '', -
     object_name => '%');

This tells Oracle to completely skip any DDL for the specified schema and keeps transaction processing going even if there are DDL changes on the primary database for the schema you want to skip. Since I hadn’t executed the skip procedure twice, with the two slightly different sets of parameters, only half of the job was done; that wasn’t apparent until partitions were added to the specified schema on the primary. Once the missing step had been executed, and the original skip statement re-submitted, the apply process started applying transactions again and the standby lag grew shorter and shorter.
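The registered rules can be verified on the standby afterward; a sketch assuming the DBA_LOGSTDBY_SKIP view:

```sql
-- Confirm that both skip rules (DML and SCHEMA_DDL) are registered;
-- one row should appear for each call to DBMS_LOGSTDBY.SKIP.
SELECT statement_opt,
       schema,
       name
FROM   dba_logstdby_skip
WHERE  statement_opt IN ('DML', 'SCHEMA_DDL');
```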

It pays to read the documentation when performing unfamiliar tasks or what seem to be familiar tasks in a new release. Missed steps can create issues that can seem to be mysterious; verifying the required steps is necessary to ensure a task is completed successfully. What you may have given the least thought to may be the very thing that stops you in your tracks.

Skip, skip, skip to my Lou.
