Oracle Tips and Tricks — David Fitzjarrell

January 28, 2008

Lies, Damned Lies, and Statistics

Filed under: General,Performance,stats — dfitzjarrell @ 15:03

Apparently Oracle releases from 10.2.0.3 up through 12.1.0.x have a problem with fixed object statistics (well, it’s not a problem, really; they simply don’t exist by default), causing queries against views built on some of these fixed objects to take an abnormally long time for the data volume processed. As an example, take the following query:

select distinct owner from v$access;

Executing this query in a plain-vanilla 10.2.0.3 database provides the following output:

SQL> select distinct owner from v$access;
select distinct owner from v$access
*
ERROR at line 1:
ORA-01013: user requested cancel of current operation

The operation was cancelled after 5 minutes as the query would not return results in a ‘reasonable’ time frame. A subsequent execution, left to its own devices, returned its two-row result set in approximately 12 minutes.

The query plan:

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  dmfdcmvwkgfqh, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ distinct owner from v$access

Plan hash value: 3850776806

---------------------------------------------------------------------------------------------------------------------------------------------------------
 Id   Operation                   Name             Starts  E-Rows E-Bytes Cost (%CPU) E-Time    A-Rows    A-Time     OMem   1Mem   O/1/M
---------------------------------------------------------------------------------------------------------------------------------------------------------
   1   HASH UNIQUE                                      2     105  14175      1 (100) 00:00:01       3 00:01:44.35                   
   2    NESTED LOOPS                                    2     105  14175      0   (0)              348 00:01:29.55                   
   3     NESTED LOOPS                                   2      10    820      0   (0)              348 00:01:29.53                   
   4      MERGE JOIN CARTESIAN                          2     100   5700      0   (0)             125K 00:00:00.61                   
*  5       FIXED TABLE FULL       X$KSUSE               2       1     19      0   (0)               81 00:00:00.01                   
   6       BUFFER SORT                                 81     100   3800      0   (0)             125K 00:00:00.49    118K   118K     2/0/0
   7        FIXED TABLE FULL      X$KGLDP               2     100   3800      0   (0)             3128 00:00:00.05                   
*  8      FIXED TABLE FIXED INDEX X$KGLLK (ind:1)     125K      1     25      0   (0)              348 00:02:19.55                   
*  9     FIXED TABLE FIXED INDEX  X$KGLOB (ind:1)     348      10    530      0   (0)              348 00:00:00.02                   
---------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   5 - filter("S"."INST_ID"=USERENV('INSTANCE'))
   8 - filter(("L"."KGLLKUSE"="S"."ADDR" AND "L"."KGLLKHDL"="D"."KGLHDADR" AND "L"."KGLNAHSH"="D"."KGLNAHSH"))
   9 - filter(("O"."KGLNAHSH"="D"."KGLRFHSH" AND "O"."KGLHDADR"="D"."KGLRFHDL"))

Notice the MERGE JOIN CARTESIAN and the disparity between the estimated rows (100) and the actual rows (125,000). Since the query accesses fixed tables, the error most likely lies in the fixed table statistics. DBMS_STATS provides a procedure to compute fixed table statistics:

SQL> connect / as sysdba
SQL> exec dbms_stats.gather_fixed_objects_stats(NULL);

PL/SQL procedure successfully completed.

SQL>

Running the original query again:

SQL> select distinct owner
  2  from v$access;

OWNER
----------------------------------------------------------------
PUBLIC
SYS

Elapsed: 00:00:00.14

Execution Plan
----------------------------------------------------------
Plan hash value: 174934893

----------------------------------------------------------------------------------------------
 Id   Operation                   Name             Rows   Bytes  Cost (%CPU) Time
----------------------------------------------------------------------------------------------
   0  SELECT STATEMENT                                15   1305      5 (100) 00:00:01
   1   HASH UNIQUE                                    15   1305      5 (100) 00:00:01
   2    NESTED LOOPS                                 433  37671      4 (100) 00:00:01
   3     NESTED LOOPS                                433  29877      3 (100) 00:00:01
   4      HASH JOIN                                  433  16021      1 (100) 00:00:01
   5       FIXED TABLE FULL       X$KSUSE            170   2040      0   (0) 00:00:01
   6       FIXED TABLE FULL       X$KGLLK            433  10825      1 (100) 00:00:01
   7      FIXED TABLE FIXED INDEX X$KGLDP (ind:1)      1     32      0   (0) 00:00:01
   8     FIXED TABLE FIXED INDEX  X$KGLOB (ind:1)      1     18      0   (0) 00:00:01
----------------------------------------------------------------------------------------------


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          0  consistent gets
          0  physical reads
          0  redo size
        269  bytes sent via SQL*Net to client
        246  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          4  rows processed

SQL>

Notice the results are returned in less than 1 second, and the MERGE JOIN CARTESIAN is gone from the execution plan. More detailed information on the above plan is shown below:

SQL> select * from table(dbms_xplan.display_cursor('dmfdcmvwkgfqh',0,'ALL ALLSTATS'));


PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  dmfdcmvwkgfqh, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ distinct owner from v$access

Plan hash value: 174934893

---------------------------------------------------------------------------------------------------------------------------------------------------------
 Id   Operation                   Name             Starts  E-Rows E-Bytes Cost (%CPU) E-Time    A-Rows    A-Time     OMem   1Mem   O/1/M
---------------------------------------------------------------------------------------------------------------------------------------------------------
   1   HASH UNIQUE                                      1      15   1305      5 (100) 00:00:01       4 00:00:00.10                   
   2    NESTED LOOPS                                    1     433  37671      4 (100) 00:00:01     394 00:00:00.09                   
   3     NESTED LOOPS                                   1     433  29877      3 (100) 00:00:01     394 00:00:00.06                   
*  4      HASH JOIN                                     1     433  16021      1 (100) 00:00:01     335 00:00:00.03   1236K  1236K     1/0/0
*  5       FIXED TABLE FULL       X$KSUSE               1     170   2040      0   (0)              170 00:00:00.01                   
   6       FIXED TABLE FULL       X$KGLLK               1     433  10825      1 (100) 00:00:01     335 00:00:00.01                   
*  7      FIXED TABLE FIXED INDEX X$KGLDP (ind:1)     335       1     32      0   (0)              394 00:00:00.03                   
*  8     FIXED TABLE FIXED INDEX  X$KGLOB (ind:1)     394       1     18      0   (0)              394 00:00:00.03                   
---------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - access("L"."KGLLKUSE"="S"."ADDR")
   5 - filter("S"."INST_ID"=USERENV('INSTANCE'))
   7 - filter(("L"."KGLLKHDL"="D"."KGLHDADR" AND "L"."KGLNAHSH"="D"."KGLNAHSH"))
   8 - filter(("O"."KGLNAHSH"="D"."KGLRFHSH" AND "O"."KGLHDADR"="D"."KGLRFHDL"))

Notice that the actual rows (A-Rows) are in much closer agreement with the estimated rows (E-Rows), which improves the optimizer’s chances of arriving at a viable execution plan. [It has also come to my attention that the exp utility will hang in some releases when exporting cluster definitions. This ‘hang’ may also be a result of missing fixed object statistics if the database in question is a 10g or later release.] I would include this call in the script used to update schema statistics (in many cases scheduled for a weekly run) and let these statistics be generated ‘automatically’.

A workaround is to set optimizer_mode to RULE at the session level; this also ‘solves’ the problem by using antiquated optimizer code that eliminates the cost model altogether. Oracle recommends the first ‘option’ (computing statistics on the fixed objects) and also recommends these statistics be kept current. Adding a call to the dbms_stats.gather_fixed_objects_stats(NULL) procedure to the weekly statistics run should be a fairly easy modification to make, and would be my choice to ensure queries against fixed objects return in a reasonable period of time with respect to the data volume.
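As a sketch of what that modification might look like, here is a minimal weekly job; the job name, schema name, and schedule below are illustrative assumptions, not part of any particular site’s configuration:

```sql
-- Hypothetical weekly statistics job; the job name, schema and
-- schedule are illustrative, not prescriptive.
begin
  dbms_scheduler.create_job(
    job_name        => 'WEEKLY_STATS',
    job_type        => 'PLSQL_BLOCK',
    job_action      => q'[begin
                            dbms_stats.gather_schema_stats('APP_OWNER');
                            dbms_stats.gather_fixed_objects_stats(NULL);
                          end;]',
    repeat_interval => 'FREQ=WEEKLY; BYDAY=SUN; BYHOUR=2',
    enabled         => TRUE);
end;
/
```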

Verifying you have fixed object statistics involves a quick query of the sys.tab_stats$ table. There should be in the neighborhood of 580 rows returned (at least on the 10.2.0.3.0 databases I’ve queried).
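A quick check, run as SYS, might look like this (the exact count will vary by release and platform; zero rows means the statistics have never been gathered):

```sql
-- Run as SYS; a count of zero means fixed object statistics
-- have never been gathered in this database.
select count(*) from sys.tab_stats$;
```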

The time required for dbms_stats.gather_fixed_objects_stats to complete can vary from 1 minute to possibly 5 minutes or more (depending upon the server configuration and user load) so this won’t report ‘PL/SQL procedure successfully completed.’ immediately. You can monitor the progress of the collection using the V$SESSION_LONGOPS view:

SQL> select sid, serial#, units, sofar, totalwork, start_time, last_update_time, time_remaining
  2  from v$session_longops
  3  where (sid, serial#) in (select sid, serial# from v$session where username = 'SYS')
  4  and units = 'Objects'
  5  order by target, last_update_time;

       SID    SERIAL# UNITS                                 SOFAR  TOTALWORK START_TIM LAST_UPDA TIME_REMAINING
---------- ---------- -------------------------------- ---------- ---------- --------- --------- --------------
       145       3796 Objects                                  34         34 08-OCT-08 08-OCT-08           0
       145       3796 Objects                                   3          3 08-OCT-08 08-OCT-08           0
       145       3796 Objects                                 548        548 08-OCT-08 08-OCT-08           0

SQL>

The TOTALWORK column indicates the total quantity of UNITS the task entails. The SOFAR column reports, in UNITS, the work completed.
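If you prefer a percent-complete figure, the same view can provide it (a convenience query, not from the original post):

```sql
-- Percent complete for an in-flight fixed object stats collection
select sid, serial#, sofar, totalwork,
       round(sofar/totalwork*100, 1) pct_done
  from v$session_longops
 where units = 'Objects'
   and totalwork > 0;
```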

Oracle, in another Metalink Note (549895.1), suggests setting the hidden init parameter _optimizer_cartesian_enabled to FALSE, either at the session level or at the instance level [this prevents the CBO from choosing a cartesian join when other access paths are available. In Oracle 10.2.0.3 and later releases an additional hidden parameter, _optimizer_mjc_enabled, also needs to be set to FALSE to completely eliminate any merge join cartesian paths. Thanks, Nuno, for that bit of information.] I’d rather compute statistics on the fixed objects than set a hidden init.ora parameter, as setting that parameter may adversely affect other queries which aren’t experiencing problems.
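Should you choose the hidden-parameter route anyway, the session-level commands would look like the following; note that hidden parameter names must be double-quoted, and such parameters should only be set under the direction of Oracle Support:

```sql
-- Set only under Oracle Support's direction
alter session set "_optimizer_cartesian_enabled" = false;
alter session set "_optimizer_mjc_enabled" = false;
```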

If, for some reason, you decide you don’t want fixed object statistics, removing them is a simple operation:

SQL> exec dbms_stats.delete_fixed_objects_stats

PL/SQL procedure successfully completed.

SQL>

and the fixed object statistics are gone.

As stated earlier, including these in the regular statistics run is probably a good idea, so that any changes to the data in those fixed objects will be captured.

In Oracle 12.1.0.x and later releases these statistics are included in the pre-configured automatic statistics job, so they shouldn’t be an issue.

There’s nothing better than nice, fairly fresh statistics. And I ain’t lyin’.

January 9, 2008

Resurrecting The Dead (or Gee, I Thought That Data Was Gone Forever)

Filed under: flashback — dfitzjarrell @ 16:29

Occasionally an end-user does the unthinkable, and changes data permanently (or so it appears) without checking the results before issuing a commit. Ooops. It used to be that to restore that data one had basically two choices:

A) Restore the data from a known good copy made before the changes were enacted
B) Restore the table from the last known good export

Yes, there was a third choice, an incomplete tablespace recovery, but that presumed the database was in ARCHIVELOG mode and, gasp, there are databases running in NOARCHIVELOG mode where this wouldn’t work. So, what is a DBA to do? If you’re running 9i you can enable flashback query through the dbms_flashback package. This was nice but rather cumbersome in implementation, as it only allowed you to query the flashback data while flashback mode was enabled, so updates to existing tables (to restore known good data) were somewhat convoluted: it was necessary to enable flashback mode, open a cursor, then disable flashback mode to effect the updates:

declare
  --
  -- Cursor to fetch the required data
  --
  cursor c1 is
  select flerg, snerm, anguplast
  from snangpiester;

  --
  -- A place to stash each row of the returned data
  --
  c1_rec c1%rowtype;

begin

  --
  -- We flashback the data image to before the problem began
  --
  dbms_flashback.enable_at_time(to_timestamp('16-SEP-2007 06:53:00 AM','DD-MON-RRRR HH:MI:SS AM'));

  --
  -- Open the cursor while in flashback mode
  --
  open c1;

  --
  -- Disable flashback mode so we can update the problem table
  --
  dbms_flashback.disable;

  --
  -- Go fetch the good data and update the problem table
  --
  loop
          fetch c1 into c1_rec;
          exit when c1%notfound;  -- say goodbye when there's no more data
          update snangpiester
          set anguplast = c1_rec.anguplast
          where flerg = c1_rec.flerg
          and snerm = c1_rec.snerm;
  end loop;

  --
  -- Commit the changes
  --

  commit;

end;
/

Enter Oracle 10g and later releases where flashback query is enabled and does not require the use of any additional packages. Flashback query can use the SCN or a timestamp value as a reference point, making the use of such queries much easier. Let’s see how using flashback query can resurrect data thought to be lost by committed user changes.

Flashback query syntax is as follows:

select [select list here]
from [table]
as of timestamp to_timestamp('date/time or timestamp string');

Note the ‘as of timestamp’ clause [described fully at http://docs.oracle.com]; it tells Oracle to rebuild the data image as of the provided date and time, using data found in the UNDO segments preserved by the undo_retention parameter setting. To illustrate the power of flashback query, let’s look at an example using the EMP table.
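As noted, an SCN works as the reference point as well; one way to capture the current SCN beforehand is dbms_flashback.get_system_change_number (the bind variable name here is just an example):

```sql
-- Capture the current SCN before any changes are made
variable scn number
exec :scn := dbms_flashback.get_system_change_number;

-- ... the errant changes happen here ...

-- View the table as it stood at the captured SCN
select * from emp as of scn :scn;
```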

Koffi Cupps, assistant HR manager and part-time ankle model, wanted to increase the salaries of the sales team by 17 percent and add $85 to each salesman’s commission; unfortunately she updated the entire employee table without first making a copy of the original data and committed the changes before checking the results:

SQL> update emp
  2  set sal = sal*1.17, comm = nvl(comm, 0) + 85;

14 rows updated.

SQL> commit;

Commit complete.

SQL> select *
  2  from emp;

     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
      7369 SMITH      CLERK           7902 17-DEC-80        936         85         20
      7499 ALLEN      SALESMAN        7698 20-FEB-81       1872        385         30
      7521 WARD       SALESMAN        7698 22-FEB-81     1462.5        585         30
      7566 JONES      MANAGER         7839 02-APR-81    3480.75         85         20
      7654 MARTIN     SALESMAN        7698 28-SEP-81     1462.5       1485         30
      7698 BLAKE      MANAGER         7839 01-MAY-81     3334.5         85         30
      7782 CLARK      MANAGER         7839 09-JUN-81     2866.5         85         10
      7788 SCOTT      ANALYST         7566 09-DEC-82       3510         85         20
      7839 KING       PRESIDENT            17-NOV-81       5850         85         10
      7844 TURNER     SALESMAN        7698 08-SEP-81       1755         85         30
      7876 ADAMS      CLERK           7788 12-JAN-83       1287         85         20
      7900 JAMES      CLERK           7698 03-DEC-81     1111.5         85         30
      7902 FORD       ANALYST         7566 03-DEC-81       3510         85         20
      7934 MILLER     CLERK           7782 23-JAN-82       1521         85         10

14 rows selected.

SQL>

Fortunately her younger sister, Dixie, is the Oracle DBA for the company. Knowing the HR software was running against Oracle 11.2.0.4, Dixie sprang into action to restore the data to its state prior to the change (it was also a good thing this was early on a Monday morning and that undo_retention was set to a sufficient value, else Dixie could have received an ORA-01555 because the necessary undo blocks had been overwritten):

SQL> select *
   2  from emp as of timestamp to_timestamp(trunc(sysdate));

     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ---------- 
      7369 SMITH      CLERK           7902 17-DEC-80        800                    20 
      7499 ALLEN      SALESMAN        7698 20-FEB-81       1600        300         30 
      7521 WARD       SALESMAN        7698 22-FEB-81       1250        500         30 
      7566 JONES      MANAGER         7839 02-APR-81       2975                    20 
      7654 MARTIN     SALESMAN        7698 28-SEP-81       1250       1400         30 
      7698 BLAKE      MANAGER         7839 01-MAY-81       2850                    30 
      7782 CLARK      MANAGER         7839 09-JUN-81       2450                    10 
      7788 SCOTT      ANALYST         7566 09-DEC-82       3000                    20 
      7839 KING       PRESIDENT            17-NOV-81       5000                    10 
      7844 TURNER     SALESMAN        7698 08-SEP-81       1500          0         30 
      7876 ADAMS      CLERK           7788 12-JAN-83       1100                    20
      7900 JAMES      CLERK           7698 03-DEC-81        950                    30 
      7902 FORD       ANALYST         7566 03-DEC-81       3000                    20 
      7934 MILLER     CLERK           7782 23-JAN-82       1300                    10

14 rows selected.

All Dixie needed to do was delete the incorrect data and insert the old values:

SQL> delete from emp;

14 rows deleted.

SQL> insert into emp
  2  select * from emp as of timestamp to_timestamp(trunc(sysdate));

14 rows created.

SQL> commit;

Commit complete.

Of course she also could have updated the ‘bad’ data using the flashback image:

SQL> update emp e
  2  set (sal, comm) = (select sal, comm
  3  from emp
  4  as of timestamp to_timestamp(trunc(sysdate))
  5  where empno = e.empno);

14 rows updated.

SQL> commit;

Commit complete.

SQL> select *
  2  from emp;


     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ---------- 
      7369 SMITH      CLERK           7902 17-DEC-80        800                    20 
      7499 ALLEN      SALESMAN        7698 20-FEB-81       1600        300         30 
      7521 WARD       SALESMAN        7698 22-FEB-81       1250        500         30 
      7566 JONES      MANAGER         7839 02-APR-81       2975                    20 
      7654 MARTIN     SALESMAN        7698 28-SEP-81       1250       1400         30 
      7698 BLAKE      MANAGER         7839 01-MAY-81       2850                    30 
      7782 CLARK      MANAGER         7839 09-JUN-81       2450                    10 
      7788 SCOTT      ANALYST         7566 09-DEC-82       3000                    20 
      7839 KING       PRESIDENT            17-NOV-81       5000                    10 
      7844 TURNER     SALESMAN        7698 08-SEP-81       1500          0         30 
      7876 ADAMS      CLERK           7788 12-JAN-83       1100                    20
      7900 JAMES      CLERK           7698 03-DEC-81        950                    30 
      7902 FORD       ANALYST         7566 03-DEC-81       3000                    20 
      7934 MILLER     CLERK           7782 23-JAN-82       1300                    10

14 rows selected.

SQL>

As you can see, the original state of the data has been restored, Dixie Cupps is a hero, and Koffi Cupps can try her update again, being careful not to commit the changes until she checks that they are correct. Oh, and after making a copy of the table BEFORE she issues any update statements.

If you’re planning on using flashback query on a regular basis and you’re running Oracle 10gR2 or later, it might be wise to alter the UNDO tablespace to guarantee the undo_retention window:

SQL> alter tablespace undotbs1 retention guarantee;

Tablespace altered.

SQL>

The tradeoff is that DML operations may fail for lack of writable UNDO space: Oracle will fail transactions if there is insufficient UNDO space to honor both the undo_retention and the current transactional load. Guaranteeing the retention causes Oracle to keep unexpired undo data for the full undo_retention period, whereas without the guarantee Oracle may overwrite unexpired undo data should the need arise. And even though Oracle can overwrite unexpired undo data, it will do its level best to maintain the configured undo_retention even without the retention guarantee. Testing this on a non-production database, with a comparable workload, would be a very good idea. Such actions can easily be undone:

SQL> alter tablespace undotbs1 retention noguarantee;

Tablespace altered.

SQL>

in the event a problem such as multiple DML operations failing rears its ugly head.
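To confirm which setting is currently in effect, a quick dictionary query will do:

```sql
-- RETENTION reports GUARANTEE or NOGUARANTEE for undo tablespaces
select tablespace_name, retention
  from dba_tablespaces
 where contents = 'UNDO';
```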

Flashback query can be a very useful tool for the DBA in such situations, and also for some reporting tasks, as data can be queried as of a given timestamp (again, as long as the undo_retention setting provides the necessary window). Let’s see how a salary report can be generated after salaries for some employees have been adjusted:

SQL> --
SQL> -- Get reference timestamp before changes
SQL> --
SQL> select to_timestamp(to_char(sysdate, 'DD-MON-YYYY HH:MI:SS AM')) curr_timestamp from dual;

CURR_TIMESTAMP
---------------------------------------------------------------------------
09-JAN-08 11.54.40.000000000 AM

SQL>
SQL> --
SQL> -- Change the data in EMP
SQL> --
SQL> update emp
  2  set sal = sal+120, comm = nvl(comm, 0) + 85
  3  where job = 'SALESMAN';

4 rows updated.

SQL>
SQL> --
SQL> -- Commit and make the changes "permanent"
SQL> --
SQL> commit;

Commit complete.

SQL>
SQL> --
SQL> -- Show the current state of the data
SQL> --
SQL> select *
  2  from emp;

     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ---------- 
      7369 SMITH      CLERK           7902 17-DEC-80        800                    20 
      7499 ALLEN      SALESMAN        7698 20-FEB-81       1720        385         30 
      7521 WARD       SALESMAN        7698 22-FEB-81       1370        585         30 
      7566 JONES      MANAGER         7839 02-APR-81       2975                    20 
      7654 MARTIN     SALESMAN        7698 28-SEP-81       1370       1485         30 
      7698 BLAKE      MANAGER         7839 01-MAY-81       2850                    30 
      7782 CLARK      MANAGER         7839 09-JUN-81       2450                    10 
      7788 SCOTT      ANALYST         7566 09-DEC-82       3000                    20 
      7839 KING       PRESIDENT            17-NOV-81       5000                    10 
      7844 TURNER     SALESMAN        7698 08-SEP-81       1620         85         30 
      7876 ADAMS      CLERK           7788 12-JAN-83       1100                    20
      7900 JAMES      CLERK           7698 03-DEC-81        950                    30 
      7902 FORD       ANALYST         7566 03-DEC-81       3000                    20 
      7934 MILLER     CLERK           7782 23-JAN-82       1300                    10

14 rows selected.

SQL>
SQL>
SQL> --
SQL> -- Generate a report of employee number,
SQL> -- employee name, job, salary, percent increase,
SQL> -- commission, increase
SQL> --
SQL> with get_old as(
  2       select empno, ename, job, sal, nvl(comm,0) comm
  3       from emp
  4       as of timestamp to_timestamp('&curr_ts')
  5  ),
  6  get_curr as (
  7       select empno, ename, job, sal, comm
  8       from emp
  9  )
 10  select c.empno, c.ename, c.job, c.sal,
 11  (c.sal - o.sal)/o.sal*100 sal_incr,
 12  c.comm,
 13  (c.comm - o.comm) comm_incr
 14  from get_old o, get_curr c
 15  where c.empno = o.empno;

     EMPNO ENAME      JOB              SAL   SAL_INCR       COMM  COMM_INCR
---------- ---------- --------- ---------- ---------- ---------- ---------- 
      7369 SMITH      CLERK            800                     0 
      7499 ALLEN      SALESMAN        1720        7.5        385         85 
      7521 WARD       SALESMAN        1370        9.6        585         85 
      7566 JONES      MANAGER         2975                     0 
      7654 MARTIN     SALESMAN        1370        9.6       1485         85 
      7698 BLAKE      MANAGER         2850                     0 
      7782 CLARK      MANAGER         2450                     0 
      7788 SCOTT      ANALYST         3000                     0 
      7839 KING       PRESIDENT       5000                     0 
      7844 TURNER     SALESMAN        1620          8         85         85 
      7876 ADAMS      CLERK           1100                     0
      7900 JAMES      CLERK            950                     0 
      7902 FORD       ANALYST         3000                     0 
      7934 MILLER     CLERK           1300                     0

14 rows selected.

SQL>

Flashback query, in 10g and later releases, is one of the nicer enhancements to the database. How did we ever live without it? Oh, I guess by having copies of tables lying around that no one keeps track of …
