Oracle Tips and Tricks — David Fitzjarrell

July 21, 2016

On An Average Day

Filed under: General — dfitzjarrell @ 09:18

"There are other advantages," continued the child. "For instance, if one rat were cornered by nine cats,
on the average, each cat would be ten percent rat and the rat would be ninety percent cat. If you happened
to be a rat, you can see how much nicer it would make things."
-- Norton Juster, The Phantom Tollbooth

Recently an interesting question appeared on “Ask Tom” regarding the ‘average’ between two dates. That’s not to say it’s the ONLY interesting question on “Ask Tom”, but the topic caught my attention. Why, on earth, would anyone want the ‘average’ of two dates? Digging a bit deeper it became obvious what the original poster wanted to accomplish. Let’s take that trip and see where it leads.

The average person doesn’t usually ask for the average between two dates; averages are computed on salaries, on distance, on temperatures, on volumes, on numbers, in general, not dates. If we look at what the ‘average’ would be between two dates, however, we find it’s the date smack-dab in the middle of the two dates supplied. Connor McDonald provided this solution to the question:


SQL> with t as
  2     ( select
  3         date '2015-09-01' d1,
  4         date '2015-12-07' d2
  5      from dual )
  6  select
  7    least(d1,d2) + abs(d1-d2)/2
  8  from t;

LEAST(D1,
---------
19-OCT-15

SQL>

which provides the correct answer given the date range specified. It’s not an ‘average’, really, it’s more like the interval middle value, but it can serve a purpose. Let’s say that Millicent Muggwumpp, super-star billing clerk, needs to determine the middle date for a billing cycle so invoices can be generated and sent to the various customers. The solution Connor provided (with some modification) can provide that information. Let’s ‘hack away’ at Connor’s code and see what we end up with:


SQL> with interval_middle as
  2     ( select
  3         to_date('&&1', 'RRRR-MM-DD') d1,
  4         to_date('&&2', 'RRRR-MM-DD') d2
  5      from dual )
  6  select
  7    least(d1,d2) + abs(d1-d2)/2
  8  from interval_middle;

LEAST(D1,
---------
15-JUN-16

SQL>

Looking at the changes it should be noted that replaceable parameters have been substituted for the hard-coded dates and a more robust explicit date conversion has been written, specifying the expected date-string format. Millicent can now use this script and pass it any valid date range to get back the middle date of that range. [SQL*Plus allows us to shut off the parameter substitution messages showing the original code and the current version with the parameters supplied, but we could easily allow them to be displayed:


SQL> with interval_middle as
  2     ( select
  3         to_date('&&1', 'RRRR-MM-DD') d1,
  4         to_date('&&2', 'RRRR-MM-DD') d2
  5      from dual )
  6  select
  7    least(d1,d2) + abs(d1-d2)/2
  8  from interval_middle;
old   3:        to_date('&&1', 'RRRR-MM-DD') d1,
new   3:        to_date('2016-06-01', 'RRRR-MM-DD') d1,
old   4:        to_date('&&2', 'RRRR-MM-DD') d2
new   4:        to_date('2016-06-29', 'RRRR-MM-DD') d2

LEAST(D1,
---------
15-JUN-16

SQL>

if confirmation of the passed-in date values is required.]
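
For reference, whether those old/new lines appear is controlled by the SQL*Plus VERIFY setting; a minimal sketch of the two settings:


set verify off   -- suppress the old/new substitution messages
set verify on    -- display the before/after text for each substituted parameter

The default is ON, so a script that should run quietly needs SET VERIFY OFF near its top.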

Of course Connor’s answer wasn’t the only one provided as Stew Ashton questioned the need for LEAST() and ABS() in the query:


No matter what date you start with, you either subtract a positive number from the later date or subtract a negative number from the earlier date.


alter session set nls_date_format='YYYY-MM-DD HH24:MI';

with t as (
  select date '2016-07-01' d1, date '2016-07-18' d2
  from dual
)
select d1 - (d1 - d2) / 2 result1,
       d2 - (d2 - d1) / 2 result2
from t;

RESULT1          RESULT2
---------------- ----------------
2016-07-09 12:00 2016-07-09 12:00

Modifying this code in the same manner as Connor’s produces:


SQL> with t as (
  2    select to_date('&&1', 'RRRR-MM-DD') d1, to_date('&&2', 'RRRR-MM-DD') d2
  3    from dual
  4  )
  5  select d1 - (d1 - d2) / 2 result1,
  6         d2 - (d2 - d1) / 2 result2
  7  from t;

RESULT1          RESULT2
---------------- ----------------
2016-06-15 00:00 2016-06-15 00:00

SQL>

Both queries produce the same result, so LEAST() and ABS() aren’t strictly necessary. Choose whichever approach you’re more comfortable with.
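
As a quick check (a sketch reusing Connor’s original hard-coded dates, deliberately reversed), both forms land on the same midpoint no matter which date comes ‘first’:


with t as
   ( select
       date '2015-12-07' d1,
       date '2015-09-01' d2
     from dual )
select
  least(d1,d2) + abs(d1-d2)/2 midpoint_1,
  d1 - (d1 - d2) / 2          midpoint_2
from t;

Both columns return 19-OCT-15 (at 12:00, hidden by the default date format), matching the result shown earlier.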

Averaging dates may not be an obvious operation but it can be useful, if in a limited way. We all look at dates in unusual ways on occasion so what may seem illogical to you, at least on the face of it, may be completely and utterly logical to someone else. Who knows, Aunt Edna may ask you to find the date between Uncle Milton’s and Aunt Haggatha’s birthdays because she wants to ship a plaster cat to Guatemala. Stranger things have happened.

And that’s about average.

July 14, 2016

Size Does Matter

Filed under: General — dfitzjarrell @ 11:07

"You see, to tall men I'm a midget, and to short men I'm a giant;
to the skinny ones I'm a fat man, and to the fat ones I'm a thin man."
-- Norton Juster, The Phantom Tollbooth

Someone in one of the Oracle forums asked the following question:


How to estimate datafile size of tablespace when it is almost full? Could someone please explain?

which is ambiguous to say the least, since no information was provided by the person posting as to system configuration, database size, transaction volume, etc. Several responses later the original poster replied:


My question is how much space to allocate to datafile when tablespace of it reaches almost max size?

which, on the face of it, seems to be a more ‘answerable’ question. Information is still missing, however, so it’s difficult to provide any real solution to the person who asked. Since Oracle doesn’t really monitor tablespace growth (it does monitor file size and space within the datafiles, but those are only part of the required information) it’s up to the DBA (which, presumably, is you) to monitor such growth. It isn’t difficult, but it does take some preparation before any results can be returned. Let’s look at one way tablespace monitoring can be achieved.

The following script sets up a table and a sequence that will be populated by another script running as a scheduled job or task on the database server:


create table tablespace_monitor(
	mon_run	number,
	run_dt	date,
	tablespace_name varchar2(35),
	available  number,
	used	number,
	free	number,
	pct_free number(9,2))
tablespace users;

create sequence tab_mon_seq
start with 1 increment by 1 nocycle nomaxvalue nocache;

Created while connected as SYS (as SYSDBA), this table will hold the data necessary to monitor tablespaces and their growth. It will be populated by the following script:


insert into tablespace_monitor (tablespace_name, available, used, free, pct_free)
(
select tname_a tablespace_name, 
       ttl_avail available, 
       (ttl_avail - ttl_free) used, 
       ttl_free free, 
       round((ttl_free/ttl_avail)*100,2) "PCT FREE"
from
 (select tablespace_name tname_a, sum(user_bytes) ttl_avail
 from dba_data_files
 group by tablespace_name) avail left join
 (select tablespace_name tname_f, sum(bytes) ttl_free
 from dba_free_space
 group by tablespace_name) free on tname_f = tname_a
union
select tablespace_name, 
       sum(bytes_used+bytes_free) ttl_avail,
       sum(bytes_used),
       sum(bytes_free),
       round((sum(bytes_free)/sum(bytes_used+bytes_free))*100,2) pct_free
from v$temp_space_header
group by tablespace_name);

commit;

column next_seq noprint
select tab_mon_seq.nextval next_seq from dual;

update tablespace_monitor
set mon_run = tab_mon_seq.currval,
    run_dt = sysdate
where mon_run is null;

commit;

The provided scripts have been tested on 12.1.0.2 and work as expected. The second script is scheduled to run once daily to provide daily tablespace size data, which will be used by the next script to generate a report:


set linesize 200 numwidth 13 pagesize 40
column "CONSUMED/FREED (+)" format a20
column "TABLESPACE NAME" format a15

select m1.tablespace_name "TABLESPACE NAME", m2.free "BYTES FREE", m2.pct_free "PCT FREE",
       case when trunc((m1.free - m2.free)/(1024*1024)) > 0 then lpad(to_char(round((m1.free - m2.free)/(1024*1024),2)),20)
            when trunc((m1.free - m2.free)/(1024)) between 1 and 999 then lpad(to_char(round((m1.free - m2.free)/(1024),2)),20)
            when trunc((m1.free - m2.free)/(1024*1024)) < 0 then lpad(to_char(round(abs(m1.free - m2.free)/(1024*1024),2))||' (+)',20)
            when trunc((m1.free - m2.free)/(1024)) < 0 then lpad(to_char(round(abs(m1.free - m2.free)/(1024),2))||' (+)',20)
            else lpad(to_char(m1.free - m2.free),20) end "CONSUMED/FREED (+)",
       case when abs(trunc((m1.free - m2.free)/(1024*1024))) > 0 then 'MB'
            when abs(trunc((m1.free - m2.free)/(1024))) between 1 and 999 then 'KB'
            else null end unit,
       case when trunc((((m1.free - m2.free)/(1024*1024))/((m2.run_dt - m1.run_dt)*1440))) > 0 then round((((m1.free - m2.free)/(1024*1024))/((m2.run_dt - m1.run_dt)*1440)),2)
            when trunc((((m1.free - m2.free)/(1024))/((m2.run_dt - m1.run_dt)*1440))) between 1 and 999 then round((((m1.free - m2.free)/(1024))/((m2.run_dt - m1.run_dt)*1440)),2)
            when trunc((((m1.free - m2.free)/(1024*1024))/((m2.run_dt - m1.run_dt)*1440))) < 0 then round(abs((((m1.free - m2.free)/(1024*1024))/((m2.run_dt - m1.run_dt)*1440))),2)
            when trunc((((m1.free - m2.free)/(1024))/((m2.run_dt - m1.run_dt)*1440))) < 0 then round(abs((((m1.free - m2.free)/(1024))/((m2.run_dt - m1.run_dt)*1440))),2)
            else 0 end "AVG PER MIN",
       case when abs(trunc((((m1.free - m2.free)/(1024*1024))/((m2.run_dt - m1.run_dt)*1440)))) > 0 then 'MB'
            when abs(trunc((((m1.free - m2.free)/(1024))/((m2.run_dt - m1.run_dt)*1440)))) between 1 and 999 then 'KB'
            else null end "UNIT PER MIN",
       round((m2.run_dt - m1.run_dt)*1440,2) "WINDOW IN MIN",
       to_char(m1.run_dt, 'RRRR-MM-DD HH24:MI:SS') "BEGIN DATE/TIME",
       to_char(m2.run_dt, 'RRRR-MM-DD HH24:MI:SS') "LAST RUN DATE/TIME"
from tablespace_monitor m1, tablespace_monitor m2
where m2.mon_run = (select max(mon_run) from tablespace_monitor)
and m1.mon_run = (select min(mon_run) from tablespace_monitor)
and m2.tablespace_name = m1.tablespace_name

spool &1
/
spool off

set linesize 80

The report compares the most recent snapshot with the earliest snapshot still in the table and generates the difference, showing the growth over the last 24 hours when the population script runs once per day. The population script can, of course, run more than once per day, but this report only compares those two snapshots; if the population script runs more frequently it’s recommended that the report also be run more frequently so that all of the values stored in the tablespace_monitor table are reported on.
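
How the population script gets scheduled is up to you; cron or an OS task calling SQL*Plus works, as does DBMS_SCHEDULER. A minimal sketch follows, assuming the insert/update logic above has been wrapped in a stored procedure named POP_TABLESPACE_MONITOR (a name invented here purely for illustration):


begin
   dbms_scheduler.create_job(
      job_name        => 'TS_MONITOR_JOB',
      job_type        => 'STORED_PROCEDURE',
      job_action      => 'POP_TABLESPACE_MONITOR',   -- hypothetical wrapper for the insert/update above
      start_date      => systimestamp,
      repeat_interval => 'FREQ=DAILY;BYHOUR=6',      -- once per day at 06:00
      enabled         => TRUE);
end;
/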

The report looks like this:


TABLESPACE NAME    BYTES FREE      PCT FREE CONSUMED/FREED (+)   UN   AVG PER MIN UN WINDOW IN MIN BEGIN DATE/TIME     LAST RUN DATE/TIME
--------------- ------------- ------------- -------------------- -- ------------- -- ------------- ------------------- -------------------
INDX                103809024           100                    0                0             9.87 2016-07-14 10:00:20 2016-07-14 10:10:12
SYSAUX               62193664          7.61                 2.19 MB        227.03 KB          9.87 2016-07-14 10:00:20 2016-07-14 10:10:12
SYSTEM               62849024          7.06                    0                0             9.87 2016-07-14 10:00:20 2016-07-14 10:10:12
TEMP                 42991616         68.33                    0                0             9.87 2016-07-14 10:00:20 2016-07-14 10:10:12
UNDOTBS1           1986789376         97.47                   23 MB          2.33 MB          9.87 2016-07-14 10:00:20 2016-07-14 10:10:12
USERS             33880604672         98.69               88 (+) MB          8.92 MB          9.87 2016-07-14 10:00:20 2016-07-14 10:10:12

Over a period of time growth trends can be determined for each tablespace, and space additions can be planned accordingly. Since it’s not likely that the growth patterns will be linear it’s a good idea to make note of reports showing exceptionally large space usage and the period of time in which they occur. This should provide a window to plan storage needs and give the System Admins and Storage Admins time to prepare additional storage for use.
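
Because every snapshot remains in TABLESPACE_MONITOR, a simple analytic query (a sketch, not one of the original scripts) can also show the change in free space from one run to the next rather than just the first-to-last difference:


select tablespace_name,
       to_char(run_dt, 'RRRR-MM-DD HH24:MI') snapshot_time,
       free,
       free - lag(free) over (partition by tablespace_name order by mon_run) change_in_free
from tablespace_monitor
order by tablespace_name, mon_run;

A negative CHANGE_IN_FREE means space was consumed since the prior snapshot; a positive value means space was freed.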

Storage cannot be managed without usage and growth data, yet it seems that some DBAs try to do just that; a S.W.A.G. CAN get you there on occasion, but more often than not it’s simply glorified guesswork without a crystal ball or Ouija board handy. Regardless of how the end-users see them, databases are part of a complex managed data entry and retrieval system that should not be left to chance or ‘sophisticated’ guesswork. Planning and monitoring are necessities in the IT environment; the database should not be left ‘out in the cold’ to suffer on its own.

Whether you use the provided scripts to monitor tablespace growth or decide to write your own, implementing tablespace monitoring should be at or near the top of every DBA’s ‘to-do’ list. Keeping ‘tabs’ on how much space is being used in which tablespace can only help the DBA by ensuring that the users don’t run out of space because usage wasn’t tracked. Emergency space additions take time and interrupt the business flow; scheduling such additions can prevent the end-users from complaining and will keep management happy because lack of space didn’t stop the flow of transactions. Which means everybody wins.

It does seem that size is important.

July 6, 2016

He’s Making A List

Filed under: General — dfitzjarrell @ 10:16

"But I suppose there's a lot to see everywhere, if only you keep your eyes open."
-- Norton Juster, The Phantom Tollbooth

Recently a tweet by Chris Saxon (@chrisrsaxon) with respect to SQL, the language and its implementation, started a list of contributions which should cause people to think a bit differently about how the language works and how queries should be structured. The list, found here, is by no stretch of the imagination complete but it is a very good place to begin re-thinking how to write SQL. Let’s look at a few of the listed items a bit further (since the list is likely to grow, not all of the list members can be covered).

Franck Pachot started the list with the following items:


				1. SQL is a declarative language
				2. It is compiled to procedural operations
				3. It operates on row sets

So what, exactly, is a “declarative language”? According to the accepted definition it is:


	... non-procedural and very high-level (4th generation). This means the programmer specifies what
        needs to be done rather than how to do it.

Unlike languages such as C/C++, Pascal, FORTRAN, BASIC, Java, Perl and Python, the SQL language simply requires that you decide what you want done and, in the case of Oracle, the optimizer decides HOW to get it done. That doesn’t mean that you can’t have some input to influence the ‘how’; it simply means you don’t need to think about it. Certainly hints are available to ‘nudge’ the optimizer in the direction more experienced coders may think is ‘best’, but there is no requirement to use them.
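
As a small, hedged illustration (the EMP table and an index named EMP_PK are simply assumed here), the query states WHAT is wanted and the hint merely suggests HOW the optimizer might get it:


select /*+ index(e emp_pk) */
       empno, sal
from   emp e
where  empno = 7369;

Remove the hint and the query still returns the same rows; only the access path is open to change.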

His second item is a reference to PL/SQL, Oracle’s Ada-based procedural extension to SQL, in which procedures, functions, triggers and anonymous blocks are written. Yes, you can go happily along with many tasks and never need PL/SQL, but it does come in handy when repeated tasks are necessary or when intermediate feedback (well, sort of) is desired. [Pipelined functions do provide immediate feedback during execution, but they are a special case.] Many PL/SQL examples have been posted here, so look around and they can be found.
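
As a brief sketch of that special case (the type and function names are invented for illustration), a pipelined function hands each row back to the caller as it is produced instead of waiting for the whole collection to be built:


create type num_tab is table of number;
/

create or replace function gen_numbers(p_limit in number)
return num_tab pipelined
is
begin
   for i in 1..p_limit loop
      pipe row(i);   -- each row is returned to the caller as soon as it is piped
   end loop;
   return;
end;
/

select column_value from table(gen_numbers(5));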

His third item was addressed by Jonathan Lewis in an Oracle Forums response; let’s look at the problem that initiated the set-based response and Jonathan Lewis’ offering.

A query and part of the execution plan were posted with the poster asking how to get rid of the table scan. The table has a primary key index so, on the face of it, it is interesting that the primary key wasn’t used.

Looking at the definition for the ‘problem’ table (LF_HOTEL_TEMP) we find only two columns, both of which are in the primary key definition. The tables were created:


SQL> create table LF_HOTEL_TEMP
  2  (
  3    HOTEL_CODE VARCHAR2(4000),
  4    SERVICE_ID NUMBER(3)
  5  )
  6  /

Table created.

SQL>
SQL> create table LF_TS_ROOMTYPE_PROPERTIES
  2  (
  3    HOTEL_CODE    VARCHAR2(20),
  4    ROOM_TYPE     VARCHAR2(500),
  5    BOARD_TYPE    VARCHAR2(500),
  6    ROOM_AMT      FLOAT,
  7    SERVICE_ID    NUMBER,
  8    CURRENCY_CODE VARCHAR2(10)
  9  )
 10  /

Table created.

SQL>
SQL> create table LF_HB_ROOMTYPE_PROPERTIES
  2  (
  3    HOTEL_CODE    VARCHAR2(20),
  4    ROOM_TYPE     VARCHAR2(500),
  5    BOARD_TYPE    VARCHAR2(500),
  6    ROOM_AMT      FLOAT,
  7    SERVICE_ID    NUMBER,
  8    CURRENCY_CODE VARCHAR2(10)
  9  )
 10  /

Table created.

SQL>
SQL> create table LF_GTA_ROOMTYPE_PROPERTIES
  2  (
  3    HOTEL_CODE    VARCHAR2(20),
  4    ROOM_TYPE     VARCHAR2(500),
  5    BOARD_TYPE    VARCHAR2(500),
  6    ROOM_AMT      FLOAT,
  7    SERVICE_ID    NUMBER,
  8    CURRENCY_CODE VARCHAR2(10)
  9  )
 10  /

Table created.

SQL>

and then the primary key and additional indexes were created:


SQL> alter table lf_hotel_temp
  2    add constraint lf_hotel_temp_PK primary key (HOTEL_CODE,service_id)
  3  /

Table altered.

SQL>
SQL> create  index LF_hb_roomtype_prop_IDX on lf_hb_roomtype_properties (HOTEL_CODE)
  2  /

Index created.

SQL>
SQL> create  index LF_ts_roomtype_prop_IDX on lf_ts_roomtype_properties (HOTEL_CODE)
  2  /

Index created.

SQL>
SQL> create index LF_gta_roomtype_prop_IDX on lf_gta_roomtype_properties (HOTEL_CODE)
  2  /

Index created.

SQL>

The original poster reported 278,000 rows in the LF_HOTEL_TEMP table so data was generated to reproduce that number of rows. Next the remaining tables were populated so that none of the three remaining tables shared any data:


SQL> begin
  2  	     for i in 1..278000 loop
  3  		     insert into lf_hotel_temp
  4  		     values(i, mod(i,999)+1);
  5  	     end loop;
  6
  7  	     commit;
  8  end;
  9  /

PL/SQL procedure successfully completed.

SQL>
SQL> insert into lf_ts_roomtype_properties
  2    (hotel_code)
  3  select hotel_code from lf_hotel_temp
  4  where service_id < 511;

142058 rows created.

SQL> insert into lf_ts_roomtype_properties
  2    (hotel_code)
  3  select hotel_code from lf_hotel_temp
  4  where service_id between 313 and 642;

91740 rows created.

SQL> insert into lf_gta_roomtype_properties
  2    (hotel_code)
  3  select hotel_code from lf_hotel_temp
  4  where service_id between 271 and 823;

153743 rows created.

SQL> insert into lf_hb_roomtype_properties
  2    (hotel_code)
  3  select hotel_code from lf_hotel_temp
  4  where service_id between 571 and 999;

119262 rows created.

SQL> COMMIT;

Commit complete.

SQL>

Let’s look at the original query and its execution plan:


SQL> set autotrace on linesize 140
SQL>
SQL> -- original query
SQL> SELECT a.hotel_code
  2    FROM lf_hotel_temp a
  3  WHERE a.service_id = :p_service_id
  4  	    AND (NOT EXISTS (SELECT *
  5  	       FROM lf_ts_roomtype_properties b
  6  	      WHERE a.hotel_code = b.hotel_code)
  7  	     or NOT EXISTS (SELECT *
  8  	       FROM lf_gta_roomtype_properties b
  9  	      WHERE a.hotel_code = b.hotel_code)
 10  	    or	NOT EXISTS (SELECT *
 11  	       FROM lf_hb_roomtype_properties b
 12  	      WHERE a.hotel_code = b.hotel_code));

HOTEL_CODE
-------------------------------------------------------------------
1998
999
5994
...
243756
235764
238761

278 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 4111332730

-----------------------------------------------------------------------------------------------
| Id  | Operation          | Name                     | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                          |     3 |    33 |   512   (4)| 00:00:01 |
|*  1 |  FILTER            |                          |       |       |            |          |
|*  2 |   TABLE ACCESS FULL| LF_HOTEL_TEMP            |   278 |  3058 |    94  (16)| 00:00:01 |
|*  3 |   INDEX RANGE SCAN | LF_TS_ROOMTYPE_PROP_IDX  |     1 |     7 |     3   (0)| 00:00:01 |
|*  4 |   INDEX RANGE SCAN | LF_GTA_ROOMTYPE_PROP_IDX |     1 |     7 |     1   (0)| 00:00:01 |
|*  5 |   INDEX RANGE SCAN | LF_HB_ROOMTYPE_PROP_IDX  |     1 |     7 |     1   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter( NOT EXISTS (SELECT 0 FROM "LF_TS_ROOMTYPE_PROPERTIES" "B" WHERE
              "B"."HOTEL_CODE"=:B1) OR  NOT EXISTS (SELECT 0 FROM "LF_GTA_ROOMTYPE_PROPERTIES" "B"
              WHERE "B"."HOTEL_CODE"=:B2) OR  NOT EXISTS (SELECT 0 FROM "LF_HB_ROOMTYPE_PROPERTIES"
              "B" WHERE "B"."HOTEL_CODE"=:B3))
   2 - filter("A"."SERVICE_ID"=TO_NUMBER(:P_SERVICE_ID))
   3 - access("B"."HOTEL_CODE"=:B1)
   4 - access("B"."HOTEL_CODE"=:B1)
   5 - access("B"."HOTEL_CODE"=:B1)



Statistics
----------------------------------------------------------
         11  recursive calls
          0  db block gets
       2077  consistent gets
        872  physical reads
          0  redo size
       4119  bytes sent via SQL*Net to client
        570  bytes received via SQL*Net from client
         20  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
        278  rows processed


SQL>

All but LF_HOTEL_TEMP use their indexes to speed access to the data; this may be because LF_HOTEL_TEMP is a two-column table and the primary key index will be larger than the table, since it contains the table data plus a rowid for each row. The main idea illustrated by this problem involves being able to find a logically equivalent way of writing the SQL. Jonathan Lewis decided that set operations might produce a ‘better’ plan and re-wrote the query as shown below, producing a plan accessing LF_HOTEL_TEMP by the primary key index:


SQL> -- JL solution
SQL> var p_service_id number
SQL> exec :p_service_id := 1

PL/SQL procedure successfully completed.

SQL>
SQL> select  /*+ dynamic_sampling(0) */
  2  	     hotel_code
  3  from    lf_hotel_temp
  4  where   service_id = :p_service_id
  5  minus   (
  6  	     select  hotel_code
  7  	     from    lf_ts_roomtype_properties
  8  	     where   hotel_code is not null
  9  	     intersect
 10  	     select  hotel_code
 11  	     from    lf_gta_roomtype_properties
 12  	     where   hotel_code is not null
 13  	     intersect
 14  	     select  hotel_code
 15  	     from    lf_hb_roomtype_properties
 16  	     where   hotel_code is not null
 17  	     )
 18  ;

HOTEL_CODE
---------------------------------------------------------------
100899
101898
102897
...
999
9990
99900


278 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 775735246

--------------------------------------------------------------------------------------------------
| Id  | Operation             | Name                     | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |                          |     1 |  2159 |     4 (100)| 00:00:01 |
|   1 |  MINUS                |                          |       |       |            |          |
|   2 |   SORT UNIQUE NOSORT  |                          |     1 |  2015 |     1 (100)| 00:00:01 |
|*  3 |    INDEX FULL SCAN    | LF_HOTEL_TEMP_PK         |     1 |  2015 |     0   (0)| 00:00:01 |
|   4 |   INTERSECTION        |                          |       |       |            |          |
|   5 |    INTERSECTION       |                          |       |       |            |          |
|   6 |     SORT UNIQUE NOSORT|                          |     4 |    48 |     1 (100)| 00:00:01 |
|*  7 |      INDEX FULL SCAN  | LF_TS_ROOMTYPE_PROP_IDX  |     4 |    48 |     0   (0)| 00:00:01 |
|   8 |     SORT UNIQUE NOSORT|                          |     4 |    48 |     1 (100)| 00:00:01 |
|*  9 |      INDEX FULL SCAN  | LF_GTA_ROOMTYPE_PROP_IDX |     4 |    48 |     0   (0)| 00:00:01 |
|  10 |    SORT UNIQUE NOSORT |                          |     4 |    48 |     1 (100)| 00:00:01 |
|* 11 |     INDEX FULL SCAN   | LF_HB_ROOMTYPE_PROP_IDX  |     4 |    48 |     0   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("SERVICE_ID"=TO_NUMBER(:P_SERVICE_ID))
       filter("SERVICE_ID"=TO_NUMBER(:P_SERVICE_ID))
   7 - filter("HOTEL_CODE" IS NOT NULL)
   9 - filter("HOTEL_CODE" IS NOT NULL)
  11 - filter("HOTEL_CODE" IS NOT NULL)



Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
       5996  consistent gets
       2416  physical reads
     234680  redo size
       4119  bytes sent via SQL*Net to client
        570  bytes received via SQL*Net from client
         20  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
        278  rows processed

SQL>

Notice that using set-based processing improved the execution plan. Because of the first item on this list it’s easy to forget that SQL is a set-based implementation; sometimes it takes ‘interesting’ problems to lead us down the set-based pathway.

As the tweet thread progressed Jeff Smith (@thatjeffsmith) offered additional items that, while not specific to SQL, are good to remember nonetheless:


				4. getting results back doesn't mean your code is right.
				5. learning never stops

Number 4 is an important lesson to remember; just because you get data doesn’t necessarily mean that data is right. Many times queries are written and run only to discover that the join conditions aren’t correct or are missing entirely, making the results useless. Always think through the process, and, if possible, manually generate a few rows to know what SHOULD be returned. Comparing those to the query results should show if the logic in the query is correct.
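
A tiny, self-contained illustration (the inline views are invented for this example) shows how a missing join condition produces plausible-looking nonsense:


with orders as (
        select 1 order_id, 100 cust_id from dual union all
        select 2 order_id, 200 cust_id from dual ),
     custs as (
        select 100 cust_id, 'Smith' cust_name from dual union all
        select 200 cust_id, 'Jones' cust_name from dual )
select o.order_id, c.cust_name
from   orders o, custs c;

Four rows come back and every one of them ‘looks’ reasonable, yet only two are correct; adding the obvious predicate (where o.cust_id = c.cust_id) returns the two rows a hand-worked check says should be there.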

That learning never stops is one reason blogs are written and read; there’s always something new, different, or changed that makes ‘resting on one’s laurels’ nearly impossible, especially in IT. A day doesn’t go by that I don’t learn something, and, to be honest, if that day ever comes I’ll consider it wasted.

There are more items in that list that I won’t mention here but just because they aren’t commented on doesn’t make them any less important. Read through the list (the link provided should bring up the latest chain of tweets) and ponder those responses. It’s time to think about SQL anew, especially if some of these comments weren’t in your idea of what SQL is and what it does. Perspectives change, and ideas along with them, so never stop learning. The more you know, the more you can grow.

Santa isn’t the only one who makes lists, you know.

June 27, 2016

Examining The Remains

Filed under: General — dfitzjarrell @ 08:14

"Don't you know anything at all about numbers?"
"Well, I don't think they're very important," snapped Milo, too embarrassed to admit the truth.
"NOT IMPORTANT!" roared the Dodecahedron, turning red with fury.  "Could you have tea for two 
without the two, or three blind mice without the three? Would there be four corners of the earth 
if there weren't a four? And how would you sail the seven seas without a seven?"
"All I meant was..." began Milo, but the Dodecahedron, overcome with emotion and shouting furiously,
carried right on.
"If you had high hopes, how would you know how high they were? And did you know that narrow escapes 
come in all different widths? Would you travel the whole wide world without ever knowing how wide it
was? And how could you do anything at long last," he concluded, waving his arms over his head, "without
knowing how long the last was? Why, numbers are the most beautiful and valuable things in the world.
Just follow me and I'll show you." He turned on his heel and stalked off into the cave.
-- Norton Juster, The Phantom Tollbooth

It used to be that a database was, well, a database: it didn’t contain lots of interesting mathematical and analytic functions, just tables, data and basic string and number functions. That’s changed, mainly because of the work we now need to do and the results we need to report; enterprise-level databases are both more complicated, in terms of extended functionality, and easier to use, since a single supplied function can now return values that once required complicated calculations outside of the database. It’s a testament to Oracle that, for the most part, these functions provide correct results. Unfortunately it appears that Oracle has ‘dropped the ball’ with what may be one of the simpler supplied functions, REMAINDER().

To start this odyssey, a PL/SQL example was posted to illustrate the difference between MOD() and REMAINDER() and, unfortunately, the results from REMAINDER() looked rather odd. I’ve provided the example from that post, modified to provide some context in the output. The ‘problem’ is that some of the remainders are negative, which flies in the face of the mathematical definition of a remainder:


In mathematics, the remainder is the amount "left over" after performing some computation.  In arithmetic, the remainder
is the integer "left over" after dividing one integer by another to produce an integer quotient (integer division).

The thought in my mind and that of every mathematician on this planet is that a remainder will never be negative. It’s what is ‘left over’ and you can’t ‘owe someone’ a remainder; besides being impossible according to the definition it’s also just not polite. For example, 15 divided by 2 has a remainder of 1, not -1. Let’s run the modified example and see what it returns:


SQL> 
SQL> --
SQL> -- REMAINDER doesn't return the correct remainder for
SQL> -- divisors of odd numbers that are powers of 2
SQL> --
SQL> -- The following example returns correct remainders
SQL> -- for all listed divisors of 15 except the
SQL> -- powers of 2, where the formula generates 16 as the
SQL> -- (n*X) value, thus displaying a negative remainder:
SQL> --
SQL> BEGIN
  2  	DBMS_OUTPUT.put_line ('MOD(15,2):'||MOD (15, 2));
  3  	DBMS_OUTPUT.put_line ('REMAINDER(15,2):'||REMAINDER (15, 2));
  4  	DBMS_OUTPUT.put_line ('MOD(15,3):'||MOD (15, 3));
  5  	DBMS_OUTPUT.put_line ('REMAINDER(15,3):'||REMAINDER (15, 3));
  6  	DBMS_OUTPUT.put_line ('MOD(15,4):'||MOD (15, 4));
  7  	DBMS_OUTPUT.put_line ('REMAINDER(15,4):'||REMAINDER (15, 4));
  8  	DBMS_OUTPUT.put_line ('MOD(15,5):'||MOD (15, 5));
  9  	DBMS_OUTPUT.put_line ('REMAINDER(15,5):'||REMAINDER (15, 5));
 10  	DBMS_OUTPUT.put_line ('MOD(15,6):'||MOD (15, 6));
 11  	DBMS_OUTPUT.put_line ('REMAINDER(15,6):'||REMAINDER (15, 6));
 12  	DBMS_OUTPUT.put_line ('MOD(15,7):'||MOD (15, 7));
 13  	DBMS_OUTPUT.put_line ('REMAINDER(15,7):'||REMAINDER (15, 7));
 14  	DBMS_OUTPUT.put_line ('MOD(15,8):'||MOD (15, 8));
 15  	DBMS_OUTPUT.put_line ('REMAINDER(15,8):'||REMAINDER (15, 8));
 16  END;
 17  /
MOD(15,2):1                                                                     
REMAINDER(15,2):-1                                                              
MOD(15,3):0                                                                     
REMAINDER(15,3):0                                                               
MOD(15,4):3                                                                     
REMAINDER(15,4):-1                                                              
MOD(15,5):0                                                                     
REMAINDER(15,5):0                                                               
MOD(15,6):3                                                                     
REMAINDER(15,6):3                                                               
MOD(15,7):1                                                                     
REMAINDER(15,7):1                                                               
MOD(15,8):7                                                                     
REMAINDER(15,8):-1                                                              

PL/SQL procedure successfully completed.

Why, oh why, does this happen? Oracle takes a ‘shortcut’ to get there, using a formula that, as far as I can tell, isn’t checking all that it needs to check before ‘spitting out’ the answer. That formula is shown below:


					R = m - (n*X)

where R is the remainder, m is the dividend, n is the divisor and X is an integer where n*X should be <= m

Notice what check is missing? For some odd reason Oracle never checks to see if the product (n*X) is less than or equal to m. Since Oracle calculates X by performing ROUND(m/n,0), fractional parts of the quotient that are .5 or greater round up to the next highest integer. For 15 divided by the powers of two shown above (2, 4 and 8) the calculated value of X results in (n*X) being equal to 16. Once that happens the remainders all end up as -1 which, by definition, is impossible.
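
The formula can be verified directly against the earlier output; a quick check for 15 and 2 (REMAINDER(15,2) returned -1 above):


select remainder(15,2)        oracle_remainder,
       15 - (2 * round(15/2)) rounded_formula,   -- round(7.5) = 8, so 15 - 16 = -1
       15 - (2 * floor(15/2)) floored_formula    -- floor(7.5) = 7, so 15 - 14 =  1
from dual;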

Fiddle-dee-dee, what are we to do? One option is to use MOD() instead, since the results for MOD() don’t exhibit the same problem. Another option, if production code can’t be changed, is to create another remainder function, possibly in the application schema, with a private synonym calling it ‘REMAINDER’. One example of such a function is shown below:


SQL> 
SQL> --
SQL> -- Create a function to return the correct remainder - does not
SQL> -- generate values greater than the supplied target so the remainders
SQL> -- are not negative
SQL> --
SQL> create or replace function remainder_func(p_num1 in number, p_num2 in number)
  2  return number is
  3  	     v_rmdr  number;
  4  	     v_x     number:=1;
  5  begin
  6  	     v_x := trunc(p_num1/p_num2, 0);
  7  	     v_rmdr := p_num1 - (v_x * p_num2);
  8  	     return(v_rmdr);
  9  end;
 10  /

Function created.

SQL> 

So, let’s see what results are returned, since this function uses TRUNC() instead of ROUND():


SQL> BEGIN
  2  	DBMS_OUTPUT.put_line ('MOD(15,2):'||MOD (15, 2));
  3  	DBMS_OUTPUT.put_line ('REMAINDER_func(15,2):'||REMAINDER_func (15, 2));
  4  	DBMS_OUTPUT.put_line ('MOD(15,3):'||MOD (15, 3));
  5  	DBMS_OUTPUT.put_line ('REMAINDER_func(15,3):'||REMAINDER_func (15, 3));
  6  	DBMS_OUTPUT.put_line ('MOD(15,4):'||MOD (15, 4));
  7  	DBMS_OUTPUT.put_line ('REMAINDER_func(15,4):'||REMAINDER_func (15, 4));
  8  	DBMS_OUTPUT.put_line ('MOD(15,5):'||MOD (15, 5));
  9  	DBMS_OUTPUT.put_line ('REMAINDER_func(15,5):'||REMAINDER_func (15, 5));
 10  	DBMS_OUTPUT.put_line ('MOD(15,6):'||MOD (15, 6));
 11  	DBMS_OUTPUT.put_line ('REMAINDER_func(15,6):'||REMAINDER_func (15, 6));
 12  	DBMS_OUTPUT.put_line ('MOD(15,7):'||MOD (15, 7));
 13  	DBMS_OUTPUT.put_line ('REMAINDER_func(15,7):'||REMAINDER_func (15, 7));
 14  	DBMS_OUTPUT.put_line ('MOD(15,8):'||MOD (15, 8));
 15  	DBMS_OUTPUT.put_line ('REMAINDER_func(15,8):'||REMAINDER_func (15, 8));
 16  END;
 17  /
MOD(15,2):1                                                                     
REMAINDER_func(15,2):1                                                          
MOD(15,3):0                                                                     
REMAINDER_func(15,3):0                                                          
MOD(15,4):3                                                                     
REMAINDER_func(15,4):3                                                          
MOD(15,5):0                                                                     
REMAINDER_func(15,5):0                                                          
MOD(15,6):3                                                                     
REMAINDER_func(15,6):3                                                          
MOD(15,7):1                                                                     
REMAINDER_func(15,7):1                                                          
MOD(15,8):7                                                                     
REMAINDER_func(15,8):7                                                          

PL/SQL procedure successfully completed.

Wow, golly gee whilikers, the results are correct! Of course that should be expected since there is no possibility that (n*X) will be greater than m. Let’s take a side trip and see how ROUND() and TRUNC() provide different results for odd numbers divided by even numbers, using 15 as a ‘test subject’:


SQL> 
SQL> --
SQL> -- ROUND() doesn't work for this example as it
SQL> -- generates values that are powers of 2
SQL> --
SQL> 
SQL> create or replace procedure remainder_test(p_num1 in number, p_num2 in number)
  2  is
  3  	     v_t_rmdr	     number;
  4  	     v_r_rmdr	     number;
  5  	     v_tx    number:=1;
  6  	     v_rx    number:=1;
  7  begin
  8  	     dbms_output.put_line('---------------------------------------------------------------------');
  9  	     v_tx := trunc(p_num1/p_num2, 0);
 10  	     v_rx := round(p_num1/p_num2, 0);
 11  	     v_t_rmdr := p_num1 - (v_tx * p_num2);
 12  	     v_r_rmdr := p_num1 - (v_rx * p_num2);
 13  	     dbms_output.put_line('Rounded:   '||v_rx||' (n*X): '||v_rx*p_num2||' Remainder: '||v_r_rmdr);
 14  	     dbms_output.put_line('Truncated: '||v_tx||' (n*X): '||v_tx*p_num2||' Remainder: '||v_t_rmdr);
 15  	     dbms_output.put_line('---------------------------------------------------------------------');
 16  end;
 17  /

Procedure created.

SQL> 
SQL> BEGIN
  2  	REMAINDER_test (15, 2);
  3  	REMAINDER_test (15, 3);
  4  	REMAINDER_test (15, 4);
  5  	REMAINDER_test (15, 5);
  6  	REMAINDER_test (15, 6);
  7  	REMAINDER_test (15, 7);
  8  	REMAINDER_test (15, 8);
  9  END;
 10  /
---------------------------------------------------------------------           
Rounded:   8 (n*X): 16 Remainder: -1                                            
Truncated: 7 (n*X): 14 Remainder: 1                                             
---------------------------------------------------------------------           
---------------------------------------------------------------------           
Rounded:   5 (n*X): 15 Remainder: 0                                             
Truncated: 5 (n*X): 15 Remainder: 0                                             
---------------------------------------------------------------------           
---------------------------------------------------------------------           
Rounded:   4 (n*X): 16 Remainder: -1                                            
Truncated: 3 (n*X): 12 Remainder: 3                                             
---------------------------------------------------------------------           
---------------------------------------------------------------------           
Rounded:   3 (n*X): 15 Remainder: 0                                             
Truncated: 3 (n*X): 15 Remainder: 0                                             
---------------------------------------------------------------------           
---------------------------------------------------------------------           
Rounded:   3 (n*X): 18 Remainder: -3                                            
Truncated: 2 (n*X): 12 Remainder: 3                                             
---------------------------------------------------------------------           
---------------------------------------------------------------------           
Rounded:   2 (n*X): 14 Remainder: 1                                             
Truncated: 2 (n*X): 14 Remainder: 1                                             
---------------------------------------------------------------------           
---------------------------------------------------------------------           
Rounded:   2 (n*X): 16 Remainder: -1                                            
Truncated: 1 (n*X): 8 Remainder: 7                                              
---------------------------------------------------------------------           

PL/SQL procedure successfully completed.

SQL> 

There’s quite a difference between ROUND() and TRUNC(); to be fair MOD() uses FLOOR() in the calculations to generate MOD values and, as mentioned before, REMAINDER() uses ROUND(), probably in an effort to make the two functions internally different. Using a bit more code the ROUND() option could also work:


SQL> --
SQL> -- Create a function to return the correct remainder - does not
SQL> -- generate values greater than the supplied target so the remainders
SQL> -- are not negative
SQL> --
SQL> -- Uses ROUND() to compute values
SQL> --
SQL> create or replace function remainder_func(p_num1 in number, p_num2 in number)
  2  return number is
  3  	     v_rmdr  number;
  4  	     v_x     number:=1;
  5  begin
  6  	     v_x := round(p_num1/p_num2, 0);
  7  	     if v_x*p_num2 > p_num1 then
  8  		     v_x := v_x -1;
  9  	     end if;
 10  	     v_rmdr := p_num1 - (v_x * p_num2);
 11  	     return(v_rmdr);
 12  end;
 13  /

Function created.

SQL> 
SQL> BEGIN
  2  	DBMS_OUTPUT.put_line ('MOD(15,2):'||MOD (15, 2));
  3  	DBMS_OUTPUT.put_line ('REMAINDER_func(15,2):'||REMAINDER_func (15, 2));
  4  	DBMS_OUTPUT.put_line ('MOD(15,3):'||MOD (15, 3));
  5  	DBMS_OUTPUT.put_line ('REMAINDER_func(15,3):'||REMAINDER_func (15, 3));
  6  	DBMS_OUTPUT.put_line ('MOD(15,4):'||MOD (15, 4));
  7  	DBMS_OUTPUT.put_line ('REMAINDER_func(15,4):'||REMAINDER_func (15, 4));
  8  	DBMS_OUTPUT.put_line ('MOD(15,5):'||MOD (15, 5));
  9  	DBMS_OUTPUT.put_line ('REMAINDER_func(15,5):'||REMAINDER_func (15, 5));
 10  	DBMS_OUTPUT.put_line ('MOD(15,6):'||MOD (15, 6));
 11  	DBMS_OUTPUT.put_line ('REMAINDER_func(15,6):'||REMAINDER_func (15, 6));
 12  	DBMS_OUTPUT.put_line ('MOD(15,7):'||MOD (15, 7));
 13  	DBMS_OUTPUT.put_line ('REMAINDER_func(15,7):'||REMAINDER_func (15, 7));
 14  	DBMS_OUTPUT.put_line ('MOD(15,8):'||MOD (15, 8));
 15  	DBMS_OUTPUT.put_line ('REMAINDER_func(15,8):'||REMAINDER_func (15, 8));
 16  END;
 17  /
MOD(15,2):1                                                                     
REMAINDER_func(15,2):1                                                          
MOD(15,3):0                                                                     
REMAINDER_func(15,3):0                                                          
MOD(15,4):3                                                                     
REMAINDER_func(15,4):3                                                          
MOD(15,5):0                                                                     
REMAINDER_func(15,5):0                                                          
MOD(15,6):3                                                                     
REMAINDER_func(15,6):3                                                          
MOD(15,7):1                                                                     
REMAINDER_func(15,7):1                                                          
MOD(15,8):7                                                                     
REMAINDER_func(15,8):7                                                          

PL/SQL procedure successfully completed.

SQL> 

but I think the first function is slightly more efficient. Additionally the second function essentially ‘re-invents’ FLOOR(), so why not simply use FLOOR() to get the job done:


SQL>
SQL> --
SQL> -- Create a function to return the correct remainder - does not
SQL> -- generate values greater than the supplied target so the remainders
SQL> -- are not negative
SQL> --
SQL> -- Uses FLOOR() to compute values
SQL> --
SQL> create or replace function remainder_func(p_num1 in number, p_num2 in number)
  2  return number is
  3          v_rmdr  number;
  4          v_x     number:=1;
  5  begin
  6          v_x := floor(p_num1/p_num2);
  7          v_rmdr := p_num1 - (v_x * p_num2);
  8          return(v_rmdr);
  9  end;
 10  /

Function created.

SQL>
SQL> BEGIN
  2     DBMS_OUTPUT.put_line ('MOD(15,2):'||MOD (15, 2));
  3     DBMS_OUTPUT.put_line ('REMAINDER_func(15,2):'||REMAINDER_func (15, 2));
  4     DBMS_OUTPUT.put_line ('MOD(15,3):'||MOD (15, 3));
  5     DBMS_OUTPUT.put_line ('REMAINDER_func(15,3):'||REMAINDER_func (15, 3));
  6     DBMS_OUTPUT.put_line ('MOD(15,4):'||MOD (15, 4));
  7     DBMS_OUTPUT.put_line ('REMAINDER_func(15,4):'||REMAINDER_func (15, 4));
  8     DBMS_OUTPUT.put_line ('MOD(15,5):'||MOD (15, 5));
  9     DBMS_OUTPUT.put_line ('REMAINDER_func(15,5):'||REMAINDER_func (15, 5));
 10     DBMS_OUTPUT.put_line ('MOD(15,6):'||MOD (15, 6));
 11     DBMS_OUTPUT.put_line ('REMAINDER_func(15,6):'||REMAINDER_func (15, 6));
 12     DBMS_OUTPUT.put_line ('MOD(15,7):'||MOD (15, 7));
 13     DBMS_OUTPUT.put_line ('REMAINDER_func(15,7):'||REMAINDER_func (15, 7));
 14     DBMS_OUTPUT.put_line ('MOD(15,8):'||MOD (15, 8));
 15     DBMS_OUTPUT.put_line ('REMAINDER_func(15,8):'||REMAINDER_func (15, 8));
 16  END;
 17  /
MOD(15,2):1
REMAINDER_func(15,2):1
MOD(15,3):0
REMAINDER_func(15,3):0
MOD(15,4):3
REMAINDER_func(15,4):3
MOD(15,5):0
REMAINDER_func(15,5):0
MOD(15,6):3
REMAINDER_func(15,6):3
MOD(15,7):1
REMAINDER_func(15,7):1
MOD(15,8):7
REMAINDER_func(15,8):7

PL/SQL procedure successfully completed.

SQL>

FLOOR() returns the largest integer that is not greater than the supplied argument (in this case 15 divided by some number), rounding down when necessary. It’s interesting that Oracle used a different formula to compute REMAINDER() values, one that can (and does) produce negative results.
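
A quick check of the three functions against 15/2 (and its negative) shows why the choice matters:


select floor(15/2)   floor_pos,    -- 7
       trunc(15/2)   trunc_pos,    -- 7
       round(15/2)   round_pos,    -- 8; the rounding up is what lets (n*X) exceed m
       floor(-15/2)  floor_neg,    -- -8; FLOOR always rounds toward negative infinity
       trunc(-15/2)  trunc_neg     -- -7; TRUNC simply drops the fraction
from dual;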

A discussion ensued on Twitter that ROUND() contained a bug, yet nothing was further from the truth. ROUND() behaves exactly as it should; the issue is that ROUND() can generate a number that, when multiplied by the supplied divisor, is larger than the supplied dividend, producing remainders that are negative. So the ‘bug’, if you will, is in how Oracle implemented REMAINDER() [R=m-(n*X)], and that ‘bug’ can be fixed by coding the function to produce (n*X) values that are always less than or equal to m.

No matter how thoroughly testing is done it seems that one of two problems will rear its ugly head. One is that as hard as you try you can’t generate every bit of data the application will see ‘in the wild’. The other is you can overlook simple examples in favor of the more complex. This is, of course, why we file bug reports with software vendors, because we caught something they didn’t. It is, I suppose, the ‘circle of life’ for software.

Now the remainder is up to you.

June 14, 2016

“It Doesn’t Work”

Filed under: General — dfitzjarrell @ 17:09


“Since you got here by not thinking, it seems reasonable to expect that, in order to get out, you must start thinking.”
— Norton Juster, The Phantom Tollbooth

The Devil is in the details; in support forums and newsgroups, however, the Devil is just as present when the details are sketchy or non-existent. Volunteers in such forums and newsgroups quite often hear the dreaded words “it doesn’t work”, followed by … nothing. No detail, no explanation, no follow-up, nothing. If you walked into your doctor’s office and said, with no context, “it hurts”, your doctor would be at a loss with respect to any reasonable attempt at a diagnosis. WHAT hurts? Where? When did it start? All questions your doctor would immediately ask in hopes of finding an answer. Support volunteers have no crystal balls, no Ouija boards, no Tarot cards and are not mind readers, so leaving out important information makes it impossible for them to assess your situation.

Encountering a problem for the first time can be frustrating, that’s understood, so it stands to reason that you may not have much of a clue on how to ask your question. How should you frame questions in such forums? I blogged about that here so I won’t repeat myself. Suffice it to say that the more detail and explanation you can provide to support volunteers the better the responses will be. Also try your best to find an answer before you go to the forums and newsgroups. The more YOU can do to find an answer outside of the forums the more likely you’ll find help in the forums when you really need it.

Members of such forums see all sorts of questions, many of them repeatedly, so supplying as much information as you can when you first ask your question is important. If you ‘miss the mark’, so to speak, when supplying relevant details these volunteers will ask you questions to narrow down the possibilities. Doing so doesn’t mean they don’t know what they are talking about; it means you didn’t make your situation clear enough to generate a usable answer. Another response you might encounter for ambiguous questions is “That depends…”, and that isn’t a red flag of ignorance, it’s the volunteers trying to get you to be more specific with your details. Remember, the more they know about your problem the better able they will be to find you an answer.

Patience is a virtue; forum members don’t sit at their computers every second of every day just waiting for you to ask questions so you shouldn’t expect immediate responses. Sometimes it may take a day to get an initial response, so that shouldn’t worry you. In some cases there may be forum members who tend to answer questions in certain specific areas, and your question may fall into one of those areas, and those members may be offline when you get the time to ask. Just because you don’t get an immediate response in no way indicates you and your question are being ignored, and a delay in responses doesn’t mean you need to post your question again and again. Rest assured you will have the attention of forum members and someone will provide a response, even if it’s to ask for more information. Quality takes time.

It bears repeating that the brilliant souls in these forums and newsgroups are volunteers, not paid support personnel, and they are in these forums because they actually, honestly want to help by sharing knowledge they have gained through years of experience. They are giving their time to help you and others like you and it would be wise to remember that. Having a sense of entitlement has no place in these forums; behaving as though you are owed an answer is probably the surest way to keep those volunteers from helping you. Treat them as you would want to be treated and you may find that these volunteers will go the extra mile to help you.

It’s definitely worth thinking about.

June 6, 2016

“It’s … MUTATING!!!!”

Filed under: General — dfitzjarrell @ 08:59

“The most important reason for going from one place to another is to see what's in between.” 
― Norton Juster, The Phantom Tollbooth   

The mutating table error (ORA-04091) thrown by Oracle can be a familiar sight throughout a DBA’s career, along with the usual question of ‘Why?’. The error itself is not difficult to understand nor is it hazardous to the table data. Let’s look at why the error is thrown, why it’s not going to create data problems and how to possibly fix it.

The first thing to realize is that the table isn’t actually mutating; the error is thrown because of the read-consistency mechanism Oracle employs to guard against ‘dirty reads’, which are reads of uncommitted changes. For all but the session performing the data modifications Oracle uses undo to reconstruct the data image as of the starting time of the query, which prevents any uncommitted changes from being read. The session performing the modifications can see its own uncommitted changes, but only after the modification has completed. Insert 10 rows into a table, then query the count, and you’ll see there are 10 more rows than when you started; every other session sees only the committed results. The mutating issue surfaces when a row-level trigger attempts to read or modify the same table being changed by the driving insert, update or delete statement. The change IS being made, but the triggering statement hasn’t yet completed all of the actions it started (such as firing triggers), so the trigger would be operating on incompletely modified data. Since the insert/update/delete can’t complete until the trigger successfully executes, and executing the trigger would pile changes on top of changes in the middle of a transaction, Oracle doesn’t allow it and throws the error.
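
A minimal reproduction (assuming the familiar EMP demo table with EMPNO and SAL columns is available) makes the behavior easy to see:


create or replace trigger emp_sal_mutating
after update of sal on emp
for each row
declare
   v_avg   number;
begin
   select avg(sal) into v_avg from emp;   -- reads the table currently being updated
end;
/

update emp set sal = sal * 1.1 where empno = 7369;
-- the update now fails with ORA-04091: table ... EMP is mutating,
-- trigger/function may not see it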

In a promoted video on YouTube the following statement is made:


“… that you are breaking this rule and stop your table data and schema objects from corruption.”

Trying to modify data that is in the middle of another modification operation won’t corrupt it, since Oracle won’t allow such updates to occur. Depending on the type of trigger involved Oracle will either try to modify existing data before the requested modifications are processed, or Oracle will process the insert/update and then try to modify the resulting data before the transaction ends. In either case, since an active modification is in progress, Oracle won’t allow a second modification from the same session on the incompletely altered data. Were this safety mechanism not in place it would be possible to ‘mangle’ table data into an irreversible state and thus ‘corrupt’ it; schema object changes, however, are effected through data definition language (DDL) statements, which are both preceded and followed by a commit. DML, such as insert, update and delete statements, cannot corrupt schema object structure.

Getting around such an error can be done in two ways: using a compound trigger if you’re using Oracle 11g or later, or by using the autonomous_transaction pragma. The following example shows how to code such a trigger to avoid the ORA-04091 error:


create or replace procedure set_comm(p_empno in number, p_sal in number, p_oldsal number)
is
pragma autonomous_transaction;
begin
	if p_sal >= p_oldsal*1.10 then
		update emp
		set comm = p_sal*.10
		where empno = p_empno;
	end if;
end;
/

create or replace trigger after_emp_updt
after update of sal on emp
for each row
begin
		set_comm(:new.empno, :new.sal, :old.sal);
end;
/

The trigger is simple, and uses a procedure declared to execute as an autonomous transaction. Autonomous transactions run independently of the calling transaction and are thus subject to the standard read-consistency mechanism Oracle employs. The procedure performs its work outside of the transaction that fired the trigger, getting around the ORA-04091 error entirely.

The compound trigger operates a bit differently: its query against the table runs in the BEFORE STATEMENT section, before any rows have been modified, so there is no in-flight data change for it to collide with. Let’s look at an example from a previous article to see how this is done:


SQL> create or replace trigger check_raise_on_avg
  2  for update of sal on emp
  3  COMPOUND TRIGGER
  4    Twelve_Percent        constant number:=0.12;
  5
  6    -- Declare collection type and variable:
  7
  8    TYPE Department_Salaries_t  IS TABLE OF Emp.Sal%TYPE
  9                                  INDEX BY VARCHAR2(80);
 10    Department_Avg_Salaries     Department_Salaries_t;
 11    TYPE Sal_t             IS TABLE OF Emp.Sal%TYPE;
 12    Avg_Salaries                Sal_t;
 13    TYPE Deptno_t       IS TABLE OF Emp.Deptno%TYPE;
 14    Department_IDs              Deptno_t;
 15
 16    BEFORE STATEMENT IS
 17    BEGIN
 18      SELECT               AVG(e.Sal), NVL(e.Deptno, -1)
 19        BULK COLLECT INTO  Avg_Salaries, Department_IDs
 20        FROM               Emp e
 21        GROUP BY           e.Deptno;
 22      FOR j IN 1..Department_IDs.COUNT() LOOP
 23        Department_Avg_Salaries(Department_IDs(j)) := Avg_Salaries(j);
 24      END LOOP;
 25    END BEFORE STATEMENT;
 26
 27    AFTER EACH ROW IS
 28    BEGIN
 29      IF :NEW.Sal - :Old.Sal >
 30        Twelve_Percent*Department_Avg_Salaries(:NEW.Deptno)
 31      THEN
 32        Raise_Application_Error(-20000, 'Raise too large');
 33      END IF;
 34    END AFTER EACH ROW;
 35  END Check_Raise_On_Avg;
 36  /

Trigger created.

SQL> select empno, sal from emp;

     EMPNO        SAL
---------- ----------
      7369        800
      7499       1600
      7521       1250
      7566       2975
      7654       1250
      7698       2850
      7782       2450
      7788       3000
      7839       5000
      7844       1500
      7876       1100
      7900        950
      7902       3000
      7934       1300

14 rows selected.

SQL>
SQL> update emp set sal=sal*1.10 where empno = 7369;

1 row updated.

SQL>
SQL> select empno, sal from emp;

     EMPNO        SAL
---------- ----------
      7369        880
      7499       1600
      7521       1250
      7566       2975
      7654       1250
      7698       2850
      7782       2450
      7788       3000
      7839       5000
      7844       1500
      7876       1100
      7900        950
      7902       3000
      7934       1300

14 rows selected.

SQL>

Updates occur even though the trigger also selects data from the table being modified, made possible by the way Oracle executes compound triggers.
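
For completeness, a raise well above twelve percent of the departmental average should be rejected by the AFTER EACH ROW section; a quick sketch of such a test (doubling this salary adds far more than twelve percent of its department’s average salary):


SQL> --
SQL> -- This raise exceeds twelve percent of the department average,
SQL> -- so the trigger should abort the update with
SQL> -- ORA-20000: Raise too large
SQL> --
SQL> update emp set sal = sal*2 where empno = 7369;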

The mutating table error is inconvenient but it really isn’t a danger to your data or your schema objects. It’s a read consistency problem that can be overcome by using autonomous transactions or compound triggers. Knowing that should make the DBA’s life a little bit easier.

Sometimes what’s in between can be interesting.

March 21, 2016

Repeat After Me

Filed under: General — dfitzjarrell @ 08:44

“every time you decide something without having a good reason, you jump to Conclusions whether you like it or not.” 
-- Norton Juster, The Phantom Tollbooth

In an Oracle forum recently the following question was posted:


I have a request for 3 indices as shown below. Does the 1st index suffice for 2 and 3?  Do I need all 3?
 
    CREATE INDEX IDX_ATM_EM_EFF_VER_CURRENT_DEL ON ATM_xxx_SALARY
    (EMPLOYEE_KEY, EFFECTIVE_DT, SALARY_VERSION_NUMBER, IS_CURRENT, IS_DELETED);

    CREATE INDEX IDX_ATM_EM_VER_CURRENT ON ATM_xxx_SALARY
    (EMPLOYEE_KEY, SALARY_VERSION_NUMBER, IS_CURRENT);

    CREATE INDEX IDX_ATM_SAL_CURRENT_DEL ON ATM_xxx_SALARY
    (EMPLOYEE_KEY, IS_DELETED, IS_CURRENT);

Answering such a question in the absence of any additional information isn’t easy, but looking at the columns it’s not unrealistic to make the following presumptions about the data:

        EMPLOYEE_KEY must be not null and unique
        IS_CURRENT is either a YES or NO value
        IS_DELETED is either a YES or NO value    

The following example, using queries tailored to the indexes to be created, was run in Oracle 11.2.0.4; notice that the first index is used in all three queries:


SQL> --
SQL> -- Create the table
SQL> --
SQL> create table atm_xxx_salary(
  2  employee_key            varchar2(20) not null,
  3  emp_fname               varchar2(20),
  4  emp_lname               varchar2(40),
  5  effective_dt            date,
  6  salary_version_number      number,
  7  is_current              varchar2(3),
  8  is_deleted              varchar2(3));
      
Table created.

SQL>
SQL> --
SQL> -- Load data
SQL> --
SQL> begin
  2          for i in 1..100 loop
  3                  insert into atm_xxx_salary
  4                  values('Employee_'||i, 'Blobbo', 'Fingnestle', sysdate + mod(i, 9), mod(i,11)+1, 'YES','NO');
  5  End loop;
  6          commit;
  7  end;
  8  /

PL/SQL procedure successfully completed.

SQL>
SQL> --
SQL> -- Create first index
SQL> --
SQL> CREATE INDEX IDX_ATM_EM_EFF_VER_CURRENT_DEL ON ATM_xxx_SALARY
  2  (EMPLOYEE_KEY, EFFECTIVE_DT, SALARY_VERSION_NUMBER, IS_CURRENT, IS_DELETED);
         
Index created.
         
SQL>
SQL> exec dbms_stats.gather_schema_stats(user);
      
PL/SQL procedure successfully completed.
    
SQL>
SQL> set autotrace on linesize 150
SQL>
SQL> select *
  2  from atm_xxx_salary
  3  where employee_key like 'Emp%10%'
  4  and effective_dt between sysdate and sysdate+4
  5  and salary_version_number = 7
  6  and is_current = 'YES';

no rows selected

   
       
Execution Plan
----------------------------------------------------------
Plan hash value: 2729507590

---------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name                           | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                                |     6 |   288 |     2   (0)| 00:00:01 |
|*  1 |  FILTER                      |                                |       |       |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID| ATM_XXX_SALARY                 |     6 |   288 |     2   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN          | IDX_ATM_EM_EFF_VER_CURRENT_DEL |     6 |       |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------------------
       
Predicate Information (identified by operation id):
---------------------------------------------------
     
   1 - filter(SYSDATE@!+4>=SYSDATE@!)
   3 - access("EMPLOYEE_KEY" LIKE 'Emp%10%' AND "EFFECTIVE_DT">=SYSDATE@! AND
              "SALARY_VERSION_NUMBER"=7 AND "IS_CURRENT"='YES' AND "EFFECTIVE_DT"<=SYSDATE@!+4)
       filter("SALARY_VERSION_NUMBER"=7 AND "EFFECTIVE_DT"=SYSDATE@! AND "IS_CURRENT"='YES')

SQL>
SQL> select *
  2  from atm_xxx_salary
  3  where employee_key like 'Emp%10%'
  4  and is_deleted = 'NO'
  5  and is_current = 'YES';
        
EMPLOYEE_KEY         EMP_FNAME            EMP_LNAME                                EFFECTIVE SALARY_VERSION_NUMBER IS_ IS_
-------------------- -------------------- ---------------------------------------- --------- --------------------- --- ---
Employee_10          Blobbo               Fingnestle                               22-MAR-16                    11 YES NO
Employee_100         Blobbo               Fingnestle                               22-MAR-16                     2 YES NO


Execution Plan
----------------------------------------------------------
Plan hash value: 1328304448
--------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name                           | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                                |    99 |  4752 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID |  ATM_XXX_SALARY                |    99 |  4752 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN           | IDX_ATM_EM_EFF_VER_CURRENT_DEL |    99 |       |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------------------
        

Predicate Information (identified by operation id):
---------------------------------------------------
         
   2 - access("EMPLOYEE_KEY" LIKE 'Emp%10%' AND "IS_CURRENT"='YES' AND "IS_DELETED"='NO')
       filter("EMPLOYEE_KEY" LIKE 'Emp%10%' AND "IS_DELETED"='NO' AND "IS_CURRENT"='YES')
        
SQL>
SQL> select *
  2  from atm_xxx_salary
  3  where employee_key like 'Emp%10%'
  4  and is_deleted = 'NO'
  5  and is_current = 'YES';
       
EMPLOYEE_KEY         EMP_FNAME            EMP_LNAME                                EFFECTIVE SALARY_VERSION_NUMBER IS_ IS_
-------------------- -------------------- ---------------------------------------- --------- --------------------- --- ---
Employee_10          Blobbo               Fingnestle                               22-MAR-16                    11 YES NO
Employee_100         Blobbo               Fingnestle                               22-MAR-16                     2 YES NO


Execution Plan
----------------------------------------------------------
Plan hash value: 1328304448
--------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name                           | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                                |    99 |  4752 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID | ATM_XXX_SALARY                 |    99 |  4752 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN           | IDX_ATM_EM_EFF_VER_CURRENT_DEL |    99 |       |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------------------
         
Predicate Information (identified by operation id):
---------------------------------------------------
        
   2 - access("EMPLOYEE_KEY" LIKE 'Emp%10%' AND "IS_CURRENT"='YES' AND "IS_DELETED"='NO')
       filter("EMPLOYEE_KEY" LIKE 'Emp%10%' AND "IS_DELETED"='NO' AND "IS_CURRENT"='YES')

        
         
SQL>
SQL> set autotrace off

The above appears to answer the question and prove the remaining two indexes are unnecessary, but it can’t hurt to test that theory by creating them and running the same set of queries after each index creation. Creating the second index apparently does no good, as the optimizer still prefers the first:

       
SQL> --
SQL> -- Create second index
SQL> --
SQL> CREATE INDEX IDX_ATM_EM_VER_CURRENT ON ATM_xxx_SALARY
  2  (EMPLOYEE_KEY, SALARY_VERSION_NUMBER, IS_CURRENT);
         
Index created.
         
SQL>
SQL>
SQL> set autotrace on
SQL>
SQL> select *
  2  from atm_xxx_salary
  3  where employee_key like 'Emp%10%'
  4  and effective_dt between sysdate and sysdate+4
  5  and salary_version_number = 7
  6  and is_current = 'YES';
         
no rows selected
        
         
Execution Plan
----------------------------------------------------------
Plan hash value: 2729507590
---------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name                           | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |                                |     6 |   288 |     2   (0)| 00:00:01 |
|*  1 |  FILTER                       |                                |       |       |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID | ATM_XXX_SALARY                 |     6 |   288 |     2   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN           | IDX_ATM_EM_EFF_VER_CURRENT_DEL |     6 |       |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------------------
         
Predicate Information (identified by operation id):
---------------------------------------------------
         
   1 - filter(SYSDATE@!+4>=SYSDATE@!)
   3 - access("EMPLOYEE_KEY" LIKE 'Emp%10%' AND "EFFECTIVE_DT">=SYSDATE@! AND
              "SALARY_VERSION_NUMBER"=7 AND "IS_CURRENT"='YES' AND "EFFECTIVE_DT"<=SYSDATE@!+4)
       filter("SALARY_VERSION_NUMBER"=7 AND "EFFECTIVE_DT"=SYSDATE@! AND "IS_CURRENT"='YES')
         
SQL>
SQL> select *
  2  from atm_xxx_salary
  3  where employee_key like 'Emp%10%'
  4  and is_deleted = 'NO'
  5  and is_current = 'YES';
      
EMPLOYEE_KEY         EMP_FNAME            EMP_LNAME                                EFFECTIVE SALARY_VERSION_NUMBER IS_ IS_
-------------------- -------------------- ---------------------------------------- --------- --------------------- --- ---
Employee_10          Blobbo               Fingnestle                               22-MAR-16                    11 YES NO
Employee_100         Blobbo               Fingnestle                               22-MAR-16                     2 YES NO
         
         
Execution Plan
----------------------------------------------------------
Plan hash value: 1328304448
--------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name                           | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                                |    99 |  4752 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID | ATM_XXX_SALARY                 |    99 |  4752 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN           | IDX_ATM_EM_EFF_VER_CURRENT_DEL |    99 |       |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------------------
         
Predicate Information (identified by operation id):
---------------------------------------------------
         
   2 - access("EMPLOYEE_KEY" LIKE 'Emp%10%' AND "IS_CURRENT"='YES' AND "IS_DELETED"='NO')
       filter("EMPLOYEE_KEY" LIKE 'Emp%10%' AND "IS_DELETED"='NO' AND "IS_CURRENT"='YES')
         
SQL>
SQL> select *
  2  from atm_xxx_salary
  3  where employee_key like 'Emp%10%'
  4  and is_deleted = 'NO'
  5  and is_current = 'YES';
      
EMPLOYEE_KEY         EMP_FNAME            EMP_LNAME                                EFFECTIVE SALARY_VERSION_NUMBER IS_ IS_
-------------------- -------------------- ---------------------------------------- --------- --------------------- --- ---
Employee_10          Blobbo               Fingnestle                               22-MAR-16                    11 YES NO
Employee_100         Blobbo               Fingnestle                               22-MAR-16                     2 YES NO
         
         
Execution Plan
----------------------------------------------------------
Plan hash value: 1328304448
         
--------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name                           | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                                |    99 |  4752 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID | ATM_XXX_SALARY                 |    99 |  4752 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN           | IDX_ATM_EM_EFF_VER_CURRENT_DEL |    99 |       |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------------------
         
Predicate Information (identified by operation id):
---------------------------------------------------
         
   2 - access("EMPLOYEE_KEY" LIKE 'Emp%10%' AND "IS_CURRENT"='YES' AND "IS_DELETED"='NO')
       filter("EMPLOYEE_KEY" LIKE 'Emp%10%' AND "IS_DELETED"='NO' AND "IS_CURRENT"='YES')
         
SQL>
SQL> set autotrace off

Let’s create the third index and see what Oracle does:

       
SQL> --
SQL> -- Create third index
SQL> --
SQL> CREATE INDEX IDX_ATM_SAL_CURRENT_DEL ON ATM_xxx_SALARY
  2  (EMPLOYEE_KEY, IS_DELETED, IS_CURRENT);
         
Index created.
         
SQL>
SQL> set autotrace on
SQL>
SQL> select *
  2  from atm_xxx_salary
  3  where employee_key like 'Emp%10%'
  4  and effective_dt between sysdate and sysdate+4
  5  and salary_version_number = 7
  6  and is_current = 'YES';
         
no rows selected
         
         
Execution Plan
----------------------------------------------------------
Plan hash value: 2729507590
         
---------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name                           | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |                                |     6 |   288 |     2   (0)| 00:00:01 |
|*  1 |  FILTER                       |                                |       |       |            |       |
|   2 |   TABLE ACCESS BY INDEX ROWID | ATM_XXX_SALARY                 |     6 |   288 |     2   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN           | IDX_ATM_EM_EFF_VER_CURRENT_DEL |     6 |       |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------------------
         
Predicate Information (identified by operation id):
---------------------------------------------------
         
   1 - filter(SYSDATE@!+4>=SYSDATE@!)
   3 - access("EMPLOYEE_KEY" LIKE 'Emp%10%' AND "EFFECTIVE_DT">=SYSDATE@! AND
              "SALARY_VERSION_NUMBER"=7 AND "IS_CURRENT"='YES' AND "EFFECTIVE_DT"<=SYSDATE@!+4)
       filter("SALARY_VERSION_NUMBER"=7 AND "EFFECTIVE_DT"=SYSDATE@! AND "IS_CURRENT"='YES')
         
SQL>
SQL> select *
  2  from atm_xxx_salary
  3  where employee_key like 'Emp%10%'
  4  and is_deleted = 'NO'
  5  and is_current = 'YES';
         
EMPLOYEE_KEY         EMP_FNAME            EMP_LNAME                                EFFECTIVE SALARY_VERSION_NUMBER IS_ IS_
-------------------- -------------------- ---------------------------------------- --------- --------------------- --- ---
Employee_10          Blobbo               Fingnestle                               22-MAR-16                    11 YES NO
Employee_100         Blobbo               Fingnestle                               22-MAR-16                     2 YES NO
         
         
Execution Plan
---------------------------------------------------------
Plan hash value: 1562453262
         
-------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name                    | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                         |    99 |  4752 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID | ATM_XXX_SALARY          |    99 |  4752 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN           | IDX_ATM_SAL_CURRENT_DEL |    99 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------
         
Predicate Information (identified by operation id):
---------------------------------------------------
         
    2 - access("EMPLOYEE_KEY" LIKE 'Emp%10%' AND "IS_DELETED"='NO' AND "IS_CURRENT"='YES')
        filter("EMPLOYEE_KEY" LIKE 'Emp%10%' AND "IS_DELETED"='NO' AND "IS_CURRENT"='YES')
         
SQL>
SQL> select *
  2  from atm_xxx_salary
  3  where employee_key like 'Emp%10%'
  4  and is_deleted = 'NO'
  5  and is_current = 'YES';
         
EMPLOYEE_KEY         EMP_FNAME            EMP_LNAME                                EFFECTIVE SALARY_VERSION_NUMBER IS_ IS_
-------------------- -------------------- ---------------------------------------- --------- --------------------- --- ---
Employee_10          Blobbo               Fingnestle                               22-MAR-16                    11 YES NO
Employee_100         Blobbo               Fingnestle                               22-MAR-16                     2 YES NO
         
         
Execution Plan
----------------------------------------------------------
Plan hash value: 1562453262
         
-------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name                    | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                         |    99 |  4752 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID | ATM_XXX_SALARY          |    99 |  4752 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN           | IDX_ATM_SAL_CURRENT_DEL |    99 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------
         
Predicate Information (identified by operation id):
---------------------------------------------------
         
   2 - access("EMPLOYEE_KEY" LIKE 'Emp%10%' AND "IS_DELETED"='NO' AND "IS_CURRENT"='YES')
       filter("EMPLOYEE_KEY" LIKE 'Emp%10%' AND "IS_DELETED"='NO' AND "IS_CURRENT"='YES')

SQL>
SQL>
SQL> set autotrace off

Notice that the first and third indexes are used, with the third index chosen for the second and third queries. Remember that the first index was used for all three queries, so given the presumptions made about the nature of the data the second and third indexes appear not to be needed. Of course this is not rigorous testing, and other queries and/or data distributions could produce different results. You really need to test, test, test, test and test again before making a final decision. But the indications are good that a single index would suffice to provide “speedy” data retrieval.
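
One way to extend that testing beyond a handful of ad-hoc queries is Oracle’s index usage monitoring, which records whether an index was touched at all during a representative workload (a simple YES/NO flag, not a count). A minimal sketch, run as the schema owner since V$OBJECT_USAGE reports only on the connected user’s indexes:


SQL> --
SQL> -- Flag the candidate indexes for usage monitoring
SQL> --
SQL> alter index idx_atm_em_ver_current monitoring usage;
SQL> alter index idx_atm_sal_current_del monitoring usage;
SQL> --
SQL> -- ... run the normal application workload for a while ...
SQL> --
SQL> select index_name, monitoring, used
  2  from v$object_usage;
SQL> --
SQL> -- Turn monitoring off once the sample is complete
SQL> --
SQL> alter index idx_atm_em_ver_current nomonitoring usage;
SQL> alter index idx_atm_sal_current_del nomonitoring usage;

An index still showing USED = 'NO' after a full business cycle is a much safer drop candidate than one judged on three test queries alone.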

Please don’t make me repeat myself.

February 17, 2016

“That Ain’t Right”

Filed under: General — dfitzjarrell @ 09:46

"Is everyone who lives in Ignorance like you?" asked Milo.
"Much worse," he said longingly. "But I don't live here. I'm from a place very far away called Context." 
-- Norton Juster, The Phantom Tollbooth 

Dates are, well, dates. They are not strings, they are not numbers; they are dates, and to handle them properly the correct data type needs to be used. I can’t count how many times I’ve seen dates stored as VARCHAR2 strings or as numbers, neither of which treats them properly or orders them correctly. Recently in a forum I frequent this question was asked:


Version Oracle 11 g R2


Hello every body
 

First of All i know the Format is wrong
but as it is a old project i need to extract the report as it is

there is a table with two fields and data as follow

create table Onduty_leave
(
Start_time varchar2(100),
end_time  varchar2(100)
);

insert into onduty_leave (start_time,end_time) values('09.00 am','06.00 pm');

Minutes i am handling differently

Please just tell me how to calculate hours between 09 am to 06 pm 

Yes, this person realizes that the data type is incorrect for the data being stored, but the fact remains someone thought this was a good idea and implemented it in a production system. Of course it’s possible to provide the results the person posting this question is after, but it would be so much easier had the correct data type been used. Be that as it may, let’s proceed with the current table definition and show what can be done to calculate the desired quantity. Let’s build and populate the table with some simple code:


SQL> 
SQL> --
SQL> -- Create the table with the improper data type
SQL> --
SQL> create table Onduty_leave
  2  (
  3  Start_time varchar2(100),
  4  end_time  varchar2(100)
  5  );

Table created.

SQL> 
SQL> --
SQL> -- Populate the table
SQL> --
SQL> begin
  2  	     for i in 1..10 loop
  3  		     insert into onduty_leave
  4  		     values(sysdate-i, sysdate+i);
  5  	     end loop;
  6  
  7  	     commit;
  8  end;
  9  /

PL/SQL procedure successfully completed.

SQL> 
SQL> --
SQL> -- Set the display format to see the date and time
SQL> --
SQL> alter session set nls_date_format = 'DD-MON-RRRR HH24:MI:SS';

Session altered.

SQL> 
SQL> --
SQL> -- Format the strings for display
SQL> --
SQL> column start_time format a25
SQL> column end_time format a25
SQL> 

We are now ready to compute the hours between the two provided dates in each record. Since these are VARCHAR2 strings it will be necessary to use TO_CHAR and TO_DATE to get proper results.


SQL> --
SQL> -- 24 hours per day, find days between and multiply
SQL> -- This uses the entire date string to provide correct
SQL> -- results
SQL> --
SQL> 
SQL> select (to_date(end_time) - to_date(start_time))*24 hours_between
  2  from onduty_leave;

HOURS_BETWEEN                                                                   
-------------                                                                   
           48                                                                   
           96                                                                   
          144                                                                   
          192                                                                   
          240                                                                   
          288                                                                   
          336                                                                   
          384                                                                   
          432                                                                   
          480                                                                   

10 rows selected.

SQL> 

It isn’t rocket science but it is a bit more work than we should be doing to get these answers. The data in the table was generated simply; let’s generate some new data that may more accurately reflect real-world entries into this table:


SQL> --
SQL> -- Purge all current data
SQL> --
SQL> truncate table onduty_leave;

Table truncated.

SQL> 
SQL> --
SQL> -- Populate table with more realistic records
SQL> --
SQL> begin
  2  	     for i in 1..10 loop
  3  		     insert into onduty_leave
  4  		     values(sysdate, sysdate+((25*i)/24));
  5  	     end loop;
  6  
  7  	     commit;
  8  end;
  9  /

PL/SQL procedure successfully completed.

SQL> 

Using just the time from each entry let’s compute the new hours difference between them:


SQL> --
SQL> -- Hours between entries
SQL> -- Use only the time portion of the date
SQL> --
SQL> -- This produces wrong results since the date
SQL> -- isn't included in the calculation
SQL> --
SQL> select start_time, end_time,
  2  	    (to_date(to_char(to_date(end_time), 'HH24:MI:SS'), 'HH24:MI:SS') - to_date(to_char(to_date(start_time), 'HH24:MI:SS'), 'HH24:MI:SS'))*24 hours_between
  3  from onduty_leave;

START_TIME                END_TIME                  HOURS_BETWEEN               
------------------------- ------------------------- -------------               
17-FEB-2016 09:12:28      18-FEB-2016 10:12:28                  1               
17-FEB-2016 09:12:28      19-FEB-2016 11:12:28                  2               
17-FEB-2016 09:12:28      20-FEB-2016 12:12:28                  3               
17-FEB-2016 09:12:28      21-FEB-2016 13:12:28                  4               
17-FEB-2016 09:12:28      22-FEB-2016 14:12:28                  5               
17-FEB-2016 09:12:28      23-FEB-2016 15:12:28                  6               
17-FEB-2016 09:12:28      24-FEB-2016 16:12:28                  7               
17-FEB-2016 09:12:28      25-FEB-2016 17:12:28                  8               
17-FEB-2016 09:12:28      26-FEB-2016 18:12:28                  9               
17-FEB-2016 09:12:28      27-FEB-2016 19:12:28                 10               

10 rows selected.

SQL> 

The initial request was flawed by assuming each record would reference a single day; even though this may be true for the actual data the solution cannot rely on such assumptions. Obviously the results from that last query were wrong; let’s use the full date string and see what values we get:


SQL> --
SQL> -- 24 hours per day, find days between and multiply
SQL> --
SQL> -- Use full date string to compute the hours between
SQL> -- entries
SQL> --
SQL> 
SQL> select start_time, end_time,
  2  	    (to_date(end_time) - to_date(start_time))*24 hours_between
  3  from onduty_leave;

START_TIME                END_TIME                  HOURS_BETWEEN               
------------------------- ------------------------- -------------               
17-FEB-2016 09:12:28      18-FEB-2016 10:12:28                 25               
17-FEB-2016 09:12:28      19-FEB-2016 11:12:28                 50               
17-FEB-2016 09:12:28      20-FEB-2016 12:12:28                 75               
17-FEB-2016 09:12:28      21-FEB-2016 13:12:28                100               
17-FEB-2016 09:12:28      22-FEB-2016 14:12:28                125               
17-FEB-2016 09:12:28      23-FEB-2016 15:12:28                150               
17-FEB-2016 09:12:28      24-FEB-2016 16:12:28                175               
17-FEB-2016 09:12:28      25-FEB-2016 17:12:28                200               
17-FEB-2016 09:12:28      26-FEB-2016 18:12:28                225               
17-FEB-2016 09:12:28      27-FEB-2016 19:12:28                250               

10 rows selected.

SQL> 

This looks much better, but it’s still based on date values stored as strings. With the format specified we can successfully order the data for a single month, but when we add data for other months and sort it ‘strange’ things can happen:


SQL> --
SQL> -- Populate table with more records
SQL> --
SQL> begin
  2  	     for i in 11..100 loop
  3  		     insert into onduty_leave
  4  		     values(sysdate, sysdate+((25*i)/24));
  5  	     end loop;
  6  
  7  	     commit;
  8  end;
  9  /

PL/SQL procedure successfully completed.

SQL> 
SQL> --
SQL> -- Sort the data
SQL> --
SQL> select *
  2  from onduty_leave
  3  order by 2;

START_TIME                END_TIME                                              
------------------------- -------------------------                             
17-FEB-2016 09:29:04      01-APR-2016 03:29:04                                  
17-FEB-2016 09:29:04      01-MAR-2016 22:29:04                                  
17-FEB-2016 09:29:04      01-MAY-2016 08:29:04                                  
17-FEB-2016 09:29:04      02-APR-2016 04:29:04                                  
17-FEB-2016 09:29:04      02-MAR-2016 23:29:04                                  
17-FEB-2016 09:29:04      02-MAY-2016 09:29:04                                  
17-FEB-2016 09:29:04      03-APR-2016 05:29:04                                  
17-FEB-2016 09:29:04      03-MAY-2016 10:29:04                                  
17-FEB-2016 09:29:04      04-APR-2016 06:29:04                                  
17-FEB-2016 09:29:04      04-MAR-2016 00:29:04                                  
17-FEB-2016 09:29:04      04-MAY-2016 11:29:04                                  
17-FEB-2016 09:29:04      05-APR-2016 07:29:04                                  
17-FEB-2016 09:29:04      05-MAR-2016 01:29:04                                  
17-FEB-2016 09:29:04      05-MAY-2016 12:29:04                                  
17-FEB-2016 09:29:04      06-APR-2016 08:29:04                                  
17-FEB-2016 09:29:04      06-MAR-2016 02:29:04                                  
17-FEB-2016 09:29:04      06-MAY-2016 13:29:04                                  
17-FEB-2016 09:29:04      07-APR-2016 09:29:04                                  
17-FEB-2016 09:29:04      07-MAR-2016 03:29:04                                  
17-FEB-2016 09:29:04      07-MAY-2016 14:29:04                                  
17-FEB-2016 09:29:04      08-APR-2016 10:29:04                                  
17-FEB-2016 09:29:04      08-MAR-2016 04:29:04                                  
...

100 rows selected.

SQL> 

Oracle doesn’t know these are dates, so they are sorted as ASCII text, making the order anything but logical as far as dates go. Let’s drop the original table, recreate it with the proper DATE data type and try this again:


SQL> --
SQL> -- Drop the original table
SQL> --
SQL> drop table onduty_leave purge;

Table dropped.

SQL> 
SQL> --
SQL> -- Create the table with the proper data type
SQL> --
SQL> create table Onduty_leave
  2  (
  3  Start_time date,
  4  end_time  date
  5  );

Table created.

SQL> 
SQL> --
SQL> -- Populate the table
SQL> --
SQL> begin
  2  	     for i in 1..10 loop
  3  		     insert into onduty_leave
  4  		     values(sysdate, sysdate+((25*i)/24));
  5  	     end loop;
  6  
  7  	     commit;
  8  end;
  9  /

PL/SQL procedure successfully completed.

SQL> 
SQL> --
SQL> -- Compute the hours between entries for a given record
SQL> --
SQL> select (end_time - start_time)*24 hours_between
  2  from onduty_leave;

HOURS_BETWEEN                                                                   
-------------                                                                   
           25                                                                   
           50                                                                   
           75                                                                   
          100                                                                   
          125                                                                   
          150                                                                   
          175                                                                   
          200                                                                   
          225                                                                   
          250                                                                   

10 rows selected.

SQL> 
SQL> --
SQL> -- Populate table with more records
SQL> --
SQL> begin
  2  	     for i in 11..100 loop
  3  		     insert into onduty_leave
  4  		     values(sysdate, sysdate+((25*i)/24));
  5  	     end loop;
  6  
  7  	     commit;
  8  end;
  9  /

PL/SQL procedure successfully completed.

SQL> 
SQL> --
SQL> -- Sort the data
SQL> --
SQL> select *
  2  from onduty_leave
  3  order by 2;

START_TIME                END_TIME                                              
------------------------- -------------------------                             
17-FEB-2016 09:29:05      18-FEB-2016 10:29:05                                  
17-FEB-2016 09:29:05      19-FEB-2016 11:29:05                                  
17-FEB-2016 09:29:05      20-FEB-2016 12:29:05                                  
17-FEB-2016 09:29:05      21-FEB-2016 13:29:05                                  
17-FEB-2016 09:29:05      22-FEB-2016 14:29:05                                  
17-FEB-2016 09:29:05      23-FEB-2016 15:29:05                                  
17-FEB-2016 09:29:05      24-FEB-2016 16:29:05                                  
17-FEB-2016 09:29:05      25-FEB-2016 17:29:05                                  
17-FEB-2016 09:29:05      26-FEB-2016 18:29:05                                  
17-FEB-2016 09:29:05      27-FEB-2016 19:29:05                                  
17-FEB-2016 09:29:05      28-FEB-2016 20:29:05                                  

START_TIME                END_TIME                                              
------------------------- -------------------------                             
17-FEB-2016 09:29:05      29-FEB-2016 21:29:05                                  
17-FEB-2016 09:29:05      01-MAR-2016 22:29:05                                  
17-FEB-2016 09:29:05      02-MAR-2016 23:29:05                                  
17-FEB-2016 09:29:05      04-MAR-2016 00:29:05                                  
17-FEB-2016 09:29:05      05-MAR-2016 01:29:05                                  
17-FEB-2016 09:29:05      06-MAR-2016 02:29:05                                  
17-FEB-2016 09:29:05      07-MAR-2016 03:29:05                                  
17-FEB-2016 09:29:05      08-MAR-2016 04:29:05                                  
17-FEB-2016 09:29:05      09-MAR-2016 05:29:05                                  
17-FEB-2016 09:29:05      10-MAR-2016 06:29:05                                  
17-FEB-2016 09:29:05      11-MAR-2016 07:29:05                                  

START_TIME                END_TIME                                              
------------------------- -------------------------                             
17-FEB-2016 09:29:05      12-MAR-2016 08:29:05                                  
17-FEB-2016 09:29:05      13-MAR-2016 09:29:05                                  
17-FEB-2016 09:29:05      14-MAR-2016 10:29:05                                  
17-FEB-2016 09:29:05      15-MAR-2016 11:29:05                                  
17-FEB-2016 09:29:05      16-MAR-2016 12:29:05                                  
17-FEB-2016 09:29:05      17-MAR-2016 13:29:05                                  
17-FEB-2016 09:29:05      18-MAR-2016 14:29:05                                  
17-FEB-2016 09:29:05      19-MAR-2016 15:29:05                                  
17-FEB-2016 09:29:05      20-MAR-2016 16:29:05                                  
17-FEB-2016 09:29:05      21-MAR-2016 17:29:05                                  
17-FEB-2016 09:29:05      22-MAR-2016 18:29:05                                  

START_TIME                END_TIME                                              
------------------------- -------------------------                             
17-FEB-2016 09:29:05      23-MAR-2016 19:29:05                                  
17-FEB-2016 09:29:05      24-MAR-2016 20:29:05                                  
17-FEB-2016 09:29:05      25-MAR-2016 21:29:05                                  
17-FEB-2016 09:29:05      26-MAR-2016 22:29:05                                  
17-FEB-2016 09:29:05      27-MAR-2016 23:29:05                                  
17-FEB-2016 09:29:05      29-MAR-2016 00:29:05                                  
17-FEB-2016 09:29:05      30-MAR-2016 01:29:05                                  
17-FEB-2016 09:29:05      31-MAR-2016 02:29:05                                  
17-FEB-2016 09:29:05      01-APR-2016 03:29:05                                  
17-FEB-2016 09:29:05      02-APR-2016 04:29:05                                  
17-FEB-2016 09:29:05      03-APR-2016 05:29:05                                  

START_TIME                END_TIME                                              
------------------------- -------------------------                             
17-FEB-2016 09:29:05      04-APR-2016 06:29:05                                  
17-FEB-2016 09:29:05      05-APR-2016 07:29:05                                  
17-FEB-2016 09:29:05      06-APR-2016 08:29:05                                  
17-FEB-2016 09:29:05      07-APR-2016 09:29:05                                  
17-FEB-2016 09:29:05      08-APR-2016 10:29:05                                  
17-FEB-2016 09:29:05      09-APR-2016 11:29:05                                  
17-FEB-2016 09:29:05      10-APR-2016 12:29:05                                  
17-FEB-2016 09:29:05      11-APR-2016 13:29:05                                  
17-FEB-2016 09:29:05      12-APR-2016 14:29:05                                  
17-FEB-2016 09:29:05      13-APR-2016 15:29:05                                  
17-FEB-2016 09:29:05      14-APR-2016 16:29:05                                  

START_TIME                END_TIME                                              
------------------------- -------------------------                             
17-FEB-2016 09:29:05      15-APR-2016 17:29:05                                  
17-FEB-2016 09:29:05      16-APR-2016 18:29:05                                  
17-FEB-2016 09:29:05      17-APR-2016 19:29:05                                  
17-FEB-2016 09:29:05      18-APR-2016 20:29:05                                  
17-FEB-2016 09:29:05      19-APR-2016 21:29:05                                  
17-FEB-2016 09:29:05      20-APR-2016 22:29:05                                  
17-FEB-2016 09:29:05      21-APR-2016 23:29:05                                  
17-FEB-2016 09:29:05      23-APR-2016 00:29:05                                  
17-FEB-2016 09:29:05      24-APR-2016 01:29:05                                  
17-FEB-2016 09:29:05      25-APR-2016 02:29:05                                  
17-FEB-2016 09:29:05      26-APR-2016 03:29:05                                  

START_TIME                END_TIME                                              
------------------------- -------------------------                             
17-FEB-2016 09:29:05      27-APR-2016 04:29:05                                  
17-FEB-2016 09:29:05      28-APR-2016 05:29:05                                  
17-FEB-2016 09:29:05      29-APR-2016 06:29:05                                  
17-FEB-2016 09:29:05      30-APR-2016 07:29:05                                  
17-FEB-2016 09:29:05      01-MAY-2016 08:29:05                                  
17-FEB-2016 09:29:05      02-MAY-2016 09:29:05                                  
17-FEB-2016 09:29:05      03-MAY-2016 10:29:05                                  
17-FEB-2016 09:29:05      04-MAY-2016 11:29:05                                  
17-FEB-2016 09:29:05      05-MAY-2016 12:29:05                                  
17-FEB-2016 09:29:05      06-MAY-2016 13:29:05                                  
17-FEB-2016 09:29:05      07-MAY-2016 14:29:05                                  

START_TIME                END_TIME                                              
------------------------- -------------------------                             
17-FEB-2016 09:29:05      08-MAY-2016 15:29:05                                  
17-FEB-2016 09:29:05      09-MAY-2016 16:29:05                                  
17-FEB-2016 09:29:05      10-MAY-2016 17:29:05                                  
17-FEB-2016 09:29:05      11-MAY-2016 18:29:05                                  
17-FEB-2016 09:29:05      12-MAY-2016 19:29:05                                  
17-FEB-2016 09:29:05      13-MAY-2016 20:29:05                                  
17-FEB-2016 09:29:05      14-MAY-2016 21:29:05                                  
17-FEB-2016 09:29:05      15-MAY-2016 22:29:05                                  
17-FEB-2016 09:29:05      16-MAY-2016 23:29:05                                  
17-FEB-2016 09:29:05      18-MAY-2016 00:29:05                                  
17-FEB-2016 09:29:05      19-MAY-2016 01:29:05                                  

START_TIME                END_TIME                                              
------------------------- -------------------------                             
17-FEB-2016 09:29:05      20-MAY-2016 02:29:05                                  
17-FEB-2016 09:29:05      21-MAY-2016 03:29:05                                  
17-FEB-2016 09:29:05      22-MAY-2016 04:29:05                                  
17-FEB-2016 09:29:05      23-MAY-2016 05:29:05                                  
17-FEB-2016 09:29:05      24-MAY-2016 06:29:05                                  
17-FEB-2016 09:29:05      25-MAY-2016 07:29:05                                  
17-FEB-2016 09:29:05      26-MAY-2016 08:29:05                                  
17-FEB-2016 09:29:05      27-MAY-2016 09:29:05                                  
17-FEB-2016 09:29:05      28-MAY-2016 10:29:05                                  
17-FEB-2016 09:29:05      29-MAY-2016 11:29:05                                  
17-FEB-2016 09:29:05      30-MAY-2016 12:29:05                                  

START_TIME                END_TIME                                              
------------------------- -------------------------                             
17-FEB-2016 09:29:05      31-MAY-2016 13:29:05                                  

100 rows selected.

SQL> 

Proper date arithmetic can be performed and the dates sort correctly since Oracle now knows they are dates.

Using the correct data type for the data being stored is critical; storing data in a format other than the one that should be used can create problems for applications and can produce incorrect values. Analytic functions can produce erroneous output when the data isn’t stored with the correct type, especially functions like lead(), lag() and sum() that partition and order data to provide ‘rolling’ results. And even when you can manipulate the data to produce correct answers, it usually takes much more work than would be necessary with the proper data type. Keep that in mind when you design tables and applications.
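
As a quick illustration of that last point, had the end times been left as strings an analytic ORDER BY would sort them as text, so LAG() would hand back whatever happens to sort just before each row rather than the chronologically previous entry. A minimal sketch (hypothetical, presuming the original VARCHAR2 version of the table were still in place):


SQL> --
SQL> -- Against the VARCHAR2 version of the table this order by is a
SQL> -- text sort, so the 'previous' end_time is frequently not the
SQL> -- previous entry in time; against the DATE version the same
SQL> -- query orders chronologically and returns the true prior entry
SQL> --
SQL> select end_time,
  2         lag(end_time) over (order by end_time) prev_end_time
  3  from onduty_leave;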

Context is everything.

January 26, 2016

Heap Of Trouble

Filed under: General — dfitzjarrell @ 07:31

"But I suppose there's a lot to see everywhere, if only you keep your eyes open."
-- Norton Juster, The Phantom Tollbooth 

Oracle Support recently updated what I call a ‘blast from the past’; it involves a subpool memory leak in Oracle 11.2.0.1 and shared cursors. Originally submitted in 2010, the issue was updated at the end of last year to reflect a status of ‘Fixed in Product Version 12.1’. There have been several bug fixes in the 11.2.0 series of releases addressing this and similar issues, but I suspect Oracle Support waited to declare it ‘fixed’ until 12.1.0.2 was finally released. Let’s look at the symptoms and show how you can investigate the shared pool subpools without generating a trace file.

This issue finally becomes apparent when an ORA-04031 error is thrown:


ORA-4031: unable to allocate ... bytes of shared memory ("shared pool","STANDARD","PLMCD^e93dbe75","BAMIMA: Bam Buffer")

The original submission included the database-generated trace file for the ORA-04031 error, which revealed the offending area of subpool 2:


Memory Utilization of Subpool 1
================================
"free memory              "      25488576
"SQLA                     "     134248904
"PCUR                     "      25094632
Memory Utilization of Subpool 2
================================
"free memory              "     127129872
"PCUR                     "    2427488432

Notice that the PCUR area in subpool 2 is almost 100 times the size of the same area in subpool 1. Unfortunately an AWR report shows only the Shared Pool Size, not a breakdown of the various subpools in the Shared Pool. Fortunately Tanel Poder has provided a tool to print out the various memory segments in the subpools called sgastatx.sql that can be used to see how much memory is allocated to each subpool by reporting on all of the memory areas of interest. Looking at an example of how the script is run:


SQL> @sgastatx %

-- All allocations:

SUBPOOL                             BYTES         MB
------------------------------ ---------- ----------
shared pool (0 - Unused):        83886080         80
shared pool (1):                184549376        176
shared pool (Total):            268435456        256

-- Allocations matching "%":

SUBPOOL                        NAME                       SUM(BYTES)         MB
------------------------------ -------------------------- ---------- ----------
shared pool (0 - Unused):      free memory                  83886080         80

shared pool (1):               free memory                  41599128      39.67
                               SQLA                         14559048      13.88
                               KGLH0                        12547880      11.97
                               row cache                     8641136       8.24
                               KGLS                          7616536       7.26
                               db_block_hash_buckets         5836800       5.57
                               KGLSG                         5267216       5.02
                               dbwriter coalesce buffer      4194816          4
                               KCB Table Scan Buffer         4194816          4
                               kglsim hash table bkts        4194304          4
...

The default behavior of the script is to report everything, so the % parameter is not necessary. Other text parameters can be passed to return memory areas of concern, in this case the KGLH0 area which maps to the PCUR data reported in the trace file. Returning only the KGLH0 allocations:


SQL> @sgastatx KGLH0

-- All allocations:

SUBPOOL                             BYTES         MB
------------------------------ ---------- ----------
shared pool (0 - Unused):        83886080         80
shared pool (1):                184549376        176
shared pool (Total):            268435456        256

-- Allocations matching "KGLH0":

SUBPOOL                        NAME                       SUM(BYTES)         MB
------------------------------ -------------------------- ---------- ----------
shared pool (1):               KGLH0                        12560024      11.98


SQL>

In the database used for these runs only one subpool is allocated besides subpool 0, which represents free memory not yet assigned to any subpool. As allocations to the various subpools are made, subpool 0 shrinks until all of the unallocated free space is consumed. This can result in ‘lopsided’ allocations to some subpools; once memory is allocated to a particular subpool it cannot be re-allocated to another. In ‘normal’ cases, when ORA-04031 errors arise, expanding the shared pool will correct them, provided dynamic shared pool growth is possible because sga_max_size is larger than sga_target. In extreme cases, such as this memory leak, the only solution is to restart the database to clear out the existing memory segments.
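
For the ‘normal’ case, checking whether that headroom exists, and using it, takes only a couple of commands; a minimal sketch (the 2G figure is purely illustrative):


SQL> --
SQL> -- Dynamic growth is only possible while sga_target is below
SQL> -- sga_max_size
SQL> --
SQL> show parameter sga_max_size
SQL> show parameter sga_target
SQL> --
SQL> -- Raise the target so the shared pool can grow without a
SQL> -- restart (illustrative value only)
SQL> --
SQL> alter system set sga_target = 2G scope=memory;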

The sgastatx.sql script can be run at any time, won’t impact performance and will report the current allocations, allowing the DBA to monitor the subpool segments so proactive changes can be made to head off ORA-04031 errors, again provided dynamic shared pool allocations are configured.
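
When Tanel’s script isn’t at hand, V$SGASTAT offers a coarser view, reporting per-component totals across the whole shared pool with no subpool breakdown, which is still enough to watch an area such as KGLH0 grow over time. A minimal sketch:


SQL> --
SQL> -- Largest shared pool components, totalled across all subpools
SQL> --
SQL> select name, round(bytes/1048576,2) mb
  2  from v$sgastat
  3  where pool = 'shared pool'
  4  order by bytes desc;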

The sgastatx.sql script can also report on the free space in all allocated subpools as well as in subpool 0:


SQL> @sgastatx "free memory"

-- All allocations:

SUBPOOL                             BYTES         MB
------------------------------ ---------- ----------
shared pool (0 - Unused):        83886080         80
shared pool (1):                184549376        176
shared pool (Total):            268435456        256

-- Allocations matching "free memory":

SUBPOOL                        NAME                       SUM(BYTES)         MB
------------------------------ -------------------------- ---------- ----------
shared pool (0 - Unused):      free memory                  83886080         80

shared pool (1):               free memory                  37168136      35.45


SQL>

When subpool 0 is exhausted it will be eliminated from the displayed output, which will indicate that Oracle can make no more adjustments to the existing subpools and either a dynamic shared pool increase needs to be executed or, failing that, the spfile needs to be changed, the database stopped and then restarted. [If an spfile is not in use then the pfile needs to be modified to reflect the memory increase, followed by a shutdown and startup of the database. For ease of administration it is recommended that an spfile, rather than a pfile, be used.]
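
For the static route the change and restart amount to only a few commands; a minimal sketch using an spfile (the 4G figure is purely illustrative):


SQL> --
SQL> -- sga_max_size cannot be changed in memory, so write it to the
SQL> -- spfile and restart (illustrative value only)
SQL> --
SQL> alter system set sga_max_size = 4G scope=spfile;
SQL> shutdown immediate
SQL> startup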

The ideal solution to this issue is either to patch to the terminal release of 11.2.0 (11.2.0.4) or to upgrade to 12.1.0.2 so the memory-related bugs can be addressed. If a third-party application is not certified for any release newer than 11.2.0.1 (which should be an extremely rare case) the next best option is to apply the relevant patches to 11.2.0.1. Oracle Support lists 8 merge patches for bug fixes related to this memory leak (18730652, 18837746, 18980005, 19015163, 19494335, 19481242, 21078557 and 21076074).

Hopefully those shops still running on the base version 11.2.0.1 are few and far between, but anything is possible. Being prepared is the best way a DBA can quickly and reliably address issues, even those needing a bit of research.

Apparently there is quite a bit to see, if you look.

January 8, 2016

“Say What?!?!?”

Filed under: General,Indexes,Performance — dfitzjarrell @ 16:04

"The only thing you can do easily is be wrong, and that's hardly worth the effort." 
-- Norton Juster, The Phantom Tollbooth

Oracle can lie to you. Not like a disreputable used-car salesman, but more like the ‘little white lie’ sometimes told in order to hide less-than-desirable parts of the truth. And it’s not Oracle, really, it’s the optimizer, and it does it by reporting query plans that may not accurately reflect the execution path. Sometimes we can make Oracle tell lies it doesn’t know it’s telling, by using features not everyone uses. Take, for example, NLS settings. There have been improvements in National Language Support in most Oracle releases, and those improvements can introduce behavior that causes Oracle to lie to you. Let’s look at an example where this is true.

Jonathan Lewis brought this example to light in a recent blog post where he displayed the plan using dbms_xplan. After reading the post I decided to add to it by using autotrace to generate the plan, just to see if the two plans matched. Let’s look at the modified example:


SQL> --
SQL> --  Setup NLS parameters
SQL> --
SQL> ALTER session SET nls_sort=binary_ci;

Session altered.

SQL> ALTER session SET nls_comp=linguistic;

Session altered.

SQL> 
SQL> --
SQL> -- Create table, populate table and
SQL> -- create index
SQL> --
SQL> CREATE TABLE log_data(
  2    account_id NUMBER,
  3    log_type NUMBER,
  4    sys_name VARCHAR2(30),
  5    log_time TIMESTAMP,
  6    msg varchar2(4000)
  7  )
  8  nologging
  9  ;

Table created.

SQL> 
SQL> insert /*+ append */ into log_data(
  2    account_id,
  3    log_type,
  4    sys_name,
  5    log_time,
  6    msg
  7  )
  8  select
  9  	     5,
 10  	     2,
 11  	     dbms_random.string('a',1),
 12  	     sysdate + dbms_random.value,
 13  	     rpad('x',200)
 14  from
 15  	     dual
 16  connect by
 17  	     level <= 1000;

1000 rows created.

SQL> 
SQL> commit;

Commit complete.

SQL> 
SQL> create index log_date on log_data(account_id, log_type, sys_name, log_time)
  2  nologging
  3  ;

Index created.

SQL> 
SQL> --
SQL> -- Gather statistics
SQL> --
SQL> begin
  2  	     dbms_stats.gather_table_stats(
  3  		     ownname	      => user,
  4  		     tabname	      =>'LOG_DATA',
  5  		     method_opt       => 'for all columns size 1'
  6  	     );
  7  end;
  8  /

PL/SQL procedure successfully completed.

SQL> 

The setup is complete so let’s see what autotrace reports as the execution plan:


SQL> 
SQL> column msg format a20
SQL> 
SQL> --
SQL> -- Use autotrace to generate the plan
SQL> --
SQL> set autotrace on
SQL> 
SQL> SELECT
  2  	     *
  3  FROM
  4    (
  5  	 SELECT
  6  	     sys_name, log_time,  substr(msg,1,40) msg
  7  	 FROM log_data
  8  	 WHERE
  9  	   account_id=5
 10  	   AND log_type=2
 11  	   AND sys_name='a'
 12  	 ORDER BY
 13  	   log_time  desc
 14    )
 15  WHERE
 16    rownum <= 10
 17  ;

SYS_NAME                       LOG_TIME                                                                    MSG                                        
------------------------------ --------------------------------------------------------------------------- --------------------                       
A                              09-JAN-16 02.42.54.000000 PM                                                x                                          
a                              09-JAN-16 02.41.02.000000 PM                                                x                                          
a                              09-JAN-16 02.40.54.000000 PM                                                x                                          
a                              09-JAN-16 02.36.38.000000 PM                                                x                                          
A                              09-JAN-16 02.36.02.000000 PM                                                x                                          
a                              09-JAN-16 02.34.15.000000 PM                                                x                                          
a                              09-JAN-16 02.31.15.000000 PM                                                x                                          
A                              09-JAN-16 02.30.59.000000 PM                                                x                                          
a                              09-JAN-16 02.30.36.000000 PM                                                x                                          
a                              09-JAN-16 02.29.53.000000 PM                                                x                                          

10 rows selected.


Execution Plan
----------------------------------------------------------                                                                                            
Plan hash value: 1444908817                                                                                                                           
                                                                                                                                                      
--------------------------------------------------------------------------------------------                                                          
| Id  | Operation                       | Name     | Rows  | Bytes | Cost (%CPU)| Time     |                                                          
--------------------------------------------------------------------------------------------                                                          
|   0 | SELECT STATEMENT                |          |    10 |  1120 |    13   (0)| 00:00:01 |                                                          
|*  1 |  COUNT STOPKEY                  |          |       |       |            |          |                                                          
|   2 |   VIEW                          |          |    11 |  1232 |    13   (0)| 00:00:01 |                                                          
|*  3 |    SORT ORDER BY STOPKEY        |          |    11 |  2453 |    13   (0)| 00:00:01 |                                                          
|   4 |     TABLE ACCESS BY INDEX ROWID | LOG_DATA |  1000 |   217K|    13   (0)| 00:00:01 |                                                          
|*  5 |      INDEX RANGE SCAN DESCENDING| LOG_DATE |    11 |       |     2   (0)| 00:00:01 |                                                          
--------------------------------------------------------------------------------------------                                                          
                                                                                                                                                      
Predicate Information (identified by operation id):                                                                                                   
---------------------------------------------------                                                                                                   
                                                                                                                                                      
   1 - filter(ROWNUM<=10)                                                                                                                             
   3 - filter(ROWNUM<=10)                                                                                                                             
   5 - access("ACCOUNT_ID"=5 AND "LOG_TYPE"=2 AND                                                                                                     
              NLSSORT("SYS_NAME",'nls_sort=''BINARY_CI''')=HEXTORAW('6100') )                                                                         


Statistics
----------------------------------------------------------                                                                                            
          8  recursive calls                                                                                                                          
          0  db block gets                                                                                                                            
        981  consistent gets                                                                                                                          
          0  physical reads                                                                                                                           
          0  redo size                                                                                                                                
        923  bytes sent via SQL*Net to client                                                                                                         
        524  bytes received via SQL*Net from client                                                                                                   
          2  SQL*Net roundtrips to/from client                                                                                                        
          1  sorts (memory)                                                                                                                           
          0  sorts (disk)                                                                                                                             
         10  rows processed                                                                                                                           

SQL> 
SQL> set autotrace off
SQL> 

SQL> 
SQL> --
SQL> -- Now use dbms_xplan to generate the plan
SQL> --
SQL> SELECT /*+ gather_plan_statistics */
  2  	     *
  3  FROM
  4    (
  5  	 SELECT
  6  	     sys_name, log_time,  substr(msg,1,40) msg
  7  	 FROM log_data
  8  	 WHERE
  9  	   account_id=5
 10  	   AND log_type=2
 11  	   AND sys_name='a'
 12  	 ORDER BY
 13  	   log_time  desc
 14    )
 15  WHERE
 16    rownum <= 10;

[ ... the same 10 rows returned as in the autotrace run above ... ]

10 rows selected.

SQL> SELECT * FROM table(dbms_xplan.display_cursor(format=>'allstats last'));

PLAN_TABLE_OUTPUT                                                                                                                                     
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  36qykj7j9tsrp, child number 0                                                                                                                 
-------------------------------------                                                                                                                 
SELECT /*+ gather_plan_statistics */         * FROM   (     SELECT                                                                                    
   sys_name, log_time,  substr(msg,1,40) msg     FROM log_data                                                                                        
WHERE       account_id=5       AND log_type=2       AND sys_name='a'                                                                                  
 ORDER BY       log_time  desc   ) WHERE   rownum <= 10                                                                                               
                                                                                                                                                      
Plan hash value: 1444908817                                                                                                                           
                                                                                                                                                      
---------------------------------------------------------------------------------------------------------------------------------                     
| Id  | Operation                       | Name     | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |                     

PLAN_TABLE_OUTPUT                                                                                                                                     
------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------                     
|   0 | SELECT STATEMENT                |          |      1 |        |     10 |00:00:00.01 |     979 |       |       |          |                     
|*  1 |  COUNT STOPKEY                  |          |      1 |        |     10 |00:00:00.01 |     979 |       |       |          |                     
|   2 |   VIEW                          |          |      1 |     11 |     10 |00:00:00.01 |     979 |       |       |          |                     
|*  3 |    SORT ORDER BY STOPKEY        |          |      1 |     11 |     10 |00:00:00.01 |     979 |  2048 |  2048 | 2048  (0)|                     
|   4 |     TABLE ACCESS BY INDEX ROWID | LOG_DATA |      1 |   1000 |    975 |00:00:00.01 |     979 |       |       |          |                     
|*  5 |      INDEX RANGE SCAN DESCENDING| LOG_DATE |      1 |     11 |    975 |00:00:00.01 |       5 |       |       |          |                     
---------------------------------------------------------------------------------------------------------------------------------                     
                                                                                                                                                      
Predicate Information (identified by operation id):                                                                                                   
---------------------------------------------------                                                                                                   

PLAN_TABLE_OUTPUT                                                                                                                                     
------------------------------------------------------------------------------------------------------------------------------------------------------
                                                                                                                                                      
   1 - filter(ROWNUM<=10)                                                                                                                             
   3 - filter(ROWNUM<=10)                                                                                                                             
   5 - access("ACCOUNT_ID"=5 AND "LOG_TYPE"=2 AND "LOG_DATA"."SYS_NC00006$"=HEXTORAW('6100') )                                                        
                                                                                                                                                      

27 rows selected.

SQL> 
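For the curious, dbms_xplan.display_cursor reads its information from V$SQL_PLAN, so the stored predicates can also be pulled directly from that view using the sql_id reported above. This is just a quick sketch (it assumes the cursor is still in the shared pool and that the account can query the V$ views):


--
-- Report the predicates stored with the cached cursor
-- (sql_id and child number are taken from the output above)
--
column access_predicates format a60
column filter_predicates format a30

select id, access_predicates, filter_predicates
  from v$sql_plan
 where sql_id = '36qykj7j9tsrp'
   and child_number = 0
 order by id;


Since this is what the cursor itself recorded at parse time, it can serve as a cross-check when two reporting tools disagree on how a predicate should be displayed.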

Notice that both methods report the same execution plan; it’s the access predicates that differ. Autotrace reports the NLSSORT() expression the index was built on, while dbms_xplan.display_cursor reports the hidden column (SYS_NC00006$) that Oracle created in the table to support it. Historically it’s been dbms_xplan.display_cursor that’s been considered the ‘truth’, but in this case it may be autotrace that is reporting the actual predicates in use.
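
That SYS_NC00006$ column isn’t a mystery; it’s the hidden virtual column Oracle generated to support the function-based (NLSSORT) index, and its definition can be confirmed from the data dictionary. A small sketch, assuming the index is the LOG_DATE index shown in the plan and the table is in the current schema:


--
-- Show the expression behind the LOG_DATE index and the
-- system-generated hidden column that supports it
--
column column_expression format a60
column data_default format a60

select index_name, column_position, column_expression
  from user_ind_expressions
 where index_name = 'LOG_DATE';

select column_name, hidden_column, data_default
  from user_tab_cols
 where table_name = 'LOG_DATA'
   and hidden_column = 'YES';


The DATA_DEFAULT of the hidden column should come back as the same NLSSORT() expression autotrace printed, which is why the two predicate displays are really two views of the same access path.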

And that’s the truth.
