Oracle Tips and Tricks — David Fitzjarrell

March 21, 2016

Repeat After Me

Filed under: General — dfitzjarrell @ 08:44

“every time you decide something without having a good reason, you jump to Conclusions whether you like it or not.” 
-- Norton Juster, The Phantom Tollbooth

In an Oracle forum recently the following question was posted:


I have a request for 3 indices as shown below. Does the 1st index suffice for 2 and 3?  Do I need all 3?
 
    CREATE INDEX IDX_ATM_EM_EFF_VER_CURRENT_DEL ON ATM_xxx_SALARY
    (EMPLOYEE_KEY, EFFECTIVE_DT, SALARY_VERSION_NUMBER, IS_CURRENT, IS_DELETED);

    CREATE INDEX IDX_ATM_EM_VER_CURRENT ON ATM_xxx_SALARY
    (EMPLOYEE_KEY, SALARY_VERSION_NUMBER, IS_CURRENT);

    CREATE INDEX IDX_ATM_SAL_CURRENT_DEL ON ATM_xxx_SALARY
    (EMPLOYEE_KEY, IS_DELETED, IS_CURRENT);

Answering such a question in the absence of any additional information isn’t easy, but looking at the columns it’s not unrealistic to make the following presumptions about the data:

        EMPLOYEE_KEY must be not null and unique
        IS_CURRENT is either a YES or NO value
        IS_DELETED is either a YES or NO value    
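
Were those presumptions worth enforcing, check constraints along the following lines would do it. This is only a sketch; the constraint names are invented here, and the real ATM_xxx_SALARY table (and whatever constraints it already carries) wasn’t posted:

    -- Hypothetical constraints encoding the YES/NO presumptions above
    ALTER TABLE ATM_xxx_SALARY
      ADD CONSTRAINT ATM_SAL_CURRENT_CK CHECK (IS_CURRENT IN ('YES','NO'));

    ALTER TABLE ATM_xxx_SALARY
      ADD CONSTRAINT ATM_SAL_DELETED_CK CHECK (IS_DELETED IN ('YES','NO'));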

The following example, using queries tailored to the indexes to be created, was run in Oracle 11.2.0.4; notice that the first index is used in all three queries:


SQL> --
SQL> -- Create the table
SQL> --
SQL> create table atm_xxx_salary(
  2  employee_key            varchar2(20) not null,
  3  emp_fname               varchar2(20),
  4  emp_lname               varchar2(40),
  5  effective_dt            date,
  6  salary_version_number      number,
  7  is_current              varchar2(3),
  8  is_deleted              varchar2(3));
      
Table created.

SQL>
SQL> --
SQL> -- Load data
SQL> --
SQL> begin
  2          for i in 1..100 loop
  3                  insert into atm_xxx_salary
  4                  values('Employee_'||i, 'Blobbo', 'Fingnestle', sysdate + mod(i, 9), mod(i,11)+1, 'YES','NO');
  5  End loop;
  6          commit;
  7  end;
  8  /

PL/SQL procedure successfully completed.

SQL>
SQL> --
SQL> -- Create first index
SQL> --
SQL> CREATE INDEX IDX_ATM_EM_EFF_VER_CURRENT_DEL ON ATM_xxx_SALARY
  2  (EMPLOYEE_KEY, EFFECTIVE_DT, SALARY_VERSION_NUMBER, IS_CURRENT, IS_DELETED);
         
Index created.
         
SQL>
SQL> exec dbms_stats.gather_schema_stats(user);
      
PL/SQL procedure successfully completed.
    
SQL>
SQL> set autotrace on linesize 150
SQL>
SQL> select *
  2  from atm_xxx_salary
  3  where employee_key like 'Emp%10%'
  4  and effective_dt between sysdate and sysdate+4
  5  and salary_version_number = 7
  6  and is_current = 'YES';

no rows selected

   
       
Execution Plan
----------------------------------------------------------
Plan hash value: 2729507590

---------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name                           | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                                |     6 |   288 |     2   (0)| 00:00:01 |
|*  1 |  FILTER                      |                                |       |       |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID| ATM_XXX_SALARY                 |     6 |   288 |     2   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN          | IDX_ATM_EM_EFF_VER_CURRENT_DEL |     6 |       |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------------------
       
Predicate Information (identified by operation id):
---------------------------------------------------
     
   1 - filter(SYSDATE@!+4>=SYSDATE@!)
   3 - access("EMPLOYEE_KEY" LIKE 'Emp%10%' AND "EFFECTIVE_DT">=SYSDATE@! AND
              "SALARY_VERSION_NUMBER"=7 AND "IS_CURRENT"='YES' AND "EFFECTIVE_DT"<=SYSDATE@!+4)
       filter("SALARY_VERSION_NUMBER"=7 AND "EFFECTIVE_DT"=SYSDATE@! AND "IS_CURRENT"='YES')

SQL>
SQL> select *
  2  from atm_xxx_salary
  3  where employee_key like 'Emp%10%'
  4  and is_deleted = 'NO'
  5  and is_current = 'YES';
        
EMPLOYEE_KEY         EMP_FNAME            EMP_LNAME                                EFFECTIVE SALARY_VERSION_NUMBER IS_ IS_
-------------------- -------------------- ---------------------------------------- --------- --------------------- --- ---
Employee_10          Blobbo               Fingnestle                               22-MAR-16                    11 YES NO
Employee_100         Blobbo               Fingnestle                               22-MAR-16                     2 YES NO


Execution Plan
----------------------------------------------------------
Plan hash value: 1328304448
--------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name                           | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                                |    99 |  4752 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID |  ATM_XXX_SALARY                |    99 |  4752 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN           | IDX_ATM_EM_EFF_VER_CURRENT_DEL |    99 |       |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------------------
        

Predicate Information (identified by operation id):
---------------------------------------------------
         
   2 - access("EMPLOYEE_KEY" LIKE 'Emp%10%' AND "IS_CURRENT"='YES' AND "IS_DELETED"='NO')
       filter("EMPLOYEE_KEY" LIKE 'Emp%10%' AND "IS_DELETED"='NO' AND "IS_CURRENT"='YES')
        
SQL>
SQL> select *
  2  from atm_xxx_salary
  3  where employee_key like 'Emp%10%'
  4  and is_deleted = 'NO'
  5  and is_current = 'YES';
       
EMPLOYEE_KEY         EMP_FNAME            EMP_LNAME                                EFFECTIVE SALARY_VERSION_NUMBER IS_ IS_
-------------------- -------------------- ---------------------------------------- --------- --------------------- --- ---
Employee_10          Blobbo               Fingnestle                               22-MAR-16                    11 YES NO
Employee_100         Blobbo               Fingnestle                               22-MAR-16                     2 YES NO


Execution Plan
----------------------------------------------------------
Plan hash value: 1328304448
--------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name                           | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                                |    99 |  4752 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID | ATM_XXX_SALARY                 |    99 |  4752 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN           | IDX_ATM_EM_EFF_VER_CURRENT_DEL |    99 |       |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------------------
         
Predicate Information (identified by operation id):
---------------------------------------------------
        
   2 - access("EMPLOYEE_KEY" LIKE 'Emp%10%' AND "IS_CURRENT"='YES' AND "IS_DELETED"='NO')
       filter("EMPLOYEE_KEY" LIKE 'Emp%10%' AND "IS_DELETED"='NO' AND "IS_CURRENT"='YES')

        
         
SQL>
SQL> set autotrace off

The above appears to answer the question and prove the remaining two indexes are unnecessary, but it can’t hurt to test that theory by creating the remaining indexes and running the same set of queries after each index creation. Creating the second index apparently does no good, as the optimizer ignores it in favor of the first:

       
SQL> --
SQL> -- Create second index
SQL> --
SQL> CREATE INDEX IDX_ATM_EM_VER_CURRENT ON ATM_xxx_SALARY
  2  (EMPLOYEE_KEY, SALARY_VERSION_NUMBER, IS_CURRENT);
         
Index created.
         
SQL>
SQL>
SQL> set autotrace on
SQL>
SQL> select *
  2  from atm_xxx_salary
  3  where employee_key like 'Emp%10%'
  4  and effective_dt between sysdate and sysdate+4
  5  and salary_version_number = 7
  6  and is_current = 'YES';
         
no rows selected
        
         
Execution Plan
----------------------------------------------------------
Plan hash value: 2729507590
---------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name                           | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |                                |     6 |   288 |     2   (0)| 00:00:01 |
|*  1 |  FILTER                       |                                |       |       |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID | ATM_XXX_SALARY                 |     6 |   288 |     2   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN           | IDX_ATM_EM_EFF_VER_CURRENT_DEL |     6 |       |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------------------
         
Predicate Information (identified by operation id):
---------------------------------------------------
         
   1 - filter(SYSDATE@!+4>=SYSDATE@!)
   3 - access("EMPLOYEE_KEY" LIKE 'Emp%10%' AND "EFFECTIVE_DT">=SYSDATE@! AND
              "SALARY_VERSION_NUMBER"=7 AND "IS_CURRENT"='YES' AND "EFFECTIVE_DT"<=SYSDATE@!+4)
       filter("SALARY_VERSION_NUMBER"=7 AND "EFFECTIVE_DT"=SYSDATE@! AND "IS_CURRENT"='YES')
         
SQL>
SQL> select *
  2  from atm_xxx_salary
  3  where employee_key like 'Emp%10%'
  4  and is_deleted = 'NO'
  5  and is_current = 'YES';
      
EMPLOYEE_KEY         EMP_FNAME            EMP_LNAME                                EFFECTIVE SALARY_VERSION_NUMBER IS_ IS_
-------------------- -------------------- ---------------------------------------- --------- --------------------- --- ---
Employee_10          Blobbo               Fingnestle                               22-MAR-16                    11 YES NO
Employee_100         Blobbo               Fingnestle                               22-MAR-16                     2 YES NO
         
         
Execution Plan
----------------------------------------------------------
Plan hash value: 1328304448
--------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name                           | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                                |    99 |  4752 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID | ATM_XXX_SALARY                 |    99 |  4752 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN           | IDX_ATM_EM_EFF_VER_CURRENT_DEL |    99 |       |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------------------
         
Predicate Information (identified by operation id):
---------------------------------------------------
         
   2 - access("EMPLOYEE_KEY" LIKE 'Emp%10%' AND "IS_CURRENT"='YES' AND "IS_DELETED"='NO')
       filter("EMPLOYEE_KEY" LIKE 'Emp%10%' AND "IS_DELETED"='NO' AND "IS_CURRENT"='YES')
         
SQL>
SQL> select *
  2  from atm_xxx_salary
  3  where employee_key like 'Emp%10%'
  4  and is_deleted = 'NO'
  5  and is_current = 'YES';
      
EMPLOYEE_KEY         EMP_FNAME            EMP_LNAME                                EFFECTIVE SALARY_VERSION_NUMBER IS_ IS_
-------------------- -------------------- ---------------------------------------- --------- --------------------- --- ---
Employee_10          Blobbo               Fingnestle                               22-MAR-16                    11 YES NO
Employee_100         Blobbo               Fingnestle                               22-MAR-16                     2 YES NO
         
         
Execution Plan
----------------------------------------------------------
Plan hash value: 1328304448
         
--------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name                           | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                                |    99 |  4752 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID | ATM_XXX_SALARY                 |    99 |  4752 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN           | IDX_ATM_EM_EFF_VER_CURRENT_DEL |    99 |       |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------------------
         
Predicate Information (identified by operation id):
---------------------------------------------------
         
   2 - access("EMPLOYEE_KEY" LIKE 'Emp%10%' AND "IS_CURRENT"='YES' AND "IS_DELETED"='NO')
       filter("EMPLOYEE_KEY" LIKE 'Emp%10%' AND "IS_DELETED"='NO' AND "IS_CURRENT"='YES')
         
SQL>
SQL> set autotrace off

Let’s create the third index and see what Oracle does:

       
SQL> --
SQL> -- Create third index
SQL> --
SQL> CREATE INDEX IDX_ATM_SAL_CURRENT_DEL ON ATM_xxx_SALARY
  2  (EMPLOYEE_KEY, IS_DELETED, IS_CURRENT);
         
Index created.
         
SQL>
SQL> set autotrace on
SQL>
SQL> select *
  2  from atm_xxx_salary
  3  where employee_key like 'Emp%10%'
  4  and effective_dt between sysdate and sysdate+4
  5  and salary_version_number = 7
  6  and is_current = 'YES';
         
no rows selected
         
         
Execution Plan
----------------------------------------------------------
Plan hash value: 2729507590
         
---------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name                           | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |                                |     6 |   288 |     2   (0)| 00:00:01 |
|*  1 |  FILTER                       |                                |       |       |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID | ATM_XXX_SALARY                 |     6 |   288 |     2   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN           | IDX_ATM_EM_EFF_VER_CURRENT_DEL |     6 |       |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------------------
         
Predicate Information (identified by operation id):
---------------------------------------------------
         
   1 - filter(SYSDATE@!+4>=SYSDATE@!)
   3 - access("EMPLOYEE_KEY" LIKE 'Emp%10%' AND "EFFECTIVE_DT">=SYSDATE@! AND
              "SALARY_VERSION_NUMBER"=7 AND "IS_CURRENT"='YES' AND "EFFECTIVE_DT"<=SYSDATE@!+4)
       filter("SALARY_VERSION_NUMBER"=7 AND "EFFECTIVE_DT"=SYSDATE@! AND "IS_CURRENT"='YES')
         
SQL>
SQL> select *
  2  from atm_xxx_salary
  3  where employee_key like 'Emp%10%'
  4  and is_deleted = 'NO'
  5  and is_current = 'YES';
         
EMPLOYEE_KEY         EMP_FNAME            EMP_LNAME                                EFFECTIVE SALARY_VERSION_NUMBER IS_ IS_
-------------------- -------------------- ---------------------------------------- --------- --------------------- --- ---
Employee_10          Blobbo               Fingnestle                               22-MAR-16                    11 YES NO
Employee_100         Blobbo               Fingnestle                               22-MAR-16                     2 YES NO
         
         
Execution Plan
---------------------------------------------------------
Plan hash value: 1562453262
         
-------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name                    | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                         |    99 |  4752 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID | ATM_XXX_SALARY          |    99 |  4752 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN           | IDX_ATM_SAL_CURRENT_DEL |    99 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------
         
Predicate Information (identified by operation id):
---------------------------------------------------
         
    2 - access("EMPLOYEE_KEY" LIKE 'Emp%10%' AND "IS_DELETED"='NO' AND "IS_CURRENT"='YES')
        filter("EMPLOYEE_KEY" LIKE 'Emp%10%' AND "IS_DELETED"='NO' AND "IS_CURRENT"='YES')
         
SQL>
SQL> select *
  2  from atm_xxx_salary
  3  where employee_key like 'Emp%10%'
  4  and is_deleted = 'NO'
  5  and is_current = 'YES';
         
EMPLOYEE_KEY         EMP_FNAME            EMP_LNAME                                EFFECTIVE SALARY_VERSION_NUMBER IS_ IS_
-------------------- -------------------- ---------------------------------------- --------- --------------------- --- ---
Employee_10          Blobbo               Fingnestle                               22-MAR-16                    11 YES NO
Employee_100         Blobbo               Fingnestle                               22-MAR-16                     2 YES NO
         
         
Execution Plan
----------------------------------------------------------
Plan hash value: 1562453262
         
-------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name                    | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                         |    99 |  4752 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID | ATM_XXX_SALARY          |    99 |  4752 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN           | IDX_ATM_SAL_CURRENT_DEL |    99 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------
         
Predicate Information (identified by operation id):
---------------------------------------------------
         
   2 - access("EMPLOYEE_KEY" LIKE 'Emp%10%' AND "IS_DELETED"='NO' AND "IS_CURRENT"='YES')
       filter("EMPLOYEE_KEY" LIKE 'Emp%10%' AND "IS_DELETED"='NO' AND "IS_CURRENT"='YES')

SQL>
SQL>
SQL> set autotrace off

Notice that the first and third indexes are used, with the third index chosen for the second and third queries. Remember that the first index was used for all three queries, so given the presumptions made about the nature of the data the second and third indexes appear not to be needed. Of course this is not rigorous testing, and other queries and/or data distributions could produce differing results. You really need to test, test, test, test and test again before making a final decision. But the indications are good that a single index would suffice to provide “speedy” data retrieval.
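
One way to supplement that testing is Oracle’s own index usage monitoring; the sketch below, written against the demo schema, flags the two questionable indexes, and after a representative workload has run the V$OBJECT_USAGE view (available in 11.2) reports whether each one was ever touched:

    -- Flag the candidate indexes for usage monitoring
    ALTER INDEX IDX_ATM_EM_VER_CURRENT MONITORING USAGE;
    ALTER INDEX IDX_ATM_SAL_CURRENT_DEL MONITORING USAGE;

    -- ... run the representative workload here ...

    -- USED = 'YES' means the index was touched at least once
    SELECT index_name, used, monitoring
    FROM   v$object_usage
    WHERE  index_name LIKE 'IDX_ATM%';

Bear in mind that V$OBJECT_USAGE records only whether an index was used at all, not how often, so it supplements rather than replaces the plan-by-plan testing shown above.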

Please don’t make me repeat myself.

February 17, 2016

“That Ain’t Right”

Filed under: General — dfitzjarrell @ 09:46

"Is everyone who lives in Ignorance like you?" asked Milo.
"Much worse," he said longingly. "But I don't live here. I'm from a place very far away called Context." 
-- Norton Juster, The Phantom Tollbooth 

Dates are, well, dates. They are not strings, they are not numbers; they are dates, and to handle them properly the correct data type needs to be used. I can’t count how many times I’ve seen dates stored as VARCHAR2 strings or as numbers, neither of which treats them properly or orders them correctly. Recently in a forum I frequent this question was asked:


Version Oracle 11 g R2


Hello every body
 

First of All i know the Format is wrong
but as it is a old project i need to extract the report as it is

there is a table with two fields and data as follow

create table Onduty_leave
(
Start_time varchar2(100),
end_time  varchar2(100)
);

insert into onduty_leave (start_time,end_time) values('09.00 am','06.00 pm');

Minutes i am handling differently

Please just tell me how to calculate hours between 09 am to 06 pm 

Yes, this person realizes that the data type is incorrect for the data being stored, but the fact remains that someone thought this was a good idea and implemented it in a production system. Of course it’s possible to provide the results this person is after, but it would be so much easier had the correct data type been used. Be that as it may, let’s proceed with the current table definition and show what can be done to calculate the desired quantity. Let’s build and populate the table with some simple code:


SQL> 
SQL> --
SQL> -- Create the table with the improper data type
SQL> --
SQL> create table Onduty_leave
  2  (
  3  Start_time varchar2(100),
  4  end_time  varchar2(100)
  5  );

Table created.

SQL> 
SQL> --
SQL> -- Populate the table
SQL> --
SQL> begin
  2  	     for i in 1..10 loop
  3  		     insert into onduty_leave
  4  		     values(sysdate-i, sysdate+i);
  5  	     end loop;
  6  
  7  	     commit;
  8  end;
  9  /

PL/SQL procedure successfully completed.

SQL> 
SQL> --
SQL> -- Set the display format to see the date and time
SQL> --
SQL> alter session set nls_date_format = 'DD-MON-RRRR HH24:MI:SS';

Session altered.

SQL> 
SQL> --
SQL> -- Format the strings for display
SQL> --
SQL> column start_time format a25
SQL> column end_time format a25
SQL> 

We are now ready to compute the hours between the two provided dates in each record. Since these are VARCHAR2 strings it will be necessary to use TO_CHAR and TO_DATE to get proper results.


SQL> --
SQL> -- 24 hours per day, find days between and multiply
SQL> -- This uses the entire date string to provide correct
SQL> -- results
SQL> --
SQL> 
SQL> select (to_date(end_time) - to_date(start_time))*24 hours_between
  2  from onduty_leave;

HOURS_BETWEEN                                                                   
-------------                                                                   
           48                                                                   
           96                                                                   
          144                                                                   
          192                                                                   
          240                                                                   
          288                                                                   
          336                                                                   
          384                                                                   
          432                                                                   
          480                                                                   

10 rows selected.

SQL> 

It isn’t rocket science but it is a bit more work than we should be doing to get these answers. The data in the table was generated simply; let’s generate some new data that may more accurately reflect real-world entries into this table:


SQL> --
SQL> -- Purge all current data
SQL> --
SQL> truncate table onduty_leave;

Table truncated.

SQL> 
SQL> --
SQL> -- Populate table with more realistic records
SQL> --
SQL> begin
  2  	     for i in 1..10 loop
  3  		     insert into onduty_leave
  4  		     values(sysdate, sysdate+((25*i)/24));
  5  	     end loop;
  6  
  7  	     commit;
  8  end;
  9  /

PL/SQL procedure successfully completed.

SQL> 

Using just the time from each entry let’s compute the new hours difference between them:


SQL> --
SQL> -- Hours between entries
SQL> -- Use only the time portion of the date
SQL> --
SQL> -- This produces wrong results since the date
SQL> -- isn't included in the calculation
SQL> --
SQL> select start_time, end_time,
  2  	    (to_date(to_char(to_date(end_time), 'HH24:MI:SS'), 'HH24:MI:SS') - to_date(to_char(to_date(start_time), 'HH24:MI:SS'), 'HH24:MI:SS'))*24 hours_between
  3  from onduty_leave;

START_TIME                END_TIME                  HOURS_BETWEEN               
------------------------- ------------------------- -------------               
17-FEB-2016 09:12:28      18-FEB-2016 10:12:28                  1               
17-FEB-2016 09:12:28      19-FEB-2016 11:12:28                  2               
17-FEB-2016 09:12:28      20-FEB-2016 12:12:28                  3               
17-FEB-2016 09:12:28      21-FEB-2016 13:12:28                  4               
17-FEB-2016 09:12:28      22-FEB-2016 14:12:28                  5               
17-FEB-2016 09:12:28      23-FEB-2016 15:12:28                  6               
17-FEB-2016 09:12:28      24-FEB-2016 16:12:28                  7               
17-FEB-2016 09:12:28      25-FEB-2016 17:12:28                  8               
17-FEB-2016 09:12:28      26-FEB-2016 18:12:28                  9               
17-FEB-2016 09:12:28      27-FEB-2016 19:12:28                 10               

10 rows selected.

SQL> 

The initial request was flawed by assuming each record would reference a single day; even though this may be true for the actual data, the solution cannot rely on such assumptions. Obviously the results from that last query were wrong; let’s use the full date string and see what values we get:


SQL> --
SQL> -- 24 hours per day, find days between and multiply
SQL> --
SQL> -- Use full date string to compute the hours between
SQL> -- entries
SQL> --
SQL> 
SQL> select start_time, end_time,
  2  	    (to_date(end_time) - to_date(start_time))*24 hours_between
  3  from onduty_leave;

START_TIME                END_TIME                  HOURS_BETWEEN               
------------------------- ------------------------- -------------               
17-FEB-2016 09:12:28      18-FEB-2016 10:12:28                 25               
17-FEB-2016 09:12:28      19-FEB-2016 11:12:28                 50               
17-FEB-2016 09:12:28      20-FEB-2016 12:12:28                 75               
17-FEB-2016 09:12:28      21-FEB-2016 13:12:28                100               
17-FEB-2016 09:12:28      22-FEB-2016 14:12:28                125               
17-FEB-2016 09:12:28      23-FEB-2016 15:12:28                150               
17-FEB-2016 09:12:28      24-FEB-2016 16:12:28                175               
17-FEB-2016 09:12:28      25-FEB-2016 17:12:28                200               
17-FEB-2016 09:12:28      26-FEB-2016 18:12:28                225               
17-FEB-2016 09:12:28      27-FEB-2016 19:12:28                250               

10 rows selected.

SQL> 

This looks much better, but it’s still based on date values stored as strings. With the format specified we can successfully order the data for a single month, but when we add data for other months and sort it, ‘strange’ things can happen:


SQL> --
SQL> -- Populate table with more records
SQL> --
SQL> begin
  2  	     for i in 11..100 loop
  3  		     insert into onduty_leave
  4  		     values(sysdate, sysdate+((25*i)/24));
  5  	     end loop;
  6  
  7  	     commit;
  8  end;
  9  /

PL/SQL procedure successfully completed.

SQL> 
SQL> --
SQL> -- Sort the data
SQL> --
SQL> select *
  2  from onduty_leave
  3  order by 2;

START_TIME                END_TIME                                              
------------------------- -------------------------                             
17-FEB-2016 09:29:04      01-APR-2016 03:29:04                                  
17-FEB-2016 09:29:04      01-MAR-2016 22:29:04                                  
17-FEB-2016 09:29:04      01-MAY-2016 08:29:04                                  
17-FEB-2016 09:29:04      02-APR-2016 04:29:04                                  
17-FEB-2016 09:29:04      02-MAR-2016 23:29:04                                  
17-FEB-2016 09:29:04      02-MAY-2016 09:29:04                                  
17-FEB-2016 09:29:04      03-APR-2016 05:29:04                                  
17-FEB-2016 09:29:04      03-MAY-2016 10:29:04                                  
17-FEB-2016 09:29:04      04-APR-2016 06:29:04                                  
17-FEB-2016 09:29:04      04-MAR-2016 00:29:04                                  
17-FEB-2016 09:29:04      04-MAY-2016 11:29:04                                  
17-FEB-2016 09:29:04      05-APR-2016 07:29:04                                  
17-FEB-2016 09:29:04      05-MAR-2016 01:29:04                                  
17-FEB-2016 09:29:04      05-MAY-2016 12:29:04                                  
17-FEB-2016 09:29:04      06-APR-2016 08:29:04                                  
17-FEB-2016 09:29:04      06-MAR-2016 02:29:04                                  
17-FEB-2016 09:29:04      06-MAY-2016 13:29:04                                  
17-FEB-2016 09:29:04      07-APR-2016 09:29:04                                  
17-FEB-2016 09:29:04      07-MAR-2016 03:29:04                                  
17-FEB-2016 09:29:04      07-MAY-2016 14:29:04                                  
17-FEB-2016 09:29:04      08-APR-2016 10:29:04                                  
17-FEB-2016 09:29:04      08-MAR-2016 04:29:04                                  
...

100 rows selected.

SQL> 

Oracle doesn’t know these are dates, so they are sorted as ASCII text, making the order anything but logical as far as dates go. Let’s drop the original table, recreate it with the proper DATE type and try this again:


SQL> --
SQL> -- Drop the original table
SQL> --
SQL> drop table onduty_leave purge;

Table dropped.

SQL> 
SQL> --
SQL> -- Create the table with the proper data type
SQL> --
SQL> create table Onduty_leave
  2  (
  3  Start_time date,
  4  end_time  date
  5  );

Table created.

SQL> 
SQL> --
SQL> -- Populate the table
SQL> --
SQL> begin
  2  	     for i in 1..10 loop
  3  		     insert into onduty_leave
  4  		     values(sysdate, sysdate+((25*i)/24));
  5  	     end loop;
  6  
  7  	     commit;
  8  end;
  9  /

PL/SQL procedure successfully completed.

SQL> 
SQL> --
SQL> -- Compute the hours between entries for a given record
SQL> --
SQL> select (end_time - start_time)*24 hours_between
  2  from onduty_leave;

HOURS_BETWEEN                                                                   
-------------                                                                   
           25                                                                   
           50                                                                   
           75                                                                   
          100                                                                   
          125                                                                   
          150                                                                   
          175                                                                   
          200                                                                   
          225                                                                   
          250                                                                   

10 rows selected.

SQL> 
SQL> --
SQL> -- Populate table with more records
SQL> --
SQL> begin
  2  	     for i in 11..100 loop
  3  		     insert into onduty_leave
  4  		     values(sysdate, sysdate+((25*i)/24));
  5  	     end loop;
  6  
  7  	     commit;
  8  end;
  9  /

PL/SQL procedure successfully completed.

SQL> 
SQL> --
SQL> -- Sort the data
SQL> --
SQL> select *
  2  from onduty_leave
  3  order by 2;

START_TIME                END_TIME                                              
------------------------- -------------------------                             
17-FEB-2016 09:29:05      18-FEB-2016 10:29:05                                  
17-FEB-2016 09:29:05      19-FEB-2016 11:29:05                                  
17-FEB-2016 09:29:05      20-FEB-2016 12:29:05                                  
17-FEB-2016 09:29:05      21-FEB-2016 13:29:05                                  
17-FEB-2016 09:29:05      22-FEB-2016 14:29:05                                  
17-FEB-2016 09:29:05      23-FEB-2016 15:29:05                                  
17-FEB-2016 09:29:05      24-FEB-2016 16:29:05                                  
17-FEB-2016 09:29:05      25-FEB-2016 17:29:05                                  
17-FEB-2016 09:29:05      26-FEB-2016 18:29:05                                  
17-FEB-2016 09:29:05      27-FEB-2016 19:29:05                                  
17-FEB-2016 09:29:05      28-FEB-2016 20:29:05                                  

START_TIME                END_TIME                                              
------------------------- -------------------------                             
17-FEB-2016 09:29:05      29-FEB-2016 21:29:05                                  
17-FEB-2016 09:29:05      01-MAR-2016 22:29:05                                  
17-FEB-2016 09:29:05      02-MAR-2016 23:29:05                                  
17-FEB-2016 09:29:05      04-MAR-2016 00:29:05                                  
17-FEB-2016 09:29:05      05-MAR-2016 01:29:05                                  
17-FEB-2016 09:29:05      06-MAR-2016 02:29:05                                  
17-FEB-2016 09:29:05      07-MAR-2016 03:29:05                                  
17-FEB-2016 09:29:05      08-MAR-2016 04:29:05                                  
17-FEB-2016 09:29:05      09-MAR-2016 05:29:05                                  
17-FEB-2016 09:29:05      10-MAR-2016 06:29:05                                  
17-FEB-2016 09:29:05      11-MAR-2016 07:29:05                                  

START_TIME                END_TIME                                              
------------------------- -------------------------                             
17-FEB-2016 09:29:05      12-MAR-2016 08:29:05                                  
17-FEB-2016 09:29:05      13-MAR-2016 09:29:05                                  
17-FEB-2016 09:29:05      14-MAR-2016 10:29:05                                  
17-FEB-2016 09:29:05      15-MAR-2016 11:29:05                                  
17-FEB-2016 09:29:05      16-MAR-2016 12:29:05                                  
17-FEB-2016 09:29:05      17-MAR-2016 13:29:05                                  
17-FEB-2016 09:29:05      18-MAR-2016 14:29:05                                  
17-FEB-2016 09:29:05      19-MAR-2016 15:29:05                                  
17-FEB-2016 09:29:05      20-MAR-2016 16:29:05                                  
17-FEB-2016 09:29:05      21-MAR-2016 17:29:05                                  
17-FEB-2016 09:29:05      22-MAR-2016 18:29:05                                  

START_TIME                END_TIME                                              
------------------------- -------------------------                             
17-FEB-2016 09:29:05      23-MAR-2016 19:29:05                                  
17-FEB-2016 09:29:05      24-MAR-2016 20:29:05                                  
17-FEB-2016 09:29:05      25-MAR-2016 21:29:05                                  
17-FEB-2016 09:29:05      26-MAR-2016 22:29:05                                  
17-FEB-2016 09:29:05      27-MAR-2016 23:29:05                                  
17-FEB-2016 09:29:05      29-MAR-2016 00:29:05                                  
17-FEB-2016 09:29:05      30-MAR-2016 01:29:05                                  
17-FEB-2016 09:29:05      31-MAR-2016 02:29:05                                  
17-FEB-2016 09:29:05      01-APR-2016 03:29:05                                  
17-FEB-2016 09:29:05      02-APR-2016 04:29:05                                  
17-FEB-2016 09:29:05      03-APR-2016 05:29:05                                  

START_TIME                END_TIME                                              
------------------------- -------------------------                             
17-FEB-2016 09:29:05      04-APR-2016 06:29:05                                  
17-FEB-2016 09:29:05      05-APR-2016 07:29:05                                  
17-FEB-2016 09:29:05      06-APR-2016 08:29:05                                  
17-FEB-2016 09:29:05      07-APR-2016 09:29:05                                  
17-FEB-2016 09:29:05      08-APR-2016 10:29:05                                  
17-FEB-2016 09:29:05      09-APR-2016 11:29:05                                  
17-FEB-2016 09:29:05      10-APR-2016 12:29:05                                  
17-FEB-2016 09:29:05      11-APR-2016 13:29:05                                  
17-FEB-2016 09:29:05      12-APR-2016 14:29:05                                  
17-FEB-2016 09:29:05      13-APR-2016 15:29:05                                  
17-FEB-2016 09:29:05      14-APR-2016 16:29:05                                  

START_TIME                END_TIME                                              
------------------------- -------------------------                             
17-FEB-2016 09:29:05      15-APR-2016 17:29:05                                  
17-FEB-2016 09:29:05      16-APR-2016 18:29:05                                  
17-FEB-2016 09:29:05      17-APR-2016 19:29:05                                  
17-FEB-2016 09:29:05      18-APR-2016 20:29:05                                  
17-FEB-2016 09:29:05      19-APR-2016 21:29:05                                  
17-FEB-2016 09:29:05      20-APR-2016 22:29:05                                  
17-FEB-2016 09:29:05      21-APR-2016 23:29:05                                  
17-FEB-2016 09:29:05      23-APR-2016 00:29:05                                  
17-FEB-2016 09:29:05      24-APR-2016 01:29:05                                  
17-FEB-2016 09:29:05      25-APR-2016 02:29:05                                  
17-FEB-2016 09:29:05      26-APR-2016 03:29:05                                  

START_TIME                END_TIME                                              
------------------------- -------------------------                             
17-FEB-2016 09:29:05      27-APR-2016 04:29:05                                  
17-FEB-2016 09:29:05      28-APR-2016 05:29:05                                  
17-FEB-2016 09:29:05      29-APR-2016 06:29:05                                  
17-FEB-2016 09:29:05      30-APR-2016 07:29:05                                  
17-FEB-2016 09:29:05      01-MAY-2016 08:29:05                                  
17-FEB-2016 09:29:05      02-MAY-2016 09:29:05                                  
17-FEB-2016 09:29:05      03-MAY-2016 10:29:05                                  
17-FEB-2016 09:29:05      04-MAY-2016 11:29:05                                  
17-FEB-2016 09:29:05      05-MAY-2016 12:29:05                                  
17-FEB-2016 09:29:05      06-MAY-2016 13:29:05                                  
17-FEB-2016 09:29:05      07-MAY-2016 14:29:05                                  

START_TIME                END_TIME                                              
------------------------- -------------------------                             
17-FEB-2016 09:29:05      08-MAY-2016 15:29:05                                  
17-FEB-2016 09:29:05      09-MAY-2016 16:29:05                                  
17-FEB-2016 09:29:05      10-MAY-2016 17:29:05                                  
17-FEB-2016 09:29:05      11-MAY-2016 18:29:05                                  
17-FEB-2016 09:29:05      12-MAY-2016 19:29:05                                  
17-FEB-2016 09:29:05      13-MAY-2016 20:29:05                                  
17-FEB-2016 09:29:05      14-MAY-2016 21:29:05                                  
17-FEB-2016 09:29:05      15-MAY-2016 22:29:05                                  
17-FEB-2016 09:29:05      16-MAY-2016 23:29:05                                  
17-FEB-2016 09:29:05      18-MAY-2016 00:29:05                                  
17-FEB-2016 09:29:05      19-MAY-2016 01:29:05                                  

START_TIME                END_TIME                                              
------------------------- -------------------------                             
17-FEB-2016 09:29:05      20-MAY-2016 02:29:05                                  
17-FEB-2016 09:29:05      21-MAY-2016 03:29:05                                  
17-FEB-2016 09:29:05      22-MAY-2016 04:29:05                                  
17-FEB-2016 09:29:05      23-MAY-2016 05:29:05                                  
17-FEB-2016 09:29:05      24-MAY-2016 06:29:05                                  
17-FEB-2016 09:29:05      25-MAY-2016 07:29:05                                  
17-FEB-2016 09:29:05      26-MAY-2016 08:29:05                                  
17-FEB-2016 09:29:05      27-MAY-2016 09:29:05                                  
17-FEB-2016 09:29:05      28-MAY-2016 10:29:05                                  
17-FEB-2016 09:29:05      29-MAY-2016 11:29:05                                  
17-FEB-2016 09:29:05      30-MAY-2016 12:29:05                                  

START_TIME                END_TIME                                              
------------------------- -------------------------                             
17-FEB-2016 09:29:05      31-MAY-2016 13:29:05                                  

100 rows selected.

SQL> 

Proper date arithmetic can be performed and the dates sort correctly since Oracle now knows they are dates.

Using the correct data type for the data being stored is critical; storing data in a format other than the proper one can create problems for applications and can result in incorrect values. Analytic functions can produce erroneous output when data isn’t stored using the correct type, especially functions like lead(), lag() and sum() that partition and order data to provide ‘rolling’ results. And even when you can manipulate the data to provide correct answers, it usually requires much more work than would be necessary with the proper data type. Keep that in mind when you design tables and applications.
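
As one illustration of that extra work, had ONDUTY_LEAVE kept its VARCHAR2 columns, a safer version of the earlier hours calculation would spell out the format mask instead of leaning on the session’s NLS_DATE_FORMAT setting; a sketch, assuming the stored strings actually match that mask:

    -- Hypothetical defensive variant: the mask must match how the
    -- strings were stored ('DD-MON-RRRR HH24:MI:SS' in this example)
    SELECT (TO_DATE(end_time,   'DD-MON-RRRR HH24:MI:SS')
          - TO_DATE(start_time, 'DD-MON-RRRR HH24:MI:SS')) * 24 hours_between
    FROM   onduty_leave;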

Context is everything.

January 26, 2016

Heap Of Trouble

Filed under: General — dfitzjarrell @ 07:31

"But I suppose there's a lot to see everywhere, if only you keep your eyes open."
-- Norton Juster, The Phantom Tollbooth 

Oracle Support recently updated what I call a ‘blast from the past’; it involves a subpool memory leak in Oracle 11.2.0.1 and shared cursors. Originally submitted in 2010, the issue was updated at the end of last year to reflect a status of ‘Fixed in Product Version 12.1’. There have been several bug fixes in the 11.2.0 series of releases addressing this and similar issues, but I suspect Oracle Support waited to declare it ‘fixed’ until 12.1.0.2 was finally released. Let’s look at the symptoms and show how you can investigate the shared pool subpools without generating a trace file.

This issue finally becomes apparent when an ORA-04031 error is thrown:


ORA-4031: unable to allocate ... bytes of shared memory ("shared pool","STANDARD","PLMCD^e93dbe75","BAMIMA: Bam Buffer")

In the original submission a trace file for the ORA-04031 error, generated by the database, was included, and it revealed the offending portion of subpool 2:


Memory Utilization of Subpool 1
================================
"free memory              "      25488576
"SQLA                     "     134248904
"PCUR                     "      25094632
Memory Utilization of Subpool 2
================================
"free memory              "     127129872
"PCUR                     "    2427488432

Notice that the PCUR area in subpool 2 is almost 100 times the size of the same area in subpool 1. Unfortunately an AWR report shows only the total Shared Pool size, not a breakdown of the various subpools within it. Fortunately Tanel Poder has provided a script, sgastatx.sql, that prints out the memory segments in the subpools, showing how much memory is allocated to each area of interest. Here is an example of how the script is run:


SQL> @sgastatx %

-- All allocations:

SUBPOOL                             BYTES         MB
------------------------------ ---------- ----------
shared pool (0 - Unused):        83886080         80
shared pool (1):                184549376        176
shared pool (Total):            268435456        256

-- Allocations matching "%":

SUBPOOL                        NAME                       SUM(BYTES)         MB
------------------------------ -------------------------- ---------- ----------
shared pool (0 - Unused):      free memory                  83886080         80

shared pool (1):               free memory                  41599128      39.67
                               SQLA                         14559048      13.88
                               KGLH0                        12547880      11.97
                               row cache                     8641136       8.24
                               KGLS                          7616536       7.26
                               db_block_hash_buckets         5836800       5.57
                               KGLSG                         5267216       5.02
                               dbwriter coalesce buffer      4194816          4
                               KCB Table Scan Buffer         4194816          4
                               kglsim hash table bkts        4194304          4
...

The default behavior of the script is to report everything, so the % parameter is not strictly necessary. Other text parameters can be passed to return only the memory areas of concern, in this case the KGLH0 area, which maps to the PCUR data reported in the trace file. Returning only the KGLH0 allocations:


SQL> @sgastatx KGLH0

-- All allocations:

SUBPOOL                             BYTES         MB
------------------------------ ---------- ----------
shared pool (0 - Unused):        83886080         80
shared pool (1):                184549376        176
shared pool (Total):            268435456        256

-- Allocations matching "KGLH0":

SUBPOOL                        NAME                       SUM(BYTES)         MB
------------------------------ -------------------------- ---------- ----------
shared pool (1):               KGLH0                        12560024      11.98


SQL>

In the database used for these runs only one subpool is allocated besides subpool 0, which represents the free memory not yet assigned to any subpool. As allocations to the various subpools are made, subpool 0 shrinks until all of the unallocated free space is consumed. This can result in ‘lopsided’ allocations to some subpools; once memory is allocated to a particular subpool it cannot be re-allocated to another. In ‘normal’ cases, when ORA-04031 errors arise, expanding the shared pool will correct them, provided dynamic shared pool resizing is possible because sga_max_size is set larger than sga_target. In extreme cases, such as this memory leak, the only solution is to restart the database to clear out the existing memory segments.
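
When headroom exists a dynamic expansion is a single command; the size below is a placeholder, not a recommendation:

    -- Grow the shared pool without a restart; this fails if sga_max_size
    -- leaves no room for the increase
    ALTER SYSTEM SET shared_pool_size = 512M SCOPE=MEMORY;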

The sgastatx.sql script can be run at any time and won’t impact performance; it reports the current allocations, allowing the DBA to monitor the subpool segments so proactive changes can be made to prevent ORA-04031 errors, again provided dynamic shared pool resizing is configured.

The sgastatx.sql script can also report on the free space in all allocated subpools as well as in subpool 0:


SQL> @sgastatx "free memory"

-- All allocations:

SUBPOOL                             BYTES         MB
------------------------------ ---------- ----------
shared pool (0 - Unused):        83886080         80
shared pool (1):                184549376        176
shared pool (Total):            268435456        256

-- Allocations matching "free memory":

SUBPOOL                        NAME                       SUM(BYTES)         MB
------------------------------ -------------------------- ---------- ----------
shared pool (0 - Unused):      free memory                  83886080         80

shared pool (1):               free memory                  37168136      35.45


SQL>

When subpool 0 is exhausted it will be eliminated from the displayed output, indicating that Oracle can make no more adjustments to the existing subpools; at that point either a dynamic shared pool increase needs to be executed or, failing that, the spfile needs to be changed and the database stopped and restarted. [If an spfile is not in use then the pfile needs to be modified to reflect the memory increase, followed by a shutdown and startup of the database. For ease of administration an spfile, rather than a pfile, is recommended.]
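
For the spfile route the sequence might look like the following sketch; the sizes are placeholders to be replaced with values appropriate to the system:

    -- Record larger limits in the spfile, then restart so they take effect
    ALTER SYSTEM SET sga_max_size = 4G SCOPE=SPFILE;
    ALTER SYSTEM SET sga_target   = 3G SCOPE=SPFILE;
    SHUTDOWN IMMEDIATE
    STARTUP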

The ideal solution to this issue is to either patch to the terminal release of 11.2.0 (11.2.0.4) or upgrade to 12.1.0.2 so the memory-related bugs are fixed. If a third-party application is not certified for any release newer than 11.2.0.1 (which should be an extremely rare case), the next-best solution is to apply the relevant patches for 11.2.0.1. Oracle Support lists 8 merge patches for bug fixes related to this memory leak (18730652, 18837746, 18980005, 19015163, 19494335, 19481242, 21078557 and 21076074).

Hopefully those shops still running the base 11.2.0.1 release are few and far between, but anything is possible. Being prepared is the best way for a DBA to quickly and reliably address issues, even those needing a bit of research.

Apparently there is quite a bit to see, if you look.

January 8, 2016

“Say What?!?!?”

Filed under: General,Indexes,Performance — dfitzjarrell @ 16:04

"The only thing you can do easily is be wrong, and that's hardly worth the effort." 
-- Norton Juster, The Phantom Tollbooth

Oracle can lie to you. Not like a disreputable used-car salesman, but more like the ‘little white lie’ sometimes told in order to hide less-than-desirable parts of the truth. And it’s not Oracle, really, it’s the optimizer, and it does it by reporting query plans that may not accurately reflect the execution path. Sometimes we can make Oracle tell lies it doesn’t know it’s telling, by using features not everyone uses. Take, for example, NLS settings. There have been improvements in National Language Support in most Oracle releases, and those improvements can introduce behavior that causes Oracle to lie to you. Let’s look at an example where this is true.

Jonathan Lewis brought this example to light in a recent blog post where he displayed the plan using dbms_xplan. After reading the post I decided to add to it by using autotrace to generate the plan, just to see if the two plans matched. Let’s look at the modified example:


SQL> --
SQL> --  Setup NLS parameters
SQL> --
SQL> ALTER session SET nls_sort=binary_ci;

Session altered.

SQL> ALTER session SET nls_comp=linguistic;

Session altered.

SQL> 
SQL> --
SQL> -- Create table, populate table and
SQL> -- create index
SQL> --
SQL> CREATE TABLE log_data(
  2    account_id NUMBER,
  3    log_type NUMBER,
  4    sys_name VARCHAR2(30),
  5    log_time TIMESTAMP,
  6    msg varchar2(4000)
  7  )
  8  nologging
  9  ;

Table created.

SQL> 
SQL> insert /*+ append */ into log_data(
  2    account_id,
  3    log_type,
  4    sys_name,
  5    log_time,
  6    msg
  7  )
  8  select
  9  	     5,
 10  	     2,
 11  	     dbms_random.string('a',1),
 12  	     sysdate + dbms_random.value,
 13  	     rpad('x',200)
 14  from
 15  	     dual
 16  connect by
 17  	     level <= 26000
 18  ;

26000 rows created.

SQL> 
SQL> commit;

Commit complete.

SQL> 
SQL> create index log_date on log_data(
  2    account_id, log_type, sys_name, log_time
  3  );

Index created.

SQL> 
SQL> --
SQL> -- Gather statistics
SQL> --
SQL> begin
  2  	     dbms_stats.gather_table_stats(
  3  		     ownname	      => user,
  4  		     tabname	      =>'LOG_DATA',
  5  		     method_opt       => 'for all columns size 1'
  6  	     );
  7  end;
  8  /

PL/SQL procedure successfully completed.

SQL> 

The setup is complete so let’s see what autotrace reports as the execution plan:


SQL> 
SQL> column msg format a20
SQL> 
SQL> --
SQL> -- Use autotrace to generate the plan
SQL> --
SQL> set autotrace on
SQL> 
SQL> SELECT
  2  	     *
  3  FROM
  4    (
  5  	 SELECT
  6  	     sys_name, log_time,  substr(msg,1,40) msg
  7  	 FROM log_data
  8  	 WHERE
  9  	   account_id=5
 10  	   AND log_type=2
 11  	   AND sys_name='a'
 12  	 ORDER BY
 13  	   log_time  desc
 14    )
 15  WHERE
 16    rownum <= 10
 17  ;

SYS_NAME                       LOG_TIME                                                                    MSG                                        
------------------------------ --------------------------------------------------------------------------- --------------------                       
A                              09-JAN-16 02.42.54.000000 PM                                                x                                          
a                              09-JAN-16 02.41.02.000000 PM                                                x                                          
a                              09-JAN-16 02.40.54.000000 PM                                                x                                          
a                              09-JAN-16 02.36.38.000000 PM                                                x                                          
A                              09-JAN-16 02.36.02.000000 PM                                                x                                          
a                              09-JAN-16 02.34.15.000000 PM                                                x                                          
a                              09-JAN-16 02.31.15.000000 PM                                                x                                          
A                              09-JAN-16 02.30.59.000000 PM                                                x                                          
a                              09-JAN-16 02.30.36.000000 PM                                                x                                          
a                              09-JAN-16 02.29.53.000000 PM                                                x                                          

10 rows selected.


Execution Plan
----------------------------------------------------------                                                                                            
Plan hash value: 1444908817                                                                                                                           
                                                                                                                                                      
--------------------------------------------------------------------------------------------                                                          
| Id  | Operation                       | Name     | Rows  | Bytes | Cost (%CPU)| Time     |                                                          
--------------------------------------------------------------------------------------------                                                          
|   0 | SELECT STATEMENT                |          |    10 |  1120 |    13   (0)| 00:00:01 |                                                          
|*  1 |  COUNT STOPKEY                  |          |       |       |            |          |                                                          
|   2 |   VIEW                          |          |    11 |  1232 |    13   (0)| 00:00:01 |                                                          
|*  3 |    SORT ORDER BY STOPKEY        |          |    11 |  2453 |    13   (0)| 00:00:01 |                                                          
|   4 |     TABLE ACCESS BY INDEX ROWID | LOG_DATA |  1000 |   217K|    13   (0)| 00:00:01 |                                                          
|*  5 |      INDEX RANGE SCAN DESCENDING| LOG_DATE |    11 |       |     2   (0)| 00:00:01 |                                                          
--------------------------------------------------------------------------------------------                                                          
                                                                                                                                                      
Predicate Information (identified by operation id):                                                                                                   
---------------------------------------------------                                                                                                   
                                                                                                                                                      
   1 - filter(ROWNUM<=10)                                                                                                                             
   3 - filter(ROWNUM<=10)                                                                                                                             
   5 - access("ACCOUNT_ID"=5 AND "LOG_TYPE"=2 AND                                                                                                     
              NLSSORT("SYS_NAME",'nls_sort=''BINARY_CI''')=HEXTORAW('6100') )                                                                         


Statistics
----------------------------------------------------------                                                                                            
          8  recursive calls                                                                                                                          
          0  db block gets                                                                                                                            
        981  consistent gets                                                                                                                          
          0  physical reads                                                                                                                           
          0  redo size                                                                                                                                
        923  bytes sent via SQL*Net to client                                                                                                         
        524  bytes received via SQL*Net from client                                                                                                   
          2  SQL*Net roundtrips to/from client                                                                                                        
          1  sorts (memory)                                                                                                                           
          0  sorts (disk)                                                                                                                             
         10  rows processed                                                                                                                           

SQL> 
SQL> set autotrace off
SQL> 

SQL> 
SQL> --
SQL> -- Now use dbms_xplan to generate the plan
SQL> --
SQL> SELECT /*+ gather_plan_statistics */
  2  	     *
  3  FROM
  4    (
  5  	 SELECT
  6  	     sys_name, log_time,  substr(msg,1,40) msg
  7  	 FROM log_data
  8  	 WHERE
  9  	   account_id=5
 10  	   AND log_type=2
 11  	   AND sys_name='a'
 12  	 ORDER BY
 13  	   log_time  desc
 14    )
 15  WHERE
 16    rownum <= 10;

(query output not shown)

SQL> select * from table(dbms_xplan.display_cursor(null, null, 'allstats last'));

PLAN_TABLE_OUTPUT                                                                                                                                     
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  36qykj7j9tsrp, child number 0                                                                                                                 
-------------------------------------                                                                                                                 
SELECT /*+ gather_plan_statistics */         * FROM   (     SELECT                                                                                    
   sys_name, log_time,  substr(msg,1,40) msg     FROM log_data                                                                                        
WHERE       account_id=5       AND log_type=2       AND sys_name='a'                                                                                  
 ORDER BY       log_time  desc   ) WHERE   rownum <= 10                                                                                               
                                                                                                                                                      
Plan hash value: 1444908817                                                                                                                           
                                                                                                                                                      
---------------------------------------------------------------------------------------------------------------------------------                     
| Id  | Operation                       | Name     | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |                     

PLAN_TABLE_OUTPUT                                                                                                                                     
------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------                     
|   0 | SELECT STATEMENT                |          |      1 |        |     10 |00:00:00.01 |     979 |       |       |          |                     
|*  1 |  COUNT STOPKEY                  |          |      1 |        |     10 |00:00:00.01 |     979 |       |       |          |                     
|   2 |   VIEW                          |          |      1 |     11 |     10 |00:00:00.01 |     979 |       |       |          |                     
|*  3 |    SORT ORDER BY STOPKEY        |          |      1 |     11 |     10 |00:00:00.01 |     979 |  2048 |  2048 | 2048  (0)|                     
|   4 |     TABLE ACCESS BY INDEX ROWID | LOG_DATA |      1 |   1000 |    975 |00:00:00.01 |     979 |       |       |          |                     
|*  5 |      INDEX RANGE SCAN DESCENDING| LOG_DATE |      1 |     11 |    975 |00:00:00.01 |       5 |       |       |          |                     
---------------------------------------------------------------------------------------------------------------------------------                     
                                                                                                                                                      
Predicate Information (identified by operation id):                                                                                                   
---------------------------------------------------                                                                                                   

PLAN_TABLE_OUTPUT                                                                                                                                     
------------------------------------------------------------------------------------------------------------------------------------------------------
                                                                                                                                                      
   1 - filter(ROWNUM<=10)                                                                                                                             
   3 - filter(ROWNUM<=10)                                                                                                                             
   5 - access("ACCOUNT_ID"=5 AND "LOG_TYPE"=2 AND "LOG_DATA"."SYS_NC00006$"=HEXTORAW('6100') )                                                        
                                                                                                                                                      

27 rows selected.

SQL> 

Notice that both methods report the same execution plan; it's the access predicates that differ. Autotrace reports the NLSSORT expression on the indexed column, while dbms_xplan.display_cursor reports the hidden column in the table that supports it. Historically it's been dbms_xplan.display_cursor that's been considered the 'truth', but in this case it may be autotrace reporting the actual predicates in use.

And that’s the truth.

December 1, 2015

Export-Import Business

Filed under: General — dfitzjarrell @ 07:22

"Don't you know anything at all about numbers?"

"Well, I don't think they're very important," snapped Milo, too embarrassed to admit the truth.

"NOT IMPORTANT!" roared the Dodecahedron, turning red with fury. "Could you have tea for two without
the two -- or three blind mice without the three? Would there be four corners of the earth if there
weren't a four? And how would you sail the seven seas without a seven?"

"All I meant was--" began Milo, but the Dodecahedron, overcome with emotion and shouting furiously,
carried right on.

"If you had high hopes, how would you know how high they were? And did you know that narrow escapes
come in all different widths? Would you travel the whole wide world without ever knowing how wide it
was? And how could you do anything at long last," he concluded, waving his arms over his head,
"without knowing how long the last was? Why, numbers are the most beautiful and valuable things in
the world. Just follow me and I'll show you." He turned on his heel and stalked off into the cave. 

Norton Juster, The Phantom Tollbooth 

Numbers mean a lot of things, and in Oracle 12.1.0.2 they can spell trouble for interval-partitioned tables. Oracle Support received notification of this back in March of 2015 and as of the most recent update to the SR the issue hasn’t been solved. Let’s look at what can happen with an interval-partitioned table and Data Pump.

Data Pump export and import usually function without error but an interval-partitioned table in release 12.1.0.2 causes some serious trouble. The example below shows the problem, and was modified from the example provided in the bug report by changing table and column names:


SQL> 
SQL> CREATE TABLE "EXP_IMP_TEST"
  2  (	     "KEY_NR" NUMBER(6,0) NOT NULL ENABLE,
  3  	     "VERS" NUMBER(5,0) NOT NULL ENABLE,
  4  	     "MY_DAT" DATE NOT NULL ENABLE,
  5  	     "YONP" NUMBER(3,0) NOT NULL ENABLE,
  6  	     "LAST_MODIFY_TIME" TIMESTAMP (6) DEFAULT SYSTIMESTAMP NOT NULL ENABLE,
  7  	      CONSTRAINT "IX_PK_WAL" PRIMARY KEY ("KEY_NR","VERS", "MY_DAT","YONP")
  8    USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255
  9    STORAGE(
 10    BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT )
 11    LOCAL (PARTITION "P_EXP_IMP_201301"  PCTFREE 10 INITRANS 2 MAXTRANS 255 NOLOGGING
 12    STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
 13    BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
 14    TABLESPACE USERS )  ENABLE   ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
 15    STORAGE(  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT )
 16    TABLESPACE USERS
 17    PARTITION BY RANGE ("KEY_NR") INTERVAL (100)
 18    (PARTITION "P_EXP_IMP_201301"  VALUES LESS THAN (201401) SEGMENT CREATION IMMEDIATE
 19    PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS NOLOGGING
 20    STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
 21    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
 22    BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT )
 23    TABLESPACE USERS )  ENABLE ROW MOVEMENT;

Table created.

SQL> 
SQL>  insert into EXP_IMP_TEST("KEY_NR","VERS", "MY_DAT", "YONP") values(999901,1,sysdate,1);

1 row created.

SQL>   commit;

Commit complete.

SQL> 
SQL> $ expdp bing/*********@yinski directory=data_pump_dir2 tables=EXP_IMP_TEST dumpfile=exp_imp_20b.dmp logfile=te_tst_log10b.log
...
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "BING"."SYS_EXPORT_TABLE_03":  bing/********@yinski directory=data_pump_dir2 tables=EXP_IMP_TEST dumpfile=exp_imp_30b.dmp logfile=te_tst_log10b.log
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 8.062 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
. . exported "BING"."EXP_IMP_TEST":"SYS_P9681"           6.929 KB       1 rows
. . exported "BING"."EXP_IMP_TEST":"P_EXP_IMP_201301"        0 KB       0 rows
Master table "BING"."SYS_EXPORT_TABLE_03" successfully loaded/unloaded
******************************************************************************
Dump file set for BING.SYS_EXPORT_TABLE_03 is:
  C:\USERS\APP\ORACLE\ADMIN\SMED1234\DPDUMP\EXP_IMP_30B.DMP
Job "BING"."SYS_EXPORT_TABLE_03" successfully completed at Mon Nov 30 14:36:54 2015 elapsed 0 00:02:40

So far, so good; the export completes successfully. Let's try to import that table into another schema in the same database:


SQL> 
SQL> $ impdp bing/*********@yinski directory=data_pump_dir2 dumpfile=exp_imp_20b.dmp tables=EXP_IMP_TEST remap_schema=BING:BONG logfile=ti_tst_log10b.log
...
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Master table "BING"."SYS_IMPORT_TABLE_02" successfully loaded/unloaded
Starting "BING"."SYS_IMPORT_TABLE_02":  bing/********@yinski directory=data_pump_dir2 dumpfile=exp_imp_20b.dmp tables=EXP_IMP_TEST remap_schema=BING:BONG logfile=ti_tst_log10b.log
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39083: Object type TABLE:"BONG"."EXP_IMP_TEST" failed to create with error:
ORA-01438: value larger than specified precision allowed for this column
Failing sql is:
CREATE TABLE "BONG"."EXP_IMP_TEST" ("KEY_NR" NUMBER(6,0) NOT NULL ENABLE, "VERS" NUMBER(5,0) NOT NULL ENABLE,
"MY_DAT" DATE NOT NULL ENABLE, "YONP" NUMBER(3,0) NOT NULL ENABLE, "LAST_MODIFY_TIME" TIMESTAMP (6)
DEFAULT SYSTIMESTAMP NOT NULL ENABLE) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255  STORAGE( BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
ORA-39112: Dependent object type INDEX:"BONG"."IX_PK_WAL" skipped, base object type TABLE:"BONG"."EXP_IMP_TEST" creation failed
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
ORA-39112: Dependent object type CONSTRAINT:"BONG"."IX_PK_WAL" skipped, base object type TABLE:"BONG"."EXP_IMP_TEST" creation failed
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Job "BING"."SYS_IMPORT_TABLE_02" completed with 3 error(s) at Mon Nov 30 14:24:15 2015 elapsed 0 00:02:06

Oracle throws a precision error due to the NUMBER column declarations. Let's see if increasing the declared precision of the NUMBER columns improves the situation:


SQL> 
SQL> CREATE TABLE "EXP_IMP_TEST2"
  2  (	     "KEY_NR" NUMBER(10) NOT NULL ENABLE,
  3  	     "VERS" NUMBER(10) NOT NULL ENABLE,
  4  	     "MY_DAT" DATE NOT NULL ENABLE,
  5  	     "YONP" NUMBER(10) NOT NULL ENABLE,
  6  	     "LAST_MODIFY_TIME" TIMESTAMP (6) DEFAULT SYSTIMESTAMP NOT NULL ENABLE,
  7  	      CONSTRAINT "IX_PK_WAL2" PRIMARY KEY ("KEY_NR","VERS", "MY_DAT","YONP")
  8    USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255
  9    STORAGE(
 10    BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT )
 11    LOCAL (PARTITION "P_EXP_IMP2_201301"  PCTFREE 10 INITRANS 2 MAXTRANS 255 NOLOGGING
 12    STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
 13    BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
 14    TABLESPACE USERS )  ENABLE   ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
 15    STORAGE(  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT )
 16    TABLESPACE USERS
 17    PARTITION BY RANGE ("KEY_NR") INTERVAL (100)
 18    (PARTITION "P_EXP_IMP2_201301"  VALUES LESS THAN ('201401') SEGMENT CREATION IMMEDIATE
 19    PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS NOLOGGING
 20    STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
 21    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
 22    BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT )
 23    TABLESPACE USERS )  ENABLE ROW MOVEMENT;

Table created.

SQL> 
SQL>  insert into EXP_IMP_TEST2("KEY_NR","VERS", "MY_DAT", "YONP") values(999901,1,sysdate,1);

1 row created.

SQL>   commit;

Commit complete.

SQL> 
SQL> $ expdp bing/*******@yinski directory=data_pump_dir2 tables=EXP_IMP_TEST2 dumpfile=exp_imp_21b.dmp logfile=te_tst_log11b.log

The export output won’t be repeated as it’s the same as the previous example. Looking at the output from the import we see a difference:


SQL> 
SQL> $ impdp bing/*********@yinski directory=data_pump_dir2 dumpfile=exp_imp_21b.dmp tables=EXP_IMP_TEST2 remap_schema=BING:BONG logfile=ti_tst_log11b.log
...
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Master table "BING"."SYS_IMPORT_TABLE_02" successfully loaded/unloaded
Starting "BING"."SYS_IMPORT_TABLE_02":  bing/********@yinski directory=data_pump_dir2 dumpfile=exp_imp_21b.dmp tables=EXP_IMP_TEST2 remap_schema=BING:BONG logfile=ti_tst_log11b.log
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "BONG"."EXP_IMP_TEST2":"SYS_P9701"          6.937 KB       1 rows
. . imported "BONG"."EXP_IMP_TEST2":"P_EXP_IMP2_201301"      0 KB       0 rows
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Job "BING"."SYS_IMPORT_TABLE_02" successfully completed at Mon Nov 30 14:44:47 2015 elapsed 0 00:02:03

Declaring the NUMBER columns larger than the expected data length seems to 'fix' the issue. Presumably, eliminating any precision declaration for those same columns should also let the import succeed; let's prove that with the next example:


SQL> 
SQL> CREATE TABLE "EXP_IMP_TEST3"
  2  (	     "KEY_NR" NUMBER NOT NULL ENABLE,
  3  	     "VERS" NUMBER NOT NULL ENABLE,
  4  	     "MY_DAT" DATE NOT NULL ENABLE,
  5  	     "YONP" NUMBER NOT NULL ENABLE,
  6  	     "LAST_MODIFY_TIME" TIMESTAMP (6) DEFAULT SYSTIMESTAMP NOT NULL ENABLE,
  7  	      CONSTRAINT "IX_PK_WAL3" PRIMARY KEY ("KEY_NR","VERS", "MY_DAT","YONP")
  8    USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255
  9    STORAGE(
 10    BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT )
 11    LOCAL (PARTITION "P_EXP_IMP3_201301"  PCTFREE 10 INITRANS 2 MAXTRANS 255 NOLOGGING
 12    STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
 13    BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
 14    TABLESPACE USERS )  ENABLE   ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
 15    STORAGE(  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT )
 16    TABLESPACE USERS
 17    PARTITION BY RANGE ("KEY_NR") INTERVAL (100)
 18    (PARTITION "P_EXP_IMP3_201301"  VALUES LESS THAN (201401) SEGMENT CREATION IMMEDIATE
 19    PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS NOLOGGING
 20    STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
 21    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
 22    BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT )
 23    TABLESPACE USERS )  ENABLE ROW MOVEMENT;

Table created.

SQL> 
SQL>  insert into EXP_IMP_TEST3("KEY_NR","VERS", "MY_DAT", "YONP") values(999901,1,sysdate,1);

1 row created.

SQL>   commit;

Commit complete.

SQL> 
SQL> $ expdp bing/********@yinski directory=data_pump_dir2 tables=EXP_IMP_TEST3 dumpfile=exp_imp_22b.dmp logfile=te_tst_log12b.log
(output not shown)

SQL> 
SQL> $ impdp bing/**********@yinski directory=data_pump_dir2 dumpfile=exp_imp_22b.dmp tables=EXP_IMP_TEST3 remap_schema=BING:BONG logfile=ti_tst_log12b.log
...
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Master table "BING"."SYS_IMPORT_TABLE_02" successfully loaded/unloaded
Starting "BING"."SYS_IMPORT_TABLE_02":  bing/********@yinski directory=data_pump_dir2 dumpfile=exp_imp_22b.dmp tables=EXP_IMP_TEST3 remap_schema=BING:BONG logfile=ti_tst_log12b.log
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "BONG"."EXP_IMP_TEST3":"SYS_P9721"          6.789 KB       1 rows
. . imported "BONG"."EXP_IMP_TEST3":"P_EXP_IMP3_201301"      0 KB       0 rows
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Job "BING"."SYS_IMPORT_TABLE_02" successfully completed at Mon Nov 30 14:50:07 2015 elapsed 0 00:02:13

Having numeric data reach the declared precision limit of a NUMBER column is, it appears, a 'no-no' in Oracle 12.1.0.2 when it comes to interval-partitioned tables. Some industrious souls may decide that altering the column definitions of an existing table will correct the problem, but that won't work for interval-partitioned tables since the partitioning key cannot be modified:


SQL> 
SQL> alter table exp_imp_test modify (key_nr number(10), vers number(9), yonp number(7));
alter table exp_imp_test modify (key_nr number(10), vers number(9), yonp number(7))
                                 *
ERROR at line 1:
ORA-14060: data type or length of a table partitioning column may not be 
changed 


SQL> 

Until this issue is resolved care must be taken when creating interval-partitioned tables in Oracle 12.1.0.2, as imports will fail if the length of the data values reaches the declared precision of the NUMBER column. This affects not only 'normal' import operations but also those using transportable tablespaces; the same error is generated in those cases.
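Until a fix arrives, a quick dictionary check can flag candidates for review before an export. The query below is only a sketch against the standard dictionary views (adjust or add an owner filter to suit); it lists interval-partitioned tables whose NUMBER partitioning key carries a fixed precision:


select pt.owner, pt.table_name, kc.column_name, tc.data_precision
from   dba_part_tables pt
       join dba_part_key_columns kc
            on kc.owner = pt.owner
           and kc.name = pt.table_name
           and kc.object_type = 'TABLE'
       join dba_tab_columns tc
            on tc.owner = pt.owner
           and tc.table_name = pt.table_name
           and tc.column_name = kc.column_name
where  pt.interval is not null
and    tc.data_type = 'NUMBER'
and    tc.data_precision is not null
order  by pt.owner, pt.table_name, kc.column_name;

Any table the query returns is worth comparing against its actual data values before relying on a Data Pump export of it.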

Numbers, they’re not just painful at tax time anymore.

November 11, 2015

You’re On A Roll(back)

Filed under: General — dfitzjarrell @ 14:26

"I know one thing for certain; it is much harder to tell whether you are lost than whether 
you were lost, for, on many occasions, where you are going is exactly where you are. On 
the other hand, if you often find that where you've been is not at all where you should have 
gone, and, since it's much more difficult to find your way back from someplace you've never 
left, I suggest you go there immediately and then decide."
-- Norton Juster, The Phantom Tollbooth

A recent question in an Oracle forum regarding Automatic UNDO Management shows that the topic is still misunderstood by some. The basic idea behind Automatic UNDO Management is that the database essentially does the work for the DBA, freeing him or her from the tedious task of monitoring and adjusting UNDO (or rollback) segments. This mechanism also manages the number of segments created at database startup, which is what prompted the question and fueled the discussion. How does Oracle decide how many UNDO segments to create at startup, and what is the underlying goal of the process? Let's investigate.

Automatic UNDO Management attempts to do the following:


	*	Manage the UNDO tablespace
	*	Allocate UNDO segments
	*	Assign transactions to UNDO segments
	*	Release UNDO segments when transactions complete

To do all of this Oracle requires an UNDO tablespace that is different from the others. No standard database objects can be created in an UNDO tablespace (users cannot put tables or indexes there), so Oracle can manage the objects unimpeded. This makes the job easier for the DBA, and also restricts what a DBA can do to the tablespace. The list of available tasks a DBA can perform is shown below:


    Adding a data file

    Renaming a data file

    Bringing a data file online or taking it offline

    Beginning or ending an open backup on a data file

    Enabling and disabling undo retention guarantee

Only one UNDO tablespace can be active for a database, but that doesn't mean additional UNDO tablespaces can't be created to address different processing scenarios. For example, UNDOTBS1 could be a 'standard' UNDO tablespace not set to autoextend, since OLTP transactions are usually small and don't need the extra segment space, while UNDOTBS2 could have its datafiles set to autoextend so that large batch processing jobs complete without issue. Setting which UNDO tablespace is used is a simple task:


ALTER SYSTEM SET UNDO_TABLESPACE = UNDOTBS2;

This parameter is dynamic so no restart of the database is necessary; it can therefore be included as a command in a batch processing script. Once the command is run any new transactions will use UNDOTBS2; existing transactions using UNDOTBS1 will continue to completion, and when all current transactions are done UNDOTBS1 will go offline. For a RAC environment such a switch may not be possible if, for some reason, more than one node is using the same UNDO tablespace, or if the UNDO tablespace being switched to is already assigned to another node. In the second case the initial switch would occur without error; when an attempt was made to switch back to the original UNDO tablespace for the node in question the switch would fail, since the current UNDO tablespace would then be in use by two nodes, which is the first case. Granted, neither situation is likely to occur, but it is wise to be prepared in the event someone has made such a switch.
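To verify which UNDO tablespace is in use, a quick check of the parameter is all that's needed; the sketch below uses GV$PARAMETER so it also covers RAC (use V$PARAMETER, without INST_ID, for a single instance):


select inst_id, value as undo_tablespace
from gv$parameter
where name = 'undo_tablespace';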

Although it is not a recommended practice, quotas can be established on the UNDO tablespace for a resource consumer group using the Resource Manager. Users in a resource group will only get the total UNDO specified in the UNDO_POOL directive. The UNDO_POOL setting applies to the resource group as a whole, not to each individual member, so if one user of a resource group consumes all of the assigned UNDO space before an update transaction completes that transaction is terminated. Until a rollback is issued, releasing the UNDO, no other users of the group can perform updates.
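A sketch of establishing such a quota follows; the plan and consumer group names are illustrative and assumed to already exist, and UNDO_POOL is expressed in kilobytes:


begin
        dbms_resource_manager.create_pending_area();

        --
        -- Cap the REPORTING_GRP consumer group at roughly 10 MB of UNDO
        --
        dbms_resource_manager.create_plan_directive(
                plan             => 'DAYTIME_PLAN',
                group_or_subplan => 'REPORTING_GRP',
                comment          => 'Limit UNDO for ad-hoc reporting',
                undo_pool        => 10240);

        dbms_resource_manager.validate_pending_area();
        dbms_resource_manager.submit_pending_area();
end;
/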

Several views provide information on the UNDO tablespace and its activity:


	V$UNDOSTAT
	V$ROLLSTAT
	V$TRANSACTION
	GV$UNDOSTAT (RAC)
	GV$ROLLSTAT (RAC)
	GV$TRANSACTION (RAC)
	DBA_UNDO_EXTENTS
	DBA_HIST_UNDOSTAT <-- Needs no licensing in versions 11.2 and later

When Automatic UNDO management is used V$ROLLSTAT provides information on the UNDO segments. [The GV$ROLLSTAT view provides cluster-wide UNDO information.] The V$UNDOSTAT view can be very helpful in tracking UNDO usage over a short period of time, such as the last 24 to 48 hours. For longer periods DBA_HIST_UNDOSTAT can provide usage trends over the configured AWR retention window. Looking at V$UNDOSTAT:


SQL> SELECT TO_CHAR(BEGIN_TIME, 'MM/DD/YYYY HH24:MI:SS') BEGIN_TIME,
  2      TO_CHAR(END_TIME, 'MM/DD/YYYY HH24:MI:SS') END_TIME,
  3      UNDOTSN, UNDOBLKS, TXNCOUNT, MAXCONCURRENCY AS "MAXCON"
  4      FROM v$UNDOSTAT WHERE rownum <= 144;

BEGIN_TIME          END_TIME                UNDOTSN    UNDOBLKS    TXNCOUNT      MAXCON
------------------- ------------------- ----------- ----------- ----------- -----------
09/25/2015 14:05:13 09/25/2015 14:07:56           5          16          40           4
09/25/2015 13:55:13 09/25/2015 14:05:13           5         211         459           4
09/25/2015 13:45:13 09/25/2015 13:55:13           5          67         200           4
09/25/2015 13:35:13 09/25/2015 13:45:13           5          69         193           4
...
09/24/2015 14:45:13 09/24/2015 14:55:13           5          19         149           2
09/24/2015 14:35:13 09/24/2015 14:45:13           5          11          97           1
09/24/2015 14:25:13 09/24/2015 14:35:13           5          14         127           2
09/24/2015 14:15:13 09/24/2015 14:25:13           5          21         143           3

144 rows selected.

SQL>

This report shows the UNDO tablespace being used (UNDOTSN), the blocks consumed (UNDOBLKS), the total number of transactions executed during the period (TXNCOUNT) and the maximum number of concurrent transactions in that same period (MAXCON). The same report, over a longer window, can be generated from DBA_HIST_UNDOSTAT; it would be a good idea to order the results to give a clearer picture of the usage trends over time.

UNDO retention is set with the undo_retention parameter, which provides the minimum retention period. Oracle does its best to honor that retention if the UNDO datafiles are set to autoextend, automatically tuning the retention period to somewhat longer than the longest currently running query to avoid "Snapshot too old" errors. As UNDO needs increase Oracle extends the datafiles to the limit set by MAXSIZE; when that limit is reached unexpired UNDO information may be overwritten, oldest first. When the UNDO datafiles are set to fixed sizes this setting is ignored.
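Setting the minimum retention is a one-line change; the value below (2400 seconds, or 40 minutes) is purely illustrative:


ALTER SYSTEM SET UNDO_RETENTION = 2400;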

UNDO retention can be guaranteed, but that can cause DML operations to fail on active systems. When the UNDO tablespace is created RETENTION GUARANTEE can be specified; it can also be set for existing UNDO tablespaces with the ALTER TABLESPACE statement. To reverse that change the ALTER TABLESPACE command can be used with RETENTION NOGUARANTEE specified.
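For example (the tablespace name is illustrative):


ALTER TABLESPACE undotbs1 RETENTION GUARANTEE;

ALTER TABLESPACE undotbs1 RETENTION NOGUARANTEE;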

V$UNDOSTAT allows the tuned UNDO retention size to be tracked:


SQL> select to_char(begin_time, 'DD-MON-RR HH24:MI') begin_time,
   2    to_char(end_time, 'DD-MON-RR HH24:MI') end_time, tuned_undoretention
   3    from v$undostat order by end_time;

BEGIN_TIME      END_TIME        TUNED_UNDORETENTION
--------------- --------------- -------------------
22-SEP-15 11:45 22-SEP-15 11:55               36000
22-SEP-15 11:55 22-SEP-15 12:05               36000
22-SEP-15 12:05 22-SEP-15 12:15               36000
22-SEP-15 12:15 22-SEP-15 12:25               36000
22-SEP-15 12:25 22-SEP-15 12:35               36000
...
25-SEP-15 14:45 25-SEP-15 14:55               36000
25-SEP-15 14:55 25-SEP-15 14:57               36000

452 rows selected.

SQL>

Changes to the retention will be shown in the query results, allowing the DBA to monitor the automatic retention setting. For some systems the undo_retention parameter setting won't be overridden (as in the results shown above) but in other cases it may be made significantly larger to accommodate longer transactions.

Oracle sets the initial number of UNDO segments based on the system activity; by default 10 appears to be the 'magic' number Oracle allocates, subject to change as the system activity increases. It is not unusual to see almost 400 UNDO segments, or more, allocated on extremely busy systems. DBA_SEGMENTS can provide that information; simply specify the UNDO tablespace for tablespace_name in your query, as shown below.
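A minimal sketch of such a query follows; the tablespace name is illustrative, and automatic UNDO segments report a segment_type of 'TYPE2 UNDO':


select count(*) undo_segments
from dba_segments
where segment_type = 'TYPE2 UNDO'
and tablespace_name = 'UNDOTBS1';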

Automatic UNDO Management isn’t voodoo or black magic, although it can seem that way when it isn’t clearly understood. Hopefully you now have a better understanding of the mechanism and how it behaves, making your job a bit easier.

You can now go back to where you weren’t.

October 26, 2015

Results, What Results?

Filed under: General — dfitzjarrell @ 11:55

"For instance," said the boy again, "if Christmas trees were people and people were Christmas trees,
we'd all be chopped down, put up in the living room, and covered in tinsel, while the trees opened our presents."
"What does that have to do with it?" asked Milo.
"Nothing at all," he answered, "but it's an interesting possibility, don't you think?"
Norton Juster, The Phantom Tollbooth 

Oracle offers a results cache, both in the database and in PL/SQL (from 11.2 onwards), which can greatly reduce the execution time of repeated statements when insert/update/delete activity is not heavy. The mechanism in PL/SQL, however, may not work as expected with global temporary tables, a fact mentioned in the documentation but possibly overlooked until its effects are seen in actual use. Let's look at the behavior you might see when using this option, using a slightly modified example from Jonathan Lewis.

This starts with flushing the shared pool, creating and populating a global temporary table and creating a function to display the number of available records from the results cache. First the table is created and populated with a single row:


SQL> set serveroutput on size 1000000
SQL> alter system flush shared_pool;

System altered.

SQL>
SQL> truncate table glob_tmp;

Table truncated.

SQL> drop table glob_tmp;

Table dropped.

SQL> create global temporary table glob_tmp (fnum number)
  2  on commit preserve rows
  3  ;

Table created.

SQL>
SQL> insert into glob_tmp values(1);

1 row created.

SQL> commit;

Commit complete.

SQL>

Compute garden-variety stats on the global temporary table (no ‘special’ options in use):


SQL> execute dbms_stats.gather_table_stats(user,'glob_tmp');

PL/SQL procedure successfully completed.

SQL>

Create the function to return the number of visible rows in the results cache:


SQL>
SQL> create or replace function f_cache
  2  return number
  3  result_cache
  4
  5  is
  6          m_ret number;
  7  begin
  8          select  max(fnum)
  9          into    f_cache.m_ret
 10          from    glob_tmp
 11          ;
 12
 13          return f_cache.m_ret;
 14  end;
 15  /

Function created.

SQL>

Execute the function for the current session; all seems well since we’ve inserted and committed a row, but this is about to change:


SQL> execute dbms_output.put_line(f_cache)
1

PL/SQL procedure successfully completed.

SQL>

Now create a second session and do nothing except run the function:


SQL> execute dbms_output.put_line(f_cache);
1

PL/SQL procedure successfully completed.

SQL>

From the second session insert a row into the global temporary table and run the function again:


SQL> insert into glob_tmp values(0);

1 row created.

SQL> execute dbms_output.put_line(f_cache);
0

PL/SQL procedure successfully completed.

SQL>

This is curious behavior. After Session 1 inserted a row into the global temporary table and issued a commit both sessions could 'see' it, yet when Session 2 inserts a row, but doesn't commit, that original row 'goes away' and Oracle still believes it's providing the correct answer. Let's do something really destructive and truncate the global temporary table in Session 1:


SQL> truncate table glob_tmp;

Table truncated.

SQL> execute dbms_output.put_line(f_cache);
1

PL/SQL procedure successfully completed.

SQL>

The table data has been truncated, yet the function STILL shows that one row exists in the results cache. Note that Session 2 has not issued a commit for its insert. Let’s do that now:


SQL> commit;

Commit complete.

SQL>

Back to Session 1, let’s execute the function again:


SQL> execute dbms_output.put_line(f_cache);

PL/SQL procedure successfully completed.

SQL>

And from Session 2, where nothing else has occurred other than the commit, we see the same results:


SQL> execute dbms_output.put_line(f_cache);

PL/SQL procedure successfully completed.

SQL>

Yet the truncate in Session 1 only affected the data inserted by Session 1. Going back to Session 2, let's check that session's contents of glob_tmp:



SQL> select count(*) from glob_tmp;

  COUNT(*)
----------
         1

SQL>

It would be expected, I would think, that the function would return a 1 for the committed row in Session 2, and either NULL or 0 for Session 1, but that’s not the case as evidenced here. Oracle reports several bugs in relation to the PL/SQL results cache, listed below:

Bug 21905592 : PL/SQL RESULT CACHE MUST BE BYPASSSED WHEN SESSION SCN NOT CURRENT
Bug 21905695 : BAD X-SESSION RESULTS W/ RESULT-CACHED PL/SQL FN AND “SET TXN READ ONLY”
Bug 21907155 : PL/SQL RC FN SHOWS ONE SESSION THE CONTENTS OF ANOTHER SESSION’S GTT
Bug 21885173 : THE LNPLS SECTION “RESTRICTIONS ON RESULT-CACHED FUNCTIONS” NEEDS RE-WRITE

So we see that the PL/SQL results cache can produce inconsistent results and unexpected behavior across sessions with global temporary tables. Take care when using the PL/SQL results cache with temporary tables since you may see 'answers' that aren't correct even though Oracle thinks that they are. Looking in the documentation, Oracle notes that global temporary tables don't meet the criteria for result caching, and this example gives good reason to believe that restriction. I've found that the documentation may not be referenced until a 'problem', like the one illustrated here, surfaces.
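To see what the cache is actually holding at any moment a quick look at V$RESULT_CACHE_OBJECTS can help; this is a simple sketch and requires access to the V$ views:


select type, status, substr(name, 1, 60) name
from v$result_cache_objects
order by type, name;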

When properly used the PL/SQL results cache can be of great help to application programmers, once the known restrictions have been taken into account. For such features it’s best to read the documentation before attempting to use them, so that you won’t be surprised later.

Sometimes a change in perspective can be very helpful.

October 21, 2015

Hakan You Do That?

Filed under: General — dfitzjarrell @ 13:49

"Everyone understands numbers. No matter what language you speak,
they always mean the same thing. A seven is a seven anywhere in the world." 
Norton Juster, The Phantom Tollbooth 

An interesting problem submitted to Oracle Support involves a possibly little-known aspect of a table called the Hakan Factor. The problem description, in brief, states that the Hakan Factor isn't preserved for a partitioned table when a single partition is compressed. Why is this a problem? The Hakan Factor is used by bitmap indexes to map rows to the index entries, and it's currently one less than the number of rows contained in a database block, due to adjustments to the calculation made necessary by unwanted behavior in Oracle releases 9.2.0.8 and earlier.

To provide a 'real-world' example consider a parking garage. Each level of the garage can hold X number of vehicles, and if the garage is extended upwards, by adding more levels, this doesn't affect the vehicle-to-level mapping since each new level matches the previous levels. Now consider this same parking garage being modified to increase the number of vehicles each level can contain, keeping the number of levels the same. The original garage mapping no longer applies, so the garage map is invalid. Something similar happens when the Hakan Factor changes for a partition and no longer matches the table you wanted to use for partition exchange: the partition mapping doesn't match the table mapping, the bitmap indexes can't find the new data and an error is thrown. Since the Hakan Factor is critical for bitmap indexes let's look at it in a bit more depth.

The Hakan Factor is set by using 'alter table … minimize records_per_block;' or by creating a table using the familiar 'create table … as select … from …;' mechanism; the latter sets the Hakan Factor of the new table to that of the source table. The Hakan Factor becomes a problem when dealing with a table having bitmap indexes on it, notably a partitioned table where partition exchange is being executed. If the Hakan Factors of the partition and the table it's being exchanged with don't match, and there are bitmap indexes present, the exchange will fail with the following 'informative' error:


ORA-14642: Bitmap index mismatch

This can happen for a number of reasons, including adding columns to a partition or table definition, minimizing records_per_block (mentioned above) and, according to the bug report, compressing a partition. [The Bug number is 18115378.] The bug has not been reported as fixed; the workaround is to simply execute 'alter table … minimize records_per_block;'. Of course if bitmap indexes are not in place the error won't be thrown, so another way around this problem is to drop the bitmap indexes on the partitioned table, perform the exchange, then recreate the bitmap indexes (a procedure recommended by Oracle Support). The bug report concerns the building of bitmap indexes after a partition has been compressed; the error thrown is:


ORA-28604: table too fragmented to build bitmap index (4625956,272,272)

even though it really isn’t. The problem stems from differing Hakan Factors for the compressed and uncompressed partitions, a ‘problem’ Oracle should be able to build a bitmap index around.
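A sketch of the drop/exchange/recreate workaround mentioned above is shown below; the object names are illustrative, and remember that bitmap indexes on a partitioned table must be LOCAL:


drop index sales_region_bmx;

alter table sales exchange partition p_2015q3 with table sales_stage;

create bitmap index sales_region_bmx on sales(region) local;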

How can you know the current Hakan Factor for a given table? Jonathan Lewis has constructed an interesting procedure to generate the Hakan factor, shown below (thank you, Jonathan) :


create or replace procedure show_hakan(
    i_table     in  varchar2,
    i_owner     in  varchar2 default user
) as
    m_obj       number(8,0);
    m_flags     varchar2(12);
    m_hakan     number(8,0);
begin
 
    /* created by show_hakan.sql    */
    select
        obj#,
/*
        case
            when    (spare1 > 5 * power(2,15))
                then    (spare1 - 5 * power(2,15))
            when    (spare1 > power(2,17))
                then    (spare1 - power(2,17))
            when    (spare1 > power(2,15))
                then    (spare1 - power(2,15))
                else    spare1
        end                 hakan
*/
    to_char(
        bitand(
            spare1, to_number('ffff8000','xxxxxxxx')
        ),
        'xxxxxxxx'
    )                   flags,
    bitand(spare1, 32767)           hakan   -- 0x7fff
    into
        m_obj,
        m_flags,
        m_hakan
    from
        tab$
    where   obj# in (
            select  object_id
            from    dba_objects
            where   object_name = upper(i_table)
            and object_type = 'TABLE'
            and owner = upper(i_owner)
        )
    ;
 
    dbms_output.put_line(
        'Hakan factor - 1 (see bug history) for object ' ||
        m_obj   || ' (' ||
        i_owner || '.' ||
        i_table || ') is ' ||
        m_hakan || ' with flags ' ||
        m_flags
    );
end;
/
 
drop public synonym show_hakan;
create public synonym show_hakan for show_hakan;
grant execute on show_hakan to public;

The procedure needs to be created by SYS, thus the creation of the public synonym and the grant to PUBLIC. Once it’s created using it is fairly simple as the following example illustrates:


SQL> 
SQL> create table df1 (
  2  	     snord   number,
  3  	     snerd   varchar2(20),
  4  	     snard   date);

Table created.

SQL> 
SQL> begin
  2  	     for i in 1..10000 loop
  3  		     insert into df1
  4  		     values(i, 'Hakan test '||i, sysdate+mod(i, 19));
  5  	     end loop;
  6  
  7  	     commit;
  8  end;
  9  /

PL/SQL procedure successfully completed.

SQL> 
SQL> exec show_hakan('df1');
Hakan factor - 1 (see bug history) for object 95173 (BING.df1) is 736 with flags         0

PL/SQL procedure successfully completed.

SQL> 

The table MUST be populated to set the Hakan factor; this is done automatically when the table is loaded with data either by a series of inserts or by using SQL*Loader. This example was run on Oracle 12.1.0.2 but it has been tested and will run on 11.2 without issues. [Jonathan Lewis tested this on 9.2.0.8 with an earlier version of the procedure and it calculated the Hakan Factor correctly, however the code posted above will need to be modified (possibly by trial-and-error) to produce results.] If a new table is created using ‘create table .. as select .. ‘ the Hakan Factor is replicated to the new table, as mentioned above:


SQL> exec show_hakan('EMP');
Hakan factor - 1 (see bug history) for object 95181 (BING.EMP) is 736 with flags         0

PL/SQL procedure successfully completed.

SQL>
SQL> create table pme as select * from emp;

Table created.

SQL>
SQL> exec show_hakan('PME');
Hakan factor - 1 (see bug history) for object 95187 (BING.PME) is 736 with flags         0

PL/SQL procedure successfully completed.

SQL>

Altering the table to minimize records_per_block changes the Hakan Factor:


SQL> alter table df1 minimize records_per_block;

Table altered.

SQL> 
SQL> exec show_hakan('df1');
Hakan factor - 1 (see bug history) for object 95173 (BING.df1) is 232 with flags      8000                                          

PL/SQL procedure successfully completed.

SQL> 

Compressing a table can also alter the Hakan Factor according to the bug report, so that should be tested:


SQL> alter table pme compress for oltp;

Table altered.

SQL>
SQL> exec show_hakan('PME');
Hakan factor - 1 (see bug history) for object 95194 (BING.PME) is 32759 with flags     20000

PL/SQL procedure successfully completed.

SQL>

This shouldn't be a surprise since compression alters how rows are stored in a block and, in Hybrid Columnar Compression, alters how the blocks are configured. I can see why Oracle is considering this a bug; compressed partitions are marked as such, and creating a bitmap index on a partitioned table with 'mixed' partitions shouldn't throw an error. Advanced Compression can leave a compressed table or partition in a state where some blocks are compressed and others are not, but the Hakan Factor is the maximum number of rows a block can hold for a given partition, and since there should always be fully compressed blocks in a partition using Advanced Compression, Oracle should be able to build a bitmap index on a table containing both compressed and uncompressed partitions. Generating the Hakan Factor for a table using OLTP compression before and after a large update produces the following results:


SQL> begin
  2          for i in 1..19 loop
  3                  insert into pme select * from pme;
  4          end loop;
  5
  6          commit;
  7  end;
  8  /

PL/SQL procedure successfully completed.

SQL>
SQL> alter table pme compress for oltp;

Table altered.

SQL>
SQL> exec show_hakan('PME');
Hakan factor - 1 (see bug history) for object 95208 (BING.PME) is 32759 with flags     20000

PL/SQL procedure successfully completed.

SQL>
SQL> update pme set job='CLORK' where job = 'CLERK';

2097152 rows updated.

SQL> commit;

Commit complete.

SQL>
SQL> exec show_hakan('PME');
Hakan factor - 1 (see bug history) for object 95208 (BING.PME) is 32759 with flags     20000

PL/SQL procedure successfully completed.

SQL>

The Hakan Factor for the compressed table didn't change even after a large update occurred that left some blocks uncompressed. Since Oracle knows the partition is compressed it shouldn't complain about 'fragmentation'; it should take the compression into account and continue to build the bitmap index.

In Oracle 9.2.0.8 and earlier releases the Hakan Factor was equal to the number of rows a block would contain; unfortunately it would decrease by 1 each time it was regenerated using 'alter table … minimize records_per_block;', so it was possible to reduce the Hakan Factor to 1 and end up with an extremely large table containing a great deal of wasted space. Also in 9.2.0.8 and earlier releases direct-path inserts behaved differently with respect to the Hakan Factor than garden-variety inserts, so large direct-path loads during batch jobs could also result in an overly large table with a lot of wasted space. An attempt was made in Version 10g to make normal inserts and direct-path inserts behave consistently; the change resulted in the Hakan Factor being one less than the actual number of rows in a block, and since the Hakan Factor cannot be 0 the wasted-space issue was 'resolved', at the expense of an 'incorrect' Hakan Factor.

The Hakan Factor is (or, hopefully, was) a little-known nugget of table knowledge, critical for the implementation and performance of bitmap indexes. Hopefully understanding what it is and what it affects will make the DBA’s job easier when such errors, as reported here, are encountered.

Remember, Oracle has your number.

September 29, 2015

You’re Not Allowed

Filed under: Security — dfitzjarrell @ 13:21

"Have you ever heard a blindfolded octopus unwrap a cellophane-covered bathtub?"
Norton Juster, The Phantom Tollbooth 

Recently a thread in an Oracle forum I frequent asked the following question:

“… is there a way to restrict the login to database only from the application which we have made.”

That, of course, is not an easy question to answer as several possibilities emerge with none of them foolproof. Let’s look at those options.

The issue is to restrict logins from one (or more) specific servers to only those from the in-house application. The first thought is a logon trigger using MACHINE and PROGRAM information found in V$SESSION along with other available details:


Create table app_ip_add(
	app_nm		varchar2(40),
	machine		varchar2(64),
	short_nm	varchar2(64),
	ip_address	varchar2(16)
);

create index app_nm_ip_idx on app_ip_add(app_nm, ip_address);

create table blocked_logon(
	block_dt	date,
	machine		varchar2(64),
	ip_address	varchar2(16),
	message		varchar2(200)
);

insert into app_ip_add values ('flump.exe','INFOCROSSING\E642-FITZJARREL','E642-Fitzjarrel','10.1.68.25');

commit;

CREATE OR REPLACE TRIGGER SYS.BLOCK_LOGON_TRG
after LOGON ON DATABASE
DECLARE

v_program	v$session.program%type;
v_cip		varchar2(20);
v_usr		varchar2(35);
v_host		varchar2(100);
v_logtme	date;
v_blk		number:=0;
blocked		exception;
pragma exception_init(blocked, -20001);

begin
select sys_context('USERENV','SESSION_USER') into v_usr from dual;
select sys_context('USERENV','HOST') into v_host from dual;
select sys_context('USERENV','MODULE') into v_program from dual;
select sys_context('USERENV','IP_ADDRESS') into v_cip from dual;
select SYSDATE into v_logtme from dual;


---- then your logic to check whom to restrict.
select 1 into v_blk from app_ip_add where ip_address = v_cip and app_nm <> v_program;
dbms_output.put_line(v_blk);
-- and finally message to the user.

if v_blk = 1 then
  insert into blocked_logon                       -- this table tracks blocked logon attempts.
  values
  (v_logtme, v_host, v_cip,'Blocked by Logon Trigger');
  raise blocked;
end if;


exception
	when no_data_found then
	    null;	-- no blocking rule matches this IP/program pair; allow the logon
	when blocked then
	    raise_application_error(-20001,v_usr||' - Blocked - please contact Admin');

end block_logon_trg;
/

SQL>

The trigger compiles successfully and populates the BLOCKED_LOGON table, but it doesn't throw the error and the login succeeds. Since a session cannot kill itself the trigger can't be modified to execute an 'alter system kill session' command and, thus, it won't do much except log that a particular session should have been blocked. Strike 1.

Another possibility is to configure SQL*Net to use tcp.validnode_checking along with tcp.invited_nodes:


tcp.validnode_checking = yes
tcp.invited_nodes = (list_of_hostnames_or_ip_addresses)

When using IP Addresses the ‘*’ wildcard can be used to define a range of valid IPs, such as 1.2.3.* to allow any server from 1.2.3.0 through 1.2.3.255. Additional nodes can be invited by IP Address (again using wildcards) or hostname, each entry separated by a comma. A valid entry for three ranges of IP addresses would look like this:


tcp.invited_nodes = (1.2.3.*, 78.44.227.*, 121.217.*.*)

Such a configuration would be useful where application servers reside in several subnets, allowing you to configure all subnets to connect to the database. Unfortunately this option only blocks connections from 'uninvited' nodes; it can't disallow a connection simply because the program making it isn't on a list of valid executables, so sqlplus can connect just as well as the application program since both run on invited nodes. That wasn't the original intent, as an additional criterion (which program the user is running) should be met to 'weed out' invalid connections.

Going the ‘other way’ we can also use tcp.excluded_nodes in the same way, which excludes access to the hostnames/IP addresses in the list:


tcp.excluded_nodes = (1.2.3.*, 78.44.227.*, 121.217.*.*)

Now the listed IP address ranges are excluded from connecting to the database. And it is possible to use both parameters, but remember that tcp.invited_nodes takes precedence over tcp.excluded_nodes and any conflicts would be resolved in favor of connection.
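Combining the two might look like the sketch below (the addresses and hostname are illustrative); since tcp.invited_nodes wins, the overlapping entry 10.1.68.25 would still be allowed to connect:


tcp.validnode_checking = yes
tcp.invited_nodes = (appsrv01, 10.1.68.*)
tcp.excluded_nodes = (10.1.68.25)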

Try as we may it appears that even though we can restrict connections to a set of valid server names or IP addresses we cannot go further and restrict connections to a valid list of executables. Well, we probably could but a resourceful end user might make a copy of the ‘undesirable’ program and rename it to an ‘acceptable’ name, especially on client laptops. Thus the ‘security’ would be compromised since sqlplus.exe could be named frimmitz.exe, or gramp.exe or any number of other names that do not appear in the ‘no fly’ list. It seems the best we can do is to either allow connections from a small subset of servers or disallow connections from a small subset of servers.

It is expected that this would be in addition to the security policies and standards already in place with your employer such that you’re not trying to keep ‘unauthorized’ persons from accessing your databases, simply making an attempt to restrict the database tools the end-users have at their disposal. I imagine the goal is to prevent ‘runaway’ queries from consuming resources, and if that’s the case then Oracle Resource Manager would probably be a better option as Resource Groups can be established along with corrective actions to be taken when resource limits are exceeded. It’s usually best to use the tools the vendor supplies, rather than try to ‘rig’ other, less robust methods. Oracle provides sufficient tools for such purposes, and even though Resource Manager cannot reject logins it can keep users from stealing resources other applications and business processes need. And that, in my mind, is the end goal, to prevent any one user from hogging server resources like CPU and memory. There should be no reason to re-invent the wheel.
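As a sketch of the Resource Manager approach (the plan and consumer group names are illustrative and assumed to exist), a directive can switch a runaway session once it consumes too much CPU; CANCEL_SQL is a built-in group that cancels the current statement:


begin
        dbms_resource_manager.create_pending_area();

        --
        -- Cancel any statement in AD_HOC_GRP that exceeds 60 seconds of CPU
        --
        dbms_resource_manager.create_plan_directive(
                plan             => 'DAYTIME_PLAN',
                group_or_subplan => 'AD_HOC_GRP',
                comment          => 'Cancel runaway queries',
                switch_group     => 'CANCEL_SQL',
                switch_time      => 60);

        dbms_resource_manager.validate_pending_area();
        dbms_resource_manager.submit_pending_area();
end;
/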

Now, have you seen my blindfolded octopus? He’s here somewhere…

August 26, 2015

“But, You Don’t Understand…”

Filed under: General — dfitzjarrell @ 10:11

"Let me try once more," he said in an effort to explain. "In other words--"

"You mean you have other words?" cried the bird happily. "Well, by all means, use them.
You're certainly not doing very well with the ones you have now."
- Norton Juster, The Phantom Tollbooth 

A seemingly constant battle in many forums is getting those who post questions to be clear in what they are asking. Some questions are so broad that they can’t be answered without further explanation and detail. Possibly this is due to the nature of the Internet, where ‘answers’ are in abundant supply. Notice the quotes; the Internet is as notorious for mis-information as it is praised for its factual content. Those unfamiliar with resources for a given topic can be overwhelmed by the sheer volume of responses, and can be lulled into a false sense of security when broadly phrased questions return specific ‘answers’, such as with google.com, bing.com and other search engines. There is, of course, no all-seeing editor for the Internet so the good and the bad occur with equal ease. Those in forums know this and give their best efforts to guide those unfamiliar with the available resources a gentle nudge toward reliable websites, yet not everyone visits forums and newsgroups to find answers. It’s time for those of us writing blogs to step up our game and ensure that what is posted is accurate and reliable, and it’s also time for those asking questions to learn how to do so effectively.

It all starts, I think, with blog posts, since apparently this is where people go to find answers. Searching google.com, duckduckgo.com or bing.com for answers usually provides a long list of blog posts and web links claiming to answer the given question. Some are good, some aren't, and it can be difficult for the uneducated to know the difference. The people I know in the Oracle community who blog and frequent forums provide the best information they can in their posts, as do I. I've known more than one person who has gone back and edited an existing post to provide accurate content; to those who care a blog post isn't the end, it's a living document intended to provide insight into specific topics or a general overview of a process, all done for the purpose of sharing knowledge.

It should be understood that if you're going to share knowledge it should be accurate and reliable, and not simply an effort to be the most prolific blog poster in history. The same holds true for example code you may post; NEVER post untested code as it can only confuse and frustrate the recipient when it doesn't work because of coding errors. It isn't enough to simply post code, though; you should seriously consider posting worked examples, including the output the scripts and queries generate. Having a worked example, with the correct output, is invaluable when someone is running your code in their own database since they can know, immediately, if they have copied the code correctly and completely. Also, don't assume their environment matches yours; include any settings you made so that anyone using your code runs it under the same conditions you used. It doesn't matter how many examples you have posted online if they don't work as expected or intended. Quality beats quantity every time.

If the blog posts are accurate then it stands to reason that the questions subsequently posted on newsgroups and forums will get better, because the information generating those questions will be better. There will still be questions based on misunderstanding and ignorance (and I use 'ignorance' to indicate a lack of knowledge in the subject area, not as a derogatory remark), such as this one found in an Oracle forum I frequent:

I have some Oracle queries which is executing with 2 to 3 seconds response time. Is it possible to make those queries zero response time?
If so is there any tool which will help for that?

That question can be answered in many ways, all of them dependent on information the person asking failed to provide; the responses posted so far largely request more detail than was originally given. I mention this question because it stems from a lack of understanding of how I/O works: even with flash memory and solid-state disk a true '0' response time is impossible. Better knowledge of I/O, even at a basic level, would have produced a more 'reasonable' question, and a bit of research on I/O before posting might have answered it outright.

Everyone can ask questions, but in technical forums it goes a long way when a person knows HOW to ask them. With respect to Oracle forums, a good approach is to supply the following information, which forum volunteers need before even attempting an answer:

The Oracle version in use.
A clear statement of the problem.
The exact error message and error code Oracle reported when the problem occurred.
In some cases, sample queries illustrating the problem along with DDL to create and populate the objects those queries use.
Log files, if available.

The above list is not exhaustive, but it is a very good place to begin when asking a question. Expect that more information will be requested; digging deeper into a problem can reveal areas of concern not apparent from the initial problem statement. Remember that providing more information gets better responses; the 'answer' is only as good as the question. There are excellent resources online discussing how to ask questions in a technical forum at greater length, and an example of the sort of test case described above appears below.
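
As an illustration, a self-contained test case accompanying a question might look something like the following sketch; the table, index, data and query are invented for the example:

    --
    -- Oracle version: 11.2.0.4
    --
    -- DDL and data to reproduce the problem
    --
    create table emp_test(
            emp_id          number not null,
            emp_name        varchar2(40),
            dept_id         number);

    insert into emp_test
    select level, 'Employee_'||level, mod(level, 10) + 1
    from dual
    connect by level <= 1000;

    commit;

    create index emp_test_dept_idx on emp_test(dept_id);

    exec dbms_stats.gather_table_stats(user, 'EMP_TEST')

    --
    -- The question: why does this query full-scan the table
    -- rather than use EMP_TEST_DEPT_IDX?
    --
    select emp_name
    from emp_test
    where dept_id = 3;

With that in hand a volunteer can reproduce the behavior in seconds rather than spending several rounds of replies extracting the same details.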

In some cases English is not the questioner's first language, and that can lead to errors in communication. Idioms in the English language (both US and British) can be confusing to non-native speakers, and literal translations into English (via friends, colleagues or websites such as http://freetranslation.com) may not improve the situation. This is when forum volunteers may need to ask for additional information simply to understand the question as posed, and patience is truly a virtue. In such situations, imagine yourself trying to ask a question in someone else's native language, one you may have a rudimentary knowledge of but are far from fluent in; making yourself understood is a challenge, and it's good to remember that when non-native English speakers are involved. The better you, as a responder, can direct the discussion, the more success you and the questioner will have in getting the problem understood and, possibly, resolved. The person asking is in the forum for good reason, so don't let them think it was a waste of time to ask. Rudeness has no place in a technical forum, even for questions asked 100 billion times and answered in the documentation. Directing a person to that documentation doesn't need to be an adversarial exchange; a helpful 'nudge' goes much farther than a rude and terse "RTFM".

It would appear that we, as a society, have become so self-absorbed that we abhor any demands placed on 'our' time. Unfortunately I see this behavior from volunteers in forums and newsgroups, especially when simple, "already-answered" questions appear or the person asking has a limited grasp of English. Remember that you are a VOLUNTEER in such a group and have CHOSEN to participate; you were not forced into contributing your time and knowledge by despicable members of some Evil Empire. Behaving like a volunteer rather than a prisoner can make all the difference in how a thread progresses. You don't need the patience of Job, but some compassion for the person on the other end of your response would be appreciated. You have one opportunity to make a first impression, so try your best to make it a good one.

There is no fame, fortune or glory in being a forum or newsgroup volunteer, magazine contributor or technical book author; if any of those is your goal then go elsewhere and write a steamy romance novel. Sharing what you know should be the driving force behind any decision to join a forum or newsgroup, contribute to a technical publication or conference, or write a technical book. It isn't about your social image, it's about helping others understand what you know best. Also remember that it isn't a crime to say 'I don't know', because we can't know everything, no matter how much we might want to. Honesty and arrogance rarely co-exist; choose honesty, build a reputation for quality, and people will take notice in a good way, perhaps eventually seeking you out for your knowledge, wisdom and experience. And that's a good thing, indeed.

Let me be clear about that.
