Oracle Tips and Tricks — David Fitzjarrell

April 7, 2017

You Bet Your ASCII

Filed under: General — dfitzjarrell @ 10:58

"Why, did you know that if a beaver two feet long with a tail a foot and a half long can build a dam twelve 
feet high and six feet wide in two days, all you would need to build Boulder Dam is a beaver sixty-eight
feet long with a fifty-one-foot tail?"
"Where would you find a beaver that big?" grumbled the Humbug as his pencil point snapped.
"I'm sure I don't know," he replied, "but if you did, you'd certainly know what to do with him."
-- Norton Juster, The Phantom Tollbooth 

International character sets such as AL32UTF8 can solve a host of problems when non-ASCII characters need to be stored in the database. Unfortunately, they can also create problems when those characters have to be converted back to ASCII-compatible text with Oracle’s built-in ASCIISTR() function. Let’s look at an example and see what might occur.
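
As a quick refresher, ASCIISTR() leaves plain ASCII characters alone and replaces everything else with a backslash-escaped UTF-16 code. The one-liner below (built with UNISTR() so the example itself stays ASCII-safe, and not part of the original test) shows that behavior on a single accented character:

SQL> select asciistr(unistr('caf\00E9')) converted from dual;

CONVERTED
--------------------------------
caf\00E9

The é goes in as one character and comes back as the five-character sequence \00E9; that expansion is at the heart of the problem demonstrated below.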

Two databases exist, one running 11.2.0.4 and the other 12.1.0.2, and both use the AL32UTF8 character set. Let’s create the same table in both databases and load the CLOB column with non-ASCII characters (characters that display on screen but that ASCIISTR() will have to convert):


SQL> create table yumplerzle(
  2  smarg   number,
  3  weebogaz	     clob);

Table created.

SQL> 
SQL> begin
  2  	     for i in 1..1000 loop
  3  		     insert into yumplerzle
  4  		     values(i, rpad(i, 8000, chr(247)));
  5  	     end loop;
  6  
  7  	     commit;
  8  end;
  9  /

PL/SQL procedure successfully completed.

SQL> 

Query the table without the ASCIISTR() function to see how the character we’ve chosen displays:


...
SUBSTR(WEEBOGAZ,1,4000)
--------------------------------------------------------------------------------
991ÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷
992ÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷
993ÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷
994ÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷
995ÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷
996ÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷
997ÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷
998ÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷
999ÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷
1000ÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷

1000 rows selected.

Interesting data, to be sure. Now let’s try to use the ASCIISTR() function on the output:


SQL> select asciistr(substr(weebogaz,1,4000)) from yumplerzle
  2  /
select asciistr(substr(weebogaz,1,4000)) from yumplerzle
                *
ERROR at line 1:
ORA-64203: Destination buffer too small to hold CLOB data after character set 
conversion. 


SQL> 

Unfortunately the character ‘conversion’ replaces each non-ASCII character with its hexadecimal escape code, and that can expand the line length considerably. Since this is 11.2.0.4 the length limit for VARCHAR2 values is 4000 (characters or bytes, depending on how your database or table column is configured). Given that restriction it’s impossible to use ASCIISTR() on a substring of this data longer than 1000 characters/bytes, as the attempts below show:
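
To see why the buffer fills so quickly, compare the length of a single non-ASCII character before and after conversion; each one can balloon into a five-character escape. The check below is a minimal illustration (again built with UNISTR(), and not part of the original test run):

SQL> select length(unistr('\00F6')) chars_in,
  2         length(asciistr(unistr('\00F6'))) chars_out
  3  from dual;

  CHARS_IN  CHARS_OUT
---------- ----------
         1          5

In this demonstration the padding character comes back as \FFFD, the Unicode replacement character, which expands the same way.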


SQL> select asciistr(substr(weebogaz,1,32767)) from yumplerzle
  2  /
select asciistr(substr(weebogaz,1,32767)) from yumplerzle
                *
ERROR at line 1:
ORA-64203: Destination buffer too small to hold CLOB data after character set 
conversion. 


SQL> 
SQL> pause

SQL> 
SQL> select asciistr(substr(weebogaz,1,16000)) from yumplerzle
  2  /
select asciistr(substr(weebogaz,1,16000)) from yumplerzle
                *
ERROR at line 1:
ORA-64203: Destination buffer too small to hold CLOB data after character set 
conversion. 


SQL> 
SQL> pause

SQL> 
SQL> select asciistr(substr(weebogaz,1,4000)) from yumplerzle
  2  /
select asciistr(substr(weebogaz,1,4000)) from yumplerzle
                *
ERROR at line 1:
ORA-64203: Destination buffer too small to hold CLOB data after character set 
conversion. 


SQL> 
SQL> pause

SQL> 
SQL> select asciistr(substr(weebogaz,1,3000)) from yumplerzle
  2  /
select asciistr(substr(weebogaz,1,3000)) from yumplerzle
                *
ERROR at line 1:
ORA-64203: Destination buffer too small to hold CLOB data after character set 
conversion. 


SQL> 
SQL> pause

SQL> 
SQL> select asciistr(substr(weebogaz,1,2000)) from yumplerzle
  2  /
select asciistr(substr(weebogaz,1,2000)) from yumplerzle
                *
ERROR at line 1:
ORA-64203: Destination buffer too small to hold CLOB data after character set 
conversion. 


SQL> 
SQL> pause

SQL> 
SQL> select asciistr(substr(weebogaz,1,1000)) from yumplerzle
  2  /

ASCIISTR(SUBSTR(WEEBOGAZ,1,1000))                                               
--------------------------------------------------------------------------------
1\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF

ASCIISTR(SUBSTR(WEEBOGAZ,1,1000))                                               
--------------------------------------------------------------------------------
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
...

It’s apparent how the line length has expanded based on the output from ASCIISTR(). If you’re using any release older than 12.1 you’re stuck with this restriction. Thankfully Oracle 12.1 and later versions offer extended string lengths for text columns, configured using the max_string_size parameter. Setting this to EXTENDED and running the utl32k.sql script in $ORACLE_HOME/rdbms/admin (on UNIX and Linux systems; %ORACLE_HOME%\rdbms\admin on Windows) can fix this error. This requires shutting the database down and restarting it in UPGRADE mode. The exact steps are shown below:


SQL>
SQL> alter system set max_string_size = EXTENDED scope=spfile;
SQL> shutdown immediate
...
SQL> startup upgrade
...
SQL> @?/rdbms/admin/utl32k.sql
...
SQL> shutdown immediate
...
SQL> startup
...
SQL>
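
Once the instance is back up it’s worth confirming that the change took effect; a quick SHOW PARAMETER check should return something like this:

SQL> show parameter max_string_size

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
max_string_size                      string      EXTENDED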

The script makes the necessary changes to the data dictionary so that Oracle can use the expanded string length; as a side effect, functions like ASCIISTR() get larger string buffers to work with. Moving over to the 12.1.0.2 database where this modification has been completed, the error experienced in 11.2.0.4 is gone:


SQL> select asciistr(substr(weebogaz,1,32767)) from yumplerzle
  2  /

ASCIISTR(SUBSTR(WEEBOGAZ,1,32767))                                               
--------------------------------------------------------------------------------
1\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF

ASCIISTR(SUBSTR(WEEBOGAZ,1,32767))                                               
--------------------------------------------------------------------------------
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
...
SQL> 
SQL> pause

SQL> 
SQL> select asciistr(substr(weebogaz,1,16000)) from yumplerzle
  2  /

ASCIISTR(SUBSTR(WEEBOGAZ,1,16000))                                               
--------------------------------------------------------------------------------
1\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF

ASCIISTR(SUBSTR(WEEBOGAZ,1,16000))                                               
--------------------------------------------------------------------------------
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
...
SQL> 
SQL> pause

SQL> 
SQL> select asciistr(substr(weebogaz,1,4000)) from yumplerzle
  2  /

ASCIISTR(SUBSTR(WEEBOGAZ,1,4000))                                               
--------------------------------------------------------------------------------
1\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF

ASCIISTR(SUBSTR(WEEBOGAZ,1,4000))                                               
--------------------------------------------------------------------------------
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
...
SQL> 
SQL> pause

SQL> 
SQL> select asciistr(substr(weebogaz,1,3000)) from yumplerzle
  2  /

ASCIISTR(SUBSTR(WEEBOGAZ,1,3000))                                               
--------------------------------------------------------------------------------
1\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF

ASCIISTR(SUBSTR(WEEBOGAZ,1,3000))                                               
--------------------------------------------------------------------------------
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
...
SQL> 
SQL> pause

SQL> 
SQL> select asciistr(substr(weebogaz,1,2000)) from yumplerzle
  2  /

ASCIISTR(SUBSTR(WEEBOGAZ,1,2000))                                               
--------------------------------------------------------------------------------
1\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF

ASCIISTR(SUBSTR(WEEBOGAZ,1,2000))                                               
--------------------------------------------------------------------------------
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
...
SQL> 
SQL> pause

SQL> 
SQL> select asciistr(substr(weebogaz,1,1000)) from yumplerzle
  2  /

ASCIISTR(SUBSTR(WEEBOGAZ,1,1000))                                               
--------------------------------------------------------------------------------
1\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF

ASCIISTR(SUBSTR(WEEBOGAZ,1,1000))                                               
--------------------------------------------------------------------------------
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
...

No “buffer too small” errors were thrown with the extended string length configured in 12.1.0.2, even when passing a substring length of 32767. Using even longer substrings, and even eliminating the substr() call entirely, also seems to pose no problems:


SQL> 
SQL> select asciistr(substr(weebogaz,1,64000)) from yumplerzle
  2  where rownum < 2
  3  /

ASCIISTR(SUBSTR(WEEBOGAZ,1,64000))                                              
--------------------------------------------------------------------------------
305\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
...
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD                                     
                                                                                

SQL> 
SQL> select asciistr(substr(weebogaz,1,128000)) from yumplerzle
  2  where rownum < 2
  3  /

ASCIISTR(SUBSTR(WEEBOGAZ,1,128000))                                             
--------------------------------------------------------------------------------
305\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
...
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD                                     
                                                                                

SQL> 
SQL> select asciistr(substr(weebogaz,1,256000)) from yumplerzle
  2  where rownum < 2
  3  /

ASCIISTR(SUBSTR(WEEBOGAZ,1,256000))                                             
--------------------------------------------------------------------------------
305\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
...
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD                                     
                                                                                

SQL> 
SQL> select asciistr(weebogaz) from yumplerzle
  2  where rownum < 2
  3  /

ASCIISTR(WEEBOGAZ)                                                              
--------------------------------------------------------------------------------
305\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
...
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD                                     
                                                                                

SQL>  

When using character sets like WE8MSWIN1252 and US7ASCII these issues aren’t present, as the data is converted to something ASCII can handle during the insert; only extended character sets such as AL32UTF8 seem to produce this error on conversion. That’s worth remembering: if your database is not using a UTF8/UTF16 character set these problems won’t occur, and there is no need to increase max_string_size unless, of course, you want the extended length to store longer pieces of text.
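
If you’re not certain which character set a given database uses, a quick query against NLS_DATABASE_PARAMETERS (one of several ways to check) settles the question; on the databases used in this example it returns AL32UTF8:

SQL> select value
  2  from nls_database_parameters
  3  where parameter = 'NLS_CHARACTERSET';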

Notice that the character set of the database was NOT changed during this process; only the maximum declarable length of a VARCHAR2/NVARCHAR2 column was affected. Also remember that this change should be made in a test environment first, to ensure it doesn’t adversely affect existing applications and code. I have not experienced any issues of that sort, but mine isn’t the only database in the world and there could be exceptions in other environments. Only after you are reasonably certain the change breaks nothing should you move it into a production environment.

Fill ‘er up.


April 5, 2017

GrACE Period

Filed under: General — dfitzjarrell @ 12:27

“You can swim all day in the Sea of Knowledge and not get wet.” 
― Norton Juster, The Phantom Tollbooth

The time is fast approaching when it will be three years since I was graced with the status of Oracle ACE, and I’ve enjoyed every minute of it. I can’t speak highly enough of the program and its steadily growing list of members. But don’t think for a minute that once someone becomes an Oracle ACE or Oracle ACE Director the work stops; nothing could be further from the truth.

Sharing my knowledge got me here and that hasn’t stopped or slowed down. I still blog, still write two articles each month for http://www.databasejournal.com and still contribute to the Oracle technical forums, and I wouldn’t change a thing. I’ve said it before: sharing what you know may not seem important to you at the time, but others whom you may never know or see will find it useful and will be grateful that you took the time and effort to pass it on. It’s not about the laurels and praise, it’s about contributing knowledge to the Oracle community in order to help others.

Allow me to quote from my original post on being an Oracle ACE:


Being an Oracle ACE is an honor, but it's also a responsibility. What got me
here was writing and participating in discussion groups, and that won't change.
Knowledge is to be shared, not hoarded. What you know does no 
one else any good if you don't share that knowledge and experience. If Da Vinci 
had kept his notes to himself, if Newton hadn't published his Laws of Motion, if 
Copernicus had kept quiet our world might have been quite different. It's because 
these people had the foresight to look deeper into our world and then tell us 
what they found that puts us where we are today. It's only right that we, as 
beneficiaries of the knowledge others before us shared, share our knowledge no 
matter how unimportant it may seem. Someone, somewhere, will find it useful and 
will be grateful that we did.

That still holds true almost three years later; I keep that thought in mind every time I post to a forum, write an article or create a blog post because I do those things to add to the knowledge base provided by members of the Oracle community. And even though others may post more often it doesn’t mean my contributions are diminished in any way, since I (hopefully) have a unique voice and viewpoint that adds to, not detracts from or duplicates, the contributions made by others. The world is a vast place and everyone’s voice should be heard. It’s incumbent upon you to make that so; raise your voice and be heard.

Oracle isn’t just a product line, it’s also a community and it takes more than one person to keep a community going. Others may be blogging and sharing but don’t let that stop you from doing the same. There is no rule that each contribution be unique; sometimes a different view on the same topic can turn the light of understanding on and provide insight and knowledge to clear the confusion. Your voice is special; don’t deprive others of your contributions simply because you can’t think of a unique topic. You may provide understanding by approaching the topic from a different direction. Sometimes a change of perspective is all it takes.

Again from that previous blog post:


I love sharing what I know; I've been doing it for over 16 years now, in various 
forums, some that are no longer as popular as they once were.  I never realized 
how far my commentary reached until I became an Oracle ACE; I have received 
congratulations and comments that I never expected, mostly of the 'it's about 
time' sentiment.  Simply because you don't see the gratitude of others who 
benefit from your knowledge doesn't mean that gratitude doesn't exist.  I see 
now that it does, and I am humbled by it.

It’s still great to be an Oracle ACE, and to me it always will be. But it is good to remember that being an ACE isn’t the destination, it’s just the start of the journey.

Head ’em up, move ’em out.

March 31, 2017

You’re A Natural

Filed under: General — dfitzjarrell @ 08:44

"'Why is it,' he said quietly, 'that quite often even the things which are correct just don't seem to be right?'"
-- Norton Juster, The Phantom Tollbooth

A YouTube video is currently being promoted in the database community regarding joins, including the variety of joins available in most database engines (the engine referred to in the video is Oracle). A very good discussion ensues covering inner, left outer, right outer, full outer and cross joins. Notably absent (and, conceivably, of limited use) is the natural join, used when the join columns have the same name (and, hopefully, the same definition). Let’s look at the natural join and what it can, and cannot, do.

The following example sets up the conditions for a successful natural join: two tables with a common column. Notice that in a natural join the join columns cannot be qualified with a table name or alias; the join returns the common column once, along with the remaining columns from all tables in the join, which can make things a bit confusing when writing a select list. We begin with a simple ‘select *’ query:


SQL> create table yooper(
  2  snorm   number,
  3  wabba   varchar2(20),
  4  oplunt  date);

Table created.

SQL> 
SQL> create table amplo(
  2  snorm   number,
  3  fleezor date,
  4  smang   varchar2(20),
  5  imjyt   varchar2(17));

Table created.

SQL> 
SQL> begin
  2  	     for i in 1..10 loop
  3  		     insert into yooper
  4  		     values(i, 'Quarzenfleep '||i, sysdate +i);
  5  		     insert into amplo
  6  		     values(i, sysdate -i, 'Erblo'||i, 'Zaxegoomp'||i);
  7  	     end loop;
  8  
  9  	     commit;
 10  end;
 11  /

PL/SQL procedure successfully completed.

SQL> 
SQL> select *
  2  from yooper natural join amplo;

     SNORM WABBA                OPLUNT    FLEEZOR   SMANG                IMJYT
---------- -------------------- --------- --------- -------------------- -----------------
         1 Quarzenfleep 1       31-MAR-17 29-MAR-17 Erblo1               Zaxegoomp1
         2 Quarzenfleep 2       01-APR-17 28-MAR-17 Erblo2               Zaxegoomp2
         3 Quarzenfleep 3       02-APR-17 27-MAR-17 Erblo3               Zaxegoomp3
         4 Quarzenfleep 4       03-APR-17 26-MAR-17 Erblo4               Zaxegoomp4
         5 Quarzenfleep 5       04-APR-17 25-MAR-17 Erblo5               Zaxegoomp5
         6 Quarzenfleep 6       05-APR-17 24-MAR-17 Erblo6               Zaxegoomp6
         7 Quarzenfleep 7       06-APR-17 23-MAR-17 Erblo7               Zaxegoomp7
         8 Quarzenfleep 8       07-APR-17 22-MAR-17 Erblo8               Zaxegoomp8
         9 Quarzenfleep 9       08-APR-17 21-MAR-17 Erblo9               Zaxegoomp9
        10 Quarzenfleep 10      09-APR-17 20-MAR-17 Erblo10              Zaxegoomp10

10 rows selected.

SQL> 

In this case the common column not only has the same name but also the same data type. When processing such queries Oracle returns the join column data from the first table listed in the join. Since the column has the same name and definition in both tables it really doesn’t matter here which table the join column data comes from, but in the next example there will be a difference. Let’s drop the existing tables and recreate them, this time with the SNORM column defined as a NUMBER in one table and a VARCHAR2 in the other. The data in both tables will be the same (although the numbers will be stored as characters in the VARCHAR2 column). Since there are no alpha characters in the VARCHAR2 column the implicit TO_NUMBER() conversion succeeds; the difference in the output is that the SNORM column (the commonly named join column) now displays the character data, not the numeric, although, as mentioned previously, that depends on the table order in the join:


SQL> drop table yooper purge;

Table dropped.

SQL> drop table amplo purge;

Table dropped.

SQL> 
SQL> create table yooper(
  2  snorm   number,
  3  wabba   varchar2(20),
  4  oplunt  date);

Table created.

SQL> 
SQL> create table amplo(
  2  snorm   varchar2(10),
  3  fleezor date,
  4  smang   varchar2(20),
  5  imjyt   varchar2(17));

Table created.

SQL> 
SQL> begin
  2  	     for i in 1..10 loop
  3  		     insert into yooper
  4  		     values(i, 'Quarzenfleep '||i, sysdate +i);
  5  		     insert into amplo
  6  		     values(i, sysdate -i, 'Erblo'||i, 'Zaxegoomp'||i);
  7  	     end loop;
  8  
  9  	     commit;
 10  end;
 11  /

PL/SQL procedure successfully completed.

SQL> 
SQL> select *
  2  from yooper natural join amplo;

SNORM      WABBA                OPLUNT    FLEEZOR   SMANG                IMJYT
---------- -------------------- --------- --------- -------------------- -----------------
1          Quarzenfleep 1       31-MAR-17 29-MAR-17 Erblo1               Zaxegoomp1
2          Quarzenfleep 2       01-APR-17 28-MAR-17 Erblo2               Zaxegoomp2
3          Quarzenfleep 3       02-APR-17 27-MAR-17 Erblo3               Zaxegoomp3
4          Quarzenfleep 4       03-APR-17 26-MAR-17 Erblo4               Zaxegoomp4
5          Quarzenfleep 5       04-APR-17 25-MAR-17 Erblo5               Zaxegoomp5
6          Quarzenfleep 6       05-APR-17 24-MAR-17 Erblo6               Zaxegoomp6
7          Quarzenfleep 7       06-APR-17 23-MAR-17 Erblo7               Zaxegoomp7
8          Quarzenfleep 8       07-APR-17 22-MAR-17 Erblo8               Zaxegoomp8
9          Quarzenfleep 9       08-APR-17 21-MAR-17 Erblo9               Zaxegoomp9
10         Quarzenfleep 10      09-APR-17 20-MAR-17 Erblo10              Zaxegoomp10

10 rows selected.

SQL> 

Notice when the tables are reversed the SNORM data is again numeric:


SQL> select *
  2  from amplo natural join yooper;

     SNORM FLEEZOR   SMANG                IMJYT             WABBA                OPLUNT
---------- --------- -------------------- ----------------- -------------------- ---------
         1 29-MAR-17 Erblo1               Zaxegoomp1        Quarzenfleep 1       31-MAR-17
         2 28-MAR-17 Erblo2               Zaxegoomp2        Quarzenfleep 2       01-APR-17
         3 27-MAR-17 Erblo3               Zaxegoomp3        Quarzenfleep 3       02-APR-17
         4 26-MAR-17 Erblo4               Zaxegoomp4        Quarzenfleep 4       03-APR-17
         5 25-MAR-17 Erblo5               Zaxegoomp5        Quarzenfleep 5       04-APR-17
         6 24-MAR-17 Erblo6               Zaxegoomp6        Quarzenfleep 6       05-APR-17
         7 23-MAR-17 Erblo7               Zaxegoomp7        Quarzenfleep 7       06-APR-17
         8 22-MAR-17 Erblo8               Zaxegoomp8        Quarzenfleep 8       07-APR-17
         9 21-MAR-17 Erblo9               Zaxegoomp9        Quarzenfleep 9       08-APR-17
        10 20-MAR-17 Erblo10              Zaxegoomp10       Quarzenfleep 10      09-APR-17

10 rows selected.

SQL>

Adjusting the select list to name specific columns produces a smaller data set; notice that no table aliases or prefixes are used, which can make it difficult to keep track of which columns are coming from which table:


SQL> select smang, snorm, fleezor
  2  from yooper natural join amplo;

SMANG                SNORM      FLEEZOR
-------------------- ---------- ---------
Erblo1               1          29-MAR-17
Erblo2               2          28-MAR-17
Erblo3               3          27-MAR-17
Erblo4               4          26-MAR-17
Erblo5               5          25-MAR-17
Erblo6               6          24-MAR-17
Erblo7               7          23-MAR-17
Erblo8               8          22-MAR-17
Erblo9               9          21-MAR-17
Erblo10              10         20-MAR-17

10 rows selected.

SQL>

The natural join only requires the column names to be the same; the definitions can differ, and as long as the data can be converted so a comparison can be made the query succeeds. Now let’s change the picture a bit more and store character strings in one table while the other keeps its numeric data:


SQL> drop table yooper purge;

Table dropped.

SQL> drop table amplo purge;

Table dropped.

SQL> 
SQL> create table yooper(
  2  snorm   number,
  3  wabba   varchar2(20),
  4  oplunt  date);

Table created.

SQL> 
SQL> create table amplo(
  2  snorm   varchar2(10),
  3  fleezor date,
  4  smang   varchar2(20),
  5  imjyt   varchar2(17));

Table created.

SQL> 
SQL> begin
  2  	     for i in 1..10 loop
  3  		     insert into yooper
  4  		     values(i, 'Quarzenfleep '||i, sysdate +i);
  5  		     insert into amplo
  6  		     values(i||'Bubba', sysdate -i, 'Erblo'||i, 'Zaxegoomp'||i);
  7  	     end loop;
  8  
  9  	     commit;
 10  end;
 11  /

PL/SQL procedure successfully completed.

SQL> 
SQL> select *
  2  from yooper natural join amplo;
select *
*
ERROR at line 1:
ORA-01722: invalid number


SQL> 
SQL> select *
  2  from amplo natural join yooper;
select *
*
ERROR at line 1:
ORA-01722: invalid number


SQL> 
SQL> drop table yooper purge;

Table dropped.

SQL> drop table amplo purge;

Table dropped.

SQL> 

Now the natural join fails to return data because the implicit TO_NUMBER() conversion fails; in this example it doesn’t matter which table is listed first in the natural join, as the conversion is always from the character string to a number.
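
Had we kept those last two tables around, a traditional join would have let us choose the conversion direction ourselves rather than relying on the implicit TO_NUMBER(); the sketch below (not from the original run) executes without ORA-01722 because both sides are character strings, although with this data it simply returns no rows since '1' is not equal to '1Bubba'.

SQL> select *
  2  from yooper y join amplo a
  3  on to_char(y.snorm) = a.snorm;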

The natural join will use all commonly named columns in the join condition, so let’s add another matching column to this example and see what happens:


SQL> create table yooper(
  2  snorm      number,
  3  fleezor date,
  4  wabba      varchar2(20),
  5  oplunt     date);

Table created.

SQL>
SQL> create table amplo(
  2  snorm      number(10),
  3  fleezor date,
  4  smang      varchar2(20),
  5  imjyt      varchar2(17));

Table created.

SQL>
SQL> begin
  2          for i in 1..10 loop
  3                  insert into yooper
  4                  values(i, sysdate-i, 'Quarzenfleep '||i, sysdate +i);
  5                  insert into amplo
  6                  values(i, sysdate -i, 'Erblo'||i, 'Zaxegoomp'||i);
  7          end loop;
  8
  9          commit;
 10  end;
 11  /

PL/SQL procedure successfully completed.

SQL>
SQL> select *
  2  from yooper natural join amplo;

     SNORM FLEEZOR   WABBA                OPLUNT    SMANG                IMJYT
---------- --------- -------------------- --------- -------------------- -----------------
         1 30-MAR-17 Quarzenfleep 1       01-APR-17 Erblo1               Zaxegoomp1
         2 29-MAR-17 Quarzenfleep 2       02-APR-17 Erblo2               Zaxegoomp2
         3 28-MAR-17 Quarzenfleep 3       03-APR-17 Erblo3               Zaxegoomp3
         4 27-MAR-17 Quarzenfleep 4       04-APR-17 Erblo4               Zaxegoomp4
         5 26-MAR-17 Quarzenfleep 5       05-APR-17 Erblo5               Zaxegoomp5
         6 25-MAR-17 Quarzenfleep 6       06-APR-17 Erblo6               Zaxegoomp6
         7 24-MAR-17 Quarzenfleep 7       07-APR-17 Erblo7               Zaxegoomp7
         8 23-MAR-17 Quarzenfleep 8       08-APR-17 Erblo8               Zaxegoomp8
         9 22-MAR-17 Quarzenfleep 9       09-APR-17 Erblo9               Zaxegoomp9
        10 21-MAR-17 Quarzenfleep 10      10-APR-17 Erblo10              Zaxegoomp10

10 rows selected.

SQL>
SQL> select *
  2  from amplo natural join yooper;

     SNORM FLEEZOR   SMANG                IMJYT             WABBA                OPLUNT
---------- --------- -------------------- ----------------- -------------------- ---------
         1 30-MAR-17 Erblo1               Zaxegoomp1        Quarzenfleep 1       01-APR-17
         2 29-MAR-17 Erblo2               Zaxegoomp2        Quarzenfleep 2       02-APR-17
         3 28-MAR-17 Erblo3               Zaxegoomp3        Quarzenfleep 3       03-APR-17
         4 27-MAR-17 Erblo4               Zaxegoomp4        Quarzenfleep 4       04-APR-17
         5 26-MAR-17 Erblo5               Zaxegoomp5        Quarzenfleep 5       05-APR-17
         6 25-MAR-17 Erblo6               Zaxegoomp6        Quarzenfleep 6       06-APR-17
         7 24-MAR-17 Erblo7               Zaxegoomp7        Quarzenfleep 7       07-APR-17
         8 23-MAR-17 Erblo8               Zaxegoomp8        Quarzenfleep 8       08-APR-17
         9 22-MAR-17 Erblo9               Zaxegoomp9        Quarzenfleep 9       09-APR-17
        10 21-MAR-17 Erblo10              Zaxegoomp10       Quarzenfleep 10      10-APR-17

10 rows selected.

SQL>

As with all inner joins only the matching data is returned, so if YOOPER is reloaded with only the even-numbered records that is what will be seen in the output from the join:


SQL> truncate table yooper;

Table truncated.

SQL>
SQL> begin
  2          for i in 1..10 loop
  3                  if mod(i,2) = 0 then
  4                          insert into yooper
  5                          values(i, sysdate-i, 'Quarzenfleep '||i, sysdate +i);
  6                  end if;
  7          end loop;
  8
  9          commit;
 10  end;
 11  /

PL/SQL procedure successfully completed.

SQL>
SQL> select *
  2  from yooper natural join amplo;

     SNORM FLEEZOR   WABBA                OPLUNT    SMANG                IMJYT
---------- --------- -------------------- --------- -------------------- -----------------
         2 29-MAR-17 Quarzenfleep 2       02-APR-17 Erblo2               Zaxegoomp2
         4 27-MAR-17 Quarzenfleep 4       04-APR-17 Erblo4               Zaxegoomp4
         6 25-MAR-17 Quarzenfleep 6       06-APR-17 Erblo6               Zaxegoomp6
         8 23-MAR-17 Quarzenfleep 8       08-APR-17 Erblo8               Zaxegoomp8
        10 21-MAR-17 Quarzenfleep 10      10-APR-17 Erblo10              Zaxegoomp10

SQL>
SQL> select *
  2  from amplo natural join yooper;

     SNORM FLEEZOR   SMANG                IMJYT             WABBA                OPLUNT
---------- --------- -------------------- ----------------- -------------------- ---------
         2 29-MAR-17 Erblo2               Zaxegoomp2        Quarzenfleep 2       02-APR-17
         4 27-MAR-17 Erblo4               Zaxegoomp4        Quarzenfleep 4       04-APR-17
         6 25-MAR-17 Erblo6               Zaxegoomp6        Quarzenfleep 6       06-APR-17
         8 23-MAR-17 Erblo8               Zaxegoomp8        Quarzenfleep 8       08-APR-17
        10 21-MAR-17 Erblo10              Zaxegoomp10       Quarzenfleep 10      10-APR-17

SQL>

All data in the common columns must match in order to return rows; if the ids and dates in YOOPER don’t match up with the ids and dates in AMPLO no data is returned:


SQL> truncate table yooper;

Table truncated.

SQL>
SQL> begin
  2          for i in 1..10 loop
  3                  if mod(i,2) = 0 then
  4                          insert into yooper
  5                          values(i-1, sysdate-i, 'Quarzenfleep '||i, sysdate +i);
  6                  end if;
  7          end loop;
  8
  9          commit;
 10  end;
 11  /

PL/SQL procedure successfully completed.

SQL>
SQL> select *
  2  from yooper natural join amplo;

No rows selected.

SQL>
SQL> select *
  2  from amplo natural join yooper;

No rows selected.

SQL>

There are matching id values and matching date values between the tables, but the combination of id and date produces no matching records. With a traditional inner join data could be returned based on either the id values or the date values, as the sketch below shows for the id values (although how good those results would be is questionable, given the matching key structure).
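
The single-column join that paragraph describes would look something like this (a sketch, not part of the original run); joining on SNORM alone brings back the five rows whose ids line up even though the dates don't:

SQL> select *
  2  from yooper y join amplo a
  3  on y.snorm = a.snorm;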

A natural join isn’t a commonly used join type, mainly because joined tables are not likely to contain join columns with the same names (the demonstration schema provided with Oracle is another good set of tables to use for a natural join). When such a condition does exist a natural join is an option, but testing is necessary to ensure that the results returned are both desirable and usable.
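
If the natural join’s behavior is what you want but you’d rather be explicit about it, the USING clause gives the same single-copy SNORM and FLEEZOR output for these tables while making the join columns visible in the query text; a brief sketch:

SQL> select *
  2  from yooper join amplo using (snorm, fleezor);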

Just because it’s ‘correct’ doesn’t make it ‘right’. Right?

March 28, 2017

Finding Your Way

Filed under: General — dfitzjarrell @ 08:00

"Whether or not you find your own way, you're bound to find some way. If you happen to find my way, please return it,
as it was lost years ago. I imagine by now it's quite rusty."
-- Norton Juster, The Phantom Tollbooth

Oracle has provided access to its wait interface for several releases and with each new release it expands the range of wait information available, so much so that it’s hard to not find something to examine. Disk reads, logical reads, sort activity, table scans all vie for the attention of the DBA. Of course examination leads to investigation which leads, inevitably, to tuning, even when there is nothing to tune. Such constant twiddling and tweaking is known as Compulsive Tuning Disorder, or CTD. Unfortunately the more ways Oracle provides to interrogate the wait interface the more the DBA can fall victim to CTD. To help reduce the urge to tune a few questions need to be asked regarding the so-called ‘problem area’. Let’s dig in and ask those questions.

First, and foremost, is the following question:

“What problem are you trying to solve?”

If you can’t answer that question then there really isn’t a reason to tune anything; you’ll never know when you’re done and the task will go on and on and on and on and on … ad infinitum, ad nauseaum with no progress to report and no end in sight, another DBA sucked into the rabbit hole of CTD. One thing will lead to another and another and another as you find more areas to ‘tune’ based on the wait interface data and blog posts and articles clearly telling you something needs to be fixed. In most cases nothing could be further from the truth.

Next come misleading or misunderstood numbers, mainly in reference to data reads and writes. I’ve seen some DBAs try to tune the database to reduce logical reads — it’s usually newer DBAs who see the large values for logical reads and conclude there is a problem. The issue isn’t the available data, it’s isolating one small aspect of the entire performance picture and tuning that to the exclusion of everything else. Large volumes of logical reads aren’t necessarily a problem unless the buffer cache is being reloaded over short periods of time, which would be accompanied by large volumes of physical reads. In cases such as this the physical reads would be the telling factor, and those MAY be cause for concern. It depends upon the system; an OLTP system grinding through entire tables to return a single row is most likely a problem, whereas a data warehouse churning through that same volume of data would be normal. Looking at logical reads in isolation, without considering the physical reads alongside them, can cloud the water and obscure the real issue: a buffer cache that may be too small for the workload. A configuration that once was more than sufficient can, over time, become a performance bottleneck as more and more users are added. A database is a changing entity and it needs to be tended, like a garden, if it’s going to grow.
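
When the question does come up, the raw numbers are easy enough to see; a quick look at V$SYSSTAT (instance-wide figures since startup, so trend them over time rather than judging a single snapshot) puts logical and physical reads side by side:

SQL> select name, value
  2  from v$sysstat
  3  where name in ('session logical reads', 'physical reads');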

The DBA needs to listen to the users since they will be the first to complain when something isn’t right and needs attention. Performance is time and, for business, time is money; when tasks take longer and longer to complete, less work gets done. The DBA shouldn’t need to hunt for things to do; continually tuning to shave off that last microsecond of performance is wasted effort — if no one but the DBA is going to notice the ‘improvement’ it’s not worth pursuing.

Not all tuning is bad or wasted effort but the DBA needs to have a clear goal in mind and a path to follow that addresses issues and brings them to some sort of resolution, even if it’s only a temporary fix until a permanent solution can be implemented. It does no good to constantly pick apart the database to find problems to solve, especially when the users aren’t complaining.

When something is wrong the DBA will hear about it; that’s the time to step into action and start problem solving. The DBA doesn’t need to go looking for problems, they’ll show up all by themselves. And if he or she isn’t constantly twiddling with this or tweaking that the real issues can be dealt with when they happen. Then the users will stop complaining and peace and joy will reign supreme. Okay, so peace and joy won’t necessarily cover the land but the users will stop complaining, at least for a while, and there will be benefit seen from the effort expended.

CTD is thankless, relentless and never-ending, so don’t get caught up in wanting to fix everything; it can’t be done and there are some things that are, most likely, not worth the effort spent given the small return that investment will generate. It’s not enough to know when to stop, the DBA also needs to know when NOT to start; if there is no clear destination to the journey it’s best to not begin. There is plenty to do without making work out of nothing.

Go your own way, just don’t get lost.

March 13, 2017

It’s Private

Filed under: General — dfitzjarrell @ 10:35

“The only thing you can do easily is be wrong, and that's hardly worth the effort.” 
― Norton Juster, The Phantom Tollbooth

Oracle provides two parameters that affect the PGA that look very similar but operate very differently. One of these is the well-known pga_max_size and the other is a hidden parameter, _pga_max_size. Let’s look at both and see how one can be very effective while the other can create problems with respect to PGA memory management.

DBAs know pga_max_size from extensive documentation from Oracle Corporation and from numerous Oracle professionals writing blog posts about it. It’s a common parameter to set to restrict the overall size of the PGA in releases 11.2 and later. It’s available if Automatic Memory Management (AMM) is not in use; databases running on Linux and using hugepages would be in this group since AMM and hugepages are not a supported combination. Hugepages are available for IPC (Inter-Process Communication) shared memory; this is the ‘standard’ shared memory model (dating back to UNIX System V) that allows multiple processes to access the same shared memory segment. There is another form of shared memory segment, the memory-mapped file, and currently such segments are not supported by hugepages. Oracle, on Linux, gives you a choice of using hugepages or memory-mapped files, and you implement that choice by choosing to use (or not use) Automatic Memory Management (AMM). Using Automatic Shared Memory Management (ASMM) allows the DBA to set such parameters as sga_target, sga_max_size, pga_aggregate_target and pga_max_size and have some control over how those memory areas are sized.
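
A quick way to tell which memory model a given database is running is to look at the various *_target parameters; a non-zero memory_target means AMM is in use, while a zero memory_target with sga_target and pga_aggregate_target set points to ASMM:

SQL> show parameter memory_target
SQL> show parameter sga_target
SQL> show parameter pga_aggregate_target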

Using pga_max_size is a simple task:


SQL> alter system set pga_max_size=2G;

System altered.

SQL>

Now Oracle will do its best to limit the overall PGA size to the requested value, but remember this is a targeted maximum, not an absolute limit. It is more restrictive than pga_aggregate_target, meaning it’s less likely to be exceeded.
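
To see how the instance is actually tracking against those targets, V$PGASTAT is the place to look; the two statistics below (statistic names as found in recent releases) show the current and high-water PGA allocations:

SQL> select name, round(value/1024/1024) mb
  2  from v$pgastat
  3  where name in ('total PGA allocated', 'maximum PGA allocated');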

On to its sister parameter, _pga_max_size. This parameter regulates the size of the PGA memory allocated to a single process. Oracle sets this using calculations based on pga_aggregate_target and pga_max_size and, since it is an ‘undocumented’ parameter, it should NOT be changed at the whim of the DBA. Setting this to any value prevents Oracle from setting it based on its standard calculations and can seriously impact database performance and memory usage. If, for example, the DBA does this:


SQL> alter system set "_pga_max_size"=2G;

System altered.

SQL>

Oracle is now capable of allocating up to 2 GB of PGA to each and every process started after that change has taken place. On an exceptionally active and busy system, with parallel processing enabled, each process can have up to 2 GB of RAM in its PGA. Since many systems still don’t have terabytes of RAM installed such allocations can bring the database, and the server, to a grinding halt, throwing ORA-04030 errors in the process. This, of course, is not what the DBA intended, but it is what the DBA enabled by altering the _pga_max_size parameter. Unfortunately this parameter is still being written about in blogs that provide unvalidated ‘information’ to the Oracle community.
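
If you’re curious what value Oracle has calculated for _pga_max_size you can look without touching it; connected as SYS, the usual X$ query (undocumented structures, so treat this as a sketch that may change between releases) is:

SQL> select i.ksppinm name, v.ksppstvl value
  2  from x$ksppi i, x$ksppcv v
  3  where i.indx = v.indx
  4  and i.ksppinm = '_pga_max_size';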

Knowledge is power; unfortunately unverified 'information' is taken as knowledge by those who don't apply critical thinking to what they read, helped along by the common misconception that 'if it's on the Internet it MUST be true'. I know of DBAs who set _pga_max_size to match the pga_max_size parameter and found, to their dismay, that their actions seriously and negatively impacted production systems. Sometimes in the database world prolific authors are taken as experts and their words looked upon as gospel. Unfortunately prolific doesn't necessarily mean reliable.

It’s always best to test what others tell you before assuming the advice given to you is right.

March 1, 2017

Return To Sender

Filed under: General — dfitzjarrell @ 16:06

"The most important reason for going from one place to another is to see what's in between."
-- Norton Juster, The Phantom Tollbooth

Recently in an Oracle forum a question resurfaced regarding enabling row movement for tables. The posted question, from five years ago, asked if row movement was safe and if there could be any ‘undesired impact on application data’. The answer to the first part of that question is ‘yes’ (it’s safe because Oracle, except under extreme conditions, won’t lose your data) and the answer to the second part is also ‘yes’. That may be confusing so let’s look at what the end result could be.

Data rows are uniquely identified by a construct known far and wide as the ROWID. ROWIDs contain a wealth of information as to the location of a given row; the file number, the block number and row number are all encoded in this curious value. Updates can change pretty much everything in a row except the ROWID and primary key values (and, yes, there’s a way to change PK values but it involves deleting and inserting the row — Oracle does this when, for some bizarre reason known only to the user making the change, a PK value is updated). The ONLY way to change a ROWID value is to physically move the row, which is what enabling row movement will allow. This is undesirable for the reasons listed below:


	* Applications coded to store ROWID values can fail as the data that was once in Location A is now in Location B.
	* Indexes will become invalid or unusable, requiring that they be rebuilt.

Storing ROWID values in application tables isn't the wisest choice a developer can make. Exporting from the source and importing into a new destination will automatically render those stored ROWID values useless. Cloning the database via RMAN will do the same thing since ROWID values are unique only within the database where they are generated; they do not transport across servers or platforms. Consider two imaginary countries, Blaggoflerp and Snormaflop. Each is unique in geography so that locations found in Blaggoflerp are not found in Snormaflop, and the reverse is also true. If the traveler has but one map, of Blaggoflerp, and tries to use that to navigate Snormaflop, our traveler will become hopelessly lost and confused. Enable row movement on a table where indexes are present, where an application stores ROWIDs for easy data access, or both, and Oracle starts singing that old Elvis Presley hit, written by Winfield Scott, "Return To Sender":


Return to sender, address unknown.
No such person, no such zone.

Don't misunderstand, the data is STILL in the table; it's just moved from its original location and left no forwarding address. It's possible that new data now occupies the space where that trusty old row used to live, so the application doesn't break, but it does return unexpected results because the values that were once at that location are no longer there. And any indexes that referenced that row's original ROWID are now invalidated, making them useless until manual intervention is employed to rebuild them.
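
Decoding a ROWID makes the point easier to see; the supplied DBMS_ROWID package breaks the value into its components (the table name below is hypothetical), and those components change the moment the row physically moves:


SQL> select rowid,
  2         dbms_rowid.rowid_relative_fno(rowid) file_no,
  3         dbms_rowid.rowid_block_number(rowid) block_no,
  4         dbms_rowid.rowid_row_number(rowid) row_no
  5  from some_app_table
  6  where rownum = 1;
SQL>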


"Since you got here by not thinking, it seems reasonable to expect that, in order to get out, you must start thinking."
-- Norton Juster, The Phantom Tollbooth

Maybe it's not that the DBA didn't think about the act before he or she did it; it might simply be that he or she didn't think far enough ahead about the results of such an act to make a reasonable choice. Changes to a database can affect downstream processing; failing to consider the ripple effect of such changes can be disastrous indeed. It isn't enough in this day and age to treat the database as a 'lone ranger'; many systems can depend on a single database and haphazard changes can stop them in their tracks.

There may be times when enabling row movement is necessary; changing a partitioning key is one of them. Granted, making such changes on a partitioned table will be part of a larger outage window where the local and global indexes can be maintained, so the impact will be positive, not negative. Absent such tasks (ones where row movement would be necessary) it's not recommended to enable row movement, as it will certainly break things, especially things no one was expecting to break because of a lack of knowledge of the affected systems.
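
For those occasions the steps are simple, and row movement should be switched back off as soon as the maintenance is done; a minimal sketch, using hypothetical table and column names, would look like this:


SQL> alter table sales_hist enable row movement;
SQL> update sales_hist set sale_region = 'WEST' where sale_region = 'W';
SQL> commit;
SQL> alter table sales_hist disable row movement;
SQL>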

It’s not always good to go travelling.

January 27, 2017

“Back It Up!!!”

Filed under: General — dfitzjarrell @ 14:21

“Expectations is the place you must always go to before you get to where you're going.” 
― Norton Juster, The Phantom Tollbooth   

In a recent post in the Oracle General database forum the following question was asked:


Hi,

I have 3 schema's of almost equal sizes, off which only two are creating the backup's (Hot backup). One of the schema's is not creating 
any backup, by not creating I mean, the backup file that is generated size is too small than the other two files.

The size of the other two backup files is almost 20 GB while the third one is only 54 Bytes !!!

Below are the commands I am using for backup,

alter tablespace SCHEMA begin backup; ( DB level )
tar cpvf - SCHEMA_TS | compress -f > backup.tar.Z ( OS level )

The DB (Oracle Database 11g) is in Archive log mode and no error is being thrown while running the above commands.

Could you please help me in solving this issue.

Any reference related to this would also be of great help.

Thanks in advance !!

There are issues with that sequence of statements, the first being calling it a 'backup' at all. It's highly likely that, after the tablespace files are restored, the recovery will fail and the database will be left in an unusable state. The obvious omission is the archivelogs; nowhere in that command sequence is there any statement using tar to copy the archivelogs generated before, during and after that 'backup' completes. Apparently the entire script was not posted to the thread, so any additional steps it might execute were not available to view. Since no recovery testing is reported (if such a script exists its contents were not presented) it's very possible that this 'backup' is taken on faith, and unfortunately faith isn't going to be of much help here.

Yet another problem is the lack of any query to determine the actual datafiles associated with the given tablespace; a ‘backup’ missing that important information means that not all required datafiles will be copied, making the tablespace incomplete and Oracle unable to recover it. This again leads to a down database with no hope of opening.
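
At a minimum any file-copy 'backup' needs the complete list of datafiles behind the tablespace being copied; a simple dictionary query (using the tablespace name from the posted commands) provides it:


SQL> select file_name
  2  from dba_data_files
  3  where tablespace_name = 'SCHEMA';
SQL>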

It was suggested several times in the thread that the poster stop using this ‘backup’ and move to RMAN to create dependable, reliable and recoverable backups. Why this method was in use was explained with this post:


I am new to Oracle DB and the guy who worked before me, wrote a (backup) script where he just created a tar of the table space files.

which leads one to wonder how this DBA thought he or she would restore these tablespaces to a useful and usable state should the time come. The poster added:


I want to use RMAN but thought of solving this issuse first (worst case scenario) and then create a new backup script using RMAN.

Honestly this is not the problem that needs to be solved; the problem is generating a reliable backup, and RMAN has been proven time and again as the tool for that job. Further discussion led to the realization that not all files were being sent to tar, which explained the size discrepancy but didn't truly address the recoverability issue. Anyone can take a so-called 'backup' using any number of tools and operating system utilities; it's restoring and recovering from those 'backups' that tells the tale of success or failure, and failure in restoring and recovering a production database isn't an option.
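
Moving to RMAN doesn't have to be a large project, either; a minimal sketch of a backup that also captures the archivelogs (run from the RMAN prompt against a database in archivelog mode, as the poster's is) could be as short as:


RMAN> backup database plus archivelog;
RMAN> list backup summary;

From there restore and recovery can, and should, be tested before the old script is retired.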

Sometimes you don’t get what you expect.

December 21, 2016

“Do You Have A Key?”

Filed under: General — dfitzjarrell @ 11:27

“Don't you know anything at all about numbers?"
"Well, I don't think they're very important," snapped Milo, too embarrassed to admit the truth.
"NOT IMPORTANT!" roared the Dodecahedron, turning red with fury. "Could you have tea for two without the two — or three blind mice 
without the three? Would there be four corners of the earth if there weren't a four? And how would you sail the seven seas without a seven?"
"All I meant was—" began Milo, but the Dodecahedron, overcome with emotion and shouting furiously, carried right on.
"If you had high hopes, how would you know how high they were? And did you know that narrow escapes come in all different widths? 
Would you travel the whole wide world without ever knowing how wide it was? And how could you do anything at long last," he concluded, 
waving his arms over his head, "without knowing how long the last was? Why, numbers are the most beautiful and valuable things in the world. 
Just follow me and I'll show you." He turned on his heel and stalked off into the cave.” 
-- Norton Juster, The Phantom Tollbooth

How best to generate a primary key is a discussion that never seems to get an answer suitable to everyone. Sequences are a popular choice, as are other forms of generated or "artificial" keys. In addition Oracle provides a function named sys_guid() that generates unique identifiers usable for a number of purposes, one of which is, as one might expect, a generated primary key. Discussions in the Oracle forums occasionally weigh the merits and issues of using such global unique identifiers as primary keys; one current thread asked whether sys_guid() was faster or slower than a sequence, and on a Linux system generating sys_guid() values was shown to be faster. The example code, slightly modified, is presented here having been run on Oracle 12.1.0.2 on Windows. Both 12c configurations (standard and container) were used to see if any differences appeared, and since both runs produced similar results only the non-container results are provided here. Timing was set on to record and display the elapsed time for each set of serial tests; parallel tests were also completed, with log table entries reporting the elapsed time for those runs. Separate tables were used for each set of tests; the results are shown below. The test begins with a table and a sequence created with the default settings:


SQL> -- default 20 cache
SQL> create sequence seq1;

Sequence created.

SQL>
SQL> create table t_seq
  2  	 ( id	  number(9) primary key
  3  	 , filler varchar2(1000)
  4  	 );

Table created.

SQL>

The next step inserts 999,999 records into the t_seq table; the execution plan and run statistics are shown below:


SQL> insert into t_seq
  2    select seq1.nextval ,'sdfsf' from dual connect by level < 1000000;

999999 rows created.

Elapsed: 00:00:10.19

Execution Plan
----------------------------------------------------------
Plan hash value: 3365622274

--------------------------------------------------------------------------------
| Id  | Operation                      | Name  | Rows  | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | INSERT STATEMENT               |       |     1 |     2   (0)| 00:00:01 |
|   1 |  LOAD TABLE CONVENTIONAL       | T_SEQ |       |            |          |
|   2 |   SEQUENCE                     | SEQ1  |       |            |          |
|*  3 |    CONNECT BY WITHOUT FILTERING|       |       |            |          |
|   4 |     FAST DUAL                  |       |     1 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter(LEVEL<1000000)


Statistics
----------------------------------------------------------
      50149  recursive calls
     227827  db block gets
      59085  consistent gets
          0  physical reads
  113671864  redo size
        855  bytes sent via SQL*Net to client
        893  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          5  sorts (memory)
          0  sorts (disk)
     999999  rows processed

SQL>
SQL> drop table t_seq;

Table dropped.

SQL> drop sequence seq1;

Sequence dropped.

SQL>

The amount of redo generated is large, a result of using the sequence. Another sequence test was executed using a sequence created with a larger cache value. Before each run the user was re-connected to reset the session statistics:


SQL> connect bing/!@#!@#
Connected.
SQL> create sequence seq1 cache 10000;

Sequence created.

SQL>
SQL> create table t_seq
  2  	 ( id	  number(9) primary key
  3  	 , filler varchar2(1000)
  4  	 );

Table created.

SQL>

The same insert statement was executed using the sequence having the larger cache; the execution plan and session statistics are shown below:


SQL> insert into t_seq
  2    select seq1.nextval ,'sdfsf' from dual connect by level < 1000000;

999999 rows created.

Elapsed: 00:00:05.24

Execution Plan
----------------------------------------------------------
Plan hash value: 3365622274

--------------------------------------------------------------------------------
| Id  | Operation                      | Name  | Rows  | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | INSERT STATEMENT               |       |     1 |     2   (0)| 00:00:01 |
|   1 |  LOAD TABLE CONVENTIONAL       | T_SEQ |       |            |          |
|   2 |   SEQUENCE                     | SEQ1  |       |            |          |
|*  3 |    CONNECT BY WITHOUT FILTERING|       |       |            |          |
|   4 |     FAST DUAL                  |       |     1 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter(LEVEL<1000000)


Statistics
----------------------------------------------------------
        249  recursive calls
      77911  db block gets
       9188  consistent gets
          1  physical reads
   79744836  redo size
        854  bytes sent via SQL*Net to client
        893  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          5  sorts (memory)
          0  sorts (disk)
     999999  rows processed

SQL>

Using the larger sequence cache reduced the redo size by 33,927,028 bytes, which cut the execution time roughly in half. On to the sys_guid() part of the serial testing, with a new table created and a new connection established:


SQL> connect bing/!@#!@#
Connected.
SQL> create table t_raw
  2  	 ( id	  raw(16) primary key
  3  	 , filler varchar2(1000)
  4  	 );

Table created.

SQL>
SQL> insert into t_raw
  2    select sys_guid(),'sdfsf' from dual connect by level < 1000000;

999999 rows created.

Elapsed: 00:00:54.15

Execution Plan
----------------------------------------------------------
Plan hash value: 1236776825

-------------------------------------------------------------------------------
| Id  | Operation                     | Name  | Rows  | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | INSERT STATEMENT              |       |     1 |     2   (0)| 00:00:01 |
|   1 |  LOAD TABLE CONVENTIONAL      | T_RAW |       |            |          |
|*  2 |   CONNECT BY WITHOUT FILTERING|       |       |            |          |
|   3 |    FAST DUAL                  |       |     1 |     2   (0)| 00:00:01 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(LEVEL<1000000)


Statistics
----------------------------------------------------------
       1442  recursive calls
    2956342  db block gets
      23736  consistent gets
         13  physical reads
  375573628  redo size
        854  bytes sent via SQL*Net to client
        890  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
         34  sorts (memory)
          0  sorts (disk)
     999999  rows processed

SQL>

A benefit of using the sequence with the larger cache size is that the redo generated is less than half that of the sys_guid() run. Additionally, on Windows, the sys_guid() insert took more than ten times longer than the insert using a sequence with a large cache value. Returning to the default cache size, a PL/SQL loop is now used to fetch the generated sequence values; since serveroutput is not turned on, the values aren't displayed and the time to display them isn't included in the execution time. The elapsed time to run the block is found at the end of the execution, along with the redo statistics for the session:



SQL> connect bing/!@#!@#
Connected.
SQL> create sequence seq1 ;

Sequence created.

SQL>
SQL> set timing on
SQL> declare
  2    x number(38);
  3    function sf return number is
  4    begin
  5  	 return seq1.nextval;
  6    end;
  7  begin
  8    for i in 1..100000 loop
  9  	 x := sf;
 10    end loop;
 11  end;
 12  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:03.23
SQL>
SQL> select m.statistic#, n.name, m.value
  2  from v$mystat m, v$statname n
  3  where m.statistic# = n.statistic#
  4  and n.name like '%redo%'
  5  and m.value > 0;

STATISTIC# NAME                                                                  VALUE
---------- ---------------------------------------------------------------- ----------
       257 redo entries                                                          10014
       258 redo size                                                           3620676
       302 redo subscn max counts                                                    1
       307 redo synch time (usec)                                                  251
       308 redo synch time overhead (usec)                                         553
       309 redo synch time overhead count (  2ms)                                    2
       314 redo synch writes                                                         3
       323 redo write info find                                                      2

8 rows selected.

Elapsed: 00:00:00.00

SQL>

The reported redo size is smaller, but there were no inserts into a table performed in this test. A similar test was run using a sequence with a cache value of 10000:


SQL> drop sequence seq1;

Sequence dropped.

SQL> connect bing/!@#!@#
Connected.
SQL> create sequence seq1 cache 10000;

Sequence created.

SQL>
SQL> set timing on
SQL> declare
  2    x number(38);
  3    function sf return number is
  4    begin
  5  	 return seq1.nextval;
  6    end;
  7  begin
  8    for i in 1..100000 loop
  9  	 x := sf;
 10    end loop;
 11  end;
 12  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:02.19
SQL>
SQL> select m.statistic#, n.name, m.value
  2  from v$mystat m, v$statname n
  3  where m.statistic# = n.statistic#
  4  and n.name like '%redo%'
  5  and m.value > 0;

STATISTIC# NAME                                                                  VALUE
---------- ---------------------------------------------------------------- ----------
       257 redo entries                                                             34
       258 redo size                                                             11940
       307 redo synch time (usec)                                                  110
       308 redo synch time overhead (usec)                                      303802
       309 redo synch time overhead count (  2ms)                                    1
       313 redo synch time overhead count (inf)                                      1
       314 redo synch writes                                                         3
       323 redo write info find                                                      2

8 rows selected.

Elapsed: 00:00:00.00
SQL>

As in the prior test, the redo statistics show a smaller redo size for the larger cache. On to the sys_guid() test:


SQL> connect bing/!@#!@#
Connected.
SQL> declare
  2    x raw(16);
  3    function sf return varchar2 is
  4    begin
  5  	 return sys_guid();
  6    end;
  7  begin
  8    for i in 1..100000 loop
  9  	 x := sf;
 10    end loop;
 11  end;
 12  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:04.70
SQL>
SQL> select m.statistic#, n.name, m.value
  2  from v$mystat m, v$statname n
  3  where m.statistic# = n.statistic#
  4  and n.name like '%redo%'
  5  and m.value > 0;

STATISTIC# NAME                                                                  VALUE
---------- ---------------------------------------------------------------- ----------
       257 redo entries                                                              6
       258 redo size                                                              1476
       302 redo subscn max counts                                                    1
       307 redo synch time (usec)                                                    1
       308 redo synch time overhead (usec)                                         575
       309 redo synch time overhead count (  2ms)                                    1
       314 redo synch writes                                                         1
       323 redo write info find                                                      1

8 rows selected.

Elapsed: 00:00:00.00
SQL>

Now, absent the insert, the redo generation is much smaller. The execution time, however, is at least twice as long as it was for the large-cache sequence, so on a Windows-based Oracle installation using a sequence takes much less time than using sys_guid() calls. Parallel execution might be faster and could reverse these results, so further testing is necessary. These tests begin with the sequence created with the default cache value but starting at 100000000 (to more accurately reflect real production conditions) and with multiple source tables created with the CACHE option, which places their blocks at the most-recently-used end of the buffer cache to speed access and delay aging:


SQL> -- SEQUENCE MULTI SESSION TEST
SQL> drop table t_seq;

Table dropped.

Elapsed: 00:00:00.02
SQL> drop table tmp1;

Table dropped.

Elapsed: 00:00:00.01
SQL> drop table tmp2;

Table dropped.

Elapsed: 00:00:00.01
SQL> drop table tmp3;

Table dropped.

Elapsed: 00:00:00.01
SQL> drop table tmp4;

Table dropped.

Elapsed: 00:00:00.01
SQL>
SQL> create table tmp1 cache as select 1 dummy from dual connect by level < 1000000;

Table created.

SQL> create table tmp2 cache as select 1 dummy from tmp1;

Table created.

Elapsed: 00:00:00.54
SQL> create table tmp3 cache as select 1 dummy from tmp1;

Table created.

Elapsed: 00:00:00.61
SQL> create table tmp4 cache as select 1 dummy from tmp1;

Table created.

Elapsed: 00:00:00.49
SQL>
SQL> drop sequence seq1 ;

Sequence dropped.

Elapsed: 00:00:00.00
SQL> create sequence seq1 start with 100000000 ;

Sequence created.

Elapsed: 00:00:00.00
SQL>
SQL> create table t_seq
  2  	 ( id	  number(9) primary key
  3  	 , filler varchar2(1000)
  4  	 );

Table created.

Elapsed: 00:00:00.01
SQL>
SQL> alter system switch logfile;

System altered.

Elapsed: 00:00:01.29
SQL> alter system checkpoint;

System altered.

Elapsed: 00:00:00.20
SQL> alter system flush buffer_cache;

System altered.

Elapsed: 00:00:00.13
SQL>
SQL> select /*+ full(tmp1) */ count(*) from tmp1;

  COUNT(*)
----------
    999999

Elapsed: 00:00:00.05
SQL> select /*+ full(tmp2) */ count(*) from tmp2;

  COUNT(*)
----------
    999999

Elapsed: 00:00:00.04
SQL> select /*+ full(tmp3) */ count(*) from tmp3;

  COUNT(*)
----------
    999999

Elapsed: 00:00:00.04
SQL> select /*+ full(tmp4) */ count(*) from tmp4;

  COUNT(*)
----------
    999999

Elapsed: 00:00:00.04
SQL>
SQL> drop table tmp_log;

Table dropped.

Elapsed: 00:00:00.04
SQL> create table tmp_log(mydata varchar2(4000), optime timestamp);

Table created.

Elapsed: 00:00:00.01
SQL>
SQL> create or replace PROCEDURE    sp_log(p varchar2) as
  2    PRAGMA AUTONOMOUS_TRANSACTION;
  3  begin
  4    insert into tmp_log values (p , systimestamp);
  5    commit;
  6  end;
  7  /

Procedure created.

Elapsed: 00:00:00.03
SQL>
SQL> show errors
No errors.
SQL>
SQL> create or replace procedure sp_test_seq(p number) as
  2  begin
  3    sp_log('JOB ' || p || ' BEGIN');
  4
  5    if p = 1 then
  6  	 insert  into t_seq
  7  	   select seq1.nextval ,'sdfsf' from tmp1;
  8    elsif p = 2 then
  9  	 insert into t_seq
 10  	   select seq1.nextval ,'sdfsf' from tmp2;
 11    elsif p = 3 then
 12  	 insert  into t_seq
 13  	   select seq1.nextval ,'sdfsf' from tmp3;
 14    elsif p = 4 then
 15  	 insert into t_seq
 16  	   select seq1.nextval ,'sdfsf' from tmp4;
 17    end if;
 18    commit;
 19
 20    sp_log('JOB ' || p || ' END');
 21  end;
 22  /

Procedure created.

Elapsed: 00:00:00.02
SQL>
SQL> show errors
No errors.
SQL>
SQL> declare
  2    x_time date := sysdate + 1/1440;
  3  begin
  4
  5
  6    dbms_scheduler.create_job(job_name => 'TEST_SEQ1',
  7  				 job_type => 'PLSQL_BLOCK',
  8  				 job_action => 'begin sp_test_seq(1); end;',
  9  				 enabled=> true,
 10  				 start_date=> x_time
 11  			       );
 12    dbms_scheduler.create_job(job_name => 'TEST_SEQ2',
 13  				 job_type => 'PLSQL_BLOCK',
 14  				 job_action => 'begin sp_test_seq(2); end;',
 15  				 enabled=> true,
 16  				 start_date=> x_time
 17  			       );
 18    dbms_scheduler.create_job(job_name => 'TEST_SEQ3',
 19  				 job_type => 'PLSQL_BLOCK',
 20  				 job_action => 'begin sp_test_seq(3); end;',
 21  				 enabled=> true,
 22  				 start_date=> x_time
 23  			       );
 24    dbms_scheduler.create_job(job_name => 'TEST_SEQ4',
 25  				 job_type => 'PLSQL_BLOCK',
 26  				 job_action => 'begin sp_test_seq(4); end;',
 27  				 enabled=> true,
 28  				 start_date=> x_time
 29  			       );
 30  end;
 31  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.02
SQL>
SQL> select job_name, start_date from user_scheduler_jobs where job_name like 'TEST%';

JOB_NAME     START_DATE
------------ ---------------------------------------------------------------------------
TEST_SEQ1    27-NOV-16 01.46.47.000000 PM -07:00
TEST_SEQ2    27-NOV-16 01.46.47.000000 PM -07:00
TEST_SEQ3    27-NOV-16 01.46.47.000000 PM -07:00
TEST_SEQ4    27-NOV-16 01.46.47.000000 PM -07:00

Elapsed: 00:00:00.00
SQL>
SQL> exec dbms_lock.sleep(120)

PL/SQL procedure successfully completed.

Elapsed: 00:02:00.00
SQL>
SQL> select * from
  2  (select mydata, optime, lead(optime) over (order by mydata) optime_end, lead(optime) over (order by mydata) - optime elapsed
  3  from tmp_log)
  4  where mydata like '%BEGIN%'
  5  /

MYDATA          OPTIME                       OPTIME_END                   ELAPSED
--------------- ---------------------------- ---------------------------- ----------------------------
JOB 1 BEGIN     27-NOV-16 01.46.50.233000 PM 27-NOV-16 01.47.28.113000 PM +000000000 00:00:37.880000
JOB 2 BEGIN     27-NOV-16 01.46.50.234000 PM 27-NOV-16 01.47.27.904000 PM +000000000 00:00:37.670000
JOB 3 BEGIN     27-NOV-16 01.46.50.235000 PM 27-NOV-16 01.47.28.169000 PM +000000000 00:00:37.934000
JOB 4 BEGIN     27-NOV-16 01.46.50.244000 PM 27-NOV-16 01.47.28.121000 PM +000000000 00:00:37.877000

Elapsed: 00:00:00.00
SQL>

Parallel execution takes longer on Windows, possibly because of the underlying architecture, but each of the four concurrent inserts consumed about 38 seconds. To see if concurrent processes might fare better with sys_guid() we set up the test again, this time using the sys_guid() call:


SQL> -- SYS_GUID MULTI SESSION TEST
SQL> drop table t_raw;

Table dropped.

Elapsed: 00:00:00.01
SQL> drop table tmp1;

Table dropped.

Elapsed: 00:00:00.01
SQL> drop table tmp2;

Table dropped.

Elapsed: 00:00:00.01
SQL> drop table tmp3;

Table dropped.

Elapsed: 00:00:00.01
SQL> drop table tmp4;

Table dropped.

Elapsed: 00:00:00.01
SQL>
SQL> create table tmp1 cache as select 1 dummy from dual connect by level < 1000000;

Table created.

SQL> create table tmp2 cache as select 1 dummy from tmp1;

Table created.

Elapsed: 00:00:00.62
SQL> create table tmp3 cache as select 1 dummy from tmp1;

Table created.

Elapsed: 00:00:00.57
SQL> create table tmp4 cache as select 1 dummy from tmp1;

Table created.

Elapsed: 00:00:00.48
SQL>
SQL> create table t_raw
  2  	 ( id	  raw(16) primary key
  3  	 , filler varchar2(1000)
  4  	 );

Table created.

Elapsed: 00:00:00.01
SQL>
SQL> alter system switch logfile;

System altered.

Elapsed: 00:00:03.02
SQL> alter system checkpoint;

System altered.

Elapsed: 00:00:00.17
SQL> alter system flush buffer_cache;

System altered.

Elapsed: 00:00:00.34
SQL>
SQL> select /*+ full(tmp1) */ count(*) from tmp1; -- to make sure table is in buffer_cache
  2  select /*+ full(tmp2) */ count(*) from tmp2; -- to make sure table is in buffer_cache
  3  select /*+ full(tmp3) */ count(*) from tmp3; -- to make sure table is in buffer_cache
  4  select /*+ full(tmp4) */ count(*) from tmp4; -- to make sure table is in buffer_cache
  5
SQL> drop table tmp_log;

Table dropped.

Elapsed: 00:00:00.03
SQL> create table tmp_log(mydata varchar2(4000), optime timestamp);

Table created.

Elapsed: 00:00:00.01
SQL>
SQL> create or replace PROCEDURE    sp_log(p varchar2) as
  2    PRAGMA AUTONOMOUS_TRANSACTION;
  3  begin
  4    insert into tmp_log values (p , systimestamp);
  5    commit;
  6  end;
  7  /

Procedure created.

Elapsed: 00:00:00.03
SQL>
SQL> show errors
No errors.
SQL>
SQL> create or replace procedure sp_test_guid(p number) as
  2  begin
  3    sp_log('JOB ' || p || ' BEGIN');
  4
  5    if p = 1 then
  6  	 insert  into t_raw
  7  	   select sys_guid() ,'sdfsf' from tmp1;
  8    elsif p = 2 then
  9  	 insert  into t_raw
 10  	   select sys_guid() ,'sdfsf' from tmp2;
 11    elsif p = 3 then
 12  	 insert  into t_raw
 13  	   select sys_guid() ,'sdfsf' from tmp3;
 14    elsif p = 4 then
 15  	 insert into t_raw
 16  	   select sys_guid() ,'sdfsf' from tmp4;
 17    end if;
 18    commit;
 19
 20    sp_log('JOB ' || p || ' END');
 21  end;
 22  /

Procedure created.

Elapsed: 00:00:00.02
SQL>
SQL> show errors
No errors.
SQL>
SQL> declare
  2    x_time date := sysdate + 1/1440;
  3  begin
  4
  5    dbms_scheduler.create_job(job_name => 'TEST_GUID1',
  6  				 job_type => 'PLSQL_BLOCK',
  7  				 job_action => 'begin sp_test_guid(1); end;',
  8  				 enabled=> true,
  9  				 start_date=> x_time
 10  			       );
 11    dbms_scheduler.create_job(job_name => 'TEST_GUID2',
 12  				 job_type => 'PLSQL_BLOCK',
 13  				 job_action => 'begin sp_test_guid(2); end;',
 14  				 enabled=> true,
 15  				 start_date=> x_time
 16  			       );
 17    dbms_scheduler.create_job(job_name => 'TEST_GUID3',
 18  				 job_type => 'PLSQL_BLOCK',
 19  				 job_action => 'begin sp_test_guid(3); end;',
 20  				 enabled=> true,
 21  				 start_date=> x_time
 22  			       );
 23    dbms_scheduler.create_job(job_name => 'TEST_GUID4',
 24  				 job_type => 'PLSQL_BLOCK',
 25  				 job_action => 'begin sp_test_guid(4); end;',
 26  				 enabled=> true,
 27  				 start_date=> x_time
 28  			       );
 29  end;
 30  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.04
SQL>
SQL> select job_name, start_date from user_scheduler_jobs where job_name like 'TEST%';

JOB_NAME     START_DATE
------------ ---------------------------------------------------------------------------
TEST_GUID1   27-NOV-16 01.48.53.000000 PM -07:00
TEST_GUID2   27-NOV-16 01.48.53.000000 PM -07:00
TEST_GUID3   27-NOV-16 01.48.53.000000 PM -07:00
TEST_GUID4   27-NOV-16 01.48.53.000000 PM -07:00

Elapsed: 00:00:00.00
SQL>
SQL> exec dbms_lock.sleep(180);

PL/SQL procedure successfully completed.

Elapsed: 00:03:00.00
SQL>
SQL> select * from
  2  (select mydata, optime, lead(optime) over (order by mydata) optime_end, lead(optime) over (order by mydata) - optime elapsed
  3  from tmp_log)
  4  where mydata like '%BEGIN%'
  5  /

MYDATA          OPTIME                       OPTIME_END                   ELAPSED
--------------- ---------------------------- ---------------------------- ----------------------------
JOB 1 BEGIN     27-NOV-16 01.48.54.228000 PM 27-NOV-16 01.50.49.312000 PM +000000000 00:01:55.084000
JOB 2 BEGIN     27-NOV-16 01.48.54.236000 PM 27-NOV-16 01.50.46.200000 PM +000000000 00:01:51.964000
JOB 3 BEGIN     27-NOV-16 01.48.54.245000 PM 27-NOV-16 01.50.47.742000 PM +000000000 00:01:53.497000
JOB 4 BEGIN     27-NOV-16 01.48.54.267000 PM 27-NOV-16 01.50.48.966000 PM +000000000 00:01:54.699000

Elapsed: 00:00:00.00
SQL>
SQL> set timing off echo off linesize 80 trimspool off

Table dropped.


Table dropped.


Sequence dropped.

Generating sys_guid() values is not faster when run in parallel in a Windows environment; each process ran almost two minutes before completing, roughly three times longer than the parallel sequence executions and twice as long as the serial sys_guid() run.

The redo size for the sys_guid() inserts was consistent regardless of the operating system used (Windows or Linux) and was larger than that generated by the sequence-based inserts; absent any DML the sys_guid() call generated far less redo. Sequence cache size affects redo generation, as a larger cache generates a smaller amount of redo, and that redo generation in turn affects the execution time when a sequence is used. Without fail, on Windows, using sys_guid() takes longer. This is one area where testing on the operating system you are using is a must, since Oracle on Linux can, and does, perform much differently than Oracle on Windows. Before you decide to change your primary key strategy to sys_guid(), test to see how it performs; you may be surprised at the results.
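
If an existing sequence was created with the default cache, it doesn't need to be dropped and recreated to benefit; the cache can be altered in place (shown here with the test sequence's name for illustration):


SQL> alter sequence seq1 cache 10000;

Sequence altered.

SQL>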

It would seem that 42 isn’t the only important number.

November 21, 2016

Taking Things For Granted

Filed under: General — dfitzjarrell @ 09:54

"Sometimes I find the best way of getting from one place to another is simply to erase everything and begin again."
-- Norton Juster, The Phantom Tollbooth

In one of the Oracle forums a question was asked regarding revoking selected privileges from the DBA role. Unfortunately for the person posting the question the answer is a resounding “No”; granting a role grants all privileges assigned to that role and there is no “picking and choosing” as if you were in a cafeteria. Roles are designed (or should be, at least) to grant all necessary privileges a user would need to access objects that role will use. And roles supplied by Oracle are designed for the jobs they are named after, such as DBA. Changing the role affects functionality and can seriously impact those granted that role. Let’s look at that in a bit more detail.

Oracle provides pre-configured roles with every installation of the database, and the list can vary based on the options you choose to install. A partial list of these roles from 12.1.0.2 is shown below:


ROLE                           O
------------------------------ -
CONNECT                        Y
RESOURCE                       Y
DBA                            Y
AUDIT_ADMIN                    Y
AUDIT_VIEWER                   Y
SELECT_CATALOG_ROLE            Y
EXECUTE_CATALOG_ROLE           Y
DELETE_CATALOG_ROLE            Y
CAPTURE_ADMIN                  Y
EXP_FULL_DATABASE              Y
IMP_FULL_DATABASE              Y
CDB_DBA                        Y
PDB_DBA                        Y
RECOVERY_CATALOG_OWNER         Y
LOGSTDBY_ADMINISTRATOR         Y
DBFS_ROLE                      Y
GSMUSER_ROLE                   Y
AQ_ADMINISTRATOR_ROLE          Y
AQ_USER_ROLE                   Y
DATAPUMP_EXP_FULL_DATABASE     Y
DATAPUMP_IMP_FULL_DATABASE     Y
ADM_PARALLEL_EXECUTE_TASK      Y
PROVISIONER                    Y
XS_RESOURCE                    Y
XS_SESSION_ADMIN               Y
XS_NAMESPACE_ADMIN             Y
XS_CACHE_ADMIN                 Y
GATHER_SYSTEM_STATISTICS       Y
OPTIMIZER_PROCESSING_RATE      Y
GSMADMIN_ROLE                  Y
RECOVERY_CATALOG_USER          Y
EM_EXPRESS_BASIC               Y
EM_EXPRESS_ALL                 Y
SCHEDULER_ADMIN                Y
HS_ADMIN_SELECT_ROLE           Y
HS_ADMIN_EXECUTE_ROLE          Y
HS_ADMIN_ROLE                  Y
GLOBAL_AQ_USER_ROLE            Y
OEM_ADVISOR                    Y
OEM_MONITOR                    Y
XDBADMIN                       Y
XDB_SET_INVOKER                Y
AUTHENTICATEDUSER              Y
XDB_WEBSERVICES                Y
XDB_WEBSERVICES_WITH_PUBLIC    Y
XDB_WEBSERVICES_OVER_HTTP      Y
GSM_POOLADMIN_ROLE             Y
GDS_CATALOG_SELECT             Y
WM_ADMIN_ROLE                  Y
JAVAUSERPRIV                   Y
JAVAIDPRIV                     Y
JAVASYSPRIV                    Y
JAVADEBUGPRIV                  Y
EJBCLIENT                      Y
JMXSERVER                      Y
JAVA_ADMIN                     Y
JAVA_DEPLOY                    Y
CTXAPP                         Y
ORDADMIN                       Y
OLAP_XS_ADMIN                  Y
OLAP_DBA                       Y
OLAP_USER                      Y
SPATIAL_WFS_ADMIN              Y
WFS_USR_ROLE                   Y
SPATIAL_CSW_ADMIN              Y
CSW_USR_ROLE                   Y
LBAC_DBA                       Y
APEX_ADMINISTRATOR_ROLE        Y
APEX_GRANTS_FOR_NEW_USERS_ROLE Y
DV_SECANALYST                  Y
DV_MONITOR                     Y
DV_ADMIN                       Y
DV_OWNER                       Y
DV_ACCTMGR                     Y
DV_PUBLIC                      Y
DV_PATCH_ADMIN                 Y
DV_STREAMS_ADMIN               Y
DV_GOLDENGATE_ADMIN            Y
DV_XSTREAM_ADMIN               Y
DV_GOLDENGATE_REDO_ACCESS      Y
DV_AUDIT_CLEANUP               Y
DV_DATAPUMP_NETWORK_LINK       Y
DV_REALM_RESOURCE              Y
DV_REALM_OWNER                 Y

The 'O' header is for the ORACLE_MAINTAINED column, which indicates the role is supplied by Oracle. [This is a new column in the DBA_ROLES view for 12.1; earlier releases do not have this column in the view definition.] That list contains 84 different roles, all created when your database was created. What privileges do these roles have? That's a question answered by the ROLE_SYS_PRIVS and ROLE_TAB_PRIVS views; let's look at the DBA role and see what Oracle deems necessary system privileges for an effective DBA:


PRIVILEGE
----------------------------------------
CREATE SESSION
ALTER SESSION
DROP TABLESPACE
BECOME USER
DROP ROLLBACK SEGMENT
SELECT ANY TABLE
INSERT ANY TABLE
UPDATE ANY TABLE
DROP ANY INDEX
SELECT ANY SEQUENCE
CREATE ROLE
EXECUTE ANY PROCEDURE
ALTER PROFILE
CREATE ANY DIRECTORY
CREATE ANY LIBRARY
EXECUTE ANY LIBRARY
ALTER ANY INDEXTYPE
DROP ANY INDEXTYPE
DEQUEUE ANY QUEUE
EXECUTE ANY EVALUATION CONTEXT
EXPORT FULL DATABASE
CREATE RULE
ALTER ANY SQL PROFILE
ADMINISTER ANY SQL TUNING SET
CHANGE NOTIFICATION
DROP ANY EDITION
DROP ANY MINING MODEL
ALTER ANY MINING MODEL
ALTER ANY CUBE DIMENSION
CREATE CUBE
DROP ANY CUBE BUILD PROCESS
USE ANY SQL TRANSLATION PROFILE
CREATE PLUGGABLE DATABASE
ALTER ROLLBACK SEGMENT
DELETE ANY TABLE
ALTER DATABASE
FORCE ANY TRANSACTION
ALTER ANY PROCEDURE
DROP ANY TRIGGER
DROP ANY MATERIALIZED VIEW
UNDER ANY TYPE
ALTER ANY LIBRARY
CREATE DIMENSION
DEBUG ANY PROCEDURE
CREATE RULE SET
ALTER ANY RULE SET
ANALYZE ANY DICTIONARY
ALTER ANY EDITION
CREATE ANY ASSEMBLY
ALTER ANY CUBE
SELECT ANY CUBE
DROP ANY MEASURE FOLDER
RESTRICTED SESSION
CREATE TABLESPACE
ALTER TABLESPACE
CREATE USER
ALTER USER
LOCK ANY TABLE
CREATE VIEW
DROP ANY VIEW
GRANT ANY ROLE
CREATE TRIGGER
CREATE TYPE
EXECUTE ANY OPERATOR
CREATE ANY DIMENSION
ALTER ANY DIMENSION
CREATE ANY OUTLINE
ADMINISTER DATABASE TRIGGER
RESUMABLE
FLASHBACK ANY TABLE
CREATE ANY RULE SET
EXECUTE ANY RULE SET
IMPORT FULL DATABASE
EXECUTE ANY RULE
EXECUTE ANY PROGRAM
CREATE ANY EDITION
CREATE ASSEMBLY
ALTER ANY ASSEMBLY
CREATE CUBE DIMENSION
CREATE ANY CUBE BUILD PROCESS
UPDATE ANY CUBE DIMENSION
EM EXPRESS CONNECT
SET CONTAINER
ALTER ANY MEASURE FOLDER
CREATE ANY TABLE
CREATE ANY INDEX
CREATE ANY SEQUENCE
ALTER ANY ROLE
ANALYZE ANY
DROP ANY LIBRARY
CREATE ANY OPERATOR
CREATE INDEXTYPE
UNDER ANY TABLE
DROP ANY DIMENSION
SELECT ANY DICTIONARY
GRANT ANY OBJECT PRIVILEGE
CREATE EVALUATION CONTEXT
CREATE ANY EVALUATION CONTEXT
DROP ANY EVALUATION CONTEXT
CREATE ANY RULE
CREATE JOB
CREATE ANY JOB
CREATE MINING MODEL
INSERT ANY CUBE DIMENSION
DROP ANY CUBE
UPDATE ANY CUBE BUILD PROCESS
EXEMPT DML REDACTION POLICY
READ ANY TABLE
ALTER SYSTEM
AUDIT SYSTEM
CREATE ROLLBACK SEGMENT
DROP ANY TABLE
COMMENT ANY TABLE
REDEFINE ANY TABLE
CREATE CLUSTER
ALTER ANY INDEX
DROP PUBLIC DATABASE LINK
CREATE PROFILE
ALTER ANY MATERIALIZED VIEW
ALTER ANY TYPE
DROP ANY TYPE
UNDER ANY VIEW
EXECUTE ANY INDEXTYPE
DROP ANY CONTEXT
ALTER ANY OUTLINE
ADMINISTER RESOURCE MANAGER
MANAGE SCHEDULER
MANAGE FILE GROUP
CREATE ANY MINING MODEL
SELECT ANY MINING MODEL
CREATE ANY MEASURE FOLDER
DELETE ANY MEASURE FOLDER
CREATE ANY SQL TRANSLATION PROFILE
CREATE ANY CREDENTIAL
EXEMPT DDL REDACTION POLICY
SELECT ANY MEASURE FOLDER
SELECT ANY CUBE BUILD PROCESS
ALTER ANY CUBE BUILD PROCESS
CREATE TABLE
BACKUP ANY TABLE
CREATE ANY CLUSTER
DROP ANY SYNONYM
DROP PUBLIC SYNONYM
CREATE ANY VIEW
CREATE SEQUENCE
ALTER ANY SEQUENCE
FORCE TRANSACTION
CREATE PROCEDURE
CREATE ANY PROCEDURE
ALTER RESOURCE COST
DROP ANY DIRECTORY
CREATE ANY TYPE
ALTER ANY OPERATOR
CREATE ANY INDEXTYPE
ENQUEUE ANY QUEUE
ON COMMIT REFRESH
DEBUG CONNECT SESSION
DROP ANY RULE SET
EXECUTE ANY CLASS
MANAGE ANY FILE GROUP
EXECUTE ANY ASSEMBLY
EXECUTE ASSEMBLY
COMMENT ANY MINING MODEL
CREATE ANY CUBE DIMENSION
DELETE ANY CUBE DIMENSION
SELECT ANY CUBE DIMENSION
DROP ANY SQL TRANSLATION PROFILE
CREATE CREDENTIAL
ALTER ANY TABLE
DROP ANY CLUSTER
CREATE SYNONYM
CREATE PUBLIC SYNONYM
DROP ANY SEQUENCE
DROP ANY ROLE
AUDIT ANY
DROP ANY PROCEDURE
CREATE ANY TRIGGER
ALTER ANY TRIGGER
DROP PROFILE
GRANT ANY PRIVILEGE
CREATE LIBRARY
CREATE OPERATOR
DROP ANY OUTLINE
MERGE ANY VIEW
ADMINISTER SQL TUNING SET
UPDATE ANY CUBE
INSERT ANY MEASURE FOLDER
ADMINISTER SQL MANAGEMENT OBJECT
CREATE SQL TRANSLATION PROFILE
LOGMINING
MANAGE TABLESPACE
DROP USER
ALTER ANY CLUSTER
CREATE ANY SYNONYM
CREATE DATABASE LINK
CREATE PUBLIC DATABASE LINK
CREATE MATERIALIZED VIEW
CREATE ANY MATERIALIZED VIEW
EXECUTE ANY TYPE
DROP ANY OPERATOR
QUERY REWRITE
GLOBAL QUERY REWRITE
MANAGE ANY QUEUE
CREATE ANY CONTEXT
ALTER ANY EVALUATION CONTEXT
ALTER ANY RULE
DROP ANY RULE
ADVISOR
SELECT ANY TRANSACTION
DROP ANY SQL PROFILE
CREATE ANY SQL PROFILE
READ ANY FILE GROUP
CREATE EXTERNAL JOB
DROP ANY ASSEMBLY
DROP ANY CUBE DIMENSION
CREATE ANY CUBE
CREATE MEASURE FOLDER
CREATE CUBE BUILD PROCESS
ALTER ANY SQL TRANSLATION PROFILE
FLASHBACK ARCHIVE ADMINISTER

220 system privileges are required to be an effective DBA for an Oracle database, an impressive list indeed. Of course, once a user is granted the DBA role he or she gets ALL of those system privileges, and since the role is the only direct grant conveying those privileges the list cannot be modified by selectively revoking one or more of them:


SQL> grant DBA to blorpo identified by gussyflorp;

Grant succeeded.

SQL> revoke select any transaction from blorpo;
revoke select any transaction from blorpo
*
ERROR at line 1:
ORA-01952: system privileges not granted to 'BLORPO'


SQL>

Yes, the user DOES have that privilege, albeit indirectly. It's indirect because it's the ROLE that was granted that privilege, among others, and no attempt was made to revoke the role from the user. It's a 'package deal': you grant a role to a user and it's all or nothing, and even though it behaves as if the user has the privileges granted directly, that's not the case.
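
The indirection is easy to verify from the data dictionary; a couple of quick queries (a sketch using the same user and role from the example) show where the privilege actually lives:


SQL> -- the user holds the role, not the individual privilege
SQL> select granted_role from dba_role_privs where grantee = 'BLORPO';
SQL> -- the privilege is granted to the role
SQL> select privilege from role_sys_privs
  2  where role = 'DBA' and privilege = 'SELECT ANY TRANSACTION';
SQL>

The first query returns DBA and the second returns the privilege, confirming that the grant path runs through the role.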

You could, of course, get all of the privileges the DBA role has (both system and table) and create a script to grant each individual privilege to the desired user. It would be a LONG script, and such grants require attention from the DBA granting them to ensure they are both current and not being abused. In that case individual privileges can be revoked, but it would be a maintenance nightmare for the DBA to keep track of which user has which set of privileges. Another option presents itself: creating a new role with only the privileges the DBA wants to assign to a user. The privilege list for DBA could be shortened to create, say, a DB_OPERATOR or DBO role. The privileges chosen would depend upon the job description; creating such a role would make granting such access easier and make maintenance simpler, since when the role grants change those granted the role have their privileges adjusted the next time they log in.
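
As a rough sketch of that approach (the role name, privilege list and user are only examples and would be driven by the actual job description), the custom role route might look like this:


SQL> create role db_operator;

Role created.

SQL> grant create session, alter system, select any dictionary to db_operator;

Grant succeeded.

SQL> grant db_operator to blorpo;

Grant succeeded.

SQL>

Adjusting the role later is a single set of grants or revokes, and everyone holding the role picks up the change at their next login.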

Roles make granting privileges very easy and straightforward, provided the role is properly created and maintained. Roles also make it impossible to “pick and choose” privileges a user should have. It’s an “all or nothing” proposition and there’s no way around that when using a pre-defined role.

Sometimes you just need to begin again.

November 17, 2016

“That STILL Ain’t Right”

Filed under: General — dfitzjarrell @ 07:45

"If you want sense, you'll have to make it yourself."
- Norton Juster, The Phantom Tollbooth

In a recent post to an Oracle forum a query was presented and a tuning request was made; it appears that the query was taking 20 hours to complete. Through further questions and responses it was discovered that the dates were being stored in a VARCHAR2 column and implicit date conversion was being relied upon. To show how much of a problem this can cause the following example was created; notice the results returned and the execution plans generated for each run of the query, once with the table defined in the manner the original poster described and once with the date column using the DATE datatype. We begin:


SQL>
SQL> create table datetst(
  2  myid    number,
  3  res_value	varchar2(3),
  4  mydt    varchar2(20));

Table created.

SQL>
SQL> create index datetest_idx on datetst(res_value);

Index created.

SQL>
SQL> begin
  2  	     for i in 1..10000 loop
  3  		     if mod(i,23) = 0 then
  4  		     insert into datetst
  5  		     values(i, 'PUR', to_char(sysdate+i, 'MM/DD/RRRR'));
  6  		     else
  7  		     insert into datetst
  8  		     values(i, 'BID', to_char(sysdate+i, 'MM/DD/RRRR'));
  9  		     end if;
 10  	     end loop;
 11
 12  	     commit;
 13  end;
 14  /

PL/SQL procedure successfully completed.

SQL>

Let’s now run a query using conditions similar to the posted query and see what Oracle returns:


SQL>
SQL>
SQL> select *
  2  from datetst
  3  where mydt <= sysdate +230;
where mydt <= sysdate +230
      *
ERROR at line 3:
ORA-01843: not a valid month


SQL>

As expected the implicit date conversion failed; modifying the query so both sides of the comparison are character strings produces 'interesting' results:


SQL>
SQL> select *
  2  from datetst
  3  where res_value = 'PUR'
  4  and mydt <= to_char(sysdate +230, 'MM/DD/RRRR');

      MYID RES MYDT
---------- --- --------------------
       920 PUR 05/26/2019
       943 PUR 06/18/2019
      1150 PUR 01/11/2020
      1173 PUR 02/03/2020
      1196 PUR 02/26/2020
      1219 PUR 03/20/2020
      1242 PUR 04/12/2020
      1265 PUR 05/05/2020
      1288 PUR 05/28/2020
      1311 PUR 06/20/2020
        46 PUR 01/02/2017
        69 PUR 01/25/2017
        92 PUR 02/17/2017
       115 PUR 03/12/2017
       138 PUR 04/04/2017
       161 PUR 04/27/2017
       184 PUR 05/20/2017
       207 PUR 06/12/2017
       230 PUR 07/05/2017
       414 PUR 01/05/2018
       437 PUR 01/28/2018
       460 PUR 02/20/2018
       483 PUR 03/15/2018
       506 PUR 04/07/2018
       529 PUR 04/30/2018
       552 PUR 05/23/2018
       575 PUR 06/15/2018
       782 PUR 01/08/2019
       805 PUR 01/31/2019
       828 PUR 02/23/2019
       851 PUR 03/18/2019
       874 PUR 04/10/2019
       897 PUR 05/03/2019
      2622 PUR 01/22/2024
      2645 PUR 02/14/2024
      2668 PUR 03/08/2024
      2691 PUR 03/31/2024
      2714 PUR 04/23/2024
      3335 PUR 01/04/2026
      3358 PUR 01/27/2026
      3381 PUR 02/19/2026
      3404 PUR 03/14/2026
      3427 PUR 04/06/2026
      3450 PUR 04/29/2026
      3473 PUR 05/22/2026
      3496 PUR 06/14/2026
      3703 PUR 01/07/2027
      3726 PUR 01/30/2027
      3749 PUR 02/22/2027
      3772 PUR 03/17/2027
      3795 PUR 04/09/2027
      3818 PUR 05/02/2027
      3841 PUR 05/25/2027
      3864 PUR 06/17/2027
      2737 PUR 05/16/2024
      2760 PUR 06/08/2024
      2783 PUR 07/01/2024
      2967 PUR 01/01/2025
      2990 PUR 01/24/2025
      3013 PUR 02/16/2025
      3036 PUR 03/11/2025
      3059 PUR 04/03/2025
      3082 PUR 04/26/2025
      3105 PUR 05/19/2025
      3128 PUR 06/11/2025
      3151 PUR 07/04/2025
      1518 PUR 01/13/2021
      1541 PUR 02/05/2021
      1564 PUR 02/28/2021
      1587 PUR 03/23/2021
      1610 PUR 04/15/2021
      1633 PUR 05/08/2021
      1656 PUR 05/31/2021
      1679 PUR 06/23/2021
      1886 PUR 01/16/2022
      1909 PUR 02/08/2022
      1932 PUR 03/03/2022

      MYID RES MYDT
---------- --- --------------------
      1955 PUR 03/26/2022
      1978 PUR 04/18/2022
      2001 PUR 05/11/2022
      2024 PUR 06/03/2022
      2047 PUR 06/26/2022
      2254 PUR 01/19/2023
      2277 PUR 02/11/2023
      2300 PUR 03/06/2023
      2323 PUR 03/29/2023
      2346 PUR 04/21/2023
      2369 PUR 05/14/2023
      2392 PUR 06/06/2023
      2415 PUR 06/29/2023
      5543 PUR 01/21/2032
      5566 PUR 02/13/2032
      5589 PUR 03/07/2032
      5612 PUR 03/30/2032
      5635 PUR 04/22/2032
      5658 PUR 05/15/2032
      5681 PUR 06/07/2032
      5704 PUR 06/30/2032
      5911 PUR 01/23/2033
      5934 PUR 02/15/2033
      5957 PUR 03/10/2033
      5980 PUR 04/02/2033
      6003 PUR 04/25/2033
      6026 PUR 05/18/2033
      4071 PUR 01/10/2028
      4094 PUR 02/02/2028
      4117 PUR 02/25/2028
      4140 PUR 03/19/2028
      4163 PUR 04/11/2028
      4186 PUR 05/04/2028
      4209 PUR 05/27/2028
      4232 PUR 06/19/2028
      4439 PUR 01/12/2029
      4462 PUR 02/04/2029
      4485 PUR 02/27/2029
      4508 PUR 03/22/2029
      4531 PUR 04/14/2029
      4554 PUR 05/07/2029
      4577 PUR 05/30/2029
      4600 PUR 06/22/2029
      4807 PUR 01/15/2030
      4830 PUR 02/07/2030
      4853 PUR 03/02/2030
      4876 PUR 03/25/2030
      4899 PUR 04/17/2030
      4922 PUR 05/10/2030
      4945 PUR 06/02/2030
      4968 PUR 06/25/2030
      5175 PUR 01/18/2031
      5198 PUR 02/10/2031
      5221 PUR 03/05/2031
      5244 PUR 03/28/2031
      5267 PUR 04/20/2031
      5290 PUR 05/13/2031
      5313 PUR 06/05/2031
      5336 PUR 06/28/2031
      6992 PUR 01/09/2036
      7015 PUR 02/01/2036
      7038 PUR 02/24/2036
      7061 PUR 03/18/2036
      7084 PUR 04/10/2036
      7107 PUR 05/03/2036
      7130 PUR 05/26/2036
      7153 PUR 06/18/2036
      7866 PUR 06/01/2038
      7889 PUR 06/24/2038
      8096 PUR 01/17/2039
      8119 PUR 02/09/2039
      8142 PUR 03/04/2039
      8165 PUR 03/27/2039
      8188 PUR 04/19/2039
      8211 PUR 05/12/2039
      8234 PUR 06/04/2039
      8257 PUR 06/27/2039

      MYID RES MYDT
---------- --- --------------------
      7360 PUR 01/11/2037
      7383 PUR 02/03/2037
      7406 PUR 02/26/2037
      7429 PUR 03/21/2037
      7452 PUR 04/13/2037
      7475 PUR 05/06/2037
      7498 PUR 05/29/2037
      7521 PUR 06/21/2037
      7728 PUR 01/14/2038
      7751 PUR 02/06/2038
      7774 PUR 03/01/2038
      7797 PUR 03/24/2038
      7820 PUR 04/16/2038
      7843 PUR 05/09/2038
      6049 PUR 06/10/2033
      6072 PUR 07/03/2033
      6256 PUR 01/03/2034
      6279 PUR 01/26/2034
      6302 PUR 02/18/2034
      6325 PUR 03/13/2034
      6348 PUR 04/05/2034
      6371 PUR 04/28/2034
      6394 PUR 05/21/2034
      6417 PUR 06/13/2034
      6624 PUR 01/06/2035
      6647 PUR 01/29/2035
      6670 PUR 02/21/2035
      6693 PUR 03/16/2035
      6716 PUR 04/08/2035
      6739 PUR 05/01/2035
      6762 PUR 05/24/2035
      6785 PUR 06/16/2035
      9982 PUR 03/17/2044
      8464 PUR 01/20/2040
      8487 PUR 02/12/2040
      8510 PUR 03/06/2040
      8533 PUR 03/29/2040
      8556 PUR 04/21/2040
      8579 PUR 05/14/2040
      8602 PUR 06/06/2040
      8625 PUR 06/29/2040
      8832 PUR 01/22/2041
      8855 PUR 02/14/2041
      8878 PUR 03/09/2041
      8901 PUR 04/01/2041
      8924 PUR 04/24/2041
      8947 PUR 05/17/2041
      8970 PUR 06/09/2041
      8993 PUR 07/02/2041
      9177 PUR 01/02/2042
      9200 PUR 01/25/2042
      9223 PUR 02/17/2042
      9246 PUR 03/12/2042
      9269 PUR 04/04/2042
      9292 PUR 04/27/2042
      9315 PUR 05/20/2042
      9338 PUR 06/12/2042
      9545 PUR 01/05/2043
      9568 PUR 01/28/2043
      9591 PUR 02/20/2043
      9614 PUR 03/15/2043
      9637 PUR 04/07/2043
      9660 PUR 04/30/2043
      9683 PUR 05/23/2043
      9706 PUR 06/15/2043
      9913 PUR 01/08/2044
      9936 PUR 01/31/2044
      9959 PUR 02/23/2044

222 rows selected.

SQL>

The query should have returned no more than 10 rows that met the criteria, and it returned 222. Looking at the plan we see:


SQL>
SQL> select * From table(dbms_xplan.display_cursor());

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  6qatrtphp5wjt, child number 0
-------------------------------------
select * from datetst where res_value = 'PUR' and mydt <=
to_char(sysdate +230, 'MM/DD/RRRR')

Plan hash value: 3255216368

----------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |              |       |       |     2 (100)|          |
|*  1 |  TABLE ACCESS BY INDEX ROWID BATCHED| DATETST      |   222 |  6216 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                  | DATETEST_IDX |   434 |       |     2   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("MYDT"<=TO_CHAR(SYSDATE@!+230,'MM/DD/RRRR'))
   2 - access("RES_VALUE"='PUR')

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)


25 rows selected.

SQL>

Oracle found 222 rows that 'matched' the conditions, illustrating the problem with using an incorrect datatype; Oracle can't know these strings represent dates, so it compares them character by character as strings, and the result set ends up far larger than it should be.
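
The string comparison is easy to demonstrate in isolation; a small sketch using two of the values from the output above ('07/05/2017' is roughly sysdate+230 at the time the test was run) shows character-by-character comparison declaring a 2019 date to be 'earlier' than a 2017 date:


SQL> select 1 matched from dual where '05/26/2019' <= '07/05/2017';

   MATCHED
----------
         1

SQL> select 1 matched from dual
  2  where to_date('05/26/2019','MM/DD/RRRR') <= to_date('07/05/2017','MM/DD/RRRR');

no rows selected

SQL>

The first comparison 'succeeds' only because '05' sorts before '07'; once both sides are real dates the row correctly disappears. We drop the table and start over: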


SQL>
SQL> drop table datetst purge;

Table dropped.

SQL>
SQL> create table datetst(
  2  myid    number,
  3  res_value varchar2(3),
  4  mydt    date);

Table created.

SQL>
SQL> create index datetest_idx on datetst(res_value);

Index created.

SQL>
SQL> begin
  2  	     for i in 1..10000 loop
  3  		     if mod(i,23) = 0 then
  4  		     insert into datetst
  5  		     values(i, 'PUR', sysdate+i);
  6  		     else
  7  		     insert into datetst
  8  		     values(i, 'BID', sysdate+i);
  9  		     end if;
 10  	     end loop;
 11
 12  	     commit;
 13  end;
 14  /

PL/SQL procedure successfully completed.

SQL>

We now run the original query (that didn’t have explicit date conversion, since we no longer need it) and examine the results:


SQL>
SQL> select *
  2  from datetst
  3  where res_value = 'PUR'
  4  and mydt <= sysdate +230;

      MYID RES MYDT
---------- --- ---------
        23 PUR 10-DEC-16
        46 PUR 02-JAN-17
        69 PUR 25-JAN-17
        92 PUR 17-FEB-17
       115 PUR 12-MAR-17
       138 PUR 04-APR-17
       161 PUR 27-APR-17
       184 PUR 20-MAY-17
       207 PUR 12-JUN-17
       230 PUR 05-JUL-17

10 rows selected.

SQL>
SQL> select * From table(dbms_xplan.display_cursor());

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  0m2c2mv7zhx49, child number 0
-------------------------------------
select * from datetst where res_value = 'PUR' and mydt <= sysdate +230

Plan hash value: 3255216368

----------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |              |       |       |     2 (100)|          |
|*  1 |  TABLE ACCESS BY INDEX ROWID BATCHED| DATETST      |    10 |   250 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                  | DATETEST_IDX |   434 |       |     2   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("MYDT"<=SYSDATE@!+230)
   2 - access("RES_VALUE"='PUR')

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)


24 rows selected.

SQL>

Oracle now found the 10 rows we sought, using the conditions we specified, because the data was correctly stored as a DATE. Using VARCHAR2 returned 222 rows instead of 10, more than twenty-two times the expected result set, and that was for a 10,000-row table. Let’s re-run the example with 1,000,000 rows and see what numbers Oracle produces:


SQL>
SQL> create table datetst(
  2  myid    number,
  3  res_value	varchar2(3),
  4  mydt    varchar2(20));

Table created.

SQL>
SQL> create index datetest_idx on datetst(res_value);

Index created.

SQL>
SQL> begin
  2  	     for i in 1..1000000 loop
  3  		     if mod(i,23) = 0 then
  4  		     insert into datetst
  5  		     values(i, 'PUR', to_char(sysdate+i, 'MM/DD/RRRR'));
  6  		     else
  7  		     insert into datetst
  8  		     values(i, 'BID', to_char(sysdate+i, 'MM/DD/RRRR'));
  9  		     end if;
 10  	     end loop;
 11
 12  	     commit;
 13  end;
 14  /

PL/SQL procedure successfully completed.

SQL>
SQL> select *
  2  from datetst
  3  where mydt <= sysdate +230;
where mydt <= sysdate +230
      *
ERROR at line 3:
ORA-01843: not a valid month
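
That ORA-01843 is the implicit conversion at work: when a VARCHAR2 is compared to a DATE, Oracle converts the string to a DATE using the session's NLS_DATE_FORMAT, and that mask doesn't match the MM/DD/RRRR strings we stored. Below is a minimal sketch of what the comparison is effectively attempting, assuming a common default format such as DD-MON-RR (your session settings may differ); '05/26/2019' is one of the stored values:


-- the implicit conversion Oracle attempts: with an NLS_DATE_FORMAT such as
-- DD-MON-RR the '26' cannot be parsed as a month name and ORA-01843 is raised
select to_date('05/26/2019') from dual;

-- supplying the mask the data was stored with parses cleanly
select to_date('05/26/2019', 'MM/DD/RRRR') from dual;

The next attempt adds the res_value filter and wraps sysdate + 230 in TO_CHAR(), keeping both sides of the date comparison as character strings; the error disappears, but the results are anything but correct:
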


SQL>
SQL> select *
  2  from datetst
  3  where res_value = 'PUR'
  4  and mydt <= to_char(sysdate +230, 'MM/DD/RRRR');

      MYID RES MYDT
---------- --- --------------------
       920 PUR 05/26/2019
       943 PUR 06/18/2019
      1150 PUR 01/11/2020
      1173 PUR 02/03/2020
      1196 PUR 02/26/2020
      1219 PUR 03/20/2020
      1242 PUR 04/12/2020
      1265 PUR 05/05/2020
      1288 PUR 05/28/2020
      1311 PUR 06/20/2020
        46 PUR 01/02/2017
        69 PUR 01/25/2017
        92 PUR 02/17/2017
       115 PUR 03/12/2017
       138 PUR 04/04/2017
       161 PUR 04/27/2017
       184 PUR 05/20/2017
       207 PUR 06/12/2017
       230 PUR 07/05/2017
       414 PUR 01/05/2018
       437 PUR 01/28/2018
       460 PUR 02/20/2018
       483 PUR 03/15/2018
       506 PUR 04/07/2018
       529 PUR 04/30/2018
       552 PUR 05/23/2018
       575 PUR 06/15/2018
       782 PUR 01/08/2019
       805 PUR 01/31/2019
       828 PUR 02/23/2019
       851 PUR 03/18/2019
       874 PUR 04/10/2019
       897 PUR 05/03/2019
      2622 PUR 01/22/2024
      2645 PUR 02/14/2024
      2668 PUR 03/08/2024
      2691 PUR 03/31/2024
      2714 PUR 04/23/2024
      3335 PUR 01/04/2026
      3358 PUR 01/27/2026
      3381 PUR 02/19/2026
      3404 PUR 03/14/2026
      3427 PUR 04/06/2026
      3450 PUR 04/29/2026
      3473 PUR 05/22/2026
      3496 PUR 06/14/2026
      3703 PUR 01/07/2027
      3726 PUR 01/30/2027
      3749 PUR 02/22/2027
      3772 PUR 03/17/2027
      3795 PUR 04/09/2027
      3818 PUR 05/02/2027
      3841 PUR 05/25/2027
      3864 PUR 06/17/2027
      2737 PUR 05/16/2024
      2760 PUR 06/08/2024
      2783 PUR 07/01/2024
      2967 PUR 01/01/2025
      2990 PUR 01/24/2025
      3013 PUR 02/16/2025
      3036 PUR 03/11/2025
      3059 PUR 04/03/2025
      3082 PUR 04/26/2025
      3105 PUR 05/19/2025
      3128 PUR 06/11/2025
      3151 PUR 07/04/2025
      1518 PUR 01/13/2021
      1541 PUR 02/05/2021
      1564 PUR 02/28/2021
      1587 PUR 03/23/2021
      1610 PUR 04/15/2021
      1633 PUR 05/08/2021
      1656 PUR 05/31/2021
      1679 PUR 06/23/2021
      1886 PUR 01/16/2022
      1909 PUR 02/08/2022
      1932 PUR 03/03/2022
...
    986309 PUR 04/21/4717
    986332 PUR 05/14/4717
    986355 PUR 06/06/4717
    986378 PUR 06/29/4717
    990955 PUR 01/09/4730
    990978 PUR 02/01/4730
    991001 PUR 02/24/4730
    991024 PUR 03/19/4730
    995394 PUR 03/06/4742
    995417 PUR 03/29/4742
    995440 PUR 04/21/4742
    995463 PUR 05/14/4742
    995486 PUR 06/06/4742
    995509 PUR 06/29/4742
    986585 PUR 01/22/4718
    986608 PUR 02/14/4718
    986631 PUR 03/09/4718
    986654 PUR 04/01/4718
    986677 PUR 04/24/4718
    991047 PUR 04/11/4730
    991070 PUR 05/04/4730
    991093 PUR 05/27/4730
    991116 PUR 06/19/4730
    991323 PUR 01/12/4731
    995716 PUR 01/22/4743
    995739 PUR 02/14/4743
    995762 PUR 03/09/4743
    995785 PUR 04/01/4743
    995808 PUR 04/24/4743
    995831 PUR 05/17/4743
    995854 PUR 06/09/4743
    995877 PUR 07/02/4743

22054 rows selected.

SQL>
SQL> select * From table(dbms_xplan.display_cursor());

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  6qatrtphp5wjt, child number 0
-------------------------------------
select * from datetst where res_value = 'PUR' and mydt <=
to_char(sysdate +230, 'MM/DD/RRRR')

Plan hash value: 3255216368

----------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |              |       |       |   158 (100)|          |
|*  1 |  TABLE ACCESS BY INDEX ROWID BATCHED| DATETST      | 25757 |   704K|   158   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                  | DATETEST_IDX | 49112 |       |   158   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("MYDT"<=TO_CHAR(SYSDATE@!+230,'MM/DD/RRRR'))
   2 - access("RES_VALUE"='PUR')

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)


25 rows selected.

SQL>
SQL> drop table datetst purge;

Table dropped.

SQL>

SQL> create table datetst(
  2  myid    number,
  3  res_value varchar2(3),
  4  mydt    date);

Table created.

SQL>
SQL> create index datetest_idx on datetst(res_value);

Index created.

SQL>
SQL> begin
  2  	     for i in 1..1000000 loop
  3  		     if mod(i,23) = 0 then
  4  		     insert into datetst
  5  		     values(i, 'PUR', sysdate+i);
  6  		     else
  7  		     insert into datetst
  8  		     values(i, 'BID', sysdate+i);
  9  		     end if;
 10  	     end loop;
 11
 12  	     commit;
 13  end;
 14  /

PL/SQL procedure successfully completed.

SQL>
SQL> select *
  2  from datetst
  3  where res_value = 'PUR'
  4  and mydt <= sysdate +230;

      MYID RES MYDT
---------- --- ---------
        23 PUR 10-DEC-16
        46 PUR 02-JAN-17
        69 PUR 25-JAN-17
        92 PUR 17-FEB-17
       115 PUR 12-MAR-17
       138 PUR 04-APR-17
       161 PUR 27-APR-17
       184 PUR 20-MAY-17
       207 PUR 12-JUN-17
       230 PUR 05-JUL-17

10 rows selected.

SQL>
SQL> select * From table(dbms_xplan.display_cursor());

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  0m2c2mv7zhx49, child number 0
-------------------------------------
select * from datetst where res_value = 'PUR' and mydt <= sysdate +230

Plan hash value: 3255216368

----------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |              |       |       |   148 (100)|          |
|*  1 |  TABLE ACCESS BY INDEX ROWID BATCHED| DATETST      |    35 |   875 |   148   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                  | DATETEST_IDX | 46482 |       |   148   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("MYDT"<=SYSDATE@!+230)
   2 - access("RES_VALUE"='PUR')

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)


24 rows selected.

SQL>

With 1,000,000 rows of data Oracle inflated the original 10-row result set to 22,054 rows, a whopping 220,440 percent increase. As data volumes grow the result set will grow with them, producing ever longer query times and vast numbers of incorrect results, something I doubt the original poster had counted on.
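
To quantify the damage directly you could run a count like the one below against the VARCHAR2 version of the table (before it is dropped); it is only a sketch, and it relies on the fact that every stored value uses the MM/DD/RRRR mask:


-- rows the string filter accepts that a genuine date comparison would reject
select count(*) as false_matches
from   datetst
where  res_value = 'PUR'
and    mydt <= to_char(sysdate + 230, 'MM/DD/RRRR')
and    to_date(mydt, 'MM/DD/RRRR') > sysdate + 230;

Notice also that the legitimate December match (MYID 23, stored as '12/10/2016') cannot appear in the 22,054 rows above at all, since '12' sorts after '07': the wrong datatype produces false negatives as well as false positives.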

Using the correct datatype is essential to ensuring Oracle can do its job properly and return reliable result sets. Storing data in a format that doesn't reflect its actual datatype can be disastrous, as illustrated here. When writing or purchasing applications, it pays to verify that the proper datatypes are in force for the columns being used.
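
If you inherit a schema and want a quick, admittedly rough, way to spot columns worth reviewing, a data dictionary query along these lines can help; it assumes date-like columns follow a recognizable naming convention, so treat it as a starting point rather than an audit:


-- character columns whose names suggest they may really hold dates
select table_name, column_name, data_type
from   user_tab_columns
where  data_type = 'VARCHAR2'
and    (column_name like '%DATE%' or column_name like '%DT%');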

It only makes sense.
