Oracle Tips and Tricks — David Fitzjarrell

April 28, 2017

Adjusting Things

Filed under: General — dfitzjarrell @ 10:53

"The only thing you can do easily is be wrong, and that's hardly worth the effort." 
-- Norton Juster, The Phantom Tollbooth

Query tuning is both an art and a science and, because of this, usually proceeds on a case-by-case basis. There may be occasions, though, where a series of queries, similar in structure but differing in predicate values, needs to be tuned. Rather than working through each and every query and setting up a ‘standard’ SQL Profile (where force_match=FALSE) for each one, it may be easier to alter the setting of optimizer_index_cost_adj (presuming indexes are in use) so that index scans/index access paths look more ‘favorable’ than a table scan. On the other hand, it might be worth the effort to create a single profile with force_match=TRUE, covering all queries that share the same SQL text apart from the literal values supplied. Let’s look at examples of why these might be good plans of attack (and a case where one wouldn’t be).

The optimizer, in its infinite wisdom and using current statistics, computes the cost of access for every table and associated index touched by the problem query. Sometimes the index cost comes out slightly greater than that of a table scan and, as a result, the index path gets scrapped in favor of the table scan. In such cases nudging the optimizer in the ‘proper’ direction is as simple as changing the value of the optimizer_index_cost_adj parameter, which defaults to 100. The value should be chosen carefully, so that the queries you want affected are affected and most others are not. In our example let’s look at a very small portion of the level 2 10053 trace; the names were changed to protect the ‘innocent’:


 ****** Costing Index PLORGENFLOTZ_PK
  SPD: Return code in qosdDSDirSetup: NODIR, estType = INDEX_SCAN
  SPD: Return code in qosdDSDirSetup: NODIR, estType = INDEX_FILTER
  Access Path: index (RangeScan)
    Index: PLORGENFLOTZ_PK
    resc_io: 3.000000  resc_cpu: 342602
    ix_sel: 0.954069  ix_sel_with_filters: 0.954069
    Cost: 3.014879  Resp: 3.014879  Degree: 1
...
  Best:: AccessPath: TableScan
         Cost: 2.006465  Degree: 1  Resp: 2.006465  Card: 228.000000  Bytes: 0.000000

Notice the cost of the index access is just slightly higher than the cost of a full table scan so the optimizer passes up that option and chooses the table scan. This is where optimizer_index_cost_adj can change things. If, for example, we set optimizer_index_cost_adj to 50 the cost of the index access will go down:


 ****** Costing Index PLORGENFLOTZ_PK
  SPD: Return code in qosdDSDirSetup: NODIR, estType = INDEX_SCAN
  SPD: Return code in qosdDSDirSetup: NODIR, estType = INDEX_FILTER
  Access Path: index (IndexOnly)
    Index: PLORGENFLOTZ_PK
    resc_io: 1.000000  resc_cpu: 63786
    ix_sel: 0.954069  ix_sel_with_filters: 0.954069
    Cost: 1.001385  Resp: 1.001385  Degree: 0
    SORT ressource         Sort statistics
      Sort width:        5989 Area size:     1048576 Max Area size:  1046896640
      Degree:               1
      Blocks to Sort: 1 Row size:     21 Total Rows:            243
      Initial runs:   1 Merge passes:  0 IO Cost / pass:          0
      Total IO sort cost: 0.000000      Total CPU sort cost: 23112595
      Total Temp space used: 0
...
  Best:: AccessPath: IndexRange
  Index: PLORGENFLOTZ_PK
         Cost: 1.507553  Degree: 1  Resp: 1.507553  Card: 34.346487  Bytes: 0.000000

The calculated cost of using this index has been cut in half (which should be expected when setting optimizer_index_cost_adj to 50) so now the optimizer elects to take the index range scan as the best possible path. Notice that the optimizer_index_cost_adj isn’t applied until the actual cost has been calculated; the total cost is adjusted by the percentage provided in the optimizer_index_cost_adj setting as the final step. Looking at the final execution plan we see the following steps:


...
| 44  |                   TABLE ACCESS BY INDEX ROWID BATCHED         | PLORGENFLOTZ_TBL              |    34 |  1326 |     2 |  00:00:01 |      |      |     |        |       |
| 45  |                    INDEX RANGE SCAN                           | PLORGENFLOTZ_PK               |   243 |       |     1 |  00:00:01 |      |      |     |        |       |
...

which replaced this step in the plan where optimizer_index_cost_adj was unmodified:


...
| 111 |                        TABLE ACCESS FULL                           | PLORGENFLOTZ_TBL            |    32 |  1248 |     2 |  00:00:01 |      |      |           |       |
...

Other path steps were changed in addition to those listed here and the overall execution plan was shortened, as evidenced by the step numbers from the included plan excerpts.
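The adjustment itself is simple percentage scaling applied as the final step; as a sketch of the arithmetic only (not official optimizer internals), using the costs from the trace excerpts above:

```sql
--
-- optimizer_index_cost_adj scales the computed index access cost
-- by a percentage; at the default (100) the 3.014879 index path
-- loses to the 2.006465 table scan, at 50 it wins
--
select 3.014879 * 100/100 cost_at_default,
       3.014879 *  50/100 cost_at_50
from dual;
```

The halved figure (roughly 1.507) matches the adjusted index cost reported in the second trace excerpt.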

Careful planning and testing needs to be done before settling on a value for optimizer_index_cost_adj as it will affect all index access calculations and could change acceptable plans using table scans to less-than-desirable plans forcing index access. The value of 50 used here was chosen after several runs using smaller and smaller settings until the desired plans were obtained. Being aggressive isn’t necessarily best when setting optimizer_index_cost_adj as extremely small settings, such as 20 or lower, may make some queries run very fast and make some others very slow (because index access isn’t always the best path to choose). Never make such changes on a production system without first investigating the effects in your test environment. The user community does not like unpleasant surprises.
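One low-risk way to do that investigating is to change the parameter at the session level first; a sketch:

```sql
--
-- Affects only the current session; other sessions keep the
-- instance default until an explicit ALTER SYSTEM is issued
--
alter session set optimizer_index_cost_adj = 50;

-- ... run the problem queries here and check the plans ...

--
-- Restore the default for this session when finished
--
alter session set optimizer_index_cost_adj = 100;
```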

Yet another way to tune a set of queries that differ only in literal values is by using a SQL Profile with force_match set to TRUE. This works by replacing the literal values with system-generated bind variables before the signature is generated; any similar query with different literal values will be associated with the same profile as the original query and thus will use the same execution plan. This presumes that ONLY literal values are in the source query statement; any additional bind variables present will generate a new signature that won’t match the signature associated with the profile and the known ‘good’ plan won’t be selected.
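The literal-normalizing behavior can be observed directly with DBMS_SQLTUNE.SQLTEXT_TO_SIGNATURE, which accepts a force_match flag; a sketch (statement text borrowed from the example that follows):

```sql
--
-- With force_match => true the literals are replaced before the
-- signature is computed, so these two statements hash identically;
-- with the default (false) they produce different signatures
--
declare
        sig1    number;
        sig2    number;
begin
        sig1 := dbms_sqltune.sqltext_to_signature(
                  'select * from plan_test where class = 1', force_match => true);
        sig2 := dbms_sqltune.sqltext_to_signature(
                  'select * from plan_test where class = 107', force_match => true);
        dbms_output.put_line(case when sig1 = sig2
                                  then 'Signatures match'
                                  else 'No match' end);
end;
/
```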

Let’s look at an example of that in action:


SQL>
SQL>--
SQL>-- Create table
SQL>--
SQL>create table plan_test(
  2  id      number,
  3  class   number,
  4  data    varchar2(45),
  5  cr_dt   date);

Table created.

SQL>
SQL>--
SQL>-- Load table
SQL>--
SQL>begin
  2  	     for i in 1..500000 loop
  3  		     insert into plan_test
  4  		     values(i, mod(i,337)+1, 'Value '||i, sysdate+mod(i,337));
  5  	     end loop;
  6
  7  	     for i in 500001..1000000 loop
  8  		     if mod(i,2)=0 then
  9  			     insert into plan_test
 10  			     values(3999, 3999, 'Value '||i, sysdate+mod(i,37));
 11  		     else
 12  			     insert into plan_test
 13  			     values(7734, 1234, 'Value '||i, sysdate+mod(i,37));
 14  		     end if;
 15  	     end loop;
 16
 17  	     commit;
 18  end;
 19  /

PL/SQL procedure successfully completed.

SQL>
SQL>--
SQL>-- Add an index
SQL>--
SQL>create index plan_test_idx on plan_test(class);

Index created.

SQL>
SQL>--
SQL>-- Compute stats and histograms
SQL>--
SQL>exec dbms_stats.gather_table_stats(user, 'PLAN_TEST', method_opt=>'for all columns size skewonly', cascade=>true);

PL/SQL procedure successfully completed.

SQL>
SQL>--
SQL>-- Run a query to get an index-access plan
SQL>--
SQL>select *
  2  from plan_test
  3  where class = 1;

        ID      CLASS DATA                                          CR_DT
---------- ---------- --------------------------------------------- ---------
       337          1 Value 337                                     28-APR-17
       674          1 Value 674                                     28-APR-17
      1011          1 Value 1011                                    28-APR-17
...
    483932          1 Value 483932                                  28-APR-17
    487302          1 Value 487302                                  28-APR-17
    477529          1 Value 477529                                  28-APR-17
    480899          1 Value 480899                                  28-APR-17
    484269          1 Value 484269                                  28-APR-17
    487639          1 Value 487639                                  28-APR-17

1483 rows selected.

SQL>
SQL>--
SQL>-- Display the plan
SQL>--
SQL>select * from table(dbms_xplan.display_cursor(null,null,'ALL'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  19vnyya8kzzsw, child number 0
-------------------------------------
select * from plan_test where class = 1

Plan hash value: 2494389488

-----------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name          | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |               |       |       |  1136 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| PLAN_TEST     |  2236 | 64844 |  1136   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                  | PLAN_TEST_IDX |  2236 |       |     7   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

   1 - SEL$1 / PLAN_TEST@SEL$1
   2 - SEL$1 / PLAN_TEST@SEL$1

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("CLASS"=1)

Column Projection Information (identified by operation id):
-----------------------------------------------------------

   1 - "PLAN_TEST"."ID"[NUMBER,22], "CLASS"[NUMBER,22], "PLAN_TEST"."DATA"[VARCHAR2,45],
       "PLAN_TEST"."CR_DT"[DATE,7]
   2 - "PLAN_TEST".ROWID[ROWID,10], "CLASS"[NUMBER,22]


32 rows selected.

SQL>
SQL>--
SQL>-- Run a query to get a full scan plan
SQL>--
SQL>select *
  2  from plan_test
  3  where class = 3999;

        ID      CLASS DATA                                          CR_DT
---------- ---------- --------------------------------------------- ---------
      3999       3999 Value 500682                                  02-JUN-17
      3999       3999 Value 500684                                  28-APR-17
      3999       3999 Value 500686                                  30-APR-17
...
      3999       3999 Value 997392                                  18-MAY-17
      3999       3999 Value 997394                                  20-MAY-17
      3999       3999 Value 997396                                  22-MAY-17

250000 rows selected.

SQL>
SQL>--
SQL>-- Display the plan
SQL>--
SQL>select * from table(dbms_xplan.display_cursor(null,null,'ALL'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  g41z4n4rnvbqc, child number 0
-------------------------------------
select * from plan_test where class = 3999

Plan hash value: 534695957

-------------------------------------------------------------------------------
| Id  | Operation         | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |           |       |       |  1304 (100)|          |
|*  1 |  TABLE ACCESS FULL| PLAN_TEST |   244K|  6916K|  1304   (1)| 00:00:01 |
-------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

   1 - SEL$1 / PLAN_TEST@SEL$1

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("CLASS"=3999)

Column Projection Information (identified by operation id):
-----------------------------------------------------------

   1 - "PLAN_TEST"."ID"[NUMBER,22], "CLASS"[NUMBER,22],
       "PLAN_TEST"."DATA"[VARCHAR2,45], "PLAN_TEST"."CR_DT"[DATE,7]


29 rows selected.

SQL>

SQL>--
SQL>-- Create script to create profile
SQL>--
SQL>-- Profile uses force_match=TRUE
SQL>--
SQL>@coe_xfr_sql_profile 19vnyya8kzzsw 2494389488
SQL>--
SQL>-- Create the profile
SQL>--
SQL>@coe_xfr_sql_profile_19vnyya8kzzsw_2494389488
SQL>--
SQL>-- Test the profile
SQL>--
SQL>select *
  2  from plan_test
  3  where class = 1;

        ID      CLASS DATA                                          CR_DT
---------- ---------- --------------------------------------------- ---------
       337          1 Value 337                                     28-APR-17
       674          1 Value 674                                     28-APR-17
      1011          1 Value 1011                                    28-APR-17
...
    483932          1 Value 483932                                  28-APR-17
    487302          1 Value 487302                                  28-APR-17
    477529          1 Value 477529                                  28-APR-17
    480899          1 Value 480899                                  28-APR-17
    484269          1 Value 484269                                  28-APR-17
    487639          1 Value 487639                                  28-APR-17

1483 rows selected.

SQL>
SQL>select * from table(dbms_xplan.display_cursor(null,null,'ALL'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  19vnyya8kzzsw, child number 0
-------------------------------------
select * from plan_test where class = 1

Plan hash value: 2494389488

-----------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name          | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |               |       |       |  1136 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| PLAN_TEST     |  2236 | 64844 |  1136   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                  | PLAN_TEST_IDX |  2236 |       |     7   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

   1 - SEL$1 / PLAN_TEST@SEL$1
   2 - SEL$1 / PLAN_TEST@SEL$1

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("CLASS"=1)

Column Projection Information (identified by operation id):
-----------------------------------------------------------

   1 - "PLAN_TEST"."ID"[NUMBER,22], "CLASS"[NUMBER,22], "PLAN_TEST"."DATA"[VARCHAR2,45],
       "PLAN_TEST"."CR_DT"[DATE,7]
   2 - "PLAN_TEST".ROWID[ROWID,10], "CLASS"[NUMBER,22]

Note
-----
   - SQL profile coe_19vnyya8kzzsw_2494389488 used for this statement


36 rows selected.

SQL>
SQL>select *
  2  from plan_test
  3  where class = 107;

        ID      CLASS DATA                                          CR_DT
---------- ---------- --------------------------------------------- ---------
       443        107 Value 443                                     12-AUG-17
       780        107 Value 780                                     12-AUG-17
      1117        107 Value 1117                                    12-AUG-17
...
    487071        107 Value 487071                                  12-AUG-17
    477298        107 Value 477298                                  12-AUG-17
    480668        107 Value 480668                                  12-AUG-17
    484038        107 Value 484038                                  12-AUG-17
    487408        107 Value 487408                                  12-AUG-17
    477635        107 Value 477635                                  12-AUG-17
    481005        107 Value 481005                                  12-AUG-17

1484 rows selected.

SQL>
SQL>select * from table(dbms_xplan.display_cursor(null,null,'ALL'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  93zcxckzy9g3f, child number 0
-------------------------------------
select * from plan_test where class = 107

Plan hash value: 2494389488

-----------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name          | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |               |       |       |  1136 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| PLAN_TEST     |  2236 | 64844 |  1136   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                  | PLAN_TEST_IDX |  2236 |       |     7   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

   1 - SEL$1 / PLAN_TEST@SEL$1
   2 - SEL$1 / PLAN_TEST@SEL$1

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("CLASS"=107)

Column Projection Information (identified by operation id):
-----------------------------------------------------------

   1 - "PLAN_TEST"."ID"[NUMBER,22], "CLASS"[NUMBER,22], "PLAN_TEST"."DATA"[VARCHAR2,45],
       "PLAN_TEST"."CR_DT"[DATE,7]
   2 - "PLAN_TEST".ROWID[ROWID,10], "CLASS"[NUMBER,22]

Note
-----
   - SQL profile coe_19vnyya8kzzsw_2494389488 used for this statement


36 rows selected.

SQL>
SQL>select *
  2  from plan_test
  3  where class = 391;

no rows selected

SQL>
SQL>select * from table(dbms_xplan.display_cursor(null,null,'ALL'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  7n1ab3tyk1f33, child number 0
-------------------------------------
select * from plan_test where class = 391

Plan hash value: 2494389488

-----------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name          | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |               |       |       |  1136 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| PLAN_TEST     |  2236 | 64844 |  1136   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                  | PLAN_TEST_IDX |  2236 |       |     7   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

   1 - SEL$1 / PLAN_TEST@SEL$1
   2 - SEL$1 / PLAN_TEST@SEL$1

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("CLASS"=391)

Column Projection Information (identified by operation id):
-----------------------------------------------------------

   1 - "PLAN_TEST"."ID"[NUMBER,22], "CLASS"[NUMBER,22], "PLAN_TEST"."DATA"[VARCHAR2,45],
       "PLAN_TEST"."CR_DT"[DATE,7]
   2 - "PLAN_TEST".ROWID[ROWID,10], "CLASS"[NUMBER,22]

Note
-----
   - SQL profile coe_19vnyya8kzzsw_2494389488 used for this statement


36 rows selected.

SQL>
SQL>select *
  2  from plan_test
  3  where class = 1044;

no rows selected

SQL>
SQL>select * from table(dbms_xplan.display_cursor(null,null,'ALL'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  52r213wp9sr9a, child number 0
-------------------------------------
select * from plan_test where class = 1044

Plan hash value: 2494389488

-----------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name          | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |               |       |       |  1136 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| PLAN_TEST     |  2236 | 64844 |  1136   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                  | PLAN_TEST_IDX |  2236 |       |     7   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

   1 - SEL$1 / PLAN_TEST@SEL$1
   2 - SEL$1 / PLAN_TEST@SEL$1

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("CLASS"=1044)

Column Projection Information (identified by operation id):
-----------------------------------------------------------

   1 - "PLAN_TEST"."ID"[NUMBER,22], "CLASS"[NUMBER,22], "PLAN_TEST"."DATA"[VARCHAR2,45],
       "PLAN_TEST"."CR_DT"[DATE,7]
   2 - "PLAN_TEST".ROWID[ROWID,10], "CLASS"[NUMBER,22]

Note
-----
   - SQL profile coe_19vnyya8kzzsw_2494389488 used for this statement


36 rows selected.

SQL>

All of the queries shown above return roughly the same number of rows (or none at all), and all used the created profile, which is good. What isn’t so good is that the next query, returning 250000 rows, also uses the profile:


SQL>
SQL>--
SQL>-- This one probably shouldn't use the profile but it does
SQL>--
SQL>-- Result of force_match=TRUE
SQL>--
SQL>select *
  2  from plan_test
  3  where class = 3999;

        ID      CLASS DATA                                          CR_DT
---------- ---------- --------------------------------------------- ---------
      3999       3999 Value 500682                                  02-JUN-17
      3999       3999 Value 500684                                  28-APR-17
      3999       3999 Value 997388                                  14-MAY-17
...
      3999       3999 Value 997392                                  18-MAY-17
      3999       3999 Value 997394                                  20-MAY-17
      3999       3999 Value 997396                                  22-MAY-17

250000 rows selected.

SQL>
SQL>select * from table(dbms_xplan.display_cursor(null,null,'ALL'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  g41z4n4rnvbqc, child number 0
-------------------------------------
select * from plan_test where class = 3999

Plan hash value: 2494389488

-----------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name          | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |               |       |       |   123K(100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| PLAN_TEST     |   244K|  6916K|   123K  (1)| 00:00:05 |
|*  2 |   INDEX RANGE SCAN                  | PLAN_TEST_IDX |   244K|       |   510   (1)| 00:00:01 |
-----------------------------------------------------------------------------------------------------


Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

   1 - SEL$1 / PLAN_TEST@SEL$1
   2 - SEL$1 / PLAN_TEST@SEL$1

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("CLASS"=3999)

Column Projection Information (identified by operation id):
-----------------------------------------------------------

   1 - "PLAN_TEST"."ID"[NUMBER,22], "CLASS"[NUMBER,22], "PLAN_TEST"."DATA"[VARCHAR2,45],
       "PLAN_TEST"."CR_DT"[DATE,7]
   2 - "PLAN_TEST".ROWID[ROWID,10], "CLASS"[NUMBER,22]

Note
-----
   - SQL profile coe_19vnyya8kzzsw_2494389488 used for this statement


36 rows selected.

SQL>

Compare the cost of the index plan (123K) and the full table scan plan (1304) and you can see using the SQL Profile when returning a quarter of the table data is not the preferred path to take. Fixing the majority of the queries can ‘fix’ queries that don’t need fixing, and that’s the major issue with tuning with a broad brush.
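When a force-matched profile ends up hurting an outlier like this, it can be disabled or dropped while a better fix is found; a sketch using the profile name from this example:

```sql
--
-- Disable the profile (it can be re-enabled later by setting
-- STATUS back to ENABLED)
--
begin
        dbms_sqltune.alter_sql_profile(
          name           => 'coe_19vnyya8kzzsw_2494389488',
          attribute_name => 'STATUS',
          value          => 'DISABLED');
end;
/

--
-- Or drop it outright
--
begin
        dbms_sqltune.drop_sql_profile(name => 'coe_19vnyya8kzzsw_2494389488');
end;
/
```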

It’s usually best to tune queries individually, but sometimes applications generate a whole set of queries that need attention. Judicious setting of optimizer_index_cost_adj can answer such a tuning task, as can creating a SQL Profile with force_match=TRUE. Remember that changes made at the database level can affect more than you bargained for, so test, test, test to verify minimal impact outside of the set of queries you are targeting. And with force_match=TRUE a SQL Profile may inadvertently lock in an inefficient index-access plan for a query where a table scan is preferable, as the example above shows.

Sometimes the “easy button” can be too easy to press.

April 7, 2017

You Bet Your ASCII

Filed under: General — dfitzjarrell @ 10:58

"Why, did you know that if a beaver two feet long with a tail a foot and a half long can build a dam twelve 
feet high and six feet wide in two days, all you would need to build Boulder Dam is a beaver sixty-eight
feet long with a fifty-one-foot tail?"
"Where would you find a beaver that big?" grumbled the Humbug as his pencil point snapped.
"I'm sure I don't know," he replied, "but if you did, you'd certainly know what to do with him."
-- Norton Juster, The Phantom Tollbooth 

International character sets, such as AL32UTF8, can solve a host of problems when non-ASCII characters need to be stored in the database. Unfortunately, they can create problems of their own when those characters must be converted to ASCII-compatible text with Oracle’s built-in ASCIISTR() function. Let’s look at an example and see what might occur.

Two databases exist, one 11.2.0.4, the other 12.1.0.2, and both use the AL32UTF8 character set. Let’s create a table in both databases and load the CLOB column with non-ASCII characters (characters that will print on the screen but will be processed by the ASCIISTR() function):


SQL> create table yumplerzle(
  2  smarg   number,
  3  weebogaz	     clob);

Table created.

SQL> 
SQL> begin
  2  	     for i in 1..1000 loop
  3  		     insert into yumplerzle
  4  		     values(i, rpad(i, 8000, chr(247)));
  5  	     end loop;
  6  
  7  	     commit;
  8  end;
  9  /

PL/SQL procedure successfully completed.

SQL> 

Query the table absent the ASCIISTR() function to see what character we’ve chosen:


...
SUBSTR(WEEBOGAZ,1,4000)
--------------------------------------------------------------------------------
991ÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷
992ÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷
993ÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷
994ÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷
995ÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷
996ÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷
997ÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷
998ÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷
999ÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷
1000ÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷ÎáÎáÎ÷

1000 rows selected.

Interesting data, to be sure. Now let’s try to use the ASCIISTR() function on the output:


SQL> select asciistr(substr(weebogaz,1,4000)) from yumplerzle
  2  /
select asciistr(substr(weebogaz,1,4000)) from yumplerzle
                *
ERROR at line 1:
ORA-64203: Destination buffer too small to hold CLOB data after character set 
conversion. 


SQL> 

Unfortunately the character ‘conversion’ replaces each non-ASCII character with its hex code, and that can expand the line length considerably. Since this is 11.2.0.4 the length limit for VARCHAR2 values is 4000 (characters or bytes, depending on how the database or table column is configured). Given that restriction, ASCIISTR() cannot be used here on any substring longer than 1000 characters, as shown below:


SQL> select asciistr(substr(weebogaz,1,32767)) from yumplerzle
  2  /
select asciistr(substr(weebogaz,1,32767)) from yumplerzle
                *
ERROR at line 1:
ORA-64203: Destination buffer too small to hold CLOB data after character set 
conversion. 


SQL> 
SQL> pause

SQL> 
SQL> select asciistr(substr(weebogaz,1,16000)) from yumplerzle
  2  /
select asciistr(substr(weebogaz,1,16000)) from yumplerzle
                *
ERROR at line 1:
ORA-64203: Destination buffer too small to hold CLOB data after character set 
conversion. 


SQL> 
SQL> pause

SQL> 
SQL> select asciistr(substr(weebogaz,1,4000)) from yumplerzle
  2  /
select asciistr(substr(weebogaz,1,4000)) from yumplerzle
                *
ERROR at line 1:
ORA-64203: Destination buffer too small to hold CLOB data after character set 
conversion. 


SQL> 
SQL> pause

SQL> 
SQL> select asciistr(substr(weebogaz,1,3000)) from yumplerzle
  2  /
select asciistr(substr(weebogaz,1,3000)) from yumplerzle
                *
ERROR at line 1:
ORA-64203: Destination buffer too small to hold CLOB data after character set 
conversion. 


SQL> 
SQL> pause

SQL> 
SQL> select asciistr(substr(weebogaz,1,2000)) from yumplerzle
  2  /
select asciistr(substr(weebogaz,1,2000)) from yumplerzle
                *
ERROR at line 1:
ORA-64203: Destination buffer too small to hold CLOB data after character set 
conversion. 


SQL> 
SQL> pause

SQL> 
SQL> select asciistr(substr(weebogaz,1,1000)) from yumplerzle
  2  /

ASCIISTR(SUBSTR(WEEBOGAZ,1,1000))                                               
--------------------------------------------------------------------------------
1\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF

ASCIISTR(SUBSTR(WEEBOGAZ,1,1000))                                               
--------------------------------------------------------------------------------
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
...
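Before looking at the fix it helps to quantify the expansion. Each non-ASCII character becomes a backslash followed by four hex digits, so a single character occupies five characters in the output. A quick sketch (using UNISTR() to build a test character so the result doesn’t depend on client encoding):

```sql
SQL> select asciistr(unistr('\00E9')) esc,
  2         length(asciistr(unistr('\00E9'))) len
  3  from dual;

ESC          LEN
----- ----------
\00E9          5
```

At five characters per escape, a run of non-ASCII data expands quickly, which is why the 4000-byte VARCHAR2 buffer overflows long before the source line itself reaches 4000 characters.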

It’s apparent from the ASCIISTR() output how much the line length has expanded. On any release older than 12.1 you’re stuck with this restriction. Thankfully Oracle 12.1 and later offer extended string lengths (up to 32767 bytes for VARCHAR2, NVARCHAR2 and RAW columns), configured with the max_string_size parameter. Setting this to EXTENDED and running the utl32k.sql script in $ORACLE_HOME/rdbms/admin (on UNIX and Linux systems, %ORACLE_HOME%\rdbms\admin on Windows) can fix this error; the change requires shutting down the database and restarting it in UPGRADE mode. The exact steps are shown below:


SQL>
SQL> alter system set max_string_size = EXTENDED scope=spfile;
SQL> shutdown immediate
...
SQL> startup upgrade
...
SQL> @?/rdbms/admin/utl32k.sql
...
SQL> shutdown immediate
...
SQL> startup
...
SQL>
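Once the instance is back up it’s worth verifying the parameter actually took; a quick check from SQL*Plus (column widths trimmed for display):

```sql
SQL> show parameter max_string_size

NAME                   TYPE        VALUE
---------------------- ----------- ---------
max_string_size        string      EXTENDED
```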

The script makes the necessary data dictionary changes so Oracle can use the expanded string length, which in turn increases the string buffers available to functions like ASCIISTR(). Moving over to a 12.1.0.2 database where this modification has been completed, the error experienced in 11.2.0.4 is gone:


SQL> select asciistr(substr(weebogaz,1,32767)) from yumplerzle
  2  /

ASCIISTR(SUBSTR(WEEBOGAZ,1,32767))                                               
--------------------------------------------------------------------------------
1\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF

ASCIISTR(SUBSTR(WEEBOGAZ,1,32767))                                               
--------------------------------------------------------------------------------
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
...
SQL> 
SQL> pause

SQL> 
SQL> select asciistr(substr(weebogaz,1,16000)) from yumplerzle
  2  /

ASCIISTR(SUBSTR(WEEBOGAZ,1,16000))                                               
--------------------------------------------------------------------------------
1\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF

ASCIISTR(SUBSTR(WEEBOGAZ,1,16000))                                               
--------------------------------------------------------------------------------
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
...
SQL> 
SQL> pause

SQL> 
SQL> select asciistr(substr(weebogaz,1,4000)) from yumplerzle
  2  /

ASCIISTR(SUBSTR(WEEBOGAZ,1,4000))                                               
--------------------------------------------------------------------------------
1\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF

ASCIISTR(SUBSTR(WEEBOGAZ,1,4000))                                               
--------------------------------------------------------------------------------
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
...
SQL> 
SQL> pause

SQL> 
SQL> select asciistr(substr(weebogaz,1,3000)) from yumplerzle
  2  /

ASCIISTR(SUBSTR(WEEBOGAZ,1,3000))                                               
--------------------------------------------------------------------------------
1\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF

ASCIISTR(SUBSTR(WEEBOGAZ,1,3000))                                               
--------------------------------------------------------------------------------
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
...
SQL> 
SQL> pause

SQL> 
SQL> select asciistr(substr(weebogaz,1,2000)) from yumplerzle
  2  /

ASCIISTR(SUBSTR(WEEBOGAZ,1,2000))                                               
--------------------------------------------------------------------------------
1\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF

ASCIISTR(SUBSTR(WEEBOGAZ,1,2000))                                               
--------------------------------------------------------------------------------
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
...
SQL> 
SQL> pause

SQL> 
SQL> select asciistr(substr(weebogaz,1,1000)) from yumplerzle
  2  /

ASCIISTR(SUBSTR(WEEBOGAZ,1,1000))                                               
--------------------------------------------------------------------------------
1\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF

ASCIISTR(SUBSTR(WEEBOGAZ,1,1000))                                               
--------------------------------------------------------------------------------
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
D\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFF
...

No “buffer too small” errors were thrown with the extended string length configured in 12.1.0.2, even when passing a substring length of 32767. Longer substrings, and eliminating the substr() call entirely, also pose no problems:


SQL> 
SQL> select asciistr(substr(weebogaz,1,64000)) from yumplerzle
  2  where rownum < 2
  3  /

ASCIISTR(SUBSTR(WEEBOGAZ,1,64000))                                              
--------------------------------------------------------------------------------
305\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
...
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD                                     
                                                                                

SQL> 
SQL> select asciistr(substr(weebogaz,1,128000)) from yumplerzle
  2  where rownum < 2
  3  /

ASCIISTR(SUBSTR(WEEBOGAZ,1,128000))                                             
--------------------------------------------------------------------------------
305\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
...
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD                                     
                                                                                

SQL> 
SQL> select asciistr(substr(weebogaz,1,256000)) from yumplerzle
  2  where rownum < 2
  3  /

ASCIISTR(SUBSTR(WEEBOGAZ,1,256000))                                             
--------------------------------------------------------------------------------
305\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
...
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD                                     
                                                                                

SQL> 
SQL> select asciistr(weebogaz) from yumplerzle
  2  where rownum < 2
  3  /

ASCIISTR(WEEBOGAZ)                                                              
--------------------------------------------------------------------------------
305\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
...
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\F
FFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD\FFFD                                     
                                                                                

SQL>  

When single-byte character sets such as WE8MSWIN1252 or US7ASCII are in use these issues aren’t present, since the data is converted to something ASCII can handle during the insert; only multi-byte (Unicode) character sets seem to produce this error on conversion. That’s worth remembering: if your database is NOT using a UTF8/UTF16 character set such problems won’t occur, and there will be no need to increase max_string_size unless, of course, you want the extended length to store longer pieces of text.
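Determining which character set a database uses is a simple query against NLS_DATABASE_PARAMETERS (the value shown here is just an example):

```sql
SQL> select value
  2  from nls_database_parameters
  3  where parameter = 'NLS_CHARACTERSET';

VALUE
----------------------------------------
AL32UTF8
```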

Notice that the character set of the database was NOT changed during this process; only the maximum declarable length of a VARCHAR2/NVARCHAR2 column was affected. Also remember that this is a one-way operation: once max_string_size is set to EXTENDED it cannot be changed back to STANDARD, so make the change in a test environment first to ensure it doesn’t adversely affect existing applications and code. I have not experienced any issues of that sort, but mine isn’t the only database in the world and there could be exceptions in other environments. Only after you are reasonably certain this change doesn’t break anything should you move it into production.

Fill ‘er up.

April 5, 2017

GrACE Period

Filed under: General — dfitzjarrell @ 12:27

“You can swim all day in the Sea of Knowledge and not get wet.” 
― Norton Juster, The Phantom Tollbooth

The time is fast approaching when it will be three years since I was graced with the status of Oracle ACE, and I’ve enjoyed every minute of it. I can’t speak highly enough of the program and its steadily growing list of members. But don’t think for a minute that the work stops once someone becomes an Oracle ACE or Oracle ACE Director; nothing could be further from the truth.

Sharing my knowledge got me here and that hasn’t stopped or slowed down. I still blog, still write two articles each month for http://www.databasejournal.com and still contribute to the Oracle technical forums, and I wouldn’t change a thing. I’ve said it before: sharing what you know may not seem important to you at the time, but others whom you may never know or see will find it useful and will be grateful that you took the time and effort to pass it on. It’s not about the laurels and praise; it’s about contributing knowledge to the Oracle community in order to help others.

Allow me to quote from my original post on being an Oracle ACE:


Being an Oracle ACE is an honor, but it's also a responsibility. What got me
here was writing and participating in discussion groups, and that won't change.
Knowledge is to be shared, not hoarded. What you know does no 
one else any good if you don't share that knowledge and experience. If Da Vinci 
had kept his notes to himself, if Newton hadn't published his Laws of Motion, if 
Copernicus had kept quiet our world might have been quite different. It's because 
these people had the foresight to look deeper into our world and then tell us 
what they found that puts us where we are today. It's only right that we, as 
beneficiaries of the knowledge others before us shared, share our knowledge no 
matter how unimportant it may seem. Someone, somewhere, will find it useful and 
will be grateful that we did.

That still holds true almost three years later; I keep that thought in mind every time I post to a forum, write an article or create a blog post because I do those things to add to the knowledge base provided by members of the Oracle community. And even though others may post more often it doesn’t mean my contributions are diminished in any way, since I (hopefully) have a unique voice and viewpoint that adds to, not detracts from or duplicates, the contributions made by others. The world is a vast place and everyone’s voice should be heard. It’s incumbent upon you to make that so; raise your voice and be heard.

Oracle isn’t just a product line, it’s also a community and it takes more than one person to keep a community going. Others may be blogging and sharing but don’t let that stop you from doing the same. There is no rule that each contribution be unique; sometimes a different view on the same topic can turn the light of understanding on and provide insight and knowledge to clear the confusion. Your voice is special; don’t deprive others of your contributions simply because you can’t think of a unique topic. You may provide understanding by approaching the topic from a different direction. Sometimes a change of perspective is all it takes.

Again from that previous blog post:


I love sharing what I know; I've been doing it for over 16 years now, in various 
forums, some that are no longer as popular as they once were.  I never realized 
how far my commentary reached until I became an Oracle ACE; I have received 
congratulations and comments that I never expected, mostly of the 'it's about 
time' sentiment.  Simply because you don't see the gratitude of others who 
benefit from your knowledge doesn't mean that gratitude doesn't exist.  I see 
now that it does, and I am humbled by it.

It’s still great to be an Oracle ACE, and to me it always will be. But it is good to remember that being an ACE isn’t the destination, it’s just the start of the journey.

Head ’em up, move ’em out.

March 31, 2017

You’re A Natural

Filed under: General — dfitzjarrell @ 08:44

"'Why is it,' he said quietly, 'that quite often even the things which are correct just don't seem to be right?'"
-- Norton Juster, The Phantom Tollbooth

A YouTube video is currently being promoted in the database community regarding joins, including the variety of joins available in most database engines (the engine referred to in the video is Oracle). A very good discussion ensues covering inner, left outer, right outer, full outer and cross joins. Notably absent (and, conceivably, of limited use) is the natural join, used when the join columns have the same name (and, hopefully, the same definition). Let’s look at the natural join and what it can, and cannot, do.

The following example sets up the conditions for a successful natural join: two tables with a common column that will facilitate the join. Notice that in a natural join the join columns cannot be qualified with a table name or alias; the natural join returns the selected columns from all tables in the join, which can make writing such a select list a bit confusing. We begin with a simple ‘select *’ query:


SQL> create table yooper(
  2  snorm   number,
  3  wabba   varchar2(20),
  4  oplunt  date);

Table created.

SQL> 
SQL> create table amplo(
  2  snorm   number,
  3  fleezor date,
  4  smang   varchar2(20),
  5  imjyt   varchar2(17));

Table created.

SQL> 
SQL> begin
  2  	     for i in 1..10 loop
  3  		     insert into yooper
  4  		     values(i, 'Quarzenfleep '||i, sysdate +i);
  5  		     insert into amplo
  6  		     values(i, sysdate -i, 'Erblo'||i, 'Zaxegoomp'||i);
  7  	     end loop;
  8  
  9  	     commit;
 10  end;
 11  /

PL/SQL procedure successfully completed.

SQL> 
SQL> select *
  2  from yooper natural join amplo;

     SNORM WABBA                OPLUNT    FLEEZOR   SMANG                IMJYT
---------- -------------------- --------- --------- -------------------- -----------------
         1 Quarzenfleep 1       31-MAR-17 29-MAR-17 Erblo1               Zaxegoomp1
         2 Quarzenfleep 2       01-APR-17 28-MAR-17 Erblo2               Zaxegoomp2
         3 Quarzenfleep 3       02-APR-17 27-MAR-17 Erblo3               Zaxegoomp3
         4 Quarzenfleep 4       03-APR-17 26-MAR-17 Erblo4               Zaxegoomp4
         5 Quarzenfleep 5       04-APR-17 25-MAR-17 Erblo5               Zaxegoomp5
         6 Quarzenfleep 6       05-APR-17 24-MAR-17 Erblo6               Zaxegoomp6
         7 Quarzenfleep 7       06-APR-17 23-MAR-17 Erblo7               Zaxegoomp7
         8 Quarzenfleep 8       07-APR-17 22-MAR-17 Erblo8               Zaxegoomp8
         9 Quarzenfleep 9       08-APR-17 21-MAR-17 Erblo9               Zaxegoomp9
        10 Quarzenfleep 10      09-APR-17 20-MAR-17 Erblo10              Zaxegoomp10

10 rows selected.

SQL> 
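The ‘no prefix’ restriction mentioned above is enforced by Oracle, not merely a convention: qualifying the join column with a table name raises ORA-25155. A sketch using the tables just created:

```sql
SQL> select yooper.snorm, wabba, smang
  2  from yooper natural join amplo;
select yooper.snorm, wabba, smang
              *
ERROR at line 1:
ORA-25155: column used in NATURAL join cannot have qualifier
```

Dropping the qualifier and selecting snorm, wabba and smang unadorned succeeds.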

In this case the common column not only has the same name but also the same data type. When processing such queries Oracle returns the join column data from the first table listed in the join; since the column has the same name and definition here it really doesn’t matter which table supplies it, but in the next example there will be a difference. Let’s drop the existing tables and recreate them, this time declaring the common column as a NUMBER in one table and a VARCHAR2 in the other. The data in both tables will be the same (although the numbers will be stored as characters in the VARCHAR2 column), and since there are no alpha characters in the VARCHAR2 column the implicit TO_NUMBER() conversion succeeds. The difference in the output is that the SNORM column (the commonly-named join column) is now printed as character data rather than numeric but, as mentioned previously, that is dependent on the table order in the join:


SQL> drop table yooper purge;

Table dropped.

SQL> drop table amplo purge;

Table dropped.

SQL> 
SQL> create table yooper(
  2  snorm   number,
  3  wabba   varchar2(20),
  4  oplunt  date);

Table created.

SQL> 
SQL> create table amplo(
  2  snorm   varchar2(10),
  3  fleezor date,
  4  smang   varchar2(20),
  5  imjyt   varchar2(17));

Table created.

SQL> 
SQL> begin
  2  	     for i in 1..10 loop
  3  		     insert into yooper
  4  		     values(i, 'Quarzenfleep '||i, sysdate +i);
  5  		     insert into amplo
  6  		     values(i, sysdate -i, 'Erblo'||i, 'Zaxegoomp'||i);
  7  	     end loop;
  8  
  9  	     commit;
 10  end;
 11  /

PL/SQL procedure successfully completed.

SQL> 
SQL> select *
  2  from yooper natural join amplo;

SNORM      WABBA                OPLUNT    FLEEZOR   SMANG                IMJYT
---------- -------------------- --------- --------- -------------------- -----------------
1          Quarzenfleep 1       31-MAR-17 29-MAR-17 Erblo1               Zaxegoomp1
2          Quarzenfleep 2       01-APR-17 28-MAR-17 Erblo2               Zaxegoomp2
3          Quarzenfleep 3       02-APR-17 27-MAR-17 Erblo3               Zaxegoomp3
4          Quarzenfleep 4       03-APR-17 26-MAR-17 Erblo4               Zaxegoomp4
5          Quarzenfleep 5       04-APR-17 25-MAR-17 Erblo5               Zaxegoomp5
6          Quarzenfleep 6       05-APR-17 24-MAR-17 Erblo6               Zaxegoomp6
7          Quarzenfleep 7       06-APR-17 23-MAR-17 Erblo7               Zaxegoomp7
8          Quarzenfleep 8       07-APR-17 22-MAR-17 Erblo8               Zaxegoomp8
9          Quarzenfleep 9       08-APR-17 21-MAR-17 Erblo9               Zaxegoomp9
10         Quarzenfleep 10      09-APR-17 20-MAR-17 Erblo10              Zaxegoomp10

10 rows selected.

SQL> 

Notice when the tables are reversed the SNORM data is again numeric:


SQL> select *
  2  from amplo natural join yooper;

     SNORM FLEEZOR   SMANG                IMJYT             WABBA                OPLUNT
---------- --------- -------------------- ----------------- -------------------- ---------
         1 29-MAR-17 Erblo1               Zaxegoomp1        Quarzenfleep 1       31-MAR-17
         2 28-MAR-17 Erblo2               Zaxegoomp2        Quarzenfleep 2       01-APR-17
         3 27-MAR-17 Erblo3               Zaxegoomp3        Quarzenfleep 3       02-APR-17
         4 26-MAR-17 Erblo4               Zaxegoomp4        Quarzenfleep 4       03-APR-17
         5 25-MAR-17 Erblo5               Zaxegoomp5        Quarzenfleep 5       04-APR-17
         6 24-MAR-17 Erblo6               Zaxegoomp6        Quarzenfleep 6       05-APR-17
         7 23-MAR-17 Erblo7               Zaxegoomp7        Quarzenfleep 7       06-APR-17
         8 22-MAR-17 Erblo8               Zaxegoomp8        Quarzenfleep 8       07-APR-17
         9 21-MAR-17 Erblo9               Zaxegoomp9        Quarzenfleep 9       08-APR-17
        10 20-MAR-17 Erblo10              Zaxegoomp10       Quarzenfleep 10      09-APR-17

10 rows selected.

SQL>

Adjusting the select list by using specific columns produces a smaller data set; notice that no table aliases or prefixes are used (a natural join, in fact, forbids qualifying the common column), which can make it difficult to keep track of which columns are coming from which table:


SQL> select smang, snorm, fleezor
  2  from yooper natural join amplo;

SMANG                SNORM      FLEEZOR
-------------------- ---------- ---------
Erblo1               1          29-MAR-17
Erblo2               2          28-MAR-17
Erblo3               3          27-MAR-17
Erblo4               4          26-MAR-17
Erblo5               5          25-MAR-17
Erblo6               6          24-MAR-17
Erblo7               7          23-MAR-17
Erblo8               8          22-MAR-17
Erblo9               9          21-MAR-17
Erblo10              10         20-MAR-17

10 rows selected.

SQL>
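For comparison, the same result set can be produced with an explicit join; this is a sketch of the equivalent query, using the tables created above and written with the USING clause so the common column still appears once and unqualified:

```sql
-- Equivalent explicit join; USING, like a natural join,
-- returns a single, unqualified SNORM column
select smang, snorm, fleezor
from yooper join amplo using (snorm);
```

The explicit form documents the join condition in the query text, so adding another commonly-named column to either table later cannot silently change the join.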

The natural join only requires the column names to match; the definitions can differ, and as long as the data can be implicitly converted so a comparison can be made the query succeeds. Now let’s change the picture a bit more and store character strings in one table while the other keeps its numeric data:


SQL> drop table yooper purge;

Table dropped.

SQL> drop table amplo purge;

Table dropped.

SQL> 
SQL> create table yooper(
  2  snorm   number,
  3  wabba   varchar2(20),
  4  oplunt  date);

Table created.

SQL> 
SQL> create table amplo(
  2  snorm   varchar2(10),
  3  fleezor date,
  4  smang   varchar2(20),
  5  imjyt   varchar2(17));

Table created.

SQL> 
SQL> begin
  2  	     for i in 1..10 loop
  3  		     insert into yooper
  4  		     values(i, 'Quarzenfleep '||i, sysdate +i);
  5  		     insert into amplo
  6  		     values(i||'Bubba', sysdate -i, 'Erblo'||i, 'Zaxegoomp'||i);
  7  	     end loop;
  8  
  9  	     commit;
 10  end;
 11  /

PL/SQL procedure successfully completed.

SQL> 
SQL> select *
  2  from yooper natural join amplo;
select *
*
ERROR at line 1:
ORA-01722: invalid number


SQL> 
SQL> select *
  2  from amplo natural join yooper;
select *
*
ERROR at line 1:
ORA-01722: invalid number


SQL> 
SQL> drop table yooper purge;

Table dropped.

SQL> drop table amplo purge;

Table dropped.

SQL> 

Now the natural join fails to return data since the TO_NUMBER() conversion fails; it doesn’t matter which table is listed first in the join because Oracle converts the character strings to numbers, not the reverse.
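If a join really is needed against mismatched data like this, one workaround is an explicit join with an explicit conversion on the numeric side, so no TO_NUMBER() is attempted; a sketch, assuming the tables are recreated as in the example just shown:

```sql
-- Explicitly convert the NUMBER column so the comparison is
-- character-to-character and ORA-01722 cannot occur
select y.snorm, y.wabba, a.smang
from yooper y
join amplo a on to_char(y.snorm) = a.snorm;
```

With the 'Bubba'-suffixed data above this query returns no rows (since '1' does not equal '1Bubba'), but it runs without error, which is the point of the conversion.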

The natural join will use all commonly named columns in the join condition, so let’s add another matching column to this example and see what happens:


SQL> create table yooper(
  2  snorm      number,
  3  fleezor date,
  4  wabba      varchar2(20),
  5  oplunt     date);

Table created.

SQL>
SQL> create table amplo(
  2  snorm      number(10),
  3  fleezor date,
  4  smang      varchar2(20),
  5  imjyt      varchar2(17));

Table created.

SQL>
SQL> begin
  2          for i in 1..10 loop
  3                  insert into yooper
  4                  values(i, sysdate-i, 'Quarzenfleep '||i, sysdate +i);
  5                  insert into amplo
  6                  values(i, sysdate -i, 'Erblo'||i, 'Zaxegoomp'||i);
  7          end loop;
  8
  9          commit;
 10  end;
 11  /

PL/SQL procedure successfully completed.

SQL>
SQL> select *
  2  from yooper natural join amplo;

     SNORM FLEEZOR   WABBA                OPLUNT    SMANG                IMJYT
---------- --------- -------------------- --------- -------------------- -----------------
         1 30-MAR-17 Quarzenfleep 1       01-APR-17 Erblo1               Zaxegoomp1
         2 29-MAR-17 Quarzenfleep 2       02-APR-17 Erblo2               Zaxegoomp2
         3 28-MAR-17 Quarzenfleep 3       03-APR-17 Erblo3               Zaxegoomp3
         4 27-MAR-17 Quarzenfleep 4       04-APR-17 Erblo4               Zaxegoomp4
         5 26-MAR-17 Quarzenfleep 5       05-APR-17 Erblo5               Zaxegoomp5
         6 25-MAR-17 Quarzenfleep 6       06-APR-17 Erblo6               Zaxegoomp6
         7 24-MAR-17 Quarzenfleep 7       07-APR-17 Erblo7               Zaxegoomp7
         8 23-MAR-17 Quarzenfleep 8       08-APR-17 Erblo8               Zaxegoomp8
         9 22-MAR-17 Quarzenfleep 9       09-APR-17 Erblo9               Zaxegoomp9
        10 21-MAR-17 Quarzenfleep 10      10-APR-17 Erblo10              Zaxegoomp10

10 rows selected.

SQL>
SQL> select *
  2  from amplo natural join yooper;

     SNORM FLEEZOR   SMANG                IMJYT             WABBA                OPLUNT
---------- --------- -------------------- ----------------- -------------------- ---------
         1 30-MAR-17 Erblo1               Zaxegoomp1        Quarzenfleep 1       01-APR-17
         2 29-MAR-17 Erblo2               Zaxegoomp2        Quarzenfleep 2       02-APR-17
         3 28-MAR-17 Erblo3               Zaxegoomp3        Quarzenfleep 3       03-APR-17
         4 27-MAR-17 Erblo4               Zaxegoomp4        Quarzenfleep 4       04-APR-17
         5 26-MAR-17 Erblo5               Zaxegoomp5        Quarzenfleep 5       05-APR-17
         6 25-MAR-17 Erblo6               Zaxegoomp6        Quarzenfleep 6       06-APR-17
         7 24-MAR-17 Erblo7               Zaxegoomp7        Quarzenfleep 7       07-APR-17
         8 23-MAR-17 Erblo8               Zaxegoomp8        Quarzenfleep 8       08-APR-17
         9 22-MAR-17 Erblo9               Zaxegoomp9        Quarzenfleep 9       09-APR-17
        10 21-MAR-17 Erblo10              Zaxegoomp10       Quarzenfleep 10      10-APR-17

10 rows selected.

SQL>

As with all inner joins only the matching data is returned, so if YOOPER is reloaded with only the even-numbered records then only those records will be seen in the output from the join:


SQL> truncate table yooper;

Table truncated.

SQL>
SQL> begin
  2          for i in 1..10 loop
  3                  if mod(i,2) = 0 then
  4                          insert into yooper
  5                          values(i, sysdate-i, 'Quarzenfleep '||i, sysdate +i);
  6                  end if;
  7          end loop;
  8
  9          commit;
 10  end;
 11  /

PL/SQL procedure successfully completed.

SQL>
SQL> select *
  2  from yooper natural join amplo;

     SNORM FLEEZOR   WABBA                OPLUNT    SMANG                IMJYT
---------- --------- -------------------- --------- -------------------- -----------------
         2 29-MAR-17 Quarzenfleep 2       02-APR-17 Erblo2               Zaxegoomp2
         4 27-MAR-17 Quarzenfleep 4       04-APR-17 Erblo4               Zaxegoomp4
         6 25-MAR-17 Quarzenfleep 6       06-APR-17 Erblo6               Zaxegoomp6
         8 23-MAR-17 Quarzenfleep 8       08-APR-17 Erblo8               Zaxegoomp8
        10 21-MAR-17 Quarzenfleep 10      10-APR-17 Erblo10              Zaxegoomp10

SQL>
SQL> select *
  2  from amplo natural join yooper;

     SNORM FLEEZOR   SMANG                IMJYT             WABBA                OPLUNT
---------- --------- -------------------- ----------------- -------------------- ---------
         2 29-MAR-17 Erblo2               Zaxegoomp2        Quarzenfleep 2       02-APR-17
         4 27-MAR-17 Erblo4               Zaxegoomp4        Quarzenfleep 4       04-APR-17
         6 25-MAR-17 Erblo6               Zaxegoomp6        Quarzenfleep 6       06-APR-17
         8 23-MAR-17 Erblo8               Zaxegoomp8        Quarzenfleep 8       08-APR-17
        10 21-MAR-17 Erblo10              Zaxegoomp10       Quarzenfleep 10      10-APR-17

SQL>

All data in the common columns must match to return data; if the ids and dates in YOOPER don’t match up with the ids and dates in AMPLO no data is returned:


SQL> truncate table yooper;

Table truncated.

SQL>
SQL> begin
  2          for i in 1..10 loop
  3                  if mod(i,2) = 0 then
  4                          insert into yooper
  5                          values(i-1, sysdate-i, 'Quarzenfleep '||i, sysdate +i);
  6                  end if;
  7          end loop;
  8
  9          commit;
 10  end;
 11  /

PL/SQL procedure successfully completed.

SQL>
SQL> select *
  2  from yooper natural join amplo;

No rows selected.

SQL>
SQL> select *
  2  from amplo natural join yooper;

No rows selected.

SQL>

There are matching id values, and there are matching date values between the tables, but no row has a matching combination of id and date. With a traditional inner join data could be returned by joining on either the id values or the date values alone (although how good those results would be is questionable, given the matching key structure).
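As a sketch, a traditional inner join on SNORM alone (deliberately leaving FLEEZOR out of the join condition) would still find matches against the shifted data loaded above:

```sql
-- Join on the id column only; the date columns are
-- intentionally excluded from the join condition
select y.snorm, y.fleezor yooper_date, a.fleezor amplo_date
from yooper y
join amplo a on y.snorm = a.snorm;
```

This returns rows because the odd id values now in YOOPER all exist in AMPLO, even though no id/date pair matches.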

A natural join isn’t a commonly used join type, mainly because joined tables are not likely to contain join columns with identical names (the demonstration schemas provided with Oracle are a good source of tables to experiment with). When such a condition exists a natural join is an option, but testing is necessary to ensure that the results returned are both desirable and usable.

Just because it’s ‘correct’ doesn’t make it ‘right’. Right?

March 28, 2017

Finding Your Way

Filed under: General — dfitzjarrell @ 08:00

"Whether or not you find your own way, you're bound to find some way. If you happen to find my way, please return it,
as it was lost years ago. I imagine by now it's quite rusty."
-- Norton Juster, The Phantom Tollbooth

Oracle has provided access to its wait interface for several releases, and with each new release it expands the range of wait information available, so much so that it’s hard not to find something to examine. Disk reads, logical reads, sort activity and table scans all vie for the attention of the DBA. Of course examination leads to investigation which leads, inevitably, to tuning, even when there is nothing to tune. Such constant twiddling and tweaking is known as Compulsive Tuning Disorder, or CTD. Unfortunately the more ways Oracle provides to interrogate the wait interface the more easily the DBA can fall victim to CTD. To help reduce the urge to tune, a few questions need to be asked regarding the so-called ‘problem area’. Let’s dig in and ask those questions.

First, and foremost, is the following question:

“What problem are you trying to solve?”

If you can’t answer that question then there really isn’t a reason to tune anything; you’ll never know when you’re done and the task will go on and on and on and on and on … ad infinitum, ad nauseam, with no progress to report and no end in sight, another DBA sucked into the rabbit hole of CTD. One thing will lead to another and another and another as you find more areas to ‘tune’ based on the wait interface data and blog posts and articles clearly telling you something needs to be fixed. In most cases nothing could be further from the truth.

Next come misleading or misunderstood numbers, mainly in reference to data reads and writes. I’ve seen some DBAs try to tune the database to reduce logical reads; it’s usually newer DBAs who see the large values for logical reads and conclude there is a problem. The issue isn’t the available data, it’s isolating one small aspect of the entire performance picture and tuning that to the exclusion of everything else. Large volumes of logical reads aren’t necessarily a problem unless the buffer cache is being reloaded over short periods of time, which would be accompanied by large volumes of physical reads. In cases such as this the physical reads would be the telling factor, and those MAY be cause for concern. It depends upon the system: an OLTP system grinding through tables to return a single row is most likely a problem, whereas a data warehouse churning through that same volume of data would be normal. Taking the logical reads and the physical reads individually can muddy the water and obscure the real issue, which may be a buffer cache that is now too small for the workload; a configuration that was once more than sufficient can, over time, become a performance bottleneck as more and more users work in the database. A database is a changing entity and it needs to be tended, like a garden, if it’s going to grow.
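As a starting point for putting those read counts in context, the instance-wide figures can be compared directly; a sketch querying v$sysstat (statistic names as found in recent Oracle releases):

```sql
-- Compare logical and physical read volumes instance-wide;
-- a high physical-to-logical ratio may suggest buffer cache pressure
select name, value
from v$sysstat
where name in ('session logical reads',
               'physical reads',
               'physical reads cache');
```

These are cumulative counters since instance startup, so two snapshots taken over an interval are needed before drawing any conclusion from them.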

The DBA needs to listen to the users since they will be the first to complain when something isn’t right and needs attention. Performance is time and, for business, time is money; when tasks take longer and longer to complete, less work gets done. The DBA shouldn’t need to hunt for things to do; continually tuning to get that last microsecond of performance is wasted effort — if no one but the DBA is going to notice the ‘improvement’ it’s not worth pursuing.

Not all tuning is bad or wasted effort but the DBA needs to have a clear goal in mind and a path to follow that addresses issues and brings them to some sort of resolution, even if it’s only a temporary fix until a permanent solution can be implemented. It does no good to constantly pick apart the database to find problems to solve, especially when the users aren’t complaining.

When something is wrong the DBA will hear about it; that’s the time to step into action and start problem solving. The DBA doesn’t need to go looking for problems, they’ll show up all by themselves. And if he or she isn’t constantly twiddling with this or tweaking that the real issues can be dealt with when they happen. Then the users will stop complaining and peace and joy will reign supreme. Okay, so peace and joy won’t necessarily cover the land but the users will stop complaining, at least for a while, and there will be benefit seen from the effort expended.

CTD is thankless, relentless and never-ending, so don’t get caught up in wanting to fix everything; it can’t be done and there are some things that are, most likely, not worth the effort spent given the small return that investment will generate. It’s not enough to know when to stop, the DBA also needs to know when NOT to start; if there is no clear destination to the journey it’s best to not begin. There is plenty to do without making work out of nothing.

Go your own way, just don’t get lost.

March 13, 2017

It’s Private

Filed under: General — dfitzjarrell @ 10:35

“The only thing you can do easily is be wrong, and that's hardly worth the effort.” 
― Norton Juster, The Phantom Tollbooth

Oracle provides two parameters that affect the PGA that look very similar but operate very differently. One of these parameters is the well-known pga_max_size and the other is a hidden parameter, _pga_max_size. Let’s look at both and see how one can be very effective while the other can create problems with respect to PGA memory management.

DBAs know pga_max_size from extensive documentation from Oracle Corporation and from numerous Oracle professionals writing blog posts about it. It’s a common parameter to set to restrict the overall size of the PGA in releases 11.2 and later. It’s available if Automatic Memory Management (AMM) is not in use; databases running on Linux and using hugepages fall into this group since AMM and hugepages are not a supported combination. Hugepages are available for IPC (Inter-Process Communication) shared memory; this is the ‘standard’ shared memory model (dating back to UNIX System V) that allows multiple processes to access the same shared memory segment. There is another form of shared memory segment, the memory-mapped file, and such segments are currently not supported by hugepages. Oracle, on Linux, gives you a choice between hugepages and memory-mapped files, and you make that choice by using (or not using) Automatic Memory Management. Using Automatic Shared Memory Management (ASMM) instead allows the DBA to set parameters such as sga_target, sga_max_size, pga_aggregate_target and pga_max_size and retain some control over how those memory areas are sized.

Using pga_max_size is a simple task:


SQL> alter system set pga_max_size=2G;

System altered.

SQL>

Now Oracle will do its best to limit the overall PGA size to the requested value, but remember this is a targeted maximum, not an absolute limit. It is more restrictive than pga_aggregate_target, meaning it’s less likely to be exceeded.
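What is currently in effect can be confirmed from v$parameter; a small sketch, matching on the name prefix so all of the documented PGA-related settings appear together:

```sql
-- Display the current documented PGA parameter settings;
-- ISDEFAULT shows whether each value was set explicitly
select name, value, isdefault
from v$parameter
where name like 'pga%';
```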

On to its sister parameter, _pga_max_size. This parameter regulates the size of the PGA memory allocated to a single process. Oracle sets this using calculations based on pga_aggregate_target and pga_max_size and, since it is an ‘undocumented’ parameter, it should NOT be changed at the whim of the DBA. Setting this to any value prevents Oracle from setting it based on its standard calculations and can seriously impact database performance and memory usage. If, for example, the DBA does this:


SQL> alter system set "_pga_max_size"=2G;

System altered.

SQL>

Oracle is now capable of allocating up to 2 GB of PGA to each and every process started after that change has taken place. On an exceptionally active and busy system, with parallel processing enabled, each process can have up to 2 GB of RAM in its PGA. Since many systems still don’t have terabytes of RAM installed, such allocations can bring the database, and the server, to a grinding halt, throwing ORA-04030 errors in the process. This, of course, is not what the DBA intended but it is what the DBA enabled by altering the _pga_max_size parameter. Unfortunately this parameter (_pga_max_size) is still being written about in blogs that present unvalidated ‘information’ to the Oracle community.
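If the parameter has been set by mistake, the safest course is to return it to Oracle’s control rather than guess at a ‘better’ value; a sketch (an instance restart is needed for the recalculated value to take effect):

```sql
-- Remove the manual setting so Oracle resumes deriving
-- _pga_max_size from its own calculations at the next startup
alter system reset "_pga_max_size" scope=spfile;
```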

Knowledge is power; unfortunately unverified ‘information’ is taken as knowledge by those who don’t apply critical thinking to what they read (a common misconception being that ‘if it’s on the Internet it MUST be true’). I know of DBAs who set _pga_max_size to match the pga_max_size parameter and found, to their dismay, that their actions seriously impacted production systems. Sometimes in the database world prolific authors are taken as experts and their words looked upon as gospel. Unfortunately prolific doesn’t necessarily mean reliable.

It’s always best to test what others tell you before assuming the advice given to you is right.

March 1, 2017

Return To Sender

Filed under: General — dfitzjarrell @ 16:06

"The most important reason for going from one place to another is to see what's in between."
-- Norton Juster, The Phantom Tollbooth

Recently in an Oracle forum a question resurfaced regarding enabling row movement for tables. The posted question, from five years ago, asked if row movement was safe and if there could be any ‘undesired impact on application data’. The answer to the first part of that question is ‘yes’ (it’s safe because Oracle, except under extreme conditions, won’t lose your data) and the answer to the second part is also ‘yes’. That may be confusing so let’s look at what the end result could be.

Data rows are uniquely identified by a construct known far and wide as the ROWID. ROWIDs contain a wealth of information as to the location of a given row; the file number, the block number and row number are all encoded in this curious value. Updates can change pretty much everything in a row except the ROWID and primary key values (and, yes, there’s a way to change PK values but it involves deleting and inserting the row — Oracle does this when, for some bizarre reason known only to the user making the change, a PK value is updated). The ONLY way to change a ROWID value is to physically move the row, which is what enabling row movement will allow. This is undesirable for the reasons listed below:


	* Applications coded to store ROWID values can fail as the data that was once in Location A is now in Location B.
	* Indexes will become invalid or unusable, requiring that they be rebuilt.
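
The ROWID change is easy to demonstrate; a sketch, using a hypothetical test table in an ASSM tablespace and a segment shrink (an operation that physically relocates rows and is only permitted once row movement is enabled):

```sql
-- Hypothetical test table; deleting most rows leaves space
-- that a shrink will reclaim by moving the surviving rows
create table rm_test as
select level id, rpad('x',200,'x') filler
from dual connect by level <= 1000;

delete from rm_test where id <= 900;
commit;

select rowid from rm_test where id = 950;

alter table rm_test enable row movement;
alter table rm_test shrink space;

-- The ROWID returned here may differ from the one above
select rowid from rm_test where id = 950;
```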

Storing ROWID values in application tables isn’t the wisest choice a developer can make. Exporting from the source and importing into a new destination will render those stored ROWID values useless. Cloning the database via RMAN will do the same thing, since ROWID values are unique only within the database where they are generated; they do not transport across servers or platforms. Consider two imaginary countries, Blaggoflerp and Snormaflop. Each is unique in geography, so locations found in Blaggoflerp are not found in Snormaflop, and the reverse is also true. If the traveler has but one map, of Blaggoflerp, and tries to use it to navigate Snormaflop, our traveler will become hopelessly lost and confused. Enable row movement on a table where indexes are present, where an application stores ROWIDs for easy data access, or both, and Oracle starts singing that old Elvis Presley hit, written by Winfield Scott, “Return To Sender”:


Return to sender, address unknown.
No such person, no such zone.

Don’t misunderstand, the data is STILL in the table, it’s just moved from its original location and left no forwarding address. It’s possible that new data now occupies the space where that trusty old row used to live, so the application doesn’t break but it does return unexpected results because the values that were once at that location are no longer there. And any indexes that referenced that row’s original ROWID are now invalidated, making them useless until manual intervention is employed to rebuild them.


"Since you got here by not thinking, it seems reasonable to expect that, in order to get out, you must start thinking."
-- Norton Juster, The Phantom Tollbooth

Maybe it’s not that the DBA didn’t think about the act before he or she did it, it might simply be that he or she didn’t think far enough ahead about the results of such an act to make a reasonable choice. Changes to a database can affect downstream processing; failing to consider the ripple effect of such changes can be disastrous, indeed. It isn’t enough in this day and age to consider the database as a ‘lone ranger’; many systems can depend on a single database and essentially haphazard changes can stop them in their tracks.

There may be times when enabling row movement is necessary; changing a partitioning key is one of them. Granted, making such changes on a partitioned table will be part of a larger outage window where the local and global indexes can be maintained, so the impact will be positive, not negative. Absent such tasks (ones where row movement is genuinely necessary) it’s not recommended to enable row movement, as it will certainly break things, especially things no one was expecting because of a lack of knowledge of the affected systems.
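Changing a partitioning key value is the classic case where row movement is required; a minimal sketch with a hypothetical list-partitioned table:

```sql
-- Without ENABLE ROW MOVEMENT the update below raises
-- ORA-14402: updating partition key column would cause a partition change
create table orders_p (id number, region varchar2(4))
partition by list (region)
 (partition p_east values ('EAST'),
  partition p_west values ('WEST'));

insert into orders_p values (1, 'EAST');
commit;

alter table orders_p enable row movement;

-- The row physically moves from p_east to p_west, receiving a new ROWID
update orders_p set region = 'WEST' where id = 1;
commit;
```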

It’s not always good to go travelling.

January 27, 2017

“Back It Up!!!”

Filed under: General — dfitzjarrell @ 14:21

“Expectations is the place you must always go to before you get to where you're going.” 
― Norton Juster, The Phantom Tollbooth   

In a recent post in the Oracle General database forum the following question was asked:


Hi,

I have 3 schema's of almost equal sizes, off which only two are creating the backup's (Hot backup). One of the schema's is not creating 
any backup, by not creating I mean, the backup file that is generated size is too small than the other two files.

The size of the other two backup files is almost 20 GB while the third one is only 54 Bytes !!!

Below are the commands I am using for backup,

alter tablespace SCHEMA begin backup; ( DB level )
tar cpvf - SCHEMA_TS | compress -f > backup.tar.Z ( OS level )

The DB (Oracle Database 11g) is in Archive log mode and no error is being thrown while running the above commands.

Could you please help me in solving this issue.

Any reference related to this would also be of great help.

Thanks in advance !!

There are issues with that sequence of statements; the first is calling it a ‘backup’ at all, because it’s highly likely that, after the tablespace files are restored, recovery will fail and the database will be left in an unusable state. The obvious omission is the archivelogs: nowhere in that command sequence is any statement copying, via tar, the archivelogs generated before, during and after the ‘backup’; apparently the entire script was not posted to the thread, so any additional steps it might execute were not available to view. Since no recovery testing is reported (if such a script exists its contents were not presented) it’s very possible that this ‘backup’ is taken on faith, and unfortunately faith isn’t going to be of much help here.

Yet another problem is the lack of any query to determine the actual datafiles associated with the given tablespace; a ‘backup’ missing that information may not copy all of the required datafiles, leaving the tablespace incomplete and Oracle unable to recover it. This again leads to a down database with no hope of opening.
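At a minimum the file list for the tablespace should be generated, not assumed; a sketch (the tablespace name here is, of course, hypothetical):

```sql
-- List every datafile belonging to the tablespace being backed up
select file_name
from dba_data_files
where tablespace_name = 'SCHEMA_TS';
```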

It was suggested several times in the thread that the poster stop using this ‘backup’ and move to RMAN to create dependable, reliable and recoverable backups. Why this method was in use was explained with this post:


I am new to Oracle DB and the guy who worked before me, wrote a (backup) script where he just created a tar of the table space files.

which leads one to wonder how this DBA thought he or she would restore these tablespaces to a useful and usable state should the time come. The poster added:


I want to use RMAN but thought of solving this issuse first (worst case scenario) and then create a new backup script using RMAN.

Honestly this is not the problem that needs to be solved; the real problem is generating a reliable backup, and RMAN has been proven time and again to be the tool for that job. Further discussion led to the realization that not all files were being sent to tar, which explained the size discrepancy but didn’t address the recoverability issue. Anyone can take a so-called ‘backup’ using any number of tools and operating system utilities; it’s restoring and recovering from those ‘backups’ that tells the tale of success or failure, and failure in restoring and recovering a production database isn’t an option.
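As a sketch of the suggested replacement, a basic RMAN backup that includes the archived redo logs is a single command (retention policy, channels and destinations are site-specific and omitted here):

```
RMAN> backup database plus archivelog;
```

Unlike the tar approach, this captures every datafile the database knows about along with the archivelogs needed to make the backup recoverable, and RMAN’s RESTORE ... VALIDATE can then be used to test it.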

Sometimes you don’t get what you expect.

December 21, 2016

“Do You Have A Key?”

Filed under: General — dfitzjarrell @ 11:27

“Don't you know anything at all about numbers?"
"Well, I don't think they're very important," snapped Milo, too embarrassed to admit the truth.
"NOT IMPORTANT!" roared the Dodecahedron, turning red with fury. "Could you have tea for two without the two — or three blind mice 
without the three? Would there be four corners of the earth if there weren't a four? And how would you sail the seven seas without a seven?"
"All I meant was—" began Milo, but the Dodecahedron, overcome with emotion and shouting furiously, carried right on.
"If you had high hopes, how would you know how high they were? And did you know that narrow escapes come in all different widths? 
Would you travel the whole wide world without ever knowing how wide it was? And how could you do anything at long last," he concluded, 
waving his arms over his head, "without knowing how long the last was? Why, numbers are the most beautiful and valuable things in the world. 
Just follow me and I'll show you." He turned on his heel and stalked off into the cave.” 
-- Norton Juster, The Phantom Tollbooth

How best to generate a primary key is a discussion that never seems to get an answer suitable to everyone. Sequences are a popular choice, as are other forms of generated or “artificial” keys. Oracle also provides a function named sys_guid() that generates globally unique identifiers, one use of which is, as one might expect, a generated primary key. Discussions in the Oracle forums occasionally weigh the merits and issues of using such global unique identifiers as primary keys; one current discussion asked whether sys_guid() was faster or slower than a sequence, and on a Linux system generating sys_guid() values was shown to be faster. The example code, slightly modified, is presented here having been run on Oracle 12.1.0.2 on Windows. Both 12c configurations (standard and container) were tested to see if any differences appeared; since both runs provided similar results only the non-container results are shown. Timing was set on to record and display the elapsed time for each set of serial tests; parallel tests were also completed and log table entries report the elapsed time for those runs. Separate tables were used for each set of tests; the results are shown below. The test begins with a table and a sequence created with the default settings:
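For context before the timed runs, this is a minimal sketch of how a sys_guid()-based key table might be declared (the table name is hypothetical and is not part of the timed test that follows):

```sql
-- RAW(16) holds the 16-byte GUID; the default clause generates
-- a new identifier for each inserted row
create table t_guid
 ( id     raw(16) default sys_guid() primary key
 , filler varchar2(1000)
 );

insert into t_guid (filler) values ('sdfsf');
```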


SQL> -- default 20 cache
SQL> create sequence seq1;

Sequence created.

SQL>
SQL> create table t_seq
  2  	 ( id	  number(9) primary key
  3  	 , filler varchar2(1000)
  4  	 );

Table created.

SQL>

The next step inserts 999,999 records into the t_seq table; the execution plan and run statistics are shown below:


SQL> insert into t_seq
  2    select seq1.nextval ,'sdfsf' from dual connect by level < 1000000;

999999 rows created.

Elapsed: 00:00:10.19

Execution Plan
----------------------------------------------------------
Plan hash value: 3365622274

--------------------------------------------------------------------------------
| Id  | Operation                      | Name  | Rows  | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | INSERT STATEMENT               |       |     1 |     2   (0)| 00:00:01 |
|   1 |  LOAD TABLE CONVENTIONAL       | T_SEQ |       |            |          |
|   2 |   SEQUENCE                     | SEQ1  |       |            |          |
|*  3 |    CONNECT BY WITHOUT FILTERING|       |       |            |          |
|   4 |     FAST DUAL                  |       |     1 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter(LEVEL<1000000)


Statistics
----------------------------------------------------------
      50149  recursive calls
     227827  db block gets
      59085  consistent gets
          0  physical reads
  113671864  redo size
        855  bytes sent via SQL*Net to client
        893  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          5  sorts (memory)
          0  sorts (disk)
     999999  rows processed

SQL>
SQL> drop table t_seq;

Table dropped.

SQL> drop sequence seq1;

Sequence dropped.

SQL>

The amount of redo generated is large, a result of using the sequence. Another sequence test was executed using a sequence created with a larger cache value. Before each run the user was re-connected to reset the session statistics:


SQL> connect bing/!@#!@#
Connected.
SQL> create sequence seq1 cache 10000;

Sequence created.

SQL>
SQL> create table t_seq
  2  	 ( id	  number(9) primary key
  3  	 , filler varchar2(1000)
  4  	 );

Table created.

SQL>

The same insert statement was executed using the sequence having the larger cache; the execution plan and session statistics are shown below:


SQL> insert into t_seq
  2    select seq1.nextval ,'sdfsf' from dual connect by level < 1000000;

999999 rows created.

Elapsed: 00:00:05.24

Execution Plan
----------------------------------------------------------
Plan hash value: 3365622274

--------------------------------------------------------------------------------
| Id  | Operation                      | Name  | Rows  | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | INSERT STATEMENT               |       |     1 |     2   (0)| 00:00:01 |
|   1 |  LOAD TABLE CONVENTIONAL       | T_SEQ |       |            |          |
|   2 |   SEQUENCE                     | SEQ1  |       |            |          |
|*  3 |    CONNECT BY WITHOUT FILTERING|       |       |            |          |
|   4 |     FAST DUAL                  |       |     1 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter(LEVEL<1000000)


Statistics
----------------------------------------------------------
        249  recursive calls
      77911  db block gets
       9188  consistent gets
          1  physical reads
   79744836  redo size
        854  bytes sent via SQL*Net to client
        893  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          5  sorts (memory)
          0  sorts (disk)
     999999  rows processed

SQL>

Using the larger sequence cache reduced the redo size by 33,927,028 bytes (roughly 30 percent), which cut the execution time roughly in half. On to the sys_guid() portion of the serial testing, with a new table created and a new connection established:
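As a quick sanity check of those figures, outside of Oracle, here is the arithmetic using the redo sizes and elapsed times copied from the autotrace statistics above:

```python
# Figures copied from the two insert runs shown above.
default_cache_redo = 113_671_864   # redo size, sequence with the default cache of 20
large_cache_redo   = 79_744_836    # redo size, sequence with cache 10000

savings = default_cache_redo - large_cache_redo
print(savings)                                        # bytes of redo saved
print(round(savings / default_cache_redo * 100, 1))   # percent reduction

default_cache_secs = 10.19   # elapsed, default cache
large_cache_secs   = 5.24    # elapsed, cache 10000
print(round(default_cache_secs / large_cache_secs, 2))  # speedup factor
```

The savings come to 33,927,028 bytes, about a 30 percent reduction in redo, for a speedup of just under 2x.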


SQL> connect bing/!@#!@#
Connected.
SQL> create table t_raw
  2  	 ( id	  raw(16) primary key
  3  	 , filler varchar2(1000)
  4  	 );

Table created.

SQL>
SQL> insert into t_raw
  2    select sys_guid(),'sdfsf' from dual connect by level < 1000000;

999999 rows created.

Elapsed: 00:00:54.15

Execution Plan
----------------------------------------------------------
Plan hash value: 1236776825

-------------------------------------------------------------------------------
| Id  | Operation                     | Name  | Rows  | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | INSERT STATEMENT              |       |     1 |     2   (0)| 00:00:01 |
|   1 |  LOAD TABLE CONVENTIONAL      | T_RAW |       |            |          |
|*  2 |   CONNECT BY WITHOUT FILTERING|       |       |            |          |
|   3 |    FAST DUAL                  |       |     1 |     2   (0)| 00:00:01 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(LEVEL<1000000)


Statistics
----------------------------------------------------------
       1442  recursive calls
    2956342  db block gets
      23736  consistent gets
         13  physical reads
  375573628  redo size
        854  bytes sent via SQL*Net to client
        890  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
         34  sorts (memory)
          0  sorts (disk)
     999999  rows processed

SQL>

A benefit of using the sequence with the larger cache size is redo: the sequence run generated roughly one-fifth the redo of the sys_guid() run. Additionally, on Windows, the sys_guid() insert took more than 10 times longer than the insert using the large-cache sequence. Returning to the default cache size for the sequence, a PL/SQL loop is used to return the generated sequence values; since serveroutput is not turned on, the time to display the values isn’t included in the execution time (and, as a result, the values aren’t displayed). The elapsed time to run the block is found at the end of the execution, followed by the redo statistics for the session:
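The comparison can be checked the same way, using the redo sizes and timings copied from the two insert runs above:

```python
# Figures copied from the large-cache sequence run and the sys_guid() run above.
guid_redo = 375_573_628   # redo size, sys_guid() insert
seq_redo  = 79_744_836    # redo size, sequence with cache 10000

print(round(guid_redo / seq_redo, 2))   # redo multiplier for sys_guid()

guid_secs = 54.15   # elapsed, sys_guid() insert
seq_secs  = 5.24    # elapsed, large-cache sequence insert
print(round(guid_secs / seq_secs, 1))   # elapsed-time multiplier
```

The sys_guid() insert generated about 4.7 times the redo and ran about 10.3 times longer than the large-cache sequence insert.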



SQL> connect bing/!@#!@#
Connected.
SQL> create sequence seq1 ;

Sequence created.

SQL>
SQL> set timing on
SQL> declare
  2    x number(38);
  3    function sf return number is
  4    begin
  5  	 return seq1.nextval;
  6    end;
  7  begin
  8    for i in 1..100000 loop
  9  	 x := sf;
 10    end loop;
 11  end;
 12  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:03.23
SQL>
SQL> select m.statistic#, n.name, m.value
  2  from v$mystat m, v$statname n
  3  where m.statistic# = n.statistic#
  4  and n.name like '%redo%'
  5  and m.value > 0;

STATISTIC# NAME                                                                  VALUE
---------- ---------------------------------------------------------------- ----------
       257 redo entries                                                          10014
       258 redo size                                                           3620676
       302 redo subscn max counts                                                    1
       307 redo synch time (usec)                                                  251
       308 redo synch time overhead (usec)                                         553
       309 redo synch time overhead count (  2ms)                                    2
       314 redo synch writes                                                         3
       323 redo write info find                                                      2

8 rows selected.

Elapsed: 00:00:00.00

SQL>

The reported redo size is smaller, but no inserts into a table were performed in this test. A similar test was run using a sequence with a cache value of 10000:


SQL> drop sequence seq1;

Sequence dropped.

SQL> connect bing/!@#!@#
Connected.
SQL> create sequence seq1 cache 10000;

Sequence created.

SQL>
SQL> set timing on
SQL> declare
  2    x number(38);
  3    function sf return number is
  4    begin
  5  	 return seq1.nextval;
  6    end;
  7  begin
  8    for i in 1..100000 loop
  9  	 x := sf;
 10    end loop;
 11  end;
 12  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:02.19
SQL>
SQL> select m.statistic#, n.name, m.value
  2  from v$mystat m, v$statname n
  3  where m.statistic# = n.statistic#
  4  and n.name like '%redo%'
  5  and m.value > 0;

STATISTIC# NAME                                                                  VALUE
---------- ---------------------------------------------------------------- ----------
       257 redo entries                                                             34
       258 redo size                                                             11940
       307 redo synch time (usec)                                                  110
       308 redo synch time overhead (usec)                                      303802
       309 redo synch time overhead count (  2ms)                                    1
       313 redo synch time overhead count (inf)                                      1
       314 redo synch writes                                                         3
       323 redo write info find                                                      2

8 rows selected.

Elapsed: 00:00:00.00
SQL>

As in the prior tests, the redo statistics show a much smaller redo size for the larger cache. On to the sys_guid() test:
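The size of the difference is striking and lines up with how often the cache is exhausted: with 100,000 nextval calls, a cache of 20 forces 5,000 updates to the data dictionary while a cache of 10,000 forces only 10. A quick check of the figures reported above:

```python
# Figures copied from the two PL/SQL loop runs above.
small_cache_redo = 3_620_676   # redo size, default cache of 20
big_cache_redo   = 11_940      # redo size, cache 10000

print(round(small_cache_redo / big_cache_redo))   # redo multiplier, ~300x

# Cache refreshes implied by 100,000 nextval calls; each refresh
# updates the dictionary and generates redo.
calls = 100_000
print(calls // 20)       # refreshes with the default cache
print(calls // 10_000)   # refreshes with cache 10000
```

The roughly 10,000 redo entries in the small-cache run (versus 34 in the large-cache run) are consistent with a couple of redo entries per cache refresh.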


SQL> connect bing/!@#!@#
Connected.
SQL> declare
  2    x raw(16);
  3    function sf return varchar2 is
  4    begin
  5  	 return sys_guid();
  6    end;
  7  begin
  8    for i in 1..100000 loop
  9  	 x := sf;
 10    end loop;
 11  end;
 12  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:04.70
SQL>
SQL> select m.statistic#, n.name, m.value
  2  from v$mystat m, v$statname n
  3  where m.statistic# = n.statistic#
  4  and n.name like '%redo%'
  5  and m.value > 0;

STATISTIC# NAME                                                                  VALUE
---------- ---------------------------------------------------------------- ----------
       257 redo entries                                                              6
       258 redo size                                                              1476
       302 redo subscn max counts                                                    1
       307 redo synch time (usec)                                                    1
       308 redo synch time overhead (usec)                                         575
       309 redo synch time overhead count (  2ms)                                    1
       314 redo synch writes                                                         1
       323 redo write info find                                                      1

8 rows selected.

Elapsed: 00:00:00.00
SQL>

Now, absent the insert, the redo generation is much less. The execution time, however, is at least twice as long as it was for the large-cache sequence, so on a Windows-based Oracle installation using a sequence takes much less time than using sys_guid() calls. Parallel execution might be faster and might reverse the results, so further testing is necessary. These tests begin with the sequence created with the default cache value but starting at 100000000 (to more accurately reflect real production conditions), plus multiple tables created with the CACHE option, which directs Oracle to place the table’s blocks at the most recently used end of the buffer cache to speed access and delay aging:


SQL> -- SEQUENCE MULTI SESSION TEST
SQL> drop table t_seq;

Table dropped.

Elapsed: 00:00:00.02
SQL> drop table tmp1;

Table dropped.

Elapsed: 00:00:00.01
SQL> drop table tmp2;

Table dropped.

Elapsed: 00:00:00.01
SQL> drop table tmp3;

Table dropped.

Elapsed: 00:00:00.01
SQL> drop table tmp4;

Table dropped.

Elapsed: 00:00:00.01
SQL>
SQL> create table tmp1 cache as select 1 dummy from dual connect by level < 1000000;

Table created.

SQL> create table tmp2 cache as select 1 dummy from tmp1;

Table created.

Elapsed: 00:00:00.54
SQL> create table tmp3 cache as select 1 dummy from tmp1;

Table created.

Elapsed: 00:00:00.61
SQL> create table tmp4 cache as select 1 dummy from tmp1;

Table created.

Elapsed: 00:00:00.49
SQL>
SQL> drop sequence seq1 ;

Sequence dropped.

Elapsed: 00:00:00.00
SQL> create sequence seq1 start with 100000000 ;

Sequence created.

Elapsed: 00:00:00.00
SQL>
SQL> create table t_seq
  2  	 ( id	  number(9) primary key
  3  	 , filler varchar2(1000)
  4  	 );

Table created.

Elapsed: 00:00:00.01
SQL>
SQL> alter system switch logfile;

System altered.

Elapsed: 00:00:01.29
SQL> alter system checkpoint;

System altered.

Elapsed: 00:00:00.20
SQL> alter system flush buffer_cache;

System altered.

Elapsed: 00:00:00.13
SQL>
SQL> select /*+ full(tmp1) */ count(*) from tmp1;

  COUNT(*)
----------
    999999

Elapsed: 00:00:00.05
SQL> select /*+ full(tmp2) */ count(*) from tmp2;

  COUNT(*)
----------
    999999

Elapsed: 00:00:00.04
SQL> select /*+ full(tmp3) */ count(*) from tmp3;

  COUNT(*)
----------
    999999

Elapsed: 00:00:00.04
SQL> select /*+ full(tmp4) */ count(*) from tmp4;

  COUNT(*)
----------
    999999

Elapsed: 00:00:00.04
SQL>
SQL> drop table tmp_log;

Table dropped.

Elapsed: 00:00:00.04
SQL> create table tmp_log(mydata varchar2(4000), optime timestamp);

Table created.

Elapsed: 00:00:00.01
SQL>
SQL> create or replace PROCEDURE    sp_log(p varchar2) as
  2    PRAGMA AUTONOMOUS_TRANSACTION;
  3  begin
  4    insert into tmp_log values (p , systimestamp);
  5    commit;
  6  end;
  7  /

Procedure created.

Elapsed: 00:00:00.03
SQL>
SQL> show errors
No errors.
SQL>
SQL> create or replace procedure sp_test_seq(p number) as
  2  begin
  3    sp_log('JOB ' || p || ' BEGIN');
  4
  5    if p = 1 then
  6  	 insert  into t_seq
  7  	   select seq1.nextval ,'sdfsf' from tmp1;
  8    elsif p = 2 then
  9  	 insert into t_seq
 10  	   select seq1.nextval ,'sdfsf' from tmp2;
 11    elsif p = 3 then
 12  	 insert  into t_seq
 13  	   select seq1.nextval ,'sdfsf' from tmp3;
 14    elsif p = 4 then
 15  	 insert into t_seq
 16  	   select seq1.nextval ,'sdfsf' from tmp4;
 17    end if;
 18    commit;
 19
 20    sp_log('JOB ' || p || ' END');
 21  end;
 22  /

Procedure created.

Elapsed: 00:00:00.02
SQL>
SQL> show errors
No errors.
SQL>
SQL> declare
  2    x_time date := sysdate + 1/1440;
  3  begin
  4
  5
  6    dbms_scheduler.create_job(job_name => 'TEST_SEQ1',
  7  				 job_type => 'PLSQL_BLOCK',
  8  				 job_action => 'begin sp_test_seq(1); end;',
  9  				 enabled=> true,
 10  				 start_date=> x_time
 11  			       );
 12    dbms_scheduler.create_job(job_name => 'TEST_SEQ2',
 13  				 job_type => 'PLSQL_BLOCK',
 14  				 job_action => 'begin sp_test_seq(2); end;',
 15  				 enabled=> true,
 16  				 start_date=> x_time
 17  			       );
 18    dbms_scheduler.create_job(job_name => 'TEST_SEQ3',
 19  				 job_type => 'PLSQL_BLOCK',
 20  				 job_action => 'begin sp_test_seq(3); end;',
 21  				 enabled=> true,
 22  				 start_date=> x_time
 23  			       );
 24    dbms_scheduler.create_job(job_name => 'TEST_SEQ4',
 25  				 job_type => 'PLSQL_BLOCK',
 26  				 job_action => 'begin sp_test_seq(4); end;',
 27  				 enabled=> true,
 28  				 start_date=> x_time
 29  			       );
 30  end;
 31  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.02
SQL>
SQL> select job_name, start_date from user_scheduler_jobs where job_name like 'TEST%';

JOB_NAME     START_DATE
------------ ---------------------------------------------------------------------------
TEST_SEQ1    27-NOV-16 01.46.47.000000 PM -07:00
TEST_SEQ2    27-NOV-16 01.46.47.000000 PM -07:00
TEST_SEQ3    27-NOV-16 01.46.47.000000 PM -07:00
TEST_SEQ4    27-NOV-16 01.46.47.000000 PM -07:00

Elapsed: 00:00:00.00
SQL>
SQL> exec dbms_lock.sleep(120)

PL/SQL procedure successfully completed.

Elapsed: 00:02:00.00
SQL>
SQL> select * from
  2  (select mydata, optime, lead(optime) over (order by mydata) optime_end, lead(optime) over (order by mydata) - optime elapsed
  3  from tmp_log)
  4  where mydata like '%BEGIN%'
  5  /

MYDATA          OPTIME                       OPTIME_END                   ELAPSED
--------------- ---------------------------- ---------------------------- ----------------------------
JOB 1 BEGIN     27-NOV-16 01.46.50.233000 PM 27-NOV-16 01.47.28.113000 PM +000000000 00:00:37.880000
JOB 2 BEGIN     27-NOV-16 01.46.50.234000 PM 27-NOV-16 01.47.27.904000 PM +000000000 00:00:37.670000
JOB 3 BEGIN     27-NOV-16 01.46.50.235000 PM 27-NOV-16 01.47.28.169000 PM +000000000 00:00:37.934000
JOB 4 BEGIN     27-NOV-16 01.46.50.244000 PM 27-NOV-16 01.47.28.121000 PM +000000000 00:00:37.877000

Elapsed: 00:00:00.00
SQL>

Parallel execution takes longer on Windows, possibly because of the underlying architecture, but each of the four concurrent inserts consumed about 38 seconds. To see whether concurrent processes using the sys_guid() call might end up faster, the test was set up again, this time using the sys_guid() call:
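The per-job elapsed times can be recomputed directly from the tmp_log timestamps shown above, mirroring what the lead(optime) analytic query does in SQL:

```python
# Timestamps copied from the tmp_log query output above (BEGIN/END per job).
from datetime import datetime

FMT = "%d-%b-%y %I.%M.%S.%f %p"

jobs = {
    "JOB 1": ("27-NOV-16 01.46.50.233000 PM", "27-NOV-16 01.47.28.113000 PM"),
    "JOB 2": ("27-NOV-16 01.46.50.234000 PM", "27-NOV-16 01.47.27.904000 PM"),
    "JOB 3": ("27-NOV-16 01.46.50.235000 PM", "27-NOV-16 01.47.28.169000 PM"),
    "JOB 4": ("27-NOV-16 01.46.50.244000 PM", "27-NOV-16 01.47.28.121000 PM"),
}

for job, (begin, end) in jobs.items():
    # Difference between the END and BEGIN log entries for each job.
    elapsed = datetime.strptime(end, FMT) - datetime.strptime(begin, FMT)
    print(job, round(elapsed.total_seconds(), 3))
```

Each job lands between 37.6 and 38 seconds, matching the ELAPSED column reported by the analytic query.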


SQL> -- SYS_GUID MULTI SESSION TEST
SQL> drop table t_raw;

Table dropped.

Elapsed: 00:00:00.01
SQL> drop table tmp1;

Table dropped.

Elapsed: 00:00:00.01
SQL> drop table tmp2;

Table dropped.

Elapsed: 00:00:00.01
SQL> drop table tmp3;

Table dropped.

Elapsed: 00:00:00.01
SQL> drop table tmp4;

Table dropped.

Elapsed: 00:00:00.01
SQL>
SQL> create table tmp1 cache as select 1 dummy from dual connect by level < 1000000;

Table created.

SQL> create table tmp2 cache as select 1 dummy from tmp1;

Table created.

Elapsed: 00:00:00.62
SQL> create table tmp3 cache as select 1 dummy from tmp1;

Table created.

Elapsed: 00:00:00.57
SQL> create table tmp4 cache as select 1 dummy from tmp1;

Table created.

Elapsed: 00:00:00.48
SQL>
SQL> create table t_raw
  2  	 ( id	  raw(16) primary key
  3  	 , filler varchar2(1000)
  4  	 );

Table created.

Elapsed: 00:00:00.01
SQL>
SQL> alter system switch logfile;

System altered.

Elapsed: 00:00:03.02
SQL> alter system checkpoint;

System altered.

Elapsed: 00:00:00.17
SQL> alter system flush buffer_cache;

System altered.

Elapsed: 00:00:00.34
SQL>
SQL> select /*+ full(tmp1) */ count(*) from tmp1; -- to make sure table is in buffer_cache
  2  select /*+ full(tmp2) */ count(*) from tmp2; -- to make sure table is in buffer_cache
  3  select /*+ full(tmp3) */ count(*) from tmp3; -- to make sure table is in buffer_cache
  4  select /*+ full(tmp4) */ count(*) from tmp4; -- to make sure table is in buffer_cache
  5
SQL> drop table tmp_log;

Table dropped.

Elapsed: 00:00:00.03
SQL> create table tmp_log(mydata varchar2(4000), optime timestamp);

Table created.

Elapsed: 00:00:00.01
SQL>
SQL> create or replace PROCEDURE    sp_log(p varchar2) as
  2    PRAGMA AUTONOMOUS_TRANSACTION;
  3  begin
  4    insert into tmp_log values (p , systimestamp);
  5    commit;
  6  end;
  7  /

Procedure created.

Elapsed: 00:00:00.03
SQL>
SQL> show errors
No errors.
SQL>
SQL> create or replace procedure sp_test_guid(p number) as
  2  begin
  3    sp_log('JOB ' || p || ' BEGIN');
  4
  5    if p = 1 then
  6  	 insert  into t_raw
  7  	   select sys_guid() ,'sdfsf' from tmp1;
  8    elsif p = 2 then
  9  	 insert  into t_raw
 10  	   select sys_guid() ,'sdfsf' from tmp2;
 11    elsif p = 3 then
 12  	 insert  into t_raw
 13  	   select sys_guid() ,'sdfsf' from tmp3;
 14    elsif p = 4 then
 15  	 insert into t_raw
 16  	   select sys_guid() ,'sdfsf' from tmp4;
 17    end if;
 18    commit;
 19
 20    sp_log('JOB ' || p || ' END');
 21  end;
 22  /

Procedure created.

Elapsed: 00:00:00.02
SQL>
SQL> show errors
No errors.
SQL>
SQL> declare
  2    x_time date := sysdate + 1/1440;
  3  begin
  4
  5    dbms_scheduler.create_job(job_name => 'TEST_GUID1',
  6  				 job_type => 'PLSQL_BLOCK',
  7  				 job_action => 'begin sp_test_guid(1); end;',
  8  				 enabled=> true,
  9  				 start_date=> x_time
 10  			       );
 11    dbms_scheduler.create_job(job_name => 'TEST_GUID2',
 12  				 job_type => 'PLSQL_BLOCK',
 13  				 job_action => 'begin sp_test_guid(2); end;',
 14  				 enabled=> true,
 15  				 start_date=> x_time
 16  			       );
 17    dbms_scheduler.create_job(job_name => 'TEST_GUID3',
 18  				 job_type => 'PLSQL_BLOCK',
 19  				 job_action => 'begin sp_test_guid(3); end;',
 20  				 enabled=> true,
 21  				 start_date=> x_time
 22  			       );
 23    dbms_scheduler.create_job(job_name => 'TEST_GUID4',
 24  				 job_type => 'PLSQL_BLOCK',
 25  				 job_action => 'begin sp_test_guid(4); end;',
 26  				 enabled=> true,
 27  				 start_date=> x_time
 28  			       );
 29  end;
 30  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.04
SQL>
SQL> select job_name, start_date from user_scheduler_jobs where job_name like 'TEST%';

JOB_NAME     START_DATE
------------ ---------------------------------------------------------------------------
TEST_GUID1   27-NOV-16 01.48.53.000000 PM -07:00
TEST_GUID2   27-NOV-16 01.48.53.000000 PM -07:00
TEST_GUID3   27-NOV-16 01.48.53.000000 PM -07:00
TEST_GUID4   27-NOV-16 01.48.53.000000 PM -07:00

Elapsed: 00:00:00.00
SQL>
SQL> exec dbms_lock.sleep(180);

PL/SQL procedure successfully completed.

Elapsed: 00:03:00.00
SQL>
SQL> select * from
  2  (select mydata, optime, lead(optime) over (order by mydata) optime_end, lead(optime) over (order by mydata) - optime elapsed
  3  from tmp_log)
  4  where mydata like '%BEGIN%'
  5  /

MYDATA          OPTIME                       OPTIME_END                   ELAPSED
--------------- ---------------------------- ---------------------------- ----------------------------
JOB 1 BEGIN     27-NOV-16 01.48.54.228000 PM 27-NOV-16 01.50.49.312000 PM +000000000 00:01:55.084000
JOB 2 BEGIN     27-NOV-16 01.48.54.236000 PM 27-NOV-16 01.50.46.200000 PM +000000000 00:01:51.964000
JOB 3 BEGIN     27-NOV-16 01.48.54.245000 PM 27-NOV-16 01.50.47.742000 PM +000000000 00:01:53.497000
JOB 4 BEGIN     27-NOV-16 01.48.54.267000 PM 27-NOV-16 01.50.48.966000 PM +000000000 00:01:54.699000

Elapsed: 00:00:00.00
SQL>
SQL> set timing off echo off linesize 80 trimspool off

Table dropped.


Table dropped.


Sequence dropped.

Generating sys_guid() values is not faster when run in parallel in a Windows environment; each process ran almost two minutes before completing, roughly three times longer than the parallel sequence executions and twice as long as the serial sys_guid() run.
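Pulling the per-job parallel sys_guid() figure (roughly 115 seconds, from the 1:52 to 1:55 range reported above) together with the earlier timings gives the comparisons stated here:

```python
# Approximate elapsed times (seconds) collected in the tests above;
# parallel figures are per concurrent job.
par_guid = 115.0    # ~1:55 per parallel sys_guid() job
par_seq  = 37.9     # ~38 seconds per parallel sequence job
ser_guid = 54.15    # serial sys_guid() insert

print(round(par_guid / par_seq, 1))    # vs the parallel sequence jobs
print(round(par_guid / ser_guid, 1))   # vs the serial sys_guid() run
```

The parallel sys_guid() jobs ran about three times longer than the parallel sequence jobs and about twice as long as the serial sys_guid() insert.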

The redo size using the sys_guid() call and insert statements was consistent regardless of the operating system (Windows or Linux) and was larger than that generated using a sequence and insert statements; absent any DML, the sys_guid() call generated far less redo. Sequence cache size can affect redo generation, since a larger cache generates less redo, and the redo volume in turn affects execution time when using a sequence. On Windows, sys_guid() took longer without fail. This is one area where testing on the operating system you are using is a must, since Oracle on Linux can, and does, perform much differently than Oracle on Windows. Before you decide to change your primary key strategy to sys_guid(), test to see how it performs; you may be surprised at the results.

It would seem that 42 isn’t the only important number.

November 21, 2016

Taking Things For Granted

Filed under: General — dfitzjarrell @ 09:54

"Sometimes I find the best way of getting from one place to another is simply to erase everything and begin again."
-- Norton Juster, The Phantom Tollbooth

In one of the Oracle forums a question was asked regarding revoking selected privileges from the DBA role. Unfortunately for the person posting the question the answer is a resounding “No”: granting a role grants all privileges assigned to that role, and there is no “picking and choosing” as if you were in a cafeteria. Roles are designed (or should be, at least) to grant all of the privileges a user needs to access the objects that role will use, and the roles supplied by Oracle are designed for the jobs they are named after, such as DBA. Changing such a role affects functionality and can seriously impact everyone granted that role. Let’s look at that in a bit more detail.

Oracle provides pre-configured roles with every installation of the database, and the list can vary based on the options you choose to install. A partial list of these roles from 12.1.0.2 is shown below:


ROLE                           O
------------------------------ -
CONNECT                        Y
RESOURCE                       Y
DBA                            Y
AUDIT_ADMIN                    Y
AUDIT_VIEWER                   Y
SELECT_CATALOG_ROLE            Y
EXECUTE_CATALOG_ROLE           Y
DELETE_CATALOG_ROLE            Y
CAPTURE_ADMIN                  Y
EXP_FULL_DATABASE              Y
IMP_FULL_DATABASE              Y
CDB_DBA                        Y
PDB_DBA                        Y
RECOVERY_CATALOG_OWNER         Y
LOGSTDBY_ADMINISTRATOR         Y
DBFS_ROLE                      Y
GSMUSER_ROLE                   Y
AQ_ADMINISTRATOR_ROLE          Y
AQ_USER_ROLE                   Y
DATAPUMP_EXP_FULL_DATABASE     Y
DATAPUMP_IMP_FULL_DATABASE     Y
ADM_PARALLEL_EXECUTE_TASK      Y
PROVISIONER                    Y
XS_RESOURCE                    Y
XS_SESSION_ADMIN               Y
XS_NAMESPACE_ADMIN             Y
XS_CACHE_ADMIN                 Y
GATHER_SYSTEM_STATISTICS       Y
OPTIMIZER_PROCESSING_RATE      Y
GSMADMIN_ROLE                  Y
RECOVERY_CATALOG_USER          Y
EM_EXPRESS_BASIC               Y
EM_EXPRESS_ALL                 Y
SCHEDULER_ADMIN                Y
HS_ADMIN_SELECT_ROLE           Y
HS_ADMIN_EXECUTE_ROLE          Y
HS_ADMIN_ROLE                  Y
GLOBAL_AQ_USER_ROLE            Y
OEM_ADVISOR                    Y
OEM_MONITOR                    Y
XDBADMIN                       Y
XDB_SET_INVOKER                Y
AUTHENTICATEDUSER              Y
XDB_WEBSERVICES                Y
XDB_WEBSERVICES_WITH_PUBLIC    Y
XDB_WEBSERVICES_OVER_HTTP      Y
GSM_POOLADMIN_ROLE             Y
GDS_CATALOG_SELECT             Y
WM_ADMIN_ROLE                  Y
JAVAUSERPRIV                   Y
JAVAIDPRIV                     Y
JAVASYSPRIV                    Y
JAVADEBUGPRIV                  Y
EJBCLIENT                      Y
JMXSERVER                      Y
JAVA_ADMIN                     Y
JAVA_DEPLOY                    Y
CTXAPP                         Y
ORDADMIN                       Y
OLAP_XS_ADMIN                  Y
OLAP_DBA                       Y
OLAP_USER                      Y
SPATIAL_WFS_ADMIN              Y
WFS_USR_ROLE                   Y
SPATIAL_CSW_ADMIN              Y
CSW_USR_ROLE                   Y
LBAC_DBA                       Y
APEX_ADMINISTRATOR_ROLE        Y
APEX_GRANTS_FOR_NEW_USERS_ROLE Y
DV_SECANALYST                  Y
DV_MONITOR                     Y
DV_ADMIN                       Y
DV_OWNER                       Y
DV_ACCTMGR                     Y
DV_PUBLIC                      Y
DV_PATCH_ADMIN                 Y
DV_STREAMS_ADMIN               Y
DV_GOLDENGATE_ADMIN            Y
DV_XSTREAM_ADMIN               Y
DV_GOLDENGATE_REDO_ACCESS      Y
DV_AUDIT_CLEANUP               Y
DV_DATAPUMP_NETWORK_LINK       Y
DV_REALM_RESOURCE              Y
DV_REALM_OWNER                 Y

The ‘O’ header is for the ORACLE_MAINTAINED column, which indicates the role is supplied by Oracle. [This column is new to the DBA_ROLES view in 12.1; earlier releases do not include it in the view definition.] That list contains 84 different roles, all created when your database was created. What privileges do these roles have? That question is answered by the ROLE_SYS_PRIVS and ROLE_TAB_PRIVS views; let’s look at the DBA role and see what Oracle deems necessary system privileges for an effective DBA:


PRIVILEGE
----------------------------------------
CREATE SESSION
ALTER SESSION
DROP TABLESPACE
BECOME USER
DROP ROLLBACK SEGMENT
SELECT ANY TABLE
INSERT ANY TABLE
UPDATE ANY TABLE
DROP ANY INDEX
SELECT ANY SEQUENCE
CREATE ROLE
EXECUTE ANY PROCEDURE
ALTER PROFILE
CREATE ANY DIRECTORY
CREATE ANY LIBRARY
EXECUTE ANY LIBRARY
ALTER ANY INDEXTYPE
DROP ANY INDEXTYPE
DEQUEUE ANY QUEUE
EXECUTE ANY EVALUATION CONTEXT
EXPORT FULL DATABASE
CREATE RULE
ALTER ANY SQL PROFILE
ADMINISTER ANY SQL TUNING SET
CHANGE NOTIFICATION
DROP ANY EDITION
DROP ANY MINING MODEL
ALTER ANY MINING MODEL
ALTER ANY CUBE DIMENSION
CREATE CUBE
DROP ANY CUBE BUILD PROCESS
USE ANY SQL TRANSLATION PROFILE
CREATE PLUGGABLE DATABASE
ALTER ROLLBACK SEGMENT
DELETE ANY TABLE
ALTER DATABASE
FORCE ANY TRANSACTION
ALTER ANY PROCEDURE
DROP ANY TRIGGER
DROP ANY MATERIALIZED VIEW
UNDER ANY TYPE
ALTER ANY LIBRARY
CREATE DIMENSION
DEBUG ANY PROCEDURE
CREATE RULE SET
ALTER ANY RULE SET
ANALYZE ANY DICTIONARY
ALTER ANY EDITION
CREATE ANY ASSEMBLY
ALTER ANY CUBE
SELECT ANY CUBE
DROP ANY MEASURE FOLDER
RESTRICTED SESSION
CREATE TABLESPACE
ALTER TABLESPACE
CREATE USER
ALTER USER
LOCK ANY TABLE
CREATE VIEW
DROP ANY VIEW
GRANT ANY ROLE
CREATE TRIGGER
CREATE TYPE
EXECUTE ANY OPERATOR
CREATE ANY DIMENSION
ALTER ANY DIMENSION
CREATE ANY OUTLINE
ADMINISTER DATABASE TRIGGER
RESUMABLE
FLASHBACK ANY TABLE
CREATE ANY RULE SET
EXECUTE ANY RULE SET
IMPORT FULL DATABASE
EXECUTE ANY RULE
EXECUTE ANY PROGRAM
CREATE ANY EDITION
CREATE ASSEMBLY
ALTER ANY ASSEMBLY
CREATE CUBE DIMENSION
CREATE ANY CUBE BUILD PROCESS
UPDATE ANY CUBE DIMENSION
EM EXPRESS CONNECT
SET CONTAINER
ALTER ANY MEASURE FOLDER
CREATE ANY TABLE
CREATE ANY INDEX
CREATE ANY SEQUENCE
ALTER ANY ROLE
ANALYZE ANY
DROP ANY LIBRARY
CREATE ANY OPERATOR
CREATE INDEXTYPE
UNDER ANY TABLE
DROP ANY DIMENSION
SELECT ANY DICTIONARY
GRANT ANY OBJECT PRIVILEGE
CREATE EVALUATION CONTEXT
CREATE ANY EVALUATION CONTEXT
DROP ANY EVALUATION CONTEXT
CREATE ANY RULE
CREATE JOB
CREATE ANY JOB
CREATE MINING MODEL
INSERT ANY CUBE DIMENSION
DROP ANY CUBE
UPDATE ANY CUBE BUILD PROCESS
EXEMPT DML REDACTION POLICY
READ ANY TABLE
ALTER SYSTEM
AUDIT SYSTEM
CREATE ROLLBACK SEGMENT
DROP ANY TABLE
COMMENT ANY TABLE
REDEFINE ANY TABLE
CREATE CLUSTER
ALTER ANY INDEX
DROP PUBLIC DATABASE LINK
CREATE PROFILE
ALTER ANY MATERIALIZED VIEW
ALTER ANY TYPE
DROP ANY TYPE
UNDER ANY VIEW
EXECUTE ANY INDEXTYPE
DROP ANY CONTEXT
ALTER ANY OUTLINE
ADMINISTER RESOURCE MANAGER
MANAGE SCHEDULER
MANAGE FILE GROUP
CREATE ANY MINING MODEL
SELECT ANY MINING MODEL
CREATE ANY MEASURE FOLDER
DELETE ANY MEASURE FOLDER
CREATE ANY SQL TRANSLATION PROFILE
CREATE ANY CREDENTIAL
EXEMPT DDL REDACTION POLICY
SELECT ANY MEASURE FOLDER
SELECT ANY CUBE BUILD PROCESS
ALTER ANY CUBE BUILD PROCESS
CREATE TABLE
BACKUP ANY TABLE
CREATE ANY CLUSTER
DROP ANY SYNONYM
DROP PUBLIC SYNONYM
CREATE ANY VIEW
CREATE SEQUENCE
ALTER ANY SEQUENCE
FORCE TRANSACTION
CREATE PROCEDURE
CREATE ANY PROCEDURE
ALTER RESOURCE COST
DROP ANY DIRECTORY
CREATE ANY TYPE
ALTER ANY OPERATOR
CREATE ANY INDEXTYPE
ENQUEUE ANY QUEUE
ON COMMIT REFRESH
DEBUG CONNECT SESSION
DROP ANY RULE SET
EXECUTE ANY CLASS
MANAGE ANY FILE GROUP
EXECUTE ANY ASSEMBLY
EXECUTE ASSEMBLY
COMMENT ANY MINING MODEL
CREATE ANY CUBE DIMENSION
DELETE ANY CUBE DIMENSION
SELECT ANY CUBE DIMENSION
DROP ANY SQL TRANSLATION PROFILE
CREATE CREDENTIAL
ALTER ANY TABLE
DROP ANY CLUSTER
CREATE SYNONYM
CREATE PUBLIC SYNONYM
DROP ANY SEQUENCE
DROP ANY ROLE
AUDIT ANY
DROP ANY PROCEDURE
CREATE ANY TRIGGER
ALTER ANY TRIGGER
DROP PROFILE
GRANT ANY PRIVILEGE
CREATE LIBRARY
CREATE OPERATOR
DROP ANY OUTLINE
MERGE ANY VIEW
ADMINISTER SQL TUNING SET
UPDATE ANY CUBE
INSERT ANY MEASURE FOLDER
ADMINISTER SQL MANAGEMENT OBJECT
CREATE SQL TRANSLATION PROFILE
LOGMINING
MANAGE TABLESPACE
DROP USER
ALTER ANY CLUSTER
CREATE ANY SYNONYM
CREATE DATABASE LINK
CREATE PUBLIC DATABASE LINK
CREATE MATERIALIZED VIEW
CREATE ANY MATERIALIZED VIEW
EXECUTE ANY TYPE
DROP ANY OPERATOR
QUERY REWRITE
GLOBAL QUERY REWRITE
MANAGE ANY QUEUE
CREATE ANY CONTEXT
ALTER ANY EVALUATION CONTEXT
ALTER ANY RULE
DROP ANY RULE
ADVISOR
SELECT ANY TRANSACTION
DROP ANY SQL PROFILE
CREATE ANY SQL PROFILE
READ ANY FILE GROUP
CREATE EXTERNAL JOB
DROP ANY ASSEMBLY
DROP ANY CUBE DIMENSION
CREATE ANY CUBE
CREATE MEASURE FOLDER
CREATE CUBE BUILD PROCESS
ALTER ANY SQL TRANSLATION PROFILE
FLASHBACK ARCHIVE ADMINISTER

The DBA role carries 220 system privileges, an impressive list indeed. Of course, once a user is granted the DBA role he or she gets ALL of those system privileges, and since the role is the only direct grant conferring them, the list cannot be modified by selectively revoking one or more of those privileges:


SQL> grant DBA to blorpo identified by gussyflorp;

Grant succeeded.

SQL> revoke select any transaction from blorpo;
revoke select any transaction from blorpo
*
ERROR at line 1:
ORA-01952: system privileges not granted to 'BLORPO'


SQL>

Yes, the user DOES have that privilege, albeit indirectly. It’s indirect because it’s the ROLE that was granted that privilege, among others, and no attempt was made to revoke the role from the user. It’s a ‘package deal’; you grant a role to a user and it’s all or nothing, and even though it behaves like the user has the privileges granted directly that’s not the case.

You could, of course, collect all of the privileges the DBA role has (both system and table) and create a script to grant each one individually to the desired user. It would be a LONG script, and such grants require attention from the granting DBA to ensure they remain current and are not being abused. Individual privileges could then be revoked, but that creates a maintenance nightmare for the DBA, who has to keep track of which user has which set of privileges. Another option presents itself: creating a new role with only the privileges the DBA wants to assign. The privilege list for DBA could be trimmed to create, say, a DB_OPERATOR or DBO role; the privileges included would depend on the job description. Creating such a role would make granting access easier and maintenance simpler, since when the role’s grants change, everyone granted that role has their privileges adjusted at the next login.

Roles make granting privileges very easy and straightforward, provided the role is properly created and maintained. Roles also make it impossible to “pick and choose” privileges a user should have. It’s an “all or nothing” proposition and there’s no way around that when using a pre-defined role.

Sometimes you just need to begin again.
