Oracle Tips and Tricks — David Fitzjarrell

November 20, 2017

“It’s The Latest Version!”

Filed under: General — dfitzjarrell @ 12:01

"Expect everything, I always say, and the unexpected never happens."
-- Norton Juster, The Phantom Tollbooth

It’s inevitable that shops will eventually need to upgrade to Oracle 12.2.0.1, since 11.2.0.x is soon to be out of support and Oracle 18 (which would have been 12.2.0.2, but Oracle has decided to name new releases with the last two digits of the year of release) is looming on the horizon. There are good reasons for such an upgrade: the availability of expanded JSON functionality, APEX improvements and online table management such as moves and partition splits, to name three. Careful planning is crucial to a successful upgrade, and that planning includes object management tasks that may not seem important but can send an upgrade ‘sideways’. Let’s look at those possibly overlooked items.

Depending upon which release the upgrade starts from, the path may be longer than expected, traveling through more than one Oracle release before finally ending up at 12.2.0.1. Upgrades from versions lower than 11.2.0.3 will necessitate getting to release 11.2.0.3 or 11.2.0.4 before proceeding further; for this discussion it is presumed that the database is already at one of those two releases.

The obvious place to begin is with the pfile or spfile; deprecated parameters should be removed or commented out to prevent them from being set. The following list shows the parameters deprecated by Oracle for releases up to and including 12.2.0.1; a query to report any of them that are explicitly set follows the list:


active_instance_count
background_dump_dest
buffer_pool_keep
buffer_pool_recycle
commit_write
cursor_space_for_time
fast_start_io_target
global_context_pool_size
instance_groups
lock_name_space
log_archive_start
max_enabled_roles
parallel_automatic_tuning
parallel_io_cap_enabled
parallel_server
parallel_server_instances
plsql_debug
plsql_v2_compatibility
rdbms_server_dn
remote_os_authent
resource_manager_cpu_allocation
sec_case_sensitive_logon
serial_reuse
sql_trace
standby_archive_dest
user_dump_dest
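
Before editing the pfile or spfile it helps to know which deprecated parameters are actually set in the current instance. The following is a minimal sketch using the ISDEPRECATED and ISDEFAULT columns of V$PARAMETER; it lists any deprecated parameter that has been given a non-default value:

set linesize 120
column name format a40
column value format a60

select name, value
from v$parameter
where isdeprecated = 'TRUE'
and isdefault = 'FALSE'
order by name;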

One parameter in that list deserves special attention: sec_case_sensitive_logon is unsupported in what Oracle calls ‘Exclusive Mode’, a reference to how the SQLNET.ALLOWED_LOGON_VERSION_SERVER parameter is set (if it’s set at all) in the sqlnet.ora file. By default it’s set to 12, so only clients able to use the newer password versions can connect to the database. It can be set to values as low as 8 (to ensure backward compatibility for older applications), and the setting also affects accounts with a password version older than 11G. For compatibility reasons case-insensitive passwords are required by some older applications, or by applications that access both a 12.x database and an older release. The pre-upgrade tool provided by Oracle will report such issues so they can be addressed prior to the upgrade. The command to run the tool is:


$ORACLE_HOME/jdk/bin/java -jar {Oracle 12.2 home}/rdbms/admin/preupgrade.jar FILE TEXT DIR {directory to write the output to}

The $ORACLE_HOME variable, and the corresponding environment, must be set to the current, pre-12.2 Oracle home for the database to be upgraded. The tool is run from the 12.2 home. That utility will generate the following scripts and directories:


dbms_registry_basic.sql         postupgrade_fixups.sql          preupgrade.log                  upgrade
dbms_registry_extended.sql      preupgrade_driver.sql           preupgrade_messages.properties
oracle                          preupgrade_fixups.sql           preupgrade_package.sql

The preupgrade.log has all of the information necessary to prepare the database for upgrade. The generated .sql scripts are named for what they do, and the log will report which scripts to run to prepare the database. [For those who are curious, setting SQLNET.ALLOWED_LOGON_VERSION_SERVER to 8 ‘turns off’ Exclusive Mode so accounts with a password version of 10G will still be accessible; not setting SQLNET.ALLOWED_LOGON_VERSION_SERVER when accounts have a 10G version password (such as those created by a legacy application) will render them inaccessible.] There may be post-upgrade actions to take (such as upgrading the database time zone data) which will be reported once the postupgrade_fixups.sql script has run.
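
To see which accounts would be affected by Exclusive Mode, the PASSWORD_VERSIONS column of DBA_USERS can be checked before the upgrade; the sketch below lists accounts that do not have an 11G or 12C password verifier and would therefore be locked out if SQLNET.ALLOWED_LOGON_VERSION_SERVER is left at its default:

column username format a30
column password_versions format a20

select username, password_versions
from dba_users
where password_versions not like '%11G%'
and password_versions not like '%12C%'
order by username;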

Invalid objects can interfere with an upgrade in such a way as to cause ORA-00600 errors to be generated. To be honest, it is a special case that causes this: views, materialized views, packages, functions and procedures that use database links that no longer exist can send an upgrade into a downward spiral. Since such objects are already invalid and cannot be successfully compiled it’s highly likely they are not being used; another case, however, is when a test, dev or UAT database is created from a production database and the database links are dropped for security reasons. It will still be necessary to find these objects (utlrp.sql will report them in the DBA_ERRORS view) so the DDL can be preserved and the objects dropped prior to starting the upgrade. After the upgrade is successfully completed these objects can be recreated, although they will still be invalid.
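
One possible starting point for finding such objects, assuming the offending links have already been dropped, is to compare DBA_DEPENDENCIES against DBA_DB_LINKS; the query below is only a sketch and may need adjusting for domain-qualified link names. DBMS_METADATA.GET_DDL can then be used to preserve the DDL before the objects are dropped:

select d.owner, d.name, d.type, d.referenced_link_name
from dba_dependencies d
where d.referenced_link_name is not null
and d.referenced_link_name not in (select db_link from dba_db_links)
order by d.owner, d.name;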

The dba_recyclebin MUST be purged prior to starting an upgrade; since the upgrade process accesses the data dictionary of the database being upgraded, any recyclebin objects can stop an upgrade or cause the database to throw unexpected errors after the upgrade has apparently completed successfully.
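
Checking and purging the recyclebin is a short task; a sketch, run as a suitably privileged user, is shown below:

select count(*) from dba_recyclebin;

purge dba_recyclebin;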

Unused columns can also be an issue, as the data dictionary keeps track of them and sets their position in the table to 0; the upgrade process doesn’t check for a position value of 0, it simply checks the dictionary for columns that may need to be upgraded because of features that have been selected. Unlike invisible columns, which CAN be used after they are set to INVISIBLE, dropped columns have no useful names and cannot be recovered. If developers have set columns to UNUSED then they are really, truly gone and need to be dropped. A query of DBA_UNUSED_COL_TABS will provide a list of the owners, tables and number of columns marked UNUSED. It’s a simple task to use that view to create a script to rid those tables of their unused baggage:



set heading off feedback off verify off trimspool on

-- generate a script of ALTER TABLE ... DROP UNUSED COLUMNS statements
spool drop_unused.sql
select 'alter table '||owner||'.'||table_name||' drop unused columns;'
from dba_unused_col_tabs;
spool off

-- run the generated script and capture the output
spool drop_unused_cols.log
set echo on feedback on
@drop_unused.sql
set echo off
spool off

Running the generated script before starting the upgrade removes the unused columns so the upgrade won’t fail because of them.


[Compressed tables with unused columns won’t cause an upgrade to fail so there is no need to uncompress, drop the unused columns, then compress the table.]

Back up the database BEFORE starting an upgrade; should the upgrade fail, the database can be restored into its original home so that the business isn’t without it during the ‘investigate the failure’ phase. Do not ignore this step; it has saved many a DBA and enterprise from losing a database to a failed upgrade.
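
How the backup is taken is up to the shop’s standards; as one hedged sketch, an RMAN full backup that includes the archived redo and the controlfile could look like this:

$ rman target /
RMAN> backup database plus archivelog;
RMAN> backup current controlfile;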

It is no longer possible to run the catupgrd.sql script directly from the SQL> prompt; the upgrade is now driven by the catctl.pl Perl utility, which calls catupgrd.sql as shown below:


$ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catctl.pl $ORACLE_HOME/rdbms/admin/catupgrd.sql
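
catctl.pl accepts options to control the run; as a hedged sketch, -n sets the number of parallel processes and -l names the directory that will receive the log files (the path shown is hypothetical):

$ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catctl.pl -n 4 -l /home/oracle/upg_logs $ORACLE_HOME/rdbms/admin/catupgrd.sql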

catctl.pl runs the upgrade in a mixture of serial and parallel modes; each phase, and the mode used, is reported on the terminal screen during the upgrade. Sample output is shown below:


------------------------------------------------------
Phases [0-115]         Start Time:[2017_09_21 13:59:45]
------------------------------------------------------
***********   Executing Change Scripts   ***********
Serial   Phase #:0    [{sid}] Files:1    Time: 146s
***************   Catalog Core SQL   ***************
Serial   Phase #:1    [{sid}] Files:5    Time: 54s
Restart  Phase #:2    [{sid}] Files:1    Time: 0s
***********   Catalog Tables and Views   ***********
Parallel Phase #:3    [{sid}] Files:19   Time: 18s
Restart  Phase #:4    [{sid}] Files:1    Time: 0s
*************   Catalog Final Scripts   ************
Serial   Phase #:5    [{sid}] Files:6    Time: 22s
*****************   Catproc Start   ****************
Serial   Phase #:6    [{sid}] Files:1    Time: 22s
*****************   Catproc Types   ****************
Serial   Phase #:7    [{sid}] Files:2    Time: 21s
Restart  Phase #:8    [{sid}] Files:1    Time: 1s
****************   Catproc Tables   ****************
Parallel Phase #:9    [{sid}] Files:69   Time: 35s
Restart  Phase #:10   [{sid}] Files:1    Time: 1s
...

When the upgrade completes the total time is reported and the names and locations of the log file and summary report are displayed. These files need to be checked before moving on to any post-upgrade activities, as any errors encountered or generated during the upgrade will be reported in these files; no such output is displayed on the terminal during the upgrade process. If the steps recommended by the preupgrade tool have been followed it’s very unlikely that the upgrade will fail; obviously there is the odd chance that something in the database configuration specific to an application can cause problems, but the ‘odds’ are in the DBA’s favor if the preupgrade instructions are followed.
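
A quick, hedged way to scan those logs is a simple grep against the catupgrd log files in the log directory (the names assume the default catupgrdN.log naming):

$ grep -i "ora-" catupgrd*.log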

Upgrading to version 12.2.0.1 is different from previous upgrades but it doesn’t have to end in tragedy. Using the preupgrade tool (as Oracle recommends), following the instructions provided by that tool and addressing the issues listed here will go a long way toward making a 12.2.0.1 upgrade a success.

Which is, obviously, what you were expecting.


November 3, 2017

Go Big …

Filed under: General — dfitzjarrell @ 12:15

"You see, to tall men I'm a midget, and to short men I'm a giant;
to the skinny ones I'm a fat man, and to the fat ones I'm a thin man."
-- Norton Juster, The Phantom Tollbooth

For years DBAs (and users) have been searching for the elusive ‘go_faster’ initialization parameter in Oracle, only to be disappointed at every turn as no such setting exists, at least not with that name. Oracle has, though, moved in that direction with the 12.x releases by providing a way for speedier access to ‘big tables’. So what IS a ‘big table’? That depends upon how the “_small_table_threshold” parameter is configured in your database. Let’s look at how Oracle has decided to make ‘big table’ access faster.

Oracle offers, in releases 12 and later, a new ‘twist’ that may improve the performance of accessing ‘big’ tables, called, appropriately enough, Automatic Big Table Caching. This allows Oracle to automatically cache such tables into an area of the SGA reserved specifically for this purpose. The big table cache won’t ‘work’ for direct path reads; in the example that follows those have been ‘turned off’ for the session to ensure that the table will be loaded into the configured cache. Let’s look at how it’s configured and what it can do for you when it’s active.

Unlike the In-Memory database option, Automatic Big Table Caching is a no-additional-cost feature; no additional licensing is required to use it. It does require a change to memory settings which may increase the shared pool/SGA/memory settings, and could require that additional memory be installed (if the database is heavily used and memory can’t be reallocated to the big table cache from the existing SGA/memory settings). That being said, configuring Automatic Big Table Caching is fairly simple as only one parameter, db_big_table_cache_percent_target, needs to be set, as shown in the sketch below. Depending upon the percentage configured this could create a situation where the current SGA or memory settings need to be increased; that will depend upon the size of the big table or tables Oracle will be caching. [If the table is too large for the cache then it could be ‘cached’ to disk and the usage query will reflect that.] Let’s look at a possible configuration and see how it performs.
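
A minimal sketch of the configuration follows; 80 is the value used in this example, and the parameter can be changed with ALTER SYSTEM:

alter system set db_big_table_cache_percent_target = 80 scope=both;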

In this example the cache percent is set to 80 (a rather large number, true, but this is for a small, personal database and the table will be quite large in comparison to the available SGA). Let’s create and populate a table to run an example with:


BING @ quanghoo > 
BING @ quanghoo > create table bigly_bigly_table(
  2  smorgun	     number,
  3  fleester	     varchar2(800),
  4  yupqast	     date);

Table created.

BING @ quanghoo > 
BING @ quanghoo > begin
  2  	     for i in 1..1000000 loop
  3  		     insert into bigly_bigly_table
  4  		     values(i, 'Altoona'||lpad(i, 773,'0'), sysdate+mod(i,991));
  5  	     end loop;
  6  
  7  	     commit;
  8  end;
  9  /

PL/SQL procedure successfully completed.

BING @ quanghoo > 
BING @ quanghoo > exec dbms_stats.gather_table_stats(user, 'BIGLY_BIGLY_TABLE', cascade=>true)

PL/SQL procedure successfully completed.

BING @ quanghoo > 

Let’s check on what the database considers a ‘small’ table:


BING @ quanghoo > 
BING @ quanghoo > connect sys/########### as sysdba
Connected.
SYS @ quanghoo > 
SYS @ quanghoo > SELECT a.ksppinm  "Parameter",  c.ksppstvl "Value"
  2  	   FROM   x$ksppi a, x$ksppcv b, x$ksppsv c, v$parameter p
  3  	   WHERE  a.indx = b.indx AND a.indx = c.indx
  4  	    AND   p.name(+) = a.ksppinm
  5  	    AND   UPPER(a.ksppinm) IN ( UPPER('_db_block_buffers'),
  6  					UPPER('_small_table_threshold'));

Parameter                                             Value
----------------------------------------------------- ---------------------------------
_db_block_buffers                                     208502         
_small_table_threshold                                4170    
                                                                                                                                                      
SYS @ quanghoo > 

The value is in blocks, so any table larger than 4170 blocks in this particular database will be considered a big table. Let’s now check on the number of blocks our example table occupies:


SYS @ quanghoo > connect bing/##########
Connected.
BING @ quanghoo > set echo on linesize 150 pagesize 100
BING @ quanghoo > column table_name format a35
BING @ quanghoo > alter session set parallel_degree_policy=auto;

Session altered.

BING @ quanghoo > alter session set "_serial_direct_read"=never;

Session altered.

BING @ quanghoo > select blocks from dba_tables where table_name = 'BIGLY_BIGLY_TABLE';

    BLOCKS                                                                                                                                            
----------                                                                                                                                            
    112097                                                                                                                                            

BING @ quanghoo > 

We verify the percent target that was set earlier:


BING @ quanghoo > show parameter DB_BIG_TABLE_CACHE_PERCENT_TARGET

NAME                                 TYPE        VALUE                                                                                                
------------------------------------ ----------- ------------------------------                                                                       
db_big_table_cache_percent_target    string      80                                                                                                   
BING @ quanghoo > 

Now we check to see what the cache shows as being used; since this example has been run more than once the object count is greater than 1:


BING @ quanghoo > select BT_CACHE_TARGET, OBJECT_COUNT, MEMORY_BUF_ALLOC, MIN_CACHED_TEMP from v$bt_scan_cache;

BT_CACHE_TARGET OBJECT_COUNT MEMORY_BUF_ALLOC MIN_CACHED_TEMP                                                                                         
--------------- ------------ ---------------- ---------------                                                                                         
             80            2           170037            1000                                                                                         

BING @ quanghoo > 

Let’s run a count against the table (there are no indexes on the table) and exercise that cache a bit:


BING @ quanghoo > select count(*) from BIGLY_BIGLY_TABLE;

  COUNT(*)                                                                                                                                            
----------                                                                                                                                            
   1000000                                                                                                                                            

BING @ quanghoo > 
BING @ quanghoo > select BT_CACHE_TARGET, OBJECT_COUNT, MEMORY_BUF_ALLOC, MIN_CACHED_TEMP from v$bt_scan_cache;

BT_CACHE_TARGET OBJECT_COUNT MEMORY_BUF_ALLOC MIN_CACHED_TEMP                                                                                         
--------------- ------------ ---------------- ---------------                                                                                         
             80            2           170037            1000                                                                                         

BING @ quanghoo > 

That the object count did not increase indicates that the object is cached and that Oracle is using it to provide the results we requested. Repeated queries against the table (using full table scans and NOT using direct reads) will provide the same results. Checking the query statistics we see nothing but consistent gets, which is what is to be expected when using the big table cache for cached objects:


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
     111360  consistent gets
          1  physical reads
          0  redo size
        542  bytes sent via SQL*Net to client
        552  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

Running the same query again produces the same results (since the statistics didn’t change those numbers will not be repeated):


BING @ quanghoo > 
BING @ quanghoo > select count(*) from BIGLY_BIGLY_TABLE;

  COUNT(*)                                                                                                                                            
----------                                                                                                                                            
   1000000                                                                                                                                            

BING @ quanghoo > 
BING @ quanghoo > select BT_CACHE_TARGET, OBJECT_COUNT, MEMORY_BUF_ALLOC, MIN_CACHED_TEMP from v$bt_scan_cache;

BT_CACHE_TARGET OBJECT_COUNT MEMORY_BUF_ALLOC MIN_CACHED_TEMP                                                                                         
--------------- ------------ ---------------- ---------------                                                                                         
             80            2           170037            1000                                                                                         

BING @ quanghoo > 

Should another ‘big’ table be accessed (by conventional reads) it would also go into the cache, provided there is enough space in the cache. Again, this is automatically done by Oracle once the big table cache has been configured.
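
To see which objects are in the cache, and how ‘warm’ they are, the v$bt_scan_obj_temps view can be queried; the sketch below assumes the 12.2 column list for that view:

select dataobj#, size_in_blks, temperature, policy, cached_in_mem
from v$bt_scan_obj_temps
order by temperature desc;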

Automatic Big Table Caching can be a performance improvement when large tables are the norm rather than the exception. That it’s also a feature that doesn’t require additional licensing makes it all the more desirable to configure and use.

Of course, it’s all in how you see things.
