Oracle Tips and Tricks — David Fitzjarrell

September 21, 2022

It’s Personal

Filed under: General — dfitzjarrell @ 14:25

Data is everywhere, and some of it is personally identifiable information (PII) or protected health information (PHI), both of which are protected by law. Such data puts the burden of ensuring data privacy on the DBA, among others. Oracle provides several ways to encrypt data in the database, at the column level and at the tablespace level. How much data is to be encrypted, and how easily it needs to be accessed, determines the method of choice. Let's look at encryption and how to get the task done in Oracle version 19c and later releases to prevent 'prying eyes' from unauthorized access.

The most targeted approach to encryption is column-level encryption, where specific columns in specific tables are encrypted, leaving the remainder of the table data unaffected. This isolates the restricted data from general view by requiring the column values of interest to be processed by encryption and decryption functions; such functions turn plain-text values into what may appear to be gibberish, and gibberish back into plain-text values. Only data in the selected columns is encrypted, preventing the casual observer from seeing anything that should be protected by law. Note that column-level encryption, using Oracle's Transparent Data Encryption (TDE), does not prevent encrypted columns from appearing in a general select list as long as the encryption wallet is open (more on that later). Column-level encryption is available for columns of the following data types:

BINARY_DOUBLE
BINARY_FLOAT
CHAR
DATE
INTERVAL DAY TO SECOND
INTERVAL YEAR TO MONTH
NCHAR
NUMBER
NVARCHAR2
RAW (legacy or extended)
TIMESTAMP (includes TIMESTAMP WITH TIME ZONE and TIMESTAMP WITH LOCAL TIME ZONE)
VARCHAR2 (legacy or extended)

There are also size limitations, by data type:

Data Type                         Maximum Size
================================= ===============
CHAR                              1932 bytes
VARCHAR2 (legacy)                 3932 bytes
VARCHAR2 (extended)               32,699 bytes
NVARCHAR2 (legacy)                1966 bytes
NVARCHAR2 (extended)              16,315 bytes
NCHAR                             966 bytes
RAW (extended)                    32,699 bytes

Entire tablespaces can also be encrypted, affecting all data stored in an encrypted tablespace; with this option there are no restrictions on data type or data length.

Column data longer than the declared maximum size will fail encryption. Additional restrictions also apply, listed in the online Oracle documentation, so careful investigation should be performed prior to encrypting individual columns.
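
Existing columns can be encrypted in place as well; the following is a minimal sketch, assuming the hypothetical employee table used later in this post and an open wallet. Note that a column participating in an index must be encrypted with NO SALT:

alter table employee modify (empssn encrypt using 'AES192');
-- empid is assumed to be indexed, hence NO SALT
alter table employee modify (empid encrypt no salt);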

Set the COMPATIBLE parameter in the database to allow encryption, which usually means setting it to the current database base version (11.2.0.0, 12.1.0.0, 12.2.0.0, 19.0.0.0, etc.). This can be done in the pfile, or at the SQL> prompt if using an spfile by specifying scope=spfile. Next, set the wallet_root parameter (a new twist on the older method used in versions 12 and earlier) to the base directory of the wallet; in this case it will be /ow/<ORACLE_SID>/wallet, where <ORACLE_SID> is the actual ORACLE_SID value. Shut down and restart the database to effect the changes.
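
As a minimal sketch of those two settings, assuming an spfile-based instance and a hypothetical ORACLE_SID of test1:

-- both parameters are static, so scope=spfile and a restart are required
alter system set compatible='19.0.0.0' scope=spfile;
alter system set wallet_root='/ow/test1/wallet' scope=spfile;
shutdown immediate
startup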

Now the tde_configuration parameter needs to be set (another parameter not available in versions 12 and earlier); the following command completes the parameter setup:

alter system set tde_configuration="KEYSTORE_CONFIGURATION=FILE" scope=both;

The database is now ready for the external configuration tasks. These are the same for all versions of Oracle supporting encryption; just remember that in versions newer than 12 the wallet directory is the wallet_root value with /tde appended.

Since encryption requires an encryption wallet, and that wallet must be open, the wallet itself needs to be created. A wallet location must be specified in the $ORACLE_HOME/network/admin/sqlnet.ora file; an example configuration is provided below. (In 19c the WALLET_ROOT parameter takes precedence over this sqlnet.ora setting, but the entry still serves older releases.)

ENCRYPTION_WALLET_LOCATION =
(SOURCE =(METHOD = FILE)(METHOD_DATA =
(DIRECTORY = /ow/${ORACLE_SID}/wallet/tde)))

The path can be any valid path available on the database server. This example uses a location created specifically for wallet storage; notice that 'tde' has been appended to the setting for wallet_root — Oracle appends this automatically when creating the directory on the server. Each running database on the server will have its own wallet, identified by the $ORACLE_SID setting. Adding the configuration data to the sqlnet.ora does not require the database to be restarted; its existence in the configuration file allows Oracle to find the location and the wallet located therein. After the sqlnet.ora is edited and saved the following instructions can be used to create the encryption wallet:

administer key management create keystore '/ow/${ORACLE_SID}/wallet/tde' identified by mypassword;
administer key management set keystore open identified by mypassword;
administer key management set key identified by mypassword with backup;
administer key management create auto_login keystore from keystore '/ow/${ORACLE_SID}/wallet/tde' identified by mypassword;

Obviously 'mypassword' will be replaced with the actual password to be used for encryption; store this password in a password safe or some other utility offering protection, such as AWS Secrets Manager. Note the last instruction — this creates an autologin wallet that will open once the database is started, eliminating the need to use a separate set of instructions to open the wallet for use. Validate the wallet and its status by querying the V$ENCRYPTION_WALLET view:

SQL> SELECT * FROM v$encryption_wallet;

WRL_TYPE WRL_PARAMETER        STATUS WALLET_TYPE WALLET_OR FULLY_BAC     CON_ID
-------- -------------------- ------ ----------- --------- --------- ----------
FILE     /ow/test1/wallet/tde OPEN   AUTOLOGIN   SINGLE    NO                 0

With a wallet created and open it is now possible to encrypt data at the column level or at the tablespace level. As expected, tablespace-level encryption is the easiest to implement as it requires only that an encrypted tablespace be created. That task is accomplished with the command below, making changes specific to your environment and installation:

CREATE TABLESPACE encrypt_ts
DATAFILE '/my/datafiles/encrypt_df.dbf' SIZE 1M
ENCRYPTION USING 'AES256'
DEFAULT STORAGE (ENCRYPT);

The tablespace name, the path and the file name are all modifiable values which should follow the settings/locations/naming conventions in force for the affected database installation. The encryption parameters are supplied only at tablespace creation; additional data files can be created for this tablespace using the standard 'alter tablespace ... add datafile ...' syntax, as shown in the sketch below. All files added to an encrypted tablespace are encrypted automatically. As mentioned previously, all data stored in an encrypted tablespace is encrypted without exception, and since the tablespace itself is encrypted no functions need be used to access, read or write data. One advantage of encrypted tablespaces is that if the wallet is not available or open the files cannot be accessed for read or write operations. This protects data at the operating system level: a hacker cannot simply copy the datafiles, place them on a different server, and access the data. The autologin wallet adds another level of security as no passwords need be entered to open the encryption wallet, and copying the wallet files to a new server won't allow the wallet to open automatically, preserving the security of the data.
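
As a sketch of that add-datafile case (the path and size are illustrative, carried over from the example above):

alter tablespace encrypt_ts
add datafile '/my/datafiles/encrypt_df2.dbf' size 1M;

No ENCRYPTION clause is needed on the ALTER; the new file inherits the tablespace setting.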

Opting for column-level encryption allows the DBA to selectively encrypt data; however, if the wallet is not open the entire table will be unavailable. Creating a table with an encrypted column requires only that the ENCRYPT keyword be included in the column definition:

CREATE TABLE employee (
first_name VARCHAR2(128),
last_name VARCHAR2(128),
empID NUMBER,
empSSN varchar2(12) ENCRYPT,
salary NUMBER(6) ENCRYPT);

More than one column can be encrypted in a given table; in the example above both the SSN and the salary are encrypted. As long as the wallet is open, data in those columns will appear unencrypted. Should the wallet be closed, or the required wallet files be missing, an ORA-28365 'wallet is not open' error will be thrown. Since the tablespace itself is not encrypted, any other tables will return data; only tables with encrypted columns will be unavailable while the wallet is closed. As a result, if a hacker copies the file to another server and tries to get the data by adding the file to a database, the attempt will be unsuccessful without the wallet, and running the UNIX/Linux strings utility against the datafile will not return usable data from the encrypted columns.
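
A quick sketch of the wallet-closed behavior, assuming the password keystore created earlier (an auto-login keystore closes without a password):

administer key management set keystore close identified by mypassword;
select empssn from employee;
-- fails with ORA-28365: wallet is not open
administer key management set keystore open identified by mypassword;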

Encryption, either at the column or tablespace level, can be an effective tool to protect sensitive PII and PHI. Oracle’s wallet-based implementation can protect data without requiring additional functions to encrypt and decrypt data. Autologin wallets can increase that security by eliminating the need to manually open the wallet after database startup. Other security measures, such as fine-grained auditing (FGA), can be implemented to create additional audit trails identifying who queried the defined sensitive data and when that breach occurred. But FGA can go only so far, making encryption a good first step in securing both PII and PHI.

Some things just bear repeating.

August 5, 2022

Privileged Communications

Filed under: General — dfitzjarrell @ 14:24

Within the past decade (or longer) more and more people have gained access to relational databases, for work, fun, banking, you name it. This, of course, can make the DBA's life more complicated, sorting out the necessary privileges such users need and ensuring that a path of least privilege is followed. Let's look at what privileges are available in an Oracle database and why caution is a good thing.

As most know there are two roles pre-defined in every Oracle database:

DBA
SYSDBA

Even though both are extremely privileged they do differ considerably. DBA is the 'standard' role granted to, you guessed it, DBAs in the database. This level of access grants considerable power to the associated user, such as administering tablespaces, creating and modifying users, creating tables and indexes in any schema in the database, dropping tables in any schema in the database, and running database health checks via ASH and AWR reports and other performance-related activities. It does NOT give users granted this role the ability to start up or shut down a database.

SYSDBA is not really a role; it's more of an access level. It can be granted to users, but except for extremely rare cases (for example, the end of the world as we know it) it should not even be considered. By default SYS is the only account afforded this privileged access, and it should remain that way. SYSDBA is, essentially, 'root' access to Oracle — connecting as SYSDBA opens the floodgates of privilege so that databases can be created or dropped, stopped, started, and altered in various ways. This is a powerful level of access, one that should not be dispensed lightly.

This now leaves a broad range of privileges that can be granted, a la carte, to users and roles to provide various levels of access, from a read-only user suitable for reporting to users with read and write access to one or more schemas (other than the user's own). Additionally, privileges to create packages, functions, procedures, sequences and triggers can be granted, usually to developer-level accounts. These lists of privileges can be modified as needed to add or remove specific grants. Such grants can be made to the user account itself or to a role assigned to that account. There are advantages and disadvantages to both. Granting to a role ensures that all developers get exactly the same privileges; the downside is that some actions may not be possible through privileges granted through a role (privileges received through roles are not visible inside definer's-rights PL/SQL, for example). Privileges granted directly to the user account won't have this limitation, but they do complicate the landscape: the DBA must manage a per-user list of grants and, should a privilege need to be removed, visit each user separately to ensure no user retains privileges they shouldn't.
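
A brief sketch of the role-based approach (the role and user names are, of course, made up):

create role app_dev;
grant create session, create table, create sequence, create procedure,
      create trigger, create view to app_dev;
grant app_dev to jsmith;
-- a later revoke touches the role once, not every user
revoke create view from app_dev;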

The full list of grantable system privileges is:

ADMINISTER ANY SQL TUNING SET, ADMINISTER DATABASE TRIGGER, ADMINISTER RESOURCE MANAGER, ADMINISTER SQL MANAGEMENT OBJECT, ADMINISTER SQL TUNING SET, ADVISOR,
ALTER ANY ASSEMBLY, ALTER ANY CLUSTER, ALTER ANY CUBE, ALTER ANY CUBE DIMENSION, ALTER ANY DIMENSION, ALTER ANY EDITION, ALTER ANY EVALUATION CONTEXT,
ALTER ANY INDEX, ALTER ANY INDEXTYPE, ALTER ANY LIBRARY, ALTER ANY MATERIALIZED VIEW, ALTER ANY MINING MODEL, ALTER ANY OPERATOR, ALTER ANY OUTLINE, ALTER ANY PROCEDURE,
ALTER ANY ROLE, ALTER ANY RULE, ALTER ANY RULE SET, ALTER ANY SEQUENCE, ALTER ANY SQL PROFILE, ALTER ANY TABLE, ALTER ANY TRIGGER, ALTER ANY TYPE, ALTER DATABASE,
ALTER PROFILE, ALTER RESOURCE COST, ALTER ROLLBACK SEGMENT, ALTER SESSION, ALTER SYSTEM, ALTER TABLESPACE, ALTER USER, ANALYZE ANY, ANALYZE ANY DICTIONARY, AUDIT ANY,
AUDIT SYSTEM, BACKUP ANY TABLE, BECOME USER, CHANGE NOTIFICATION, COMMENT ANY MINING MODEL, COMMENT ANY TABLE, CREATE ANY ASSEMBLY, CREATE ANY CLUSTER, CREATE ANY CONTEXT,
CREATE ANY CUBE, CREATE ANY CUBE BUILD PROCESS, CREATE ANY CUBE DIMENSION, CREATE ANY DIMENSION, CREATE ANY DIRECTORY, CREATE ANY EDITION, CREATE ANY EVALUATION CONTEXT,
CREATE ANY INDEX, CREATE ANY INDEXTYPE, CREATE ANY JOB, CREATE ANY LIBRARY, CREATE ANY MATERIALIZED VIEW, CREATE ANY MEASURE FOLDER, CREATE ANY MINING MODEL, CREATE ANY OPERATOR,
CREATE ANY OUTLINE, CREATE ANY PROCEDURE, CREATE ANY RULE, CREATE ANY RULE SET, CREATE ANY SEQUENCE, CREATE ANY SQL PROFILE, CREATE ANY SYNONYM, CREATE ANY TABLE,
CREATE ANY TRIGGER, CREATE ANY TYPE, CREATE ANY VIEW, CREATE ASSEMBLY, CREATE CLUSTER, CREATE CUBE, CREATE CUBE BUILD PROCESS, CREATE CUBE DIMENSION, CREATE DATABASE LINK,
CREATE DIMENSION, CREATE EVALUATION CONTEXT, CREATE EXTERNAL JOB, CREATE INDEXTYPE, CREATE JOB, CREATE LIBRARY, CREATE MATERIALIZED VIEW, CREATE MEASURE FOLDER, CREATE MINING MODEL,
CREATE OPERATOR, CREATE PROCEDURE, CREATE PROFILE, CREATE PUBLIC DATABASE LINK, CREATE PUBLIC SYNONYM, CREATE ROLE, CREATE ROLLBACK SEGMENT, CREATE RULE, CREATE RULE SET,
CREATE SEQUENCE, CREATE SESSION, CREATE SYNONYM, CREATE TABLE, CREATE TABLESPACE, CREATE TRIGGER, CREATE TYPE, CREATE USER, CREATE VIEW, DEBUG ANY PROCEDURE, DEBUG CONNECT SESSION,
DELETE ANY CUBE DIMENSION, DELETE ANY MEASURE FOLDER, DELETE ANY TABLE, DEQUEUE ANY QUEUE, DROP ANY ASSEMBLY, DROP ANY CLUSTER, DROP ANY CONTEXT, DROP ANY CUBE, DROP ANY CUBE BUILD PROCESS,
DROP ANY CUBE DIMENSION, DROP ANY DIMENSION, DROP ANY DIRECTORY, DROP ANY EDITION, DROP ANY EVALUATION CONTEXT, DROP ANY INDEX, DROP ANY INDEXTYPE, DROP ANY LIBRARY,
DROP ANY MATERIALIZED VIEW, DROP ANY MEASURE FOLDER, DROP ANY MINING MODEL, DROP ANY OPERATOR, DROP ANY OUTLINE, DROP ANY PROCEDURE, DROP ANY ROLE, DROP ANY RULE, DROP ANY RULE SET,
DROP ANY SEQUENCE, DROP ANY SQL PROFILE, DROP ANY SYNONYM, DROP ANY TABLE, DROP ANY TRIGGER, DROP ANY TYPE, DROP ANY VIEW, DROP PROFILE, DROP PUBLIC DATABASE LINK, DROP PUBLIC SYNONYM,
DROP ROLLBACK SEGMENT, DROP TABLESPACE, DROP USER, ENQUEUE ANY QUEUE, EXECUTE ANY ASSEMBLY, EXECUTE ANY CLASS, EXECUTE ANY EVALUATION CONTEXT, EXECUTE ANY INDEXTYPE, EXECUTE ANY LIBRARY,
EXECUTE ANY OPERATOR, EXECUTE ANY PROCEDURE, EXECUTE ANY PROGRAM, EXECUTE ANY RULE, EXECUTE ANY RULE SET, EXECUTE ANY TYPE, EXECUTE ASSEMBLY, EXPORT FULL DATABASE, FLASHBACK ANY TABLE,
FLASHBACK ARCHIVE ADMINISTER, FORCE ANY TRANSACTION, FORCE TRANSACTION, GLOBAL QUERY REWRITE, GRANT ANY OBJECT PRIVILEGE, GRANT ANY PRIVILEGE, GRANT ANY ROLE, IMPORT FULL DATABASE,
INSERT ANY CUBE DIMENSION, INSERT ANY MEASURE FOLDER, INSERT ANY TABLE, LOCK ANY TABLE, MANAGE ANY FILE GROUP, MANAGE ANY QUEUE, MANAGE FILE GROUP, MANAGE SCHEDULER, MANAGE TABLESPACE,
MERGE ANY VIEW, ON COMMIT REFRESH, QUERY REWRITE, READ ANY FILE GROUP, RESTRICTED SESSION, RESUMABLE, SELECT ANY CUBE, SELECT ANY CUBE DIMENSION, SELECT ANY DICTIONARY,
SELECT ANY MINING MODEL, SELECT ANY SEQUENCE, SELECT ANY TABLE, SELECT ANY TRANSACTION, UNDER ANY TABLE, UNDER ANY TYPE, UNDER ANY VIEW, UNLIMITED TABLESPACE, UPDATE ANY CUBE,
UPDATE ANY CUBE BUILD PROCESS, UPDATE ANY CUBE DIMENSION, UPDATE ANY TABLE

It's quite a long list, indeed. Some privileges, such as CREATE TABLE, CREATE SEQUENCE, CREATE TRIGGER and similar, are usually safe to grant to development users since these apply to objects created in their own schemas. Other, more "invasive" privileges, such as the "ANY" privileges, should be reserved for administrative users as they can be used to create, drop and alter any object in the database, including those owned by SYS and SYSTEM. A general rule to follow: avoid granting such far-reaching privileges unless it's absolutely necessary. What, exactly, defines "absolutely necessary"? One rare possibility is the "need" to create tables in a schema other than the user's own. This, of course, is not the best reason for such access, but other possibilities, such as creating a dedicated, shared development account, don't appear to be much better. In the end it's up to the DBA to decide how to handle such situations, and to be prepared to defend that decision should it be necessary.

Oracle does make using roles to grant privileges easier by providing the AUTHID clause for functions, packages and procedures. This allows users to exercise grants given to them through roles when executing these objects, by directing Oracle to check the invoking user's privileges and enabled roles. Early in the development and maturation of Oracle, using packages, procedures and functions required direct grants on the objects involved for any user account attempting to use them. The AUTHID clause toggles between the default behavior (AUTHID DEFINER) and honoring the privileges of whoever invokes the code, granted directly or through roles (AUTHID CURRENT_USER). This simplifies security in the database, allowing the DBA to assign pre-defined roles for access while still providing access to PL/SQL objects. There may be specific cases where a direct grant is required, but the incidence of this is small, and it reduces the number of direct grants a DBA must manage for any given user.
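
A short sketch of the invoker's-rights option (the procedure and table names are hypothetical):

create or replace procedure show_emp_count
authid current_user
as
    l_cnt number;
begin
    -- name resolution and privilege checks happen as the caller,
    -- so grants received through enabled roles are honored
    select count(*) into l_cnt from hr.employees;
    dbms_output.put_line(l_cnt);
end;
/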

Generally speaking, using roles to encapsulate and distribute privileges is the preferred method. It would behoove the intrepid DBA to catalog existing privilege sets, and this is where he or she should start the journey. Once documented, role names can be decided and privilege sets granted. The time-consuming portion is converting the direct grants to the role-based model, revoking the direct privileges and granting the newly created roles. Of course testing this model may reveal problems only direct grants can solve, but it should be expected that these situations will be few. Once these issues are resolved a script, similar to that posted below, can be used to generate scripts that can be modified to create any user for a given role:

rem -----------------------------------------------------------------------
rem Filename: create_like.sql
rem Purpose: Script to create a new user (with privs) like an existing
rem database user.
rem Date: 26 February 2013
rem Author: David Fitzjarrell
rem -----------------------------------------------------------------------

set pages 0 feed off veri off lines 500

accept oldname prompt "Enter user to model new user to: "
accept newname prompt "Enter new user name: "

column password new_value psw noprint

select password
from user$
where name = upper('&&oldname');

spool create_&&newname..sql
-- Create user...
select 'create user &&newname identified by values ''&&psw'''||
' default tablespace '||default_tablespace||
' temporary tablespace '||temporary_tablespace||' profile '||
profile||';'
from sys.dba_users
where username = upper('&&oldname');

-- Grant Roles...
select 'grant '||granted_role||' to &&newname'||
decode(ADMIN_OPTION, 'YES', ' WITH ADMIN OPTION')||';'
from sys.dba_role_privs
where grantee = upper('&&oldname');

-- Grant System Privs...
select 'grant '||privilege||' to &&newname'||
decode(ADMIN_OPTION, 'YES', ' WITH ADMIN OPTION')||';'
from sys.dba_sys_privs
where grantee = upper('&&oldname');

-- Grant Table Privs...
select 'grant '||privilege||' on '||owner||'.'||table_name||' to &&newname;'
from sys.dba_tab_privs
where grantee = upper('&&oldname');

-- Grant Column Privs...
select 'grant '||privilege||' on '||owner||'.'||table_name||
'('||column_name||') to &&newname;'
from sys.dba_col_privs
where grantee = upper('&&oldname');

-- Set Default Role...
select 'alter user &&newname default role all;' from dual;
spool off

This will populate the username and the password hash, which should be replaced with parameters in the resulting script. (The values keyword should also be removed so an actual password can be provided.) Do this for one user in each privilege group, rename the script for the role assigned and, voila, there is now an easy and efficient way to replicate permission levels for any user needing them. When done properly the script can be executed and it will prompt for the required values, allowing the DBA to supply the username and password. It will, of course, be necessary to document these values, at least temporarily, so that the affected user can be provided login information. As a final step the DBA should require the user to change passwords after the initial login. It is left to the DBA to decide whether or not to modify the code above to expire the password, forcing a password change.
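
Should the expire option be chosen, one possibility is a single additional generated line, placed before the spool off:

select 'alter user &&newname password expire;' from dual;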

Given the wide range of users a database may have, the role-based method of assigning privileges makes granting access to a user account easier and more reliable. Being consistent in granting access rights ensures all users with a given level of access have exactly the same set of grants. And grouping privileges by position makes the DBA's job easier, especially if the role/script is named for the position to which the grants apply. Get the request, match the script, create the user. The only variable is making certain the login credentials work as expected. What could be easier?

Having someone ELSE do it, honestly.

July 28, 2022

It’s Magic?

Filed under: General — dfitzjarrell @ 16:45

Space may not be the final frontier, but it is one DBAs need to navigate. Oracle does make the task somewhat easier with locally managed tablespaces, and one aspect of that management is the Space Management Coordinator process (SMCO). This process, appearing in version 11.2 and all versions afterwards, does much to help manage space in tablespaces as it performs several tasks DBAs should be aware of. Let's look at that process and what it covers.

The SMCO process coordinates the following space management tasks:

  • Performs proactive space allocation and space reclamation
  • Dynamically spawns the secondary (Wnnn) processes that space reclamation requires
  • Performs tablespace-level space (extent) pre-allocation.
    Pre-allocation refers to datafile extension, which occurs when a space request (extent allocation) is triggered by inserts or loads into a segment and contiguous space is not currently available in the tablespace. A file
    extension request is initiated and the file is then incremented by the NEXT allocation defined for the datafile.

This, of course, requires that datafiles are set to autoextend for the tablespace of interest. SMCO expands the tablespace based on the history of extension activity. Such extensions are split evenly across all datafiles in the tablespace which have not reached their maxsize setting. These extensions are limited to 10% of the full tablespace size per hourly SMCO activity. As a reminder the full tablespace size is the total space across all datafiles in the tablespace at the time the action is taken.
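
Checking which files SMCO can extend is a quick query; a sketch against a hypothetical USERS tablespace:

select file_name, autoextensible, increment_by, maxbytes
from dba_data_files
where tablespace_name = 'USERS';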

In addition to the above mentioned operations the SMCO process is also responsible for the following:

  • Updating block and extent counts in the SEG$ data dictionary table for locally managed tablespaces after adding an extent.
  • Securefile lob segment pre-extension.
  • Securefile lob segment in-memory dispenser space pre-allocation.
  • Securefile lob segment space reclamation (moving free chunks from uncommitted free space area to committed free space area).
  • Temporary segment space reclamation.

Although the SMCO process starts at database startup, the actual work is done via a related module. KTSJ is the space coordinator module, managing the Wnnn processes that actually perform the work. Querying V$SESSION on a typical running database we see:

SQL> select program, module from v$session
  2  where module = 'KTSJ'
  3  order by 1;

PROGRAM                                          MODULE
------------------------------------------------ ----------------------------------------------------------------
oracle@ip-1-2-3-4 (SMCO)                         KTSJ
oracle@ip-1-2-3-4 (W000)                         KTSJ
oracle@ip-1-2-3-4 (W001)                         KTSJ
oracle@ip-1-2-3-4 (W002)                         KTSJ
oracle@ip-1-2-3-4 (W003)                         KTSJ
oracle@ip-1-2-3-4 (W004)                         KTSJ
oracle@ip-1-2-3-4 (W005)                         KTSJ
oracle@ip-1-2-3-4 (W006)                         KTSJ
oracle@ip-1-2-3-4 (W007)                         KTSJ
oracle@ip-1-2-3-4 (W008)                         KTSJ
oracle@ip-1-2-3-4 (W009)                         KTSJ
oracle@ip-1-2-3-4 (W00A)                         KTSJ
oracle@ip-1-2-3-4 (W00B)                         KTSJ
oracle@ip-1-2-3-4 (W00C)                         KTSJ
oracle@ip-1-2-3-4 (W00D)                         KTSJ
oracle@ip-1-2-3-4 (W00E)                         KTSJ
oracle@ip-1-2-3-4 (W00F)                         KTSJ

17 rows selected.

Sixteen Wnnn processes were spawned by SMCO via KTSJ in this database, and it is KTSJ that is reported in AWR and ASH reports. This can appear in the following sections of an AWR report:

Latch Miss Sources DB/Inst: CH33/CH33 Snaps: 2691-2692
-> only latches with sleeps are shown
-> ordered by name, sleeps desc

Latch Name               Where                       Misses     Sleeps   Sleeps
------------------------ -------------------------- ------- ---------- --------
ksuosstats global area   ksugetosstat                     0         14       14
space background task la ktsj_grab_task                   0      2,455    4,723
space background task la ktsj_detach_task                 0      2,270        6
space background task la ktsj_smco_purgeexpired           0          4        0
unknown latch            kghalo                           0          6        3
unknown latch            kghfrunp: alloc: wait            0          1        0
unknown latch            kghupr1                          0          1        3
                          ------------------------------------------------------

<edited>

Enqueue Type (Request Reason)
------------------------------------------------------------------------------
    Requests    Succ Gets Failed Gets       Waits  Wt Time (s) Av Wt Time(ms)
------------ ------------ ----------- ----------- ------------ --------------
CR-Reuse Block Range (block range reuse ckpt)
          40           40           0           4            0            .00
JG-Job Scheduler1
     189,060      189,060           0           0            0            N/A
JG-Job Scheduler1 (queue lock)
     189,060      189,060           0           0            0            N/A
JG-Job Scheduler1 (q mem clnup lck)
      18,004       18,004           0           0            0            N/A
CF-Controlfile Transaction
       4,462        4,462           0           0            0            N/A
SJ-KTSJ Slave Task Cancel (Slave Task Cancel)
       2,518        2,520           0           0            0            N/A

<edited>


The SMCO process polls for work every hour, so the sleeps are expected. It hands off the work to KTSJ, which then coordinates work between the 16 Wnnn slaves so tablespace space can be managed efficiently. [Other databases may show more Wnnn slaves, some fewer, depending upon the configuration; 16 is what the database used for this example configured.] As noted earlier these management tasks include pre-allocation of extents and cleanup of temporary extents. These 'temporary' extents could be due to table/index creation (each table or index extent is considered temporary until the creation task has completed). Such 'temporary' extents reveal themselves when querying the DBA_EXTENTS view while tables or indexes are being created: while the creation is in process the segment name appears as a numeric placeholder rather than a name. Once the operation successfully completes the placeholder is replaced with the actual segment name the object has been given. Should the operation fail, the KTSJ/Wnnn processes perform the necessary cleanup, recovering the allocated extents for the object that no longer exists. This cleanup occurs rapidly, returning the blocks to free space so other sessions/processes can use them if necessary.
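
These in-flight segments can be caught while a large create is running; a sketch:

select owner, segment_name, tablespace_name, extents, blocks
from dba_segments
where segment_type = 'TEMPORARY';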

Notice the "grab_task" and "detach_task" latches; these, as would be expected, allow KTSJ and the Wnnn slaves to take on and complete the storage management operations required for pre-allocation of extents. Because this process is designed to be proactive rather than reactive, as Oracle detects the need for additional space in a given tablespace the KTSJ/Wnnn "team" begins allocating the anticipated additional extents based upon the NEXT setting for the datafile or datafiles in question. Being proactive makes space additions more fluid, reducing wait time by creating these extents before Oracle has need for them. Remember that Oracle must allocate and format new storage extents before they are available for use; the SMCO/KTSJ/Wnnn architecture streamlines that process by allowing Oracle to allocate and format such storage before it's actually needed. It may be that the only indication the DBA sees of autoextend activity is the presence of KTSJ entries in an AWR or ASH report.

It is possible to turn off this process, or change its behavior, by setting an initialization parameter, “_enable_space_preallocation”, to one of the following values:

  • 0 to turn off the tablespace pre-extension feature.
  • 1 to enable tablespace extension.
  • 2 to enable segment growth.
  • 3 to enable both 1 and 2.
  • 4 to enable chunk allocation.

As it is a dynamic parameter it can be set using an alter system call and no database restart is required. To turn the process off:

ALTER SYSTEM SET "_ENABLE_SPACE_PREALLOCATION" = 0;

By setting “_enable_space_preallocation”=3 (the default value) the expected behavior is restored. It goes without saying that available space in autoextend datafiles must still be managed by the DBA, and when that space is running low additional datafiles will need to be added manually. The SMCO mechanism can do many things, but expanding tablespace storage isn’t one of them.
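
Restoring the default is the mirror image of the command above:

ALTER SYSTEM SET "_ENABLE_SPACE_PREALLOCATION" = 3;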

Tablespace space management for autoextend datafiles has been given a helping hand with the SMCO process by introducing more efficiency in the allocation mechanism. Pre-allocation of extents keeps data flowing smoothly during times of heavy insert and update activity. It’s like having a crystal ball for database storage.

Without the spirits.

July 22, 2022

“We Need An Exterminator???”

Filed under: General — dfitzjarrell @ 07:35

An interesting event can occasionally appear in an ASH report, an event that sounds ominous:

buffer exterminate

Thankfully it’s not a cause for concern. Let’s discuss the automatic memory options Oracle provides to see how such an event can occur.

Oracle offers two automatic memory options: automatic memory management (AMM) and automatic shared memory management (ASMM). Both allow Oracle to self-manage memory allocations within the confines of initialization parameter settings. AMM is configured using the memory_* parameters, but is not available on Linux systems configured to use hugepages since the shared memory files AMM relies on (in /dev/shm) cannot use hugepages. ASMM is available regardless of hugepages settings and restricts the memory management activities to the various SGA components. Regardless of which option is in effect, if Oracle is managing its own memory the above event can occur. So it's automatic memory management of some type that is the source of activities generating that event.

What, then, triggers this event? Oracle's automatic management of memory areas can shift available memory resources between the various pools, and when one pool needs an increase another pool, usually a buffer pool, decreases. When the default buffer pool needs to shrink, Oracle starts the deallocation process for it. This requires that all objects in that part of the cache be dereferenced, so that Oracle no longer finds those object pointers and those memory addresses can be assigned to a different area. All this affects is Oracle's ability to perform logical reads on those objects. The next time the affected objects are accessed their blocks will be read back into the resized buffer cache and logical reads are restored. It's a small performance hit while memory areas are resized to meet demand.

As mentioned, it's most likely the default buffer pool that is being reduced. Looking at the various dynamic SGA components (via the V$SGA_DYNAMIC_COMPONENTS view) we see:


COMPONENT                                                        CURRENT_SIZE   MIN_SIZE   MAX_SIZE USER_SPECIFIED_SIZE OPER_COUNT LAST_OPER_TYP LAST_OPER LAST_OPER_TIME                GRANULE_SIZE
---------------------------------------------------------------- ------------ ---------- ---------- ------------------- ---------- ------------- --------- ----------------------------- ------------
shared pool                                                        1912602624 1644167168 1912602624                   0          6 GROW          DEFERRED  21-JUL-2022 10:01:34              33554432
large pool                                                          369098752   67108864 3523215360                   0        245 GROW          IMMEDIATE 21-JUL-2022 10:58:00              33554432
java pool                                                            33554432   33554432   67108864                   0          1 SHRINK        DEFERRED  16-JUL-2022 19:57:14              33554432
streams pool                                                        100663296  100663296  100663296           100663296          0 STATIC                                                    33554432
DEFAULT buffer cache                                               1.0402E+10 7247757312 1.0972E+10                   0        258 SHRINK        IMMEDIATE 21-JUL-2022 10:58:00              33554432
KEEP buffer cache                                                           0          0          0                   0          0 STATIC                                                    33554432
RECYCLE buffer cache                                                        0          0          0                   0          0 STATIC                                                    33554432
DEFAULT 2K buffer cache                                                     0          0          0                   0          0 STATIC                                                    33554432
DEFAULT 4K buffer cache                                                     0          0          0                   0          0 STATIC                                                    33554432
DEFAULT 8K buffer cache                                                     0          0          0                   0          0 STATIC                                                    33554432
DEFAULT 16K buffer cache                                                    0          0          0                   0          0 STATIC                                                    33554432
DEFAULT 32K buffer cache                                                    0          0          0                   0          0 STATIC                                                    33554432
Shared IO Pool                                                              0          0          0                   0          6 GROW          IMMEDIATE 15-JUL-2022 20:07:30              33554432
ASM Buffer Cache                                                            0          0          0                   0          0 STATIC                                                    33554432


The last operation to occur on the shared pool and large pool was a GROW, resulting in a SHRINK operation on the default buffer pool. This SHRINK triggered the buffer exterminate waits as memory was shifted to the shared pool and large pool. An ASH report will show this in multiple areas:

Top User Events           DB/Inst: V33PRAP2/v33prap22  (Jul 21 08:41 to 10:41)

                                                               Avg Active
Event                               Event Class        % Event   Sessions
----------------------------------- --------------- ---------- ----------
direct path read                    User I/O             52.44      13.94
CPU + Wait for CPU                  CPU                  14.44       3.84
db file sequential read             User I/O              6.75       1.80
log file sync                       Commit                5.74       1.53
buffer exterminate                  Other                 4.16       1.11
          -------------------------------------------------------------
...

Top Event P1/P2/P3 Values DB/Inst: V33PRAP2/v33prap22  (Jul 21 08:41 to 10:41)

Event                          % Event  P1 Value, P2 Value, P3 Value % Activity
------------------------------ ------- ----------------------------- ----------
Parameter 1                Parameter 2                Parameter 3
-------------------------- -------------------------- --------------------------
direct path read                 52.44         "15","123891968","16"       0.05
file number                first dba                  block cnt

db file sequential read           6.79                 "1","371","1"       0.01
file#                      block#                     blocks

log file sync                     5.74          "2","3547662223","0"       0.01
buffer#                    sync scn                   NOT DEFINED

buffer exterminate                4.25 "16","200643500","9428339040"       0.27
file#                      block#                     buf_ptr

db file async I/O submit          3.36                  "15","0","0"       1.49
requests                   interrupt                  timeout

          -------------------------------------------------------------
...

Activity Over Time        DB/Inst: V33PRAP2/v33prap22  (Jul 21 08:41 to 10:41)

08:50:00  (10.0 min)    1,506 direct path read                    845    4.41
                              CPU + Wait for CPU                  226    1.18
                              log file sync                       192    1.00
09:00:00  (10.0 min)    1,550 direct path read                    864    4.51
                              CPU + Wait for CPU                  251    1.31
                              db file sequential read             152    0.79
09:10:00  (10.0 min)    1,600 direct path read                    801    4.18
                              CPU + Wait for CPU                  277    1.45
                              db file sequential read             181    0.95
09:20:00  (10.0 min)    1,588 direct path read                    891    4.65
                              CPU + Wait for CPU                  225    1.18
                              enq: KO - fast object checkpoi      161    0.84
09:30:00  (10.0 min)    1,510 direct path read                    901    4.71
                              CPU + Wait for CPU                  233    1.22
                              db file sequential read             103    0.54
09:40:00  (10.0 min)    1,735 direct path read                  1,096    5.73
                              CPU + Wait for CPU                  209    1.09
                              db file sequential read             148    0.77
09:50:00  (10.0 min)    1,523 direct path read                    950    4.96
                              CPU + Wait for CPU                  225    1.18
                              db file sequential read              84    0.44
10:00:00  (10.0 min)    1,380 direct path read                    724    3.78
                              CPU + Wait for CPU                  253    1.32
                              db file sequential read             129    0.67
10:10:00  (10.0 min)    1,332 direct path read                    555    2.90
                              CPU + Wait for CPU                  202    1.06
                              free buffer waits                   184    0.96
10:20:00  (10.0 min)    1,540 direct path read                    682    3.56
                              buffer exterminate                  307    1.60
                              CPU + Wait for CPU                  275    1.44
10:30:00  (10.0 min)    2,395 direct path read                    975    5.09
                              buffer exterminate                  454    2.37
                              free buffer waits                   368    1.92
10:40:00   (1.8 min)      319 direct path read                    138    0.72
                              direct path write temp               64    0.33
                              CPU + Wait for CPU                   61    0.32
          -------------------------------------------------------------

Of all the areas where this is reported, it's the second section shown here that provides detail on which datafile and block number were affected. Given these two pieces of data the DBA can determine which object was affected, although it may take a bit of effort to narrow the search criteria. Since the reported block id may not be listed in the DBA_EXTENTS view (block_id in that view is reported for the initial block of a given extent) the DBA may need to fiddle with the boundary values for the BETWEEN operator to narrow the search. Patience is a virtue, and for the data posted here the final query became:

select segment_name, block_id from dba_extents e, dba_data_files d
where d.file_id = 16
and e.tablespace_name = d.tablespace_name
and e.block_id between 200635000
and 200643500
/
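
An alternative that avoids adjusting the boundaries by hand is to test whether the reported block falls inside each extent's range; a sketch using the same file# and block# from the report:

select e.owner, e.segment_name, e.segment_type
from dba_extents e
where e.file_id = 16
and 200643500 between e.block_id and e.block_id + e.blocks - 1;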

The affected block was associated with an often accessed index in the database, and was replaced in the buffer cache at its next access. Only buffered block reads are affected when this occurs, and the performance ‘hit’ should be negligible.

Seeing a "buffer exterminate" event in an ASH report isn't as dire as it may first appear; it's basically Oracle reorganizing the SGA dynamic memory areas to expand one while reducing another. Oftentimes such events go unreported in the ASH output because they occur outside of the requested reporting window. Don't be alarmed if one does "pop up"; know it's Oracle doing its job of managing memory so you don't have to.

Rest assured, this isn’t a bug.

June 6, 2022

“In Our New Location…”

Filed under: General — dfitzjarrell @ 14:53

Occasionally a DBA will find himself or herself having to import data from another database, a database he or she does not have access to. The only information present resides in the dumpfile or dumpfiles generated by expdp. At the time the dumpfiles are received it is unknown which tablespaces are required, whether they are encrypted, and which roles/users were granted access to the data outside of the owner. Let's look at what information resides in the dumpfiles, how to extract it and how to prepare the destination database for a successful import.

Since an example will be easier to follow, let's set one up. Data, in a multi-file export, was supplied by the Castor Oil Research team for import into a database for the marketing team. Smidji Fidjit, the intrepid DBA, realized she would need to complete a number of steps to get the data from the supplied files into the destination database.

The dump files were copied to the destination database server in the MKT environment, to the /u01/datapump/ directory (defined as dp_dir in the database), so Oracle can find them. [This information is available in the DBA_DIRECTORIES view, which lists all directories created in the database.] The first hurdle to surmount was knowing what the import would attempt to do, and Smidjit knew Oracle provides the sqlfile option to impdp to generate a SQL file of all DDL commands the import will execute:

$ impdp directory=dp_dir dumpfile=CASTOR.%U.dmp logfile=CASTOR.log full=y sqlfile=castor.sql

This created the castor.sql file in the destination directory; opening that file and scanning it for information necessary to complete the import revealed a tablespace that would need to be created:

CASTORTBLSPC_ENC

This is an encrypted tablespace which will use Oracle's Transparent Data Encryption (TDE). Since a wallet did not exist the following steps were completed prior to creating the tablespace:

1) Edit the $ORACLE_HOME/network/admin/sqlnet.ora file to provide a location for the encryption wallet. This may require that the sqlnet.ora file be created. The complete contents of that file are provided below:

SQLNET.ALLOWED_LOGON_VERSION=10

SQLNET.AUTHENTICATION_SERVICES=(BEQ, TCPS)

SSL_VERSION = 0

NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT, HOSTNAME)
DEFAULT_SDU_SIZE=32767

SSL_CLIENT_AUTHENTICATION=FALSE

ENCRYPTION_WALLET_LOCATION =
(SOURCE =
(METHOD = FILE)
(METHOD_DATA =
(DIRECTORY=/u01/app/oracle/admin/wallet/$ORACLE_SID)))

Once this file was created the wallet directory and wallet could be created using the provided commands. From the shell Smidjit executed:

mkdir -p /u01/app/oracle/admin/wallet/$ORACLE_SID

Verifying success on the directory creation Smidjit started SQL*Plus to execute the following commands:

administer key management create keystore '/u01/app/oracle/admin/wallet/${ORACLE_SID}' identified by "ca$t0rpa$$";
administer key management set keystore open identified by "ca$t0rpa$$";
administer key management set key identified by "ca$t0rpa$$" with backup;
administer key management create auto_login keystore from keystore '/u01/app/oracle/admin/wallet/${ORACLE_SID}' identified by "ca$t0rpa$$";

All steps executed without error. To verify the wallet existed and was open Smidjit queried V$ENCRYPTION_WALLET:

SQL> select * From v$encryption_wallet;

WRL_TYPE
--------------------
WRL_PARAMETER
--------------------------------------------------------------------------------
STATUS                         WALLET_TYPE          WALLET_OR KEYSTORE FULLY_BAC
------------------------------ -------------------- --------- -------- ---------
    CON_ID
----------
FILE
/u01/app/oracle/admin/wallet/MKT/
OPEN                           PASSWORD             SINGLE    NONE     NO
         0


SQL>

At this point the wallet reported its type as PASSWORD, as that is how it was created and opened. Restarting the database will change the type to AUTOLOGIN; the key point to notice is that the wallet is OPEN. Now the tablespace can be created:

create tablespace "CASTORTBLSPC_ENC"
datafile '/u01/db/data/MKT/castor_01.dbf' size 1G autoextend on next 100M maxsize unlimited
ENCRYPTION USING 'AES256'
DEFAULT STORAGE (ENCRYPT)
/

Additional datafiles will need to be added; these commands are simple to execute as the files will automatically be encrypted due to the tablespace definition. Four additional data files were needed:

alter tablespace castortblspc_enc
add datafile '/u01/db/data/MKT/castor_02.dbf' size 1G autoextend on next 100M maxsize unlimited;
alter tablespace castortblspc_enc
add datafile '/u01/db/data/MKT/castor_03.dbf' size 1G autoextend on next 100M maxsize unlimited;
alter tablespace castortblspc_enc
add datafile '/u01/db/data/MKT/castor_04.dbf' size 1G autoextend on next 100M maxsize unlimited;
alter tablespace castortblspc_enc
add datafile '/u01/db/data/MKT/castor_05.dbf' size 1G autoextend on next 100M maxsize unlimited;
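
Verifying the result is a one-line check against DBA_TABLESPACES:

select tablespace_name, encrypted
from dba_tablespaces
where tablespace_name = 'CASTORTBLSPC_ENC';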

Now that the tablespace exists and is properly sized the actual import can begin. Smidjit proceeded as follows:

$ impdp directory=dp_dir dumpfile=CASTOR.%U.dmp logfile=CASTOR_IMP.log full=y

To guarantee success SYS as SYSDBA will be used to connect to the database. At the Username: prompt Smidjit entered:

/ as sysdba

The Password: prompt appeared and Smidjit simply pressed Enter again; since she was on the database server OS authentication was in force and the import began. No errors were reported and the impdp utility produced the usual output. Once table names started appearing Smidjit opened another ssh session, started SQL*Plus and ran the following query to report on created and creating segments and on the remaining free space:

set pagesize 60 linesize 132 verify off
column file_id heading "File|Id"
column object format a35
column owner format a35

Select
'free space' owner,
' ' object,
file_id,
block_id,
blocks
from
sys.dba_free_space
where
tablespace_name = upper('&&1')
union
select
substr(owner,1,20),
substr(segment_name,1,32),
file_id,
block_id,
blocks
from
sys.dba_extents
where
tablespace_name = upper('&&1')
order by 3,4

spool &&1._map.lst
/
spool off
undefine 1

Output will consist of object names and extents, with numbers replacing table names while a table is creating. Free space will also be shown, in block address order. Smidjit knew that monitoring the autoextend files would also be helpful; additional monitoring information is available using the following query, which Smidjit executed periodically:

select file_name, bytes, maxbytes,
increment_by*(bytes/blocks) "INCREMENT", maxbytes-bytes remaining, trunc((maxbytes-bytes)/(increment_by*(bytes/blocks))) EXTENSIONS
from dba_data_files
where autoextensible = 'YES'
/

This will display the file_name, currently allocated size in bytes, the auto-extend increment, the total available autoextend space and the number of extensions still available. As extensions are used the available space will decrease and could result in space being available with 0 extensions left, due to the autoextend increment being larger than the remaining space. Smidjit, when required, altered the NEXT value to a smaller size to correct this; careful monitoring using this query allowed Smidjit to maintain available extensions regardless of the remaining bytes. Adjustments are simple:

alter database datafile '[file name]' autoextend on next [new size];

Subsequent executions of the extensions query showed the new increment size and an increased number of extensions, preventing the import job from suspending for space.

In between space and extension monitoring Smidjit chose to let impdp report on its progress. Another ssh session was opened to the database server and impdp was used to monitor the import progress:

$ impdp attach=SYS_IMPORT_FULL_01
Username: / as sysdba
Password:

An accounting of the current job was displayed, including a progress report stating objects completed, the current object being imported and, if the object is a table, the percent completed. This status report can also show if the job has been suspended due to space issues; such suspensions can be addressed by adding a datafile to the affected tablespace. Once the file or files are added the job will resume. [The import job name was found at the start of the import job screen output.]
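
While attached, the standard interactive-mode commands are available at the Import> prompt; STATUS refreshes the progress report and CONTINUE_CLIENT returns to the logging output:

Import> status
Import> continue_client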

Smidjit’s patience paid off handsomely as the Castor Oil Research tables were successfully imported into the marketing team’s database, providing them with all of the information they needed to build a successful marketing campaign.

Dumpfiles from unknown databases can be a challenge to import; patience, and the knowledge provided here, will help make that task less… formidable… and give the DBA a map, of sorts, with milestones to mark completed tasks. Barring space issues such imports should proceed without major difficulty, and by opting to follow Smidjit's pattern of space monitoring such issues should not rear their ugly heads. Forewarned is forearmed, as the saying goes.

And it keeps you from smacking your forehead against your desk.

May 17, 2022

“Have You Seen My Map?”

Filed under: General — dfitzjarrell @ 07:50

In the database realm space management sometimes takes a back seat to other areas, like performance. Given modern storage devices and configurations it's easy to take disk space for granted. Add to that the existence and implementation of locally managed tablespaces and Oracle, for the most part, is pretty maintenance free. There are times, however, when free space exists but Oracle throws a space-related error because the largest piece of free space is still smaller than what Oracle needs to complete the given task. Let's look into that and see where the free space is, how large each contiguous piece is and why more space may be needed.

Introduced in Oracle version 8i, the locally managed tablespace (LMT) provides several benefits:

  • Extent sizes are always a multiple of the database block size, with the smallest sized at 8 blocks.
  • Contiguous small extents of free space can combine to create larger extents.
  • Extent management is accomplished through a bitmap in the file header, reducing work in the data dictionary.
  • All extents of free space are usable, compared with dictionary-managed tablespaces where extent sizes are set manually and could be an ‘odd’ size.

This, of course, can lead the DBA into a sense of complacency with respect to storage as, in most cases, space issues won’t arise until there isn’t enough available in the current data files. And, as has been the case with Oracle, free space can be located anywhere within the data file. Creating tables for temporary use, then dropping them, can change where the free space pieces are located in the data file. It would be nice to have a way to see how, and where, space is used in a tablespace.

Fortunately Oracle provides two views to report on 'occupied' space and all free space in a tablespace — DBA_EXTENTS and DBA_FREE_SPACE. Looking at the definitions for these views, object segments and free space are reported by file id and block number. That being the case, a straightforward query can be written to "weave" together the information from these views into a map, of sorts, showing how the storage is used. One form of this query is shown below:

rem
rem Generates a ‘graphic’ representation of the tablespace
rem configuration regarding free space and object segment locations
rem

set pagesize 60 linesize 230 verify off
column file_id heading "File|Id"
column owner format a35
column object format a35

Select
'*** free space ***' owner,
' ' object,
file_id,
block_id,
blocks
from
sys.dba_free_space
where
tablespace_name = upper('&&1')
union all
select
substr(owner,1,20),
substr(segment_name,1,32),
file_id,
block_id,
blocks
from
sys.dba_extents
where
tablespace_name = upper('&&1')
order by 3,4

spool /home/oracle/scripts/tuning/reports/&&1._map.lst
/
spool off
undefine 1

Because the final output is ordered by file_id and block_id, a UNION ALL is used to eliminate the sort operation a UNION would execute. The query reports results regardless of how many data files are associated with the given tablespace, displaying either the object name or a string indicating free space for each starting block id. The query output resembles that posted below:

OWNER                               OBJECT                                      Id   BLOCK_ID     BLOCKS
----------------------------------- ----------------------------------- ---------- ---------- ----------
HR                                  LOC_COUNTRY_IX                              12        128          8
BING                                DEPARTMENTS                                 12        136          8
BING                                DEPARTMENTS_PK                              12        144          8
BING                                EMPLOYEES                                   12        152          8
BING                                EMPLOYEE                                    12        160          8
BING                                EMPLOYEE_PK                                 12        168          8
BING                                CONTACT_INFO                                12        176          8
BING                                EXPIRY                                      12        184          8
*** free space ***                                                              12        192         40
BING                                EMPLOYEES_PK                                12        232          8
*** free space ***                                                              12        240        112
BING                                TDE_TEST                                    12        352          8
BING                                EMP                                         12        360          8
BING                                DEPT                                        12        368          8
BING                                DUMMY                                       12        376          8
BING                                OOMPQUA                                     12        384          8
BING                                CATEGORIES_TAB                              12        392          8
BING                                SYS_C008061                                 12        400          8
BING                                SYS_C008060                                 12        408          8
BING                                SYS_C008059                                 12        416          8
BING                                SYS_C008058                                 12        424          8
BING                                SUBCATEGORY_REF_LIST_NESTEDTAB              12        432          8
BING                                SYS_FK0000074005N00009$                     12        440          8
HR                                  REGIONS                                     12        448          8
HR                                  REG_ID_PK                                   12        456          8
HR                                  COUNTRY_C_ID_PK                             12        464          8
HR                                  LOCATIONS                                   12        472          8
HR                                  LOC_ID_PK                                   12        480          8
HR                                  DEPARTMENTS                                 12        488          8
HR                                  DEPT_ID_PK                                  12        496          8
HR                                  JOBS                                        12        504          8
HR                                  JOB_ID_PK                                   12        512          8
HR                                  EMPLOYEES                                   12        520          8
HR                                  EMP_EMP_ID_PK                               12        528          8
HR                                  EMP_EMAIL_UK                                12        536          8
HR                                  JOB_HISTORY                                 12        544          8
HR                                  JHIST_EMP_ID_ST_DATE_PK                     12        552          8
HR                                  EMP_DEPARTMENT_IX                           12        560          8
HR                                  EMP_JOB_IX                                  12        568          8
HR                                  EMP_MANAGER_IX                              12        576          8
HR                                  EMP_NAME_IX                                 12        584          8
HR                                  DEPT_LOCATION_IX                            12        592          8
HR                                  JHIST_JOB_IX                                12        600          8
HR                                  JHIST_EMPLOYEE_IX                           12        608          8
HR                                  JHIST_DEPARTMENT_IX                         12        616          8
HR                                  LOC_CITY_IX                                 12        624          8
HR                                  LOC_STATE_PROVINCE_IX                       12        632          8
*** free space ***                                                              12        640        480

48 rows selected.


Extents for a given object do not need to be next to each other, so segments for a single object could span the entire report. Notice that the free space areas above range in length from 40 blocks to 480 blocks; depending upon the overall size of the tablespace and its activity it’s possible to see a long list of free space entries, each only 8 blocks in length. Since the minimum extent in this LMT is 8 blocks, smaller tables that are populated, used and then dropped can ‘fragment’ larger areas of free space in a data file. An LMT won’t report such adjacent pieces coalesced into a single larger extent, and it doesn’t need to; contiguous free extents are combined on demand when a larger extent is requested.

Having the ability to visualize free space allocations allows the DBA to understand, and anticipate, some of the space errors Oracle can throw. The most common of these is the ORA-01654 error:

ORA-01654: unable to extend [object type] [object name] by x in tablespace [tablespace name]

where x is the number of blocks requested. Mapping the reported tablespace will reveal all of the free space areas so the contiguous size of each one can be computed, and it will be discovered that Oracle is, indeed, correct: no single free area is large enough for the requested extent. To correct the error an additional data file will need to be added to the affected tablespace, after which the failed transaction can be repeated with resounding success. Yes, there’s free space in the other data files, but it wasn’t enough in one “chunk” for Oracle to use in that particular case. This doesn’t mean that space won’t be used for another, smaller request, it will, but Oracle won’t ‘divide and conquer’ a space request to use the scattered free space resident in that particular data file.

Each extent is composed of a contiguous span of database blocks, but remember that database blocks are LOGICAL structures composed of physical locations on a disk or stripe, and those disk addresses do NOT need to be contiguous on the physical media. So the mapper query maps LOGICAL database structures, not physical disk locations; Oracle works with the operating system and the device drivers to locate data on a disk or stripe, so we don’t need to know where, on that disk or stripe, the data actually resides. What Oracle is telling us in the ORA-01654 is that no data file in the given tablespace contains enough logically contiguous blocks to create the next extent the object requires. Rather than start writing data and failing mid-way through, Oracle checks ahead for a suitable area in any of the associated data files and, not finding one, reports the error and terminates the transaction.

Semi-regular mapping of heavily used tablespaces, added to regular free space checks across the database, will put the DBA in a position of control, proactively adding space before any errors are thrown. It’s a win/win for both sides.
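Computing the contiguous size of each free area by hand from the map can be tedious. The following query, a sketch that is not part of the original mapper script, groups adjacent DBA_FREE_SPACE entries per data file and sums them, so the largest truly contiguous free area can be compared against the extent size named in the ORA-01654 message:

select file_id, min(block_id) start_block, sum(blocks) contiguous_blocks
from (select file_id, block_id, blocks,
             sum(brk) over (partition by file_id order by block_id) grp
      from (select file_id, block_id, blocks,
                   case when lag(block_id + blocks)
                             over (partition by file_id order by block_id) = block_id
                        then 0 else 1 end brk
            from sys.dba_free_space
            where tablespace_name = upper('&&1')))
group by file_id, grp
order by file_id, start_block;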

Additional queries that can be run to monitor tablespace size are shown below:

set linesize 132 tab off trimspool on
set pagesize 105
set pause off
set feedb on

column "TOTAL AVAIL (MB)" format 999,999,999,990.00
column "TOTAL PHYS ALLOC (MB)" format 9,999,990.00
column "USED (MB)" format 9,999,990.00
column "FREE (MB)" format 9,999,990.00
column "% USED OF AVAIL" format 990.00
column "% USED OF ALLOC" format 990.00

select a.tablespace_name,
       nvl(b.tot_used,0)/(1024*1024) "USED (MB)",
       a.physical_bytes/(1024*1024) "TOTAL PHYS ALLOC (MB)",
       a.bytes_alloc/(1024*1024) "TOTAL AVAIL (MB)",
       (nvl(b.tot_used,0)/a.physical_bytes)*100 "% USED OF ALLOC",
       (nvl(b.tot_used,0)/a.bytes_alloc)*100 "% USED OF AVAIL"
from   (select tablespace_name, sum(bytes) physical_bytes,
               sum(decode(autoextensible,'NO',bytes,'YES',maxbytes)) bytes_alloc
        from   dba_data_files
        group by tablespace_name) a,
       (select tablespace_name, sum(bytes) tot_used
        from   dba_segments
        group by tablespace_name) b
where  a.tablespace_name = b.tablespace_name (+)
and    a.tablespace_name not in (select distinct tablespace_name from dba_temp_files)
and    a.tablespace_name not like 'UNDO%'
order by 6
/

This script computes available space across all tablespaces (except TEMP and UNDO) for both “normal” and autoextend data files. Taking into account that a datafile can extend beyond its current size produces less scary numbers and percentages when it comes to available free space. The output of that query:

TABLESPACE_NAME                    USED (MB) TOTAL PHYS ALLOC (MB)    TOTAL AVAIL (MB) % USED OF ALLOC % USED OF AVAIL
------------------------------ ------------- --------------------- ------------------- --------------- ---------------
USERS                                   2.81                  8.75           32,767.98           32.14            0.01
SYSTEM                                282.63                290.00           32,767.98           97.46            0.86
SYSAUX                                378.50                410.00           32,767.98           92.32            1.16

3 rows selected.

Notice that both the currently allocated size and the overall size the tablespace can attain are reported along with the percentage of the space, both currently allocated and available due to autoextend, currently used. Looking at the SYSTEM and SYSAUX tablespaces they are almost completely full based on the currently allocated size — it’s when we consider the additional space autoextend brings to the table that the usage report shows more reasonable numbers.

Another query reports how many extensions a datafile has based on the available space and the NEXT setting:

select file_name, bytes, maxbytes,
       increment_by*(bytes/blocks) "INCREMENT",
       maxbytes-bytes remaining,
       trunc((maxbytes-bytes)/(increment_by*(bytes/blocks))) extensions
from   dba_data_files
where  autoextensible = 'YES'
/

which produces:

FILE_NAME                                                           BYTES     MAXBYTES    INCREMENT    REMAINING   EXTENSIONS
------------------------------------------------------------ ------------ ------------ ------------ ------------ ------------
/opt/oracle/oradata/ORCLCDB/ORCLPDB1/undotbs01.dbf              104857600  34359721984      5242880  34254864384         6533
/opt/oracle/oradata/ORCLCDB/ORCLPDB1/sysaux01.dbf               429916160  34359721984     10485760  33929805824         3235
/opt/oracle/oradata/ORCLCDB/ORCLPDB1/system01.dbf               304087040  34359721984     10485760  34055634944         3247
/opt/oracle/oradata/ORCLCDB/ORCLPDB1/users01.dbf                  9175040  34359721984      1310720  34350546944        26207

4 rows selected.

This query gives the DBA two levers to manage: the autoextend increment size and the number of extensions available given the current settings. It’s possible that the current increment won’t allow all of the remaining space to be allocated on autoextend; using the REMAINING and INCREMENT columns the DBA can adjust the current NEXT value to one more conducive to allocating the remaining space. After changing the increment additional extensions will be reported as available, ensuring that all of the remaining space that can be allocated will be allocated.
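As a hypothetical example, using the users01.dbf path from the report above (the NEXT and MAXSIZE values here are assumptions, not recommendations), the increment can be adjusted with:

alter database datafile '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/users01.dbf'
autoextend on next 64m maxsize 32767m;

Re-running the extensions query afterward will confirm whether the new increment divides the remaining space more evenly.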

Combining these three queries into one script, or making individual scripts plus a driver that runs them all, makes it convenient to schedule regular executions, as sketched below. Maintaining a rolling window of reports will give the DBA information on tablespace usage over time, providing both the DBA and the enterprise the data necessary to plan storage additions and keep ahead of demand. A daily execution should be sufficient to monitor space usage and prepare the DBA for impending space additions. The enterprise part of this process comes when a stripe or array approaches capacity and needs to be expanded; such reports should be sufficient to predict when the storage farm will need attention, hopefully preventing a work stoppage because space is no longer available.
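One possible driver, with hypothetical script names and paths (each called script is assumed to spool its own report), could be as simple as:

rem tbs_monitor.sql -- hypothetical driver for the three reports;
rem pass the tablespace name for the mapper as the first argument
@/home/oracle/scripts/tuning/ts_map.sql &&1
@/home/oracle/scripts/tuning/ts_usage.sql
@/home/oracle/scripts/tuning/ts_autoextend.sql
exit

Run daily through cron or DBMS_SCHEDULER, the spooled files build the rolling window described above.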

Space can be a forgotten aspect in database administration in this day and age; the greater the awareness the DBA has in this regard the less likely the DBA will be caught off-guard and much less likely end users will be caught wanting space. The old adage “No news is good news.” does not apply to database storage. The tools provided should prepare the DBA to stay on top of storage usage and requirements so that business will continue without interruption. One can never be too prepared.

Now, where did I put that map???

May 11, 2022

Getting Lateral

Filed under: General — dfitzjarrell @ 07:47

In-line views are widely used in many applications, and they do make life easier for both DBAs and developers. One issue with such views used to be the inability to reference columns outside the view’s own context, making it impossible to correlate the view to the driving query. Oracle, since version 12, has provided a solution with the LATERAL and APPLY operators. Neither is difficult to use, and both can make work easier across the DBA and development teams. Let’s look into these operators and see how their use can make short work of more complex statements.

To illustrate these operators the examples below, taken from Tim Hall and modified, will be executed against two small tables. The examples start by creating and populating those tables:

SQL>
SQL> --
SQL> -- Create and populate required tables for examples
SQL> --
SQL> create table depts (
  2    dept_id   number(2) constraint depts_pk primary key,
  3    dept_name varchar2(14),
  4    loc        varchar2(13)
  5  );

Table created.

SQL>
SQL> insert all
  2  into depts values (10,'ACCOUNTING','NEW YORK')
  3  into depts values (20,'RESEARCH','DALLAS')
  4  into depts values (30,'SALES','CHICAGO')
  5  into depts values (40,'OPERATIONS','BOSTON')
  6  select * From dual;

4 rows created.

SQL> commit;

Commit complete.

SQL>
SQL>
SQL> create table emps (
  2    emp_id   number(4) constraint emps_pk primary key,
  3    emp_name varchar2(10),
  4    job           varchar2(9),
  5    mgr_id    number(4),
  6    hiredt      date,
  7    sal           number(7,2),
  8    comm    number(7,2),
  9    dept_id number(2) constraint emp_dept_id_fk references depts(dept_id)
 10  );

Table created.

SQL>
SQL> insert all
  2  into emps values (7369,'SMITH','CLERK',7902,to_date('17-12-1980','dd-mm-yyyy'),800,NULL,20)
  3  into emps values (7499,'ALLEN','SALESMAN',7698,to_date('20-2-1981','dd-mm-yyyy'),1600,300,30)
  4  into emps values (7521,'WARD','SALESMAN',7698,to_date('22-2-1981','dd-mm-yyyy'),1250,500,30)
  5  into emps values (7566,'JONES','MANAGER',7839,to_date('2-4-1981','dd-mm-yyyy'),2975,NULL,20)
  6  into emps values (7654,'MARTIN','SALESMAN',7698,to_date('28-9-1981','dd-mm-yyyy'),1250,1400,30)
  7  into emps values (7698,'BLAKE','MANAGER',7839,to_date('1-5-1981','dd-mm-yyyy'),2850,NULL,30)
  8  into emps values (7782,'CLARK','MANAGER',7839,to_date('9-6-1981','dd-mm-yyyy'),2450,NULL,10)
  9  into emps values (7788,'SCOTT','ANALYST',7566,to_date('13-JUL-87','dd-mm-rr')-85,3000,NULL,20)
 10  into emps values (7839,'KING','PRESIDENT',NULL,to_date('17-11-1981','dd-mm-yyyy'),5000,NULL,10)
 11  into emps values (7844,'TURNER','SALESMAN',7698,to_date('8-9-1981','dd-mm-yyyy'),1500,0,30)
 12  into emps values (7876,'ADAMS','CLERK',7788,to_date('13-JUL-87', 'dd-mm-rr')-51,1100,NULL,20)
 13  into emps values (7900,'JAMES','CLERK',7698,to_date('3-12-1981','dd-mm-yyyy'),950,NULL,30)
 14  into emps values (7902,'FORD','ANALYST',7566,to_date('3-12-1981','dd-mm-yyyy'),3000,NULL,20)
 15  into emps values (7934,'MILLER','CLERK',7782,to_date('23-1-1982','dd-mm-yyyy'),1300,NULL,10)
 16  select * From dual;

14 rows created.

SQL> commit;

Commit complete.

And now the fun begins — a fairly common query, using an in-line view, is executed that attempts to reference a column from the driving query inside the in-line view. Since the in-line view has a different context than the driving query the reference attempt fails, spectacularly:

SQL>
SQL> --
SQL> -- Join to a in-line view and attempt to reference a column value from the driving query
SQL> --
SQL> -- This fails, miserably
SQL> --
SQL> select dept_name, emp_name
  2  from   depts d
  3         cross join (select emp_name
  4                     from   emps e
  5                     where  e.dept_id = d.dept_id)
  6  order by 1, 2;
                   where  e.dept_id = d.dept_id)
                                            *
ERROR at line 5:
ORA-00904: "D"."DEPT_ID": invalid identifier


Because Oracle can’t ‘see’ the depts table from inside the emps in-line view the ORA-00904 is thrown. Obviously the in-line view needs to reference the driving table’s columns to provide useful information, but since the depts table is not present in the view definition the usual method of correlation can’t be used. Putting the depts table inside the view cures the error, but it adds a second copy of depts to the statement, increasing the work the database must do while leaving the driving copy of depts unreferenced. The result is a Cartesian join producing 56 rows of essentially useless output:

SQL> --
SQL> -- Add the depts table to the in-line view
SQL> --
SQL> -- The error goes away, but the results aren't correct
SQL> --
SQL> select dept_name, emp_name
  2  from   depts d
  3         cross join (select emp_name
  4                     from   emps e, depts f
  5                     where  e.dept_id = f.dept_id)
  6  order by 1, 2;

DEPT_NAME      EMP_NAME
-------------- ----------
ACCOUNTING     ADAMS
ACCOUNTING     ALLEN
ACCOUNTING     BLAKE
ACCOUNTING     CLARK
ACCOUNTING     FORD
ACCOUNTING     JAMES
ACCOUNTING     JONES
ACCOUNTING     KING
ACCOUNTING     MARTIN
ACCOUNTING     MILLER
ACCOUNTING     SCOTT

DEPT_NAME      EMP_NAME
-------------- ----------
ACCOUNTING     SMITH
ACCOUNTING     TURNER
ACCOUNTING     WARD
OPERATIONS     ADAMS
OPERATIONS     ALLEN
OPERATIONS     BLAKE
OPERATIONS     CLARK
OPERATIONS     FORD
OPERATIONS     JAMES
OPERATIONS     JONES
OPERATIONS     KING

DEPT_NAME      EMP_NAME
-------------- ----------
OPERATIONS     MARTIN
OPERATIONS     MILLER
OPERATIONS     SCOTT
OPERATIONS     SMITH
OPERATIONS     TURNER
OPERATIONS     WARD
RESEARCH       ADAMS
RESEARCH       ALLEN
RESEARCH       BLAKE
RESEARCH       CLARK
RESEARCH       FORD

DEPT_NAME      EMP_NAME
-------------- ----------
RESEARCH       JAMES
RESEARCH       JONES
RESEARCH       KING
RESEARCH       MARTIN
RESEARCH       MILLER
RESEARCH       SCOTT
RESEARCH       SMITH
RESEARCH       TURNER
RESEARCH       WARD
SALES          ADAMS
SALES          ALLEN

DEPT_NAME      EMP_NAME
-------------- ----------
SALES          BLAKE
SALES          CLARK
SALES          FORD
SALES          JAMES
SALES          JONES
SALES          KING
SALES          MARTIN
SALES          MILLER
SALES          SCOTT
SALES          SMITH
SALES          TURNER

DEPT_NAME      EMP_NAME
-------------- ----------
SALES          WARD

56 rows selected.

SQL>

Using the LATERAL operator in the cross join allows Oracle to ‘reach outside’ the context of the in-line view to ‘find’ columns in the depts table in the driving query, resulting in successful execution:

SQL>
SQL> --
SQL> -- Let's try again, using the lateral operator with the cross join
SQL> --
SQL> -- This allows referencing columns from the driving query in in-line views
SQL> --
SQL> select dept_name, emp_name
  2  from   depts d
  3         cross join lateral (select emp_name
  4                             from   emps e
  5                             where  e.dept_id = d.dept_id)
  6  order by 1, 2;

DEPT_NAME      EMP_NAME
-------------- ----------
ACCOUNTING     CLARK
ACCOUNTING     KING
ACCOUNTING     MILLER
RESEARCH       ADAMS
RESEARCH       FORD
RESEARCH       JONES
RESEARCH       SCOTT
RESEARCH       SMITH
SALES          ALLEN
SALES          BLAKE
SALES          JAMES

DEPT_NAME       EMP_NAME
-------------- ----------
SALES          MARTIN
SALES          TURNER
SALES          WARD

14 rows selected.

SQL>

CROSS APPLY can also be used. The next two queries return the same result set — one uses APPLY and the other uses the LATERAL operator:

SQL> --
SQL> -- CROSS APPLY is a variant of the ANSI CROSS JOIN
SQL> --
SQL> -- It applies to correlated in-line views
SQL> --
SQL> select dept_name, emp_id, emp_name
  2  from   depts d
  3         cross apply (select emp_id, emp_name
  4                      from   emps e
  5                      where  sal >= 2000
  6                      and    e.dept_id = d.dept_id)
  7  order by 1, 2, 3;

DEPT_NAME           EMP_ID EMP_NAME
-------------- ----------- ----------
ACCOUNTING            7782 CLARK
ACCOUNTING            7839 KING
RESEARCH              7566 JONES
RESEARCH              7788 SCOTT
RESEARCH              7902 FORD
SALES                 7698 BLAKE

6 rows selected.

SQL>
SQL> --
SQL> -- Use CROSS JOIN LATERAL to return the same results
SQL> --
SQL> select dept_name, emp_id, emp_name
  2  from   depts d
  3         cross join lateral (select emp_id, emp_name
  4                             from   emps e
  5                             where  sal >= 2000
  6                             and    e.dept_id = d.dept_id)
  7  order by 1, 2, 3;

DEPT_NAME           EMP_ID EMP_NAME
-------------- ----------- ----------
ACCOUNTING            7782 CLARK
ACCOUNTING            7839 KING
RESEARCH              7566 JONES
RESEARCH              7788 SCOTT
RESEARCH              7902 FORD
SALES                 7698 BLAKE

6 rows selected.

SQL>

The only difference between the two queries is that LATERAL requires an explicit JOIN operator while APPLY does not, as it performs the join by definition. A third query, using INNER JOIN LATERAL, will again return the same results:

SQL> --
SQL> -- or an INNER JOIN LATERAL will also work
SQL> --
SQL> select dept_name, emp_id, emp_name
  2  from   depts d
  3         inner join lateral (select emp_id, emp_name
  4                             from   emps e
  5                             where  sal >= 2000
  6                             and    e.dept_id = d.dept_id) e
  7                    on e.dept_id = d.dept_id
  8  order by 1, 2, 3;

DEPT_NAME           EMP_ID EMP_NAME
-------------- ----------- ----------
ACCOUNTING            7782 CLARK
ACCOUNTING            7839 KING
RESEARCH              7566 JONES
RESEARCH              7788 SCOTT
RESEARCH              7902 FORD
SALES                 7698 BLAKE

6 rows selected.

SQL>

Pipelined table functions can be used in place of the in-line view in all three implementations, as the following examples illustrate:

SQL> --
SQL> -- Use a pipelined function in place of a correlated in-line view
SQL> --
SQL> -- CROSS APPLY, INNER JOIN LATERAL and CROSS JOIN LATERAL return the same results
SQL> --
SQL> -- Create the type and Pipelined Table Function.
SQL> --
SQL> create type t_tab as table of number;
  2  /

Type created.

SQL>
SQL> create or replace function get_tab (p_dept_id in number)
  2    return t_tab pipelined
  3  as
  4  begin
  5    if p_dept_id != 10 then
  6      for i in (select level as numval
  7                from   dual
  8                connect by level <= 2)
  9      loop
 10        pipe row (i.numval);
 11      end loop;
 12    end if;
 13
 14    return;
 15  end;
 16  /

Function created.

SQL>
SQL>
SQL> -- CROSS APPLY
SQL> select dept_name, b.*
  2  from   depts d
  3         cross apply (table(get_tab(d.dept_id))) b
  4  order by 1, 2;

DEPT_NAME      COLUMN_VALUE
-------------- ------------
OPERATIONS                1
OPERATIONS                2
RESEARCH                  1
RESEARCH                  2
SALES                     1
SALES                     2

6 rows selected.

SQL>
SQL> -- CROSS JOIN LATERAL
SQL> select dept_name, b.*
  2  from   depts d
  3         cross join lateral (select * from table(get_tab(d.dept_id))) b
  4  order by 1, 2;

DEPT_NAME      COLUMN_VALUE
-------------- ------------
OPERATIONS                1
OPERATIONS                2
RESEARCH                  1
RESEARCH                  2
SALES                     1
SALES                     2

6 rows selected.

SQL>
SQL> -- INNER JOIN LATERAL
SQL> select dept_name, b.*
  2  from   depts d
  3         inner join lateral (select * from table(get_tab(d.dept_id))) b
  4                    on 1 = 1
  5  order by 1, 2;

DEPT_NAME      COLUMN_VALUE
-------------- ------------
OPERATIONS                1
OPERATIONS                2
RESEARCH                  1
RESEARCH                  2
SALES                     1
SALES                     2

6 rows selected.

SQL>

APPLY can also be used with the OUTER keyword, as the following example shows:

SQL> --
SQL> -- OUTER APPLY performs an outer join with a correlated in-line view providing correlation support
SQL> --
SQL> select dept_name, emp_id, emp_name
  2  from   depts d
  3         outer apply (select emp_id, emp_name
  4                      from   emps e
  5                      where  sal >= 2000
  6                      and    e.dept_id = d.dept_id)
  7  order by 1, 2, 3;

DEPT_NAME           EMP_ID EMP_NAME
-------------- ----------- ----------
ACCOUNTING            7782 CLARK
ACCOUNTING            7839 KING
OPERATIONS
RESEARCH              7566 JONES
RESEARCH              7788 SCOTT
RESEARCH              7902 FORD
SALES                 7698 BLAKE

7 rows selected.

SQL>

This query also returns the same results:

SQL> --
SQL> -- LEFT JOIN LATERAL also works
SQL> --
SQL> select dept_name, emp_id, emp_name
  2  from   depts d
  3         left join lateral (select emp_id, emp_name
  4                            from   emps e
  5                            where  sal >= 2000
  6                            and    e.dept_id = d.dept_id) e
  7                   on e.dept_id = d.dept_id
  8  order by 1, 2, 3;

DEPT_NAME           EMP_ID EMP_NAME
-------------- ----------- ----------
ACCOUNTING            7782 CLARK
ACCOUNTING            7839 KING
OPERATIONS
RESEARCH              7566 JONES
RESEARCH              7788 SCOTT
RESEARCH              7902 FORD
SALES                 7698 BLAKE

7 rows selected.

SQL>

More examples of APPLY and LATERAL queries returning the same results follow, illustrating that the syntax the coder is most familiar with can usually be used to return the desired rows. Below are more examples of the ‘more than one way to skin a fish’ philosophy:

SQL> --
SQL> -- These two return the same results
SQL> --
SQL> -- OUTER APPLY
SQL> select dept_name, b.*
  2  from   depts d
  3         outer apply (table(get_tab(d.dept_id))) b
  4  order by 1, 2;

DEPT_NAME      COLUMN_VALUE
-------------- ------------
ACCOUNTING
OPERATIONS                1
OPERATIONS                2
RESEARCH                  1
RESEARCH                  2
SALES                     1
SALES                     2

7 rows selected.

SQL>
SQL> -- LEFT JOIN LATERAL
SQL> select dept_name, b.*
  2  from   depts d
  3         left join lateral (select * from table(get_tab(d.dept_id))) b
  4                   on 1=1
  5  order by 1, 2;

DEPT_NAME      COLUMN_VALUE
-------------- ------------
ACCOUNTING
OPERATIONS                1
OPERATIONS                2
RESEARCH                  1
RESEARCH                  2
SALES                     1
SALES                     2

7 rows selected.

SQL>

The LATERAL and APPLY operators give DBAs and developers more latitude when writing queries by allowing correlation between the driving table and the associated in-line view. An in-line view can be tested on its own to ensure it returns the proper rows before it is joined, via LATERAL or APPLY, to generate the final rowsource. Having this ability can be quite helpful to data architects wanting to ensure proper relational functionality in schema designs, especially as an in-line view becomes more complex. Let’s add a table to the in-line view and see what happens:

SQL> create table bonus(
  2  emp_id     number(4) constraint bonus_pk primary key,
  3  low_val number(7,2),
  4  hi_val     number(7,2),
  5  bonus_dt date);

Table created.

SQL>
SQL> insert all
  2  into bonus values (7499,160,500,sysdate-20)
  3  into bonus values (7521,125,400,sysdate-25)
  4  into bonus values (7566,295,600,sysdate-20)
  5  into bonus values (7654,125,140,sysdate-30)
  6  into bonus values (7698,285,600,sysdate-30)
  7  into bonus values (7782,245,600,sysdate-20)
  8  into bonus values (7844,150,230,sysdate-10)
  9  select * From dual;

7 rows created.

SQL> commit;

Commit complete.

SQL>
SQL> select dept_name, emp_id, emp_name
  2  from   depts d
  3         outer apply (select e.emp_id, e.emp_name
  4                      from   emps e, bonus b
  5                      where  e.sal >= 2000
  6                      and    b.hi_val > 400
  7                      and    e.dept_id = d.dept_id
  8                      and    e.emp_id = b.emp_id)
  9  order by 1, 2, 3;

DEPT_NAME          EMP_ID EMP_NAME
-------------- ---------- ----------
ACCOUNTING           7782 CLARK
OPERATIONS
RESEARCH             7566 JONES
SALES                7698 BLAKE

SQL>

The in-line view is now itself a join, and LATERAL still returns the correct rows; as views become even more complex the ability to correlate columns from the driving table inside the view is welcome, since a rewrite to eliminate the view may not be possible. Yes, the WITH clause can often be used in a rewrite (a hypothetical example is sketched below), but that makes every source essentially a view and some constructs may not be WITH-friendly. LATERAL and APPLY shouldn’t have that issue (although robust testing hasn’t been performed so there may be a case where these operators have issues).
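For comparison, a hypothetical WITH-clause rewrite of the OUTER APPLY query above would look something like the following; it returns the same four rows but turns the employee/bonus join into a named, view-like source:

with high_bonus as (
     select e.emp_id, e.emp_name, e.dept_id
     from   emps e, bonus b
     where  e.sal >= 2000
     and    b.hi_val > 400
     and    e.emp_id = b.emp_id)
select d.dept_name, h.emp_id, h.emp_name
from   depts d left join high_bonus h
       on h.dept_id = d.dept_id
order by 1, 2, 3;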

LATERAL and APPLY are extremely useful tools for the DBA and developer alike. Using them can simplify query coding by making the ‘impossible’ possible.

“To dream the impossible dream…”

May 3, 2022

Visiting The Cascades

Filed under: General — dfitzjarrell @ 07:57

Relational databases are ‘relational’ because data in one table often relates to data in another; additional data required by an application is frequently stored in child tables that reference the main, or parent, record. To preserve this parent->child relationship a construct called a foreign key constraint is provided. Foreign key constraints are key to referential integrity: they prevent orphaned child records from existing in the database, which allows applications to properly return data to the end users. They can, of course, hinder deleting data from the parent table, as Oracle will throw the following error when child records are discovered:

ORA-02292: integrity constraint (…) violated – child record found

The often tedious process of finding and deleting these child records then begins. However, this can be avoided if the foreign key constraint is created, well, properly. Let’s look into that and see how Oracle can do this for you.

Creating foreign key constraints is, honestly, pretty simple. The basic syntax is illustrated below:

alter table x add constraint x_fk foreign key (y) references z(q);

where y is, obviously, the key column in the child table and q is the referenced column in the parent table. A “shortcut”, if you will, is available provided the foreign key and primary key columns have the same name:

alter table x add constraint x_fk foreign key (y) references z;

Such a basic foreign key constraint will effectively prevent orphaned child records, but it won’t police the child table by deleting the referencing child rows when a parent row is deleted. There is another option:

alter table x add constraint x_fk foreign key (y) references z on delete cascade;

Adding the ‘on delete cascade’ clause to a foreign key creation positions Oracle to ‘automagically’ delete child records whenever their parent records are deleted. The following example illustrates this.

First, we create the desired tables:

SQL>
SQL> create table parent(
2 p_id number not null,
3 p_val varchar2(45) not null,
4 p_dt date not null,
5 p_comm varchar2(60),
6 constraint parent_pk primary key (p_id))
7 ;
Table created.
SQL>
SQL> create table child(
2 c_id number not null,
3 p_id number not null,
4 c_val varchar2(45) not null,
5 c_dt date not null,
6 c_comm varchar2(50),
7 constraint child_fk foreign key (p_id) references parent);
Table created.

Populate the tables:

SQL>
SQL> begin
2 for i in 1..1000 loop
3 insert into parent
4 values(i, 'Row '||i, sysdate+i, 'This is row '||i);
5 insert into child
6 values(i, i, 'Row '||i, sysdate+i, 'This is row '||i);
7 end loop;
8
9 commit;
10 end;
11 /
PL/SQL procedure successfully completed.

Now attempt to delete from the parent table; since child records exist the ORA-02292 error is thrown:

SQL>
SQL> delete from parent
2 where p_id between 199 and 343;
delete from parent
*
ERROR at line 1:
ORA-02292: integrity constraint (BING.CHILD_FK) violated - child record found

Rollback the current transaction:

SQL>
SQL> rollback;

Rollback complete.

Drop the current foreign key constraint and re-create it adding the ‘on delete cascade’ option:

SQL>
SQL> alter table child drop constraint child_fk;
Table altered.
SQL>
SQL> alter table child add constraint child_fk foreign key (p_id) references parent on delete cascade;
Table altered.
SQL>

The delete from the parent table is attempted again, this time with different results:

SQL>
SQL> delete from parent
2 where p_id between 199 and 343;
145 rows deleted.

Proving that all related records no longer exist:

SQL>
SQL> select * from child where p_Id between 199 and 343;
no rows selected
SQL>
SQL> select * from parent where p_id between 199 and 343;
no rows selected

Both parent and child rows are deleted in one operation, preserving the referential integrity of the data. This, however, may be rather scary to some DBAs and developers, as inadvertent deletes no longer throw the ORA-02292 error and may result in missing data. As long as the application doesn’t automatically commit such transactions these deletes can easily and quickly be undone; it’s not unusual for a warning pop-up window to require the user to think about what he or she has just done and proceed no further until the action has been confirmed or rejected. And even if the delete is confirmed and later found to be in error, flashback query can ‘save the day’ by allowing the data, in both tables, to be restored. (That is covered in another post.) Yes, creating foreign key constraints using ‘on delete cascade’ should be carefully considered, but they can be useful when developers design around them by providing a confirmation mechanism before allowing the end user to commit his or her work.
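As a quick taste of that flashback approach (covered more fully in the post mentioned above), a query along these lines, with an assumed 15-minute window, would show the child rows as they existed before the delete:

select *
from child as of timestamp (systimestamp - interval '15' minute)
where p_id between 199 and 343;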

To prove how Oracle can just as easily undo these deletes we roll back the transaction, then execute the same two queries again:

SQL>
SQL> rollback;
Rollback complete.
SQL>
SQL> select * from child where p_Id between 199 and 343;
  C_ID       P_ID C_VAL                                         C_DT      C_COMM                                                                                                                                                  

   199        199 Row 199                                       17-NOV-22 This is row 199                                                                                                                                         
   200        200 Row 200                                       18-NOV-22 This is row 200                                                                                                                                         
   201        201 Row 201                                       19-NOV-22 This is row 201                                                                                                                                         
   202        202 Row 202                                       20-NOV-22 This is row 202                                                                                                                                         
   203        203 Row 203                                       21-NOV-22 This is row 203                                                                                                                                         
   204        204 Row 204                                       22-NOV-22 This is row 204                                                                                                                                         
   205        205 Row 205                                       23-NOV-22 This is row 205                                                                                                                                         
   206        206 Row 206                                       24-NOV-22 This is row 206                                                                                                                                         

   327        327 Row 327                                       25-MAR-23 This is row 327
   328        328 Row 328                                       26-MAR-23 This is row 328
   329        329 Row 329                                       27-MAR-23 This is row 329
   330        330 Row 330                                       28-MAR-23 This is row 330
   331        331 Row 331                                       29-MAR-23 This is row 331
   332        332 Row 332                                       30-MAR-23 This is row 332
   333        333 Row 333                                       31-MAR-23 This is row 333
   334        334 Row 334                                       01-APR-23 This is row 334
   335        335 Row 335                                       02-APR-23 This is row 335
   336        336 Row 336                                       03-APR-23 This is row 336
   337        337 Row 337                                       04-APR-23 This is row 337
   338        338 Row 338                                       05-APR-23 This is row 338
   339        339 Row 339                                       06-APR-23 This is row 339
   340        340 Row 340                                       07-APR-23 This is row 340
   341        341 Row 341                                       08-APR-23 This is row 341
   342        342 Row 342                                       09-APR-23 This is row 342
   343        343 Row 343                                       10-APR-23 This is row 343
145 rows selected.
SQL>
SQL> select * from parent where p_id between 199 and 343;
  P_ID P_VAL                                         P_DT      P_COMM                                                                                                                                                             

   199 Row 199                                       17-NOV-22 This is row 199                                                                                                                                                    
   200 Row 200                                       18-NOV-22 This is row 200                                                                                                                                                    
   201 Row 201                                       19-NOV-22 This is row 201                                                                                                                                                    
   202 Row 202                                       20-NOV-22 This is row 202                                                                                                                                                    
   203 Row 203                                       21-NOV-22 This is row 203                                                                                                                                                    
   204 Row 204                                       22-NOV-22 This is row 204                                                                                                                                                    
   205 Row 205                                       23-NOV-22 This is row 205                                                                                                                                                    
   206 Row 206                                       24-NOV-22 This is row 206                                                                                                                                                    

   327 Row 327                                       25-MAR-23 This is row 327
   328 Row 328                                       26-MAR-23 This is row 328
   329 Row 329                                       27-MAR-23 This is row 329
   330 Row 330                                       28-MAR-23 This is row 330
   331 Row 331                                       29-MAR-23 This is row 331
   332 Row 332                                       30-MAR-23 This is row 332
   333 Row 333                                       31-MAR-23 This is row 333
   334 Row 334                                       01-APR-23 This is row 334
   335 Row 335                                       02-APR-23 This is row 335
   336 Row 336                                       03-APR-23 This is row 336
   337 Row 337                                       04-APR-23 This is row 337
   338 Row 338                                       05-APR-23 This is row 338
   339 Row 339                                       06-APR-23 This is row 339
   340 Row 340                                       07-APR-23 This is row 340
   341 Row 341                                       08-APR-23 This is row 341
   342 Row 342                                       09-APR-23 This is row 342
   343 Row 343                                       10-APR-23 This is row 343
145 rows selected.
SQL>

It’s as if a miracle occurred — all of the deleted data in both tables is restored as though nothing had happened. What bears repeating is that a commit makes an undesired delete far harder to recover from; once the undo retention period has been exceeded, flashback query can no longer restore the missing data.
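Checking the configured undo retention (in seconds) is straightforward, and worth doing before relying on flashback query to back out a committed delete:

select value undo_retention_seconds
from v$parameter
where name = 'undo_retention';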

To be fair and complete, flashback query is not the only method of restoring data deleted by accident. DBMS_LOGMNR can provide statements to undo such deletes, with the caveat that starting the Log Miner processes can impact performance; another post on that utility is available on this blog, so the material won’t be covered here. So even if data IS deleted that shouldn’t have been, at least two methods are available to resurrect the missing rows after a commit is executed. That may let the DBA rest a bit easier if the developers decide to let Oracle take the reins and handle child record deletes automatically.

It’s an act worth considering.

April 27, 2022

“Put That Right Here…”

Filed under: General — dfitzjarrell @ 08:42

Sometimes interesting problems come out of processes that appear to be basic, boring tasks. Database migrations are a good example, especially if the migration requires a re-working of datetime calculations between one RDBMS and another. What may be taken for granted in the source RDBMS may produce incorrect results in the destination RDBMS when datetime arithmetic and limitations for some data types are not fully understood. Let’s look into that and, using a real-life example, see what can go wrong.

Oracle, for decades, has made datetime arithmetic basic and, well, simple; using SYSDATE it’s incredibly easy to create dates in the future as well as in the past:

SQL> alter session set nls_date_format = 'DD-MON-RRRR HH24:MI:SS';
Session altered.
SQL> select sysdate, sysdate+10, sysdate-7 from dual;

SYSDATE              SYSDATE+10           SYSDATE-7
-------------------- -------------------- --------------------
26-APR-2022 09:47:01 06-MAY-2022 09:47:01 19-APR-2022 09:47:01

SQL>

In one SQL statement we have reported the current datetime, a datetime in the future and one in the past, all with simple addition and subtraction; notice that the time portion of each datetime value is also preserved. Other RDBMS engines are not as forgiving: in Postgres, subtracting two plain date values yields an integer number of days, so the values must be handled as timestamps, producing an interval, for the date and the time elements to be properly reported. A simple datetime difference in Oracle is coded:

select sysdate - expiry_dt from expiry;

which returns the difference in days, including the fractional part of a day; that fraction is fairly easily converted to hours and minutes with some basic mathematics:

SQL> select trunc(expiry_dt - sysdate) days,
  2         trunc(((expiry_dt - sysdate) - trunc(expiry_dt - sysdate))*24) hours,
  3         trunc(((expiry_dt - sysdate) - trunc(expiry_dt - sysdate))*1440)-
  4         trunc(((expiry_dt - sysdate) - trunc(expiry_dt - sysdate))*24)*60 minutes
  5  from expiry
  6  where
  7*        trunc(expiry_dt - sysdate) < 100
SQL> /

      DAYS      HOURS    MINUTES
---------- ---------- ----------
        26         23         47
        53         23         47
        80         23         47

SQL>

(I said basic, not short.) The only mathematical “gyrations” needed involve isolating the fractional part of the day and converting it to hours and minutes with basic multiplication. No conversion of any date values to, say, intervals was necessary to return the days and the fraction thereof.
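For those who would rather let Oracle do the factoring, the built-in NUMTODSINTERVAL function converts the day difference into an INTERVAL DAY TO SECOND value in one step; this sketch runs against the same expiry table:

select numtodsinterval(expiry_dt - sysdate, 'DAY') time_left
from expiry
where trunc(expiry_dt - sysdate) < 100;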

Postgres can do date subtraction; however, if the values are not handled as timestamps and intervals with the proper time units it produces integer differences, meaning the time portion of the difference is “lost”. There is a second wrinkle: within a transaction now() and current_timestamp return the transaction start time, so executing pg_sleep() inside a plpgsql block doesn’t advance the value they return; the value from the initial call is preserved in the block context. To produce a later datetime the desired interval (minutes, seconds, hours) can be added to the value now() or current_timestamp provides. As an example two functions are created, one that modifies a datetime value by adding an interval and one that simply relies upon the delay from a pg_sleep() call to provide a more recent time value; the code is provided below:

drop function test;
drop function test2;
set client_min_messages = notice;

CREATE FUNCTION test (nonsense integer)
RETURNS interval AS $$
DECLARE
   fst_date timestamp;
   scd_date timestamp;
BEGIN
    fst_date := now();
    raise notice 'First date: %',fst_date;
    scd_date := now() + interval '60 second';
    raise notice 'Second date: %',scd_date;
    RETURN scd_date - fst_date;
END;
$$ LANGUAGE plpgsql;


CREATE FUNCTION test2 (nonsense integer)
RETURNS interval AS $$
DECLARE
   fst_date timestamp;
   scd_date timestamp;
BEGIN
    fst_date := now();
    raise notice 'First date: %',fst_date;
    perform pg_sleep(5);
    scd_date := now();
    raise notice 'Second date: %',scd_date;
    RETURN scd_date - fst_date;
END;
$$ LANGUAGE plpgsql;

select test(1);
select test2(1);

The results from executing these functions are shown below — notice that subsequent calls to now() after a pg_sleep() execution (for 5 seconds) provide the same datetime value:

psql:/home/postgres/files/test.sql:38: NOTICE:  First date: 2022-04-27 07:12:39.635446
psql:/home/postgres/files/test.sql:38: NOTICE:  Second date: 2022-04-27 07:13:39.635446
   test
----------
 00:01:00
(1 row)

psql:/home/postgres/files/test.sql:39: NOTICE:  First date: 2022-04-27 07:12:39.636306
psql:/home/postgres/files/test.sql:39: NOTICE:  Second date: 2022-04-27 07:12:39.636306
  test2
----------
 00:00:00
(1 row)

[Longer values supplied to pg_sleep() provide the same results.]

A “raise notice” call, plus setting client_min_messages to notice, provides the NOTICE lines in the above output. Both functions call now() to set the datetime values; test also adds an interval of 60 seconds to the value returned by now() and does not call pg_sleep(), while test2 uses only pg_sleep() in an attempt to get Postgres to return a more current value from the second now() call. The results tell the tale: now() and current_timestamp report the start of the current transaction, not the current wall-clock time, so the sleep changes nothing. This can be confusing to those used to Oracle and its ability to use a dbms_lock.sleep() call to delay processing and still have SYSDATE return the actual current datetime when next called. When converting Oracle functions and procedures to plpgsql this can produce erroneous results, especially when datetime differences in minutes or seconds are expected, because repeated datetime calls won’t provide updated values within the transaction. In certain cases an interval value can be used in a multiplication scenario to essentially convert the current datetime value to an interval type, providing hours, minutes and seconds when a datetime difference is calculated, and datetime values stored in table columns, when fetched, can undergo this transformation. Thorough testing is required to ensure that datetime differences within a Postgres installation provide the desired and expected results.
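Postgres does offer a true wall-clock reading through clock_timestamp(). The sketch below (test3 is a name invented for this example, not part of the original tests) shows the pg_sleep() delay finally appearing in the returned interval:

CREATE FUNCTION test3 (nonsense integer)
RETURNS interval AS $$
DECLARE
   fst_date timestamp;
   scd_date timestamp;
BEGIN
    fst_date := clock_timestamp();   -- wall clock, not transaction start
    perform pg_sleep(5);
    scd_date := clock_timestamp();
    RETURN scd_date - fst_date;      -- roughly 00:00:05
END;
$$ LANGUAGE plpgsql;

Calling select test3(1); returns an interval of roughly five seconds, where test2 returned zero.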

Code migrations between RDBMS engines can be daunting for a novice or moderately skilled developer as assumptions based on current behavior may not hold true in the destination RDBMS. Robust and rigorous testing should be done to “shake out” the code changes to ensure that correct functionality and results are provided. Failing to provide proper testing scenarios can cause issues “down the road” when migrated code doesn’t perform exactly as it did in the source implementation. Migrations take patience; when done properly that patience will be rewarded.

Put that code over here…

April 21, 2022

Whatcha Doin’???

Filed under: General — dfitzjarrell @ 09:32

Standby databases have become fairly commonplace in recent years, giving the DBA even more to monitor and manage during the normal business day. Five views are specific to the standby configuration:

V_$STANDBY_EVENT_HISTOGRAM
V_$STANDBY_LOG
GV_$STANDBY_LOG
GV_$MANAGED_STANDBY
V_$MANAGED_STANDBY

The GV_$ views are created for standby databases in a RAC configuration but can be used to monitor all standby databases. Let’s look at these views, what they report and how the information can be used to manage and triage a standby database.

V_$STANDBY_EVENT_HISTOGRAM reports on the apply lag for the standby (currently this is the only configured event for this view). It provides a history of the lag times and the dates when they were recorded. Sample output is shown below:

SQL> select * From v$standby_event_histogram;

NAME                                                                           TIME UNIT                          COUNT LAST_TIME_UPDATED
---------------------------------------------------------------- ------------------ ---------------- ------------------ --------------------
apply lag                                                                         0 seconds                     2029622 04/21/2022 10:24:36
apply lag                                                                         1 seconds                      126569 04/21/2022 10:24:29
apply lag                                                                         2 seconds                       74288 04/21/2022 10:21:33
apply lag                                                                         3 seconds                      113061 04/21/2022 10:15:17
apply lag                                                                         4 seconds                       94382 04/21/2022 10:15:18
apply lag                                                                         5 seconds                       82232 04/21/2022 09:43:01
apply lag                                                                         6 seconds                       74034 04/21/2022 09:33:13
apply lag                                                                         7 seconds                       68011 04/21/2022 06:57:59
apply lag                                                                         8 seconds                       63748 04/21/2022 06:58:00
apply lag                                                                         9 seconds                       60386 04/21/2022 06:58:01
apply lag                                                                        10 seconds                       57632 04/21/2022 06:58:02
apply lag                                                                        11 seconds                       55451 04/21/2022 06:58:03
apply lag                                                                        12 seconds                       53546 04/21/2022 06:35:55
apply lag                                                                        13 seconds                       51840 04/21/2022 06:35:56
apply lag                                                                        14 seconds                       50377 04/21/2022 06:35:57
apply lag                                                                        15 seconds                       49168 04/21/2022 06:35:58
apply lag                                                                        16 seconds                       47965 04/21/2022 06:35:59
apply lag                                                                        17 seconds                       47093 04/21/2022 06:36:00
apply lag                                                                        18 seconds                       46096 04/21/2022 06:36:01
apply lag                                                                        19 seconds                       45119 04/21/2022 06:34:08
apply lag                                                                        20 seconds                       44140 04/21/2022 06:29:02
apply lag                                                                        21 seconds                       43160 04/21/2022 06:29:03
apply lag                                                                        22 seconds                       42343 04/21/2022 06:29:04
apply lag                                                                        23 seconds                       41651 04/21/2022 06:29:05
apply lag                                                                        24 seconds                       41023 04/21/2022 06:29:06
apply lag                                                                        25 seconds                       40473 04/21/2022 06:29:07
apply lag                                                                        26 seconds                       39835 04/21/2022 06:29:08

NAME                                                                           TIME UNIT                          COUNT LAST_TIME_UPDATED
---------------------------------------------------------------- ------------------ ---------------- ------------------ --------------------
apply lag                                                                        27 seconds                       39214 04/21/2022 06:29:09
apply lag                                                                        28 seconds                       38762 04/21/2022 06:29:10
apply lag                                                                        29 seconds                       38298 04/21/2022 06:29:11
apply lag                                                                        30 seconds                       37779 04/21/2022 06:29:12
apply lag                                                                        31 seconds                       37332 04/21/2022 06:29:13
apply lag                                                                        32 seconds                       36856 04/21/2022 06:29:14
apply lag                                                                        33 seconds                       36459 04/21/2022 06:26:00
apply lag                                                                        34 seconds                       36030 04/21/2022 06:26:01
apply lag                                                                        35 seconds                       35446 04/21/2022 06:26:02
apply lag                                                                        36 seconds                       34974 04/21/2022 06:26:03
apply lag                                                                        37 seconds                       34433 04/21/2022 06:26:04
apply lag                                                                        38 seconds                       33739 04/21/2022 06:26:05
apply lag                                                                        39 seconds                       33063 04/21/2022 06:26:06
apply lag                                                                        40 seconds                       32409 04/21/2022 06:25:22
apply lag                                                                        41 seconds                       31876 04/21/2022 06:25:23
apply lag                                                                        42 seconds                       31362 04/21/2022 06:25:24
apply lag                                                                        43 seconds                       30789 04/21/2022 06:25:25
apply lag                                                                        44 seconds                       30172 04/21/2022 06:25:26
apply lag                                                                        45 seconds                       29661 04/21/2022 06:25:27
apply lag                                                                        46 seconds                       29120 04/21/2022 06:21:55
apply lag                                                                        47 seconds                       28504 04/21/2022 06:21:56
apply lag                                                                        48 seconds                       27965 04/21/2022 06:21:57
apply lag                                                                        49 seconds                       27307 04/21/2022 06:21:58
apply lag                                                                        50 seconds                       26734 04/21/2022 06:21:59
apply lag                                                                        51 seconds                       26183 04/21/2022 06:22:00
apply lag                                                                        52 seconds                       25681 04/21/2022 06:22:01
apply lag                                                                        53 seconds                       25235 04/21/2022 06:22:02

NAME                                                                           TIME UNIT                          COUNT LAST_TIME_UPDATED
---------------------------------------------------------------- ------------------ ---------------- ------------------ --------------------
apply lag                                                                        54 seconds                       24746 04/21/2022 06:22:03
apply lag                                                                        55 seconds                       24250 04/21/2022 06:22:04
apply lag                                                                        56 seconds                       23672 04/21/2022 06:22:05
apply lag                                                                        57 seconds                       23003 04/21/2022 06:22:06
apply lag                                                                        58 seconds                       21705 04/21/2022 06:22:07
apply lag                                                                        59 seconds                       19321 04/21/2022 06:15:38
apply lag                                                                         1 minutes                           0
apply lag                                                                         2 minutes                       17021 04/21/2022 06:15:39
apply lag                                                                         3 minutes                          19 04/05/2022 09:48:45

63 rows selected.

SQL>
                 

(Notice that the bucket with a count of 0 has no date/time stamp; a lag value that has never been observed is not an issue.) This view provides starting points for lag investigations so that issues can be diagnosed and resolved. Query it regularly, as shown below, to keep abreast of the apply lag during the day. Note that processes such as AWS Data Migration Service continuous replication can cause the standby to experience lags; such processes can be monitored through the V$SESSION_LONGOPS view.
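
A minimal sketch of such a recurring check, assuming the histogram above comes from the V$STANDBY_EVENT_HISTOGRAM view (the NAME, TIME, UNIT, COUNT and LAST_TIME_UPDATED columns shown are found there):

-- Apply lag distribution on the standby; buckets never observed report a COUNT of 0
select name, time, unit, count, last_time_updated
from v$standby_event_histogram
where name = 'apply lag'
order by decode(unit, 'seconds', 1, 'minutes', 2, 'hours', 3, 4), time;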

V_$STANDBY_LOG (and GV_$STANDBY_LOG) report on the status and performance of the standby redo logs; sample output is shown below:

SQL> select * From v$standby_log
2 /

        GROUP# DBID                                                THREAD#          SEQUENCE#              BYTES          BLOCKSIZE               USED ARC STATUS          FIRST_CHANGE# FIRST_TIME

  NEXT_CHANGE# NEXT_TIME                           LAST_CHANGE# LAST_TIME

            10 3605096953                                                1             211077         1048576000                512           16694784 YES ACTIVE            91181333104 21-APR-2022 10:37:13
                                                    91181336755 21-APR-2022 10:37:28

            11 UNASSIGNED                                                1                  0         1048576000                512                  0 NO  UNASSIGNED


            12 UNASSIGNED                                                1                  0         1048576000                512                  0 NO  UNASSIGNED


            13 UNASSIGNED                                                1                  0         1048576000                512                  0 NO  UNASSIGNED


            14 UNASSIGNED                                                1                  0         1048576000                512                  0 YES UNASSIGNED

SQL>

For a single-instance standby the DBID need not be reported. As each standby log is accessed the view provides the sequence number, status, first change number in the log and a progression of last change numbers as the standby logs are written by the RFS process (which receives the redo shipped from the primary's LGWR). The GV_$ analog also reports the instance id, useful in a RAC configuration. Even more information can be returned from this view with a slightly more useful script:

select group#,
       thread#,
       sequence#,
       used,
       round((used/bytes)*100, 2) "%FULL",
       archived,
       status,
       first_change#,
       first_time,
       last_change#,
       last_time,
       (last_time - first_time)*86400 seconds_active,
       round(used/((last_time - first_time)*86400),0) bytes_per_sec,
       last_change#-first_change# changes
from v$standby_log;

The output of this query is provided below:


      GROUP#      THREAD#    SEQUENCE#         USED   %FULL ARC STATUS     FIRST_CHANGE# FIRST_TIME                    LAST_CHANGE# LAST_TIME                     SECONDS_ACTIVE BYTES_PER_SEC      CHANGES
------------ ------------ ------------ ------------ ------- --- ---------- ------------- ----------------------------- ------------ ----------------------------- -------------- ------------- ------------
          10            1       211077     59317760    5.66 YES ACTIVE       91181333104 21-APR-2022 10:37:13           91181347780 21-APR-2022 10:37:51                      38       1560994        14676
          11            1            0            0    0.00 NO  UNASSIGNED
          12            1            0            0    0.00 NO  UNASSIGNED
          13            1            0            0    0.00 NO  UNASSIGNED
          14            1            0            0    0.00 YES UNASSIGNED

The calculated values provide the transfer rate in bytes per second, the number of seconds the log has been active and the total number of recorded changes; for the ACTIVE group above, 59317760 bytes used over 38 active seconds yields the reported 1560994 bytes per second. This is another query that should be run periodically throughout the work day to monitor standby log activity.

V_$MANAGED_STANDBY gives the following output:

SQL> select * From v$managed_standby;

PROCESS            PID STATUS       CLIENT_P CLIENT_PID                               CLIENT_DBID                              GROUP#                                    RESETLOG_ID      THREAD#    SEQUENCE#       BLOCK#
--------- ------------ ------------ -------- ---------------------------------------- ---------------------------------------- ---------------------------------------- ------------ ------------ ------------ ------------
      BLOCKS   DELAY_MINS KNOWN_AGENTS ACTIVE_AGENTS
------------ ------------ ------------ -------------
ARCH              8032 CLOSING      ARCH     8032                                     3605096953                               10                                         1003098742            1       211075      1726464
        1496            0            0             0

ARCH              8632 CONNECTED    ARCH     8632                                     3605096953                               N/A                                                 0            0            0            0
           0            0            0             0

ARCH              8634 CLOSING      ARCH     8634                                     3605096953                               10                                         1003098742            1       211077      1730560
        1063            0            0             0

ARCH              8638 CLOSING      ARCH     8638                                     3605096953                               11                                         1003098742            1       211076      1726464
         676            0            0             0

MRP0             10498 APPLYING_LOG N/A      N/A                                      N/A                                      N/A                                        1003098742            1       211078       597932
     2048000            0           17            17

RFS              12996 IDLE         ARCH     8832                                     3605096953                               N/A                                                 0            0            0            0
           0            0            0             0

RFS              30990 IDLE         UNKNOWN  8834                                     3605096953                               N/A                                                 0            0            0            0
           0            0            0             0

RFS              17662 IDLE         UNKNOWN  8825                                     3605096953                               N/A                                                 0            0            0            0
           0            0            0             0

RFS              12512 RECEIVING    LGWR     8881                                     3605096953                               1                                          1003098742            1       211078       597931

PROCESS            PID STATUS       CLIENT_P CLIENT_PID                               CLIENT_DBID                              GROUP#                                    RESETLOG_ID      THREAD#    SEQUENCE#       BLOCK#
--------- ------------ ------------ -------- ---------------------------------------- ---------------------------------------- ---------------------------------------- ------------ ------------ ------------ ------------
      BLOCKS   DELAY_MINS KNOWN_AGENTS ACTIVE_AGENTS
------------ ------------ ------------ -------------
           2            0            0             0


9 rows selected.

SQL>

This can be used to monitor apply progress for the standby; the following query restricts the report to active threads, producing a more concise output that is easier to read:

select *
from v$managed_standby
where thread# > 0;
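
The standby_apply script presumably wraps this query in SQL*Plus column commands; the following is a sketch, with the widths guessed from the report shown below (the actual script is not reproduced in this post):

-- standby_apply.sql -- plausible formatting; column widths assumed from the output shown
set linesize 200 pagesize 50 numwidth 12
column process format a9
column client_p format a8
column client_pid format a9
column client_dbid format a12
column group# format a9
select *
from v$managed_standby
where thread# > 0;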

Formatting the output to remove dead space and report on only active threads produces:

SQL> @standby_apply

PROCESS            PID STATUS       CLIENT_P CLIENT_PI CLIENT_DBID  GROUP#     RESETLOG_ID      THREAD#    SEQUENCE#       BLOCK#       BLOCKS   DELAY_MINS KNOWN_AGENTS ACTIVE_AGENTS
--------- ------------ ------------ -------- --------- ------------ --------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ -------------
ARCH              8032 CLOSING      ARCH     8032      3605096953   10          1003098742            1       211075      1726464         1496            0            0             0
ARCH              8634 CLOSING      ARCH     8634      3605096953   10          1003098742            1       211077      1730560         1063            0            0             0
ARCH              8638 CLOSING      ARCH     8638      3605096953   11          1003098742            1       211076      1726464          676            0            0             0
MRP0             10498 APPLYING_LOG N/A      N/A       N/A          N/A         1003098742            1       211078      1130676      2048000            0           17            17
RFS              12512 RECEIVING    LGWR     8881      3605096953   1           1003098742            1       211078      1129050         1627            0            0             0

SQL>

Repeated executions will report apply progress for the standby, allowing the DBA to better see how the apply is running.

Yet another way to monitor standby database operations is the V$SESSION_LONGOPS view. The following query will produce a concise, yet usable, report on the standby activity:

set linesize 230 pagesize 30 numwidth 18
column message format a95
column pct_done format 990.90
column sorter noprint
column min_remain format 999,990.90

select message,
       lower(message) sorter,
       sofar,
       case when totalwork = 0 then sofar else totalwork end totalwork,
       (sofar/case when totalwork = 0 then sofar else totalwork end)*100 pct_done,
       round(time_remaining/60, 2) min_remain
from v$session_longops
where sofar <> totalwork
order by sorter;

The sorter column converts all of the message text to lower case so that entries are ordered alphabetically. Sample output is shown below:

SQL> @longops_progress

MESSAGE                                                                                                      SOFAR          TOTALWORK PCT_DONE  MIN_REMAIN
----------------------------------------------------------------------------------------------- ------------------ ------------------ -------- -----------
Media Recovery: Active Apply Rate : 5724 out of 0 KB/sec done                                                 5724               5724   100.00
Media Recovery: Active Time : 984371 out of 0 Seconds done                                                  984371             984371   100.00
Media Recovery: Apply Time per Log : 104 out of 0 Seconds done                                                 104                104   100.00
Media Recovery: Average Apply Rate : 1171 out of 0 KB/sec done                                                1171               1171   100.00
Media Recovery: Checkpoint Time per Log : 1 out of 0 Seconds done                                                1                  1   100.00
Media Recovery: Elapsed Time : 4625569 out of 0 Seconds done                                               4625569            4625569   100.00
Media Recovery: Log Files : 9274 out of 0 Files done                                                          9274               9274   100.00
Media Recovery: Maximum Apply Rate : 42092 out of 0 KB/sec done                                              42092              42092   100.00
Media Recovery: Redo Applied : 5290078 out of 0 Megabytes done                                             5290078            5290078   100.00
select from v$logmnr_contents: Redo Blocks : 386465453992154 out of 0 Redo Blocks done             386465453992154    386465453992154   100.00
select from v$logmnr_contents: Redo Blocks : 389278657571034 out of 0 Redo Blocks done             389278657571034    389278657571034   100.00
select from v$logmnr_contents: Redo Blocks : 390884975339738 out of 0 Redo Blocks done             390884975339738    390884975339738   100.00
select from v$logmnr_contents: Redo Blocks : 391722493962458 out of 0 Redo Blocks done             391722493962458    391722493962458   100.00
select from v$logmnr_contents: Redo Blocks : 391924357425370 out of 0 Redo Blocks done             391924357425370    391924357425370   100.00
select from v$logmnr_contents: Redo Blocks : 391941537294554 out of 0 Redo Blocks done             391941537294554    391941537294554   100.00
select from v$logmnr_contents: Redo Blocks : 445031628040410 out of 0 Redo Blocks done             445031628040410    445031628040410   100.00
select from v$logmnr_contents: Redo Blocks : 445031628040410 out of 0 Redo Blocks done             445031628040410    445031628040410   100.00
select from v$logmnr_contents: Redo Blocks : 445031628040410 out of 0 Redo Blocks done             445031628040410    445031628040410   100.00

18 rows selected.

SQL>

The standby in this example is currently being used for AWS DMS replication, which explains the appearance of the v$logmnr_contents queries. The normal standby processes are categorized as Media Recovery at the top of the query output, which reports the apply rate, the time the standby has been active, the apply time per log, the elapsed time, the number of log files processed, the maximum apply rate and the megabytes of redo processed. Should an apply lag be present it will also be shown in the Media Recovery output. V$SESSION_LONGOPS covers a combination of the areas reported by the other views, but in a single report that may be easier to generate and digest.

All of the queries provided can be put into a single script and configured for scheduled execution through cron (on *NIX systems), the Windows Task Scheduler or any other available scheduling product to produce regular reports that can be emailed to the DBA team; a sketch of such a script follows. Such reports are extremely useful when managing a standby configuration, as important areas of concern are reported and, should issues arise, can be addressed in a timely manner. Simply decide upon a “reasonable” schedule, based upon the usage the standby is subject to, so that reports are both accurate and timely. Remember: the faster the DBA can recognize and identify an issue, the faster it can be addressed and, hopefully, resolved. That’s good for everybody.
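
As a minimal sketch, a combined script might look like the following (the file name standby_monitor.sql, the spool file and the section prompts are illustrative assumptions; the queries themselves are those presented above):

-- standby_monitor.sql -- hypothetical wrapper combining the monitoring queries
set linesize 230 pagesize 50
spool standby_report.lst

prompt ===== Apply lag histogram =====
select name, time, unit, count, last_time_updated
from v$standby_event_histogram
where name = 'apply lag';

prompt ===== Standby redo log activity =====
select group#, thread#, sequence#, used,
       round((used/bytes)*100, 2) "%FULL", archived, status
from v$standby_log;

prompt ===== Apply progress =====
select process, pid, status, thread#, sequence#, block#, blocks
from v$managed_standby
where thread# > 0;

spool off

A cron entry (or Task Scheduler job) then only needs to run sqlplus against this script and mail the spooled report on whatever schedule was chosen.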

So, whatcha doin’?
