Oracle Tips and Tricks — David Fitzjarrell

April 24, 2024

Compatibility Issues

Filed under: General — dfitzjarrell @ 09:56

Oracle can be set to behave like a broad range of database versions, controlled by the compatible initialization parameter. Normally this is set at install time to the version of the database that has been installed. There may be times, however, when a lower compatibility setting is warranted, possibly for performance reasons. Let’s look at an example where a fairly recent version of Oracle (19c) is configured to use 11.2.0.4 compatibility.

The compatible setting does several things:

  • sets datafile compatibility
  • sets character limits based on the version
  • can change the datafile headers
  • enables version-specific features based upon structures available in the listed version

When a database is started, compatibility is checked and, if necessary, changes are made to bring any datafiles from an older release up to specification for the configured version. Because of this, compatibility can never be set to a version lower than the minimum specified in the version-specific documentation. When upgrading an 11.2.0.4 database, compatibility can remain at 11.2.0.4 so that prior backups can still be used in the event a database restore is required. Once a database is set to a higher compatibility, the changes made are not reversible.

Compatibility and optimizer features are different things. The optimizer_features_enable parameter sets the optimizer version of the database engine and can be changed at any time to any valid version. Knowing that, it should seem logical that a 19.3.0.0 database can be set to 11.2.0.4 compatibility AND 19.1.0 for the optimizer features. The optimizer setting enables execution pathways available to the version specified; execution steps like an index skip scan can be enabled even though the database’s physical compatibility is set to 11.2.0.4. This can create an interesting situation with Data Pump.
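
As a point of reference, checking and adjusting the two settings looks roughly like the following minimal SQL*Plus sketch; the values shown simply illustrate the scenario described above, and compatible itself can only be changed through the spfile followed by a restart:

-- check the current settings
show parameter compatible
show parameter optimizer_features_enable

-- leave compatible at the pre-upgrade value and raise only the optimizer feature set;
-- unlike compatible, optimizer_features_enable can be changed at any time
alter system set optimizer_features_enable = '19.1.0' scope = both;

-- or test a setting for a single session only
alter session set optimizer_features_enable = '11.2.0.4';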

With 11.2.0.4 compatibility, file and object names may be limited to 35 characters. This is certainly the case with Data Pump. Job names and their associated table names are generated using a sequence, and as that sequence increases so can the length of the file/job/object names. The sequence in question is AQ$_KUPC$DATAPUMP_QUETAB_1_N, owned by SYS. As this sequence increases, the generated names become longer until they cross that 35-character threshold. At that point (when the sequence value exceeds 9999) errors get thrown:

Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
ORA-31626: job does not exist
ORA-31638: cannot attach to job SYS_IMPORT_FULL_01 for user xxxx
ORA-06512: at "SYS.KUPV$FT", line 1142
ORA-06512: at "SYS.KUPV$FT", line 1744
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPV$FT_INT", line 498
ORA-39077: unable to subscribe agent KUPC$A_1_104552960533000 to queue "KUPC$C_1_20200218104552_0"
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPC$QUE_INT", line 294
ORA-00972: identifier is too long
ORA-06512: at "SYS.DBMS_AQADM_SYS", line 9006
ORA-06512: at "SYS.DBMS_PRVTAQIS", line 1558
ORA-06512: at "SYS.DBMS_PRVTAQIS", line 3431
ORA-06512: at "SYS.DBMS_RULE_ADM", line 296
ORA-06512: at "SYS.DBMS_RULEADM_INTERNAL", line 106
ORA-24000: invalid value "SYS"."KUPC$C_1_20200218104552_0$10016", RULE SET should be of the form [SCHEMA.]NAME
ORA-00972: identifier is too long
ORA-06512: at "SYS.DBMS_RULEADM_INTERNAL", line 109
ORA-06512: at "SYS.DBMS_RULEADM_INTERNAL", line 97
ORA-06512: at "SYS.DBMS_RULE_ADM", line 290
ORA-06512: at "SYS.DBMS_PRVTAQIS", line 3386
ORA-06512: at "SYS.DBMS_PR

and Data Pump exits unsuccessfully.

One solution to this issue is to set compatible to match the database version, an excellent option if performance is not impacted. However, if compatibility has been kept at 11.2.0.4 to retain the performance profile the database had prior to the upgrade, this is not a viable solution. Fear not, help is at hand.

It is possible to rebuild the sequence and the associated pieces using scripts provided by Oracle, used to create these objects at database creation:

@$ORACLE_HOME/rdbms/admin/catdph.sql
@$ORACLE_HOME/rdbms/admin/prvtdtde.plb
@$ORACLE_HOME/rdbms/admin/catdpb.sql
@$ORACLE_HOME/rdbms/admin/dbmspump.sql
@$ORACLE_HOME/rdbms/admin/utlrp.sql

The first four scripts drop and recreate the various Data Pump objects, with the final script performing a recompile of the objects to address any that may have been invalidated when the previous objects were dropped. Note that the sequence in question is NOT recreated by these scripts. That sequence will be recreated once either expdp or impdp is called. This, of course, resets the sequence to 0 and provides shorter file/job/object names that the 11.2.0.4 structures can handle.

If this issue affects an installation it would be prudent to check the sequence value periodically. The following PL/SQL code can be used:

set serveroutput on size unlimited

declare
    cursor get_seq_val is
    select last_number
    from dba_sequences
    where sequence_name = 'AQ$_KUPC$DATAPUMP_QUETAB_1_N';

    v_curr_val      number := 0;
    v_sql           varchar2(400);

begin
    open get_seq_val;
    fetch get_seq_val into v_curr_val;
    close get_seq_val;
    dbms_output.put_line('Current value is: '||v_curr_val);

    if v_curr_val > 9999 then
        dbms_output.put_line('Sequence value too large, rebuild.');
    end if;

end;
/

Creating another script that executes the rebuild scripts would make it easier to fix the sequence size issue; a reference to that script could then be output instead of the generic message shown above. Please note that the database does NOT need to be in upgrade mode, or restarted, to execute the rebuild scripts.
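
A rough sketch of such a wrapper is shown below; the file name and spool destination are simply placeholders, and it does nothing more than string the Oracle-supplied scripts together and log the run:

-- rebuild_dp.sql -- wrapper for the Data Pump rebuild scripts (run as SYSDBA)
spool /tmp/rebuild_dp.log
@$ORACLE_HOME/rdbms/admin/catdph.sql
@$ORACLE_HOME/rdbms/admin/prvtdtde.plb
@$ORACLE_HOME/rdbms/admin/catdpb.sql
@$ORACLE_HOME/rdbms/admin/dbmspump.sql
@$ORACLE_HOME/rdbms/admin/utlrp.sql
spool off

The check shown above could then print a pointer to rebuild_dp.sql instead of the generic message.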

It should be noted that this situation arises when the Data Pump utilities are run frequently, and it may only occur in development and/or test environments. It would be wise to set up a script to return the current sequence value so it can be checked occasionally to ensure it is not nearing the magic limit of 10000 (if compatible for the database is set to 11.2.x).

Sometimes the most unexpected things can throw a monkey wrench into the works. Forewarned is forearmed, as they say, and this should provide enough warning to allow the DBA to avert disaster with Data Pump. Keep a watchful eye on the current sequence value and Data Pump should stay working regardless of how often it is used. Best to be safe rather than sorry.

Unless being awakened in the middle of the night is the goal.

January 3, 2024

“We’re Running Out Of Room!”

Filed under: General — dfitzjarrell @ 14:37

Space management for a database doesn’t stop with the datafiles, as the file system can also present challenges for the DBA. Even though sufficient space may exist inside the datafiles, the file system itself may be an issue. Depending upon the operating system in use, utilities and commands may be available to diagnose such problems. Let’s look at the Linux/UNIX operating systems and see what they offer.

The most basic command to report disk space is df, which displays various information about one or more mounted file systems. Its use is simple:

$ df

Typical output is shown below:

Filesystem 1K-blocks Used Available Use% Mounted on
devtmpfs 64938036 0 64938036 0% /dev
tmpfs 64979944 0 64979944 0% /dev/shm
tmpfs 64979944 3457632 61522312 6% /run
tmpfs 64979944 0 64979944 0% /sys/fs/cgroup
/dev/nvme0n1p2 209702892 82315092 127387800 40% /
/dev/nvme4n1 20961280 2529568 18431712 13% /var
/dev/nvme9n1 10475520 1480616 8994904 15% /tmp
/dev/nvme5n1 20961280 449020 20512260 3% /home
/dev/nvme1n1 10475520 106980 10368540 2% /var/tmp
/dev/nvme6n1 31447040 21430336 10016704 69% /opt
/dev/nvme2n1 10475520 4429264 6046256 43% /var/log
/dev/nvme7n1 10475520 8232828 2242692 79% /var/log/audit
/dev/nvme3n1 16644784028 6943909888 8889933796 44% /backup
/dev/nvme8n1 17111375772 14625941024 1626424908 90% /u02
tmpfs 12995988 0 12995988 0% /run/user/1000

The header provides the basic descriptions of the data to follow. In this example the values are reported in kilobytes, but this may not be true for other installations. Three useful switches are available:

-k
-m
-h

which display the output size values in kilobytes, in megabytes, or in human-readable form, respectively. Let’s give each one a test, starting with df -k:

$ df -k
Filesystem 1K-blocks Used Available Use% Mounted on
devtmpfs 64938036 0 64938036 0% /dev
tmpfs 64979944 0 64979944 0% /dev/shm
tmpfs 64979944 3465824 61514120 6% /run
tmpfs 64979944 0 64979944 0% /sys/fs/cgroup
/dev/nvme0n1p2 209702892 82316100 127386792 40% /
/dev/nvme4n1 20961280 2529548 18431732 13% /var
/dev/nvme9n1 10475520 1480564 8994956 15% /tmp
/dev/nvme5n1 20961280 449020 20512260 3% /home
/dev/nvme1n1 10475520 106980 10368540 2% /var/tmp
/dev/nvme6n1 31447040 21409276 10037764 69% /opt
/dev/nvme2n1 10475520 4427520 6048000 43% /var/log
/dev/nvme7n1 10475520 8232828 2242692 79% /var/log/audit
/dev/nvme3n1 16644784028 6943909888 8889933796 44% /backup
/dev/nvme8n1 17111375772 14625941024 1626424908 90% /u02
tmpfs 12995988 0 12995988 0% /run/user/1000

Now let’s run df -m:

$ df -m
Filesystem 1M-blocks Used Available Use% Mounted on
devtmpfs 63417 0 63417 0% /dev
tmpfs 63457 0 63457 0% /dev/shm
tmpfs 63457 3385 60073 6% /run
tmpfs 63457 0 63457 0% /sys/fs/cgroup
/dev/nvme0n1p2 204788 80388 124401 40% /
/dev/nvme4n1 20470 2472 17999 13% /var
/dev/nvme9n1 10230 1446 8785 15% /tmp
/dev/nvme5n1 20470 439 20032 3% /home
/dev/nvme1n1 10230 105 10126 2% /var/tmp
/dev/nvme6n1 30710 20908 9803 69% /opt
/dev/nvme2n1 10230 4316 5915 43% /var/log
/dev/nvme7n1 10230 8040 2191 79% /var/log/audit
/dev/nvme3n1 16254672 6781162 8681576 44% /backup
/dev/nvme8n1 16710328 14283146 1588306 90% /u02
tmpfs 12692 0 12692 0% /run/user/1000

Notice the values are reported in megabytes, which may make them easier to read and less cumbersome to deal with. Looking now at df -h we see:

$ df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 62G 0 62G 0% /dev
tmpfs 62G 0 62G 0% /dev/shm
tmpfs 62G 3.4G 59G 6% /run
tmpfs 62G 0 62G 0% /sys/fs/cgroup
/dev/nvme0n1p2 200G 79G 122G 40% /
/dev/nvme4n1 20G 2.5G 18G 13% /var
/dev/nvme9n1 10G 1.5G 8.6G 15% /tmp
/dev/nvme5n1 20G 439M 20G 3% /home
/dev/nvme1n1 10G 105M 9.9G 2% /var/tmp
/dev/nvme6n1 30G 21G 9.6G 69% /opt
/dev/nvme2n1 10G 4.3G 5.8G 43% /var/log
/dev/nvme7n1 10G 7.9G 2.2G 79% /var/log/audit
/dev/nvme3n1 16T 6.5T 8.3T 44% /backup
/dev/nvme8n1 16T 14T 1.6T 90% /u02
tmpfs 13G 0 13G 0% /run/user/1000

The precision of the values is reduced, but the unit abbreviations are displayed, allowing the DBA to take in the storage numbers at a quick glance.

The Use% column is arguably the most important value in this output, as it reveals the percentage of space used on the device. The Avail column displays the free space remaining for the device; the Use% value is computed from the Used and Available values. As can be seen, the df command is a good starting point for space investigations.
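
When many mount points are present, a quick way to surface the fullest file systems is to sort on that column. A small sketch, using the mounts from the example above (the -P switch keeps each file system on one line, and the header is stripped before sorting):

$ df -hP | tail -n +2 | sort -rn -k5 | head -5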

As good as df is, it may not provide the depth of information the DBA needs to pinpoint the source of space issues. Enter the du command, a utility that provides information on every object stored in a given directory or on an entire file system. Usually invoked with the -a switch, du reports object sizes in blocks along with a grand total at the end of the output. Examining the /dev file system with du -a reveals:

$ du -a

0 ./vcsa6
0 ./vcs6
0 ./snd/seq
0 ./snd/timer
0 ./snd
0 ./vhost-vsock
0 ./vhost-net
0 ./vhci
0 ./vfio/vfio
0 ./vfio
0 ./uinput
0 ./ppp
0 ./initctl
0 ./hugepages
0 ./mqueue
0 ./disk/by-label/root
0 ./disk/by-label
0 ./disk/by-partuuid/6264d520-3fb9-423f-8ab8-7a0a8e3d3562
0 ./disk/by-partuuid/fac7f1fb-3e8d-4137-a512-961de09a5549
0 ./disk/by-partuuid
0 ./disk/by-uuid/3276edf0-0d19-46db-826e-b5cf9cfd7b91
0 ./disk/by-uuid/209b92d1-3b0e-4ae9-b097-6f1a28febc31
0 ./disk/by-uuid/076f218c-9306-4532-b805-33d413d60a51
0 ./disk/by-uuid/39447692-bf1a-41f6-b579-41452c622067
0 ./disk/by-uuid/20e84628-8c68-465a-9edd-ef116d18efda
0 ./disk/by-uuid/68c15f1a-2be1-43bb-b94c-29dd8757bc24
0 ./disk/by-uuid/3e0db4d4-1971-456a-b6d6-530797fe4d13
0 ./disk/by-uuid/c0a1cf76-5b5e-4e6a-b7f1-d928452ecb14
0 ./disk/by-uuid/dbd9bce3-60fc-4bee-a6bf-9df2d0111acb
0 ./disk/by-uuid/9143e9e3-9fbf-4e77-bf82-466129c9ebfa
0 ./disk/by-uuid/aba1f778-fd32-4bf5-9919-18c8cebd559f
0 ./disk/by-uuid
0 ./disk/by-path/pci-0000:00:1e.0-nvme-1
0 ./disk/by-path/pci-0000:00:04.0-nvme-1-part2
0 ./disk/by-path/pci-0000:00:04.0-nvme-1-part1
0 ./disk/by-path/pci-0000:00:17.0-nvme-1
0 ./disk/by-path/pci-0000:00:04.0-nvme-1
0 ./disk/by-path/pci-0000:00:16.0-nvme-1
0 ./disk/by-path/pci-0000:00:1d.0-nvme-1
0 ./disk/by-path/pci-0000:00:19.0-nvme-1
0 ./disk/by-path/pci-0000:00:1c.0-nvme-1
0 ./disk/by-path/pci-0000:00:1f.0-nvme-1
0 ./disk/by-path/pci-0000:00:18.0-nvme-1
0 ./disk/by-path/pci-0000:00:1b.0-nvme-1
0 ./disk/by-path/pci-0000:00:1a.0-nvme-1
0 ./disk/by-path
0 ./disk/by-id/wwn-nvme.1d0f-766f6c3065313635613235353936306635613736-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001-part1
0 ./disk/by-id/wwn-nvme.1d0f-766f6c3065313635613235353936306635613736-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001-part2
0 ./disk/by-id/nvme-nvme.1d0f-766f6c3035333436363932613663316666313466-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001
0 ./disk/by-id/nvme-Amazon_Elastic_Block_Store_vol05346692a6c1ff14f
0 ./disk/by-id/nvme-nvme.1d0f-766f6c3065313635613235353936306635613736-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001-part2
0 ./disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0e165a255960f5a76-part2
0 ./disk/by-id/nvme-nvme.1d0f-766f6c3065313635613235353936306635613736-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001-part1
0 ./disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0e165a255960f5a76-part1
0 ./disk/by-id/nvme-nvme.1d0f-766f6c3032356238323634323231323365663038-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001
0 ./disk/by-id/nvme-Amazon_Elastic_Block_Store_vol025b826422123ef08
0 ./disk/by-id/nvme-nvme.1d0f-766f6c3065313635613235353936306635613736-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001
0 ./disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0e165a255960f5a76
0 ./disk/by-id/nvme-nvme.1d0f-766f6c3039353230326531333131653764333435-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001
0 ./disk/by-id/nvme-Amazon_Elastic_Block_Store_vol095202e1311e7d345
0 ./disk/by-id/nvme-nvme.1d0f-766f6c3035343837386435333238326633383634-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001
0 ./disk/by-id/nvme-Amazon_Elastic_Block_Store_vol054878d53282f3864
0 ./disk/by-id/nvme-nvme.1d0f-766f6c3065613230613733613639646139623836-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001
0 ./disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0ea20a73a69da9b86
0 ./disk/by-id/nvme-nvme.1d0f-766f6c3032373332373465373565386138366538-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001
0 ./disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0273274e75e8a86e8
0 ./disk/by-id/nvme-nvme.1d0f-766f6c3062656531663461323431353434386531-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001
0 ./disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0bee1f4a2415448e1
0 ./disk/by-id/nvme-nvme.1d0f-766f6c3037356566623137326166363433386235-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001
0 ./disk/by-id/nvme-Amazon_Elastic_Block_Store_vol075efb172af6438b5
0 ./disk/by-id/nvme-nvme.1d0f-766f6c3030303637653037363063613866303064-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001
0 ./disk/by-id/nvme-Amazon_Elastic_Block_Store_vol00067e0760ca8f00d
0 ./disk/by-id/nvme-nvme.1d0f-766f6c3038626562633137353662303736626533-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001
0 ./disk/by-id/nvme-Amazon_Elastic_Block_Store_vol08bebc1756b076be3
0 ./disk/by-id
0 ./disk
0 ./block/259:8
0 ./block/259:2
0 ./block/259:1
0 ./block/259:6
0 ./block/259:0
0 ./block/259:3
0 ./block/259:11
0 ./block/259:9
0 ./block/259:5
0 ./block/259:4
0 ./block/259:7
0 ./block/259:12
0 ./block/259:10
0 ./block
0 ./ng4n1
0 ./ng8n1
0 ./ng9n1
0 ./ng3n1
0 ./ng6n1
0 ./ng5n1
0 ./nvme6n1
0 ./nvme9n1
0 ./nvme5n1
0 ./nvme8n1
0 ./nvme3n1
0 ./nvme4n1
0 ./ng7n1
0 ./ng2n1
0 ./nvme2n1
0 ./nvme7n1
0 ./ng10n1
0 ./ng1n1
0 ./nvme10n1
0 ./nvme1n1
0 ./ng0n1
0 ./nvme0n1p2
0 ./nvme0n1p1
0 ./nvme0n1
0 ./nvme10
0 ./nvme9
0 ./nvme8
0 ./nvme7
0 ./nvme6
0 ./nvme5
0 ./nvme4
0 ./nvme3
0 ./nvme2
0 ./nvme1
0 ./nvme0
0 ./rtc
0 ./char/7:134
0 ./char/7:6
0 ./char/13:68
0 ./char/242:10
0 ./char/242:4
0 ./char/242:8
0 ./char/242:2
0 ./char/242:9
0 ./char/242:5
0 ./char/242:6
0 ./char/242:7
0 ./char/242:3
0 ./char/242:0
0 ./char/242:1
0 ./char/243:9
0 ./char/243:8
0 ./char/243:7
0 ./char/243:10
0 ./char/243:5
0 ./char/243:6
0 ./char/243:4
0 ./char/243:3
0 ./char/243:2
0 ./char/243:0
0 ./char/243:1
0 ./char/7:129
0 ./char/7:128
0 ./char/7:1
0 ./char/7:0
0 ./char/246:0
0 ./char/4:9
0 ./char/4:8
0 ./char/4:7
0 ./char/4:63
0 ./char/4:62
0 ./char/4:61
0 ./char/4:60
0 ./char/4:6
0 ./char/4:59
0 ./char/4:58
0 ./char/4:57
0 ./char/4:56
0 ./char/4:55
0 ./char/4:54
0 ./char/4:53
0 ./char/4:52
0 ./char/4:51
0 ./char/4:50
0 ./char/4:5
0 ./char/4:49
0 ./char/4:48
0 ./char/4:47
0 ./char/4:46
0 ./char/4:45
0 ./char/4:44
0 ./char/4:43
0 ./char/4:42
0 ./char/4:41
0 ./char/4:40
0 ./char/4:4
0 ./char/4:39
0 ./char/4:38
0 ./char/4:37
0 ./char/4:36
0 ./char/4:35
0 ./char/4:34
0 ./char/4:33
0 ./char/4:32
0 ./char/4:31
0 ./char/4:30
0 ./char/4:3
0 ./char/4:29
0 ./char/4:28
0 ./char/4:27
0 ./char/4:26
0 ./char/4:25
0 ./char/4:24
0 ./char/4:23
0 ./char/4:22
0 ./char/4:21
0 ./char/4:20
0 ./char/4:2
0 ./char/4:19
0 ./char/4:18
0 ./char/4:17
0 ./char/4:16
0 ./char/4:15
0 ./char/4:14
0 ./char/4:13
0 ./char/4:12
0 ./char/4:11
0 ./char/4:10
0 ./char/4:1
0 ./char/4:0
0 ./char/5:0
0 ./char/5:2
0 ./char/5:1
0 ./char/162:0
0 ./char/202:9
0 ./char/202:8
0 ./char/202:7
0 ./char/202:6
0 ./char/202:5
0 ./char/202:4
0 ./char/202:3
0 ./char/202:2
0 ./char/202:15
0 ./char/202:14
0 ./char/202:13
0 ./char/202:12
0 ./char/202:11
0 ./char/202:10
0 ./char/202:1
0 ./char/202:0
0 ./char/10:63
0 ./char/10:231
0 ./char/10:144
0 ./char/10:227
0 ./char/10:183
0 ./char/10:228
0 ./char/10:62
0 ./char/10:235
0 ./char/1:5
0 ./char/1:9
0 ./char/1:8
0 ./char/1:4
0 ./char/1:3
0 ./char/1:11
0 ./char/1:1
0 ./char/1:7
0 ./char/13:63
0 ./char/203:9
0 ./char/203:8
0 ./char/203:7
0 ./char/203:6
0 ./char/203:5
0 ./char/203:4
0 ./char/203:3
0 ./char/203:2
0 ./char/203:15
0 ./char/203:14
0 ./char/203:13
0 ./char/203:12
0 ./char/203:11
0 ./char/203:10
0 ./char/203:1
0 ./char/203:0
0 ./char/4:64
0 ./char/4:67
0 ./char/4:66
0 ./char/4:65
0 ./char/13:66
0 ./char/251:0
0 ./char/13:65
0 ./char/13:64
0 ./char/10:236
0 ./char/13:32
0 ./char/13:67
0 ./char
0 ./uhid
0 ./mapper/control
0 ./mapper
0 ./net/tun
0 ./net
0 ./loop-control
0 ./fuse
0 ./log
0 ./stderr
0 ./stdout
0 ./stdin
0 ./fd
0 ./core
0 ./pts/0
0 ./pts/ptmx
0 ./pts
0 ./shm/orahpatch_CH33
0 ./shm
0 ./cpu_dma_latency
0 ./mcelog
0 ./rtc0
0 ./input/event4
0 ./input/by-path/platform-pcspkr-event-spkr
0 ./input/by-path/platform-i8042-serio-0-event-kbd
0 ./input/by-path/platform-i8042-serio-1-event-mouse
0 ./input/by-path/platform-i8042-serio-1-mouse
0 ./input/by-path
0 ./input/event3
0 ./input/mouse0
0 ./input/event2
0 ./input/event1
0 ./input/event0
0 ./input/mice
0 ./input
0 ./usbmon0
0 ./nvram
0 ./hpet
0 ./raw/rawctl
0 ./raw
0 ./ttyS3
0 ./ttyS2
0 ./ttyS1
0 ./ttyS0
0 ./ptmx
0 ./autofs
0 ./snapshot
0 ./cpu/15/cpuid
0 ./cpu/15/msr
0 ./cpu/15
0 ./cpu/14/cpuid
0 ./cpu/14/msr
0 ./cpu/14
0 ./cpu/13/cpuid
0 ./cpu/13/msr
0 ./cpu/13
0 ./cpu/12/cpuid
0 ./cpu/12/msr
0 ./cpu/12
0 ./cpu/11/cpuid
0 ./cpu/11/msr
0 ./cpu/11
0 ./cpu/10/cpuid
0 ./cpu/10/msr
0 ./cpu/10
0 ./cpu/9/cpuid
0 ./cpu/9/msr
0 ./cpu/9
0 ./cpu/8/cpuid
0 ./cpu/8/msr
0 ./cpu/8
0 ./cpu/7/cpuid
0 ./cpu/7/msr
0 ./cpu/7
0 ./cpu/6/cpuid
0 ./cpu/6/msr
0 ./cpu/6
0 ./cpu/5/cpuid
0 ./cpu/5/msr
0 ./cpu/5
0 ./cpu/4/cpuid
0 ./cpu/4/msr
0 ./cpu/4
0 ./cpu/3/cpuid
0 ./cpu/3/msr
0 ./cpu/3
0 ./cpu/2/cpuid
0 ./cpu/2/msr
0 ./cpu/2
0 ./cpu/1/cpuid
0 ./cpu/1/msr
0 ./cpu/1
0 ./cpu/0/cpuid
0 ./cpu/0/msr
0 ./cpu/0
0 ./cpu
0 ./hwrng
0 ./tty63
0 ./tty62
0 ./tty61
0 ./tty60
0 ./tty59
0 ./tty58
0 ./tty57
0 ./tty56
0 ./tty55
0 ./tty54
0 ./tty53
0 ./tty52
0 ./tty51
0 ./tty50
0 ./tty49
0 ./tty48
0 ./tty47
0 ./tty46
0 ./tty45
0 ./tty44
0 ./tty43
0 ./tty42
0 ./tty41
0 ./tty40
0 ./tty39
0 ./tty38
0 ./tty37
0 ./tty36
0 ./tty35
0 ./tty34
0 ./tty33
0 ./tty32
0 ./tty31
0 ./tty30
0 ./tty29
0 ./tty28
0 ./tty27
0 ./tty26
0 ./tty25
0 ./tty24
0 ./tty23
0 ./tty22
0 ./tty21
0 ./tty20
0 ./tty19
0 ./tty18
0 ./tty17
0 ./tty16
0 ./tty15
0 ./tty14
0 ./tty13
0 ./tty12
0 ./tty11
0 ./tty10
0 ./tty9
0 ./tty8
0 ./tty7
0 ./tty6
0 ./tty5
0 ./tty4
0 ./tty3
0 ./tty2
0 ./tty1
0 ./vcsa1
0 ./vcs1
0 ./vcsa
0 ./vcs
0 ./tty0
0 ./console
0 ./tty
0 ./kmsg
0 ./urandom
0 ./random
0 ./full
0 ./zero
0 ./port
0 ./null
0 ./mem
0 ./vga_arbiter
0 .

The /dev file system contains entries for all configured devices on the server; since device entries consume no space the block counts are all 0. Other file systems will have non-zero values. Since du -a reports on all objects in the specified location it can show which objects are consuming the largest amounts of space, better directing the DBA to areas of concern.
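
For example, a hedged sketch of pointing du at a mount point (the path here is only illustrative) to list the twenty largest objects; the -h flag to sort requires GNU coreutils:

$ du -ah /u02 2>/dev/null | sort -rh | head -20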

As good as du and df are, neither one provides a date/time stamp showing which files/objects have been most recently modified. To get that information a common command, ls, is used. Command switches of interest for such an investigation are:

-l  long listing, including file size and modification date
-t  order the results by time, most recently modified on top
-r  reverse the time ordering
-R  perform a recursive report

Going back to the /dev file system, executing ls -ltrR reports:

$ ls -ltrR
.:
total 0
drwxrwxrwt. 2 root root 40 Dec 27 21:12 mqueue
drwxr-xr-x. 18 root root 360 Dec 27 21:12 cpu
drwxr-xr-x. 2 root root 60 Dec 27 21:12 raw
drwxr-xr-x. 2 root root 0 Dec 27 21:12 pts
lrwxrwxrwx. 1 root root 15 Dec 27 21:12 stdout -> /proc/self/fd/1
lrwxrwxrwx. 1 root root 15 Dec 27 21:12 stdin -> /proc/self/fd/0
lrwxrwxrwx. 1 root root 15 Dec 27 21:12 stderr -> /proc/self/fd/2
lrwxrwxrwx. 1 root root 13 Dec 27 21:12 fd -> /proc/self/fd
lrwxrwxrwx. 1 root root 11 Dec 27 21:12 core -> /proc/kcore
lrwxrwxrwx. 1 root root 28 Dec 27 21:12 log -> /run/systemd/journal/dev-log
crw——-. 1 root root 10, 239 Dec 27 21:12 uhid
drwxr-xr-x. 2 root root 60 Dec 27 21:12 net
drwxr-xr-x. 2 root root 60 Dec 27 21:12 mapper
drwxr-xr-x. 7 root root 140 Dec 27 21:12 disk
drwxr-xr-x. 2 root root 300 Dec 27 21:12 block
drwxr-xr-x. 2 root root 0 Dec 27 21:12 hugepages
lrwxrwxrwx. 1 root root 12 Dec 27 21:12 initctl -> /run/initctl
crw——-. 1 root root 10, 223 Dec 27 21:12 uinput
crw——-. 1 root root 108, 0 Dec 27 21:12 ppp
crw——-. 1 root root 10, 241 Dec 27 21:12 vhost-vsock
crw——-. 1 root root 10, 238 Dec 27 21:12 vhost-net
crw——-. 1 root root 10, 137 Dec 27 21:12 vhci
drwxr-xr-x. 2 root root 60 Dec 27 21:12 vfio
drwxr-xr-x. 2 root root 80 Dec 27 21:12 snd
crw-rw—-. 1 root disk 10, 237 Dec 27 21:12 loop-control
crw-rw-rw-. 1 root root 10, 229 Dec 27 21:12 fuse
crw——-. 1 root root 243, 0 Dec 27 21:12 nvme0
crw——-. 1 root root 243, 1 Dec 27 21:12 nvme1
crw——-. 1 root root 243, 4 Dec 27 21:12 nvme4
crw——-. 1 root root 243, 3 Dec 27 21:12 nvme3
crw——-. 1 root root 243, 2 Dec 27 21:12 nvme2
crw——-. 1 root root 242, 3 Dec 27 21:12 ng2n1
crw——-. 1 root root 242, 1 Dec 27 21:12 ng1n1
crw——-. 1 root root 242, 0 Dec 27 21:12 ng0n1
crw——-. 1 root root 243, 7 Dec 27 21:12 nvme7
crw——-. 1 root root 243, 5 Dec 27 21:12 nvme5
crw——-. 1 root root 242, 7 Dec 27 21:12 ng3n1
crw——-. 1 root root 243, 6 Dec 27 21:12 nvme6
brw-rw—-. 1 root disk 259, 0 Dec 27 21:12 nvme0n1
crw——-. 1 root root 242, 4 Dec 27 21:12 ng7n1
crw——-. 1 root root 242, 5 Dec 27 21:12 ng5n1
crw——-. 1 root root 242, 9 Dec 27 21:12 ng4n1
crw——-. 1 root root 243, 9 Dec 27 21:12 nvme9
crw——-. 1 root root 243, 8 Dec 27 21:12 nvme8
crw——-. 1 root root 242, 6 Dec 27 21:12 ng6n1
crw——-. 1 root root 242, 8 Dec 27 21:12 ng9n1
crw——-. 1 root root 242, 10 Dec 27 21:12 ng8n1
crw——-. 1 root root 243, 10 Dec 27 21:12 nvme10
brw-rw—-. 1 root disk 259, 2 Dec 27 21:12 nvme0n1p2
crw——-. 1 root root 242, 2 Dec 27 21:12 ng10n1
crw-rw—-. 1 root dialout 4, 67 Dec 27 21:12 ttyS3
crw-rw—-. 1 root dialout 4, 66 Dec 27 21:12 ttyS2
crw-rw—-. 1 root dialout 4, 65 Dec 27 21:12 ttyS1
crw——-. 1 root root 251, 0 Dec 27 21:12 rtc0
lrwxrwxrwx. 1 root root 4 Dec 27 21:12 rtc -> rtc0
brw-rw—-. 1 root disk 259, 1 Dec 27 21:12 nvme0n1p1
brw-rw—-. 1 root disk 259, 12 Dec 27 21:12 nvme6n1
brw-rw—-. 1 root disk 259, 10 Dec 27 21:12 nvme5n1
drwxr-xr-x. 3 root root 200 Dec 27 21:12 input
brw-rw—-. 1 root disk 259, 7 Dec 27 21:12 nvme3n1
brw-rw—-. 1 root disk 259, 11 Dec 27 21:12 nvme8n1
brw-rw—-. 1 root disk 259, 8 Dec 27 21:12 nvme9n1
brw-rw—-. 1 root disk 259, 6 Dec 27 21:12 nvme2n1
brw-rw—-. 1 root disk 259, 5 Dec 27 21:12 nvme7n1
brw-rw—-. 1 root disk 259, 9 Dec 27 21:12 nvme4n1
brw-rw—-. 1 root disk 259, 3 Dec 27 21:12 nvme1n1
brw-rw—-. 1 root disk 259, 4 Dec 27 21:12 nvme10n1
crw-rw-rw-. 1 root root 1, 9 Dec 27 21:12 urandom
crw-rw-rw-. 1 root root 1, 8 Dec 27 21:12 random
crw-r—–. 1 root kmem 1, 4 Dec 27 21:12 port
crw-rw-rw-. 1 root root 1, 3 Dec 27 21:12 null
crw-r—–. 1 root kmem 1, 1 Dec 27 21:12 mem
crw-r–r–. 1 root root 1, 11 Dec 27 21:12 kmsg
crw-rw-rw-. 1 root root 1, 7 Dec 27 21:12 full
crw-rw-rw-. 1 root root 1, 5 Dec 27 21:12 zero
crw——-. 1 root root 10, 144 Dec 27 21:12 nvram
crw——-. 1 root root 10, 227 Dec 27 21:12 mcelog
crw——-. 1 root root 10, 183 Dec 27 21:12 hwrng
crw——-. 1 root root 10, 228 Dec 27 21:12 hpet
crw——-. 1 root root 10, 62 Dec 27 21:12 cpu_dma_latency
crw-r–r–. 1 root root 10, 235 Dec 27 21:12 autofs
crw——-. 1 root root 10, 63 Dec 27 21:12 vga_arbiter
crw——-. 1 root root 10, 231 Dec 27 21:12 snapshot
crw-rw-rw-. 1 root tty 5, 0 Dec 27 21:12 tty
crw——-. 1 root root 5, 1 Dec 27 21:12 console
crw–w—-. 1 root tty 4, 16 Dec 27 21:12 tty16
crw–w—-. 1 root tty 4, 13 Dec 27 21:12 tty13
crw–w—-. 1 root tty 4, 12 Dec 27 21:12 tty12
crw–w—-. 1 root tty 4, 11 Dec 27 21:12 tty11
crw–w—-. 1 root tty 4, 10 Dec 27 21:12 tty10
crw–w—-. 1 root tty 4, 1 Dec 27 21:12 tty1
crw–w—-. 1 root tty 4, 0 Dec 27 21:12 tty0
crw–w—-. 1 root tty 4, 23 Dec 27 21:12 tty23
crw–w—-. 1 root tty 4, 22 Dec 27 21:12 tty22
crw–w—-. 1 root tty 4, 21 Dec 27 21:12 tty21
crw–w—-. 1 root tty 4, 20 Dec 27 21:12 tty20
crw–w—-. 1 root tty 4, 2 Dec 27 21:12 tty2
crw–w—-. 1 root tty 4, 19 Dec 27 21:12 tty19
crw–w—-. 1 root tty 4, 18 Dec 27 21:12 tty18
crw–w—-. 1 root tty 4, 17 Dec 27 21:12 tty17
crw–w—-. 1 root tty 4, 15 Dec 27 21:12 tty15
crw–w—-. 1 root tty 4, 14 Dec 27 21:12 tty14
crw–w—-. 1 root tty 4, 3 Dec 27 21:12 tty3
crw–w—-. 1 root tty 4, 29 Dec 27 21:12 tty29
crw–w—-. 1 root tty 4, 28 Dec 27 21:12 tty28
crw–w—-. 1 root tty 4, 27 Dec 27 21:12 tty27
crw–w—-. 1 root tty 4, 26 Dec 27 21:12 tty26
crw–w—-. 1 root tty 4, 25 Dec 27 21:12 tty25
crw–w—-. 1 root tty 4, 24 Dec 27 21:12 tty24
crw–w—-. 1 root tty 4, 37 Dec 27 21:12 tty37
crw–w—-. 1 root tty 4, 36 Dec 27 21:12 tty36
crw–w—-. 1 root tty 4, 35 Dec 27 21:12 tty35
crw–w—-. 1 root tty 4, 34 Dec 27 21:12 tty34
crw–w—-. 1 root tty 4, 33 Dec 27 21:12 tty33
crw–w—-. 1 root tty 4, 32 Dec 27 21:12 tty32
crw–w—-. 1 root tty 4, 31 Dec 27 21:12 tty31
crw–w—-. 1 root tty 4, 30 Dec 27 21:12 tty30
crw–w—-. 1 root tty 4, 44 Dec 27 21:12 tty44
crw–w—-. 1 root tty 4, 43 Dec 27 21:12 tty43
crw–w—-. 1 root tty 4, 42 Dec 27 21:12 tty42
crw–w—-. 1 root tty 4, 41 Dec 27 21:12 tty41
crw–w—-. 1 root tty 4, 40 Dec 27 21:12 tty40
crw–w—-. 1 root tty 4, 4 Dec 27 21:12 tty4
crw–w—-. 1 root tty 4, 39 Dec 27 21:12 tty39
crw–w—-. 1 root tty 4, 38 Dec 27 21:12 tty38
crw–w—-. 1 root tty 4, 52 Dec 27 21:12 tty52
crw–w—-. 1 root tty 4, 51 Dec 27 21:12 tty51
crw–w—-. 1 root tty 4, 50 Dec 27 21:12 tty50
crw–w—-. 1 root tty 4, 5 Dec 27 21:12 tty5
crw–w—-. 1 root tty 4, 49 Dec 27 21:12 tty49
crw–w—-. 1 root tty 4, 48 Dec 27 21:12 tty48
crw–w—-. 1 root tty 4, 47 Dec 27 21:12 tty47
crw–w—-. 1 root tty 4, 46 Dec 27 21:12 tty46
crw–w—-. 1 root tty 4, 45 Dec 27 21:12 tty45
crw–w—-. 1 root tty 4, 59 Dec 27 21:12 tty59
crw–w—-. 1 root tty 4, 58 Dec 27 21:12 tty58
crw–w—-. 1 root tty 4, 57 Dec 27 21:12 tty57
crw–w—-. 1 root tty 4, 56 Dec 27 21:12 tty56
crw–w—-. 1 root tty 4, 55 Dec 27 21:12 tty55
crw–w—-. 1 root tty 4, 54 Dec 27 21:12 tty54
crw–w—-. 1 root tty 4, 53 Dec 27 21:12 tty53
crw–w—-. 1 root tty 4, 9 Dec 27 21:12 tty9
crw–w—-. 1 root tty 4, 8 Dec 27 21:12 tty8
crw–w—-. 1 root tty 4, 7 Dec 27 21:12 tty7
crw–w—-. 1 root tty 4, 63 Dec 27 21:12 tty63
crw–w—-. 1 root tty 4, 62 Dec 27 21:12 tty62
crw–w—-. 1 root tty 4, 61 Dec 27 21:12 tty61
crw–w—-. 1 root tty 4, 60 Dec 27 21:12 tty60
crw–w—-. 1 root tty 4, 6 Dec 27 21:12 tty6
crw-rw—-. 1 root tty 7, 129 Dec 27 21:12 vcsa1
crw-rw—-. 1 root tty 7, 128 Dec 27 21:12 vcsa
crw-rw—-. 1 root tty 7, 1 Dec 27 21:12 vcs1
crw-rw—-. 1 root tty 7, 0 Dec 27 21:12 vcs
crw——-. 1 root root 246, 0 Dec 27 21:12 usbmon0
crw-rw—-. 1 root tty 7, 134 Dec 27 21:12 vcsa6
crw-rw—-. 1 root tty 7, 6 Dec 27 21:12 vcs6
drwxr-xr-x. 2 root root 3200 Dec 27 21:12 char
crw–w—-. 1 root tty 4, 64 Dec 27 21:12 ttyS0
drwxrwxrwt. 3 root root 60 Jan 3 14:45 shm
crw-rw-rw-. 1 root tty 5, 2 Jan 3 16:01 ptmx

./mqueue:
total 0

./cpu:
total 0
drwxr-xr-x. 2 root root 80 Dec 27 21:12 1
drwxr-xr-x. 2 root root 80 Dec 27 21:12 0
drwxr-xr-x. 2 root root 80 Dec 27 21:12 9
drwxr-xr-x. 2 root root 80 Dec 27 21:12 8
drwxr-xr-x. 2 root root 80 Dec 27 21:12 7
drwxr-xr-x. 2 root root 80 Dec 27 21:12 6
drwxr-xr-x. 2 root root 80 Dec 27 21:12 5
drwxr-xr-x. 2 root root 80 Dec 27 21:12 4
drwxr-xr-x. 2 root root 80 Dec 27 21:12 3
drwxr-xr-x. 2 root root 80 Dec 27 21:12 2
drwxr-xr-x. 2 root root 80 Dec 27 21:12 15
drwxr-xr-x. 2 root root 80 Dec 27 21:12 14
drwxr-xr-x. 2 root root 80 Dec 27 21:12 13
drwxr-xr-x. 2 root root 80 Dec 27 21:12 12
drwxr-xr-x. 2 root root 80 Dec 27 21:12 11
drwxr-xr-x. 2 root root 80 Dec 27 21:12 10

./cpu/1:
total 0
crw——-. 1 root root 203, 1 Dec 27 21:12 cpuid
crw——-. 1 root root 202, 1 Dec 27 21:12 msr

./cpu/0:
total 0
crw——-. 1 root root 203, 0 Dec 27 21:12 cpuid
crw——-. 1 root root 202, 0 Dec 27 21:12 msr

./cpu/9:
total 0
crw——-. 1 root root 203, 9 Dec 27 21:12 cpuid
crw——-. 1 root root 202, 9 Dec 27 21:12 msr

./cpu/8:
total 0
crw——-. 1 root root 203, 8 Dec 27 21:12 cpuid
crw——-. 1 root root 202, 8 Dec 27 21:12 msr

./cpu/7:
total 0
crw——-. 1 root root 203, 7 Dec 27 21:12 cpuid
crw——-. 1 root root 202, 7 Dec 27 21:12 msr

./cpu/6:
total 0
crw——-. 1 root root 203, 6 Dec 27 21:12 cpuid
crw——-. 1 root root 202, 6 Dec 27 21:12 msr

./cpu/5:
total 0
crw——-. 1 root root 203, 5 Dec 27 21:12 cpuid
crw——-. 1 root root 202, 5 Dec 27 21:12 msr

./cpu/4:
total 0
crw——-. 1 root root 203, 4 Dec 27 21:12 cpuid
crw——-. 1 root root 202, 4 Dec 27 21:12 msr

./cpu/3:
total 0
crw——-. 1 root root 203, 3 Dec 27 21:12 cpuid
crw——-. 1 root root 202, 3 Dec 27 21:12 msr

./cpu/2:
total 0
crw——-. 1 root root 203, 2 Dec 27 21:12 cpuid
crw——-. 1 root root 202, 2 Dec 27 21:12 msr

./cpu/15:
total 0
crw——-. 1 root root 203, 15 Dec 27 21:12 cpuid
crw——-. 1 root root 202, 15 Dec 27 21:12 msr

./cpu/14:
total 0
crw——-. 1 root root 203, 14 Dec 27 21:12 cpuid
crw——-. 1 root root 202, 14 Dec 27 21:12 msr

./cpu/13:
total 0
crw——-. 1 root root 203, 13 Dec 27 21:12 cpuid
crw——-. 1 root root 202, 13 Dec 27 21:12 msr

./cpu/12:
total 0
crw——-. 1 root root 203, 12 Dec 27 21:12 cpuid
crw——-. 1 root root 202, 12 Dec 27 21:12 msr

./cpu/11:
total 0
crw——-. 1 root root 203, 11 Dec 27 21:12 cpuid
crw——-. 1 root root 202, 11 Dec 27 21:12 msr

./cpu/10:
total 0
crw——-. 1 root root 203, 10 Dec 27 21:12 cpuid
crw——-. 1 root root 202, 10 Dec 27 21:12 msr

./raw:
total 0
crw-rw—-. 1 root disk 162, 0 Dec 27 21:12 rawctl

./pts:
total 0
c———. 1 root root 5, 2 Dec 27 21:12 ptmx
crw–w—-. 1 ec2-user tty 136, 0 Jan 3 16:01 0

./net:
total 0
crw-rw-rw-. 1 root root 10, 200 Dec 27 21:12 tun

./mapper:
total 0
crw——-. 1 root root 10, 236 Dec 27 21:12 control

./disk:
total 0
drwxr-xr-x. 2 root root 80 Dec 27 21:12 by-partuuid
drwxr-xr-x. 2 root root 60 Dec 27 21:12 by-label
drwxr-xr-x. 2 root root 260 Dec 27 21:12 by-uuid
drwxr-xr-x. 2 root root 300 Dec 27 21:12 by-path
drwxr-xr-x. 2 root root 600 Dec 27 21:12 by-id

./disk/by-partuuid:
total 0
lrwxrwxrwx. 1 root root 15 Dec 27 21:12 6264d520-3fb9-423f-8ab8-7a0a8e3d3562 -> ../../nvme0n1p2
lrwxrwxrwx. 1 root root 15 Dec 27 21:12 fac7f1fb-3e8d-4137-a512-961de09a5549 -> ../../nvme0n1p1

./disk/by-label:
total 0
lrwxrwxrwx. 1 root root 15 Dec 27 21:12 root -> ../../nvme0n1p2

./disk/by-uuid:
total 0
lrwxrwxrwx. 1 root root 15 Dec 27 21:12 209b92d1-3b0e-4ae9-b097-6f1a28febc31 -> ../../nvme0n1p2
lrwxrwxrwx. 1 root root 13 Dec 27 21:12 aba1f778-fd32-4bf5-9919-18c8cebd559f -> ../../nvme5n1
lrwxrwxrwx. 1 root root 13 Dec 27 21:12 9143e9e3-9fbf-4e77-bf82-466129c9ebfa -> ../../nvme6n1
lrwxrwxrwx. 1 root root 13 Dec 27 21:12 dbd9bce3-60fc-4bee-a6bf-9df2d0111acb -> ../../nvme3n1
lrwxrwxrwx. 1 root root 13 Dec 27 21:12 20e84628-8c68-465a-9edd-ef116d18efda -> ../../nvme8n1
lrwxrwxrwx. 1 root root 13 Dec 27 21:12 3276edf0-0d19-46db-826e-b5cf9cfd7b91 -> ../../nvme9n1
lrwxrwxrwx. 1 root root 13 Dec 27 21:12 076f218c-9306-4532-b805-33d413d60a51 -> ../../nvme2n1
lrwxrwxrwx. 1 root root 13 Dec 27 21:12 68c15f1a-2be1-43bb-b94c-29dd8757bc24 -> ../../nvme4n1
lrwxrwxrwx. 1 root root 13 Dec 27 21:12 3e0db4d4-1971-456a-b6d6-530797fe4d13 -> ../../nvme7n1
lrwxrwxrwx. 1 root root 14 Dec 27 21:12 c0a1cf76-5b5e-4e6a-b7f1-d928452ecb14 -> ../../nvme10n1
lrwxrwxrwx. 1 root root 13 Dec 27 21:12 39447692-bf1a-41f6-b579-41452c622067 -> ../../nvme1n1

./disk/by-path:
total 0
lrwxrwxrwx. 1 root root 13 Dec 27 21:12 pci-0000:00:04.0-nvme-1 -> ../../nvme0n1
lrwxrwxrwx. 1 root root 15 Dec 27 21:12 pci-0000:00:04.0-nvme-1-part2 -> ../../nvme0n1p2
lrwxrwxrwx. 1 root root 15 Dec 27 21:12 pci-0000:00:04.0-nvme-1-part1 -> ../../nvme0n1p1
lrwxrwxrwx. 1 root root 13 Dec 27 21:12 pci-0000:00:1b.0-nvme-1 -> ../../nvme6n1
lrwxrwxrwx. 1 root root 13 Dec 27 21:12 pci-0000:00:1a.0-nvme-1 -> ../../nvme5n1
lrwxrwxrwx. 1 root root 13 Dec 27 21:12 pci-0000:00:18.0-nvme-1 -> ../../nvme3n1
lrwxrwxrwx. 1 root root 13 Dec 27 21:12 pci-0000:00:1d.0-nvme-1 -> ../../nvme8n1
lrwxrwxrwx. 1 root root 13 Dec 27 21:12 pci-0000:00:1e.0-nvme-1 -> ../../nvme9n1
lrwxrwxrwx. 1 root root 13 Dec 27 21:12 pci-0000:00:17.0-nvme-1 -> ../../nvme2n1
lrwxrwxrwx. 1 root root 13 Dec 27 21:12 pci-0000:00:16.0-nvme-1 -> ../../nvme1n1
lrwxrwxrwx. 1 root root 13 Dec 27 21:12 pci-0000:00:1c.0-nvme-1 -> ../../nvme7n1
lrwxrwxrwx. 1 root root 13 Dec 27 21:12 pci-0000:00:19.0-nvme-1 -> ../../nvme4n1
lrwxrwxrwx. 1 root root 14 Dec 27 21:12 pci-0000:00:1f.0-nvme-1 -> ../../nvme10n1

./disk/by-id:
total 0
lrwxrwxrwx. 1 root root 13 Dec 27 21:12 nvme-nvme.1d0f-766f6c3065313635613235353936306635613736-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001 -> ../../nvme0n1
lrwxrwxrwx. 1 root root 13 Dec 27 21:12 nvme-Amazon_Elastic_Block_Store_vol0e165a255960f5a76 -> ../../nvme0n1
lrwxrwxrwx. 1 root root 15 Dec 27 21:12 nvme-nvme.1d0f-766f6c3065313635613235353936306635613736-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001-part2 -> ../../nvme0n1p2
lrwxrwxrwx. 1 root root 15 Dec 27 21:12 nvme-Amazon_Elastic_Block_Store_vol0e165a255960f5a76-part2 -> ../../nvme0n1p2
lrwxrwxrwx. 1 root root 15 Dec 27 21:12 wwn-nvme.1d0f-766f6c3065313635613235353936306635613736-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001-part2 -> ../../nvme0n1p2
lrwxrwxrwx. 1 root root 15 Dec 27 21:12 nvme-nvme.1d0f-766f6c3065313635613235353936306635613736-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001-part1 -> ../../nvme0n1p1
lrwxrwxrwx. 1 root root 15 Dec 27 21:12 nvme-Amazon_Elastic_Block_Store_vol0e165a255960f5a76-part1 -> ../../nvme0n1p1
lrwxrwxrwx. 1 root root 15 Dec 27 21:12 wwn-nvme.1d0f-766f6c3065313635613235353936306635613736-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001-part1 -> ../../nvme0n1p1
lrwxrwxrwx. 1 root root 13 Dec 27 21:12 nvme-nvme.1d0f-766f6c3030303637653037363063613866303064-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001 -> ../../nvme6n1
lrwxrwxrwx. 1 root root 13 Dec 27 21:12 nvme-Amazon_Elastic_Block_Store_vol00067e0760ca8f00d -> ../../nvme6n1
lrwxrwxrwx. 1 root root 13 Dec 27 21:12 nvme-nvme.1d0f-766f6c3038626562633137353662303736626533-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001 -> ../../nvme5n1
lrwxrwxrwx. 1 root root 13 Dec 27 21:12 nvme-Amazon_Elastic_Block_Store_vol08bebc1756b076be3 -> ../../nvme5n1
lrwxrwxrwx. 1 root root 13 Dec 27 21:12 nvme-nvme.1d0f-766f6c3037356566623137326166363433386235-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001 -> ../../nvme3n1
lrwxrwxrwx. 1 root root 13 Dec 27 21:12 nvme-Amazon_Elastic_Block_Store_vol075efb172af6438b5 -> ../../nvme3n1
lrwxrwxrwx. 1 root root 13 Dec 27 21:12 nvme-nvme.1d0f-766f6c3035343837386435333238326633383634-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001 -> ../../nvme8n1
lrwxrwxrwx. 1 root root 13 Dec 27 21:12 nvme-Amazon_Elastic_Block_Store_vol054878d53282f3864 -> ../../nvme8n1
lrwxrwxrwx. 1 root root 13 Dec 27 21:12 nvme-nvme.1d0f-766f6c3035333436363932613663316666313466-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001 -> ../../nvme9n1
lrwxrwxrwx. 1 root root 13 Dec 27 21:12 nvme-Amazon_Elastic_Block_Store_vol05346692a6c1ff14f -> ../../nvme9n1
lrwxrwxrwx. 1 root root 13 Dec 27 21:12 nvme-nvme.1d0f-766f6c3032356238323634323231323365663038-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001 -> ../../nvme2n1
lrwxrwxrwx. 1 root root 13 Dec 27 21:12 nvme-Amazon_Elastic_Block_Store_vol025b826422123ef08 -> ../../nvme2n1
lrwxrwxrwx. 1 root root 13 Dec 27 21:12 nvme-nvme.1d0f-766f6c3065613230613733613639646139623836-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001 -> ../../nvme4n1
lrwxrwxrwx. 1 root root 13 Dec 27 21:12 nvme-nvme.1d0f-766f6c3039353230326531333131653764333435-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001 -> ../../nvme1n1
lrwxrwxrwx. 1 root root 13 Dec 27 21:12 nvme-nvme.1d0f-766f6c3032373332373465373565386138366538-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001 -> ../../nvme7n1
lrwxrwxrwx. 1 root root 13 Dec 27 21:12 nvme-Amazon_Elastic_Block_Store_vol0ea20a73a69da9b86 -> ../../nvme4n1
lrwxrwxrwx. 1 root root 13 Dec 27 21:12 nvme-Amazon_Elastic_Block_Store_vol095202e1311e7d345 -> ../../nvme1n1
lrwxrwxrwx. 1 root root 13 Dec 27 21:12 nvme-Amazon_Elastic_Block_Store_vol0273274e75e8a86e8 -> ../../nvme7n1
lrwxrwxrwx. 1 root root 14 Dec 27 21:12 nvme-nvme.1d0f-766f6c3062656531663461323431353434386531-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001 -> ../../nvme10n1
lrwxrwxrwx. 1 root root 14 Dec 27 21:12 nvme-Amazon_Elastic_Block_Store_vol0bee1f4a2415448e1 -> ../../nvme10n1

./block:
total 0
lrwxrwxrwx. 1 root root 10 Dec 27 21:12 259:0 -> ../nvme0n1
lrwxrwxrwx. 1 root root 12 Dec 27 21:12 259:2 -> ../nvme0n1p2
lrwxrwxrwx. 1 root root 12 Dec 27 21:12 259:1 -> ../nvme0n1p1
lrwxrwxrwx. 1 root root 10 Dec 27 21:12 259:12 -> ../nvme6n1
lrwxrwxrwx. 1 root root 10 Dec 27 21:12 259:10 -> ../nvme5n1
lrwxrwxrwx. 1 root root 10 Dec 27 21:12 259:7 -> ../nvme3n1
lrwxrwxrwx. 1 root root 10 Dec 27 21:12 259:11 -> ../nvme8n1
lrwxrwxrwx. 1 root root 10 Dec 27 21:12 259:8 -> ../nvme9n1
lrwxrwxrwx. 1 root root 10 Dec 27 21:12 259:6 -> ../nvme2n1
lrwxrwxrwx. 1 root root 10 Dec 27 21:12 259:9 -> ../nvme4n1
lrwxrwxrwx. 1 root root 10 Dec 27 21:12 259:5 -> ../nvme7n1
lrwxrwxrwx. 1 root root 10 Dec 27 21:12 259:3 -> ../nvme1n1
lrwxrwxrwx. 1 root root 11 Dec 27 21:12 259:4 -> ../nvme10n1

./hugepages:
total 0

./vfio:
total 0
crw——-. 1 root root 10, 196 Dec 27 21:12 vfio

./snd:
total 0
crw-rw—-. 1 root audio 116, 33 Dec 27 21:12 timer
crw-rw—-. 1 root audio 116, 1 Dec 27 21:12 seq

./input:
total 0
crw-rw—-. 1 root input 13, 32 Dec 27 21:12 mouse0
crw-rw—-. 1 root input 13, 63 Dec 27 21:12 mice
crw-rw—-. 1 root input 13, 67 Dec 27 21:12 event3
crw-rw—-. 1 root input 13, 65 Dec 27 21:12 event1
crw-rw—-. 1 root input 13, 64 Dec 27 21:12 event0
crw-rw—-. 1 root input 13, 66 Dec 27 21:12 event2
crw-rw—-. 1 root input 13, 68 Dec 27 21:12 event4
drwxr-xr-x. 2 root root 120 Dec 27 21:12 by-path

./input/by-path:
total 0
lrwxrwxrwx. 1 root root 9 Dec 27 21:12 platform-i8042-serio-1-mouse -> ../mouse0
lrwxrwxrwx. 1 root root 9 Dec 27 21:12 platform-i8042-serio-1-event-mouse -> ../event3
lrwxrwxrwx. 1 root root 9 Dec 27 21:12 platform-i8042-serio-0-event-kbd -> ../event2
lrwxrwxrwx. 1 root root 9 Dec 27 21:12 platform-pcspkr-event-spkr -> ../event4

./char:
total 0
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 243:0 -> ../nvme0
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 243:1 -> ../nvme1
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 243:4 -> ../nvme4
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 243:3 -> ../nvme3
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 243:2 -> ../nvme2
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 242:1 -> ../ng1n1
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 242:0 -> ../ng0n1
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 243:7 -> ../nvme7
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 243:5 -> ../nvme5
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 242:7 -> ../ng3n1
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 242:3 -> ../ng2n1
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 243:6 -> ../nvme6
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 242:9 -> ../ng4n1
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 242:5 -> ../ng5n1
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 242:4 -> ../ng7n1
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 243:9 -> ../nvme9
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 243:8 -> ../nvme8
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 242:6 -> ../ng6n1
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 242:8 -> ../ng9n1
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 242:10 -> ../ng8n1
lrwxrwxrwx. 1 root root 9 Dec 27 21:12 243:10 -> ../nvme10
lrwxrwxrwx. 1 root root 9 Dec 27 21:12 242:2 -> ../ng10n1
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:66 -> ../ttyS2
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:65 -> ../ttyS1
lrwxrwxrwx. 1 root root 7 Dec 27 21:12 251:0 -> ../rtc0
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:64 -> ../ttyS0
lrwxrwxrwx. 1 root root 15 Dec 27 21:12 13:32 -> ../input/mouse0
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:67 -> ../ttyS3
lrwxrwxrwx. 1 root root 14 Dec 27 21:12 203:2 -> ../cpu/2/cpuid
lrwxrwxrwx. 1 root root 15 Dec 27 21:12 203:15 -> ../cpu/15/cpuid
lrwxrwxrwx. 1 root root 15 Dec 27 21:12 203:12 -> ../cpu/12/cpuid
lrwxrwxrwx. 1 root root 15 Dec 27 21:12 203:11 -> ../cpu/11/cpuid
lrwxrwxrwx. 1 root root 14 Dec 27 21:12 203:1 -> ../cpu/1/cpuid
lrwxrwxrwx. 1 root root 14 Dec 27 21:12 203:0 -> ../cpu/0/cpuid
lrwxrwxrwx. 1 root root 14 Dec 27 21:12 203:9 -> ../cpu/9/cpuid
lrwxrwxrwx. 1 root root 14 Dec 27 21:12 203:8 -> ../cpu/8/cpuid
lrwxrwxrwx. 1 root root 14 Dec 27 21:12 203:7 -> ../cpu/7/cpuid
lrwxrwxrwx. 1 root root 14 Dec 27 21:12 203:6 -> ../cpu/6/cpuid
lrwxrwxrwx. 1 root root 14 Dec 27 21:12 203:5 -> ../cpu/5/cpuid
lrwxrwxrwx. 1 root root 14 Dec 27 21:12 203:4 -> ../cpu/4/cpuid
lrwxrwxrwx. 1 root root 14 Dec 27 21:12 203:3 -> ../cpu/3/cpuid
lrwxrwxrwx. 1 root root 15 Dec 27 21:12 203:14 -> ../cpu/14/cpuid
lrwxrwxrwx. 1 root root 15 Dec 27 21:12 203:13 -> ../cpu/13/cpuid
lrwxrwxrwx. 1 root root 15 Dec 27 21:12 203:10 -> ../cpu/10/cpuid
lrwxrwxrwx. 1 root root 10 Dec 27 21:12 1:9 -> ../urandom
lrwxrwxrwx. 1 root root 9 Dec 27 21:12 1:8 -> ../random
lrwxrwxrwx. 1 root root 7 Dec 27 21:12 1:7 -> ../full
lrwxrwxrwx. 1 root root 7 Dec 27 21:12 1:4 -> ../port
lrwxrwxrwx. 1 root root 13 Dec 27 21:12 13:63 -> ../input/mice
lrwxrwxrwx. 1 root root 7 Dec 27 21:12 1:3 -> ../null
lrwxrwxrwx. 1 root root 7 Dec 27 21:12 1:11 -> ../kmsg
lrwxrwxrwx. 1 root root 6 Dec 27 21:12 1:1 -> ../mem
lrwxrwxrwx. 1 root root 7 Dec 27 21:12 1:5 -> ../zero
lrwxrwxrwx. 1 root root 18 Dec 27 21:12 10:62 -> ../cpu_dma_latency
lrwxrwxrwx. 1 root root 17 Dec 27 21:12 10:236 -> ../mapper/control
lrwxrwxrwx. 1 root root 9 Dec 27 21:12 10:235 -> ../autofs
lrwxrwxrwx. 1 root root 7 Dec 27 21:12 10:228 -> ../hpet
lrwxrwxrwx. 1 root root 9 Dec 27 21:12 10:227 -> ../mcelog
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 10:183 -> ../hwrng
lrwxrwxrwx. 1 root root 13 Dec 27 21:12 202:13 -> ../cpu/13/msr
lrwxrwxrwx. 1 root root 13 Dec 27 21:12 202:12 -> ../cpu/12/msr
lrwxrwxrwx. 1 root root 13 Dec 27 21:12 202:11 -> ../cpu/11/msr
lrwxrwxrwx. 1 root root 13 Dec 27 21:12 202:10 -> ../cpu/10/msr
lrwxrwxrwx. 1 root root 12 Dec 27 21:12 202:1 -> ../cpu/1/msr
lrwxrwxrwx. 1 root root 12 Dec 27 21:12 202:0 -> ../cpu/0/msr
lrwxrwxrwx. 1 root root 14 Dec 27 21:12 10:63 -> ../vga_arbiter
lrwxrwxrwx. 1 root root 11 Dec 27 21:12 10:231 -> ../snapshot
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 10:144 -> ../nvram
lrwxrwxrwx. 1 root root 12 Dec 27 21:12 202:6 -> ../cpu/6/msr
lrwxrwxrwx. 1 root root 12 Dec 27 21:12 202:5 -> ../cpu/5/msr
lrwxrwxrwx. 1 root root 12 Dec 27 21:12 202:4 -> ../cpu/4/msr
lrwxrwxrwx. 1 root root 12 Dec 27 21:12 202:3 -> ../cpu/3/msr
lrwxrwxrwx. 1 root root 12 Dec 27 21:12 202:2 -> ../cpu/2/msr
lrwxrwxrwx. 1 root root 13 Dec 27 21:12 202:15 -> ../cpu/15/msr
lrwxrwxrwx. 1 root root 13 Dec 27 21:12 202:14 -> ../cpu/14/msr
lrwxrwxrwx. 1 root root 12 Dec 27 21:12 202:9 -> ../cpu/9/msr
lrwxrwxrwx. 1 root root 12 Dec 27 21:12 202:8 -> ../cpu/8/msr
lrwxrwxrwx. 1 root root 12 Dec 27 21:12 202:7 -> ../cpu/7/msr
lrwxrwxrwx. 1 root root 13 Dec 27 21:12 162:0 -> ../raw/rawctl
lrwxrwxrwx. 1 root root 7 Dec 27 21:12 5:2 -> ../ptmx
lrwxrwxrwx. 1 root root 10 Dec 27 21:12 5:1 -> ../console
lrwxrwxrwx. 1 root root 6 Dec 27 21:12 5:0 -> ../tty
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:13 -> ../tty13
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:12 -> ../tty12
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:11 -> ../tty11
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:10 -> ../tty10
lrwxrwxrwx. 1 root root 7 Dec 27 21:12 4:1 -> ../tty1
lrwxrwxrwx. 1 root root 7 Dec 27 21:12 4:0 -> ../tty0
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:21 -> ../tty21
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:20 -> ../tty20
lrwxrwxrwx. 1 root root 7 Dec 27 21:12 4:2 -> ../tty2
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:19 -> ../tty19
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:18 -> ../tty18
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:17 -> ../tty17
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:16 -> ../tty16
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:15 -> ../tty15
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:14 -> ../tty14
lrwxrwxrwx. 1 root root 7 Dec 27 21:12 4:3 -> ../tty3
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:29 -> ../tty29
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:28 -> ../tty28
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:27 -> ../tty27
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:26 -> ../tty26
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:25 -> ../tty25
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:24 -> ../tty24
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:23 -> ../tty23
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:22 -> ../tty22
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:36 -> ../tty36
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:35 -> ../tty35
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:34 -> ../tty34
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:33 -> ../tty33
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:32 -> ../tty32
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:31 -> ../tty31
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:30 -> ../tty30
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:43 -> ../tty43
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:42 -> ../tty42
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:41 -> ../tty41
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:40 -> ../tty40
lrwxrwxrwx. 1 root root 7 Dec 27 21:12 4:4 -> ../tty4
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:39 -> ../tty39
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:38 -> ../tty38
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:37 -> ../tty37
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:51 -> ../tty51
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:50 -> ../tty50
lrwxrwxrwx. 1 root root 7 Dec 27 21:12 4:5 -> ../tty5
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:49 -> ../tty49
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:48 -> ../tty48
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:47 -> ../tty47
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:46 -> ../tty46
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:45 -> ../tty45
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:44 -> ../tty44
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:58 -> ../tty58
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:57 -> ../tty57
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:56 -> ../tty56
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:55 -> ../tty55
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:54 -> ../tty54
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:53 -> ../tty53
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:52 -> ../tty52
lrwxrwxrwx. 1 root root 7 Dec 27 21:12 4:9 -> ../tty9
lrwxrwxrwx. 1 root root 7 Dec 27 21:12 4:8 -> ../tty8
lrwxrwxrwx. 1 root root 7 Dec 27 21:12 4:7 -> ../tty7
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:63 -> ../tty63
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:62 -> ../tty62
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:61 -> ../tty61
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:60 -> ../tty60
lrwxrwxrwx. 1 root root 7 Dec 27 21:12 4:6 -> ../tty6
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 4:59 -> ../tty59
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 7:129 -> ../vcsa1
lrwxrwxrwx. 1 root root 7 Dec 27 21:12 7:128 -> ../vcsa
lrwxrwxrwx. 1 root root 7 Dec 27 21:12 7:1 -> ../vcs1
lrwxrwxrwx. 1 root root 6 Dec 27 21:12 7:0 -> ../vcs
lrwxrwxrwx. 1 root root 10 Dec 27 21:12 246:0 -> ../usbmon0
lrwxrwxrwx. 1 root root 15 Dec 27 21:12 13:67 -> ../input/event3
lrwxrwxrwx. 1 root root 15 Dec 27 21:12 13:65 -> ../input/event1
lrwxrwxrwx. 1 root root 15 Dec 27 21:12 13:64 -> ../input/event0
lrwxrwxrwx. 1 root root 15 Dec 27 21:12 13:66 -> ../input/event2
lrwxrwxrwx. 1 root root 15 Dec 27 21:12 13:68 -> ../input/event4
lrwxrwxrwx. 1 root root 7 Dec 27 21:12 7:6 -> ../vcs6
lrwxrwxrwx. 1 root root 8 Dec 27 21:12 7:134 -> ../vcsa6

./shm:
total 0
drwx——. 2 oracle oracle 40 Dec 27 21:13 orahpatch_CH33
ls: cannot open directory ‘./shm/orahpatch_CH33’: Permission denied
[ec2-user@mut01-ora1 dev]$

This can produce a rather extensive listing; however, when space issues arise such output can be extremely helpful in identifying a file or set of files causing the problem.
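
Combining the time and size angles, find can narrow a listing like the one above down to just the large, recently modified files. A sketch with illustrative values (the +1G size suffix is a GNU find feature):

$ find /u02 -xdev -type f -size +1G -mtime -1 -exec ls -ltr {} +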

Output from any of these commands can be directed to a file for easier handling. Since /tmp is rarely an issue, directing the output from any of these commands to a file on /tmp will keep the output separate from the file system under investigation. Remember to remove the generated file once the investigation has completed and the issue is resolved.
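
For instance (paths are illustrative):

$ du -a /u02 > /tmp/u02_du.txt 2>&1
$ sort -rn /tmp/u02_du.txt | head -20
$ rm /tmp/u02_du.txt    # once the investigation is complete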

Be mindful of both the configured space and the available space on any device under investigation; the Use% column of df should be carefully considered before making any changes. 90% usage of a 16 TB file system still leaves 1.6 TB of space; in such cases it may be better to adjust the thresholds monitoring tools use to report issues. A 5% threshold on that same 16 TB file system provides a cushion of 800 GB, plenty of space to allow storage to be added before any real emergency would exist.

Space management for database systems doesn’t stop at the tablespace level, so the DBA should be familiar with space management utilities provided by the operating system vendor. Remember, too, that more than one command or function may be useful in investigating space issues; space investigations are progressive, and it may be necessary to dig deeper to find the source and, as a result, a solution. Perseverance can pay big dividends where space is concerned; patience is key to solving many a space issue. Hopefully the tools presented here will be useful to those DBAs working on UNIX/Linux platforms.

Go forth, and conquer.

December 22, 2023

At Your Service

Filed under: General — dfitzjarrell @ 10:20

In a world of ever-increasing access, account security has become a major concern. More often than not, accounts used for automated processes gather more and more privileges, on the assumption that they are used only by managed, automated processes that execute vetted code. The argument is that these automated processes can’t go rogue and do damage with these elevated privileges, which is true, sort of. In and of itself the database does not police where these account connections originate, which can pose a significant security risk should this vetted code, or the passwords for these privileged accounts, become known.

Oracle has, for decades, provided after logon triggers, which can be utilized to ‘police’ accounts by reporting where they originate and, if necessary, disallowing such logons. Of course it will be necessary to record such connection ‘violations’ in a table so that they may be investigated. A “complete” script to do this is shown below:

-- Create reference table of usernames and machines

create table db_audit.ip_restrict(
username varchar2(128),
machine varchar2(128))
tablespace db_audit_data;

alter table db_audit.ip_restrict add constraint ip_rest_pk primary key(username, machine) using index tablespace db_audit_data;

-- Populate table

insert into db_audit.ip_restrict
select distinct username, machine
from v$session;

commit;

delete from db_audit.ip_restrict
where username not like '%USER%'
and username like '%local'
and username not like 'SPLUNK%';

commit;

-- Create access violations table

create table db_audit.access_violations(
username varchar2(128),
machine varchar2(128),
terminal varchar2(128),
violation_dt date)
tablespace db_audit_data;

alter table db_audit.access_violations add constraint acc_viol_pk primary key(username, terminal, violation_dt);

-- Violating session info table

create table db_audit.violating_sessions
as select * from v$session
where 0 = 1;

And the trigger:


-- Create after logon trigger to verify user can
-- login to the database
-- Application users can ONLY login from their configured
-- machines
-- Should prevent regular users from connecting with service
-- accounts for security reasons

create or replace trigger ip_rest_trg
after logon on database
declare
    v_user  dba_users.username%type;
    v_mach  v$session.machine%type;
    v_valid number := 0;
begin
    select username, machine
    into v_user, v_mach
    from v$session
    where sid = sys_context('USERENV','SID');

    -- look for a matching account/machine pair; if none exists leave
    -- v_valid at 0 rather than letting NO_DATA_FOUND block every logon
    begin
        select 1
        into v_valid
        from db_audit.ip_restrict
        where username = v_user
        and machine = v_mach
        and username in
        ( <list accounts individually here> );
    exception
        when no_data_found then
            v_valid := 0;
    end;

    if v_valid <> 1
    and v_user in
    ( <list accounts individually here> )
    then
        insert into db_audit.access_violations
        select v_user, v_mach, sys_context('userenv','terminal'), sysdate from dual;

        insert into db_audit.violating_sessions
        select * from v$session
        where sid = sys_context('userenv','sid');

        raise_application_error(-20998, 'Invalid access, logon denied.');
    end if;

end;
/

The process works because it is presumed that all current logons by these declared service accounts originate from valid servers. The DBA must verify this information in the reference table prior to implementing this process. Once the account/server reference data is accurate the trigger can be implemented and logons by these service accounts policed. Any violations will be recorded for later examination.

Notice that no service accounts are actually listed in this code; each database installation will have its own set of service accounts and these will be specifically listed in the trigger code where the placeholder is found. The format is:

('acct','acct','acct',…)

For example, if the accounts of interest are BOB, ED, FRED and ORVILLE, that section of code would appear as:


    select username, machine
    into v_user, v_mach
    from v$session
    where sid = sys_context('USERENV','SID');

    begin
        select 1
        into v_valid
        from db_audit.ip_restrict
        where username = v_user
        and machine = v_mach
        and username in
        ('BOB','ED','FRED','ORVILLE');
    exception
        when no_data_found then
            v_valid := 0;
    end;

    if v_valid <> 1
    and v_user in
    ('BOB','ED','FRED','ORVILLE')
    then
        insert into db_audit.access_violations
        select v_user, v_mach, sys_context('userenv','terminal'), sysdate from dual;

        insert into db_audit.violating_sessions
        select * from v$session
        where sid = sys_context('userenv','sid');

        raise_application_error(-20998, 'Invalid access, logon denied.');
    end if;

end;
/

The reference table data does need regular maintenance, since new service accounts can be created that log in from new servers. Such additions should not come as surprises, which makes the maintenance easier. As long as the reference table data is current all invalid logins should be captured and reported. Of course, when new accounts are added the trigger code will need to be modified and recompiled to include those new entries. Accounts not specifically listed pass through the trigger unscathed, so omitting new service accounts will allow them to connect from any machine without being audited; due diligence is required to maintain the audit process.
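
Adding a newly provisioned service account is then just a new row in the reference table, plus the corresponding addition to the IN lists in the trigger (the names below are hypothetical):

insert into db_audit.ip_restrict (username, machine)
values ('SVC_BATCH', 'appsrv01.example.com');

commit;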

Once a violation is discovered it is recorded into the access_violations table and the current session information is inserted into the violating_sessions table before the rogue session is terminated. This provides the DBA team with all of the information required to trace the offending session to the user who tried to create it.

It may be that not all ‘violating’ sessions are actually suspicious — password changes for service accounts do need to be tested and connections using these accounts can therefore originate from servers not otherwise used for those logins. Password testing should be a known event and, when such changes occur, the trigger can be disabled for the period those tests are active. Another possibility is to temporarily add the service account/machine pair to the reference table; it is required that these temporary entries be immediately removed once testing is completed. Yet another possibility is to modify the reference table and the trigger code to utilize a VALID column, populated with Y and N, to indicate which rows in the table should be used for connection restriction. Setting VALID to Y for testing entries allows them to connect during the testing window; remember to reset those test entries to N once testing has successfully completed.
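
A minimal sketch of that VALID column approach is shown below; the column name, default and sample values are illustrative rather than part of the original design:

-- add the flag column; rows default to N (not usable for validation)
alter table db_audit.ip_restrict add (valid varchar2(1) default 'N');

-- flag a temporary testing entry as usable during the testing window
update db_audit.ip_restrict
set valid = 'Y'
where username = 'NEWSVC'
and machine = 'testserver01.example.com';

commit;

The trigger's reference table lookup would then add "and valid = 'Y'" to its WHERE clause so that only flagged rows permit a connection.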

Security is not easy, but it is worth the effort. Preventing unauthorized logins by privileged accounts is necessary to deter hackers and those who want elevated privileges they do not normally possess. Service accounts are designed for automated processes that execute known, unchanging code on a regular schedule using passwords stored securely and retrieved with API calls to the calling application, preventing prying eyes from seeing these passwords. By the same token such API calls may also be hijacked and used to gain access to these supercharged accounts. These are the situations the trigger shown here is designed to thwart. As long as the DBA team stays vigilant the incidence of successful rogue privileged logins should be 0.

Monitoring access to a database by privileged accounts should be a must for any database system in current use. The trigger and code provided here should be of help to DBAs wanting to secure their systems from external as well as internal attacks. Not every threat to a database is from outside the enterprise; internal attacks are even more insidious as they are already inside the firewall which is one of the first lines of defense used to restrict access. Monitoring both sides of that barrier is the only way to truly capture and disable those attacks. Remember that such attacks are crimes of opportunity. The method provided here is one way to reduce those opportunities and guard the gate from intruders.

“ID, please.”

November 7, 2023

A Preventable Disaster

Filed under: General — dfitzjarrell @ 07:39


Let me tell you a story…

Not so long ago, in a land not so far away, a mournful cry was heard:

“Why am I not seeing current data from the standby?!?”

“Let me check … oh, shadoobie noobie …”

“Shadoobie noobie?!?!?!?!? What the heck?????”

“Seems the standby is WAAAAAAAAY behind the primary.”

“How far behind??”

“Ummm, I don’t know …”

“And HOW did this happen?!?!?!?”

“Well, I went on vacation and I guess no one was watching the standby…”

And it continued on, descending into new levels of Hell rarely seen. The end result was a rebuild of the standby database, which inconvenienced the users and consumed resources better used for other tasks. Let’s now look at ways this debacle could have been avoided.

Oracle provides a number of views to monitor a standby database and its replication tasks, and these views can be used to construct an automated process to check various aspects of standby replication progress, including issues with the replication processes. Some tasks are run on the primary, others on the standby, but a complete picture of standby status can be revealed in minutes, or less. Putting such tasks into a scheduled job, along with email notifications, can put problems front and center to the DBA team.

V$MANAGED_STANDBY provides information on the LNS and ARCH processes: LNS when queried from the primary, and all replication processes when queried from the standby. Regular excursions into this helpful view can immediately reveal synchronization issues when they occur. Given a standard Data Guard setup with standby redo logs, at least one additional logfile is created as a buffer to keep the LNS/LGWR process working. The most common cause of a lagging standby, however, is a network link issue making the standby unreachable. In a worst-case scenario, when network connectivity is restored there will likely be more data to transfer than the standby redo logs can handle. Yes, apply works rather quickly, but the flood of data that arrives after connectivity is restored may cause issues. If the outage was brief then the missing transactions will be found in the archivelogs and the recovery process will find those and apply them. But if the outage isn't detected until a day or more has passed then intervention is necessary, using RMAN to restore the missing logs to the primary. Restoring to the primary uses the Data Guard processes to transfer those logs to the standby and register them so that the ARCH process can use them. Of course it helps to know WHICH logs are AWOL, and Oracle knows this information. The V$ARCHIVE_GAP view reports the log sequence range that is missing; providing those sequence numbers to RMAN will get those logs restored, resuming the apply process and getting the standby database current with the primary. This can be monitored using the query below:

SELECT *
FROM V$MANAGED_STANDBY
WHERE THREAD# > 0
AND GROUP# <> 'N/A'
AND STATUS <> 'CLOSING';

Normally that query would return one row of data; however, once the ARCH process applies missing transactions multiple rows of data will be returned, allowing a DBA to closely monitor the processes involved. Once the standby is current a single row will be returned, for the currently running apply process.

When standby redo logs are in use (which should be always for releases 11.2 and later) the archive logs are simply a second source of redo information. Standby redo logs are populated at the time the redo logs are written on the primary, making apply a near-real-time process. The benefit here is faster access to the redo stream, basically eliminating the normal wait for a log switch to send the archivelogs to the standby (a wait of up to 15 minutes, or more). Since the archivelogs are not the primary source of data in such a standby configuration it may “slip the mind” of the DBA to monitor the destination to ensure sufficient space is available for those logs. Properly configured, the standby redo logs should be in use by LGWR/LNS, so a full secondary archivelog destination shouldn't interrupt recovery. This DOESN'T mean the issue can be ignored; get it resolved as quickly as possible as these logs are the safety net for Data Guard.
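
Adding one more standby redo log group is a single statement; the file location and size below are assumptions (size the group to match the online redo logs):

SQL> alter database add standby logfile thread 1
  2  ('/u01/oradata/stby/srl05.log') size 1g;

Database altered.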

Reviewing the information:

  • Monitor the network link between primary and standby
  • Verify redo is being applied to the standby
  • Monitor the standby redo logs
  • Set up a script to report on important aspects of the Data Guard configuration
    and report errors
  • Be diligent in addressing and resolving reported errors

To simplify matters the following script is provided to monitor the primary and standby databases:

set linesize 300 trimspool on numwidth 15 pagesize 30 feedback on
column name format a60
column message format a80
column client_pid format a10
column client_dbid format a12
column group# format 99990
column thread# format 9999999
column pid format a12
column con_id noprint

prompt *
prompt * Logs sent to standby *
prompt *
SELECT THREAD#, SEQUENCE#, CREATOR, REGISTRAR,
       case when APPLIED='NO' and registrar = 'ARCH' then 'ARCHIVED FILE' else applied end applied
FROM V$ARCHIVED_LOG
WHERE SEQUENCE# >= (SELECT MAX(SEQUENCE#) - 5 FROM V$ARCHIVED_LOG)
ORDER BY SEQUENCE#;

prompt *
prompt * Protection mode *
prompt *
SELECT DATABASE_ROLE, DB_UNIQUE_NAME INSTANCE, OPEN_MODE,
PROTECTION_MODE, PROTECTION_LEVEL, SWITCHOVER_STATUS
FROM V$DATABASE;

prompt *
prompt * Fast-start failover *
prompt *
SELECT FS_FAILOVER_STATUS FSFO_STATUS, FS_FAILOVER_CURRENT_TARGET TARGET_STANDBY,
FS_FAILOVER_THRESHOLD THRESHOLD, FS_FAILOVER_OBSERVER_PRESENT OBS_PRES
FROM V$DATABASE;

prompt *
prompt * Archive log gap *
prompt *
SELECT *
FROM V$ARCHIVE_GAP;

col group# format a5
prompt *
prompt * Redo apply *
prompt *
SELECT PROCESS, PID, STATUS , CLIENT_PROCESS, CLIENT_PID, CLIENT_DBID,
GROUP# , RESETLOG_ID, THREAD#, SEQUENCE#, BLOCK# , BLOCKS
FROM V$MANAGED_STANDBY
WHERE THREAD# > 0
AND GROUP# <> 'N/A'
AND STATUS <> 'CLOSING';

col group# format 99990
prompt *
prompt * Standby redo logs *
prompt *
select group#, thread#, sequence#, used, archived, status, first_change#, first_time, last_change#, last_time
from v$standby_log
where status <> 'UNASSIGNED';

prompt *
prompt * Synchronization report *
prompt *
SELECT ARCHIVED_THREAD#, ARCHIVED_SEQ#, APPLIED_THREAD#, APPLIED_SEQ#
FROM V$ARCHIVE_DEST_STATUS
WHERE APPLIED_THREAD# > 0;

prompt *
prompt * Registered logs report *
prompt *
SELECT REGISTRAR, CREATOR, THREAD#, SEQUENCE#, FIRST_CHANGE#,
NEXT_CHANGE# FROM V$ARCHIVED_LOG
WHERE SEQUENCE# >= (SELECT MAX(SEQUENCE#) - 5 FROM V$ARCHIVED_LOG)
AND THREAD# > 0;

prompt *
prompt * Log history *
prompt *
SELECT THREAD#, SEQUENCE#, FIRST_CHANGE#, NEXT_CHANGE#
FROM V$LOG_HISTORY
WHERE SEQUENCE# >= (SELECT MAX(SEQUENCE#) - 10 FROM V$ARCHIVED_LOG);

clear columns

Notice the areas this script covers — these should be regularly monitored. The script even includes a check for gaps in the archivelog sequences, an area that may be overlooked in many organizations. It also reports on the redo data sent to the standby, as either standby log entries or archivelog files, good information to have should something go awry.

Getting this into cron, for example, would be a simple task of writing a shell script wrapper, like the one shown below, where the commands provided above are saved as mon_standby.sql:

#!/usr/bin/ksh
#-----------------------------------------------------------------------------
# mon_standby.sh
#-----------------------------------------------------------------------------
#
# Report on standby database
#
# Revision History:
#
# Vrsn Date     Author            Description
# ---- -------- ----------------- ----------------------------------------
# 1.0  Oct 2023 David Fitzjarrell Initial version
#
#-----------------------------------------------------------------------------
#
# Find the running databases and set the proper environment
#
set -A database $(ps -ef | grep [p]mon | awk -F"_" '{print $3}')

for i in ${database[@]}
do

#
# Set the environment for the database
#
. $HOME/setEnv.sh > /dev/null

#
# Start SQL*Plus and generate a standby report for each running database
#
sqlplus /nolog <<EOF
connect / as sysdba
spool $HOME/standby/reports/standby.log
@$HOME/standby/mon_standby.sql
spool off
EOF

done

#
# Make the report readable by all
#
cd $HOME/standby/reports

chmod 666 *.log
mv standby.log standby_$(date "+%m%d%y%H%M").log

#
# Clean up reports older than 31 days
#
find . -mtime +31 -exec /bin/rm -f {} \;

exit 0

Of course the necessary directories must be created, or the script edited to point to the desired locations. Also use whichever script properly sets the environment so the proper database is selected and reported on. Scheduling this twice daily, at a minimum, should generate reports the DBA team can use to monitor the primary/standby configuration. Place this script on both the primary and the standby; making this a regular part of daily database health checks should prevent unknown outages/missing data on the standby database.
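
For example, a pair of crontab entries like the following would run the wrapper at 06:00 and 18:00; the installation path is an assumption:

0 6  * * * /home/oracle/standby/mon_standby.sh > /dev/null 2>&1
0 18 * * * /home/oracle/standby/mon_standby.sh > /dev/null 2>&1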

Knowing that a standby database is a company’s first line of defense in the battle for high availability it should be of primary concern to the DBA team to keep it running as smoothly as possible. Regular monitoring and switchover drills can ensure the DBAs will be prepared when the unthinkable happens. If such checks are not part of the daily health check process add them now to prevent interruptions in service. It’s cheap insurance that every company should employ.

Because such stories should have happier endings.

October 6, 2023

“LOB that over here!”

Filed under: General — dfitzjarrell @ 08:35

One of the few errors that strikes fear in the heart of a DBA is the dreaded:

ORA-01555 snapshot too old
and
ORA-22924 snapshot too old

Of course there are plenty of blogs instructing the DBA to simply “increase the undo_retention”, and there are cases where this works as expected. However, LOBs can be different as two different mechanisms exist for undo management. A LOB column can be configured to use retention to manage before images of the data, but that can be confusing as each LOB column MAY have its own retention setting. The DBA_LOBS view reports whether a LOB column uses retention or pctversion to manage undo, along with the associated setting being used. Let's dig into this a bit deeper.

LOB columns can have before images of data managed by time (retention) or by space (pctversion). Retention uses the traditional UNDO tablespace to store before images of data, and is subject to the configured retention period. There is only one way to assign retention to a LOB:

  • Use the configured undo_retention value set for the database

Different LOB columns can display different retention settings depending upon when retention was set. The default, out-of-the-box, retention is 900 seconds. When a LOB is changed to use retention rather than pctversion the current undo_retention value is recorded for that LOB. Unfortunately, should undo_retention be modified after a LOB is changed, that new value will NOT be reflected in any LOB columns already set to use retention. DBA_LOBS will provide pctversion and retention values for all LOBs in the database with three possible combinations:

1) Retention is set to a specific value and pctversion is NULL
2) pctversion is set and retention is NULL
3) Both are NULL

When BOTH retention and pctversion are NULL then the LOB is likely partitioned and DBA_LOB_PARTITIONS should supply the undo management settings. If partitioning is not in play then it's likely that the LOB is set to retention and the default undo_retention setting for the database is used. When pctversion is configured the UNDO tablespace is NOT used; any before images of the data are stored in an area of the LOB segment reserved for this purpose. DBA_LOBS will report the percentage of the LOB storage reserved for undo management. Increasing the pctversion setting will allocate more of the LOB storage for before images, hopefully correcting the source of the ORA-01555/ORA-22924 error.
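
A quick way to see which mechanism each LOB is using is a simple query against DBA_LOBS; a sketch is shown below, with the owner filter being an assumption:

select owner, table_name, column_name, retention, pctversion
from dba_lobs
where owner = 'BOB'
order by table_name, column_name;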

Let’s go back to table X and column Y and address a “snapshot too old” error. We will begin with the easiest case to address: Y is using the database-wide undo_retention setting, and it's set to 1200 seconds. A query involving column Y throws an ORA-22924 error; the DBA checks the alert log for the query duration. Setting the system-wide undo_retention to the query duration (or slightly longer) should preserve the before image long enough to successfully execute the problem query. A repeat run should be error-free. If not, repeat the process, using the new query duration reported in the alert log.
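
A minimal sketch of that adjustment, assuming the alert log reports a duration of roughly 50 minutes (the 3600-second value is illustrative):

SQL> alter system set undo_retention = 3600 scope=both;

System altered.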

Let us now say that LOB Y has its own retention setting of 900 seconds (set when that LOB was altered to use retention rather than pctversion). As discussed previously, changing undo_retention won't fix anything; the LOB itself must be modified. By default, when a LOB is set to retention the current undo_retention is recorded for the LOB being modified. Until the LOB is switched to pctversion and back to retention no change to undo_retention will be applied. Recently the system-wide undo_retention was set to 28800 seconds. The process of setting the new undo_retention for that LOB is as follows:

SQL> alter table X modify lob (Y) (pctversion 10);

Table altered.

SQL> alter table X modify lob (Y) (retention);

Table altered.

Examining DBA_LOBS for the current settings we find that retention is now set to 28800, up considerably from the initial setting of 900.

If pctversion is the undo method then simply altering the current pctversion setting is the action to take. If the LOB is set to a pctversion of 10 then setting it to a larger value should address the problem:

SQL> alter table X modify lob (Y) (pctversion 20);

Table altered.

Now 20 percent of the LOB storage is reserved for updates, which should correct the error.

In all of these situations the DBA must monitor the problem query to ensure it completes error-free. Since addressing such errors is an iterative process the initial change may not correct the error. Remember that SQL*Plus reports the query duration up until the error is thrown, so changes to the undo settings will likely increase the duration if the error is still being thrown. Know that it is not unusual for undo_retention to be set as high as 50000.

Another possibility is that retention is configured but the undo is guaranteed; guaranteed undo can throw other errors besides the “usual” 1555/22924 errors. Checking to see if undo is guaranteed is fairly straightforward:

SQL> select retention from dba_tablespaces where tablespace_name like '%UNDO%' ;

RETENTION
-----------
NOGUARANTEE

SQL>

If GUARANTEE is returned it may be worth the effort to remove the undo guarantee. [Guaranteed undo retention can be great for long-running queries and transactions but can cause issues for other transactions as they may fail for lack of undo space. Normally unexpired undo can be “stolen” by other transactions if the space is needed to keep a transaction alive. A guarantee on the UNDO space prevents these steals from occurring. Guaranteed retention is a tablespace-level setting; as such it is very unusual to have UNDO guarantees in place, for that very reason.] Be aware that changes to a production database usually require an outage and a maintenance request of some form, along with approval of the work to be completed. Again, monitor the database closely after such changes are completed to ensure other problems do not arise from the change.
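
Removing the guarantee is a single statement; the tablespace name below is an assumption, so substitute the actual UNDO tablespace name:

SQL> alter tablespace undotbs1 retention noguarantee;

Tablespace altered.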

LOB columns can be an interesting challenge for a DBA, especially when “snapshot too old” errors are thrown. Knowing the “ins and outs” of LOB undo management should make the task of managing LOB columns a bit easier. The DBA_LOBS view is the source for that information — it should be the first place the DBA looks when ORA-01555/ORA-22924 errors appear. The more the DBA knows the faster the situation can be resolved, and that makes EVERYONE happy.

Happiness is a good thing.

September 26, 2023

Getting Fresh

Filed under: stats — dfitzjarrell @ 07:24

Statistics are the backbone of the CBO so it makes sense to keep them fairly current. Oracle does this with its scheduled statistics job, but even that cannot produce current statistics for every table every run. Database size is usually the reason, due to the way the scheduled job executes. Let's look at that and see where it leads.

As installed and scheduled by Oracle the daily statistics job runs during an eight-hour window; the issue here is that the job will stop processing tables/indexes when that window closes. Depending upon the size of the table/index Oracle may choose to pause the execution before the eight-hour window ends for the day. This, of course, leaves the tables in a mixed state where some are analyzed that day and others either have been recently analyzed or are waiting until the next window opens.

Let us take, for example, an Oracle database that's 12 TB in size — there will be several, if not many, large to very large tables, with indexes, to be processed by the automated statistics job. Let us also say that Oracle can process 3 TB of data with each run of the statistics job; it can be seen that it will take at least four executions of the statistics job to process every table/index in the database. Wouldn't it be convenient to generate a report listing those tables and indexes? Since that data is already in the database it's a simple matter of extracting it in a usable form.
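
One way to see when the maintenance windows open, and whether the statistics task is enabled in them, is a quick look at the autotask views; the sketch below assumes the standard maintenance window setup found in 11g and later releases:

select window_name, window_next_time, window_active, optimizer_stats
from dba_autotask_window_clients
order by window_next_time;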

Regardless of whether the Partitioning option has been purchased there will be partitioned tables/indexes present in the data dictionary. To obtain an accurate listing the query in question will need to access:

DBA_TABLES
DBA_TAB_PARTITIONS
DBA_INDEXES
DBA_IND_PARTITIONS

[Remember that ALL tables/indexes in the database will be included in the statistics job and these Oracle-owned partitioned tables will be interspersed throughout the entire table/index listing.] Information in that output should include:

The analyze date
The object owner
The count of objects analyzed for that owner
A complete listing of the tables/partitions/indexes analyzed on that date

The script should also allow for the user to provide a number of days prior to the current date to obtain information from (sysdate -2, sysdate -3, etc.) so that useful investigations can be performed. With that information in mind a possible script to create such a report would be:

rem
rem Set parameters
rem
set linesize 230 pagesize 10000 verify off trimspool on
break on last_analyzed skip 1
col owner format a35
col table_name like owner
col partition_name like owner

rem
rem Get overall table count
rem

col tabcount new_value tab_ct noprint
col sys_date new_value sysdt noprint

select count(*) tabcount from dba_tables;
select to_char(sysdate, 'MMDDRRRR') sys_date from dual;

rem
rem Generate counts, by owner, for the offset provided
rem Spool output to a file named with the current date
rem (the spool file name prefix here is arbitrary)
rem

spool analyzed_objects_&sysdt..lst

select nvl(trunc(t.last_analyzed), trunc(p.last_analyzed)) last_analyzed, t.owner, count(*) tables,
       round((count(*)/&tab_ct)*100,2) pct_analyzed
from dba_tables t left outer join dba_tab_partitions p
on p.table_owner = t.owner and p.table_name = t.table_name
where t.last_analyzed is not null
and t.last_analyzed >= trunc(sysdate - &&1)
group by nvl(trunc(t.last_analyzed), trunc(p.last_analyzed)), t.owner
union all
select nvl(trunc(t.last_analyzed), trunc(p.last_analyzed)) last_analyzed, t.owner, count(*) indexes,
       round((count(*)/&tab_ct)*100,2) pct_analyzed
from dba_indexes t left outer join dba_ind_partitions p
on p.index_owner = t.owner and p.index_name = t.index_name
where t.last_analyzed is not null
and t.last_analyzed >= trunc(sysdate - &&1)
group by nvl(trunc(t.last_analyzed), trunc(p.last_analyzed)), t.owner
order by 1
/

pause

rem
rem Produce listing of owner, table_name, partition_name analyzed during the offset period
rem

select t.owner, t.table_name, p.partition_name, nvl(t.num_rows, p.num_rows) num_rows, nvl(trunc(t.last_analyzed), trunc(p.last_analyzed)) last_analyzed
from dba_tables t left outer join dba_tab_Partitions p
on p.table_owner = t.owner and p.table_name = t.table_name
where t.last_analyzed is not null
and t.last_analyzed >= trunc(sysdate - &&1)
union all
select t.owner, t.index_name, p.partition_name, nvl(t.num_rows, p.num_rows) num_rows, nvl(trunc(t.last_analyzed), trunc(p.last_analyzed)) last_analyzed
from dba_indexes t left outer join dba_ind_Partitions p
on p.index_owner = t.owner and p.index_name = t.index_name
where t.last_analyzed is not null
and t.last_analyzed >= trunc(sysdate - &&1)
order by 5, 1, 2, 3
/

undefine 1
spool off

Going through the code let’s see what it does. The first block sets column formatting and report breaks so that the output fits on a single line per object and each date of analysis has its own block of data. This should make it easier to find objects and analysis dates for troubleshooting purposes.

Next up is the count-by-owner report for all owners in the database. The total count includes tables and indexes owned by each listed account.

The last section of code generates the full list of analyzed objects — tables, partitions and indexes — processed on the reported date. This portion of the report is the “meat and potatoes” of the output, which the curious DBA can use to see when a table in question was last analyzed, useful when performance issues arise.

The output can be spooled to a file for later use since the itemized list can be rather lengthy. Please note that after the owner/count report is generated the script pauses so the DBA can get a sense of the volume processed during the requested time period. Hitting the Enter key resumes the output generation.

The first part of the report could look like this:

LAST_ANALYZED        OWNER                TABLES PCT_ANALYZED
-------------------- --------------- ---------- ------------
22-SEP-2023 00:00:00 SYS                       8          .29
                     PLANINURTIAN              1          .04
                     BUBA                      4          .15
                     PLANINURTIAN              1          .04
                     GRAKNOR                   1          .04
                     SYS                      12          .44
                     BUBA                      8          .29
                     GRAKNOR                   3          .11

23-SEP-2023 00:00:00 SYS                      18          .66
                     PLANINURTIAN              1          .04
                     BUBA                      1          .04
                     BUBADW                    1          .04
                     SYS                      23          .84
                     PLANINURTIAN              1          .04
                     BUBA                      2          .07
                     BUBADW                    1          .04

24-SEP-2023 00:00:00 BUBADW                   67         2.44
                     SYSTEM                    3          .11
                     SMORTT                    1          .04
                     BUBA                     14          .51
                     SYS                     489        17.83
                     BUBA                      8          .29
                     SMORTT                    2          .07
                     BUBADW                   53         1.93
                     SYS                     493        17.97
                     SYSTEM                    1          .04
                     DBSNMP                    2          .07

The complete listing will not be shown due to its large size. (User names were altered for privacy reasons.)

Given that such a script would be executed for multiple days a single table can occur more than once in a single output file. Since the analyze date is also provided it should be a simple task to determine the last time a given object was processed.

Simply because a table or index was processed, say, a week ago does not reliably indicate whether its statistics are stale. The DBA_TAB_STATISTICS and DBA_IND_STATISTICS views both contain a column named STALE_STATS that can be queried to see if Oracle considers the statistics stale. Three possible values are found for STALE_STATS:

YES
NO
NULL

Rows with a NULL value for STALE_STATS can occur for these reasons:

1) Sufficient statistics may not exist for the listed table
2) The table may be a fixed object
3) The listed table may be partitioned

[Fixed object statistics are generated by using dbms_stats.gather_fixed_objects_stats() as they are NOT gathered during a normal statistics run.] Since the data in partitioned tables lies in the partitions, and some partitions may have stale statistics, a determination for the entire table cannot be made, hence the NULL value. This is why the code above queries the partitioned tables view. The ability to generate a listing of all objects analyzed on a given date can be of great help to a DBA investigating issues regarding performance.
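
Checking for stale statistics directly is a short query against those views; a sketch follows, with the owner filter being an assumption:

select owner, table_name, partition_name, stale_stats, last_analyzed
from dba_tab_statistics
where owner = 'BUBA'
and (stale_stats = 'YES' or stale_stats is null)
order by table_name, partition_name;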

Regular executions of the provided code, and generating output files for later reference, can produce a pattern of analysis that can help the DBA resolve problems such as slow queries creating performance bottlenecks. Knowing what is causing the slowness is half the battle a DBA can face; with that data in hand a plan of attack can be formulated to resolve the problem.

Fresher IS better, as far as statistics go.

September 20, 2023

Out Of Sorts

Filed under: disaster recovery,General,replication — dfitzjarrell @ 18:27

Occasionally the unthinkable can occur and the DBA can be left with a standby database that is no longer synchronizing with the primary. A plethora of “advice” will soon follow that discovery, most of it much like this:

“Well, ya gotta rebuild it.”

Of course the question to ask is “how far out of synch is the standby?” That question is key in determining how to attack this situation. Let's go through the two most common occurrences of this and see how to address them.

Let’s start with the most drastic — you’re over 500 logs behind the primary (don’t laugh, I’m aware of at least one site where this occurred). Depending upon how RMAN is configured the DBA may not be able to restore all of the missing logs. At this point it will most likely be just as fast to recreate the standby from the primary using RMAN. Since that process is well-documented it won’t be discussed here. Please note that this is an act to perform when the next method can’t be used.

The standby is a “reasonable” number of logs behind — the fastest and least intrusive way to fix this is to recover the missing archivelogs from RMAN backups. Again this allows RMAN to come to the rescue. Going through the process step by step should make this easy for most DBAs. Let’s begin.

Step one is to determine how many logs comprise the gap that prevents the standby from getting caught up — Oracle provides that information in the V$ARCHIVE_GAP view:

THREAD# NUMBER
LOW_SEQUENCE# NUMBER
HIGH_SEQUENCE# NUMBER
CON_ID NUMBER

This view provides the log sequence information for the recovery gap in the form of the lowest and highest missing sequence numbers, and those values feed the RMAN archive restore command shown below:
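
Pulling those two values is a one-line query run against the standby, using nothing beyond the view columns listed above:

select thread#, low_sequence#, high_sequence#
from v$archive_gap;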

run {
allocate channel d1 device type disk;
restore archivelog from logseq &1 until logseq &2;
}

Putting that command into a script allows the DBA to pass in the two sequence values on the RMAN command line:

RMAN > @restore_logs.rman 1000 1230

This will find and restore archivelogs with the provided sequence numbers. Now the question becomes “where do these logs get restored to?” The commonly expected destination is the standby server, which is where these logs eventually need to go. The DBA can do that, but then the DBA must also create a script to register those logs with the standby database after they have been restored. A simpler and cleaner approach is to restore those logs to the primary and let the standby configuration handle all of the work. Restoring to the primary writes the restored logs to the configured archive log destination; the ARCH process then performs its usual duties of copying those logs to the standby server and registering them automatically. With this technique all the DBA needs to do is get the sequence range and restore them to the primary — Oracle takes over and completes the tasks at hand, and the standby starts recovering as soon as the first restored log is transferred to the standby server.

This behavior can be verified by executing the following query on the standby database:

SELECT *
FROM V$MANAGED_STANDBY
WHERE THREAD# > 0
AND GROUP# <> 'N/A';

The output will show all of the logs being applied to the standby, regardless of whether they are archived redo logs or standby redo logs. Repeated execution of that query will show the log sequences being applied by either LGWR or ARCH. As the log restores progress and the apply process works through them the standby closes the gap and again becomes synchronized with the primary.

This is why at least one additional standby redo log is configured for the standby — depending upon how large the gap has become that “extra” standby redo log will get used to hold the LGWR data sent to the standby. There is nothing preventing the DBA from configuring MORE than one extra standby redo log, but it is expected that the DBA is actually monitoring the standby and can address a gap before it overflows the standby redo logs already configured. To see how many standby redo logs are involved the following query can be executed:

select group#, thread#, sequence#, used, round(used*100/bytes, 2) pct_full, archived, status, first_change#, first_time, last_change#, last_time
from v$standby_log
where exists (select 1 from v$database where database_role = 'PHYSICAL STANDBY');

which produces output similar to:

GROUP# THREAD# SEQUENCE#      USED PCT_FULL ARC STATUS     FIRST_CHANGE# FIRST_TIME           LAST_CHANGE# LAST_TIME
------ ------- --------- --------- -------- --- ---------- ------------- -------------------- ------------ --------------------
    10       1    300465 271206912    25.86 YES ACTIVE      110812847402 05-SEP-2023 16:46:29 110812981549 05-SEP-2023 16:47:56
    11       1         0         0     0.00 NO  UNASSIGNED
    12       1         0         0     0.00 YES UNASSIGNED
    13       1         0         0     0.00 YES UNASSIGNED
    14       1         0         0     0.00 YES UNASSIGNED
During a gap situation there will likely be several standby redo logs populated simultaneously. As the gap closes these logs will be read and the transactions applied, releasing them to return to UNASSIGNED status.

Once the archivelog restore has started on the primary the entire process can be monitored from the standby server using the two queries provided above. An occasional query of V$ARCHIVE_GAP will show the gap, or gaps, closing as the logs are restored and shipped to the standby. If a gap remains simply repeat the process, using the newest log sequence numbers. In most cases only a single execution of this process will be required to set things right.

When a standby database is “out of sorts” with the primary most likely all that needs to occur is to restore the missing archivelogs to the primary and let Oracle take over. Rarely will the synchronization be so far “out of whack” that a rebuild of the standby is required. Provided the DBA has properly configured the backup and recovery process for the database in question it should be a trivial matter to restore the standby to its rightful state.

Be prepared. It’s not just for the Boy Scouts.

September 13, 2023

What Constraints?

Filed under: General — dfitzjarrell @ 12:20

Constraints help keep data quality high by preventing conditions that can “contaminate” the data pool. They come in various forms:

  • Not null constraints
  • Check constraints
  • Unique constraints
  • Referential constraints

All have a purpose, when used properly, but sometimes conditions change and a constraint that was once perfectly acceptable now interferes with application functionality or performance. Dropping constraints is easy; rebuilding them may not be. Oracle has thought through this issue and has provided ways to extract existing definitions so constraints won't be lost.

The DBMS_METADATA.GET_DDL function is probably the fastest and easiest method to extract such definitions from the data dictionary. The basic parameters to provide are the object_type, the name and the schema:

select dbms_metadata.get_ddl('CONSTRAINT','MY_CHECK_CONSTRAINT','BOB') from dual;

If the named constraint exists the definition will be displayed. Properly formatted, the definition can be saved to a SQL file for later use.

What if you don’t know the constraint name? The DBA_CONSTRAINTS view comes to the rescue:

select constraint_name, constraint_type, owner from dba_constraints where owner = 'BOB' and table_name = 'SNORPLE';

That query will return all constraints created against the table of interest. Of course you could let SQL*Plus create the commands for you:

select 'select dbms_metadata.get_ddl(''CONSTRAINT'','''||constraint_name||''','''||owner||''') from dual;'
from dba_constraints
where table_name = 'ENROLLMENT_PERIOD'
and constraint_type <> 'R'
union
select 'select dbms_metadata.get_ddl(''REF_CONSTRAINT'','''||constraint_name||''','''||owner||''') from dual;'
from dba_constraints
where table_name = 'ENROLLMENT_PERIOD'
and constraint_type = 'R'
/

Save the output to a file and it can be used to generate commands to replace every constraint for a given table, even referential constraints. DBMS_METADATA.GET_DDL knows what to extract for each type of constraint, including referenced tables. Add these lines to the start of the file to ensure all of the output is visible and SQL statements are properly terminated:

column text format a230
set linesize 230 pagesize 0
set long 100000

begin
dbms_metadata.set_transform_param(dbms_metadata.session_transform, 'SQLTERMINATOR', true);
dbms_metadata.set_transform_param(dbms_metadata.session_transform, 'PRETTY', true);
end;
/

These additions set the SQL*Plus formatting and tell DBMS_METADATA to include the command terminator and make the output a bit more user-friendly. Notice that two different constraint types are used — referential constraints have a different object type in DBMS_METADATA. Examining the output created we see:

ALTER TABLE "BOB"."SNOOPLE" MODIFY ("SNOOPLE_KEY" NOT NULL ENABLE);

ALTER TABLE "BOB"."SNOOPLE" MODIFY ("BEGIN_DATE" NOT NULL ENABLE);

ALTER TABLE "BOB"."SNOOPLE" MODIFY ("DATE_CREATED" NOT NULL ENABLE);

ALTER TABLE "BOB"."SNOOPLE" ADD CONSTRAINT "EP_BEGIN_END_DATE_CHK" CHECK (end_date IS NULL OR end_date >= begin_date) ENABLE NOVALIDATE;

ALTER TABLE "BOBDW"."SNOOPLE" MODIFY ("SNOOPLE_KEY" NOT NULL ENABLE);

ALTER TABLE "BOBDW"."SNOOPLE" MODIFY ("BEGIN_DATE" NOT NULL ENABLE);

ALTER TABLE "BOBDW"."SNOOPLE" MODIFY ("DATE_CREATED" NOT NULL ENABLE);

ALTER TABLE "ED"."SNOOPLE" MODIFY ("SNOOPLE_KEY" NOT NULL ENABLE);

ALTER TABLE "ED"."SNOOPLE" MODIFY ("START_DATE" NOT NULL ENABLE);

ALTER TABLE "ED"."SNOOPLE" MODIFY ("END_DATE" NOT NULL ENABLE);

ALTER TABLE "ED"."SNOOPLE" ADD CONSTRAINT "SNARD_FK" FOREIGN KEY ("SNARD_KEY")
REFERENCES "ED"."CLAIMANT_STATUS" ("SNARD_KEY") ENABLE;

ALTER TABLE "BOB"."SNOOPLE" ADD CONSTRAINT "SNOOPLE_PK" PRIMARY KEY ("SNOOPLE_KEY")
USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "BOB_X" ENABLE;

ALTER TABLE "BOBDW"."SNOOPLE" ADD CONSTRAINT "SNOOPLE_PK" PRIMARY KEY ("SNOOPLE_KEY")
USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "BOBDW_DATA" ENABLE;

ALTER TABLE "ED"."SNOOPLE" ADD CONSTRAINT "SNOOPLE_PK" PRIMARY KEY ("SNOOPLE_KEY")
USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "ED_DATA" ENABLE;

All statements are complete and contain the configured SQL terminator. These constraints can now be dropped, if necessary, and rebuilt just as they were.

Databases change over time because data requirements can change over the lifetime of an application. Because of this constraints can be added, dropped or altered because of those data changes. Having a mechanism to generate code for existing constraints should be in every DBA’s back pocket. It’s good that Oracle recognizes this and provides the DBMS_METADATA package to retrieve this information into an immediately usable form in the unfortunate event that a constraint gets, or needs to be, dropped. Using this package as the first step in managing constraints may keep the DBA from losing all of his or her hair should the unmentionable occur.

It’s a thought worth having.

September 12, 2023

Externally yours

Filed under: General — dfitzjarrell @ 13:53

Every so often it may be necessary to provide data that can change on a regular basis. Unfortunately for a “standard” table this can be complicated and tedious, especially if the definition changes. Oracle provides a somewhat unique solution with the external table. Let's see what that is, how to create one and how relatively simple it can be to alter the definition and load data.

An external table takes advantage of a tool named SQL*Loader, Oracle's bulk data loading utility for flat files. Shown below is a script to create just such a table:

CONNECT / AS SYSDBA;
drop table ch33.sec_audit_ext_tbl;
-- create the external table
CREATE TABLE ch33.sec_audit_ext_tbl
(audit_dt varchar2(20),
db varchar2(12),
emp_name varchar2(80),
email varchar2(80),
acct_status varchar2(24),
priv_granted varchar2(45)
)
ORGANIZATION EXTERNAL
(
TYPE ORACLE_LOADER
DEFAULT DIRECTORY data_pump_dir
ACCESS PARAMETERS
(
records delimited by newline
badfile data_pump_dir:'loader_tst%a_%p.bad'
logfile data_pump_dir:'loader_tst%a_%p.log'
fields terminated by ';'
missing field values are null
( audit_dt, db, emp_name,email, acct_status, priv_granted
)
)
LOCATION ('user_privs_semicolon.lst')
)
PARALLEL
REJECT LIMIT UNLIMITED;

It looks … interesting and possibly confusing, so let’s discuss each part in some detail. It starts, as any other table creation script, with the create table statement, including column names and data types:

CREATE TABLE ch33.sec_audit_ext_tbl
(audit_dt varchar2(20),
db varchar2(12),
emp_name varchar2(80),
email varchar2(80),
acct_status varchar2(24),
priv_granted varchar2(45)
)

From here it differs as it uses features found in SQL*Loader. The next section declares the table as external and provides the relevant information the utility needs to build and populate the table: record delimiters, column delimiters, the directory where the source flat file is located, and files to log the build and record bad rows found in the source file. Explaining this section further:

 ORGANIZATION EXTERNAL                                                                  <== Declare this an external table
 ( 
   TYPE ORACLE_LOADER                                                                   <== Use SQL*Loader to read and populate data
   DEFAULT DIRECTORY data_pump_dir                                                      <== Oracle directory where files will be read and written
   ACCESS PARAMETERS 
   ( 
     records delimited by newline                                                       <== Newlines define records
     badfile data_pump_dir:'loader_tst%a_%p.bad'                                        <== File where bad records will be recorded    
     logfile data_pump_dir:'loader_tst%a_%p.log'                                        <== Log file
     fields terminated by ';'                                                           <== Column values are separated with semicolons
     missing field values are null                                                      <== "Missing" values are considered NULL
     ( audit_dt, db, emp_name,email, acct_status, priv_granted                          <== columns to populate in each row
     ) 
   ) 
   LOCATION ('user_privs_semicolon.lst')                                                <== File name to load
 ) 

Next comes the choice of serial or parallel processing, followed by the reject limit that must be reached before the table creation fails. Since SQL*Loader is the tool of choice it will follow the usual rules for bulk data loading.

As this table relies upon the flat file used as the source the data can be re-loaded by simply modifying the file specified in the LOCATION parameter. Using this example let’s assume there are two possible flat files:

   user_privs_semicolon.lst
   user_privs_semicolon2.lst

This table can use one, the other, or both files as the data source. Obviously changing the file name is the first way to change the table data loaded. The second is to add file names, enclosed in single quotes and separated by commas, to load multiple files at the same time. To make this a bit clearer presume a numeric date is used as part of the file name:

   user_privs_semicolon_01012023.lst  
   user_privs_semicolon_01312023.lst  

Now we know the extract dates for this data and we can load either file individually or load them both. If the data changes regularly there may be weekly extracts:

   user_privs_semicolon_01012023.lst  
   user_privs_semicolon_01082023.lst  
   user_privs_semicolon_01152023.lst  
   user_privs_semicolon_01222023.lst  
   user_privs_semicolon_01292023.lst  
  ...

Given that frequency of file creation it’s possible to load data for roughly an entire month by defining LOCATION as follows:

   LOCATION ('user_privs_semicolon_01012023.lst','user_privs_semicolon_01082023.lst','user_privs_semicolon_01152023.lst','user_privs_semicolon_01222023.lst','user_privs_semicolon_01292023.lst')

Presuming no errors occur during the load phase of the external table creation all of the rows will appear in the external table. Changing the data is a matter of dropping the current table definition, editing the script to change flat files and executing the modified script.
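
As a side note, and not part of the workflow described here, the file list can also be swapped without dropping the table by altering the LOCATION clause; a sketch, with the file name being illustrative:

SQL> alter table ch33.sec_audit_ext_tbl location ('user_privs_semicolon_01082023.lst');

Table altered.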

External tables build quickly, yet return data a bit slower than a conventional table the first time they are accessed, due to the nature of SQL*Loader. Once a session has accessed the table no additional loading is necessary and results return with the same speed as data from permanent, physical tables. As long as the user maintains his or her current session queries will be speedy. Should the session be disconnected due to inactivity or a network interruption the table will need to be reloaded. As each session gets a memory-resident copy of the data many users can access the same data without any impact to query speed.

Just as simply, the external table definition can change: if fields are added or removed from the extract, simple edits to the source script can easily add or remove columns. Since external tables load the definition into memory they consume no physical storage and leave no “bodies” behind when dropped. For example, if the Oracle username is to be included in the next extract the script would be changed to:

CONNECT / AS SYSDBA;
drop table ch33.sec_audit_ext_tbl;
-- create the external table
CREATE TABLE ch33.sec_audit_ext_tbl
(audit_dt varchar2(20),
db varchar2(12),
username varchar2(35),
emp_name varchar2(80),
email varchar2(80),
acct_status varchar2(24),
priv_granted varchar2(45)
)
ORGANIZATION EXTERNAL
(
TYPE ORACLE_LOADER
DEFAULT DIRECTORY data_pump_dir
ACCESS PARAMETERS
(
records delimited by newline
badfile data_pump_dir:'loader_tst%a_%p.bad'
logfile data_pump_dir:'loader_tst%a_%p.log'
fields terminated by ';'
missing field values are null
( audit_dt, db, username, emp_name,email, acct_status, priv_granted
)
)
LOCATION ('user_privs_semicolon.lst')
)
PARALLEL
REJECT LIMIT UNLIMITED;

The column is added to the definition and to the list of columns to populate on load. The table is now changed to include the Oracle username value and the column is located exactly where the developer wants it. It really is that simple.

Creating and using external tables is a fairly simple operation, once the basic structure of the external table creation command is understood. Simple edits make it convenient to change the table structure, the data loaded, or both, allowing the user to tailor the table or tables as the data and/or query requirements dictate.

Data in external tables cannot be updated using the traditional “UPDATE schplitzenflibber …” syntax as they are read-only objects. Updating data requires that the source file be edited and the table rebuilt/reloaded. This, of course, ensures that the data in an external table cannot be inadvertently changed by an errant insert/update/delete statement:

33prap22(uat001-wl-oradb): SYS > update ch33.sec_audit_ext_tbl set db = 'SNARD';
update ch33.sec_audit_ext_tbl set db = 'SNARD'
*
ERROR at line 1:
ORA-30657: operation not supported on external organized table

Inserts and deletes will be met with the same error, making external tables ideal for storing reference data that cannot be changed by the end user.

Trailing spaces in fields are not removed by a VARCHAR2 declaration so such spaces must be removed. Edits to a source file can be made after the table is created and will be reflected in the output the next time the table is queried.

External tables aren’t for every situation, but they can be extremely handy when reference data can change frequently and the data is not to be updated by end users. They may take some practice in their setup but they load quickly, they can be modified with little effort, they occupy no physical storage and the data can't be changed by conventional insert/update/delete statements. One downside to external tables is that the data cannot be encrypted: no tablespace contains them so TDE can't be used. Oracle support reports that there is no method to encrypt external table data, so external tables are not a good choice when PHI and/or PII is involved. However, since any PII or PHI can be stored in a conventional table using TDE and encrypted tablespaces, external tables remain well-suited for non-sensitive lookup data that references those PII/PHI values.

External tables can be very useful when it’s necessary to augment table data with values not otherwise stored in the database. They can eliminate the tedious task of matching values from flat file to database columns by putting such data into a form SQL*Plus can access, saving time, energy and the patience of the end user. And, the more the end users can do for themselves the less likely they are to go to the DBA. It’s a win/win situation.

And we all like winning.

September 8, 2023

“It’s your undoing…”

Filed under: General — dfitzjarrell @ 14:55

Transactions are the lifeblood of a relational database, Oracle included. Every insert, update and delete is important, but so is the ability to rollback, or undo, any current transaction should an error or mistake occur. Getting some transactions ‘right’ can involve some trial and error on the part of the developer as queries used to provide the data set to modify are tuned and adjusted. Oracle automatically provides the ability to rollback uncommitted transactions and, in the process, allows the DBA to monitor the undo area for usage. Although automatic undo shouldn’t be an issue, provided the UNDO tablespace is properly sized for the workload, it can be helpful to see which SQL statements are consuming undo, and how much they are consuming. Let’s look into the ways a DBA can find this information.

Automatic undo has been in use for quite a while, utilizing a locally managed tablespace created specifically for this purpose. Undo segments are created automatically to meet the changing demand. As these segments are used the number of extents increases allowing even the largest transactions to generate sufficient undo to rollback the transaction if the need arises. Of course there is the dreaded ORA-01555 error, thrown when changes overwrite undo records being used by a long-running transaction. Manual rollback is also possible, should a transaction update more data than intended and need to be modified to provide correct results, provided the action has not yet been committed. This should be common knowledge to the DBA.

Four views can give the DBA information on the undo operations and usage:

DBA_HIST_UNDOSTAT
DBA_UNDO_EXTENTS
GV_$UNDOSTAT
V_$UNDOSTAT

DBA_HIST_UNDOSTAT is part of the Diagnostic and Tuning Pack views used by AWR and ASH reports. It collects limited historical data about the undo stream and its usage; snapshots are collected from the data in GV_$UNDOSTAT and kept in accordance with the configured retention period for the workload repository.

DBA_UNDO_EXTENTS provides information on the allocated undo extents, including segment name, extent number and the associated size.

GV_$UNDOSTAT, along with V_$UNDOSTAT, allow the DBA to monitor undo activity through statistical information such as the blocks consumed, the transaction count for the reported interval, various “steal” counts, active, unexpired and expired block counts and the tuned undo retention value. Such data helps the DBA track usage by date and time so that periods of peak activity can be determined.

The following views also report on undo, using the older rollback terminology in their naming:

DBA_ROLLBACK_SEGS
GV_$ROLLSTAT
KU$_ROLLBACK_VIEW
V_$ROLLNAME
V_$ROLLSTAT

These do provide different information than the %UNDO% views, adding another dimension to the reporting data. All nine should be considered when reporting on UNDO usage.

Other views outside of those nine can also contribute to UNDO reporting; the following query produces a useful report on undo usage and what SQL, if any, is responsible:

set linesize 240
COLUMN sid FORMAT 99999
COLUMN command_or_action FORMAT A90 HEADING 'SQL Command or Action' WRAP
COLUMN current_time FORMAT A19 HEADING 'Current Time'
COLUMN rollback_mb FORMAT 99,990.90 HEADING 'Undo|MB Used'
COLUMN rows_per_sec FORMAT 99,999,990.90 HEADING 'TXN Rows|Per Sec'
COLUMN start_time FORMAT a19 HEADING 'TXN Start Time'
COLUMN status FORMAT A8 HEADING 'Status'
COLUMN used_urec FORMAT 9,999,999,990 HEADING 'Undo Rows|Written'
COLUMN username FORMAT A15 HEADING 'Username'
COLUMN program format a25

SELECT s.sid
, TO_CHAR(TO_DATE(t.start_time, 'MM/DD/YY HH24:MI:SS'), 'DD-MON-YY HH24:MI:SS') start_time
, TO_CHAR(sysdate, 'DD-MON-YY HH24:MI:SS') current_time
, CASE WHEN (sysdate - TO_DATE(t.start_time, 'MM/DD/YY HH24:MI:SS')) = 0
THEN t.used_urec
ELSE t.used_urec / NVL(((sysdate - TO_DATE(t.start_time, 'MM/DD/YY HH24:MI:SS')) * 86400), 1)
END rows_per_sec
, SUBSTR(s.username, 1, 15) username
, program
, DECODE(BITAND(t.flag,128), 0, NULL, 'Rollback') status
, t.used_urec
, ROUND(r.rssize / 1024 / 1024, 2) rollback_mb
, DECODE(sq.sql_text, NULL, DECODE(aa.name, NULL, 'UNKNOWN', aa.name), sq.sql_text) command_or_action
FROM v$transaction t
, v$session s
, v$rollstat r
, v$sql sq
, audit_actions aa
WHERE (t.xidusn = r.usn)
AND (t.addr = s.taddr (+))
AND ( s.sql_hash_value = sq.hash_value (+)
AND s.sql_address = sq.address (+))
AND (s.command = aa.action)
ORDER BY t.start_time, s.sid;

Looking at the data the above query returns we find a wealth of information:

                                               TXN Rows                                                            Undo Rows    Undo
   SID TXN Start Time      Current Time         Per Sec  Username        PROGRAM                          Status     Written MB Used SQL Command or Action
------ ------------------- ------------------- --------- --------------- -------------------------------- -------- ---------- ------- --------------------------------------
   145 01-JUN-22 15:38:06  01-JUN-22 15:38:16   3,755.30 BING            sqlplus@a633a772bc3b (TNS V1-V3)              37,553    4.99 delete from t1 where mod(id,100) >= 10
   145 01-JUN-22 15:38:06  01-JUN-22 15:38:16   3,755.30 BING            sqlplus@a633a772bc3b (TNS V1-V3)              37,553    4.99 delete from t1 where mod(id,100) >= 10
   145 01-JUN-22 15:38:06  01-JUN-22 15:38:16   3,755.30 BING            sqlplus@a633a772bc3b (TNS V1-V3)              37,553    4.99 delete from t1 where mod(id,100) >= 10

The DBA now has the undo rows written, the megabytes of UNDO space consumed and the SQL statement generating this undo usage. Repeated execution of this query will return the current UNDO usage for all active transactions generating UNDO, including recursive operations executed by the database engine such as data dictionary management processes. Such actions are likely to be listed with a SQL command as UNKNOWN in the query output since no name is assigned in the AUDIT_ACTIONS view. This query allows the DBA to dynamically monitor UNDO usage by transaction so long-running operations, that can consume large quantities of space, can be identified and possibly tuned to reduce the UNDO demand.

How does one go about tuning UNDO? With releases of Oracle starting with 10.2 the DBMS_UNDO_ADV package is available:

FUNCTION BEST_POSSIBLE_RETENTION RETURNS NUMBER
 Argument Name        Type            In/Out Default?
 -------------------- --------------- ------ --------
 STARTTIME            DATE            IN
 ENDTIME              DATE            IN
FUNCTION BEST_POSSIBLE_RETENTION RETURNS NUMBER
FUNCTION BEST_POSSIBLE_RETENTION RETURNS NUMBER
 Argument Name        Type            In/Out Default?
 -------------------- --------------- ------ --------
 S1                   NUMBER          IN
 S2                   NUMBER          IN
FUNCTION LONGEST_QUERY RETURNS NUMBER
 Argument Name        Type            In/Out Default?
 -------------------- --------------- ------ --------
 STARTTIME            DATE            IN
 ENDTIME              DATE            IN
FUNCTION LONGEST_QUERY RETURNS NUMBER
FUNCTION LONGEST_QUERY RETURNS NUMBER
 Argument Name        Type            In/Out Default?
 -------------------- --------------- ------ --------
 S1                   NUMBER          IN
 S2                   NUMBER          IN
FUNCTION RBU_MIGRATION RETURNS NUMBER
 Argument Name        Type            In/Out Default?
 -------------------- --------------- ------ --------
 STARTTIME            DATE            IN
 ENDTIME              DATE            IN
FUNCTION RBU_MIGRATION RETURNS NUMBER
FUNCTION REQUIRED_RETENTION RETURNS NUMBER
 Argument Name        Type            In/Out Default?
 -------------------- --------------- ------ --------
 STARTTIME            DATE            IN
 ENDTIME              DATE            IN
FUNCTION REQUIRED_RETENTION RETURNS NUMBER
FUNCTION REQUIRED_RETENTION RETURNS NUMBER
 Argument Name        Type            In/Out Default?
 -------------------- --------------- ------ --------
 S1                   NUMBER          IN
 S2                   NUMBER          IN
FUNCTION REQUIRED_UNDO_SIZE RETURNS NUMBER
 Argument Name        Type            In/Out Default?
 -------------------- --------------- ------ --------
 RETENTION            NUMBER          IN
 STARTTIME            DATE            IN
 ENDTIME              DATE            IN
FUNCTION REQUIRED_UNDO_SIZE RETURNS NUMBER
 Argument Name        Type            In/Out Default?
 -------------------- --------------- ------ --------
 RETENTION            NUMBER          IN
FUNCTION REQUIRED_UNDO_SIZE RETURNS NUMBER
 Argument Name        Type            In/Out Default?
 -------------------- --------------- ------ --------
 RETENTION            NUMBER          IN
 S1                   NUMBER          IN
 S2                   NUMBER          IN
FUNCTION REQUIRED_UNDO_SIZE RETURNS NUMBER
 Argument Name        Type            In/Out Default?
 -------------------- --------------- ------ --------
 RETENTION            DBMS_UADV_ARR   IN
 UTBSIZE              DBMS_UADV_ARR   IN/OUT
 STARTTIME            DATE            IN
 ENDTIME              DATE            IN
FUNCTION REQUIRED_UNDO_SIZE RETURNS NUMBER
 Argument Name        Type            In/Out Default?
 -------------------- --------------- ------ --------
 RETENTION            DBMS_UADV_ARR   IN
 UTBSIZE              DBMS_UADV_ARR   IN/OUT
FUNCTION REQUIRED_UNDO_SIZE RETURNS NUMBER
 Argument Name        Type            In/Out Default?
 -------------------- --------------- ------ --------
 RETENTION            DBMS_UADV_ARR   IN
 UTBSIZE              DBMS_UADV_ARR   IN/OUT
 S1                   NUMBER          IN
 S2                   NUMBER          IN
FUNCTION UNDO_ADVISOR RETURNS VARCHAR2
 Argument Name        Type            In/Out Default?
 -------------------- --------------- ------ --------
 STARTTIME            DATE            IN
 ENDTIME              DATE            IN
 INSTANCE             NUMBER          IN
FUNCTION UNDO_ADVISOR RETURNS VARCHAR2
 Argument Name        Type            In/Out Default?
 -------------------- --------------- ------ --------
 INSTANCE             NUMBER          IN
FUNCTION UNDO_ADVISOR RETURNS VARCHAR2
 Argument Name        Type            In/Out Default?
 -------------------- --------------- ------ --------
 S1                   NUMBER          IN
 S2                   NUMBER          IN
 INSTANCE             NUMBER          IN
FUNCTION UNDO_AUTOTUNE RETURNS BOOLEAN
 Argument Name        Type            In/Out Default?
 -------------------- --------------- ------ --------
 AUTOTUNE_ENABLED     BOOLEAN         OUT
FUNCTION UNDO_HEALTH RETURNS NUMBER
 Argument Name        Type            In/Out Default?
 -------------------- --------------- ------ --------
 PROBLEM              VARCHAR2        OUT
 RECOMMENDATION       VARCHAR2        OUT
 RATIONALE            VARCHAR2        OUT
 RETENTION            NUMBER          OUT
 UTBSIZE              NUMBER          OUT
FUNCTION UNDO_HEALTH RETURNS NUMBER
 Argument Name        Type            In/Out Default?
 -------------------- --------------- ------ --------
 STARTTIME            DATE            IN
 ENDTIME              DATE            IN
 PROBLEM              VARCHAR2        OUT
 RECOMMENDATION       VARCHAR2        OUT
 RATIONALE            VARCHAR2        OUT
 RETENTION            NUMBER          OUT
 UTBSIZE              NUMBER          OUT
FUNCTION UNDO_HEALTH RETURNS NUMBER
 Argument Name        Type            In/Out Default?
 -------------------- --------------- ------ --------
 S1                   NUMBER          IN
 S2                   NUMBER          IN
 PROBLEM              VARCHAR2        OUT
 RECOMMENDATION       VARCHAR2        OUT
 RATIONALE            VARCHAR2        OUT
 RETENTION            NUMBER          OUT
 UTBSIZE              NUMBER          OUT
FUNCTION UNDO_INFO RETURNS BOOLEAN
 Argument Name        Type            In/Out Default?
 -------------------- --------------- ------ --------
 TABLE_SPACE_NAME     VARCHAR2        OUT
 TABLE_SPACE_SIZE     NUMBER          OUT
 AUTO_EXTEND          BOOLEAN         OUT
 UNDO_RETENTION       NUMBER          OUT
 RETENTION_GUARANTEE  BOOLEAN         OUT

Two functions appear to be quite useful: undo_advisor and undo_health. These functions do their testing of the UNDO tablespace through work that cannot be performed inside a query (undo_health also returns its findings through OUT parameters), so they must be called from a PL/SQL block rather than a SELECT. Taking undo_health "for a spin" we have:

1 declare
2 PROBLEM VARCHAR2(4000);
3 RECOMMENDATION VARCHAR2(4000);
4 RATIONALE VARCHAR2(4000);
5 RETENTION NUMBER;
6 UTBSIZE NUMBER;
7 retval number;
8 begin
9 retval :=dbms_undo_adv.undo_health(sysdate-1, sysdate,problem,recommendation,rationale,retention,utbsize);
10 dbms_output.put_line(problem);
11 dbms_output.put_line(recommendation);
12 dbms_output.put_line(rationale);
13 dbms_output.put_line(retention);
14 dbms_output.put_line(utbsize);
15* end;
SQL> /
No problem found
0
0

PL/SQL procedure successfully completed.
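
One practical note: the findings only appear because the anonymous block prints them with dbms_output, so serveroutput must be enabled in the session or the block completes silently. The usual SQL*Plus setting takes care of that:

set serveroutput on size unlimited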

The function can return up to five values, depending upon what, if any, problem is discovered. As can be seen from the above example no issues are present in the undo tablespace. In similar fashion the undo_advisor is called, and it returns one value. Taking that function for a test drive we find:

1 declare
2 retval varchar2(4000);
3 begin
4 retval :=dbms_undo_adv.undo_advisor(sysdate-1, sysdate,1);
5 dbms_output.put_line(retval);
6* end;
SQL> /
Finding 1:The undo tablespace is OK.

PL/SQL procedure successfully completed.

The output is fairly sparse when no issues are present, but when the UNDO tablespace is healthy effusive output is unnecessary.

Other functions, like best_possible_retention, can be executed through a query. This one outputs, obviously, the best possible undo_retention setting to use:

SQL> select dbms_undo_adv.best_possible_retention(sysdate-1, sysdate) from dual;

DBMS_UNDO_ADV.BEST_POSSIBLE_RETENTION(SYSDATE-1,SYSDATE)

                     2514577

The value it can generate may seem a bit … large when compared to the default setting of 900. Remember this is a recommended setting, not a value chiseled in stone, so it may be accepted or ignored. As long as such recommendations are not implemented with a guarantee there should be no issues with setting a retention value that large.
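
Should the DBA decide to act on a recommendation, the package's REQUIRED_UNDO_SIZE function reports the undo tablespace size needed to support a given retention, and the retention itself can be applied with a simple ALTER SYSTEM. This is only a sketch: it uses the overload taking a retention target in seconds plus a start and end time (per the package description above), assumes that, like best_possible_retention, it can be run from a query, and simply reuses the example figure from above:

SQL> select dbms_undo_adv.required_undo_size(2514577, sysdate-1, sysdate) required_size from dual;

SQL> alter system set undo_retention=2514577 scope=both;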

Part of the tuning process is monitoring the undo system after changes are made. DBA_UNDO_EXTENTS shows the available extents and their expiration status; the larger the undo_retention the longer extents will remain unexpired. Unexpired extents are not an issue, as Oracle is prepared to "steal" unexpired undo extents if they are needed, unless the retention is guaranteed. Guaranteeing undo retention runs the risk of transactions failing with out-of-space errors, since Oracle can no longer "steal" unexpired extents when space runs short, even if those extents are not actively being used.
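
A quick way to keep an eye on that expiration status is a simple summary of DBA_UNDO_EXTENTS by state (a minimal sketch; ACTIVE, UNEXPIRED and EXPIRED are the possible STATUS values):

select status,                              -- ACTIVE, UNEXPIRED or EXPIRED
       count(*) extents,
       round(sum(bytes)/1024/1024,2) mb
from   dba_undo_extents
group  by status
order  by status;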

How do you know if undo is being “stolen”? The v$undostat view reports on that and more:

SQL> desc v$undostat
 Name                     Null?    Type
 ------------------------ -------- ----------------
 BEGIN_TIME                        DATE
 END_TIME                          DATE
 UNDOTSN                           NUMBER
 UNDOBLKS                          NUMBER
 TXNCOUNT                          NUMBER
 MAXQUERYLEN                       NUMBER
 MAXQUERYID                        VARCHAR2(13)
 MAXCONCURRENCY                    NUMBER
 UNXPSTEALCNT                      NUMBER
 UNXPBLKRELCNT                     NUMBER
 UNXPBLKREUCNT                     NUMBER
 EXPSTEALCNT                       NUMBER
 EXPBLKRELCNT                      NUMBER
 EXPBLKREUCNT                      NUMBER
 SSOLDERRCNT                       NUMBER
 NOSPACEERRCNT                     NUMBER
 ACTIVEBLKS                        NUMBER
 UNEXPIREDBLKS                     NUMBER
 EXPIREDBLKS                       NUMBER
 TUNED_UNDORETENTION               NUMBER
 CON_ID                            NUMBER

UNDOBLKS, TXNCOUNT, MAXQUERYLEN, MAXQUERYID and UNXPSTEALCNT are worth monitoring since these reflect the volume of UNDO generated and managed by Oracle. Oracle “stealing” unexpired undo extents often indicates an UNDO tablespace that may not be large enough to handle the transaction load users are generating. Let’s look at some typical output from that view:

BEGIN_TIME END_TIME UNDOTSN UNDOBLKS TXNCOUNT MAXQUERYLEN MAXQUERYID MAXCONCURRENCY UNXPSTEALCNT UNXPBLKRELCNT UNXPBLKREUCNT EXPSTEALCNT EXPBLKRELCNT EXPBLKREUCNT SSOLDERRCNT


NOSPACEERRCNT ACTIVEBLKS UNEXPIREDBLKS EXPIREDBLKS TUNED_UNDORETENTION CON_ID


18-OCT-2022 16:51:20 18-OCT-2022 16:54:42 2 0 41 0 0 0 0 0 0 0 0 0
0 160 221312 16512 444650 0

18-OCT-2022 16:41:20 18-OCT-2022 16:51:20 2 0 12 0 0 0 0 0 0 0 0 0
0 160 221312 16512 437282 0

18-OCT-2022 16:31:20 18-OCT-2022 16:41:20 2 17 150 1580 f3yfg50ga0r8n 2 0 0 0 0 0 0 0
0 160 221312 16512 415260 0

MAXQUERYLEN is reported in seconds, and MAXQUERYID is the SQL_ID of that longest-running query. Notice that the steal and error counts (UNXPSTEALCNT through NOSPACEERRCNT) are all 0, an indication that UNDO is properly sized. Should any of those counts start increasing it is likely that the UNDO tablespace will need one or more additional data files. When data files are added to an UNDO tablespace ensure they are set to autoextend, to minimize the attention that tablespace may need; it is easier to let Oracle automatically extend the available data files than it is to constantly monitor file size and manually extend one or more files after an error is thrown.
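
Rather than scanning every interval by eye, a filter on v$undostat (one possible sketch of such a check) returns only the ten-minute windows where stealing or errors actually occurred:

select begin_time, end_time, undoblks, txncount, maxquerylen,
       unxpstealcnt, ssolderrcnt, nospaceerrcnt
from   v$undostat
where  unxpstealcnt > 0
or     ssolderrcnt > 0
or     nospaceerrcnt > 0
order  by begin_time;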

Querying v$undostat, along with regular execution of the two DBMS_UNDO_ADV scripts shown earlier, provides a fairly robust suite of checks for UNDO health. The scripts take very little time to run and do not affect currently running transactions, so regular executions throughout the work day are not a concern.
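
Those checks can also be automated. The sketch below (the log table and job name are hypothetical, not Oracle-supplied objects) uses DBMS_SCHEDULER to record the undo_health findings once an hour:

-- hypothetical table to keep a history of undo_health results
create table undo_health_log (
   check_time     date default sysdate,
   problem        varchar2(4000),
   recommendation varchar2(4000),
   rationale      varchar2(4000),
   retention      number,
   utbsize        number );

begin
   dbms_scheduler.create_job(
      job_name        => 'UNDO_HEALTH_CHECK',   -- hypothetical job name
      job_type        => 'PLSQL_BLOCK',
      job_action      => q'[declare
                               p  varchar2(4000);
                               r  varchar2(4000);
                               ra varchar2(4000);
                               rt number;
                               sz number;
                               rv number;
                            begin
                               rv := dbms_undo_adv.undo_health(sysdate-1/24, sysdate, p, r, ra, rt, sz);
                               insert into undo_health_log (problem, recommendation, rationale, retention, utbsize)
                               values (p, r, ra, rt, sz);
                               commit;
                            end;]',
      repeat_interval => 'FREQ=HOURLY',
      enabled         => TRUE);
end;
/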

A “quick and dirty” check on undo consumption can be generated with this query:

set linesize 230 pagesize 30
col username format a30
col target format a67

select s.username, s.sid, s.serial#, t.ubablk, t.ubarec, lo.sofar, lo.totalwork, lo.target
from v$session s join v$transaction t on s.taddr = t.addr
left outer join v$session_longops lo on lo.sid = s.sid and lo.serial# = s.serial#
/

It reports the username, session identifiers, undo blocks used, undo records written, the table being accessed, the work done so far and the total work for the operation reported. Sample output follows:

USERNAME SID SERIAL# UBABLK UBAREC SOFAR TOTALWORK TARGET


CH33_USER 1710 57816 56446 67 1140540 1140540 CH33.TIMS_DOCUMENT
CH33_USER 15 9561 233585 58
CH33_USER 3134 11670 326184 35
CH33_USER 3416 7942 1929966 33
CH33_USER 4294 40398 295056 17
CH33_USER 635 33966 289612 53
SWEAMS_USER 581 36936 295056 17
SWEAMS_USER 3969 62960 56446 67
CH33_USER 3428 1681 1009409 33

9 rows selected.

This query can be useful when large volumes of undo are not released in a short period of time, signalling the DBA to monitor the UNDO tablespace allocations and the undo segments in use. Should the UNDO tablespace usage pass 90% many monitoring tools will raise a space warning; of course, depending upon the autoextend state of the datafiles, the warning may be ignorable. A query like this:

set linesize 132 tab off trimspool on
set pagesize 105
set pause off
set feedb on

column "TOTAL AVAIL (MB)" format 999,999,999,990.00
column "TOTAL PHYS ALLOC (MB)" format 9,999,990.00
column "USED (MB)" format 9,999,990.00
column "FREE (MB)" format 9,999,990.00
column "% USED OF AVAIL" format 990.00
column "% USED OF ALLOC" format 990.00

select a.tablespace_name,
       nvl(b.tot_used,0)/(1024*1024) "USED (MB)",
       a.physical_bytes/(1024*1024) "TOTAL PHYS ALLOC (MB)",
       a.bytes_alloc/(1024*1024) "TOTAL AVAIL (MB)",
       (nvl(b.tot_used,0)/a.physical_bytes)*100 "% USED OF ALLOC",
       (nvl(b.tot_used,0)/a.bytes_alloc)*100 "% USED OF AVAIL"
from   ( select tablespace_name,
                sum(bytes) physical_bytes,
                sum(decode(autoextensible,'NO',bytes,'YES',maxbytes)) bytes_alloc
         from   dba_data_files
         group  by tablespace_name ) a,
       ( select tablespace_name, sum(bytes) tot_used
         from   dba_segments
         group  by tablespace_name ) b
where  a.tablespace_name = b.tablespace_name (+)
-- and (nvl(b.tot_used,0)/a.bytes_alloc)*100 > 10
and    a.tablespace_name not in (select distinct tablespace_name from dba_temp_files)
order  by 6
/

reports tablespace usage relative to the currently allocated space and also calculates usage based upon the maximum size the autoextend files can attain:

TABLESPACE_NAME USED (MB) TOTAL PHYS ALLOC (MB) TOTAL AVAIL (MB) % USED OF ALLOC % USED OF AVAIL


USERS 6.13 5,521.00 32,767.98 0.11 0.02
SYSTEM 1,994.13 2,010.00 32,767.98 99.21 6.09
SYSAUX 2,150.25 13,400.00 32,767.98 16.05 6.56
UNDOTBS1 42,492.50 118,781.00 131,071.94 35.77 32.42

Monitoring tools usually trigger on the value in the first percent-used column, which can differ significantly from the value in the second. A query like this one can help the DBA distinguish between an actual alert and "noise" and save him or her from needlessly adding storage. It also keeps management from worrying about essentially nothing, and everyone can appreciate that.

In the event the DBA must add space to the UNDO tablespace, ensure that any datafiles added are set to autoextend. This is easily accomplished using the following syntax:

alter tablespace undotbs1 add datafile '/my/data/files/undotbs4.dbf' size 20g autoextend on next 128M maxsize unlimited;

Of course the UNDO tablespace name and the fully qualified file name are unique to each database installation, so before adding space verify the file names and locations already configured for the UNDO tablespace. What may seem odd here is the 128M autoextend allocations; checking what Oracle created will confirm that value. Do not change that value as it may adversely affect UNDO performance.
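
A quick look at what is already allocated (a simple DBA_DATA_FILES query; substitute the undo tablespace name in use) shows the file names, locations and autoextend settings to match:

select file_name,
       round(bytes/1024/1024)    current_mb,
       autoextensible,
       increment_by              next_blocks,   -- autoextend increment, in database blocks
       round(maxbytes/1024/1024) max_mb
from   dba_data_files
where  tablespace_name = 'UNDOTBS1'
order  by file_name;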

Even with automatic undo management in the database it still falls on the shoulders of the DBA to regularly monitor UNDO space and usage. Regular database health checks should include such checks to provide at least a daily glimpse into the UNDO arena. If UNDO fails the database stops processing transactions until the issue is resolved, which would indeed be a business disaster visible to all. Being in the "hot seat" is NEVER comfortable, and anything someone can do to prevent that from occurring is definitely effort well spent. A few minutes each day monitoring UNDO is cheap insurance that can prevent a grinding halt to data processing at the most inopportune time.

UNDO is a critical part of the transaction pathway and should be given a prominent place in any database monitoring script. Not monitoring UNDO is a flaw that needs correction — the queries provided here can give the DBA the “heads up” notice needed to ensure every transaction can complete as expected. And they will give the DBA knowledge of the UNDO usage patterns so that at peak times of activity there is sufficient UNDO space to handle the load. A calm DBA is a useful DBA, and a prepared DBA is, obviously, calm. So monitor UNDO before it can become an issue.

Unless you LIKE being Chicken Little.
