Oracle Tips and Tricks — David Fitzjarrell

July 14, 2016

Size Does Matter

Filed under: General — dfitzjarrell @ 11:07

"You see, to tall men I'm a midget, and to short men I'm a giant;
to the skinny ones I'm a fat man, and to the fat ones I'm a thin man."
-- Norton Juster, The Phantom Tollbooth

Someone in one of the Oracle forums asked the following question:

How to estimate datafile size of tablespace when it is almost full? Could someone please explain?

which is ambiguous to say the least, since no information was provided by the person posting as to system configuration, database size, transaction volume, etc. Several responses later the original poster replied:

My question is how much space to allocate to datafile when tablespace of it reaches almost max size?

which, on the face of it, seems to be a more 'answerable' question. Information is still missing, however, so it's difficult to provide any real solution to the person who asked. Since Oracle doesn't really monitor tablespace growth (it does monitor file size and free space within the datafiles, but those are only part of the required information) it's up to the DBA (which, presumably, is you) to monitor such growth. It isn't difficult, but it does take some preparation before any results can be returned. Let's look at one way tablespace monitoring can be achieved.

The following script sets up a table and a sequence that will be populated by another script running as a scheduled job or task on the database server:

create table tablespace_monitor(
	mon_run	number,
	run_dt	date,
	tablespace_name varchar2(35),
	available  number,
	used	number,
	free	number,
	pct_free number(9,2))
tablespace users;

create sequence tab_mon_seq
start with 1 increment by 1 nocycle nomaxvalue nocache;

Created while connected as SYS (as SYSDBA), these objects will hold the data necessary to monitor tablespaces and their growth. The table will be populated by the following script:

insert into tablespace_monitor (tablespace_name, available, used, free, pct_free)
select tname_a tablespace_name,
       ttl_avail available,
       (ttl_avail - nvl(ttl_free,0)) used,
       nvl(ttl_free,0) free,
       round((nvl(ttl_free,0)/ttl_avail)*100,2) pct_free
from (select tablespace_name tname_a, sum(user_bytes) ttl_avail
      from dba_data_files
      group by tablespace_name) avail left join
     (select tablespace_name tname_f, sum(bytes) ttl_free
      from dba_free_space
      group by tablespace_name) free on tname_f = tname_a
union all
select tablespace_name,
       sum(bytes_used+bytes_free) ttl_avail,
       sum(bytes_used) used,
       sum(bytes_free) free,
       round((sum(bytes_free)/sum(bytes_used+bytes_free))*100,2) pct_free
from v$temp_space_header
group by tablespace_name;


column next_seq noprint
select tab_mon_seq.nextval next_seq from dual;

update tablespace_monitor
set mon_run = tab_mon_seq.currval,
    run_dt = sysdate
where mon_run is null;

commit;


The provided scripts have been tested and work as expected. The second script is scheduled to run once daily to provide daily tablespace size data, which will be used by the next script to generate a report:

set linesize 200 numwidth 13 pagesize 40
column "CONSUMED/FREED (+)" format a20
column "TABLESPACE NAME" format a15

spool &1

select m1.tablespace_name "TABLESPACE NAME",
       m2.free "BYTES FREE",
       m2.pct_free "PCT FREE",
       case when abs(trunc((m1.free - m2.free)/(1024*1024))) > 0
              then lpad(to_char(round(abs(m1.free - m2.free)/(1024*1024),2))||
                        case when m2.free > m1.free then ' (+)' end,20)
            when abs(trunc((m1.free - m2.free)/1024)) between 1 and 999
              then lpad(to_char(round(abs(m1.free - m2.free)/1024,2))||
                        case when m2.free > m1.free then ' (+)' end,20)
            else lpad('0',20) end "CONSUMED/FREED (+)",
       case when abs(trunc((m1.free - m2.free)/(1024*1024))) > 0 then 'MB'
            when abs(trunc((m1.free - m2.free)/1024)) between 1 and 999 then 'KB'
            else null end unit,
       case when abs(trunc(((m1.free - m2.free)/(1024*1024))/((m2.run_dt - m1.run_dt)*1440))) > 0
              then round(abs((m1.free - m2.free)/(1024*1024))/((m2.run_dt - m1.run_dt)*1440),2)
            when abs(trunc(((m1.free - m2.free)/1024)/((m2.run_dt - m1.run_dt)*1440))) between 1 and 999
              then round(abs((m1.free - m2.free)/1024)/((m2.run_dt - m1.run_dt)*1440),2)
            else 0 end "RATE PER MIN",
       case when abs(trunc(((m1.free - m2.free)/(1024*1024))/((m2.run_dt - m1.run_dt)*1440))) > 0 then 'MB'
            when abs(trunc(((m1.free - m2.free)/1024)/((m2.run_dt - m1.run_dt)*1440))) between 1 and 999 then 'KB'
            else null end "UNIT PER MIN",
       round((m2.run_dt - m1.run_dt)*1440,2) "WINDOW IN MIN",
       to_char(m1.run_dt, 'RRRR-MM-DD HH24:MI:SS') "BEGIN DATE/TIME",
       to_char(m2.run_dt, 'RRRR-MM-DD HH24:MI:SS') "LAST RUN DATE/TIME"
from tablespace_monitor m1, tablespace_monitor m2
where m2.mon_run = (select max(mon_run) from tablespace_monitor)
and m1.mon_run = (select max(mon_run) - 1 from tablespace_monitor)
and m2.tablespace_name = m1.tablespace_name;

spool off

set linesize 80

The script takes the most recent snapshot and the one immediately before it and reports the difference, showing the growth over the last 24 hours when the population script is run once per day. The population script can, of course, be run more often than that; if it is, the report should also be run more often, since each report covers only the two most recent snapshots and more frequent reporting is the only way to cover all of the values stored in the tablespace_monitor table.
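For clarity, the arithmetic the report performs on a pair of snapshots can be sketched outside the database. This minimal Python sketch (the function name and sample figures are illustrative, not part of the scripts above) mirrors the consumed-space, display-unit, and window calculations:

```python
from datetime import datetime

def consumption(free_begin, free_end, t_begin, t_end):
    """Mirror the report's arithmetic: space consumed between two
    snapshots, a display unit, the window length, and a per-minute rate."""
    consumed = free_begin - free_end            # positive = consumed, negative = freed
    window_min = (t_end - t_begin).total_seconds() / 60.0
    rate_per_min = consumed / window_min if window_min else 0.0
    # Pick a display unit the way the CASE expressions do: MB first, else KB
    if abs(consumed) >= 1024 * 1024:
        amount, unit = round(consumed / (1024 * 1024), 2), 'MB'
    elif abs(consumed) >= 1024:
        amount, unit = round(consumed / 1024, 2), 'KB'
    else:
        amount, unit = consumed, 'bytes'
    return amount, unit, round(window_min, 2), rate_per_min

# Two hypothetical snapshots, 9 minutes 52 seconds apart
t1 = datetime(2016, 7, 14, 10, 0, 20)
t2 = datetime(2016, 7, 14, 10, 10, 12)
amount, unit, window, rate = consumption(10_000_000, 7_700_000, t1, t2)
# 2,300,000 bytes consumed reports as 2.19 MB over a 9.87-minute window
```

The per-minute rate then follows directly as the consumed amount divided by the window, which is what makes the figures from differently spaced snapshot runs comparable.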

The report looks like this:

TABLESPACE NAME    BYTES FREE      PCT FREE CONSUMED/FREED (+)   UN  RATE PER MIN UN WINDOW IN MIN BEGIN DATE/TIME     LAST RUN DATE/TIME
--------------- ------------- ------------- -------------------- -- ------------- -- ------------- ------------------- -------------------
INDX                103809024           100                    0                0             9.87 2016-07-14 10:00:20 2016-07-14 10:10:12
SYSAUX               62193664          7.61                 2.19 MB        227.03 KB          9.87 2016-07-14 10:00:20 2016-07-14 10:10:12
SYSTEM               62849024          7.06                    0                0             9.87 2016-07-14 10:00:20 2016-07-14 10:10:12
TEMP                 42991616         68.33                    0                0             9.87 2016-07-14 10:00:20 2016-07-14 10:10:12
UNDOTBS1           1986789376         97.47                   23 MB          2.33 MB          9.87 2016-07-14 10:00:20 2016-07-14 10:10:12
USERS             33880604672         98.69               88 (+) MB          8.92 MB          9.87 2016-07-14 10:00:20 2016-07-14 10:10:12

Over a period of time growth trends can be determined for each tablespace, and space additions can be planned accordingly. Since growth patterns are not likely to be linear, it's a good idea to note reports showing exceptionally large space usage and the periods in which they occur. This provides a window for planning storage needs and gives the System Admins and Storage Admins time to prepare additional storage for use.
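As a rough illustration of that kind of planning, a naive linear projection (hypothetical figures; it assumes the recent daily growth rate simply continues, which the caveat above warns against trusting blindly) turns the monitor's numbers into "days of headroom":

```python
def days_until_full(bytes_free, consumed_per_day):
    """Naive linear projection: days of headroom left if the current
    daily consumption continues unchanged."""
    if consumed_per_day <= 0:
        return float('inf')  # flat or shrinking usage never fills the tablespace
    return bytes_free / consumed_per_day

# Hypothetical figures: 1.5 GB free, consuming roughly 100 MB per day
headroom = days_until_full(1.5 * 1024**3, 100 * 1024**2)
print(round(headroom, 1))  # about 15.4 days of space left
```

A projection like this is only a planning aid; the point of keeping the full history in tablespace_monitor is to spot when the real trend departs from the straight line.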

Storage cannot be managed without usage and growth data, yet it seems that some DBAs try to do just that; a S.W.A.G. can get you there on occasion, but more often than not it's simply glorified guesswork without a crystal ball or Ouija board handy. Regardless of how the end users see them, databases are part of a complex managed data entry and retrieval system that should not be left to chance or 'sophisticated' guesswork. Planning and monitoring are necessities in the IT environment; the database should not be left 'out in the cold' to suffer on its own.

Whether the provided scripts are used to monitor tablespace growth or you decide to write your own, implementing tablespace monitoring should be at or near the top of every DBA's 'to-do' list. Keeping 'tabs' on how much space is being used in which tablespace can only help the DBA, ensuring that the users don't run out of space because usage wasn't tracked. Emergency space additions take time and interrupt the business flow; scheduling such additions can keep the end users from complaining and will keep management happy because lack of space didn't stop the flow of transactions. Which means everybody wins.

It does seem that size is important.

